Ran into a huge pitfall today that nearly did me in!
I used to run an Ubuntu dual-boot setup, but it crashed, so this time I decided to install VMware and put Ubuntu in a virtual machine instead. Everything went smoothly, right up until I installed IntelliJ IDEA inside the VM's Ubuntu.
The installation itself reported no errors, but after launching IDEA and opening a project, it quit before the project had even finished loading, leaving an error log like this: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f5a9b000, 66166784, 0) failed; error='Cannot allocate memory' (errno=12)
Part of the log reads:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 66166784 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2627), pid=3309, tid=0x00007f0174cfc700
#
# JRE version: OpenJDK Runtime Environment (8.0_112-b16) (build 1.8.0_112-release-736-b16)
# Java VM: OpenJDK 64-Bit Server VM (25.112-b16 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

I really went down a lot of dead ends and tried countless things: reinstalling IDEA, shrinking the JVM heap in IDEA's config file, doubling the VM's memory allocation, and so on.
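The log's own suggestions ("out of physical RAM or swap space", "swap backing store is full") can at least be ruled out from a terminal. A minimal sketch using the standard /proc interface:

```shell
# How much RAM and swap does the VM actually have, and how much is free?
grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree' /proc/meminfo

# Which swap devices are active, and how full are they?
cat /proc/swaps
```

In my case these all looked fine, which is why simply giving the VM more memory didn't help.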
Finally, searching on (errno=12) led me to an article that explained it.
It pointed to a foreign article (http://www.enchantedage.com/node/235) saying a single setting would fix it: echo 1000000 > /proc/sys/vm/max_map_count. I checked my VM's Ubuntu: the initial value of /proc/sys/vm/max_map_count was tiny, only 65530, so I promptly reset it with that command.
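For reference, the check-and-fix can be sketched like this. The 1000000 value is the one from the article; persisting it through /etc/sysctl.conf is my own assumption about how you'd keep it across reboots:

```shell
# Check the current limit (65530 is a common default):
cat /proc/sys/vm/max_map_count

# Raise it for the running system (needs root):
# echo 1000000 > /proc/sys/vm/max_map_count
# or equivalently:
# sysctl -w vm.max_map_count=1000000

# To make it survive a reboot:
# echo 'vm.max_map_count=1000000' >> /etc/sysctl.conf
```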
Ah, it finally worked. What an ordeal.
Later I asked colleagues with dual-boot setups to check this value on their systems: also 65530, yet they got no errors. Maddening. So the real cause of "Cannot allocate memory" apparently isn't this setting itself, but changing it does work around the problem.
My guesses at the cause: 1. dual-boot and virtual machines behave differently; 2. the JDK was installed differently (my former self, and my colleagues now, all downloaded the JDK first and installed it by hand; but in the VM I installed it with a package-manager command, which skips configuring the environment variables).
These are just guesses, and I won't verify them for now. If you run into the same problem and can't solve it, this direction may be worth trying.
Afterwards, working in IDEA was still laggy and slow in all sorts of ways, so the next day I decided to reinstall the JDK anyway: not via a package-manager command, but by extracting the tarball and configuring the environment variables myself. I also changed max_map_count back:
echo 65530 > /proc/sys/vm/max_map_count
And guess what: not a single problem. Downloading jars from the Maven repository was snappy too. So it really was the JDK after all. The final solution: reinstall the JDK from the tarball and configure the environment variables yourself!
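A sketch of the tarball install described above. The download filename, install path, and version here are assumptions; substitute whatever you actually downloaded:

```shell
# Extract the JDK tarball to /usr/local (path and version are assumptions):
# tar -xzf jdk-8u112-linux-x64.tar.gz -C /usr/local

# Then append to ~/.bashrc and `source ~/.bashrc`:
export JAVA_HOME=/usr/local/jdk1.8.0_112
export PATH=$JAVA_HOME/bin:$PATH
```

Afterwards, `java -version` should report the version you just extracted rather than the package-manager build.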
Tags: installing IDEA in a VM's Ubuntu fails, Cannot allocate memory (errno=12)
Original article; feel free to repost, but please credit the source!
Appendix: the content of the foreign article:
Linux mmap() ENOMEM error causing Segmentation Fault
I have a system that creates files on disk, then uses mmap and madvise and mflush to asynchronously do I/O to the disk. This system may potentially create many, many files, each of which will have three mmap sections, that will be rotated through the file.
After trying to run this system for a while, I started getting segmentation violations that I couldn't quite understand. Initially, I thought it was a threading problem, because I'm using boost::asio and boost::thread quite heavily. I used strace() to figure out what the system was doing, and found that right before the crashes, one or more calls to mmap() would fail.
Long story short: There is a limit to the number of mmap() segments that can be active in a Linux process at any one time. This limit is configurable in /proc/sys/vm/max_map_count. I already knew there was a file descriptor limit, and I raised that pretty high, but apparently Linux doesn't think you'll be using lots of mmap() just because you're using lots of files. Adding the following to /etc/rc.local will fix the problem:
echo 1000000 > /proc/sys/vm/max_map_count
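To see how close a process actually is to that limit, you can count its live mappings. A quick sketch ($$ here is the current shell's pid; substitute the pid of the JVM or whatever process you're diagnosing):

```shell
# Number of memory mappings the current process holds:
wc -l < /proc/$$/maps

# The ceiling it is up against:
cat /proc/sys/vm/max_map_count
```

If the first number is anywhere near the second, mmap() calls will start failing with ENOMEM even though plenty of RAM remains free.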