I changed the error reporting so that running out of swap or out of native (C heap) memory doesn't look like a VM assert or a java.lang.OutOfMemoryError exception that someone could catch. The suggested new wording is in the Evaluation section of the bug link. I'm open to rewording if people have strong preferences.
Also, if Solaris runs out of swap space (generally by filling up /tmp), accesses to thread stacks generate a bus error with an ENOMEM errno. I can check for this in the Solaris signal handler and give the out-of-C-heap message. I couldn't get Linux or Windows to fail the same way, so I didn't apply similar changes there.
The apparent "crashes" reported by this bug really come from the internal VM call vm_exit_out_of_memory(). When the VM runs out of C heap, also called "native" memory, it cannot continue running. The VM uses a variety of memory allocation schemes, including "Arena"s, which are memory pools, but these pools are ultimately backed by memory allocated from the operating system (mmap or malloc). When malloc fails, there is not much the VM can do to recover.
A proposed solution is to make the error message more indicative of this. The old crash message contained the string "Out of swap space?" but it seems to have been missed by the customer. The proposed new messages are something like this:
# There is insufficient native memory for the Java Runtime Environment to continue.
# <comment from vm_exit_out_of_memory() call>
# An error report file with more information is saved as:
The proposed hs_err_pid*log file starts with this:
# There is insufficient memory for the Java Runtime Environment to continue.
# No swap to commit thread stack
# Possible solutions:
# reduce memory load on the system
# increase physical memory or swap space
# check if swap backing store is full
# decrease Java heap size (-Xmx/-Xms)
# decrease number of Java threads
# decrease Java thread stack sizes (-Xss)
# This output file may be truncated or incomplete.
# JRE version: 7.0-b118
# Java VM: Java HotSpot(TM) Server VM (20.0-b04-6892275_1214_1557-fastdebug mixed mode solaris-sparc )
--------------- T H R E A D ---------------
<stuff that's typically in the hs_err_pid* file below>
It's not just the compiler thread: any thread that tries to allocate memory in an arena after the created threads have eaten up all of memory will crash:
Stack: [0xf5de2000,0xf5e32000), sp=0xf5e30938, free space=314k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x8f9c87] void VMError::report_and_die()+0x597
V [libjvm.so+0x38cc2e] void report_vm_out_of_memory(const char*,int,unsigned,const char*)+0xae
V [libjvm.so+0x1efef1] void*ChunkPool::allocate(unsigned)+0x101
V [libjvm.so+0x1edff1] void*Chunk::operator new(unsigned,unsigned)+0x81
V [libjvm.so+0x1ee90f] void*Arena::grow(unsigned)+0x4f
V [libjvm.so+0x1ef9ef] void*Arena::Amalloc(unsigned)+0xbf
V [libjvm.so+0x7e62e5] char*ResourceArea::allocate_bytes(unsigned)+0xc5
V [libjvm.so+0x7e5c71] char*resource_allocate_bytes(unsigned)+0x41
V [libjvm.so+0x45e06b] void*GenericGrowableArray::raw_allocate(int)+0x4b
V [libjvm.so+0x4ca2f1] GrowableArray<methodOop>::GrowableArray(int,bool)+0x61
V [libjvm.so+0x6561ec] int klassVtable::get_num_mirandas(klassOop,objArrayOop,objArrayOop)+0x5c
V [libjvm.so+0x652d2d] void klassVtable::compute_vtable_size_and_num_mirandas(int&,int&,klassOop,objArrayOop,AccessFlags,oop,symbolOop,objArrayOop)+0x2ad
This is a day-one problem, and it is too risky to change the behaviour of the VM for Mustang. An OutOfMemoryError is thrown, and there is a message in the assert. Are there more bugs open that request better OOM handling?
# An unexpected error has been detected by Java Runtime Environment:
# java.lang.OutOfMemoryError: requested 32764 bytes for ChunkPool::allocate. Out of swap space?