Your stated working assumption (that while you have enough free memory, you don't have enough contiguous free memory) is wrong. As a result, there is no need to seek a solution for that problem, because that is not the problem. At least on all production JVMs in the last 22 years or so, JVMs (through their garbage collectors) compact memory and create contiguous memory regions. Without doing that, most Java applications would generally stop working after running for a while.
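To make that concrete, here is a minimal, self-contained sketch (all sizes are arbitrary, chosen just for illustration): churn through a pile of small arrays, drop the references, and then ask for one large contiguous array. On the compacting collectors shipped in production JVMs, the large allocation succeeds even though the heap was just littered with small objects:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (arbitrary sizes): allocate many small arrays, make
// them unreachable, then allocate one large contiguous array. A compacting
// collector will defragment the heap, so the large allocation succeeds.
public class CompactionDemo {
    static int allocateAfterChurn() {
        List<byte[]> small = new ArrayList<>();
        for (int i = 0; i < 50_000; i++) {
            small.add(new byte[1024]); // ~50 MB of 1 KB fragments
        }
        small.clear();   // drop all references; the arrays become unreachable
        System.gc();     // only a hint; compaction happens during normal GC anyway
        byte[] big = new byte[32 * 1024 * 1024]; // one contiguous 32 MB array
        return big.length;
    }

    public static void main(String[] args) {
        System.out.println("Allocated " + allocateAfterChurn() + " contiguous bytes");
    }
}
```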
Can you demonstrate (with specific output and numbers) your starting assertion, showing the situation under which you are getting an OOME when you think it should not be happening, and how you are determining the available memory?

In general, the criterion JVMs use for throwing an OOME is not that you have no more empty bytes left in the heap. It is that you are thrashing so badly that "you'd probably rather be dead". This is a subjective criterion which is often configurable via flags (e.g. HotSpot's GC overhead limit, tuned with -XX:GCTimeLimit and -XX:GCHeapFreeLimit). There are other reasons for throwing an OOME (like running out of non-Java-heap types of memory), but "running low enough on heap memory that the JVM is thrashing badly" is a common reason and the likely one in your case.

A significant component of garbage collector efficiency (in the vast majority of GC algorithms used in real-world VMs) is generally linear in 1/portion-of-heap-that-is-empty. E.g., regardless of GC algorithm choice, if you are repeatedly allocating (and forgetting at the same rate) 64-byte objects in a 1 GB heap that has only 256 bytes empty (not occupied by currently reachable objects) on a steady basis, you would need to run a complete GC cycle on every 4th allocation, and that GC cycle would have to examine the entire 1 GB of heap each time to find the empty 256 bytes and potentially compact the heap to make them contiguous. That would be so much work per allocation that you would never want the JVM to continue running under that condition (and no current JVM will, AFAIK). On the other hand, the exact same application and load would run very efficiently when there was more empty memory in the heap (improving semi-linearly as the portion of the heap that is empty grows). Note that portion-of-heap-that-is-empty here refers to the portion of the heap that is not occupied by live, reachable objects (not the much smaller portion that may be currently unused until the GC gets rid of unreachable objects). E.g. 
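Putting back-of-the-envelope numbers on that 1 GB example: with 256 free bytes and 64-byte allocations, a full cycle is forced every 256/64 = 4 allocations, so the collector scans roughly 1 GB / 4 = 256 MB of heap per 64-byte allocation; with half the heap empty, the same arithmetic gives only 128 bytes scanned per allocation:

```java
// Back-of-the-envelope arithmetic for the example above: heap bytes a
// tracing collector must examine per allocation. The cost grows roughly
// as 1 / fraction-of-heap-that-is-empty.
public class GcCostSketch {
    // A full cycle (scanning heapBytes) runs every freeBytes/objectBytes
    // allocations; amortize the scan cost over those allocations.
    static long scannedPerAllocation(long heapBytes, long freeBytes, long objectBytes) {
        long allocationsPerCycle = freeBytes / objectBytes;
        return heapBytes / allocationsPerCycle;
    }

    public static void main(String[] args) {
        long heap = 1L << 30; // 1 GB heap
        // 256 bytes free: 268435456 bytes (256 MB) scanned per 64-byte allocation
        System.out.println(scannedPerAllocation(heap, 256, 64));
        // half the heap free: only 128 bytes scanned per allocation
        System.out.println(scannedPerAllocation(heap, heap / 2, 64));
    }
}
```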
some stop-the-world collectors will throw an OOME when too much (e.g. 98%) of a time interval (e.g. no less than 1 minute), during which multiple (e.g. at least N) full GC cycles have run, was spent in stop-the-world GC. This gets more intricate with concurrent collectors, but the principle is the same.

IMO, the most likely explanation (given the available data) is that your heap is not large enough to continue to run your application given its live set and load, and that increasing the heap will resolve your problem (assuming your application doesn't then react by increasing the live set to fill the heap up too far again). If this explanation applies, then OOME is a wholly appropriate indication in your situation.

HTH,

— Gil.

On Thursday, October 1, 2020 at 8:43:59 AM UTC-10 Shevek wrote:

> When I do new byte[N], I get OutOfMemoryError, despite that the VM
> claims to have more than enough free space (according to MemoryMXBean,
> Runtime.freeMemory, visualvm, etc).
>
> My working assumption is that while I have enough free memory, I don't
> have enough contiguous free memory. Is there a solution to this? Will I
> get better results from any of:
>
> * ByteBuffer.allocateDirect() - presumably yes, but has other issues
> relating to overall memory usage on the system
> * G1GC (or other GC which allocates (relocatable?) regions) - this is a
> deep hole I haven't yet explored.
> * Calling System.gc() before allocating a contiguous region [apparently
> doesn't help].
> * Other?
>
> If we do follow a strategy using allocateDirect, will we end up with the
> same fragmentation issue in the native heap, along with committed
> off-heap memory which we can no longer effectively use, or is the
> off-heap memory managed in some manner which avoids this problem?
>
> Thank you.
>
> S.

-- 
You received this message because you are subscribed to the Google Groups "mechanical-sympathy" group. 
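P.S.: to gather the kind of specific numbers asked for at the top of this reply, something along these lines (standard java.lang.management APIs; the class and method names here are just for illustration) captures heap usage and per-collector cycle counts at the moment the allocation fails:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sketch: the numbers worth posting alongside an OOME report -- heap
// used/committed/max, plus each collector's cycle count and cumulative
// collection time, taken right when the allocation fails.
public class OomeDiagnostics {
    static String snapshot() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        StringBuilder sb = new StringBuilder();
        sb.append("heap used=").append(heap.getUsed())
          .append(" committed=").append(heap.getCommitted())
          .append(" max=").append(heap.getMax()).append('\n');
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            sb.append(gc.getName())
              .append(" count=").append(gc.getCollectionCount())
              .append(" timeMs=").append(gc.getCollectionTime()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        try {
            byte[] big = new byte[Integer.MAX_VALUE]; // deliberately too large
        } catch (OutOfMemoryError e) {
            System.out.println(e);       // which kind of OOME was thrown
            System.out.println(snapshot());
        }
    }
}
```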
