I've found that setting -Xms equal to -Xmx works best, as it avoids heap resizes. I suggest running your app with Native Memory Tracking (NMT) enabled and looking at the output (see https://docs.oracle.com/javase/8/docs/technotes/guides/vm/nmt-8.html#enable_nmt):

-XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics
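If you'd rather not wait for the JVM to exit to see the PrintNMTStatistics dump, you can also poll a running process once NMT is enabled (these commands are from the NMT docs; <pid> is whatever your container's java process is):

jcmd <pid> VM.native_memory summary
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff

The baseline/diff pair is handy for seeing which category is actually growing over time.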
There are several things you can trip over when running a JVM under cgroups or Docker (I'm missing a couple I can't think of off the top of my head):

1. Heap size: -Xmx
2. Direct memory allocations (DirectByteBuffer): -XX:MaxDirectMemorySize
3. Code cache
4. Metaspace: -XX:MaxMetaspaceSize
5. Unsafe allocations
6. JNI allocations
7. Thread stack size: -Xss (multiplied by the number of threads)

Most of these you can manage via flags, but there are a few you can't that I don't recall off hand. And see [2] for all the flags; it gets tedious. Luckily, 8u121b34 (8u131 for those of you without an Oracle support contract) has a useful option [1] that can help with this:

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

This configures the max memory the JVM sees to be what Docker has specified. (Unfortunately it only supports Docker, and not other cgroup styles or cgroups v2.) 8u121b34 and later will also automatically set the number of CPUs without any flags, though this doesn't take CPU sharing into account. See http://mail.openjdk.java.net/pipermail/hotspot-dev/2017-July/027464.html for what looks like a very promising JEP on this front. The one downside I didn't point out is that the UseCGroupMemoryLimitForHeap flag doesn't control Unsafe allocations or JNI allocations, so you will need to handle approximations for those yourself. :)

Hope that helps a little.

--Allen Reese

[1]: http://www.oracle.com/technetwork/java/javaseproducts/documentation/8u121-revision-builds-relnotes-3450732.html
    8170888  hotspot  runtime  [linux] Experimental support for cgroup memory limits in container (ie Docker) environments
    6515172  hotspot  runtime  Runtime.availableProcessors() ignores Linux taskset command
    8161993  hotspot  gc       G1 crashes if active_processor_count changes during startup

[2] is an empty example program pointing out many of the flags and output:

[areese@refusesbruises ]$ java -XX:MaxDirectMemorySize=1m -Xms256m -Xmx256m -XX:NativeMemoryTracking=summary -XX:+UnlockDiagnosticVMOptions -XX:+PrintNMTStatistics -cp . test

Native Memory Tracking:

Total: reserved=1600200KB, committed=301232KB
-                 Java Heap (reserved=262144KB, committed=262144KB)
                            (mmap: reserved=262144KB, committed=262144KB)
-                     Class (reserved=1059947KB, committed=8043KB)
                            (classes #391)
                            (malloc=3179KB #129)
                            (mmap: reserved=1056768KB, committed=4864KB)
-                    Thread (reserved=10323KB, committed=10323KB)
                            (thread #10)
                            (stack: reserved=10280KB, committed=10280KB)
                            (malloc=32KB #54)
                            (arena=12KB #20)
-                      Code (reserved=249631KB, committed=2567KB)
                            (malloc=31KB #296)
                            (mmap: reserved=249600KB, committed=2536KB)
-                        GC (reserved=13049KB, committed=13049KB)
                            (malloc=3465KB #111)
                            (mmap: reserved=9584KB, committed=9584KB)
-                  Compiler (reserved=132KB, committed=132KB)
                            (malloc=1KB #21)
                            (arena=131KB #3)
-                  Internal (reserved=3277KB, committed=3277KB)
                            (malloc=3245KB #1278)
                            (mmap: reserved=32KB, committed=32KB)
-                    Symbol (reserved=1356KB, committed=1356KB)
                            (malloc=900KB #64)
                            (arena=456KB #1)
-    Native Memory Tracking (reserved=34KB, committed=34KB)
                            (malloc=3KB #32)
                            (tracking overhead=32KB)
-               Arena Chunk (reserved=305KB, committed=305KB)
                            (malloc=305KB)

[areese@refusesbruises ]$
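P.S. A quick way to sanity-check what -XX:+UseCGroupMemoryLimitForHeap actually did inside your container is to dump the final flag values and look at MaxHeapSize, e.g. (illustrative invocation; the exact output formatting varies by build):

java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+PrintFlagsFinal -version | grep -i maxheapsize

If the cgroup limit was picked up, MaxHeapSize should reflect a fraction of the container limit rather than of host RAM.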
From: Sebastian Łaskawiec <[email protected]>
To: mechanical-sympathy <[email protected]>
Sent: Friday, August 4, 2017 6:38 AM
Subject: Re: Measuring JVM memory for containers

I think you're right Tom. Here is a good snippet from the "Java Performance" book [1]. I'll experiment with this a little bit further, but it looks promising. Thanks for the hint and link!

[1] https://books.google.pl/books?id=aIhUAwAAQBAJ&printsec=frontcover&dq=Java+Performance:+The+Definitive+Guide:+Getting+the+Most+Out+of+Your+Code&hl=en&sa=X&redir_esc=y#v=onepage&q='-XX%3AMaxRam'%20java&f=false

On Friday, 4 August 2017 09:44:32 UTC+2, Tom Lee wrote:

Neat, didn't know about MaxRAM or native memory tracking.

RE: "The downside is that with MaxRAM parameter I lose control over Xms."

Oh, it doesn't work? I can't track down definitive info from a quick Google around, but this seems to imply it should: https://stackoverflow.com/questions/19712446/how-does-java-7-decide-on-the-max-value-of-heap-memory-allocated-xmx-on-osx ... It's a few years old, but this comment sticks out from the OpenJDK copy/paste in the StackOverflow answer:

// If the initial_heap_size has not been set with InitialHeapSize
// or -Xms, then set it as fraction of the size of physical memory,
// respecting the maximum and minimum sizes of the heap.

That seems to imply InitialHeapSize/-Xms gets precedence. Perhaps that information is out of date or incorrect; a look at more recent OpenJDK source code might offer some hints. If Xms isn't an option for some reason, is InitialRAMFraction/MaxRAMFraction available? Maybe something else to look at. In any case, thanks for the info!

On Fri, Aug 4, 2017 at 12:18 AM, Sebastian Łaskawiec <[email protected]> wrote:

Thanks a lot for all the hints! They helped me a lot. I think I'm moving forward. The key thing was to calculate the amount of occupied memory as seen by cgroups. It can easily be read from:

- /sys/fs/cgroup/memory/memory.usage_in_bytes
- /sys/fs/cgroup/memory/memory.limit_in_bytes

The calculated ratio, along with Native Memory Tracking [1], helped me find a good balance. I also found a shortcut which makes setting the initial parameters much easier: -XX:MaxRAM [2] (set it based on the cgroups limit). The downside is that with the MaxRAM parameter I lose control over Xms.

[1] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
[2] https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/

On Thursday, 3 August 2017 20:16:50 UTC+2, Tom Lee wrote:

Hey Sebastian,

I dealt with similar issues on Docker a few years back. The safest way to do it is to use some sort of heuristic for your maximum JVM process size. I'm working from a very poor memory, and perhaps somebody here will tell me this is a bad idea for perfectly good reasons, but IIRC the ham-fisted heuristic we used at the time for max total JVM process size was something like:

<runtime value of -Xmx> + <runtime value of -XX:MaxDirectMemorySize> + slop

It's easy enough to see these values via -XX:+PrintFlagsFinal if they're not explicitly defined by your apps. We typically had Xmx somewhere between 8-12GB, but MaxDirectMemorySize varied greatly from app to app: sometimes a few hundred MB, in some weird cases multiples of the JVM heap size. The "slop" was for things we hadn't accounted for, but we really should have included things like the code cache size etc. as Meg's estimate above does. I think we used ~10% of the JVM heap size, which was probably slightly wasteful, but it worked well enough for us. I suggest you take the above heuristic, mix it up with Meg's idea to include code cache size etc., and feel your way from there; see the sketch below. I'd personally always leave at least a few hundred megs of additional overhead on top of my "hard" numbers, because I don't trust myself with such things. :)
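Roughly, in code, the budget we handed to the container runtime looked something like this (an illustrative sketch only; all the numbers are made up, read the real values from -XX:+PrintFlagsFinal):

// Back-of-the-envelope container budget: heap + direct buffers + slop.
long xmx    = 8L * 1024 * 1024 * 1024;  // runtime value of -Xmx (8 GB here)
long direct = 512L * 1024 * 1024;       // runtime value of -XX:MaxDirectMemorySize
long slop   = xmx / 10;                 // ~10% of heap for everything unaccounted for
long containerLimitBytes = xmx + direct + slop;  // what we'd ask the container runtime for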
Let's see, what else. At the time, our JVM (I think this was an Oracle Java 8 JDK) set MaxDirectMemorySize to the value of Xmx by default. That implies the JVM process could (though not necessarily would) grow to roughly double its configured size to accommodate heap + direct buffers, if you had an application that made heavy use of direct buffers and put enough pressure on the heap to grow it to the configured Xmx value (or, as we typically did, set Xmx == Xms). Where possible we would constrain MaxDirectMemorySize to something "real" rather than leaving it at this default, preferring to have the JVM throw an OOME if we allocated more direct memory than we expected. That way we could get more info about the failure, rather than having the OOM killer hard-kill the entire process and not being able to understand why. YMMV.

One caveat: I can't quite remember whether Unsafe.allocateMemory()/Unsafe.freeMemory() count toward your MaxDirectMemorySize... perhaps somebody else here more familiar with the JVM internals could weigh in on that. Perhaps another thing to watch out for if you're doing "interesting" things with the JVM.

I found this sort of "informed guess" to be much more reliable than trying to figure things out empirically by monitoring processes over time etc. Anyway, hope that helps. Curious to know what you ultimately end up with.

Cheers,
Tom

--
Tom Lee / https://neeveresearch.com / @tglee

On Thu, Aug 3, 2017 at 10:31 AM, Meg Figura <[email protected]> wrote:

Hi Sebastian,

Our product runs within the JVM, within a (Hadoop) YARN container. Similar to your situation, YARN will kill the container if it goes over the amount of memory reserved for the container. Java heap sizes (-Xmx) for the apps we run within containers vary from about 6GB to about 31GB, so this may be completely inappropriate if you use much smaller heaps, but here is the heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting given to the JVM (in MB) and adjustJvmMemoryForYarn() gives the size of the container we request:

private static int getReservedCodeCacheSize(int jvmMemory) {
    return 100;
}

private static int getMaxMetaspaceSize(int jvmMemory) {
    return 256;
}

private static int getCompressedClassSpaceSize(int jvmMemory) {
    return 256;
}

private static int getExtraJvmOverhead(int jvmMemory) {
    if (jvmMemory <= 2048) {
        return 1024;
    } else if (jvmMemory <= (1024 * 16)) {
        return 2048;
    } else if (jvmMemory <= (1024 * 31)) {
        return 5120;
    } else {
        return 8192;
    }
}

public static int adjustJvmMemoryForYarn(int jvmMemory) {
    if (jvmMemory == 0) {
        return 0;
    }
    return jvmMemory
        + getReservedCodeCacheSize(jvmMemory)
        + getMaxMetaspaceSize(jvmMemory)
        + getCompressedClassSpaceSize(jvmMemory)
        + getExtraJvmOverhead(jvmMemory);
}

If the app uses any significant off-heap memory, we just add that to the container size. Obviously this isn't optimal, but it does prevent the "OOM killer" from kicking in.
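To put numbers on it, a hypothetical call for a 12 GB heap (values made up) works out to:

int containerMb = adjustJvmMemoryForYarn(12 * 1024);
// = 12288 + 100 + 256 + 256 + 2048 = 14948 MB

so a 12 GB heap ends up requesting roughly 14.6 GB from YARN.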
I'm interested to see if anyone has a better solution!

-Meg

On Thursday, August 3, 2017 at 5:17:11 AM UTC-4, Sebastian Łaskawiec wrote:

Hey,

Before digging into the problem, let me say that I'm very happy to meet you! My name is Sebastian Łaskawiec and I've been working for Red Hat, focusing mostly on in-memory store solutions. A while ago I attended the JVM performance and profiling workshop led by Martin, which was an incredible experience for me.

Over the last couple of days I've been working on tuning and sizing our app for Docker containers. I'm especially interested in running the JVM without swap and with constrained memory. Once you hit the memory limit, the OOM killer kicks in and takes your application down. Rafael wrote a pretty good pragmatic description of this here [1].

I'm currently looking for good practices for measuring and tuning JVM memory size. I'm currently using:

- the JVM Native Memory Tracker [2]
- pmap -x, which gives me RSS
- jstat -gccause, which gives me an idea of how the GC is behaving
- dstat, which is not cgroups-aware but gives me an overall idea about paging, CPU and memory

Here's an example of a log that I'm analyzing [3]. Currently I'm trying to adjust Xmx and Xms so that my application fills the constrained container but doesn't spill out (which would result in an OOM kill done by the kernel). The biggest problem I have is how to measure the remaining amount of memory inside the container. Also, I'm not sure why the amount of committed JVM memory is different from the RSS reported by pmap -x. Could you please give me a hand with this?

Thanks,
Sebastian

[1] https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
[2] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
[3] https://gist.github.com/slaskawi/a6ddb32e1396384d805528884f25ce4b
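P.S. In case it helps with reproducing my numbers: the RSS figure I quote is the "total" line at the bottom of the pmap output, e.g. (the pid is illustrative, and the exact columns vary a bit between procps versions):

pmap -x 12345 | tail -n 1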
