Neat, didn't know about MaxRAM or native memory tracking.

RE:

The downside is that with the MaxRAM parameter I lose control over Xms.


Oh, it doesn't work? Can't track down definitive info from a quick Google
around, but this seems to imply it should:
https://stackoverflow.com/questions/19712446/how-does-java-7-decide-on-the-max-value-of-heap-memory-allocated-xmx-on-osx
... It's a few years old, but this comment sticks out from the OpenJDK
copy/paste in the StackOverflow answer:

  // If the initial_heap_size has not been set with InitialHeapSize
  // or -Xms, then set it as fraction of the size of physical memory,
  // respecting the maximum and minimum sizes of the heap.


Seems to imply InitialHeapSize/Xms takes precedence. Perhaps that
information is out of date / incorrect ... a look at more recent OpenJDK
source code might offer some hints.
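
A quick empirical check might settle it, too -- something along these
lines (untested, but both flags should show up in the final flag dump):

  java -XX:MaxRAM=2g -Xms512m -XX:+PrintFlagsFinal -version \
      | grep -E 'InitialHeapSize|MaxHeapSize'

If InitialHeapSize comes back as ~512MB, Xms still wins.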

If Xms isn't an option for some reason, are
InitialRAMFraction/MaxRAMFraction available? Maybe something else to look
at.
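(If those flags behave the way their names suggest, something like
-XX:MaxRAM=2g -XX:MaxRAMFraction=2 -XX:InitialRAMFraction=4 ought to pin
Xmx at ~1GB and Xms at ~512MB -- untested, and the semantics are from
memory, so take it with a grain of salt.)

In any case, thanks for the info!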

On Fri, Aug 4, 2017 at 12:18 AM, Sebastian Łaskawiec <
[email protected]> wrote:

> Thanks a lot for all the hints! They helped me a lot.
>
> I think I'm moving forward. The key thing was to calculate the amount of
> occupied memory seen by CGroups. It can be easily done using:
>
>    - /sys/fs/cgroup/memory/memory.usage_in_bytes
>    - /sys/fs/cgroup/memory/memory.limit_in_bytes
>
> The calculated ratio, along with Native Memory Tracking [1], helped me find
> a good balance. I also found a shortcut which makes setting initial
> parameters much easier: -XX:MaxRAM [2] (set based on the CGroups limit).
> The downside is that with the MaxRAM parameter I lose control over Xms.
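>
> For reference, reading those two files is trivial -- a minimal sketch
> (cgroups v1 paths as above, values in bytes):
>
> import java.nio.file.Files;
> import java.nio.file.Paths;
>
> public class CGroupMemory
> {
>     private static long readLong(String path) throws Exception
>     {
>         return Long.parseLong(new String(Files.readAllBytes(Paths.get(path))).trim());
>     }
>
>     public static void main(String[] args) throws Exception
>     {
>         long used = readLong("/sys/fs/cgroup/memory/memory.usage_in_bytes");
>         long limit = readLong("/sys/fs/cgroup/memory/memory.limit_in_bytes");
>         System.out.printf("used=%d limit=%d ratio=%.2f%n", used, limit, (double) used / limit);
>     }
> }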
>
> [1] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
> [2] https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/
>
> On Thursday, 3 August 2017 20:16:50 UTC+2, Tom Lee wrote:
>>
>> Hey Sebastian,
>>
>> Dealt with similar issues on Docker a few years back -- safest way to
>> do it is to use some sort of heuristic for your maximum JVM process size.
>> Working from a very poor memory and perhaps somebody here will tell me this
>> is a bad idea for perfectly good reasons, but iirc the ham-fisted heuristic
>> we used at the time for max total JVM process size was something like:
>>
>> <runtime value of -Xmx> + <runtime value of -XX:MaxDirectMemorySize> +
>> slop
>>
>> Easy enough to see these values via -XX:+PrintFlagsFinal if they're not
>> explicitly defined by your apps. We typically had Xmx somewhere between
>> 8-12GB, but MaxDirectMemorySize varied greatly from app to app. Sometimes a
>> few hundred MB, in some weird cases it was multiples of the JVM heap size.
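>>
>> E.g.:
>>
>>   java -XX:+PrintFlagsFinal -version | grep -E 'MaxHeapSize|MaxDirectMemorySize'
>>
>> (On the JDKs we used, MaxDirectMemorySize printing as 0 meant "default",
>> i.e. it'd effectively fall back to Xmx -- more on that below.)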
>>
>> The "slop" was for things we hadn't accounted for, but we really should
>> have included things like the code cache size etc. as Meg's estimate above
>> does. I think we used ~10% of the JVM heap size, which was probably
>> slightly wasteful, but worked well enough for us. Suggest you take the
>> above heuristic and mix it up with Meg's idea to include code cache size
>> etc. & feel your way from there. I'd personally always leave at least a few
>> hundred megs additional overhead on top of my "hard" numbers because I
>> don't trust myself with such things. :)
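>>
>> In code, that back-of-the-envelope heuristic would look something like
>> this (MB units; the 10% figure and the few-hundred-MB floor are just the
>> guesses described above):
>>
>> private static long maxJvmProcessSizeMb(long xmxMb, long maxDirectMb)
>> {
>>     long slopMb = Math.max(xmxMb / 10, 256); // slop: ~10% of heap, floored at a few hundred MB
>>     return xmxMb + maxDirectMb + slopMb;
>> }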
>>
>> Let's see, what else. At the time our JVM -- think this was an Oracle
>> Java 8 JDK -- set MaxDirectMemorySize to the value of Xmx by default,
>> implying the JVM process *could* (but not necessarily *would*) grow up
>> to roughly double its configured size to accommodate heap + direct buffers
>> if you had an application that made heavy use of direct buffers and put
>> enough pressure on the heap to grow it to the configured Xmx value (or as
>> we typically did, set Xmx == Xms).
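>>
>> (E.g. -Xmx8g with MaxDirectMemorySize left unset meant heap plus direct
>> buffers alone could, in the worst case, approach ~16GB -- before counting
>> any other native overhead.)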
>>
>> Where possible we would constrain MaxDirectMemorySize to something "real"
>> rather than leaving it to this default, preferring to have the JVM throw up
>> an OOME if we were allocating more direct memory than we expected so we
>> could get more info about the failure rather than worrying about the OOM
>> killer hard-killing the entire process & not being able to understand why.
>> YMMV.
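>>
>> (Concretely, that just means passing something like
>> -XX:MaxDirectMemorySize=512m instead of leaving the flag unset.)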
>>
>> One caveat: I can't quite remember if 
>> Unsafe.allocateMemory()/Unsafe.freeMemory()
>> count toward your MaxDirectMemorySize ... perhaps somebody else here more
>> familiar with the JVM internals could weigh in on that. Perhaps another
>> thing to watch out for if you're doing "interesting" things with the JVM.
>>
>> I found this sort of "informed guess" to be much more reliable than
>> trying to figure things out empirically by monitoring processes over time
>> etc. ... anyway, hope that helps, curious to know what you ultimately end
>> up with.
>>
>> Cheers,
>> Tom
>>
>> On Thu, Aug 3, 2017 at 10:31 AM, Meg Figura <[email protected]> wrote:
>>
>>> Hi Sebastian,
>>>
>>> Our product runs within the JVM, within a (Hadoop) YARN container.
>>> Similar to your situation, YARN will kill the container if it goes over the
>>> amount of memory reserved for the container. Java heap sizes (-Xmx) for the
>>> apps we run within containers vary from about 6GB to about 31GB, so this
>>> may be completely inappropriate if you use much smaller heaps, but here is
>>> the heuristic we use on Java 8. 'jvmMemory' is the -Xmx setting given to
>>> the JVM (in MB) and adjustJvmMemoryForYarn() gives the size (also in MB)
>>> of the container we request.
>>>
>>> // All sizes below are in MB.
>>>
>>> private static int getReservedCodeCacheSize(int jvmMemory)
>>> {
>>>     return 100; // JIT code cache (-XX:ReservedCodeCacheSize)
>>> }
>>>
>>> private static int getMaxMetaspaceSize(int jvmMemory)
>>> {
>>>     return 256; // class metadata (-XX:MaxMetaspaceSize)
>>> }
>>>
>>> private static int getCompressedClassSpaceSize(int jvmMemory)
>>> {
>>>     return 256; // -XX:CompressedClassSpaceSize
>>> }
>>>
>>> // Headroom for everything else (thread stacks, GC bookkeeping, native
>>> // allocations), stepped up with heap size.
>>> private static int getExtraJvmOverhead(int jvmMemory)
>>> {
>>>     if (jvmMemory <= 2048)
>>>     {
>>>         return 1024;
>>>     }
>>>     else if (jvmMemory <= (1024 * 16))
>>>     {
>>>         return 2048;
>>>     }
>>>     else if (jvmMemory <= (1024 * 31))
>>>     {
>>>         return 5120;
>>>     }
>>>     else
>>>     {
>>>         return 8192;
>>>     }
>>> }
>>>
>>> public static int adjustJvmMemoryForYarn(int jvmMemory)
>>> {
>>>     if (jvmMemory == 0)
>>>     {
>>>         return 0;
>>>     }
>>>
>>>     return jvmMemory +
>>>            getReservedCodeCacheSize(jvmMemory) +
>>>            getMaxMetaspaceSize(jvmMemory) +
>>>            getCompressedClassSpaceSize(jvmMemory) +
>>>            getExtraJvmOverhead(jvmMemory);
>>> }
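>>>
>>> So, for example, an 8GB heap works out to adjustJvmMemoryForYarn(8192) =
>>> 8192 + 100 + 256 + 256 + 2048 = 10852, i.e. we request a container of
>>> roughly 10.6GB.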
>>>
>>>
>>>
>>> If the app uses any significant off-heap memory, we just add this to the
>>> container size.
>>>
>>> Obviously, this isn't optimal, but it does prevent the "OOM killer" from
>>> kicking in. I'm interested to see if anyone has a better solution!
>>>
>>> -Meg
>>>
>>>
>>>
>>> On Thursday, August 3, 2017 at 5:17:11 AM UTC-4, Sebastian Łaskawiec
>>> wrote:
>>>>
>>>> Hey,
>>>>
>>>> Before digging into the problem, let me say that I'm very happy to meet
>>>> you! My name is Sebastian Łaskawiec and I've been working for Red Hat
>>>> focusing mostly on in-memory store solutions. A while ago I attended a JVM
>>>> performance and profiling workshop led by Martin, which was an incredible
>>>> experience for me.
>>>>
>>>> Over the last couple of days I've been working on tuning and sizing
>>>> our app for Docker containers. I'm especially interested in running the
>>>> JVM without swap and with constrained memory. Once you hit the memory
>>>> limit, the OOM Killer kicks in and takes your application down. Rafael
>>>> wrote a pretty good pragmatic description here [1].
>>>>
>>>> I'm looking for some good practices for measuring and tuning
>>>> JVM memory size. Currently I'm using:
>>>>
>>>>    - The JVM native memory tracker [2] (enabled as shown below)
>>>>    - pmap -x, which gives me RSS
>>>>    - jstat -gccause, which gives me an idea how GC is behaving
>>>>    - dstat which is not CGroups aware but gives me an overall idea
>>>>    about paging, CPU and memory
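>>>>
>>>> For reference, [2] has to be switched on at JVM startup and is then
>>>> queried with jcmd ('app.jar' standing in for the real launch command):
>>>>
>>>>    java -XX:NativeMemoryTracking=summary -jar app.jar
>>>>    jcmd <pid> VM.native_memory summary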
>>>>
>>>> Here's an example of a log that I'm analyzing [3]. Currently I'm trying
>>>> to adjust Xmx and Xms so that my application fills the constrained
>>>> container but doesn't spill out (which would result in an OOM kill by the
>>>> kernel). The biggest problem I have is how to measure the remaining
>>>> amount of memory inside the container. Also, I'm not sure why the amount
>>>> of committed JVM memory differs from the RSS reported by pmap -x.
>>>> Could you please give me a hand with this?
>>>>
>>>> Thanks,
>>>> Sebastian
>>>>
>>>> [1] https://developers.redhat.com/blog/2017/03/14/java-inside-docker/
>>>> [2] https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
>>>> [3] https://gist.github.com/slaskawi/a6ddb32e1396384d805528884f25ce4b
>>>>
>>
>>
>>
>> --
>> *Tom Lee */ https://neeveresearch.com / @tglee <http://twitter.com/tglee>
>>



-- 
*Tom Lee */ https://neeveresearch.com / @tglee <http://twitter.com/tglee>
