> On Jan 21, 2016, at 2:59 PM, Poonam Bajaj Parhar <poonam.ba...@oracle.com> 
> wrote:
> 
> Hello,
> 
> On 1/21/2016 12:00 PM, charlie hunt wrote:
>> Since you are on Solaris, transparent huge pages are not an issue. Solaris 
>> is smart enough to figure out automatically how to make large pages work, 
>> transparently. You do not need to remove -XX:+UseLargePages.
>> 
>> Also, since you are using Solaris, use vmstat to monitor the application and 
>> watch the 'free' and 'sr' columns. You should see '0' in the 'sr' column 
>> most of the time while the app is running. In short, the 'sr' column is the 
>> page scan rate. As memory gets low (as indicated in the 'free' column), the 
>> page scanner will start looking for available pages it can reclaim. So, if 
>> you are starting to see non-zero entries in the 'sr' column that tend to be 
>> increasing, and the values in the 'free' column getting smaller and smaller, 
>> that is a pretty good sign that the system is paging to virtual memory, or 
>> approaching that state. And once that paging starts to happen, you will also 
>> see activity in the 'pi' and 'po' columns (the page-in and page-out columns).
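A minimal sketch of that vmstat check, assuming the Solaris column layout ('free' is the 5th field, 'sr' the 12th; verify against your vmstat header line). `check_paging` is a hypothetical helper, not part of any tool:

```shell
# Flag paging pressure from one vmstat data line read on stdin.
# Assumed Solaris layout: kthr(r b w) memory(swap free) page(re mf pi po fr de sr) ...
check_paging() {
  awk '{ free = $5; sr = $12
         if (sr + 0 > 0) print "WARN: sr=" sr " free=" free
         else            print "OK: free=" free }'
}

# Example with a captured line (sr=37, i.e. the page scanner is active):
echo "0 0 0 16384000 1203456 3 45 0 0 0 0 37 0 0 0 0 0" | check_paging
# -> WARN: sr=37 free=1203456
```

In practice you would pipe `vmstat 5` through something like this and watch for a run of WARN lines with a shrinking free value.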
>> 
>> On setting an eden size … while you can set an eden size, doing so will 
>> disable the adaptive sizing G1 does in an effort to meet the pause time 
>> goal. IMO, the 13 GB young gen is not the root of the long pause; we need to 
>> find the cause of the high sys time and low parallelism.
>> 
> But it may be worthwhile to try to understand why eden got sized at 13 GB. If 
> we look at the other GC entries, the eden size is comparatively very small:
> 
> [Eden: 1384.0M(1376.0M)->0.0B(1248.0M) Survivors: 56.0M->184.0M Heap: 
> 1431.6M(28.0G)->247.6M(28.0G)]
> [Times: user=3.55 sys=0.26, real=0.33 secs] 
> 
> As Jenny suggested, PrintAdaptiveSizePolicy may shed some light on the sizing 
> decisions.

Agreed. :)  

You (and Jenny) are right. It is good practice to enable 
-XX:+PrintAdaptiveSizePolicy to see what sizing decisions are being made.
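As a sketch, the flag can be appended to the existing launcher fragment; the surrounding flags below are abbreviated from Amit's script, and only the last line is new:

```shell
# Existing G1 settings from the launch script (abbreviated):
JVM_ARGS="-XX:+UseG1GC -XX:MaxGCPauseMillis=500"
# New: have the GC log record why G1 grows/shrinks the young gen between GCs.
JVM_ARGS="${JVM_ARGS} -XX:+PrintAdaptiveSizePolicy"
echo "$JVM_ARGS"
# -> -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:+PrintAdaptiveSizePolicy
```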

charlie

> 
> Thanks,
> Poonam
> 
> 
>> hths,
>> 
>> charlie  
>> 
>> PS: It would likely help to move to a JDK 8 JVM since there were many 
>> improvements made in G1 between JDK 7 and JDK 8.
>> 
>>> On Jan 21, 2016, at 1:46 PM, Amit Mishra <amit.mis...@redknee.com> wrote:
>>> 
>>> Thanks, Charlie and Jenny, for your expert comments.
>>>  
>>> We are using JDK 1.7.0_45, as our applications have never been tested on 
>>> Java 1.8.
>>>  
>>> Yes, I dug further and found the system is running low on memory, so the OS 
>>> might occasionally resort to swapping. I will ask the customer to increase 
>>> physical memory on the node to ensure swap is not used at any point in 
>>> time.
>>>  
>>> I am not sure about the eden size; as a best practice we haven't set any 
>>> explicit value for new size or eden. Is it fine to set an explicit eden 
>>> size?
>>>  
>>> I will ask the team to run the next test with MaxGCPauseMillis=200.
>>>  
>>> We are on the Solaris 5.10 platform, and you can see that the large pages 
>>> parameter is enabled. Do we need to disable it for better performance?
>>>  
>>> JVM_ARGS="${JVM_ARGS} -XX:+UseLargePages"
>>>  
>>>  
>>> You can also refer to the rest of the JVM parameters the application is 
>>> using, to confirm whether we need to disable any of them or enable a few 
>>> more for better G1 GC performance.
>>>  
>>> argv[11]: -Xms28672m
>>> argv[12]: -Xmx28672m
>>> argv[13]: -Xss1m
>>> argv[14]: -Xoss1m
>>> argv[15]: -XX:PermSize=256m
>>> argv[16]: -XX:MaxPermSize=256m
>>> argv[17]: -XX:ReservedCodeCacheSize=128m
>>> argv[18]: -XX:+HeapDumpOnOutOfMemoryError
>>> argv[19]: -XX:+AggressiveOpts
>>> argv[20]: -XX:+DisableExplicitGC
>>> argv[22]: -Dcom.sun.management.jmxremote.ssl=false
>>> argv[23]: -Dcom.sun.management.jmxremote.authenticate=false
>>> argv[25]: -XX:+PrintGCTimeStamps
>>> argv[26]: -XX:+PrintGCDetails
>>> argv[28]: -verbose:gc
>>> argv[29]: -Dsun.rmi.dgc.server.gcInterval=86400000
>>> argv[30]: -Dsun.rmi.dgc.client.gcInterval=86400000
>>> argv[31]: -XX:+UseLargePages
>>> argv[32]: -XX:+MaxFDLimit
>>> argv[37]: -Dorg.omg.CORBA.ORBClass=org.jacorb.orb.ORB
>>> argv[38]: -Dorg.omg.CORBA.ORBSingletonClass=org.jacorb.orb.ORBSingleton
>>> argv[39]: -XX:-EliminateLocks
>>> argv[40]: -XX:-OmitStackTraceInFastThrow
>>> argv[41]: -XX:-UseSSE42Intrinsics
>>> argv[42]: -XX:-ReduceInitialCardMarks
>>> argv[43]: -XX:+UseG1GC
>>> argv[44]: -XX:MaxGCPauseMillis=500
>>> argv[45]: -XX:+UseCompressedOops
>>> argv[46]: -XX:+PrintFlagsFinal
>>> argv[47]: com.redknee.framework.core.platform.Core
>>>  
>>>  
>>> Thanks,
>>> Amit
>>>  
>>> From: charlie hunt [mailto:charlie.h...@oracle.com] 
>>> Sent: Friday, January 22, 2016 01:06
>>> To: Amit Mishra <amit.mis...@redknee.com>
>>> Cc: hotspot-gc-use@openjdk.java.net
>>> Subject: Re: regarding long young generation pause when using G1 GC
>>>  
>>> Hi Amit,
>>>  
>>> To add to Jenny’s comments …
>>>  
>>> In the absence of looking at the entire GC log and looking only at the 
>>> entry included in email, this line really catches my eye:
>>> [Times: user=184.95 sys=34.49, real=148.07 secs] 
>>>  
>>> - This is pretty high sys time. If this is Linux, check that transparent 
>>> huge pages are disabled. If they are disabled, then you may be paging to 
>>> virtual memory. In short, we need to chase down the root of that high sys 
>>> time.
>>> - Comparing user=184.95 to real=148.07, you do not see much parallelism, 
>>> ~1.24x. That is not much parallelism for a multi-threaded young GC.
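A sketch of those two checks on a Linux box (the sysfs path below is the common default, but it varies by distro and kernel; `thp_status` is a hypothetical helper):

```shell
# Report the transparent-huge-pages state; the bracketed value is the
# active one (e.g. "always [madvise] never"). "unavailable" means the
# kernel does not expose the knob at this path.
thp_status() {
  f=/sys/kernel/mm/transparent_hugepage/enabled
  if [ -r "$f" ]; then cat "$f"; else echo "unavailable"; fi
}
thp_status

# Rough paging check: sustained non-zero si/so columns (Linux vmstat)
# or pi/po columns (Solaris vmstat) mean the system is actively swapping.
if command -v vmstat >/dev/null 2>&1; then vmstat 1 2; fi
```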
>>>  
>>> hths,
>>>  
>>> charlie
>>>  
>>> On Jan 21, 2016, at 12:12 PM, Yu Zhang <yu.zh...@oracle.com> wrote:
>>>  
>>> Amit,
>>> 
>>> Which JDK version are you using?
>>> 
>>> You give MaxGCPauseMillis=500, but it seems G1 is growing the eden size too 
>>> large, up to 13 GB. I am curious why. Can you add the flag 
>>> -XX:+PrintAdaptiveSizePolicy? Meanwhile, to reduce the young GC pause, can 
>>> you try a smaller number for MaxGCPauseMillis, or give a smaller eden size?
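For example, the first option could look like this in the launch script (the value 200 is only illustrative; it is the number Amit later proposed trying):

```shell
# Starting from the collector flag already in the script:
JVM_ARGS="-XX:+UseG1GC"
# Tighten the pause goal so G1 keeps eden smaller on its own.
JVM_ARGS="${JVM_ARGS} -XX:MaxGCPauseMillis=200"
# Alternative: cap the young gen explicitly (e.g. -Xmn), but note that
# doing so disables G1's adaptive young-gen sizing, as noted above.
echo "$JVM_ARGS"
# -> -XX:+UseG1GC -XX:MaxGCPauseMillis=200
```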
>>> 
>>> Another thing I noticed: whenever you have a long GC pause, the system CPU 
>>> is high. Maybe the system is busy doing something else?
>>> 
>>> Thanks,
>>> Jenny
>>> On 1/21/2016 4:30 AM, Amit Mishra wrote:
>>> Hello team,
>>>  
>>> Please help me with a long G1 GC young generation pause.
>>>  
>>> At one of our customer deployments we were using CMS with a 48 GB heap and 
>>> facing a concurrent mode failure every month.
>>> 
>>> To solve this we started a load test using G1 GC in the lab, but we are 
>>> seeing intermittent long GC pauses, as long as 148 seconds.
>>> 
>>> Could you please look into the logs and confirm what we can best do to 
>>> ascertain the root cause, and any possible tuning to fix this?
>>>  
>>> Thanks for help in advance.
>>> We are using default GC parameters (attaching the full GC log file as well):
>>>  
>>> JVM_ARGS="${JVM_ARGS} -XX:-OmitStackTraceInFastThrow"
>>> JVM_ARGS="${JVM_ARGS} -XX:-UseSSE42Intrinsics"
>>> JVM_ARGS="${JVM_ARGS} -XX:-ReduceInitialCardMarks"
>>> JVM_ARGS="${JVM_ARGS} -XX:+UseG1GC"
>>> JVM_ARGS="${JVM_ARGS} -XX:MaxGCPauseMillis=500" 
>>> JVM_ARGS="${JVM_ARGS} -XX:+UseCompressedOops" 
>>> JVM_ARGS="${JVM_ARGS} -XX:+PrintFlagsFinal" 
>>> JVM_ARGS="${JVM_ARGS} -XX:-EliminateLocks"
>>> JVM_ARGS="${JVM_ARGS} -XX:+UseLargePages"
>>> export GC_LOG="${PROJECT_HOME}/log/gcstats.log.$$"
>>> JVM_ARGS=$JVM_ARGS" -Xloggc:${GC_LOG}"
>>> JVM_ARGS=$JVM_ARGS" -verbose:gc"
>>> JVM_ARGS="${JVM_ARGS} -XX:+DisableExplicitGC"
>>> JVM_ARGS=$JVM_ARGS" -XX:+AggressiveOpts"
>>> JVM_ARGS="${JVM_ARGS} -XX:+HeapDumpOnOutOfMemoryError"
>>> JVM_ARGS="${JVM_ARGS} -XX:ReservedCodeCacheSize=128m"
>>> JVM_ARGS="${JVM_ARGS} -XX:PermSize=256m"
>>> JVM_ARGS="${JVM_ARGS} -XX:MaxPermSize=256m"
>>> JVM_ARGS="${JVM_ARGS} -Xmx28672m"
>>> JVM_ARGS="${JVM_ARGS} -Xms28672m"
>>>  
>>> Long GC pause.
>>>  
>>> 5020.853: [GC pause (young), 148.0343389 secs]
>>>    [Parallel Time: 142474.3 ms, GC Workers: 13]
>>>       [GC Worker Start (ms): Min: 5022695.7, Avg: 5022705.6, Max: 
>>> 5022714.4, Diff: 18.7]
>>>       [Ext Root Scanning (ms): Min: 143.0, Avg: 2550.5, Max: 6442.5, Diff: 
>>> 6299.4, Sum: 33157.1]
>>>       [Update RS (ms): Min: 5704.6, Avg: 19017.1, Max: 45977.9, Diff: 
>>> 40273.3, Sum: 247222.7]
>>>          [Processed Buffers: Min: 0, Avg: 26.9, Max: 291, Diff: 291, Sum: 
>>> 350]
>>>       [Scan RS (ms): Min: 70538.2, Avg: 92539.7, Max: 114688.0, Diff: 
>>> 44149.8, Sum: 1203016.2]
>>>       [Object Copy (ms): Min: 894.7, Avg: 9265.9, Max: 20674.1, Diff: 
>>> 19779.4, Sum: 120456.9]
>>>       [Termination (ms): Min: 10169.3, Avg: 19073.0, Max: 26684.1, Diff: 
>>> 16514.7, Sum: 247949.3]
>>>       [GC Worker Other (ms): Min: 0.1, Avg: 4.5, Max: 18.2, Diff: 18.0, 
>>> Sum: 58.4]
>>>       [GC Worker Total (ms): Min: 142441.2, Avg: 142450.8, Max: 142461.1, 
>>> Diff: 19.8, Sum: 1851860.7]
>>>       [GC Worker End (ms): Min: 5165155.6, Avg: 5165156.4, Max: 5165162.0, 
>>> Diff: 6.4]
>>>    [Code Root Fixup: 2.2 ms]
>>>    [Clear CT: 11.2 ms]
>>>    [Other: 5546.7 ms]
>>>       [Choose CSet: 0.0 ms]
>>>       [Ref Proc: 47.4 ms]
>>>       [Ref Enq: 0.9 ms]
>>>       [Free CSet: 3642.1 ms]
>>>    [Eden: 13.2G(13.2G)->0.0B(1416.0M) Survivors: 16.0M->16.0M Heap: 
>>> 24.7G(28.0G)->11.5G(28.0G)]
>>>  [Times: user=184.95 sys=34.49, real=148.07 secs] 
>>>  
>>> 
>>> Thanks,
>>> Amit
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use@openjdk.java.net
>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>> 
>> 
>> 
> 

