Hi Thomas,

It was too early to draw conclusions.
After some prolonged testing I've noticed that more thorough tuning may be 
required to avoid this issue completely, and -XX:-G1UseAdaptiveIHOP is not 
always enough either.

What bothers me is the steep jump in time required between the last Mixed GC 
and the previous one:
In the 9th, it took 129.8ms to evacuate 104 old regions:
[64394.771s][info][gc,phases     ] GC(38781)   Evacuate Collection Set: 129.8ms
[64394.771s][info][gc,phases     ] GC(38781)   Code Roots: 0.0ms
[64394.771s][info][gc,phases     ] GC(38781)   Clear Card Table: 3.4ms
[64394.771s][info][gc,phases     ] GC(38781)   Expand Heap After Collection: 
[64394.771s][info][gc,phases     ] GC(38781)   Free Collection Set: 3.9ms
[64394.771s][info][gc,phases     ] GC(38781)   Merge Per-Thread State: 0.1ms
[64394.771s][info][gc,phases     ] GC(38781)   Other: 13.8ms
[64394.771s][info][gc,heap       ] GC(38781) Eden regions: 37->0(37)
[64394.771s][info][gc,heap       ] GC(38781) Survivor regions: 3->3(5)
[64394.771s][info][gc,heap       ] GC(38781) Old regions: 457->353
[64394.771s][info][gc,heap       ] GC(38781) Humongous regions: 3->3
[64394.771s][info][gc,metaspace  ] GC(38781) Metaspace: 70587K->70587K(83968K)
[64394.771s][info][gc            ] GC(38781) Pause Mixed (G1 Evacuation Pause) 
15972M->11457M(65536M) (64394.620s, 64394.771s) 150.931ms

While in the 10th (the last), it took 3401.3ms to evacuate 87 regions:
[64398.393s][info][gc,phases     ] GC(38782)   Evacuate Collection Set: 3401.3ms
[64398.393s][info][gc,phases     ] GC(38782)   Code Roots: 0.0ms
[64398.393s][info][gc,phases     ] GC(38782)   Clear Card Table: 2.8ms
[64398.393s][info][gc,phases     ] GC(38782)   Expand Heap After Collection: 
[64398.393s][info][gc,phases     ] GC(38782)   Free Collection Set: 4.3ms
[64398.393s][info][gc,phases     ] GC(38782)   Merge Per-Thread State: 0.1ms
[64398.393s][info][gc,phases     ] GC(38782)   Other: 12.2ms
[64398.393s][info][gc,heap       ] GC(38782) Eden regions: 37->0(37)
[64398.393s][info][gc,heap       ] GC(38782) Survivor regions: 3->3(5)
[64398.393s][info][gc,heap       ] GC(38782) Old regions: 353->266
[64398.393s][info][gc,heap       ] GC(38782) Humongous regions: 3->3
[64398.393s][info][gc,metaspace  ] GC(38782) Metaspace: 70587K->70587K(83968K)
[64398.393s][info][gc            ] GC(38782) Pause Mixed (G1 Evacuation Pause) 
12641M->8678M(65536M) (64394.973s, 64398.393s) 3420.666ms

It looks like, on average, old regions in the 10th Mixed GC were about 31.5 
times more expensive than in the 9th, taking roughly 39ms to collect a single 
region. Does that make sense? To what extent can one old region be more 
expensive than another?
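To double-check the arithmetic, here is a small throwaway snippet (nothing G1-specific, just the region counts and evacuation times from the two log excerpts above):

```java
// Back-of-the-envelope per-region evacuation cost for the two
// Mixed GCs above (GC(38781) and GC(38782)).
public class EvacCost {
    public static void main(String[] args) {
        double evac9 = 129.8;          // ms, 9th mixed GC
        int regions9 = 457 - 353;      // 104 old regions collected
        double evac10 = 3401.3;        // ms, 10th mixed GC
        int regions10 = 353 - 266;     // 87 old regions collected

        double perRegion9 = evac9 / regions9;    // ~1.25 ms/region
        double perRegion10 = evac10 / regions10; // ~39.1 ms/region
        System.out.printf("9th:  %.2f ms/region%n", perRegion9);
        System.out.printf("10th: %.2f ms/region%n", perRegion10);
        System.out.printf("ratio: %.1fx%n", perRegion10 / perRegion9);
    }
}
```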
I wish a G1Ergonomics rule similar to "reason: predicted time is too high", 
but for order-of-magnitude jumps, had kicked in here even though the minimum 
number of old regions had not been reached yet. We hadn't spent all of 
-XX:G1MixedGCCountTarget=12 at this point.
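The kind of rule I have in mind could look roughly like this (a pure sketch with made-up names, not actual HotSpot code): stop adding old regions to the collection set when the observed per-region cost jumps by an order of magnitude, even if the minimum-old-regions target hasn't been reached.

```java
// Hypothetical abort heuristic for continuing a mixed-GC phase
// (illustrative only, not real HotSpot code).
final class MixedGcAbortSketch {
    static boolean shouldAbortMixedPhase(double prevMsPerRegion,
                                         double currMsPerRegion,
                                         double jumpFactor) {
        // Abort when the per-region cost exceeds the previous
        // collection's cost by more than jumpFactor.
        return currMsPerRegion > prevMsPerRegion * jumpFactor;
    }

    public static void main(String[] args) {
        double prev = 129.8 / 104;  // 9th mixed GC, ~1.25 ms/region
        double curr = 3401.3 / 87;  // 10th mixed GC, ~39.1 ms/region
        System.out.println(shouldAbortMixedPhase(prev, curr, 10.0));
    }
}
```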

Log file: 
Sadly, with no ergonomics logging.

The next thing I'm going to try is adjusting -XX:G1MixedGCLiveThresholdPercent.
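For reference, a sketch of the combined launch line I'm testing with (heap size, threshold value, log path, and jar name are placeholders; an -Xlog selector including gc+ergo would also capture the ergonomics decisions missing from the log above):

```shell
# Hypothetical launch line combining the options discussed in this
# thread; adjust -Xmx, the threshold, and paths for your environment.
java -Xms64g -Xmx64g \
     -XX:+UseG1GC \
     -XX:+UnlockExperimentalVMOptions \
     -XX:-G1UseAdaptiveIHOP \
     -XX:G1NewSizePercent=2 \
     -XX:G1HeapWastePercent=10 \
     -XX:G1MixedGCCountTarget=12 \
     -XX:G1MixedGCLiveThresholdPercent=85 \
     -Xlog:gc*,gc+ergo*=trace:file=gc.log \
     -jar adserver.jar
```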

Thank you!

Ildar Nurislamov
GetIntent, AdServer Team Leader

> On Sep 29, 2016, at 13:42, Ildar Nurislamov <inurisla...@getintent.com> wrote:
> Hi Thomas,
> Thank you for really helpful advices.
> I have performed 8-hour testing with:
> -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=2 
> -XX:G1HeapWastePercent=10 -XX:G1MixedGCCountTarget=12 
> and they improved situation for both 8u and 9ea.
> Longest pause on 9ea now is 400ms with Adaptive sizing for IHOP
> I will continue testing and report if anything interesting pops out.
> --
> Ildar Nurislamov
> GetIntent, AdServer Team Leader
>> On Sep 28, 2016, at 10:37, Thomas Schatzl <thomas.scha...@oracle.com> wrote:
>> Hi Ildar,
>> On Fri, 2016-09-23 at 12:40 +0300, Ildar Nurislamov wrote:
>>> Hi Thomas Schatzl!
>>> Thank you for such prompt responses. 
>>> I'm going to try you advices and send results next week.
>>> Here are log files you have asked about:
>>> https://www.dropbox.com/s/i9o4nuuh5gpsf1y/9noaihop_07_09_16.log.zip?dl=0
>>> https://www.dropbox.com/s/xa3cfezvlqwwh6v/8u_log_07_09_16.log.zip?dl=0
>>   thanks a lot for the logs. As you may have noticed I closed 
>> JDK-8166500 as duplicate of the existing JDK-8159697 issue. They are the
>> same after all.
>> We will continue working on improving out-of-box experience of G1. :)
>> As hypothesized in the text for JDK-8166500, the 8u and 9-without-aihop 
>> show the same general issue. The suggested tunings should improve mixed
>> gc times for now.
>> Thanks,
>>   Thomas

hotspot-gc-use mailing list
