Felix, I ran with G1. Results are very interesting. Shame I can't post images, but here's a summary.

Using Parallel GC:
- Baseline CPU was smooth and low, reaching a normal rate of about 11% at the peak end of the test (~82,000 requests/min with 6,080 VU).
- Memory reached the 12GB max after 1 hour of the test, then did a GC, causing CPU spikes to about 60% and a break in requests of about 2.9 seconds.
- Memory then dropped to just under 8GB.

Using G1 GC:
- Baseline CPU was very spiky, fluctuating between 15% and 30% at the same point in the test compared to Parallel GC.
- Memory never reached the max: it hit 6.6GB at 20 minutes, dropped to 4GB, and from then until the end of the test slowly saw-toothed up to 7.5GB, never getting close to the max.
- No break in the request pattern.

The GC log analyser says this (hope the tables are not too stuffed up):

Region                      Allocated    Peak
Young Generation            4 gb         1.01 gb
Old Generation              8 gb         6.1 gb
Meta Space                  1.02 gb      22.44 mb
Young + Old + Meta space    13.02 gb     7.13 gb

Avg Pause GC Time = 90 ms
Max Pause GC Time = 190 ms

Duration (secs)    No. of GCs    Percentage
0 - 0.1            837           78.151%
0.1 - 0.2          234           100.0%

A substantial CPU cost to achieve that, but I have plenty of capacity on those boxes.

I did not run CMS.

Antony

> -----Original Message-----
> From: Felix Schumacher [mailto:felix.schumac...@internetallee.de]
> Sent: Tuesday, 20 March 2018 7:41 PM
> To: JMeter Users List <user@jmeter.apache.org>
> Subject: RE: Feedback and question re Java memory/GC settings for Jmeter
> load generator CPU issue
>
> On 19 March 2018 at 22:53:19 MEZ, Antony Bowesman
> <antony.bowes...@williamhill.com.au> wrote:
> >Mmm, I saw the images had gone too :(
> >
> >I have set up to do a GC log next time I run the test and will dig into
> >it. I've been using the default Java 8 GC, which is Parallel, so I am
> >going to use CMS to see if that makes a difference. I gather it is
> >supposed to favour shorter pauses, so I'll see what happens and post
> >back results.
>
> As you have 12GB of heap, you could try to use G1, too.
> On the other hand, this seems to be quite a lot of heap. What are you
> doing in your test plan?
>
> And as a nice plus, you could tell us the versions used for the JVM,
> JMeter and OS.
>
> Regards,
> Felix
>
> >Cheers
> >Antony
> >
> >> -----Original Message-----
> >> From: Kirk Pepperdine [mailto:kirk.pepperd...@gmail.com]
> >> Sent: Monday, 19 March 2018 4:39 PM
> >> To: JMeter Users List <user@jmeter.apache.org>
> >> Subject: Re: Feedback and question re Java memory/GC settings for
> >> Jmeter load generator CPU issue
> >>
> >> Hi,
> >>
> >> The images seem to have been filtered out of my email, at least.
> >>
> >> Can you collect and post a GC log? Most likely young gen is too small,
> >> but a GC log would confirm this.
> >>
> >> Kind regards,
> >> Kirk
> >>
> >> > On Mar 19, 2018, at 3:37 AM, Antony Bowesman
> >> > <antony.bowes...@williamhill.com.au> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I just thought I'd send in some info about a problem I've been
> >> > looking at recently, with a question about the best GC settings.
> >> >
> >> > I have a number of JMeter load generators (LG) and I have been
> >> > seeing CPU spikes on the boxes during a test. I am monitoring CPU
> >> > and memory from within a Java sampler, so have the following charts:
> >> >
> >> > 1. The first chart shows the request/sec rate (RHS axis) in blue
> >> > and the CPU max % in yellow (sampled every 5s). The blue vertical
> >> > lines indicate a drop in request rate (as recorded by the request
> >> > finishing and therefore being logged) and a corresponding spike to
> >> > 'catch up'. I note that the spikes always correspond to a spike in
> >> > CPU.
> >> > 2. The second shows the spikes appearing to correlate with the
> >> > increase in committed memory.
> >> > 3. The third is after the JVM setting change. Note the behaviour
> >> > still occurs in CPU/request rate, with a CPU spike in the green
> >> > circle, but not until the later stages.
> >> > (NB: CPU scale is CPU% * 200 to fit on the graph)
> >> >
> >> > This behaviour is the same across all the LGs and happens regardless
> >> > of the way the target hosts are reached across the network, so I
> >> > believe it's a JVM/host issue.
> >> >
> >> > The original memory settings were
> >> >
> >> > -Xms1G -Xmx12G -XX:NewSize=1024m -XX:MaxNewSize=4096m
> >> >
> >> > But I changed to -Xms12G so that all memory is allocated initially,
> >> > and that makes a huge change to the behaviour.
> >> >
> >> > However, I still see the CPU spike. Has anyone got some optimum GC
> >> > settings they have used that can avoid this?
> >> >
> >> > Thanks
> >> > Antony
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@jmeter.apache.org
> For additional commands, e-mail: user-h...@jmeter.apache.org
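
P.S. For anyone wanting to try the same comparison, here is a rough sketch of how the runs can be launched. This assumes Java 8 and the stock bin/jmeter start script, which picks up extra JVM options from the JVM_ARGS environment variable; the heap sizes mirror the settings above, the pause goal of 200 ms is my own choice (not something from the thread), and plan.jmx is a placeholder for your test plan:

```shell
# Parallel GC baseline (the Java 8 default), heap fully pre-allocated
# so committed memory does not grow mid-test:
JVM_ARGS="-Xms12G -Xmx12G -XX:+UseParallelGC" ./jmeter -n -t plan.jmx

# G1 run with a pause-time goal, writing a GC log for the analyser
# (Java 8 logging flags; Java 9+ replaced these with -Xlog:gc*):
JVM_ARGS="-Xms12G -Xmx12G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:gc.log" ./jmeter -n -t plan.jmx
```

Note that with G1 you would normally drop -XX:NewSize/-XX:MaxNewSize: G1 resizes the young generation dynamically to meet the pause goal, and pinning it can work against that.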