Thanks, you guys, this has been educational. I uploaded everything up to now;
the server was restarted after adding the extra memory, so

https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMTkvMTIvNi8tLXNvbHJfZ2MubG9nLjAuY3VycmVudC0tMTQtMjEtMTA=&channel=WEB

is what I'm looking at. Tuning the JVM is new to me, so I'm just going by
what I've researched and what this site is saying.
From what I can tell:
  the peak looks like 31 GB would be perfect; I'll implement that today
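Concretely, that change should just be the heap setting in solr.in.sh; this is a
sketch assuming a stock Solr install (the file location varies by platform):

```shell
# solr.in.sh -- location varies (e.g. /etc/default/solr.in.sh on Linux installs)
# Pin the heap at 31g, under the 32 GB compressed-oops threshold.
# SOLR_HEAP sets both -Xms and -Xmx to the same value.
SOLR_HEAP="31g"
```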
  throughput seems good, assuming gceasy's recommendation of above 95% is
the target, and I'm at 99.6%
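For anyone sanity-checking that number, throughput is just the fraction of wall
time not spent in GC pauses; here it is with made-up figures in the same
ballpark as my report (a two-hour window with ~28.8 s of total pause time):

```shell
# GC throughput = (wall time - total GC pause time) / wall time
# The numbers below are illustrative, not taken from the actual log.
total_s=7200      # 2-hour observation window
gc_pause_s=28.8   # total pause time across all collections
awk -v t="$total_s" -v g="$gc_pause_s" \
  'BEGIN { printf "throughput: %.1f%%\n", (t - g) / t * 100 }'
# prints: throughput: 99.6%
```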
  latency looks like it's as good as I really care to get; who really cares
about 200 ms
  as far as heap after a GC, it looks like it recovered well, or am I
missing something? The red spikes of a full GC hit around 28 GB, and right
after it's down to 14 GB
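One thing I found while researching: with unified JVM GC logging (Java 9+) you
can read that recovery straight off the full-GC lines, which carry a
before->after heap figure. A rough sketch (the sample line below is fabricated;
real lines also have timestamps and vary with your GC flags):

```shell
# GC log lines note heap occupancy as <before>M-><after>M(<committed>M).
# Pull out how much a full GC recovered; the sample line is made up.
line='GC(812) Pause Full (Allocation Failure) 28672M->14336M(31744M) 9123.456ms'
echo "$line" | awk '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /M->.*M\(/) {            # the before->after(committed) field
      split($i, a, "M->|M\\(")        # a[1]=before, a[2]=after (in MB)
      print a[1] - a[2] " MB recovered"
    }
}'
# prints: 14336 MB recovered
```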

I really appreciate this input; it's educational/helpful.
-Dave

On Fri, Dec 6, 2019 at 7:48 AM Erick Erickson <erickerick...@gmail.com>
wrote:

> A replication shouldn’t have consumed that much heap. It’s mostly I/O,
> just a write through. If replication really consumes huge amounts of heap
> we need to look at that more closely. Personally I suspect/hope it’s
> coincidental, but that’s only a guess. You can attach jconsole to the
> running process and monitor heap usage in real-time, jconsole is part of
> the JDK so should be relatively easy to install. It has a nifty “gc now”
> button that you can use to see if the heap you’re accumulating is just
> garbage or really accumulates…
>
> And if this really is related to replication and that much heap is
> actually used, we need to figure out why. Shawn’s observation that there is
> very little heap recovered is worrying.
>
> Best,
> Erick
>
> > On Dec 6, 2019, at 7:37 AM, Dave <hastings.recurs...@gmail.com> wrote:
> >
> > Actually at about that time the replication finished and added about
> 20-30gb to the index from the master.  My current setup goes
> > Indexing master -> indexer slave/production master (only replicated on
> command) -> three search slaves (replicate every 15 minutes)
> >
> > We added about 2.3m docs, then I replicated it to the production master,
> and since there was a change it replicated out to the slave node the GC
> log came from.
> >
> > I’ll set one of the slaves to 31/31 and force all load to that one and
> see how she does. Thanks!
> >
> >
> >> On Dec 6, 2019, at 1:02 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> >>
> >> On 12/5/2019 12:57 PM, David Hastings wrote:
> >>> That probably isn't enough data, so if you're interested:
> >>> https://gofile.io/?c=rZQ2y4
> >>
> >> The previous one was less than 4 minutes, so it doesn't reveal anything
> useful.
> >>
> >> This one is a little bit less than two hours.  That's more useful, but
> still pretty short.
> >>
> >> Here's the "heap after GC" graph from the larger file:
> >>
> >>
> https://www.dropbox.com/s/q9hs8fl0gfkfqi1/david.hastings.gc.graph.2019.12.png?dl=0
> >>
> >> At around 14:15, the heap usage was rather high. It got up over 25GB.
> There were some very long GCs right at that time, which probably means they
> were full GCs.  And they didn't free up any significant amount of memory.
> So I'm betting that sometimes you actually *do* need a big chunk of that
> 60GB of heap.  You might try reducing it to 31g instead of 60000m.  Java's
> memory usage is a lot more efficient if the max heap size is less than 32
> GB.
> >>
> >> I can't give you any information about what happened at that time which
> required so much heap.  You could see if you have logfiles that cover that
> timeframe.
> >>
> >> Thanks,
> >> Shawn
>
>
