Replication shouldn’t have consumed that much heap. It’s mostly I/O, just a 
write-through. If replication really does consume huge amounts of heap we need 
to look at that more closely. Personally I suspect/hope it’s coincidental, but 
that’s only a guess. You can attach jconsole to the running process and monitor 
heap usage in real time; jconsole ships with the JDK, so it should already be 
available. It has a nifty “gc now” button you can use to see whether the heap 
you’re accumulating is just garbage or is genuinely still referenced.
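
If you’d rather watch from the command line, the stock JDK tools can do the 
same thing. A rough sketch, assuming the tools are on your path and <pid> is 
the Solr process id:

   jconsole <pid>              # attach the GUI to the running JVM
   jstat -gcutil <pid> 5000    # heap/GC utilization snapshot every 5 seconds
   jcmd <pid> GC.run           # force a full GC, same idea as the “gc now” button

If used heap stays high after the forced GC it’s really referenced; if it 
drops back down, it was just garbage that hadn’t been collected yet.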

And if this really is related to replication and that much heap is actually 
used, we need to figure out why. Shawn’s observation that very little heap is 
recovered is worrying.
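
If it does turn out that the heap is genuinely referenced after a replication, 
a heap dump taken right after one of those long collections would show what’s 
holding it. Something along these lines, again assuming the standard JDK tools 
(the dump file name is just an example):

   jmap -dump:live,format=b,file=solr-heap.hprof <pid>

Then open the .hprof in Eclipse MAT or VisualVM and look at the dominator tree.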

Best,
Erick

> On Dec 6, 2019, at 7:37 AM, Dave <hastings.recurs...@gmail.com> wrote:
> 
> Actually, at about that time the replication finished and added about 20-30 GB 
> to the index from the master.  My current setup goes:
> Indexing master -> indexer slave/production master (only replicated on 
> command) -> three search slaves (replicate every 15 minutes)
> 
> We added about 2.3M docs, then I replicated it to the production master, and 
> since there was a change it replicated out to the slave node the GC log came 
> from.
> 
> I’ll set one of the slaves to 31/31, force all load to that one, and see how 
> she does. Thanks!
> 
> 
>> On Dec 6, 2019, at 1:02 AM, Shawn Heisey <apa...@elyograg.org> wrote:
>> 
>> On 12/5/2019 12:57 PM, David Hastings wrote:
>>> That probably isn't enough data, so if you're interested:
>>> https://gofile.io/?c=rZQ2y4
>> 
>> The previous one was less than 4 minutes, so it doesn't reveal anything 
>> useful.
>> 
>> This one is a little bit less than two hours.  That's more useful, but still 
>> pretty short.
>> 
>> Here's the "heap after GC" graph from the larger file:
>> 
>> https://www.dropbox.com/s/q9hs8fl0gfkfqi1/david.hastings.gc.graph.2019.12.png?dl=0
>> 
>> At around 14:15, the heap usage was rather high. It got up over 25 GB. There 
>> were some very long GCs right at that time, which probably means they were 
>> full GCs.  And they didn't free up any significant amount of memory.  So I'm 
>> betting that sometimes you actually *do* need a big chunk of that 60 GB of 
>> heap.  You might try reducing it to 31g instead of 60000m.  Java's memory 
>> usage is a lot more efficient if the max heap size is less than 32 GB.
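>> 
>> If you do try that, in a stock install the heap is normally set in 
>> bin/solr.in.sh (assuming the standard start scripts; the value below is 
>> just the 31g suggested above):
>> 
>>   SOLR_HEAP="31g"       # or SOLR_JAVA_MEM="-Xms31g -Xmx31g"
>> 
>> Below roughly 32 GB the JVM can use compressed object pointers, which is 
>> where that efficiency comes from.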
>> 
>> I can't give you any information about what happened at that time which 
>> required so much heap.  You could see if you have logfiles that cover that 
>> timeframe.
>> 
>> Thanks,
>> Shawn
