Yes, set Xmx and Xms the same.

We run an 8 GB heap for all our clusters. Unless you are doing some really
memory-intensive stuff like faceting, 8 GB should be fine.
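
For example, on Windows the heap is set via SOLR_JAVA_MEM in bin\solr.in.cmd.
A minimal sketch (the 8g value is just what we run, not a universal
recommendation):

    REM Fixed-size heap: the JVM never has to grow or shrink it at runtime
    set SOLR_JAVA_MEM=-Xms8g -Xmx8g

On Linux the same setting lives in bin/solr.in.sh as
SOLR_JAVA_MEM="-Xms8g -Xmx8g".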

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jun 5, 2019, at 1:05 PM, Gus Heck <gus.h...@gmail.com> wrote:
> 
> Probably not a solution, but something I notice off the bat... generally
> you want Xmx and Xms set to the same value so the JVM doesn't have to
> spend time asking for more and more memory, and to reduce the chance that
> the memory is no longer available by the time Solr needs it.
> 
> On Wed, Jun 5, 2019, 11:39 AM Rahul Goswami <rahul196...@gmail.com> wrote:
> 
>> Hello,
>> I have a SolrCloud setup on Windows Server with the following config:
>> 3 nodes,
>> 24 shards with replication factor 2
>> Each node hosts 16 cores.
>> 
>> Index size is 1.4 TB per node
>> Xms 8 GB, Xmx 24 GB
>> Directory factory used is SimpleFSDirectoryFactory
>> 
>> The cloud is all nice and green for the most part. Only when we start
>> indexing, within a few seconds I start seeing read timeouts and socket
>> write errors, and replica recoveries thereafter. We are indexing in 2
>> parallel threads, in batches of 50 docs per update request. After
>> examining the thread dump, I see segment merges happening. My
>> understanding is that the merges are the cause, and the timeouts and
>> recoveries are the symptoms. Is my understanding correct? If yes, what
>> steps could I take to help the situation? I do see that the difference
>> between "Num Docs" and "Max Docs" is about 20%.
>> 
>> Would appreciate your help.
>> 
>> Thanks,
>> Rahul
>> 
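On the merge question in the quoted mail: if the thread dump shows merges
competing with indexing, one knob to try is the merge scheduler in the
<indexConfig> section of solrconfig.xml. This is a sketch only; the counts
below are illustrative, and the right values depend on whether the index
sits on spinning disks or SSDs:

    <indexConfig>
      <!-- Limit concurrent merge work so it competes less with indexing I/O.
           maxMergeCount: merges that may be queued before updates stall;
           maxThreadCount: merges that may run at once (keep low on spinning
           disks). -->
      <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
        <int name="maxMergeCount">6</int>
        <int name="maxThreadCount">1</int>
      </mergeScheduler>
    </indexConfig>

Long merge pauses on the server side often surface to clients as exactly the
read timeouts and socket write errors described above, so raising the socket
timeout on the indexing client is also worth testing.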
