On 10/14/2019 6:18 AM, Vassil Velichkov (Sensika) wrote:
> We have 1 x Replica with 1 x Solr Core per JVM and each JVM runs in a
> separate VMware VM. We have 32 x JVMs/VMs in total, containing between
> 50M and 180M documents per replica/core/JVM.

With 180 million documents, each filterCache entry will be 22.5 megabytes in size. They will ALL be this size.
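
To put numbers on that: a cached filter is a bitmap with one bit per document in the core, so its size tracks maxDoc rather than the number of matching documents. A minimal sketch of the arithmetic (the class and variable names are mine, the figures are from your message):

    // Back-of-the-envelope: size of one filterCache entry, which is
    // a bitset holding one bit for every document in the core.
    public class FilterCacheEntrySize {
        public static void main(String[] args) {
            long maxDoc = 180_000_000L;   // documents in one core
            long bytes = maxDoc / 8;      // one bit per document
            System.out.printf("~%.1f MB per entry%n", bytes / 1_000_000.0);
            // prints: ~22.5 MB per entry
        }
    }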

> In our case most filterCache entities (maxDoc/8 + overhead) are
> typically more than 16MB, which is more than 50% of the maximum
> -XX:G1HeapRegionSize setting (which is 32MB). That's why I am so
> interested in Java 13 and ZGC, because ZGC does not have this weird
> limitation and collects even _large_ garbage pieces :-). We have
> almost no documentCache or queryCache entities.
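
For background on that limit: G1 treats any allocation larger than half a region as "humongous", and regions are capped at 32MB, so anything over 16MB always takes the humongous path. Here is a small sketch that asks the running JVM for its effective region size through the standard HotSpotDiagnosticMXBean (the maxDoc value is only an example, and the option is only meaningful when G1 is the active collector):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    // Compare one filterCache entry against G1's humongous threshold
    // (half of -XX:G1HeapRegionSize) on the currently running JVM.
    public class HumongousCheck {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hs = ManagementFactory
                    .getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            long regionBytes = Long.parseLong(
                    hs.getVMOption("G1HeapRegionSize").getValue());
            long entryBytes = 120_000_000L / 8;   // maxDoc / 8
            System.out.println("humongous threshold: " + regionBytes / 2);
            System.out.println("entry is humongous: "
                    + (entryBytes > regionBytes / 2));
        }
    }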

I am not aware of any Solr testing with the new garbage collector. I'm interested in knowing whether it does a better job than CMS and G1, but I have not yet had an opportunity to try it.
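
If you do experiment with it, note that ZGC on Java 13 still has to be enabled explicitly with -XX:+UnlockExperimentalVMOptions -XX:+UseZGC. One way to confirm which collector actually got selected is to list the collector MXBeans (the reported names vary by collector and JDK version):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Print the names of the active garbage collector's MXBeans,
    // e.g. to verify that the ZGC flags took effect.
    public class WhichGC {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc :
                    ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName());
            }
        }
    }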

Have you tried letting Solr use its default garbage collection settings instead of G1? Have you tried Java 11? Java 9 is one of the releases without long term support, so as Erick says, it is not recommended.

> By some time tonight all shards will be rebalanced (we've added 6
> more) and will contain up to 100-120M documents (14.31MB + overhead
> should be < 16MB), so hopefully this will help us to alleviate the
> OOM crashes.

It doesn't sound to me like your filterCache can cause OOM by itself. The total size of 256 filterCache entries at 22.5 megabytes each is about 5.76GB, less than 6GB, and I would expect the other Solr caches to be smaller. If you are hitting OOMs, then some other aspect of your setup is the reason that's happening. I would not normally expect a single core with 180 million documents to need more than about 16GB of heap, and 31GB should definitely be enough. Hitting OOM with the heap sizes you have described is very strange.
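
The worst case works out like this (256 matches the entry cap mentioned above; the sketch assumes every entry is a full maxDoc/8 bitset):

    // Worst-case filterCache heap footprint at the configured entry cap.
    public class FilterCacheFootprint {
        public static void main(String[] args) {
            long entryBytes = 180_000_000L / 8;   // ~22.5 MB per entry
            int maxEntries = 256;                 // filterCache size setting
            System.out.printf("worst case: %.2f GB%n",
                    entryBytes * (double) maxEntries / 1e9);
            // prints: worst case: 5.76 GB
        }
    }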

Perhaps the root cause of your OOMs is not heap memory, but some other system resource. Java throws OutOfMemoryError for resource exhaustion beyond the heap; "java.lang.OutOfMemoryError: unable to create new native thread", for example, points at an OS process/thread limit rather than heap size. Do you have log entries showing the stack trace on the OOM?

Thanks,
Shawn
