The first thing is to stop using CMS and switch to G1GC.
We’ve been using these settings on over a hundred machines
in prod for nearly four years.
SOLR_HEAP=8g
# Use G1 GC -- wunder 2017-01-23
# Settings from https://wiki.apache.org/solr/ShawnHeisey
GC_TUNE=" \
-XX:+UseG1GC \
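The GC_TUNE block is cut off above; for context, a rough sketch of what a fuller G1 block in solr.in.sh looks like follows. The flags beyond -XX:+UseG1GC are assumptions based on commonly shared Solr G1 tunings, not verbatim from the wiki page cited above:

```shell
# solr.in.sh -- sketch of a fuller G1 block. Flags and values below are
# illustrative assumptions from common Solr G1 tunings, not the exact
# settings from the wiki page referenced above.
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=200 \
"
```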
Hi Matthew, Erick!
Thank you very much for the feedback; I'll try to convince them to
reduce the heap size.
Current GC settings:
-XX:+CMSParallelRemarkEnabled
-XX:+CMSScavengeBeforeRemark
-XX:+ParallelRefProcEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
12G is not that huge, it’s surprising that you’re seeing this problem.
However, there are a couple of things to look at:
1> If you’re saying that you have 16G total physical memory and are allocating
12G to Solr, that’s an anti-pattern. Your index is so small that it should
easily be cached in OS memory as it is accessed. An oversized heap is a
known source of exactly this kind of problem. See:
https://cwiki.apache.org/confluence/display/SOLR/SolrPerformanceProblems#SolrPerformanceProblems-HowmuchheapspacedoIneed?
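To make the sizing advice concrete: with 16G of physical RAM and a ~200MB index, most memory should be left to the OS page cache rather than the JVM heap. A minimal solr.in.sh sketch, with an illustrative heap value rather than a universal recommendation:

```shell
# solr.in.sh -- sketch assuming 16G physical RAM and a ~200MB index.
# A small heap leaves the bulk of RAM for the OS page cache, which is
# what actually caches the index files.
SOLR_HEAP=2g   # illustrative value; tune against real GC logs
```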
On Tue, Oct 6, 2020 at 9:44 AM
Hi Matthew,
Thank you for the answer. I cannot reproduce the setup locally, so I'll
try to convince them to reduce Xmx; I guess they won't agree to 1GB,
but certainly to something less than 12G.
We also need a proper dev setup, because for now we can only test on
prod or stage, which is difficult.
You have a 12G heap for a 200MB index? Can you just try changing Xmx
to, like, 1g?
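One low-effort way to try this on a single node is the bin/solr -m option, which sets both -Xms and -Xmx for that start:

```shell
# Restart a standalone Solr node with a 1g heap for the test run.
# bin/solr's -m option sets both -Xms and -Xmx.
bin/solr stop -all
bin/solr start -m 1g
```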
On Tue, Oct 6, 2020 at 7:43 AM Karol Grzyb wrote:
Hi,
I'm involved in the investigation of an issue with huge GC overhead
that happens during performance tests on Solr nodes. The Solr version is
6.1. The last tests were done on the staging env, and we ran into
problems at <100 requests/second.
The size of the index itself is ~200MB (~50K docs).
Index has