I think you're on the wrong track here. Shawn is absolutely
right in his statements about when the field cache is invalidated.
That said, the cure to your problem has nothing to do with
opening new searchers. Solr puts values in caches as it needs
them _and keeps them there_. In the
On 10/14/2018 6:32 AM, yasoobhaider wrote:
Memory Analyzer output:
One instance of "org.apache.solr.uninverting.FieldCacheImpl" loaded by
"org.eclipse.jetty.webapp.WebAppClassLoader @ 0x7f60f7b38658" occupies
61,234,712,560 (91.86%) bytes. The memory is accumulated in one instance of
After none of the JVM configuration options helped with GC, as Erick
suggested, I took a heap dump of one of the misbehaving slaves, and analysis
shows that the fieldcache is using a large amount of the total heap.
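For anyone retracing these steps, a heap dump like the one analyzed above can be captured from a running Solr JVM with the JDK's `jmap` tool and then opened in Eclipse Memory Analyzer (MAT), which is the tool whose output is quoted here. The pid and output path below are placeholders, not values from this thread:

```shell
# Capture a binary heap dump of the Solr JVM (pid 12345 is hypothetical).
# "live" triggers a full GC first, so only reachable objects are recorded.
jmap -dump:live,format=b,file=/tmp/solr-heap.hprof 12345
```

Note that dumping an 80G heap will pause the JVM for a while and produce a correspondingly large file, so take the dump on a slave already pulled out of rotation.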
Hi,
1/
As others have already said, my first action would be to understand
why you need so much heap.
The first step is to cap your heap size at 31Gb (or obviously less if
possible), so the JVM can keep using compressed object pointers:
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
Can you
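The 31Gb figure in the linked article comes from compressed ordinary object pointers (oops): because heap objects are 8-byte aligned, the JVM can address the whole heap with 32-bit references as long as it stays under roughly 32 GiB. A quick back-of-the-envelope check:

```python
# With compressed oops, the JVM stores references as 32-bit offsets
# scaled by the default 8-byte object alignment, so the largest
# addressable heap is 2^32 * 8 bytes.
object_alignment = 8                       # bytes (HotSpot default)
max_compressed_heap = (2 ** 32) * object_alignment
print(max_compressed_heap // 2 ** 30)      # 32 (GiB)
```

Above that threshold the JVM falls back to 64-bit references, so a 33Gb heap can actually hold fewer objects than a 31Gb one.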
On 10/11/2018 4:51 AM, yasoobhaider wrote:
Hi Shawn, thanks for the inputs.
I have uploaded the gc logs of one of the slaves here:
https://ufile.io/ecvag (should work till 18th Oct '18)
I uploaded the logs to gceasy as well and it says that the problem is
consecutive full GCs. According to the solution they have mentioned,
increasing the
I have to echo what others have said. An 80G heap is waaay out of the norm,
especially when you consider the size of your indexes and the number of docs.
Understanding why you think you need that much heap should be your top
priority. As has already been suggested, ensuring docValues are set for
Hi,
In addition to what others wrote already, there are a couple of things
that might trigger sudden memory allocation surges that you can't really
account for:
1. Deep paging, especially in a sharded index. Don't allow it and you'll
be much happier.
2. Faceting without docValues
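Point 2 is the one implicated by the FieldCacheImpl numbers earlier in the thread: faceting (or sorting/grouping) on a field that lacks docValues forces Solr to uninvert the field onto the heap. A minimal sketch of enabling docValues in the schema — the field and type names here are made up for illustration, not taken from this cluster:

```xml
<!-- Hypothetical field: faceting on it reads column-oriented docValues
     from disk (and the OS page cache) instead of building an on-heap
     FieldCache via uninversion. -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true"/>
```

Keep in mind that changing docValues on an existing field requires a full reindex before it takes effect.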
We run a big cluster with 8 GB heap on the JVMs. When we used CMS, I gave 2 GB
to
the new generation. Solr queries make a ton of short-lived allocations. You
want all of that
to come from the new gen. I don’t fool around with ratios. I just set the
numbers.
We used these:
-d64
-server
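The archive truncates the flag list here, but the explicit-sizing approach described above (a fixed new generation rather than a ratio, under CMS) would look something like the following sketch. The 8 GB heap and 2 GB new gen are the sizes quoted in the message; the specific flags are illustrative, not the author's actual list:

```shell
-Xms8g -Xmx8g            # fixed 8 GB heap, as described above
-Xmn2g                   # explicit 2 GB new generation (no ratios)
-XX:+UseConcMarkSweepGC  # the CMS collector mentioned in the thread
```

Setting -Xms equal to -Xmx avoids heap resizing pauses, and -Xmn pins the new gen size regardless of any NewRatio default.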
We use 4.3.0. I found that we went into GC hell as you describe with a small
newgen. We use CMS GC as well.
Using NewRatio=2 got us out of that; 3 wasn't enough... heap of 32 gig
only.
I have not gone over 32 gig, as testing showed diminishing returns over 32
gig. I only was brave enough to go to
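For reference, assuming HotSpot's definition, -XX:NewRatio is the old:new generation size ratio, so the new generation gets heap / (NewRatio + 1). On the 32 gig heap mentioned above, the two settings the poster tried work out roughly as:

```python
# -XX:NewRatio = old_gen / new_gen, so new_gen = heap / (NewRatio + 1).
heap_gb = 32
for new_ratio in (2, 3):
    new_gen_gb = heap_gb / (new_ratio + 1)
    print(f"NewRatio={new_ratio}: new gen ~= {new_gen_gb:.1f} GB")
```

So NewRatio=2 gives roughly a 10.7 GB new gen versus 8 GB for NewRatio=3, which matches the observation that 3 "wasn't enough" for Solr's short-lived allocations.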
On 10/3/2018 8:01 AM, yasoobhaider wrote:
Master and slave config:
ram: 120GB
cores: 16
At any point there are between 10-20 slaves in the cluster, each serving ~2k
requests per minute. Each slave houses two collections of approx 10G
(~2.5mil docs) and 2G (10mil docs) when optimized.
I am
Hi
I'm working with a Solr cluster with master-slave architecture.