Hi,

I have a cluster whose nodes are configured with an 18GB heap. We've noticed a 
degradation in performance recently, after increasing the volume of data 
we're indexing.

I think the issue is field data cache evictions. Some nodes are doing lots of 
them while others aren't doing any, which is explained by our routing strategy 
resulting in a non-uniform document distribution across the cluster. We may be 
able to improve that eventually, but in the meantime I'm trying to understand 
why the nodes are evicting cached data at all.
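
For reference, I'm reading the eviction counts from the fielddata section of the 
nodes stats API, with something like the following (host adjusted for our 
cluster), which reports a memory_size and an evictions counter per node:

curl -s 'localhost:9200/_nodes/stats/indices/fielddata?human&pretty'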

The metrics show that the field data cache is only ~1.5GB in size, yet we 
have this in our elasticsearch.yml:

indices.fielddata.cache.size: 10gb
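
For a quick per-node view of the live cache size there is also _cat/fielddata 
(assuming our version is recent enough to have the cat APIs):

curl -s 'localhost:9200/_cat/fielddata?v'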

Why would a node evict cache entries when it should still have plenty of 
room to store more? Are we missing another setting? Is there a way to tell 
what fielddata cache size limit a node is actually running with (in case it 
did not pick up the configuration setting for some reason)?

Thanks,
Philippe
