Thanks for the recommendations, Shawn. That is along the lines I was thinking as well. I am also reviewing the application.

Regarding the note on the cache being invalidated every two minutes due to soft commits: I wonder how it could go OOM in just two minutes. Or is it more likely that a thread holding the searcher open for a long-running query is what's causing the OOM? I have been trying to reproduce it, but no luck so far.

Here is the filterCache config:

<filterCache class="solr.FastLRUCache" size="5000" initialSize="5000" autowarmCount="1000"/>
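For scale, here is a rough back-of-the-envelope sketch of this filterCache's worst-case heap footprint. It assumes a 10 million document index (substitute your actual maxDoc) and that entries are full bitsets:

```python
# Rough worst-case heap estimate for the filterCache config above.
# A non-sparse filterCache entry is a bitset of maxDoc bits,
# i.e. maxDoc / 8 bytes per entry.
max_doc = 10_000_000                     # ASSUMED index size; use your own
bytes_per_entry = max_doc // 8           # 1,250,000 bytes (~1.2 MB per entry)
cache_size = 5000                        # size= from the config above
worst_case = bytes_per_entry * cache_size
print(f"{worst_case / 2**30:.1f} GiB")   # ~5.8 GiB if the cache fills up
```

And that is for a single full cache; during autowarming, the old searcher's cache is still live alongside the new one, so the peak can be higher.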

And the queryResultCache config:

<queryResultCache class="solr.LRUCache" size="20000" initialSize="20000" autowarmCount="5000"/>
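A similar back-of-the-envelope sketch for the queryResultCache; the window size and per-key overhead below are assumptions, not values from the config:

```python
# Rough heap estimate for the queryResultCache config above.
# Each entry stores an ordered list of internal doc IDs for one
# query window, keyed by the query itself.
window_size = 50         # docs cached per query (ASSUMED queryResultWindowSize)
bytes_per_doc_id = 4     # internal Lucene doc IDs are ints
key_overhead = 1024      # rough guess per cached query key object
cache_size = 20_000      # size= from the config above
total = cache_size * (window_size * bytes_per_doc_id + key_overhead)
print(f"{total / 2**20:.0f} MiB")  # small next to the filterCache
```

So even a full queryResultCache at these settings is on the order of tens of MiB; the filterCache is the one worth scrutinizing for OOM.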

On 3/18/16 7:31 AM, Shawn Heisey wrote:
On 3/18/2016 8:22 AM, Rallavagu wrote:
So, each soft commit would create a new searcher that would invalidate
the old cache?

Here is the configuration for Document Cache

<documentCache class="solr.LRUCache" size="100000"
initialSize="100000" autowarmCount="0"/>

<enableLazyFieldLoading>true</enableLazyFieldLoading>

In an earlier message, you indicated you're running into OOM.  I think
we can see why with this cache definition.

There are exactly two ways to deal with OOM.  One is to increase the
heap size.  The other is to reduce the amount of memory that the program
requires by changing something -- that might be the code, the config, or
how you're using it.

Start by reducing that cache size to 4096 or 1024.

https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap

If you've also got a very large filterCache, reduce that size too.  The
filterCache typically eats up a LOT of memory, because each entry in the
cache is very large.

Thanks,
Shawn
