Hi,

Seriously, try making that monster document cache smaller.  Sure, there will be 
more evictions and more cache misses, but at least you will be less likely to 
get OOMs :).


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch


----- Original Message ----
> From: gurudev <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Thursday, May 22, 2008 7:27:44 AM
> Subject: RE: SOLR OOM (out of memory) problem
> 
> 
> Hi Rong,
> 
> My cache hit ratios are:
> 
> filterCache: 0.96
> documentCache: 0.51
> queryResultCache: 0.58
> 
> Thanx
> Pravesh
> 
> 
> Yongjun Rong-2 wrote:
> > 
> > I had the same problem a few weeks ago. You can try these:
> > 1. Check the hit ratio for each cache via solr/admin/stats.jsp. If
> > the hit ratio is very low, just disable that cache. It will save you
> > some memory.
> > 2. Setting -Xms and -Xmx to the same size will help improve GC performance.
> > 3. Check which GC you are using. The default is the parallel collector.
> > You can try the concurrent GC, which helps a lot.
> > 4. These are my Sun HotSpot JVM startup options: -XX:+UseConcMarkSweepGC
> > -XX:CMSInitiatingOccupancyFraction=50 -XX:-UseGCOverheadLimit
> > The above cannot solve the OOM forever, but they help a lot.
> > Hope this helps.
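[Editor's note: a full startup line combining the tips above might look like the sketch below. The 6 GB heap matches the figure mentioned later in the thread, but the start.jar path and the exact heap values are illustrative assumptions, not values given by the poster.]

```shell
# Illustrative Solr slave launch: equal -Xms/-Xmx (tip 2) plus the
# CMS collector flags from tip 4. Heap size and start.jar path are
# assumptions for the sketch, not values from this thread.
java -Xms6g -Xmx6g \
     -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=50 \
     -XX:-UseGCOverheadLimit \
     -jar start.jar
```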
> > 
> > -----Original Message-----
> > From: Mike Klaas [mailto:[EMAIL PROTECTED] 
> > Sent: Wednesday, May 21, 2008 2:23 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: SOLR OOM (out of memory) problem
> > 
> > 
> > On 21-May-08, at 4:46 AM, gurudev wrote:
> > 
> >>
> >> Just to add more:
> >>
> >> The JVM heap allocated is 6GB with initial heap size as 2GB. We use 
> >> quadro(which is 8 cpus) on linux servers for SOLR slaves.
> >> We use facet searches, sorting.
> >> document cache is set to 7 million (which is total documents in index)
> >> filtercache 10000
> > 
> > You definitely don't have enough memory to keep 7 million documents,
> > fully realized in java-object form, in memory.
> > 
> > Nor would you want to.  The document cache should aim to keep the most
> > frequently occurring documents in memory (in the thousands, perhaps tens
> > of thousands).  By devoting more memory to the OS disk cache, more of
> > the 12GB index can be cached by the OS, speeding up all document
> > retrieval.
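[Editor's note: sizing advice like Mike's would translate into a documentCache entry in solrconfig.xml along the following lines. The numbers are illustrative (tens of thousands rather than millions), not values from this thread.]

```xml
<!-- solrconfig.xml: a documentCache sized in the tens of thousands,
     leaving the remaining RAM to the OS disk cache.
     Sizes here are illustrative, not from the thread. -->
<documentCache class="solr.LRUCache"
               size="20000"
               initialSize="20000"
               autowarmCount="0"/>
```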
> > 
> > -Mike
> > 
> > 
> 
> -- 
> View this message in context: 
> http://www.nabble.com/SOLR-OOM-%28out-of-memory%29-problem-tp17364146p17402234.html
> Sent from the Solr - User mailing list archive at Nabble.com.
