I had the same problem a few weeks ago. You can try these:
1. Check the hit ratio for each cache via solr/admin/stats.jsp. If a
cache's hit ratio is very low, just disable that cache; it will save you
some memory (a solrconfig.xml sketch follows this list).
2. Setting -Xms and -Xmx to the same size will help improve GC
performance.
3. Check which garbage collector you use. The default is the parallel
collector; switching to the concurrent collector can help a lot.
4. These are my Sun HotSpot JVM startup options (combined into a full
startup line after this list): -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=50 -XX:-UseGCOverheadLimit
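For point 1, a minimal sketch of what disabling a low-hit-ratio cache
can look like in solrconfig.xml (the cache name and sizes here are only
illustrative; check your own stats.jsp output first):

    <!-- If stats.jsp shows a very low hit ratio for a cache,
         comment it out to save heap (sizes are illustrative). -->
    <!--
    <queryResultCache class="solr.LRUCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="256"/>
    -->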
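And points 2-4 combined into one example startup line (the 2g heap and
the start.jar launcher are placeholders; size the heap for your own
index and hardware):

    java -Xms2g -Xmx2g \
         -XX:+UseConcMarkSweepGC \
         -XX:CMSInitiatingOccupancyFraction=50 \
         -XX:-UseGCOverheadLimit \
         -jar start.jar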
These options cannot eliminate OOM errors for good, but they help a lot.
Hope this helps.

-----Original Message-----
From: Mike Klaas [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 21, 2008 2:23 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR OOM (out of memory) problem


On 21-May-08, at 4:46 AM, gurudev wrote:

>
> Just to add more:
>
> The JVM heap allocated is 6GB, with an initial heap size of 2GB. We use
> quadro (8 CPUs) Linux servers for SOLR slaves.
> We use faceted searches and sorting.
> document cache is set to 7 million (which is the total number of
> documents in the index)
> filterCache 10000

You definitely don't have enough memory to keep 7 million documents,
fully realized in java-object form, in memory.

Nor would you want to.  The document cache should aim to keep the most
frequently-occurring documents in memory (in the thousands, perhaps tens
of thousands).  By devoting more memory to the OS disk cache, more of
the 12GB index can be cached by the OS, which speeds up all document
retrieval.
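For example, something like this in solrconfig.xml (16384 is only an
illustrative size in that range; tune it against the hit ratio you see
in stats.jsp):

    <!-- Cache thousands of documents, not all 7 million; leave the
         rest of the RAM to the OS disk cache. -->
    <documentCache class="solr.LRUCache"
                   size="16384"
                   initialSize="16384"/>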

-Mike
