Facet searches cache a filter per unique term for multivalued fields. There are many ways to reduce memory consumption in these scenarios, but they usually require a case-by-case solution.
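[A rough back-of-the-envelope sketch of why per-term filter caching can exhaust the heap, using the index size and filterCache size reported later in this thread. It assumes the worst case where each cached filter is a full bitset of one bit per document; Solr can also store small filters as sparse sorted-int sets, which are much cheaper, so this is an upper bound, not a measurement.]

```python
# Worst-case memory estimate for a filter cache of per-term bitsets.
# Assumption: one bit per document in the index for each cached filter.

def bitset_bytes(num_docs: int) -> int:
    """Bytes for one full-bitset filter over num_docs documents."""
    return num_docs // 8

num_docs = 7_000_000        # index size reported in this thread
filters_cached = 10_000     # filterCache size reported in this thread

per_filter = bitset_bytes(num_docs)   # 875,000 bytes, about 0.83 MiB
total = per_filter * filters_cached   # about 8.75 GB if the cache fills

print(f"per filter: {per_filter / 2**20:.2f} MiB")
print(f"worst-case cache: {total / 2**30:.2f} GiB")
```

On a 6 GB heap, a filter cache that can grow toward that worst case leaves little room for anything else, which is consistent with the OOMs described below.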

-Mike

On 21-May-08, at 12:08 PM, Lance Norskog wrote:

We have had major OOM problems doing facet searches. Twenty concurrent searches used up maybe 5 GB, and a single faceting request would blow up at 12 GB. More important, when a facet request throws an OOM, it seems like the memory is not released. When a normal search throws an OOM, the memory is released and Solr continues to run. We had to get more RAM in order to do facet queries.
This is with Solr 1.3.

-----Original Message-----
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 21, 2008 11:23 AM
To: solr-user@lucene.apache.org
Subject: Re: SOLR OOM (out of memory) problem


On 21-May-08, at 4:46 AM, gurudev wrote:


Just to add more:

The JVM heap allocated is 6GB with an initial heap size of 2GB. We use
quad-core (8 CPUs) Linux servers for the SOLR slaves.
We use facet searches and sorting.
The document cache is set to 7 million (which is the total number of documents in the index),
the filterCache to 10000.

You definitely don't have enough memory to keep 7 million documents, fully
realized in java-object form, in memory.

Nor would you want to.  The document cache should aim to keep the most
frequently-occurring documents in memory (in the thousands, perhaps tens of thousands). By devoting more memory to the OS disk cache, more of the 12GB index can be cached by the OS, which speeds up all document retrieval.
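[A sketch of what cache settings along these lines might look like in solrconfig.xml, using the cache syntax of the Solr 1.x era. The sizes here are illustrative starting points for this scenario, not tuned or recommended values.]

```xml
<!-- documentCache sized for the hot working set (tens of thousands),
     not the full 7M-document index -->
<documentCache class="solr.LRUCache"
               size="20000"
               initialSize="20000"
               autowarmCount="0"/>

<!-- filterCache kept small: each entry can cost up to maxDoc/8 bytes
     when stored as a full bitset -->
<filterCache class="solr.LRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```

The heap freed by shrinking the document cache is then available to the OS page cache, which serves index reads for every query rather than only cache hits.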

-Mike
