On 10/1/07, Robert Purdy <[EMAIL PROTECTED]> wrote:
> Hi there, I am having some major CPU performance problems with heavy user
> load with solr 1.2. I currently have approximately 4 million documents in
> the index and I am doing some pretty heavy faceting on multi-valued columns.
> I know that faceting is expensive on multi-valued columns, but the CPU
> seems to max out (400%) under apache bench with just 5 identical concurrent
> requests

One can always max out CPU (unless one is IO bound) with concurrent
requests greater than the number of CPUs on the system.  This isn't a
problem by itself and would exist even if Solr were an order of
magnitude slower or faster.

You should be looking at things like the peak throughput (queries per
sec) you need to support and the latency of the requests (look at the
90th percentile, or whatever).
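
For example, apache bench can report that distribution directly; a
minimal sketch, where the field name "category" is just a stand-in
for one of your facet fields:

  ab -n 1000 -c 4 "http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=category&rows=0"

The "Percentage of the requests served within a certain time" table at
the end of the output gives you the 90th percentile.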


> and I have the potential for a lot more concurrent requests than
> that with my large number of users that hit our site per day and I am
> wondering if there are any workarounds. Currently I am running the out of
> the box solr solution (Example jetty application with my own schema.xml and
> solrconfig.xml) on a dual Intel Core Duo 64-bit box with 8 gigs of ram
> allocated to the start.jar process dedicated to solr with no slaves.
>
> I have set up some aggressive caching in the solrconfig.xml for the
> filtercache (class="solr.LRUCache" size="3000000" initialSize="2000000") and
> have the HashDocSet set to 10000 to help with faceting, but still I am
> getting some pretty poor performance. I have also tried autowarming the
> facets by performing a query that hits all my multivalued facets with no
> facet limits across all the documents in the index. This does seem to reduce
> my query times by a lot: the filtercache grows to about 2.1 million
> lookups, and the prewarm query itself finishes in about 70 secs.

OK, that's long.  So focus on the latency of a single request instead
of jumping straight to load testing.
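
For example, hit the request once with curl and look at the QTime
value in the response header, which is the server-side time in
milliseconds (again, "category" is a placeholder):

  curl "http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=category&rows=0"

Running it against a cold searcher and then against a warmed one shows
how much of the cost is cache population.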

2.1 million is a lot - what's the field with the largest number of
unique values that you are faceting on?
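
If you're not sure, and assuming you have the Luke request handler
registered in your solrconfig.xml, something like this dumps per-field
index statistics, including distinct term counts:

  curl "http://localhost:8983/solr/admin/luke?fl=category"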

> However I have noticed an
> issue with this because each time I do an optimize or a commit after
> prewarming the facets the cache gets cleared, according to the stats on the
> admin page, but the RSize does not shrink for the process, and the queries
> get slow again, so I prewarm the facets again and the memory usage keeps
> growing as if the cache is not being recycled

The old searcher and cache won't be discarded until all requests using
it have completed.
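
Also look at how much warming work each new searcher does. With a
filterCache that size, a large autowarmCount means every commit
regenerates a huge number of filters; a more conservative sketch (the
numbers here are illustrative, not a recommendation):

  <filterCache class="solr.LRUCache" size="3000000"
               initialSize="2000000" autowarmCount="1000"/>

Capping <maxWarmingSearchers> in solrconfig.xml also keeps overlapping
commits from stacking up multiple warming searchers at once, which
would match the memory growth you're describing.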

> and as a result the prewarm
> query starts to get slower and slower each time this occurs (after about
> 5 rounds of prewarming and then committing, the query takes about 30
> mins... ugh) and I almost run out of memory.
>
> Any thoughts on how to help improve this and fix the memory issue?

You could try the minDf param to reduce the number of facets stored in
the cache and reduce memory consumption.
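
The full parameter name is facet.enum.cache.minDf; terms with a
document frequency below the threshold are counted directly instead of
through the filterCache. Something like this (100 is just an
illustrative cutoff, and it can also be set per-field with the
f.<field>. prefix):

  ...&facet=true&facet.field=category&facet.enum.cache.minDf=100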

-Yonik
