As Shalin points out, these cache sizes are waaaay out of the norm.

For filterCache, each entry is roughly maxDoc/8 bytes. You haven't told
us how many docs are on the node, but you can find maxDoc on
the admin page. What I _have_ seen in similar situations is that
if you ever stop indexing you'll get OOM errors.
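To make that concrete, here's a back-of-the-envelope sketch. The maxDoc value below is a made-up example; substitute the real number from your admin page:

```python
# Rough filterCache memory math. max_doc below is hypothetical --
# read your real value off the Solr admin page.
max_doc = 20_000_000            # hypothetical number of docs on the node
entry_bytes = max_doc // 8      # one bit per document in the bitset
cache_entries = 512_000         # the configured filterCache size
worst_case_bytes = entry_bytes * cache_entries

print(f"{entry_bytes:,} bytes per entry")         # 2,500,000 (~2.5 MB)
print(f"{worst_case_bytes / 2**40:.1f} TiB worst case if the cache fills")
```

Even at a fraction of the configured size, that adds up fast.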

Here's the scenario:
Every time you commit, the cache is thrown away and up to
32,000 autowarm queries are fired. So entries 32,001 -> 512,000
are given back to the OS. You may only get another 10,000 filter
queries before the next commit, so in practice the cache is capped.
But if you ever stop indexing (and thus committing), you'll keep adding
to the cache and blow memory.
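A toy simulation of that dynamic, using the numbers above (the new-entries-per-interval figure is the rough 10,000 mentioned; your real rate will vary):

```python
# Sketch of cache growth: while commits happen, each commit resets
# the cache to (at most) the autowarmed entries, then new filter
# queries accumulate until the next commit. Once commits stop, the
# cache just grows toward its configured max.
AUTOWARM = 32_000        # configured autowarmCount
MAX_SIZE = 512_000       # configured filterCache size
NEW_PER_INTERVAL = 10_000  # rough new filter queries per commit interval

size = 0
for _ in range(5):                   # indexing: a commit each interval
    size = min(AUTOWARM, size)       # commit keeps only autowarmed entries
    size = min(MAX_SIZE, size + NEW_PER_INTERVAL)
print(size)                          # stays capped: 42,000

for _ in range(60):                  # indexing stops: no more commits
    size = min(MAX_SIZE, size + NEW_PER_INTERVAL)
print(size)                          # climbs to the full 512,000
```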

The filterCache and queryResultCache are maps. The key is
the query (or filter query) and the value is some representation
of the matching documents. You are set up to execute 48,000
autowarm queries every time you commit while indexing, which is
every 15 seconds in your case (your soft commit interval). The only
thing saving you, I suspect, is that your caches aren't actually being
filled anywhere near even 32,000 entries. But here's another
prediction: if you keep running this with varying queries, you'll
get slower and slower and slower. You'll see WARN messages
in your log about "max warming searchers exceeded". And
eventually you'll blow up memory. You can simulate this by
continually submitting fq clauses with bare NOW clauses, like
fq=date:[* TO NOW]....
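Here's a sketch of why bare NOW defeats the cache. Solr resolves NOW to the current millisecond when it parses the fq, so the effective cache key differs on every request; this toy `cache_key()` only mimics that resolution and is not Solr's actual internal key:

```python
import time

# Toy stand-in for the cache key: pretend NOW is substituted with
# the current millisecond before the cache lookup, as Solr's query
# parsing effectively does.
def cache_key(fq):
    return fq.replace("NOW", str(int(time.time() * 1000)))

k1 = cache_key("date:[* TO NOW]")
time.sleep(0.01)
k2 = cache_key("date:[* TO NOW]")
print(k1 == k2)   # False: every such fq is a miss and a brand-new entry

# Date-math rounding (e.g. fq=date:[* TO NOW/DAY]) resolves to the
# same value all day, so that entry actually gets reused.
```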

Really, start with your caches closer to 512 and an autowarm of
16 or so. Look at the admin page for your hit ratios and adjust.
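For reference, that starting point looks something like this in solrconfig.xml (ballpark figures only; tune them against your observed hit ratios):

```xml
<!-- Modest starting sizes; adjust after watching hit ratios
     on the admin page. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="16"/>

<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="16"/>
```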

Best,
Erick

On Fri, Dec 19, 2014 at 10:25 AM, heaven <aheave...@gmail.com> wrote:
> Thanks, decreased the caches by half, increased the heap size to 16G,
> configured Huge Pages and added these options:
> -XX:+UseConcMarkSweepGC
> -XX:+UseLargePages
> -XX:+CMSParallelRemarkEnabled
> -XX:+ParallelRefProcEnabled
> -XX:+AggressiveOpts
> -XX:CMSInitiatingOccupancyFraction=75
>
> Best,
> Alex
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Endless-100-CPU-usage-on-searcherExecutor-thread-tp4175088p4175271.html
> Sent from the Solr - User mailing list archive at Nabble.com.
