50K is still very, very large. You say you have 50M docs/node, so each filterCache entry will be on the order of 6M: a filterCache entry is a bitset with one bit per document, so roughly 50M/8 ≈ 6.25 MB. Times 50,000 entries (potential if you turn indexing off), that's about 300G of memory for your filter cache alone. There are OOMs out there with your name on them, just waiting to happen at 3:00 AM after you've been at a party....
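The back-of-the-envelope arithmetic above can be checked quickly (a sketch; the 50M doc count and 50K entry limit are the figures from this thread, not measurements):

```python
# Rough filterCache memory estimate: each cache entry is a bitset
# holding one bit per document in the index.
docs_per_node = 50_000_000        # ~50M docs per node, per the thread
entry_bytes = docs_per_node // 8  # one bit per doc -> bytes per cache entry
max_entries = 50_000              # the hypothetical filterCache size limit
total_bytes = entry_bytes * max_entries

print(entry_bytes / 1e6)  # ~6.25 MB per entry
print(total_bytes / 1e9)  # ~312.5 GB if the cache ever fills
```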
The only thing I suspect is saving you is that your soft commit interval is short enough that you don't have a chance to build up that many cache entries; check the cache size on the Solr admin page to see if I'm on the right track.... And your excessive autowarm settings are very likely the source of your CPU utilization; at least that's what I'd investigate first.

Best,
Erick

On Fri, Dec 19, 2014 at 11:29 AM, heaven <aheave...@gmail.com> wrote:
> Okay, thanks for the suggestion, will try to decrease the caches gradually.
> Each node has nearly 50,000,000 docs; perhaps we need more shards...
>
> We had smaller caches before, but that was leading to bad feedback from our
> users. Besides our application users, we also use Solr internally for data
> analysis (very basic, simple searches for lists of keywords to determine a
> doc's category, but we run a lot of such queries).
>
> Previously it was possible to point those internal queries at one node
> (replica) and queries received from the users at another, so the caches did
> not interfere with each other. Not sure how to do this now with SolrCloud;
> it seems not to matter which node we send requests to, SolrCloud decides
> which nodes will process them. Am I wrong?
>
> Best,
> Alex
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Endless-100-CPU-usage-on-searcherExecutor-thread-tp4175088p4175285.html
> Sent from the Solr - User mailing list archive at Nabble.com.
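For reference, the knobs being discussed live in solrconfig.xml. A sketch of a much more conservative filterCache than the one implied above (the size and autowarmCount values here are illustrative starting points, not recommendations tuned for this index):

```xml
<!-- Hypothetical conservative settings: a small bounded cache and no
     autowarming, so a new searcher does not replay thousands of filter
     queries on every soft commit. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>
```

With autowarmCount at or near 0, opening a new searcher stops re-executing cached filters, which is the first thing to try if the searcherExecutor thread is pinning a CPU.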