Hello, I have the following configuration for filters and commits:
Concurrent LFU cache:

  ConcurrentLFUCache(maxSize=64, initialSize=64, minSize=57, acceptableSize=60,
  cleanupThread=false, timeDecay=true, autowarmCount=8,
  regenerator=org.apache.solr.search.SolrIndexSearcher$2@169ee0fd)

Commit settings:

  <autoCommit>
    <!-- Every 15 seconds -->
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

  <autoSoftCommit>
    <!-- Every 10 minutes -->
    <maxTime>${solr.autoSoftCommit.maxTime:600000}</maxTime>
  </autoSoftCommit>

filterCache stats:

  lookups    = 3602
  hits       = 3148
  hit ratio  = 0.87
  inserts    = 455
  evictions  = 400
  size       = 63
  warmupTime = 770

*Problem:* we see a lot of slow queries, for example:

  q=*:*&tie=1.0&defType=edismax&qt=standard&json.nl=map&qf=&fl=pk_i,score
  &start=0&sort=view_counter_i desc
  &fq={!cost=1 cache=true}type_s:Product AND is_valid_b:true
  &fq={!cost=50 cache=true}in_languages_t:de
  &fq={!cost=99 cache=false}(shipping_country_codes_mt:(DE OR EURO OR EUR OR ALL)) AND (cents_ri:[* TO 3000])
  &rows=36&wt=json

  hits=3768003 status=0 QTime=1378

I could increase the size of the filterCache, which would reduce the number of
evictions, but it seems to me this would not solve the root problem.

Any ideas on where/how to start optimising? Is it actually normal for this
query to take this long? We have an index of ~14 million docs, on 4 replicas
with two cores and 1 shard each.

Thank you.

--
Lorenzo Fundaro
Backend Engineer
E-Mail: lorenzo.fund...@dawandamail.com

Fax: +49 (0)30 25 76 08 52
Tel: +49 (0)179 51 10 982

DaWanda GmbH
Windscheidstraße 18
10627 Berlin

Geschäftsführer: Claudia Helming, Michael Pütz
Amtsgericht Charlottenburg HRB 104695 B
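For reference, the "increase the cache size" option would be a change to the
filterCache entry in solrconfig.xml. A minimal sketch (the size values below
are illustrative placeholders, not a recommendation, and assume the LFU cache
implementation already in use here):

```xml
<!-- Hypothetical enlarged filterCache in solrconfig.xml.
     size/initialSize are placeholder values; tune against the
     observed eviction rate rather than copying them verbatim. -->
<filterCache class="solr.LFUCache"
             size="512"
             initialSize="512"
             autowarmCount="8"
             timeDecay="true"/>
```

With only 64 entries and 400 evictions over ~3600 lookups, even a modest
increase should stop cached fq entries from being evicted between reuses,
though, as noted above, it would not by itself explain the 1378 ms QTime.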