Solr returns stale facet values from deleted or updated documents.

For example, we facet on a "name" field, and names change frequently in our
application. After we re-index a document with a changed name, we get both
the old name and the new name in the facet results. After digging into this,
I learned that Solr indexes are composed of write-once segments, each
containing a set of documents. When a hard commit happens, those segments
are closed; if a document is deleted or updated after that, the old version
stays in the segment and is only marked as deleted, not removed immediately.
Such documents no longer appear in search results, but somehow faceting is
still able to access those deleted documents.
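To show what I mean, the stale values appear in an ordinary field-facet
request like the one below (the host, collection name "mycollection", and
the "name" field are illustrative; the commented curl needs a running Solr):

```shell
# Hypothetical deployment details; adjust host/port/collection to yours.
SOLR="http://localhost:8983/solr/mycollection"
FACET_FIELD="name"

# Field faceting over the whole index; facet_counts lists both the old
# and the new name until the deleted segment data is merged away.
URL="${SOLR}/select?q=*:*&rows=0&facet=true&facet.field=${FACET_FIELD}"
# curl "$URL"
echo "$URL"
```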

Optimizing fixed this issue, but we cannot run an optimize every time a
customer changes data in production. I tried the options below, and they did
not work for me:

1) *expungeDeletes*.

I added the line below to solrconfig.xml:

<commit waitSearcher="false" expungeDeletes="true"/>

This did not work; as far as I can tell, expungeDeletes cannot be used like
this in solrconfig.xml.

When I send the commit parameters in the update URL instead, it works.
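For the record, this is the update-URL form that works for me (collection
name "mycollection" is illustrative). expungeDeletes=true asks the commit to
merge away segments that consist largely of deleted documents:

```shell
# Hypothetical deployment details; adjust host/port/collection to yours.
SOLR="http://localhost:8983/solr/mycollection"

# Commit with expungeDeletes: merges segments dominated by deleted docs,
# so their terms stop showing up in facets.
URL="${SOLR}/update?commit=true&expungeDeletes=true"
# curl "$URL"   # requires a running Solr instance
echo "$URL"
```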

2) Using *TieredMergePolicyFactory* might not help me, because the merge
threshold may not always be reached, and users will see stale data in the
meantime.
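For completeness, this is roughly how I would configure it in
solrconfig.xml. On Solr/Lucene 7.5+ the deletesPctAllowed parameter lowers
the share of deleted documents a segment may carry before it becomes a merge
candidate (the values below are illustrative; check the docs for your
version), but merges still happen asynchronously, so stale facet values can
survive until the next merge:

```xml
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
  <!-- Illustrative: cap deleted docs per segment at ~25%
       (available on Solr/Lucene 7.5 and later) -->
  <double name="deletesPctAllowed">25.0</double>
</mergePolicyFactory>
```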

3) Another option is to call the *optimize*() method exposed in SolrJ once a
day, but I am not sure what impact that would have on performance.
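If we do go the daily-optimize route, the same operation is also reachable
over HTTP, which a nightly cron job could hit instead of SolrJ (collection
name is illustrative). Note that maxSegments=1 forces a full merge, which
purges all deleted documents but rewrites every segment, so it is I/O heavy
and best scheduled off-peak:

```shell
# Hypothetical deployment details; adjust host/port/collection to yours.
SOLR="http://localhost:8983/solr/mycollection"

# Full optimize down to one segment; removes all deleted documents.
URL="${SOLR}/update?optimize=true&maxSegments=1"
# curl "$URL"   # requires a running Solr instance
echo "$URL"
```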

4) I tried tuning the filterCache, documentCache and queryResultCache
settings in solrconfig.xml. That did not solve the issue either; I do not
think any cache is causing this.

The number of documents we index per server will be at most 2M-3M.

Please suggest if there is any solution to this apart from optimizing.

Let me know if more data is needed.
