Ok, finally, with some pointers from Ryan, I figured out the last problem.
So, as a note to anyone else who might encounter the same problems with
MultiReader:
A) Directories can contain multiple segments, and there is a reader for each of those segments
B) Searches are replayed within each reader in a serial fashion **
I'm not sure that we could parallelize it. Currently, it's a serial
process (as you say): the queue collects across readers by adjusting
the values in the queue so they sort correctly against the current reader.
That approach doesn't appear easily parallelized.
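To illustrate the serial collection described above, here is a minimal toy sketch (the class and method names are invented for illustration and are not Lucene's internals): every reader feeds the same top-N queue one after another, and the doc ids are adjusted against the current reader so entries from different readers compare consistently. Because each pass mutates the shared queue, the loop is inherently serial; parallelizing it would require per-reader queues plus a merge step.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy model of serial per-reader collection into one shared queue.
// Names are invented; this is not Lucene's actual implementation.
class SerialTopN {
    record Hit(int doc, float score) {}

    static PriorityQueue<Hit> searchAll(float[][] scoresPerReader, int n) {
        // Min-heap of size n: the weakest of the current top n sits on top.
        PriorityQueue<Hit> top =
            new PriorityQueue<>(Comparator.comparingDouble(Hit::score));
        int docBase = 0;
        for (float[] readerScores : scoresPerReader) { // serial over readers
            for (int doc = 0; doc < readerScores.length; doc++) {
                // Rebase the segment-local doc id so queue entries from
                // different readers sort consistently against each other.
                top.add(new Hit(docBase + doc, readerScores[doc]));
                if (top.size() > n) top.poll(); // drop the weakest hit
            }
            docBase += readerScores.length; // next reader starts after this one
        }
        return top;
    }
}
```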
patrick o'leary wrote:
Think I may have found it: it was multiple runs of the filter, one for each
segment reader. I was generating a new map to hold distances each time, so
only the distances from the last segment reader were stored.
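The bug described above can be sketched like this (the class and method names are invented for illustration, not the actual filter code): the filter's per-segment hook runs once per segment reader, so any state recreated inside it is lost between segments. The fix is to create the distance map once per search and accumulate into it.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the per-segment accumulation fix; names are invented.
class DistanceAccumulator {
    // Created once per *search*, not once per segment: this is the fix.
    // Keys are assumed to be doc ids already rebased to the top-level reader.
    private final Map<Integer, Double> distances = new HashMap<>();

    // Called once for each segment reader during the search. The bug was
    // doing `distances = new HashMap<>()` here, which discarded every
    // segment's distances except the last one's.
    void onSegment(Map<Integer, Double> segmentDistances) {
        distances.putAll(segmentDistances);
    }

    Map<Integer, Double> getDistances() {
        return distances;
    }
}
```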
Currently it looks like those segmented searches are done serially; well, in Solr they
You might check out this Solr exchange:
http://www.lucidimagination.com/search/document/b2ccc68ca834129/lucene_2_9_migration_issues_multireader_vs_indexreader_document_ids
There are a few suggestions throughout.
--
- Mark
http://www.lucidimagination.com
Uwe Schindler wrote:
What is the problem, exactly? Maybe you are using the new Collector API, where the
search is done per segment, so caching does not work correctly?
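Uwe's point can be sketched as follows (this is a simplified toy model, not Lucene's real Collector interface): with per-segment collection, the collect callback receives segment-local doc ids, so a cache keyed on those raw ids conflates documents from different segments. Tracking the doc base supplied by the per-reader callback restores correct top-level keys.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified analogue of Lucene 2.9's per-segment Collector; names and
// signatures are illustrative, not the real API.
class PerSegmentCollector {
    private final Map<Integer, Float> scoreCache = new HashMap<>();
    private int docBase;

    // Analogue of the per-reader hook that reports the segment's offset.
    void setNextReader(int docBase) {
        this.docBase = docBase;
    }

    // Analogue of collect(int doc): `doc` is segment-local.
    void collect(int doc, float score) {
        // Wrong: scoreCache.put(doc, score) -- doc 0 of every segment collides.
        scoreCache.put(docBase + doc, score); // key by the top-level doc id
    }

    Map<Integer, Float> getScoreCache() {
        return scoreCache;
    }
}
```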
-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
From: patrick o'leary [mailto:pj..