I'm not sure that we could parallelize it. Currently it's a serial process (as you say): the queue collects across readers by adjusting the values already in the queue so that they sort correctly against the current reader. That approach doesn't appear to be easily parallelizable.
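For illustration, here is a minimal sketch of that per-segment flow using the 2.9 Collector API. The class name and the plain integer queue are hypothetical (this is not the real TopFieldCollector logic): the same collector instance is fed each segment in turn, only docBase changes between passes, and the shared queue is mutated serially.

import java.io.IOException;
import java.util.PriorityQueue;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Hypothetical collector: one shared queue, filled segment by segment.
public class GlobalDocIdCollector extends Collector {

  private final PriorityQueue<Integer> queue = new PriorityQueue<Integer>();
  private int docBase;

  public void setScorer(Scorer scorer) throws IOException {
    // scoring not needed for this sketch
  }

  public void setNextReader(IndexReader reader, int docBase) throws IOException {
    // Called once per segment; docBase is the offset of this segment's
    // first document within the top-level reader.
    this.docBase = docBase;
  }

  public void collect(int doc) throws IOException {
    // doc is segment-relative; docBase + doc is the top-level doc id.
    // The queue is shared, mutable state across all segments, which is
    // why these per-segment passes run one after another.
    queue.add(docBase + doc);
  }

  public boolean acceptsDocsOutOfOrder() {
    return true;
  }

  public PriorityQueue<Integer> getGlobalDocIds() {
    return queue;
  }
}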

patrick o'leary wrote:
Think I may have found it: it was the multiple runs of the filter, one for each segment reader. I was generating a new map to hold distances each time, so only the distances from the
last segment reader were stored.

Currently it looks like those segmented searches are done serially (well, in Solr they are).
I presume the end goal is to make them multi-threaded?
If so, I'll need to make my map synchronized.
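Along those lines, a minimal, hypothetical sketch of a per-segment filter that keeps one distance map for the whole query instead of rebuilding it on every getDocIdSet() call; DistanceFilter, computeDistance and maxDistance are placeholders rather than the actual spatial contrib code, and a ConcurrentHashMap stands in for a synchronized map in case the per-segment passes are ever run in parallel.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.OpenBitSet;

// Hypothetical distance filter: one map shared across all per-segment calls.
public class DistanceFilter extends Filter {

  private final double maxDistance;

  // Created once per query, not once per getDocIdSet() call, so the
  // distances from earlier segments are no longer thrown away.
  private final Map<Integer, Double> distances = new ConcurrentHashMap<Integer, Double>();

  public DistanceFilter(double maxDistance) {
    this.maxDistance = maxDistance;
  }

  public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
    OpenBitSet bits = new OpenBitSet(reader.maxDoc());
    for (int doc = 0; doc < reader.maxDoc(); doc++) {
      double d = computeDistance(reader, doc); // placeholder for the real geo math
      if (d <= maxDistance) {
        bits.set(doc);
        // NOTE: doc is segment-relative here, so keys from different
        // segments collide unless they are offset by the segment's docBase
        // (or keyed per reader); that is the bitset-position vs doc-id
        // question quoted below.
        distances.put(Integer.valueOf(doc), Double.valueOf(d));
      }
    }
    return bits;
  }

  private double computeDistance(IndexReader reader, int doc) {
    return 0.0; // stands in for the actual spatial calculation
  }
}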


On Tue, Apr 28, 2009 at 4:42 PM, Uwe Schindler <u...@thetaphi.de> wrote:

    What is the problem exactly? Maybe you are using the new Collector API,
    where the search is done for each segment, so caching does not
    work correctly?

    -----
    Uwe Schindler
    H.-H.-Meier-Allee 63, D-28213 Bremen
    http://www.thetaphi.de
    eMail: u...@thetaphi.de

    ------------------------------------------------------------------------

    From: patrick o'leary [mailto:pj...@pjaol.com]
    Sent: Tuesday, April 28, 2009 10:31 PM
    To: java-dev@lucene.apache.org
    Subject: ReadOnlyMultiSegmentReader bitset id vs doc id

    hey

    I've got a filter that's storing document ids with a geo distance
    for spatial Lucene, using a bitset position for the doc id.
    However, with a MultiSegmentReader that's no longer going to work.

    What's the most appropriate way to go from bitset position to doc
    id now?

    Thanks
    Patrick
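
For reference, one way to make that translation is to add the segment's docBase, i.e. the sum of maxDoc() over the segments that come before it in the top-level reader; a bit position in a per-segment bitset is just the segment-relative doc id. A minimal sketch, assuming the top-level reader exposes its segments via getSequentialSubReaders(); the helper class and method names are illustrative only.

import org.apache.lucene.index.IndexReader;

// Hypothetical helper: map a (segment reader, bitset position) pair to a
// top-level doc id by summing maxDoc() over the preceding segments.
public final class DocIdTranslator {

  public static int toTopLevelDocId(IndexReader topReader,
                                    IndexReader segmentReader,
                                    int bitsetPosition) {
    int docBase = 0;
    IndexReader[] subReaders = topReader.getSequentialSubReaders();
    for (int i = 0; i < subReaders.length; i++) {
      if (subReaders[i] == segmentReader) {
        // Bit positions in a per-segment bitset are segment-relative doc ids.
        return docBase + bitsetPosition;
      }
      docBase += subReaders[i].maxDoc();
    }
    throw new IllegalArgumentException("reader is not a segment of topReader");
  }
}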




--
- Mark

http://www.lucidimagination.com




