On Mon, Sep 8, 2008 at 3:04 PM, Michael McCandless <[EMAIL PROTECTED]> wrote:
> Right, getCurrentIndex would return a MultiReader that includes
> SegmentReader for each segment in the index, plus a "RAMReader" that
> searches the RAM buffer.  That RAMReader is a tiny shell class that would
> basically just record the max docID it's allowed to go up to (the docID as
> of when it was opened), and stop enumerating docIDs (eg in the TermDocs)
> when it hits a docID beyond that limit.
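The "record a max docID at open time and stop there" idea can be sketched in plain Java. This is a hypothetical illustration, not Lucene code: `CappedDocIterator` and its names are made up, standing in for a TermDocs-style enumeration over a RAM buffer that keeps growing underneath the reader.

```java
import java.util.Iterator;

// Hypothetical sketch: a doc-ID enumeration that snapshots the writer's
// maxDoc when opened and refuses to enumerate past it, giving callers a
// point-in-time view even though the underlying buffer keeps growing.
class CappedDocIterator implements Iterator<Integer> {
    private final Iterator<Integer> docIds; // underlying ascending enumeration
    private final int maxDoc;               // snapshot taken at open time
    private Integer next;

    CappedDocIterator(Iterator<Integer> docIds, int maxDocAtOpen) {
        this.docIds = docIds;
        this.maxDoc = maxDocAtOpen;
        advance();
    }

    private void advance() {
        next = null;
        if (docIds.hasNext()) {
            int candidate = docIds.next();
            // Doc IDs arrive in increasing order, so the first one at or
            // beyond the snapshot ends the enumeration for good.
            if (candidate < maxDoc) {
                next = candidate;
            }
        }
    }

    @Override
    public boolean hasNext() { return next != null; }

    @Override
    public Integer next() {
        int result = next;
        advance();
        return result;
    }
}
```

Because the cap is just a comparison on an already-ordered stream, the shell class stays tiny, matching the "tiny shell class" framing above.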
What about something like term freq?  Would it need to count the number of
docs after the local maxDoc, or is there a better way?

> For reading stored fields and term vectors, which are now flushed
> immediately to disk, we need to somehow get an IndexInput from the
> IndexOutputs that IndexWriter holds open on these files.  Or, maybe, just
> open new IndexInputs?

Hmmm, seems like a case of our nice and simple Directory model not having
quite enough features in this case.

>> Another thing that will help is if users could get their hands on the
>> sub-readers of a multi-segment reader.  Right now that is hidden in
>> MultiSegmentReader and makes updating anything incrementally
>> difficult.
>
> Besides what's handled by MultiSegmentReader.reopen already, what else do
> you need to incrementally update?

Anything that you want to incrementally update and uses an IndexReader as
a key.  Mostly caches I would think...  Solr has user-level (application
specific) caches, faceting caches, etc.

-Yonik

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
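The point about caches keyed on an IndexReader can be sketched as follows. This is not Solr's actual cache code; `PerSegmentCache` and its names are hypothetical. The idea is that if caches are keyed per sub-reader (with weak keys, so entries die with their reader), an incremental reopen only recomputes entries for the segments that actually changed.

```java
import java.util.Map;
import java.util.WeakHashMap;
import java.util.function.Function;

// Hypothetical sketch (not Solr's real cache): a cache keyed by individual
// sub-readers rather than by the top-level multi-segment reader. After an
// incremental reopen, unchanged segments reuse their cached values and only
// new or reloaded segments pay the recompute cost.
class PerSegmentCache<V> {
    // Weak keys: an entry disappears once its segment reader is
    // closed and garbage-collected.
    private final Map<Object, V> cache = new WeakHashMap<>();
    private final Function<Object, V> compute;
    int computeCalls = 0; // exposed only so the behavior is observable below

    PerSegmentCache(Function<Object, V> compute) {
        this.compute = compute;
    }

    synchronized V get(Object segmentReader) {
        V value = cache.get(segmentReader);
        if (value == null) {
            value = compute.apply(segmentReader);
            cache.put(segmentReader, value);
            computeCalls++;
        }
        return value;
    }
}
```

With a cache keyed on the top-level reader instead, every reopen would invalidate everything, which is exactly the incremental-update difficulty described above.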