The buffering of the next 32 docs by TermScorer is a way to speed up the serial (in-order) reading of docs from the TermDocs object (which probably comes straight from disk).
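For reference, something like the following reproduces that read-ahead pattern outside of TermScorer. This is only a rough sketch against the old TermDocs API (the 32-entry buffer size matches what TermScorer uses, but the class and method names here are just illustrative):

import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

public class ReadAheadExample {

    // Same buffer size that TermScorer uses for its read-ahead.
    private static final int BUFFER_SIZE = 32;

    /** Reads the postings for one term in chunks, the way TermScorer buffers them. */
    public static void dumpPostings(IndexReader reader, Term term) throws IOException {
        TermDocs termDocs = reader.termDocs(term);
        try {
            int[] docs = new int[BUFFER_SIZE];
            int[] freqs = new int[BUFFER_SIZE];
            int count;
            // read() fills both arrays and returns the number of entries read;
            // 0 means the postings for this term are exhausted.
            while ((count = termDocs.read(docs, freqs)) > 0) {
                for (int i = 0; i < count; i++) {
                    System.out.println("doc=" + docs[i] + " freq=" + freqs[i]);
                }
            }
        } finally {
            termDocs.close();
        }
    }
}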
I would like to hold a significant amount of the index in memory but use the disk index as a spill-over. Obviously the best situation is to hold in memory only the information that is likely to be used again soon. It seems that caching TermDocs would allow popular search terms to be searched more efficiently, while the less common terms would still be read from disk. Has anyone else done this? Know of a better approach? (A rough sketch of what I have in mind follows the quoted message below.)

----- Original Message -----
From: "Paul Elschot" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 27, 2004 3:07 AM
Subject: Re: Caching of TermDocs

> On Monday 26 July 2004 21:41, John Patterson wrote:
>
> > Is there any way to cache TermDocs? Is this a good idea?
>
> Lucene does this internally by buffering
> up to 32 document numbers in advance for a query Term.
> You can view the details here in case you're interested:
> http://cvs.apache.org/viewcvs.cgi/jakarta-lucene/src/java/org/apache/lucene/search/TermScorer.java
> It uses the TermDocs.read() method to fill a buffer of document numbers.
>
> Is this what you had in mind?
>
> Regards,
> Paul
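To make the idea concrete, here is a rough, untested sketch of the kind of cache I mean: an LRU map from Term to its fully decoded postings, falling back to the disk-based TermDocs on a miss. The class and method names (TermDocsCache, getPostings) are made up for illustration, and capping the cache by number of terms rather than by memory used is a simplification:

import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;

public class TermDocsCache {

    /** The decoded postings of one term, held entirely in memory. */
    public static class Postings {
        public final int[] docs;
        public final int[] freqs;
        Postings(int[] docs, int[] freqs) {
            this.docs = docs;
            this.freqs = freqs;
        }
    }

    private final IndexReader reader;
    private final Map<Term, Postings> cache;

    public TermDocsCache(final IndexReader reader, final int maxTerms) {
        this.reader = reader;
        // Access-ordered LinkedHashMap gives simple LRU eviction: the least
        // recently used term is dropped once maxTerms entries are cached.
        this.cache = new LinkedHashMap<Term, Postings>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Term, Postings> eldest) {
                return size() > maxTerms;
            }
        };
    }

    /** Returns the postings for a term, from memory if the term is popular enough. */
    public synchronized Postings getPostings(Term term) throws IOException {
        Postings cached = cache.get(term);
        if (cached != null) {
            return cached;                    // hot term: no disk access at all
        }
        // Cache miss: spill over to the disk index, decode the whole postings
        // list once, and keep it for next time.
        int df = reader.docFreq(term);
        int[] docs = new int[df];
        int[] freqs = new int[df];
        TermDocs termDocs = reader.termDocs(term);
        try {
            int[] docBuf = new int[32];
            int[] freqBuf = new int[32];
            int filled = 0;
            int n;
            while ((n = termDocs.read(docBuf, freqBuf)) > 0) {
                System.arraycopy(docBuf, 0, docs, filled, n);
                System.arraycopy(freqBuf, 0, freqs, filled, n);
                filled += n;
            }
        } finally {
            termDocs.close();
        }
        Postings postings = new Postings(docs, freqs);
        cache.put(term, postings);
        return postings;
    }
}

The point is just that a popular term would pay the TermDocs.read() cost once and then be scored from memory, while rare terms keep hitting the disk index as they do now; real memory accounting (e.g. capping by total array size instead of term count) would obviously need more thought.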
