[ https://issues.apache.org/jira/browse/LUCENE-1195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael Busch updated LUCENE-1195:
----------------------------------

    Attachment: lucene-1195.patch

Here is the simple patch. The cache is only used in TermInfosReader.get(Term). So if, for example, a RangeQuery gets a TermEnum from the IndexReader, enumerating the terms with that TermEnum will not replace the terms in the cache.

The LRUCache itself is not synchronized. It can happen that multiple threads look up the same term at the same time, in which case we might get a cache miss. But such a situation should be very rare, so I think it's better to avoid the synchronization overhead.

I set the default cache size to 1024. A cache entry is a (Term, TermInfo) tuple. TermInfo needs 24 bytes, and I think a Term needs approx. 20-30 bytes. So the cache would need about 1024 * ~50 bytes = ~50 KB, plus a bit of overhead from the LinkedHashMap (see the sketch at the end of this message). This is the memory requirement per index segment, so a non-optimized index with 20 segments would need about 1 MB more memory with this cache. I think this is acceptable; otherwise we can also decrease the cache size.

All core & contrib tests pass.

> Performance improvement for TermInfosReader
> -------------------------------------------
>
>                 Key: LUCENE-1195
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1195
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael Busch
>            Assignee: Michael Busch
>            Priority: Minor
>             Fix For: 2.4
>
>         Attachments: lucene-1195.patch
>
>
> Currently we have a bottleneck for multi-term queries: the dictionary lookup is done twice for each term. The first lookup happens in Similarity.idf(), where searcher.docFreq() is called; the second when the posting list is opened (TermDocs or TermPositions).
> The dictionary lookup is not cheap, which is why a significant performance improvement is possible here if we avoid the second lookup. An easy way to do this is to add a small LRU cache to TermInfosReader.
> I ran some performance experiments with an LRU cache size of 20 and a mid-size index of 500,000 documents from Wikipedia. Here are some test results:
>
> 50,000 AND queries with 3 terms each:
> old: 152 secs
> new (with LRU cache): 112 secs (26% faster)
>
> 50,000 OR queries with 3 terms each:
> old: 175 secs
> new (with LRU cache): 133 secs (24% faster)
>
> For bigger indexes this patch will probably have less impact, for smaller ones more.
> I will attach a patch soon.
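For reference, here is a minimal sketch of the caching approach described above, built on LinkedHashMap's access-order mode. This is illustrative only, not the attached patch: the SimpleLRUCache class name and the seekAndReadTermInfo() helper are assumptions, it uses generics for clarity even though Lucene of this era targets older Java, and the real TermInfosReader contains index-seeking logic that is omitted here.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only -- not the attached patch. SimpleLRUCache and
// seekAndReadTermInfo() are illustrative names, not Lucene API.
class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  SimpleLRUCache(int maxSize) {
    // accessOrder = true: iteration order is least-recently-accessed first.
    super(16, 0.75f, true);
    this.maxSize = maxSize;
  }

  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    // Evict the least recently used entry once the cache grows past maxSize.
    return size() > maxSize;
  }
}

// Inside a TermInfosReader-like class, the cache would be consulted before
// the dictionary seek. It is deliberately left unsynchronized, so concurrent
// lookups of the same term may both miss; per the comment above, that should
// be rare enough to be cheaper than taking a lock on every lookup.
//
//   private final SimpleLRUCache<Term, TermInfo> cache =
//       new SimpleLRUCache<Term, TermInfo>(1024);
//
//   TermInfo get(Term term) throws IOException {
//     TermInfo ti = cache.get(term);      // hit: skip the dictionary seek
//     if (ti == null) {
//       ti = seekAndReadTermInfo(term);   // miss: expensive dictionary lookup
//       if (ti != null) cache.put(term, ti);
//     }
//     return ti;
//   }
{code}

With this shape, the docFreq() lookup in Similarity.idf() warms the cache, and the second lookup when TermDocs/TermPositions is opened becomes a cheap map hit instead of a second dictionary seek.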