[ https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12770978#action_12770978 ]
Yonik Seeley commented on LUCENE-1997:
--------------------------------------

So if we're considering new comparator APIs, and the indirection seems to be
slowing things down, one thing to think about is how to eliminate that
indirection. Even considering the multiPQ case: why should one need more than a
single PQ when dealing with primitives that don't depend on context (i.e.
everything except ord)? If the comparator API had a way to set (or return) a
primitive value for a single docid, and those values were then compared (either
directly by the PQ or via a callback), there wouldn't be an issue with reader
transitions (because you never compare id vs id) and hence no need for multiple
priority queues. Avoiding the creation of intermediate Comparable objects also
seems desirable. Perhaps handle it the way "score" is handled now... inlined
into Entry? That should make heap rebalancing faster (fewer callbacks, fewer
array lookups). (A sketch of this idea follows the quoted issue description
below.)

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>
>                 Key: LUCENE-1997
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1997
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch,
>                      LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch,
>                      LUCENE-1997.patch, LUCENE-1997.patch
>
>
> Spinoff from the recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
>
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests. Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/sortBench.py).
>
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available). Then
> it runs various combinations:
>
> * Index with 20 balanced segments vs index with the "normal" log
>   segment size
> * Queries with different numbers of hits (only for the wikipedia index)
> * Different top N
> * Different sorts (by title for wikipedia, and by random string,
>   random int, and country for the random index)
>
> For each test, 7 search rounds are run and the best QPS is kept. The
> script runs singlePQ then multiPQ, records the resulting best QPS for
> each, and produces a table (in Jira format) as output.
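For concreteness, here is a minimal sketch of the single-queue-over-primitives
idea described in the comment above. It is not Lucene's actual FieldComparator
or HitQueue code: IntValueSource, Entry, and PrimitiveValueTopN are hypothetical
names, and a plain java.util.PriorityQueue stands in for Lucene's heap. The
point it illustrates is that once the primitive sort value is copied into the
entry (the way score is inlined today), heap comparisons never call back into
per-reader state, so hits from different segments can share one queue. Ord-based
string sorting is the exception, since ords are only comparable within a reader.

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.PriorityQueue;

    /**
     * Sketch only: a single top-N queue whose entries carry the primitive
     * sort value inline, so ordering is decided by comparing primitives
     * rather than by calling back into a per-reader comparator.
     */
    public class PrimitiveValueTopN {

      /** Hypothetical per-segment source, e.g. backed by a cached int[]. */
      interface IntValueSource {
        int value(int doc); // doc is segment-local
      }

      /** Queue entry with the sort value inlined, analogous to (doc, score). */
      static final class Entry {
        final int globalDoc;
        final int sortValue;

        Entry(int globalDoc, int sortValue) {
          this.globalDoc = globalDoc;
          this.sortValue = sortValue;
        }
      }

      private final int topN;

      // Min-heap on sortValue: the weakest of the current top-N sits on top,
      // so a candidate is accepted or rejected by one primitive comparison.
      private final PriorityQueue<Entry> pq =
          new PriorityQueue<>(Comparator.comparingInt((Entry e) -> e.sortValue));

      PrimitiveValueTopN(int topN) {
        this.topN = topN;
      }

      /** Collect one segment; docBase maps segment-local docids to global docids. */
      void collectSegment(IntValueSource source, int docBase, int maxDoc) {
        for (int doc = 0; doc < maxDoc; doc++) {
          int v = source.value(doc);            // one primitive lookup per doc
          if (pq.size() < topN) {
            pq.add(new Entry(docBase + doc, v));
          } else if (v > pq.peek().sortValue) { // primitive compare, no callback
            pq.poll();
            pq.add(new Entry(docBase + doc, v));
          }
        }
      }

      /** Returns the collected top-N entries, largest sort value first. */
      Entry[] topEntries() {
        Entry[] out = pq.toArray(new Entry[0]);
        Arrays.sort(out,
            Comparator.comparingInt((Entry e) -> e.sortValue).reversed());
        return out;
      }
    }

A production version would also break ties by docid and reuse entry slots
instead of allocating per hit; the allocation-free, callback-free comparisons
during insertion and rebalancing are what "fewer callbacks, fewer array
lookups" is getting at.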