Some varied queries might give us more to go on. I have a feeling this test might actually be favorable for the new API?

- Mark

http://www.lucidimagination.com (mobile)

On Oct 25, 2009, at 4:43 PM, "Mark Miller (JIRA)" <j...@apache.org> wrote:


[ https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12769863#action_12769863 ]

Mark Miller commented on LUCENE-1997:
-------------------------------------

Given good enough reasons, I could see saying we made a mistake and switching back - but for the reasons I've given, I don't find that to be the case. I don't yet feel the new API was a mistake.

Lots of other guys to weigh in, though. If everyone else feels like it's the right move, I'm not going to -1 it - just weighing in with how I feel.

I'm not seeing 10-20% faster across the board - on my system it doesn't even hit 10%, and I'm a Linux user and advocate. I'm all for performance, but < 10% here and there is not enough to sway me against 30-50% losses in the large-queue cases, combined with having to shift back. It's not a clear win either way, but I've said which way I lean.

Luckily, it's not just me you have to convince. Lots of smart people still to weigh in.

Explore performance of multi-PQ vs single-PQ sorting API
--------------------------------------------------------

               Key: LUCENE-1997
               URL: https://issues.apache.org/jira/browse/LUCENE-1997
           Project: Lucene - Java
        Issue Type: Improvement
        Components: Search
  Affects Versions: 2.9
          Reporter: Michael McCandless
          Assignee: Michael McCandless
Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch


Spinoff from recent "lucene 2.9 sorting algorithm" thread on java- dev,
where a simpler (non-segment-based) comparator API is proposed that
gathers results into multiple PQs (one per segment) and then merges
them in the end.
I started from John's multi-PQ code and worked it into
contrib/benchmark so that we could run perf tests.  Then I generified
the Python script I use for running search benchmarks (in
contrib/benchmark/sortBench.py).
The script first creates indexes with 1M docs (based on
SortableSingleDocSource, and based on wikipedia, if available).  Then
it runs various combinations:
 * Index with 20 balanced segments vs index with the "normal" log
   segment size
 * Queries with different numbers of hits (only for wikipedia index)
 * Different top N
 * Different sorts (by title, for wikipedia, and by random string,
   random int, and country for the random index)
For each test, 7 search rounds are run and the best QPS is kept.  The
script runs singlePQ then multiPQ, records the resulting best QPS
for each, and produces a table (in Jira format) as output.
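The multi-PQ idea described above (one small priority queue per segment, merged at the end) could be sketched roughly as below. This is an illustrative standalone sketch, not the actual patch: the `Hit` record and method names are hypothetical stand-ins for Lucene's real hit/comparator types, and each "segment" is simplified to an array of per-doc sort values.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hedged sketch of the multi-PQ approach: collect each segment's top-N
// hits into its own PQ, then merge the per-segment queues at the end.
public class MultiPQSketch {

    // A minimal hit: document id within its segment plus the sort value.
    record Hit(int doc, int sortValue) {}

    // Collect the top N hits from each segment into a per-segment PQ.
    static List<PriorityQueue<Hit>> collectPerSegment(List<int[]> segments, int topN) {
        List<PriorityQueue<Hit>> queues = new ArrayList<>();
        for (int[] segment : segments) {
            // Min-heap on sortValue: the weakest retained hit sits on
            // top, so it can be evicted cheaply once the queue is full.
            PriorityQueue<Hit> pq = new PriorityQueue<>(Comparator.comparingInt(Hit::sortValue));
            for (int doc = 0; doc < segment.length; doc++) {
                if (pq.size() < topN) {
                    pq.add(new Hit(doc, segment[doc]));
                } else if (segment[doc] > pq.peek().sortValue()) {
                    pq.poll();
                    pq.add(new Hit(doc, segment[doc]));
                }
            }
            queues.add(pq);
        }
        return queues;
    }

    // Merge the per-segment queues into a single global top N,
    // returned in descending sort-value order.
    static List<Hit> merge(List<PriorityQueue<Hit>> queues, int topN) {
        PriorityQueue<Hit> merged = new PriorityQueue<>(Comparator.comparingInt(Hit::sortValue));
        for (PriorityQueue<Hit> pq : queues) {
            for (Hit hit : pq) {
                if (merged.size() < topN) {
                    merged.add(hit);
                } else if (hit.sortValue() > merged.peek().sortValue()) {
                    merged.poll();
                    merged.add(hit);
                }
            }
        }
        List<Hit> result = new ArrayList<>(merged);
        result.sort(Comparator.comparingInt(Hit::sortValue).reversed());
        return result;
    }

    public static void main(String[] args) {
        // Two "segments" of per-doc sort values; global top 2 is 9 then 7.
        List<int[]> segments = List.of(new int[] {5, 1, 9}, new int[] {7, 3});
        for (Hit hit : merge(collectPerSegment(segments, 2), 2)) {
            System.out.println(hit);
        }
    }
}
```

The trade-off being benchmarked follows from the shape of this sketch: each per-segment queue only ever holds N entries and never crosses segment boundaries (so the comparator stays simple and segment-local), but for large N the merge step has to re-examine numSegments * N candidates, which is where the large-queue slowdowns come from.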

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-dev-h...@lucene.apache.org

