[
https://issues.apache.org/jira/browse/LUCENE-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Simon Willnauer updated LUCENE-2723:
------------------------------------
Attachment: LUCENE-2723-termscorer.patch
I played around a little with the recommended actions from the HotSpot wiki and
factored some methods out of TermScorer and MatchOnlyTermScorer (a rough sketch
of the idea follows below, after the JVM details). I merged my changes with
Robert's changes from the latest patch.
All tests pass, and I compared base (branch) against spec (branch + patch) on a
10M doc Wikipedia index. I was actually surprised by the results though:
{code}
Query                                 QPS base   QPS spec   Pct diff
"unit state"                              1.71       1.70      -0.6%
spanFirst(unit, 5)                        4.81       4.79      -0.5%
un*d                                     14.94      14.88      -0.4%
united~1.0                                8.79       8.76      -0.4%
unit*                                     8.70       8.66      -0.4%
uni*                                      4.16       4.14      -0.3%
u*d                                       4.01       4.01      -0.1%
united~2.0                                1.88       1.88       0.0%
spanNear([unit, state], 10, true)         0.94       0.96       2.8%
unit~1.0                                  4.32       4.49       4.0%
unit~2.0                                  4.20       4.38       4.4%
+nebraska +state                         24.90      26.11       4.8%
+unit +state                              4.60       4.97       8.0%
unit state                                3.60       3.93       9.1%
state                                     9.83      10.98      11.7%
{code}
I ran those twice with very similar results:
3 iterations, 40 iterations per JVM, and 2 threads on the delmulti index.
Here is my JVM:
{code}
java version "1.6.0_22"
Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
Java HotSpot(TM) Server VM (build 17.1-b03, mixed mode)
{code}
started with:
{code}
java -Xbatch -Xms2g -Xmx2g -server
{code}
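For context, the HotSpot wiki advice referenced above mostly comes down to keeping the hot per-document work in small methods so the JIT can inline them. Below is a purely illustrative Java sketch of that kind of extraction; the class and method names are hypothetical and this is not the actual TermScorer change from the patch.
{code}
// Illustrative only: keep the per-document work in a small, separate method
// so it stays under the JIT's inlining size threshold.
final class HotLoopExample {
  private final int[] freqs = new int[64];   // buffered freqs for one chunk
  private float score;

  // "Driver" method: loops over the buffered chunk, delegating per-doc work.
  void scoreChunk(int count) {
    for (int i = 0; i < count; i++) {
      scoreDoc(freqs[i]);
    }
  }

  // Small, hot method: an easy inlining candidate for the JIT.
  private void scoreDoc(int freq) {
    score += (float) Math.sqrt(freq);  // stand-in for the real Similarity call
  }
}
{code}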
> Speed up Lucene's low level bulk postings read API
> --------------------------------------------------
>
> Key: LUCENE-2723
> URL: https://issues.apache.org/jira/browse/LUCENE-2723
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Index
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Fix For: 4.0
>
> Attachments: LUCENE-2723-termscorer.patch, LUCENE-2723.patch,
> LUCENE-2723.patch, LUCENE-2723.patch, LUCENE-2723.patch, LUCENE-2723.patch,
> LUCENE-2723_termscorer.patch
>
>
> Spinoff from LUCENE-1410.
> The flex DocsEnum has a simple bulk-read API that reads the next chunk
> of docs/freqs. But it's a poor fit for intblock codecs like FOR/PFOR
> (from LUCENE-1410). This is not unlike sucking coffee through those
> tiny plastic coffee stirrers they hand out on airplanes that,
> surprisingly, also happen to function as a straw.
> As a result we see no perf gain from using FOR/PFOR.
> I had hacked up a fix for this, described in my blog post at
> http://chbits.blogspot.com/2010/08/lucene-performance-with-pfordelta-codec.html
> I'm opening this issue to get that work to a committable point.
> So... I've worked out a new bulk-read API to address the performance
> bottleneck. It has some big changes over the current bulk-read API:
> * You can now also bulk-read positions (but not payloads), but, I
> have yet to cutover positional queries.
> * The buffer contains doc deltas, not absolute values, for docIDs
> and positions (freqs are absolute).
> * Deleted docs are not filtered out.
> * The doc & freq buffers need not be "aligned". For fixed intblock
> codecs (FOR/PFOR) they will be, but for varint codecs (Simple9/16,
> Group varint, etc.) they won't be.
> It's still a work in progress...
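> For illustration only, a consumer of such an API might look roughly like the
> sketch below; the class and field names are hypothetical, not the committed
> flex API. It shows the un-delta step and the fact that the caller has to
> skip deleted docs itself:
> {code}
> // Hypothetical sketch only: class and field names are illustrative,
> // not the actual flex bulk-read API.
> final class BulkReadExample {
>
>   // Stand-in for a bulk-read result: delta-coded docIDs, absolute freqs.
>   static final class BulkResult {
>     int[] docDeltas;
>     int[] freqs;
>     int count;   // number of valid entries; this sketch assumes the doc and
>                  // freq buffers are aligned (as for fixed intblock codecs);
>                  // for varint codecs the caller would track separate offsets
>   }
>
>   // Stand-in for deleted-docs bits.
>   interface Bits {
>     boolean get(int index);
>   }
>
>   // The caller un-deltas docIDs and skips deleted docs itself.
>   static void consume(BulkResult result, Bits deleted) {
>     int doc = 0;                         // running absolute docID
>     for (int i = 0; i < result.count; i++) {
>       doc += result.docDeltas[i];        // docIDs arrive as deltas
>       if (deleted != null && deleted.get(doc)) {
>         continue;                        // deleted docs are not filtered out
>       }
>       int freq = result.freqs[i];        // freqs are absolute values
>       System.out.println("doc=" + doc + " freq=" + freq);
>     }
>   }
> }
> {code}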
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.