[ 
https://issues.apache.org/jira/browse/LUCENE-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated LUCENE-2723:
---------------------------------

    Attachment: LUCENE-2723_facetPerSeg.patch

Here's a patch that changes just one place in faceting to per-segment bulk reads.
Same 10M doc index, testing w/ cache.minDf=maxdoc only (no use of the filterCache 
since I haven't changed that to per-seg yet).  Times are in ms.

|unique values in field|trunk per-seg (ms)|branch per-seg (ms)|speedup|
|10|161|173|-7%|
|100|217|218|-0%|
|1000|267|262|2%|
|10000|465|325|43%|
|100000|2025|678|199%|
|10000000|21061|4393|379%|
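
(Speedup is trunk time divided by branch time, minus one: e.g. 2025/678 ≈ 2.99 for 
the 100000-value row, i.e. 199%; 161/173 ≈ 0.93 for the 10-value row, i.e. -7%.)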

Now, facet.method=enum wasn't even designed for many unique values in a field, 
but this more efficient per-segment bulk code certainly expands the range where 
it's feasible.  My guess is that the speedup is due to us dropping to 
per-segment sooner with this patch (trunk gets a bulk enum, and then drops to 
per-segment).
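
For illustration, the per-segment loop is roughly the sketch below.  It uses 
present-day Lucene names (reader.leaves(), TermsEnum.postings()) as stand-ins for 
the flex/bulk API this patch actually targets, counts with nextDoc() rather than 
bulk reads, and skips deletion checks; the field name and counting scheme are 
hypothetical, not the patch's code.

{code:java}
// Illustrative only: per-segment term counting in the spirit of
// facet.method=enum.  The real patch uses the low-level bulk postings
// API; this sketch counts one doc at a time and ignores deleted docs.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.PostingsEnum;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.BytesRef;

public class PerSegmentFacetSketch {
  public static Map<String, Integer> countTerms(IndexReader reader, String field)
      throws IOException {
    Map<String, Integer> counts = new HashMap<>();
    // Stay on each segment's own enums instead of a merged multi-reader view;
    // this is the "drop to per-segment" that the patch does sooner.
    for (LeafReaderContext ctx : reader.leaves()) {
      Terms terms = ctx.reader().terms(field);
      if (terms == null) continue;
      TermsEnum termsEnum = terms.iterator();
      PostingsEnum postings = null;
      BytesRef term;
      while ((term = termsEnum.next()) != null) {
        postings = termsEnum.postings(postings, PostingsEnum.NONE);
        int segCount = 0;
        while (postings.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
          segCount++;
        }
        counts.merge(term.utf8ToString(), segCount, Integer::sum);
      }
    }
    return counts;
  }
}
{code}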

The drop in performance for the high-df field (each value will match ~1M docs) 
is curious.  It seems like this should be a more efficient inner loop, but I guess 
HotSpot just optimized it differently.

Based on these results, I'll re-convert the rest of the code to go per-segment 
too.

> Speed up Lucene's low level bulk postings read API
> --------------------------------------------------
>
>                 Key: LUCENE-2723
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2723
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>             Fix For: 4.0
>
>         Attachments: LUCENE-2723-termscorer.patch, 
> LUCENE-2723-termscorer.patch, LUCENE-2723-termscorer.patch, 
> LUCENE-2723.patch, LUCENE-2723.patch, LUCENE-2723.patch, LUCENE-2723.patch, 
> LUCENE-2723.patch, LUCENE-2723_facetPerSeg.patch, LUCENE-2723_openEnum.patch, 
> LUCENE-2723_termscorer.patch, LUCENE-2723_wastedint.patch
>
>
> Spinoff from LUCENE-1410.
> The flex DocsEnum has a simple bulk-read API that reads the next chunk
> of docs/freqs.  But it's a poor fit for intblock codecs like FOR/PFOR
> (from LUCENE-1410).  This is not unlike sucking coffee through those
> tiny plastic coffee stirrers they hand out on airplanes that,
> surprisingly, also happen to function as a straw.
> As a result we see no perf gain from using FOR/PFOR.
> I had hacked up a fix for this, described in my blog post at
> http://chbits.blogspot.com/2010/08/lucene-performance-with-pfordelta-codec.html
> I'm opening this issue to get that work to a committable point.
> So... I've worked out a new bulk-read API to address this performance
> bottleneck.  It has some big changes compared to the current bulk-read API:
>   * You can now also bulk-read positions (but not payloads), but I
>     have yet to cut over the positional queries.
>   * The buffer contains doc deltas, not absolute values, for docIDs
>     and positions (freqs are absolute).
>   * Deleted docs are not filtered out.
>   * The doc & freq buffers need not be "aligned".  For fixed intblock
>     codecs (FOR/PFOR) they will be, but for varint codecs (Simple9/16,
>     Group varint, etc.) they won't be.
> It's still a work in progress...
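
To make the delta and deleted-doc conventions above concrete, here is a rough 
consumer-side sketch; the class, method, and parameter names are hypothetical 
illustrations, not the actual API from this patch.

{code:java}
// Hypothetical consumer of a delta-encoded bulk read: the caller turns doc
// deltas into absolute docIDs and filters deleted docs itself, since the
// bulk API does neither.
import org.apache.lucene.util.Bits;

public class BulkDeltaConsumerSketch {
  public static int countLiveDocs(int[] docDeltas, int count, Bits liveDocs) {
    int doc = 0;   // running absolute docID (assumes deltas start from 0)
    int live = 0;
    for (int i = 0; i < count; i++) {
      doc += docDeltas[i];                          // deltas, not absolutes
      if (liveDocs == null || liveDocs.get(doc)) {  // caller handles deletes
        live++;
      }
    }
    return live;
  }
}
{code}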
