[ https://issues.apache.org/jira/browse/LUCENE-1410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless updated LUCENE-1410:
---------------------------------------

    Attachment: TestPFor2.java

New TestPFor2 attached.  I changed vInt -> int in the prints, and I
now compute the "best" bit size per block using getNumFrameBits() and
encode each block with that size.
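
For reference, a getNumFrameBits()-style helper presumably just measures the widest value in the block and returns the number of bits needed to hold it.  A minimal sketch under that assumption (hypothetical helper; not the actual TestPFor2 code):

```java
// Hypothetical sketch of a getNumFrameBits()-style helper: OR all values in
// the block together, then count how many bits the result needs.  The real
// TestPFor2 code may differ.
public class FrameBits {
    static int getNumFrameBits(int[] block, int offset, int len) {
        int accum = 0;
        for (int i = offset; i < offset + len; i++) {
            accum |= block[i];  // collects every bit set anywhere in the block
        }
        // bits needed for the widest value; at least 1 so a block of zeros
        // still gets a nonzero frame width
        return Math.max(1, 32 - Integer.numberOfLeadingZeros(accum));
    }

    public static void main(String[] args) {
        int[] block = {3, 17, 9, 1};
        // 17 = 10001b is the widest value, so the block needs 5 bits
        System.out.println(getNumFrameBits(block, 0, block.length)); // prints 5
    }
}
```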

I took a big frq file and computed, for each bit count, the
percentage of blocks that chose it:

||bits||count||pct||
|1|0|0.0|
|2|24328|0.9|
|3|94689|3.7|
|4|213887|8.4|
|5|277510|10.8|
|6|284905|11.1|
|7|193081|7.5|
|8|262857|10.3|
|9|260460|10.2|
|10|212046|8.3|
|11|162872|6.4|
|12|109801|4.3|
|13|77366|3.0|
|14|56620|2.2|
|15|34294|1.3|
|16|31000|1.2|
|17|28803|1.1|
|18|21617|0.8|
|19|22317|0.9|
|20|30326|1.2|
|21|162663|6.4|

And for prx:

||bits||count||pct||
|1|23034|0.7|
|2|12637|0.4|
|3|49286|1.4|
|4|56344|1.6|
|5|69385|2.0|
|6|81964|2.4|
|7|108847|3.1|
|8|179296|5.2|
|9|428787|12.4|
|10|873828|25.2|
|11|957913|27.7|
|12|534426|15.4|
|13|81856|2.4|
|14|2436|0.1|
|15|474|0.0|
|16|200|0.0|
|17|43|0.0|
|18|17|0.0|
|19|1|0.0|

It's interesting that prx is so much more tightly clustered than frq.
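
Incidentally, the count/pct columns above just tally, over all blocks, which bit size getNumFrameBits() picked.  A toy sketch of that tallying step (hypothetical; not the actual TestPFor2 printing code):

```java
// Sketch: count how often each "best" bit size wins across all blocks and
// emit count plus percentage, as in the tables above.  Hypothetical helper;
// not the actual TestPFor2 code.
public class BitsHistogram {
    static String row(int bits, int count, long totalBlocks) {
        double pct = 100.0 * count / totalBlocks;
        return String.format(java.util.Locale.ROOT, "|%d|%d|%.1f|", bits, count, pct);
    }

    public static void main(String[] args) {
        int[] bitsPerBlock = {2, 3, 3, 5};   // toy data: best bits of 4 blocks
        int[] counts = new int[33];
        for (int b : bitsPerBlock) counts[b]++;
        for (int bits = 1; bits <= 5; bits++) {
            System.out.println(row(bits, counts[bits], bitsPerBlock.length));
        }
    }
}
```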

New results for decoding vInt vs pfor, for frq file:

{code}
327864576 ints; 431137397 bytes compressed vs orig size 447077047

decompress using pfor:
2514 msec to decode 327864576 ints (130415 ints/msec)
2171 msec to decode 327864576 ints (151020 ints/msec)
2137 msec to decode 327864576 ints (153422 ints/msec)
2148 msec to decode 327864576 ints (152637 ints/msec)
2067 msec to decode 327864576 ints (158618 ints/msec)

decompress using readVInt:
4549 msec to read 327864691 ints (72074 ints/msec)
4421 msec to read 327864691 ints (74160 ints/msec)
3240 msec to read 327864691 ints (101192 ints/msec)
3199 msec to read 327864691 ints (102489 ints/msec)
3323 msec to read 327864691 ints (98665 ints/msec)

PFor is 54.766% faster
{code}

and prx file:

{code}
encode _l.prx to _l.prx.pfor...
442979072 ints; 628580878 bytes compressed vs orig size 651670377
decompress using pfor:
6385 msec to decode 442979072 ints (69378 ints/msec)
5904 msec to decode 442979072 ints (75030 ints/msec)
5796 msec to decode 442979072 ints (76428 ints/msec)
5807 msec to decode 442979072 ints (76283 ints/msec)
5803 msec to decode 442979072 ints (76336 ints/msec)

decompress using readVInt:
6893 msec to read 442979104 ints (64265 ints/msec)
6759 msec to read 442979104 ints (65539 ints/msec)
5304 msec to read 442979104 ints (83517 ints/msec)
5275 msec to read 442979104 ints (83977 ints/msec)
5792 msec to read 442979104 ints (76481 ints/msec)

PFor is 8.989% slower
{code}
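
For what it's worth, the faster/slower percentages appear to come from comparing the best ints/msec throughput achieved by each method.  A sketch of that comparison (inferred from the numbers above; TestPFor2 may compute it differently):

```java
// Sketch of how the "faster/slower" percentages above appear to be computed:
// compare the best ints/msec throughput of each method.  This is inferred
// from the reported numbers; TestPFor2 may compute them differently.
public class Speedup {
    static String compare(double pforRate, double vintRate) {
        if (pforRate >= vintRate) {
            return String.format(java.util.Locale.ROOT,
                "PFor is %.3f%% faster", 100.0 * (pforRate / vintRate - 1.0));
        }
        return String.format(java.util.Locale.ROOT,
            "PFor is %.3f%% slower", 100.0 * (1.0 - pforRate / vintRate));
    }

    public static void main(String[] args) {
        System.out.println(compare(158618, 102489)); // frq best rates: 54.766% faster
        System.out.println(compare(76428, 83977));   // prx best rates: 8.989% slower
    }
}
```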

Some comments:

  * In both cases, the pfor file is a bit smaller than the original
    Lucene file.  And the comparison is unfair, because I inject 4
    extra bytes per block (which I think won't be needed "for real").

  * For the frq file, pfor is quite a bit faster (54.8%), but for the
    prx file it's slower (9.0%).  Maybe that's because the prx file
    has a much higher occurrence of bigger bit sizes (where we don't
    have an unrolled version)?  It's odd; maybe I'm doing something
    wrong.  With a fixed 6 bits it is quite a bit faster (I retested
    to verify!), which is also weird because there would be many
    exceptions, and exception processing ought to be slower than
    non-exception processing.
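
On the exceptions point: PFOR stores values that don't fit in the chosen frame width separately and patches them back in after the fast bit-unpacking pass, which is why exception-heavy blocks ought to be slower.  A much-simplified sketch of that encode/patch cycle (the layout here is hypothetical: classic PFOR chains exceptions through the empty slots, whereas this sketch just stores positions directly):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified sketch of PFOR's exception handling for one block: values that
// fit in numFrameBits stay in the packed array; larger ones ("exceptions")
// go to a side list and are patched back in after the fast unpack pass.
public class PForSketch {

    // returns {truncatedValues, exceptionPositions, exceptionValues}
    static int[][] encode(int[] values, int numFrameBits) {
        int mask = (1 << numFrameBits) - 1;
        int[] truncated = new int[values.length];
        List<Integer> positions = new ArrayList<>();
        List<Integer> exceptions = new ArrayList<>();
        for (int i = 0; i < values.length; i++) {
            truncated[i] = values[i] & mask;   // keep only the frame bits
            if (values[i] > mask) {            // doesn't fit: record exception
                positions.add(i);
                exceptions.add(values[i]);
            }
        }
        int[] pos = new int[positions.size()], exc = new int[exceptions.size()];
        for (int i = 0; i < pos.length; i++) {
            pos[i] = positions.get(i);
            exc[i] = exceptions.get(i);
        }
        return new int[][] { truncated, pos, exc };
    }

    static int[] decode(int[][] encoded) {
        int[] out = encoded[0].clone();         // fast path: bulk unpack
        int[] pos = encoded[1], exc = encoded[2];
        for (int i = 0; i < pos.length; i++) {  // slow path: patch exceptions
            out[pos[i]] = exc[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] values = { 3, 100, 7, 1 };
        int[][] enc = encode(values, 4);        // 100 doesn't fit in 4 bits
        System.out.println(Arrays.equals(values, decode(enc))); // prints true
    }
}
```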

Stepping back a bit: I wonder what percentage of the time in a "typical"
Lucene search is spent decoding vInts?  That would bound how much
improvement we could ever expect from this optimization during
searching.


> PFOR implementation
> -------------------
>
>                 Key: LUCENE-1410
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1410
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Other
>            Reporter: Paul Elschot
>            Priority: Minor
>         Attachments: LUCENE-1410b.patch, TestPFor2.java, TestPFor2.java
>
>   Original Estimate: 21840h
>  Remaining Estimate: 21840h
>
> Implementation of Patched Frame of Reference.

-- 
This message is automatically generated by JIRA.

