[ 
https://issues.apache.org/jira/browse/LUCENE-1796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12741545#action_12741545
 ] 

Mark Miller edited comment on LUCENE-1796 at 8/10/09 2:02 PM:
--------------------------------------------------------------

Just to complete my report:

The tests I reported in this issue were done with a little more beef in the 
documents - I had added about 4 lines from a newspaper article. With those 
documents we are now only about 4-5% slower. However, with Yonik's original 
test, which uses very short docs:

{code}
 String[] fields = {"text","simple"
            ,"text","test"
            ,"text","how now brown cow"
            ,"text","what's that?"
            ,"text","radical!"
            ,"text","what's all this about, anyway?"
            ,"text","just how fast is this text indexing?"
{code}

...we are 10% behind. This test uses a mix of TokenStreams - you can see the 
TokenFilters used in the profiling pics. Still, this is a huge improvement over 
where we were before, which was 50-60% slower on this test.

All profiling pics are from Yonik's original small-doc test with 100,000 
iterations.
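
For anyone who wants to poke at this themselves, here is a minimal sketch of the 
kind of indexing loop such a micro-benchmark might use. This is my own 
illustration against the Lucene 2.9-era API, not Yonik's actual test code; the 
analyzer choice and the exact timing harness are assumptions.

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TokenStreamInitBench {
  public static void main(String[] args) throws Exception {
    // Same tiny field/value pairs as above.
    String[] fields = {"text", "simple"
                ,"text", "test"
                ,"text", "how now brown cow"
                ,"text", "what's that?"
                ,"text", "radical!"
                ,"text", "what's all this about, anyway?"
                ,"text", "just how fast is this text indexing?"};

    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new StandardAnalyzer(Version.LUCENE_29),
        IndexWriter.MaxFieldLength.UNLIMITED);

    long start = System.currentTimeMillis();
    // Index the same tiny docs many times so per-field TokenStream init cost dominates.
    for (int i = 0; i < 100000; i++) {
      Document doc = new Document();
      for (int j = 0; j < fields.length; j += 2) {
        doc.add(new Field(fields[j], fields[j + 1], Field.Store.NO, Field.Index.ANALYZED));
      }
      writer.addDocument(doc);
    }
    writer.close();
    System.out.println("indexing took " + (System.currentTimeMillis() - start) + " ms");
  }
}
{code}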

I'll attach:

* before the reflection TokenStream changes
* after (trunk)
* after, with this patch

> Speed up repeated TokenStream init
> ----------------------------------
>
>                 Key: LUCENE-1796
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1796
>             Project: Lucene - Java
>          Issue Type: Improvement
>            Reporter: Mark Miller
>            Assignee: Uwe Schindler
>             Fix For: 2.9
>
>         Attachments: after.png, afterAndLucene1796.png, before.png, 
> LUCENE-1796.patch, LUCENE-1796.patch, LUCENE-1796.patch, LUCENE-1796.patch, 
> LUCENE-1796.patch
>
>
>  by caching isMethodOverridden results
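
For reference, here is a rough sketch of the general technique named in the 
summary above - caching the result of a reflection-based "is this method 
overridden?" check so it isn't repeated on every TokenStream/TokenFilter 
construction. This is my own illustration of the idea, not the actual 
LUCENE-1796 patch; the class and method names are made up.

{code}
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: cache, per (class, method name), whether a subclass overrides a
// base-class method, so the reflection lookup is done once rather than per instance.
public final class OverrideCache {

  private static final Map<String, Boolean> CACHE = new ConcurrentHashMap<String, Boolean>();

  public static boolean isMethodOverridden(Class<?> clazz, Class<?> baseClass,
                                           String name, Class<?>... parameterTypes) {
    final String key = clazz.getName() + "#" + name;
    Boolean cached = CACHE.get(key);
    if (cached != null) {
      return cached.booleanValue();
    }
    boolean overridden;
    try {
      // If the (public) method is declared somewhere other than the base class,
      // some subclass has overridden it.
      Method m = clazz.getMethod(name, parameterTypes);
      overridden = m.getDeclaringClass() != baseClass;
    } catch (NoSuchMethodException e) {
      overridden = false;
    }
    CACHE.put(key, Boolean.valueOf(overridden));
    return overridden;
  }
}
{code}

A caller would then ask, for example, isMethodOverridden(MyFilter.class, 
TokenStream.class, "incrementToken") once per class and branch on the cached 
answer instead of redoing the reflection work on every init.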
