[ https://issues.apache.org/jira/browse/LUCENE-1826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12745803#action_12745803 ]

Uwe Schindler commented on LUCENE-1826:
---------------------------------------

bq. without the Tokenizer.reset(Reader, AttributeSource), i won't be able to 
reuse Tokenizer instances (will have to create a fresh one each time)

This is not possible by design. The AttributeSource cannot be changed after the 
fact; it is fixed when the stream is constructed (which is why it appears only 
as a constructor argument and nowhere else). For filters, the attributes come 
from the input token stream.
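
For illustration, a minimal sketch of that rule (the filter class itself is 
made up; the attribute API is the 2.9 one): the filter never gets its own 
AttributeSource, it registers its attributes on the one inherited from its 
input.

{code}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Hypothetical filter that lower-cases terms. super(input) hands the input's
// AttributeSource to TokenFilter, so addAttribute() below resolves against
// the *shared* source - the same TermAttribute instance the tokenizer fills.
public final class LowerCaseSketchFilter extends TokenFilter {
  private final TermAttribute termAtt;

  public LowerCaseSketchFilter(TokenStream input) {
    super(input); // share the input's AttributeSource
    this.termAtt = (TermAttribute) addAttribute(TermAttribute.class);
  }

  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    char[] buffer = termAtt.termBuffer();
    int length = termAtt.termLength();
    for (int i = 0; i < length; i++) {
      buffer[i] = Character.toLowerCase(buffer[i]);
    }
    return true;
  }
}
{code}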

bq. Is the reflection penalty on the new TokenStream stuff incurred per root 
AttributeSource?, or per TokenFilter/TokenStream?

The reflection penalty is incurred only once per class (there is a static cache 
of "known" classes), so each AttributeImpl is inspected a single time, no matter 
how many new AttributeSources such as TokenStreams are created. There is an 
additional reflection cost when new attributes are added, but that too is paid 
only once per AttributeImpl class. Since the last changes to TokenStream, 
reflection is therefore no longer a penalty. The only remaining cost is the 
extra work of constructing a TokenStream (filling the LinkedHashMaps), which is 
why you should reuse TokenStream chains.
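
To make that last point concrete, a sketch of chain reuse (the wrapper class 
and its method are made up; WhitespaceTokenizer/LowerCaseFilter are just 
stand-ins for any chain): build the chain once, then point the Tokenizer at new 
input with reset(Reader) instead of rebuilding the chain and refilling its maps.

{code}
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class ReuseChainSketch {
  // Construct the chain once; the LinkedHashMaps behind the shared
  // AttributeSource are filled only here.
  private final WhitespaceTokenizer tokenizer =
      new WhitespaceTokenizer(new StringReader(""));
  private final TokenStream chain = new LowerCaseFilter(tokenizer);

  public int countTokens(Reader reader) throws IOException {
    tokenizer.reset(reader); // reuse the existing chain for the new input
    int count = 0;
    while (chain.incrementToken()) {
      count++;
    }
    chain.end();
    return count;
  }
}
{code}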

bq. that is, if i pass the same AttributeSource to 10 TokenStreams, is the 
reflection cost the same as if i passed it to just one?

No change - the cost is the same whether one stream or ten are built on that 
AttributeSource.
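
Put differently (the little test class is made up): however many streams you 
hang off one AttributeSource, the attribute instances and the per-class 
reflection are shared.

{code}
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.AttributeSource;

public class SharedSourceSketch {
  public static void main(String[] args) {
    AttributeSource shared = new AttributeSource();

    // The reflection over the TermAttribute implementation runs at most once
    // per JVM (static cache of known classes); further addAttribute() calls
    // only look the instance up in this source's LinkedHashMaps.
    TermAttribute first = (TermAttribute) shared.addAttribute(TermAttribute.class);
    TermAttribute again = (TermAttribute) shared.addAttribute(TermAttribute.class);

    // Every stream built on top of "shared" (e.g. via the proposed
    // StandardTokenizer(AttributeSource, ...) ctor below) sees these instances.
    System.out.println("same instance: " + (first == again)); // prints true
  }
}
{code}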

> All Tokenizer implementations should have constructors that take 
> AttributeSource and AttributeFactory
> -----------------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-1826
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1826
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>    Affects Versions: 2.9
>            Reporter: Tim Smith
>             Fix For: 2.9
>
>
> I have a TokenStream implementation that joins together multiple sub 
> TokenStreams (I then do additional filtering on top of this, so I can't just 
> have the indexer do the merging). In 2.4, this worked fine: once one sub 
> stream was exhausted, I just started using the next stream. In 2.9, however, 
> this is very difficult and requires copying Term buffers for every token 
> being aggregated.
> However, if all the sub TokenStreams share the same AttributeSource, and my 
> "concat" TokenStream shares the same AttributeSource, this goes back to being 
> very simple (and very efficient).
> So for example, I would like to see the following constructor added to 
> StandardTokenizer:
> {code}
>   public StandardTokenizer(AttributeSource source, Reader input,
>                            boolean replaceInvalidAcronym) {
>     super(source);
>     ...
>   }
> {code}
> I would likewise want similar constructors added to all Tokenizer subclasses 
> provided by Lucene.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

