[ https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16748123#comment-16748123 ]

Dan Meehl edited comment on LUCENE-8651 at 1/21/19 5:46 PM:
------------------------------------------------------------

My initial task was to wrap an incoming TokenStream with anchor tokens, so I 
tried to use ConcatenatingTokenStream with KeywordTokenizers as the anchors. My 
particular use case has a copyField directive on the field I'm wrapping. 
Because of the copyField, DefaultIndexingChain resets my TokenStream a second 
time, which in turn resets the underlying TokenStream(s). That second reset() 
on my KeywordTokenizer(s) causes an IllegalStateException to be thrown.
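
Roughly, the failing pattern looks like this (the anchor text and the 
stand-in body tokenizer are just for illustration):

{code:java}
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.ConcatenatingTokenStream;

public class ConcatResetDemo {
  public static void main(String[] args) throws IOException {
    Tokenizer startAnchor = new KeywordTokenizer();
    startAnchor.setReader(new StringReader("_START_"));
    Tokenizer body = new WhitespaceTokenizer(); // stand-in for the incoming stream
    body.setReader(new StringReader("some field text"));
    Tokenizer endAnchor = new KeywordTokenizer();
    endAnchor.setReader(new StringReader("_END_"));

    TokenStream stream = new ConcatenatingTokenStream(startAnchor, body, endAnchor);

    // First consumer (the original field) is fine.
    stream.reset();
    while (stream.incrementToken()) { /* index the tokens */ }
    stream.end();
    stream.close(); // Tokenizer.close() drops the underlying Readers

    // Second consumer (the copyField target) cascades reset() back into
    // the KeywordTokenizers, whose Readers are gone, so the next read
    // hits Tokenizer.ILLEGAL_STATE_READER.
    stream.reset();
    stream.incrementToken(); // throws IllegalStateException
  }
}
{code}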

It seems like ConcatenatingTokenStream should be able to work with any type of 
TokenStream; otherwise I would have expected it to take a list of TokenFilters 
rather than a list of TokenStreams. But if the resulting 
ConcatenatingTokenStream is going to be consumed more than once, as it is in a 
copyField scenario, it will fail whenever one of the underlying TokenStreams is 
a Tokenizer.

I'm not really sure what I'm proposing as a fix, to be honest. On one hand, it 
seems like TokenStreams were meant to be resettable. On the other, I understand 
your point about Tokenizers and Readers.

Possible Solutions:

Perhaps Tokenizers should store captured state to be reused later? That would 
allow Tokenizer to fulfill the reset() contract defined by TokenStream.
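
To make that concrete, here is a purely hypothetical sketch (the class name is 
invented, and offset/position handling is elided): a keyword-style Tokenizer 
that captures its token state on the first pass and replays it on later 
passes, so that close() dropping the Reader no longer matters:

{code:java}
// Hypothetical sketch only -- nothing like this exists in Lucene today.
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.AttributeSource;

public final class ReplayableKeywordTokenizer extends Tokenizer {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final List<AttributeSource.State> captured = new ArrayList<>();
  private Iterator<AttributeSource.State> replay;
  private boolean firstPassDone;
  private boolean emitted;

  @Override
  public boolean incrementToken() throws IOException {
    if (firstPassDone) {                  // later passes: replay captured state
      if (replay == null || !replay.hasNext()) {
        return false;
      }
      restoreState(replay.next());
      return true;
    }
    if (emitted) {
      return false;
    }
    clearAttributes();
    char[] buffer = new char[256];        // keyword behaviour: whole input is one token
    for (int len = input.read(buffer); len > 0; len = input.read(buffer)) {
      termAtt.append(new String(buffer, 0, len));
    }
    emitted = true;
    if (termAtt.length() == 0) {
      return false;                       // empty input produces no token
    }
    captured.add(captureState());         // remember the token for replay
    return true;
  }

  @Override
  public void end() throws IOException {
    super.end();
    firstPassDone = true;                 // the captured token list is now complete
  }

  @Override
  public void reset() throws IOException {
    if (firstPassDone) {
      replay = captured.iterator();       // rewind instead of re-reading the Reader
    } else {
      super.reset();
      emitted = false;
    }
  }
}
{code}

(This is essentially what the existing CachingTokenFilter does as a wrapper, 
just pushed down into the Tokenizer itself.)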

It looks like Tokenizers originating from the schema (WhitespaceTokenizer, 
StandardTokenizer, etc.) end up turned into Field$StringTokenStream by the time 
they reach the DefaultIndexingChain, which is why they don't fail with the same 
IllegalStateException. Maybe ConcatenatingTokenStream needs to convert its 
underlying Tokenizers into a non-Reader-based TokenStream?
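
For what it's worth, the existing CachingTokenFilter already does that kind of 
conversion as a wrapper: it records the tokenizer's output on the first pass 
and replays it from memory on every later reset(), so the Reader is never 
needed again. I haven't verified this in the copyField scenario, but reusing 
the names from the repro sketch above, it would look something like:

{code:java}
import org.apache.lucene.analysis.CachingTokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.ConcatenatingTokenStream;

// Reusing startAnchor/body/endAnchor from the repro sketch above. After
// the first full pass, reset() on a CachingTokenFilter just rewinds its
// in-memory cache instead of cascading down to the Reader-backed Tokenizer.
TokenStream stream = new ConcatenatingTokenStream(
    new CachingTokenFilter(startAnchor),
    new CachingTokenFilter(body),
    new CachingTokenFilter(endAnchor));
{code}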

 



> Tokenizer implementations can't be reset
> ----------------------------------------
>
>                 Key: LUCENE-8651
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8651
>             Project: Lucene - Core
>          Issue Type: Bug
>          Components: modules/analysis
>            Reporter: Dan Meehl
>            Priority: Major
>         Attachments: LUCENE-8650-2.patch, LUCENE-8651.patch
>
>
> The fine print here is that they can't be reset without calling setReader() 
> every time before reset() is called. The reason for this is that Tokenizer 
> violates the contract put forth by TokenStream.reset() which is the following:
> "Resets this stream to a clean state. Stateful implementations must implement 
> this method so that they can be reused, just as if they had been created 
> fresh."
> Tokenizer implementations' reset() functions can't reset in that manner 
> because Tokenizer.close() removes the reference to the underlying Reader 
> (because of LUCENE-2387). The catch-22 here is that we don't want to keep a 
> Reader around unnecessarily (memory leak), but we would like to be able to 
> reset() when necessary.
> The patches include an integration test that attempts to use a 
> ConcatenatingTokenStream to join an input TokenStream with a KeywordTokenizer 
> TokenStream. This test fails with an IllegalStateException thrown by 
> Tokenizer.ILLEGAL_STATE_READER.
>  


