[ https://issues.apache.org/jira/browse/LUCENE-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12653513#action_12653513 ]

Michael Busch commented on LUCENE-1448:
---------------------------------------

{quote}
Another option is to "define" the API such that when incrementToken()
returns false, then it has actually advanced to an "end-of-stream
token". OffsetAttribute.getEndOffset() should return the final
offset. Since we have not released the new API, we could simply make
this change (and fix all instances in the core/contrib that use the
new API accordingly). I think I like this option best.
{quote}

This adds some "cleaning up" responsibilities to all existing
TokenFilters out there. So far it is very straightforward to change an
existing TokenFilter to use the new API. You simply have to:
- add the attributes the filter needs in its constructor
- change next() to incrementToken(), returning false where next()
returned null and true otherwise (or whatever the input returns)
- set the data on the appropriate attributes instead of on a Token
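
For illustration, here is roughly what such a conversion looks like. This
is only a sketch of a made-up lowercasing filter; the attribute accessors
(termBuffer()/termLength()) are the ones the new API has today:

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Made-up lowercasing filter, converted following the three steps above.
public final class MyLowerCaseFilter extends TokenFilter {

  // step 1: get the attributes the filter needs in its constructor
  private final TermAttribute termAtt;

  public MyLowerCaseFilter(TokenStream input) {
    super(input);
    termAtt = (TermAttribute) addAttribute(TermAttribute.class);
  }

  // step 2: next() becomes incrementToken(); "return null" becomes
  // "return false", everything else returns true (or whatever input returns)
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // step 3: work on the attributes instead of on a Token
    final char[] buffer = termAtt.termBuffer();
    final int length = termAtt.termLength();
    for (int i = 0; i < length; i++) {
      buffer[i] = Character.toLowerCase(buffer[i]);
    }
    return true;
  }
}
{code}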

But maybe there's a custom filter at the end of the chain that returns
more tokens even after its input has returned the last one. For example a
SynonymExpansionFilter might return a synonym for the last word it
received from its input before it returns false. In this case it might
overwrite the endOffset that another filter/stream already set to the
final endOffset. It needs to cache that value and restore it when it
returns false.
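
To make the pitfall concrete, here is a rough sketch of such a filter.
The class and the synonym lookup are made up; the only point is the
caching and restoring of the final offset:

{code:java}
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

// Emits one extra token (a synonym for the last word) after its input has
// already returned false, so it has to preserve the final offset itself.
public final class SynonymExpansionFilter extends TokenFilter {

  private final TermAttribute termAtt;
  private final OffsetAttribute offsetAtt;

  private String lastTerm;              // last term seen from the input
  private int lastStart, lastEnd;       // its offsets
  private boolean inputDone, synonymEmitted;
  private int cachedFinalOffset;        // final offset the input reported

  public SynonymExpansionFilter(TokenStream input) {
    super(input);
    termAtt = (TermAttribute) addAttribute(TermAttribute.class);
    offsetAtt = (OffsetAttribute) addAttribute(OffsetAttribute.class);
  }

  public boolean incrementToken() throws IOException {
    if (!inputDone) {
      if (input.incrementToken()) {
        lastTerm = termAtt.term();
        lastStart = offsetAtt.startOffset();
        lastEnd = offsetAtt.endOffset();
        return true;
      }
      inputDone = true;
      // the input has just left the final end offset in the attribute;
      // cache it, because the extra token below will overwrite it
      cachedFinalOffset = offsetAtt.endOffset();
    }
    if (!synonymEmitted && lastTerm != null) {
      synonymEmitted = true;
      termAtt.setTermBuffer(lookupSynonym(lastTerm)); // made-up lookup
      offsetAtt.setOffset(lastStart, lastEnd);        // overwrites final offset
      return true;
    }
    // the extra "cleaning up" duty: restore the final offset before
    // signalling end-of-stream ourselves
    offsetAtt.setOffset(lastStart, cachedFinalOffset);
    return false;
  }

  private String lookupSynonym(String term) {
    return term; // placeholder; a real filter would consult a synonym map
  }
}
{code}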

Also, all filters that currently use an offset now need to know to
clean up before returning false.

I'm not saying this is necessarily bad. I also find this approach
tempting, because it's simple. But it might be a common source of
bugs?

What I'd like to work on soon is an efficient way to buffer attributes
(maybe add methods to Attribute that write into a ByteBuffer). Then
each attribute can decide which variables need to be serialized and
which don't. In that case we could add a finalOffset to
OffsetAttribute that does not get serialized/deserialized.
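
Very roughly, and purely hypothetical since none of these methods exist
yet (the usual attribute plumbing like clear()/copyTo() is omitted):

{code:java}
import java.nio.ByteBuffer;

// Hypothetical serialize()/deserialize() hooks; only the per-token state
// goes into the buffer, the proposed finalOffset stays out of it.
public class OffsetAttributeSketch {
  private int startOffset;
  private int endOffset;
  private int finalOffset;   // proposed addition; intentionally not serialized

  public void serialize(ByteBuffer out) {
    out.putInt(startOffset);
    out.putInt(endOffset);
  }

  public void deserialize(ByteBuffer in) {
    startOffset = in.getInt();
    endOffset = in.getInt();
    // finalOffset is stream-level state, so buffering and restoring
    // tokens never clobbers it
  }
}
{code}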

And possibly it might be worthwhile to have explicit states defined in
a TokenStream that we can enforce with three methods: start(),
increment(), end(). Then people would know that if they have to do
something at the end of a stream, they have to do it in end().
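
Written down as an interface sketch (the names are just the ones
suggested above; nothing like this exists in the API today):

{code:java}
import java.io.IOException;

// Proposed explicit lifecycle for a token stream.
interface StagedTokenStream {
  void start() throws IOException;        // called once before the first token
  boolean increment() throws IOException; // per token, like incrementToken() today
  void end() throws IOException;          // called once; final-offset work goes here
}

// A consumer would then always drive a stream the same way:
//
//   stream.start();
//   while (stream.increment()) { ... consume attributes ... }
//   stream.end();   // e.g. OffsetAttribute now holds the final offset
{code}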

> add getFinalOffset() to TokenStream
> -----------------------------------
>
>                 Key: LUCENE-1448
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1448
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Analysis
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>             Fix For: 2.9
>
>         Attachments: LUCENE-1448.patch, LUCENE-1448.patch, LUCENE-1448.patch, 
> LUCENE-1448.patch
>
>
> If you add multiple Fieldable instances for the same field name to a 
> document, and you then index those fields with TermVectors storing offsets, 
> it's very likely the offsets for all but the first field instance will be 
> wrong.
> This is because IndexWriter under the hood adds a cumulative base to the 
> offsets of each field instance, where that base is 1 + the endOffset of the 
> last token it saw when analyzing that field.
> But this logic is overly simplistic.  For example, if the WhitespaceAnalyzer 
> is being used, and the text being analyzed ended in 3 whitespace characters, 
> then that information is lost and the next field's offsets are then all 3 
> too small.  Similarly, if a StopFilter appears in the chain, and the last N 
> tokens were stop words, then the base will be 1 + the endOffset of the last 
> non-stopword token.
> To fix this, I'd like to add a new getFinalOffset() to TokenStream.  I'm 
> thinking by default it returns -1, which means "I don't know so you figure it 
> out", meaning we fall back to the faulty logic we have today.
> This has come up several times on the user's list.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

