[
https://issues.apache.org/jira/browse/LUCENE-2207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12801366#action_12801366
]
Robert Muir commented on LUCENE-2207:
-------------------------------------
Koji, sure, I can take care of it.
Also, I added LUCENE-2219 to help find these bugs in other tokenizers.
In the future I also want to explore whether we can somehow use a fake CharFilter in
BaseTokenStreamTest to ensure that correctOffset() is called when setting
offsets in both incrementToken() and end(); I don't yet know how that would work.
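
To make the idea concrete, here is a rough sketch. The class and method names
(ShiftCharFilter, assertOffsetsCorrected) are hypothetical, and the exact
CharFilter signatures differ between Lucene versions; this follows the
Reader-based CharFilter API. The filter passes characters through unchanged but
shifts every offset reported from correct(), so a tokenizer that sets raw
offsets without going through correctOffset() in incrementToken() or end()
trips the assertions.

import java.io.IOException;
import java.io.Reader;

import org.apache.lucene.analysis.CharFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class OffsetShiftCheck {

  /** Passes text through unchanged but shifts every reported offset by a fixed amount. */
  static final class ShiftCharFilter extends CharFilter {
    private final int shift;

    ShiftCharFilter(Reader in, int shift) {
      super(in);
      this.shift = shift;
    }

    @Override
    protected int correct(int currentOff) {
      // Any offset the tokenizer routes through correctOffset() gets the shift applied.
      return currentOff + shift;
    }

    @Override
    public int read(char[] cbuf, int off, int len) throws IOException {
      return input.read(cbuf, off, len);
    }
  }

  /**
   * Consumes the stream and checks that every offset reflects the shift.
   * Choose shift larger than textLength so any uncorrected (raw) offset,
   * which is at most textLength, is detectably too small.
   */
  static void assertOffsetsCorrected(TokenStream ts, int shift, int textLength) throws IOException {
    OffsetAttribute offsetAtt = ts.addAttribute(OffsetAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      if (offsetAtt.startOffset() < shift || offsetAtt.endOffset() < shift) {
        throw new AssertionError("offsets in incrementToken() not passed through correctOffset()");
      }
    }
    ts.end();
    if (offsetAtt.endOffset() != textLength + shift) {
      throw new AssertionError("final offset in end() not passed through correctOffset()");
    }
    ts.close();
  }
}

A test in BaseTokenStreamTest could then wrap the tokenizer's reader in a
ShiftCharFilter and call assertOffsetsCorrected(); a bug like the CJKTokenizer
one, where raw offsets are used directly, would fail immediately.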
> CJKTokenizer generates tokens with incorrect offsets
> ----------------------------------------------------
>
> Key: LUCENE-2207
> URL: https://issues.apache.org/jira/browse/LUCENE-2207
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/analyzers
> Reporter: Koji Sekiguchi
> Attachments: LUCENE-2207.patch, LUCENE-2207.patch, LUCENE-2207.patch,
> LUCENE-2207.patch, TestCJKOffset.java
>
>
> If I index a Japanese *multi-valued* document with CJKTokenizer and highlight
> a term with FastVectorHighlighter, the output snippets contain incorrectly
> highlighted strings. I'll attach a program that reproduces the problem soon.