[
https://issues.apache.org/jira/browse/LUCENE-2947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
David Byrne updated LUCENE-2947:
--------------------------------
Attachment: LUCENE-2947.patch
I've finished my first attempt at a patch. The code could benefit from a bit
of refactoring, but I wanted to make sure everyone agrees with the changes in
principle before refining it further. The test cases (hopefully) illustrate
most of the nuances, and a rough sketch of the intended normalization follows
the lists below.
Benefits:
- accepts strings of any length (not just 1024 chars)
- collapses consecutive whitespace characters
- takes custom sets of chars to be treated as whitespace
- "pads" the beginning and end of the string
- follows the same format as the PDF Robert linked above
Quirks (examples in the test cases):
- unigrams aren't "padded"; it just made the most sense that way
- because of the format, underscores will look identical to whitespace
- leading or trailing whitespace can result in weird-looking ngrams (e.g. "__")
- offset values for ngrams with collapsed whitespace can be unintuitive (but
consistent)
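To make the intended behavior concrete, here is a rough standalone sketch of the
normalization step described above. This is an illustration only, not the attached
patch; the '_' pad character and the example whitespace set are assumptions based
on the lists above.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustration only, not the attached patch: collapse runs of the caller-supplied
// whitespace characters into a single '_' and "pad" both ends of the string.
public class WhitespaceNormalizeSketch {

    static String normalize(String input, Set<Character> whitespaceChars) {
        StringBuilder sb = new StringBuilder("_");   // "pad" the beginning
        boolean lastWasWhitespace = false;
        for (char c : input.toCharArray()) {
            if (whitespaceChars.contains(c)) {
                if (!lastWasWhitespace) {
                    sb.append('_');                  // collapse a whitespace run into one '_'
                    lastWasWhitespace = true;
                }
            } else {
                sb.append(c);
                lastWasWhitespace = false;
            }
        }
        return sb.append('_').toString();            // "pad" the end
    }

    public static void main(String[] args) {
        Set<Character> ws = new HashSet<Character>(Arrays.asList(' ', '\t', '\n'));
        System.out.println(normalize("foo  bar", ws));   // _foo_bar_
        System.out.println(normalize(" foo bar ", ws));  // __foo_bar__ (the "__" quirk)
    }
}

Ngrams would then be generated over the normalized string, which is also why
underscores in the input become indistinguishable from collapsed whitespace.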
> NGramTokenizer shouldn't trim whitespace
> ----------------------------------------
>
> Key: LUCENE-2947
> URL: https://issues.apache.org/jira/browse/LUCENE-2947
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/analyzers
> Affects Versions: 3.0.3
> Reporter: David Byrne
> Priority: Minor
> Attachments: LUCENE-2947.patch, NGramTokenizerTest.java
>
>
> Before I tokenize my strings, I am padding them with whitespace:
> String foobar = " " + foo + " " + bar + " ";
> When constructing term vectors from ngrams, this strategy has a couple of
> benefits. First, it places special emphasis on the start and end of a
> word. Second, it improves the similarity between phrases with swapped words.
> " foo bar " matches " bar foo " more closely than "foo bar" matches "bar
> foo".
> The problem is that Lucene's NGramTokenizer trims whitespace. This forces me
> to do some preprocessing on my strings before I can tokenize them:
> foobar = foobar.replaceAll(" ", "\\$"); // arbitrary char not in my data
> This is undocumented, so users won't realize their strings are being
> trim()'ed unless they look through the source or examine the tokens
> manually.
> I am proposing NGramTokenizer should be changed to respect whitespace. Is
> there a compelling reason against this?
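For reference, a minimal self-contained sketch of the bigram-overlap effect
described in the report above. This is an illustration of the argument only, not
Lucene code; the class and method names are made up.

import java.util.HashSet;
import java.util.Set;

// Illustration of the claim above: padded strings share more character bigrams
// when the words are swapped than unpadded strings do.
public class PaddingOverlapDemo {

    // Collect the distinct character n-grams of the given size from a string.
    static Set<String> ngrams(String s, int n) {
        Set<String> grams = new HashSet<String>();
        for (int i = 0; i + n <= s.length(); i++) {
            grams.add(s.substring(i, i + n));
        }
        return grams;
    }

    // Count how many distinct n-grams two strings share.
    static int overlap(String a, String b, int n) {
        Set<String> shared = ngrams(a, n);
        shared.retainAll(ngrams(b, n));
        return shared.size();
    }

    public static void main(String[] args) {
        System.out.println(overlap(" foo bar ", " bar foo ", 2)); // 8 of 8 bigrams shared
        System.out.println(overlap("foo bar", "bar foo", 2));     // 4 of 6 bigrams shared
    }
}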