[
https://issues.apache.org/jira/browse/LUCENE-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
David Smiley updated LUCENE-2407:
---------------------------------
Fix Version/s: 4.8 (was: 4.7)
> make CharTokenizer.MAX_WORD_LEN parametrizable
> ----------------------------------------------
>
> Key: LUCENE-2407
> URL: https://issues.apache.org/jira/browse/LUCENE-2407
> Project: Lucene - Core
> Issue Type: Improvement
> Components: modules/analysis
> Affects Versions: 3.0.1
> Reporter: javi
> Priority: Minor
> Labels: dead
> Fix For: 4.8
>
>
> as discussed here
> http://n3.nabble.com/are-long-words-split-into-up-to-256-long-tokens-tp739914p739914.html
> it would be nice to be able to parameterize that value (the maximum token length, hard-coded as MAX_WORD_LEN = 255 in CharTokenizer).
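For illustration only, here is a minimal sketch of what a tokenizer with a configurable maximum token length could look like. BoundedLetterTokenizer is a hypothetical name, not a committed class; it uses a simplified char-at-a-time read rather than CharTokenizer's buffered loop, and it targets the 4.x Tokenizer API (Reader constructor). A sketch under those assumptions, not the actual patch:

{code:java}
import java.io.IOException;
import java.io.Reader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

/**
 * Hypothetical LetterTokenizer-style tokenizer whose maximum token
 * length is a constructor argument instead of the hard-coded
 * CharTokenizer.MAX_WORD_LEN (255). Sketch only.
 */
public final class BoundedLetterTokenizer extends Tokenizer {
  private final int maxTokenLen;  // configurable, replaces the fixed 255
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
  private int offset = 0;         // chars consumed from the reader so far

  public BoundedLetterTokenizer(Reader input, int maxTokenLen) {
    super(input);
    if (maxTokenLen < 1) {
      throw new IllegalArgumentException("maxTokenLen must be >= 1");
    }
    this.maxTokenLen = maxTokenLen;
  }

  @Override
  public boolean incrementToken() throws IOException {
    clearAttributes();
    int length = 0;
    int start = -1;
    char[] buffer = termAtt.buffer();
    int c;
    while ((c = input.read()) != -1) {
      offset++;
      // Same token-char test as the old LetterTokenizer; char-based,
      // so supplementary (non-BMP) characters are not handled here.
      if (Character.isLetter((char) c)) {
        if (length == 0) start = offset - 1;
        if (length >= buffer.length) buffer = termAtt.resizeBuffer(length + 1);
        buffer[length++] = (char) c;
        // Split here instead of at the fixed MAX_WORD_LEN; the rest of
        // a long word is emitted as the next token, as CharTokenizer does.
        if (length >= maxTokenLen) break;
      } else if (length > 0) {
        break;  // non-letter ends the current token
      }
    }
    if (length == 0) return false;  // reader exhausted, no token pending
    termAtt.setLength(length);
    offsetAtt.setOffset(correctOffset(start), correctOffset(start + length));
    return true;
  }

  @Override
  public void end() throws IOException {
    super.end();
    int finalOffset = correctOffset(offset);
    offsetAtt.setOffset(finalOffset, finalOffset);
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    offset = 0;
  }
}
{code}

Usage would be e.g. {{new BoundedLetterTokenizer(new StringReader(text), 1024)}} to allow tokens up to 1024 chars. For what it's worth, this was eventually addressed upstream: LUCENE-7705 (fixed in 7.3) added a maxTokenLen constructor argument to the CharTokenizer-derived tokenizers.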