[ https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated LUCENE-7705:
-----------------------------------
    Attachment: LUCENE-7705.patch

Patch that fixes up a few comments, regularizes maxChars* to maxToken*, and the 
like. I enhanced a test to cover tokens longer than 256 characters.

There was a problem with LowerCaseTokenizerFactory: its getMultiTermComponent 
method constructed a LowerCaseFilterFactory with the _original_ arguments, 
including maxTokenLen, which then threw an error. There's a nocommit in there 
for the nonce; what's the right thing to do here?

[~amrit sarkar] Do you have any ideas for a more elegant solution? The nocommit 
is there because this feels just too hacky, but it does prove that this is 
the problem.

It seems like we should close SOLR-10186 and just make the code changes here. 
With this patch I successfully tested adding fields with tokens both longer and 
shorter than 256 characters, so I don't think there's anything beyond this patch 
to do in Solr. I suppose we could add maxTokenLen to some of the test schemas 
just to exercise it (which would have caught the LowerCaseTokenizerFactory problem).
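For the schema side, exercising the option could be as simple as a fieldType along these lines (a hypothetical snippet; the attribute name follows the maxTokenLen discussed in this patch, and the fieldType name is made up):

```xml
<!-- Hypothetical Solr test-schema fragment: a whitespace-tokenized field
     whose tokenizer accepts tokens up to 1024 characters instead of 256. -->
<fieldType name="text_long_ws" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory" maxTokenLen="1024"/>
  </analyzer>
</fieldType>
```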

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> ---------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7705
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7705
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Amrit Sarkar
>            Priority: Minor
>         Attachments: LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? Changing this limit requires people to copy/paste 
> the incrementToken logic into some new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
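The constructor change proposed above might look roughly like the following standalone sketch (illustrative names only, not Lucene's actual API): a default no-arg constructor keeps the current 256 limit, while an overload lets factories pass a configured maxTokenLen, and any run longer than the limit is emitted as multiple tokens, mimicking a tokenizer whose buffer fills:

```java
import java.util.ArrayList;
import java.util.List;

public class MaxLenTokenizerSketch {
    public static final int DEFAULT_MAX_TOKEN_LEN = 256;
    private final int maxTokenLen;

    // Existing behavior: hard-coded 256-character limit.
    public MaxLenTokenizerSketch() {
        this(DEFAULT_MAX_TOKEN_LEN);
    }

    // Proposed addition: a c'tor the factories can call with a configured limit.
    public MaxLenTokenizerSketch(int maxTokenLen) {
        if (maxTokenLen <= 0) {
            throw new IllegalArgumentException("maxTokenLen must be positive: " + maxTokenLen);
        }
        this.maxTokenLen = maxTokenLen;
    }

    // Whitespace-split, then break any token longer than maxTokenLen into
    // maxTokenLen-sized pieces, mimicking a tokenizer that emits a token
    // whenever its buffer fills.
    public List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        for (String t : input.split("\\s+")) {
            for (int i = 0; i < t.length(); i += maxTokenLen) {
                tokens.add(t.substring(i, Math.min(t.length(), i + maxTokenLen)));
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        MaxLenTokenizerSketch tk = new MaxLenTokenizerSketch(4);
        System.out.println(tk.tokenize("alpha beta")); // prints [alph, a, beta]
    }
}
```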



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
