[
https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879971#comment-15879971
]
Amrit Sarkar edited comment on SOLR-10186 at 2/23/17 6:35 AM:
--------------------------------------------------------------
Erick,
If we specify the correct tag in the schema, get(..) and getInt(..) will remove
the desired tuple from the arguments map, and the _(!args.isEmpty())_ check is
for unknown parameters only.
{code:java}
maxCharLen = getInt(args, "maxCharLen", KeywordTokenizer.DEFAULT_BUFFER_SIZE);

protected final int getInt(Map<String,String> args, String name, int defaultVal) {
  String s = args.remove(name);
  return s == null ? defaultVal : Integer.parseInt(s);
}
{code}
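To make the remove-then-check contract concrete, here is a minimal, self-contained sketch (class and constant names are hypothetical, not from the Solr source) showing that getInt(..) consumes the known parameter from the map, so anything left behind for the _(!args.isEmpty())_ check must be an unknown parameter:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class MaxCharLenArgsDemo {
    // Mirrors KeywordTokenizer.DEFAULT_BUFFER_SIZE for illustration.
    static final int DEFAULT_BUFFER_SIZE = 256;

    // Same contract as the factory's getInt(..): remove the tuple, fall back to the default.
    static int getInt(Map<String, String> args, String name, int defaultVal) {
        String s = args.remove(name);
        return s == null ? defaultVal : Integer.parseInt(s);
    }

    public static void main(String[] argv) {
        Map<String, String> args = new HashMap<>();
        args.put("maxCharLen", "1024");   // known parameter from the schema
        args.put("bogusParam", "x");      // unknown parameter a user might typo

        int maxCharLen = getInt(args, "maxCharLen", DEFAULT_BUFFER_SIZE);
        System.out.println(maxCharLen);               // 1024
        System.out.println(args.containsKey("maxCharLen")); // false: tuple was consumed
        System.out.println(args.isEmpty());           // false: bogusParam remains, so
                                                      // an !args.isEmpty() check would fire
    }
}
{code}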
I will write tests for this too. I am opening a JIRA under Lucene; let me know
which of the two issues should host the discussion.
> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the
> max token length
> ---------------------------------------------------------------------------------------------
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Erick Erickson
> Priority: Minor
> Attachments: SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256 character limit for the
> CharTokenizer? In order to change this limit it requires that people
> copy/paste the incrementToken into some new class since incrementToken is
> final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer-derived classes
> and their factories (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and
> LetterTokenizer) it would take adding a c'tor to the base class in Lucene and
> using it in the factory.
> Any objections?
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]