[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884831#comment-15884831
 ] 

Amrit Sarkar commented on LUCENE-7705:
--------------------------------------

Erick,

For every tokenizer init, two parameters are already included in their 
arguments as below:
{noformat}
{class=solr.LowerCaseTokenizerFactory, luceneMatchVersion=7.0.0}
{noformat}

which AbstractAnalysisFactory consumes when it is instantiated:
{noformat}
originalArgs = Collections.unmodifiableMap(new HashMap<>(args));
System.out.println("orgs:: " + originalArgs);  // debug output
String version = get(args, LUCENE_MATCH_VERSION_PARAM);
if (version == null) {
  luceneMatchVersion = Version.LATEST;
} else {
  try {
    luceneMatchVersion = Version.parseLeniently(version);
  } catch (ParseException pe) {
    throw new IllegalArgumentException(pe);
  }
}
args.remove(CLASS_NAME);  // consume the class arg
{noformat}
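To see the consumption pattern concretely, here is a minimal self-contained sketch (plain java.util only, not the real AbstractAnalysisFactory) of how a factory snapshots its init args and then pulls out the well-known keys:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of the args-consumption pattern: snapshot the map, consume the
// well-known keys, leave anything unknown behind for later validation.
// This is an illustration, not Lucene's actual class.
public class ArgsSketch {
    static final String CLASS_NAME = "class";
    static final String LUCENE_MATCH_VERSION_PARAM = "luceneMatchVersion";

    final Map<String, String> originalArgs;
    final String luceneMatchVersion;

    ArgsSketch(Map<String, String> args) {
        // Immutable snapshot of the original arguments.
        originalArgs = Collections.unmodifiableMap(new HashMap<>(args));
        // Consume luceneMatchVersion; fall back to "LATEST" if absent.
        String version = args.remove(LUCENE_MATCH_VERSION_PARAM);
        luceneMatchVersion = (version == null) ? "LATEST" : version;
        args.remove(CLASS_NAME); // consume the class arg
    }

    public static void main(String[] unused) {
        Map<String, String> args = new HashMap<>();
        args.put(CLASS_NAME, "solr.LowerCaseTokenizerFactory");
        args.put(LUCENE_MATCH_VERSION_PARAM, "7.0.0");
        ArgsSketch f = new ArgsSketch(args);
        System.out.println("version=" + f.luceneMatchVersion); // prints version=7.0.0
        System.out.println("leftover=" + args.size());         // prints leftover=0
    }
}
```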
The _class_ parameter is consumed and discarded, so we don't have to worry about it. The factory does look up _luceneMatchVersion_, which serves as a sanity check on versions; I'm not sure anything important happens inside Version::parseLeniently(version). If we can confirm that, we can pass an empty map there.
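For reference, here is a rough stdlib sketch of the kind of lenient parsing that Version::parseLeniently appears to do, namely accepting both dotted and legacy underscore spellings. The real class is org.apache.lucene.util.Version; the names and exact normalization below are illustrative assumptions:

```java
import java.text.ParseException;

// Illustrative stand-in for org.apache.lucene.util.Version, sketching lenient
// parsing of "7.0.0" and legacy "LUCENE_7_0_0" spellings into numeric parts.
public class LenientVersion {
    final int major, minor, bugfix;

    LenientVersion(int major, int minor, int bugfix) {
        this.major = major;
        this.minor = minor;
        this.bugfix = bugfix;
    }

    static LenientVersion parseLeniently(String version) throws ParseException {
        // Normalize legacy "LUCENE_7_0_0" to "7.0.0" before splitting.
        String v = version.toUpperCase().replaceFirst("^LUCENE_", "").replace('_', '.');
        String[] parts = v.split("\\.");
        try {
            int major = Integer.parseInt(parts[0]);
            int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
            int bugfix = parts.length > 2 ? Integer.parseInt(parts[2]) : 0;
            return new LenientVersion(major, minor, bugfix);
        } catch (NumberFormatException e) {
            throw new ParseException("failed to parse version: " + version, 0);
        }
    }

    public static void main(String[] args) throws ParseException {
        LenientVersion v = parseLeniently("7.0.0");
        System.out.println(v.major + "." + v.minor + "." + v.bugfix); // prints 7.0.0
    }
}
```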

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> ---------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7705
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7705
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Amrit Sarkar
>            Assignee: Erick Erickson
>            Priority: Minor
>         Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
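The quoted proposal could look roughly like the sketch below: a CharTokenizer-style whitespace splitter whose max token length is a constructor argument instead of a hard-coded 256. Class and method names here are hypothetical illustrations, not Lucene's actual API; note that CharTokenizer flushes its buffer when full, so over-long tokens are split rather than truncated, and the sketch mirrors that.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: whitespace tokenization with a configurable
// maxTokenLen, emulating CharTokenizer's flush-when-full behavior.
public class MaxLenWhitespaceSplitter {
    private final int maxTokenLen;

    public MaxLenWhitespaceSplitter(int maxTokenLen) {
        this.maxTokenLen = maxTokenLen;
    }

    public List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder buf = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isWhitespace(c)) {
                if (buf.length() > 0) {
                    tokens.add(buf.toString());
                    buf.setLength(0);
                }
            } else {
                buf.append(c);
                // Buffer full: flush, splitting the over-long token.
                if (buf.length() == maxTokenLen) {
                    tokens.add(buf.toString());
                    buf.setLength(0);
                }
            }
        }
        if (buf.length() > 0) tokens.add(buf.toString());
        return tokens;
    }

    public static void main(String[] args) {
        MaxLenWhitespaceSplitter t = new MaxLenWhitespaceSplitter(3);
        System.out.println(t.tokenize("foobar baz")); // prints [foo, bar, baz]
    }
}
```

Making maxTokenLen a constructor argument is what would let the factories expose it as a schema attribute instead of forcing users to copy incrementToken into a new class.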



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
