Robert Muir created LUCENE-6913:
-----------------------------------

             Summary: Standard/Classic/UAX tokenizers could be more RAM efficient
                 Key: LUCENE-6913
                 URL: https://issues.apache.org/jira/browse/LUCENE-6913
             Project: Lucene - Core
          Issue Type: Improvement
            Reporter: Robert Muir


These tokenizers map codepoints to character classes with the following data 
structure (loaded in clinit):

{noformat}
  private static char [] zzUnpackCMap(String packed) {
    char [] map = new char[0x110000];
{noformat}

This requires about 2MB of RAM for each tokenizer class (roughly 6MB in trunk if 
all 3 classes are loaded, and roughly 10MB in branch_5x, which has 2 additional 
backwards-compatibility classes).
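
For reference, the numbers work out roughly as follows (assuming 2 bytes per 
{{char}} and ignoring array object headers):

{noformat}
  0x110000 codepoints * 2 bytes/char = 2,228,224 bytes ~ 2.1MB per tokenizer class
  3 classes (trunk)                                    ~ 6.4MB
  5 classes (branch_5x)                                ~ 10.6MB
{noformat}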

On the other hand, none of our tokenizers actually uses a huge number of 
character classes, so {{char}} is overkill: this map can safely be a 
{{byte[]}}, saving half the memory. Perhaps it could make these tokenizers 
faster too.
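
A minimal sketch of what the unpack method could look like with a {{byte[]}} 
(hypothetical method name, not the actual JFlex output; assumes the 
run-length-encoded packed-string format stays the same and that every 
character class id fits in an unsigned byte, which holds since these 
tokenizers define far fewer than 256 classes):

{noformat}
  // hypothetical byte-based variant of zzUnpackCMap
  private static byte [] zzUnpackCMapAsBytes(String packed) {
    byte [] map = new byte[0x110000];   // one entry per Unicode codepoint, ~1MB instead of ~2MB
    int i = 0;  /* index into packed string */
    int j = 0;  /* index into unpacked map  */
    while (i < packed.length()) {
      int  count = packed.charAt(i++);          // length of the run
      byte value = (byte) packed.charAt(i++);   // character class for the run
      do map[j++] = value; while (--count > 0);
    }
    return map;
  }
{noformat}

Lookups would then read {{map[codePoint] & 0xff}} so class ids above 127 are 
treated as unsigned.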




