[ https://issues.apache.org/jira/browse/SOLR-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Koji Sekiguchi updated SOLR-822:
--------------------------------
Attachment: SOLR-822.patch
bq. I think I found a bug... the correctPosition() returns incorrect position.
I'm working on that...
The attached patch fixes the problem. It also includes:
- some unit tests
- Javadoc for CharStream, CharReader and CharFilter
- rename correctPosition() to correctOffset() and make it final in CharFilter:
{code:java}
public final int correctOffset(int currentOff) {
  // Chain corrections: apply this filter's own correction first,
  // then let the wrapped CharStream correct the result.
  return input.correctOffset(correctPosition(currentOff));
}

protected int correctPosition(int pos) {
  // Default: no correction; subclasses override this.
  return pos;
}
{code}
correctOffset() then calls correctPosition(), which subclasses of CharFilter can override to correct the position (see the sketch after this list).
- rename MappingCJKTokenizer to CharStreamAwareCJKTokenizer
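For illustration only (not part of the patch), here is a minimal sketch of such a subclass. It assumes CharFilter takes a CharStream in its constructor and delegates read() to the wrapped stream; the class name and offset shift are invented:
{code:java}
// Hypothetical example: a filter that has removed "skipped" leading characters
// must shift positions back so offsets point into the original stream.
public class SkipPrefixCharFilter extends CharFilter {
  private final int skipped;

  public SkipPrefixCharFilter(CharStream in, int skipped) {
    super(in);  // assumes a CharFilter(CharStream) constructor
    this.skipped = skipped;
  }

  @Override
  protected int correctPosition(int pos) {
    // Map a position in the filtered stream back to the original stream.
    return pos + skipped;
  }
}
{code}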
TODO:
# support \uNNNN style in mapping.txt
# add StopCharFilter
> CharFilter - normalize characters before tokenizer
> --------------------------------------------------
>
> Key: SOLR-822
> URL: https://issues.apache.org/jira/browse/SOLR-822
> Project: Solr
> Issue Type: New Feature
> Components: Analysis
> Reporter: Koji Sekiguchi
> Priority: Minor
> Attachments: character-normalization.JPG, sample_mapping_ja.txt,
> SOLR-822.patch, SOLR-822.patch, SOLR-822.patch
>
>
> A new plugin which can be placed in front of <tokenizer/>.
> {code:xml}
> <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_ja.txt"/>
>     <tokenizer class="solr.MappingCJKTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> {code}
> Multiple <charFilter/> elements can be chained, as sketched below. I'll post a
> JPEG file soon to show a sample of character normalization.
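> For example, two mapping filters could be chained (the second mapping file
> name here is just for illustration):
> {code:xml}
> <analyzer>
>   <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_ja.txt"/>
>   <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_extra.txt"/>
>   <tokenizer class="solr.MappingCJKTokenizerFactory"/>
> </analyzer>
> {code}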
> MOTIVATION:
> In Japan, two types of tokenizers are used: N-gram (CJKTokenizer) and
> morphological analyzers.
> When we use a morphological analyzer, we need to normalize characters before
> tokenization, because the analyzer uses a Japanese dictionary to detect terms.
> I'll post a patch soon, too.