[
https://issues.apache.org/jira/browse/LUCENE-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240314#comment-14240314
]
Steve Rowe commented on LUCENE-6103:
------------------------------------
Cool info about Swedish.
0. The beauty of implementing a standard is that once you've done that, making
tweaks to suit particular constituencies isn't necessary. StandardTokenizer
implements the UAX#29 word break rules. Done. (There's a quick sketch of that
behavior below this list.)
1. If you'd like to create tailored tokenizers for each individual language,
please go ahead.
2. See #0.
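To make #0 concrete, here's a rough sketch against the 4.9 API (class name and
comments are mine, untested) showing why {{word:word}} survives as a single
token: UAX#29 classifies a colon between letters as MidLetter, which is the
Swedish usage mentioned above.
{code:java}
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ShowColonBehavior {
  public static void main(String[] args) throws IOException {
    // StandardTokenizer follows the UAX#29 word break rules: a colon between
    // two letters is MidLetter, so "word:word" is kept as one token.
    StandardTokenizer tok = new StandardTokenizer(Version.LUCENE_4_9,
        new StringReader("word word:word"));
    CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
    tok.reset();
    while (tok.incrementToken()) {
      System.out.println(term.toString()); // prints "word", then "word:word"
    }
    tok.end();
    tok.close();
  }
}
{code}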
One other technique you may find useful: put a char filter in front of your
tokenizer to rewrite problematic chars, e.g.
[{{PatternReplaceCharFilter}}|http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceCharFilter.html],
with a pattern like {{(\p\{L\}):(\p\{L\})}} and the replacement {{$1 $2}}.
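Wired into an Analyzer, that would look roughly like this (again a sketch, the
class name is mine, untested):
{code:java}
import java.io.Reader;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.pattern.PatternReplaceCharFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

public class ColonSplittingAnalyzer extends Analyzer {
  // Matches a colon sandwiched between two letters.
  private static final Pattern LETTER_COLON_LETTER =
      Pattern.compile("(\\p{L}):(\\p{L})");

  @Override
  protected Reader initReader(String fieldName, Reader reader) {
    // Rewrite "word:word" to "word word" before the tokenizer sees it;
    // the replacement has the same length, so offsets are unaffected.
    return new PatternReplaceCharFilter(LETTER_COLON_LETTER, "$1 $2", reader);
  }

  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    return new TokenStreamComponents(new StandardTokenizer(Version.LUCENE_4_9, reader));
  }
}
{code}
One caveat: because the match consumes the letter on both sides, {{a:b:c}} only
gets its first colon replaced. A lookahead like {{(\p\{L\}):(?=\p\{L\})}} with
replacement {{$1 }} (note the trailing space) avoids that.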
> StandardTokenizer doesn't tokenize word:word
> --------------------------------------------
>
> Key: LUCENE-6103
> URL: https://issues.apache.org/jira/browse/LUCENE-6103
> Project: Lucene - Core
> Issue Type: Bug
> Components: modules/analysis
> Affects Versions: 4.9
> Reporter: Itamar Syn-Hershko
> Assignee: Steve Rowe
>
> StandardTokenizer (and, as a result, most default analyzers) will not tokenize
> word:word and will instead preserve it as one token. This can easily be seen using
> Elasticsearch's analyze API:
> localhost:9200/_analyze?tokenizer=standard&text=word%20word:word
> If this is the intended behavior, then why? I can't really see the logic
> behind it.
> If not, I'll be happy to join in the effort of fixing this.