[ 
https://issues.apache.org/jira/browse/OPENNLP-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17795772#comment-17795772
 ] 

ASF GitHub Bot commented on OPENNLP-1525:
-----------------------------------------

rzo1 merged PR #562:
URL: https://github.com/apache/opennlp/pull/562




> Improve TokenizerME to make use of abbreviations provided in TokenizerModel
> ---------------------------------------------------------------------------
>
>                 Key: OPENNLP-1525
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-1525
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Tokenizer
>    Affects Versions: 2.3.1
>            Reporter: Martin Wiesner
>            Assignee: Martin Wiesner
>            Priority: Minor
>             Fix For: 2.3.2
>
>   Original Estimate: 2h
>          Time Spent: 1h 25m
>  Remaining Estimate: 35m
>
> While working on OPENNLP-1479 and reviewing a PR by [~l-ma], we identified 
> that {{TokenizerME}} doesn't make use of the locale-/language-specific 
> abbreviations provided via the corresponding {{TokenizerModel}}. 
> As a result, abbreviated terms get mis-tokenized: the German token "S.", 
> an abbreviated form of "Seite" (-> page), should be tokenized as ["S."], 
> but {{TokenizerME}} incorrectly yields ["S", "."].
> Improvement suggested:
> - Make use of the abbreviations dictionary provided by the {{TokenizerModel}}
> - Adapt the idea suggested and implemented in OPENNLP-570 
> ({{SentenceDetectorME}}) for {{TokenizerME}}
> - Adjust the {{TokenizerFactoryTest}} method testCustomPatternForTokenizerMEDeu() 
> for German abbreviations, analogous to the sent-detector test case. It should 
> then expect 14 tokens instead of 16 - there is a TODO here.
>  
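The core of the suggested improvement, keeping a trailing period attached to a token when the combined form is a known abbreviation, can be sketched with plain Java collections. The mergeAbbrevs helper below is hypothetical and only illustrates the idea; the actual fix would live inside TokenizerME and consult the abbreviation Dictionary carried by the TokenizerModel:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class AbbrevMerge {

    // Merge a standalone "." back into the preceding token when the
    // combined form appears in the abbreviation set. Hypothetical helper,
    // not the actual OpenNLP implementation.
    static List<String> mergeAbbrevs(List<String> tokens, Set<String> abbrevs) {
        List<String> out = new ArrayList<>();
        for (String tok : tokens) {
            int last = out.size() - 1;
            if (tok.equals(".") && last >= 0 && abbrevs.contains(out.get(last) + ".")) {
                // "S" + "." is the known abbreviation "S.", so rejoin it.
                out.set(last, out.get(last) + ".");
            } else {
                out.add(tok);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "S." abbreviates German "Seite" (page) and should stay one token.
        List<String> raw = Arrays.asList("S", ".", "42");
        System.out.println(mergeAbbrevs(raw, Set.of("S."))); // prints [S., 42]
    }
}
```

With an abbreviation-aware pass like this, the German test sentence would yield 14 tokens instead of 16, matching the expectation described above.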



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
