[ https://issues.apache.org/jira/browse/OPENNLP-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17795395#comment-17795395 ]

ASF GitHub Bot commented on OPENNLP-1525:
-----------------------------------------

mawiesne commented on code in PR #562:
URL: https://github.com/apache/opennlp/pull/562#discussion_r1422554680


##########
opennlp-tools/src/main/java/opennlp/tools/tokenize/TokenizerME.java:
##########
@@ -258,4 +274,24 @@ public boolean useAlphaNumericOptimization() {
     return useAlphaNumericOptimization;
   }
 
+  /**
+   * Allows checking a token abbreviation candidate for acceptability.
+   *
+   * <p>Note: The implementation always returns {@code false} if no
+   * abbreviation dictionary is available for the underlying model.</p>
+   *
+   * @param s the {@link CharSequence} in which the break occurred.
+   * @return {@code true} if the candidate is acceptable, {@code false} otherwise.
+   */
+  protected boolean isAcceptableAbbreviation(CharSequence s) {
+    if (abbDict == null)
+      return false;
+
+    for (String abb : abbDict.asStringSet()) {

Review Comment:
   Thx @jzonthemtn for the pointer. It can be used to keep the code cleaner. 
Will push an improved version.
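
(Note: the loop body of isAcceptableAbbreviation(..) is cut off in the excerpt
above. For context only, here is a minimal sketch of one possible completion,
assuming a candidate is acceptable when it case-insensitively matches an entry
of the model's abbreviation dictionary; the actual logic in PR #562 may differ.)

  protected boolean isAcceptableAbbreviation(CharSequence s) {
    if (abbDict == null)
      return false;

    // Hypothetical completion: accept the candidate if it equals a known
    // abbreviation from the dictionary, ignoring case, e.g. "S." for "Seite".
    String candidate = s.toString();
    for (String abb : abbDict.asStringSet()) {
      if (abb.equalsIgnoreCase(candidate)) {
        return true;
      }
    }
    return false;
  }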





> Improve TokenizerME to make use of abbreviations provided in TokenizerModel
> ---------------------------------------------------------------------------
>
>                 Key: OPENNLP-1525
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-1525
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Tokenizer
>    Affects Versions: 2.3.1
>            Reporter: Martin Wiesner
>            Assignee: Martin Wiesner
>            Priority: Minor
>             Fix For: 2.3.2
>
>   Original Estimate: 2h
>          Time Spent: 1h 25m
>  Remaining Estimate: 35m
>
> While working on OPENNLP-1479 and reviewing a PR by [~l-ma], we identified 
> that {{TokenizerME}} doesn't make use of locale-/language-specific 
> abbreviations provided via the corresponding {{TokenizerModel}}. 
> Consequently, abbreviated terms get mis-tokenized, such as the German token 
> "S.", an abbreviated form of "Seite" (-> page). Instead of being tokenized 
> as ["S."], {{TokenizerME}} incorrectly yields ["S", "."].
> Improvement suggested:
> - Make use of the abbreviations dictionary provided by the {{TokenizerModel}}
> - Adapt the idea suggested and implemented in OPENNLP-570 
> ({{SentenceDetectorME}}) for {{TokenizerME}}
> - Adjust the {{TokenizerFactoryTest}} method testCustomPatternForTokenizerMEDeu() 
> for German abbreviations (see the sent-detector test case). It should then 
> expect 14 tokens instead of 16, so there is a TODO here; a factory usage 
> sketch follows below this description.
>  
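
(For illustration only: a minimal sketch of how an abbreviation dictionary is
handed to the TokenizerModel via a TokenizerFactory. The language code, pattern,
and dictionary entries are example values, not taken from the issue.)

import java.util.regex.Pattern;

import opennlp.tools.dictionary.Dictionary;
import opennlp.tools.tokenize.TokenizerFactory;
import opennlp.tools.util.StringList;

public class AbbreviationDictExample {

  public static void main(String[] args) {
    // Example abbreviation dictionary holding the German "S." ("Seite" -> page).
    Dictionary abbreviations = new Dictionary();
    abbreviations.put(new StringList("S."));

    // The factory carries the dictionary into the TokenizerModel at training
    // time; the proposed isAcceptableAbbreviation(..) check would read it
    // from there.
    TokenizerFactory factory = new TokenizerFactory("deu", abbreviations,
        false, Pattern.compile("^[A-Za-z0-9]+$"));

    System.out.println("Known abbreviations: "
        + factory.getAbbreviationDictionary().asStringSet());
  }
}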



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
