[
https://issues.apache.org/jira/browse/SPARK-18374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15652076#comment-15652076
]
Sean Owen commented on SPARK-18374:
-----------------------------------
It's a fair point indeed: it would be much better to fail to remove "won't" than to
incorrectly remove "won", rare as that might be. Right now, simply deleting those
words from the list would cause a different problem because, I presume, you'd then
find "won" and "t" among your tokens. (Worth testing?) If you can see an easy way to
improve the tokenization, that could become the topic of this issue.
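As a quick way to test that presumption, here is a minimal sketch (spark-shell style,
paste-able). It assumes Spark 2.0.x and a RegexTokenizer configured to split on
non-word characters, i.e. a tokenizer that discards apostrophes; the sample sentence
and parameter choices are illustrative only:

{code:scala}
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.{RegexTokenizer, StopWordsRemover}

val spark = SparkSession.builder().master("local[*]").appName("stopwords-check").getOrCreate()
import spark.implicits._

val df = Seq("I won the race but I won't race again").toDF("text")

// Split on any non-word character, so "won't" becomes the two tokens "won" and "t".
val tokenizer = new RegexTokenizer()
  .setInputCol("text")
  .setOutputCol("tokens")
  .setPattern("\\W+")

// Uses the default English stop word list shipped as english.txt.
val remover = new StopWordsRemover()
  .setInputCol("tokens")
  .setOutputCol("filtered")

remover.transform(tokenizer.transform(df)).select("tokens", "filtered").show(false)
// With the current list, the fragments produced from "won't" are filtered out,
// but so is the legitimate "won" from "I won the race". Simply deleting "won"
// from english.txt would keep that valid token, at the cost of leaving the
// "won" (and possibly "t") produced from "won't" in the output.
{code}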
> Incorrect words in StopWords/english.txt
> ----------------------------------------
>
> Key: SPARK-18374
> URL: https://issues.apache.org/jira/browse/SPARK-18374
> Project: Spark
> Issue Type: Bug
> Components: ML
> Affects Versions: 2.0.1
> Reporter: nirav patel
>
> I was just double-checking english.txt for the list of stop words, as I felt it was
> taking out valid tokens like 'won'. I think the issue is that the english.txt list is
> missing the apostrophe character and everything after it. So "won't" became "won" in
> that list, and "wouldn't" became "wouldn".
> Here are some incorrect tokens in this list:
> won
> wouldn
> ma
> mightn
> mustn
> needn
> shan
> shouldn
> wasn
> weren
> I think the ideal list should include both styles, i.e. both "won't" and "wont" should
> be part of english.txt, since some tokenizers might remove special characters. But
> 'won' obviously shouldn't be in this list.
> Here's the list of Snowball English stop words:
> http://snowball.tartarus.org/algorithms/english/stop.txt
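> A quick way to confirm which of these fragments actually ship in the default list
> (a sketch, assuming Spark 2.0.x and its StopWordsRemover.loadDefaultStopWords helper;
> the chosen words are just the ones reported above plus two full contractions):
> {code:scala}
> import org.apache.spark.ml.feature.StopWordsRemover
>
> // Load the stop words Spark ships for English (backed by english.txt).
> val english = StopWordsRemover.loadDefaultStopWords("english").toSet
>
> // The apostrophe-less fragments reported above, plus the full contractions.
> val words = Seq("won", "wouldn", "ma", "mightn", "mustn", "needn",
>                 "shan", "shouldn", "wasn", "weren", "won't", "wouldn't")
> words.foreach(w => println(f"$w%-10s in list: ${english.contains(w)}"))
> // If the full contractions are absent, an apostrophe-preserving tokenizer
> // will pass "won't" etc. straight through the remover.
> {code}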