[
https://issues.apache.org/jira/browse/SPARK-18374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15662015#comment-15662015
]
yuhao yang commented on SPARK-18374:
------------------------------------
With the default behavior of the _Tokenizer_ and _RegexTokenizer_, I think it's
more reasonable to include words like _won't_ and _haven't_ directly in the stop
words list, as in the list at http://www.ranks.nl/stopwords.
More specifically, if a user is using the default _Tokenizer_ or
_RegexTokenizer_ in spark.ml without customization, then _weren_ and _wasn_ in
the current stop words list are useless, whereas _weren't_ and _wasn't_ would be
helpful. The default behavior of ml transformers should be consistent and
effective.
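To illustrate the point, here is a minimal Python sketch that approximates what the default _Tokenizer_ does (lowercase, then split on whitespace); the function and variable names are hypothetical, and this is not the actual Spark implementation:

```python
# Sketch approximating spark.ml's default Tokenizer behavior:
# lowercase the input, then split on whitespace.
def simple_tokenize(text):
    return text.lower().split()

# Two entries from the current english.txt stop words list.
stop_words = {"weren", "wasn"}

tokens = simple_tokenize("They weren't ready and it wasn't easy")

# The apostrophes survive whitespace tokenization, so the
# truncated entries "weren" / "wasn" never match any token:
filtered = [t for t in tokens if t not in stop_words]
```

Since the tokens come out as "weren't" and "wasn't", the truncated stop words filter nothing; only the full contractions in the stop list would.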
> Incorrect words in StopWords/english.txt
> ----------------------------------------
>
> Key: SPARK-18374
> URL: https://issues.apache.org/jira/browse/SPARK-18374
> Project: Spark
> Issue Type: Bug
> Components: ML
> Affects Versions: 2.0.1
> Reporter: nirav patel
>
> I was just double-checking english.txt's list of stopwords because I felt it
> was removing valid tokens like 'won'. I think the issue is that the english.txt
> list is missing the apostrophe character and all characters after it. So
> "won't" became "won" in that list, and "wouldn't" became "wouldn".
> Here are some incorrect tokens in this list:
> won
> wouldn
> ma
> mightn
> mustn
> needn
> shan
> shouldn
> wasn
> weren
> I think the ideal list should have both styles, i.e. both won't and wont
> should be part of english.txt, since some tokenizers might remove special
> characters. But 'won' obviously shouldn't be in this list.
> Here's list of snowball english stop words:
> http://snowball.tartarus.org/algorithms/english/stop.txt
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)