[ https://issues.apache.org/jira/browse/LUCENE-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simon Willnauer resolved LUCENE-2238.
-------------------------------------

    Resolution: Fixed

Committed in revision 904521.

Thanks, Robert.

> deprecate ChineseAnalyzer
> -------------------------
>
>                 Key: LUCENE-2238
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2238
>             Project: Lucene - Java
>          Issue Type: Task
>          Components: contrib/analyzers
>            Reporter: Robert Muir
>            Assignee: Simon Willnauer
>            Priority: Minor
>             Fix For: 3.1
>
>         Attachments: LUCENE-2238.patch
>
>
> The ChineseAnalyzer, ChineseTokenizer, and ChineseFilter (not the smart one, 
> or CJK) index Chinese text as individual characters and remove English 
> stopwords, etc.
> In my opinion we should simply deprecate all of these in favor of 
> StandardAnalyzer, StandardTokenizer, and StopFilter, which do the same 
> thing.
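
For reference, a minimal sketch of the proposed replacement, assuming the
Lucene 3.1-era API; the class name and exact wiring below are illustrative
and are not taken from the attached patch:

    // Illustrative analyzer built from StandardTokenizer + StopFilter that
    // approximates ChineseTokenizer + ChineseFilter: one token per Chinese
    // character, with common English stopwords removed.
    // Signatures assume Lucene 3.1.
    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.StopAnalyzer;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardTokenizer;
    import org.apache.lucene.util.Version;

    public final class CharByCharAnalyzer extends Analyzer {
      @Override
      public TokenStream tokenStream(String fieldName, Reader reader) {
        // StandardTokenizer emits each Chinese character as its own token.
        TokenStream stream = new StandardTokenizer(Version.LUCENE_31, reader);
        // StopFilter drops English stopwords, as ChineseFilter did.
        return new StopFilter(Version.LUCENE_31, stream,
            StopAnalyzer.ENGLISH_STOP_WORDS_SET);
      }
    }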

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
