[ https://issues.apache.org/jira/browse/LUCENE-1793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12741241#action_12741241 ]
Robert Muir commented on LUCENE-1793:
-------------------------------------

DM, you are right, this is a better discussion for another issue/place. I was concerned that we would be taking functionality away, but this is not the case; as Uwe says, it is only "strange".

I just looked at all of these encodings: they all store characters in the extended ASCII range (> 0x7F). Therefore, anyone using this strange encoding support is already using 2 bytes per character. For example, someone using CP1251 with the Russian analyzer is simply storing Ж as the byte 0xC6, which is represented as Æ (2 bytes in UTF-8). A quick standalone check after the quoted issue description below illustrates the byte counts.

So, by deprecating these encodings in favor of Unicode, nobody's index size will double.

> remove custom encoding support in Greek/Russian Analyzers
> ---------------------------------------------------------
>
>         Key: LUCENE-1793
>         URL: https://issues.apache.org/jira/browse/LUCENE-1793
>     Project: Lucene - Java
>  Issue Type: Improvement
>  Components: contrib/analyzers
>    Reporter: Robert Muir
>    Priority: Minor
> Attachments: LUCENE-1793.patch
>
>
> The Greek and Russian analyzers support custom encodings such as KOI-8; they define things like lowercasing and tokenization for these encodings.
> I think that analyzers should support Unicode, and that conversion/handling of other charsets belongs somewhere else.
> I would like to deprecate/remove the support for these other encodings.
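To make the size argument concrete, here is a minimal standalone Java sketch. It is not part of the attached patch, and the class name and printed labels are just for illustration; it shows that the CP1251 byte for Ж, when misread as a single-byte Latin charset, still costs 2 bytes once stored as UTF-8, the same as storing Ж correctly.

import java.nio.charset.Charset;

public class EncodingSizeCheck {
    public static void main(String[] args) {
        Charset cp1251 = Charset.forName("windows-1251");
        Charset latin1 = Charset.forName("ISO-8859-1");
        Charset utf8   = Charset.forName("UTF-8");

        // In CP1251 the Cyrillic letter Ж is the single byte 0xC6.
        byte[] raw = "Ж".getBytes(cp1251);
        System.out.printf("CP1251 bytes for Ж: %d (0x%02X)%n", raw.length, raw[0] & 0xFF);

        // Misreading that byte as Latin-1 yields Æ, which is what ends up stored.
        String misread = new String(raw, latin1);
        System.out.println("0xC6 misread as Latin-1: " + misread);

        // Stored as UTF-8, the misread Æ and the correct Ж are both 2 bytes.
        System.out.println("UTF-8 bytes for " + misread + ": " + misread.getBytes(utf8).length);
        System.out.println("UTF-8 bytes for Ж: " + "Ж".getBytes(utf8).length);
    }
}

Either way the term costs 2 bytes per character in the index, so moving the analyzers to Unicode-only input does not grow anyone's index.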