[ https://issues.apache.org/jira/browse/LUCENE-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12709867#action_12709867 ]
Xiaoping Gao commented on LUCENE-1629:
--------------------------------------

Hello Mingfai!

coredict.mem is converted from coredict.dct, which comes from ICTCLAS1.0, neither ICTCLAS2008 nor ICTCLAS2009. The author authorized me to release only the lexical dictionary from ICTCLAS1.0 under APLv2; he did not authorize the dictionaries of ICTCLAS2008~2009.

As far as I know, coredict.dct contains only GB2312 characters, so it cannot support Big5 (a quick way to check coverage is sketched below). I think we should find a proper Big5 dictionary first; then I will help you convert it to a .dct file.

On May 15, 2009 6:20pm, "Mingfai Ma (JIRA)" <j...@apache.org> wrote:

> contrib intelligent Analyzer for Chinese
> ----------------------------------------
>
>                 Key: LUCENE-1629
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1629
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: contrib/analyzers
>    Affects Versions: 2.4.1
>        Environment: for java 1.5 or higher, lucene 2.4.1
>           Reporter: Xiaoping Gao
>           Assignee: Michael McCandless
>            Fix For: 2.9
>
>         Attachments: analysis-data.zip, bigramdict.mem,
> build-resources-with-folder.patch, build-resources.patch,
> build-resources.patch, coredict.mem, LUCENE-1629-encoding-fix.patch,
> LUCENE-1629-java1.4.patch
>
>
> I wrote an Analyzer for Apache Lucene for analyzing sentences in the
> Chinese language. It's called "imdict-chinese-analyzer"; the project on
> Google Code is here: http://code.google.com/p/imdict-chinese-analyzer/
>
> In Chinese, "我是中国人" (I am Chinese) should be tokenized as "我" (I)
> "是" (am) "中国人" (Chinese), not "我" "是中" "国人". So the analyzer must
> handle each sentence properly, or there will be misunderstandings
> everywhere in the index constructed by Lucene, and the accuracy of the
> search engine will be seriously affected!
>
> Although there are two analyzer packages in the Apache repository that
> can handle Chinese, ChineseAnalyzer and CJKAnalyzer, they take each
> character or every two adjoining characters as a single word. This is
> obviously not true in reality; this strategy also increases the index
> size and hurts performance badly.
>
> The algorithm of imdict-chinese-analyzer is based on the Hidden Markov
> Model (HMM), so it can tokenize Chinese sentences in a really
> intelligent way. The tokenization accuracy of this model is above 90%
> according to the paper "HHMM-based Chinese Lexical Analyzer ICTCLAS",
> while other analyzers' accuracy is about 60%.
>
> As imdict-chinese-analyzer is really fast and intelligent, I want to
> contribute it to the Apache Lucene repository.
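On the GB2312/Big5 question: a minimal sketch of how one could test whether a dictionary entry is representable in GB2312 (and therefore whether a Big5-only traditional form falls outside coredict.dct's coverage), using only the standard java.nio CharsetEncoder API. The sample words are illustrative, not taken from the dictionary:

    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;

    public class Gb2312Coverage {
        public static void main(String[] args) {
            // canEncode() reports whether the text is representable
            // in the target encoding, here GB2312 (simplified Chinese).
            CharsetEncoder gb2312 = Charset.forName("GB2312").newEncoder();

            String simplified  = "中国人"; // simplified forms, inside GB2312
            String traditional = "中國人"; // traditional "國" is not in GB2312

            System.out.println(gb2312.canEncode(simplified));   // true
            System.out.println(gb2312.canEncode(traditional));  // false
        }
    }

Running a Big5 word list through such a check would show how many entries a GB2312-only dictionary cannot represent at all, which is why a separate Big5 dictionary is needed rather than a conversion of coredict.dct.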
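For reference, a minimal sketch of tokenizing the example sentence from the issue description, assuming the contribution is used under the class name it was committed to contrib/analyzers as (org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer) and the Lucene 2.9 attribute API:

    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.TermAttribute;

    public class SmartCnDemo {
        public static void main(String[] args) throws Exception {
            // true = use the analyzer's built-in stop-word list
            Analyzer analyzer = new SmartChineseAnalyzer(true);
            TokenStream ts =
                analyzer.tokenStream("text", new StringReader("我是中国人"));
            TermAttribute term = ts.addAttribute(TermAttribute.class);
            while (ts.incrementToken()) {
                System.out.println(term.term()); // expect: 我 / 是 / 中国人
            }
        }
    }

The point of the example is the expected segmentation: the HMM-based analyzer should emit "我", "是", "中国人" as three words, where a character- or bigram-based analyzer would emit per-character or per-bigram tokens.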