[
https://issues.apache.org/jira/browse/LUCENE-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uwe Schindler updated LUCENE-1629:
----------------------------------
Attachment: build-resources.patch
Hi Mike,
here is a patch that adds a Maven-like resources directory. It patches the
build script in two ways:
- The junit test classpath is extended to include src/resources
- The jarify macro is changed to also add src/resources to the jar file
So all resource files must be put into the corresponding subdirectory under
src/resources. The patch contains this change for the stopword.txt file of the
Arabic analyzer. The data files should be removed from src/java.
The cn analyzer's stopwords must be put into the top-level cn directory, and
the mem files into cn/smart/hhmm (it took me some time to figure this out).
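To make the mechanics explicit, here is a minimal sketch (not code from the
patch; the class name is made up) of how such a file is resolved on the
classpath. The lookup path must match the directory layout under
src/resources, which is why the placement matters:

  import java.io.IOException;
  import java.io.InputStream;

  public class ResourceCheck {
    public static void main(String[] args) throws IOException {
      // Resolves against the classpath, so this works in tests once
      // src/resources is on the junit classpath, and at runtime once
      // jarify has packed the same tree into the jar.
      InputStream in = ResourceCheck.class.getClassLoader()
          .getResourceAsStream("cn/smart/hhmm/coredict.mem");
      if (in == null) {
        throw new IOException("coredict.mem not found on the classpath");
      }
      in.close();
    }
  }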
The patch also includes some src/resources directory additions. For the
compilation to work, every src/ directory now needs at least an empty
resources folder; I found no way to make the jarify macro work without this.
If somebody has an idea, that would be good.
> contrib intelligent Analyzer for Chinese
> ----------------------------------------
>
> Key: LUCENE-1629
> URL: https://issues.apache.org/jira/browse/LUCENE-1629
> Project: Lucene - Java
> Issue Type: Improvement
> Components: contrib/analyzers
> Affects Versions: 2.4.1
> Environment: for java 1.5 or higher, lucene 2.4.1
> Reporter: Xiaoping Gao
> Assignee: Michael McCandless
> Fix For: 2.9
>
> Attachments: analysis-data.zip, bigramdict.mem,
> build-resources.patch, coredict.mem, LUCENE-1629-java1.4.patch
>
>
> I wrote an Analyzer for Apache Lucene for analyzing sentences in the Chinese
> language. It is called "imdict-chinese-analyzer"; the project on Google Code
> is here: http://code.google.com/p/imdict-chinese-analyzer/
> In Chinese, "我是中国人" (I am Chinese) should be tokenized as "我" (I) "是" (am)
> "中国人" (Chinese), not "我" "是中" "国人". So the analyzer must segment each
> sentence properly, or there will be misunderstandings everywhere in the index
> constructed by Lucene, and the accuracy of the search engine will be
> seriously affected!
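> To illustrate the expected behavior, here is a hedged sketch (the analyzer
> class name and constructor are assumptions about how the contribution might
> be packaged, not confirmed API):
>
> import java.io.StringReader;
> import org.apache.lucene.analysis.Token;
> import org.apache.lucene.analysis.TokenStream;
> import org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer;
>
> public class SegmentDemo {
>   public static void main(String[] args) throws Exception {
>     // false = no stop-word filtering, so every token is printed
>     SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer(false);
>     TokenStream ts = analyzer.tokenStream("text", new StringReader("我是中国人"));
>     // expected: 我 / 是 / 中国人, not 我 / 是中 / 国人
>     for (Token t = ts.next(new Token()); t != null; t = ts.next(t)) {
>       System.out.println(t.term());
>     }
>   }
> }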
> Although there are two analyzer packages in the Apache repository which can
> handle Chinese, ChineseAnalyzer and CJKAnalyzer, they take each single
> character or every two adjoining characters as a word. This is obviously not
> true to the language, and this strategy also increases the index size and
> hurts performance badly.
> The algorithm of imdict-chinese-analyzer is based on the Hidden Markov Model
> (HMM), so it can tokenize Chinese sentences in a really intelligent way.
> The tokenization accuracy of this model is above 90% according to the paper
> "HHMM-based Chinese Lexical Analyzer ICTCLAS", while other analyzers' is
> about 60%.
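> To give a feel for the general technique (a toy sketch only, with a made-up
> unigram dictionary; the real analyzer uses a hidden Markov model over a word
> lattice, which this does not reproduce), a dynamic program can pick the most
> probable segmentation instead of blindly cutting every one or two characters:
>
> import java.util.*;
>
> public class ToySegmenter {
>   // Toy word probabilities; real systems learn these from a corpus.
>   static final Map<String, Double> DICT = new HashMap<String, Double>();
>   static {
>     DICT.put("我", 0.1); DICT.put("是", 0.1); DICT.put("中国人", 0.05);
>     DICT.put("是中", 1e-6); DICT.put("国人", 1e-6);
>   }
>
>   // Viterbi-style search for the highest-probability segmentation;
>   // assumes the whole input can be covered by dictionary words.
>   static List<String> segment(String s) {
>     int n = s.length();
>     double[] best = new double[n + 1]; // best log-probability up to position i
>     int[] back = new int[n + 1];       // start offset of the last word
>     Arrays.fill(best, Double.NEGATIVE_INFINITY);
>     best[0] = 0.0;
>     for (int end = 1; end <= n; end++) {
>       for (int start = Math.max(0, end - 4); start < end; start++) {
>         Double p = DICT.get(s.substring(start, end));
>         if (p != null && best[start] + Math.log(p) > best[end]) {
>           best[end] = best[start] + Math.log(p);
>           back[end] = start;
>         }
>       }
>     }
>     LinkedList<String> words = new LinkedList<String>();
>     for (int i = n; i > 0; i = back[i]) {
>       words.addFirst(s.substring(back[i], i));
>     }
>     return words;
>   }
>
>   public static void main(String[] args) {
>     System.out.println(segment("我是中国人")); // prints [我, 是, 中国人]
>   }
> }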
> As imdict-chinese-analyzer is really fast and intelligent, I want to
> contribute it to the Apache Lucene repository.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]