Github user takuti commented on a diff in the pull request:
https://github.com/apache/incubator-hivemall/pull/91#discussion_r125117516
--- Diff: docs/gitbook/misc/tokenizer.md ---
@@ -46,4 +46,25 @@ select
tokenize_ja("kuromojiを使った分かち書きのテストです。第二引数にはnormal/search/extendedを指定できます。デフォルトではnormalモードです。");
```
>
["kuromoji","使う","分かち書き","テスト","第","二","引数","normal","search","extended","指定","デフォルト","normal","モード"]
-For detailed APIs, please refer Javadoc of [JapaneseAnalyzer](https://lucene.apache.org/core/5_3_1/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseAnalyzer.html) as well.
\ No newline at end of file
+For detailed APIs, please refer to the Javadoc of [JapaneseAnalyzer](https://lucene.apache.org/core/5_3_1/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseAnalyzer.html) as well.
+
+# Tokenizer for Chinese Texts
+
+The Hivemall-NLP module provides a Chinese text tokenizer UDF based on [SmartChineseAnalyzer](http://lucene.apache.org/core/5_3_1/analyzers-smartcn/org/apache/lucene/analysis/cn/smart/SmartChineseAnalyzer.html).
+
+> add jar /tmp/[hivemall-nlp-xxx-with-dependencies.jar](https://github.com/myui/hivemall/releases);
--- End diff ---
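For readers skimming the thread, a minimal usage sketch of the proposed `tokenize_cn` UDF, assuming it mirrors the call style of `tokenize_ja` shown in the diff above (the input sentence is illustrative, and the exact token output is deliberately not shown since it is not verified here):

```sql
-- Hypothetical call to the new Chinese tokenizer UDF, assuming the same
-- single-argument style as tokenize_ja; input sentence is illustrative only.
select tokenize_cn("我是中国人");
```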
The `add jar` and `source` instructions are the same as for the Japanese tokenizer. If possible, you could organize the page as:
---
# Tokenizer for Non-English Texts
(explain `add jar` and `source` stuff)
## Japanese Tokenizer
`tokenize_ja`
## Chinese Tokenizer
`tokenize_cn`
---
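Under that layout, the shared setup could be stated once at the top of the page; a rough sketch, where the jar path follows the placeholder in the diff and the `source` script name is hypothetical (not verified against the actual docs):

```sql
-- Shared setup for all non-English tokenizers.
-- The version placeholder "xxx" is from the original diff; the script name
-- below is a hypothetical example of the DDL that defines the UDFs.
add jar /tmp/hivemall-nlp-xxx-with-dependencies.jar;
-- source /tmp/define-udfs.hive;  -- defines tokenize_ja, tokenize_cn, etc.
```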