This is an automated email from the ASF dual-hosted git repository.
myui pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hivemall-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2c39ba6 Fixed a broken link
2c39ba6 is described below
commit 2c39ba6c75505f37e87f0507674abb2cc9e7fafc
Author: Makoto Yui <[email protected]>
AuthorDate: Fri May 14 12:37:38 2021 +0900
Fixed a broken link
---
userguide/misc/tokenizer.html | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/userguide/misc/tokenizer.html b/userguide/misc/tokenizer.html
index 2c8e824..88fa901 100644
--- a/userguide/misc/tokenizer.html
+++ b/userguide/misc/tokenizer.html
@@ -2538,7 +2538,7 @@ select tokenize_ja_neologd();
</code></pre>
<p>For detailed APIs, please refer Javadoc of <a href="https://lucene.apache.org/core/5_3_1/analyzers-smartcn/org/apache/lucene/analysis/cn/smart/SmartChineseAnalyzer.html" target="_blank">SmartChineseAnalyzer</a> as well.</p>
<h2 id="korean-tokenizer">Korean Tokenizer</h2>
-<p>Korean toknizer internally uses <a href="analyzers-nori: Korean Morphological Analyzer" target="_blank">lucene-analyzers-nori</a> for tokenization.</p>
+<p>Korean toknizer internally uses <a href="https://www.slideshare.net/elasticsearch/nori-the-official-elasticsearch-plugin-for-korean-language-analysis" target="_blank">lucene-analyzers-nori</a> for tokenization.</p>
<p>The signature of the UDF is as follows:</p>
<pre><code class="lang-sql">tokenize_ko(
String line [, const string mode = "discard" (or const string opts),
@@ -2742,7 +2742,7 @@ Apache Hivemall is an effort undergoing incubation at The Apache Software Founda
<script>
var gitbook = gitbook || [];
gitbook.push(function() {
- gitbook.page.hasChanged({"page":{"title":"Text Tokenizer","level":"2.3","depth":1,"next":{"title":"Approximate Aggregate Functions","level":"2.4","depth":1,"path":"misc/approx.md","ref":"misc/approx.md","articles":[]},"previous":{"title":"Efficient Top-K Query Processing","level":"2.2","depth":1,"path":"misc/topk.md","ref":"misc/topk.md","articles":[]},"dir":"ltr"},"config":{"plugins":["theme-api","edit-link","github","splitter","etoc","callouts","toggle-chapters","anchorjs",
[...]
+ gitbook.page.hasChanged({"page":{"title":"Text Tokenizer","level":"2.3","depth":1,"next":{"title":"Approximate Aggregate Functions","level":"2.4","depth":1,"path":"misc/approx.md","ref":"misc/approx.md","articles":[]},"previous":{"title":"Efficient Top-K Query Processing","level":"2.2","depth":1,"path":"misc/topk.md","ref":"misc/topk.md","articles":[]},"dir":"ltr"},"config":{"plugins":["theme-api","edit-link","github","splitter","etoc","callouts","toggle-chapters","anchorjs",
[...]
});
</script>
</div>
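
The patched page documents the `tokenize_ko` UDF whose signature is partially visible in the diff. For context, a minimal usage sketch is below; it assumes a Hive session where Hivemall and its nori-based Korean tokenizer are installed and registered, and the sample sentence is purely illustrative.

```sql
-- Illustrative sketch (assumes Hivemall's tokenize_ko UDF is registered):
-- tokenize a Korean sentence with the default "discard" mode.
select tokenize_ko('아파치 하이브몰은 머신러닝 라이브러리입니다.');
```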