This is an automated email from the ASF dual-hosted git repository.

yiguolei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 4a30200f94a [doc](inverted index) add ignore_pinyin_offset (#3173)
4a30200f94a is described below

commit 4a30200f94af6238eb67285ad4e5549e9a932505
Author: Ryan19929 <[email protected]>
AuthorDate: Sat Dec 13 10:15:25 2025 +0800

    [doc](inverted index) add ignore_pinyin_offset (#3173)
    
    ## Versions
    Add an `ignore_pinyin_offset` config to the pinyin tokenizer/filter.
    - [x] dev
    - [x] 4.x
    - [ ] 3.x
    - [ ] 2.1
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/ai/text-search/custom-analyzer.md                                   | 1 +
 .../version-4.x/ai/text-search/custom-analyzer.md                        | 1 +
 versioned_docs/version-4.x/ai/text-search/custom-analyzer.md             | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/ai/text-search/custom-analyzer.md b/docs/ai/text-search/custom-analyzer.md
index 39523b41c7e..d70b1f48d90 100644
--- a/docs/ai/text-search/custom-analyzer.md
+++ b/docs/ai/text-search/custom-analyzer.md
@@ -67,6 +67,7 @@ Available tokenizers:
  - `lowercase`: Lowercases non-Chinese letters. Default: true
  - `trim_whitespace`: Default: true
  - `remove_duplicated_term`: When enabled, removes duplicated terms to save index space. For example, `de的` becomes `de`. Default: false. Note: Position-related queries may be affected
+  - `ignore_pinyin_offset`: This parameter currently has no effect. Default: true
 
 #### 3. Creating a token_filter
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ai/text-search/custom-analyzer.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ai/text-search/custom-analyzer.md
index 529fb993fde..2867122ed90 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ai/text-search/custom-analyzer.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ai/text-search/custom-analyzer.md
@@ -69,6 +69,7 @@ PROPERTIES (
  - `lowercase`: Converts non-Chinese letters to lowercase. Default: true
  - `trim_whitespace`: Default: true
  - `remove_duplicated_term`: When enabled, removes duplicated terms to save index space. For example, `de的` becomes `de`. Default: false. Note: Position-related queries may be affected
+  - `ignore_pinyin_offset`: This parameter currently has no effect. Default: true
 
 #### 3. token_filter (token filter)
 
diff --git a/versioned_docs/version-4.x/ai/text-search/custom-analyzer.md b/versioned_docs/version-4.x/ai/text-search/custom-analyzer.md
index 9351d7d11c9..c32d8bfde59 100644
--- a/versioned_docs/version-4.x/ai/text-search/custom-analyzer.md
+++ b/versioned_docs/version-4.x/ai/text-search/custom-analyzer.md
@@ -62,6 +62,7 @@ Available tokenizers:
  - `lowercase`: Lowercases non-Chinese letters. Default: true
  - `trim_whitespace`: Default: true
  - `remove_duplicated_term`: When enabled, removes duplicated terms to save index space. For example, `de的` becomes `de`. Default: false. Note: Position-related queries may be affected
+  - `ignore_pinyin_offset`: This parameter currently has no effect. Default: true
 
 #### 3. Creating a token_filter
 

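For context, here is a minimal sketch of how the documented property could be set when defining a custom pinyin tokenizer. It follows the PROPERTIES ( ... ) shape visible in the hunk context above; the CREATE INVERTED INDEX TOKENIZER statement, the tokenizer name, and the "type" key are illustrative assumptions, not taken from this commit:

    -- Hypothetical sketch: the statement form, tokenizer name, and "type"
    -- key are assumptions for illustration; only ignore_pinyin_offset comes
    -- from this commit's docs (currently has no effect, default true).
    CREATE INVERTED INDEX TOKENIZER IF NOT EXISTS my_pinyin_tokenizer
    PROPERTIES (
        "type" = "pinyin",
        "ignore_pinyin_offset" = "true"
    );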

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
