This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch update-doc
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git

commit 708a6df1955e229a75cbfabe0a1dc99764dcd508
Author: imbajin <[email protected]>
AuthorDate: Sun Feb 1 00:03:48 2026 +0800

    AI: Add HugeGraph-AI docs and quickstart updates
    
    Add comprehensive HugeGraph-AI documentation and update quickstart content. 
New files added (Chinese & English): config-reference.md (full configuration 
reference), hugegraph-ml.md (HugeGraph-ML overview, algorithms and examples), 
and rest-api.md (REST API reference including RAG and Text2Gremlin endpoints). 
Updated pages: _index.md (feature list and v1.5.0 highlights such as 
Text2Gremlin, multi-model vectors, bilingual prompts, LiteLLM support, enhanced 
rerankers), hugegraph-llm.md ( [...]
---
 content/cn/docs/quickstart/hugegraph-ai/_index.md  |  31 +-
 .../quickstart/hugegraph-ai/config-reference.md    | 396 +++++++++++++++++++
 .../docs/quickstart/hugegraph-ai/hugegraph-llm.md  |  77 +++-
 .../docs/quickstart/hugegraph-ai/hugegraph-ml.md   | 289 ++++++++++++++
 .../cn/docs/quickstart/hugegraph-ai/quick_start.md |  60 +++
 .../cn/docs/quickstart/hugegraph-ai/rest-api.md    | 428 +++++++++++++++++++++
 content/en/docs/quickstart/hugegraph-ai/_index.md  |  31 +-
 .../quickstart/hugegraph-ai/config-reference.md    | 396 +++++++++++++++++++
 .../docs/quickstart/hugegraph-ai/hugegraph-llm.md  |  75 +++-
 .../docs/quickstart/hugegraph-ai/hugegraph-ml.md   | 289 ++++++++++++++
 .../en/docs/quickstart/hugegraph-ai/quick_start.md |  60 +++
 .../en/docs/quickstart/hugegraph-ai/rest-api.md    | 428 +++++++++++++++++++++
 12 files changed, 2539 insertions(+), 21 deletions(-)

diff --git a/content/cn/docs/quickstart/hugegraph-ai/_index.md b/content/cn/docs/quickstart/hugegraph-ai/_index.md
index 330c9314..01906f0d 100644
--- a/content/cn/docs/quickstart/hugegraph-ai/_index.md
+++ b/content/cn/docs/quickstart/hugegraph-ai/_index.md
@@ -18,20 +18,31 @@ weight: 3
 ## ✨ 核心功能
 
 - **GraphRAG**:利用图增强检索构建智能问答系统
+- **Text2Gremlin**:自然语言到图查询的转换,支持 REST API
 - **知识图谱构建**:使用大语言模型从文本自动构建图谱
-- **图机器学习**:集成 20 多种图学习算法(GCN、GAT、GraphSAGE 等)
+- **图机器学习**:集成 21 种图学习算法(GCN、GAT、GraphSAGE 等)
 - **Python 客户端**:易于使用的 HugeGraph Python 操作接口
 - **AI 智能体**:提供智能图分析与推理能力
 
+### 🎉 v1.5.0 新特性
+
+- **Text2Gremlin REST API**:通过 REST 端点将自然语言查询转换为 Gremlin 命令
+- **多模型向量支持**:每个图实例可以使用独立的嵌入模型
+- **双语提示支持**:支持英文和中文提示词切换(EN/CN)
+- **半自动 Schema 生成**:从文本数据智能推断 Schema
+- **半自动 Prompt 生成**:上下文感知的提示词模板
+- **增强的 Reranker 支持**:集成 Cohere 和 SiliconFlow 重排序器
+- **LiteLLM 多供应商支持**:统一接口支持 OpenAI、Anthropic、Gemini 等
+
 ## 🚀 快速开始
 
 > [!NOTE]
 > 如需完整的部署指南和详细示例,请参阅 
 > [hugegraph-llm/README.md](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-llm/README.md)。
 
 ### 环境要求
-- Python 3.9+(建议 hugegraph-llm 使用 3.10+)
-- [uv](https://docs.astral.sh/uv/)(推荐的包管理器)
-- HugeGraph Server 1.3+(建议 1.5+)
+- Python 3.10+(hugegraph-llm 必需)
+- [uv](https://docs.astral.sh/uv/) 0.7+(推荐的包管理器)
+- HugeGraph Server 1.5+(必需)
 - Docker(可选,用于容器化部署)
 
 ### 方案一:Docker 部署(推荐)
@@ -123,11 +134,13 @@ from pyhugegraph.client import PyHugeClient
 - **AI 智能体**:智能图分析与推理
 
 ### [hugegraph-ml](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-ml)
-包含 20+ 算法的图机器学习:
-- **节点分类**:GCN、GAT、GraphSAGE、APPNP 等
-- **图分类**:DiffPool、P-GNN 等
-- **图嵌入**:DeepWalk、Node2Vec、GRACE 等
-- **链接预测**:SEAL、GATNE 等
+包含 21 种算法的图机器学习:
+- **节点分类**:GCN、GAT、GraphSAGE、APPNP、AGNN、ARMA、DAGNN、DeeperGCN、GRAND、JKNet、Cluster-GCN
+- **图分类**:DiffPool、GIN
+- **图嵌入**:DGI、BGRL、GRACE
+- **链接预测**:SEAL、P-GNN、GATNE
+- **欺诈检测**:CARE-GNN、BGNN
+- **后处理**:C&S(Correct & Smooth)
 
 ### [hugegraph-python-client](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-python-client)
 用于 HugeGraph 操作的 Python 客户端:
diff --git a/content/cn/docs/quickstart/hugegraph-ai/config-reference.md b/content/cn/docs/quickstart/hugegraph-ai/config-reference.md
new file mode 100644
index 00000000..4172ae12
--- /dev/null
+++ b/content/cn/docs/quickstart/hugegraph-ai/config-reference.md
@@ -0,0 +1,396 @@
+---
+title: "配置参考"
+linkTitle: "配置参考"
+weight: 4
+---
+
+本文档提供 HugeGraph-LLM 所有配置选项的完整参考。
+
+## 配置文件
+
+- **环境文件**:`.env`(从模板创建或自动生成)
+- **提示词配置**:`src/hugegraph_llm/resources/demo/config_prompt.yaml`
+
+> [!TIP]
+> 运行 `python -m hugegraph_llm.config.generate --update` 可自动生成或更新带有默认值的配置文件。
+
+## 环境变量概览
+
+### 1. 语言和模型类型选择
+
+```bash
+# 提示词语言(影响系统提示词和生成文本)
+LANGUAGE=EN                     # 选项: EN | CN
+
+# 不同任务的 LLM 类型
+CHAT_LLM_TYPE=openai           # 对话/RAG: openai | litellm | ollama/local
+EXTRACT_LLM_TYPE=openai        # 实体抽取: openai | litellm | ollama/local
+TEXT2GQL_LLM_TYPE=openai       # 文本转 Gremlin: openai | litellm | ollama/local
+
+# 嵌入模型类型
+EMBEDDING_TYPE=openai          # 选项: openai | litellm | ollama/local
+
+# Reranker 类型(可选)
+RERANKER_TYPE=                 # 选项: cohere | siliconflow | (留空表示无)
+```
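
作为补充示意,下面的 Python 片段演示了这类环境变量的一种典型读取方式:未设置时回退到默认值。键名沿用上文示例,`load_llm_types` 为说明用的假设函数,并非项目源码:

```python
import os

def load_llm_types(env=os.environ):
    """按上文的 .env 键名读取配置,缺省时回退到默认值(仅示意)。"""
    defaults = {
        "LANGUAGE": "EN",
        "CHAT_LLM_TYPE": "openai",
        "EXTRACT_LLM_TYPE": "openai",
        "TEXT2GQL_LLM_TYPE": "openai",
        "EMBEDDING_TYPE": "openai",
        "RERANKER_TYPE": "",
    }
    return {key: env.get(key, default) for key, default in defaults.items()}

# 仅设置了 CHAT_LLM_TYPE 时,其余键取默认值
config = load_llm_types({"CHAT_LLM_TYPE": "litellm"})
```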
+
+### 2. OpenAI 配置
+
+每个 LLM 任务(chat、extract、text2gql)都有独立配置:
+
+#### 2.1 Chat LLM(RAG 答案生成)
+
+```bash
+OPENAI_CHAT_API_BASE=https://api.openai.com/v1
+OPENAI_CHAT_API_KEY=sk-your-api-key-here
+OPENAI_CHAT_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_CHAT_TOKENS=8192        # 对话响应的最大 tokens
+```
+
+#### 2.2 Extract LLM(实体和关系抽取)
+
+```bash
+OPENAI_EXTRACT_API_BASE=https://api.openai.com/v1
+OPENAI_EXTRACT_API_KEY=sk-your-api-key-here
+OPENAI_EXTRACT_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_EXTRACT_TOKENS=1024     # 抽取任务的最大 tokens
+```
+
+#### 2.3 Text2GQL LLM(自然语言转 Gremlin)
+
+```bash
+OPENAI_TEXT2GQL_API_BASE=https://api.openai.com/v1
+OPENAI_TEXT2GQL_API_KEY=sk-your-api-key-here
+OPENAI_TEXT2GQL_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_TEXT2GQL_TOKENS=4096    # 查询生成的最大 tokens
+```
+
+#### 2.4 嵌入模型
+
+```bash
+OPENAI_EMBEDDING_API_BASE=https://api.openai.com/v1
+OPENAI_EMBEDDING_API_KEY=sk-your-api-key-here
+OPENAI_EMBEDDING_MODEL=text-embedding-3-small
+```
+
+> [!NOTE]
+> 您可以为每个任务使用不同的 API 密钥/端点,以优化成本或使用专用模型。
+
+### 3. LiteLLM 配置(多供应商支持)
+
+LiteLLM 支持统一访问 100 多个 LLM 供应商(OpenAI、Anthropic、Google、Azure 等)。
+
+#### 3.1 Chat LLM
+
+```bash
+LITELLM_CHAT_API_BASE=http://localhost:4000       # LiteLLM 代理 URL
+LITELLM_CHAT_API_KEY=sk-litellm-key              # LiteLLM API 密钥
+LITELLM_CHAT_LANGUAGE_MODEL=anthropic/claude-3-5-sonnet-20241022
+LITELLM_CHAT_TOKENS=8192
+```
+
+#### 3.2 Extract LLM
+
+```bash
+LITELLM_EXTRACT_API_BASE=http://localhost:4000
+LITELLM_EXTRACT_API_KEY=sk-litellm-key
+LITELLM_EXTRACT_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_EXTRACT_TOKENS=256
+```
+
+#### 3.3 Text2GQL LLM
+
+```bash
+LITELLM_TEXT2GQL_API_BASE=http://localhost:4000
+LITELLM_TEXT2GQL_API_KEY=sk-litellm-key
+LITELLM_TEXT2GQL_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_TEXT2GQL_TOKENS=4096
+```
+
+#### 3.4 嵌入模型
+
+```bash
+LITELLM_EMBEDDING_API_BASE=http://localhost:4000
+LITELLM_EMBEDDING_API_KEY=sk-litellm-key
+LITELLM_EMBEDDING_MODEL=openai/text-embedding-3-small
+```
+
+**模型格式**: `供应商/模型名称`
+
+示例:
+- `openai/gpt-4o-mini`
+- `anthropic/claude-3-5-sonnet-20241022`
+- `google/gemini-2.0-flash-exp`
+- `azure/gpt-4`
+
+完整列表请参阅 [LiteLLM Providers](https://docs.litellm.ai/docs/providers)。
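
下面用一个极简的 Python 片段示意该格式的拆分逻辑(`split_model_id` 为说明用的假设函数,并非 LiteLLM 的内部实现):

```python
def split_model_id(model_id):
    """把 "供应商/模型名称" 按第一个 "/" 拆开;无 "/" 时供应商为空(仅示意)。"""
    provider, sep, model = model_id.partition("/")
    if not sep:
        return "", provider
    return provider, model

provider, model = split_model_id("anthropic/claude-3-5-sonnet-20241022")
```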
+
+### 4. Ollama 配置(本地部署)
+
+使用 Ollama 运行本地 LLM,确保隐私和成本控制。
+
+#### 4.1 Chat LLM
+
+```bash
+OLLAMA_CHAT_HOST=127.0.0.1
+OLLAMA_CHAT_PORT=11434
+OLLAMA_CHAT_LANGUAGE_MODEL=llama3.1:8b
+```
+
+#### 4.2 Extract LLM
+
+```bash
+OLLAMA_EXTRACT_HOST=127.0.0.1
+OLLAMA_EXTRACT_PORT=11434
+OLLAMA_EXTRACT_LANGUAGE_MODEL=llama3.1:8b
+```
+
+#### 4.3 Text2GQL LLM
+
+```bash
+OLLAMA_TEXT2GQL_HOST=127.0.0.1
+OLLAMA_TEXT2GQL_PORT=11434
+OLLAMA_TEXT2GQL_LANGUAGE_MODEL=qwen2.5-coder:7b
+```
+
+#### 4.4 嵌入模型
+
+```bash
+OLLAMA_EMBEDDING_HOST=127.0.0.1
+OLLAMA_EMBEDDING_PORT=11434
+OLLAMA_EMBEDDING_MODEL=nomic-embed-text
+```
+
+> [!TIP]
+> 下载模型:`ollama pull llama3.1:8b` 或 `ollama pull qwen2.5-coder:7b`
+
+### 5. Reranker 配置
+
+Reranker 通过根据相关性重新排序检索结果来提高 RAG 准确性。
+
+#### 5.1 Cohere Reranker
+
+```bash
+RERANKER_TYPE=cohere
+COHERE_BASE_URL=https://api.cohere.com/v1/rerank
+RERANKER_API_KEY=your-cohere-api-key
+RERANKER_MODEL=rerank-english-v3.0
+```
+
+可用模型:
+- `rerank-english-v3.0`(英文)
+- `rerank-multilingual-v3.0`(100+ 种语言)
+
+#### 5.2 SiliconFlow Reranker
+
+```bash
+RERANKER_TYPE=siliconflow
+RERANKER_API_KEY=your-siliconflow-api-key
+RERANKER_MODEL=BAAI/bge-reranker-v2-m3
+```
+
+### 6. HugeGraph 连接
+
+配置与 HugeGraph 服务器实例的连接。
+
+```bash
+# 服务器连接
+GRAPH_IP=127.0.0.1
+GRAPH_PORT=8080
+GRAPH_NAME=hugegraph            # 图实例名称
+GRAPH_USER=admin                # 用户名
+GRAPH_PWD=admin-password        # 密码
+GRAPH_SPACE=                    # 图空间(可选,用于多租户)
+```
+
+### 7. 查询参数
+
+控制图遍历行为和结果限制。
+
+```bash
+# 图遍历限制
+MAX_GRAPH_PATH=10               # 图查询的最大路径深度
+MAX_GRAPH_ITEMS=30              # 从图中检索的最大项数
+EDGE_LIMIT_PRE_LABEL=8          # 每个标签类型的最大边数
+
+# 属性过滤
+LIMIT_PROPERTY=False            # 限制结果中的属性(True/False)
+```
+
+### 8. 向量搜索配置
+
+配置向量相似性搜索参数。
+
+```bash
+# 向量搜索阈值
+VECTOR_DIS_THRESHOLD=0.9        # 最小余弦相似度(0-1,越高越严格)
+TOPK_PER_KEYWORD=1              # 每个提取关键词的 Top-K 结果
+```
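
作为示意,下面的片段用纯标准库演示"先按相似度取 Top-K、再按阈值过滤"的效果。此处假设阈值语义为余弦相似度下限;函数与数据均为说明用例,并非项目源码:

```python
import math

def cosine_similarity(a, b):
    """两个向量的余弦相似度。"""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def filter_by_threshold(query_vec, candidates, threshold=0.9, topk=1):
    """按相似度降序取 Top-K,再过滤低于阈值的结果(仅示意)。"""
    scored = sorted(
        ((cosine_similarity(query_vec, vec), doc) for doc, vec in candidates),
        reverse=True,
    )
    return [doc for score, doc in scored[:topk] if score >= threshold]

candidates = [("doc_a", [1.0, 0.0]), ("doc_b", [0.8, 0.6]), ("doc_c", [0.0, 1.0])]
```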
+
+### 9. Rerank 配置
+
+```bash
+# Rerank 结果限制
+TOPK_RETURN_RESULTS=20          # 重排序后的 top 结果数
+```
+
+## 配置优先级
+
+系统按以下顺序加载配置(后面的来源覆盖前面的):
+
+1. **默认值**(在 `*_config.py` 文件中)
+2. **环境变量**(来自 `.env` 文件)
+3. **运行时更新**(通过 Web UI 或 API 调用)
+
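这一覆盖顺序可以用一个极简的 Python 片段来示意(`resolve_config` 为说明用的假设函数,并非项目源码):

```python
def resolve_config(defaults, env, runtime):
    """后面的配置层覆盖前面的:默认值 < 环境变量 < 运行时更新(仅示意)。"""
    merged = dict(defaults)
    for layer in (env, runtime):
        merged.update(layer)
    return merged

cfg = resolve_config(
    defaults={"LANGUAGE": "EN", "GRAPH_PORT": "8080"},
    env={"LANGUAGE": "CN"},            # 来自 .env
    runtime={"GRAPH_PORT": "18080"},   # 来自 Web UI / API
)
```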
+## 配置示例
+
+### 最小配置(OpenAI)
+
+```bash
+# 语言
+LANGUAGE=EN
+
+# LLM 类型
+CHAT_LLM_TYPE=openai
+EXTRACT_LLM_TYPE=openai
+TEXT2GQL_LLM_TYPE=openai
+EMBEDDING_TYPE=openai
+
+# OpenAI 凭据(所有任务共用一个密钥)
+OPENAI_API_BASE=https://api.openai.com/v1
+OPENAI_API_KEY=sk-your-api-key-here
+OPENAI_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_EMBEDDING_MODEL=text-embedding-3-small
+
+# HugeGraph 连接
+GRAPH_IP=127.0.0.1
+GRAPH_PORT=8080
+GRAPH_NAME=hugegraph
+GRAPH_USER=admin
+GRAPH_PWD=admin
+```
+
+### 生产环境配置(LiteLLM + Reranker)
+
+```bash
+# 双语支持
+LANGUAGE=EN
+
+# 灵活使用 LiteLLM
+CHAT_LLM_TYPE=litellm
+EXTRACT_LLM_TYPE=litellm
+TEXT2GQL_LLM_TYPE=litellm
+EMBEDDING_TYPE=litellm
+
+# LiteLLM 代理
+LITELLM_CHAT_API_BASE=http://localhost:4000
+LITELLM_CHAT_API_KEY=sk-litellm-master-key
+LITELLM_CHAT_LANGUAGE_MODEL=anthropic/claude-3-5-sonnet-20241022
+LITELLM_CHAT_TOKENS=8192
+
+LITELLM_EXTRACT_API_BASE=http://localhost:4000
+LITELLM_EXTRACT_API_KEY=sk-litellm-master-key
+LITELLM_EXTRACT_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_EXTRACT_TOKENS=256
+
+LITELLM_TEXT2GQL_API_BASE=http://localhost:4000
+LITELLM_TEXT2GQL_API_KEY=sk-litellm-master-key
+LITELLM_TEXT2GQL_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_TEXT2GQL_TOKENS=4096
+
+LITELLM_EMBEDDING_API_BASE=http://localhost:4000
+LITELLM_EMBEDDING_API_KEY=sk-litellm-master-key
+LITELLM_EMBEDDING_MODEL=openai/text-embedding-3-small
+
+# Cohere Reranker 提高准确性
+RERANKER_TYPE=cohere
+COHERE_BASE_URL=https://api.cohere.com/v1/rerank
+RERANKER_API_KEY=your-cohere-key
+RERANKER_MODEL=rerank-multilingual-v3.0
+
+# 带认证的 HugeGraph
+GRAPH_IP=prod-hugegraph.example.com
+GRAPH_PORT=8080
+GRAPH_NAME=production_graph
+GRAPH_USER=rag_user
+GRAPH_PWD=secure-password
+GRAPH_SPACE=prod_space
+
+# 优化的查询参数
+MAX_GRAPH_PATH=15
+MAX_GRAPH_ITEMS=50
+VECTOR_DIS_THRESHOLD=0.85
+TOPK_RETURN_RESULTS=30
+```
+
+### 本地/离线配置(Ollama)
+
+```bash
+# 语言
+LANGUAGE=EN
+
+# 全部通过 Ollama 使用本地模型
+CHAT_LLM_TYPE=ollama/local
+EXTRACT_LLM_TYPE=ollama/local
+TEXT2GQL_LLM_TYPE=ollama/local
+EMBEDDING_TYPE=ollama/local
+
+# Ollama 端点
+OLLAMA_CHAT_HOST=127.0.0.1
+OLLAMA_CHAT_PORT=11434
+OLLAMA_CHAT_LANGUAGE_MODEL=llama3.1:8b
+
+OLLAMA_EXTRACT_HOST=127.0.0.1
+OLLAMA_EXTRACT_PORT=11434
+OLLAMA_EXTRACT_LANGUAGE_MODEL=llama3.1:8b
+
+OLLAMA_TEXT2GQL_HOST=127.0.0.1
+OLLAMA_TEXT2GQL_PORT=11434
+OLLAMA_TEXT2GQL_LANGUAGE_MODEL=qwen2.5-coder:7b
+
+OLLAMA_EMBEDDING_HOST=127.0.0.1
+OLLAMA_EMBEDDING_PORT=11434
+OLLAMA_EMBEDDING_MODEL=nomic-embed-text
+
+# 离线环境不使用 reranker
+RERANKER_TYPE=
+
+# 本地 HugeGraph
+GRAPH_IP=127.0.0.1
+GRAPH_PORT=8080
+GRAPH_NAME=hugegraph
+GRAPH_USER=admin
+GRAPH_PWD=admin
+```
+
+## 配置验证
+
+修改 `.env` 后,验证配置:
+
+1. **通过 Web UI**:访问 `http://localhost:8001` 并检查设置面板
+2. **通过 Python**:
+```python
+from hugegraph_llm.config import settings
+print(settings.llm_config)
+print(settings.hugegraph_config)
+```
+3. **通过 REST API**:
+```bash
+curl http://localhost:8001/config
+```
+
+## 故障排除
+
+| 问题 | 解决方案 |
+|------|---------|
+| "API key not found" | 检查 `.env` 中的 `*_API_KEY` 是否正确设置 |
+| "Connection refused" | 验证 `GRAPH_IP` 和 `GRAPH_PORT` 是否正确 |
+| "Model not found" | 对于 Ollama:运行 `ollama pull <模型名称>` |
+| "Rate limit exceeded" | 减少 `MAX_GRAPH_ITEMS` 或使用不同的 API 密钥 |
+| "Embedding dimension mismatch" | 删除现有向量并使用正确模型重建 |
+
+## 另见
+
+- [HugeGraph-LLM 概述](./hugegraph-llm.md)
+- [REST API 参考](./rest-api.md)
+- [快速入门指南](./quick_start.md)
diff --git a/content/cn/docs/quickstart/hugegraph-ai/hugegraph-llm.md b/content/cn/docs/quickstart/hugegraph-ai/hugegraph-llm.md
index b353a8fb..116f473b 100644
--- a/content/cn/docs/quickstart/hugegraph-ai/hugegraph-llm.md
+++ b/content/cn/docs/quickstart/hugegraph-ai/hugegraph-llm.md
@@ -214,7 +214,7 @@ graph TD
 
 ## 🔧 配置
 
-运行演示后,将自动生成配置文件:
+运行演示后,将自动生成配置文件:
 
 - **环境**:`hugegraph-llm/.env`
 - **提示**:`hugegraph-llm/src/hugegraph_llm/resources/demo/config_prompt.yaml`
@@ -222,7 +222,80 @@ graph TD
 > [!NOTE]
 > 使用 Web 界面时,配置更改会自动保存。对于手动更改,刷新页面即可加载更新。
 
-**LLM 提供商支持**:本项目使用 [LiteLLM](https://docs.litellm.ai/docs/providers) 实现多提供商 LLM 支持。
+### LLM 提供商配置
+
+本项目使用 [LiteLLM](https://docs.litellm.ai/docs/providers) 实现多提供商 LLM 支持,可统一访问 OpenAI、Anthropic、Google、Cohere 以及 100 多个其他提供商。
+
+#### 方案一:直接 LLM 连接(OpenAI、Ollama)
+
+```bash
+# .env 配置
+chat_llm_type=openai           # 或 ollama/local
+openai_api_key=sk-xxx
+openai_api_base=https://api.openai.com/v1
+openai_language_model=gpt-4o-mini
+openai_max_tokens=4096
+```
+
+#### 方案二:LiteLLM 多提供商支持
+
+LiteLLM 作为多个 LLM 提供商的统一代理:
+
+```bash
+# .env 配置
+chat_llm_type=litellm
+extract_llm_type=litellm
+text2gql_llm_type=litellm
+
+# LiteLLM 设置
+litellm_api_base=http://localhost:4000  # LiteLLM 代理服务器
+litellm_api_key=sk-1234                  # LiteLLM API 密钥
+
+# 模型选择(提供商/模型格式)
+litellm_language_model=anthropic/claude-3-5-sonnet-20241022
+litellm_max_tokens=4096
+```
+
+**支持的提供商**:OpenAI、Anthropic、Google(Gemini)、Azure、Cohere、Bedrock、Vertex AI、Hugging Face 等。
+
+完整提供商列表和配置详情,请访问 [LiteLLM Providers](https://docs.litellm.ai/docs/providers)。
+
+### Reranker 配置
+
+Reranker 通过重新排序检索结果来提高 RAG 准确性。支持的提供商:
+
+```bash
+# Cohere Reranker
+reranker_type=cohere
+cohere_api_key=your-cohere-key
+cohere_rerank_model=rerank-english-v3.0
+
+# SiliconFlow Reranker
+reranker_type=siliconflow
+siliconflow_api_key=your-siliconflow-key
+siliconflow_rerank_model=BAAI/bge-reranker-v2-m3
+```
+
+### Text2Gremlin 配置
+
+将自然语言转换为 Gremlin 查询:
+
+```python
+from hugegraph_llm.operators.graph_rag_task import Text2GremlinPipeline
+
+# 初始化工作流
+text2gremlin = Text2GremlinPipeline()
+
+# 生成 Gremlin 查询
+result = (
+    text2gremlin
+    .query_to_gremlin(query="查找所有由 Francis Ford Coppola 执导的电影")
+    .execute_gremlin_query()
+    .run()
+)
+```
+
+**REST API 端点**:有关 HTTP 端点详情,请参阅 [REST API 文档](./rest-api.md)。
 
 ## 📚 其他资源
 
diff --git a/content/cn/docs/quickstart/hugegraph-ai/hugegraph-ml.md b/content/cn/docs/quickstart/hugegraph-ai/hugegraph-ml.md
new file mode 100644
index 00000000..a75ba6c1
--- /dev/null
+++ b/content/cn/docs/quickstart/hugegraph-ai/hugegraph-ml.md
@@ -0,0 +1,289 @@
+---
+title: "HugeGraph-ML"
+linkTitle: "HugeGraph-ML"
+weight: 2
+---
+
+HugeGraph-ML 将 HugeGraph 与流行的图学习库集成,支持直接在图数据上进行端到端的机器学习工作流。
+
+## 概述
+
+`hugegraph-ml` 提供了统一接口,用于将图神经网络和机器学习算法应用于存储在 HugeGraph 中的数据。它将 HugeGraph 数据无缝转换为主流 ML 框架兼容的格式,消除了复杂的数据导出/导入流程。
+
+### 核心功能
+
+- **直接 HugeGraph 集成**:无需手动导出即可直接从 HugeGraph 查询图数据
+- **21 种算法实现**:全面覆盖节点分类、图分类、嵌入和链接预测
+- **DGL 后端**:利用深度图库(DGL)进行高效训练
+- **端到端工作流**:从数据加载到模型训练和评估
+- **模块化任务**:可复用的常见 ML 场景任务抽象
+
+## 环境要求
+
+- **Python**:3.9+(独立模块)
+- **HugeGraph Server**:1.0+(推荐:1.5+)
+- **UV 包管理器**:0.7+(用于依赖管理)
+
+## 安装
+
+### 1. 启动 HugeGraph Server
+
+```bash
+# 方案一:Docker(推荐)
+docker run -itd --name=hugegraph -p 8080:8080 hugegraph/hugegraph
+
+# 方案二:二进制包
+# 参见 https://hugegraph.apache.org/docs/download/download/
+```
+
+### 2. 克隆并设置
+
+```bash
+git clone https://github.com/apache/incubator-hugegraph-ai.git
+cd incubator-hugegraph-ai/hugegraph-ml
+```
+
+### 3. 安装依赖
+
+```bash
+# uv sync 自动创建 .venv 并安装所有依赖
+uv sync
+
+# 激活虚拟环境
+source .venv/bin/activate
+```
+
+### 4. 导航到源代码目录
+
+```bash
+cd ./src
+```
+
+> [!NOTE]
+> 所有示例均假定您在已激活的虚拟环境中。
+
+## 已实现算法
+
+HugeGraph-ML 目前实现了跨多个类别的 **21 种图机器学习算法**:
+
+### 节点分类(11 种算法)
+
+基于网络结构和特征预测图节点的标签。
+
+| 算法 | 论文 | 描述 |
+|-----|------|------|
+| **GCN** | [Kipf & Welling, 2017](https://arxiv.org/abs/1609.02907) | 图卷积网络 |
+| **GAT** | [Veličković et al., 2018](https://arxiv.org/abs/1710.10903) | 图注意力网络 |
+| **GraphSAGE** | [Hamilton et al., 2017](https://arxiv.org/abs/1706.02216) | 归纳式表示学习 |
+| **APPNP** | [Klicpera et al., 2019](https://arxiv.org/abs/1810.05997) | 个性化 PageRank 传播 |
+| **AGNN** | [Thekumparampil et al., 2018](https://arxiv.org/abs/1803.03735) | 基于注意力的 GNN |
+| **ARMA** | [Bianchi et al., 2019](https://arxiv.org/abs/1901.01343) | 自回归移动平均滤波器 |
+| **DAGNN** | [Liu et al., 2020](https://arxiv.org/abs/2007.09296) | 深度自适应图神经网络 |
+| **DeeperGCN** | [Li et al., 2020](https://arxiv.org/abs/2006.07739) | 非常深的 GCN 架构 |
+| **GRAND** | [Feng et al., 2020](https://arxiv.org/abs/2005.11079) | 图随机神经网络 |
+| **JKNet** | [Xu et al., 2018](https://arxiv.org/abs/1806.03536) | 跳跃知识网络 |
+| **Cluster-GCN** | [Chiang et al., 2019](https://arxiv.org/abs/1905.07953) | 通过聚类实现可扩展 GCN 训练 |
+
+### 图分类(2 种算法)
+
+基于结构和节点特征对整个图进行分类。
+
+| 算法 | 论文 | 描述 |
+|-----|------|------|
+| **DiffPool** | [Ying et al., 2018](https://arxiv.org/abs/1806.08804) | 可微分图池化 |
+| **GIN** | [Xu et al., 2019](https://arxiv.org/abs/1810.00826) | 图同构网络 |
+
+### 图嵌入(3 种算法)
+
+学习用于下游任务的无监督节点表示。
+
+| 算法 | 论文 | 描述 |
+|-----|------|------|
+| **DGI** | [Veličković et al., 2019](https://arxiv.org/abs/1809.10341) | 深度图信息最大化(对比学习) |
+| **BGRL** | [Thakoor et al., 2021](https://arxiv.org/abs/2102.06514) | 自举图表示学习 |
+| **GRACE** | [Zhu et al., 2020](https://arxiv.org/abs/2006.04131) | 图对比学习 |
+
+### 链接预测(3 种算法)
+
+预测图中缺失或未来的连接。
+
+| 算法 | 论文 | 描述 |
+|-----|------|------|
+| **SEAL** | [Zhang & Chen, 2018](https://arxiv.org/abs/1802.09691) | 子图提取和标注 |
+| **P-GNN** | [You et al., 2019](http://proceedings.mlr.press/v97/you19b/you19b.pdf) | 位置感知 GNN |
+| **GATNE** | [Cen et al., 2019](https://arxiv.org/abs/1905.01669) | 属性多元异构网络嵌入 |
+
+### 欺诈检测(2 种算法)
+
+检测图中的异常节点(例如欺诈账户)。
+
+| 算法 | 论文 | 描述 |
+|-----|------|------|
+| **CARE-GNN** | [Dou et al., 2020](https://arxiv.org/abs/2008.08692) | 抗伪装 GNN |
+| **BGNN** | [Zheng et al., 2021](https://arxiv.org/abs/2101.08543) | 二部图神经网络 |
+
+### 后处理(1 种算法)
+
+通过标签传播改进预测。
+
+| 算法 | 论文 | 描述 |
+|-----|------|------|
+| **C&S** | [Huang et al., 2020](https://arxiv.org/abs/2010.13993) | 校正与平滑(预测优化) |
+
+## 使用示例
+
+### 示例 1:使用 DGI 进行节点嵌入
+
+使用深度图信息最大化(DGI)在 Cora 数据集上进行无监督节点嵌入。
+
+#### 步骤 1:导入数据集(如需)
+
+```python
+from hugegraph_ml.utils.dgl2hugegraph_utils import import_graph_from_dgl
+
+# 从 DGL 导入 Cora 数据集到 HugeGraph
+import_graph_from_dgl("cora")
+```
+
+#### 步骤 2:转换图数据
+
+```python
+from hugegraph_ml.data.hugegraph2dgl import HugeGraph2DGL
+
+# 将 HugeGraph 数据转换为 DGL 格式
+hg2d = HugeGraph2DGL()
+graph = hg2d.convert_graph(vertex_label="CORA_vertex", edge_label="CORA_edge")
+```
+
+#### 步骤 3:初始化模型
+
+```python
+from hugegraph_ml.models.dgi import DGI
+
+# 创建 DGI 模型
+model = DGI(n_in_feats=graph.ndata["feat"].shape[1])
+```
+
+#### 步骤 4:训练并生成嵌入
+
+```python
+from hugegraph_ml.tasks.node_embed import NodeEmbed
+
+# 训练模型并生成节点嵌入
+node_embed_task = NodeEmbed(graph=graph, model=model)
+embedded_graph = node_embed_task.train_and_embed(
+    add_self_loop=True,
+    n_epochs=300,
+    patience=30
+)
+```
+
+#### 步骤 5:下游任务(节点分类)
+
+```python
+from hugegraph_ml.models.mlp import MLPClassifier
+from hugegraph_ml.tasks.node_classify import NodeClassify
+
+# 使用嵌入进行节点分类
+model = MLPClassifier(
+    n_in_feat=embedded_graph.ndata["feat"].shape[1],
+    n_out_feat=embedded_graph.ndata["label"].unique().shape[0]
+)
+node_clf_task = NodeClassify(graph=embedded_graph, model=model)
+node_clf_task.train(lr=1e-3, n_epochs=400, patience=40)
+print(node_clf_task.evaluate())
+```
+
+**预期输出:**
+```python
+{'accuracy': 0.82, 'loss': 0.5714246034622192}
+```
+
+**完整示例**:参见 [dgi_example.py](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-ml/src/hugegraph_ml/examples/dgi_example.py)
+
+### 示例 2:使用 GRAND 进行节点分类
+
+使用 GRAND 模型直接对节点进行分类(无需单独的嵌入步骤)。
+
+```python
+from hugegraph_ml.data.hugegraph2dgl import HugeGraph2DGL
+from hugegraph_ml.models.grand import GRAND
+from hugegraph_ml.tasks.node_classify import NodeClassify
+
+# 加载图
+hg2d = HugeGraph2DGL()
+graph = hg2d.convert_graph(vertex_label="CORA_vertex", edge_label="CORA_edge")
+
+# 初始化 GRAND 模型
+model = GRAND(
+    n_in_feats=graph.ndata["feat"].shape[1],
+    n_out_feats=graph.ndata["label"].unique().shape[0]
+)
+
+# 训练和评估
+node_clf_task = NodeClassify(graph=graph, model=model)
+node_clf_task.train(lr=1e-2, n_epochs=1500, patience=100)
+print(node_clf_task.evaluate())
+```
+
+**完整示例**:参见 [grand_example.py](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-ml/src/hugegraph_ml/examples/grand_example.py)
+
+## 核心组件
+
+### HugeGraph2DGL 转换器
+
+无缝将 HugeGraph 数据转换为 DGL 图格式:
+
+```python
+from hugegraph_ml.data.hugegraph2dgl import HugeGraph2DGL
+
+hg2d = HugeGraph2DGL()
+graph = hg2d.convert_graph(
+    vertex_label="person",      # 要提取的顶点标签
+    edge_label="knows",         # 要提取的边标签
+    directed=False              # 图的方向性
+)
+```
+
+### 任务抽象
+
+用于常见 ML 工作流的可复用任务对象:
+
+| 任务 | 类 | 用途 |
+|-----|-----|------|
+| 节点嵌入 | `NodeEmbed` | 生成无监督节点嵌入 |
+| 节点分类 | `NodeClassify` | 预测节点标签 |
+| 图分类 | `GraphClassify` | 预测图级标签 |
+| 链接预测 | `LinkPredict` | 预测缺失边 |
+
+## 最佳实践
+
+1. **从小数据集开始**:在扩展之前先在小图(例如 Cora、Citeseer)上测试您的流程
+2. **使用早停**:设置 `patience` 参数以避免过拟合
+3. **调整超参数**:根据数据集大小调整学习率、隐藏维度和周期数
+4. **监控 GPU 内存**:大图可能需要批量训练(例如 Cluster-GCN)
+5. **验证 Schema**:确保顶点/边标签与您的 HugeGraph schema 匹配
+
+## 故障排除
+
+| 问题 | 解决方案 |
+|-----|---------|
+| 连接 HugeGraph "Connection refused" | 验证服务器是否在 8080 端口运行 |
+| CUDA 内存不足 | 减少批大小或使用仅 CPU 模式 |
+| 模型收敛问题 | 尝试不同的学习率(1e-2、1e-3、1e-4) |
+| DGL 的 ImportError | 运行 `uv sync` 重新安装依赖 |
+
+## 贡献
+
+添加新算法:
+
+1. 在 `src/hugegraph_ml/models/your_model.py` 创建模型文件
+2. 继承基础模型类并实现 `forward()` 方法
+3. 在 `src/hugegraph_ml/examples/` 添加示例脚本
+4. 更新此文档并添加算法详情
+
+## 另见
+
+- [HugeGraph-AI 概述](../_index.md) - 完整 AI 生态系统
+- [HugeGraph-LLM](./hugegraph-llm.md) - RAG 和知识图谱构建
+- [GitHub 仓库](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-ml) - 源代码和示例
diff --git a/content/cn/docs/quickstart/hugegraph-ai/quick_start.md b/content/cn/docs/quickstart/hugegraph-ai/quick_start.md
index 6d8d22f9..da148f7e 100644
--- a/content/cn/docs/quickstart/hugegraph-ai/quick_start.md
+++ b/content/cn/docs/quickstart/hugegraph-ai/quick_start.md
@@ -190,3 +190,63 @@ graph TD;
 # 5. 图工具
 
 输入 Gremlin 查询以执行相应操作。
+
+# 6. 语言切换 (v1.5.0+)
+
+HugeGraph-LLM 支持双语提示词,以提高跨语言的准确性。
+
+### 在英文和中文之间切换
+
+系统语言影响:
+- **系统提示词**:LLM 使用的内部提示词
+- **关键词提取**:特定语言的提取逻辑
+- **答案生成**:响应格式和风格
+
+#### 配置方法一:环境变量
+
+编辑您的 `.env` 文件:
+
+```bash
+# 英文提示词(默认)
+LANGUAGE=EN
+
+# 中文提示词
+LANGUAGE=CN
+```
+
+更改语言设置后重启服务。
+
+#### 配置方法二:Web UI(动态)
+
+如果您的部署中可用,使用 Web UI 中的设置面板切换语言,无需重启:
+
+1. 导航到**设置**或**配置**选项卡
+2. 选择**语言**:`EN` 或 `CN`
+3. 点击**保存** - 更改立即生效
+
+#### 特定语言的行为
+
+| 语言 | 关键词提取 | 答案风格 | 使用场景 |
+|-----|-----------|---------|---------|
+| `EN` | 英文 NLP 模型 | 专业、简洁 | 国际用户、英文文档 |
+| `CN` | 中文 NLP 模型 | 自然的中文表达 | 中文用户、中文文档 |
+
+> [!TIP]
+> 将 `LANGUAGE` 设置与您的主要文档语言匹配,以获得最佳 RAG 准确性。
+
+### REST API 语言覆盖
+
+使用 REST API 时,您可以为每个请求指定自定义提示词,以覆盖默认语言设置:
+
+```bash
+curl -X POST http://localhost:8001/rag \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "告诉我关于阿尔·帕西诺的信息",
+    "graph_only": true,
+    "keywords_extract_prompt": "请从以下文本中提取关键实体...",
+    "answer_prompt": "请根据以下上下文回答问题..."
+  }'
+```
+
+完整参数详情请参阅 [REST API 参考](./rest-api.md)。
diff --git a/content/cn/docs/quickstart/hugegraph-ai/rest-api.md b/content/cn/docs/quickstart/hugegraph-ai/rest-api.md
new file mode 100644
index 00000000..349ff4c0
--- /dev/null
+++ b/content/cn/docs/quickstart/hugegraph-ai/rest-api.md
@@ -0,0 +1,428 @@
+---
+title: "REST API 参考"
+linkTitle: "REST API"
+weight: 5
+---
+
+HugeGraph-LLM 提供 REST API 端点,用于将 RAG 和 Text2Gremlin 功能集成到您的应用程序中。
+
+## 基础 URL
+
+```
+http://localhost:8001
+```
+
+启动服务时更改主机/端口:
+```bash
+python -m hugegraph_llm.demo.rag_demo.app --host 127.0.0.1 --port 8001
+```
+
+## 认证
+
+目前 API 支持可选的基于令牌的认证:
+
+```bash
+# 在 .env 中启用认证
+ENABLE_LOGIN=true
+USER_TOKEN=your-user-token
+ADMIN_TOKEN=your-admin-token
+```
+
+在请求头中传递令牌:
+```bash
+Authorization: Bearer <token>
+```
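
在客户端代码中,可以按下面的方式构造请求头(仅为示意;`auth_headers` 是说明用的假设函数,令牌值为占位符):

```python
def auth_headers(token=None):
    """构造带可选 Bearer 令牌的请求头;未启用认证时省略 Authorization(仅示意)。"""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers

headers = auth_headers("your-user-token")
```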
+
+---
+
+## RAG 端点
+
+### 1. 完整 RAG 查询
+
+**POST** `/rag`
+
+执行完整的 RAG 工作流,包括关键词提取、图检索、向量搜索、重排序和答案生成。
+
+#### 请求体
+
+```json
+{
+  "query": "给我讲讲阿尔·帕西诺的电影",
+  "raw_answer": false,
+  "vector_only": false,
+  "graph_only": true,
+  "graph_vector_answer": false,
+  "graph_ratio": 0.5,
+  "rerank_method": "cohere",
+  "near_neighbor_first": false,
+  "gremlin_tmpl_num": 5,
+  "max_graph_items": 30,
+  "topk_return_results": 20,
+  "vector_dis_threshold": 0.9,
+  "topk_per_keyword": 1,
+  "custom_priority_info": "",
+  "answer_prompt": "",
+  "keywords_extract_prompt": "",
+  "gremlin_prompt": "",
+  "client_config": {
+    "url": "127.0.0.1:8080",
+    "graph": "hugegraph",
+    "user": "admin",
+    "pwd": "admin",
+    "gs": ""
+  }
+}
+```
+
+**参数说明:**
+
+| 字段 | 类型 | 必需 | 默认值 | 描述 |
+|-----|------|------|-------|------|
+| `query` | string | 是 | - | 用户的自然语言问题 |
+| `raw_answer` | boolean | 否 | false | 直接返回 LLM 答案,不进行检索 |
+| `vector_only` | boolean | 否 | false | 仅使用向量搜索(无图) |
+| `graph_only` | boolean | 否 | false | 仅使用图检索(无向量) |
+| `graph_vector_answer` | boolean | 否 | false | 结合图和向量结果 |
+| `graph_ratio` | float | 否 | 0.5 | 图与向量结果的比例(0-1) |
+| `rerank_method` | string | 否 | "" | 重排序器:"cohere"、"siliconflow"、"" |
+| `near_neighbor_first` | boolean | 否 | false | 优先选择直接邻居 |
+| `gremlin_tmpl_num` | integer | 否 | 5 | 尝试的 Gremlin 模板数量 |
+| `max_graph_items` | integer | 否 | 30 | 图检索的最大项数 |
+| `topk_return_results` | integer | 否 | 20 | 重排序后的 Top-K |
+| `vector_dis_threshold` | float | 否 | 0.9 | 向量相似度阈值(0-1) |
+| `topk_per_keyword` | integer | 否 | 1 | 每个关键词的 Top-K 向量 |
+| `custom_priority_info` | string | 否 | "" | 要优先考虑的自定义上下文 |
+| `answer_prompt` | string | 否 | "" | 自定义答案生成提示词 |
+| `keywords_extract_prompt` | string | 否 | "" | 自定义关键词提取提示词 |
+| `gremlin_prompt` | string | 否 | "" | 自定义 Gremlin 生成提示词 |
+| `client_config` | object | 否 | null | 覆盖图连接设置 |
+
+#### 响应
+
+```json
+{
+  "query": "给我讲讲阿尔·帕西诺的电影",
+  "graph_only": {
+    "answer": "阿尔·帕西诺主演了《教父》(1972 年),由弗朗西斯·福特·科波拉执导...",
+    "context": ["《教父》是 1972 年的犯罪电影...", "..."],
+    "graph_paths": ["..."],
+    "keywords": ["阿尔·帕西诺", "电影"]
+  }
+}
+```
+
+#### 示例(curl)
+
+```bash
+curl -X POST http://localhost:8001/rag \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "给我讲讲阿尔·帕西诺",
+    "graph_only": true,
+    "max_graph_items": 30
+  }'
+```
+
+### 2. 仅图检索
+
+**POST** `/rag/graph`
+
+检索图上下文而不生成答案。用于调试或自定义处理。
+
+#### 请求体
+
+```json
+{
+  "query": "阿尔·帕西诺的电影",
+  "max_graph_items": 30,
+  "topk_return_results": 20,
+  "vector_dis_threshold": 0.9,
+  "topk_per_keyword": 1,
+  "gremlin_tmpl_num": 5,
+  "rerank_method": "cohere",
+  "near_neighbor_first": false,
+  "custom_priority_info": "",
+  "gremlin_prompt": "",
+  "get_vertex_only": false,
+  "client_config": {
+    "url": "127.0.0.1:8080",
+    "graph": "hugegraph",
+    "user": "admin",
+    "pwd": "admin",
+    "gs": ""
+  }
+}
+```
+
+**额外参数:**
+
+| 字段 | 类型 | 默认值 | 描述 |
+|-----|------|-------|------|
+| `get_vertex_only` | boolean | false | 仅返回顶点 ID,不返回完整详情 |
+
+#### 响应
+
+```json
+{
+  "graph_recall": {
+    "query": "阿尔·帕西诺的电影",
+    "keywords": ["阿尔·帕西诺", "电影"],
+    "match_vids": ["1:阿尔·帕西诺", "2:教父"],
+    "graph_result_flag": true,
+    "gremlin": "g.V('1:阿尔·帕西诺').outE().inV().limit(30)",
+    "graph_result": [
+      {"id": "1:阿尔·帕西诺", "label": "person", "properties": {"name": "阿尔·帕西诺"}},
+      {"id": "2:教父", "label": "movie", "properties": {"title": "教父"}}
+    ],
+    "vertex_degree_list": [5, 12]
+  }
+}
+```
+
+#### 示例(curl)
+
+```bash
+curl -X POST http://localhost:8001/rag/graph \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "阿尔·帕西诺",
+    "max_graph_items": 30,
+    "get_vertex_only": false
+  }'
+```
+
+---
+
+## Text2Gremlin 端点
+
+### 3. 自然语言转 Gremlin
+
+**POST** `/text2gremlin`
+
+将自然语言查询转换为可执行的 Gremlin 命令。
+
+#### 请求体
+
+```json
+{
+  "query": "查找所有由弗朗西斯·福特·科波拉执导的电影",
+  "example_num": 5,
+  "gremlin_prompt": "",
+  "output_types": ["GREMLIN", "RESULT"],
+  "client_config": {
+    "url": "127.0.0.1:8080",
+    "graph": "hugegraph",
+    "user": "admin",
+    "pwd": "admin",
+    "gs": ""
+  }
+}
+```
+
+**参数说明:**
+
+| 字段 | 类型 | 必需 | 默认值 | 描述 |
+|-----|------|------|-------|------|
+| `query` | string | 是 | - | 自然语言查询 |
+| `example_num` | integer | 否 | 5 | 使用的示例模板数量 |
+| `gremlin_prompt` | string | 否 | "" | Gremlin 生成的自定义提示词 |
+| `output_types` | array | 否 | null | 输出类型:["GREMLIN", "RESULT", "CYPHER"] |
+| `client_config` | object | 否 | null | 图连接覆盖 |
+
+**输出类型:**
+- `GREMLIN`:生成的 Gremlin 查询
+- `RESULT`:图的执行结果
+- `CYPHER`:Cypher 查询(如果请求)
+
+#### 响应
+
+```json
+{
+  "gremlin": "g.V().has('person','name','弗朗西斯·福特·科波拉').out('directed').hasLabel('movie').values('title')",
+  "result": [
+    "教父",
+    "教父 2",
+    "现代启示录"
+  ]
+}
+```
+
+#### 示例(curl)
+
+```bash
+curl -X POST http://localhost:8001/text2gremlin \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "查找所有由弗朗西斯·福特·科波拉执导的电影",
+    "output_types": ["GREMLIN", "RESULT"]
+  }'
+```
+
+---
+
+## 配置端点
+
+### 4. 更新图连接
+
+**POST** `/config/graph`
+
+动态更新 HugeGraph 连接设置。
+
+#### 请求体
+
+```json
+{
+  "url": "127.0.0.1:8080",
+  "name": "hugegraph",
+  "user": "admin",
+  "pwd": "admin",
+  "gs": ""
+}
+```
+
+#### 响应
+
+```json
+{
+  "status_code": 201,
+  "message": "图配置更新成功"
+}
+```
+
+### 5. 更新 LLM 配置
+
+**POST** `/config/llm`
+
+运行时更新聊天/提取 LLM 设置。
+
+#### 请求体(OpenAI)
+
+```json
+{
+  "llm_type": "openai",
+  "api_key": "sk-your-api-key",
+  "api_base": "https://api.openai.com/v1",
+  "language_model": "gpt-4o-mini",
+  "max_tokens": 4096
+}
+```
+
+#### 请求体(Ollama)
+
+```json
+{
+  "llm_type": "ollama/local",
+  "host": "127.0.0.1",
+  "port": 11434,
+  "language_model": "llama3.1:8b"
+}
+```
+
+### 6. 更新嵌入配置
+
+**POST** `/config/embedding`
+
+更新嵌入模型设置。
+
+#### 请求体
+
+```json
+{
+  "llm_type": "openai",
+  "api_key": "sk-your-api-key",
+  "api_base": "https://api.openai.com/v1",
+  "language_model": "text-embedding-3-small"
+}
+```
+
+### 7. 更新 Reranker 配置
+
+**POST** `/config/rerank`
+
+配置重排序器设置。
+
+#### 请求体(Cohere)
+
+```json
+{
+  "reranker_type": "cohere",
+  "api_key": "your-cohere-key",
+  "reranker_model": "rerank-multilingual-v3.0",
+  "cohere_base_url": "https://api.cohere.com/v1/rerank"
+}
+```
+
+#### 请求体(SiliconFlow)
+
+```json
+{
+  "reranker_type": "siliconflow",
+  "api_key": "your-siliconflow-key",
+  "reranker_model": "BAAI/bge-reranker-v2-m3"
+}
+```
+
+---
+
+## 错误响应
+
+所有端点返回标准 HTTP 状态码:
+
+| 代码 | 含义 |
+|-----|------|
+| 200 | 成功 |
+| 201 | 已创建(配置已更新) |
+| 400 | 错误请求(无效参数) |
+| 500 | 内部服务器错误 |
+| 501 | 未实现 |
+
+错误响应格式:
+```json
+{
+  "detail": "描述错误的消息"
+}
+```
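
客户端可据此统一处理响应,例如(`handle_response` 为说明用的假设函数,并非项目提供的接口):

```python
def handle_response(status_code, body):
    """2xx 返回响应体,其余状态码抛出带 detail 信息的异常(仅示意)。"""
    if status_code in (200, 201):
        return body
    raise RuntimeError(f"API error {status_code}: {body.get('detail', 'unknown error')}")
```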
+
+---
+
+## Python 客户端示例
+
+```python
+import requests
+
+BASE_URL = "http://localhost:8001"
+
+# 1. 配置图连接
+graph_config = {
+    "url": "127.0.0.1:8080",
+    "name": "hugegraph",
+    "user": "admin",
+    "pwd": "admin"
+}
+requests.post(f"{BASE_URL}/config/graph", json=graph_config)
+
+# 2. 执行 RAG 查询
+rag_request = {
+    "query": "给我讲讲阿尔·帕西诺",
+    "graph_only": True,
+    "max_graph_items": 30
+}
+response = requests.post(f"{BASE_URL}/rag", json=rag_request)
+print(response.json())
+
+# 3. 从自然语言生成 Gremlin
+text2gql_request = {
+    "query": "查找所有与阿尔·帕西诺合作的导演",
+    "output_types": ["GREMLIN", "RESULT"]
+}
+response = requests.post(f"{BASE_URL}/text2gremlin", json=text2gql_request)
+print(response.json())
+```
+
+---
+
+## 另见
+
+- [配置参考](./config-reference.md) - 完整的 .env 配置指南
+- [HugeGraph-LLM 概述](./hugegraph-llm.md) - 架构和功能
+- [快速入门指南](./quick_start.md) - Web UI 入门
diff --git a/content/en/docs/quickstart/hugegraph-ai/_index.md b/content/en/docs/quickstart/hugegraph-ai/_index.md
index 2875a1ce..196e6681 100644
--- a/content/en/docs/quickstart/hugegraph-ai/_index.md
+++ b/content/en/docs/quickstart/hugegraph-ai/_index.md
@@ -18,20 +18,31 @@ weight: 3
 ## ✨ Key Features
 
 - **GraphRAG**: Build intelligent question-answering systems with graph-enhanced retrieval
+- **Text2Gremlin**: Natural language to graph query conversion with REST API
+- **Knowledge Graph Construction**: Automated graph building from text using LLMs
-- **Graph ML**: Integration with 20+ graph learning algorithms (GCN, GAT, GraphSAGE, etc.)
+- **Graph ML**: Integration with 22 graph learning algorithms (GCN, GAT, GraphSAGE, etc.)
 - **Python Client**: Easy-to-use Python interface for HugeGraph operations
 - **AI Agents**: Intelligent graph analysis and reasoning capabilities
 
+### 🎉 What's New in v1.5.0
+
+- **Text2Gremlin REST API**: Convert natural language queries to Gremlin commands via REST endpoints
+- **Multi-Model Vector Support**: Each graph instance can use independent embedding models
+- **Bilingual Prompt Support**: Switch between English and Chinese prompts (EN/CN)
+- **Semi-Automatic Schema Generation**: Intelligent schema inference from text data
+- **Semi-Automatic Prompt Generation**: Context-aware prompt templates
+- **Enhanced Reranker Support**: Integration with Cohere and SiliconFlow rerankers
+- **LiteLLM Multi-Provider Support**: Unified interface for OpenAI, Anthropic, Gemini, and more
+
 ## 🚀 Quick Start
 
 > [!NOTE]
 > For a complete deployment guide and detailed examples, please refer to 
 > [hugegraph-llm/README.md](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-llm/README.md)
 
 ### Prerequisites
-- Python 3.9+ (3.10+ recommended for hugegraph-llm)
-- [uv](https://docs.astral.sh/uv/) (recommended package manager)
-- HugeGraph Server 1.3+ (1.5+ recommended)
+- Python 3.10+ (required for hugegraph-llm)
+- [uv](https://docs.astral.sh/uv/) 0.7+ (recommended package manager)
+- HugeGraph Server 1.5+ (required)
 - Docker (optional, for containerized deployment)
 
 ### Option 1: Docker Deployment (Recommended)
@@ -123,11 +134,13 @@ Large language model integration for graph applications:
 - **AI Agents**: Intelligent graph analysis and reasoning
 
 ### [hugegraph-ml](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-ml)
-Graph machine learning with 20+ implemented algorithms:
-- **Node Classification**: GCN, GAT, GraphSAGE, APPNP, etc.
-- **Graph Classification**: DiffPool, P-GNN, etc.
-- **Graph Embedding**: DeepWalk, Node2Vec, GRACE, etc.
-- **Link Prediction**: SEAL, GATNE, etc.
+Graph machine learning with 22 implemented algorithms:
+- **Node Classification**: GCN, GAT, GraphSAGE, APPNP, AGNN, ARMA, DAGNN, DeeperGCN, GRAND, JKNet, Cluster-GCN
+- **Graph Classification**: DiffPool, GIN
+- **Graph Embedding**: DGI, BGRL, GRACE
+- **Link Prediction**: SEAL, P-GNN, GATNE
+- **Fraud Detection**: CARE-GNN, BGNN
+- **Post-Processing**: C&S (Correct & Smooth)
 
 ### [hugegraph-python-client](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-python-client)
 Python client for HugeGraph operations:
diff --git a/content/en/docs/quickstart/hugegraph-ai/config-reference.md b/content/en/docs/quickstart/hugegraph-ai/config-reference.md
new file mode 100644
index 00000000..502a1d56
--- /dev/null
+++ b/content/en/docs/quickstart/hugegraph-ai/config-reference.md
@@ -0,0 +1,396 @@
+---
+title: "Configuration Reference"
+linkTitle: "Configuration Reference"
+weight: 4
+---
+
+This document provides a comprehensive reference for all configuration options in HugeGraph-LLM.
+
+## Configuration Files
+
+- **Environment File**: `.env` (created from template or auto-generated)
+- **Prompt Configuration**: `src/hugegraph_llm/resources/demo/config_prompt.yaml`
+
+> [!TIP]
+> Run `python -m hugegraph_llm.config.generate --update` to auto-generate or update configuration files with defaults.
+
+## Environment Variables Overview
+
+### 1. Language and Model Type Selection
+
+```bash
+# Prompt language (affects system prompts and generated text)
+LANGUAGE=EN                     # Options: EN | CN
+
+# LLM Type for different tasks
+CHAT_LLM_TYPE=openai           # Chat/RAG: openai | litellm | ollama/local
+EXTRACT_LLM_TYPE=openai        # Entity extraction: openai | litellm | ollama/local
+TEXT2GQL_LLM_TYPE=openai       # Text2Gremlin: openai | litellm | ollama/local
+
+# Embedding type
+EMBEDDING_TYPE=openai          # Options: openai | litellm | ollama/local
+
+# Reranker type (optional)
+RERANKER_TYPE=                 # Options: cohere | siliconflow | (empty for none)
+```
+
+### 2. OpenAI Configuration
+
+Each LLM task (chat, extract, text2gql) has independent configuration:
+
+#### 2.1 Chat LLM (RAG Answer Generation)
+
+```bash
+OPENAI_CHAT_API_BASE=https://api.openai.com/v1
+OPENAI_CHAT_API_KEY=sk-your-api-key-here
+OPENAI_CHAT_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_CHAT_TOKENS=8192        # Max tokens for chat responses
+```
+
+#### 2.2 Extract LLM (Entity & Relation Extraction)
+
+```bash
+OPENAI_EXTRACT_API_BASE=https://api.openai.com/v1
+OPENAI_EXTRACT_API_KEY=sk-your-api-key-here
+OPENAI_EXTRACT_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_EXTRACT_TOKENS=1024     # Max tokens for extraction
+```
+
+#### 2.3 Text2GQL LLM (Natural Language to Gremlin)
+
+```bash
+OPENAI_TEXT2GQL_API_BASE=https://api.openai.com/v1
+OPENAI_TEXT2GQL_API_KEY=sk-your-api-key-here
+OPENAI_TEXT2GQL_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_TEXT2GQL_TOKENS=4096    # Max tokens for query generation
+```
+
+#### 2.4 Embedding Model
+
+```bash
+OPENAI_EMBEDDING_API_BASE=https://api.openai.com/v1
+OPENAI_EMBEDDING_API_KEY=sk-your-api-key-here
+OPENAI_EMBEDDING_MODEL=text-embedding-3-small
+```
+
+> [!NOTE]
+> You can use different API keys/endpoints for each task to optimize costs or use specialized models.
+
+### 3. LiteLLM Configuration (Multi-Provider Support)
+
+LiteLLM enables unified access to 100+ LLM providers (OpenAI, Anthropic, Google, Azure, etc.).
+
+#### 3.1 Chat LLM
+
+```bash
+LITELLM_CHAT_API_BASE=http://localhost:4000       # LiteLLM proxy URL
+LITELLM_CHAT_API_KEY=sk-litellm-key              # LiteLLM API key
+LITELLM_CHAT_LANGUAGE_MODEL=anthropic/claude-3-5-sonnet-20241022
+LITELLM_CHAT_TOKENS=8192
+```
+
+#### 3.2 Extract LLM
+
+```bash
+LITELLM_EXTRACT_API_BASE=http://localhost:4000
+LITELLM_EXTRACT_API_KEY=sk-litellm-key
+LITELLM_EXTRACT_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_EXTRACT_TOKENS=256
+```
+
+#### 3.3 Text2GQL LLM
+
+```bash
+LITELLM_TEXT2GQL_API_BASE=http://localhost:4000
+LITELLM_TEXT2GQL_API_KEY=sk-litellm-key
+LITELLM_TEXT2GQL_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_TEXT2GQL_TOKENS=4096
+```
+
+#### 3.4 Embedding
+
+```bash
+LITELLM_EMBEDDING_API_BASE=http://localhost:4000
+LITELLM_EMBEDDING_API_KEY=sk-litellm-key
+LITELLM_EMBEDDING_MODEL=openai/text-embedding-3-small
+```
+
+**Model Format**: `provider/model-name`
+
+Examples:
+- `openai/gpt-4o-mini`
+- `anthropic/claude-3-5-sonnet-20241022`
+- `google/gemini-2.0-flash-exp`
+- `azure/gpt-4`
+
+See [LiteLLM Providers](https://docs.litellm.ai/docs/providers) for the complete list.
+
+### 4. Ollama Configuration (Local Deployment)
+
+Run local LLMs with Ollama for privacy and cost control.
+
+#### 4.1 Chat LLM
+
+```bash
+OLLAMA_CHAT_HOST=127.0.0.1
+OLLAMA_CHAT_PORT=11434
+OLLAMA_CHAT_LANGUAGE_MODEL=llama3.1:8b
+```
+
+#### 4.2 Extract LLM
+
+```bash
+OLLAMA_EXTRACT_HOST=127.0.0.1
+OLLAMA_EXTRACT_PORT=11434
+OLLAMA_EXTRACT_LANGUAGE_MODEL=llama3.1:8b
+```
+
+#### 4.3 Text2GQL LLM
+
+```bash
+OLLAMA_TEXT2GQL_HOST=127.0.0.1
+OLLAMA_TEXT2GQL_PORT=11434
+OLLAMA_TEXT2GQL_LANGUAGE_MODEL=qwen2.5-coder:7b
+```
+
+#### 4.4 Embedding
+
+```bash
+OLLAMA_EMBEDDING_HOST=127.0.0.1
+OLLAMA_EMBEDDING_PORT=11434
+OLLAMA_EMBEDDING_MODEL=nomic-embed-text
+```
+
+> [!TIP]
+> Download models: `ollama pull llama3.1:8b` or `ollama pull qwen2.5-coder:7b`
+
+### 5. Reranker Configuration
+
+Rerankers improve RAG accuracy by reordering retrieved results based on relevance.
+
+#### 5.1 Cohere Reranker
+
+```bash
+RERANKER_TYPE=cohere
+COHERE_BASE_URL=https://api.cohere.com/v1/rerank
+RERANKER_API_KEY=your-cohere-api-key
+RERANKER_MODEL=rerank-english-v3.0
+```
+
+Available models:
+- `rerank-english-v3.0` (English)
+- `rerank-multilingual-v3.0` (100+ languages)
+
+#### 5.2 SiliconFlow Reranker
+
+```bash
+RERANKER_TYPE=siliconflow
+RERANKER_API_KEY=your-siliconflow-api-key
+RERANKER_MODEL=BAAI/bge-reranker-v2-m3
+```
+
+### 6. HugeGraph Connection
+
+Configure connection to your HugeGraph server instance.
+
+```bash
+# Server connection
+GRAPH_IP=127.0.0.1
+GRAPH_PORT=8080
+GRAPH_NAME=hugegraph            # Graph instance name
+GRAPH_USER=admin                # Username
+GRAPH_PWD=admin-password        # Password
+GRAPH_SPACE=                    # Graph space (optional, for multi-tenancy)
+```
+
+### 7. Query Parameters
+
+Control graph traversal behavior and result limits.
+
+```bash
+# Graph traversal limits
+MAX_GRAPH_PATH=10               # Max path depth for graph queries
+MAX_GRAPH_ITEMS=30              # Max items to retrieve from graph
+EDGE_LIMIT_PRE_LABEL=8          # Max edges per label type
+
+# Property filtering
+LIMIT_PROPERTY=False            # Limit properties in results (True/False)
+```
+
+### 8. Vector Search Configuration
+
+Configure vector similarity search parameters.
+
+```bash
+# Vector search thresholds
+VECTOR_DIS_THRESHOLD=0.9        # Min cosine similarity (0-1, higher = stricter)
+TOPK_PER_KEYWORD=1              # Top-K results per extracted keyword
+```
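+
+`VECTOR_DIS_THRESHOLD` acts as a minimum cosine-similarity cutoff. A minimal, self-contained sketch of this kind of filtering (illustrative only — not the project's internal implementation):
+
```python
import math

def cosine_similarity(a, b):
    # Cosine similarity of two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

threshold = 0.9  # mirrors VECTOR_DIS_THRESHOLD
query = [1.0, 0.0, 0.0]
candidates = {"doc_a": [0.99, 0.1, 0.0], "doc_b": [0.5, 0.8, 0.0]}

# Keep only candidates at or above the similarity threshold
kept = [k for k, v in candidates.items() if cosine_similarity(query, v) >= threshold]
print(kept)  # ['doc_a']
```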
+
+### 9. Rerank Configuration
+
+```bash
+# Rerank result limits
+TOPK_RETURN_RESULTS=20          # Number of top results after reranking
+```
+
+## Configuration Priority
+
+The system loads configuration in the following order (later sources override earlier ones):
+
+1. **Default Values** (in `*_config.py` files)
+2. **Environment Variables** (from `.env` file)
+3. **Runtime Updates** (via Web UI or API calls)
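+
+The precedence above can be sketched as follows (a simplified illustration; the actual loader lives in the project's config classes):
+
```python
import os

DEFAULTS = {"LANGUAGE": "EN", "GRAPH_PORT": "8080"}

def resolve(key, runtime_overrides=None):
    # Runtime updates win over environment variables, which win over defaults
    runtime_overrides = runtime_overrides or {}
    if key in runtime_overrides:
        return runtime_overrides[key]
    return os.environ.get(key, DEFAULTS.get(key))

os.environ["LANGUAGE"] = "CN"  # simulates a value loaded from .env
print(resolve("LANGUAGE"))                      # environment beats default
print(resolve("LANGUAGE", {"LANGUAGE": "EN"}))  # runtime beats environment
```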
+
+## Example Configurations
+
+### Minimal Setup (OpenAI)
+
+```bash
+# Language
+LANGUAGE=EN
+
+# LLM Types
+CHAT_LLM_TYPE=openai
+EXTRACT_LLM_TYPE=openai
+TEXT2GQL_LLM_TYPE=openai
+EMBEDDING_TYPE=openai
+
+# OpenAI Credentials (single key for all tasks)
+OPENAI_API_BASE=https://api.openai.com/v1
+OPENAI_API_KEY=sk-your-api-key-here
+OPENAI_LANGUAGE_MODEL=gpt-4o-mini
+OPENAI_EMBEDDING_MODEL=text-embedding-3-small
+
+# HugeGraph Connection
+GRAPH_IP=127.0.0.1
+GRAPH_PORT=8080
+GRAPH_NAME=hugegraph
+GRAPH_USER=admin
+GRAPH_PWD=admin
+```
+
+### Production Setup (LiteLLM + Reranker)
+
+```bash
+# Bilingual support
+LANGUAGE=EN
+
+# LiteLLM for flexibility
+CHAT_LLM_TYPE=litellm
+EXTRACT_LLM_TYPE=litellm
+TEXT2GQL_LLM_TYPE=litellm
+EMBEDDING_TYPE=litellm
+
+# LiteLLM Proxy
+LITELLM_CHAT_API_BASE=http://localhost:4000
+LITELLM_CHAT_API_KEY=sk-litellm-master-key
+LITELLM_CHAT_LANGUAGE_MODEL=anthropic/claude-3-5-sonnet-20241022
+LITELLM_CHAT_TOKENS=8192
+
+LITELLM_EXTRACT_API_BASE=http://localhost:4000
+LITELLM_EXTRACT_API_KEY=sk-litellm-master-key
+LITELLM_EXTRACT_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_EXTRACT_TOKENS=256
+
+LITELLM_TEXT2GQL_API_BASE=http://localhost:4000
+LITELLM_TEXT2GQL_API_KEY=sk-litellm-master-key
+LITELLM_TEXT2GQL_LANGUAGE_MODEL=openai/gpt-4o-mini
+LITELLM_TEXT2GQL_TOKENS=4096
+
+LITELLM_EMBEDDING_API_BASE=http://localhost:4000
+LITELLM_EMBEDDING_API_KEY=sk-litellm-master-key
+LITELLM_EMBEDDING_MODEL=openai/text-embedding-3-small
+
+# Cohere Reranker for better accuracy
+RERANKER_TYPE=cohere
+COHERE_BASE_URL=https://api.cohere.com/v1/rerank
+RERANKER_API_KEY=your-cohere-key
+RERANKER_MODEL=rerank-multilingual-v3.0
+
+# HugeGraph with authentication
+GRAPH_IP=prod-hugegraph.example.com
+GRAPH_PORT=8080
+GRAPH_NAME=production_graph
+GRAPH_USER=rag_user
+GRAPH_PWD=secure-password
+GRAPH_SPACE=prod_space
+
+# Optimized query parameters
+MAX_GRAPH_PATH=15
+MAX_GRAPH_ITEMS=50
+VECTOR_DIS_THRESHOLD=0.85
+TOPK_RETURN_RESULTS=30
+```
+
+### Local/Offline Setup (Ollama)
+
+```bash
+# Language
+LANGUAGE=EN
+
+# All local models via Ollama
+CHAT_LLM_TYPE=ollama/local
+EXTRACT_LLM_TYPE=ollama/local
+TEXT2GQL_LLM_TYPE=ollama/local
+EMBEDDING_TYPE=ollama/local
+
+# Ollama endpoints
+OLLAMA_CHAT_HOST=127.0.0.1
+OLLAMA_CHAT_PORT=11434
+OLLAMA_CHAT_LANGUAGE_MODEL=llama3.1:8b
+
+OLLAMA_EXTRACT_HOST=127.0.0.1
+OLLAMA_EXTRACT_PORT=11434
+OLLAMA_EXTRACT_LANGUAGE_MODEL=llama3.1:8b
+
+OLLAMA_TEXT2GQL_HOST=127.0.0.1
+OLLAMA_TEXT2GQL_PORT=11434
+OLLAMA_TEXT2GQL_LANGUAGE_MODEL=qwen2.5-coder:7b
+
+OLLAMA_EMBEDDING_HOST=127.0.0.1
+OLLAMA_EMBEDDING_PORT=11434
+OLLAMA_EMBEDDING_MODEL=nomic-embed-text
+
+# No reranker for offline setup
+RERANKER_TYPE=
+
+# Local HugeGraph
+GRAPH_IP=127.0.0.1
+GRAPH_PORT=8080
+GRAPH_NAME=hugegraph
+GRAPH_USER=admin
+GRAPH_PWD=admin
+```
+
+## Configuration Validation
+
+After modifying `.env`, verify your configuration:
+
+1. **Via Web UI**: Visit `http://localhost:8001` and check the settings panel
+2. **Via Python**:
+```python
+from hugegraph_llm.config import settings
+print(settings.llm_config)
+print(settings.hugegraph_config)
+```
+3. **Via REST API**:
+```bash
+curl http://localhost:8001/config
+```
+
+## Troubleshooting
+
+| Issue | Solution |
+|-------|----------|
+| "API key not found" | Check `*_API_KEY` is set correctly in `.env` |
+| "Connection refused" | Verify `GRAPH_IP` and `GRAPH_PORT` are correct |
+| "Model not found" | For Ollama: run `ollama pull <model-name>` |
+| "Rate limit exceeded" | Reduce `MAX_GRAPH_ITEMS` or use different API keys |
+| "Embedding dimension mismatch" | Delete existing vectors and rebuild with 
correct model |
+
+## See Also
+
+- [HugeGraph-LLM Overview](./hugegraph-llm.md)
+- [REST API Reference](./rest-api.md)
+- [Quick Start Guide](./quick_start.md)
diff --git a/content/en/docs/quickstart/hugegraph-ai/hugegraph-llm.md b/content/en/docs/quickstart/hugegraph-ai/hugegraph-llm.md
index b64b1fa7..171c3cf4 100644
--- a/content/en/docs/quickstart/hugegraph-ai/hugegraph-llm.md
+++ b/content/en/docs/quickstart/hugegraph-ai/hugegraph-llm.md
@@ -224,7 +224,80 @@ After running the demo, configuration files are automatically generated:
 > [!NOTE]
 > Configuration changes are automatically saved when using the web interface. 
 > For manual changes, simply refresh the page to load updates.
 
-**LLM Provider Support**: This project uses [LiteLLM](https://docs.litellm.ai/docs/providers) for multi-provider LLM support.
+### LLM Provider Configuration
+
+This project uses [LiteLLM](https://docs.litellm.ai/docs/providers) for multi-provider LLM support, enabling unified access to OpenAI, Anthropic, Google, Cohere, and 100+ other providers.
+
+#### Option 1: Direct LLM Connection (OpenAI, Ollama)
+
+```bash
+# .env configuration
+chat_llm_type=openai           # or ollama/local
+openai_api_key=sk-xxx
+openai_api_base=https://api.openai.com/v1
+openai_language_model=gpt-4o-mini
+openai_max_tokens=4096
+```
+
+#### Option 2: LiteLLM Multi-Provider Support
+
+LiteLLM acts as a unified proxy for multiple LLM providers:
+
+```bash
+# .env configuration
+chat_llm_type=litellm
+extract_llm_type=litellm
+text2gql_llm_type=litellm
+
+# LiteLLM settings
+litellm_api_base=http://localhost:4000  # LiteLLM proxy server
+litellm_api_key=sk-1234                  # LiteLLM API key
+
+# Model selection (provider/model format)
+litellm_language_model=anthropic/claude-3-5-sonnet-20241022
+litellm_max_tokens=4096
+```
+
+**Supported Providers**: OpenAI, Anthropic, Google (Gemini), Azure, Cohere, Bedrock, Vertex AI, Hugging Face, and more.
+
+For full provider list and configuration details, visit [LiteLLM Providers](https://docs.litellm.ai/docs/providers).
+
+### Reranker Configuration
+
+Rerankers improve RAG accuracy by reordering retrieved results. Supported providers:
+
+```bash
+# Cohere Reranker
+reranker_type=cohere
+cohere_api_key=your-cohere-key
+cohere_rerank_model=rerank-english-v3.0
+
+# SiliconFlow Reranker
+reranker_type=siliconflow
+siliconflow_api_key=your-siliconflow-key
+siliconflow_rerank_model=BAAI/bge-reranker-v2-m3
+```
+
+### Text2Gremlin Configuration
+
+Convert natural language to Gremlin queries:
+
+```python
+from hugegraph_llm.operators.graph_rag_task import Text2GremlinPipeline
+
+# Initialize pipeline
+text2gremlin = Text2GremlinPipeline()
+
+# Generate Gremlin query
+result = (
+    text2gremlin
+    .query_to_gremlin(query="Find all movies directed by Francis Ford Coppola")
+    .execute_gremlin_query()
+    .run()
+)
+```
+
+**REST API Endpoint**: See the [REST API documentation](./rest-api.md) for HTTP endpoint details.
 
 ## 📚 Additional Resources
 
diff --git a/content/en/docs/quickstart/hugegraph-ai/hugegraph-ml.md b/content/en/docs/quickstart/hugegraph-ai/hugegraph-ml.md
new file mode 100644
index 00000000..18ff1529
--- /dev/null
+++ b/content/en/docs/quickstart/hugegraph-ai/hugegraph-ml.md
@@ -0,0 +1,289 @@
+---
+title: "HugeGraph-ML"
+linkTitle: "HugeGraph-ML"
+weight: 2
+---
+
+HugeGraph-ML integrates HugeGraph with popular graph learning libraries, enabling end-to-end machine learning workflows directly on graph data.
+
+## Overview
+
+`hugegraph-ml` provides a unified interface for applying graph neural networks and machine learning algorithms to data stored in HugeGraph. It eliminates the need for complex data export/import pipelines by seamlessly converting HugeGraph data to formats compatible with leading ML frameworks.
+
+### Key Features
+
+- **Direct HugeGraph Integration**: Query graph data directly from HugeGraph without manual exports
+- **22 Implemented Algorithms**: Comprehensive coverage of node classification, graph classification, embedding, and link prediction
+- **DGL Backend**: Leverages Deep Graph Library (DGL) for efficient training
+- **End-to-End Workflows**: From data loading to model training and evaluation
+- **Modular Tasks**: Reusable task abstractions for common ML scenarios
+
+## Prerequisites
+
+- **Python**: 3.9+ (standalone module)
+- **HugeGraph Server**: 1.0+ (recommended: 1.5+)
+- **UV Package Manager**: 0.7+ (for dependency management)
+
+## Installation
+
+### 1. Start HugeGraph Server
+
+```bash
+# Option 1: Docker (recommended)
+docker run -itd --name=hugegraph -p 8080:8080 hugegraph/hugegraph
+
+# Option 2: Binary packages
+# See https://hugegraph.apache.org/docs/download/download/
+```
+
+### 2. Clone and Setup
+
+```bash
+git clone https://github.com/apache/incubator-hugegraph-ai.git
+cd incubator-hugegraph-ai/hugegraph-ml
+```
+
+### 3. Install Dependencies
+
+```bash
+# uv sync automatically creates .venv and installs all dependencies
+uv sync
+
+# Activate virtual environment
+source .venv/bin/activate
+```
+
+### 4. Navigate to Source Directory
+
+```bash
+cd ./src
+```
+
+> [!NOTE]
+> All examples assume you're in the activated virtual environment.
+
+## Implemented Algorithms
+
+HugeGraph-ML currently implements **22 graph machine learning algorithms** across multiple categories:
+
+### Node Classification (11 algorithms)
+
+Predict labels for graph nodes based on network structure and features.
+
+| Algorithm | Paper | Description |
+|-----------|-------|-------------|
+| **GCN** | [Kipf & Welling, 2017](https://arxiv.org/abs/1609.02907) | Graph Convolutional Networks |
+| **GAT** | [Veličković et al., 2018](https://arxiv.org/abs/1710.10903) | Graph Attention Networks |
+| **GraphSAGE** | [Hamilton et al., 2017](https://arxiv.org/abs/1706.02216) | Inductive representation learning |
+| **APPNP** | [Klicpera et al., 2019](https://arxiv.org/abs/1810.05997) | Personalized PageRank propagation |
+| **AGNN** | [Thekumparampil et al., 2018](https://arxiv.org/abs/1803.03735) | Attention-based GNN |
+| **ARMA** | [Bianchi et al., 2019](https://arxiv.org/abs/1901.01343) | Autoregressive moving average filters |
+| **DAGNN** | [Liu et al., 2020](https://arxiv.org/abs/2007.09296) | Deep adaptive graph neural networks |
+| **DeeperGCN** | [Li et al., 2020](https://arxiv.org/abs/2006.07739) | Very deep GCN architectures |
+| **GRAND** | [Feng et al., 2020](https://arxiv.org/abs/2005.11079) | Graph random neural networks |
+| **JKNet** | [Xu et al., 2018](https://arxiv.org/abs/1806.03536) | Jumping knowledge networks |
+| **Cluster-GCN** | [Chiang et al., 2019](https://arxiv.org/abs/1905.07953) | Scalable GCN training via clustering |
+
+### Graph Classification (2 algorithms)
+
+Classify entire graphs based on their structure and node features.
+
+| Algorithm | Paper | Description |
+|-----------|-------|-------------|
+| **DiffPool** | [Ying et al., 2018](https://arxiv.org/abs/1806.08804) | Differentiable graph pooling |
+| **GIN** | [Xu et al., 2019](https://arxiv.org/abs/1810.00826) | Graph isomorphism networks |
+
+### Graph Embedding (3 algorithms)
+
+Learn unsupervised node representations for downstream tasks.
+
+| Algorithm | Paper | Description |
+|-----------|-------|-------------|
+| **DGI** | [Veličković et al., 2019](https://arxiv.org/abs/1809.10341) | Deep graph infomax (contrastive learning) |
+| **BGRL** | [Thakoor et al., 2021](https://arxiv.org/abs/2102.06514) | Bootstrapped graph representation learning |
+| **GRACE** | [Zhu et al., 2020](https://arxiv.org/abs/2006.04131) | Graph contrastive learning |
+
+### Link Prediction (3 algorithms)
+
+Predict missing or future connections in graphs.
+
+| Algorithm | Paper | Description |
+|-----------|-------|-------------|
+| **SEAL** | [Zhang & Chen, 2018](https://arxiv.org/abs/1802.09691) | Subgraph extraction and labeling |
+| **P-GNN** | [You et al., 2019](http://proceedings.mlr.press/v97/you19b/you19b.pdf) | Position-aware GNN |
+| **GATNE** | [Cen et al., 2019](https://arxiv.org/abs/1905.01669) | Attributed multiplex heterogeneous network embedding |
+
+### Fraud Detection (2 algorithms)
+
+Detect anomalous nodes in graphs (e.g., fraudulent accounts).
+
+| Algorithm | Paper | Description |
+|-----------|-------|-------------|
+| **CARE-GNN** | [Dou et al., 2020](https://arxiv.org/abs/2008.08692) | Camouflage-resistant GNN |
+| **BGNN** | [Zheng et al., 2021](https://arxiv.org/abs/2101.08543) | Bipartite graph neural network |
+
+### Post-Processing (1 algorithm)
+
+Improve predictions via label propagation.
+
+| Algorithm | Paper | Description |
+|-----------|-------|-------------|
+| **C&S** | [Huang et al., 2020](https://arxiv.org/abs/2010.13993) | Correct & Smooth (prediction refinement) |
+
+## Usage Examples
+
+### Example 1: Node Embedding with DGI
+
+Perform unsupervised node embedding on the Cora dataset using Deep Graph Infomax (DGI).
+
+#### Step 1: Import Dataset (if needed)
+
+```python
+from hugegraph_ml.utils.dgl2hugegraph_utils import import_graph_from_dgl
+
+# Import Cora dataset from DGL to HugeGraph
+import_graph_from_dgl("cora")
+```
+
+#### Step 2: Convert Graph Data
+
+```python
+from hugegraph_ml.data.hugegraph2dgl import HugeGraph2DGL
+
+# Convert HugeGraph data to DGL format
+hg2d = HugeGraph2DGL()
+graph = hg2d.convert_graph(vertex_label="CORA_vertex", edge_label="CORA_edge")
+```
+
+#### Step 3: Initialize Model
+
+```python
+from hugegraph_ml.models.dgi import DGI
+
+# Create DGI model
+model = DGI(n_in_feats=graph.ndata["feat"].shape[1])
+```
+
+#### Step 4: Train and Generate Embeddings
+
+```python
+from hugegraph_ml.tasks.node_embed import NodeEmbed
+
+# Train model and generate node embeddings
+node_embed_task = NodeEmbed(graph=graph, model=model)
+embedded_graph = node_embed_task.train_and_embed(
+    add_self_loop=True,
+    n_epochs=300,
+    patience=30
+)
+```
+
+#### Step 5: Downstream Task (Node Classification)
+
+```python
+from hugegraph_ml.models.mlp import MLPClassifier
+from hugegraph_ml.tasks.node_classify import NodeClassify
+
+# Use embeddings for node classification
+model = MLPClassifier(
+    n_in_feat=embedded_graph.ndata["feat"].shape[1],
+    n_out_feat=embedded_graph.ndata["label"].unique().shape[0]
+)
+node_clf_task = NodeClassify(graph=embedded_graph, model=model)
+node_clf_task.train(lr=1e-3, n_epochs=400, patience=40)
+print(node_clf_task.evaluate())
+```
+
+**Expected Output:**
+```python
+{'accuracy': 0.82, 'loss': 0.5714246034622192}
+```
+
+**Full Example**: See [dgi_example.py](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-ml/src/hugegraph_ml/examples/dgi_example.py)
+
+### Example 2: Node Classification with GRAND
+
+Directly classify nodes using the GRAND model (no separate embedding step needed).
+
+```python
+from hugegraph_ml.data.hugegraph2dgl import HugeGraph2DGL
+from hugegraph_ml.models.grand import GRAND
+from hugegraph_ml.tasks.node_classify import NodeClassify
+
+# Load graph
+hg2d = HugeGraph2DGL()
+graph = hg2d.convert_graph(vertex_label="CORA_vertex", edge_label="CORA_edge")
+
+# Initialize GRAND model
+model = GRAND(
+    n_in_feats=graph.ndata["feat"].shape[1],
+    n_out_feats=graph.ndata["label"].unique().shape[0]
+)
+
+# Train and evaluate
+node_clf_task = NodeClassify(graph=graph, model=model)
+node_clf_task.train(lr=1e-2, n_epochs=1500, patience=100)
+print(node_clf_task.evaluate())
+```
+
+**Full Example**: See [grand_example.py](https://github.com/apache/incubator-hugegraph-ai/blob/main/hugegraph-ml/src/hugegraph_ml/examples/grand_example.py)
+
+## Core Components
+
+### HugeGraph2DGL Converter
+
+Seamlessly converts HugeGraph data to DGL graph format:
+
+```python
+from hugegraph_ml.data.hugegraph2dgl import HugeGraph2DGL
+
+hg2d = HugeGraph2DGL()
+graph = hg2d.convert_graph(
+    vertex_label="person",      # Vertex label to extract
+    edge_label="knows",         # Edge label to extract
+    directed=False              # Graph directionality
+)
+```
+
+### Task Abstractions
+
+Reusable task objects for common ML workflows:
+
+| Task | Class | Purpose |
+|------|-------|---------|
+| Node Embedding | `NodeEmbed` | Generate unsupervised node embeddings |
+| Node Classification | `NodeClassify` | Predict node labels |
+| Graph Classification | `GraphClassify` | Predict graph-level labels |
+| Link Prediction | `LinkPredict` | Predict missing edges |
+
+## Best Practices
+
+1. **Start with Small Datasets**: Test your pipeline on small graphs (e.g., Cora, Citeseer) before scaling
+2. **Use Early Stopping**: Set `patience` parameter to avoid overfitting
+3. **Tune Hyperparameters**: Adjust learning rate, hidden dimensions, and epochs based on dataset size
+4. **Monitor GPU Memory**: Large graphs may require batch training (e.g., Cluster-GCN)
+5. **Validate Schema**: Ensure vertex/edge labels match your HugeGraph schema
+
+## Troubleshooting
+
+| Issue | Solution |
+|-------|----------|
+| "Connection refused" to HugeGraph | Verify server is running on port 8080 |
+| CUDA out of memory | Reduce batch size or use CPU-only mode |
+| Model convergence issues | Try different learning rates (1e-2, 1e-3, 1e-4) |
+| ImportError for DGL | Run `uv sync` to reinstall dependencies |
+
+## Contributing
+
+To add a new algorithm:
+
+1. Create model file in `src/hugegraph_ml/models/your_model.py`
+2. Inherit from base model class and implement `forward()` method
+3. Add example script in `src/hugegraph_ml/examples/`
+4. Update this documentation with algorithm details
+
+## See Also
+
+- [HugeGraph-AI Overview](../_index.md) - Full AI ecosystem
+- [HugeGraph-LLM](./hugegraph-llm.md) - RAG and knowledge graph construction
+- [GitHub Repository](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-ml) - Source code and examples
diff --git a/content/en/docs/quickstart/hugegraph-ai/quick_start.md b/content/en/docs/quickstart/hugegraph-ai/quick_start.md
index 58e36778..04852db5 100644
--- a/content/en/docs/quickstart/hugegraph-ai/quick_start.md
+++ b/content/en/docs/quickstart/hugegraph-ai/quick_start.md
@@ -207,3 +207,63 @@ graph TD;
 # 5. Graph Tools
 
 Input Gremlin queries to execute corresponding operations.
+
+# 6. Language Switching (v1.5.0+)
+
+HugeGraph-LLM supports bilingual prompts for improved accuracy across languages.
+
+### Switching Between English and Chinese
+
+The system language affects:
+- **System prompts**: Internal prompts used by the LLM
+- **Keyword extraction**: Language-specific extraction logic
+- **Answer generation**: Response formatting and style
+
+#### Configuration Method 1: Environment Variable
+
+Edit your `.env` file:
+
+```bash
+# English prompts (default)
+LANGUAGE=EN
+
+# Chinese prompts
+LANGUAGE=CN
+```
+
+Restart the service after changing the language setting.
+
+#### Configuration Method 2: Web UI (Dynamic)
+
+If available in your deployment, use the settings panel in the Web UI to switch languages without restarting:
+
+1. Navigate to the **Settings** or **Configuration** tab
+2. Select **Language**: `EN` or `CN`
+3. Click **Save** - changes apply immediately
+
+#### Language-Specific Behavior
+
+| Language | Keyword Extraction | Answer Style | Use Case |
+|----------|-------------------|--------------|----------|
+| `EN` | English NLP models | Professional, concise | International users, English documents |
+| `CN` | Chinese NLP models | Natural Chinese phrasing | Chinese users, Chinese documents |
+
+> [!TIP]
+> Match the `LANGUAGE` setting to your primary document language for best RAG accuracy.
+
+### REST API Language Override
+
+When using the REST API, you can specify custom prompts per request to override the default language setting:
+
+```bash
+curl -X POST http://localhost:8001/rag \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "告诉我关于阿尔·帕西诺的信息",
+    "graph_only": true,
+    "keywords_extract_prompt": "请从以下文本中提取关键实体...",
+    "answer_prompt": "请根据以下上下文回答问题..."
+  }'
+```
+
+See the [REST API Reference](./rest-api.md) for complete parameter details.
diff --git a/content/en/docs/quickstart/hugegraph-ai/rest-api.md b/content/en/docs/quickstart/hugegraph-ai/rest-api.md
new file mode 100644
index 00000000..484afdac
--- /dev/null
+++ b/content/en/docs/quickstart/hugegraph-ai/rest-api.md
@@ -0,0 +1,428 @@
+---
+title: "REST API Reference"
+linkTitle: "REST API"
+weight: 5
+---
+
+HugeGraph-LLM provides REST API endpoints for integrating RAG and Text2Gremlin capabilities into your applications.
+
+## Base URL
+
+```
+http://localhost:8001
+```
+
+Change host/port as configured when starting the service:
+```bash
+python -m hugegraph_llm.demo.rag_demo.app --host 127.0.0.1 --port 8001
+```
+
+## Authentication
+
+Currently, the API supports optional token-based authentication:
+
+```bash
+# Enable authentication in .env
+ENABLE_LOGIN=true
+USER_TOKEN=your-user-token
+ADMIN_TOKEN=your-admin-token
+```
+
+Pass tokens in request headers:
+```bash
+Authorization: Bearer <token>
+```
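+
+The same header can be attached from Python's standard library (a minimal sketch; the token value is a placeholder and the request is only built, not sent):
+
```python
from urllib.request import Request

# Build an authenticated POST request against the RAG endpoint
req = Request(
    "http://localhost:8001/rag",
    data=b'{"query": "Tell me about Al Pacino"}',
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-user-token",
    },
    method="POST",
)
print(req.get_header("Authorization"))  # Bearer your-user-token
```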
+
+---
+
+## RAG Endpoints
+
+### 1. Complete RAG Query
+
+**POST** `/rag`
+
+Execute a full RAG pipeline including keyword extraction, graph retrieval, vector search, reranking, and answer generation.
+
+#### Request Body
+
+```json
+{
+  "query": "Tell me about Al Pacino's movies",
+  "raw_answer": false,
+  "vector_only": false,
+  "graph_only": true,
+  "graph_vector_answer": false,
+  "graph_ratio": 0.5,
+  "rerank_method": "cohere",
+  "near_neighbor_first": false,
+  "gremlin_tmpl_num": 5,
+  "max_graph_items": 30,
+  "topk_return_results": 20,
+  "vector_dis_threshold": 0.9,
+  "topk_per_keyword": 1,
+  "custom_priority_info": "",
+  "answer_prompt": "",
+  "keywords_extract_prompt": "",
+  "gremlin_prompt": "",
+  "client_config": {
+    "url": "127.0.0.1:8080",
+    "graph": "hugegraph",
+    "user": "admin",
+    "pwd": "admin",
+    "gs": ""
+  }
+}
+```
+
+**Parameters:**
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `query` | string | Yes | - | User's natural language question |
+| `raw_answer` | boolean | No | false | Return LLM answer without retrieval |
+| `vector_only` | boolean | No | false | Use only vector search (no graph) |
+| `graph_only` | boolean | No | false | Use only graph retrieval (no vector) |
+| `graph_vector_answer` | boolean | No | false | Combine graph and vector results |
+| `graph_ratio` | float | No | 0.5 | Ratio of graph vs vector results (0-1) |
+| `rerank_method` | string | No | "" | Reranker: "cohere", "siliconflow", "" |
+| `near_neighbor_first` | boolean | No | false | Prioritize direct neighbors |
+| `gremlin_tmpl_num` | integer | No | 5 | Number of Gremlin templates to try |
+| `max_graph_items` | integer | No | 30 | Max items from graph retrieval |
+| `topk_return_results` | integer | No | 20 | Top-K after reranking |
+| `vector_dis_threshold` | float | No | 0.9 | Vector similarity threshold (0-1) |
+| `topk_per_keyword` | integer | No | 1 | Top-K vectors per keyword |
+| `custom_priority_info` | string | No | "" | Custom context to prioritize |
+| `answer_prompt` | string | No | "" | Custom answer generation prompt |
+| `keywords_extract_prompt` | string | No | "" | Custom keyword extraction prompt |
+| `gremlin_prompt` | string | No | "" | Custom Gremlin generation prompt |
+| `client_config` | object | No | null | Override graph connection settings |
+
+#### Response
+
+```json
+{
+  "query": "Tell me about Al Pacino's movies",
+  "graph_only": {
+    "answer": "Al Pacino starred in The Godfather (1972), directed by Francis Ford Coppola...",
+    "context": ["The Godfather is a 1972 crime film...", "..."],
+    "graph_paths": ["..."],
+    "keywords": ["Al Pacino", "movies"]
+  }
+}
+```
+
+#### Example (curl)
+
+```bash
+curl -X POST http://localhost:8001/rag \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "Tell me about Al Pacino",
+    "graph_only": true,
+    "max_graph_items": 30
+  }'
+```
+
+### 2. Graph Retrieval Only
+
+**POST** `/rag/graph`
+
+Retrieve graph context without generating an answer. Useful for debugging or custom processing.
+
+#### Request Body
+
+```json
+{
+  "query": "Al Pacino movies",
+  "max_graph_items": 30,
+  "topk_return_results": 20,
+  "vector_dis_threshold": 0.9,
+  "topk_per_keyword": 1,
+  "gremlin_tmpl_num": 5,
+  "rerank_method": "cohere",
+  "near_neighbor_first": false,
+  "custom_priority_info": "",
+  "gremlin_prompt": "",
+  "get_vertex_only": false,
+  "client_config": {
+    "url": "127.0.0.1:8080",
+    "graph": "hugegraph",
+    "user": "admin",
+    "pwd": "admin",
+    "gs": ""
+  }
+}
+```
+
+**Additional Parameter:**
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `get_vertex_only` | boolean | false | Return only vertex IDs without full details |
+
+#### Response
+
+```json
+{
+  "graph_recall": {
+    "query": "Al Pacino movies",
+    "keywords": ["Al Pacino", "movies"],
+    "match_vids": ["1:Al Pacino", "2:The Godfather"],
+    "graph_result_flag": true,
+    "gremlin": "g.V('1:Al Pacino').outE().inV().limit(30)",
+    "graph_result": [
+      {"id": "1:Al Pacino", "label": "person", "properties": {"name": "Al Pacino"}},
+      {"id": "2:The Godfather", "label": "movie", "properties": {"title": "The Godfather"}}
+    ],
+    "vertex_degree_list": [5, 12]
+  }
+}
+```
+
+#### Example (curl)
+
+```bash
+curl -X POST http://localhost:8001/rag/graph \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "Al Pacino",
+    "max_graph_items": 30,
+    "get_vertex_only": false
+  }'
+```
+
+---
+
+## Text2Gremlin Endpoint
+
+### 3. Natural Language to Gremlin
+
+**POST** `/text2gremlin`
+
+Convert natural language queries to executable Gremlin commands.
+
+#### Request Body
+
+```json
+{
+  "query": "Find all movies directed by Francis Ford Coppola",
+  "example_num": 5,
+  "gremlin_prompt": "",
+  "output_types": ["GREMLIN", "RESULT"],
+  "client_config": {
+    "url": "127.0.0.1:8080",
+    "graph": "hugegraph",
+    "user": "admin",
+    "pwd": "admin",
+    "gs": ""
+  }
+}
+```
+
+**Parameters:**
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `query` | string | Yes | - | Natural language query |
+| `example_num` | integer | No | 5 | Number of example templates to use |
+| `gremlin_prompt` | string | No | "" | Custom prompt for Gremlin generation |
+| `output_types` | array | No | null | Output types: ["GREMLIN", "RESULT", "CYPHER"] |
+| `client_config` | object | No | null | Graph connection override |
+
+**Output Types:**
+- `GREMLIN`: Generated Gremlin query
+- `RESULT`: Execution result from graph
+- `CYPHER`: Cypher query (if requested)
+
+#### Response
+
+```json
+{
+  "gremlin": "g.V().has('person','name','Francis Ford Coppola').out('directed').hasLabel('movie').values('title')",
+  "result": [
+    "The Godfather",
+    "The Godfather Part II",
+    "Apocalypse Now"
+  ]
+}
+```
+
+#### Example (curl)
+
+```bash
+curl -X POST http://localhost:8001/text2gremlin \
+  -H "Content-Type: application/json" \
+  -d '{
+    "query": "Find all movies directed by Francis Ford Coppola",
+    "output_types": ["GREMLIN", "RESULT"]
+  }'
+```
+
+---
+
+## Configuration Endpoints
+
+### 4. Update Graph Connection
+
+**POST** `/config/graph`
+
+Dynamically update HugeGraph connection settings.
+
+#### Request Body
+
+```json
+{
+  "url": "127.0.0.1:8080",
+  "name": "hugegraph",
+  "user": "admin",
+  "pwd": "admin",
+  "gs": ""
+}
+```
+
+#### Response
+
+```json
+{
+  "status_code": 201,
+  "message": "Graph configuration updated successfully"
+}
+```
+
+### 5. Update LLM Configuration
+
+**POST** `/config/llm`
+
+Update chat/extract LLM settings at runtime.
+
+#### Request Body (OpenAI)
+
+```json
+{
+  "llm_type": "openai",
+  "api_key": "sk-your-api-key",
+  "api_base": "https://api.openai.com/v1",
+  "language_model": "gpt-4o-mini",
+  "max_tokens": 4096
+}
+```
+
+#### Request Body (Ollama)
+
+```json
+{
+  "llm_type": "ollama/local",
+  "host": "127.0.0.1",
+  "port": 11434,
+  "language_model": "llama3.1:8b"
+}
+```
+
+### 6. Update Embedding Configuration
+
+**POST** `/config/embedding`
+
+Update embedding model settings.
+
+#### Request Body
+
+```json
+{
+  "llm_type": "openai",
+  "api_key": "sk-your-api-key",
+  "api_base": "https://api.openai.com/v1",
+  "language_model": "text-embedding-3-small"
+}
+```
+
+### 7. Update Reranker Configuration
+
+**POST** `/config/rerank`
+
+Configure reranker settings.
+
+#### Request Body (Cohere)
+
+```json
+{
+  "reranker_type": "cohere",
+  "api_key": "your-cohere-key",
+  "reranker_model": "rerank-multilingual-v3.0",
+  "cohere_base_url": "https://api.cohere.com/v1/rerank"
+}
+```
+
+#### Request Body (SiliconFlow)
+
+```json
+{
+  "reranker_type": "siliconflow",
+  "api_key": "your-siliconflow-key",
+  "reranker_model": "BAAI/bge-reranker-v2-m3"
+}
+```
+
+---
+
+## Error Responses
+
+All endpoints return standard HTTP status codes:
+
+| Code | Meaning |
+|------|---------|
+| 200 | Success |
+| 201 | Created (config updated) |
+| 400 | Bad Request (invalid parameters) |
+| 500 | Internal Server Error |
+| 501 | Not Implemented |
+
+Error response format:
+```json
+{
+  "detail": "Error message describing what went wrong"
+}
+```
+
+---
+
+## Python Client Example
+
+```python
+import requests
+
+BASE_URL = "http://localhost:8001"
+
+# 1. Configure graph connection
+graph_config = {
+    "url": "127.0.0.1:8080",
+    "name": "hugegraph",
+    "user": "admin",
+    "pwd": "admin"
+}
+requests.post(f"{BASE_URL}/config/graph", json=graph_config)
+
+# 2. Execute RAG query
+rag_request = {
+    "query": "Tell me about Al Pacino",
+    "graph_only": True,
+    "max_graph_items": 30
+}
+response = requests.post(f"{BASE_URL}/rag", json=rag_request)
+print(response.json())
+
+# 3. Generate Gremlin from natural language
+text2gql_request = {
+    "query": "Find all directors who worked with Al Pacino",
+    "output_types": ["GREMLIN", "RESULT"]
+}
+response = requests.post(f"{BASE_URL}/text2gremlin", json=text2gql_request)
+print(response.json())
+```
+
+---
+
+## See Also
+
+- [Configuration Reference](./config-reference.md) - Complete .env configuration guide
+- [HugeGraph-LLM Overview](./hugegraph-llm.md) - Architecture and features
+- [Quick Start Guide](./quick_start.md) - Getting started with the Web UI
