This is an automated email from the ASF dual-hosted git repository.

vgalaxies pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/master by this push:
     new fa6ee620 docs: AGENTS/README & refactor server/toolchain docs (#446)
fa6ee620 is described below

commit fa6ee6202ad539009d703fc5d51f455e1a86e5f2
Author: imbajin <[email protected]>
AuthorDate: Sat Jan 31 23:59:16 2026 +0800

    docs: AGENTS/README & refactor server/toolchain docs (#446)
---
 AGENTS.md                                          | 203 ++++++----------
 README.md                                          | 259 +++++++++++++++------
 config.toml                                        |   2 +-
 content/cn/docs/clients/restful-api/auth.md        |   4 +
 content/cn/docs/config/config-option.md            |  42 +++-
 .../docs/quickstart/hugegraph/hugegraph-server.md  |  32 ++-
 .../docs/quickstart/toolchain/hugegraph-hubble.md  |  23 ++
 .../toolchain/hugegraph-spark-connector.md         | 182 +++++++++++++++
 .../docs/quickstart/toolchain/hugegraph-tools.md   |  42 +++-
 content/en/docs/clients/restful-api/auth.md        |   4 +
 content/en/docs/config/config-option.md            |  42 +++-
 .../docs/quickstart/hugegraph/hugegraph-server.md  |  32 ++-
 .../docs/quickstart/toolchain/hugegraph-hubble.md  |  23 ++
 .../toolchain/hugegraph-spark-connector.md         | 182 +++++++++++++++
 .../docs/quickstart/toolchain/hugegraph-tools.md   |  60 +++--
 contribution.md                                    |  17 +-
 16 files changed, 891 insertions(+), 258 deletions(-)

diff --git a/AGENTS.md b/AGENTS.md
index 9108afe5..c4a05451 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -1,171 +1,110 @@
-# AI Development Agent Instructions
+# AGENTS.md
 
 This file provides guidance to AI coding assistants (Claude Code, Cursor, 
GitHub Copilot, etc.) when working with code in this repository.
 
 ## Project Overview
 
-This is the **Apache HugeGraph documentation website** repository 
(`hugegraph-doc`), built with Hugo static site generator using the Docsy theme. 
The site provides comprehensive documentation for the HugeGraph graph database 
system, including quickstart guides, API references, configuration guides, and 
contribution guidelines.
+Apache HugeGraph documentation website built with the Hugo static site generator 
and the Docsy theme. The site is bilingual (Chinese/English) and covers the 
complete HugeGraph graph database ecosystem.
 
-The documentation is multilingual, supporting both **Chinese (cn)** and 
**English (en)** content.
-
-## Development Setup
-
-### Prerequisites
-
-1. **Hugo Extended** (v0.95.0 recommended, v0.102.3 used in CI)
-   - Must be the "extended" version (includes SASS/SCSS support)
-   - Download from: https://github.com/gohugoio/hugo/releases
-   - Install location: `/usr/bin` or `/usr/local/bin`
-
-2. **Node.js and npm** (v16+ as specified in CI)
-
-### Quick Start
+## Development Commands
 
 ```bash
-# Install npm dependencies (autoprefixer, postcss, postcss-cli)
+# Install dependencies
 npm install
 
-# Start local development server (with auto-reload)
+# Start development server (auto-reload enabled)
 hugo server
 
-# Custom server with different ip/port
-hugo server -b http://127.0.0.1 -p 80 --bind=0.0.0.0
-
 # Build production site (output to ./public)
 hugo --minify
-```
-
-## Project Structure
-
-### Key Directories
-
-- **`content/`** - All documentation content in Markdown
-  - `content/cn/` - Chinese (simplified) documentation
-  - `content/en/` - English documentation
-  - Each language has parallel structure: `docs/`, `blog/`, `community/`, 
`about/`
-
-- **`themes/docsy/`** - The Docsy Hugo theme (submodule or vendored)
-
-- **`static/`** - Static assets (images, files) served directly
-
-- **`assets/`** - Assets processed by Hugo pipelines (SCSS, images for 
processing)
-
-- **`layouts/`** - Custom Hugo template overrides for the Docsy theme
-
-- **`public/`** - Generated site output (gitignored, created by `hugo` build)
-
-- **`dist/`** - Additional distribution files
-
-### Important Files
-
-- **`config.toml`** - Main site configuration
-  - Defines language settings (cn as default, en available)
-  - Menu structure and navigation
-  - Theme parameters and UI settings
-  - Currently shows version `0.13`
-
-- **`package.json`** - Node.js dependencies for CSS processing (postcss, 
autoprefixer)
 
-- **`.editorconfig`** - Code style rules (UTF-8, LF line endings, spaces for 
indentation)
+# Clean build
+rm -rf public/
 
-- **`contribution.md`** - Contributing guide (Chinese/English mixed)
+# Production build with garbage collection
+HUGO_ENV="production" hugo --gc
 
-- **`maturity.md`** - Project maturity assessment documentation
+# Custom server configuration
+hugo server -b http://127.0.0.1 -p 80 --bind=0.0.0.0
+```
 
-## Content Organization
+## Prerequisites
 
-Documentation is organized into major sections:
+- **Hugo Extended** v0.95.0 recommended (v0.102.3 in CI) - must be the 
"extended" version for SASS/SCSS support
+- **Node.js** v16+ and npm
+- Download Hugo from: https://github.com/gohugoio/hugo/releases
 
-- **`quickstart/`** - Getting started guides for HugeGraph components (Server, 
Loader, Hubble, Tools, Computer, AI)
-- **`config/`** - Configuration documentation
-- **`clients/`** - Client API documentation (Gremlin Console, RESTful API)
-- **`guides/`** - User guides and tutorials
-- **`performance/`** - Performance benchmarks and optimization
-- **`language/`** - Query language documentation
-- **`contribution-guidelines/`** - How to contribute to HugeGraph
-- **`changelog/`** - Release notes and version history
-- **`download/`** - Download links and instructions
+## Architecture
 
-## Common Tasks
+```
+content/
+├── cn/          # Chinese documentation (default language)
+│   ├── docs/    # Main documentation
+│   ├── blog/    # Blog posts
+│   ├── community/
+│   └── about/
+└── en/          # English documentation (parallel structure)
+
+themes/docsy/    # Docsy theme (submodule)
+layouts/         # Custom template overrides
+assets/          # Processed assets (SCSS, images)
+static/          # Static files served directly
+config.toml      # Main site configuration
+```
 
-### Building and Testing
+### Content Structure
 
-```bash
-# Build for production (with minification)
-hugo --minify
+Documentation sections in `content/{cn,en}/docs/`:
+- `quickstart/` - Getting started guides for HugeGraph components
+- `config/` - Configuration documentation
+- `clients/` - Client API documentation (Gremlin, RESTful)
+- `guides/` - User guides and tutorials
+- `performance/` - Benchmarks and optimization
+- `language/` - Query language docs
+- `contribution-guidelines/` - Contributing guides
+- `changelog/` - Release notes
+- `download/` - Download instructions
 
-# Clean previous build
-rm -rf public/
+## Key Configuration Files
 
-# Build with specific environment
-HUGO_ENV="production" hugo --gc
-```
+- `config.toml` - Site-wide settings, language config, menu structure, version (currently 1.7)
+- `package.json` - Node dependencies for CSS processing (postcss, 
autoprefixer, mermaid)
+- `.editorconfig` - UTF-8, LF line endings, spaces for indentation
 
-### Working with Content
+## Working with Content
 
 When editing documentation:
-
 1. Maintain parallel structure between `content/cn/` and `content/en/`
-2. Use Markdown format for all documentation files
-3. Include front matter in each file (title, weight, description)
-4. For translated content, ensure both Chinese and English versions are updated
-
-### Theme Customization
-
-- Global site config: `config.toml` (root directory)
-- Theme-specific config: `themes/docsy/config.toml`
-- Custom layouts: Place in `layouts/` to override theme defaults
-- Custom styles: Modify files in `assets/` directory
-
-Refer to [Docsy documentation](https://www.docsy.dev/docs/) for theme 
customization details.
+2. Use Markdown with Hugo front matter (title, weight, description)
+3. For bilingual changes, update both Chinese and English versions
+4. Include mermaid diagrams where appropriate (mermaid.js is available)
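+
+A minimal front-matter sketch for point 2 above (the path and values are illustrative, not a real page; a real change needs the `cn/` twin at the same path):
+
+```bash
+# hypothetical example page; adjust path, title and weight to the actual section
+cat > content/en/docs/guides/example-guide.md <<'EOF'
+---
+title: "Example Guide"
+linkTitle: "Example Guide"
+weight: 10
+description: "One-line summary shown in the section index"
+---
+EOF
+```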
 
 ## Deployment
 
-The site uses GitHub Actions for CI/CD (`.github/workflows/hugo.yml`):
-
-1. **Triggers**: On push to `master` branch or pull requests
-2. **Build process**:
-   - Checkout with submodules (for themes)
-   - Setup Node v16 and Hugo v0.102.3 extended
-   - Run `npm i && hugo --minify`
-3. **Deployment**: Publishes to `asf-site` branch (GitHub Pages)
-
-The deployed site is hosted as part of Apache HugeGraph's documentation 
infrastructure.
-
-## HugeGraph Architecture Context
-
-This documentation covers the complete HugeGraph ecosystem:
-
-- **HugeGraph-Server** - Core graph database engine with REST API
-- **HugeGraph-Store** - Distributed storage engine with integrated computation
-- **HugeGraph-PD** - Placement Driver for metadata management
-- **HugeGraph-Toolchain**:
-  - Client (Java RESTful API client)
-  - Loader (data import tool)
-  - Hubble (web visualization platform)
-  - Tools (deployment and management utilities)
-- **HugeGraph-Computer** - Distributed graph processing system (OLAP)
-- **HugeGraph-AI** - Graph neural networks and LLM/RAG components
+- **CI/CD**: GitHub Actions (`.github/workflows/hugo.yml`)
+- **Trigger**: Push to `master` branch or pull requests
+- **Build**: `npm i && hugo --minify` with Node v16 and Hugo v0.102.3 extended
+- **Deploy**: Publishes to `asf-site` branch (GitHub Pages)
+- **PR Requirements**: Include screenshots showing before/after changes
 
-## Git Workflow
+## HugeGraph Ecosystem Context
 
-- **Main branch**: `master` (protected, triggers deployment)
-- **PR requirements**: Include screenshots showing before/after changes in 
documentation
-- **Commit messages**: Follow Apache commit conventions
-- Always create a new branch from `master` for changes
-- Deployment to `asf-site` branch is automated via GitHub Actions
+This documentation covers:
+- **HugeGraph-Server** - Core graph database with REST API
+- **HugeGraph-Store** - Distributed storage engine
+- **HugeGraph-PD** - Placement Driver for metadata
+- **Toolchain** - Client, Loader, Hubble (web UI), Tools
+- **HugeGraph-Computer** - Distributed OLAP graph processing
+- **HugeGraph-AI** - GNN, LLM/RAG components
 
 ## Troubleshooting
 
-**Error: "TOCSS: failed to transform scss/main.scss"**
-- Cause: Using standard Hugo instead of Hugo Extended
-- Solution: Install Hugo Extended version
+**"TOCSS: failed to transform scss/main.scss"**
+- Install Hugo Extended (not standard Hugo)
 
-**Error: Module/theme not found**
-- Cause: Git submodules not initialized
-- Solution: `git submodule update --init --recursive`
+**Theme/module not found**
+- Run: `git submodule update --init --recursive`
 
-**Build fails in CI but works locally**
-- Check Hugo version match (CI uses v0.102.3)
-- Ensure npm dependencies are installed
-- Verify Node.js version (CI uses v16)
+**CI build fails but works locally**
+- Match Hugo version (v0.102.3) and Node.js (v16)
+- Verify npm dependencies are installed
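+
+A quick environment sanity-check sketch (expected values follow the CI settings above):
+
+```bash
+hugo version          # output must contain "extended"; CI pins v0.102.3
+node --version        # CI uses v16
+git submodule status  # entries starting with '-' are uninitialized
+```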
diff --git a/README.md b/README.md
index 18656cd5..8f19005b 100644
--- a/README.md
+++ b/README.md
@@ -1,80 +1,201 @@
+# Apache HugeGraph Documentation Website
+
 [![Ask 
DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/apache/hugegraph-doc)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)
+[![Hugo](https://img.shields.io/badge/Hugo-Extended-ff4088?logo=hugo)](https://gohugo.io/)
+
+---
+
+[中文](#中文版) | **English**
+
+This is the **source code repository** for the [HugeGraph documentation 
website](https://hugegraph.apache.org/docs/).
+
+For the HugeGraph database project, visit 
[apache/hugegraph](https://github.com/apache/hugegraph).
+
+## Quick Start
+
+Only **3 steps** to run the documentation website locally:
+
+**Prerequisites:** [Hugo Extended](https://github.com/gohugoio/hugo/releases) 
v0.95+ and Node.js v16+
+
+```bash
+# 1. Clone repository
+git clone https://github.com/apache/hugegraph-doc.git
+cd hugegraph-doc
+
+# 2. Install dependencies
+npm install
+
+# 3. Start development server (auto-reload)
+hugo server
+```
+
+Open http://localhost:1313 to preview.
 
-## Build/Test/Contribute to website
-
-Please visit the [contribution doc](./contribution.md) to get start, include 
theme/website description & settings~
-
-### Summary
-
-Apache HugeGraph is an easy-to-use, efficient, general-purpose open-source 
graph database system
-(Graph Database, [GitHub project 
address](https://github.com/hugegraph/hugegraph)), implementing the [Apache 
TinkerPop3](https://tinkerpop.apache.org) framework and fully compatible with 
the [Gremlin](https://tinkerpop.apache.org/gremlin.html) query language,
-With complete toolchain components, it helps users easily build applications 
and products based on graph databases. HugeGraph supports fast import of more 
than 10 billion vertices and edges, and provides millisecond-level relational 
query capability (OLTP). 
-It also supports large-scale distributed graph computing (OLAP).
-
-Typical application scenarios of HugeGraph include deep relationship 
exploration, association analysis, path search, feature extraction, data 
clustering, community detection, knowledge graph, etc., and are applicable to 
business fields such as network security, telecommunication fraud, financial 
risk control, advertising recommendation, social network and intelligence 
Robots etc.
-
-### Features
-
-HugeGraph supports graph operations in online and offline environments, batch 
importing of data and efficient complex relationship analysis. It can 
seamlessly be integrated with big data platforms.
-HugeGraph supports multi-user parallel operations. Users can enter Gremlin 
query statements and get graph query results in time. They can also call the 
HugeGraph API in user programs for graph analysis or queries.
-
-This system has the following features: 
-
-- Ease of use: HugeGraph supports the Gremlin graph query language and a 
RESTful API, providing common interfaces for graph retrieval, and peripheral 
tools with complete functions to easily implement various graph-based query and 
analysis operations.
-- Efficiency: HugeGraph has been deeply optimized in graph storage and graph 
computing, and provides a variety of batch import tools, which can easily 
complete the rapid import of tens of billions of data, and achieve 
millisecond-level response for graph retrieval through optimized queries. 
Supports simultaneous online real-time operations of thousands of users.
-- Universal: HugeGraph supports the Apache Gremlin standard graph query 
language and the Property Graph standard graph modeling method, and supports 
graph-based OLTP and OLAP schemes. Integrate Apache Hadoop and Apache Spark big 
data platform.
-- Scalable: supports distributed storage, multiple copies of data and 
horizontal expansion, built-in multiple back-end storage engines, and can 
easily expand the back-end storage engine through plug-ins.
-- Open: HugeGraph code is open source (Apache 2 License), customers can modify 
and customize independently, and selectively give back to the open source 
community.
-
-The functions of this system include but are not limited to: 
-
-- Supports batch import of data from multiple data sources (including local 
files, HDFS files, MySQL databases and other data sources), and supports import 
of multiple file formats (including TXT, CSV, JSON and other formats)
-- With a visual operation interface, it can be used for operation, analysis 
and display diagrams, reducing the threshold for users to use
-- Optimized graph interface: shortest path (Shortest Path), K-step connected 
subgraph (K-neighbor), K-step to reach the adjacent point (K-out), personalized 
recommendation algorithm PersonalRank, etc.
-- Implemented based on the Apache-TinkerPop3 framework, supports Gremlin graph 
query language
-- Support attribute graph, attributes can be added to vertices and edges, and 
support rich attribute types
-- Has independent schema metadata information, has powerful graph modeling 
capabilities, and facilitates third-party system integration
-- Support multi-vertex ID strategy: support primary key ID, support automatic 
ID generation, support user-defined string ID, support user-defined digital ID
-- The attributes of edges and vertices can be indexed to support precise 
query, range query, and full-text search
-- The storage system adopts plug-in mode, supporting RocksDB, Cassandra, 
ScyllaDB, HBase, MySQL, PostgreSQL, Palo, and InMemory, etc.
-- Integrate with big data systems such as Hadoop and Spark GraphX, and support 
Bulk Load operations
-- Support high availability (HA), multiple copies of data, backup recovery, 
monitoring, etc.
-
-### Modules
-
-- [HugeGraph-Store]: HugeGraph-Store is a distributed storage engine to manage 
large-scale graph data by integrating storage and computation within a unified 
system.
-- [HugeGraph-PD]: HugeGraph-PD (Placement Driver) manages metadata and 
coordinates storage nodes.
-- [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is 
the core part of the HugeGraph project, containing Core, Backend, API and other 
submodules;
-  - Core: Implements the graph engine, connects to the Backend module 
downwards, and supports the API module upwards;
-  - Backend: Implements the storage of graph data to the backend, supports 
backends including Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and 
PostgreSQL, users can choose one according to the actual situation;
-  - API: Built-in REST Server, provides RESTful API to users, and is fully 
compatible with Gremlin queries. (Supports distributed storage and computation 
pushdown)
-- [HugeGraph-Toolchain](https://github.com/apache/hugegraph-toolchain): 
(Toolchain)
-  - [HugeGraph-Client](/docs/quickstart/hugegraph-client): HugeGraph-Client 
provides a RESTful API client for connecting to HugeGraph-Server, currently 
only the Java version is implemented, users of other languages can implement it 
themselves;
-  - [HugeGraph-Loader](/docs/quickstart/hugegraph-loader): HugeGraph-Loader is 
a data import tool based on HugeGraph-Client, which transforms ordinary text 
data into vertices and edges of the graph and inserts them into the graph 
database;
-  - [HugeGraph-Hubble](/docs/quickstart/hugegraph-hubble): HugeGraph-Hubble is 
HugeGraph's Web 
-  visualization management platform, a one-stop visualization analysis 
platform, the platform covers the whole process from data modeling, to fast 
data import, to online and offline analysis of data, and unified management of 
the graph;
-  - [HugeGraph-Tools](/docs/quickstart/hugegraph-tools): HugeGraph-Tools is 
HugeGraph's deployment and management tool, including graph management, 
backup/recovery, Gremlin execution and other functions.
-- [HugeGraph-Computer](/docs/quickstart/hugegraph-computer): 
HugeGraph-Computer is a distributed graph processing system (OLAP). 
-  It is an implementation of 
[Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). It can run on 
clusters such as Kubernetes/Yarn, and supports large-scale graph computing.
-- [HugeGraph-AI](/docs/quickstart/hugegraph-ai): HugeGraph-AI is HugeGraph's 
independent AI 
-  component, providing training and inference functions of graph neural 
networks, LLM/Graph RAG combination/Python-Client and other related components, 
continuously updating.
+> **Troubleshooting:** If you see `TOCSS: failed to transform 
"scss/main.scss"`,
+> install the Hugo **Extended** version, not the standard one.
+
+## Repository Structure
+
+```
+hugegraph-doc/
+├── content/                    # 📄 Documentation content (Markdown)
+│   ├── cn/                     # 🇨🇳 Chinese documentation
+│   │   ├── docs/               #    Main documentation
+│   │   │   ├── quickstart/     #    Quick start guides
+│   │   │   ├── config/         #    Configuration docs
+│   │   │   ├── clients/        #    Client docs
+│   │   │   ├── guides/         #    User guides
+│   │   │   └── ...
+│   │   ├── blog/               #    Blog posts
+│   │   └── community/          #    Community pages
+│   └── en/                     # 🇺🇸 English documentation (mirrors cn/ 
structure)
+│
+├── themes/docsy/               # 🎨 Docsy theme (git submodule)
+├── assets/                     # 🖼️  Custom assets (fonts, images, scss)
+├── layouts/                    # 📐 Hugo template overrides
+├── static/                     # 📁 Static files
+├── config.toml                 # ⚙️  Site configuration
+└── package.json                # 📦 Node.js dependencies
+```
 
 ## Contributing
 
-- Welcome to contribute to HugeGraph, please see [How to 
Contribute](https://hugegraph.apache.org/docs/contribution-guidelines/contribute/)
  for more information.  
-- Note: It's recommended to use [GitHub Desktop](https://desktop.github.com/) 
to greatly simplify the PR and commit process.  
-- Thank you to all the people who already contributed to HugeGraph!
+### Contribution Workflow
+
+1. **Fork** this repository
+2. Create a **new branch** from `master`
+3. Make your changes
+4. Submit a **Pull Request** with screenshots
+
+### Requirements
+
+| Requirement | Description |
+|-------------|-------------|
+| **Bilingual Updates** | Update **BOTH** `content/cn/` and `content/en/` |
+| **PR Screenshots** | Include **before/after screenshots** in PR |
+| **Markdown** | Use Markdown with Hugo front matter |
+
+### Detailed Guide
+
+See [contribution.md](./contribution.md) for:
+- Platform-specific Hugo installation
+- Docsy theme customization
+- Translation tips
+
+## Commands
+
+| Command | Description |
+|---------|-------------|
+| `hugo server` | Start dev server (hot reload) |
+| `hugo --minify` | Build production to `./public/` |
+| `hugo server -p 8080` | Custom port |
+
+---
+
+## 中文版
+
+这是 [HugeGraph 官方文档网站](https://hugegraph.apache.org/docs/) 的**源代码仓库**。
+
+如果你想查找 HugeGraph 数据库本身,请访问 
[apache/hugegraph](https://github.com/apache/hugegraph)。
+
+### 快速开始
+
+只需 **3 步**即可在本地启动文档网站:
+
+**前置条件:** [Hugo Extended](https://github.com/gohugoio/hugo/releases) v0.95+ 和 
Node.js v16+
+
+```bash
+# 1. 克隆仓库
+git clone https://github.com/apache/hugegraph-doc.git
+cd hugegraph-doc
+
+# 2. 安装依赖
+npm install
+
+# 3. 启动开发服务器(支持热重载)
+hugo server
+```
+
+打开 http://localhost:1313 预览网站。
+
+> **常见问题:** 如果遇到 `TOCSS: failed to transform "scss/main.scss"` 错误,
+> 说明你需要安装 Hugo **Extended** 版本,而不是标准版本。
+
+### 仓库结构
+
+```
+hugegraph-doc/
+├── content/                    # 📄 文档内容 (Markdown)
+│   ├── cn/                     # 🇨🇳 中文文档
+│   │   ├── docs/               #    主要文档目录
+│   │   │   ├── quickstart/     #    快速开始指南
+│   │   │   ├── config/         #    配置文档
+│   │   │   ├── clients/        #    客户端文档
+│   │   │   ├── guides/         #    使用指南
+│   │   │   └── ...
+│   │   ├── blog/               #    博客文章
+│   │   └── community/          #    社区页面
+│   └── en/                     # 🇺🇸 英文文档(与 cn/ 结构一致)
+│
+├── themes/docsy/               # 🎨 Docsy 主题 (git submodule)
+├── assets/                     # 🖼️  自定义资源 (fonts, images, scss)
+├── layouts/                    # 📐 Hugo 模板覆盖
+├── static/                     # 📁 静态文件
+├── config.toml                 # ⚙️  站点配置
+└── package.json                # 📦 Node.js 依赖
+```
+
+### 如何贡献
+
+#### 贡献流程
+
+1. **Fork** 本仓库
+2. 基于 `master` 创建**新分支**
+3. 修改文档内容
+4. 提交 **Pull Request**(附截图)
+
+#### 重要说明
+
+| 要求 | 说明 |
+|------|------|
+| **双语更新** | 修改内容时需**同时更新** `content/cn/` 和 `content/en/` |
+| **PR 截图** | 提交 PR 时需附上修改**前后对比截图** |
+| **Markdown** | 文档使用 Markdown 格式,带 Hugo front matter |
+
+#### 详细指南
+
+查看 [contribution.md](./contribution.md) 了解:
+- 各平台 Hugo 安装方法
+- Docsy 主题定制
+- 翻译技巧
+
+### 常用命令
+
+| 命令 | 说明 |
+|------|------|
+| `hugo server` | 启动开发服务器(热重载) |
+| `hugo --minify` | 构建生产版本到 `./public/` |
+| `hugo server -p 8080` | 指定端口 |
+
+---
+
+## Contact & Community
+
+- **Issues:** [GitHub Issues](https://github.com/apache/hugegraph-doc/issues)
+- **Mailing List:** 
[[email protected]](mailto:[email protected]) ([subscribe 
first](https://hugegraph.apache.org/docs/contribution-guidelines/subscribe/))
+- **Slack:** [ASF Slack](https://the-asf.slack.com/archives/C059UU2FJ23)
+
+<img src="./assets/images/wechat.png" alt="WeChat QR Code" width="350"/>
+
+## Contributors
 
-[![contributors 
graph](https://contrib.rocks/image?repo=apache/hugegraph-doc)](https://github.com/apache/incubator-hugegraph-doc/graphs/contributors)
+Thanks to all contributors to the HugeGraph documentation!
 
-### Contact Us
+[![contributors](https://contrib.rocks/image?repo=apache/hugegraph-doc)](https://github.com/apache/hugegraph-doc/graphs/contributors)
 
 ---
 
-- [GitHub Issues](https://github.com/apache/incubator-hugegraph-doc/issues): 
Feedback on usage issues and functional requirements (quick response)
-- Feedback Email: 
[[email protected]](mailto:[email protected]) 
([subscriber](https://hugegraph.apache.org/docs/contribution-guidelines/subscribe/)
 only)
-- Security Email: 
[[email protected]](mailto:[email protected]) (Report 
SEC problems)
-- Slack: [ASF Online Channel](https://the-asf.slack.com/archives/C059UU2FJ23)
-- WeChat public account: Apache HugeGraph, welcome to scan this QR code to 
follow us.
+## License
 
- <img src="./assets/images/wechat.png" alt="QR png" width="350"/>
+[Apache License 2.0](LICENSE)
diff --git a/config.toml b/config.toml
index 493c1462..f37873d0 100644
--- a/config.toml
+++ b/config.toml
@@ -152,7 +152,7 @@ archived_version = false
 # The version number for the version of the docs represented in this doc set.
 # Used in the "version-banner" partial to display a version number for the 
 # current doc set.
-version = "0.13"
+version = "1.7"
 
 # A link to latest version of the docs. Used in the "version-banner" partial to
 # point people to the main doc site.
diff --git a/content/cn/docs/clients/restful-api/auth.md 
b/content/cn/docs/clients/restful-api/auth.md
index 606b4e5c..6c3c086a 100644
--- a/content/cn/docs/clients/restful-api/auth.md
+++ b/content/cn/docs/clients/restful-api/auth.md
@@ -4,6 +4,10 @@ linkTitle: "Authentication"
 weight: 16
 ---
 
+> **版本变更说明**:
+> - 1.7.0+: Auth API 路径使用 GraphSpace 格式,如 `/graphspaces/DEFAULT/auth/users`,且 
group/target 等 id 格式与 name 一致(如 `admin`)
+> - 1.5.x 及更早: Auth API 路径包含 graph 名称,group/target 等 id 格式类似 `-69:grant`。参考 
[HugeGraph 1.5.x RESTful 
API](https://github.com/apache/incubator-hugegraph-doc/tree/release-1.5.0)
+
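+以下为两种路径差异的 curl 调用示意(地址与账号为占位值,1.5.x 路径形式参照上文说明):
+
+```bash
+# 1.7.0+:GraphSpace 风格路径
+curl -u admin:xxx "http://127.0.0.1:8080/graphspaces/DEFAULT/auth/users"
+# 1.5.x 及更早:包含图名称的路径(示意)
+curl -u admin:xxx "http://127.0.0.1:8080/graphs/hugegraph/auth/users"
+```
+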
 ### 10.1 用户认证与权限控制
 
 > 开启权限及相关配置请先参考 [权限配置](/cn/docs/config/config-authentication/) 文档
diff --git a/content/cn/docs/config/config-option.md 
b/content/cn/docs/config/config-option.md
index 0bf56af8..238ac8f5 100644
--- a/content/cn/docs/config/config-option.md
+++ b/content/cn/docs/config/config-option.md
@@ -37,9 +37,9 @@ weight: 2
 | gremlinserver.url                  | http://127.0.0.1:8182                   
         | The url of gremlin server.                                           
                                                                                
                                                         |
 | gremlinserver.max_route            | 8                                       
         | The max route number for gremlin server.                             
                                                                                
                                                         |
 | gremlinserver.timeout              | 30                                      
         | The timeout in seconds of waiting for gremlin server.                
                                                                                
                                                         |
-| batch.max_edges_per_batch          | 500                                     
         | The maximum number of edges submitted per batch.                     
                                                                                
                                                         |
-| batch.max_vertices_per_batch       | 500                                     
         | The maximum number of vertices submitted per batch.                  
                                                                                
                                                         |
-| batch.max_write_ratio              | 50                                      
         | The maximum thread ratio for batch writing, only take effect if the 
batch.max_write_threads is 0.                                                   
                                                          |
+| batch.max_edges_per_batch          | 2500                                    
         | The maximum number of edges submitted per batch.                     
                                                                                
                                                         |
+| batch.max_vertices_per_batch       | 2500                                    
         | The maximum number of vertices submitted per batch.                  
                                                                                
                                                         |
+| batch.max_write_ratio              | 70                                      
         | The maximum thread ratio for batch writing, only take effect if the 
batch.max_write_threads is 0.                                                   
                                                          |
 | batch.max_write_threads            | 0                                       
         | The maximum threads for batch writing, if the value is 0, the actual 
value will be set to batch.max_write_ratio * restserver.max_worker_threads.     
                                                         |
 | auth.authenticator                 |                                         
         | The class path of authenticator implementation. e.g., 
org.apache.hugegraph.auth.StandardAuthenticator, or a custom implementation.    
                                                    |
 | auth.graph_store                   | hugegraph                               
         | The name of graph used to store authentication information, like 
users, only for org.apache.hugegraph.auth.StandardAuthenticator.                
                                                              |
@@ -49,9 +49,39 @@ weight: 2
 | auth.remote_url                    |                                         
         | If the address is empty, it provide auth service, otherwise it is 
auth client and also provide auth service through rpc forwarding. The remote 
url can be set to multiple addresses, which are concat by ','. |
 | auth.token_expire                  | 86400                                   
         | The expiration time in seconds after token created                   
                                                                                
                                                         |
 | auth.token_secret                  | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg        
         | Secret key of HS256 algorithm.                                       
                                                                                
                                                         |
-| exception.allow_trace              | false                                   
         | Whether to allow exception trace stack.                              
                                                                                
                                                         |
-| memory_monitor.threshold           | 0.85                                    
         | The threshold of JVM(in-heap) memory usage monitoring , 1 means 
disabling this function.                                                        
                                                                                
                                        |                                       
                                                                                
                   [...]
+| exception.allow_trace              | true                                    
         | Whether to allow exception trace stack.                              
                                                                                
                                                         |
+| memory_monitor.threshold           | 0.85                                    
         | The threshold of JVM(in-heap) memory usage monitoring, 1 means 
disabling this function.                                                        
                                                                                
                                        |
 | memory_monitor.period              | 2000                                    
         | The period in ms of JVM(in-heap) memory usage monitoring.            
                                                                                
                                                         |
+| log.slow_query_threshold           | 1000                                    
         | Slow query log threshold in milliseconds, 0 means disabled.          
                                                                                
                                                         |
+
+### K8s 配置项 (可选)
+
+对应配置文件`rest-server.properties`
+
+| config option    | default value                 | description                                |
+|------------------|-------------------------------|--------------------------------------------|
+| server.use_k8s   | false                         | Whether to enable K8s multi-tenancy mode.  |
+| k8s.namespace    | hugegraph-computer-system     | K8s namespace for compute jobs.            |
+| k8s.kubeconfig   |                               | Path to kubeconfig file.                   |
+
+### PD/Meta 配置项 (分布式模式)
+
+对应配置文件`rest-server.properties`
+
+| config option    | default value          | description                                |
+|------------------|------------------------|--------------------------------------------|
+| pd.peers         | 127.0.0.1:8686         | PD server addresses (comma separated).     |
+| meta.endpoints   | http://127.0.0.1:2379  | Meta service endpoints.                    |
+
+### Arthas 诊断配置项 (可选)
+
+对应配置文件`rest-server.properties`
+
+| config option      | default value | description           |
+|--------------------|---------------|-----------------------|
+| arthas.telnetPort  | 8562          | Arthas telnet port.   |
+| arthas.httpPort    | 8561          | Arthas HTTP port.     |
+| arthas.ip          | 0.0.0.0       | Arthas bind IP.       |
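+
+基于上表的 `rest-server.properties` 片段示意(取值仅为演示,请按实际环境调整):
+
+```bash
+# 示意:追加分布式与诊断相关配置项
+cat >> conf/rest-server.properties <<'EOF'
+pd.peers=127.0.0.1:8686
+meta.endpoints=http://127.0.0.1:2379
+arthas.telnetPort=8562
+arthas.httpPort=8561
+EOF
+```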
 
 ### 基本配置项
 
@@ -60,7 +90,7 @@ weight: 2
 | config option                         | default value                        
        | description                                                           
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
 
|---------------------------------------|----------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
 | gremlin.graph                         | org.apache.hugegraph.HugeFactory     
         | Gremlin entrance to create graph.                                    
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| backend                               | rocksdb                              
        | The data store type, available values are [memory, rocksdb, 
cassandra, scylladb, hbase, mysql].                                             
                                                                                
                                                                                
                                                                                
                        [...]
+| backend                               | rocksdb                              
        | The data store type. For version 1.7.0+: [memory, rocksdb, hstore, 
hbase]. Note: cassandra, scylladb, mysql, postgresql were removed in 1.7.0 (use 
<= 1.5.x for legacy backends).                                                  
                                                                                
                                                                                
                 [...]
 | serializer                            | binary                               
        | The serializer for backend store, available values are [text, binary, 
cassandra, hbase, mysql].                                                       
                                                                                
                                                                                
                                                                                
              [...]
 | store                                 | hugegraph                            
        | The database name like Cassandra Keyspace.                            
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
 | store.connection_detect_interval      | 600                                  
        | The interval in seconds for detecting connections, if the idle time 
of a connection exceeds this value, detect it and reconnect if needed before 
using, value 0 means detecting every time.                                      
                                                                                
                                                                                
                   [...]
diff --git a/content/cn/docs/quickstart/hugegraph/hugegraph-server.md 
b/content/cn/docs/quickstart/hugegraph/hugegraph-server.md
index d1deef24..b9daaa5f 100644
--- a/content/cn/docs/quickstart/hugegraph/hugegraph-server.md
+++ b/content/cn/docs/quickstart/hugegraph/hugegraph-server.md
@@ -8,7 +8,9 @@ weight: 1
 
 HugeGraph-Server 是 HugeGraph 项目的核心部分,包含 graph-core、backend、API 等子模块。
 
-Core 模块是 Tinkerpop 接口的实现,Backend 
模块用于管理数据存储,目前支持的后端包括:Memory、Cassandra、ScyllaDB 以及 RocksDB,API 模块提供 HTTP 
Server,将 Client 的 HTTP 请求转化为对 Core 的调用。
+Core 模块是 Tinkerpop 接口的实现,Backend 模块用于管理数据存储,1.7.0+ 
版本支持的后端包括:RocksDB(单机默认)、HStore(分布式)、HBase 和 Memory。API 模块提供 HTTP Server,将 
Client 的 HTTP 请求转化为对 Core 的调用。
+
+> ⚠️ **重要变更**: 从 1.7.0 版本开始,MySQL、PostgreSQL、Cassandra、ScyllaDB 
等遗留后端已被移除。如需使用这些后端,请使用 1.5.x 或更早版本。
 
 > 文档中会出现 `HugeGraph-Server` 及 `HugeGraphServer` 这两种写法,其他组件也类似。
 > 这两种写法含义上并无明显差异,可以这么区分:`HugeGraph-Server` 表示服务端相关组件代码,`HugeGraphServer` 表示服务进程。
@@ -39,12 +41,12 @@ Core 模块是 Tinkerpop 接口的实现,Backend 模块用于管理数据存
 
 可参考 [Docker 
部署方式](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-dist/docker/README.md)。
 
-我们可以使用 `docker run -itd --name=server -p 8080:8080 -e PASSWORD=xxx 
hugegraph/hugegraph:1.5.0` 去快速启动一个内置了 `RocksDB` 的 `Hugegraph server`.
+我们可以使用 `docker run -itd --name=server -p 8080:8080 -e PASSWORD=xxx 
hugegraph/hugegraph:1.7.0` 去快速启动一个内置了 `RocksDB` 的 `Hugegraph server`.
 
 可选项:
 
 1. 可以使用 `docker exec -it server bash` 进入容器完成一些操作
-2. 可以使用 `docker run -itd --name=server -p 8080:8080 -e PRELOAD="true" 
hugegraph/hugegraph:1.5.0` 在启动的时候预加载一个**内置的**样例图。可以通过 `RESTful API` 
进行验证。具体步骤可以参考 
[5.1.9](#519-%E5%90%AF%E5%8A%A8-server-%E7%9A%84%E6%97%B6%E5%80%99%E5%88%9B%E5%BB%BA%E7%A4%BA%E4%BE%8B%E5%9B%BE)
+2. 可以使用 `docker run -itd --name=server -p 8080:8080 -e PRELOAD="true" 
hugegraph/hugegraph:1.7.0` 在启动的时候预加载一个**内置的**样例图。可以通过 `RESTful API` 
进行验证。具体步骤可以参考 
[5.1.9](#519-%E5%90%AF%E5%8A%A8-server-%E7%9A%84%E6%97%B6%E5%80%99%E5%88%9B%E5%BB%BA%E7%A4%BA%E4%BE%8B%E5%9B%BE)
 3. 可以使用 `-e PASSWORD=xxx` 设置是否开启鉴权模式以及 admin 的密码,具体步骤可以参考 [Config 
Authentication](/cn/docs/config/config-authentication#使用-docker-时开启鉴权模式) 
 
 如果使用 docker desktop,则可以按照如下的方式设置可选项:
@@ -59,7 +61,7 @@ Core 模块是 Tinkerpop 接口的实现,Backend 模块用于管理数据存
 version: '3'
 services:
   server:
-    image: hugegraph/hugegraph:1.5.0
+    image: hugegraph/hugegraph:1.7.0
     container_name: server
     environment:
       - PASSWORD=xxx
@@ -74,12 +76,12 @@ services:
 > 
 > 1. hugegraph 的 docker 镜像是一个便捷版本,用于快速启动 hugegraph,并不是**官方发布物料包方式**。你可以从 [ASF 
 > Release Distribution 
 > Policy](https://infra.apache.org/release-distribution.html#dockerhub) 
 > 中得到更多细节。
 >
-> 2. 推荐使用 `release tag` (如 `1.5.0/1.x.0`) 以获取稳定版。使用 `latest` tag 可以使用开发中的最新功能。
+> 2. 推荐使用 `release tag` (如 `1.7.0/1.x.0`) 以获取稳定版。使用 `latest` tag 可以使用开发中的最新功能。
 
 #### 3.2 下载 tar 包
 
 ```bash
-# use the latest version, here is 1.5.0 for example
+# use the latest version, here is 1.7.0 for example
 wget 
https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
 tar zxf *hugegraph*.tar.gz
 ```
@@ -138,11 +140,11 @@ mvn package -DskipTests
 HugeGraph-Tools 提供了一键部署的命令行工具,用户可以使用该工具快速地一键下载、解压、配置并启动 HugeGraph-Server 和 
HugeGraph-Hubble,最新的 HugeGraph-Toolchain 中已经包含所有的这些工具,直接下载它解压就有工具包集合了
 
 ```bash
-# download toolchain package, it includes loader + tool + hubble, please check 
the latest version (here is 1.5.0)
-wget 
https://downloads.apache.org/incubator/hugegraph/1.5.0/apache-hugegraph-toolchain-incubating-1.5.0.tar.gz
+# download toolchain package, it includes loader + tool + hubble, please check 
the latest version (here is 1.7.0)
+wget 
https://downloads.apache.org/incubator/hugegraph/1.7.0/apache-hugegraph-toolchain-incubating-1.7.0.tar.gz
 tar zxf *hugegraph-*.tar.gz
 # enter the tool's package
-cd *hugegraph*/*tool* 
+cd *hugegraph*/*tool*
 ```
 
 > 注:`${version}` 为版本号,最新版本号可参考 [Download 页面](/docs/download/download),或直接从 
 > Download 页面点击链接下载
@@ -387,6 +389,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)....OK
 
 ##### 5.1.4 MySQL
 
+> ⚠️ **已废弃**: 此后端从 HugeGraph 1.7.0 版本开始已移除。如需使用,请参考 1.5.x 版本文档。
+
 <details>
 <summary>点击展开/折叠 MySQL 配置及启动方法</summary>
 
@@ -431,6 +435,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)....OK
 
 ##### 5.1.5 Cassandra
 
+> ⚠️ **已废弃**: 此后端从 HugeGraph 1.7.0 版本开始已移除。如需使用,请参考 1.5.x 版本文档。
+
 <details>
 <summary>点击展开/折叠 Cassandra 配置及启动方法</summary>
 
@@ -516,6 +522,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)....OK
 
 ##### 5.1.7 ScyllaDB
 
+> ⚠️ **已废弃**: 此后端从 HugeGraph 1.7.0 版本开始已移除。如需使用,请参考 1.5.x 版本文档。
+
 <details>
 <summary>点击展开/折叠 ScyllaDB 配置及启动方法</summary>
 
@@ -584,6 +592,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)......OK
 
 ##### 5.2.1 使用 Cassandra 作为后端
 
+> ⚠️ **已废弃**: Cassandra 后端从 HugeGraph 1.7.0 版本开始已移除。如需使用,请参考 1.5.x 版本文档。
+
 <details>
 <summary>点击展开/折叠 Cassandra 配置及启动方法</summary>
 
@@ -652,7 +662,7 @@ volumes:
 
 1. 使用`docker run`
 
-    使用 `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true 
hugegraph/hugegraph:1.5.0`
+    使用 `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true 
hugegraph/hugegraph:1.7.0`
 
 2. 使用`docker-compose`
 
@@ -662,7 +672,7 @@ volumes:
     version: '3'
     services:
       server:
-        image: hugegraph/hugegraph:1.5.0
+        image: hugegraph/hugegraph:1.7.0
         container_name: server
         environment:
           - PRELOAD=true
diff --git a/content/cn/docs/quickstart/toolchain/hugegraph-hubble.md 
b/content/cn/docs/quickstart/toolchain/hugegraph-hubble.md
index 2167c0a7..f9984786 100644
--- a/content/cn/docs/quickstart/toolchain/hugegraph-hubble.md
+++ b/content/cn/docs/quickstart/toolchain/hugegraph-hubble.md
@@ -551,3 +551,26 @@ Hubble 上暂未提供可视化的 OLAP 算法执行,可调用 RESTful API 进
 <center>
   <img src="/docs/images/images-hubble/355任务详情.png" alt="image">
 </center>
+
+
+### 5 配置说明
+
+HugeGraph-Hubble 可以通过 `conf/hugegraph-hubble.properties` 文件进行配置。
+
+#### 5.1 服务器配置
+
+| 配置项 | 默认值 | 说明 |
+|--------|--------|------|
+| `hubble.host` | `0.0.0.0` | Hubble 服务绑定的地址 |
+| `hubble.port` | `8088` | Hubble 服务监听的端口 |
+
+#### 5.2 Gremlin 查询限制
+
+这些设置控制查询结果限制,防止内存问题:
+
+| 配置项 | 默认值 | 说明 |
+|--------|--------|------|
+| `gremlin.suffix_limit` | `250` | 查询后缀最大长度 |
+| `gremlin.vertex_degree_limit` | `100` | 显示的最大顶点度数 |
+| `gremlin.edges_total_limit` | `500` | 返回的最大边数 |
+| `gremlin.batch_query_ids` | `100` | ID 批量查询大小 |
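+
+如需调整,可在 `conf/hugegraph-hubble.properties` 中设置(示意片段,取值即上表默认值):
+
+```bash
+cat >> conf/hugegraph-hubble.properties <<'EOF'
+hubble.host=0.0.0.0
+hubble.port=8088
+gremlin.suffix_limit=250
+gremlin.vertex_degree_limit=100
+gremlin.edges_total_limit=500
+gremlin.batch_query_ids=100
+EOF
+```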
diff --git a/content/cn/docs/quickstart/toolchain/hugegraph-spark-connector.md 
b/content/cn/docs/quickstart/toolchain/hugegraph-spark-connector.md
new file mode 100644
index 00000000..13dec291
--- /dev/null
+++ b/content/cn/docs/quickstart/toolchain/hugegraph-spark-connector.md
@@ -0,0 +1,182 @@
+---
+title: "HugeGraph-Spark-Connector Quick Start"
+linkTitle: "使用 Spark Connector 读写图数据"
+weight: 4
+---
+
+### 1 HugeGraph-Spark-Connector 概述
+
+HugeGraph-Spark-Connector 是一个用于在 Spark 中以标准格式读写 HugeGraph 数据的连接器应用程序。
+
+### 2 环境要求
+
+- Java 8+
+- Maven 3.6+
+- Spark 3.x
+- Scala 2.12
+
+### 3 编译
+
+#### 3.1 不执行测试的编译
+
+```bash
+mvn clean package -DskipTests
+```
+
+#### 3.2 执行默认测试的编译
+
+```bash
+mvn clean package
+```
+
+### 4 使用方法
+
+首先在你的 pom.xml 中添加依赖:
+
+```xml
+<dependency>
+    <groupId>org.apache.hugegraph</groupId>
+    <artifactId>hugegraph-spark-connector</artifactId>
+    <version>${revision}</version>
+</dependency>
+```
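+
+构建完成后,也可在 spark-shell 中快速试用(示意命令,jar 文件名与路径以实际构建产物为准):
+
+```bash
+# 假设构建产物位于 target/ 目录,版本号仅为演示
+spark-shell --jars target/hugegraph-spark-connector-1.7.0.jar
+```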
+
+#### 4.1 Schema 定义示例
+
+假设我们有一个图,其 schema 定义如下:
+
+```groovy
+schema.propertyKey("name").asText().ifNotExist().create()
+schema.propertyKey("age").asInt().ifNotExist().create()
+schema.propertyKey("city").asText().ifNotExist().create()
+schema.propertyKey("weight").asDouble().ifNotExist().create()
+schema.propertyKey("lang").asText().ifNotExist().create()
+schema.propertyKey("date").asText().ifNotExist().create()
+schema.propertyKey("price").asDouble().ifNotExist().create()
+
+schema.vertexLabel("person")
+        .properties("name", "age", "city")
+        .useCustomizeStringId()
+        .nullableKeys("age", "city")
+        .ifNotExist()
+        .create()
+
+schema.vertexLabel("software")
+        .properties("name", "lang", "price")
+        .primaryKeys("name")
+        .ifNotExist()
+        .create()
+
+schema.edgeLabel("knows")
+        .sourceLabel("person")
+        .targetLabel("person")
+        .properties("date", "weight")
+        .ifNotExist()
+        .create()
+
+schema.edgeLabel("created")
+        .sourceLabel("person")
+        .targetLabel("software")
+        .properties("date", "weight")
+        .ifNotExist()
+        .create()
+```
+
+#### 4.2 写入顶点数据(Scala)
+
+```scala
+val df = sparkSession.createDataFrame(Seq(
+  Tuple3("marko", 29, "Beijing"),
+  Tuple3("vadas", 27, "HongKong"),
+  Tuple3("Josh", 32, "Beijing"),
+  Tuple3("peter", 35, "ShangHai"),
+  Tuple3("li,nary", 26, "Wu,han"),
+  Tuple3("Bob", 18, "HangZhou"),
+)) toDF("name", "age", "city")
+
+df.show()
+
+df.write
+  .format("org.apache.hugegraph.spark.connector.DataSource")
+  .option("host", "127.0.0.1")
+  .option("port", "8080")
+  .option("graph", "hugegraph")
+  .option("data-type", "vertex")
+  .option("label", "person")
+  .option("id", "name")
+  .option("batch-size", 2)
+  .mode(SaveMode.Overwrite)
+  .save()
+```
+
+#### 4.3 写入边数据(Scala)
+
+```scala
+val df = sparkSession.createDataFrame(Seq(
+  Tuple4("marko", "vadas", "20160110", 0.5),
+  Tuple4("peter", "Josh", "20230801", 1.0),
+  Tuple4("peter", "li,nary", "20130220", 2.0)
+)).toDF("source", "target", "date", "weight")
+
+df.show()
+
+df.write
+  .format("org.apache.hugegraph.spark.connector.DataSource")
+  .option("host", "127.0.0.1")
+  .option("port", "8080")
+  .option("graph", "hugegraph")
+  .option("data-type", "edge")
+  .option("label", "knows")
+  .option("source-name", "source")
+  .option("target-name", "target")
+  .option("batch-size", 2)
+  .mode(SaveMode.Overwrite)
+  .save()
+```
+
+### 5 配置参数
+
+#### 5.1 客户端配置
+
+客户端配置项用于设置 hugegraph-client。
+
+| 参数                   | 默认值         | 说明                                              |
+|----------------------|-------------|-------------------------------------------------|
+| `host`               | `localhost` | HugeGraphServer 的地址                             |
+| `port`               | `8080`      | HugeGraphServer 的端口                             |
+| `graph`              | `hugegraph` | 图空间名称                                           |
+| `protocol`           | `http`      | 向服务器发送请求的协议,可选 `http` 或 `https`                 |
+| `username`           | `null`      | 当 HugeGraphServer 开启权限认证时,当前图的用户名               |
+| `token`              | `null`      | 当 HugeGraphServer 开启权限认证时,当前图的 token            |
+| `timeout`            | `60`        | 插入结果返回的超时时间(秒)                                  |
+| `max-conn`           | `CPUS * 4`  | HugeClient 与 HugeGraphServer 之间的最大 HTTP 连接数      |
+| `max-conn-per-route` | `CPUS * 2`  | HugeClient 与 HugeGraphServer 之间每个路由的最大 HTTP 连接数  |
+| `trust-store-file`   | `null`      | 当请求协议为 https 时,客户端的证书文件路径                       |
+| `trust-store-token`  | `null`      | 当请求协议为 https 时,客户端的证书密码                         |
+
+#### 5.2 图数据配置
+
+图数据配置用于设置要写入的图数据(顶点/边)相关选项。
+
+| 参数                | 默认值   | 说明                                                                                                                       |
+|-------------------|-------|--------------------------------------------------------------------------------------------------------------------------|
+| `data-type`       |       | 图数据类型,必须是 `vertex` 或 `edge`                                                                                               |
+| `label`           |       | 要导入的顶点/边数据所属的标签                                                                                                          |
+| `id`              |       | 指定某一列作为顶点的 id 列。当顶点 id 策略为 CUSTOMIZE 时,必填;当 id 策略为 PRIMARY_KEY 时,必须为空                                                     |
+| `source-name`     |       | 选择输入源的某些列作为源顶点的 id 列。当源顶点的 id 策略为 CUSTOMIZE 时,必须指定某一列作为顶点的 id 列;当源顶点的 id 策略为 PRIMARY_KEY 时,必须指定一列或多列用于拼接生成顶点的 id,即无论使用哪种 id 策略,此项都是必填的 |
+| `target-name`     |       | 指定某些列作为目标顶点的 id 列,与 source-name 类似                                                                                        |
+| `selected-fields` |       | 选择某些列进行插入,其他未选择的列不插入,不能与 ignored-fields 同时存在                                                                              |
+| `ignored-fields`  |       | 忽略某些列使其不参与插入,不能与 selected-fields 同时存在                                                                                     |
+| `batch-size`      | `500` | 导入数据时每批数据的条目数                                                                                                            |
+
+#### 5.3 通用配置
+
+通用配置包含一些常用的配置项。
+
+| 参数          | 默认值 | 说明                                                                |
+|-------------|-----|-------------------------------------------------------------------|
+| `delimiter` | `,` | `source-name`、`target-name`、`selected-fields` 或 `ignored-fields` 的分隔符 |
+
+### 6 许可证
+
+与 HugeGraph 一样,hugegraph-spark-connector 也采用 Apache 2.0 许可证。
diff --git a/content/cn/docs/quickstart/toolchain/hugegraph-tools.md 
b/content/cn/docs/quickstart/toolchain/hugegraph-tools.md
index cd2414ed..73239129 100644
--- a/content/cn/docs/quickstart/toolchain/hugegraph-tools.md
+++ b/content/cn/docs/quickstart/toolchain/hugegraph-tools.md
@@ -55,10 +55,11 @@ mvn package -DskipTests
 
 解压后,进入 hugegraph-tools 目录,可以使用`bin/hugegraph`或者`bin/hugegraph help`来查看 usage 
信息。主要分为:
 
-- 图管理类,graph-mode-set、graph-mode-get、graph-list、graph-get 和 graph-clear
+- 图管理类,graph-mode-set、graph-mode-get、graph-list、graph-get、graph-clear、graph-create、graph-clone 和 graph-drop
 - 异步任务管理类,task-list、task-get、task-delete、task-cancel 和 task-clear
 - Gremlin类,gremlin-execute 和 gremlin-schedule
 - 备份/恢复类,backup、restore、migrate、schedule-backup 和 dump
+- 认证数据备份/恢复类,auth-backup 和 auth-restore
 - 安装部署类,deploy、clear、start-all 和 stop-all
 
 ```bash
@@ -105,7 +106,7 @@ Usage: hugegraph [options] [command] [command options]
 #export HUGEGRAPH_TRUST_STORE_PASSWORD=
 ```
 
-##### 3.3 图管理类,graph-mode-set、graph-mode-get、graph-list、graph-get和graph-clear
+##### 3.3 图管理类,graph-mode-set、graph-mode-get、graph-list、graph-get、graph-clear、graph-create、graph-clone和graph-drop
 
 - graph-mode-set,设置图的 restore mode
     - --graph-mode 或者 -m,必填项,指定将要设置的模式,合法值包括 [NONE, RESTORING, MERGING, 
LOADING]
@@ -114,6 +115,14 @@ Usage: hugegraph [options] [command] [command options]
 - graph-get,获取某个图及其存储后端类型
 - graph-clear,清除某个图的全部 schema 和 data
     - --confirm-message 或者 -c,必填项,删除确认信息,需要手动输入,二次确认防止误删,"I'm sure to delete 
all data",包括双引号
+- graph-create,使用配置文件创建新图
+    - --name 或者 -n,选填项,新图的名称,默认为 hugegraph
+    - --file 或者 -f,必填项,图配置文件的路径
+- graph-clone,克隆已存在的图
+    - --name 或者 -n,选填项,新克隆图的名称,默认为 hugegraph
+    - --clone-graph-name,选填项,要克隆的源图名称,默认为 hugegraph
+- graph-drop,删除图(不同于 graph-clear,这会完全删除图)
+    - --confirm-message 或者 -c,必填项,确认消息 "I'm sure to drop the graph",包括双引号
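+
+上述新增命令的调用示意(参数取值与配置文件路径仅作演示):
+
+```bash
+bin/hugegraph graph-create -n graph2 -f conf/graphs/graph2.properties
+bin/hugegraph graph-clone -n graph2-copy --clone-graph-name graph2
+bin/hugegraph graph-drop -c "I'm sure to drop the graph"
+```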
 
 > 当需要把备份的图原样恢复到一个新的图中的时候,需要先将图模式设置为 RESTORING 模式;当需要将备份的图合并到已存在的图中时,需要先将图模式设置为 
 > MERGING 模式。
 
@@ -159,6 +168,7 @@ Usage: hugegraph [options] [command] [command options]
     - --huge-types 或者 -t,要备份的数据类型,逗号分隔,可选值为 'all' 或者 一个或多个 
[vertex,edge,vertex_label,edge_label,property_key,index_label] 的组合,'all' 
代表全部6种类型,即顶点、边和所有schema
     - --log 或者 -l,指定日志目录,默认为当前目录
     - --retry,指定失败重试次数,默认为 3
+    - --thread-num 或者 -T,使用的线程数,默认为 Math.min(10, Math.max(4, CPUs / 2))
     - --split-size 或者 -s,指定在备份时对顶点或者边分块的大小,默认为 1048576
     - -D,用 -Dkey=value 的模式指定动态参数,用来备份数据到 HDFS 时,指定 HDFS 
的配置项,例如:-Dfs.default.name=hdfs://localhost:9000 
 - restore,将 JSON 格式存储的 schema 或者 data 恢复到一个新图中(RESTORING 
模式)或者合并到已存在的图中(MERGING 模式)
@@ -167,6 +177,7 @@ Usage: hugegraph [options] [command] [command options]
     - --huge-types 或者 -t,要恢复的数据类型,逗号分隔,可选值为 'all' 或者 一个或多个 
[vertex,edge,vertex_label,edge_label,property_key,index_label] 的组合,'all' 
代表全部6种类型,即顶点、边和所有schema
     - --log 或者 -l,指定日志目录,默认为当前目录
     - --retry,指定失败重试次数,默认为 3
+    - --thread-num 或者 -T,使用的线程数,默认为 Math.min(10, Math.max(4, CPUs / 2))
     - -D,用 -Dkey=value 的模式指定动态参数,用来从 HDFS 恢复图时,指定 HDFS 
的配置项,例如:-Dfs.default.name=hdfs://localhost:9000
     > 只有当 --format 为 json 执行 backup 时,才可以使用 restore 命令恢复
 - migrate, 将当前连接的图迁移至另一个 HugeGraphServer 中
@@ -198,9 +209,28 @@ Usage: hugegraph [options] [command] [command options]
     - --log 或者 -l,指定日志目录,默认为当前目录
     - --retry,指定失败重试次数,默认为 3
     - --split-size 或者 -s,指定在备份时对顶点或者边分块的大小,默认为 1048576
-    - -D,用 -Dkey=value 的模式指定动态参数,用来备份数据到 HDFS 时,指定 HDFS 
的配置项,例如:-Dfs.default.name=hdfs://localhost:9000 
+    - -D,用 -Dkey=value 的模式指定动态参数,用来备份数据到 HDFS 时,指定 HDFS 
的配置项,例如:-Dfs.default.name=hdfs://localhost:9000
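+
+一个典型的备份/恢复流程示意(目录等取值仅作演示):
+
+```bash
+bin/hugegraph backup -t all -d ./backup-data
+bin/hugegraph graph-mode-set -m RESTORING   # 恢复到新图前先设置 RESTORING 模式
+bin/hugegraph restore -t all -d ./backup-data
+bin/hugegraph graph-mode-set -m NONE        # 恢复完成后复原模式
+```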
+
+##### 3.7 认证数据备份/恢复类
+
+- auth-backup,备份认证数据到指定目录
+    - --types 或者 -t,要备份的认证数据类型,逗号分隔,可选值为 'all' 或者一个或多个 [user, group, target, 
belong, access] 的组合,'all' 代表全部5种类型
+    - --directory 或者 -d,备份数据存储目录,默认为当前目录
+    - --log 或者 -l,指定日志目录,默认为当前目录
+    - --retry,指定失败重试次数,默认为 3
+    - --thread-num 或者 -T,使用的线程数,默认为 Math.min(10, Math.max(4, CPUs / 2))
+    - -D,用 -Dkey=value 的模式指定动态参数,用来备份数据到 HDFS 时,指定 HDFS 
的配置项,例如:-Dfs.default.name=hdfs://localhost:9000
+- auth-restore,从指定目录恢复认证数据
+    - --types 或者 -t,要恢复的认证数据类型,逗号分隔,可选值为 'all' 或者一个或多个 [user, group, target, 
belong, access] 的组合,'all' 代表全部5种类型
+    - --directory 或者 -d,备份数据存储目录,默认为当前目录
+    - --log 或者 -l,指定日志目录,默认为当前目录
+    - --retry,指定失败重试次数,默认为 3
+    - --thread-num 或者 -T,使用的线程数,默认为 Math.min(10, Math.max(4, CPUs / 2))
+    - --strategy,冲突处理策略,可选值为 [stop, ignore],默认为 stop。stop 表示遇到冲突时停止恢复,ignore 
表示忽略冲突继续恢复
+    - --init-password,恢复用户时设置的初始密码,恢复用户数据时必填
+    - -D,用 -Dkey=value 的模式指定动态参数,用来从 HDFS 恢复数据时,指定 HDFS 
的配置项,例如:-Dfs.default.name=hdfs://localhost:9000
 
-##### 3.7 Installation and deployment
+##### 3.8 Installation and deployment
 
 - deploy, download, install and start HugeGraph-Server and HugeGraph-Studio in one click
     - -v, required, specifies the version of HugeGraph-Server and HugeGraph-Studio to install; the latest is 0.9
@@ -215,7 +245,7 @@ Usage: hugegraph [options] [command] [command options]
 
 > The deploy command has an optional parameter -u. When provided, the specified download address is used instead of the default one to download the tar package, and the address is written into the `~/hugegraph-download-url-prefix` file; afterwards, if no address is specified, the tar package is preferentially downloaded from the address in `~/hugegraph-download-url-prefix`; if neither -u nor `~/hugegraph-download-url-prefix` is present, the default download address is used.
 
-##### 3.8 Specific command parameters
+##### 3.9 Specific command parameters
 
 The specific parameters of each subcommand are as follows:
 
@@ -524,7 +554,7 @@ Usage: hugegraph [options] [command] [command options]
 
 ```
 
-##### 3.9 Specific command examples
+##### 3.10 Specific command examples
 
 ###### 1. Gremlin statements
 
diff --git a/content/en/docs/clients/restful-api/auth.md 
b/content/en/docs/clients/restful-api/auth.md
index e90b8408..4d87c90f 100644
--- a/content/en/docs/clients/restful-api/auth.md
+++ b/content/en/docs/clients/restful-api/auth.md
@@ -4,6 +4,10 @@ linkTitle: "Authentication"
 weight: 16
 ---
 
+> **Version Change Notice**:
+> - 1.7.0+: Auth API paths use GraphSpace format, such as 
`/graphspaces/DEFAULT/auth/users`, and group/target IDs match their names 
(e.g., `admin`)
+> - 1.5.x and earlier: Auth API paths include the graph name, and group/target IDs use a format like `-69:grant`. See [HugeGraph 1.5.x RESTful API](https://github.com/apache/incubator-hugegraph-doc/tree/release-1.5.0)
+
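For illustration, listing users against the 1.7.0+ path might look like this; the host, credentials, and `admin` username are placeholders, and authentication must be enabled:

```bash
# Hypothetical request using the GraphSpace-style path from the notice above
curl -s -u admin:xxx "http://127.0.0.1:8080/graphspaces/DEFAULT/auth/users"
```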
 ### 10.1 User Authentication and Access Control
 
 > To enable authentication and related configurations, please refer to the 
 > [Authentication Configuration](/docs/config/config-authentication/) 
 > documentation.
diff --git a/content/en/docs/config/config-option.md 
b/content/en/docs/config/config-option.md
index b0d937b6..bfba7d97 100644
--- a/content/en/docs/config/config-option.md
+++ b/content/en/docs/config/config-option.md
@@ -37,9 +37,9 @@ Corresponding configuration file `rest-server.properties`
 | gremlinserver.url                  | http://127.0.0.1:8182                   
         | The url of gremlin server.                                           
                                                                                
                                                         |
 | gremlinserver.max_route            | 8                                       
         | The max route number for gremlin server.                             
                                                                                
                                                         |
 | gremlinserver.timeout              | 30                                      
         | The timeout in seconds of waiting for gremlin server.                
                                                                                
                                                         |
-| batch.max_edges_per_batch          | 500                                     
         | The maximum number of edges submitted per batch.                     
                                                                                
                                                         |
-| batch.max_vertices_per_batch       | 500                                     
         | The maximum number of vertices submitted per batch.                  
                                                                                
                                                         |
-| batch.max_write_ratio              | 50                                      
         | The maximum thread ratio for batch writing, only take effect if the 
batch.max_write_threads is 0.                                                   
                                                          |
+| batch.max_edges_per_batch          | 2500                                    
         | The maximum number of edges submitted per batch.                     
                                                                                
                                                         |
+| batch.max_vertices_per_batch       | 2500                                    
         | The maximum number of vertices submitted per batch.                  
                                                                                
                                                         |
+| batch.max_write_ratio              | 70                                      
         | The maximum thread ratio for batch writing, only take effect if the 
batch.max_write_threads is 0.                                                   
                                                          |
 | batch.max_write_threads            | 0                                       
         | The maximum threads for batch writing, if the value is 0, the actual 
value will be set to batch.max_write_ratio * restserver.max_worker_threads.     
                                                         |
 | auth.authenticator                 |                                         
         | The class path of authenticator implementation. e.g., 
org.apache.hugegraph.auth.StandardAuthenticator, or a custom implementation.    
                                                    |
 | auth.graph_store                   | hugegraph                               
         | The name of graph used to store authentication information, like 
users, only for org.apache.hugegraph.auth.StandardAuthenticator.                
                                                              |
@@ -49,9 +49,39 @@ Corresponding configuration file `rest-server.properties`
 | auth.remote_url                    |                                                 | If the address is empty, it provides the auth service; otherwise it is an auth client and also provides the auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concatenated by ','. |
 | auth.token_expire                  | 86400                                   
         | The expiration time in seconds after token created                   
                                                                                
                                                         |
 | auth.token_secret                  | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg        
         | Secret key of HS256 algorithm.                                       
                                                                                
                                                         |
-| exception.allow_trace              | false                                           | Whether to allow exception trace stack.                                                 |
-| memory_monitor.threshold           | 0.85                                            | The threshold of JVM(in-heap) memory usage monitoring, 1 means disabling this function. |
+| exception.allow_trace              | true                                            | Whether to allow exception trace stack.                                                 |
+| memory_monitor.threshold           | 0.85                                            | The threshold of JVM(in-heap) memory usage monitoring, 1 means disabling this function. |
 | memory_monitor.period              | 2000                                            | The period in ms of JVM(in-heap) memory usage monitoring.                               |
+| log.slow_query_threshold           | 1000                                            | Slow query log threshold in milliseconds, 0 means disabled.                             |
+
+### K8s Config Options (Optional)
+
+Corresponding configuration file `rest-server.properties`
+
+| config option    | default value                 | description               
               |
+|------------------|-------------------------------|------------------------------------------|
+| server.use_k8s   | false                         | Whether to enable K8s 
multi-tenancy mode. |
+| k8s.namespace    | hugegraph-computer-system     | K8s namespace for compute 
jobs.          |
+| k8s.kubeconfig   |                               | Path to kubeconfig file.  
               |
+
+### PD/Meta Config Options (Distributed Mode)
+
+Corresponding configuration file `rest-server.properties`
+
+| config option    | default value          | description                      
          |
+|------------------|------------------------|--------------------------------------------|
+| pd.peers         | 127.0.0.1:8686         | PD server addresses (comma 
separated).     |
+| meta.endpoints   | http://127.0.0.1:2379  | Meta service endpoints.          
          |
+
+### Arthas Diagnostic Config Options (Optional)
+
+Corresponding configuration file `rest-server.properties`
+
+| config option      | default value | description           |
+|--------------------|---------------|-----------------------|
+| arthas.telnetPort  | 8562          | Arthas telnet port.   |
+| arthas.httpPort    | 8561          | Arthas HTTP port.     |
+| arthas.ip          | 0.0.0.0       | Arthas bind IP.       |
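As a rough sketch, wiring up the distributed-mode and diagnostic options above in `rest-server.properties` could look like this; the values are simply the defaults from the tables, shown for illustration rather than as a recommended setup:

```bash
# Append illustrative entries to rest-server.properties (keys from the tables above)
cat >> rest-server.properties <<'EOF'
pd.peers=127.0.0.1:8686
meta.endpoints=http://127.0.0.1:2379
arthas.telnetPort=8562
arthas.httpPort=8561
arthas.ip=0.0.0.0
EOF
```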
 
 ### Basic Config Options
 
@@ -60,7 +90,7 @@ Basic Config Options and Backend Config Options correspond to 
configuration file
 | config option                         | default value                        
        | description                                                           
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
 
|---------------------------------------|----------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
 | gremlin.graph                         | org.apache.hugegraph.HugeFactory     
         | Gremlin entrance to create graph.                                    
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| backend                               | rocksdb                              
        | The data store type, available values are [memory, rocksdb, 
cassandra, scylladb, hbase, mysql].                                             
                                                                                
                                                                                
                                                                                
                        [...]
+| backend                               | rocksdb                              
        | The data store type. For version 1.7.0+: [memory, rocksdb, hstore, 
hbase]. Note: cassandra, scylladb, mysql, postgresql were removed in 1.7.0 (use 
<= 1.5.x for legacy backends).                                                  
                                                                                
                                                                                
                 [...]
 | serializer                            | binary                               
        | The serializer for backend store, available values are [text, binary, 
cassandra, hbase, mysql].                                                       
                                                                                
                                                                                
                                                                                
              [...]
 | store                                 | hugegraph                            
        | The database name like Cassandra Keyspace.                            
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
 | store.connection_detect_interval      | 600                                  
        | The interval in seconds for detecting connections, if the idle time 
of a connection exceeds this value, detect it and reconnect if needed before 
using, value 0 means detecting every time.                                      
                                                                                
                                                                                
                   [...]
diff --git a/content/en/docs/quickstart/hugegraph/hugegraph-server.md 
b/content/en/docs/quickstart/hugegraph/hugegraph-server.md
index 2db77707..06ebc9e8 100644
--- a/content/en/docs/quickstart/hugegraph/hugegraph-server.md
+++ b/content/en/docs/quickstart/hugegraph/hugegraph-server.md
@@ -8,7 +8,9 @@ weight: 1
 
 `HugeGraph-Server` is the core part of the HugeGraph Project, contains 
submodules such as graph-core, backend, API.
 
-The Core Module is an implementation of the Tinkerpop interface; The Backend 
module is used to save the graph data to the data store, currently supported 
backends include: Memory, Cassandra, ScyllaDB, RocksDB; The API Module provides 
HTTP Server, which converts Client's HTTP request into a call to Core Module.
+The Core Module is an implementation of the TinkerPop interface; the Backend module is used to save the graph data to the data store. For version 1.7.0+, supported backends include: RocksDB (standalone default), HStore (distributed), HBase, and Memory. The API Module provides the HTTP Server, which converts the client's HTTP requests into calls to the Core Module.
+
+> ⚠️ **Important Change**: Starting from version 1.7.0, legacy backends such 
as MySQL, PostgreSQL, Cassandra, and ScyllaDB have been removed. If you need to 
use these backends, please use version 1.5.x or earlier.
 
 > There will be two spellings HugeGraph-Server and HugeGraphServer in the 
 > document, and other 
 > modules are similar. There is no big difference in the meaning of these two 
 > ways, 
@@ -42,11 +44,11 @@ There are four ways to deploy HugeGraph-Server components:
 <!-- 3.1 is linked by another place. if change 3.1's title, please check -->
 You can refer to the [Docker deployment 
guide](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-server/hugegraph-dist/docker/README.md).
 
-We can use `docker run -itd --name=server -p 8080:8080 -e PASSWORD=xxx 
hugegraph/hugegraph:1.5.0` to quickly start a `HugeGraph Server` with a 
built-in `RocksDB` backend.
+We can use `docker run -itd --name=server -p 8080:8080 -e PASSWORD=xxx 
hugegraph/hugegraph:1.7.0` to quickly start a `HugeGraph Server` with a 
built-in `RocksDB` backend.
 
-Optional: 
+Optional:
 1. use `docker exec -it server bash` to enter the container to perform some operations.
-2. use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" 
hugegraph/hugegraph:1.5.0` to start with a **built-in** example graph. We can 
use `RESTful API` to verify the result. The detailed step can refer to 
[5.1.8](#518-create-an-example-graph-when-startup)
+2. use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" 
hugegraph/hugegraph:1.7.0` to start with a **built-in** example graph. We can 
use `RESTful API` to verify the result. The detailed step can refer to 
[5.1.8](#518-create-an-example-graph-when-startup)
 3. use `-e PASSWORD=xxx` to enable auth mode and set the password for admin. 
You can find more details from [Config 
Authentication](/docs/config/config-authentication#use-docker-to-enable-authentication-mode)
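
Building on the options above, a quick smoke test could look like the sketch below; probing `/apis/version` and the `admin` username are assumptions about a default auth-enabled setup:

```bash
# Sketch: start an auth-enabled server and verify it responds
docker run -itd --name=server -p 8080:8080 -e PASSWORD=xxx hugegraph/hugegraph:1.7.0
sleep 10  # give the server a moment to start
curl -s -u admin:xxx "http://127.0.0.1:8080/apis/version"
docker exec -it server bash  # enter the container if needed
```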
 
 If you use docker desktop, you can set the option like: 
@@ -60,7 +62,7 @@ Also, if we want to manage the other Hugegraph related 
instances in one file, we
 version: '3'
 services:
   server:
-    image: hugegraph/hugegraph:1.5.0
+    image: hugegraph/hugegraph:1.7.0
     container_name: server
     environment:
      - PASSWORD=xxx
@@ -75,13 +77,13 @@ services:
 >
 > 1. The docker image of the hugegraph is a convenient release to start it 
 > quickly, but not **official distribution** artifacts. You can find more 
 > details from [ASF Release Distribution 
 > Policy](https://infra.apache.org/release-distribution.html#dockerhub).
 >
-> 2. Recommend to use `release tag` (like `1.5.0`/`1.x.0`) for the stable 
version. Use `latest` tag to experience the newest functions in development.
+> 2. We recommend using a `release tag` (like `1.7.0`/`1.x.0`) for the stable version. Use the `latest` tag to experience the newest features in development.
 
 #### 3.2 Download the binary tarball
 
 You could download the binary tarball from the download page of the ASF site 
like this:
 ```bash
-# use the latest version, here is 1.5.0 for example
+# use the latest version, here is 1.7.0 for example
 wget 
https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
 tar zxf *hugegraph*.tar.gz
 
@@ -156,8 +158,8 @@ Of course, you should download the tarball of 
`HugeGraph-Toolchain` first.
 
 ```bash
 # download toolchain binary package, it includes loader + tool + hubble
-# please check the latest version (e.g. here is 1.5.0)
-wget 
https://downloads.apache.org/incubator/hugegraph/1.5.0/apache-hugegraph-toolchain-incubating-1.5.0.tar.gz
+# please check the latest version (e.g. here is 1.7.0)
+wget 
https://downloads.apache.org/incubator/hugegraph/1.7.0/apache-hugegraph-toolchain-incubating-1.7.0.tar.gz
 tar zxf *hugegraph-*.tar.gz
 
 # enter the tool's package
@@ -384,6 +386,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)....OK
 
 ##### 5.1.4 Cassandra
 
+> ⚠️ **Removed**: This backend was removed in HugeGraph 1.7.0. If you still need it, please refer to the 1.5.x documentation.
+
 <details>
 <summary>Click to expand/collapse Cassandra configuration and startup 
methods</summary>
 
@@ -444,6 +448,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)....OK
 
 ##### 5.1.5 ScyllaDB
 
+> ⚠️ **Removed**: This backend was removed in HugeGraph 1.7.0. If you still need it, please refer to the 1.5.x documentation.
+
 <details>
 <summary>Click to expand/collapse ScyllaDB configuration and startup 
methods</summary>
 
@@ -530,6 +536,8 @@ Connecting to HugeGraphServer 
(http://127.0.0.1:8080/graphs)....OK
 
 ##### 5.1.7 MySQL
 
+> ⚠️ **Removed**: This backend was removed in HugeGraph 1.7.0. If you still need it, please refer to the 1.5.x documentation.
+
 <details>
 <summary>Click to expand/collapse MySQL configuration and startup 
methods</summary>
 
@@ -600,6 +608,8 @@ In [3.1 Use Docker 
container](#31-use-docker-container-convenient-for-testdev),
 
 ##### 5.2.1 Uses Cassandra as storage
 
+> ⚠️ **Removed**: The Cassandra backend was removed in HugeGraph 1.7.0. If you still need it, please refer to the 1.5.x documentation.
+
 <details>
 <summary> Click to expand/collapse Cassandra configuration and startup 
methods</summary>
 
@@ -668,7 +678,7 @@ Set the environment variable `PRELOAD=true` when starting 
Docker to load data du
 
 1. Use `docker run`
 
-    Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true 
hugegraph/hugegraph:1.5.0`
+    Use `docker run -itd --name=server -p 8080:8080 -e PRELOAD=true 
hugegraph/hugegraph:1.7.0`
 
 2. Use `docker-compose`
 
@@ -678,7 +688,7 @@ Set the environment variable `PRELOAD=true` when starting 
Docker to load data du
     version: '3'
       services:
         server:
-          image: hugegraph/hugegraph:1.5.0
+          image: hugegraph/hugegraph:1.7.0
           container_name: server
           environment:
             - PRELOAD=true
diff --git a/content/en/docs/quickstart/toolchain/hugegraph-hubble.md 
b/content/en/docs/quickstart/toolchain/hugegraph-hubble.md
index d73403f0..fb642e87 100644
--- a/content/en/docs/quickstart/toolchain/hugegraph-hubble.md
+++ b/content/en/docs/quickstart/toolchain/hugegraph-hubble.md
@@ -557,3 +557,26 @@ There is no visual OLAP algorithm execution on Hubble. You 
can call the RESTful
   <img src="/docs/images/images-hubble/355任务详情.png" alt="image">
 </center>
 
+
+### 5 Configuration
+
+HugeGraph-Hubble can be configured through the 
`conf/hugegraph-hubble.properties` file.
+
+#### 5.1 Server Configuration
+
+| Configuration Item | Default Value | Description |
+|-------------------|---------------|-------------|
+| `hubble.host` | `0.0.0.0` | The address that Hubble service binds to |
+| `hubble.port` | `8088` | The port that Hubble service listens on |
+
+#### 5.2 Gremlin Query Limits
+
+These settings control query result limits to prevent memory issues:
+
+| Configuration Item | Default Value | Description |
+|-------------------|---------------|-------------|
+| `gremlin.suffix_limit` | `250` | Maximum query suffix length |
+| `gremlin.vertex_degree_limit` | `100` | Maximum vertex degree to display |
+| `gremlin.edges_total_limit` | `500` | Maximum number of edges returned |
+| `gremlin.batch_query_ids` | `100` | ID batch query size |
+
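For example, overriding a few of these settings could look like the following sketch (illustrative values only, taken from the tables above):

```bash
# Append illustrative overrides to conf/hugegraph-hubble.properties
cat >> conf/hugegraph-hubble.properties <<'EOF'
hubble.port=8088
gremlin.edges_total_limit=500
EOF
```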
diff --git a/content/en/docs/quickstart/toolchain/hugegraph-spark-connector.md 
b/content/en/docs/quickstart/toolchain/hugegraph-spark-connector.md
new file mode 100644
index 00000000..fb7494ef
--- /dev/null
+++ b/content/en/docs/quickstart/toolchain/hugegraph-spark-connector.md
@@ -0,0 +1,182 @@
+---
+title: "HugeGraph-Spark-Connector Quick Start"
+linkTitle: "Read/Write Graph Data with Spark Connector"
+weight: 4
+---
+
+### 1 HugeGraph-Spark-Connector Overview
+
+HugeGraph-Spark-Connector is a Spark connector for reading and writing HugeGraph data through Spark's standard DataSource API.
+
+### 2 Environment Requirements
+
+- Java 8+
+- Maven 3.6+
+- Spark 3.x
+- Scala 2.12
+
+### 3 Building
+
+#### 3.1 Build without executing tests
+
+```bash
+mvn clean package -DskipTests
+```
+
+#### 3.2 Build with default tests
+
+```bash
+mvn clean package
+```
+
+### 4 Usage
+
+First add the dependency in your pom.xml:
+
+```xml
+<dependency>
+    <groupId>org.apache.hugegraph</groupId>
+    <artifactId>hugegraph-spark-connector</artifactId>
+    <version>${revision}</version>
+</dependency>
+```
+
+#### 4.1 Schema Definition Example
+
+Suppose we have a graph whose schema is defined as follows:
+
+```groovy
+schema.propertyKey("name").asText().ifNotExist().create()
+schema.propertyKey("age").asInt().ifNotExist().create()
+schema.propertyKey("city").asText().ifNotExist().create()
+schema.propertyKey("weight").asDouble().ifNotExist().create()
+schema.propertyKey("lang").asText().ifNotExist().create()
+schema.propertyKey("date").asText().ifNotExist().create()
+schema.propertyKey("price").asDouble().ifNotExist().create()
+
+schema.vertexLabel("person")
+        .properties("name", "age", "city")
+        .useCustomizeStringId()
+        .nullableKeys("age", "city")
+        .ifNotExist()
+        .create()
+
+schema.vertexLabel("software")
+        .properties("name", "lang", "price")
+        .primaryKeys("name")
+        .ifNotExist()
+        .create()
+
+schema.edgeLabel("knows")
+        .sourceLabel("person")
+        .targetLabel("person")
+        .properties("date", "weight")
+        .ifNotExist()
+        .create()
+
+schema.edgeLabel("created")
+        .sourceLabel("person")
+        .targetLabel("software")
+        .properties("date", "weight")
+        .ifNotExist()
+        .create()
+```
+
+#### 4.2 Vertex Sink (Scala)
+
+```scala
+val df = sparkSession.createDataFrame(Seq(
+  Tuple3("marko", 29, "Beijing"),
+  Tuple3("vadas", 27, "HongKong"),
+  Tuple3("Josh", 32, "Beijing"),
+  Tuple3("peter", 35, "ShangHai"),
+  Tuple3("li,nary", 26, "Wu,han"),
+  Tuple3("Bob", 18, "HangZhou")
+)).toDF("name", "age", "city")
+
+df.show()
+
+df.write
+  .format("org.apache.hugegraph.spark.connector.DataSource")
+  .option("host", "127.0.0.1")
+  .option("port", "8080")
+  .option("graph", "hugegraph")
+  .option("data-type", "vertex")
+  .option("label", "person")
+  .option("id", "name")
+  .option("batch-size", 2)
+  .mode(SaveMode.Overwrite)
+  .save()
+```
+
+#### 4.3 Edge Sink (Scala)
+
+```scala
+val df = sparkSession.createDataFrame(Seq(
+  Tuple4("marko", "vadas", "20160110", 0.5),
+  Tuple4("peter", "Josh", "20230801", 1.0),
+  Tuple4("peter", "li,nary", "20130220", 2.0)
+)).toDF("source", "target", "date", "weight")
+
+df.show()
+
+df.write
+  .format("org.apache.hugegraph.spark.connector.DataSource")
+  .option("host", "127.0.0.1")
+  .option("port", "8080")
+  .option("graph", "hugegraph")
+  .option("data-type", "edge")
+  .option("label", "knows")
+  .option("source-name", "source")
+  .option("target-name", "target")
+  .option("batch-size", 2)
+  .mode(SaveMode.Overwrite)
+  .save()
+```
+
+### 5 Configuration Parameters
+
+#### 5.1 Client Configs
+
+Client Configs are used to configure hugegraph-client.
+
+| Parameter            | Default Value | Description                           
                                                       |
+|----------------------|---------------|----------------------------------------------------------------------------------------------|
+| `host`               | `localhost`   | Address of HugeGraphServer            
                                                       |
+| `port`               | `8080`        | Port of HugeGraphServer               
                                                       |
+| `graph`              | `hugegraph`   | Graph space name                      
                                                       |
+| `protocol`           | `http`        | Protocol for sending requests to the 
server, optional `http` or `https`                      |
+| `username`           | `null`        | Username of the current graph when 
HugeGraphServer enables permission authentication         |
+| `token`              | `null`        | Token of the current graph when 
HugeGraphServer has enabled authorization authentication     |
+| `timeout`            | `60`          | Timeout (seconds) for inserting 
results to return                                            |
+| `max-conn`           | `CPUS * 4`    | The maximum number of HTTP 
connections between HugeClient and HugeGraphServer                |
+| `max-conn-per-route` | `CPUS * 2`    | The maximum number of HTTP 
connections for each route between HugeClient and HugeGraphServer |
+| `trust-store-file`   | `null`        | The client's certificate file path 
when the request protocol is https                        |
+| `trust-store-token`  | `null`        | The client's certificate password 
when the request protocol is https                         |
+
+#### 5.2 Graph Data Configs
+
+Graph Data Configs are used to set graph space configuration.
+
+| Parameter         | Default Value | Description                              
                                                                                
                                                                                
                                                                                
                                                                                
                                                  |
+|-------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `data-type`       |               | Graph data type, must be `vertex` or 
`edge`                                                                          
                                                                                
                                                                                
                                                                                
                                                      |
+| `label`           |               | Label to which the vertex/edge data to 
be imported belongs                                                             
                                                                                
                                                                                
                                                                                
                                                    |
+| `id`              |               | Specify a column as the id column of the 
vertex. When the vertex id policy is CUSTOMIZE, it is required; when the id 
policy is PRIMARY_KEY, it must be empty                                         
                                                                                
                                                                                
                                                      |
+| `source-name`     |               | Select certain columns of the input 
source as the id column of source vertex. When the id policy of the source 
vertex is CUSTOMIZE, a certain column must be specified as the id column of the 
vertex; when the id policy of the source vertex is PRIMARY_KEY, one or more 
columns must be specified for splicing the id of the generated vertex, that is, 
no matter which id strategy is used, this item is required |
+| `target-name`     |               | Specify certain columns as the id 
columns of target vertex, similar to source-name                                
                                                                                
                                                                                
                                                                                
                                                         |
+| `selected-fields` |               | Select some columns to insert, other 
unselected ones are not inserted, cannot exist at the same time as 
ignored-fields                                                                  
                                                                                
                                                                                
                                                                   |
+| `ignored-fields`  |               | Ignore some columns so that they do not 
participate in insertion, cannot exist at the same time as selected-fields      
                                                                                
                                                                                
                                                                                
                                                   |
+| `batch-size`      | `500`         | The number of data items in each batch 
when importing data                                                             
                                                                                
                                                                                
                                                                                
                                                    |
+
+#### 5.3 Common Configs
+
+Common Configs contains some common configurations.
+
+| Parameter   | Default Value | Description                                    
                                 |
+|-------------|---------------|---------------------------------------------------------------------------------|
+| `delimiter` | `,`           | Separator of `source-name`, `target-name`, 
`selected-fields` or `ignored-fields` |
+
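To run a job that uses the connector, one option is to ship the built jar via `spark-submit`; the job class and jar path below are hypothetical placeholders, not part of the connector itself:

```bash
# Sketch: submit a Spark job with the connector jar on the classpath
# (com.example.MyGraphJob and my-graph-job.jar are hypothetical placeholders)
spark-submit \
  --class com.example.MyGraphJob \
  --jars target/hugegraph-spark-connector-*.jar \
  my-graph-job.jar
```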
+### 6 License
+
+Like HugeGraph itself, hugegraph-spark-connector is licensed under the Apache 2.0 License.
diff --git a/content/en/docs/quickstart/toolchain/hugegraph-tools.md 
b/content/en/docs/quickstart/toolchain/hugegraph-tools.md
index c55199f8..39938ee3 100644
--- a/content/en/docs/quickstart/toolchain/hugegraph-tools.md
+++ b/content/en/docs/quickstart/toolchain/hugegraph-tools.md
@@ -55,11 +55,12 @@ Generate tar package hugegraph-tools-${version}.tar.gz
 
 After decompression, enter the hugegraph-tools directory. You can use `bin/hugegraph` or `bin/hugegraph help` to view the usage information, which is mainly divided into:
 
-- Graph management Type,graph-mode-set、graph-mode-get、graph-list、graph-get and 
graph-clear
-- Asynchronous task management Type,task-list、task-get、task-delete、task-cancel 
and task-clear
-- Gremlin Type,gremlin-execute and gremlin-schedule
-- Backup/Restore Type,backup、restore、migrate、schedule-backup and dump
-- Install the deployment Type,deploy、clear、start-all and stop-all
+- Graph management type: graph-mode-set, graph-mode-get, graph-list, graph-get, graph-clear, graph-create, graph-clone and graph-drop
+- Asynchronous task management type: task-list, task-get, task-delete, task-cancel and task-clear
+- Gremlin type: gremlin-execute and gremlin-schedule
+- Backup/Restore type: backup, restore, migrate, schedule-backup and dump
+- Authentication data backup/restore type: auth-backup and auth-restore
+- Installation and deployment type: deploy, clear, start-all and stop-all
 
 ```bash
 Usage: hugegraph [options] [command] [command options]
@@ -105,15 +106,23 @@ Another way is to set the environment variable in the 
bin/hugegraph script:
 #export HUGEGRAPH_TRUST_STORE_PASSWORD=
 ```
 
-##### 3.3 Graph Management 
Type,graph-mode-set、graph-mode-get、graph-list、graph-get and graph-clear
+##### 3.3 Graph management type: graph-mode-set, graph-mode-get, graph-list, graph-get, graph-clear, graph-create, graph-clone and graph-drop
 
-- graph-mode-set,set graph restore mode
+- graph-mode-set, set graph restore mode
   - --graph-mode or -m, required, specifies the mode to be set, legal values 
include [NONE, RESTORING, MERGING, LOADING]
-- graph-mode-get,get graph restore mode
-- graph-list,list all graphs in a HugeGraph-Server
-- graph-get,get a graph and its storage backend type
-- graph-clear,clear all schema and data of a graph
-  - --confirm-message Or -c, required, delete confirmation information, manual 
input is required, double confirmation to prevent accidental deletion, "I'm 
sure to delete all data", including double quotes
+- graph-mode-get, get graph restore mode
+- graph-list, list all graphs in a HugeGraph-Server
+- graph-get, get a graph and its storage backend type
+- graph-clear, clear all schema and data of a graph
+  - --confirm-message or -c, required, the deletion confirmation message; it must be typed manually as a second confirmation to prevent accidental deletion: "I'm sure to delete all data", including the double quotes
+- graph-create, create a new graph from a configuration file
+  - --name or -n, optional, the name of the new graph, default is hugegraph
+  - --file or -f, required, the path to the graph configuration file
+- graph-clone, clone an existing graph
+  - --name or -n, optional, the name of the cloned graph, default is hugegraph
+  - --clone-graph-name, optional, the name of the source graph to clone from, default is hugegraph
+- graph-drop, drop a graph (unlike graph-clear, this completely removes the graph)
+  - --confirm-message or -c, required, the confirmation message "I'm sure to drop the graph", including the double quotes
 
 > When you need to restore a backed-up graph into a new graph, first set the graph mode to RESTORING; when you need to merge a backed-up graph into an existing graph, first set the graph mode to MERGING.
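A typical round-trip for the RESTORING flow above might look like this sketch; the `-d` directory flag and the omitted global connection flags are assumptions based on the option lists in this section:

```bash
# Sketch: switch to RESTORING, restore a JSON backup, then reset the mode
bin/hugegraph graph-mode-set -m RESTORING
bin/hugegraph restore -t all -d ./backup-data --thread-num 8
bin/hugegraph graph-mode-set -m NONE
```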
 
@@ -159,6 +168,7 @@ Another way is to set the environment variable in the 
bin/hugegraph script:
   - --huge-types or -t, the data types to be backed up, separated by commas; the optional value is 'all' or a combination of one or more of [vertex, edge, vertex_label, edge_label, property_key, index_label]; 'all' represents all 6 types, namely vertices, edges and all schemas
   - --log or -l, specify the log directory, the default is the current 
directory
   - --retry, specify the number of failed retries, the default is 3
+  - --thread-num or -T, the number of threads to use, default is Math.min(10, 
Math.max(4, CPUs / 2))
   - --split-size or -s, specifies the size of splitting vertices or edges when 
backing up, the default is 1048576
   - -D, use the mode of -Dkey=value to specify dynamic parameters, and specify 
HDFS configuration items when backing up data to HDFS, for example: 
-Dfs.default.name=hdfs://localhost:9000
 - restore, restore schema or data stored in JSON format to a new graph 
(RESTORING mode) or merge into an existing graph (MERGING mode)
@@ -167,6 +177,7 @@ Another way is to set the environment variable in the 
bin/hugegraph script:
   - --huge-types or -t, the data types to restore, separated by commas; the optional value is 'all' or a combination of one or more of [vertex, edge, vertex_label, edge_label, property_key, index_label]; 'all' represents all 6 types, namely vertices, edges and all schemas
   - --log or -l, specify the log directory, the default is the current 
directory
   - --retry, specify the number of failed retries, the default is 3
+  - --thread-num or -T, the number of threads to use, default is Math.min(10, 
Math.max(4, CPUs / 2))
   - -D, use the mode of -Dkey=value to specify dynamic parameters, which are 
used to specify HDFS configuration items when restoring graphs from HDFS, for 
example: -Dfs.default.name=hdfs://localhost:9000
   > The restore command can only be used to restore backups that were executed with --format json
 - migrate, migrate the currently connected graph to another HugeGraphServer
@@ -200,7 +211,26 @@ Another way is to set the environment variable in the 
bin/hugegraph script:
   - --split-size or -s, specifies the size of splitting vertices or edges when 
backing up, the default is 1048576
   - -D, use the mode of -Dkey=value to specify dynamic parameters, and specify 
HDFS configuration items when backing up data to HDFS, for example: 
-Dfs.default.name=hdfs://localhost:9000
 
-##### 3.7 Install the deployment type
+##### 3.7 Authentication data backup/restore type
+
+- auth-backup, backup authentication data to a specified directory
+  - --types or -t, types of authentication data to back up, separated by 
commas, optional value is 'all' or a combination of one or more [user, group, 
target, belong, access], 'all' represents all 5 types
+  - --directory or -d, directory to store backup data, defaults to current 
directory
+  - --log or -l, specify the log directory, the default is the current 
directory
+  - --retry, specify the number of failed retries, the default is 3
+  - --thread-num or -T, the number of threads to use, default is Math.min(10, 
Math.max(4, CPUs / 2))
+  - -D, use the mode of -Dkey=value to specify dynamic parameters, and specify 
HDFS configuration items when backing up data to HDFS, for example: 
-Dfs.default.name=hdfs://localhost:9000
+- auth-restore, restore authentication data from a specified directory
+  - --types or -t, types of authentication data to restore, separated by 
commas, optional value is 'all' or a combination of one or more [user, group, 
target, belong, access], 'all' represents all 5 types
+  - --directory or -d, directory where backup data is stored, defaults to 
current directory
+  - --log or -l, specify the log directory, the default is the current 
directory
+  - --retry, specify the number of failed retries, the default is 3
+  - --thread-num or -T, the number of threads to use, default is Math.min(10, 
Math.max(4, CPUs / 2))
+  - --strategy, conflict handling strategy, optional values are [stop, 
ignore], default is stop. stop means stop restoring when encountering 
conflicts, ignore means ignore conflicts and continue restoring
+  - --init-password, initial password to set when restoring users, required 
when restoring user data
+  - -D, use the mode of -Dkey=value to specify dynamic parameters, which are 
used to specify HDFS configuration items when restoring data from HDFS, for 
example: -Dfs.default.name=hdfs://localhost:9000
+
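As a sketch of the auth round-trip described above (the directory, thread count, and password are placeholders):

```bash
# Back up all auth data, then restore it with conflict tolerance
bin/hugegraph auth-backup -t all -d ./auth-backup --thread-num 4
bin/hugegraph auth-restore -t all -d ./auth-backup --strategy ignore --init-password p@ssw0rd
```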
+##### 3.8 Installation and deployment type
 
 - deploy, one-click download, install and start HugeGraph-Server and 
HugeGraph-Studio
   - -v, required, specifies the version number of HugeGraph-Server and 
HugeGraph-Studio installed, the latest is 0.9
@@ -215,7 +245,7 @@ Another way is to set the environment variable in the 
bin/hugegraph script:
 
 > There is an optional parameter -u in the deploy command. When provided, the specified download address is used instead of the default one to download the tar package, and the address is written into the `~/hugegraph-download-url-prefix` file; afterwards, if no address is specified, the tar package is preferentially downloaded from the address in `~/hugegraph-download-url-prefix`; if there is neither -u nor [...]
 
-##### 3.8 Specific command parameters
+##### 3.9 Specific command parameters
 
 The specific parameters of each subcommand are as follows:
 
@@ -524,7 +554,7 @@ Usage: hugegraph [options] [command] [command options]
 
 ```
 
-##### 3.9 Specific command example
+##### 3.10 Specific command example
 
 ###### 1. gremlin statement
 
diff --git a/contribution.md b/contribution.md
index 9b59ab13..d736b79b 100644
--- a/contribution.md
+++ b/contribution.md
@@ -1,4 +1,19 @@
-# How to help us (how to participate)
+# Contribution Guide - Detailed Reference
+
+> **For a quick start, see [README.md](./README.md)**; this file is the detailed reference.
+
+## PR Checklist
+
+Before submitting a Pull Request, please confirm that:
+
+- [ ] You have built the site locally and verified the changes
+- [ ] Both the Chinese (`content/cn/`) and English (`content/en/`) versions are updated
+- [ ] The PR description includes before/after screenshots
+- [ ] Any related issue is linked in the PR
+
+---
+
+## How to help us (how to participate)
 
 1. Build the website locally in 3 quick steps and start it to preview the current effect (auto reload)
 2. First fork the repository, then create a **new** branch based on `master`; submit a PR after finishing your changes ✅ (please include **before/after screenshots** of the changes and a brief description in the PR, thanks)
