This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/master by this push:
     new fe0980ce doc: add pd & hstore quickstart quick-start (#402)
fe0980ce is described below

commit fe0980ce01a19db54814bb10dcb1b59531156528
Author: YangJiaqi <yang.jiaqi.had...@gmail.com>
AuthorDate: Tue Jun 10 21:15:33 2025 +0800

    doc: add pd & hstore quickstart quick-start (#402)
    
    * add pd hstore quickstart doc
    ---------
    
    Co-authored-by: yangjiaqi <jiaqi.yang@veriti@xyz>
---
 content/cn/docs/quickstart/hugegraph-hstore.md | 205 +++++++++++++++++++++++++
 content/cn/docs/quickstart/hugegraph-pd.md     | 138 +++++++++++++++++
 content/cn/docs/quickstart/hugegraph-server.md | 110 ++++++++++++-
 content/en/docs/quickstart/hugegraph-hstore.md | 205 +++++++++++++++++++++++++
 content/en/docs/quickstart/hugegraph-pd.md     | 138 +++++++++++++++++
 content/en/docs/quickstart/hugegraph-server.md | 150 ++++++++++++------
 6 files changed, 890 insertions(+), 56 deletions(-)

diff --git a/content/cn/docs/quickstart/hugegraph-hstore.md b/content/cn/docs/quickstart/hugegraph-hstore.md
new file mode 100644
index 00000000..90d49b5a
--- /dev/null
+++ b/content/cn/docs/quickstart/hugegraph-hstore.md
@@ -0,0 +1,205 @@
+---
+title: "HugeGraph-Store Quick Start"
+linkTitle: "安装/构建 HugeGraph-Store"
+weight: 11
+---
+
+### 1 HugeGraph-Store 概述
+
+HugeGraph-Store 是 HugeGraph 分布式版本的存储节点组件,负责实际存储和管理图数据。它与 HugeGraph-PD 协同工作,共同构成 HugeGraph 的分布式存储引擎,提供高可用性和水平扩展能力。
+
+### 2 依赖
+
+#### 2.1 前置条件
+
+- 操作系统:Linux 或 MacOS(Windows 尚未经过完整测试)
+- Java 版本:≥ 11
+- Maven 版本:≥ 3.5.0
+- 已部署的 HugeGraph-PD(如果是多节点部署)
+
+### 3 部署
+
+有两种方式可以部署 HugeGraph-Store 组件:
+
+- 方式 1:下载 tar 包
+- 方式 2:源码编译
+
+#### 3.1 下载 tar 包
+
+从 Apache HugeGraph 官方下载页面下载最新版本的 HugeGraph-Store:
+
+```bash
+# 用最新版本号替换 {version},例如 1.5.0
+wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
+tar zxf apache-hugegraph-incubating-{version}.tar.gz
+cd apache-hugegraph-incubating-{version}/apache-hugegraph-hstore-incubating-{version}
+```
+
+#### 3.2 源码编译
+
+```bash
+# 1. 克隆源代码
+git clone https://github.com/apache/hugegraph.git
+
+# 2. 编译项目
+cd hugegraph
+mvn clean install -DskipTests=true
+
+# 3. 编译成功后,Store 模块的构建产物将位于
+#    apache-hugegraph-incubating-{version}/apache-hugegraph-hstore-incubating-{version}
+#    target/apache-hugegraph-incubating-{version}.tar.gz
+```
+
+### 4 配置
+
+Store 的主要配置文件为 `conf/application.yml`,以下是关键配置项:
+
+```yaml
+pdserver:
+  # PD 服务地址,多个 PD 地址用逗号分割(配置 PD 的 gRPC 端口)
+  address: 127.0.0.1:8686
+
+grpc:
+  # gRPC 的服务地址
+  host: 127.0.0.1
+  port: 8500
+  netty-server:
+    max-inbound-message-size: 1000MB
+
+raft:
+  # raft 缓存队列大小
+  disruptorBufferSize: 1024
+  address: 127.0.0.1:8510
+  max-log-file-size: 600000000000
+  # 快照生成时间间隔,单位秒
+  snapshotInterval: 1800
+
+server:
+  # REST 服务地址
+  port: 8520
+
+app:
+  # 存储路径,支持多个路径,逗号分割
+  data-path: ./storage
+  #raft-path: ./storage
+
+spring:
+  application:
+    name: store-node-grpc-server
+  profiles:
+    active: default
+    include: pd
+
+logging:
+  config: 'file:./conf/log4j2.xml'
+  level:
+    root: info
+```
+
+对于多节点部署,需要为每个 Store 节点修改以下配置(三节点的完整示例见下文第 6 节):
+
+1. 每个节点的 `grpc.port`(RPC 端口)
+2. 每个节点的 `raft.address`(Raft 协议端口)
+3. 每个节点的 `server.port`(REST 端口)
+4. 每个节点的 `app.data-path`(数据存储路径)
+
+### 5 启动与停止
+
+#### 5.1 启动 Store
+
+确保 PD 服务已经启动,然后在 Store 安装目录下执行:
+
+```bash
+./bin/start-hugegraph-store.sh
+```
+
+启动成功后,可以在 `logs/hugegraph-store-server.log` 中看到类似以下的日志:
+
+```
+2024-xx-xx xx:xx:xx [main] [INFO] o.a.h.s.n.StoreNodeApplication - Started StoreNodeApplication in x.xxx seconds (JVM running for x.xxx)
+```
+
+#### 5.2 停止 Store
+
+在 Store 安装目录下执行:
+
+```bash
+./bin/stop-hugegraph-store.sh
+```
+
+### 6 多节点部署示例
+
+以下是一个三节点部署的配置示例:
+
+#### 6.1 三节点配置参考
+
+- 3 PD 节点
+  - raft 端口: 8610, 8611, 8612
+  - rpc 端口: 8686, 8687, 8688
+  - rest 端口: 8620, 8621, 8622
+- 3 Store 节点
+  - raft 端口: 8510, 8511, 8512
+  - rpc 端口: 8500, 8501, 8502
+  - rest 端口: 8520, 8521, 8522
+
+#### 6.2 Store 节点配置
+
+对于三个 Store 节点,每个节点的主要配置差异如下:
+
+节点 A:
+```yaml
+grpc:
+  port: 8500
+raft:
+  address: 127.0.0.1:8510
+server:
+  port: 8520
+app:
+  data-path: ./storage-a
+```
+
+节点 B:
+```yaml
+grpc:
+  port: 8501
+raft:
+  address: 127.0.0.1:8511
+server:
+  port: 8521
+app:
+  data-path: ./storage-b
+```
+
+节点 C:
+```yaml
+grpc:
+  port: 8502
+raft:
+  address: 127.0.0.1:8512
+server:
+  port: 8522
+app:
+  data-path: ./storage-c
+```
+
+所有节点都应该指向相同的 PD 集群:
+```yaml
+pdserver:
+  address: 127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688
+```
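+
+三个节点都启动后,可分别请求各节点的 REST 端口确认健康状态(端口以上述示例为准):
+
+```bash
+curl http://localhost:8520/actuator/health
+curl http://localhost:8521/actuator/health
+curl http://localhost:8522/actuator/health
+```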
+
+### 7 验证 Store 服务
+
+确认 Store 服务是否正常运行:
+
+```bash
+curl http://localhost:8520/actuator/health
+```
+
+如果返回 `{"status":"UP"}`,则表示 Store 服务已成功启动。
+
+此外,可以通过 PD 的 API 查看集群中的 Store 节点状态:
+
+```bash
+curl http://localhost:8620/pd/api/v1/stores
+```
diff --git a/content/cn/docs/quickstart/hugegraph-pd.md b/content/cn/docs/quickstart/hugegraph-pd.md
new file mode 100644
index 00000000..36d0ba9e
--- /dev/null
+++ b/content/cn/docs/quickstart/hugegraph-pd.md
@@ -0,0 +1,138 @@
+---
+title: "HugeGraph-PD Quick Start"
+linkTitle: "安装/构建 HugeGraph-PD"
+weight: 10
+---
+
+### 1 HugeGraph-PD 概述
+
+HugeGraph-PD (Placement Driver) 是 HugeGraph 分布式版本的元数据管理组件,负责管理图数据的分布和存储节点的协调。它在分布式 HugeGraph 中扮演着核心角色,维护集群状态并协调 HugeGraph-Store 存储节点。
+
+### 2 依赖
+
+#### 2.1 前置条件
+
+- 操作系统:Linux 或 MacOS(Windows 尚未经过完整测试)
+- Java 版本:≥ 11
+- Maven 版本:≥ 3.5.0
+
+### 3 部署
+
+有两种方式可以部署 HugeGraph-PD 组件:
+
+- 方式 1:下载 tar 包
+- 方式 2:源码编译
+
+#### 3.1 下载 tar 包
+
+从 Apache HugeGraph 官方下载页面下载最新版本的 HugeGraph-PD:
+
+```bash
+# 用最新版本号替换 {version},例如 1.5.0
+wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
+tar zxf apache-hugegraph-incubating-{version}.tar.gz
+cd apache-hugegraph-incubating-{version}/apache-hugegraph-pd-incubating-{version}
+```
+
+#### 3.2 源码编译
+
+```bash
+# 1. 克隆源代码
+git clone https://github.com/apache/hugegraph.git
+
+# 2. 编译项目
+cd hugegraph
+mvn clean install -DskipTests=true
+
+# 3. 编译成功后,PD 模块的构建产物将位于
+#    apache-hugegraph-incubating-{version}/apache-hugegraph-pd-incubating-{version}
+#    target/apache-hugegraph-incubating-{version}.tar.gz
+```
+
+### 4 配置
+
+PD 的主要配置文件为 `conf/application.yml`,以下是关键配置项:
+
+```yaml
+spring:
+  application:
+    name: hugegraph-pd
+
+grpc:
+  # 集群模式下的 gRPC 端口
+  port: 8686
+  host: 127.0.0.1
+
+server:
+  # REST 服务端口号
+  port: 8620
+
+pd:
+  # 存储路径
+  data-path: ./pd_data
+  # 自动扩容的检查周期(秒)
+  patrol-interval: 1800
+  # 初始 store 数量
+  initial-store-count: 1
+  # 初始 store 列表,在列表内的 store 自动激活,格式为 IP:gRPC 端口
+  initial-store-list: 127.0.0.1:8500
+
+raft:
+  # 集群模式
+  address: 127.0.0.1:8610
+  # 集群中所有 PD 节点的 raft 地址
+  peers-list: 127.0.0.1:8610
+
+store:
+  # store 下线时间(秒)。超过该时间,认为 store 永久不可用,分配副本到其他机器
+  max-down-time: 172800
+  # 是否开启 store 监控数据存储
+  monitor_data_enabled: true
+  # 监控数据的间隔
+  monitor_data_interval: 1 minute
+  # 监控数据的保留时间
+  monitor_data_retention: 1 day
+  initial-store-count: 1
+
+partition:
+  # 默认每个分区副本数
+  default-shard-count: 1
+  # 默认每机器最大副本数
+  store-max-shard-count: 12
+```
+
+对于多节点部署,需要修改各节点的端口和地址配置,确保各节点之间能够正常通信。
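+
+例如,一个三节点 PD 集群(raft 端口 8610/8611/8612,gRPC 端口 8686/8687/8688,REST 端口 8620/8621/8622)中,第一个节点的关键配置大致如下(仅为示意,端口与地址请按实际环境调整,其余两个节点类推):
+
+```yaml
+grpc:
+  port: 8686
+server:
+  port: 8620
+raft:
+  address: 127.0.0.1:8610
+  # 三个 PD 节点的 raft 地址列表,各节点配置保持一致
+  peers-list: 127.0.0.1:8610,127.0.0.1:8611,127.0.0.1:8612
+pd:
+  # 同一台机器上部署多个节点时,各节点需使用不同的存储路径
+  data-path: ./pd_data_1
+```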
+
+### 5 启动与停止
+
+#### 5.1 启动 PD
+
+在 PD 安装目录下执行:
+
+```bash
+./bin/start-hugegraph-pd.sh
+```
+
+启动成功后,可以在 `logs/hugegraph-pd-stdout.log` 中看到类似以下的日志:
+
+```
+2024-xx-xx xx:xx:xx [main] [INFO] o.a.h.p.b.HugePDServer - Started HugePDServer in x.xxx seconds (JVM running for x.xxx)
+```
+
+#### 5.2 停止 PD
+
+在 PD 安装目录下执行:
+
+```bash
+./bin/stop-hugegraph-pd.sh
+```
+
+### 6 验证
+
+确认 PD 服务是否正常运行:
+
+```bash
+curl http://localhost:8620/actuator/health
+```
+
+如果返回 `{"status":"UP"}`,则表示 PD 服务已成功启动。
diff --git a/content/cn/docs/quickstart/hugegraph-server.md b/content/cn/docs/quickstart/hugegraph-server.md
index d6a62843..94e7df02 100644
--- a/content/cn/docs/quickstart/hugegraph-server.md
+++ b/content/cn/docs/quickstart/hugegraph-server.md
@@ -176,7 +176,103 @@ HugeGraphServer 启动时会连接后端存储并尝试检查后端存储版本
 
 **注:** 如果想要开启 HugeGraph 权限系统,在启动 Server 之前应按照 [Server 鉴权配置](https://hugegraph.apache.org/cn/docs/config/config-authentication/) 进行配置。(尤其是生产环境/外网环境须开启)
 
-##### 5.1.1 RocksDB
+##### 5.1.1 分布式存储 (HStore)
+
+<details>
+<summary>点击展开/折叠 分布式存储 配置及启动方法</summary>
+
+> 分布式存储是 HugeGraph 1.5.0 之后推出的新特性,它基于 HugeGraph-PD 和 HugeGraph-Store 组件实现了分布式的数据存储和计算。
+
+要使用分布式存储引擎,需要先部署 HugeGraph-PD 和 HugeGraph-Store,详见 [HugeGraph-PD 快速入门](/cn/docs/quickstart/hugegraph-pd/) 和 [HugeGraph-Store 快速入门](/cn/docs/quickstart/hugegraph-hstore/)。
+
+确保 PD 和 Store 服务均已启动后,修改 HugeGraph-Server 的 `hugegraph.properties` 配置:
+
+```properties
+backend=hstore
+serializer=binary
+task.scheduler_type=distributed
+
+# PD 服务地址,多个 PD 地址用逗号分割,配置 PD 的 RPC 端口
+pd.peers=127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688
+```
+
+如果配置多个 HugeGraph-Server 节点,需要为每个节点修改 `rest-server.properties` 配置文件,例如:
+
+节点 1(主节点):
+```properties
+restserver.url=http://127.0.0.1:8081
+gremlinserver.url=http://127.0.0.1:8181
+
+rpc.server_host=127.0.0.1
+rpc.server_port=8091
+
+server.id=server-1
+server.role=master
+```
+
+节点 2(工作节点):
+```properties
+restserver.url=http://127.0.0.1:8082
+gremlinserver.url=http://127.0.0.1:8182
+
+rpc.server_host=127.0.0.1
+rpc.server_port=8092
+
+server.id=server-2
+server.role=worker
+```
+
+同时,还需要修改每个节点的 `gremlin-server.yaml` 中的端口配置:
+
+节点 1:
+```yaml
+host: 127.0.0.1
+port: 8181
+```
+
+节点 2:
+```yaml
+host: 127.0.0.1
+port: 8182
+```
+
+初始化数据库:
+
+```bash
+cd *hugegraph-${version}
+bin/init-store.sh
+```
+
+启动 Server:
+
+```bash
+bin/start-hugegraph.sh
+```
+
+使用分布式存储引擎的启动顺序为:
+1. 启动 HugeGraph-PD
+2. 启动 HugeGraph-Store
+3. 初始化数据库(仅首次)
+4. 启动 HugeGraph-Server
+
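+按上述顺序依次执行的命令示意如下(假设 PD、Store、Server 已分别解压,命令需在各自的安装目录下执行,脚本路径以实际部署为准):
+
+```bash
+# 1. 在 PD 安装目录下启动 PD
+./bin/start-hugegraph-pd.sh
+# 2. 在 Store 安装目录下启动 Store
+./bin/start-hugegraph-store.sh
+# 3. 在 Server 安装目录下初始化数据库(仅首次)
+bin/init-store.sh
+# 4. 在 Server 安装目录下启动 Server
+bin/start-hugegraph.sh
+```
+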
+验证服务是否正常启动:
+
+```bash
+curl http://localhost:8081/graphs
+# 应返回:{"graphs":["hugegraph"]}
+```
+
+停止服务的顺序应该与启动顺序相反:
+1. 停止 HugeGraph-Server
+2. 停止 HugeGraph-Store
+3. 停止 HugeGraph-PD
+
+```bash
+bin/stop-hugegraph.sh
+```
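+
+若需要同时停止 Store 与 PD,可分别在各自的安装目录下执行对应的停止脚本(示意):
+
+```bash
+# 在 Store 安装目录下
+./bin/stop-hugegraph-store.sh
+# 在 PD 安装目录下
+./bin/stop-hugegraph-pd.sh
+```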
+</details>
+
+##### 5.1.2 RocksDB
 
 <details>
 <summary>点击展开/折叠 RocksDB 配置及启动方法</summary>
@@ -212,7 +308,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.2 HBase
+##### 5.1.3 HBase
 
 <details>
 <summary>点击展开/折叠 HBase 配置及启动方法</summary>
@@ -254,7 +350,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.3 MySQL
+##### 5.1.4 MySQL
 
 <details>
 <summary>点击展开/折叠 MySQL 配置及启动方法</summary>
@@ -298,7 +394,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.4 Cassandra
+##### 5.1.5 Cassandra
 
 <details>
 <summary>点击展开/折叠 Cassandra 配置及启动方法</summary>
@@ -357,7 +453,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.5 Memory
+##### 5.1.6 Memory
 
 <details>
 <summary>点击展开/折叠 Memory 配置及启动方法</summary>
@@ -383,7 +479,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.6 ScyllaDB
+##### 5.1.7 ScyllaDB
 
 <details>
 <summary>点击展开/折叠 ScyllaDB 配置及启动方法</summary>
@@ -427,7 +523,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.7 启动 server 的时候创建示例图
+##### 5.1.8 启动 server 的时候创建示例图
 
 在脚本启动时候携带 `-p true` 参数,表示 preload,即创建示例图
 
diff --git a/content/en/docs/quickstart/hugegraph-hstore.md b/content/en/docs/quickstart/hugegraph-hstore.md
new file mode 100644
index 00000000..bc36a119
--- /dev/null
+++ b/content/en/docs/quickstart/hugegraph-hstore.md
@@ -0,0 +1,205 @@
+---
+title: "HugeGraph-Store Quick Start"
+linkTitle: "Install/Build HugeGraph-Store"
+weight: 11
+---
+
+### 1 HugeGraph-Store Overview
+
+HugeGraph-Store is the storage node component of HugeGraph's distributed version, responsible for actually storing and managing graph data. It works in conjunction with HugeGraph-PD to form HugeGraph's distributed storage engine, providing high availability and horizontal scalability.
+
+### 2 Prerequisites
+
+#### 2.1 Requirements
+
+- Operating System: Linux or MacOS (Windows has not been fully tested)
+- Java version: ≥ 11
+- Maven version: ≥ 3.5.0
+- Deployed HugeGraph-PD (for multi-node deployment)
+
+### 3 Deployment
+
+There are two ways to deploy the HugeGraph-Store component:
+
+- Method 1: Download the tar package
+- Method 2: Compile from source
+
+#### 3.1 Download the tar package
+
+Download the latest version of HugeGraph-Store from the Apache HugeGraph official download page:
+
+```bash
+# Replace {version} with the latest version number, e.g., 1.5.0
+wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
+tar zxf apache-hugegraph-incubating-{version}.tar.gz
+cd apache-hugegraph-incubating-{version}/apache-hugegraph-hstore-incubating-{version}
+```
+
+#### 3.2 Compile from source
+
+```bash
+# 1. Clone the source code
+git clone https://github.com/apache/hugegraph.git
+
+# 2. Build the project
+cd hugegraph
+mvn clean install -DskipTests=true
+
+# 3. After successful compilation, the Store module build artifacts will be located at
+#    apache-hugegraph-incubating-{version}/apache-hugegraph-hstore-incubating-{version}
+#    target/apache-hugegraph-incubating-{version}.tar.gz
+```
+
+### 4 Configuration
+
+The main configuration file for Store is `conf/application.yml`. Here are the key configuration items:
+
+```yaml
+pdserver:
+  # PD service address, multiple PD addresses are separated by commas (configure PD's gRPC port)
+  address: 127.0.0.1:8686
+
+grpc:
+  # gRPC service address
+  host: 127.0.0.1
+  port: 8500
+  netty-server:
+    max-inbound-message-size: 1000MB
+
+raft:
+  # raft cache queue size
+  disruptorBufferSize: 1024
+  address: 127.0.0.1:8510
+  max-log-file-size: 600000000000
+  # Snapshot generation time interval, in seconds
+  snapshotInterval: 1800
+
+server:
+  # REST service address
+  port: 8520
+
+app:
+  # Storage path, supports multiple paths separated by commas
+  data-path: ./storage
+  #raft-path: ./storage
+
+spring:
+  application:
+    name: store-node-grpc-server
+  profiles:
+    active: default
+    include: pd
+
+logging:
+  config: 'file:./conf/log4j2.xml'
+  level:
+    root: info
+```
+
+For multi-node deployment, you need to modify the following configurations for each Store node (see the complete three-node example in Section 6 below):
+
+1. `grpc.port` (RPC port) for each node
+2. `raft.address` (Raft protocol port) for each node
+3. `server.port` (REST port) for each node
+4. `app.data-path` (data storage path) for each node
+
+### 5 Start and Stop
+
+#### 5.1 Start Store
+
+Ensure that the PD service is already started, then in the Store installation directory, execute:
+
+```bash
+./bin/start-hugegraph-store.sh
+```
+
+After successful startup, you can see logs similar to the following in `logs/hugegraph-store-server.log`:
+
+```
+2024-xx-xx xx:xx:xx [main] [INFO] o.a.h.s.n.StoreNodeApplication - Started StoreNodeApplication in x.xxx seconds (JVM running for x.xxx)
+```
+
+#### 5.2 Stop Store
+
+In the Store installation directory, execute:
+
+```bash
+./bin/stop-hugegraph-store.sh
+```
+
+### 6 Multi-Node Deployment Example
+
+Below is a configuration example for a three-node deployment:
+
+#### 6.1 Three-Node Configuration Reference
+
+- 3 PD nodes
+  - raft ports: 8610, 8611, 8612
+  - rpc ports: 8686, 8687, 8688
+  - rest ports: 8620, 8621, 8622
+- 3 Store nodes
+  - raft ports: 8510, 8511, 8512
+  - rpc ports: 8500, 8501, 8502
+  - rest ports: 8520, 8521, 8522
+
+#### 6.2 Store Node Configuration
+
+For the three Store nodes, the main configuration differences are as follows:
+
+Node A:
+```yaml
+grpc:
+  port: 8500
+raft:
+  address: 127.0.0.1:8510
+server:
+  port: 8520
+app:
+  data-path: ./storage-a
+```
+
+Node B:
+```yaml
+grpc:
+  port: 8501
+raft:
+  address: 127.0.0.1:8511
+server:
+  port: 8521
+app:
+  data-path: ./storage-b
+```
+
+Node C:
+```yaml
+grpc:
+  port: 8502
+raft:
+  address: 127.0.0.1:8512
+server:
+  port: 8522
+app:
+  data-path: ./storage-c
+```
+
+All nodes should point to the same PD cluster:
+```yaml
+pdserver:
+  address: 127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688
+```
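+
+Once all three nodes are up, you can query each node's REST port to confirm its health (using the ports from the example above):
+
+```bash
+curl http://localhost:8520/actuator/health
+curl http://localhost:8521/actuator/health
+curl http://localhost:8522/actuator/health
+```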
+
+### 7 Verify Store Service
+
+Confirm that the Store service is running properly:
+
+```bash
+curl http://localhost:8520/actuator/health
+```
+
+If it returns `{"status":"UP"}`, it indicates that the Store service has been successfully started.
+
+Additionally, you can check the status of Store nodes in the cluster through the PD API:
+
+```bash
+curl http://localhost:8620/pd/api/v1/stores
+```
diff --git a/content/en/docs/quickstart/hugegraph-pd.md b/content/en/docs/quickstart/hugegraph-pd.md
new file mode 100644
index 00000000..5d63744f
--- /dev/null
+++ b/content/en/docs/quickstart/hugegraph-pd.md
@@ -0,0 +1,138 @@
+---
+title: "HugeGraph-PD Quick Start"
+linkTitle: "Install/Build HugeGraph-PD"
+weight: 10
+---
+
+### 1 HugeGraph-PD Overview
+
+HugeGraph-PD (Placement Driver) is the metadata management component of HugeGraph's distributed version, responsible for managing the distribution of graph data and coordinating storage nodes. It plays a central role in distributed HugeGraph, maintaining cluster status and coordinating HugeGraph-Store storage nodes.
+
+### 2 Prerequisites
+
+#### 2.1 Requirements
+
+- Operating System: Linux or MacOS (Windows has not been fully tested)
+- Java version: ≥ 11
+- Maven version: ≥ 3.5.0
+
+### 3 Deployment
+
+There are two ways to deploy the HugeGraph-PD component:
+
+- Method 1: Download the tar package
+- Method 2: Compile from source
+
+#### 3.1 Download the tar package
+
+Download the latest version of HugeGraph-PD from the Apache HugeGraph official download page:
+
+```bash
+# Replace {version} with the latest version number, e.g., 1.5.0
+wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-incubating-{version}.tar.gz
+tar zxf apache-hugegraph-incubating-{version}.tar.gz
+cd apache-hugegraph-incubating-{version}/apache-hugegraph-pd-incubating-{version}
+```
+
+#### 3.2 Compile from source
+
+```bash
+# 1. Clone the source code
+git clone https://github.com/apache/hugegraph.git
+
+# 2. Build the project
+cd hugegraph
+mvn clean install -DskipTests=true
+
+# 3. After successful compilation, the PD module build artifacts will be located at
+#    apache-hugegraph-incubating-{version}/apache-hugegraph-pd-incubating-{version}
+#    target/apache-hugegraph-incubating-{version}.tar.gz
+```
+
+### 4 Configuration
+
+The main configuration file for PD is `conf/application.yml`. Here are the key configuration items:
+
+```yaml
+spring:
+  application:
+    name: hugegraph-pd
+
+grpc:
+  # gRPC port for cluster mode
+  port: 8686
+  host: 127.0.0.1
+
+server:
+  # REST service port
+  port: 8620
+
+pd:
+  # Storage path
+  data-path: ./pd_data
+  # Auto-expansion check cycle (seconds)
+  patrol-interval: 1800
+  # Initial number of stores
+  initial-store-count: 1
+  # Initial store list; stores in the list are automatically activated. Format: IP:gRPC port
+  initial-store-list: 127.0.0.1:8500
+
+raft:
+  # Cluster mode
+  address: 127.0.0.1:8610
+  # Raft addresses of all PD nodes in the cluster
+  peers-list: 127.0.0.1:8610
+
+store:
+  # Store offline time (seconds). After this time, the store is considered permanently unavailable and its replicas are allocated to other machines
+  max-down-time: 172800
+  # Whether to enable store monitoring data storage
+  monitor_data_enabled: true
+  # Monitoring data interval
+  monitor_data_interval: 1 minute
+  # Monitoring data retention time
+  monitor_data_retention: 1 day
+  initial-store-count: 1
+
+partition:
+  # Default number of replicas per partition
+  default-shard-count: 1
+  # Default maximum number of replicas per machine
+  store-max-shard-count: 12
+```
+
+For multi-node deployment, you need to modify the port and address configurations for each node to ensure proper communication between nodes.
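+
+For example, in a three-node PD cluster (raft ports 8610/8611/8612, gRPC ports 8686/8687/8688, REST ports 8620/8621/8622), the key configuration of the first node might look roughly as follows (a sketch only; adjust ports and addresses to your environment, and configure the other two nodes analogously):
+
+```yaml
+grpc:
+  port: 8686
+server:
+  port: 8620
+raft:
+  address: 127.0.0.1:8610
+  # Raft addresses of all three PD nodes; keep this list identical on every node
+  peers-list: 127.0.0.1:8610,127.0.0.1:8611,127.0.0.1:8612
+pd:
+  # When running multiple nodes on one machine, each node needs its own storage path
+  data-path: ./pd_data_1
+```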
+
+### 5 Start and Stop
+
+#### 5.1 Start PD
+
+In the PD installation directory, execute:
+
+```bash
+./bin/start-hugegraph-pd.sh
+```
+
+After successful startup, you can see logs similar to the following in `logs/hugegraph-pd-stdout.log`:
+
+```
+2024-xx-xx xx:xx:xx [main] [INFO] o.a.h.p.b.HugePDServer - Started HugePDServer in x.xxx seconds (JVM running for x.xxx)
+```
+
+#### 5.2 Stop PD
+
+In the PD installation directory, execute:
+
+```bash
+./bin/stop-hugegraph-pd.sh
+```
+
+### 6 Verification
+
+Confirm that the PD service is running properly:
+
+```bash
+curl http://localhost:8620/actuator/health
+```
+
+If it returns `{"status":"UP"}`, it indicates that the PD service has been successfully started.
diff --git a/content/en/docs/quickstart/hugegraph-server.md b/content/en/docs/quickstart/hugegraph-server.md
index 19c3fda1..b23a1e7c 100644
--- a/content/en/docs/quickstart/hugegraph-server.md
+++ b/content/en/docs/quickstart/hugegraph-server.md
@@ -195,7 +195,103 @@ Since the configuration (hugegraph.properties) and startup steps required by var
 
 Follow the [Server Authentication Configuration](https://hugegraph.apache.org/docs/config/config-authentication/) before you start Server later.
 
-##### 5.1.1 Memory
+##### 5.1.1 Distributed Storage (HStore)
+
+<details>
+<summary>Click to expand/collapse Distributed Storage configuration and startup method</summary>
+
+> Distributed storage is a new feature introduced after HugeGraph 1.5.0, which implements distributed data storage and computation based on HugeGraph-PD and HugeGraph-Store components.
+
+To use the distributed storage engine, you need to deploy HugeGraph-PD and HugeGraph-Store first. See [HugeGraph-PD Quick Start](/docs/quickstart/hugegraph-pd/) and [HugeGraph-Store Quick Start](/docs/quickstart/hugegraph-hstore/) for details.
+
+After ensuring that both PD and Store services are started, modify the `hugegraph.properties` configuration of HugeGraph-Server:
+
+```properties
+backend=hstore
+serializer=binary
+task.scheduler_type=distributed
+
+# PD service address, multiple PD addresses are separated by commas, configure PD's RPC port
+pd.peers=127.0.0.1:8686,127.0.0.1:8687,127.0.0.1:8688
+```
+
+If configuring multiple HugeGraph-Server nodes, you need to modify the `rest-server.properties` configuration file for each node, for example:
+
+Node 1 (Master node):
+```properties
+restserver.url=http://127.0.0.1:8081
+gremlinserver.url=http://127.0.0.1:8181
+
+rpc.server_host=127.0.0.1
+rpc.server_port=8091
+
+server.id=server-1
+server.role=master
+```
+
+Node 2 (Worker node):
+```properties
+restserver.url=http://127.0.0.1:8082
+gremlinserver.url=http://127.0.0.1:8182
+
+rpc.server_host=127.0.0.1
+rpc.server_port=8092
+
+server.id=server-2
+server.role=worker
+```
+
+Also, you need to modify the port configuration in `gremlin-server.yaml` for each node:
+
+Node 1:
+```yaml
+host: 127.0.0.1
+port: 8181
+```
+
+Node 2:
+```yaml
+host: 127.0.0.1
+port: 8182
+```
+
+Initialize the database:
+
+```bash
+cd *hugegraph-${version}
+bin/init-store.sh
+```
+
+Start the Server:
+
+```bash
+bin/start-hugegraph.sh
+```
+
+The startup sequence for using the distributed storage engine is:
+1. Start HugeGraph-PD
+2. Start HugeGraph-Store
+3. Initialize the database (only for the first time)
+4. Start HugeGraph-Server
+
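+The commands for the above sequence might look as follows (assuming PD, Store, and Server have each been unpacked; run each command from the corresponding installation directory and adjust script paths to your actual deployment):
+
+```bash
+# 1. Start PD from the PD installation directory
+./bin/start-hugegraph-pd.sh
+# 2. Start Store from the Store installation directory
+./bin/start-hugegraph-store.sh
+# 3. Initialize the database from the Server installation directory (first time only)
+bin/init-store.sh
+# 4. Start the Server from the Server installation directory
+bin/start-hugegraph.sh
+```
+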
+Verify that the service is started properly:
+
+```bash
+curl http://localhost:8081/graphs
+# Should return: {"graphs":["hugegraph"]}
+```
+
+The sequence to stop the services should be the reverse of the startup sequence:
+1. Stop HugeGraph-Server
+2. Stop HugeGraph-Store
+3. Stop HugeGraph-PD
+
+```bash
+bin/stop-hugegraph.sh
+```
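+
+To stop Store and PD as well, run the corresponding stop scripts from their respective installation directories (a sketch):
+
+```bash
+# From the Store installation directory
+./bin/stop-hugegraph-store.sh
+# From the PD installation directory
+./bin/stop-hugegraph-pd.sh
+```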
+</details>
+
+##### 5.1.2 Memory
 
 <details>
 <summary>Click to expand/collapse Memory configuration and startup methods</summary>
@@ -221,7 +317,7 @@ The prompted url is the same as the restserver.url configured in rest-server.pro
 
 </details>
 
-##### 5.1.2 RocksDB
+##### 5.1.3 RocksDB
 
 <details>
 <summary>Click to expand/collapse RocksDB configuration and startup methods</summary>
@@ -254,7 +350,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.3 Cassandra
+##### 5.1.4 Cassandra
 
 <details>
 <summary>Click to expand/collapse Cassandra configuration and startup methods</summary>
@@ -314,7 +410,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.4 ScyllaDB
+##### 5.1.5 ScyllaDB
 
 <details>
 <summary>Click to expand/collapse ScyllaDB configuration and startup methods</summary>
@@ -358,7 +454,7 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.5 HBase
+##### 5.1.6 HBase
 
 <details>
 <summary>Click to expand/collapse HBase configuration and startup methods</summary>
@@ -400,50 +496,6 @@ Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
 
 </details>
 
-##### 5.1.6 MySQL
-
-<details>
-<summary>Click to expand/collapse MySQL configuration and startup methods</summary>
-
-> Due to MySQL is under GPL license, which is not compatible with Apache License indeed, Users need to install MySQL, [Download Link](https://dev.mysql.com/downloads/mysql/)
-
-Download MySQL's [driver package] (https://repo1.maven.org/maven2/mysql/mysql-connector-java/), such as `mysql-connector-java-8.0.30.jar`, and put it into HugeGraph- Server's `lib` directory.
-
-Modify `hugegraph.properties`, configure the database URL, username and password, `store` is the database name, if not, it will be created automatically.
-
-```properties
-backend=mysql
-serializer=mysql
-
-store=hugegraph
-
-# mysql backend config
-jdbc.driver=com.mysql.cj.jdbc.Driver
-jdbc.url=jdbc:mysql://127.0.0.1:3306
-jdbc.username=
-jdbc.password=
-jdbc.reconnect_max_times=3
-jdbc.reconnect_interval=3
-jdbc.ssl_mode=false
-```
-
-Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
-
-```bash
-cd *hugegraph-${version}
-bin/init-store.sh
-```
-
-Start server
-
-```bash
-bin/start-hugegraph.sh
-Starting HugeGraphServer...
-Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)....OK
-```
-
-</details>
-
 ##### 5.1.7 Create an example graph when startup
 
 Carry the `-p true` argument when starting the script, which indicates `preload`, to create a sample graph.
