This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 881dc66e doc: update loader options (#445) 28327813a5f9499f3720330aacc76c31f76cb6e7
881dc66e is described below

commit 881dc66e8935b3a9ef5b609f3c06779d9c921d33
Author: imbajin <[email protected]>
AuthorDate: Thu Jan 22 11:24:14 2026 +0000

    doc: update loader options (#445) 28327813a5f9499f3720330aacc76c31f76cb6e7
---
 cn/docs/_print/index.html                          |   2 +-
 cn/docs/index.xml                                  | 154 ++++++++++++++++++++-
 cn/docs/quickstart/_print/index.html               |   2 +-
 cn/docs/quickstart/toolchain/_print/index.html     |   2 +-
 .../toolchain/hugegraph-loader/index.html          |   8 +-
 cn/docs/quickstart/toolchain/index.xml             | 154 ++++++++++++++++++++-
 cn/sitemap.xml                                     |   2 +-
 docs/_print/index.html                             |   2 +-
 docs/index.xml                                     | 154 ++++++++++++++++++++-
 docs/quickstart/_print/index.html                  |   2 +-
 docs/quickstart/toolchain/_print/index.html        |   2 +-
 .../toolchain/hugegraph-loader/index.html          |   8 +-
 docs/quickstart/toolchain/index.xml                | 154 ++++++++++++++++++++-
 en/sitemap.xml                                     |   2 +-
 sitemap.xml                                        |   2 +-
 15 files changed, 625 insertions(+), 25 deletions(-)

diff --git a/cn/docs/_print/index.html b/cn/docs/_print/index.html
index d77f0a53..b2e4cc1d 100644
--- a/cn/docs/_print/index.html
+++ b/cn/docs/_print/index.html
@@ -935,7 +935,7 @@ HugeGraph supports concurrent multi-user operations; users can submit Gremlin query statements,
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic>// If there is no update strategy, you will get
 </span></span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic></span><span style=color:#a40000>&#39;</span><span style=color:#204a87;font-weight:700>null</span> <span style=color:#204a87;font-weight:700>null</span> <span style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
 recorded in the progress file. The progress file is located in the <code>${struct}</code> directory and its name looks like <code>load-progress ${date}</code>, where ${struct} is the prefix of the mapping file and ${date} is the moment the import started.
 For example, for an import task started at <code>2019-10-10 12:30:30</code> with the mapping file <code>struct-example.json</code>, the progress file is <code>struct-example/load-progress 2019-10-10 12:30:30</code>, a sibling of struct-example.json.</p><blockquote><p>Note: the progress file is generated at the end of every import, regardless of whether &ndash;incremental-mode is enabled.</p></blockquote><p>If the data file formats are all valid and the import task was stopped by the user
 (CTRL + C or kill; kill -9 is not supported), that is, there are no error records, the next import only needs to set
diff --git a/cn/docs/index.xml b/cn/docs/index.xml
index d3494e74..62834d23 100644
--- a/cn/docs/index.xml
+++ b/cn/docs/index.xml
@@ -7304,7 +7304,7 @@ HugeGraph supports concurrent multi-user operations; users can submit Gremlin query statements,
 &lt;td>Graph name&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>-gs&lt;/code> or &lt;code>--graphspace&lt;/code>&lt;/td>
+&lt;td>&lt;code>--graphspace&lt;/code>&lt;/td>
 &lt;td>DEFAULT&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Graph space&lt;/td>
@@ -7514,11 +7514,161 @@ HugeGraph supports concurrent multi-user operations; users can submit Gremlin query statements,
 &lt;td>Enable this mode to only parse without importing; usually used for testing&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>--help&lt;/code>&lt;/td>
+&lt;td>&lt;code>--help&lt;/code> or &lt;code>-help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Print help information&lt;/td>
 &lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--parser-threads&lt;/code> or &lt;code>--parallel-count&lt;/code>&lt;/td>
+&lt;td>max(2,CPUS)&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of threads for reading data files in parallel&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--start-file&lt;/code>&lt;/td>
+&lt;td>0&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Start file index for partial (sharded) loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--end-file&lt;/code>&lt;/td>
+&lt;td>-1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>End file index for partial loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scatter-sources&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Scatter (parallelize) reads across multiple data sources to optimize I/O performance&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-flush-interval&lt;/code>&lt;/td>
+&lt;td>30000&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Data flush interval for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-sink-parallelism&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Sink parallelism for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-errors&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of read error lines allowed before the program exits&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-lines&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of lines to read; the import task stops once this limit is reached&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--test-mode&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to enable test mode&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--use-prefilter&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to pre-filter vertices&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--short-id&lt;/code>&lt;/td>
+&lt;td>[]&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Map customized IDs to shorter IDs&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-edge-limit&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of edges per vertex&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--sink-type&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to sink to a different storage type&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Number of pre-split partitions of the HBase vertex table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Number of pre-split partitions of the HBase edge table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase vertex table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase edge table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-quorum&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper quorum address&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-port&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper port&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-parent&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper parent path&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--restore&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Set the graph mode to RESTORING&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--backend&lt;/code>&lt;/td>
+&lt;td>hstore&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Backend store type used when auto-creating the graph if it does not exist&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--serializer&lt;/code>&lt;/td>
+&lt;td>binary&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Serializer type used when auto-creating the graph if it does not exist&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scheduler-type&lt;/code>&lt;/td>
+&lt;td>distributed&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Task scheduler type used when auto-creating the graph if it does not exist&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--batch-failure-fallback&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to fall back to single-record insert when a batch insert fails&lt;/td>
+&lt;/tr>
 &lt;/tbody>
 &lt;/table>
 &lt;h5 id="342-断点续导模式">3.4.2 Breakpoint Continuation Mode&lt;/h5>
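For context, the new parallel-read and partial-import options in the rows above slot into the loader's usual invocation. A minimal sketch, assuming a local server on 127.0.0.1:8080 and hypothetical mapping/schema files (check --help for authoritative syntax):

    # Hypothetical values; -g/-f/-s/-h/-p are the loader's long-standing basics.
    # --parser-threads caps the file-reading threads; --start-file/--end-file
    # restrict this run to a slice of the input files (sharded import).
    sh bin/hugegraph-loader.sh -g hugegraph -f ./struct.json -s ./schema.groovy \
       -h 127.0.0.1 -p 8080 --parser-threads 8 --start-file 0 --end-file 50

Splitting by file index this way lets several loader processes cover disjoint shards of the same input directory.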
diff --git a/cn/docs/quickstart/_print/index.html 
b/cn/docs/quickstart/_print/index.html
index 3d62e9ba..f6e6ccda 100644
--- a/cn/docs/quickstart/_print/index.html
+++ b/cn/docs/quickstart/_print/index.html
@@ -925,7 +925,7 @@
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic>// If there is no update strategy, you will get
 </span></span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic></span><span style=color:#a40000>&#39;</span><span style=color:#204a87;font-weight:700>null</span> <span style=color:#204a87;font-weight:700>null</span> <span style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
 recorded in the progress file. The progress file is located in the <code>${struct}</code> directory and its name looks like <code>load-progress ${date}</code>, where ${struct} is the prefix of the mapping file and ${date} is the moment the import started.
 For example, for an import task started at <code>2019-10-10 12:30:30</code> with the mapping file <code>struct-example.json</code>, the progress file is <code>struct-example/load-progress 2019-10-10 12:30:30</code>, a sibling of struct-example.json.</p><blockquote><p>Note: the progress file is generated at the end of every import, regardless of whether &ndash;incremental-mode is enabled.</p></blockquote><p>If the data file formats are all valid and the import task was stopped by the user
 (CTRL + C or kill; kill -9 is not supported), that is, there are no error records, the next import only needs to set
diff --git a/cn/docs/quickstart/toolchain/_print/index.html 
b/cn/docs/quickstart/toolchain/_print/index.html
index 0f3029e9..853e7869 100644
--- a/cn/docs/quickstart/toolchain/_print/index.html
+++ b/cn/docs/quickstart/toolchain/_print/index.html
@@ -419,7 +419,7 @@
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic>// If there is no update strategy, you will get
 </span></span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic></span><span style=color:#a40000>&#39;</span><span style=color:#204a87;font-weight:700>null</span> <span style=color:#204a87;font-weight:700>null</span> <span style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
 recorded in the progress file. The progress file is located in the <code>${struct}</code> directory and its name looks like <code>load-progress ${date}</code>, where ${struct} is the prefix of the mapping file and ${date} is the moment the import started.
 For example, for an import task started at <code>2019-10-10 12:30:30</code> with the mapping file <code>struct-example.json</code>, the progress file is <code>struct-example/load-progress 2019-10-10 12:30:30</code>, a sibling of struct-example.json.</p><blockquote><p>Note: the progress file is generated at the end of every import, regardless of whether &ndash;incremental-mode is enabled.</p></blockquote><p>If the data file formats are all valid and the import task was stopped by the user
 (CTRL + C or kill; kill -9 is not supported), that is, there are no error records, the next import only needs to set
diff --git a/cn/docs/quickstart/toolchain/hugegraph-loader/index.html 
b/cn/docs/quickstart/toolchain/hugegraph-loader/index.html
index 32c8f2f4..ff3d6894 100644
--- a/cn/docs/quickstart/toolchain/hugegraph-loader/index.html
+++ b/cn/docs/quickstart/toolchain/hugegraph-loader/index.html
@@ -9,14 +9,14 @@ HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources
 Note: using HugeGraph-Loader depends on the HugeGraph Server service; to download and start Server, refer to HugeGraph-Server Quick Start
 Testing guide: to run Loader tests locally, refer to the toolchain local testing guide
 2 Get HugeGraph-Loader There are two ways to get HugeGraph-Loader:
-Use the Docker image (convenient for testing) Download the compiled tarball Clone the source code, compile and install 2.1 Use the Docker image (convenient for testing) We can deploy the loader service with docker run -itd --name loader hugegraph/loader:1.5.0. For the data to be loaded, files can be copied into the loader container by mounting -v /path/to/data/file:/loader/file or via docker cp."><meta property="og:type" content="article"><meta property="og:url" content="/cn/docs/quickstart/toolchain/hugegraph-loader/"><meta property="article:section" content="docs"><meta property="article:modified_time" content="2025-12-01T15:38:17+08:00"><meta property="og [...]
+Use the Docker image (convenient for testing) Download the compiled tarball Clone the source code, compile and install 2.1 Use the Docker image (convenient for testing) We can deploy the loader service with docker run -itd --name loader hugegraph/loader:1.5.0. For the data to be loaded, files can be copied into the loader container by mounting -v /path/to/data/file:/loader/file or via docker cp."><meta property="og:type" content="article"><meta property="og:url" content="/cn/docs/quickstart/toolchain/hugegraph-loader/"><meta property="article:section" content="docs"><meta property="article:modified_time" content="2026-01-22T19:23:37+08:00"><meta property="og [...]
 Currently supported data sources include:
 Local disk files or directories, supporting TEXT, CSV and JSON formats as well as compressed files; HDFS files or directories, with compressed files supported; mainstream relational databases such as MySQL, PostgreSQL, Oracle, SQL Server. Local disk files and HDFS files support resumable loading.
 Details are given later.
 Note: using HugeGraph-Loader depends on the HugeGraph Server service; to download and start Server, refer to HugeGraph-Server Quick Start
 Testing guide: to run Loader tests locally, refer to the toolchain local testing guide
 2 Get HugeGraph-Loader There are two ways to get HugeGraph-Loader:
-Use the Docker image (convenient for testing) Download the compiled tarball Clone the source code, compile and install 2.1 Use the Docker image (convenient for testing) We can deploy the loader service with docker run -itd --name loader hugegraph/loader:1.5.0. For the data to be loaded, files can be copied into the loader container by mounting -v /path/to/data/file:/loader/file or via docker cp."><meta itemprop=dateModified content="2025-12-01T15:38:17+08:00"><meta itemprop=wordCount content="2388"><meta itemprop=keywords content><meta name=twitter:card content="summary"><meta name=twitter:title content="HugeGraph-Loader Quick Start"><meta name=twitter:descr [...]
+Use the Docker image (convenient for testing) Download the compiled tarball Clone the source code, compile and install 2.1 Use the Docker image (convenient for testing) We can deploy the loader service with docker run -itd --name loader hugegraph/loader:1.5.0. For the data to be loaded, files can be copied into the loader container by mounting -v /path/to/data/file:/loader/file or via docker cp."><meta itemprop=dateModified content="2026-01-22T19:23:37+08:00"><meta itemprop=wordCount content="2480"><meta itemprop=keywords content><meta name=twitter:card content="summary"><meta name=twitter:title content="HugeGraph-Loader Quick Start"><meta name=twitter:descr [...]
 Currently supported data sources include:
 Local disk files or directories, supporting TEXT, CSV and JSON formats as well as compressed files; HDFS files or directories, with compressed files supported; mainstream relational databases such as MySQL, PostgreSQL, Oracle, SQL Server. Local disk files and HDFS files support resumable loading.
 Details are given later.
@@ -402,7 +402,7 @@ HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic>// If there is no update strategy, you will get
 </span></span></span><span style=display:flex><span><span style=color:#8f5902;font-style:italic></span><span style=color:#a40000>&#39;</span><span style=color:#204a87;font-weight:700>null</span> <span style=color:#204a87;font-weight:700>null</span> <span style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: specifies a column as the vertex id column; required when the vertex id strategy is <code>CUSTOMIZE</code>, and must be empty when the id strategy is <code>PRIMARY_KEY</code>;</li></ul><p><strong>Unique Nodes for Edge Maps</strong></p><ul><li>source: selects certain columns of the input source as the id columns of the <strong>source vertex</strong>; when the id strategy of the source vertex is <code>CUSTOMIZE</code>, a column must be specified as the vertex id  [...]
 recorded in the progress file. The progress file is located in the <code>${struct}</code> directory and its name looks like <code>load-progress ${date}</code>, where ${struct} is the prefix of the mapping file and ${date} is the moment the import started.
 For example, for an import task started at <code>2019-10-10 12:30:30</code> with the mapping file <code>struct-example.json</code>, the progress file is <code>struct-example/load-progress 2019-10-10 12:30:30</code>, a sibling of struct-example.json.</p><blockquote><p>Note: the progress file is generated at the end of every import, regardless of whether &ndash;incremental-mode is enabled.</p></blockquote><p>If the data file formats are all valid and the import task was stopped by the user
 (CTRL + C or kill; kill -9 is not supported), that is, there are no error records, the next import only needs to set
@@ -569,7 +569,7 @@ HugeGraph Toolchain version: toolchain-1.0.0</p></blockquote><p><code>spark-load
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--deploy-mode cluster --name spark-hugegraph-loader 
--file ./hugegraph.json <span style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--username admin --token admin --host xx.xx.xx.xx 
--port <span style=color:#0000cf;font-weight:700>8093</span> <span 
style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--graph graph-test --num-executors <span 
style=color:#0000cf;font-weight:700>6</span> --executor-cores <span 
style=color:#0000cf;font-weight:700>16</span> --executor-memory 15g
-</span></span></code></pre></div><div class="text-muted mt-5 pt-3 
border-top">Page last updated December 1, 2025: <a 
href=https://github.com/apache/incubator-hugegraph-doc/commit/1e297a2f28b6b3d9349518991048fc94426bc325>docs:
 refactor docs of loader & client for new version(1.7.0) (#415) 
(1e297a2f)</a></div></div></main></div></div><footer class="bg-dark py-3 row 
d-print-none"><div class=footer-container><div class="row bg-dark"><div 
class=col-1></div><div class="col-4 text-center contai [...]
+</span></span></code></pre></div><div class="text-muted mt-5 pt-3 
border-top">Page last updated January 22, 2026: <a 
href=https://github.com/apache/incubator-hugegraph-doc/commit/28327813a5f9499f3720330aacc76c31f76cb6e7>doc:
 update loader options (#445) 
(28327813)</a></div></div></main></div></div><footer class="bg-dark py-3 row 
d-print-none"><div class=footer-container><div class="row bg-dark"><div 
class=col-1></div><div class="col-4 text-center container-center"><div 
class=footer-row>< [...]
 <script src=/js/bootstrap.min.js></script>
 <script src=/js/mermaid.min.js></script>
 <script src=/js/tabpane-persist.js></script>
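For context, the Docker workflow summarized in the page metadata above, spelled out end to end; the data file, struct/schema paths, and server host are hypothetical, while the image tag and docker commands are as quoted in the page:

    # Deploy the loader container (image name/tag as given in the page above).
    docker run -itd --name loader hugegraph/loader:1.5.0
    # Copy the data to load into the container (or mount it with -v at run time).
    docker cp ./example/file/vertex_person.csv loader:/loader/file/
    # Run an import from inside the container against a reachable server.
    docker exec -it loader bin/hugegraph-loader.sh \
       -g hugegraph -f /loader/file/struct.json -s /loader/file/schema.groovy \
       -h <server-host> -p 8080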
diff --git a/cn/docs/quickstart/toolchain/index.xml 
b/cn/docs/quickstart/toolchain/index.xml
index e5a582fd..08720645 100644
--- a/cn/docs/quickstart/toolchain/index.xml
+++ b/cn/docs/quickstart/toolchain/index.xml
@@ -1198,7 +1198,7 @@
 &lt;td>Graph name&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>-gs&lt;/code> or &lt;code>--graphspace&lt;/code>&lt;/td>
+&lt;td>&lt;code>--graphspace&lt;/code>&lt;/td>
 &lt;td>DEFAULT&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Graph space&lt;/td>
@@ -1408,11 +1408,161 @@
 &lt;td>Enable this mode to only parse without importing; usually used for testing&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>--help&lt;/code>&lt;/td>
+&lt;td>&lt;code>--help&lt;/code> or &lt;code>-help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Print help information&lt;/td>
 &lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--parser-threads&lt;/code> or &lt;code>--parallel-count&lt;/code>&lt;/td>
+&lt;td>max(2,CPUS)&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of threads for reading data files in parallel&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--start-file&lt;/code>&lt;/td>
+&lt;td>0&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Start file index for partial (sharded) loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--end-file&lt;/code>&lt;/td>
+&lt;td>-1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>End file index for partial loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scatter-sources&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Scatter (parallelize) reads across multiple data sources to optimize I/O performance&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-flush-interval&lt;/code>&lt;/td>
+&lt;td>30000&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Data flush interval for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-sink-parallelism&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Sink parallelism for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-errors&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of read error lines allowed before the program exits&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-lines&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of lines to read; the import task stops once this limit is reached&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--test-mode&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to enable test mode&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--use-prefilter&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to pre-filter vertices&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--short-id&lt;/code>&lt;/td>
+&lt;td>[]&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Map customized IDs to shorter IDs&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-edge-limit&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Maximum number of edges per vertex&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--sink-type&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to sink to a different storage type&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Number of pre-split partitions of the HBase vertex table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Number of pre-split partitions of the HBase edge table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase vertex table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase edge table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-quorum&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper quorum address&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-port&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper port&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-parent&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper parent path&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--restore&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Set the graph mode to RESTORING&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--backend&lt;/code>&lt;/td>
+&lt;td>hstore&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Backend store type used when auto-creating the graph if it does not exist&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--serializer&lt;/code>&lt;/td>
+&lt;td>binary&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Serializer type used when auto-creating the graph if it does not exist&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scheduler-type&lt;/code>&lt;/td>
+&lt;td>distributed&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Task scheduler type used when auto-creating the graph if it does not exist&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--batch-failure-fallback&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to fall back to single-record insert when a batch insert fails&lt;/td>
+&lt;/tr>
 &lt;/tbody>
 &lt;/table>
 &lt;h5 id="342-断点续导模式">3.4.2 Breakpoint Continuation Mode&lt;/h5>
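For context, a sketch of how the HBase-related sink options from the table above would travel together on one command line. The quorum hosts, port, parent path, and table names are hypothetical; the pre-split counts match the table defaults:

    # Hypothetical HBase sink settings; how these interact with --sink-type
    # is as described in the option table above.
    sh bin/hugegraph-loader.sh -g hugegraph -f ./struct.json -h 127.0.0.1 -p 8080 \
       --vertex-partitions 64 --edge-partitions 64 \
       --vertex-table-name hugegraph_vertex --edge-table-name hugegraph_edge \
       --hbase-zk-quorum zk1,zk2,zk3 --hbase-zk-port 2181 --hbase-zk-parent /hbase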
diff --git a/cn/sitemap.xml b/cn/sitemap.xml
index 04700c4a..8a940b97 100644
--- a/cn/sitemap.xml
+++ b/cn/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/cn/docs/clients/restful-api/graphspace/</loc><lastmod>2025-11-26T19:15:48+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="en" 
href="/docs/clients/restful-api/graphspace/"/><xhtml:link rel="alternate" 
hreflang="cn" 
href="/cn/docs/clients/restful-api/graphspace/"/></url><url><loc>/cn/docs/language/hugegraph-gremlin/</l
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/cn/docs/clients/restful-api/graphspace/</loc><lastmod>2025-11-26T19:15:48+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="en" 
href="/docs/clients/restful-api/graphspace/"/><xhtml:link rel="alternate" 
hreflang="cn" 
href="/cn/docs/clients/restful-api/graphspace/"/></url><url><loc>/cn/docs/language/hugegraph-gremlin/</l
 [...]
\ No newline at end of file
diff --git a/docs/_print/index.html b/docs/_print/index.html
index 3f11efd8..97c5a48c 100644
--- a/docs/_print/index.html
+++ b/docs/_print/index.html
@@ -949,7 +949,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-drive
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example, for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set to
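For context, the resume flow described in the hunk above, written out. The timestamped progress-file name follows the documented pattern; the graph name, server address, and --incremental-mode usage are a sketch, not authoritative:

    # First run, interrupted by the user with CTRL + C (kill -9 unsupported);
    # a progress file such as "struct-example/load-progress 2019-10-10 12:30:30"
    # is written next to struct-example.json.
    sh bin/hugegraph-loader.sh -g hugegraph -f ./struct-example.json -h 127.0.0.1 -p 8080
    # Second run picks up from the recorded position.
    sh bin/hugegraph-loader.sh -g hugegraph -f ./struct-example.json -h 127.0.0.1 -p 8080 \
       --incremental-mode true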
diff --git a/docs/index.xml b/docs/index.xml
index af2f01cb..329d5877 100644
--- a/docs/index.xml
+++ b/docs/index.xml
@@ -8028,7 +8028,7 @@ Visit the &lt;a 
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
 &lt;td>Graph name&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>-gs&lt;/code> or &lt;code>--graphspace&lt;/code>&lt;/td>
+&lt;td>&lt;code>--graphspace&lt;/code>&lt;/td>
 &lt;td>DEFAULT&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Graph space name&lt;/td>
@@ -8238,11 +8238,161 @@ Visit the &lt;a 
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
 &lt;td>Enable this mode to only parse data without importing; usually used for 
testing&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>--help&lt;/code>&lt;/td>
+&lt;td>&lt;code>--help&lt;/code> or &lt;code>-help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Print help information&lt;/td>
 &lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--parser-threads&lt;/code> or 
&lt;code>--parallel-count&lt;/code>&lt;/td>
+&lt;td>max(2,CPUS)&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Parallel read pipelines for data files&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--start-file&lt;/code>&lt;/td>
+&lt;td>0&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Start file index for partial loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--end-file&lt;/code>&lt;/td>
+&lt;td>-1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>End file index for partial loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scatter-sources&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Scatter multiple sources for I/O optimization&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-flush-interval&lt;/code>&lt;/td>
+&lt;td>30000&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The flush interval for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-sink-parallelism&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The sink parallelism for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-errors&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The maximum number of read error lines before exiting&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-lines&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The maximum number of lines to read; the task stops once it is reached&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--test-mode&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether the loader works in test mode&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--use-prefilter&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to filter vertices in advance&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--short-id&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Mapping customized ID to shorter ID&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-edge-limit&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The maximum number of edges per vertex&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--sink-type&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to sink to a different storage type&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The number of partitions of the HBase vertex table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The number of partitions of the HBase edge table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase vertex table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase edge table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-quorum&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper quorum&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-port&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper port&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-parent&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper parent&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--restore&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Set graph mode to RESTORING&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--backend&lt;/code>&lt;/td>
+&lt;td>hstore&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The backend store type when creating graph if not exists&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--serializer&lt;/code>&lt;/td>
+&lt;td>binary&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The serializer type when creating graph if not exists&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scheduler-type&lt;/code>&lt;/td>
+&lt;td>distributed&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The task scheduler type when creating graph if not exists&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--batch-failure-fallback&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to fallback to single insert when batch insert fails&lt;/td>
+&lt;/tr>
 &lt;/tbody>
 &lt;/table>
 &lt;h5 id="342-breakpoint-continuation-mode">3.4.2 Breakpoint Continuation 
Mode&lt;/h5>
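For context, a sketch combining the graph auto-creation knobs and the error-tolerance knobs from the table above. All values are hypothetical; the listed defaults (hstore, binary, distributed, fallback enabled) are taken from the table:

    # If graph "demo" does not exist, it is created with the given backend,
    # serializer, and task scheduler; reads tolerate up to 1000 bad lines,
    # and failed batches fall back to single-record inserts.
    sh bin/hugegraph-loader.sh -g demo -f ./struct.json -h 127.0.0.1 -p 8080 \
       --backend hstore --serializer binary --scheduler-type distributed \
       --max-read-errors 1000 --batch-failure-fallback true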
diff --git a/docs/quickstart/_print/index.html 
b/docs/quickstart/_print/index.html
index e94cb477..b153bc3c 100644
--- a/docs/quickstart/_print/index.html
+++ b/docs/quickstart/_print/index.html
@@ -941,7 +941,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-drive
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example, for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set to
diff --git a/docs/quickstart/toolchain/_print/index.html 
b/docs/quickstart/toolchain/_print/index.html
index f0287a0b..544c0139 100644
--- a/docs/quickstart/toolchain/_print/index.html
+++ b/docs/quickstart/toolchain/_print/index.html
@@ -416,7 +416,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-drive
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example, for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set to
diff --git a/docs/quickstart/toolchain/hugegraph-loader/index.html 
b/docs/quickstart/toolchain/hugegraph-loader/index.html
index 5df430f4..6fbd8f62 100644
--- a/docs/quickstart/toolchain/hugegraph-loader/index.html
+++ b/docs/quickstart/toolchain/hugegraph-loader/index.html
@@ -1,9 +1,9 @@
 <!doctype html><html lang=en class=no-js><head><meta charset=utf-8><meta 
name=viewport 
content="width=device-width,initial-scale=1,shrink-to-fit=no"><meta 
http-equiv=content-security-policy content="script-src 'self' 'unsafe-inline'; 
script-src-elem 'self' 'unsafe-inline' https://code.jquery.com 
https://cdn.jsdelivr.net https://fonts.googleapis.com;";><meta name=generator 
content="Hugo 0.102.3"><meta name=robots content="index, follow"><link 
rel="shortcut icon" href=/favicons/favicon.ico> [...]
 HugeGraph-Loader is the data import component of HugeGraph, which can convert 
data from various data sources into graph …"><meta property="og:title" 
content="HugeGraph-Loader Quick Start"><meta property="og:description" 
content="1 HugeGraph-Loader Overview HugeGraph-Loader is the data import 
component of HugeGraph, which can convert data from various data sources into 
graph vertices and edges and import them into the graph database in batches.
 Currently supported data sources include:
-Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
property="og:type" content="article"><meta property="og:url" 
content="/docs/quickstart/toolchain/hugegraph-loader/"><meta 
property="article:section" content="docs"><meta property="article:modified_tim 
[...]
+Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
property="og:type" content="article"><meta property="og:url" 
content="/docs/quickstart/toolchain/hugegraph-loader/"><meta 
property="article:section" content="docs"><meta property="article:modified_tim 
[...]
 Currently supported data sources include:
-Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
itemprop=dateModified content="2025-12-01T15:38:17+08:00"><meta 
itemprop=wordCount content="6431"><meta itemprop=keywords content><meta 
name=twitter:card content="summary"><meta name=twitter:title con [...]
+Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
itemprop=dateModified content="2026-01-22T19:23:37+08:00"><meta 
itemprop=wordCount content="6642"><meta itemprop=keywords content><meta 
name=twitter:card content="summary"><meta name=twitter:title con [...]
 Currently supported data sources include:
 Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><link rel=preload 
href=/scss/main.min.3276a99ddd5b15fbe3fcf20f8237086c2cbb526b572f4f06a2246fa9279ed395.css
 as=style><link 
href=/scss/main.min.3276a99ddd5b15fbe3fcf20f8237086c2cbb526b572f4f06a2246fa9279ed395
 [...]
 <link rel=stylesheet 
href=/css/prism.css><script>document.addEventListener("DOMContentLoaded",function(){var
 t=document.querySelectorAll("pre code.language-mermaid, code.language-mermaid, 
pre code.language-fallback, 
code.language-fallback"),e=[];t.forEach(function(t){var 
n=t.textContent.trim();(n.match(/^(graph|flowchart|sequenceDiagram|classDiagram|pie|gitgraph|erDiagram|journey|gantt|stateDiagram|mindmap|timeline|quadrantChart)/m)||n.includes("-->")||n.includes("->")||n.includes("style
 [...]
@@ -376,7 +376,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-drive
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After adopting the batch update strategy, the number of disk read requests will increase significantly, and the import speed will be several times slower than pure write overwrite (at this point HDD <a href=https://en.wikipedia.org/wiki/IOPS>IOPS</a> will become the bottleneck; SSD is recommended to maintain speed)</p></blockquote><p><strong>Unique Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example, for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set to
@@ -543,7 +543,7 @@ And there is no need to guarantee the order between the two 
parameters.</p><ul><
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--deploy-mode cluster --name spark-hugegraph-loader 
--file ./hugegraph.json <span style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--username admin --token admin --host xx.xx.xx.xx 
--port <span style=color:#0000cf;font-weight:700>8093</span> <span 
style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--graph graph-test --num-executors <span 
style=color:#0000cf;font-weight:700>6</span> --executor-cores <span 
style=color:#0000cf;font-weight:700>16</span> --executor-memory 15g
-</span></span></code></pre></div><div class="text-muted mt-5 pt-3 
border-top">Page last updated December 1, 2025: <a 
href=https://github.com/apache/incubator-hugegraph-doc/commit/1e297a2f28b6b3d9349518991048fc94426bc325>docs:
 refactor docs of loader & client for new version(1.7.0) (#415) 
(1e297a2f)</a></div></div></main></div></div><footer class="bg-dark py-3 row 
d-print-none"><div class=footer-container><div class="row bg-dark"><div 
class=col-1></div><div class="col-4 text-center contai [...]
+</span></span></code></pre></div><div class="text-muted mt-5 pt-3 
border-top">Page last updated January 22, 2026: <a 
href=https://github.com/apache/incubator-hugegraph-doc/commit/28327813a5f9499f3720330aacc76c31f76cb6e7>doc:
 update loader options (#445) 
(28327813)</a></div></div></main></div></div><footer class="bg-dark py-3 row 
d-print-none"><div class=footer-container><div class="row bg-dark"><div 
class=col-1></div><div class="col-4 text-center container-center"><div 
class=footer-row>< [...]
 <script src=/js/bootstrap.min.js></script>
 <script src=/js/mermaid.min.js></script>
 <script src=/js/tabpane-persist.js></script>
diff --git a/docs/quickstart/toolchain/index.xml 
b/docs/quickstart/toolchain/index.xml
index 204f011d..52fe9070 100644
--- a/docs/quickstart/toolchain/index.xml
+++ b/docs/quickstart/toolchain/index.xml
@@ -1219,7 +1219,7 @@ Visit the &lt;a 
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
 &lt;td>Graph name&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>-gs&lt;/code> or &lt;code>--graphspace&lt;/code>&lt;/td>
+&lt;td>&lt;code>--graphspace&lt;/code>&lt;/td>
 &lt;td>DEFAULT&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Graph space name&lt;/td>
@@ -1429,11 +1429,161 @@ Visit the &lt;a 
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
 &lt;td>Enable this mode to only parse data without importing; usually used for 
testing&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&lt;code>--help&lt;/code>&lt;/td>
+&lt;td>&lt;code>--help&lt;/code> or &lt;code>-help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Print help information&lt;/td>
 &lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--parser-threads&lt;/code> or 
&lt;code>--parallel-count&lt;/code>&lt;/td>
+&lt;td>max(2,CPUS)&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Parallel read pipelines for data files&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--start-file&lt;/code>&lt;/td>
+&lt;td>0&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Start file index for partial loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--end-file&lt;/code>&lt;/td>
+&lt;td>-1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>End file index for partial loading&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scatter-sources&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Scatter multiple sources for I/O optimization&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-flush-interval&lt;/code>&lt;/td>
+&lt;td>30000&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The flush interval for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--cdc-sink-parallelism&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The sink parallelism for Flink CDC&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-errors&lt;/code>&lt;/td>
+&lt;td>1&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The maximum number of read error lines before exiting&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--max-read-lines&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The maximum number of lines to read; the task stops once it is reached&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--test-mode&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether the loader works in test mode&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--use-prefilter&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to filter vertices in advance&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--short-id&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Mapping customized ID to shorter ID&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-edge-limit&lt;/code>&lt;/td>
+&lt;td>-1L&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The maximum number of edges per vertex&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--sink-type&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to sink to a different storage type&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The number of partitions of the HBase vertex table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-partitions&lt;/code>&lt;/td>
+&lt;td>64&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The number of partitions of the HBase edge table&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--vertex-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase vertex table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--edge-table-name&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase edge table name&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-quorum&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper quorum&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-port&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper port&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--hbase-zk-parent&lt;/code>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>HBase ZooKeeper parent&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--restore&lt;/code>&lt;/td>
+&lt;td>false&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Set graph mode to RESTORING&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--backend&lt;/code>&lt;/td>
+&lt;td>hstore&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The backend store type when creating graph if not exists&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--serializer&lt;/code>&lt;/td>
+&lt;td>binary&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The serializer type when creating graph if not exists&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--scheduler-type&lt;/code>&lt;/td>
+&lt;td>distributed&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>The task scheduler type when creating graph if not exists&lt;/td>
+&lt;/tr>
+&lt;tr>
+&lt;td>&lt;code>--batch-failure-fallback&lt;/code>&lt;/td>
+&lt;td>true&lt;/td>
+&lt;td>&lt;/td>
+&lt;td>Whether to fallback to single insert when batch insert fails&lt;/td>
+&lt;/tr>
 &lt;/tbody>
 &lt;/table>
 &lt;h5 id="342-breakpoint-continuation-mode">3.4.2 Breakpoint Continuation 
Mode&lt;/h5>
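As a quick sanity check, the help flag rows updated above mean the full option list, including every row added by this commit, can be printed directly:

    sh bin/hugegraph-loader.sh --help    # long form
    sh bin/hugegraph-loader.sh -help     # newly documented short form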
diff --git a/en/sitemap.xml b/en/sitemap.xml
index ac22b8d2..8b01ab16 100644
--- a/en/sitemap.xml
+++ b/en/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/docs/guides/architectural/</loc><lastmod>2025-06-13T21:28:50+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="cn" 
href="/cn/docs/guides/architectural/"/><xhtml:link rel="alternate" 
hreflang="en" 
href="/docs/guides/architectural/"/></url><url><loc>/docs/config/config-guide/</loc><lastmod>2025-12-04T18:43:05+08:00</last
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/docs/guides/architectural/</loc><lastmod>2025-06-13T21:28:50+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="cn" 
href="/cn/docs/guides/architectural/"/><xhtml:link rel="alternate" 
hreflang="en" 
href="/docs/guides/architectural/"/></url><url><loc>/docs/config/config-guide/</loc><lastmod>2025-12-04T18:43:05+08:00</last
 [...]
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 5ed5163f..d2d58810 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><sitemapindex 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9";><sitemap><loc>/en/sitemap.xml</loc><lastmod>2026-01-21T16:22:18+08:00</lastmod></sitemap><sitemap><loc>/cn/sitemap.xml</loc><lastmod>2026-01-21T16:22:18+08:00</lastmod></sitemap></sitemapindex>
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><sitemapindex 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9";><sitemap><loc>/en/sitemap.xml</loc><lastmod>2026-01-22T19:23:37+08:00</lastmod></sitemap><sitemap><loc>/cn/sitemap.xml</loc><lastmod>2026-01-22T19:23:37+08:00</lastmod></sitemap></sitemapindex>
\ No newline at end of file
