This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 2f4d187f fix loader conf display (#289) 699239d3f3c18bcfbbfc569c3b289826407e0b11
2f4d187f is described below

commit 2f4d187ffd957605ac3077e77bd8d3eec913bd2d
Author: simon824 <[email protected]>
AuthorDate: Fri Sep 22 02:07:15 2023 +0000

    fix loader conf display (#289) 699239d3f3c18bcfbbfc569c3b289826407e0b11
---
 cn/docs/_print/index.html                      |  2 +-
 cn/docs/index.xml                              | 58 +++++++++++++-------------
 cn/docs/quickstart/_print/index.html           |  2 +-
 cn/docs/quickstart/hugegraph-loader/index.html |  8 ++--
 cn/docs/quickstart/index.xml                   | 58 +++++++++++++-------------
 cn/sitemap.xml                                 |  2 +-
 docs/_print/index.html                         |  2 +-
 docs/index.xml                                 | 58 +++++++++++++-------------
 docs/quickstart/_print/index.html              |  2 +-
 docs/quickstart/hugegraph-loader/index.html    |  8 ++--
 docs/quickstart/index.xml                      | 58 +++++++++++++-------------
 en/sitemap.xml                                 |  2 +-
 sitemap.xml                                    |  2 +-
 13 files changed, 131 insertions(+), 131 deletions(-)

diff --git a/cn/docs/_print/index.html b/cn/docs/_print/index.html
index db010254..f3ed8efb 100644
--- a/cn/docs/_print/index.html
+++ b/cn/docs/_print/index.html
@@ -591,7 +591,7 @@ HugeGraph支持多用户并行操作,用户可输入Gremlin查询语句,并
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// 如果没有更新策略, 则会得到
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>注意</strong> : 
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a 
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈, 
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id: 
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id 
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
 选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为 
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id  [...]
+</span></span></code></pre></div><blockquote><p><strong>注意</strong> : 
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a 
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈, 
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id: 
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id 
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
 选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为 
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id  [...]
 记录到进度文件中,进度文件位于 <code>${struct}</code> 目录下,文件名形如 <code>load-progress 
${date}</code> ,${struct} 为映射文件的前缀,${date} 为导入开始
 的时刻。比如:在 <code>2019-10-10 12:30:30</code> 开始的一次导入任务,使用的映射文件为 
<code>struct-example.json</code>,则进度文件的路径为与 struct-example.json
 同级的 <code>struct-example/load-progress 2019-10-10 
12:30:30</code>。</p><blockquote><p>注意:进度文件的生成与 &ndash;incremental-mode 
是否打开无关,每次导入结束都会生成一个进度文件。</p></blockquote><p>如果数据文件格式都是合法的,是用户自己停止(CTRL + C 或 
kill,kill -9 不支持)的导入任务,也就是说没有错误记录的情况下,下一次导入只需要设置
diff --git a/cn/docs/index.xml b/cn/docs/index.xml
index cc4b5d60..24b5bf25 100644
--- a/cn/docs/index.xml
+++ b/cn/docs/index.xml
@@ -5576,175 +5576,175 @@ HugeGraph目前采用EdgeCut的分区方案。&lt;/p>
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
-&lt;td>-f 或 &amp;ndash;file&lt;/td>
+&lt;td>&lt;code>-f&lt;/code> 或 &lt;code>--file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>配置脚本的路径&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-g 或 &amp;ndash;graph&lt;/td>
+&lt;td>&lt;code>-g&lt;/code> 或 &lt;code>--graph&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>图数据库空间&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-s 或 &amp;ndash;schema&lt;/td>
+&lt;td>&lt;code>-s&lt;/code> 或 &lt;code>--schema&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>schema文件路径&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-h 或 &amp;ndash;host&lt;/td>
+&lt;td>&lt;code>-h&lt;/code> 或 &lt;code>--host&lt;/code>&lt;/td>
 &lt;td>localhost&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeGraphServer 的地址&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-p 或 &amp;ndash;port&lt;/td>
+&lt;td>&lt;code>-p&lt;/code> 或 &lt;code>--port&lt;/code>&lt;/td>
 &lt;td>8080&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeGraphServer 的端口号&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;username&lt;/td>
+&lt;td>&lt;code>--username&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>当 HugeGraphServer 开启了权限认证时,当前图的 username&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;token&lt;/td>
+&lt;td>&lt;code>--token&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>当 HugeGraphServer 开启了权限认证时,当前图的 token&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;protocol&lt;/td>
+&lt;td>&lt;code>--protocol&lt;/code>&lt;/td>
 &lt;td>http&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>向服务端发请求的协议,可选 http 或 https&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-file&lt;/td>
+&lt;td>&lt;code>--trust-store-file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>请求协议为 https 时,客户端的证书文件路径&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-password&lt;/td>
+&lt;td>&lt;code>--trust-store-password&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>请求协议为 https 时,客户端证书密码&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-all-data&lt;/td>
+&lt;td>&lt;code>--clear-all-data&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>导入数据前是否清除服务端的原有数据&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-timeout&lt;/td>
+&lt;td>&lt;code>--clear-timeout&lt;/code>&lt;/td>
 &lt;td>240&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>导入数据前清除服务端的原有数据的超时时间&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;incremental-mode&lt;/td>
+&lt;td>&lt;code>--incremental-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;failure-mode&lt;/td>
+&lt;td>&lt;code>--failure-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-insert-threads&lt;/td>
+&lt;td>&lt;code>--batch-insert-threads&lt;/code>&lt;/td>
 &lt;td>CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>批量插入线程池大小 (CPUs是当前OS可用&lt;strong>逻辑核&lt;/strong>个数)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;single-insert-threads&lt;/td>
+&lt;td>&lt;code>--single-insert-threads&lt;/code>&lt;/td>
 &lt;td>8&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>单条插入线程池的大小&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn&lt;/td>
+&lt;td>&lt;code>--max-conn&lt;/code>&lt;/td>
 &lt;td>4 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeClient 与 HugeGraphServer 的最大 HTTP 
连接数,&lt;strong>调整线程&lt;/strong>的时候建议同时调整此项&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn-per-route&lt;/td>
+&lt;td>&lt;code>--max-conn-per-route&lt;/code>&lt;/td>
 &lt;td>2 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeClient 与 HugeGraphServer 每个路由的最大 HTTP 
连接数,&lt;strong>调整线程&lt;/strong>的时候建议同时调整此项&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-size&lt;/td>
+&lt;td>&lt;code>--batch-size&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>导入数据时每个批次包含的数据条数&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-parse-errors&lt;/td>
+&lt;td>&lt;code>--max-parse-errors&lt;/code>&lt;/td>
 &lt;td>1&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>最多允许多少行数据解析错误,达到该值则程序退出&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-insert-errors&lt;/td>
+&lt;td>&lt;code>--max-insert-errors&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>最多允许多少行数据插入错误,达到该值则程序退出&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;timeout&lt;/td>
+&lt;td>&lt;code>--timeout&lt;/code>&lt;/td>
 &lt;td>60&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>插入结果返回的超时时间(秒)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;shutdown-timeout&lt;/td>
+&lt;td>&lt;code>--shutdown-timeout&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>多线程停止的等待时间(秒)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-times&lt;/td>
+&lt;td>&lt;code>--retry-times&lt;/code>&lt;/td>
 &lt;td>0&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>发生特定异常时的重试次数&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-interval&lt;/td>
+&lt;td>&lt;code>--retry-interval&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>重试之前的间隔时间(秒)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;check-vertex&lt;/td>
+&lt;td>&lt;code>--check-vertex&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>插入边时是否检查边所连接的顶点是否存在&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;print-progress&lt;/td>
+&lt;td>&lt;code>--print-progress&lt;/code>&lt;/td>
 &lt;td>true&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>是否在控制台实时打印导入条数&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;dry-run&lt;/td>
+&lt;td>&lt;code>--dry-run&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>打开该模式,只解析不导入,通常用于测试&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;help&lt;/td>
+&lt;td>&lt;code>--help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>打印帮助信息&lt;/td>
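Taken together, the long-form options documented in the hunk above compose into an invocation like the following sketch. The graph name, file paths, and host/port are illustrative placeholders, not values from this commit:

```shell
# Hypothetical hugegraph-loader run combining options from the table above.
# Graph name, paths, and server address are placeholders -- adjust per deployment.
sh bin/hugegraph-loader.sh \
  -g hugegraph \
  -f ./struct-example.json \
  -s ./schema-example.groovy \
  -h 127.0.0.1 -p 8080 \
  --batch-insert-threads 8 \
  --max-parse-errors 1 \
  --dry-run true
```

Here `--dry-run true` makes the run parse-only, which is a low-risk way to validate the mapping file before a real import.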
diff --git a/cn/docs/quickstart/_print/index.html 
b/cn/docs/quickstart/_print/index.html
index 6cfa29be..4763aead 100644
--- a/cn/docs/quickstart/_print/index.html
+++ b/cn/docs/quickstart/_print/index.html
@@ -585,7 +585,7 @@
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// 如果没有更新策略, 则会得到
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>注意</strong> : 
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a 
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈, 
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id: 
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id 
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
 选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为 
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id  [...]
+</span></span></code></pre></div><blockquote><p><strong>注意</strong> : 
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a 
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈, 
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id: 
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id 
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
 选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为 
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id  [...]
 记录到进度文件中,进度文件位于 <code>${struct}</code> 目录下,文件名形如 <code>load-progress 
${date}</code> ,${struct} 为映射文件的前缀,${date} 为导入开始
 的时刻。比如:在 <code>2019-10-10 12:30:30</code> 开始的一次导入任务,使用的映射文件为 
<code>struct-example.json</code>,则进度文件的路径为与 struct-example.json
 同级的 <code>struct-example/load-progress 2019-10-10 
12:30:30</code>。</p><blockquote><p>注意:进度文件的生成与 &ndash;incremental-mode 
是否打开无关,每次导入结束都会生成一个进度文件。</p></blockquote><p>如果数据文件格式都是合法的,是用户自己停止(CTRL + C 或 
kill,kill -9 不支持)的导入任务,也就是说没有错误记录的情况下,下一次导入只需要设置
diff --git a/cn/docs/quickstart/hugegraph-loader/index.html 
b/cn/docs/quickstart/hugegraph-loader/index.html
index e1070b3a..8c01339f 100644
--- a/cn/docs/quickstart/hugegraph-loader/index.html
+++ b/cn/docs/quickstart/hugegraph-loader/index.html
@@ -11,7 +11,7 @@ HDFS …"><meta property="og:title" content="HugeGraph-Loader 
Quick Start"><meta
 2 获取 HugeGraph-Loader 有两种方式可以获取 HugeGraph-Loader:
 下载已编译的压缩包 克隆源码编译安装 2.1 下载已编译的压缩包 下载最新版本的 HugeGraph-Toolchain Release 包, 里面包含了 
loader + tool + hubble 全套工具, 如果你已经下载, 可跳过重复步骤
 wget 
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
 tar zxf *hugegraph*.tar.gz 2.2 克隆源码编译安装 克隆最新版本的 HugeGraph-Loader 源码包:
-# 1. get from github git clone https://github.";><meta property="og:type" 
content="article"><meta property="og:url" 
content="/cn/docs/quickstart/hugegraph-loader/"><meta 
property="article:section" content="docs"><meta 
property="article:modified_time" content="2023-05-17T23:12:35+08:00"><meta 
property="og:site_name" content="HugeGraph"><meta itemprop=name 
content="HugeGraph-Loader Quick Start"><meta itemprop=description content="1 
HugeGraph-Loader概述 HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将 [...]
+# 1. get from github git clone https://github.";><meta property="og:type" 
content="article"><meta property="og:url" 
content="/cn/docs/quickstart/hugegraph-loader/"><meta 
property="article:section" content="docs"><meta 
property="article:modified_time" content="2023-09-22T10:06:32+08:00"><meta 
property="og:site_name" content="HugeGraph"><meta itemprop=name 
content="HugeGraph-Loader Quick Start"><meta itemprop=description content="1 
HugeGraph-Loader概述 HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将 [...]
 目前支持的数据源包括:
 本地磁盘文件或目录,支持 TEXT、CSV 和 JSON 格式的文件,支持压缩文件 HDFS 文件或目录,支持压缩文件 主流关系型数据库,如 
MySQL、PostgreSQL、Oracle、SQL Server 本地磁盘文件和 HDFS 文件支持断点续传。
 后面会具体说明。
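The download steps described in the rendered quick-start text above amount to two commands (URL and version exactly as stated on the page):

```shell
# Fetch the prebuilt HugeGraph-Toolchain 1.0.0 release (bundles loader + tool + hubble)
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
# Unpack it in place
tar zxf *hugegraph*.tar.gz
```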
@@ -19,7 +19,7 @@ wget 
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-too
 2 获取 HugeGraph-Loader 有两种方式可以获取 HugeGraph-Loader:
 下载已编译的压缩包 克隆源码编译安装 2.1 下载已编译的压缩包 下载最新版本的 HugeGraph-Toolchain Release 包, 里面包含了 
loader + tool + hubble 全套工具, 如果你已经下载, 可跳过重复步骤
 wget 
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
 tar zxf *hugegraph*.tar.gz 2.2 克隆源码编译安装 克隆最新版本的 HugeGraph-Loader 源码包:
-# 1. get from github git clone https://github.";><meta itemprop=dateModified 
content="2023-05-17T23:12:35+08:00"><meta itemprop=wordCount 
content="1870"><meta itemprop=keywords content><meta name=twitter:card 
content="summary"><meta name=twitter:title content="HugeGraph-Loader Quick 
Start"><meta name=twitter:description content="1 HugeGraph-Loader概述 
HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
+# 1. get from github git clone https://github.";><meta itemprop=dateModified 
content="2023-09-22T10:06:32+08:00"><meta itemprop=wordCount 
content="1870"><meta itemprop=keywords content><meta name=twitter:card 
content="summary"><meta name=twitter:title content="HugeGraph-Loader Quick 
Start"><meta name=twitter:description content="1 HugeGraph-Loader概述 
HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
 目前支持的数据源包括:
 本地磁盘文件或目录,支持 TEXT、CSV 和 JSON 格式的文件,支持压缩文件 HDFS 文件或目录,支持压缩文件 主流关系型数据库,如 
MySQL、PostgreSQL、Oracle、SQL Server 本地磁盘文件和 HDFS 文件支持断点续传。
 后面会具体说明。
@@ -383,7 +383,7 @@ wget 
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-too
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// 如果没有更新策略, 则会得到
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#a40000>&#39;</span><span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#204a87;font-weight:700>null</span> <span 
style=color:#a40000>c</span> <span style=color:#a40000>d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>注意</strong> : 
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a 
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈, 
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id: 
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id 
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
 选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为 
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id  [...]
+</span></span></code></pre></div><blockquote><p><strong>注意</strong> : 
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a 
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈, 
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id: 
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id 
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
 选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为 
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id  [...]
 记录到进度文件中,进度文件位于 <code>${struct}</code> 目录下,文件名形如 <code>load-progress 
${date}</code> ,${struct} 为映射文件的前缀,${date} 为导入开始
 的时刻。比如:在 <code>2019-10-10 12:30:30</code> 开始的一次导入任务,使用的映射文件为 
<code>struct-example.json</code>,则进度文件的路径为与 struct-example.json
 同级的 <code>struct-example/load-progress 2019-10-10 
12:30:30</code>。</p><blockquote><p>注意:进度文件的生成与 &ndash;incremental-mode 
是否打开无关,每次导入结束都会生成一个进度文件。</p></blockquote><p>如果数据文件格式都是合法的,是用户自己停止(CTRL + C 或 
kill,kill -9 不支持)的导入任务,也就是说没有错误记录的情况下,下一次导入只需要设置
@@ -509,7 +509,7 @@ HugeGraph Toolchain 版本: 
toolchain-1.0.0</p></blockquote><p><code>spark-loade
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--deploy-mode cluster --name spark-hugegraph-loader 
--file ./hugegraph.json <span style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--username admin --token admin --host xx.xx.xx.xx 
--port <span style=color:#0000cf;font-weight:700>8093</span> <span 
style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--graph graph-test --num-executors <span 
style=color:#0000cf;font-weight:700>6</span> --executor-cores <span 
style=color:#0000cf;font-weight:700>16</span> --executor-memory 15g
-</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
 
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
 [...]
+</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
 
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
 [...]
 <script 
src=https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js 
integrity="sha512-UR25UO94eTnCVwjbXozyeVd6ZqpaAE9naiEUBK/A+QDbfSTQFhPGj5lOR6d8tsgbBk84Ggb5A3EkjsOgPRPcKA=="
 crossorigin=anonymous></script>
 <script src=/js/tabpane-persist.js></script>
 <script 
src=/js/main.min.aa9f4c5dae6a98b2c46277f4c56f1673a2b000d1756ce4ffae93784cab25e6d5.js
 integrity="sha256-qp9MXa5qmLLEYnf0xW8Wc6KwANF1bOT/rpN4TKsl5tU=" 
crossorigin=anonymous></script>
diff --git a/cn/docs/quickstart/index.xml b/cn/docs/quickstart/index.xml
index 80bca01c..6c78593b 100644
--- a/cn/docs/quickstart/index.xml
+++ b/cn/docs/quickstart/index.xml
@@ -1058,175 +1058,175 @@
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
-&lt;td>-f 或 &amp;ndash;file&lt;/td>
+&lt;td>&lt;code>-f&lt;/code> 或 &lt;code>--file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>配置脚本的路径&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-g 或 &amp;ndash;graph&lt;/td>
+&lt;td>&lt;code>-g&lt;/code> 或 &lt;code>--graph&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>图数据库空间&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-s 或 &amp;ndash;schema&lt;/td>
+&lt;td>&lt;code>-s&lt;/code> 或 &lt;code>--schema&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>schema文件路径&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-h 或 &amp;ndash;host&lt;/td>
+&lt;td>&lt;code>-h&lt;/code> 或 &lt;code>--host&lt;/code>&lt;/td>
 &lt;td>localhost&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeGraphServer 的地址&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-p 或 &amp;ndash;port&lt;/td>
+&lt;td>&lt;code>-p&lt;/code> 或 &lt;code>--port&lt;/code>&lt;/td>
 &lt;td>8080&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeGraphServer 的端口号&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;username&lt;/td>
+&lt;td>&lt;code>--username&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>当 HugeGraphServer 开启了权限认证时,当前图的 username&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;token&lt;/td>
+&lt;td>&lt;code>--token&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>当 HugeGraphServer 开启了权限认证时,当前图的 token&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;protocol&lt;/td>
+&lt;td>&lt;code>--protocol&lt;/code>&lt;/td>
 &lt;td>http&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>向服务端发请求的协议,可选 http 或 https&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-file&lt;/td>
+&lt;td>&lt;code>--trust-store-file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>请求协议为 https 时,客户端的证书文件路径&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-password&lt;/td>
+&lt;td>&lt;code>--trust-store-password&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>请求协议为 https 时,客户端证书密码&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-all-data&lt;/td>
+&lt;td>&lt;code>--clear-all-data&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>导入数据前是否清除服务端的原有数据&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-timeout&lt;/td>
+&lt;td>&lt;code>--clear-timeout&lt;/code>&lt;/td>
 &lt;td>240&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>导入数据前清除服务端的原有数据的超时时间&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;incremental-mode&lt;/td>
+&lt;td>&lt;code>--incremental-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;failure-mode&lt;/td>
+&lt;td>&lt;code>--failure-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-insert-threads&lt;/td>
+&lt;td>&lt;code>--batch-insert-threads&lt;/code>&lt;/td>
 &lt;td>CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>批量插入线程池大小 (CPUs是当前OS可用&lt;strong>逻辑核&lt;/strong>个数)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;single-insert-threads&lt;/td>
+&lt;td>&lt;code>--single-insert-threads&lt;/code>&lt;/td>
 &lt;td>8&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>单条插入线程池的大小&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn&lt;/td>
+&lt;td>&lt;code>--max-conn&lt;/code>&lt;/td>
 &lt;td>4 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeClient 与 HugeGraphServer 的最大 HTTP 
连接数,&lt;strong>调整线程&lt;/strong>的时候建议同时调整此项&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn-per-route&lt;/td>
+&lt;td>&lt;code>--max-conn-per-route&lt;/code>&lt;/td>
 &lt;td>2 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>HugeClient 与 HugeGraphServer 每个路由的最大 HTTP 
连接数,&lt;strong>调整线程&lt;/strong>的时候建议同时调整此项&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-size&lt;/td>
+&lt;td>&lt;code>--batch-size&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>导入数据时每个批次包含的数据条数&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-parse-errors&lt;/td>
+&lt;td>&lt;code>--max-parse-errors&lt;/code>&lt;/td>
 &lt;td>1&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>最多允许多少行数据解析错误,达到该值则程序退出&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-insert-errors&lt;/td>
+&lt;td>&lt;code>--max-insert-errors&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>最多允许多少行数据插入错误,达到该值则程序退出&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;timeout&lt;/td>
+&lt;td>&lt;code>--timeout&lt;/code>&lt;/td>
 &lt;td>60&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>插入结果返回的超时时间(秒)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;shutdown-timeout&lt;/td>
+&lt;td>&lt;code>--shutdown-timeout&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>多线程停止的等待时间(秒)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-times&lt;/td>
+&lt;td>&lt;code>--retry-times&lt;/code>&lt;/td>
 &lt;td>0&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>发生特定异常时的重试次数&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-interval&lt;/td>
+&lt;td>&lt;code>--retry-interval&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>重试之前的间隔时间(秒)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;check-vertex&lt;/td>
+&lt;td>&lt;code>--check-vertex&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>插入边时是否检查边所连接的顶点是否存在&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;print-progress&lt;/td>
+&lt;td>&lt;code>--print-progress&lt;/code>&lt;/td>
 &lt;td>true&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>是否在控制台实时打印导入条数&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;dry-run&lt;/td>
+&lt;td>&lt;code>--dry-run&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>打开该模式,只解析不导入,通常用于测试&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;help&lt;/td>
+&lt;td>&lt;code>--help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>打印帮助信息&lt;/td>
diff --git a/cn/sitemap.xml b/cn/sitemap.xml
index a121f093..b9d172c2 100644
--- a/cn/sitemap.xml
+++ b/cn/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/cn/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="en" href="/docs/guides/architectural/"/><xhtml:link 
rel="alternate" hreflang="cn" 
href="/cn/docs/guides/architectural/"/></url><url><loc>/cn/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/cn/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="en" href="/docs/guides/architectural/"/><xhtml:link 
rel="alternate" hreflang="cn" 
href="/cn/docs/guides/architectural/"/></url><url><loc>/cn/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00
 [...]
\ No newline at end of file
diff --git a/docs/_print/index.html b/docs/_print/index.html
index 892283da..052fcfd6 100644
--- a/docs/_print/index.html
+++ b/docs/_print/index.html
@@ -602,7 +602,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-downl
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#4e9a06>&#39;null null c d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After 
adopting the batch update strategy, the number of disk read requests will 
increase significantly, and the import speed will be several times slower than 
that of pure write coverage (at this time HDD disk [IOPS](<a 
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the 
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique 
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After 
adopting the batch update strategy, the number of disk read requests will 
increase significantly, and the import speed will be several times slower than 
that of pure write coverage (at this time HDD disk [IOPS](<a 
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the 
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique 
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example: for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set
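As a sketch of the resume workflow described above (file names follow the `struct-example.json` example in the text; the graph name is a placeholder), a re-run with incremental mode enabled picks up from the recorded progress file:

```shell
# Resume a load that was stopped with CTRL+C or plain kill (kill -9 is not supported).
# The progress file under struct-example/load-progress <date> is read automatically.
sh bin/hugegraph-loader.sh \
  -g hugegraph \
  -f ./struct-example.json \
  -s ./schema-example.groovy \
  --incremental-mode true
```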
diff --git a/docs/index.xml b/docs/index.xml
index c874b26a..d57ec66a 100644
--- a/docs/index.xml
+++ b/docs/index.xml
@@ -5562,175 +5562,175 @@ Visit the &lt;a 
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
-&lt;td>-f or &amp;ndash;file&lt;/td>
+&lt;td>&lt;code>-f&lt;/code> or &lt;code>--file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>path to configure script&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-g or &amp;ndash;graph&lt;/td>
+&lt;td>&lt;code>-g&lt;/code> or &lt;code>--graph&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>graph space name&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-s or &amp;ndash;schema&lt;/td>
+&lt;td>&lt;code>-s&lt;/code> or &lt;code>--schema&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>schema file path&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-h or &amp;ndash;host&lt;/td>
+&lt;td>&lt;code>-h&lt;/code> or &lt;code>--host&lt;/code>&lt;/td>
 &lt;td>localhost&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>address of HugeGraphServer&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-p or &amp;ndash;port&lt;/td>
+&lt;td>&lt;code>-p&lt;/code> or &lt;code>--port&lt;/code>&lt;/td>
 &lt;td>8080&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>port number of HugeGraphServer&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;username&lt;/td>
+&lt;td>&lt;code>--username&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When HugeGraphServer enables permission authentication, the username of 
the current graph&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;token&lt;/td>
+&lt;td>&lt;code>--token&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When HugeGraphServer has enabled authorization authentication, the 
token of the current graph&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;protocol&lt;/td>
+&lt;td>&lt;code>--protocol&lt;/code>&lt;/td>
 &lt;td>http&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Protocol for sending requests to the server, optional http or 
https&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-file&lt;/td>
+&lt;td>&lt;code>--trust-store-file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When the request protocol is https, the client&amp;rsquo;s certificate 
file path&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-password&lt;/td>
+&lt;td>&lt;code>--trust-store-password&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When the request protocol is https, the client certificate 
password&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-all-data&lt;/td>
+&lt;td>&lt;code>--clear-all-data&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to clear the original data on the server before importing 
data&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-timeout&lt;/td>
+&lt;td>&lt;code>--clear-timeout&lt;/code>&lt;/td>
 &lt;td>240&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Timeout for clearing the original data on the server before importing 
data&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;incremental-mode&lt;/td>
+&lt;td>&lt;code>--incremental-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to use the breakpoint resume mode, only the input source is 
FILE and HDFS support this mode, enabling this mode can start the import from 
the place where the last import stopped&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;failure-mode&lt;/td>
+&lt;td>&lt;code>--failure-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When the failure mode is true, the data that failed before will be 
imported. Generally speaking, the failed data file needs to be manually 
corrected and edited, and then imported again&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-insert-threads&lt;/td>
+&lt;td>&lt;code>--batch-insert-threads&lt;/code>&lt;/td>
 &lt;td>CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Batch insert thread pool size (CPUs is the number of &lt;strong>logical 
cores&lt;/strong> available to the current OS)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;single-insert-threads&lt;/td>
+&lt;td>&lt;code>--single-insert-threads&lt;/code>&lt;/td>
 &lt;td>8&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Size of single insert thread pool&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn&lt;/td>
+&lt;td>&lt;code>--max-conn&lt;/code>&lt;/td>
 &lt;td>4 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of HTTP connections between HugeClient and 
HugeGraphServer, it is recommended to adjust this when &lt;strong>adjusting 
threads&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn-per-route&lt;/td>
+&lt;td>&lt;code>--max-conn-per-route&lt;/code>&lt;/td>
 &lt;td>2 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of HTTP connections for each route between 
HugeClient and HugeGraphServer, it is recommended to adjust this item at the 
same time when &lt;strong>adjusting the thread&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-size&lt;/td>
+&lt;td>&lt;code>--batch-size&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The number of data items in each batch when importing data&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-parse-errors&lt;/td>
+&lt;td>&lt;code>--max-parse-errors&lt;/code>&lt;/td>
 &lt;td>1&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of lines of data parsing errors allowed, and the 
program exits when this value is reached&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-insert-errors&lt;/td>
+&lt;td>&lt;code>--max-insert-errors&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of rows of data insertion errors allowed, and the 
program exits when this value is reached&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;timeout&lt;/td>
+&lt;td>&lt;code>--timeout&lt;/code>&lt;/td>
 &lt;td>60&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Timeout (seconds) for inserting results to return&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;shutdown-timeout&lt;/td>
+&lt;td>&lt;code>--shutdown-timeout&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Waiting time for multithreading to stop (seconds)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-times&lt;/td>
+&lt;td>&lt;code>--retry-times&lt;/code>&lt;/td>
 &lt;td>0&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Number of retries when a specific exception occurs&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-interval&lt;/td>
+&lt;td>&lt;code>--retry-interval&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>interval before retry (seconds)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;check-vertex&lt;/td>
+&lt;td>&lt;code>--check-vertex&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to check whether the vertex connected by the edge exists when 
inserting the edge&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;print-progress&lt;/td>
+&lt;td>&lt;code>--print-progress&lt;/code>&lt;/td>
 &lt;td>true&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to print the number of imported items in the console in real 
time&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;dry-run&lt;/td>
+&lt;td>&lt;code>--dry-run&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Turn on this mode, only parsing but not importing, usually used for 
testing&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;help&lt;/td>
+&lt;td>&lt;code>--help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>print help information&lt;/td>
diff --git a/docs/quickstart/_print/index.html 
b/docs/quickstart/_print/index.html
index ef597ecb..3599790c 100644
--- a/docs/quickstart/_print/index.html
+++ b/docs/quickstart/_print/index.html
@@ -597,7 +597,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-downl
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#4e9a06>&#39;null null c d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After 
adopting the batch update strategy, the number of disk read requests will 
increase significantly, and the import speed will be several times slower than 
that of pure write coverage (at this time HDD disk [IOPS](<a 
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the 
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique 
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After 
adopting the batch update strategy, the number of disk read requests will 
increase significantly, and the import speed will be several times slower than 
that of pure write coverage (at this time HDD disk [IOPS](<a 
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the 
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique 
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example: for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set
diff --git a/docs/quickstart/hugegraph-loader/index.html 
b/docs/quickstart/hugegraph-loader/index.html
index 007968fd..3348e110 100644
--- a/docs/quickstart/hugegraph-loader/index.html
+++ b/docs/quickstart/hugegraph-loader/index.html
@@ -1,9 +1,9 @@
 <!doctype html><html lang=en class=no-js><head><meta charset=utf-8><meta 
name=viewport 
content="width=device-width,initial-scale=1,shrink-to-fit=no"><meta 
name=generator content="Hugo 0.102.3"><meta name=robots content="index, 
follow"><link rel="shortcut icon" href=/favicons/favicon.ico><link 
rel=apple-touch-icon href=/favicons/apple-touch-icon-180x180.png 
sizes=180x180><link rel=icon type=image/png href=/favicons/favicon-16x16.png 
sizes=16x16><link rel=icon type=image/png href=/favicons [...]
 HugeGraph-Loader is the data import component of HugeGraph, which can convert 
data from various data sources into graph …"><meta property="og:title" 
content="HugeGraph-Loader Quick Start"><meta property="og:description" 
content="1 HugeGraph-Loader Overview HugeGraph-Loader is the data import 
component of HugeGraph, which can convert data from various data sources into 
graph vertices and edges and import them into the graph database in batches.
 Currently supported data sources include:
-Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory, supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
property="og:type" content="article"><meta property="og:url" 
content="/docs/quickstart/hugegraph-loader/"><meta property="article:section" 
content="docs"><meta property="article:modified_time" conten [...]
+Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory, supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
property="og:type" content="article"><meta property="og:url" 
content="/docs/quickstart/hugegraph-loader/"><meta property="article:section" 
content="docs"><meta property="article:modified_time" conten [...]
 Currently supported data sources include:
-Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory, supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
itemprop=dateModified content="2023-05-17T23:12:35+08:00"><meta 
itemprop=wordCount content="5299"><meta itemprop=keywords content><meta 
name=twitter:card content="summary"><meta name=twitter:title co [...]
+Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory, supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><meta 
itemprop=dateModified content="2023-09-22T10:06:32+08:00"><meta 
itemprop=wordCount content="5299"><meta itemprop=keywords content><meta 
name=twitter:card content="summary"><meta name=twitter:title co [...]
 Currently supported data sources include:
 Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files HDFS file or directory, supports compressed files 
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server 
Local disk files and HDFS files support resumable uploads."><link rel=preload 
href=/scss/main.min.1764bdd1b00b15c82ea08e6a847f47114a8787b9770c047a8c6082457466ce2b.css
 as=style><link 
href=/scss/main.min.1764bdd1b00b15c82ea08e6a847f47114a8787b9770c047a8c6082457466ce2
 [...]
 <link rel=stylesheet href=/css/prism.css><script 
type=application/javascript>var 
doNotTrack=!1;doNotTrack||(window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)},ga.l=+new
 
Date,ga("create","UA-00000000-0","auto"),ga("send","pageview"))</script><script 
async src=https://www.google-analytics.com/analytics.js></script></head><body 
class=td-page><header><nav class="js-navbar-scroll navbar navbar-expand 
navbar-dark flex-column flex-md-row td-navbar"><a class=navbar-brand href=/><sp 
[...]
@@ -361,7 +361,7 @@ Visit the <a 
href=https://www.oracle.com/database/technologies/appdev/jdbc-downl
 </span></span><span style=display:flex><span>
 </span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic>// If there is no update strategy, you 
will get
 </span></span></span><span style=display:flex><span><span 
style=color:#8f5902;font-style:italic></span><span 
style=color:#4e9a06>&#39;null null c d&#39;</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After 
adopting the batch update strategy, the number of disk read requests will 
increase significantly, and the import speed will be several times slower than 
that of pure write coverage (at this time HDD disk [IOPS](<a 
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the 
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique 
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After 
adopting the batch update strategy, the number of disk read requests will 
increase significantly, and the import speed will be several times slower than 
that of pure write coverage (at this time HDD disk [IOPS](<a 
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the 
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique 
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
 Recorded in the progress file, the progress file is located in the 
<code>${struct}</code> directory, the file name is like <code>load-progress 
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the 
start of the import
 moment. For example: for an import task started at <code>2019-10-10 
12:30:30</code>, the mapping file used is <code>struct-example.json</code>, 
then the path of the progress file is the same as struct-example.json
 Sibling <code>struct-example/load-progress 2019-10-10 
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is 
independent of whether &ndash;incremental-mode is turned on or not, and a 
progress file is generated at the end of each import.</p></blockquote><p>If the 
data file formats are all legal and the import task is stopped by the user 
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no 
error record, the next import only needs to be set
@@ -487,7 +487,7 @@ And there is no need to guarantee the order between the two 
parameters.</p><ul><
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--deploy-mode cluster --name spark-hugegraph-loader 
--file ./hugegraph.json <span style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--username admin --token admin --host xx.xx.xx.xx 
--port <span style=color:#0000cf;font-weight:700>8093</span> <span 
style=color:#4e9a06>\
 </span></span></span><span style=display:flex><span><span 
style=color:#4e9a06></span>--graph graph-test --num-executors <span 
style=color:#0000cf;font-weight:700>6</span> --executor-cores <span 
style=color:#0000cf;font-weight:700>16</span> --executor-memory 15g
-</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
 
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
 [...]
+</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
 
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
 [...]
 <script 
src=https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js 
integrity="sha512-UR25UO94eTnCVwjbXozyeVd6ZqpaAE9naiEUBK/A+QDbfSTQFhPGj5lOR6d8tsgbBk84Ggb5A3EkjsOgPRPcKA=="
 crossorigin=anonymous></script>
 <script src=/js/tabpane-persist.js></script>
 <script 
src=/js/main.min.aa9f4c5dae6a98b2c46277f4c56f1673a2b000d1756ce4ffae93784cab25e6d5.js
 integrity="sha256-qp9MXa5qmLLEYnf0xW8Wc6KwANF1bOT/rpN4TKsl5tU=" 
crossorigin=anonymous></script>
diff --git a/docs/quickstart/index.xml b/docs/quickstart/index.xml
index 24e4c16c..69907d42 100644
--- a/docs/quickstart/index.xml
+++ b/docs/quickstart/index.xml
@@ -1073,175 +1073,175 @@ Visit the &lt;a 
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
 &lt;/thead>
 &lt;tbody>
 &lt;tr>
-&lt;td>-f or &amp;ndash;file&lt;/td>
+&lt;td>&lt;code>-f&lt;/code> or &lt;code>--file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>path to configure script&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-g or &amp;ndash;graph&lt;/td>
+&lt;td>&lt;code>-g&lt;/code> or &lt;code>--graph&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>graph space name&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-s or &amp;ndash;schema&lt;/td>
+&lt;td>&lt;code>-s&lt;/code> or &lt;code>--schema&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Y&lt;/td>
 &lt;td>schema file path&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-h or &amp;ndash;host&lt;/td>
+&lt;td>&lt;code>-h&lt;/code> or &lt;code>--host&lt;/code>&lt;/td>
 &lt;td>localhost&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>address of HugeGraphServer&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>-p or &amp;ndash;port&lt;/td>
+&lt;td>&lt;code>-p&lt;/code> or &lt;code>--port&lt;/code>&lt;/td>
 &lt;td>8080&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>port number of HugeGraphServer&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;username&lt;/td>
+&lt;td>&lt;code>--username&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When HugeGraphServer enables permission authentication, the username of 
the current graph&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;token&lt;/td>
+&lt;td>&lt;code>--token&lt;/code>&lt;/td>
 &lt;td>null&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When HugeGraphServer has enabled authorization authentication, the 
token of the current graph&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;protocol&lt;/td>
+&lt;td>&lt;code>--protocol&lt;/code>&lt;/td>
 &lt;td>http&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Protocol for sending requests to the server, optional http or 
https&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-file&lt;/td>
+&lt;td>&lt;code>--trust-store-file&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When the request protocol is https, the client&amp;rsquo;s certificate 
file path&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;trust-store-password&lt;/td>
+&lt;td>&lt;code>--trust-store-password&lt;/code>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When the request protocol is https, the client certificate 
password&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-all-data&lt;/td>
+&lt;td>&lt;code>--clear-all-data&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to clear the original data on the server before importing 
data&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;clear-timeout&lt;/td>
+&lt;td>&lt;code>--clear-timeout&lt;/code>&lt;/td>
 &lt;td>240&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Timeout for clearing the original data on the server before importing 
data&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;incremental-mode&lt;/td>
+&lt;td>&lt;code>--incremental-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to use the breakpoint resume mode, only the input source is 
FILE and HDFS support this mode, enabling this mode can start the import from 
the place where the last import stopped&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;failure-mode&lt;/td>
+&lt;td>&lt;code>--failure-mode&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>When the failure mode is true, the data that failed before will be 
imported. Generally speaking, the failed data file needs to be manually 
corrected and edited, and then imported again&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-insert-threads&lt;/td>
+&lt;td>&lt;code>--batch-insert-threads&lt;/code>&lt;/td>
 &lt;td>CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Batch insert thread pool size (CPUs is the number of &lt;strong>logical 
cores&lt;/strong> available to the current OS)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;single-insert-threads&lt;/td>
+&lt;td>&lt;code>--single-insert-threads&lt;/code>&lt;/td>
 &lt;td>8&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Size of single insert thread pool&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn&lt;/td>
+&lt;td>&lt;code>--max-conn&lt;/code>&lt;/td>
 &lt;td>4 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of HTTP connections between HugeClient and 
HugeGraphServer, it is recommended to adjust this when &lt;strong>adjusting 
threads&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-conn-per-route&lt;/td>
+&lt;td>&lt;code>--max-conn-per-route&lt;/code>&lt;/td>
 &lt;td>2 * CPUs&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of HTTP connections for each route between 
HugeClient and HugeGraphServer, it is recommended to adjust this item at the 
same time when &lt;strong>adjusting the thread&lt;/strong>&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;batch-size&lt;/td>
+&lt;td>&lt;code>--batch-size&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The number of data items in each batch when importing data&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-parse-errors&lt;/td>
+&lt;td>&lt;code>--max-parse-errors&lt;/code>&lt;/td>
 &lt;td>1&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of lines of data parsing errors allowed, and the 
program exits when this value is reached&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;max-insert-errors&lt;/td>
+&lt;td>&lt;code>--max-insert-errors&lt;/code>&lt;/td>
 &lt;td>500&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>The maximum number of rows of data insertion errors allowed, and the 
program exits when this value is reached&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;timeout&lt;/td>
+&lt;td>&lt;code>--timeout&lt;/code>&lt;/td>
 &lt;td>60&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Timeout (seconds) for inserting results to return&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;shutdown-timeout&lt;/td>
+&lt;td>&lt;code>--shutdown-timeout&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Waiting time for multithreading to stop (seconds)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-times&lt;/td>
+&lt;td>&lt;code>--retry-times&lt;/code>&lt;/td>
 &lt;td>0&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Number of retries when a specific exception occurs&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;retry-interval&lt;/td>
+&lt;td>&lt;code>--retry-interval&lt;/code>&lt;/td>
 &lt;td>10&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>interval before retry (seconds)&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;check-vertex&lt;/td>
+&lt;td>&lt;code>--check-vertex&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to check whether the vertex connected by the edge exists when 
inserting the edge&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;print-progress&lt;/td>
+&lt;td>&lt;code>--print-progress&lt;/code>&lt;/td>
 &lt;td>true&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Whether to print the number of imported items in the console in real 
time&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;dry-run&lt;/td>
+&lt;td>&lt;code>--dry-run&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>Turn on this mode, only parsing but not importing, usually used for 
testing&lt;/td>
 &lt;/tr>
 &lt;tr>
-&lt;td>&amp;ndash;help&lt;/td>
+&lt;td>&lt;code>--help&lt;/code>&lt;/td>
 &lt;td>false&lt;/td>
 &lt;td>&lt;/td>
 &lt;td>print help information&lt;/td>
diff --git a/en/sitemap.xml b/en/sitemap.xml
index f27d1d86..a7a97743 100644
--- a/en/sitemap.xml
+++ b/en/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="cn" 
href="/cn/docs/guides/architectural/"/><xhtml:link rel="alternate" 
hreflang="en" 
href="/docs/guides/architectural/"/></url><url><loc>/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00</last
 [...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"; 
xmlns:xhtml="http://www.w3.org/1999/xhtml";><url><loc>/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
 rel="alternate" hreflang="cn" 
href="/cn/docs/guides/architectural/"/><xhtml:link rel="alternate" 
hreflang="en" 
href="/docs/guides/architectural/"/></url><url><loc>/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00</last
 [...]
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 1b9ac1cb..55f07d2d 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><sitemapindex 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9";><sitemap><loc>/en/sitemap.xml</loc><lastmod>2023-09-19T14:14:13+08:00</lastmod></sitemap><sitemap><loc>/cn/sitemap.xml</loc><lastmod>2023-09-19T14:14:13+08:00</lastmod></sitemap></sitemapindex>
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><sitemapindex 
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9";><sitemap><loc>/en/sitemap.xml</loc><lastmod>2023-09-22T10:06:32+08:00</lastmod></sitemap><sitemap><loc>/cn/sitemap.xml</loc><lastmod>2023-09-22T10:06:32+08:00</lastmod></sitemap></sitemapindex>
\ No newline at end of file
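
For context on what this display fix affects: the tables above document hugegraph-loader's command-line options, whose double-dash prefixes were previously rendered as `&ndash;`. A minimal invocation using the corrected long-form flags might look like the sketch below (the host, port, graph name, and file paths are placeholders for illustration, not values from this commit; flag names are taken from the option tables in the diff):

```shell
# Illustrative hugegraph-loader run using the long-form options the
# docs now render correctly. Requires a running HugeGraphServer;
# paths and connection details below are placeholders.
sh bin/hugegraph-loader.sh \
  --graph hugegraph \
  --file ./struct-example.json \
  --schema ./schema-example.groovy \
  --host 127.0.0.1 --port 8080 \
  --batch-insert-threads 8 \
  --max-parse-errors 1
```

The short forms (`-g`, `-f`, `-s`, `-h`, `-p`) shown in the tables are interchangeable with these long forms.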
