This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 2f4d187f fix loader conf display (#289) 699239d3f3c18bcfbbfc569c3b289826407e0b11
2f4d187f is described below
commit 2f4d187ffd957605ac3077e77bd8d3eec913bd2d
Author: simon824 <[email protected]>
AuthorDate: Fri Sep 22 02:07:15 2023 +0000
fix loader conf display (#289) 699239d3f3c18bcfbbfc569c3b289826407e0b11
---
cn/docs/_print/index.html | 2 +-
cn/docs/index.xml | 58 +++++++++++++-------------
cn/docs/quickstart/_print/index.html | 2 +-
cn/docs/quickstart/hugegraph-loader/index.html | 8 ++--
cn/docs/quickstart/index.xml | 58 +++++++++++++-------------
cn/sitemap.xml | 2 +-
docs/_print/index.html | 2 +-
docs/index.xml | 58 +++++++++++++-------------
docs/quickstart/_print/index.html | 2 +-
docs/quickstart/hugegraph-loader/index.html | 8 ++--
docs/quickstart/index.xml | 58 +++++++++++++-------------
en/sitemap.xml | 2 +-
sitemap.xml | 2 +-
13 files changed, 131 insertions(+), 131 deletions(-)
diff --git a/cn/docs/_print/index.html b/cn/docs/_print/index.html
index db010254..f3ed8efb 100644
--- a/cn/docs/_print/index.html
+++ b/cn/docs/_print/index.html
@@ -591,7 +591,7 @@ HugeGraph支持多用户并行操作,用户可输入Gremlin查询语句,并
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic>// 如果没有更新策略, 则会得到
</span></span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic></span><span
style=color:#a40000>'</span><span
style=color:#204a87;font-weight:700>null</span> <span
style=color:#204a87;font-weight:700>null</span> <span
style=color:#a40000>c</span> <span style=color:#a40000>d'</span>
-</span></span></code></pre></div><blockquote><p><strong>注意</strong> :
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈,
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id:
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id [...]
+</span></span></code></pre></div><blockquote><p><strong>注意</strong> :
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈,
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id:
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id [...]
记录到进度文件中,进度文件位于 <code>${struct}</code> 目录下,文件名形如 <code>load-progress
${date}</code> ,${struct} 为映射文件的前缀,${date} 为导入开始
的时刻。比如:在 <code>2019-10-10 12:30:30</code> 开始的一次导入任务,使用的映射文件为
<code>struct-example.json</code>,则进度文件的路径为与 struct-example.json
同级的 <code>struct-example/load-progress 2019-10-10
12:30:30</code>。</p><blockquote><p>注意:进度文件的生成与 –incremental-mode
是否打开无关,每次导入结束都会生成一个进度文件。</p></blockquote><p>如果数据文件格式都是合法的,是用户自己停止(CTRL + C 或
kill,kill -9 不支持)的导入任务,也就是说没有错误记录的情况下,下一次导入只需要设置
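The passage above describes resuming an interrupted load: a progress file is always written, and a rerun only needs --incremental-mode true. A minimal sketch of that flow (the script path, graph name and file names are illustrative, not taken from this commit):

    # initial load, interrupted by the user with CTRL + C
    sh bin/hugegraph-loader.sh -g hugegraph -f struct-example.json -s schema-example.groovy
    # rerun resumes from struct-example/load-progress <date>
    sh bin/hugegraph-loader.sh -g hugegraph -f struct-example.json -s schema-example.groovy --incremental-mode true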
diff --git a/cn/docs/index.xml b/cn/docs/index.xml
index cc4b5d60..24b5bf25 100644
--- a/cn/docs/index.xml
+++ b/cn/docs/index.xml
@@ -5576,175 +5576,175 @@ HugeGraph目前采用EdgeCut的分区方案。</p>
</thead>
<tbody>
<tr>
-<td>-f 或 &ndash;file</td>
+<td><code>-f</code> 或 <code>--file</code></td>
<td></td>
<td>Y</td>
<td>配置脚本的路径</td>
</tr>
<tr>
-<td>-g 或 &ndash;graph</td>
+<td><code>-g</code> 或 <code>--graph</code></td>
<td></td>
<td>Y</td>
<td>图数据库空间</td>
</tr>
<tr>
-<td>-s 或 &ndash;schema</td>
+<td><code>-s</code> 或 <code>--schema</code></td>
<td></td>
<td>Y</td>
<td>schema文件路径</td>
</tr>
<tr>
-<td>-h 或 &ndash;host</td>
+<td><code>-h</code> 或 <code>--host</code></td>
<td>localhost</td>
<td></td>
<td>HugeGraphServer 的地址</td>
</tr>
<tr>
-<td>-p 或 &ndash;port</td>
+<td><code>-p</code> 或 <code>--port</code></td>
<td>8080</td>
<td></td>
<td>HugeGraphServer 的端口号</td>
</tr>
<tr>
-<td>&ndash;username</td>
+<td><code>--username</code></td>
<td>null</td>
<td></td>
<td>当 HugeGraphServer 开启了权限认证时,当前图的 username</td>
</tr>
<tr>
-<td>&ndash;token</td>
+<td><code>--token</code></td>
<td>null</td>
<td></td>
<td>当 HugeGraphServer 开启了权限认证时,当前图的 token</td>
</tr>
<tr>
-<td>&ndash;protocol</td>
+<td><code>--protocol</code></td>
<td>http</td>
<td></td>
<td>向服务端发请求的协议,可选 http 或 https</td>
</tr>
<tr>
-<td>&ndash;trust-store-file</td>
+<td><code>--trust-store-file</code></td>
<td></td>
<td></td>
<td>请求协议为 https 时,客户端的证书文件路径</td>
</tr>
<tr>
-<td>&ndash;trust-store-password</td>
+<td><code>--trust-store-password</code></td>
<td></td>
<td></td>
<td>请求协议为 https 时,客户端证书密码</td>
</tr>
<tr>
-<td>&ndash;clear-all-data</td>
+<td><code>--clear-all-data</code></td>
<td>false</td>
<td></td>
<td>导入数据前是否清除服务端的原有数据</td>
</tr>
<tr>
-<td>&ndash;clear-timeout</td>
+<td><code>--clear-timeout</code></td>
<td>240</td>
<td></td>
<td>导入数据前清除服务端的原有数据的超时时间</td>
</tr>
<tr>
-<td>&ndash;incremental-mode</td>
+<td><code>--incremental-mode</code></td>
<td>false</td>
<td></td>
<td>是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导</td>
</tr>
<tr>
-<td>&ndash;failure-mode</td>
+<td><code>--failure-mode</code></td>
<td>false</td>
<td></td>
<td>失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入</td>
</tr>
<tr>
-<td>&ndash;batch-insert-threads</td>
+<td><code>--batch-insert-threads</code></td>
<td>CPUs</td>
<td></td>
<td>批量插入线程池大小 (CPUs是当前OS可用<strong>逻辑核</strong>个数)</td>
</tr>
<tr>
-<td>&ndash;single-insert-threads</td>
+<td><code>--single-insert-threads</code></td>
<td>8</td>
<td></td>
<td>单条插入线程池的大小</td>
</tr>
<tr>
-<td>&ndash;max-conn</td>
+<td><code>--max-conn</code></td>
<td>4 * CPUs</td>
<td></td>
<td>HugeClient 与 HugeGraphServer 的最大 HTTP
连接数,<strong>调整线程</strong>的时候建议同时调整此项</td>
</tr>
<tr>
-<td>&ndash;max-conn-per-route</td>
+<td><code>--max-conn-per-route</code></td>
<td>2 * CPUs</td>
<td></td>
<td>HugeClient 与 HugeGraphServer 每个路由的最大 HTTP
连接数,<strong>调整线程</strong>的时候建议同时调整此项</td>
</tr>
<tr>
-<td>&ndash;batch-size</td>
+<td><code>--batch-size</code></td>
<td>500</td>
<td></td>
<td>导入数据时每个批次包含的数据条数</td>
</tr>
<tr>
-<td>&ndash;max-parse-errors</td>
+<td><code>--max-parse-errors</code></td>
<td>1</td>
<td></td>
<td>最多允许多少行数据解析错误,达到该值则程序退出</td>
</tr>
<tr>
-<td>&ndash;max-insert-errors</td>
+<td><code>--max-insert-errors</code></td>
<td>500</td>
<td></td>
<td>最多允许多少行数据插入错误,达到该值则程序退出</td>
</tr>
<tr>
-<td>&ndash;timeout</td>
+<td><code>--timeout</code></td>
<td>60</td>
<td></td>
<td>插入结果返回的超时时间(秒)</td>
</tr>
<tr>
-<td>&ndash;shutdown-timeout</td>
+<td><code>--shutdown-timeout</code></td>
<td>10</td>
<td></td>
<td>多线程停止的等待时间(秒)</td>
</tr>
<tr>
-<td>&ndash;retry-times</td>
+<td><code>--retry-times</code></td>
<td>0</td>
<td></td>
<td>发生特定异常时的重试次数</td>
</tr>
<tr>
-<td>&ndash;retry-interval</td>
+<td><code>--retry-interval</code></td>
<td>10</td>
<td></td>
<td>重试之前的间隔时间(秒)</td>
</tr>
<tr>
-<td>&ndash;check-vertex</td>
+<td><code>--check-vertex</code></td>
<td>false</td>
<td></td>
<td>插入边时是否检查边所连接的顶点是否存在</td>
</tr>
<tr>
-<td>&ndash;print-progress</td>
+<td><code>--print-progress</code></td>
<td>true</td>
<td></td>
<td>是否在控制台实时打印导入条数</td>
</tr>
<tr>
-<td>&ndash;dry-run</td>
+<td><code>--dry-run</code></td>
<td>false</td>
<td></td>
<td>打开该模式,只解析不导入,通常用于测试</td>
</tr>
<tr>
-<td>&ndash;help</td>
+<td><code>--help</code></td>
<td>false</td>
<td></td>
<td>打印帮助信息</td>
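The rows above are the loader's CLI flags, now rendered in <code> tags with literal double hyphens instead of &ndash;. As a hedged illustration of how the required flags (-f/--file, -g/--graph, -s/--schema) combine with the optional host and port flags, assuming example paths:

    sh bin/hugegraph-loader.sh -g hugegraph \
      -f example/file/struct.json -s example/file/schema.groovy \
      -h localhost -p 8080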
diff --git a/cn/docs/quickstart/_print/index.html
b/cn/docs/quickstart/_print/index.html
index 6cfa29be..4763aead 100644
--- a/cn/docs/quickstart/_print/index.html
+++ b/cn/docs/quickstart/_print/index.html
@@ -585,7 +585,7 @@
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic>// 如果没有更新策略, 则会得到
</span></span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic></span><span
style=color:#a40000>'</span><span
style=color:#204a87;font-weight:700>null</span> <span
style=color:#204a87;font-weight:700>null</span> <span
style=color:#a40000>c</span> <span style=color:#a40000>d'</span>
-</span></span></code></pre></div><blockquote><p><strong>注意</strong> :
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈,
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id:
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id [...]
+</span></span></code></pre></div><blockquote><p><strong>注意</strong> :
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈,
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id:
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id [...]
记录到进度文件中,进度文件位于 <code>${struct}</code> 目录下,文件名形如 <code>load-progress
${date}</code> ,${struct} 为映射文件的前缀,${date} 为导入开始
的时刻。比如:在 <code>2019-10-10 12:30:30</code> 开始的一次导入任务,使用的映射文件为
<code>struct-example.json</code>,则进度文件的路径为与 struct-example.json
同级的 <code>struct-example/load-progress 2019-10-10
12:30:30</code>。</p><blockquote><p>注意:进度文件的生成与 –incremental-mode
是否打开无关,每次导入结束都会生成一个进度文件。</p></blockquote><p>如果数据文件格式都是合法的,是用户自己停止(CTRL + C 或
kill,kill -9 不支持)的导入任务,也就是说没有错误记录的情况下,下一次导入只需要设置
diff --git a/cn/docs/quickstart/hugegraph-loader/index.html
b/cn/docs/quickstart/hugegraph-loader/index.html
index e1070b3a..8c01339f 100644
--- a/cn/docs/quickstart/hugegraph-loader/index.html
+++ b/cn/docs/quickstart/hugegraph-loader/index.html
@@ -11,7 +11,7 @@ HDFS …"><meta property="og:title" content="HugeGraph-Loader
Quick Start"><meta
2 获取 HugeGraph-Loader 有两种方式可以获取 HugeGraph-Loader:
下载已编译的压缩包 克隆源码编译安装 2.1 下载已编译的压缩包 下载最新版本的 HugeGraph-Toolchain Release 包, 里面包含了
loader + tool + hubble 全套工具, 如果你已经下载, 可跳过重复步骤
wget
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz 2.2 克隆源码编译安装 克隆最新版本的 HugeGraph-Loader 源码包:
-# 1. get from github git clone https://github."><meta property="og:type"
content="article"><meta property="og:url"
content="/cn/docs/quickstart/hugegraph-loader/"><meta
property="article:section" content="docs"><meta
property="article:modified_time" content="2023-05-17T23:12:35+08:00"><meta
property="og:site_name" content="HugeGraph"><meta itemprop=name
content="HugeGraph-Loader Quick Start"><meta itemprop=description content="1
HugeGraph-Loader概述 HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将 [...]
+# 1. get from github git clone https://github."><meta property="og:type"
content="article"><meta property="og:url"
content="/cn/docs/quickstart/hugegraph-loader/"><meta
property="article:section" content="docs"><meta
property="article:modified_time" content="2023-09-22T10:06:32+08:00"><meta
property="og:site_name" content="HugeGraph"><meta itemprop=name
content="HugeGraph-Loader Quick Start"><meta itemprop=description content="1
HugeGraph-Loader概述 HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将 [...]
目前支持的数据源包括:
本地磁盘文件或目录,支持 TEXT、CSV 和 JSON 格式的文件,支持压缩文件 HDFS 文件或目录,支持压缩文件 主流关系型数据库,如
MySQL、PostgreSQL、Oracle、SQL Server 本地磁盘文件和 HDFS 文件支持断点续传。
后面会具体说明。
@@ -19,7 +19,7 @@ wget
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-too
2 获取 HugeGraph-Loader 有两种方式可以获取 HugeGraph-Loader:
下载已编译的压缩包 克隆源码编译安装 2.1 下载已编译的压缩包 下载最新版本的 HugeGraph-Toolchain Release 包, 里面包含了
loader + tool + hubble 全套工具, 如果你已经下载, 可跳过重复步骤
wget
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz 2.2 克隆源码编译安装 克隆最新版本的 HugeGraph-Loader 源码包:
-# 1. get from github git clone https://github."><meta itemprop=dateModified
content="2023-05-17T23:12:35+08:00"><meta itemprop=wordCount
content="1870"><meta itemprop=keywords content><meta name=twitter:card
content="summary"><meta name=twitter:title content="HugeGraph-Loader Quick
Start"><meta name=twitter:description content="1 HugeGraph-Loader概述
HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
+# 1. get from github git clone https://github."><meta itemprop=dateModified
content="2023-09-22T10:06:32+08:00"><meta itemprop=wordCount
content="1870"><meta itemprop=keywords content><meta name=twitter:card
content="summary"><meta name=twitter:title content="HugeGraph-Loader Quick
Start"><meta name=twitter:description content="1 HugeGraph-Loader概述
HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
目前支持的数据源包括:
本地磁盘文件或目录,支持 TEXT、CSV 和 JSON 格式的文件,支持压缩文件 HDFS 文件或目录,支持压缩文件 主流关系型数据库,如
MySQL、PostgreSQL、Oracle、SQL Server 本地磁盘文件和 HDFS 文件支持断点续传。
后面会具体说明。
@@ -383,7 +383,7 @@ wget
https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-too
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic>// 如果没有更新策略, 则会得到
</span></span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic></span><span
style=color:#a40000>'</span><span
style=color:#204a87;font-weight:700>null</span> <span
style=color:#204a87;font-weight:700>null</span> <span
style=color:#a40000>c</span> <span style=color:#a40000>d'</span>
-</span></span></code></pre></div><blockquote><p><strong>注意</strong> :
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈,
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id:
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id [...]
+</span></span></code></pre></div><blockquote><p><strong>注意</strong> :
采用了批量更新的策略后, 磁盘读请求数会大幅上升, 导入速度相比纯写覆盖会慢数倍 (此时HDD磁盘<a
href=https://en.wikipedia.org/wiki/IOPS>IOPS</a>会成为瓶颈,
建议采用SSD以保证速度)</p></blockquote><p><strong>顶点映射的特有节点</strong></p><ul><li>id:
指定某一列作为顶点的 id 列,当顶点 id 策略为<code>CUSTOMIZE</code>时,必填;当 id
策略为<code>PRIMARY_KEY</code>时,必须为空;</li></ul><p><strong>边映射的特有节点</strong></p><ul><li>source:
选择输入源某几列作为<strong>源顶点</strong>的 id 列,当源顶点的 id 策略为
<code>CUSTOMIZE</code>时,必须指定某一列作为顶点的 id [...]
记录到进度文件中,进度文件位于 <code>${struct}</code> 目录下,文件名形如 <code>load-progress
${date}</code> ,${struct} 为映射文件的前缀,${date} 为导入开始
的时刻。比如:在 <code>2019-10-10 12:30:30</code> 开始的一次导入任务,使用的映射文件为
<code>struct-example.json</code>,则进度文件的路径为与 struct-example.json
同级的 <code>struct-example/load-progress 2019-10-10
12:30:30</code>。</p><blockquote><p>注意:进度文件的生成与 –incremental-mode
是否打开无关,每次导入结束都会生成一个进度文件。</p></blockquote><p>如果数据文件格式都是合法的,是用户自己停止(CTRL + C 或
kill,kill -9 不支持)的导入任务,也就是说没有错误记录的情况下,下一次导入只需要设置
@@ -509,7 +509,7 @@ HugeGraph Toolchain 版本:
toolchain-1.0.0</p></blockquote><p><code>spark-loade
</span></span></span><span style=display:flex><span><span
style=color:#4e9a06></span>--deploy-mode cluster --name spark-hugegraph-loader
--file ./hugegraph.json <span style=color:#4e9a06>\
</span></span></span><span style=display:flex><span><span
style=color:#4e9a06></span>--username admin --token admin --host xx.xx.xx.xx
--port <span style=color:#0000cf;font-weight:700>8093</span> <span
style=color:#4e9a06>\
</span></span></span><span style=display:flex><span><span
style=color:#4e9a06></span>--graph graph-test --num-executors <span
style=color:#0000cf;font-weight:700>6</span> --executor-cores <span
style=color:#0000cf;font-weight:700>16</span> --executor-memory 15g
-</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
[...]
+</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
[...]
<script
src=https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js
integrity="sha512-UR25UO94eTnCVwjbXozyeVd6ZqpaAE9naiEUBK/A+QDbfSTQFhPGj5lOR6d8tsgbBk84Ggb5A3EkjsOgPRPcKA=="
crossorigin=anonymous></script>
<script src=/js/tabpane-persist.js></script>
<script
src=/js/main.min.aa9f4c5dae6a98b2c46277f4c56f1673a2b000d1756ce4ffae93784cab25e6d5.js
integrity="sha256-qp9MXa5qmLLEYnf0xW8Wc6KwANF1bOT/rpN4TKsl5tU="
crossorigin=anonymous></script>
diff --git a/cn/docs/quickstart/index.xml b/cn/docs/quickstart/index.xml
index 80bca01c..6c78593b 100644
--- a/cn/docs/quickstart/index.xml
+++ b/cn/docs/quickstart/index.xml
@@ -1058,175 +1058,175 @@
</thead>
<tbody>
<tr>
-<td>-f 或 &ndash;file</td>
+<td><code>-f</code> 或 <code>--file</code></td>
<td></td>
<td>Y</td>
<td>配置脚本的路径</td>
</tr>
<tr>
-<td>-g 或 &ndash;graph</td>
+<td><code>-g</code> 或 <code>--graph</code></td>
<td></td>
<td>Y</td>
<td>图数据库空间</td>
</tr>
<tr>
-<td>-s 或 &ndash;schema</td>
+<td><code>-s</code> 或 <code>--schema</code></td>
<td></td>
<td>Y</td>
<td>schema文件路径</td>
</tr>
<tr>
-<td>-h 或 &ndash;host</td>
+<td><code>-h</code> 或 <code>--host</code></td>
<td>localhost</td>
<td></td>
<td>HugeGraphServer 的地址</td>
</tr>
<tr>
-<td>-p 或 &ndash;port</td>
+<td><code>-p</code> 或 <code>--port</code></td>
<td>8080</td>
<td></td>
<td>HugeGraphServer 的端口号</td>
</tr>
<tr>
-<td>&ndash;username</td>
+<td><code>--username</code></td>
<td>null</td>
<td></td>
<td>当 HugeGraphServer 开启了权限认证时,当前图的 username</td>
</tr>
<tr>
-<td>&ndash;token</td>
+<td><code>--token</code></td>
<td>null</td>
<td></td>
<td>当 HugeGraphServer 开启了权限认证时,当前图的 token</td>
</tr>
<tr>
-<td>&ndash;protocol</td>
+<td><code>--protocol</code></td>
<td>http</td>
<td></td>
<td>向服务端发请求的协议,可选 http 或 https</td>
</tr>
<tr>
-<td>&ndash;trust-store-file</td>
+<td><code>--trust-store-file</code></td>
<td></td>
<td></td>
<td>请求协议为 https 时,客户端的证书文件路径</td>
</tr>
<tr>
-<td>&ndash;trust-store-password</td>
+<td><code>--trust-store-password</code></td>
<td></td>
<td></td>
<td>请求协议为 https 时,客户端证书密码</td>
</tr>
<tr>
-<td>&ndash;clear-all-data</td>
+<td><code>--clear-all-data</code></td>
<td>false</td>
<td></td>
<td>导入数据前是否清除服务端的原有数据</td>
</tr>
<tr>
-<td>&ndash;clear-timeout</td>
+<td><code>--clear-timeout</code></td>
<td>240</td>
<td></td>
<td>导入数据前清除服务端的原有数据的超时时间</td>
</tr>
<tr>
-<td>&ndash;incremental-mode</td>
+<td><code>--incremental-mode</code></td>
<td>false</td>
<td></td>
<td>是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导</td>
</tr>
<tr>
-<td>&ndash;failure-mode</td>
+<td><code>--failure-mode</code></td>
<td>false</td>
<td></td>
<td>失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入</td>
</tr>
<tr>
-<td>&ndash;batch-insert-threads</td>
+<td><code>--batch-insert-threads</code></td>
<td>CPUs</td>
<td></td>
<td>批量插入线程池大小 (CPUs是当前OS可用<strong>逻辑核</strong>个数)</td>
</tr>
<tr>
-<td>&ndash;single-insert-threads</td>
+<td><code>--single-insert-threads</code></td>
<td>8</td>
<td></td>
<td>单条插入线程池的大小</td>
</tr>
<tr>
-<td>&ndash;max-conn</td>
+<td><code>--max-conn</code></td>
<td>4 * CPUs</td>
<td></td>
<td>HugeClient 与 HugeGraphServer 的最大 HTTP
连接数,<strong>调整线程</strong>的时候建议同时调整此项</td>
</tr>
<tr>
-<td>&ndash;max-conn-per-route</td>
+<td><code>--max-conn-per-route</code></td>
<td>2 * CPUs</td>
<td></td>
<td>HugeClient 与 HugeGraphServer 每个路由的最大 HTTP
连接数,<strong>调整线程</strong>的时候建议同时调整此项</td>
</tr>
<tr>
-<td>&ndash;batch-size</td>
+<td><code>--batch-size</code></td>
<td>500</td>
<td></td>
<td>导入数据时每个批次包含的数据条数</td>
</tr>
<tr>
-<td>&ndash;max-parse-errors</td>
+<td><code>--max-parse-errors</code></td>
<td>1</td>
<td></td>
<td>最多允许多少行数据解析错误,达到该值则程序退出</td>
</tr>
<tr>
-<td>&ndash;max-insert-errors</td>
+<td><code>--max-insert-errors</code></td>
<td>500</td>
<td></td>
<td>最多允许多少行数据插入错误,达到该值则程序退出</td>
</tr>
<tr>
-<td>&ndash;timeout</td>
+<td><code>--timeout</code></td>
<td>60</td>
<td></td>
<td>插入结果返回的超时时间(秒)</td>
</tr>
<tr>
-<td>&ndash;shutdown-timeout</td>
+<td><code>--shutdown-timeout</code></td>
<td>10</td>
<td></td>
<td>多线程停止的等待时间(秒)</td>
</tr>
<tr>
-<td>&ndash;retry-times</td>
+<td><code>--retry-times</code></td>
<td>0</td>
<td></td>
<td>发生特定异常时的重试次数</td>
</tr>
<tr>
-<td>&ndash;retry-interval</td>
+<td><code>--retry-interval</code></td>
<td>10</td>
<td></td>
<td>重试之前的间隔时间(秒)</td>
</tr>
<tr>
-<td>&ndash;check-vertex</td>
+<td><code>--check-vertex</code></td>
<td>false</td>
<td></td>
<td>插入边时是否检查边所连接的顶点是否存在</td>
</tr>
<tr>
-<td>&ndash;print-progress</td>
+<td><code>--print-progress</code></td>
<td>true</td>
<td></td>
<td>是否在控制台实时打印导入条数</td>
</tr>
<tr>
-<td>&ndash;dry-run</td>
+<td><code>--dry-run</code></td>
<td>false</td>
<td></td>
<td>打开该模式,只解析不导入,通常用于测试</td>
</tr>
<tr>
-<td>&ndash;help</td>
+<td><code>--help</code></td>
<td>false</td>
<td></td>
<td>打印帮助信息</td>
diff --git a/cn/sitemap.xml b/cn/sitemap.xml
index a121f093..b9d172c2 100644
--- a/cn/sitemap.xml
+++ b/cn/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/cn/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
rel="alternate" hreflang="en" href="/docs/guides/architectural/"/><xhtml:link
rel="alternate" hreflang="cn"
href="/cn/docs/guides/architectural/"/></url><url><loc>/cn/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00
[...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/cn/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
rel="alternate" hreflang="en" href="/docs/guides/architectural/"/><xhtml:link
rel="alternate" hreflang="cn"
href="/cn/docs/guides/architectural/"/></url><url><loc>/cn/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00
[...]
\ No newline at end of file
diff --git a/docs/_print/index.html b/docs/_print/index.html
index 892283da..052fcfd6 100644
--- a/docs/_print/index.html
+++ b/docs/_print/index.html
@@ -602,7 +602,7 @@ Visit the <a
href=https://www.oracle.com/database/technologies/appdev/jdbc-downl
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic>// If there is no update strategy, you
will get
</span></span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic></span><span
style=color:#4e9a06>'null null c d'</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After
adopting the batch update strategy, the number of disk read requests will
increase significantly, and the import speed will be several times slower than
that of pure write coverage (at this time HDD disk [IOPS](<a
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After
adopting the batch update strategy, the number of disk read requests will
increase significantly, and the import speed will be several times slower than
that of pure write coverage (at this time HDD disk [IOPS](<a
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
Recorded in the progress file, the progress file is located in the
<code>${struct}</code> directory, the file name is like <code>load-progress
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the
start of the import
moment. For example: for an import task started at <code>2019-10-10
12:30:30</code>, the mapping file used is <code>struct-example.json</code>,
then the path of the progress file is the same as struct-example.json
Sibling <code>struct-example/load-progress 2019-10-10
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is
independent of whether –incremental-mode is turned on or not, and a
progress file is generated at the end of each import.</p></blockquote><p>If the
data file formats are all legal and the import task is stopped by the user
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no
error record, the next import only needs to be set
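Per the passage above, a user-stopped load with no error records resumes with just --incremental-mode true; when failure files were produced and hand-corrected, the --failure-mode flag (documented in the option table later in this diff) re-imports them. A sketch under those assumptions, with illustrative names:

    # resume a cleanly interrupted import
    sh bin/hugegraph-loader.sh -g hugegraph -f struct-example.json -s schema.groovy --incremental-mode true
    # re-import only the hand-corrected failure records
    sh bin/hugegraph-loader.sh -g hugegraph -f struct-example.json -s schema.groovy --failure-mode true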
diff --git a/docs/index.xml b/docs/index.xml
index c874b26a..d57ec66a 100644
--- a/docs/index.xml
+++ b/docs/index.xml
@@ -5562,175 +5562,175 @@ Visit the <a
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
</thead>
<tbody>
<tr>
-<td>-f or &ndash;file</td>
+<td><code>-f</code> or <code>--file</code></td>
<td></td>
<td>Y</td>
<td>path to configure script</td>
</tr>
<tr>
-<td>-g or &ndash;graph</td>
+<td><code>-g</code> or <code>--graph</code></td>
<td></td>
<td>Y</td>
<td>graph space name</td>
</tr>
<tr>
-<td>-s or &ndash;schema</td>
+<td><code>-s</code> or <code>--schema</code></td>
<td></td>
<td>Y</td>
<td>schema file path</td>
</tr>
<tr>
-<td>-h or &ndash;host</td>
+<td><code>-h</code> or <code>--host</code></td>
<td>localhost</td>
<td></td>
<td>address of HugeGraphServer</td>
</tr>
<tr>
-<td>-p or &ndash;port</td>
+<td><code>-p</code> or <code>--port</code></td>
<td>8080</td>
<td></td>
<td>port number of HugeGraphServer</td>
</tr>
<tr>
-<td>&ndash;username</td>
+<td><code>--username</code></td>
<td>null</td>
<td></td>
<td>When HugeGraphServer enables permission authentication, the username of
the current graph</td>
</tr>
<tr>
-<td>&ndash;token</td>
+<td><code>--token</code></td>
<td>null</td>
<td></td>
<td>When HugeGraphServer has enabled authorization authentication, the
token of the current graph</td>
</tr>
<tr>
-<td>&ndash;protocol</td>
+<td><code>--protocol</code></td>
<td>http</td>
<td></td>
<td>Protocol for sending requests to the server, optional http or
https</td>
</tr>
<tr>
-<td>&ndash;trust-store-file</td>
+<td><code>--trust-store-file</code></td>
<td></td>
<td></td>
<td>When the request protocol is https, the client&rsquo;s certificate
file path</td>
</tr>
<tr>
-<td>&ndash;trust-store-password</td>
+<td><code>--trust-store-password</code></td>
<td></td>
<td></td>
<td>When the request protocol is https, the client certificate
password</td>
</tr>
<tr>
-<td>&ndash;clear-all-data</td>
+<td><code>--clear-all-data</code></td>
<td>false</td>
<td></td>
<td>Whether to clear the original data on the server before importing
data</td>
</tr>
<tr>
-<td>&ndash;clear-timeout</td>
+<td><code>--clear-timeout</code></td>
<td>240</td>
<td></td>
<td>Timeout for clearing the original data on the server before importing
data</td>
</tr>
<tr>
-<td>&ndash;incremental-mode</td>
+<td><code>--incremental-mode</code></td>
<td>false</td>
<td></td>
<td>Whether to use the breakpoint resume mode, only the input source is
FILE and HDFS support this mode, enabling this mode can start the import from
the place where the last import stopped</td>
</tr>
<tr>
-<td>&ndash;failure-mode</td>
+<td><code>--failure-mode</code></td>
<td>false</td>
<td></td>
<td>When the failure mode is true, the data that failed before will be
imported. Generally speaking, the failed data file needs to be manually
corrected and edited, and then imported again</td>
</tr>
<tr>
-<td>&ndash;batch-insert-threads</td>
+<td><code>--batch-insert-threads</code></td>
<td>CPUs</td>
<td></td>
<td>Batch insert thread pool size (CPUs is the number of <strong>logical
cores</strong> available to the current OS)</td>
</tr>
<tr>
-<td>&ndash;single-insert-threads</td>
+<td><code>--single-insert-threads</code></td>
<td>8</td>
<td></td>
<td>Size of single insert thread pool</td>
</tr>
<tr>
-<td>&ndash;max-conn</td>
+<td><code>--max-conn</code></td>
<td>4 * CPUs</td>
<td></td>
<td>The maximum number of HTTP connections between HugeClient and
HugeGraphServer, it is recommended to adjust this when <strong>adjusting
threads</strong></td>
</tr>
<tr>
-<td>&ndash;max-conn-per-route</td>
+<td><code>--max-conn-per-route</code></td>
<td>2 * CPUs</td>
<td></td>
<td>The maximum number of HTTP connections for each route between
HugeClient and HugeGraphServer, it is recommended to adjust this item at the
same time when <strong>adjusting the thread</strong></td>
</tr>
<tr>
-<td>&ndash;batch-size</td>
+<td><code>--batch-size</code></td>
<td>500</td>
<td></td>
<td>The number of data items in each batch when importing data</td>
</tr>
<tr>
-<td>&ndash;max-parse-errors</td>
+<td><code>--max-parse-errors</code></td>
<td>1</td>
<td></td>
<td>The maximum number of lines of data parsing errors allowed, and the
program exits when this value is reached</td>
</tr>
<tr>
-<td>&ndash;max-insert-errors</td>
+<td><code>--max-insert-errors</code></td>
<td>500</td>
<td></td>
<td>The maximum number of rows of data insertion errors allowed, and the
program exits when this value is reached</td>
</tr>
<tr>
-<td>&ndash;timeout</td>
+<td><code>--timeout</code></td>
<td>60</td>
<td></td>
<td>Timeout (seconds) for inserting results to return</td>
</tr>
<tr>
-<td>&ndash;shutdown-timeout</td>
+<td><code>--shutdown-timeout</code></td>
<td>10</td>
<td></td>
<td>Waiting time for multithreading to stop (seconds)</td>
</tr>
<tr>
-<td>&ndash;retry-times</td>
+<td><code>--retry-times</code></td>
<td>0</td>
<td></td>
<td>Number of retries when a specific exception occurs</td>
</tr>
<tr>
-<td>&ndash;retry-interval</td>
+<td><code>--retry-interval</code></td>
<td>10</td>
<td></td>
<td>interval before retry (seconds)</td>
</tr>
<tr>
-<td>&ndash;check-vertex</td>
+<td><code>--check-vertex</code></td>
<td>false</td>
<td></td>
<td>Whether to check whether the vertex connected by the edge exists when
inserting the edge</td>
</tr>
<tr>
-<td>&ndash;print-progress</td>
+<td><code>--print-progress</code></td>
<td>true</td>
<td></td>
<td>Whether to print the number of imported items in the console in real
time</td>
</tr>
<tr>
-<td>&ndash;dry-run</td>
+<td><code>--dry-run</code></td>
<td>false</td>
<td></td>
<td>Turn on this mode, only parsing but not importing, usually used for
testing</td>
</tr>
<tr>
-<td>&ndash;help</td>
+<td><code>--help</code></td>
<td>false</td>
<td></td>
<td>print help information</td>
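To show how the authentication and TLS rows above fit together, a hedged sketch of a load against a permission-enabled HTTPS server; the host, credentials and trust-store path are placeholders, not values from this commit:

    sh bin/hugegraph-loader.sh -g hugegraph -f struct.json -s schema.groovy \
      -h graph.example.com -p 8443 --protocol https \
      --trust-store-file conf/client.truststore --trust-store-password changeit \
      --username admin --token <token>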
diff --git a/docs/quickstart/_print/index.html
b/docs/quickstart/_print/index.html
index ef597ecb..3599790c 100644
--- a/docs/quickstart/_print/index.html
+++ b/docs/quickstart/_print/index.html
@@ -597,7 +597,7 @@ Visit the <a
href=https://www.oracle.com/database/technologies/appdev/jdbc-downl
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic>// If there is no update strategy, you
will get
</span></span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic></span><span
style=color:#4e9a06>'null null c d'</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After
adopting the batch update strategy, the number of disk read requests will
increase significantly, and the import speed will be several times slower than
that of pure write coverage (at this time HDD disk [IOPS](<a
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After
adopting the batch update strategy, the number of disk read requests will
increase significantly, and the import speed will be several times slower than
that of pure write coverage (at this time HDD disk [IOPS](<a
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
Recorded in the progress file, the progress file is located in the
<code>${struct}</code> directory, the file name is like <code>load-progress
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the
start of the import
moment. For example: for an import task started at <code>2019-10-10
12:30:30</code>, the mapping file used is <code>struct-example.json</code>,
then the path of the progress file is the same as struct-example.json
Sibling <code>struct-example/load-progress 2019-10-10
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is
independent of whether –incremental-mode is turned on or not, and a
progress file is generated at the end of each import.</p></blockquote><p>If the
data file formats are all legal and the import task is stopped by the user
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no
error record, the next import only needs to be set
diff --git a/docs/quickstart/hugegraph-loader/index.html
b/docs/quickstart/hugegraph-loader/index.html
index 007968fd..3348e110 100644
--- a/docs/quickstart/hugegraph-loader/index.html
+++ b/docs/quickstart/hugegraph-loader/index.html
@@ -1,9 +1,9 @@
<!doctype html><html lang=en class=no-js><head><meta charset=utf-8><meta
name=viewport
content="width=device-width,initial-scale=1,shrink-to-fit=no"><meta
name=generator content="Hugo 0.102.3"><meta name=robots content="index,
follow"><link rel="shortcut icon" href=/favicons/favicon.ico><link
rel=apple-touch-icon href=/favicons/apple-touch-icon-180x180.png
sizes=180x180><link rel=icon type=image/png href=/favicons/favicon-16x16.png
sizes=16x16><link rel=icon type=image/png href=/favicons [...]
HugeGraph-Loader is the data import component of HugeGraph, which can convert
data from various data sources into graph …"><meta property="og:title"
content="HugeGraph-Loader Quick Start"><meta property="og:description"
content="1 HugeGraph-Loader Overview HugeGraph-Loader is the data import
component of HugeGraph, which can convert data from various data sources into
graph vertices and edges and import them into the graph database in batches.
Currently supported data sources include:
-Local disk file or directory, supports TEXT, CSV and JSON format files,
supports compressed files HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads."><meta
property="og:type" content="article"><meta property="og:url"
content="/docs/quickstart/hugegraph-loader/"><meta property="article:section"
content="docs"><meta property="article:modified_time" conten [...]
+Local disk file or directory, supports TEXT, CSV and JSON format files,
supports compressed files HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads."><meta
property="og:type" content="article"><meta property="og:url"
content="/docs/quickstart/hugegraph-loader/"><meta property="article:section"
content="docs"><meta property="article:modified_time" conten [...]
Currently supported data sources include:
-Local disk file or directory, supports TEXT, CSV and JSON format files,
supports compressed files HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads."><meta
itemprop=dateModified content="2023-05-17T23:12:35+08:00"><meta
itemprop=wordCount content="5299"><meta itemprop=keywords content><meta
name=twitter:card content="summary"><meta name=twitter:title co [...]
+Local disk file or directory, supports TEXT, CSV and JSON format files,
supports compressed files HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads."><meta
itemprop=dateModified content="2023-09-22T10:06:32+08:00"><meta
itemprop=wordCount content="5299"><meta itemprop=keywords content><meta
name=twitter:card content="summary"><meta name=twitter:title co [...]
Currently supported data sources include:
Local disk file or directory, supports TEXT, CSV and JSON format files,
supports compressed files HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads."><link rel=preload
href=/scss/main.min.1764bdd1b00b15c82ea08e6a847f47114a8787b9770c047a8c6082457466ce2b.css
as=style><link
href=/scss/main.min.1764bdd1b00b15c82ea08e6a847f47114a8787b9770c047a8c6082457466ce2
[...]
<link rel=stylesheet href=/css/prism.css><script
type=application/javascript>var
doNotTrack=!1;doNotTrack||(window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)},ga.l=+new
Date,ga("create","UA-00000000-0","auto"),ga("send","pageview"))</script><script
async src=https://www.google-analytics.com/analytics.js></script></head><body
class=td-page><header><nav class="js-navbar-scroll navbar navbar-expand
navbar-dark flex-column flex-md-row td-navbar"><a class=navbar-brand href=/><sp
[...]
@@ -361,7 +361,7 @@ Visit the <a
href=https://www.oracle.com/database/technologies/appdev/jdbc-downl
</span></span><span style=display:flex><span>
</span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic>// If there is no update strategy, you
will get
</span></span></span><span style=display:flex><span><span
style=color:#8f5902;font-style:italic></span><span
style=color:#4e9a06>'null null c d'</span>
-</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After
adopting the batch update strategy, the number of disk read requests will
increase significantly, and the import speed will be several times slower than
that of pure write coverage (at this time HDD disk [IOPS](<a
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
+</span></span></code></pre></div><blockquote><p><strong>Note</strong> : After
adopting the batch update strategy, the number of disk read requests will
increase significantly, and the import speed will be several times slower than
that of pure write coverage (at this time HDD disk [IOPS](<a
href=https://en.wikipedia>https://en.wikipedia</a> .org/wiki/IOPS) will be the
bottleneck, SSD is recommended for speed)</p></blockquote><p><strong>Unique
Nodes for Vertex Maps</strong></p><ul><li>id: [...]
Recorded in the progress file, the progress file is located in the
<code>${struct}</code> directory, the file name is like <code>load-progress
${date}</code>, ${struct} is the prefix of the mapping file, and ${date} is the
start of the import
moment. For example: for an import task started at <code>2019-10-10
12:30:30</code>, the mapping file used is <code>struct-example.json</code>,
then the path of the progress file is the same as struct-example.json
Sibling <code>struct-example/load-progress 2019-10-10
12:30:30</code>.</p><blockquote><p>Note: The generation of progress files is
independent of whether –incremental-mode is turned on or not, and a
progress file is generated at the end of each import.</p></blockquote><p>If the
data file formats are all legal and the import task is stopped by the user
(CTRL + C or kill, kill -9 is not supported), that is to say, if there is no
error record, the next import only needs to be set
@@ -487,7 +487,7 @@ And there is no need to guarantee the order between the two
parameters.</p><ul><
</span></span></span><span style=display:flex><span><span
style=color:#4e9a06></span>--deploy-mode cluster --name spark-hugegraph-loader
--file ./hugegraph.json <span style=color:#4e9a06>\
</span></span></span><span style=display:flex><span><span
style=color:#4e9a06></span>--username admin --token admin --host xx.xx.xx.xx
--port <span style=color:#0000cf;font-weight:700>8093</span> <span
style=color:#4e9a06>\
</span></span></span><span style=display:flex><span><span
style=color:#4e9a06></span>--graph graph-test --num-executors <span
style=color:#0000cf;font-weight:700>6</span> --executor-cores <span
style=color:#0000cf;font-weight:700>16</span> --executor-memory 15g
-</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
[...]
+</span></span></code></pre></div><style>.feedback--answer{display:inline-block}.feedback--answer-no{margin-left:1em}.feedback--response{display:none;margin-top:1em}.feedback--response__visible{display:block}</style><script>const
yesButton=document.querySelector(".feedback--answer-yes"),noButton=document.querySelector(".feedback--answer-no"),yesResponse=document.querySelector(".feedback--response-yes"),noResponse=document.querySelector(".feedback--response-no"),disableButtons=()=>{yesButt
[...]
<script
src=https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js
integrity="sha512-UR25UO94eTnCVwjbXozyeVd6ZqpaAE9naiEUBK/A+QDbfSTQFhPGj5lOR6d8tsgbBk84Ggb5A3EkjsOgPRPcKA=="
crossorigin=anonymous></script>
<script src=/js/tabpane-persist.js></script>
<script
src=/js/main.min.aa9f4c5dae6a98b2c46277f4c56f1673a2b000d1756ce4ffae93784cab25e6d5.js
integrity="sha256-qp9MXa5qmLLEYnf0xW8Wc6KwANF1bOT/rpN4TKsl5tU="
crossorigin=anonymous></script>
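Reassembled from the highlighted spark-loader snippet in the hunk above, the full command appears to be roughly the following; the launcher script name and any leading flags elided by the diff are assumptions, not confirmed by this commit:

    sh bin/hugegraph-spark-loader.sh \
      --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
      --username admin --token admin --host xx.xx.xx.xx --port 8093 \
      --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g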
diff --git a/docs/quickstart/index.xml b/docs/quickstart/index.xml
index 24e4c16c..69907d42 100644
--- a/docs/quickstart/index.xml
+++ b/docs/quickstart/index.xml
@@ -1073,175 +1073,175 @@ Visit the <a
href="https://www.oracle.com/database/technologies/appdev/jdbc-d
</thead>
<tbody>
<tr>
-<td>-f or &ndash;file</td>
+<td><code>-f</code> or <code>--file</code></td>
<td></td>
<td>Y</td>
<td>path to configure script</td>
</tr>
<tr>
-<td>-g or &ndash;graph</td>
+<td><code>-g</code> or <code>--graph</code></td>
<td></td>
<td>Y</td>
<td>graph space name</td>
</tr>
<tr>
-<td>-s or &ndash;schema</td>
+<td><code>-s</code> or <code>--schema</code></td>
<td></td>
<td>Y</td>
<td>schema file path</td>
</tr>
<tr>
-<td>-h or &ndash;host</td>
+<td><code>-h</code> or <code>--host</code></td>
<td>localhost</td>
<td></td>
<td>address of HugeGraphServer</td>
</tr>
<tr>
-<td>-p or &ndash;port</td>
+<td><code>-p</code> or <code>--port</code></td>
<td>8080</td>
<td></td>
<td>port number of HugeGraphServer</td>
</tr>
<tr>
-<td>&ndash;username</td>
+<td><code>--username</code></td>
<td>null</td>
<td></td>
<td>When HugeGraphServer enables permission authentication, the username of
the current graph</td>
</tr>
<tr>
-<td>&ndash;token</td>
+<td><code>--token</code></td>
<td>null</td>
<td></td>
<td>When HugeGraphServer has enabled authorization authentication, the
token of the current graph</td>
</tr>
<tr>
-<td>&ndash;protocol</td>
+<td><code>--protocol</code></td>
<td>http</td>
<td></td>
<td>Protocol for sending requests to the server, optional http or
https</td>
</tr>
<tr>
-<td>&ndash;trust-store-file</td>
+<td><code>--trust-store-file</code></td>
<td></td>
<td></td>
<td>When the request protocol is https, the client&rsquo;s certificate
file path</td>
</tr>
<tr>
-<td>&ndash;trust-store-password</td>
+<td><code>--trust-store-password</code></td>
<td></td>
<td></td>
<td>When the request protocol is https, the client certificate
password</td>
</tr>
<tr>
-<td>&ndash;clear-all-data</td>
+<td><code>--clear-all-data</code></td>
<td>false</td>
<td></td>
<td>Whether to clear the original data on the server before importing
data</td>
</tr>
<tr>
-<td>&ndash;clear-timeout</td>
+<td><code>--clear-timeout</code></td>
<td>240</td>
<td></td>
<td>Timeout for clearing the original data on the server before importing
data</td>
</tr>
<tr>
-<td>&ndash;incremental-mode</td>
+<td><code>--incremental-mode</code></td>
<td>false</td>
<td></td>
<td>Whether to use the breakpoint resume mode, only the input source is
FILE and HDFS support this mode, enabling this mode can start the import from
the place where the last import stopped</td>
</tr>
<tr>
-<td>&ndash;failure-mode</td>
+<td><code>--failure-mode</code></td>
<td>false</td>
<td></td>
<td>When the failure mode is true, the data that failed before will be
imported. Generally speaking, the failed data file needs to be manually
corrected and edited, and then imported again</td>
</tr>
<tr>
-<td>&ndash;batch-insert-threads</td>
+<td><code>--batch-insert-threads</code></td>
<td>CPUs</td>
<td></td>
<td>Batch insert thread pool size (CPUs is the number of <strong>logical
cores</strong> available to the current OS)</td>
</tr>
<tr>
-<td>&ndash;single-insert-threads</td>
+<td><code>--single-insert-threads</code></td>
<td>8</td>
<td></td>
<td>Size of single insert thread pool</td>
</tr>
<tr>
-<td>&ndash;max-conn</td>
+<td><code>--max-conn</code></td>
<td>4 * CPUs</td>
<td></td>
<td>The maximum number of HTTP connections between HugeClient and
HugeGraphServer, it is recommended to adjust this when <strong>adjusting
threads</strong></td>
</tr>
<tr>
-<td>&ndash;max-conn-per-route</td>
+<td><code>--max-conn-per-route</code></td>
<td>2 * CPUs</td>
<td></td>
<td>The maximum number of HTTP connections for each route between
HugeClient and HugeGraphServer, it is recommended to adjust this item at the
same time when <strong>adjusting the thread</strong></td>
</tr>
<tr>
-<td>&ndash;batch-size</td>
+<td><code>--batch-size</code></td>
<td>500</td>
<td></td>
<td>The number of data items in each batch when importing data</td>
</tr>
<tr>
-<td>&ndash;max-parse-errors</td>
+<td><code>--max-parse-errors</code></td>
<td>1</td>
<td></td>
<td>The maximum number of lines of data parsing errors allowed, and the
program exits when this value is reached</td>
</tr>
<tr>
-<td>&ndash;max-insert-errors</td>
+<td><code>--max-insert-errors</code></td>
<td>500</td>
<td></td>
<td>The maximum number of rows of data insertion errors allowed, and the
program exits when this value is reached</td>
</tr>
<tr>
-<td>&ndash;timeout</td>
+<td><code>--timeout</code></td>
<td>60</td>
<td></td>
<td>Timeout (seconds) for inserting results to return</td>
</tr>
<tr>
-<td>&ndash;shutdown-timeout</td>
+<td><code>--shutdown-timeout</code></td>
<td>10</td>
<td></td>
<td>Waiting time for multithreading to stop (seconds)</td>
</tr>
<tr>
-<td>&ndash;retry-times</td>
+<td><code>--retry-times</code></td>
<td>0</td>
<td></td>
<td>Number of retries when a specific exception occurs</td>
</tr>
<tr>
-<td>&ndash;retry-interval</td>
+<td><code>--retry-interval</code></td>
<td>10</td>
<td></td>
<td>interval before retry (seconds)</td>
</tr>
<tr>
-<td>&ndash;check-vertex</td>
+<td><code>--check-vertex</code></td>
<td>false</td>
<td></td>
<td>Whether to check whether the vertex connected by the edge exists when
inserting the edge</td>
</tr>
<tr>
-<td>&ndash;print-progress</td>
+<td><code>--print-progress</code></td>
<td>true</td>
<td></td>
<td>Whether to print the number of imported items in the console in real
time</td>
</tr>
<tr>
-<td>&ndash;dry-run</td>
+<td><code>--dry-run</code></td>
<td>false</td>
<td></td>
<td>Turn on this mode, only parsing but not importing, usually used for
testing</td>
</tr>
<tr>
-<td>&ndash;help</td>
+<td><code>--help</code></td>
<td>false</td>
<td></td>
<td>print help information</td>
diff --git a/en/sitemap.xml b/en/sitemap.xml
index f27d1d86..a7a97743 100644
--- a/en/sitemap.xml
+++ b/en/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
rel="alternate" hreflang="cn"
href="/cn/docs/guides/architectural/"/><xhtml:link rel="alternate"
hreflang="en"
href="/docs/guides/architectural/"/></url><url><loc>/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00</last
[...]
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><urlset
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml"><url><loc>/docs/guides/architectural/</loc><lastmod>2023-06-25T21:06:07+08:00</lastmod><xhtml:link
rel="alternate" hreflang="cn"
href="/cn/docs/guides/architectural/"/><xhtml:link rel="alternate"
hreflang="en"
href="/docs/guides/architectural/"/></url><url><loc>/docs/config/config-guide/</loc><lastmod>2023-09-19T14:14:13+08:00</last
[...]
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 1b9ac1cb..55f07d2d 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-<?xml version="1.0" encoding="utf-8" standalone="yes"?><sitemapindex
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"><sitemap><loc>/en/sitemap.xml</loc><lastmod>2023-09-19T14:14:13+08:00</lastmod></sitemap><sitemap><loc>/cn/sitemap.xml</loc><lastmod>2023-09-19T14:14:13+08:00</lastmod></sitemap></sitemapindex>
\ No newline at end of file
+<?xml version="1.0" encoding="utf-8" standalone="yes"?><sitemapindex
xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"><sitemap><loc>/en/sitemap.xml</loc><lastmod>2023-09-22T10:06:32+08:00</lastmod></sitemap><sitemap><loc>/cn/sitemap.xml</loc><lastmod>2023-09-22T10:06:32+08:00</lastmod></sitemap></sitemapindex>
\ No newline at end of file