This is an automated email from the ASF dual-hosted git repository.
liaoxin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 8e53c40e85 [doc](typo) fix some typo in `broker-load` section (#1307)
8e53c40e85 is described below
commit 8e53c40e85e64c29e4f51cc314ee68f94a7a49b5
Author: yagagagaga <[email protected]>
AuthorDate: Thu Nov 7 21:10:24 2024 +0800
[doc](typo) fix some typo in `broker-load` section (#1307)
---
.../import/import-way/broker-load-manual.md | 43 ++++++++++++---------
.../import/import-way/broker-load-manual.md | 45 ++++++++++++----------
.../data-operate/import/broker-load-manual.md | 45 ++++++++++++----------
.../import/import-way/broker-load-manual.md | 45 ++++++++++++----------
.../import/import-way/broker-load-manual.md | 45 ++++++++++++----------
.../data-operate/import/broker-load-manual.md | 43 ++++++++++++---------
.../import/import-way/broker-load-manual.md | 43 ++++++++++++---------
.../import/import-way/broker-load-manual.md | 43 ++++++++++++---------
8 files changed, 196 insertions(+), 156 deletions(-)
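Beyond retagging the code fences, the substantive fix repeated across these files is quoting the entries in `jsonpaths`: the value must parse as a JSON array of JSONPath strings, and inside a SQL string literal the inner double quotes are backslash-escaped. A minimal Python sketch (illustrative only, not Doris code) of why the unquoted form was wrong:

```python
import json

# Old form from the docs: unquoted JSONPath entries are not valid JSON.
old = '[$.id, $.city, $.code]'
try:
    json.loads(old)
    parsed_old = True
except json.JSONDecodeError:
    parsed_old = False  # rejected: $.id is not a valid JSON token

# Corrected form: a JSON array of strings. In the SQL statement the inner
# double quotes appear backslash-escaped, which is exactly what the diff adds.
new = '["$.id", "$.city", "$.code"]'
paths = json.loads(new)

print(parsed_old)  # False
print(paths)       # ['$.id', '$.city', '$.code']
```

The same reasoning explains the second hunk in each file, which also reorders the entries so both examples list the paths consistently.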
diff --git a/docs/data-operate/import/import-way/broker-load-manual.md b/docs/data-operate/import/import-way/broker-load-manual.md
index 715dc1847d..702357fc0b 100644
--- a/docs/data-operate/import/import-way/broker-load-manual.md
+++ b/docs/data-operate/import/import-way/broker-load-manual.md
@@ -64,7 +64,7 @@ For the specific syntax for usage, please refer to [BROKER LOAD](../../../sql-ma
Broker Load is an asynchronous import method, and the specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-statements/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ This configuration is used to access HDFS clusters deployed in HA (High Availabi
An example configuration is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ An example configuration is as follows:
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ The default method is to determine by file extension.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ There are the following files under the path:
The table structure is as follows:
-```Plain
-data_time DATETIME,
-k2 INT,
-k3 INT
+```sql
+CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+PROPERTIES (
+ "replication_num" = "3"
+);
```
- Use Merge mode for import
@@ -448,7 +453,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
- Import the specified file format as `json`, and specify the `json_root` and jsonpaths accordingly.
- ```SQL
+ ```sql
LOAD LABEL example_db.label10
(
DATA INFILE("hdfs://host:port/input/file.json")
@@ -456,7 +461,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -478,7 +483,7 @@ The `jsonpaths` can also be used in conjunction with the column list and `SET (c
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -524,7 +529,7 @@ Doris supports importing data directly from object storage systems that support
- The S3 SDK defaults to using the virtual-hosted style method for accessing objects. However, some object storage systems may not have enabled or supported the virtual-hosted style access. In such cases, we can add the `use_path_style` parameter to force the use of the path style method:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -537,7 +542,7 @@ Doris supports importing data directly from object storage systems that support
- Support for accessing all object storage systems that support the S3 protocol using temporary credentials (TOKEN) is available. The usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -576,7 +581,7 @@ This section primarily focuses on the parameters required by the Broker when acc
The information of the Broker consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -601,7 +606,7 @@ Different Broker types and access methods require different authentication infor
- Alibaba Cloud OSS
- ```Plain
+ ```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -611,7 +616,7 @@ Different Broker types and access methods require different authentication infor
- JuiceFS
- ```Plain
+ ```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -625,7 +630,7 @@ Different Broker types and access methods require different authentication infor
When using a Broker to access GCS, the Project ID is required, while other parameters are optional. Please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md) for all parameter configurations.
- ```Plain
+ ```sql
(
"fs.gs.project.id" = "Your Project ID",
"fs.AbstractFileSystem.gs.impl" =
"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -678,7 +683,7 @@ Appropriately adjust the `query_timeout` and
`streaming_load_rpc_max_alive_time_
For PARQUET or ORC format data, the column names in the file header must match the column names in the Doris table. For example:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
index 1989caf79c..276360fcf1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/broker-load-manual.md
@@ -63,7 +63,7 @@ WITH [HDFS|S3|BROKER broker_name]
Broker Load is an asynchronous import method; the specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-statements/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ Set username to the user to be accessed; the password can simply be left empty.
An example is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ Set username to the user to be accessed; the password can simply be left empty.
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ HA mode can be combined with the previous two authentication methods for cluster access.
The table structure is as follows:
- ```Plain
- data_time DATETIME,
- k2 INT,
- k3 INT
+ ```sql
+ CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+ ) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+ PROPERTIES (
+ "replication_num" = "3"
+ );
```
- Use Merge mode for import
@@ -457,7 +462,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -479,7 +484,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -525,7 +530,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- The S3 SDK uses the virtual-hosted style by default. However, some object storage systems may not have enabled or may not support virtual-hosted style access; in that case, we can add the `use_path_style` parameter to force the use of the path style:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -538,7 +543,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- Accessing all object storage systems that support the S3 protocol with temporary credentials (TOKEN) is supported; the usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -577,7 +582,7 @@ The Broker serves only as a data channel and does not participate in any computation.
The Broker information consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -603,7 +608,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Alibaba Cloud OSS
-```Plain
+```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -615,7 +620,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
Currently, using BOS requires downloading the corresponding SDK package; for specific configuration and usage, refer to the [BOS HDFS official documentation](https://cloud.baidu.com/doc/BOS/s/fk53rav99). After downloading and extracting it, place the jar package in the broker's lib directory.
-```Plain
+```sql
(
"fs.bos.access.key" = "xx",
"fs.bos.secret.access.key" = "xx",
@@ -625,7 +630,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Huawei Cloud OBS
-```Plain
+```sql
(
"fs.obs.access.key" = "xx",
"fs.obs.secret.key" = "xx",
@@ -635,7 +640,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- JuiceFS
-```Plain
+```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -649,7 +654,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
When using a Broker to access GCS, the Project ID is required, while the other parameters are optional; for all parameter configurations, please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md)
-```Plain
+```sql
(
"fs.gs.project.id" = "你的 Project ID",
"fs.AbstractFileSystem.gs.impl" =
"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -702,7 +707,7 @@ Broker Name 只是一个用户自定义名称,不代表 Broker 的类型。
如果是 PARQUET 或者 ORC 格式的数据,则文件头的列名需要与 doris 表中的列名保持一致,如:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
index d974c4c0c6..95037dfb71 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/data-operate/import/broker-load-manual.md
@@ -63,7 +63,7 @@ WITH [HDFS|S3|BROKER broker_name]
Broker Load is an asynchronous import method; the specific import results can be viewed through the [SHOW LOAD](../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ Set username to the user to be accessed; the password can simply be left empty.
An example is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ Set username to the user to be accessed; the password can simply be left empty.
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ HA mode can be combined with the previous two authentication methods for cluster access.
The table structure is as follows:
- ```Plain
- data_time DATETIME,
- k2 INT,
- k3 INT
+ ```sql
+ CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+ ) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+ PROPERTIES (
+ "replication_num" = "3"
+ );
```
- Use Merge mode for import
@@ -457,7 +462,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -479,7 +484,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -525,7 +530,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- The S3 SDK uses the virtual-hosted style by default. However, some object storage systems may not have enabled or may not support virtual-hosted style access; in that case, we can add the `use_path_style` parameter to force the use of the path style:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -538,7 +543,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- Accessing all object storage systems that support the S3 protocol with temporary credentials (TOKEN) is supported; the usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -577,7 +582,7 @@ The Broker serves only as a data channel and does not participate in any computation.
The Broker information consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -603,7 +608,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Alibaba Cloud OSS
-```Plain
+```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -615,7 +620,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
Currently, using BOS requires downloading the corresponding SDK package; for specific configuration and usage, refer to the [BOS HDFS official documentation](https://cloud.baidu.com/doc/BOS/s/fk53rav99). After downloading and extracting it, place the jar package in the broker's lib directory.
-```Plain
+```sql
(
"fs.bos.access.key" = "xx",
"fs.bos.secret.access.key" = "xx",
@@ -625,7 +630,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Huawei Cloud OBS
-```Plain
+```sql
(
"fs.obs.access.key" = "xx",
"fs.obs.secret.key" = "xx",
@@ -635,7 +640,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- JuiceFS
-```Plain
+```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -649,7 +654,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
When using a Broker to access GCS, the Project ID is required, while the other parameters are optional; for all parameter configurations, please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md)
-```Plain
+```sql
(
"fs.gs.project.id" = "你的 Project ID",
"fs.AbstractFileSystem.gs.impl" =
"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -702,7 +707,7 @@ Broker Name 只是一个用户自定义名称,不代表 Broker 的类型。
如果是 PARQUET 或者 ORC 格式的数据,则文件头的列名需要与 doris 表中的列名保持一致,如:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
index 1989caf79c..276360fcf1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
@@ -63,7 +63,7 @@ WITH [HDFS|S3|BROKER broker_name]
Broker Load is an asynchronous import method; the specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-statements/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ Set username to the user to be accessed; the password can simply be left empty.
An example is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ Set username to the user to be accessed; the password can simply be left empty.
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ HA mode can be combined with the previous two authentication methods for cluster access.
The table structure is as follows:
- ```Plain
- data_time DATETIME,
- k2 INT,
- k3 INT
+ ```sql
+ CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+ ) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+ PROPERTIES (
+ "replication_num" = "3"
+ );
```
- Use Merge mode for import
@@ -457,7 +462,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -479,7 +484,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -525,7 +530,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- The S3 SDK uses the virtual-hosted style by default. However, some object storage systems may not have enabled or may not support virtual-hosted style access; in that case, we can add the `use_path_style` parameter to force the use of the path style:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -538,7 +543,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- Accessing all object storage systems that support the S3 protocol with temporary credentials (TOKEN) is supported; the usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -577,7 +582,7 @@ The Broker serves only as a data channel and does not participate in any computation.
The Broker information consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -603,7 +608,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Alibaba Cloud OSS
-```Plain
+```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -615,7 +620,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
Currently, using BOS requires downloading the corresponding SDK package; for specific configuration and usage, refer to the [BOS HDFS official documentation](https://cloud.baidu.com/doc/BOS/s/fk53rav99). After downloading and extracting it, place the jar package in the broker's lib directory.
-```Plain
+```sql
(
"fs.bos.access.key" = "xx",
"fs.bos.secret.access.key" = "xx",
@@ -625,7 +630,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Huawei Cloud OBS
-```Plain
+```sql
(
"fs.obs.access.key" = "xx",
"fs.obs.secret.key" = "xx",
@@ -635,7 +640,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- JuiceFS
-```Plain
+```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -649,7 +654,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
When using a Broker to access GCS, the Project ID is required, while the other parameters are optional; for all parameter configurations, please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md)
-```Plain
+```sql
(
"fs.gs.project.id" = "你的 Project ID",
"fs.AbstractFileSystem.gs.impl" =
"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -702,7 +707,7 @@ Broker Name 只是一个用户自定义名称,不代表 Broker 的类型。
如果是 PARQUET 或者 ORC 格式的数据,则文件头的列名需要与 doris 表中的列名保持一致,如:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
index 1989caf79c..276360fcf1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
@@ -63,7 +63,7 @@ WITH [HDFS|S3|BROKER broker_name]
Broker Load is an asynchronous import method; the specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-statements/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ Set username to the user to be accessed; the password can simply be left empty.
An example is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ Set username to the user to be accessed; the password can simply be left empty.
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ HA mode can be combined with the previous two authentication methods for cluster access.
The table structure is as follows:
- ```Plain
- data_time DATETIME,
- k2 INT,
- k3 INT
+ ```sql
+ CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+ ) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+ PROPERTIES (
+ "replication_num" = "3"
+ );
```
- Use Merge mode for import
@@ -457,7 +462,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -479,7 +484,7 @@ HA mode can be combined with the previous two authentication methods for cluster access.
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -525,7 +530,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- The S3 SDK uses the virtual-hosted style by default. However, some object storage systems may not have enabled or may not support virtual-hosted style access; in that case, we can add the `use_path_style` parameter to force the use of the path style:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -538,7 +543,7 @@ Doris supports importing data directly from object storage systems that support the S3 protocol.
- Accessing all object storage systems that support the S3 protocol with temporary credentials (TOKEN) is supported; the usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -577,7 +582,7 @@ The Broker serves only as a data channel and does not participate in any computation.
The Broker information consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -603,7 +608,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Alibaba Cloud OSS
-```Plain
+```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -615,7 +620,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
Currently, using BOS requires downloading the corresponding SDK package; for specific configuration and usage, refer to the [BOS HDFS official documentation](https://cloud.baidu.com/doc/BOS/s/fk53rav99). After downloading and extracting it, place the jar package in the broker's lib directory.
-```Plain
+```sql
(
"fs.bos.access.key" = "xx",
"fs.bos.secret.access.key" = "xx",
@@ -625,7 +630,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- Huawei Cloud OBS
-```Plain
+```sql
(
"fs.obs.access.key" = "xx",
"fs.obs.secret.key" = "xx",
@@ -635,7 +640,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
- JuiceFS
-```Plain
+```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -649,7 +654,7 @@ The Broker Name is just a user-defined name and does not represent the type of the Broker.
When using a Broker to access GCS, the Project ID is required, while the other parameters are optional; for all parameter configurations, please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md)
-```Plain
+```sql
(
"fs.gs.project.id" = "你的 Project ID",
"fs.AbstractFileSystem.gs.impl" =
"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -702,7 +707,7 @@ Broker Name 只是一个用户自定义名称,不代表 Broker 的类型。
如果是 PARQUET 或者 ORC 格式的数据,则文件头的列名需要与 doris 表中的列名保持一致,如:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md b/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md
index 17fcc968d7..b54df0a4b8 100644
--- a/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md
+++ b/versioned_docs/version-2.0/data-operate/import/broker-load-manual.md
@@ -64,7 +64,7 @@ For the specific syntax for usage, please refer to [BROKER LOAD](../../sql-manua
Broker Load is an asynchronous import method, and the specific import results can be viewed through the [SHOW LOAD](../../sql-manual/sql-reference/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ This configuration is used to access HDFS clusters deployed in HA (High Availabi
An example configuration is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ An example configuration is as follows:
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ The default method is to determine by file extension.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ There are the following files under the path:
The table structure is as follows:
-```Plain
-data_time DATETIME,
-k2 INT,
-k3 INT
+```sql
+CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+PROPERTIES (
+ "replication_num" = "3"
+);
```
- Use Merge mode for import
@@ -448,7 +453,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
- Import the specified file format as `json`, and specify the `json_root` and jsonpaths accordingly.
- ```SQL
+ ```sql
LOAD LABEL example_db.label10
(
DATA INFILE("hdfs://host:port/input/file.json")
@@ -456,7 +461,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -478,7 +483,7 @@ The `jsonpaths` can also be used in conjunction with the column list and `SET (c
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -524,7 +529,7 @@ Doris supports importing data directly from object storage systems that support
- The S3 SDK defaults to using the virtual-hosted style method for accessing objects. However, some object storage systems may not have enabled or supported the virtual-hosted style access. In such cases, we can add the `use_path_style` parameter to force the use of the path style method:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -537,7 +542,7 @@ Doris supports importing data directly from object storage systems that support
- Support for accessing all object storage systems that support the S3 protocol using temporary credentials (TOKEN) is available. The usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -576,7 +581,7 @@ This section primarily focuses on the parameters required by the Broker when acc
The information of the Broker consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -601,7 +606,7 @@ Different Broker types and access methods require different authentication infor
- Alibaba Cloud OSS
- ```Plain
+ ```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -611,7 +616,7 @@ Different Broker types and access methods require different authentication infor
- JuiceFS
- ```Plain
+ ```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -625,7 +630,7 @@ Different Broker types and access methods require different authentication infor
When using a Broker to access GCS, the Project ID is required, while other parameters are optional. Please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md) for all parameter configurations.
- ```Plain
+ ```sql
(
"fs.gs.project.id" = "Your Project ID",
"fs.AbstractFileSystem.gs.impl" =
"com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -678,7 +683,7 @@ Appropriately adjust the `query_timeout` and
`streaming_load_rpc_max_alive_time_
For PARQUET or ORC format data, the column names in the file header must match the column names in the Doris table. For example:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md b/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
index 715dc1847d..702357fc0b 100644
--- a/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
+++ b/versioned_docs/version-2.1/data-operate/import/import-way/broker-load-manual.md
@@ -64,7 +64,7 @@ For the specific syntax for usage, please refer to [BROKER LOAD](../../../sql-ma
Broker Load is an asynchronous import method, and the specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-statements/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ This configuration is used to access HDFS clusters deployed in HA (High Availabi
An example configuration is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ An example configuration is as follows:
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ The default method is to determine by file extension.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ There are the following files under the path:
The table structure is as follows:
-```Plain
-data_time DATETIME,
-k2 INT,
-k3 INT
+```sql
+CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+PROPERTIES (
+ "replication_num" = "3"
+);
```
- Use Merge mode for import
@@ -448,7 +453,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
- Import the specified file format as `json`, and specify the `json_root` and jsonpaths accordingly.
- ```SQL
+ ```sql
LOAD LABEL example_db.label10
(
DATA INFILE("hdfs://host:port/input/file.json")
@@ -456,7 +461,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -478,7 +483,7 @@ The `jsonpaths` can also be used in conjunction with the column list and `SET (c
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -524,7 +529,7 @@ Doris supports importing data directly from object storage systems that support
- The S3 SDK defaults to using the virtual-hosted style method for accessing objects. However, some object storage systems may not have enabled or supported the virtual-hosted style access. In such cases, we can add the `use_path_style` parameter to force the use of the path style method:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -537,7 +542,7 @@ Doris supports importing data directly from object storage systems that support
- Support for accessing all object storage systems that support the S3 protocol using temporary credentials (TOKEN) is available. The usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -576,7 +581,7 @@ This section primarily focuses on the parameters required by the Broker when acc
The information of the Broker consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -601,7 +606,7 @@ Different Broker types and access methods require different authentication infor
- Alibaba Cloud OSS
- ```Plain
+ ```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -611,7 +616,7 @@ Different Broker types and access methods require different authentication infor
- JuiceFS
- ```Plain
+ ```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -625,7 +630,7 @@ Different Broker types and access methods require different authentication infor
When using a Broker to access GCS, the Project ID is required, while other parameters are optional. Please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md) for all parameter configurations.
- ```Plain
+ ```sql
(
"fs.gs.project.id" = "Your Project ID",
"fs.AbstractFileSystem.gs.impl" = "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -678,7 +683,7 @@ Appropriately adjust the `query_timeout` and `streaming_load_rpc_max_alive_time_
For PARQUET or ORC format data, the column names in the file header must match the column names in the Doris table. For example:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
diff --git a/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md b/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
index 715dc1847d..702357fc0b 100644
--- a/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
+++ b/versioned_docs/version-3.0/data-operate/import/import-way/broker-load-manual.md
@@ -64,7 +64,7 @@ For the specific syntax for usage, please refer to [BROKER LOAD](../../../sql-ma
Broker Load is an asynchronous import method, and the specific import results can be viewed through the [SHOW LOAD](../../../sql-manual/sql-statements/Show-Statements/SHOW-LOAD) command.
-```Plain
+```sql
mysql> show load order by createtime desc limit 1\G;
*************************** 1. row ***************************
JobId: 41326624
@@ -167,7 +167,7 @@ This configuration is used to access HDFS clusters deployed in HA (High Availabi
An example configuration is as follows:
-```Plain
+```sql
(
"fs.defaultFS" = "hdfs://my_ha",
"dfs.nameservices" = "my_ha",
@@ -180,7 +180,7 @@ An example configuration is as follows:
HA mode can be combined with the previous two authentication methods for cluster access. For example, accessing HA HDFS through simple authentication:
-```Plain
+```sql
(
"username"="user",
"password"="passwd",
@@ -257,7 +257,7 @@ HA mode can be combined with the previous two authentication methods for cluster
SET (
k2 = tmp_k2 + 1,
k3 = tmp_k3 + 1
- )
+ ),
DATA INFILE("hdfs://host:port/input/file-20*")
INTO TABLE `my_table2`
COLUMNS TERMINATED BY ","
@@ -312,7 +312,7 @@ The default method is to determine by file extension.
- Import the data and extract the partition field from the file path
```sql
- LOAD LABEL example_db.label10
+ LOAD LABEL example_db.label5
(
DATA INFILE("hdfs://host:port/input/city=beijing/*/*")
INTO TABLE `my_table`
@@ -397,10 +397,15 @@ There are the following files under the path:
The table structure is as follows:
-```Plain
-data_time DATETIME,
-k2 INT,
-k3 INT
+```sql
+CREATE TABLE IF NOT EXISTS tbl12 (
+ data_time DATETIME,
+ k2 INT,
+ k3 INT
+) DISTRIBUTED BY HASH(data_time) BUCKETS 10
+PROPERTIES (
+ "replication_num" = "3"
+);
```
- Use Merge mode for import
@@ -448,7 +453,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
- Import the specified file format as `json`, and specify the `json_root` and jsonpaths accordingly.
- ```SQL
+ ```sql
LOAD LABEL example_db.label10
(
DATA INFILE("hdfs://host:port/input/file.json")
@@ -456,7 +461,7 @@ To use Merge mode for import, the "my_table" must be a Unique Key table. When th
FORMAT AS "json"
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.city, $.code]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -478,7 +483,7 @@ The `jsonpaths` can also be used in conjunction with the column list and `SET (c
SET (id = id * 10)
PROPERTIES(
"json_root" = "$.item",
- "jsonpaths" = "[$.id, $.code, $.city]"
+ "jsonpaths" = "[\"$.id\", \"$.city\", \"$.code\"]"
)
)
with HDFS
@@ -524,7 +529,7 @@ Doris supports importing data directly from object storage systems that support
- The S3 SDK defaults to using the virtual-hosted style method for accessing objects. However, some object storage systems may not have enabled or supported the virtual-hosted style access. In such cases, we can add the `use_path_style` parameter to force the use of the path style method:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -537,7 +542,7 @@ Doris supports importing data directly from object storage systems that support
- Support for accessing all object storage systems that support the S3 protocol using temporary credentials (TOKEN) is available. The usage is as follows:
- ```Plain
+ ```sql
WITH S3
(
"AWS_ENDPOINT" = "AWS_ENDPOINT",
@@ -576,7 +581,7 @@ This section primarily focuses on the parameters required by the Broker when acc
The information of the Broker consists of two parts: the name (Broker name) and the authentication information. The usual syntax format is as follows:
-```Plain
+```sql
WITH BROKER "broker_name"
(
"username" = "xxx",
@@ -601,7 +606,7 @@ Different Broker types and access methods require different authentication infor
- Alibaba Cloud OSS
- ```Plain
+ ```sql
(
"fs.oss.accessKeyId" = "",
"fs.oss.accessKeySecret" = "",
@@ -611,7 +616,7 @@ Different Broker types and access methods require different authentication infor
- JuiceFS
- ```Plain
+ ```sql
(
"fs.defaultFS" = "jfs://xxx/",
"fs.jfs.impl" = "io.juicefs.JuiceFileSystem",
@@ -625,7 +630,7 @@ Different Broker types and access methods require different authentication infor
When using a Broker to access GCS, the Project ID is required, while other parameters are optional. Please refer to the [GCS Config](https://github.com/GoogleCloudDataproc/hadoop-connectors/blob/branch-2.2.x/gcs/CONFIGURATION.md) for all parameter configurations.
- ```Plain
+ ```sql
(
"fs.gs.project.id" = "Your Project ID",
"fs.AbstractFileSystem.gs.impl" = "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
@@ -678,7 +683,7 @@ Appropriately adjust the `query_timeout` and `streaming_load_rpc_max_alive_time_
For PARQUET or ORC format data, the column names in the file header must match the column names in the Doris table. For example:
-```Plain
+```sql
(tmp_c1,tmp_c2)
SET
(
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]