This is an automated email from the ASF dual-hosted git repository.
lingmiao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-doris.git
The following commit(s) were added to refs/heads/master by this push:
new e93a6da [Doc] correct format errors in English doc (#5321)
e93a6da is described below
commit e93a6da0e59b4dca34bf6604ad3641725a0871ed
Author: Ting Sun <[email protected]>
AuthorDate: Fri Feb 26 11:32:14 2021 +0800
[Doc] correct format errors in English doc (#5321)
Fix some English doc format errors
---
docs/en/README.md | 16 +++++------
.../alter-table/alter-table-bitmap-index.md | 2 +-
docs/en/administrator-guide/broker.md | 14 +++++-----
docs/en/administrator-guide/config/fe_config.md | 2 +-
docs/en/administrator-guide/dynamic-partition.md | 12 ++++----
docs/en/administrator-guide/export-manual.md | 2 +-
.../http-actions/cancel-label.md | 2 +-
.../load-data/broker-load-manual.md | 18 ++++++------
.../administrator-guide/load-data/delete-manual.md | 6 ++--
.../load-data/load-json-format.md | 18 ++++++------
.../load-data/sequence-column-manual.md | 2 +-
.../load-data/stream-load-manual.md | 10 +++----
.../operation/tablet-repair-and-balance.md | 2 +-
docs/en/administrator-guide/outfile.md | 2 +-
docs/en/administrator-guide/privilege.md | 6 ++--
docs/en/administrator-guide/resource-management.md | 20 +++++++-------
docs/en/administrator-guide/running-profile.md | 20 +++++++-------
docs/en/administrator-guide/time-zone.md | 2 +-
docs/en/community/pull-request.md | 2 +-
docs/en/community/release-process.md | 2 +-
docs/en/developer-guide/fe-eclipse-dev.md | 6 ++--
docs/en/developer-guide/format-code.md | 2 +-
docs/en/extending-doris/doris-on-es.md | 24 ++++++++--------
docs/en/extending-doris/logstash.md | 6 ++--
docs/en/extending-doris/odbc-of-doris.md | 24 ++++++++--------
.../udf/contrib/udaf-orthogonal-bitmap-manual.md | 6 ++--
docs/en/getting-started/advance-usage.md | 2 +-
docs/en/getting-started/basic-usage.md | 2 +-
docs/en/getting-started/best-practice.md | 2 +-
docs/en/getting-started/data-model-rollup.md | 6 ++--
docs/en/getting-started/hit-the-rollup.md | 2 +-
docs/en/installing/install-deploy.md | 6 ++--
docs/en/installing/upgrade.md | 2 +-
docs/en/internal/grouping_sets_design.md | 32 +++++++++++-----------
.../date-time-functions/from_unixtime.md | 16 +++++------
.../date-time-functions/time_round.md | 2 +-
.../sql-statements/Account Management/GRANT.md | 6 ++--
.../sql-statements/Account Management/REVOKE.md | 2 +-
.../Administration/ADMIN CHECK TABLET.md | 4 +--
.../sql-statements/Administration/ALTER SYSTEM.md | 10 +++----
.../sql-statements/Data Definition/ALTER TABLE.md | 10 +++----
.../Data Definition/CREATE TABLE LIKE.md | 2 +-
.../sql-statements/Data Definition/CREATE TABLE.md | 8 ++----
.../Data Definition/Colocate Join.md | 2 +-
.../Data Definition/SHOW RESOURCES.md | 2 +-
.../Data Manipulation/BROKER LOAD.md | 24 ++++++++--------
.../sql-statements/Data Manipulation/LOAD.md | 6 ++--
.../sql-statements/Data Manipulation/MINI LOAD.md | 4 +--
.../sql-statements/Data Manipulation/MULTI LOAD.md | 4 +--
.../Data Manipulation/ROUTINE LOAD.md | 4 +--
.../sql-statements/Data Manipulation/SHOW ALTER.md | 2 +-
.../SHOW DYNAMIC PARTITION TABLES.md | 2 +-
.../Data Manipulation/STREAM LOAD.md | 10 +++----
.../Data Manipulation/alter-routine-load.md | 2 +-
54 files changed, 201 insertions(+), 203 deletions(-)
diff --git a/docs/en/README.md b/docs/en/README.md
index 2be596f..19138c6 100644
--- a/docs/en/README.md
+++ b/docs/en/README.md
@@ -23,10 +23,10 @@ heroText:
- Welcome to
- Apache Doris
tagline: A fast MPP database for all modern analytics on big data.
-structure:
+structure:
title: Apache Doris
- subTitle:
- descriptions:
+ subTitle:
+ descriptions:
- Apache Doris is a modern MPP analytical database product. It can provide sub-second queries and efficient real-time data analysis. With it's distributed architecture, up to 10PB level datasets will be well supported and easy to operate.
- Apache Doris can meet various data analysis demands, including history data reports, real-time data analysis, interactive data analysis, and exploratory data analysis. Make your data analysis easier!
image: /images/home/structure-fresh.png
@@ -34,8 +34,8 @@ structure:
actionLink: /en/getting-started/basic-usage
features:
title: Apache Doris Core Features
- subTitle:
- list:
+ subTitle:
+ list:
- title: Modern MPP architecture
icon: /images/home/struct.png
- title: Getting result of a query within one second
@@ -46,15 +46,15 @@ features:
icon: /images/home/program.png
- title: Effective data model for aggregation
icon: /images/home/aggr.png
- - title: Rollup,novel pre-computation mechanism
+ - title: Rollup, novel pre-computation mechanism
icon: /images/home/rollup.png
- title: High performance, high availability, high reliability
icon: /images/home/cpu.png
- - title: easy for operation,Elastic data warehouse for big data
+ - title: easy for operation, Elastic data warehouse for big data
icon: /images/home/dev.png
cases:
title: Apache Doris Users
- subTitle:
+ subTitle:
list:
- logo: /images/home/logo-meituan.png
alt: 美团
diff --git a/docs/en/administrator-guide/alter-table/alter-table-bitmap-index.md b/docs/en/administrator-guide/alter-table/alter-table-bitmap-index.md
index 73c8a4c..0864817 100644
--- a/docs/en/administrator-guide/alter-table/alter-table-bitmap-index.md
+++ b/docs/en/administrator-guide/alter-table/alter-table-bitmap-index.md
@@ -42,7 +42,7 @@ create/drop index syntax
Please refer to [CREATE INDEX](../../sql-reference/sql-statements/Data%20Definition/CREATE%20INDEX.html) or [ALTER TABLE](../../sql-reference/sql-statements/Data%20Definition/ALTER%20TABLE.html),
- You can also specify a bitmap index when creating a table,Please refer to [CREATE TABLE](../../sql-reference/sql-statements/Data%20Definition/CREATE%20TABLE.html)
+ You can also specify a bitmap index when creating a table, Please refer to [CREATE TABLE](../../sql-reference/sql-statements/Data%20Definition/CREATE%20TABLE.html)
2. Show Index
diff --git a/docs/en/administrator-guide/broker.md b/docs/en/administrator-guide/broker.md
index 4065e3b..8a0d1c0 100644
--- a/docs/en/administrator-guide/broker.md
+++ b/docs/en/administrator-guide/broker.md
@@ -164,7 +164,7 @@ Authentication information is usually provided as a Key-Value in the Property Ma
Simple authentication means that Hadoop configures `hadoop.security.authentication` to` simple`.
- Use system users to access HDFS. Or add in the environment variable started by Broker:```HADOOP_USER_NAME```。
+ Use system users to access HDFS. Or add in the environment variable started by Broker: ```HADOOP_USER_NAME```。
```
(
@@ -177,14 +177,14 @@ Authentication information is usually provided as a Key-Value in the Property Ma
2. Kerberos Authentication
- The authentication method needs to provide the following information::
+ The authentication method needs to provide the following information::
* `hadoop.security.authentication`: Specify the authentication method as kerberos.
- * `kerberos_principal`: Specify the principal of kerberos.
+ * `kerberos_principal`: Specify the principal of kerberos.
* `kerberos_keytab`: Specify the path to the keytab file for kerberos. The file must be an absolute path to a file on the server where the broker process is located. And can be accessed by the Broker process.
* `kerberos_keytab_content`: Specify the content of the keytab file in kerberos after base64 encoding. You can choose one of these with `kerberos_keytab` configuration.
- Examples are as follows:
+ Examples are as follows:
```
(
@@ -201,7 +201,7 @@ Authentication information is usually provided as a Key-Value in the Property Ma
)
```
If Kerberos authentication is used, the [krb5.conf](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html) file is required when deploying the Broker process.
- The krb5.conf file contains Kerberos configuration information,Normally, you should install your krb5.conf file in the directory /etc. You can override the default location by setting the environment variable KRB5_CONFIG.
+ The krb5.conf file contains Kerberos configuration information, Normally, you should install your krb5.conf file in the directory /etc. You can override the default location by setting the environment variable KRB5_CONFIG.
An example of the contents of the krb5.conf file is as follows:
```
[libdefaults]
@@ -226,7 +226,7 @@ Authentication information is usually provided as a Key-Value in the Property Ma
* `dfs.namenode.rpc-address.xxx.nn`: Specify the rpc address information of namenode, Where nn represents the name of the namenode configured in `dfs.ha.namenodes.xxx`, such as: "dfs.namenode.rpc-address.my_ha.my_nn" = "host:port".
* `dfs.client.failover.proxy.provider`: Specify the provider for the client to connect to the namenode. The default is: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
- Examples are as follows:
+ Examples are as follows:
```
(
@@ -263,7 +263,7 @@ Authentication information is usually provided as a Key-Value in the Property Ma
* Region Endpoint: Endpoint of the BOS region.
* For the regions supported by BOS and corresponding Endpoints, please see [Get access domain name](https://cloud.baidu.com/doc/BOS/s/Ck1rk80hn#%E8%8E%B7%E5%8F%96%E8%AE%BF%E9%97%AE%E5%9F%9F%E5%90%8D)
- Examples are as follows:
+ Examples are as follows:
```
(
diff --git a/docs/en/administrator-guide/config/fe_config.md b/docs/en/administrator-guide/config/fe_config.md
index 472a2d6..d38775a 100644
--- a/docs/en/administrator-guide/config/fe_config.md
+++ b/docs/en/administrator-guide/config/fe_config.md
@@ -744,7 +744,7 @@ Used to set default database data quota size, default is 1T.
### `default_max_filter_ratio`
-Used to set default max filter ratio of load Job. It will be overridden by `max_filter_ratio` of the load job properties,default value is 0, value range 0-1.
+Used to set default max filter ratio of load Job. It will be overridden by `max_filter_ratio` of the load job properties, default value is 0, value range 0-1.
### `enable_http_server_v2`
diff --git a/docs/en/administrator-guide/dynamic-partition.md b/docs/en/administrator-guide/dynamic-partition.md
index ab23f44..1a75491 100644
--- a/docs/en/administrator-guide/dynamic-partition.md
+++ b/docs/en/administrator-guide/dynamic-partition.md
@@ -289,8 +289,8 @@ mysql> SHOW DYNAMIC PARTITION TABLES;
```
* LastUpdateTime: The last time of modifying dynamic partition properties
-* LastSchedulerTime: The last time of performing dynamic partition scheduling
-* State: The state of the last execution of dynamic partition scheduling
+* LastSchedulerTime: The last time of performing dynamic partition scheduling
+* State: The state of the last execution of dynamic partition scheduling
* LastCreatePartitionMsg: Error message of the last time to dynamically add partition scheduling
* LastDropPartitionMsg: Error message of the last execution of dynamic deletion partition scheduling
@@ -302,11 +302,11 @@ mysql> SHOW DYNAMIC PARTITION TABLES;
Whether to enable Doris's dynamic partition feature. The default value is false, which is off. This parameter only affects the partitioning operation of dynamic partition tables, not normal tables. You can modify the parameters in `fe.conf` and restart FE to take effect. You can also execute the following commands at runtime to take effect:
- MySQL protocol:
+ MySQL protocol:
`ADMIN SET FRONTEND CONFIG ("dynamic_partition_enable" = "true")`
- HTTP protocol:
+ HTTP protocol:
`curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_enable=true`
@@ -316,11 +316,11 @@ mysql> SHOW DYNAMIC PARTITION TABLES;
The execution frequency of dynamic partition threads defaults to 3600 (1 hour), that is, scheduling is performed every 1 hour. You can modify the parameters in `fe.conf` and restart FE to take effect. You can also modify the following commands at runtime:
- MySQL protocol:
+ MySQL protocol:
`ADMIN SET FRONTEND CONFIG ("dynamic_partition_check_interval_seconds" = "7200")`
- HTTP protocol:
+ HTTP protocol:
`curl --location-trusted -u username:password -XGET http://fe_host:fe_http_port/api/_set_config?dynamic_partition_check_interval_seconds=432000`
diff --git a/docs/en/administrator-guide/export-manual.md b/docs/en/administrator-guide/export-manual.md
index 20a1985..d4bac90 100644
--- a/docs/en/administrator-guide/export-manual.md
+++ b/docs/en/administrator-guide/export-manual.md
@@ -177,7 +177,7 @@ Usually, a query plan for an Export job has only two parts `scan`- `export`, and
* During the operation of the Export job, if FE restarts or cuts the master, the Export job will fail, requiring the user to resubmit.
* If the Export job fails, the `__doris_export_tmp_xxx` temporary directory generated in the remote storage and the generated files will not be deleted, requiring the user to delete them manually.
* If the Export job runs successfully, the `__doris_export_tmp_xxx` directory generated in the remote storage may be retained or cleared according to the file system semantics of the remote storage. For example, in Baidu Object Storage (BOS), after removing the last file in a directory through rename operation, the directory will also be deleted. If the directory is not cleared, the user can clear it manually.
-* When the Export runs successfully or fails, the FE reboots or cuts, then some information of the jobs displayed by `SHOW EXPORT` will be lost and can not be viewed.
+* When the Export runs successfully or fails, the FE reboots or cuts, then some information of the jobs displayed by `SHOW EXPORT` will be lost and cannot be viewed.
* Export jobs only export data from Base tables, not Rollup Index.
* Export jobs scan data and occupy IO resources, which may affect the query latency of the system.
diff --git a/docs/en/administrator-guide/http-actions/cancel-label.md b/docs/en/administrator-guide/http-actions/cancel-label.md
index 3ba84fc..f668238 100644
--- a/docs/en/administrator-guide/http-actions/cancel-label.md
+++ b/docs/en/administrator-guide/http-actions/cancel-label.md
@@ -55,7 +55,7 @@ under the License.
## keyword
- CANCEL,LABEL
+ CANCEL, LABEL
diff --git a/docs/en/administrator-guide/load-data/broker-load-manual.md b/docs/en/administrator-guide/load-data/broker-load-manual.md
index 29b9fb3..2b410b6 100644
--- a/docs/en/administrator-guide/load-data/broker-load-manual.md
+++ b/docs/en/administrator-guide/load-data/broker-load-manual.md
@@ -235,7 +235,7 @@ The following is a detailed explanation of some parameters of the import operati
2. Strict mode does not affect the imported column when it is generated by a function transformation.
- 3. For a column type imported that contains scope restrictions, strict mode does not affect it if the original data can normally pass type conversion, but can not pass scope restrictions. For example, if the type is decimal (1,0) and the original data is 10, it falls within the scope of type conversion but not column declaration. This data strict has no effect on it.
+ 3. For a column type imported that contains scope restrictions, strict mode does not affect it if the original data can normally pass type conversion, but cannot pass scope restrictions. For example, if the type is decimal (1,0) and the original data is 10, it falls within the scope of type conversion but not column declaration. This data strict has no effect on it.
#### Import Relation between strict mode source data
@@ -336,11 +336,11 @@ The following is mainly about the significance of viewing the parameters in the
```
USER_CANCEL: User Canceled Tasks
- ETL_RUN_FAIL:Import tasks that failed in the ETL phase
- ETL_QUALITY_UNSATISFIED:Data quality is not up to standard, that is, the error rate exceedsmax_filter_ratio
- LOAD_RUN_FAIL:Import tasks that failed in the LOADING phase
- TIMEOUT:Import task not completed in overtime
- UNKNOWN:Unknown import error
+ ETL_RUN_FAIL: Import tasks that failed in the ETL phase
+ ETL_QUALITY_UNSATISFIED: Data quality is not up to standard, that is, the error rate exceedsmax_filter_ratio
+ LOAD_RUN_FAIL: Import tasks that failed in the LOADING phase
+ TIMEOUT: Import task not completed in overtime
+ UNKNOWN: Unknown import error
```
+ CreateTime /EtlStartTime /EtlFinishTime /LoadStartTime /LoadFinishTime
@@ -498,15 +498,15 @@ Cluster situation: The number of BEs in the cluster is about 3, and the Broker n
## Common Questions
-* failed with : `Scan bytes per broker scanner exceed limit:xxx`
+* failed with: `Scan bytes per broker scanner exceed limit:xxx`
Refer to the Best Practices section of the document to modify the FE configuration items `max_bytes_per_broker_scanner` and `max_broker_concurrency'.`
-* failed with :`failed to send batch` or `TabletWriter add batch with unknown id`
+* failed with: `failed to send batch` or `TabletWriter add batch with unknown id`
Refer to **General System Configuration** in **BE Configuration** in the Import Manual (./load-manual.md), and modify `query_timeout` and `streaming_load_rpc_max_alive_time_sec` appropriately.
-* failed with : `LOAD_RUN_FAIL; msg: Invalid Column Name: xxx`
+* failed with: `LOAD_RUN_FAIL; msg: Invalid Column Name: xxx`
If it is PARQUET or ORC format data, you need to keep the column names in the file header consistent with the column names in the doris table, such as:
`` `
diff --git a/docs/en/administrator-guide/load-data/delete-manual.md b/docs/en/administrator-guide/load-data/delete-manual.md
index 6ab2f67..eff6e99 100644
--- a/docs/en/administrator-guide/load-data/delete-manual.md
+++ b/docs/en/administrator-guide/load-data/delete-manual.md
@@ -31,7 +31,7 @@ Unlike other import methods, delete is a synchronization process. Similar to ins
## Syntax
-The delete statement's syntax is as follows:
+The delete statement's syntax is as follows:
```
DELETE FROM table_name [PARTITION partition_name]
@@ -39,7 +39,7 @@ WHERE
column_name1 op value[ AND column_name2 op value ...];
```
-example 1:
+example 1:
```
DELETE FROM my_table PARTITION p1 WHERE k1 = 3;
@@ -118,7 +118,7 @@ The delete command is an SQL command, and the returned results are synchronous.
ERROR 1064 (HY000): errCode = 2, detailMessage = {错误原因}
```
- example:
+ example:
A timeout deletion will return the timeout and unfinished replicas displayed as ` (tablet = replica)`
diff --git a/docs/en/administrator-guide/load-data/load-json-format.md b/docs/en/administrator-guide/load-data/load-json-format.md
index 57a1038..39a82aa 100644
--- a/docs/en/administrator-guide/load-data/load-json-format.md
+++ b/docs/en/administrator-guide/load-data/load-json-format.md
@@ -304,11 +304,11 @@ If you want to load the above data as expected, the load statement is as follows
curl -v --location-trusted -u root: -H "format: json" -H "strip_outer_array: true" -H "jsonpaths: [\"$.k1\", \"$.k2\"]"- H "columns: k1, tmp_k2, k2 = ifnull(tmp_k2,'x')" -T example.json http://127.0.0.1:8030/api/db1/tbl1/_stream_load
```
-## LargetInt与Decimal
+## LargetInt and Decimal
Doris supports data types such as largeint and decimal with larger data range and higher data precision. However, due to the fact that the maximum range of the rapid JSON library used by Doris for the resolution of digital types is Int64 and double, there may be some problems when importing largeint or decimal by JSON format, such as loss of precision, data conversion error, etc.
-For example:
+For example:
```
[
@@ -325,9 +325,9 @@ To solve this problem, Doris provides a param `num_as_string `. Doris converts t
curl -v --location-trusted -u root: -H "format: json" -H "num_as_string: true" -T example.json http://127.0.0.1:8030/api/db1/tbl1/_stream_load
```
-But using the param will cause unexpected side effects. Doris currently does not support composite types, such as Array, Map, etc. So when a non basic type is matched, Doris will convert the type to a string in JSON format.` num_as_string`will also convert compound type numbers into strings, for example:
+But using the param will cause unexpected side effects. Doris currently does not support composite types, such as Array, Map, etc. So when a non basic type is matched, Doris will convert the type to a string in JSON format.` num_as_string`will also convert compound type numbers into strings, for example:
-JSON Data:
+JSON Data:
{ "id": 123, "city" : { "name" : "beijing", "city_id" : 1 }}
@@ -369,7 +369,7 @@ code INT NULL
curl --location-trusted -u user:passwd -H "format: json" -T data.json http://localhost:8030/api/db1/tbl1/_stream_load
```
- Results:
+ Results:
```
100 beijing 1
@@ -381,7 +381,7 @@ code INT NULL
curl --location-trusted -u user:passwd -H "format: json" -H "jsonpaths: [\"$.id\",\"$.city\",\"$.code\"]" -T data.json http://localhost:8030/api/db1/tbl1/_stream_load
```
- Results:
+ Results:
```
100 beijing 1
@@ -399,7 +399,7 @@ code INT NULL
curl --location-trusted -u user:passwd -H "format: json" -H "jsonpaths: [\"$.id\",\"$.content.city\",\"$.content.code\"]" -T data.json http://localhost:8030/api/db1/tbl1/_stream_load
```
- Results:
+ Results:
```
100 beijing 1
@@ -430,7 +430,7 @@ code INT NULL
curl --location-trusted -u user:passwd -H "format: json" -H "jsonpaths: [\"$.id\",\"$.city\",\"$.code\"]" -H "strip_outer_array: true" -T data.json http://localhost:8030/api/db1/tbl1/_stream_load
```
- Results:
+ Results:
```
100 beijing 1
@@ -449,7 +449,7 @@ code INT NULL
curl --location-trusted -u user:passwd -H "format: json" -H "jsonpaths: [\"$.id\",\"$.city\",\"$.code\"]" -H "strip_outer_array: true" -H "columns: id, city, tmpc, code=tmpc+1" -T data.json http://localhost:8030/api/db1/tbl1/_stream_load
```
- Results:
+ Results:
```
100 beijing 2
diff --git a/docs/en/administrator-guide/load-data/sequence-column-manual.md b/docs/en/administrator-guide/load-data/sequence-column-manual.md
index 8f29789..5a21c2c 100644
--- a/docs/en/administrator-guide/load-data/sequence-column-manual.md
+++ b/docs/en/administrator-guide/load-data/sequence-column-manual.md
@@ -148,7 +148,7 @@ MySQL > desc test_table;
+-------------+--------------+------+-------+---------+---------+
```
-2. Import data normally:
+2. Import data normally:
Import the following data
```
diff --git a/docs/en/administrator-guide/load-data/stream-load-manual.md b/docs/en/administrator-guide/load-data/stream-load-manual.md
index 6d60398..e8c9dc5 100644
--- a/docs/en/administrator-guide/load-data/stream-load-manual.md
+++ b/docs/en/administrator-guide/load-data/stream-load-manual.md
@@ -187,7 +187,7 @@ The following main explanations are given for the Stream load import result para
"Publish Timeout": This state also indicates that the import has been completed, except that the data may be delayed and visible without retrying.
- "Label Already Exists":Label duplicate, need to be replaced Label.
+ "Label Already Exists": Label duplicate, need to be replaced Label.
"Fail": Import failed.
@@ -211,13 +211,13 @@ The following main explanations are given for the Stream load import result para
+ BeginTxnTimeMs: The time cost for RPC to Fe to begin a transaction, Unit milliseconds.
-+ StreamLoadPutTimeMs:The time cost for RPC to Fe to get a stream load plan, Unit milliseconds.
++ StreamLoadPutTimeMs: The time cost for RPC to Fe to get a stream load plan, Unit milliseconds.
-+ ReadDataTimeMs:Read data time, Unit milliseconds.
++ ReadDataTimeMs: Read data time, Unit milliseconds.
-+ WriteDataTimeMs:Write data time, Unit milliseconds.
++ WriteDataTimeMs: Write data time, Unit milliseconds.
-+ CommitAndPublishTimeMs:The time cost for RPC to Fe to commit and publish a transaction, Unit milliseconds.
++ CommitAndPublishTimeMs: The time cost for RPC to Fe to commit and publish a transaction, Unit milliseconds.
+ ErrorURL: If you have data quality problems, visit this URL to see specific error lines.
diff --git a/docs/en/administrator-guide/operation/tablet-repair-and-balance.md b/docs/en/administrator-guide/operation/tablet-repair-and-balance.md
index 45bce98..a7ff6e6 100644
--- a/docs/en/administrator-guide/operation/tablet-repair-and-balance.md
+++ b/docs/en/administrator-guide/operation/tablet-repair-and-balance.md
@@ -683,4 +683,4 @@ The following parameters do not support modification for the time being, just fo
* In some cases, the default replica repair and balancing strategy may cause the network to be full (mostly in the case of gigabit network cards and a large number of disks per BE). At this point, some parameters need to be adjusted to reduce the number of simultaneous balancing and repair tasks.
-* Current balancing strategies for copies of Colocate Table do not guarantee that copies of the same Tablet will not be distributed on the BE of the same host. However, the repair strategy of the copy of Colocate Table detects this distribution error and corrects it. However, it may occur that after correction, the balancing strategy regards the replicas as unbalanced and rebalances them. As a result, the Colocate Group can not achieve stability because of the continuous alternation betw [...]
+* Current balancing strategies for copies of Colocate Table do not guarantee that copies of the same Tablet will not be distributed on the BE of the same host. However, the repair strategy of the copy of Colocate Table detects this distribution error and corrects it. However, it may occur that after correction, the balancing strategy regards the replicas as unbalanced and rebalances them. As a result, the Colocate Group cannot achieve stability because of the continuous alternation betwe [...]
diff --git a/docs/en/administrator-guide/outfile.md b/docs/en/administrator-guide/outfile.md
index a86b81e..436751a 100644
--- a/docs/en/administrator-guide/outfile.md
+++ b/docs/en/administrator-guide/outfile.md
@@ -81,7 +81,7 @@ WITH BROKER `broker_name`
* `column_separator`: Column separator, only applicable to CSV format. The default is `\t`.
* `line_delimiter`: Line delimiter, only applicable to CSV format. The default is `\n`.
- * `max_file_size`:The max size of a single file. Default is 1GB. Range from 5MB to 2GB. Files exceeding this size will be splitted.
+ * `max_file_size`: The max size of a single file. Default is 1GB. Range from 5MB to 2GB. Files exceeding this size will be splitted.
1. Example 1
diff --git a/docs/en/administrator-guide/privilege.md b/docs/en/administrator-guide/privilege.md
index 82d0579..f20316b 100644
--- a/docs/en/administrator-guide/privilege.md
+++ b/docs/en/administrator-guide/privilege.md
@@ -52,11 +52,11 @@ Doris's new privilege management system refers to Mysql's privilege management m
## Supported operations
-1. Create users:CREATE USER
+1. Create users: CREATE USER
2. Delete users: DROP USER
3. Authorization: GRANT
4. Withdrawal: REVOKE
-5. Create role:CREATE ROLE
+5. Create role: CREATE ROLE
6. Delete Roles: DROP ROLE
7. View current user privileges: SHOW GRANTS
8. View all user privilegesSHOW ALL GRANTS;
@@ -132,7 +132,7 @@ ADMIN\_PRIV and GRANT\_PRIV have the authority of **"grant authority"** at the s
* Users with ADMIN or GLOBAL GRANT privileges can set any user's password.
* Ordinary users can set their corresponding User Identity password. The corresponding User Identity can be viewed by `SELECT CURRENT_USER();`command.
- * Users with GRANT privileges at non-GLOBAL level can not set the password of existing users, but can only specify the password when creating users.
+ * Users with GRANT privileges at non-GLOBAL level cannot set the password of existing users, but can only specify the password when creating users.
## Some explanations
diff --git a/docs/en/administrator-guide/resource-management.md b/docs/en/administrator-guide/resource-management.md
index e5dce53..890c42d 100644
--- a/docs/en/administrator-guide/resource-management.md
+++ b/docs/en/administrator-guide/resource-management.md
@@ -50,8 +50,8 @@ There are three main commands for resource management: `create resource`, `drop
In the command to create a resource, the user must provide the following information:
* `resource_name` name of the resource
- * `PROPERTIES` related parameters, as follows:
- * `type`:resource type, required. Currently, only spark and odbc_catalog are supported.
+ * `PROPERTIES` related parameters, as follows:
+ * `type`: resource type, required. Currently, only spark and odbc_catalog are supported.
* For other parameters, see the resource introduction
@@ -62,7 +62,7 @@ There are three main commands for resource management: `create resource`, `drop
3. SHOW RESOURCES
- This command can view the resources that the user has permission to use. Please refer to:`HELP SHOW RESOURCES`
+ This command can view the resources that the user has permission to use. Please refer to: `HELP SHOW RESOURCES`
@@ -70,8 +70,8 @@ There are three main commands for resource management: `create resource`, `drop
Currently, Doris can support
-* Spark resource : do ETL work
-* ODBC resource : query and import data from external tables
+* Spark resource: do ETL work
+* ODBC resource: query and import data from external tables
The following shows how the two resources are used.
@@ -79,9 +79,9 @@ The following shows how the two resources are used.
#### Parameter
-##### Spark Parameters:
+##### Spark Parameters:
-`spark.master`: required, currently supported yarn,spark://host:port。
+`spark.master`: required, currently supported yarn, spark://host:port。
`spark.submit.deployMode`: The deployment mode of spark. required. It supports cluster and client.
@@ -91,7 +91,7 @@ The following shows how the two resources are used.
Other parameters are optional, refer to: http://spark.apache.org/docs/latest/configuration.html.
-##### If spark is used for ETL, also need to specify the following parameters:
+##### If spark is used for ETL, also need to specify the following parameters:
`working_dir`: Directory used by ETL. Spark is required when used as an ETL resource. For example: hdfs://host:port/tmp/doris.
@@ -130,9 +130,9 @@ PROPERTIES
#### Parameter
-##### ODBC Parameters:
+##### ODBC Parameters:
-`type`: Required,must be `odbc_catalog`. As the type identifier of resource.
+`type`: Required, must be `odbc_catalog`. As the type identifier of resource.
`user`: The user name of the external table, required.
diff --git a/docs/en/administrator-guide/running-profile.md b/docs/en/administrator-guide/running-profile.md
index 25a4a70..4002f73 100644
--- a/docs/en/administrator-guide/running-profile.md
+++ b/docs/en/administrator-guide/running-profile.md
@@ -119,7 +119,7 @@ There are many statistical information collected at BE. so we list the correspo
- DataArrivalWaitTime: Total waiting time of sender to push data
- FirstBatchArrivalWaitTime: The time waiting for the first batch come from sender
- DeserializeRowBatchTimer: Time consuming to receive data deserialization
- SendersBlockedTotalTimer(*): When the DataStreamRecv's queue buffer is full,wait time of sender
+ - SendersBlockedTotalTimer(*): When the DataStreamRecv's queue buffer is full, wait time of sender
- ConvertRowBatchTime: Time taken to transfer received data to RowBatch
- RowsReturned: Number of receiving rows
- RowsReturnedRate: Rate of rows received
@@ -135,16 +135,16 @@ There are many statistical information collected at BE. so we list the correspo
#### `AGGREGATION_NODE`
- PartitionsCreated: Number of partition split by aggregate
- GetResultsTime: Time to get aggregate results from each partition
- - HTResizeTime: Time spent in resizing hashtable
- - HTResize: Number of times hashtable resizes
+ - HTResizeTime: Time spent in resizing hashtable
+ - HTResize: Number of times hashtable resizes
- HashBuckets: Number of buckets in hashtable
- - HashBucketsWithDuplicate: Number of buckets with duplicatenode in hashtable
- - HashCollisions: Number of hash conflicts generated
- - HashDuplicateNodes: Number of duplicate nodes with the same buckets in hashtable
- - HashFailedProbe: Number of failed probe operations
- - HashFilledBuckets: Number of buckets filled data
- - HashProbe: Number of hashtable probe
- - HashTravelLength: The number of steps moved when hashtable queries
+ - HashBucketsWithDuplicate: Number of buckets with duplicate nodes in hashtable
+ - HashCollisions: Number of hash conflicts generated
+ - HashDuplicateNodes: Number of duplicate nodes with the same buckets in hashtable
+ - HashFailedProbe: Number of failed probe operations
+ - HashFilledBuckets: Number of buckets filled with data
+ - HashProbe: Number of hashtable probes
+ - HashTravelLength: The number of steps traversed during hashtable lookups
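As a rough illustration of how HashCollisions and HashDuplicateNodes differ, here is a minimal chained hash table that counts both. The counter names mirror the profile, but the counting logic is an illustrative assumption, not Doris's implementation.

```python
# A chained hash table sketching three AGGREGATION_NODE counters:
# HashCollisions (distinct keys sharing a bucket), HashDuplicateNodes
# (nodes whose key already exists), and HashTravelLength (chain steps).
class CountingHashTable:
    def __init__(self, num_buckets=4):
        self.buckets = [[] for _ in range(num_buckets)]
        self.hash_collisions = 0
        self.hash_duplicate_nodes = 0
        self.hash_travel_length = 0

    def insert(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for existing in bucket:          # probe the chain
            self.hash_travel_length += 1
            if existing == key:
                self.hash_duplicate_nodes += 1
                break
        else:
            if bucket:                   # non-empty chain, different key
                self.hash_collisions += 1
        bucket.append(key)

ht = CountingHashTable(num_buckets=1)    # force everything into one bucket
for k in ["a", "b", "a"]:
    ht.insert(k)
print(ht.hash_collisions, ht.hash_duplicate_nodes)  # 1 1
```

With a single bucket, inserting "b" after "a" is a collision, and the second "a" is a duplicate node rather than a new collision.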
#### `HASH_JOIN_NODE`
- ExecOption: The way to construct a HashTable for the right child
(synchronous or asynchronous), the right child in Join may be a table or a
subquery, the same is true for the left child
diff --git a/docs/en/administrator-guide/time-zone.md
b/docs/en/administrator-guide/time-zone.md
index eba38d5..003e6ea 100644
--- a/docs/en/administrator-guide/time-zone.md
+++ b/docs/en/administrator-guide/time-zone.md
@@ -39,7 +39,7 @@ There are multiple time zone related parameters in Doris
* `system_time_zone`:
-When the server starts, it will be set automatically according to the time
zone set by the machine, which can not be modified after setting.
+When the server starts, it will be set automatically according to the time
zone set by the machine, which cannot be modified after setting.
* `time_zone`:
diff --git a/docs/en/community/pull-request.md
b/docs/en/community/pull-request.md
index 296e731..2b59f12 100644
--- a/docs/en/community/pull-request.md
+++ b/docs/en/community/pull-request.md
@@ -69,7 +69,7 @@ upstream https://github.com/apache/incubator-doris.git (push)
git checkout -b <your_branch_name>
```
-Note: \<your\_branch\_name\> name is customized for you.
+Note: \<your\_branch\_name\> is a name of your own choosing.
Code changes can be made after creation.
diff --git a/docs/en/community/release-process.md
b/docs/en/community/release-process.md
index 953d44c..d390781 100644
--- a/docs/en/community/release-process.md
+++ b/docs/en/community/release-process.md
@@ -340,7 +340,7 @@ $ git tag
### Packing Signature
-The following steps also need to log into user accounts directly through
terminals such as SecureCRT, and can not be transferred through Su - user or
ssh, otherwise the password input box will not show and error will be reported.
+The following steps also require logging into the user account directly through a terminal such as SecureCRT; you cannot switch via `su - user` or ssh, otherwise the password input box will not appear and an error will be reported.
```
$ git checkout 0.9.0-rc01
diff --git a/docs/en/developer-guide/fe-eclipse-dev.md
b/docs/en/developer-guide/fe-eclipse-dev.md
index 03aa416..5fecccf 100644
--- a/docs/en/developer-guide/fe-eclipse-dev.md
+++ b/docs/en/developer-guide/fe-eclipse-dev.md
@@ -30,7 +30,7 @@ under the License.
* JDK 1.8+
* Maven 3.x+
-* Eclipse,with [M2Eclipse](http://www.eclipse.org/m2e/) installed
+* Eclipse, with [M2Eclipse](http://www.eclipse.org/m2e/) installed
### Code Generation
@@ -63,9 +63,9 @@ The FE module requires part of the generated code, such as
Thrift, Protobuf, Jfl
2. Import FE project
- * Open Eclipse,choose `File -> Import`.
+ * Open Eclipse, choose `File -> Import`.
* Choose `General -> Existing Projects into Workspace`.
- * `Select root directory` and choose `fe/` directory,click `Finish` to
finish.
+ * `Select root directory` and choose `fe/` directory, click `Finish` to
finish.
* Right click the project, and choose `Build Path -> Configure Build Path`.
* In the `Java Build Path` dialog, choose the `Source` tab, click `Add
Folder`, and select the `java/` directory that was copied and unzipped before
adding.
* Click `Apply and Close` to finish.
diff --git a/docs/en/developer-guide/format-code.md
b/docs/en/developer-guide/format-code.md
index 32fde71..ce10492 100644
--- a/docs/en/developer-guide/format-code.md
+++ b/docs/en/developer-guide/format-code.md
@@ -75,7 +75,7 @@ clang-format in settings.
Open the vs code configuration page and search `clang_format`, fill the box as
follows.
```
-"clang_format_path": "$clang-format path$",
+"clang_format_path": "$clang-format path$",
"clang_format_style": "file"
```
Then, right click the file and choose `Format Document`.
diff --git a/docs/en/extending-doris/doris-on-es.md
b/docs/en/extending-doris/doris-on-es.md
index 104a832..d7e68f7 100644
--- a/docs/en/extending-doris/doris-on-es.md
+++ b/docs/en/extending-doris/doris-on-es.md
@@ -207,9 +207,9 @@ Doris obtains data from ES following the following two
principles:
* **Best effort**: Automatically detect whether the column to be read has
column storage enabled (doc_value: true).If all the fields obtained have column
storage, Doris will obtain the values of all fields from the column
storage(doc_values)
* **Automatic downgrade**: If the field to be obtained has one or more field
that is not have doc_value, the values of all fields will be parsed from the
line store `_source`
-##### Advantage:
+##### Advantage:
-By default, Doris On ES will get all the required columns from the row
storage, which is `_source`, and the storage of `_source` is the origin json
format document,Inferior to column storage in batch read performance,Especially
obvious when only a few columns are needed,When only a few columns are
obtained, the performance of docvalue is about ten times that of _source
+By default, Doris On ES gets all the required columns from the row storage `_source`, which stores the original JSON document. Row storage is inferior to column storage in batch read performance, especially when only a few columns are needed: in that case, reading from docvalue is about ten times faster than reading from `_source`.
##### Tip
1. Fields of type `text` are not column-stored in ES, so if the value of the
field to be obtained has a field of type `text`, it will be automatically
downgraded to get from `_source`
@@ -237,13 +237,13 @@ PROPERTIES (
);
```
-Parameter Description:
+Parameter Description:
Parameter | Description
---|---
**enable\_keyword\_sniff** | Whether to detect the string type (**text**)
`fields` in ES to obtain additional not analyzed (**keyword**) field
name(multi-fields mechanism)
-You can directly import data without creating an index. At this time, ES will
automatically create a new index in ES, For a field of type string, a field of
type `text` and field of type `keyword` will be created meantime, This is the
multi-fields feature of ES, mapping is as follows:
+You can directly import data without creating an index. ES will then automatically create a new index, and for a field of string type it will create both a `text` field and a `keyword` field. This is the multi-fields feature of ES; the mapping is as follows:
```
"k4": {
@@ -256,15 +256,15 @@ You can directly import data without creating an index.
At this time, ES will au
}
}
```
-When performing conditional filtering on k4, for example =,Doris On ES will
convert the query to ES's TermQuery
+When performing conditional filtering on k4, for example `=`, Doris On ES will convert the query to ES's TermQuery
-SQL filter:
+SQL filter:
```
k4 = "Doris On ES"
```
-The query DSL converted into ES is:
+The query DSL converted into ES is:
```
"term" : {
@@ -273,7 +273,7 @@ The query DSL converted into ES is:
}
```
-Because the first field type of k4 is `text`, when data is imported, it will
perform word segmentation processing according to the word segmentator set by
k4 (if it is not set, it is the standard word segmenter) to get three Term of
doris, on, and es, as follows ES analyze API analysis:
+Because the first field type of k4 is `text`, the imported data is tokenized according to the analyzer set for k4 (the standard analyzer if none is set), producing the three terms doris, on, and es, as the ES analyze API shows:
```
POST /_analyze
@@ -282,7 +282,7 @@ POST /_analyze
"text": "Doris On ES"
}
```
-The result of analyzed is:
+The result of analyzed is:
```
{
@@ -311,14 +311,14 @@ The result of analyzed is:
]
}
```
-The query uses:
+The query uses:
```
"term" : {
"k4": "Doris On ES"
}
```
-This term does not match any term in the dictionary,and will not return any
results,enable `enable_keyword_sniff: true` will automatically convert `k4 =
"Doris On ES"` into `k4.keyword = "Doris On ES"`to exactly match SQL
semantics,The converted ES query DSL is:
+This term does not match any term in the dictionary and will not return any results. Enabling `enable_keyword_sniff: true` automatically converts `k4 = "Doris On ES"` into `k4.keyword = "Doris On ES"` to exactly match SQL semantics. The converted ES query DSL is:
```
"term" : {
@@ -456,7 +456,7 @@ select * from es_table where esquery(k4, ' {
### Suggestions for using Date type fields
-The use of Datetype fields in ES is very flexible, but in Doris On ES, if the
type of the Date type field is not set properly, it will cause the filter
condition can not be pushed down.
+The use of Date type fields in ES is very flexible, but in Doris On ES, if the type of a Date field is not set properly, the filter condition cannot be pushed down.
When creating an index, do maximum format compatibility with the setting of
the Date type format:
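The `text` vs `keyword` behavior described above can be simulated in a few lines. This is a hedged sketch, not a real ES call: approximating the standard analyzer as whitespace-split plus lowercase.

```python
# Why `k4 = "Doris On ES"` misses on an analyzed `text` field but hits the
# un-analyzed `keyword` sub-field that enable_keyword_sniff rewrites to.
def standard_analyze(text):
    # Rough approximation of ES's standard analyzer: split + lowercase.
    return [token.lower() for token in text.split()]

indexed_text_terms = standard_analyze("Doris On ES")  # ['doris', 'on', 'es']
indexed_keyword = "Doris On ES"                       # stored verbatim

term_query = "Doris On ES"
print(term_query in indexed_text_terms)  # False: no indexed term matches
print(term_query == indexed_keyword)     # True: keyword matches exactly
```

A term query is an exact dictionary lookup, so it can only match the whole string against the un-analyzed `keyword` copy.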
diff --git a/docs/en/extending-doris/logstash.md
b/docs/en/extending-doris/logstash.md
index d44f630..3917720 100644
--- a/docs/en/extending-doris/logstash.md
+++ b/docs/en/extending-doris/logstash.md
@@ -53,7 +53,7 @@ Executing an order
Install logstash-output-doris plugin
## Configuration
-### Example:
+### Example:
Create a new configuration file in the config directory and name it
logstash-doris.conf
@@ -145,7 +145,7 @@ Get the file logstash-output-doris-0.1.0.gem, and the
compilation is complete
/tmp/doris.data is the doris data path
-3> Start filebeat:
+3> Start filebeat:
`./filebeat -e -c filebeat.yml -d "publish"`
@@ -184,7 +184,7 @@ Install the plugin
The configuration here needs to be configured according to the configuration
instructions
-5> Start logstash:
+5> Start logstash:
./bin/logstash -f ./config/logstash-doris.conf --config.reload.automatic
diff --git a/docs/en/extending-doris/odbc-of-doris.md
b/docs/en/extending-doris/odbc-of-doris.md
index 84f4d8f..9680dab 100644
--- a/docs/en/extending-doris/odbc-of-doris.md
+++ b/docs/en/extending-doris/odbc-of-doris.md
@@ -99,7 +99,7 @@ PROPERTIES (
);
```
-The following parameters are accepted by ODBC external table::
+The following parameters are accepted by ODBC external table:
Parameter | Description
---|---
@@ -123,8 +123,8 @@ Description = ODBC for MySQL
Driver = /usr/lib64/libmyodbc8w.so
FileUsage = 1
```
-* `[]`:The corresponding driver name in is the driver name. When creating an
external table, the driver name of the external table should be consistent with
that in the configuration file.
-* `Driver=`: This should be setted in according to the actual be installation
path of the driver. It is essentially the path of a dynamic library. Here, we
need to ensure that the pre dependencies of the dynamic library are met.
+* `[]`: The name in brackets is the driver name. When creating an external table, the driver name of the external table should be consistent with that in the configuration file.
+* `Driver=`: This should be set according to the actual installation path of the driver on the BE. It is essentially the path of a dynamic library. Here, we need to ensure that the prerequisite dependencies of the dynamic library are met.
**Remember, all BE nodes are required to have the same driver installed, the
same installation path and the same be/conf/odbcinst.ini config.**
@@ -158,7 +158,7 @@ Transactions ensure the atomicity of ODBC external table
writing, but it will re
## Data type mapping
-There are different data types among different database. Here, the types in
each database and the data type matching in Doris are listed.
+There are different data types among different databases. Here, the types in
each database and the data type matching in Doris are listed.
### MySQL
@@ -214,23 +214,23 @@ There are different data types among different database.
Here, the types in each
## Q&A
-1. Relationship with the original external table of MySQL
+1. Relationship with the original external table of MySQL?
After accessing the ODBC external table, the original way to access the MySQL
external table will be gradually abandoned. If you have not used the MySQL
external table before, it is recommended that the newly accessed MySQL tables
use ODBC external table directly.
-2. Besides MySQL and Oracle, can doris support more databases
+2. Besides MySQL and Oracle, can doris support more databases?
-Currently, Doris only adapts to MySQL and Oracle. The adaptation of other
databases is under planning. In principle, any database that supports ODBC
access can be accessed through the ODBC external table. If you need to access
other database, you are welcome to modify the code and contribute to Doris.
+Currently, Doris only adapts to MySQL and Oracle. The adaptation of other
databases is under planning. In principle, any database that supports ODBC
access can be accessed through the ODBC external table. If you need to access
other databases, you are welcome to modify the code and contribute to Doris.
-3. When is it appropriate to use ODBC external tables.
+3. When is it appropriate to use ODBC external tables?
- Generally, when the amount of external data is small and less than 100W.
It can be accessed through ODBC external table. Since external table the can
not play the role of Doris in the storage engine and will bring additional
network overhead. it is recommended to determine whether to access through
external tables or import data into Doris according to the actual access delay
requirements for queries.
+ Generally, ODBC external tables are appropriate when the amount of external data is small, below about one million rows. Since an external table cannot use Doris's storage engine and brings additional network overhead, it is recommended to decide between external-table access and importing the data into Doris based on the actual query latency requirements.
-4. Garbled code in Oracle access
+4. Garbled code in Oracle access?
- Add the following parameters to the BE start up script:`export
NLS_LANG=AMERICAN_AMERICA.AL32UTF8`, Restart all be
+ Add the following parameter to the BE startup script: `export NLS_LANG=AMERICAN_AMERICA.AL32UTF8`, then restart all BEs
-5. ANSI Driver or Unicode Driver ?
+5. ANSI Driver or Unicode Driver?
Currently, ODBC supports both ANSI and Unicode driver forms, while Doris
only supports Unicode driver. If you force the use of ANSI driver, the query
results may be wrong.
diff --git
a/docs/en/extending-doris/udf/contrib/udaf-orthogonal-bitmap-manual.md
b/docs/en/extending-doris/udf/contrib/udaf-orthogonal-bitmap-manual.md
index 1f6cdca..05fcac2 100644
--- a/docs/en/extending-doris/udf/contrib/udaf-orthogonal-bitmap-manual.md
+++ b/docs/en/extending-doris/udf/contrib/udaf-orthogonal-bitmap-manual.md
@@ -134,7 +134,7 @@ The new UDAF aggregate function is created in mysql client
link Session. It is c
The bitmap intersection function
-Syntax:
+Syntax:
orthogonal_bitmap_intersect(bitmap_column, column_to_filter, filter_values)
@@ -178,7 +178,7 @@ select BITMAP_COUNT(orthogonal_bitmap_intersect(user_id,
tag, 13080800, 11110200
To calculate the bitmap intersection count function, the syntax is the same as
the original Intersect_Count, but the implementation is different
-Syntax:
+Syntax:
orthogonal_bitmap_intersect_count(bitmap_column, column_to_filter,
filter_values)
@@ -208,7 +208,7 @@ PROPERTIES (
Figure out the bitmap union count function, syntax with the original
bitmap_union_count, but the implementation is different.
-Syntax:
+Syntax:
orthogonal_bitmap_union_count(bitmap_column)
diff --git a/docs/en/getting-started/advance-usage.md
b/docs/en/getting-started/advance-usage.md
index 8162489..4fdba5d 100644
--- a/docs/en/getting-started/advance-usage.md
+++ b/docs/en/getting-started/advance-usage.md
@@ -207,7 +207,7 @@ Modify the timeout to 1 minute:
### 2.3 Broadcast/Shuffle Join
-By default, the system implements Join by conditionally filtering small
tables, broadcasting them to the nodes where the large tables are located,
forming a memory Hash table, and then streaming out the data of the large
tables Hash Join. However, if the amount of data filtered by small tables can
not be put into memory, Join will not be able to complete at this time. The
usual error should be caused by memory overrun first.
+By default, the system implements Join by conditionally filtering the small table, broadcasting it to the nodes where the large table is located to build an in-memory hash table, and then streaming the large table's data through the hash join. However, if the filtered small table cannot fit in memory, the Join cannot complete, and the usual error is a memory overrun.
If you encounter the above situation, it is recommended to use Shuffle Join
explicitly, also known as Partitioned Join. That is, small and large tables are
Hash according to Join's key, and then distributed Join. This memory
consumption is allocated to all computing nodes in the cluster.
diff --git a/docs/en/getting-started/basic-usage.md
b/docs/en/getting-started/basic-usage.md
index 41cffa1..c4b08ae 100644
--- a/docs/en/getting-started/basic-usage.md
+++ b/docs/en/getting-started/basic-usage.md
@@ -27,7 +27,7 @@ under the License.
# Guidelines for Basic Use
-Doris uses MySQL protocol to communicate. Users can connect to Doris cluster
through MySQL client or MySQL JDBC. When selecting the MySQL client version, it
is recommended to use the version after 5.1, because user names of more than 16
characters can not be supported before 5.1. This paper takes MySQL client as an
example to show users the basic usage of Doris through a complete process.
+Doris uses the MySQL protocol to communicate. Users can connect to a Doris cluster through the MySQL client or MySQL JDBC. When selecting the MySQL client version, it is recommended to use a version later than 5.1, because user names longer than 16 characters are not supported before 5.1. This document takes the MySQL client as an example to show the basic usage of Doris through a complete process.
## 1 Create Users
diff --git a/docs/en/getting-started/best-practice.md
b/docs/en/getting-started/best-practice.md
index b29ca83..280cba7 100644
--- a/docs/en/getting-started/best-practice.md
+++ b/docs/en/getting-started/best-practice.md
@@ -165,7 +165,7 @@ ALTER TABLE session_data ADD ROLLUP
rollup_brower(brower,province,ip,url) DUPLIC
## 2 Schema Change
-There are three Schema Change in doris:Sorted Schema Change,Direct Schema
Change, Linked Schema Change。
+There are three types of Schema Change in Doris: Sorted Schema Change, Direct Schema Change, and Linked Schema Change.
2.1. Sorted Schema Change
diff --git a/docs/en/getting-started/data-model-rollup.md
b/docs/en/getting-started/data-model-rollup.md
index 0fbe689..f348d91 100644
--- a/docs/en/getting-started/data-model-rollup.md
+++ b/docs/en/getting-started/data-model-rollup.md
@@ -473,11 +473,11 @@ We use the prefix index of ** 36 bytes ** of a row of
data as the prefix index o
When our query condition is the prefix of ** prefix index **, it can greatly
speed up the query speed. For example, in the first example, we execute the
following queries:
-`SELECT * FROM table WHERE user_id=1829239 and age=20;`
+`SELECT * FROM table WHERE user_id=1829239 and age=20;`
The efficiency of this query is much higher than that of ** the following
queries:
-`SELECT * FROM table WHERE age=20;`
+`SELECT * FROM table WHERE age=20;`
Therefore, when constructing tables, ** correctly choosing column order can
greatly improve query efficiency **.
@@ -633,5 +633,5 @@ Duplicate model has no limitation of aggregation model.
Because the model does n
Because the data model was established when the table was built, and **could
not be modified **. Therefore, it is very important to select an appropriate
data model**.
1. Aggregate model can greatly reduce the amount of data scanned and the
amount of query computation by pre-aggregation. It is very suitable for report
query scenarios with fixed patterns. But this model is not very friendly for
count (*) queries. At the same time, because the aggregation method on the
Value column is fixed, semantic correctness should be considered in other types
of aggregation queries.
-2. Uniq model guarantees the uniqueness of primary key for scenarios requiring
unique primary key constraints. However, the query advantage brought by
pre-aggregation such as ROLLUP can not be exploited (because the essence is
REPLACE, there is no such aggregation as SUM).
+2. Uniq model guarantees the uniqueness of primary key for scenarios requiring
unique primary key constraints. However, the query advantage brought by
pre-aggregation such as ROLLUP cannot be exploited (because the essence is
REPLACE, there is no such aggregation as SUM).
3. Duplicate is suitable for ad-hoc queries of any dimension. Although it is
also impossible to take advantage of the pre-aggregation feature, it is not
constrained by the aggregation model and can take advantage of the queue-store
model (only reading related columns, but not all Key columns).
diff --git a/docs/en/getting-started/hit-the-rollup.md
b/docs/en/getting-started/hit-the-rollup.md
index 7a1e224..1a9b848 100644
--- a/docs/en/getting-started/hit-the-rollup.md
+++ b/docs/en/getting-started/hit-the-rollup.md
@@ -122,7 +122,7 @@ The prefix indexes of the three tables are
```
Base(k1 ,k2, k3, k4, k5, k6, k7)
-rollup_index1(k9),rollup_index2(k9)
+rollup_index1(k9), rollup_index2(k9)
rollup_index3(k4, k5, k6, k1, k2, k3, k7)
diff --git a/docs/en/installing/install-deploy.md
b/docs/en/installing/install-deploy.md
index 964c847..b130f07 100644
--- a/docs/en/installing/install-deploy.md
+++ b/docs/en/installing/install-deploy.md
@@ -122,7 +122,7 @@ This is a representation of [CIDR]
(https://en.wikipedia.org/wiki/Classless_Inte
BE is configured as `priority_networks = 10.1.3.0/24'.`.
-When you want to ADD BACKEND use :`ALTER SYSTEM ADD BACKEND
"192.168.0.1:9050";`
+When you want to ADD BACKEND use: `ALTER SYSTEM ADD BACKEND
"192.168.0.1:9050";`
Then FE and BE will not be able to communicate properly.
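The mismatch above can be verified with a short check using Python's stdlib `ipaddress` module: the registered address 192.168.0.1 is not inside the `priority_networks` block 10.1.3.0/24.

```python
# Check whether a BE address falls inside the configured priority_networks
# CIDR block, mirroring the mismatch described in the text.
import ipaddress

priority_network = ipaddress.ip_network("10.1.3.0/24")
registered_be = ipaddress.ip_address("192.168.0.1")

print(registered_be in priority_network)                      # False: mismatch
print(ipaddress.ip_address("10.1.3.7") in priority_network)   # True
```

Any address the FE registers for a BE should pass this membership test against the BE's `priority_networks`, otherwise heartbeats fail.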
@@ -312,7 +312,7 @@ The DROP statement is as follows:
**Note: DROP BACKEND will delete the BE directly and the data on it will not
be recovered!!! So we strongly do not recommend DROP BACKEND to delete BE
nodes. When you use this statement, there will be corresponding error-proof
operation hints.**
-DECOMMISSION clause:
+DECOMMISSION clause:
```ALTER SYSTEM DECOMMISSION BACKEND "be_host:be_heartbeat_service_port";```
@@ -424,6 +424,6 @@ Broker is a stateless process that can be started or
stopped at will. Of course,
The default value of max_file_descriptor_number is 131072.
- For Example : ulimit -n 65536; this command set file descriptor to 65536.
+ For Example: ulimit -n 65536; this command sets the file descriptor limit to 65536.
After starting BE process, you can use **cat /proc/$pid/limits** to see the
actual limit of process.
diff --git a/docs/en/installing/upgrade.md b/docs/en/installing/upgrade.md
index 9ff4572..260be07 100644
--- a/docs/en/installing/upgrade.md
+++ b/docs/en/installing/upgrade.md
@@ -40,7 +40,7 @@ Doris can upgrade smoothly by rolling upgrades. The following
steps are recommen
## Testing FE Metadata Compatibility
-0. **Important! Exceptional metadata compatibility is likely to cause data can
not be restored!!**
+0. **Important! Metadata incompatibility is likely to make data unrecoverable!!**
1. Deploy a test FE process (such as your own local developer) using the new
version alone.
2. Modify the FE configuration file fe.conf for testing and set all ports to
**different from online**.
3. Add configuration in fe.conf: cluster_id=123456
diff --git a/docs/en/internal/grouping_sets_design.md
b/docs/en/internal/grouping_sets_design.md
index f1a3b8b..16acc33 100644
--- a/docs/en/internal/grouping_sets_design.md
+++ b/docs/en/internal/grouping_sets_design.md
@@ -53,7 +53,7 @@ UNION
SELECT null, null, SUM( k3 ) FROM t
```
-This is an example of real query:
+This is an example of real query:
```
mysql> SELECT * FROM t;
@@ -98,7 +98,7 @@ mysql> SELECT k1, k2, SUM(k3) FROM t GROUP BY GROUPING SETS (
(k1, k2), (k2), (k
SELECT a, b,c, SUM( d ) FROM tab1 GROUP BY ROLLUP(a,b,c)
```
-This statement is equivalent to GROUPING SETS as followed:
+This statement is equivalent to the following GROUPING SETS:
```
GROUPING SETS (
@@ -140,7 +140,7 @@ Indicates whether a specified column expression in a `GROUP
BY` list is aggregat
Each `GROUPING_ID` argument must be an element of the `GROUP BY` list.
`GROUPING_ID ()` returns an **integer** bitmap whose lowest N bits may be lit.
A lit **bit** indicates the corresponding argument is not a grouping column for
the given output row. The lowest-order **bit** corresponds to argument N, and
the N-1th lowest-order **bit** corresponds to argument 1. If the column is a
grouping column the bit is 0 else is 1.
-For example:
+For example:
```
mysql> select * from t;
@@ -158,7 +158,7 @@ mysql> select * from t;
+------+------+------+
```
-grouping sets result:
+grouping sets result:
```
mysql> SELECT k1, k2, GROUPING(k1), GROUPING(k2), SUM(k3) FROM t GROUP BY
GROUPING SETS ( (k1, k2), (k2), (k1), ( ) );
@@ -218,7 +218,7 @@ First of all, a GROUP BY clause is essentially a special
case of GROUPING SETS,
GROUP BY a
is equivalent to:
GROUP BY GROUPING SETS((a))
-also,
+also,
GROUP BY a,b,c
is equivalent to:
GROUP BY GROUPING SETS((a,b,c))
@@ -260,7 +260,7 @@ Presto supports composition, but not nesting.
## 2. Object
-Support `GROUPING SETS`, `ROLLUP` and `CUBE ` syntax,implements 1.1, 1.2, 1.3
1.4, 1.5, not support the combination
+Support `GROUPING SETS`, `ROLLUP` and `CUBE` syntax, implementing 1.1, 1.2, 1.3, 1.4, 1.5; does not support the combination
and nesting of GROUPING SETS in current version.
### 2.1 GROUPING SETS Syntax
@@ -275,7 +275,7 @@ GROUP BY GROUPING SETS ( groupSet [ , groupSet [ , ... ] ] )
groupSet ::= { ( expr [ , expr [ , ... ] ] )}
<expr>
-Expression,column name.
+Expression, column name.
```
### 2.2 ROLLUP Syntax
@@ -288,7 +288,7 @@ GROUP BY ROLLUP ( expr [ , expr [ , ... ] ] )
[ ... ]
<expr>
-Expression,column name.
+Expression, column name.
```
### 2.3 CUBE Syntax
@@ -301,7 +301,7 @@ GROUP BY CUBE ( expr [ , expr [ , ... ] ] )
[ ... ]
<expr>
-Expression,column name.
+Expression, column name.
```
## 3. Implementation
@@ -310,20 +310,20 @@ Expression,column name.
For `GROUPING SET` is equivalent to the `UNION` of `GROUP BY` . So we can
expand input rows, and run an GROUP BY on these rows.
-For example:
+For example:
```
SELECT a, b FROM src GROUP BY a, b GROUPING SETS ((a, b), (a), (b), ());
```
-Data in table src :
+Data in table src:
```
1, 2
3, 4
```
-Base on GROUPING SETS , we can expend the input to:
+Based on the GROUPING SETS, we can expand the input to:
```
1, 2 (GROUPING_ID: a, b -> 00 -> 0)
@@ -341,7 +341,7 @@ And then use those row as input, then GROUP BY a, b,
GROUPING_ID
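The expansion in 3.1 can be sketched as follows. The GROUPING_ID bit layout follows the convention stated earlier (lowest-order bit for the last column); the helper names are illustrative, not Doris internals.

```python
def expand(rows, columns, grouping_sets):
    """Expand each input row once per grouping set, appending a GROUPING_ID."""
    expanded = []
    for row in rows:
        for gset in grouping_sets:
            # columns outside the grouping set become NULL (None)
            new_row = tuple(v if c in gset else None
                            for c, v in zip(columns, row))
            # a bit is set when the column is NOT part of this grouping set;
            # the lowest-order bit corresponds to the last column
            gid = sum(1 << (len(columns) - 1 - i)
                      for i, c in enumerate(columns) if c not in gset)
            expanded.append(new_row + (gid,))
    return expanded

out = expand([(1, 2), (3, 4)], ["a", "b"],
             [("a", "b"), ("a",), ("b",), ()])
print(out[:4])
# [(1, 2, 0), (1, None, 1), (None, 2, 2), (None, None, 3)]
```

A regular GROUP BY on the expanded rows plus the GROUPING_ID column then yields the GROUPING SETS result.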
### 3.2 Example
-Table t:
+Table t:
```
mysql> select * from t;
@@ -360,15 +360,15 @@ mysql> select * from t;
8 rows in set (0.01 sec)
```
-for the query:
+for the query:
```
SELECT k1, k2, GROUPING_ID(k1,k2), SUM(k3) FROM t GROUP BY GROUPING SETS ((k1,
k2), (k1), (k2), ());
```
-First,expand the input,every row expand into 4 rows ( the size of GROUPING
SETS), and insert GROUPING_ID column
+First, expand the input: each row expands into 4 rows (the size of the GROUPING SETS), and a GROUPING_ID column is inserted
-e.g. a, A, 1 expanded to:
+e.g. a, A, 1 expands to:
```
+------+------+------+-------------------------+
diff --git
a/docs/en/sql-reference/sql-functions/date-time-functions/from_unixtime.md
b/docs/en/sql-reference/sql-functions/date-time-functions/from_unixtime.md
index 83f2a95..8f444ed 100644
--- a/docs/en/sql-reference/sql-functions/date-time-functions/from_unixtime.md
+++ b/docs/en/sql-reference/sql-functions/date-time-functions/from_unixtime.md
@@ -34,14 +34,14 @@ Convert the UNIX timestamp to the corresponding time format
of bits, and the for
Input is an integer and return is a string type
-Currently, `string_format` supports following formats:
-
- %Y: Year. eg. 2014,1900
- %m: Month. eg. 12,09
- %d: Day. eg. 11,01
- %H: Hour. eg. 23,01,12
- %i: Minute. eg. 05,11
- %s: Second. eg. 59,01
+Currently, `string_format` supports the following formats:
+
+ %Y: Year. eg. 2014, 1900
+ %m: Month. eg. 12, 09
+ %d: Day. eg. 11, 01
+ %H: Hour. eg. 23, 01, 12
+ %i: Minute. eg. 05, 11
+ %s: Second. eg. 59, 01
Default is `%Y-%m-%d %H:%i:%s`
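The format letters above follow MySQL's DATE_FORMAT convention, where `%i` is minutes and `%s` seconds (unlike C strftime's `%M`/`%S`). A hedged sketch of the conversion, assuming UTC and translating only the letters that differ:

```python
# Illustrative mapping of Doris/MySQL format letters to Python strftime.
from datetime import datetime, timezone

DORIS_TO_STRFTIME = {"%i": "%M", "%s": "%S"}  # %Y, %m, %d, %H align already

def from_unixtime(ts, fmt="%Y-%m-%d %H:%i:%s"):
    for doris, std in DORIS_TO_STRFTIME.items():
        fmt = fmt.replace(doris, std)
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime(fmt)

print(from_unixtime(0))        # 1970-01-01 00:00:00
print(from_unixtime(0, "%Y"))  # 1970
```

The default format string matches the one documented above.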
diff --git
a/docs/en/sql-reference/sql-functions/date-time-functions/time_round.md
b/docs/en/sql-reference/sql-functions/date-time-functions/time_round.md
index 9fb70f8..f336bf2 100644
--- a/docs/en/sql-reference/sql-functions/date-time-functions/time_round.md
+++ b/docs/en/sql-reference/sql-functions/date-time-functions/time_round.md
@@ -36,7 +36,7 @@ under the License.
`DATETIME TIME_ROUND(DATETIME expr, INT period, DATETIME origin)`
-The function name `TIME_ROUND` consists of two parts,Each part consists of the
following optional values.
+The function name `TIME_ROUND` consists of two parts; each part takes one of the following optional values.
- `TIME`: `SECOND`, `MINUTE`, `HOUR`, `DAY`, `WEEK`, `MONTH`, `YEAR`
- `ROUND`: `FLOOR`, `CEIL`
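As a sketch of the FLOOR variant's semantics, under the assumption that TIME_ROUND floors `expr` down to a whole number of `period` units counted from `origin` (the function name and defaults here are illustrative, not Doris's implementation):

```python
# MINUTE_FLOOR-style rounding: snap expr down to the period grid from origin.
from datetime import datetime, timedelta

def minute_floor(expr, period=1, origin=datetime(1970, 1, 1)):
    elapsed = expr - origin
    periods = elapsed // timedelta(minutes=period)   # floor division
    return origin + periods * timedelta(minutes=period)

print(minute_floor(datetime(2021, 2, 26, 11, 32, 14), period=5))
# 2021-02-26 11:30:00
```

The CEIL variants would round up to the next grid point instead of down.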
diff --git a/docs/en/sql-reference/sql-statements/Account Management/GRANT.md
b/docs/en/sql-reference/sql-statements/Account Management/GRANT.md
index 397aa30..0d59295 100644
--- a/docs/en/sql-reference/sql-statements/Account Management/GRANT.md
+++ b/docs/en/sql-reference/sql-statements/Account Management/GRANT.md
@@ -36,7 +36,7 @@ GRANT privilege_list ON db_name[.tbl_name] TO user_identity
[ROLE role_name]
Privilege_list is a list of permissions that need to be granted, separated by
commas. Currently Doris supports the following permissions:
-NODE_PRIV: Operational privileges of cluster nodes, including operation of
nodes' up and down lines. Only root users have this privilege and can not be
given to other users.
+NODE_PRIV: Operational privileges of cluster nodes, including bringing nodes online and offline. Only root users have this privilege, and it cannot be granted to other users.
ADMIN_PRIV: All rights except NODE_PRIV.
GRANT_PRIV: Permission to operate permissions. Including the creation and
deletion of users, roles, authorization and revocation, password settings and
so on.
SELECT_PRIV: Read permissions for specified libraries or tables
@@ -45,7 +45,7 @@ ALTER_PRIV: schema change permissions for specified libraries
or tables
CREATE_PRIV: Creation permissions for specified libraries or tables
DROP_PRIV: Delete permissions for specified libraries or tables
-旧版权限中的 ALL 和 READ_WRITE
会被转换成:SELECT_PRIV,LOAD_PRIV,ALTER_PRIV,CREATE_PRIV,DROP_PRIV;
+ALL and READ_WRITE in the legacy privileges will be converted to: SELECT_PRIV, LOAD_PRIV, ALTER_PRIV, CREATE_PRIV, DROP_PRIV;
READ_ONLY is converted to SELECT_PRIV.
Db_name [.tbl_name] supports the following three forms:
@@ -56,7 +56,7 @@ Db_name [.tbl_name] supports the following three forms:
The libraries or tables specified here can be non-existent libraries and
tables.
-user_identity:
+user_identity:
The user_identity syntax here is the same as CREATE USER. And you must create
user_identity for the user using CREATE USER. The host in user_identity can be
a domain name. If it is a domain name, the validity time of permissions may be
delayed by about one minute.
diff --git a/docs/en/sql-reference/sql-statements/Account Management/REVOKE.md
b/docs/en/sql-reference/sql-statements/Account Management/REVOKE.md
index 8c0734f..d619f20 100644
--- a/docs/en/sql-reference/sql-statements/Account Management/REVOKE.md
+++ b/docs/en/sql-reference/sql-statements/Account Management/REVOKE.md
@@ -31,7 +31,7 @@ The REVOKE command is used to revoke the rights specified by
the specified user
Syntax
REVOKE privilege_list ON db_name[.tbl_name] FROM user_identity [ROLE role_name]
-user_identity:
+user_identity:
The user_identity syntax here is the same as CREATE USER. And you must create
user_identity for the user using CREATE USER. The host in user_identity can be
a domain name. If it is a domain name, the revocation time of permission may be
delayed by about one minute.
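A minimal sketch of the statement this hunk documents (identifiers are
illustrative):

```sql
-- Revoke a previously granted privilege; the same ~1 minute delay applies
-- when the host in user_identity is a domain name.
REVOKE LOAD_PRIV ON example_db.my_table FROM 'jack'@'%.example.com';
```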
diff --git a/docs/en/sql-reference/sql-statements/Administration/ADMIN CHECK
TABLET.md b/docs/en/sql-reference/sql-statements/Administration/ADMIN CHECK
TABLET.md
index 8301791..101d506 100644
--- a/docs/en/sql-reference/sql-statements/Administration/ADMIN CHECK TABLET.md
+++ b/docs/en/sql-reference/sql-statements/Administration/ADMIN CHECK TABLET.md
@@ -29,14 +29,14 @@ under the License.
This statement is used to perform a specified check operation on a list of
tablets.
-Syntax:
+Syntax:
```
ADMIN CHECK TABLE (tablet_id1, tablet_id2, ...)
PROPERTIES("type" = "...");
```
-说明:
+Note:
1. You must specify the list of tablet ids and the "type" property in
PROPERTIES.
2. Currently "type" only supports:
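A hedged example following the syntax block above; the tablet ids are made up,
and `"consistency"` is assumed to be one of the supported `type` values (the
full list is truncated in this hunk):

```sql
ADMIN CHECK TABLE (10000, 10001)
PROPERTIES("type" = "consistency");
```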
diff --git a/docs/en/sql-reference/sql-statements/Administration/ALTER
SYSTEM.md b/docs/en/sql-reference/sql-statements/Administration/ALTER SYSTEM.md
index 8f41015..92059a7 100644
--- a/docs/en/sql-reference/sql-statements/Administration/ALTER SYSTEM.md
+++ b/docs/en/sql-reference/sql-statements/Administration/ALTER SYSTEM.md
@@ -60,12 +60,12 @@ If you need to delete the current load error hub, you can
set type to null.
1) When using the Mysql type, the error information generated when importing
will be inserted into the specified MySQL library table, and then the error
information can be viewed directly through the show load warnings statement.
Hub of Mysql type needs to specify the following parameters:
-host:mysql host
-port:mysql port
-user:mysql user
-password:mysql password
+host: mysql host
+port: mysql port
+user: mysql user
+password: mysql password
database mysql database
-table:mysql table
+table: mysql table
2) When the Broker type is used, the error information generated during
import is written as a file to the designated remote storage system through the
broker. Make sure that the corresponding broker is deployed.
Hub of Broker type needs to specify the following parameters:
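The Mysql-type hub parameters listed above might be set like this; the
`SET LOAD ERRORS HUB` clause is assumed from the surrounding ALTER SYSTEM doc,
and the connection details are placeholders:

```sql
ALTER SYSTEM SET LOAD ERRORS HUB PROPERTIES
(
    "type" = "mysql",
    "host" = "192.168.1.17",
    "port" = "3306",
    "user" = "my_name",
    "password" = "my_passwd",
    "database" = "doris_load",
    "table" = "load_errors"
);
```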
diff --git a/docs/en/sql-reference/sql-statements/Data Definition/ALTER
TABLE.md b/docs/en/sql-reference/sql-statements/Data Definition/ALTER TABLE.md
index f469bb0..07a5a29 100644
--- a/docs/en/sql-reference/sql-statements/Data Definition/ALTER TABLE.md
+++ b/docs/en/sql-reference/sql-statements/Data Definition/ALTER TABLE.md
@@ -89,7 +89,7 @@ under the License.
ADD ROLLUP [rollup_name (column_name1, column_name2, ...)
[FROM from_index_name]
[PROPERTIES ("key"="value", ...)],...]
- example:
+ example:
ADD ROLLUP r1(col1,col2) from r0, r2(col3,col4) from r0
1.3 note:
1) If from_index_name is not specified, it is created by default
from base index
@@ -103,8 +103,8 @@ under the License.
example:
DROP ROLLUP r1
2.1 Batch Delete rollup index
- grammar:DROP ROLLUP [rollup_name [PROPERTIES ("key"="value", ...)],...]
- example:DROP ROLLUP r1,r2
+ grammar: DROP ROLLUP [rollup_name [PROPERTIES ("key"="value",
...)],...]
+ example: DROP ROLLUP r1,r2
2.2 note:
1) Cannot delete base index
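The ADD ROLLUP and batch DROP ROLLUP grammar above, combined into one sketch
(table, rollup, and column names are illustrative):

```sql
-- Create two rollups from rollup r0 in a single statement.
ALTER TABLE example_db.my_table
ADD ROLLUP r1(col1, col2) FROM r0,
           r2(col3, col4) FROM r0;

-- Batch-delete them later (base index cannot be deleted this way).
ALTER TABLE example_db.my_table DROP ROLLUP r1, r2;
```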
@@ -173,7 +173,7 @@ under the License.
1) All columns in index must be written
2) value is listed after the key column
- 6. Modify the properties of the table, currently supports modifying the
bloom filter column, the colocate_with attribute and the dynamic_partition
attribute, the replication_num and default.replication_num.
+ 6. Modify the properties of the table, currently supports modifying the
bloom filter column, the colocate_with attribute and the dynamic_partition
attribute, the replication_num and default.replication_num.
grammar:
PROPERTIES ("key"="value")
note:
@@ -217,7 +217,7 @@ under the License.
2. BITMAP index only supports apply on single column
2. drop index
grammar:
- DROP INDEX index_name;
+ DROP INDEX index_name;
## example
diff --git a/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE
LIKE.md b/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE
LIKE.md
index e602eb1..9507af5 100644
--- a/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE LIKE.md
+++ b/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE LIKE.md
@@ -29,7 +29,7 @@ under the License.
## description
Use CREATE TABLE ... LIKE to create an empty table based on the definition of
another table, including any column attributes, table partitions and table
properties defined in the original table:
-Syntax:
+Syntax:
```
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] [database.]table_name LIKE
[database.]table_name
diff --git a/docs/en/sql-reference/sql-statements/Data Definition/CREATE
TABLE.md b/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE.md
index 08096a1..c03a142 100644
--- a/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE.md
+++ b/docs/en/sql-reference/sql-statements/Data Definition/CREATE TABLE.md
@@ -93,15 +93,13 @@ Syntax:
* REPLACE_IF_NOT_NULL: The meaning of this aggregation type is that
substitution will occur if and only if the newly imported data is a non-null
value. If the newly imported data is null, Doris will still retain the
original value. Note: if NOT NULL is specified in the REPLACE_IF_NOT_NULL
column when the user creates the table, Doris will convert it to NULL and
will not report an error to the user. Users can leverage this aggregate type
to import only a subset of the columns.
* BITMAP_UNION: Only for BITMAP type
Allow NULL: Default is NOT NULL. NULL value should be represented as `\N`
in load source file.
- Notice:
-
- The origin value of BITMAP_UNION column should be TINYINT, SMALLINT,
INT, BIGINT.
+ Notice: The origin value of BITMAP_UNION column should be TINYINT,
SMALLINT, INT, BIGINT.
2. index_definition
Syntax:
`INDEX index_name (col_name[, col_name, ...]) [USING BITMAP] COMMENT
'xxxxxx'`
Explain:
- index_name:index name
- col_name:column name
+ index_name: index name
+ col_name: column name
Notice:
Only support BITMAP index in current version, BITMAP can only apply to
single column
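A sketch combining the REPLACE_IF_NOT_NULL, BITMAP_UNION, and index_definition
rules above; the schema is invented for illustration:

```sql
CREATE TABLE example_db.page_visit
(
    dt DATE,
    city VARCHAR(64),
    last_price DOUBLE REPLACE_IF_NOT_NULL NULL,
    uv BITMAP BITMAP_UNION,   -- source values must be TINYINT/SMALLINT/INT/BIGINT
    INDEX idx_city (city) USING BITMAP COMMENT 'bitmap index on city'
)
AGGREGATE KEY(dt, city)
DISTRIBUTED BY HASH(dt) BUCKETS 8;
```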
3. ENGINE type
diff --git a/docs/en/sql-reference/sql-statements/Data Definition/Colocate
Join.md b/docs/en/sql-reference/sql-statements/Data Definition/Colocate Join.md
index 11730b5..d54af22 100644
--- a/docs/en/sql-reference/sql-statements/Data Definition/Colocate Join.md
+++ b/docs/en/sql-reference/sql-statements/Data Definition/Colocate Join.md
@@ -91,7 +91,7 @@ A: ALTER TABLE example_db.my_table set
("colocate_with"="target_table");
Q: How to disable colocate join?
-A: set disable_colocate_join = true; this disables Colocate Join,and queries
will then use Shuffle Join and Broadcast Join
+A: set disable_colocate_join = true; this disables Colocate Join, and queries
will then use Shuffle Join and Broadcast Join
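The answer above, as a session setting:

```sql
-- Disable colocate join for the current session; Doris then falls back
-- to shuffle join or broadcast join for the affected queries.
SET disable_colocate_join = true;
```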
## keyword
diff --git a/docs/en/sql-reference/sql-statements/Data Definition/SHOW
RESOURCES.md b/docs/en/sql-reference/sql-statements/Data Definition/SHOW
RESOURCES.md
index bbac051..b916385 100644
--- a/docs/en/sql-reference/sql-statements/Data Definition/SHOW RESOURCES.md
+++ b/docs/en/sql-reference/sql-statements/Data Definition/SHOW RESOURCES.md
@@ -40,7 +40,7 @@ under the License.
[ORDER BY ...]
[LIMIT limit][OFFSET offset];
- Explain:
+ Explain:
1) If NAME LIKE is used, resources whose name matches the pattern are shown.
2) If NAME = is used, the specified name is matched exactly.
3) If RESOURCETYPE is specified, the corresponding resource type is
matched.
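A small sketch of the grammar above; the resource-name pattern is illustrative:

```sql
-- List up to 10 resources whose name starts with "spark".
SHOW RESOURCES WHERE NAME LIKE "spark%" LIMIT 10;
```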
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/BROKER
LOAD.md b/docs/en/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md
index ba98120..b77b5b4 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/BROKER LOAD.md
@@ -37,7 +37,7 @@ under the License.
3. Baidu Object Storage(BOS): BOS on Baidu Cloud.
4. Apache HDFS.
-### Syntax:
+### Syntax:
LOAD LABEL load_label
(
@@ -50,13 +50,13 @@ under the License.
1. load_label
Unique load label within a database.
- syntax:
+ syntax:
[database_name.]your_label
2. data_desc
To describe the data source.
- syntax:
+ syntax:
[MERGE|APPEND|DELETE]
DATA INFILE
(
@@ -73,7 +73,7 @@ under the License.
[WHERE predicate]
[DELETE ON label=true]
- Explain:
+ Explain:
file_path:
File path. Support wildcard. Must match to file, not directory.
@@ -82,7 +82,7 @@ under the License.
Data will only be loaded to specified partitions. Data out of
partition's range will be filtered. If not specified, all partitions will be
loaded.
- NEGATIVE:
+ NEGATIVE:
If this parameter is specified, it is equivalent to importing a
batch of "negative" data to offset the same batch of data loaded before.
@@ -99,13 +99,13 @@ under the License.
Used to specify the type of imported file, such as parquet, orc,
csv. Default values are determined by the file suffix name.
- column_list:
+ column_list:
Used to specify the correspondence between columns in the import
file and columns in the table.
When you need to skip a column in the import file, specify it as a
column name that does not exist in the table.
- syntax:
+ syntax:
(col_name1, col_name2, ...)
PRECEDING FILTER predicate:
@@ -164,13 +164,13 @@ under the License.
kerberos authentication:
hadoop.security.authentication = kerberos
- kerberos_principal: kerberos's principal
- kerberos_keytab: path of kerberos's keytab file. This file must
be accessible to the Broker
+ kerberos_principal: kerberos's principal
+ kerberos_keytab: path of kerberos's keytab file. This file must
be accessible to the Broker
kerberos_keytab_content: Specify the contents of the KeyTab file
in Kerberos after base64 encoding. This option is optional from the
kerberos_keytab configuration.
namenode HA:
By configuring namenode HA, new namenode can be automatically
identified when the namenode is switched
- dfs.nameservices: hdfs service name,customize,eg:
"dfs.nameservices" = "my_ha"
+ dfs.nameservices: hdfs service name, customize, eg:
"dfs.nameservices" = "my_ha"
dfs.ha.namenodes.xxx: Customize the name of a namenode, separated
by commas. XXX is a custom name in dfs.nameservices, such as
"dfs.ha.namenodes.my_ha" = "my_nn"
dfs.namenode.rpc-address.xxx.nn: Specify RPC address information
for namenode, where NN denotes the name of the namenode configured in
dfs.ha.namenodes.xxxx, such as: "dfs.namenode.rpc-address.my_ha.my_nn"=
"host:port"
dfs.client.failover.proxy.provider: Specify the provider that the
client uses to connect to the namenode, default:
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.
@@ -178,10 +178,10 @@ under the License.
4. opt_properties
Used to specify some special parameters.
- Syntax:
+ Syntax:
[PROPERTIES ("key"="value", ...)]
- You can specify the following parameters:
+ You can specify the following parameters:
timeout: Specifies the timeout for the import operation, in seconds.
The default timeout is 4 hours.
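Putting the pieces above together, a minimal broker load might look like this;
paths, broker name, and credentials are placeholders:

```sql
LOAD LABEL example_db.label_20210226
(
    DATA INFILE("hdfs://host:port/user/doris/input/file.csv")
    INTO TABLE my_table
    COLUMNS TERMINATED BY ","
    (k1, k2, v1)
)
WITH BROKER my_broker
(
    "username" = "hdfs_user",
    "password" = "hdfs_passwd"
)
PROPERTIES
(
    "timeout" = "3600"
);
```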
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/LOAD.md
b/docs/en/sql-reference/sql-statements/Data Manipulation/LOAD.md
index e329ac8..358df15 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/LOAD.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/LOAD.md
@@ -85,7 +85,7 @@ PARTICIPATION:
If this parameter is specified, only the specified partition will be imported,
and data outside the imported partition will be filtered out.
If not specified, all partitions of the table are imported by default.
-NEGATIVE:
+NEGATIVE:
If this parameter is specified, it is equivalent to importing a batch of
"negative" data. Used to offset the same batch of data imported before.
This parameter applies only to the case where there are value columns and the
aggregation type of value columns is SUM only.
@@ -99,7 +99,7 @@ File type:
Used to specify the type of imported file, such as parquet, orc, csv. The
default value is determined by the file suffix name.
-column_list:
+column_list:
Used to specify the correspondence between columns in the import file and
columns in the table.
When you need to skip a column in the import file, specify it as a column name
that does not exist in the table.
@@ -157,7 +157,7 @@ Integer classes (TINYINT/SMALLINT/INT/BIGINT/LARGEINT):
1,1000,1234
Floating Point Class (FLOAT/DOUBLE/DECIMAL): 1.1, 0.23, 356
Date class (DATE/DATETIME): 2017-10-03, 2017-06-13 12:34:03.
(Note: If it's in other date formats, you can use strftime or time_format
functions to convert in the import command)
-String class (CHAR/VARCHAR):"I am a student", "a"
+String class (CHAR/VARCHAR): "I am a student", "a"
NULL value: \N
## example
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/MINI
LOAD.md b/docs/en/sql-reference/sql-statements/Data Manipulation/MINI LOAD.md
index 87699b4..0547e95 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/MINI LOAD.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/MINI LOAD.md
@@ -92,10 +92,10 @@ NOTE:
It is recommended that the amount of data imported should not exceed 1 GB.
2. Currently, it is not possible to submit multiple files in the form
`curl -T "{file1, file2}"`, because curl splits them into multiple requests
-to send; multiple requests can not share a label number, so it can not be
used
+to send; multiple requests cannot share a label number, so it cannot be
used
3. Miniload is imported in exactly the same way as streaming. It returns the
results synchronously to users after the import of streaming is completed.
-Although the information of mini load can be found in subsequent queries, it
can not be operated on. The queries are only compatible with the old ways of
use.
+Although the information of mini load can be found in subsequent queries, it
cannot be operated on. The queries are only compatible with the old ways of use.
4. When importing from the curl command line, you need to add escape before &
or the parameter information will be lost.
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/MULTI
LOAD.md b/docs/en/sql-reference/sql-statements/Data Manipulation/MULTI LOAD.md
index 62031ff..78ff3ed 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/MULTI LOAD.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/MULTI LOAD.md
@@ -79,10 +79,10 @@ NOTE:
It is recommended that the amount of data imported should not exceed 1GB
2. Currently, it is not possible to submit multiple files in the form
`curl -T "{file1, file2}"`, because curl splits them into multiple requests
-to send; multiple requests can not share a label number, so it can not be
used
+to send; multiple requests cannot share a label number, so it cannot be
used
3. Supports streaming-like ways to use curl to import data into Doris, but
Doris will have to wait until the streaming is over
-Real import behavior will occur, and the amount of data in this way can not be
too large.
+Real import behavior will occur, and the amount of data in this way cannot be
too large.
## example
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/ROUTINE
LOAD.md b/docs/en/sql-reference/sql-statements/Data Manipulation/ROUTINE LOAD.md
index 1d8bb4d..615c911 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/ROUTINE LOAD.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/ROUTINE LOAD.md
@@ -417,7 +417,7 @@ FROM data_source
"kafka_offsets" = "0,0,0"
);
```
- It supports two kinds of data styles:
+ It supports two kinds of data styles:
1){"category":"a9jadhx","author":"test","price":895}
2)[
{"category":"a9jadhx","author":"test","price":895},
@@ -475,7 +475,7 @@ FROM data_source
{"category":"33","author":"3avc","title":"SayingsoftheCentury","timestamp":1589191387}
]
- Tips:
+ Tips:
1)If the json data starts as an array and each object in the array is a
record, you need to set the strip_outer_array to true to represent the flat
array.
2)If the json data starts with an array, and each object in the array is
a record, our ROOT node is actually an object in the array when we set jsonpath.
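A hedged sketch of a routine load job consuming the JSON styles above; the
broker list, topic, and job name are placeholders:

```sql
CREATE ROUTINE LOAD example_db.job1 ON my_table
PROPERTIES
(
    "format" = "json",
    "strip_outer_array" = "true"
)
FROM KAFKA
(
    "kafka_broker_list" = "broker1:9092",
    "kafka_topic" = "my_topic",
    "kafka_partitions" = "0,1,2",
    "kafka_offsets" = "0,0,0"
);
```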
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW
ALTER.md b/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md
index 8ae554e..37fb8f3 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW ALTER.md
@@ -31,7 +31,7 @@ Grammar:
SHOW ALTER [CLUSTER | TABLE [COLUMN | ROLLUP] [FROM db_name]];
Explain:
-TABLE COLUMN:Shows the task of alter table column.
+TABLE COLUMN: Shows the task of alter table column.
Support grammar [WHERE TableName|CreateTime|FinishTime|State]
[ORDER BY] [LIMIT]
TABLE ROLLUP: Shows the task of creating or deleting ROLLUP index
If db_name is not specified, use the current default DB
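The TABLE COLUMN form above, with the WHERE/ORDER BY/LIMIT clauses it is said
to support (database name is illustrative):

```sql
-- Show schema-change tasks in example_db, most recent first.
SHOW ALTER TABLE COLUMN FROM example_db ORDER BY CreateTime DESC LIMIT 5;
```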
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW
DYNAMIC PARTITION TABLES.md b/docs/en/sql-reference/sql-statements/Data
Manipulation/SHOW DYNAMIC PARTITION TABLES.md
index fc1c291..ef1900d 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW DYNAMIC
PARTITION TABLES.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/SHOW DYNAMIC
PARTITION TABLES.md
@@ -25,7 +25,7 @@ under the License.
# SHOW DYNAMIC PARTITION TABLES
## description
This statement is used to display all dynamically partitioned table states
under the current db
- Grammar:
+ Grammar:
SHOW DYNAMIC PARTITION TABLES [FROM db_name];
## example
diff --git a/docs/en/sql-reference/sql-statements/Data Manipulation/STREAM
LOAD.md b/docs/en/sql-reference/sql-statements/Data Manipulation/STREAM LOAD.md
index 5440c7f..330f30e 100644
--- a/docs/en/sql-reference/sql-statements/Data Manipulation/STREAM LOAD.md
+++ b/docs/en/sql-reference/sql-statements/Data Manipulation/STREAM LOAD.md
@@ -206,14 +206,14 @@ Where url is the url given by ErrorURL.
```curl --location-trusted -u root -H "columns: k1, k2, v1=to_bitmap(k1),
v2=bitmap_empty()" -T testData
http://host:port/api/testDb/testTbl/_stream_load```
10. a simple load json
- table schema:
+ table schema:
`category` varchar(512) NULL COMMENT "",
`author` varchar(512) NULL COMMENT "",
`title` varchar(512) NULL COMMENT "",
`price` double NULL COMMENT ""
- json data:
+ json data:
{"category":"C++","author":"avc","title":"C++ primer","price":895}
- load command by curl:
+ load command by curl:
curl --location-trusted -u root -H "label:123" -H "format: json"
-T testData http://host:port/api/testDb/testTbl/_stream_load
you can load multiple records, for example:
[
@@ -230,7 +230,7 @@ Where url is the url given by ErrorURL.
]
Matched imports are made by specifying jsonpath parameter, such as
`category`, `author`, and `price`, for example:
curl --location-trusted -u root -H "columns: category, price,
author" -H "label:123" -H "format: json" -H "jsonpaths:
[\"$.category\",\"$.price\",\"$.author\"]" -H "strip_outer_array: true" -T
testData http://host:port/api/testDb/testTbl/_stream_load
- Tips:
+ Tips:
1)If the json data starts as an array and each object in the array is
a record, you need to set the strip_outer_array to true to represent the flat
array.
2)If the json data starts with an array, and each object in the array
is a record, our ROOT node is actually an object in the array when we set
jsonpath.
@@ -243,7 +243,7 @@ Where url is the url given by ErrorURL.
{"category":"33","author":"3avc","title":"SayingsoftheCentury","timestamp":1589191387}
]
}
- Matched imports are made by specifying jsonpath parameter, such as
`category`, `author`, and `price`, for example:
+ Matched imports are made by specifying jsonpath parameter, such as
`category`, `author`, and `price`, for example:
curl --location-trusted -u root -H "columns: category, price,
author" -H "label:123" -H "format: json" -H "jsonpaths:
[\"$.category\",\"$.price\",\"$.author\"]" -H "strip_outer_array: true" -H
"json_root: $.RECORDS" -T testData
http://host:port/api/testDb/testTbl/_stream_load
13. delete all data which key columns match the load data
diff --git a/docs/en/sql-reference/sql-statements/Data
Manipulation/alter-routine-load.md b/docs/en/sql-reference/sql-statements/Data
Manipulation/alter-routine-load.md
index fb7c8dd..8945196 100644
--- a/docs/en/sql-reference/sql-statements/Data
Manipulation/alter-routine-load.md
+++ b/docs/en/sql-reference/sql-statements/Data
Manipulation/alter-routine-load.md
@@ -74,7 +74,7 @@ Syntax:
2. `kafka_offsets`
3. Custom property, such as `property.group.id`
- Notice:
+ Notice:
1. `kafka_partitions` and `kafka_offsets` are used to modify the offset of
the kafka partition to be consumed, and can only modify the currently consumed
partition. Cannot add partition.
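The partition/offset modification described in the notice might look like this;
the job name, partitions, and offsets are illustrative:

```sql
-- Reset the consumed offsets of two currently consumed partitions.
ALTER ROUTINE LOAD FOR example_db.job1
FROM KAFKA
(
    "kafka_partitions" = "0,1",
    "kafka_offsets" = "100,200"
);
```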
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]