This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 349925f647c [fix](iceberg) add list-all-tables doc (#3017)
349925f647c is described below
commit 349925f647c64b71643252569d577ef2cc003cf0
Author: Mingyu Chen (Rayner) <[email protected]>
AuthorDate: Wed Nov 5 18:00:38 2025 +0800
[fix](iceberg) add list-all-tables doc (#3017)
---
community/developer-guide/debug-tool.md | 2 +-
community/how-to-contribute/docs-format-specification.md | 8 ++++----
community/release-and-verify/release-prepare.md | 3 +--
community/release-and-verify/release-verify.md | 2 +-
community/source-install/compilation-win.md | 4 ++--
docs/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
docs/lakehouse/catalogs/iceberg-catalog.mdx | 11 +++++++++++
.../date-time-functions/quarter-floor.md | 4 ++--
docs/table-design/data-type.md | 2 +-
.../current/developer-guide/debug-tool.md | 2 +-
.../how-to-contribute/docs-format-specification.md | 8 ++++----
.../current/release-and-verify/release-prepare.md | 1 -
.../current/source-install/compilation-win.md | 4 ++--
.../current/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../current/lakehouse/catalogs/iceberg-catalog.mdx | 13 +++++++++++++
.../date-time-functions/quarter-floor.md | 2 +-
.../version-2.1/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../version-2.1/lakehouse/catalogs/iceberg-catalog.mdx | 13 +++++++++++++
.../version-3.x/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../version-3.x/lakehouse/catalogs/iceberg-catalog.mdx | 13 +++++++++++++
.../version-4.x/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../version-4.x/lakehouse/catalogs/iceberg-catalog.mdx | 13 +++++++++++++
.../date-time-functions/quarter-floor.md | 2 +-
.../version-2.1/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../version-2.1/lakehouse/catalogs/iceberg-catalog.mdx | 11 +++++++++++
.../version-3.x/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../version-3.x/lakehouse/catalogs/iceberg-catalog.mdx | 11 +++++++++++
.../rw/file-cache-rw-compute-group-best-practice.md | 4 ++--
.../ecosystem/doris-operator/doris-operator-overview.md | 2 +-
.../version-4.x/ecosystem/observability/beats.md | 2 +-
.../version-4.x/ecosystem/observability/fluentbit.md | 2 +-
.../version-4.x/ecosystem/observability/logstash.md | 2 +-
.../separating-storage-compute/config-cluster.md | 2 +-
.../lakehouse/best-practices/doris-iceberg.md | 2 +-
.../version-4.x/lakehouse/best-practices/doris-paimon.md | 2 +-
.../version-4.x/lakehouse/catalogs/hive-catalog.mdx | 16 ++++++++--------
.../version-4.x/lakehouse/catalogs/hudi-catalog.md | 2 +-
.../version-4.x/lakehouse/catalogs/iceberg-catalog.mdx | 11 +++++++++++
.../version-4.x/lakehouse/lakehouse-overview.md | 2 +-
.../version-4.x/releasenotes/v1.1/release-1.1.0.md | 2 +-
.../version-4.x/releasenotes/v2.1/release-2.1.2.md | 4 ++--
.../version-4.x/releasenotes/v2.1/release-2.1.4.md | 6 +++---
.../version-4.x/releasenotes/v2.1/release-2.1.6.md | 4 ++--
.../date-time-functions/quarter-floor.md | 4 ++--
.../json-functions/json-extract-double.md | 4 ++--
.../storage-management/CREATE-STORAGE-VAULT.md | 4 ++--
versioned_docs/version-4.x/table-design/data-type.md | 8 ++++----
.../version-4.x/table-design/temporary-table.md | 8 ++++----
48 files changed, 214 insertions(+), 120 deletions(-)
diff --git a/community/developer-guide/debug-tool.md
b/community/developer-guide/debug-tool.md
index 1d629171afd..3d71e5d88ad 100644
--- a/community/developer-guide/debug-tool.md
+++ b/community/developer-guide/debug-tool.md
@@ -404,7 +404,7 @@ We get the following output in be.out
==24732==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 1024 byte(s) in 1 object(s) allocated from:
- #0 0xd10586 in operator new[](unsigned long)
../../../../gcc-7.3.0/libsanitizer/lsan/lsan_interceptors.cc:164
+ #0 0xd10586 in operator new[] (unsigned long)
../../../../gcc-7.3.0/libsanitizer/lsan/lsan_interceptors.cc:164
#1 0xe333a2 in doris::StorageEngine::open(doris::EngineOptions const&,
doris::StorageEngine**)
/home/ssd0/zc/palo/doris/core/be/src/olap/storage_engine.cpp:104
#2 0xd3cc96 in main
/home/ssd0/zc/palo/doris/core/be/src/service/doris_main.cpp:159
#3 0x7f573b5eebd4 in __libc_start_main
(/opt/compiler/gcc-4.8.2/lib64/libc.so.6+0x21bd4)
diff --git a/community/how-to-contribute/docs-format-specification.md
b/community/how-to-contribute/docs-format-specification.md
index 46dcf4ed177..c6902156aff 100644
--- a/community/how-to-contribute/docs-format-specification.md
+++ b/community/how-to-contribute/docs-format-specification.md
@@ -143,9 +143,9 @@ Link descriptions are not advisable to repeatedly use
phrases such as "see detai
**2. Link Format**
-- Link to other headings within the same document, such as [Inverted
Index](#Prefix-Index)
+- Link to other headings within the same document, such as [Inverted Index]
-- Link to adjacent documents: [BITMAP
Index](../../data-table/index/bloomfilter)
+- Link to adjacent documents: [BITMAP Index]
- Link to external websites: [Wikipedia - Inverted
Index](https://en.wikipedia.org/wiki/Inverted_index)
@@ -232,7 +232,7 @@ When you want to display images, it is convenient to
co-locate the asset next to
You can display images in two different ways:
-- Simple syntax: `  `
+- Simple syntax: ` ![Alt text for images description] (co-locate file
structure or link) `
- If you want the image to be centered, you can use HTML as following:
@@ -293,4 +293,4 @@ If features need to be version-specific, it is suggested to
use admonitions (ref
It is not recommended to use the `>` to quotation for content descriptions.
-If there is a need for more details or explanations, it is suggested to use
admonitions (refer to point six) with the `:::info :::` annotation.
\ No newline at end of file
+If there is a need for more details or explanations, it is suggested to use
admonitions (refer to point six) with the `:::info :::` annotation.
diff --git a/community/release-and-verify/release-prepare.md
b/community/release-and-verify/release-prepare.md
index e92159d8d69..e4c2de9f775 100644
--- a/community/release-and-verify/release-prepare.md
+++ b/community/release-and-verify/release-prepare.md
@@ -36,7 +36,6 @@ This document describes the main process and prep work for
release. For specific
* [Doris Core Release](./release-doris-core.md)
* [Doris Connectors Release](./release-doris-connectors.md)
-* [Doris Manager Release](./release-doris-manager.md)
* [Doris Shade Release](./release-doris-shade.md)
* [Doris Sdk Release](./release-doris-sdk.md)
@@ -67,7 +66,7 @@ The overall release process is as follows.
2. upload the content to be released to the [Apache Dev SVN
repository](https://dist.apache.org/repos/dist/dev/doris)
3. preparation of other Convenience Binaries (e.g. upload to [Maven
Staging repository](https://repository.apache.org/#stagingRepositories))
4. Community Release Polling Process
- 2. Initiate a VOTE in the [Doris Community Dev Mail
Group]([email protected]).
+ 2. Initiate a VOTE in the Doris Community Dev Mail Group:
[email protected].
3. After the vote is approved, send a Result email in the Doris
community.
5. Complete the work
1. Upload the signed packages to the [Apache Release
repository](https://dist.apache.org/repos/dist/release/doris) and generate the
relevant links.
diff --git a/community/release-and-verify/release-verify.md
b/community/release-and-verify/release-verify.md
index 90f7239ee6c..13c99ffefd7 100644
--- a/community/release-and-verify/release-verify.md
+++ b/community/release-and-verify/release-verify.md
@@ -101,7 +101,7 @@ Please see the compilation documentation of each component
to verify the compila
* Spark Doris Connector, see [compilation
documentation](/docs/ecosystem/spark-doris-connector)
## 5. Vote
-See the [ASF voting process]((https://www.apache.org/foundation/voting.html))
page for general information about voting.
+See the [ASF voting process](https://www.apache.org/foundation/voting.html)
page for general information about voting.
After the verification is completed, the following template can be used to
send voting emails to the dev@doris:
diff --git a/community/source-install/compilation-win.md
b/community/source-install/compilation-win.md
index 00c1beb5eb8..d369ac8585a 100644
--- a/community/source-install/compilation-win.md
+++ b/community/source-install/compilation-win.md
@@ -48,8 +48,8 @@ Refer to the official Microsoft [WSL installation
documentation](https://learn.m
Once you have WSL2 up and running, you can choose any of the available
compilation methods for Doris on Linux:
-- [Compile with LDB Toolchain
(Recommended)](../../install/source-install/compilation-with-ldb-toolchain)
-- [Docker Deployment
(Recommended)](../../install/source-install/compilation-with-docker)
+- [Compile with LDB Toolchain
(Recommended)](./compilation-with-ldb-toolchain.md)
+- [Docker Deployment (Recommended)](./compilation-with-docker.md)
## Note
diff --git a/docs/lakehouse/catalogs/hive-catalog.mdx
b/docs/lakehouse/catalogs/hive-catalog.mdx
index f42c1a46a54..7c7603c36f9 100644
--- a/docs/lakehouse/catalogs/hive-catalog.mdx
+++ b/docs/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -59,13 +59,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the "Supported Storage
Systems" section for details.
-* `{CommonProperties}`
-
- The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties section is for entering properties related to Hive Catalog.
+ HiveProperties section is for entering properties related to Hive Catalog.
* `get_schema_from_table`: The default value is false. By default, Doris
will obtain the table schema information from the Hive Metastore. However, in
some cases, compatibility issues may occur, such as the error `Storage schema
reading not supported`. In this case, you can set this parameter to true, and
the table schema will be obtained directly from the Table object. But please
note that this method will cause the default value information of the column to
be ignored. This property [...]
@@ -73,6 +69,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`: Whether to ignore non-existent
partitions. Defaults to `true`. If set to `false`, the query will report an
error when encountering non-existent partitions. This parameter has been
supported since version 3.0.2.
+* `{CommonProperties}`
+
+ The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
+
### Supported Hive Versions
Supports Hive 1.x, 2.x, 3.x, and 4.x.
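The reordered property sections in the hunk above can be illustrated with a minimal catalog definition. This is a sketch, not part of the commit: the catalog name and thrift address are placeholders, and only the two `HiveProperties` keys documented above are shown.

```sql
-- Minimal sketch: HiveProperties now sit before CommonProperties.
-- 'hive_demo' and the metastore URI are placeholder values.
CREATE CATALOG IF NOT EXISTS hive_demo PROPERTIES (
    'type' = 'hms',
    'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
    -- HiveProperties
    'get_schema_from_table' = 'true',        -- read schema from the Table object
    'hive.ignore_absent_partitions' = 'true' -- skip missing partitions silently
);
```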
diff --git a/docs/lakehouse/catalogs/iceberg-catalog.mdx
b/docs/lakehouse/catalogs/iceberg-catalog.mdx
index c823f9dbad1..6e48dd5b926 100644
--- a/docs/lakehouse/catalogs/iceberg-catalog.mdx
+++ b/docs/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,16 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the section on [Supported
Storage Systems].
+* `{IcebergProperties}`
+
+ The IcebergProperties section is used to fill in parameters specific to
Iceberg Catalog.
+
+  - `list-all-tables`
+
+    Applies to Iceberg Catalogs that use Hive Metastore as the metadata service. Defaults to `true`: the `SHOW TABLES` operation lists all types of tables in the current Database (Hive Metastore may also store non-Iceberg tables). This mode has the best performance.
+
+    If set to `false`, Doris checks the type of each table one by one and returns only Iceberg tables. This mode performs poorly when there are many tables.
+
* `{CommonProperties}`
The CommonProperties section is for entering general properties. See the
[Catalog Overview](../catalog-overview.md) for details on common properties.
diff --git
a/docs/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
b/docs/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
index 5759f440787..ed3a1a24d2b 100644
---
a/docs/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
+++
b/docs/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
@@ -33,7 +33,7 @@ QUARTER_FLOOR(`<date_or_time_expr>`, `<period>`, `<origin>`)
| Parameter | Description |
| ---- | ---- |
-| `<date_or_time_expr>` | The datetime value to be rounded down, type DATETIME
or DATE. For specific datetime/date formats, see [datetime
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion.md)
and [date
conversion](../../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion)
|
+| `<date_or_time_expr>` | The datetime value to be rounded down, type DATETIME
or DATE. For specific datetime/date formats, see [datetime
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion.md)
and [date
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion)
|
| `<period>` | Quarter period value, type INT, indicating the number of
quarters contained in each period |
| `<origin_datetime>` | Starting time point of the period, type DATETIME/DATE,
default is 0001-01-01 00:00:00 |
@@ -76,7 +76,7 @@ QUARTER_CEIL(`<date_or_time_expr>`, `<period>`, `<origin>`)
| Parameter | Description |
| ---- | ---- |
-| `<date_or_time_expr>` | The datetime value to be rounded up. It is a valid
date expression that supports date/datetime types. For specific datetime and
date formats, see [datetime
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion.md)
and [date
conversion](../../../../../docs/sql-manual/basic-element/sql-data-types/conversion/date-conversion).
|
+| `<date_or_time_expr>` | The datetime value to be rounded up. It is a valid
date expression that supports date/datetime types. For specific datetime and
date formats, see [datetime
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion.md)
and [date
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion).
|
| `<period>` | Quarter period value, type INT, indicating the number of
quarters contained in each period |
| `<origin_datetime>` | The starting time point of the period, supports
date/datetime types, default value is 0001-01-01 00:00:00 |
diff --git a/docs/table-design/data-type.md b/docs/table-design/data-type.md
index c1be68f5c11..183a47d9cde 100644
--- a/docs/table-design/data-type.md
+++ b/docs/table-design/data-type.md
@@ -54,7 +54,7 @@ The list of data types supported by Doris is as follows:
| [HLL](../sql-manual/basic-element/sql-data-types/aggregate/HLL) |
Variable Length | HLL stands for HyperLogLog, is a fuzzy deduplication. It
performs better than Count Distinct when dealing with large datasets. The
error rate of HLL is typically around 1%, and sometimes it can reach 2%. HLL
cannot be used as a key column, and the aggregation type is HLL_UNION when
creating a table. Users do not need to specify the length or default value as
it is internally controlled bas [...]
| [BITMAP](../sql-manual/basic-element/sql-data-types/aggregate/BITMAP)
| Variable Length | BITMAP type can be used in Aggregate tables, Unique tables
or Duplicate tables. - When used in a Unique table or a Duplicate table,
BITMAP must be employed as non-key columns. - When used in an Aggregate table,
BITMAP must also serve as non-key columns, and the aggregation type must be set
to BITMAP_UNION during table creation. Users do not need to specify the length
or default value as [...]
|
[QUANTILE_STATE](../sql-manual/basic-element/sql-data-types/aggregate/QUANTILE-STATE.md)
| Variable Length | A type used to calculate approximate quantile values.
When loading, it performs pre-aggregation for the same keys with different
values. When the number of values does not exceed 2048, it records all data in
detail. When the number of values is greater than 2048, it employs the TDigest
algorithm to aggregate (cluster) the data and store the centroid points after
clustering. Q [...]
-| [AGG_STATE](../sql-manual/basic-element/sql-data-types/aggregate/AGG_STATE)
| Variable Length | Aggregate function can only be used with
state/merge/union function combiners. AGG_STATE cannot be used as a key
column. When creating a table, the signature of the aggregate function needs to
be declared alongside. Users do not need to specify the length or default
value. The actual data storage size depends on the function's implementation. |
+| [AGG_STATE](../sql-manual/basic-element/sql-data-types/aggregate/AGG-STATE)
| Variable Length | Aggregate function can only be used with
state/merge/union function combiners. AGG_STATE cannot be used as a key
column. When creating a table, the signature of the aggregate function needs to
be declared alongside. Users do not need to specify the length or default
value. The actual data storage size depends on the function's implementation. |
## [IP
types](../sql-manual/basic-element/sql-data-types/data-type-overview#ip-types)
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/debug-tool.md
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/debug-tool.md
index 9d8c15a514b..894c8c16820 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/debug-tool.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/debug-tool.md
@@ -403,7 +403,7 @@ BUILD_TYPE=LSAN ./build.sh
==24732==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 1024 byte(s) in 1 object(s) allocated from:
- #0 0xd10586 in operator new[](unsigned long)
../../../../gcc-7.3.0/libsanitizer/lsan/lsan_interceptors.cc:164
+ #0 0xd10586 in operator new[] (unsigned long)
../../../../gcc-7.3.0/libsanitizer/lsan/lsan_interceptors.cc:164
#1 0xe333a2 in doris::StorageEngine::open(doris::EngineOptions const&,
doris::StorageEngine**)
/home/ssd0/zc/palo/doris/core/be/src/olap/storage_engine.cpp:104
#2 0xd3cc96 in main
/home/ssd0/zc/palo/doris/core/be/src/service/doris_main.cpp:159
#3 0x7f573b5eebd4 in __libc_start_main
(/opt/compiler/gcc-4.8.2/lib64/libc.so.6+0x21bd4)
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/how-to-contribute/docs-format-specification.md
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/how-to-contribute/docs-format-specification.md
index 98ee2d9dea1..8a5fac3d500 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/how-to-contribute/docs-format-specification.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/how-to-contribute/docs-format-specification.md
@@ -225,11 +225,11 @@ SQL 函数文档排版请参考文档贡献指南-**[如何编写命令帮助手
- **链接格式**
- - 链接至同一文档中的其他标题:[倒排索引](# 前缀索引)
+ - 链接至同一文档中的其他标题:[倒排索引]
- - 链接至相邻文档:[BITMAP 索引](../data-table/index/bloomfilter)
+ - 链接至相邻文档:[BITMAP 索引]
- - 链接至外部站点:[维基百科 - Inverted
Index](https://en.wikipedia.org/wiki/Inverted_index)
+ - 链接至外部站点:[维基百科 - Inverted
Index](https://en.wikipedia.org/wiki/Inverted_index)
- **链接路径**
@@ -391,4 +391,4 @@ import TabItem from '@theme/TabItem';
### 13 引用块
-在新版文档中,**不建议使用 ` > ` 引用符号**进行内容描述或嵌套。如需说明备注,可使用注释说明(参考第六点)`:::info :::` 标注。
\ No newline at end of file
+在新版文档中,**不建议使用 ` > ` 引用符号**进行内容描述或嵌套。如需说明备注,可使用注释说明(参考第六点)`:::info :::` 标注。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/release-and-verify/release-prepare.md
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/release-and-verify/release-prepare.md
index e308b7e7e52..dbd08e51321 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/release-and-verify/release-prepare.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/release-and-verify/release-prepare.md
@@ -36,7 +36,6 @@ Apache 项目的版本发布必须严格遵循 Apache 基金会的版本发布
* [Doris Core Release](./release-doris-core.md)
* [Doris Connectors Release](./release-doris-connectors.md)
-* [Doris Manager Release](./release-doris-manager.md)
* [Doris Shade Release](./release-doris-shade.md)
* [Doris Sdk Release](./release-doris-sdk.md)
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/source-install/compilation-win.md
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/source-install/compilation-win.md
index 58ed3936425..19466d4ddd9 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/source-install/compilation-win.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/source-install/compilation-win.md
@@ -44,9 +44,9 @@ under the License.
通过使用 WSL2 启动的 Linux 子系统,选择任意 Doris 在 Linux 上的编译方式即可。
-- [使用 LDB Toolchain 编译 (推荐)
](https://doris.apache.org/zh-CN/community/source-install/compilation-with-ldb-toolchain)
+- [使用 LDB Toolchain 编译 (推荐) ](./compilation-with-ldb-toolchain.md)
-- [使用 Docker
开发镜像编译(推荐)](https://doris.apache.org/zh-CN/community/source-install/compilation-with-docker)
+- [使用 Docker 开发镜像编译(推荐)](./compilation-with-docker.md)
## 注意事项
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.mdx
index b303894a97c..9aedf3baaf5 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -61,13 +61,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
StorageProperties 部分用于填写存储系统相关的连接和认证信息。具体可参阅【支持的存储系统】部分。
-* `{CommonProperties}`
-
- CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties 部分用于填写和 Hive Catalog 相关的其他参数。
+ HiveProperties 部分用于填写和 Hive Catalog 相关的其他参数。
* `get_schema_from_table`:默认为 false。默认情况下,Doris 会从 Hive Metastore 中获取表的
Schema 信息。但某些情况下可能出现兼容问题,如错误 `Storage schema reading not
supported`。此时可以将这个参数设置为 true,则会从 Table 对象中直接获取表
Schema。但注意,该方式会导致列的默认值信息被忽略。该参数自 2.1.10 和 3.0.6 版本支持。
@@ -75,6 +71,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`:是否忽略不存在的分区。默认为 `true`。如果设为
`false`,当遇到不存在的分区时,查询会报错。该参数自 3.0.2 版本支持。
+* `{CommonProperties}`
+
+ CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
+
### 支持的 Hive 版本
支持 Hive 1.x,2.x,3.x,4.x。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.mdx
index 1ecbe3cdbfd..0f0ffe1ef1f 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,18 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
StorageProperties 部分用于填写存储系统相关的连接和认证信息。具体可参阅【支持的存储系统】部分。
+* `{IcebergProperties}`
+
+ IcebergProperties 部分用于填写一些 Iceberg Catalog 特有的参数。
+
+ - `list-all-tables`
+
+ 自 3.1.2 版本支持。
+
+ 针对以 Hive Metastore 作为元数据服务的 Iceberg Catalog。默认为 `true`。在默认情况下,`SHOW
TABLES` 操作会罗列出当前 Database 下的所有类型的 Table(Hive Metastore 中可能存储了非 Iceberg 类型的表)。
+
+ 这种方式性能最好。如果设置为 `false`,则 Doris 会逐一检查每个 Table 的类型,并只返回 Iceberg 类型的
Table。该模式在表很多的情况下,性能会比较差。
+
* `{CommonProperties}`
CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
index 6efbedbcca5..1b0e1f38a0e 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
@@ -33,7 +33,7 @@ QUARTER_FLOOR(`<date_or_time_expr>`, `<period>`, `<origin>`)
| 参数 | 说明 |
| ---- | ---- |
-| `<date_or_time_expr>` | 需要向下取整的日期时间值,类型为 DATETIME 或 DATE ,具体 datetime/date
格式请查看 [datetime
的转换](../../../../../current/sql-manual/basic-element/sql-data-types/conversion/datetime-conversion)
和 [date
的转换](../../../../../current/sql-manual/basic-element/sql-data-types/conversion/date-conversion)|
+| `<date_or_time_expr>` | 需要向下取整的日期时间值,类型为 DATETIME 或 DATE ,具体 datetime/date
格式请查看 [datetime
的转换](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion)
和 [date
的转换](../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion)|
| `<period>` | 季度周期值,类型为 INT,表示每个周期包含的季度数 |
| `<origin_datetime>` | 周期的起始时间点,类型为 DATETIME/DATE ,默认值为 0001-01-01 00:00:00 |
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
index b303894a97c..9aedf3baaf5 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -61,13 +61,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
StorageProperties 部分用于填写存储系统相关的连接和认证信息。具体可参阅【支持的存储系统】部分。
-* `{CommonProperties}`
-
- CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties 部分用于填写和 Hive Catalog 相关的其他参数。
+ HiveProperties 部分用于填写和 Hive Catalog 相关的其他参数。
* `get_schema_from_table`:默认为 false。默认情况下,Doris 会从 Hive Metastore 中获取表的
Schema 信息。但某些情况下可能出现兼容问题,如错误 `Storage schema reading not
supported`。此时可以将这个参数设置为 true,则会从 Table 对象中直接获取表
Schema。但注意,该方式会导致列的默认值信息被忽略。该参数自 2.1.10 和 3.0.6 版本支持。
@@ -75,6 +71,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`:是否忽略不存在的分区。默认为 `true`。如果设为
`false`,当遇到不存在的分区时,查询会报错。该参数自 3.0.2 版本支持。
+* `{CommonProperties}`
+
+ CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
+
### 支持的 Hive 版本
支持 Hive 1.x,2.x,3.x,4.x。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
index 1ecbe3cdbfd..0f0ffe1ef1f 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,18 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
StorageProperties 部分用于填写存储系统相关的连接和认证信息。具体可参阅【支持的存储系统】部分。
+* `{IcebergProperties}`
+
+ IcebergProperties 部分用于填写一些 Iceberg Catalog 特有的参数。
+
+ - `list-all-tables`
+
+ 自 3.1.2 版本支持。
+
+ 针对以 Hive Metastore 作为元数据服务的 Iceberg Catalog。默认为 `true`。在默认情况下,`SHOW
TABLES` 操作会罗列出当前 Database 下的所有类型的 Table(Hive Metastore 中可能存储了非 Iceberg 类型的表)。
+
+ 这种方式性能最好。如果设置为 `false`,则 Doris 会逐一检查每个 Table 的类型,并只返回 Iceberg 类型的
Table。该模式在表很多的情况下,性能会比较差。
+
* `{CommonProperties}`
CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
index b303894a97c..9aedf3baaf5 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -61,13 +61,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
StorageProperties 部分用于填写存储系统相关的连接和认证信息。具体可参阅【支持的存储系统】部分。
-* `{CommonProperties}`
-
- CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties 部分用于填写和 Hive Catalog 相关的其他参数。
+ HiveProperties 部分用于填写和 Hive Catalog 相关的其他参数。
* `get_schema_from_table`:默认为 false。默认情况下,Doris 会从 Hive Metastore 中获取表的
Schema 信息。但某些情况下可能出现兼容问题,如错误 `Storage schema reading not
supported`。此时可以将这个参数设置为 true,则会从 Table 对象中直接获取表
Schema。但注意,该方式会导致列的默认值信息被忽略。该参数自 2.1.10 和 3.0.6 版本支持。
@@ -75,6 +71,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`:是否忽略不存在的分区。默认为 `true`。如果设为
`false`,当遇到不存在的分区时,查询会报错。该参数自 3.0.2 版本支持。
+* `{CommonProperties}`
+
+ CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
+
### 支持的 Hive 版本
支持 Hive 1.x,2.x,3.x,4.x。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
index 1ecbe3cdbfd..0f0ffe1ef1f 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,18 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
StorageProperties 部分用于填写存储系统相关的连接和认证信息。具体可参阅【支持的存储系统】部分。
+* `{IcebergProperties}`
+
+ IcebergProperties 部分用于填写一些 Iceberg Catalog 特有的参数。
+
+ - `list-all-tables`
+
+ 自 3.1.2 版本支持。
+
+ 针对以 Hive Metastore 作为元数据服务的 Iceberg Catalog。默认为 `true`。在默认情况下,`SHOW
TABLES` 操作会罗列出当前 Database 下的所有类型的 Table(Hive Metastore 中可能存储了非 Iceberg 类型的表)。
+
+ 这种方式性能最好。如果设置为 `false`,则 Doris 会逐一检查每个 Table 的类型,并只返回 Iceberg 类型的
Table。该模式在表很多的情况下,性能会比较差。
+
* `{CommonProperties}`
CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
index b303894a97c..9aedf3baaf5 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -61,13 +61,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication information related to the storage system. See the "Supported Storage Systems" section for details.
-* `{CommonProperties}`
-
- CommonProperties 部分用于填写通用属性。请参阅[ 数据目录概述 ](../catalog-overview.md)中【通用属性】部分。
-
-* `{OtherProperties}`
+* `{HiveProperties}`
-  The OtherProperties section is for entering other parameters related to the Hive Catalog.
+  The HiveProperties section is for entering other parameters related to the Hive Catalog.
  * `get_schema_from_table`: Defaults to false. By default, Doris obtains the table schema from the Hive Metastore. In some cases compatibility issues may occur, such as the error `Storage schema reading not supported`. Setting this parameter to true makes Doris obtain the schema directly from the Table object instead. Note that this approach ignores the columns' default-value information. Supported since versions 2.1.10 and 3.0.6.
@@ -75,6 +71,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`: Whether to ignore absent partitions. Defaults to `true`. If set to `false`, the query reports an error when it encounters an absent partition. Supported since version 3.0.2.
+* `{CommonProperties}`
+
+  The CommonProperties section is for entering common properties. See the "Common Properties" section in the [Catalog Overview](../catalog-overview.md).
+
### Supported Hive Versions
Supports Hive 1.x, 2.x, 3.x, and 4.x.
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
index 1ecbe3cdbfd..0f0ffe1ef1f 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,18 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication information related to the storage system. See the "Supported Storage Systems" section for details.
+* `{IcebergProperties}`
+
+  The IcebergProperties section is for entering parameters specific to the Iceberg Catalog.
+
+  - `list-all-tables`
+
+    Supported since version 3.1.2.
+
+    Applies to Iceberg Catalogs that use Hive Metastore as the metadata service. Defaults to `true`. By default, the `SHOW TABLES` operation lists tables of all types in the current Database (the Hive Metastore may also store non-Iceberg tables). This approach has the best performance.
+
+    If set to `false`, Doris checks the type of each table one by one and returns only Iceberg tables. This mode performs poorly when there are many tables.
+
* `{CommonProperties}`
The CommonProperties section is for entering common properties. See the "Common Properties" section in the [Catalog Overview](../catalog-overview.md).
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
index 6efbedbcca5..1b0e1f38a0e 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
@@ -33,7 +33,7 @@ QUARTER_FLOOR(`<date_or_time_expr>`, `<period>`, `<origin>`)
| 参数 | 说明 |
| ---- | ---- |
-| `<date_or_time_expr>` | The datetime value to round down, of type DATETIME or DATE. For the datetime/date formats, see [datetime conversion](../../../../../current/sql-manual/basic-element/sql-data-types/conversion/datetime-conversion) and [date conversion](../../../../../current/sql-manual/basic-element/sql-data-types/conversion/date-conversion) |
+| `<date_or_time_expr>` | The datetime value to round down, of type DATETIME or DATE. For the datetime/date formats, see [datetime conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion) and [date conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion) |
| `<period>` | The quarter period value, of type INT, indicating the number of quarters each period contains |
| `<origin_datetime>` | The starting point of the period, of type DATETIME/DATE; defaults to 0001-01-01 00:00:00 |
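For context on the parameters in the table above, a couple of illustrative calls (the input values here are made-up examples, not taken from this commit):

```sql
-- Round a datetime down to the start of its 2-quarter period,
-- counting periods from the default origin 0001-01-01 00:00:00.
SELECT QUARTER_FLOOR('2024-08-15 10:30:00', 2);

-- With an explicit origin, periods are aligned to that start point
-- instead of the default.
SELECT QUARTER_FLOOR('2024-08-15 10:30:00', 2, '2024-01-01 00:00:00');
```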
diff --git a/versioned_docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
b/versioned_docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
index f42c1a46a54..7c7603c36f9 100644
--- a/versioned_docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
+++ b/versioned_docs/version-2.1/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -59,13 +59,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the "Supported Storage
Systems" section for details.
-* `{CommonProperties}`
-
- The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties section is for entering properties related to Hive Catalog.
+ The HiveProperties section is for entering parameters related to the Hive Catalog.
* `get_schema_from_table`: The default value is false. By default, Doris
will obtain the table schema information from the Hive Metastore. However, in
some cases, compatibility issues may occur, such as the error `Storage schema
reading not supported`. In this case, you can set this parameter to true, and
the table schema will be obtained directly from the Table object. But please
note that this method will cause the default value information of the column to
be ignored. This property [...]
@@ -73,6 +69,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`: Whether to ignore non-existent
partitions. Defaults to `true`. If set to `false`, the query will report an
error when encountering non-existent partitions. This parameter has been
supported since version 3.0.2.
+* `{CommonProperties}`
+
+ The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
+
### Supported Hive Versions
Supports Hive 1.x, 2.x, 3.x, and 4.x.
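To make the renamed `{HiveProperties}` slot in this hunk concrete, a minimal sketch of a Hive Catalog definition follows. The catalog name, metastore URI, and property values are hypothetical placeholders, not values from this commit:

```sql
-- Hypothetical Hive Catalog using the HiveProperties described above.
CREATE CATALOG IF NOT EXISTS hive_demo PROPERTIES (
    'type' = 'hms',
    'hive.metastore.uris' = 'thrift://hms-host:9083',
    -- HiveProperties: read the schema from the Table object when the
    -- metastore reports "Storage schema reading not supported"
    -- (supported since 2.1.10 / 3.0.6)
    'get_schema_from_table' = 'true',
    -- fail queries that reference absent partitions (since 3.0.2)
    'hive.ignore_absent_partitions' = 'false'
);
```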
diff --git a/versioned_docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
b/versioned_docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
index c823f9dbad1..6e48dd5b926 100644
--- a/versioned_docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
+++ b/versioned_docs/version-2.1/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,16 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the section on [Supported
Storage Systems].
+* `{IcebergProperties}`
+
+ The IcebergProperties section is used to fill in parameters specific to
Iceberg Catalog.
+
+ - `list-all-tables`
+
+  Applies to Iceberg Catalogs that use Hive Metastore as the metadata service. Defaults to `true`. By default, the `SHOW TABLES` operation lists tables of all types in the current Database (the Hive Metastore may store non-Iceberg tables). This approach has the best performance.
+
+  If set to `false`, Doris checks the type of each table one by one and returns only Iceberg tables. This mode performs poorly when there are many tables.
+
* `{CommonProperties}`
The CommonProperties section is for entering general properties. See the
[Catalog Overview](../catalog-overview.md) for details on common properties.
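A minimal sketch of the new `{IcebergProperties}` slot from this hunk, assuming a Hive Metastore-backed Iceberg Catalog; the catalog name, URI, and warehouse path are hypothetical placeholders:

```sql
-- Hypothetical Iceberg Catalog on Hive Metastore. Setting
-- list-all-tables to false trades SHOW TABLES speed for a listing
-- restricted to Iceberg tables only.
CREATE CATALOG IF NOT EXISTS iceberg_demo PROPERTIES (
    'type' = 'iceberg',
    'iceberg.catalog.type' = 'hms',
    'hive.metastore.uris' = 'thrift://hms-host:9083',
    'warehouse' = 's3://demo-bucket/warehouse',
    -- IcebergProperties (supported since 3.1.2)
    'list-all-tables' = 'false'
);
```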
diff --git a/versioned_docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
b/versioned_docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
index f42c1a46a54..7c7603c36f9 100644
--- a/versioned_docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
+++ b/versioned_docs/version-3.x/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -59,13 +59,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the "Supported Storage
Systems" section for details.
-* `{CommonProperties}`
-
- The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties section is for entering properties related to Hive Catalog.
+ The HiveProperties section is for entering parameters related to the Hive Catalog.
* `get_schema_from_table`: The default value is false. By default, Doris
will obtain the table schema information from the Hive Metastore. However, in
some cases, compatibility issues may occur, such as the error `Storage schema
reading not supported`. In this case, you can set this parameter to true, and
the table schema will be obtained directly from the Table object. But please
note that this method will cause the default value information of the column to
be ignored. This property [...]
@@ -73,6 +69,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`: Whether to ignore non-existent
partitions. Defaults to `true`. If set to `false`, the query will report an
error when encountering non-existent partitions. This parameter has been
supported since version 3.0.2.
+* `{CommonProperties}`
+
+ The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
+
### Supported Hive Versions
Supports Hive 1.x, 2.x, 3.x, and 4.x.
diff --git a/versioned_docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
b/versioned_docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
index c823f9dbad1..6e48dd5b926 100644
--- a/versioned_docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
+++ b/versioned_docs/version-3.x/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,16 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the section on [Supported
Storage Systems].
+* `{IcebergProperties}`
+
+ The IcebergProperties section is used to fill in parameters specific to
Iceberg Catalog.
+
+ - `list-all-tables`
+
+  Applies to Iceberg Catalogs that use Hive Metastore as the metadata service. Defaults to `true`. By default, the `SHOW TABLES` operation lists tables of all types in the current Database (the Hive Metastore may store non-Iceberg tables). This approach has the best performance.
+
+  If set to `false`, Doris checks the type of each table one by one and returns only Iceberg tables. This mode performs poorly when there are many tables.
+
* `{CommonProperties}`
The CommonProperties section is for entering general properties. See the
[Catalog Overview](../catalog-overview.md) for details on common properties.
diff --git
a/versioned_docs/version-4.x/compute-storage-decoupled/rw/file-cache-rw-compute-group-best-practice.md
b/versioned_docs/version-4.x/compute-storage-decoupled/rw/file-cache-rw-compute-group-best-practice.md
index b995e207f72..359adc73620 100644
---
a/versioned_docs/version-4.x/compute-storage-decoupled/rw/file-cache-rw-compute-group-best-practice.md
+++
b/versioned_docs/version-4.x/compute-storage-decoupled/rw/file-cache-rw-compute-group-best-practice.md
@@ -30,7 +30,7 @@ This is a more intelligent and automated mechanism. It
establishes a warm-up rel
- Most scenarios.
- Requires user permission to configure warm-up relationships.
-> **[Documentation Link]**: For detailed information on how to configure and
use proactive incremental warm-up, please refer to the official documentation
**[FileCache Proactive Incremental Warm-up](./read-write-splitting.md)**.
+> **[Documentation Link]**: For detailed information on how to configure and
use proactive incremental warm-up, please refer to the official documentation
**[FileCache Proactive Incremental Warm-up](./read-write-separation.md)**.
### 2. Read-Only Compute Group Automatic Warm-up
@@ -168,4 +168,4 @@ High-frequency data ingestion (like `INSERT INTO`, `Stream
Load`) continuously p
| Active incremental pre-warming + delayed commit + configurable data
freshness tolerance (optional) | Suitable for scenarios with extremely high
query latency requirements; requires users to have permission to configure
pre-warming relationships | Compaction: None <br> Heavyweight schema change:
None <br> Newly written data: Depends on freshness tolerance |
| Read-only compute group with automatic pre-warming + prefer cached data +
configurable data freshness tolerance (optional) | Users have no permission to
configure pre-warming relationships <br> If freshness tolerance is not
configured, ineffective for MOW primary key tables | Compaction: None <br>
Heavyweight schema change: Cache miss <br> Newly written data: Depends on
freshness tolerance |
-By reasonably applying the above cache warm-up strategies and related
configurations, you can effectively manage the cache behavior of Apache Doris
in a read-write splitting architecture, minimize performance loss due to cache
misses, and ensure the stability and efficiency of your read-only query
services.
\ No newline at end of file
+By reasonably applying the above cache warm-up strategies and related
configurations, you can effectively manage the cache behavior of Apache Doris
in a read-write splitting architecture, minimize performance loss due to cache
misses, and ensure the stability and efficiency of your read-only query
services.
diff --git
a/versioned_docs/version-4.x/ecosystem/doris-operator/doris-operator-overview.md
b/versioned_docs/version-4.x/ecosystem/doris-operator/doris-operator-overview.md
index 7bc55b670a6..040c0065158 100644
---
a/versioned_docs/version-4.x/ecosystem/doris-operator/doris-operator-overview.md
+++
b/versioned_docs/version-4.x/ecosystem/doris-operator/doris-operator-overview.md
@@ -49,7 +49,7 @@ Based on the deployment definition provided by Doris
Operator, users can customi
- **Runtime debugging**:
One of the biggest challenges for Trouble Shooting with containerized
services is how to debug at runtime. While pursuing availability and ease of
use, Doris Operator also provides more convenient conditions for problem
location. In the basic image of Doris, a variety of tools for problem location
are pre-set. When you need to view the status in real time, you can enter the
container through the exec command provided by kubectl and use the built-in
tools for troubleshooting.
- When the service cannot be started for unknown reasons, Doris Operator
provides a Debug running mode. When a Pod is set to Debug startup mode, the
container will automatically enter the running state. At this time, you can
enter the container through the `exec` command, manually start the service and
locate the problem. For details, please refer to [this
document](../../install/deploy-on-kubernetes/compute-storage-coupled/cluster-operation.md#How-to-enter-the-container-when-the-pod-crashes)
+ When the service cannot be started for unknown reasons, Doris Operator
provides a Debug running mode. When a Pod is set to Debug startup mode, the
container will automatically enter the running state. At this time, you can
enter the container through the `exec` command, manually start the service and
locate the problem. For details, please refer to [this
document](../../install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation.md#How-to-enter-the-container-when-the-pod-
[...]
## Compatibility
diff --git a/versioned_docs/version-4.x/ecosystem/observability/beats.md
b/versioned_docs/version-4.x/ecosystem/observability/beats.md
index a1f9ab70c19..83a1cfa66e1 100644
--- a/versioned_docs/version-4.x/ecosystem/observability/beats.md
+++ b/versioned_docs/version-4.x/ecosystem/observability/beats.md
@@ -11,7 +11,7 @@
The Beats Doris output plugin supports
[Filebeat](https://github.com/elastic/beats/tree/master/filebeat),
[Metricbeat](https://github.com/elastic/beats/tree/master/metricbeat),
[Packetbeat](https://github.com/elastic/beats/tree/master/packetbeat),
[Winlogbeat](https://github.com/elastic/beats/tree/master/winlogbeat),
[Auditbeat](https://github.com/elastic/beats/tree/master/auditbeat), and
[Heartbeat](https://github.com/elastic/beats/tree/master/heartbeat).
-By invoking the [Doris Stream
Load](../data-operate/import/import-way/stream-load-manual) HTTP interface, the
Beats Doris output plugin writes data into Doris in real-time, offering
capabilities such as multi-threaded concurrency, failure retries, custom Stream
Load formats and parameters, and output write speed.
+By invoking the [Doris Stream
Load](../../data-operate/import/import-way/stream-load-manual) HTTP interface,
the Beats Doris output plugin writes data into Doris in real-time, offering
capabilities such as multi-threaded concurrency, failure retries, custom Stream
Load formats and parameters, and output write speed.
To use the Beats Doris output plugin, there are three main steps:
1. Download or compile the Beats binary program that includes the Doris output
plugin.
diff --git a/versioned_docs/version-4.x/ecosystem/observability/fluentbit.md
b/versioned_docs/version-4.x/ecosystem/observability/fluentbit.md
index 739ccec11df..6b87288100c 100644
--- a/versioned_docs/version-4.x/ecosystem/observability/fluentbit.md
+++ b/versioned_docs/version-4.x/ecosystem/observability/fluentbit.md
@@ -7,7 +7,7 @@
[Fluent Bit](https://fluentbit.io/) is a fast log processor and forwarder that
supports custom output plugins to write data into storage systems, with the
Fluent Bit Doris output plugin being the one for outputting to Doris.
-By invoking the [Doris Stream
Load](../data-operate/import/import-way/stream-load-manual) HTTP interface, the
Fluent Bit Doris output plugin writes data into Doris in real-time, offering
capabilities such as multi-threaded concurrency, failure retries, custom Stream
Load formats and parameters, and output write speed.
+By invoking the [Doris Stream
Load](../../data-operate/import/import-way/stream-load-manual) HTTP interface,
the Fluent Bit Doris output plugin writes data into Doris in real-time,
offering capabilities such as multi-threaded concurrency, failure retries,
custom Stream Load formats and parameters, and output write speed.
To use the Fluent Bit Doris output plugin, there are three main steps:
1. Download or compile the Fluent Bit binary program that includes the Doris
output plugin.
diff --git a/versioned_docs/version-4.x/ecosystem/observability/logstash.md
b/versioned_docs/version-4.x/ecosystem/observability/logstash.md
index e819849895f..3653f9e4236 100644
--- a/versioned_docs/version-4.x/ecosystem/observability/logstash.md
+++ b/versioned_docs/version-4.x/ecosystem/observability/logstash.md
@@ -11,7 +11,7 @@
Logstash is a log ETL framework (collect, preprocess, send to storage systems)
that supports custom output plugins to write data into storage systems. The
Logstash Doris output plugin is a plugin for outputting data to Doris.
-The Logstash Doris output plugin calls the [Doris Stream
Load](../data-operate/import/import-way/stream-load-manual) HTTP interface to
write data into Doris in real-time, offering capabilities such as
multi-threaded concurrency, failure retries, custom Stream Load formats and
parameters, and output write speed.
+The Logstash Doris output plugin calls the [Doris Stream
Load](../../data-operate/import/import-way/stream-load-manual) HTTP interface
to write data into Doris in real-time, offering capabilities such as
multi-threaded concurrency, failure retries, custom Stream Load formats and
parameters, and output write speed.
Using the Logstash Doris output plugin mainly involves three steps:
1. Install the plugin into Logstash
diff --git
a/versioned_docs/version-4.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
b/versioned_docs/version-4.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
index 25b107cd5e7..f91cb81d264 100644
---
a/versioned_docs/version-4.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
+++
b/versioned_docs/version-4.x/install/deploy-on-kubernetes/separating-storage-compute/config-cluster.md
@@ -229,4 +229,4 @@ The Doris Operator mounts the krb5.conf file using a
ConfigMap resource and moun
keytabSecretName: ${keytabSecretName}
keytabPath: ${keytabPath}
```
- ${krb5ConfigMapName}: Name of the ConfigMap containing the krb5.conf file.
${keytabSecretName}: Name of the Secret containing the keytab files.
${keytabPath}: The directory path in the container where the Secret mounts the
keytab files. This path should match the directory specified by
hadoop.kerberos.keytab when creating a catalog. For catalog configuration
details, refer to the [Hive Catalog
configuration](../../../lakehouse/catalogs/hive-catalog.md#configuring-catalog)
documentation.
+ ${krb5ConfigMapName}: Name of the ConfigMap containing the krb5.conf file.
${keytabSecretName}: Name of the Secret containing the keytab files.
${keytabPath}: The directory path in the container where the Secret mounts the
keytab files. This path should match the directory specified by
hadoop.kerberos.keytab when creating a catalog. For catalog configuration
details, refer to the [Hive Catalog
configuration](../../../lakehouse/catalogs/hive-catalog.mdx) documentation.
diff --git
a/versioned_docs/version-4.x/lakehouse/best-practices/doris-iceberg.md
b/versioned_docs/version-4.x/lakehouse/best-practices/doris-iceberg.md
index 9419eb4691c..2988511e9c9 100644
--- a/versioned_docs/version-4.x/lakehouse/best-practices/doris-iceberg.md
+++ b/versioned_docs/version-4.x/lakehouse/best-practices/doris-iceberg.md
@@ -36,7 +36,7 @@ Users can quickly build an efficient Data Lakehouse solution
based on Apache Dor
In the future, Apache Iceberg will serve as one of the native table engines
for Apache Doris, providing more comprehensive analysis and management
functions for lake-formatted data. Apache Doris will also gradually support
more advanced features of Apache Iceberg, including Update/Delete/Merge,
sorting during write-back, incremental data reading, metadata management, etc.,
to jointly build a unified, high-performance, real-time data lake platform.
-For more information, please refer to [Iceberg
Catalog](../catalogs/iceberg-catalog.md)
+For more information, please refer to [Iceberg
Catalog](../catalogs/iceberg-catalog.mdx)
## User Guide
diff --git
a/versioned_docs/version-4.x/lakehouse/best-practices/doris-paimon.md
b/versioned_docs/version-4.x/lakehouse/best-practices/doris-paimon.md
index e62a99642c4..96b0b86ce7c 100644
--- a/versioned_docs/version-4.x/lakehouse/best-practices/doris-paimon.md
+++ b/versioned_docs/version-4.x/lakehouse/best-practices/doris-paimon.md
@@ -35,7 +35,7 @@ In the future, Apache Doris will gradually support more
advanced features of Apa
This article will explain how to quickly set up an Apache Doris + Apache
Paimon testing & demonstration environment in a Docker environment and
demonstrate the usage of various features.
-For more information, please refer to [Paimon
Catalog](../catalogs/paimon-catalog.md)
+For more information, please refer to [Paimon
Catalog](../catalogs/paimon-catalog.mdx)
## User Guide
diff --git a/versioned_docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
b/versioned_docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
index f42c1a46a54..7c7603c36f9 100644
--- a/versioned_docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
+++ b/versioned_docs/version-4.x/lakehouse/catalogs/hive-catalog.mdx
@@ -32,8 +32,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'fs.defaultFS' = '<fs_defaultfs>', -- optional
{MetaStoreProperties},
{StorageProperties},
- {CommonProperties},
- {OtherProperties}
+ {HiveProperties},
+ {CommonProperties}
);
```
@@ -59,13 +59,9 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the "Supported Storage
Systems" section for details.
-* `{CommonProperties}`
-
- The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
-
-* `{OtherProperties}`
+* `{HiveProperties}`
- OtherProperties section is for entering properties related to Hive Catalog.
+ The HiveProperties section is for entering parameters related to the Hive Catalog.
* `get_schema_from_table`: The default value is false. By default, Doris
will obtain the table schema information from the Hive Metastore. However, in
some cases, compatibility issues may occur, such as the error `Storage schema
reading not supported`. In this case, you can set this parameter to true, and
the table schema will be obtained directly from the Table object. But please
note that this method will cause the default value information of the column to
be ignored. This property [...]
@@ -73,6 +69,10 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
* `hive.ignore_absent_partitions`: Whether to ignore non-existent
partitions. Defaults to `true`. If set to `false`, the query will report an
error when encountering non-existent partitions. This parameter has been
supported since version 3.0.2.
+* `{CommonProperties}`
+
+ The CommonProperties section is for entering common attributes. Please see
the "Common Properties" section in the [Catalog
Overview](../catalog-overview.md).
+
### Supported Hive Versions
Supports Hive 1.x, 2.x, 3.x, and 4.x.
diff --git a/versioned_docs/version-4.x/lakehouse/catalogs/hudi-catalog.md
b/versioned_docs/version-4.x/lakehouse/catalogs/hudi-catalog.md
index d6cf7491b47..aeb62cbedd8 100644
--- a/versioned_docs/version-4.x/lakehouse/catalogs/hudi-catalog.md
+++ b/versioned_docs/version-4.x/lakehouse/catalogs/hudi-catalog.md
@@ -101,7 +101,7 @@ The current dependent Hudi version is 0.15. It is
recommended to access Hudi dat
## Examples
-The creation of a Hudi Catalog is similar to a Hive Catalog. For more
examples, please refer to [Hive Catalog](./hive-catalog.md).
+The creation of a Hudi Catalog is similar to a Hive Catalog. For more
examples, please refer to [Hive Catalog](./hive-catalog.mdx).
```sql
CREATE CATALOG hudi_hms PROPERTIES (
diff --git a/versioned_docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
b/versioned_docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
index c823f9dbad1..6e48dd5b926 100644
--- a/versioned_docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
+++ b/versioned_docs/version-4.x/lakehouse/catalogs/iceberg-catalog.mdx
@@ -35,6 +35,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
'warehouse' = '<warehouse>' --optional
{MetaStoreProperties},
{StorageProperties},
+ {IcebergProperties},
{CommonProperties}
);
```
@@ -69,6 +70,16 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
The StorageProperties section is for entering connection and authentication
information related to the storage system. Refer to the section on [Supported
Storage Systems].
+* `{IcebergProperties}`
+
+ The IcebergProperties section is used to fill in parameters specific to
Iceberg Catalog.
+
+ - `list-all-tables`
+
+  Applies to Iceberg Catalogs that use Hive Metastore as the metadata service. Defaults to `true`. By default, the `SHOW TABLES` operation lists tables of all types in the current Database (the Hive Metastore may store non-Iceberg tables). This approach has the best performance.
+
+  If set to `false`, Doris checks the type of each table one by one and returns only Iceberg tables. This mode performs poorly when there are many tables.
+
* `{CommonProperties}`
The CommonProperties section is for entering general properties. See the
[Catalog Overview](../catalog-overview.md) for details on common properties.
diff --git a/versioned_docs/version-4.x/lakehouse/lakehouse-overview.md
b/versioned_docs/version-4.x/lakehouse/lakehouse-overview.md
index 071979211d8..52a4e785a88 100644
--- a/versioned_docs/version-4.x/lakehouse/lakehouse-overview.md
+++ b/versioned_docs/version-4.x/lakehouse/lakehouse-overview.md
@@ -23,7 +23,7 @@ Whether it's Hive, Iceberg, Hudi, Paimon, or database systems
supporting the JDB
For lakehouse systems, Doris can obtain the structure and distribution
information of data tables from metadata services such as Hive Metastore, AWS
Glue, and Unity Catalog, perform reasonable query planning, and utilize the MPP
architecture for distributed computing.
-For details, refer to each catalog document, such as [Iceberg
Catalog](./catalogs/iceberg-catalog.md)
+For details, refer to each catalog document, such as [Iceberg
Catalog](./catalogs/iceberg-catalog.mdx)
#### Extensible Connector Framework
diff --git a/versioned_docs/version-4.x/releasenotes/v1.1/release-1.1.0.md
b/versioned_docs/version-4.x/releasenotes/v1.1/release-1.1.0.md
index fab5d3be8fb..83cc5957f0c 100644
--- a/versioned_docs/version-4.x/releasenotes/v1.1/release-1.1.0.md
+++ b/versioned_docs/version-4.x/releasenotes/v1.1/release-1.1.0.md
@@ -170,7 +170,7 @@ If you encounter any problems with use, please feel free to
contact us through G
GitHub Forum:
[https://github.com/apache/doris/discussions](https://github.com/apache/doris/discussions)
-Mailing list: [[email protected]]([email protected])
+Mailing list: [email protected]
## Thanks
diff --git a/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.2.md
b/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.2.md
index 1bab1c03ebc..64086421a49 100644
--- a/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.2.md
+++ b/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.2.md
@@ -13,7 +13,7 @@
2. Some of MySQL Connector (eg, dotnet MySQL.Data) rely on variable's column
type to make connection.
- eg, select @[@autocommit]([@autocommit](https://github.com/autocommit))
should with column type BIGINT, not BIT, otherwise it will throw error. So we
change column type of @[@autocommit](https://github.com/autocommit) to BIGINT.
+ e.g., `select @autocommit` should return column type BIGINT, not BIT; otherwise it will throw an error. So we changed the column type of `@autocommit` to BIGINT.
- https://github.com/apache/doris/pull/33282
@@ -88,4 +88,4 @@
6. Fix `unix_timestamp` core for string input in auto partition.
- - https://github.com/apache/doris/pull/32871
\ No newline at end of file
+ - https://github.com/apache/doris/pull/32871
diff --git a/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.4.md
b/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.4.md
index 7bf9ca4a213..bb4939f23a4 100644
--- a/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.4.md
+++ b/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.4.md
@@ -63,9 +63,9 @@
- Build support for internal table triggered updates, where if a materialized
view uses an internal table and the data in the internal table changes, it can
trigger a refresh of the materialized view, specifying REFRESH ON COMMIT when
creating the materialized view.
-- Support transparent rewriting for single tables. For more information, see
[Querying Async Materialized
View](../query/view-materialized-view/query-async-materialized-view.md).
+- Support transparent rewriting for single tables. For more information, see
[Querying Async Materialized
View](../../query-acceleration/materialized-view/async-materialized-view/functions-and-demands.md).
-- Transparent rewriting supports aggregation roll-up for agg_state, agg_union
types; materialized views can be defined as agg_state or agg_union, queries can
use specific aggregation functions, or use agg_merge. For more information, see
[AGG_STATE](../sql-manual/sql-types/Data-Types/AGG_STATE.md).
+- Transparent rewriting supports aggregation roll-up for agg_state, agg_union
types; materialized views can be defined as agg_state or agg_union, queries can
use specific aggregation functions, or use agg_merge. For more information, see
[AGG_STATE](../../sql-manual/basic-element/sql-data-types/aggregate/AGG-STATE.md)
### Others
@@ -267,4 +267,4 @@
Thanks to every one who contributes to this release.
-@airborne12, @amorynan, @AshinGau, @BePPPower, @BiteTheDDDDt, @ByteYue,
@caiconghui, @CalvinKirs, @cambyzju, @catpineapple, @cjj2010, @csun5285,
@DarvenDuan, @dataroaring, @deardeng, @Doris-Extras, @eldenmoon, @englefly,
@feiniaofeiafei, @felixwluo, @freemandealer, @Gabriel39, @gavinchou, @GoGoWen,
@HappenLee, @hello-stephen, @hubgeter, @hust-hhb, @jacktengg, @jackwener,
@jeffreys-cat, @Jibing-Li, @kaijchen, @kaka11chen, @Lchangliang, @liaoxin01,
@LiBinfeng-01, @lide-reed, @luennng, @luw [...]
\ No newline at end of file
+@airborne12, @amorynan, @AshinGau, @BePPPower, @BiteTheDDDDt, @ByteYue,
@caiconghui, @CalvinKirs, @cambyzju, @catpineapple, @cjj2010, @csun5285,
@DarvenDuan, @dataroaring, @deardeng, @Doris-Extras, @eldenmoon, @englefly,
@feiniaofeiafei, @felixwluo, @freemandealer, @Gabriel39, @gavinchou, @GoGoWen,
@HappenLee, @hello-stephen, @hubgeter, @hust-hhb, @jacktengg, @jackwener,
@jeffreys-cat, @Jibing-Li, @kaijchen, @kaka11chen, @Lchangliang, @liaoxin01,
@LiBinfeng-01, @lide-reed, @luennng, @luw [...]
diff --git a/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.6.md
b/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.6.md
index 963c8dc4b9e..a56a26c87a3 100644
--- a/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.6.md
+++ b/versioned_docs/version-4.x/releasenotes/v2.1/release-2.1.6.md
@@ -396,7 +396,7 @@ Dear community, **Apache Doris version 2.1.6 was officially
released on Septembe
- Fixed the occasional planning error issue when executing `insert into as
select` with CTEs. [#38526](https://github.com/apache/doris/pull/38526)
-- Fixed the issue where `insert into values` cannot automatically fill null
default values. **[[fix](Nereids) fix insert into table with null literal
default value #39122](https://github.com/apache/doris/pull/39122)**
+- Fixed the issue where `insert into values` cannot automatically fill null
default values. [#39122](https://github.com/apache/doris/pull/39122)
- Fixed the NPE issue caused by using cte in delete without using it.
[#39379](https://github.com/apache/doris/pull/39379)
@@ -502,4 +502,4 @@ When upgrading Doris, please follow the principle of not
skipping two minor vers
For example, if you are upgrading from version 0.15.x to 2.0.x, it is
recommended to first upgrade to the latest version of 1.1, then upgrade to the
latest version of 1.2, and finally upgrade to the latest version of 2.0.
-For more upgrade information, see the documentation: [Cluster
Upgrade](../../admin-manual/cluster-management/upgrade)
\ No newline at end of file
+For more upgrade information, see the documentation: [Cluster
Upgrade](../../admin-manual/cluster-management/upgrade)
diff --git
a/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
b/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
index 6dc2d6eeff0..ed3a1a24d2b 100644
---
a/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
+++
b/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor.md
@@ -33,7 +33,7 @@ QUARTER_FLOOR(`<date_or_time_expr>`, `<period>`, `<origin>`)
| Parameter | Description |
| ---- | ---- |
-| `<date_or_time_expr>` | The datetime value to be rounded down, type DATETIME
or DATE. For specific datetime/date formats, see [datetime
conversion](../../../../../current/sql-manual/basic-element/sql-data-types/conversion/datetime-conversion)
and [date
conversion](../../../../../current/sql-manual/basic-element/sql-data-types/conversion/date-conversion)
|
+| `<date_or_time_expr>` | The datetime value to be rounded down, type DATETIME
or DATE. For specific datetime/date formats, see [datetime
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion.md)
and [date
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion)
|
| `<period>` | Quarter period value, type INT, indicating the number of
quarters contained in each period |
| `<origin_datetime>` | Starting time point of the period, type DATETIME/DATE,
default is 0001-01-01 00:00:00 |
@@ -76,7 +76,7 @@ QUARTER_CEIL(`<date_or_time_expr>`, `<period>`, `<origin>`)
| Parameter | Description |
| ---- | ---- |
-| `<date_or_time_expr>` | The datetime value to be rounded up. It is a valid
date expression that supports date/datetime types. For specific datetime and
date formats, see [datetime
conversion](../../../../../docs/sql-manual/basic-element/sql-data-types/conversion/datetime-conversion)
and [date
conversion](../../../../../docs/sql-manual/basic-element/sql-data-types/conversion/date-conversion).
|
+| `<date_or_time_expr>` | The datetime value to be rounded up. It is a valid
date expression that supports date/datetime types. For specific datetime and
date formats, see [datetime
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/datetime-conversion.md)
and [date
conversion](../../../../sql-manual/basic-element/sql-data-types/conversion/date-conversion).
|
| `<period>` | Quarter period value, type INT, indicating the number of
quarters contained in each period |
| `<origin_datetime>` | The starting time point of the period, supports
date/datetime types, default value is 0001-01-01 00:00:00 |
diff --git
a/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/json-functions/json-extract-double.md
b/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/json-functions/json-extract-double.md
index 51a2e420030..2f71dced57e 100644
---
a/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/json-functions/json-extract-double.md
+++
b/versioned_docs/version-4.x/sql-manual/sql-functions/scalar-functions/json-functions/json-extract-double.md
@@ -6,7 +6,7 @@
---
## Description
-`JSON_EXTRACT_DOUBLE` extracts the field specified by `<json_path>` from a
JSON object and converts it to
[`DOUBLE`](../../../basic-element/sql-data-types/numeric/DOUBLE.md) type.
+`JSON_EXTRACT_DOUBLE` extracts the field specified by `<json_path>` from a
JSON object and converts it to
[`DOUBLE`](../../../basic-element/sql-data-types/numeric/FLOATING-POINT.md)
type.
## Syntax
```sql
@@ -83,4 +83,4 @@ JSON_EXTRACT_DOUBLE(<json_object>, <json_path>)
+--------------------------------------------------------------+
| NULL |
+--------------------------------------------------------------+
- ```
\ No newline at end of file
+ ```
diff --git
a/versioned_docs/version-4.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
b/versioned_docs/version-4.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
index 174836973fe..731fef3b360 100644
---
a/versioned_docs/version-4.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
+++
b/versioned_docs/version-4.x/sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-VAULT.md
@@ -48,7 +48,7 @@ CREATE STORAGE VAULT [IF NOT EXISTS] <vault_name> [
<properties> ]
1. `s3.endpoint`: if neither the `http://` nor the `https://` prefix is provided,
`http` is used by default. If a prefix is explicitly specified, the specified
prefix takes effect;
-2. Doris also support `AWS Assume Role` for S3 Vault(only for AWS S3 now),
please refer to [AWS
intergration](../../../admin-manual/auth/integrations/aws-authentication-and-authorization.md#assumed-role-authentication).
+2. Doris also supports `AWS Assume Role` for S3 Vault (only for AWS S3 now);
please refer to [AWS integration](../../../../lakehouse/storages/s3.md).
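The `s3.endpoint` prefix-defaulting rule from note 1 can be sketched as follows. This is a minimal illustration under the stated assumptions (only `http://` and `https://` are recognized prefixes; `normalize_s3_endpoint` is a hypothetical helper, not a Doris API):

```python
def normalize_s3_endpoint(endpoint: str) -> str:
    """Keep an explicit http:// or https:// prefix as given;
    otherwise default to http:// (sketch of the s3.endpoint rule)."""
    if endpoint.startswith(("http://", "https://")):
        return endpoint
    return "http://" + endpoint

print(normalize_s3_endpoint("s3.us-east-1.amazonaws.com"))
# -> http://s3.us-east-1.amazonaws.com
print(normalize_s3_endpoint("https://s3.us-east-1.amazonaws.com"))
# -> https://s3.us-east-1.amazonaws.com
```

An endpoint given without a scheme is therefore reached over plain HTTP unless `https://` is written out explicitly.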
### HDFS vault
@@ -165,7 +165,7 @@ PROPERTIES (
**Note: **
-Doris also support `AWS Assume Role` for S3 Vault(only for AWS S3 now), please
refer to [AWS
intergration](../../../admin-manual/auth/integrations/aws-authentication-and-authorization.md#assumed-role-authentication).
+Doris also supports `AWS Assume Role` for S3 Vault (only for AWS S3 now);
please refer to [AWS integration](../../../../lakehouse/storages/s3.md).
### 7. Create MinIO storage vault
diff --git a/versioned_docs/version-4.x/table-design/data-type.md
b/versioned_docs/version-4.x/table-design/data-type.md
index 21dbc252af0..183a47d9cde 100644
--- a/versioned_docs/version-4.x/table-design/data-type.md
+++ b/versioned_docs/version-4.x/table-design/data-type.md
@@ -19,8 +19,8 @@ The list of data types supported by Doris is as follows:
| [INT](../sql-manual/basic-element/sql-data-types/numeric/INT)
| 4 | Integer value, signed range is from -2147483648 to
2147483647. |
| [BIGINT](../sql-manual/basic-element/sql-data-types/numeric/BIGINT)
| 8 | Integer value, signed range is from -9223372036854775808 to
9223372036854775807. |
| [LARGEINT](../sql-manual/basic-element/sql-data-types/numeric/LARGEINT)
| 16 | Integer value, range is [-2^127 + 1 to 2^127 - 1].
|
-| [FLOAT](../sql-manual/basic-element/sql-data-types/numeric/FLOAT)
| 4 | Single precision floating point number, range is [-3.4 *
10^38 to 3.4 * 10^38]. |
-| [DOUBLE](../sql-manual/basic-element/sql-data-types/numeric/DOUBLE)
| 8 | Double precision floating point number, range is [-1.79 *
10^308 to 1.79 * 10^308]. |
+| [FLOAT](../sql-manual/basic-element/sql-data-types/numeric/FLOATING-POINT)
| 4 | Single precision floating point number, range is
[-3.4 * 10^38 to 3.4 * 10^38]. |
+| [DOUBLE](../sql-manual/basic-element/sql-data-types/numeric/FLOATING-POINT)
| 8 | Double precision floating point number, range is
[-1.79 * 10^308 to 1.79 * 10^308]. |
| [DECIMAL](../sql-manual/basic-element/sql-data-types/numeric/DECIMAL)
| 4/8/16/32 | An exact fixed-point number defined by precision (total
number of digits) and scale (number of digits to the right of the decimal
point). Format: DECIMAL(P[,S]), where P is precision and S is scale. The range
for P is [1, MAX_P], where MAX_P=38 when `enable_decimal256`=false, and
MAX_P=76 when `enable_decimal256`=true, and for S is [0, P]. <br>The default
value of `enable_decimal256` is f [...]
## [Datetime data
type](../sql-manual/basic-element/sql-data-types/data-type-overview#date-types)
@@ -53,8 +53,8 @@ The list of data types supported by Doris is as follows:
| -------------- | --------------- |
------------------------------------------------------------ |
| [HLL](../sql-manual/basic-element/sql-data-types/aggregate/HLL) |
Variable Length | HLL stands for HyperLogLog, is a fuzzy deduplication. It
performs better than Count Distinct when dealing with large datasets. The
error rate of HLL is typically around 1%, and sometimes it can reach 2%. HLL
cannot be used as a key column, and the aggregation type is HLL_UNION when
creating a table. Users do not need to specify the length or default value as
it is internally controlled bas [...]
| [BITMAP](../sql-manual/basic-element/sql-data-types/aggregate/BITMAP)
| Variable Length | BITMAP type can be used in Aggregate tables, Unique tables
or Duplicate tables. - When used in a Unique table or a Duplicate table,
BITMAP must be employed as non-key columns. - When used in an Aggregate table,
BITMAP must also serve as non-key columns, and the aggregation type must be set
to BITMAP_UNION during table creation. Users do not need to specify the length
or default value as [...]
-| [QUANTILE_STATE](../sql-manual/sql-data-types/aggregate/QUANTILE_STATE) |
Variable Length | A type used to calculate approximate quantile values. When
loading, it performs pre-aggregation for the same keys with different values.
When the number of values does not exceed 2048, it records all data in detail.
When the number of values is greater than 2048, it employs the TDigest
algorithm to aggregate (cluster) the data and store the centroid points after
clustering. QUANTILE_STATE can [...]
-| [AGG_STATE](../sql-manual/sql-data-types/aggregate/AGG_STATE) |
Variable Length | Aggregate function can only be used with state/merge/union
function combiners. AGG_STATE cannot be used as a key column. When creating a
table, the signature of the aggregate function needs to be declared alongside.
Users do not need to specify the length or default value. The actual data
storage size depends on the function's implementation. |
+|
[QUANTILE_STATE](../sql-manual/basic-element/sql-data-types/aggregate/QUANTILE-STATE.md)
| Variable Length | A type used to calculate approximate quantile values.
When loading, it performs pre-aggregation for the same keys with different
values. When the number of values does not exceed 2048, it records all data in
detail. When the number of values is greater than 2048, it employs the TDigest
algorithm to aggregate (cluster) the data and store the centroid points after
clustering. Q [...]
+| [AGG_STATE](../sql-manual/basic-element/sql-data-types/aggregate/AGG-STATE)
| Variable Length | Aggregate function can only be used with
state/merge/union function combiners. AGG_STATE cannot be used as a key
column. When creating a table, the signature of the aggregate function needs to
be declared alongside. Users do not need to specify the length or default
value. The actual data storage size depends on the function's implementation. |
## [IP
types](../sql-manual/basic-element/sql-data-types/data-type-overview#ip-types)
diff --git a/versioned_docs/version-4.x/table-design/temporary-table.md
b/versioned_docs/version-4.x/table-design/temporary-table.md
index a66c119c598..6cba6bd10f6 100644
--- a/versioned_docs/version-4.x/table-design/temporary-table.md
+++ b/versioned_docs/version-4.x/table-design/temporary-table.md
@@ -27,9 +27,9 @@ If a temporary table and a non-temporary table with the same
name exist simultan
### Creating a Temporary Table
Tables of various models can be defined as temporary tables, whether they are
Unique, Aggregate, or Duplicate models. You can create temporary tables by
adding the TEMPORARY keyword in the following SQL statements:
-- [CREATE
TABLE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE)
-- [CREATE TABLE AS
SELECT](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE-AS-SELECT)
-- [CREATE TABLE
LIKE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-TABLE-LIKE)
+- [CREATE
TABLE](../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE.md)
+- [CREATE TABLE AS
SELECT](../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE.md)
+- [CREATE TABLE
LIKE](../sql-manual/sql-statements/table-and-view/table/CREATE-TABLE.md)
The other uses of temporary tables are basically the same as regular internal
tables. Except for the above-mentioned Create statement, other DDL and DML
statements do not require adding the TEMPORARY keyword.
@@ -41,4 +41,4 @@ The other uses of temporary tables are basically the same as
regular internal ta
- Due to their temporary nature, creating views and materialized views based
on temporary tables is not supported.
- Temporary tables cannot be backed up and are not supported for
synchronization using CCR/Sync Job.
- Export, Stream Load, Broker Load, S3 Load, MySQL Load, Routine Load, and
Spark Load are not supported.
-- When a temporary table is deleted, it does not go to the recycle bin but is
permanently deleted immediately.
\ No newline at end of file
+- When a temporary table is deleted, it does not go to the recycle bin but is
permanently deleted immediately.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]