This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 3e647599adf [opt](doc) opt doris catalog doc and add kerberos doc 
(#3168)
3e647599adf is described below

commit 3e647599adf7281098432753b4b585acdc4fb925
Author: Mingyu Chen (Rayner) <[email protected]>
AuthorDate: Mon Dec 8 21:35:33 2025 +0800

    [opt](doc) opt doris catalog doc and add kerberos doc (#3168)
    
    ## Versions
    
    - [x] dev
    - [x] 4.x
    - [x] 3.x
    - [ ] 2.1
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/lakehouse/best-practices/kerberos.md          | 264 +++++++++++++++++++++
 docs/lakehouse/catalogs/doris-catalog.mdx          |  19 +-
 .../current/lakehouse/best-practices/kerberos.md   | 264 +++++++++++++++++++++
 .../current/lakehouse/catalogs/doris-catalog.mdx   |  17 +-
 .../lakehouse/best-practices/kerberos.md           | 264 +++++++++++++++++++++
 .../lakehouse/best-practices/kerberos.md           | 264 +++++++++++++++++++++
 .../lakehouse/catalogs/doris-catalog.mdx           |  17 +-
 sidebars.ts                                        |   1 +
 .../lakehouse/best-practices/kerberos.md           | 264 +++++++++++++++++++++
 .../lakehouse/best-practices/kerberos.md           | 264 +++++++++++++++++++++
 .../lakehouse/catalogs/doris-catalog.mdx           |  19 +-
 versioned_sidebars/version-3.x-sidebars.json       |   1 +
 versioned_sidebars/version-4.x-sidebars.json       |   1 +
 13 files changed, 1653 insertions(+), 6 deletions(-)

diff --git a/docs/lakehouse/best-practices/kerberos.md 
b/docs/lakehouse/best-practices/kerberos.md
new file mode 100644
index 00000000000..dbc8170abe9
--- /dev/null
+++ b/docs/lakehouse/best-practices/kerberos.md
@@ -0,0 +1,264 @@
+---
+{
+    "title": "Kerberos Best Practices",
+    "language": "en"
+}
+---
+
+When users run federated analytical queries across multiple data sources with Doris, different clusters may use different Kerberos authentication credentials.
+
+Take a large fund company as an example. Its internal data platform is divided into multiple functional clusters, maintained by different technical or business teams, each configured with an independent Kerberos Realm for identity authentication and access control:
+
+- The production cluster is used for daily net asset value calculation and risk assessment; its data is strictly isolated and accessible only to authorized services (Realm: PROD.FUND.COM).
+- The analysis cluster is used for strategy research and model backtesting; Doris runs ad-hoc queries against it through TVFs (Realm: ANALYSIS.FUND.COM).
+- The data lake cluster integrates an Iceberg Catalog for archiving and analyzing large volumes of historical market data, logs, and other data (Realm: LAKE.FUND.COM).
+
+Since these clusters have not established cross-realm trust and cannot share authentication information, accessing these heterogeneous data sources in a unified way requires authenticating against, and managing the contexts of, multiple Kerberos instances at the same time.
+
+**This document focuses on how to configure and access data sources in multi-Kerberos environments.**
+
+> This feature is supported since version 3.1.
+
+## Multi-Kerberos Cluster Authentication Configuration
+
+### krb5.conf
+
+`krb5.conf` contains Kerberos configuration information, KDC locations, some 
**default values** for Kerberos services, and hostname-to-Realm mapping 
information.
+
+When deploying krb5.conf, ensure it is placed on every Doris node (FE and BE). The default location is `/etc/krb5.conf`.
+
+### realms
+
+A realm defines a Kerberos network consisting of a KDC and its many clients, for example EXAMPLE.COM.
+
+When configuring multiple clusters, you need to configure multiple Realms in 
one `krb5.conf`. KDC and `admin_server` can also be domain names.
+
+```
+[realms]
+EMR-IP.EXAMPLE = {
+    kdc = 172.21.16.8:88
+    admin_server = 172.21.16.8
+}
+EMR-HOST.EXAMPLE = {
+    kdc = emr_hostname
+    admin_server = emr_hostname
+}
+```
+
+### domain_realm
+
+Configures the mapping from domain to Realm for nodes where Kerberos services 
are located.
+
+```toml
+[libdefaults]
+dns_lookup_realm = true
+dns_lookup_kdc = true
+[domain_realm]
+172.21.16.8 = EMR-IP.EXAMPLE
+emr-host.example = EMR-HOST.EXAMPLE
+```
+
+For example, for the principal `emr1/[email protected]`, the domain part (`domain_name`) is used to look up the corresponding Realm when locating the KDC. If no mapping matches, the KDC for the Realm cannot be found.
+
+Two kinds of errors related to `domain_realm` typically appear in Doris's `log/be.out` or `log/fe.out`:
+
+```
+* Unable to locate KDC for realm/Cannot locate KDC
+
+* No service creds
+```
+
+### keytab and principal
+
+In multi-Kerberos cluster environments, the keytab files for different clusters usually reside at different paths, such as `/path/to/serverA.keytab` and `/path/to/serverB.keytab`. When accessing a cluster, you need to use its corresponding keytab.
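+
+Before wiring the keytabs into any configuration, you can confirm which principals each one actually contains. A minimal check, assuming the MIT Kerberos client tools are installed, using the placeholder paths above:
+
+```bash
+# List the principals (and key timestamps) stored in each keytab
+klist -kt /path/to/serverA.keytab
+klist -kt /path/to/serverB.keytab
+```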
+
+If Kerberos authentication is enabled on the HDFS cluster, the `core-site.xml` file generally contains the `hadoop.security.auth_to_local` property, which maps Kerberos principals to shorter local usernames; Hadoop reuses the Kerberos rule syntax for this.
+
+If the property is not configured properly, you may encounter a `NoMatchingRule("No rules applied to` exception. See the code:
+
+[hadoop/src/core/org/apache/hadoop/security/KerberosName.java](https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/KerberosName.java#L399)
+
+The `hadoop.security.auth_to_local` parameter contains a set of mapping rules. A principal is matched against the RULEs from top to bottom; the first matching rule produces the local username, and the remaining rules are ignored. The rule format is:
+
+```
+RULE:[<principal translation>](acceptance filter)<short name substitution>
+```
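+
+For example, the rule `RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//` (used in the catalog examples below) first translates a two-component principal of the form `hive/<host>@LABS.TERADATA.COM` into `hive@LABS.TERADATA.COM` (`$1` is the first component, `$0` is the realm), the acceptance filter matches it, and the sed-style substitution strips the realm, yielding the short name `hive`. If a Hadoop client is available, you can test your rules from the command line (a sketch, assuming a standard Hadoop installation whose loaded `core-site.xml` contains the rules; the principal is illustrative):
+
+```bash
+# Prints the short name produced by the configured auth_to_local rules
+hadoop org.apache.hadoop.security.HadoopKerberosName \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+```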
+
+To match principals used by different Kerberos services in multi-cluster 
environments, the recommended configuration is:
+
+```xml
+<property>
+    <name>hadoop.security.auth_to_local</name>
+    <value>RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           DEFAULT</value>
+</property>
+```
+
+The above configuration can be used to add or replace the 
`hadoop.security.auth_to_local` property in `core-site.xml`. Place 
`core-site.xml` in `fe/conf` and `be/conf` to make it effective in the Doris 
environment.
+
+If you need it to take effect separately in OUTFILE, EXPORT, Broker Load, 
Catalog (Hive, Iceberg, Hudi), TVF, and other functions, you can configure it 
directly in their properties:
+
+```sql
+"hadoop.security.auth_to_local" = "RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   DEFAULT"
+```
+
+To verify that the mapping rules match correctly, check whether the following error occurs when accessing the different clusters:
+
+```
+NoMatchingRule: No rules applied to hadoop/domain\[email protected]
+```
+
+If it appears, the principal was not matched by any rule.
+
+## Best Practices
+
+This section shows how to use the Docker environment provided by the [Apache Doris official repository](https://github.com/apache/doris/tree/master/docker/thirdparties) to start Kerberos-enabled Hive/HDFS services, and how to create Kerberos-enabled Hive Catalogs in Doris.
+
+### Environment Description
+
+* Use the Kerberos services provided by Doris (two Hive clusters, two KDCs):
+
+  * Docker startup directory: `docker/thirdparties`
+
+  * krb5.conf template:
+
+    
[`docker-compose/kerberos/common/conf/doris-krb5.conf`](https://github.com/apache/doris/blob/master/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf)
+
+### 1. Prepare keytab files and permissions
+
+Copy the keytab files to a local directory:
+
+```bash
+mkdir -p ~/doris-keytabs
+cp <hive-presto-master.keytab> ~/doris-keytabs/
+cp <other-hive-presto-master.keytab> ~/doris-keytabs/
+```
+
+Set file permissions to prevent authentication failure:
+
+```bash
+chmod 400 ~/doris-keytabs/*.keytab
+```
+
+### 2. Prepare krb5.conf file
+
+1. Use the `krb5.conf` template file provided by Doris
+
+2. If you need to access multiple Kerberos HDFS clusters simultaneously, you need to **merge the krb5.conf files** (a verification sketch follows this list). The basic requirements:
+
+   * `[realms]`: Write the Realms and KDC IPs of all clusters.
+
+   * `[domain_realm]`: Write the domain or IP to Realm mappings.
+
+   * `[libdefaults]`: Unify the encryption algorithms (such as des3-cbc-sha1).
+
+3. Example:
+
+    ```toml
+    [libdefaults]
+        default_realm = LABS.TERADATA.COM
+        allow_weak_crypto = true
+        dns_lookup_realm = true
+        dns_lookup_kdc = true
+
+    [realms]
+        LABS.TERADATA.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+        OTHERREALM.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+
+    [domain_realm]
+        presto-master.docker.cluster = LABS.TERADATA.COM
+        hadoop-master-2 = OTHERREALM.COM
+        .labs.teradata.com = LABS.TERADATA.COM
+        .otherrealm.com = OTHERREALM.COM
+    ```
+
+4. Copy `krb5.conf` to the corresponding Docker directory:
+
+    ```bash
+    cp doris-krb5.conf ~/doris-kerberos/krb5.conf
+    ```
+
+### 3. Start Docker Kerberos environment
+
+1. Enter directory:
+
+    ```bash
+    cd docker/thirdparties
+    ```
+
+2. Start Kerberos environment:
+
+    ```bash
+    ./run-thirdparties-docker.sh -c kerberos
+    ```
+
+3. After startup, the following services and ports are available:
+
+   * Hive Metastore 1: 9583
+   * Hive Metastore 2: 9683
+   * HDFS 1: 8520
+   * HDFS 2: 8620
+
+### 4. Get container IP
+
+Use the following command to view a container's IP address:
+
+```bash
+docker inspect <container-name> | grep IPAddress
+```
+
+Or use 127.0.0.1 directly (provided the service ports are mapped to the host network).
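+
+A more targeted variant prints only the address (assuming the container is attached to a single Docker network):
+
+```bash
+docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
+```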
+
+### 5. Create Kerberos Hive Catalog
+
+1. Hive Catalog1
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_one
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9583",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8520",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = 
"hive/[email protected]",
+    "hadoop.kerberos.keytab" = 
"/mnt/disk1/gq/keytabs/keytabs/hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled " = "true",
+    "hadoop.security.auth_to_local" = 
"RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                        DEFAULT",
+    "hive.metastore.kerberos.principal" = 
"hive/[email protected]"
+    );
+    ```
+
+2. Hive Catalog2
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_two
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9683",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8620",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = 
"hive/[email protected]",
+    "hadoop.kerberos.keytab" = 
"/mnt/disk1/gq/keytabs/keytabs/other-hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled " = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                        DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+At this point, the multi-Kerberos cluster access configuration is complete. You can now query data from both Hive clusters using their respective Kerberos credentials.
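+
+For a quick smoke test, you can switch between the two catalogs and query each one (a sketch; the database and table names depend on what exists in each Hive cluster):
+
+```sql
+-- Browse the first catalog
+SWITCH multi_kerberos_one;
+SHOW DATABASES;
+
+-- Query the second catalog with a fully qualified name
+SELECT * FROM multi_kerberos_two.your_db.your_table LIMIT 10;
+```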
diff --git a/docs/lakehouse/catalogs/doris-catalog.mdx 
b/docs/lakehouse/catalogs/doris-catalog.mdx
index a1055af020e..47d4233de44 100644
--- a/docs/lakehouse/catalogs/doris-catalog.mdx
+++ b/docs/lakehouse/catalogs/doris-catalog.mdx
@@ -89,6 +89,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 ### Arrow Flight Mode
 
+> Supported since 4.0.2.
+
 When the `use_arrow_flight` property is `true`, it operates in Arrow Flight 
mode.
 
 ![arrow-flight-mode](/images/Lakehouse/doris-catalog/arrow-flight-mode.png)
@@ -101,6 +103,8 @@ In this mode, during cross-cluster queries, FEs synchronize 
schema and other met
 
 ### Virtual Cluster Mode
 
+> Supported since 4.0.3.
+
 When the `use_arrow_flight` property is `false`, it operates in virtual 
cluster mode.
 
 > Currently, this mode only supports compute-storage coupled Doris clusters.
@@ -117,7 +121,18 @@ FEs synchronize schema and other metadata through HTTP 
protocol. BEs directly tr
 
 ## Column Type Mapping
 
-Doris external table types are completely identical to local Doris types.
+### Arrow Flight Mode
+
+The column types and table models supported in this mode depend on the capabilities of Arrow Flight SQL. It currently has the following capabilities and limitations:
+
+- Supports all primitive types
+- Supports all nested types (Array, Map, Struct)
+- Does not support the HLL, BITMAP, and VARIANT types
+- Supports all table models (detail tables, aggregate tables, and primary key tables)
+
+### Virtual Cluster Mode
+
+In virtual cluster mode, all column types and all table models (detail tables, 
aggregate tables, and primary key tables) are supported.
 
 ## Query Operations
 
@@ -225,4 +240,4 @@ MySQL [(none)]> explain select * from demo.inner_table a 
join edoris.external.ex
 |      tablets=1/1, tabletList=1762481736238                                   
                                                             |
 |      cardinality=1, avgRowSize=7425.0, numNodes=1                            
                                                             |
 |      pushAggOp=NONE
-```
\ No newline at end of file
+```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/best-practices/kerberos.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/best-practices/kerberos.md
new file mode 100644
index 00000000000..e379a7f512c
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/best-practices/kerberos.md
@@ -0,0 +1,264 @@
+---
+{
+    "title": "Kerberos 最佳实践",
+    "language": "zh-CN"
+}
+---
+
+When users run federated analytical queries across multiple data sources with Doris, different clusters may use different Kerberos authentication credentials.
+
+Take a large fund company as an example. Its internal data platform is divided into multiple functional clusters, maintained by different technical or business teams, each configured with an independent Kerberos Realm for identity authentication and access control:
+
+- The production cluster is used for daily net asset value calculation and risk assessment; its data is strictly isolated and accessible only to authorized services (Realm: PROD.FUND.COM).
+- The analysis cluster is used for strategy research and model backtesting; Doris runs ad-hoc queries against it through TVFs (Realm: ANALYSIS.FUND.COM).
+- The data lake cluster integrates an Iceberg Catalog for archiving and analyzing large volumes of historical market data, logs, and other data (Realm: LAKE.FUND.COM).
+
+Since these clusters have not established cross-realm trust and cannot share authentication information, accessing these heterogeneous data sources in a unified way requires authenticating against, and managing the contexts of, multiple Kerberos instances at the same time.
+
+**This document focuses on how to configure and access data sources in multi-Kerberos environments.**
+
+> This feature is supported since version 3.1.
+
+## Multi-Kerberos Cluster Authentication Configuration
+
+### krb5.conf
+
+`krb5.conf` contains Kerberos configuration information, KDC locations, some **default values** for Kerberos services, and hostname-to-Realm mapping information.
+
+When deploying krb5.conf, ensure it is placed on every Doris node (FE and BE). The default location is `/etc/krb5.conf`.
+
+### realms
+
+A realm defines a Kerberos network consisting of a KDC and its many clients, for example EXAMPLE.COM.
+
+When configuring multiple clusters, you need to configure multiple Realms in one `krb5.conf`. KDC and `admin_server` can also be domain names.
+
+```
+[realms]
+EMR-IP.EXAMPLE = {
+    kdc = 172.21.16.8:88
+    admin_server = 172.21.16.8
+}
+EMR-HOST.EXAMPLE = {
+    kdc = emr_hostname
+    admin_server = emr_hostname
+}
+```
+
+### domain_realm
+
+Configures the mapping from domain to Realm for the nodes where the Kerberos services are located.
+
+```toml
+[libdefaults]
+dns_lookup_realm = true
+dns_lookup_kdc = true
+[domain_realm]
+172.21.16.8 = EMR-IP.EXAMPLE
+emr-host.example = EMR-HOST.EXAMPLE
+```
+
+For example, for the principal `emr1/[email protected]`, the domain part (`domain_name`) is used to look up the corresponding Realm when locating the KDC. If no mapping matches, the KDC for the Realm cannot be found.
+
+Two kinds of errors related to `domain_realm` typically appear in Doris's `log/be.out` or `log/fe.out`:
+
+```
+* Unable to locate KDC for realm/Cannot locate KDC
+
+* No service creds
+```
+
+### keytab and principal
+
+In multi-Kerberos cluster environments, the keytab files for different clusters usually reside at different paths, such as `/path/to/serverA.keytab` and `/path/to/serverB.keytab`. When accessing a cluster, you need to use its corresponding keytab.
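+
+Before wiring the keytabs into any configuration, you can confirm which principals each one actually contains. A minimal check, assuming the MIT Kerberos client tools are installed, using the placeholder paths above:
+
+```bash
+# List the principals (and key timestamps) stored in each keytab
+klist -kt /path/to/serverA.keytab
+klist -kt /path/to/serverB.keytab
+```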
+
+If Kerberos authentication is enabled on the HDFS cluster, the `core-site.xml` file generally contains the `hadoop.security.auth_to_local` property, which maps Kerberos principals to shorter local usernames; Hadoop reuses the Kerberos rule syntax for this.
+
+If the property is not configured properly, you may encounter a `NoMatchingRule("No rules applied to` exception. See the code:
+
+[hadoop/src/core/org/apache/hadoop/security/KerberosName.java](https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/KerberosName.java#L399)
+
+The `hadoop.security.auth_to_local` parameter contains a set of mapping rules. A principal is matched against the RULEs from top to bottom; the first matching rule produces the local username, and the remaining rules are ignored. The rule format is:
+
+```
+RULE:[<principal translation>](acceptance filter)<short name substitution>
+```
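+
+For example, the rule `RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//` (used in the catalog examples below) first translates a two-component principal of the form `hive/<host>@LABS.TERADATA.COM` into `hive@LABS.TERADATA.COM` (`$1` is the first component, `$0` is the realm), the acceptance filter matches it, and the sed-style substitution strips the realm, yielding the short name `hive`. If a Hadoop client is available, you can test your rules from the command line (a sketch, assuming a standard Hadoop installation whose loaded `core-site.xml` contains the rules; the principal is illustrative):
+
+```bash
+# Prints the short name produced by the configured auth_to_local rules
+hadoop org.apache.hadoop.security.HadoopKerberosName \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+```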
+
+To match principals used by different Kerberos services in multi-cluster environments, the recommended configuration is:
+
+```xml
+<property>
+    <name>hadoop.security.auth_to_local</name>
+    <value>RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           DEFAULT</value>
+</property>
+```
+
+The above configuration can be used to add or replace the `hadoop.security.auth_to_local` property in `core-site.xml`. Place `core-site.xml` in `fe/conf` and `be/conf` to make it effective in the Doris environment.
+
+If you need it to take effect separately in OUTFILE, EXPORT, Broker Load, Catalog (Hive, Iceberg, Hudi), TVF, and other functions, you can configure it directly in their properties:
+
+```sql
+"hadoop.security.auth_to_local" = "RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   DEFAULT"
+```
+
+To verify that the mapping rules match correctly, check whether the following error occurs when accessing the different clusters:
+
+```
+NoMatchingRule: No rules applied to hadoop/domain\[email protected]
+```
+
+If it appears, the principal was not matched by any rule.
+
+## Best Practices
+
+This section shows how to use the Docker environment provided by the [Apache Doris official repository](https://github.com/apache/doris/tree/master/docker/thirdparties) to start Kerberos-enabled Hive/HDFS services, and how to create Kerberos-enabled Hive Catalogs in Doris.
+
+### Environment Description
+
+* Use the Kerberos services provided by Doris (two Hive clusters, two KDCs):
+
+  * Docker startup directory: `docker/thirdparties`
+
+  * krb5.conf template:
+
+    [`docker-compose/kerberos/common/conf/doris-krb5.conf`](https://github.com/apache/doris/blob/master/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf)
+
+### 1. Prepare keytab files and permissions
+
+Copy the keytab files to a local directory:
+
+```bash
+mkdir -p ~/doris-keytabs
+cp <hive-presto-master.keytab> ~/doris-keytabs/
+cp <other-hive-presto-master.keytab> ~/doris-keytabs/
+```
+
+Set file permissions to prevent authentication failure:
+
+```bash
+chmod 400 ~/doris-keytabs/*.keytab
+```
+
+### 2. Prepare krb5.conf file
+
+1. Use the `krb5.conf` template file provided by Doris
+
+2. If you need to access multiple Kerberos HDFS clusters simultaneously, you need to **merge the krb5.conf files** (a verification sketch follows this list). The basic requirements:
+
+   * `[realms]`: Write the Realms and KDC IPs of all clusters.
+
+   * `[domain_realm]`: Write the domain or IP to Realm mappings.
+
+   * `[libdefaults]`: Unify the encryption algorithms (such as des3-cbc-sha1).
+
+3. Example:
+
+    ```toml
+    [libdefaults]
+        default_realm = LABS.TERADATA.COM
+        allow_weak_crypto = true
+        dns_lookup_realm = true
+        dns_lookup_kdc = true
+
+    [realms]
+        LABS.TERADATA.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+        OTHERREALM.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+
+    [domain_realm]
+        presto-master.docker.cluster = LABS.TERADATA.COM
+        hadoop-master-2 = OTHERREALM.COM
+        .labs.teradata.com = LABS.TERADATA.COM
+        .otherrealm.com = OTHERREALM.COM
+    ```
+
+4. Copy `krb5.conf` to the corresponding Docker directory:
+
+    ```bash
+    cp doris-krb5.conf ~/doris-kerberos/krb5.conf
+    ```
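+
+To verify the merged file, you can authenticate against each realm by hand before starting Doris (a sketch; the keytab paths come from step 1, and the principal shown is illustrative and must match your environment):
+
+```bash
+# Point the MIT Kerberos tools at the merged configuration
+export KRB5_CONFIG=~/doris-kerberos/krb5.conf
+
+# Obtain a ticket from the LABS.TERADATA.COM KDC using its keytab
+kinit -kt ~/doris-keytabs/hive-presto-master.keytab \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+
+# Show the ticket that was issued
+klist
+```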
+
+### 3. Start Docker Kerberos environment
+
+1. Enter directory:
+
+    ```bash
+    cd docker/thirdparties
+    ```
+
+2. Start Kerberos environment:
+
+    ```bash
+    ./run-thirdparties-docker.sh -c kerberos
+    ```
+
+3. After startup, the following services and ports are available:
+
+   * Hive Metastore 1: 9583
+   * Hive Metastore 2: 9683
+   * HDFS 1: 8520
+   * HDFS 2: 8620
+
+### 4. Get container IP
+
+Use the following command to view a container's IP address:
+
+```bash
+docker inspect <container-name> | grep IPAddress
+```
+
+Or use 127.0.0.1 directly (provided the service ports are mapped to the host network).
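+
+A more targeted variant prints only the address (assuming the container is attached to a single Docker network):
+
+```bash
+docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
+```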
+
+### 5. Create Kerberos Hive Catalog
+
+1. Hive Catalog1
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_one
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9583",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8520",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = "hive/[email protected]",
+    "hadoop.kerberos.keytab" = "/mnt/disk1/gq/keytabs/keytabs/hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled" = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                       DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+2. Hive Catalog2
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_two
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9683",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8620",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = "hive/[email protected]",
+    "hadoop.kerberos.keytab" = "/mnt/disk1/gq/keytabs/keytabs/other-hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled" = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                       DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+At this point, the multi-Kerberos cluster access configuration is complete. You can now query data from both Hive clusters using their respective Kerberos credentials.
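+
+For a quick smoke test, you can switch between the two catalogs and query each one (a sketch; the database and table names depend on what exists in each Hive cluster):
+
+```sql
+-- Browse the first catalog
+SWITCH multi_kerberos_one;
+SHOW DATABASES;
+
+-- Query the second catalog with a fully qualified name
+SELECT * FROM multi_kerberos_two.your_db.your_table LIMIT 10;
+```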
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/doris-catalog.mdx
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/doris-catalog.mdx
index 8ed645f5b52..8597467fb45 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/doris-catalog.mdx
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/doris-catalog.mdx
@@ -90,6 +90,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 ### Arrow Flight Mode
 
+> Supported since version 4.0.2.
+
 When the `use_arrow_flight` property is `true`, it operates in Arrow Flight mode.
 
 ![arrow-flight-mode](/images/Lakehouse/doris-catalog/arrow-flight-mode.png)
@@ -102,6 +104,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 ### Virtual Cluster Mode
 
+> Supported since version 4.0.3.
+
 When the `use_arrow_flight` property is `false`, it operates in virtual cluster mode.
 
 > Currently, this mode only supports compute-storage coupled Doris clusters.
@@ -118,7 +122,18 @@ FE 之间通过 HTTP 协议同步 Schema 等元信息。BE 直接通过内部通
 
 ## Column Type Mapping
 
-Doris external table types are completely identical to local Doris types.
+### Arrow Flight Mode
+
+The column types and table models supported in this mode depend on the capabilities of Arrow Flight SQL. It currently has the following capabilities and limitations:
+
+- Supports all primitive types
+- Supports all nested types (Array, Map, Struct)
+- Does not support the HLL, BITMAP, and VARIANT types
+- Supports all table models (detail tables, aggregate tables, and primary key tables)
+
+### Virtual Cluster Mode
+
+In virtual cluster mode, all column types and all table models (detail tables, aggregate tables, and primary key tables) are supported.
 
 ## Query Operations
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/kerberos.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/kerberos.md
new file mode 100644
index 00000000000..e379a7f512c
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/lakehouse/best-practices/kerberos.md
@@ -0,0 +1,264 @@
+---
+{
+    "title": "Kerberos 最佳实践",
+    "language": "zh-CN"
+}
+---
+
+When users run federated analytical queries across multiple data sources with Doris, different clusters may use different Kerberos authentication credentials.
+
+Take a large fund company as an example. Its internal data platform is divided into multiple functional clusters, maintained by different technical or business teams, each configured with an independent Kerberos Realm for identity authentication and access control:
+
+- The production cluster is used for daily net asset value calculation and risk assessment; its data is strictly isolated and accessible only to authorized services (Realm: PROD.FUND.COM).
+- The analysis cluster is used for strategy research and model backtesting; Doris runs ad-hoc queries against it through TVFs (Realm: ANALYSIS.FUND.COM).
+- The data lake cluster integrates an Iceberg Catalog for archiving and analyzing large volumes of historical market data, logs, and other data (Realm: LAKE.FUND.COM).
+
+Since these clusters have not established cross-realm trust and cannot share authentication information, accessing these heterogeneous data sources in a unified way requires authenticating against, and managing the contexts of, multiple Kerberos instances at the same time.
+
+**This document focuses on how to configure and access data sources in multi-Kerberos environments.**
+
+> This feature is supported since version 3.1.
+
+## Multi-Kerberos Cluster Authentication Configuration
+
+### krb5.conf
+
+`krb5.conf` contains Kerberos configuration information, KDC locations, some **default values** for Kerberos services, and hostname-to-Realm mapping information.
+
+When deploying krb5.conf, ensure it is placed on every Doris node (FE and BE). The default location is `/etc/krb5.conf`.
+
+### realms
+
+A realm defines a Kerberos network consisting of a KDC and its many clients, for example EXAMPLE.COM.
+
+When configuring multiple clusters, you need to configure multiple Realms in one `krb5.conf`. KDC and `admin_server` can also be domain names.
+
+```
+[realms]
+EMR-IP.EXAMPLE = {
+    kdc = 172.21.16.8:88
+    admin_server = 172.21.16.8
+}
+EMR-HOST.EXAMPLE = {
+    kdc = emr_hostname
+    admin_server = emr_hostname
+}
+```
+
+### domain_realm
+
+Configures the mapping from domain to Realm for the nodes where the Kerberos services are located.
+
+```toml
+[libdefaults]
+dns_lookup_realm = true
+dns_lookup_kdc = true
+[domain_realm]
+172.21.16.8 = EMR-IP.EXAMPLE
+emr-host.example = EMR-HOST.EXAMPLE
+```
+
+For example, for the principal `emr1/[email protected]`, the domain part (`domain_name`) is used to look up the corresponding Realm when locating the KDC. If no mapping matches, the KDC for the Realm cannot be found.
+
+Two kinds of errors related to `domain_realm` typically appear in Doris's `log/be.out` or `log/fe.out`:
+
+```
+* Unable to locate KDC for realm/Cannot locate KDC
+
+* No service creds
+```
+
+### keytab and principal
+
+In multi-Kerberos cluster environments, the keytab files for different clusters usually reside at different paths, such as `/path/to/serverA.keytab` and `/path/to/serverB.keytab`. When accessing a cluster, you need to use its corresponding keytab.
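+
+Before wiring the keytabs into any configuration, you can confirm which principals each one actually contains. A minimal check, assuming the MIT Kerberos client tools are installed, using the placeholder paths above:
+
+```bash
+# List the principals (and key timestamps) stored in each keytab
+klist -kt /path/to/serverA.keytab
+klist -kt /path/to/serverB.keytab
+```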
+
+If Kerberos authentication is enabled on the HDFS cluster, the `core-site.xml` file generally contains the `hadoop.security.auth_to_local` property, which maps Kerberos principals to shorter local usernames; Hadoop reuses the Kerberos rule syntax for this.
+
+If the property is not configured properly, you may encounter a `NoMatchingRule("No rules applied to` exception. See the code:
+
+[hadoop/src/core/org/apache/hadoop/security/KerberosName.java](https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/KerberosName.java#L399)
+
+The `hadoop.security.auth_to_local` parameter contains a set of mapping rules. A principal is matched against the RULEs from top to bottom; the first matching rule produces the local username, and the remaining rules are ignored. The rule format is:
+
+```
+RULE:[<principal translation>](acceptance filter)<short name substitution>
+```
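+
+For example, the rule `RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//` (used in the catalog examples below) first translates a two-component principal of the form `hive/<host>@LABS.TERADATA.COM` into `hive@LABS.TERADATA.COM` (`$1` is the first component, `$0` is the realm), the acceptance filter matches it, and the sed-style substitution strips the realm, yielding the short name `hive`. If a Hadoop client is available, you can test your rules from the command line (a sketch, assuming a standard Hadoop installation whose loaded `core-site.xml` contains the rules; the principal is illustrative):
+
+```bash
+# Prints the short name produced by the configured auth_to_local rules
+hadoop org.apache.hadoop.security.HadoopKerberosName \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+```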
+
+To match principals used by different Kerberos services in multi-cluster environments, the recommended configuration is:
+
+```xml
+<property>
+    <name>hadoop.security.auth_to_local</name>
+    <value>RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           DEFAULT</value>
+</property>
+```
+
+The above configuration can be used to add or replace the `hadoop.security.auth_to_local` property in `core-site.xml`. Place `core-site.xml` in `fe/conf` and `be/conf` to make it effective in the Doris environment.
+
+If you need it to take effect separately in OUTFILE, EXPORT, Broker Load, Catalog (Hive, Iceberg, Hudi), TVF, and other functions, you can configure it directly in their properties:
+
+```sql
+"hadoop.security.auth_to_local" = "RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   DEFAULT"
+```
+
+To verify that the mapping rules match correctly, check whether the following error occurs when accessing the different clusters:
+
+```
+NoMatchingRule: No rules applied to hadoop/domain\[email protected]
+```
+
+If it appears, the principal was not matched by any rule.
+
+## Best Practices
+
+This section shows how to use the Docker environment provided by the [Apache Doris official repository](https://github.com/apache/doris/tree/master/docker/thirdparties) to start Kerberos-enabled Hive/HDFS services, and how to create Kerberos-enabled Hive Catalogs in Doris.
+
+### Environment Description
+
+* Use the Kerberos services provided by Doris (two Hive clusters, two KDCs):
+
+  * Docker startup directory: `docker/thirdparties`
+
+  * krb5.conf template:
+
+    [`docker-compose/kerberos/common/conf/doris-krb5.conf`](https://github.com/apache/doris/blob/master/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf)
+
+### 1. Prepare keytab files and permissions
+
+Copy the keytab files to a local directory:
+
+```bash
+mkdir -p ~/doris-keytabs
+cp <hive-presto-master.keytab> ~/doris-keytabs/
+cp <other-hive-presto-master.keytab> ~/doris-keytabs/
+```
+
+Set file permissions to prevent authentication failure:
+
+```bash
+chmod 400 ~/doris-keytabs/*.keytab
+```
+
+### 2. Prepare krb5.conf file
+
+1. Use the `krb5.conf` template file provided by Doris
+
+2. If you need to access multiple Kerberos HDFS clusters simultaneously, you need to **merge the krb5.conf files** (a verification sketch follows this list). The basic requirements:
+
+   * `[realms]`: Write the Realms and KDC IPs of all clusters.
+
+   * `[domain_realm]`: Write the domain or IP to Realm mappings.
+
+   * `[libdefaults]`: Unify the encryption algorithms (such as des3-cbc-sha1).
+
+3. Example:
+
+    ```toml
+    [libdefaults]
+        default_realm = LABS.TERADATA.COM
+        allow_weak_crypto = true
+        dns_lookup_realm = true
+        dns_lookup_kdc = true
+
+    [realms]
+        LABS.TERADATA.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+        OTHERREALM.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+
+    [domain_realm]
+        presto-master.docker.cluster = LABS.TERADATA.COM
+        hadoop-master-2 = OTHERREALM.COM
+        .labs.teradata.com = LABS.TERADATA.COM
+        .otherrealm.com = OTHERREALM.COM
+    ```
+
+4. Copy `krb5.conf` to the corresponding Docker directory:
+
+    ```bash
+    cp doris-krb5.conf ~/doris-kerberos/krb5.conf
+    ```
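+
+To verify the merged file, you can authenticate against each realm by hand before starting Doris (a sketch; the keytab paths come from step 1, and the principal shown is illustrative and must match your environment):
+
+```bash
+# Point the MIT Kerberos tools at the merged configuration
+export KRB5_CONFIG=~/doris-kerberos/krb5.conf
+
+# Obtain a ticket from the LABS.TERADATA.COM KDC using its keytab
+kinit -kt ~/doris-keytabs/hive-presto-master.keytab \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+
+# Show the ticket that was issued
+klist
+```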
+
+### 3. Start Docker Kerberos environment
+
+1. Enter directory:
+
+    ```bash
+    cd docker/thirdparties
+    ```
+
+2. Start Kerberos environment:
+
+    ```bash
+    ./run-thirdparties-docker.sh -c kerberos
+    ```
+
+3. After startup, the following services and ports are available:
+
+   * Hive Metastore 1: 9583
+   * Hive Metastore 2: 9683
+   * HDFS 1: 8520
+   * HDFS 2: 8620
+
+### 4. Get container IP
+
+Use the following command to view a container's IP address:
+
+```bash
+docker inspect <container-name> | grep IPAddress
+```
+
+Or use 127.0.0.1 directly (provided the service ports are mapped to the host network).
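+
+A more targeted variant prints only the address (assuming the container is attached to a single Docker network):
+
+```bash
+docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
+```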
+
+### 5. Create Kerberos Hive Catalog
+
+1. Hive Catalog1
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_one
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9583",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8520",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = "hive/[email protected]",
+    "hadoop.kerberos.keytab" = "/mnt/disk1/gq/keytabs/keytabs/hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled" = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                       DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+2. Hive Catalog2
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_two
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9683",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8620",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = "hive/[email protected]",
+    "hadoop.kerberos.keytab" = "/mnt/disk1/gq/keytabs/keytabs/other-hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled" = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                       DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+At this point, the multi-Kerberos cluster access configuration is complete. You can now query data from both Hive clusters using their respective Kerberos credentials.
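+
+For a quick smoke test, you can switch between the two catalogs and query each one (a sketch; the database and table names depend on what exists in each Hive cluster):
+
+```sql
+-- Browse the first catalog
+SWITCH multi_kerberos_one;
+SHOW DATABASES;
+
+-- Query the second catalog with a fully qualified name
+SELECT * FROM multi_kerberos_two.your_db.your_table LIMIT 10;
+```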
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/best-practices/kerberos.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/best-practices/kerberos.md
new file mode 100644
index 00000000000..e379a7f512c
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/best-practices/kerberos.md
@@ -0,0 +1,264 @@
+---
+{
+    "title": "Kerberos 最佳实践",
+    "language": "zh-CN"
+}
+---
+
+When users run federated analytical queries across multiple data sources with Doris, different clusters may use different Kerberos authentication credentials.
+
+Take a large fund company as an example. Its internal data platform is divided into multiple functional clusters, maintained by different technical or business teams, each configured with an independent Kerberos Realm for identity authentication and access control:
+
+- The production cluster is used for daily net asset value calculation and risk assessment; its data is strictly isolated and accessible only to authorized services (Realm: PROD.FUND.COM).
+- The analysis cluster is used for strategy research and model backtesting; Doris runs ad-hoc queries against it through TVFs (Realm: ANALYSIS.FUND.COM).
+- The data lake cluster integrates an Iceberg Catalog for archiving and analyzing large volumes of historical market data, logs, and other data (Realm: LAKE.FUND.COM).
+
+Since these clusters have not established cross-realm trust and cannot share authentication information, accessing these heterogeneous data sources in a unified way requires authenticating against, and managing the contexts of, multiple Kerberos instances at the same time.
+
+**This document focuses on how to configure and access data sources in multi-Kerberos environments.**
+
+> This feature is supported since version 3.1.
+
+## Multi-Kerberos Cluster Authentication Configuration
+
+### krb5.conf
+
+`krb5.conf` contains Kerberos configuration information, KDC locations, some **default values** for Kerberos services, and hostname-to-Realm mapping information.
+
+When deploying krb5.conf, ensure it is placed on every Doris node (FE and BE). The default location is `/etc/krb5.conf`.
+
+### realms
+
+A realm defines a Kerberos network consisting of a KDC and its many clients, for example EXAMPLE.COM.
+
+When configuring multiple clusters, you need to configure multiple Realms in one `krb5.conf`. KDC and `admin_server` can also be domain names.
+
+```
+[realms]
+EMR-IP.EXAMPLE = {
+    kdc = 172.21.16.8:88
+    admin_server = 172.21.16.8
+}
+EMR-HOST.EXAMPLE = {
+    kdc = emr_hostname
+    admin_server = emr_hostname
+}
+```
+
+### domain_realm
+
+Configures the mapping from domain to Realm for the nodes where the Kerberos services are located.
+
+```toml
+[libdefaults]
+dns_lookup_realm = true
+dns_lookup_kdc = true
+[domain_realm]
+172.21.16.8 = EMR-IP.EXAMPLE
+emr-host.example = EMR-HOST.EXAMPLE
+```
+
+For example, for the principal `emr1/[email protected]`, the domain part (`domain_name`) is used to look up the corresponding Realm when locating the KDC. If no mapping matches, the KDC for the Realm cannot be found.
+
+Two kinds of errors related to `domain_realm` typically appear in Doris's `log/be.out` or `log/fe.out`:
+
+```
+* Unable to locate KDC for realm/Cannot locate KDC
+
+* No service creds
+```
+
+### keytab and principal
+
+In multi-Kerberos cluster environments, the keytab files for different clusters usually reside at different paths, such as `/path/to/serverA.keytab` and `/path/to/serverB.keytab`. When accessing a cluster, you need to use its corresponding keytab.
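+
+Before wiring the keytabs into any configuration, you can confirm which principals each one actually contains. A minimal check, assuming the MIT Kerberos client tools are installed, using the placeholder paths above:
+
+```bash
+# List the principals (and key timestamps) stored in each keytab
+klist -kt /path/to/serverA.keytab
+klist -kt /path/to/serverB.keytab
+```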
+
+If Kerberos authentication is enabled on the HDFS cluster, the `core-site.xml` file generally contains the `hadoop.security.auth_to_local` property, which maps Kerberos principals to shorter local usernames; Hadoop reuses the Kerberos rule syntax for this.
+
+If the property is not configured properly, you may encounter a `NoMatchingRule("No rules applied to` exception. See the code:
+
+[hadoop/src/core/org/apache/hadoop/security/KerberosName.java](https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/KerberosName.java#L399)
+
+The `hadoop.security.auth_to_local` parameter contains a set of mapping rules. A principal is matched against the RULEs from top to bottom; the first matching rule produces the local username, and the remaining rules are ignored. The rule format is:
+
+```
+RULE:[<principal translation>](acceptance filter)<short name substitution>
+```
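+
+For example, the rule `RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//` (used in the catalog examples below) first translates a two-component principal of the form `hive/<host>@LABS.TERADATA.COM` into `hive@LABS.TERADATA.COM` (`$1` is the first component, `$0` is the realm), the acceptance filter matches it, and the sed-style substitution strips the realm, yielding the short name `hive`. If a Hadoop client is available, you can test your rules from the command line (a sketch, assuming a standard Hadoop installation whose loaded `core-site.xml` contains the rules; the principal is illustrative):
+
+```bash
+# Prints the short name produced by the configured auth_to_local rules
+hadoop org.apache.hadoop.security.HadoopKerberosName \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+```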
+
+To match principals used by different Kerberos services in multi-cluster environments, the recommended configuration is:
+
+```xml
+<property>
+    <name>hadoop.security.auth_to_local</name>
+    <value>RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           DEFAULT</value>
+</property>
+```
+
+The above configuration can be used to add or replace the `hadoop.security.auth_to_local` property in `core-site.xml`. Place `core-site.xml` in `fe/conf` and `be/conf` to make it effective in the Doris environment.
+
+If you need it to take effect separately in OUTFILE, EXPORT, Broker Load, Catalog (Hive, Iceberg, Hudi), TVF, and other functions, you can configure it directly in their properties:
+
+```sql
+"hadoop.security.auth_to_local" = "RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   DEFAULT"
+```
+
+To verify that the mapping rules match correctly, check whether the following error occurs when accessing the different clusters:
+
+```
+NoMatchingRule: No rules applied to hadoop/domain\[email protected]
+```
+
+If it appears, the principal was not matched by any rule.
+
+## Best Practices
+
+This section shows how to use the Docker environment provided by the [Apache Doris official repository](https://github.com/apache/doris/tree/master/docker/thirdparties) to start Kerberos-enabled Hive/HDFS services, and how to create Kerberos-enabled Hive Catalogs in Doris.
+
+### Environment Description
+
+* Use the Kerberos services provided by Doris (two Hive clusters, two KDCs):
+
+  * Docker startup directory: `docker/thirdparties`
+
+  * krb5.conf template:
+
+    [`docker-compose/kerberos/common/conf/doris-krb5.conf`](https://github.com/apache/doris/blob/master/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf)
+
+### 1. Prepare keytab files and permissions
+
+Copy the keytab files to a local directory:
+
+```bash
+mkdir -p ~/doris-keytabs
+cp <hive-presto-master.keytab> ~/doris-keytabs/
+cp <other-hive-presto-master.keytab> ~/doris-keytabs/
+```
+
+Set file permissions to prevent authentication failure:
+
+```bash
+chmod 400 ~/doris-keytabs/*.keytab
+```
+
+### 2. Prepare krb5.conf file
+
+1. Use the `krb5.conf` template file provided by Doris
+
+2. If you need to access multiple Kerberos HDFS clusters simultaneously, you need to **merge the krb5.conf files** (a verification sketch follows this list). The basic requirements:
+
+   * `[realms]`: Write the Realms and KDC IPs of all clusters.
+
+   * `[domain_realm]`: Write the domain or IP to Realm mappings.
+
+   * `[libdefaults]`: Unify the encryption algorithms (such as des3-cbc-sha1).
+
+3. Example:
+
+    ```toml
+    [libdefaults]
+        default_realm = LABS.TERADATA.COM
+        allow_weak_crypto = true
+        dns_lookup_realm = true
+        dns_lookup_kdc = true
+
+    [realms]
+        LABS.TERADATA.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+        OTHERREALM.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+
+    [domain_realm]
+        presto-master.docker.cluster = LABS.TERADATA.COM
+        hadoop-master-2 = OTHERREALM.COM
+        .labs.teradata.com = LABS.TERADATA.COM
+        .otherrealm.com = OTHERREALM.COM
+    ```
+
+4. Copy `krb5.conf` to the corresponding Docker directory:
+
+    ```bash
+    cp doris-krb5.conf ~/doris-kerberos/krb5.conf
+    ```
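+
+To verify the merged file, you can authenticate against each realm by hand before starting Doris (a sketch; the keytab paths come from step 1, and the principal shown is illustrative and must match your environment):
+
+```bash
+# Point the MIT Kerberos tools at the merged configuration
+export KRB5_CONFIG=~/doris-kerberos/krb5.conf
+
+# Obtain a ticket from the LABS.TERADATA.COM KDC using its keytab
+kinit -kt ~/doris-keytabs/hive-presto-master.keytab \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+
+# Show the ticket that was issued
+klist
+```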
+
+### 3. Start Docker Kerberos environment
+
+1. Enter directory:
+
+    ```bash
+    cd docker/thirdparties
+    ```
+
+2. Start Kerberos environment:
+
+    ```bash
+    ./run-thirdparties-docker.sh -c kerberos
+    ```
+
+3. After startup, the following services and ports are available:
+
+   * Hive Metastore 1: 9583
+   * Hive Metastore 2: 9683
+   * HDFS 1: 8520
+   * HDFS 2: 8620
+
+### 4. Get container IP
+
+Use the following command to view a container's IP address:
+
+```bash
+docker inspect <container-name> | grep IPAddress
+```
+
+Or use 127.0.0.1 directly (provided the service ports are mapped to the host network).
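+
+A more targeted variant prints only the address (assuming the container is attached to a single Docker network):
+
+```bash
+docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
+```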
+
+### 5. Create Kerberos Hive Catalog
+
+1. Hive Catalog1
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_one
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9583",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8520",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = "hive/[email protected]",
+    "hadoop.kerberos.keytab" = "/mnt/disk1/gq/keytabs/keytabs/hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled" = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                       DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+2. Hive Catalog2
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_two
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9683",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8620",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = "hive/[email protected]",
+    "hadoop.kerberos.keytab" = "/mnt/disk1/gq/keytabs/keytabs/other-hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled" = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                       RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                       DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+At this point, the multi-Kerberos cluster access configuration is complete. You can now query data from both Hive clusters using their respective Kerberos credentials.
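+
+For a quick smoke test, you can switch between the two catalogs and query each one (a sketch; the database and table names depend on what exists in each Hive cluster):
+
+```sql
+-- Browse the first catalog
+SWITCH multi_kerberos_one;
+SHOW DATABASES;
+
+-- Query the second catalog with a fully qualified name
+SELECT * FROM multi_kerberos_two.your_db.your_table LIMIT 10;
+```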
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
index 8ed645f5b52..8597467fb45 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
@@ -90,6 +90,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 ### Arrow Flight Mode
 
+> Supported since version 4.0.2.
+
 When the `use_arrow_flight` property is `true`, it operates in Arrow Flight mode.
 
 ![arrow-flight-mode](/images/Lakehouse/doris-catalog/arrow-flight-mode.png)
@@ -102,6 +104,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 ### Virtual Cluster Mode
 
+> Supported since version 4.0.3.
+
 When the `use_arrow_flight` property is `false`, it operates in virtual cluster mode.
 
 > Currently, this mode only supports compute-storage coupled Doris clusters.
@@ -118,7 +122,18 @@ FE 之间通过 HTTP 协议同步 Schema 等元信息。BE 直接通过内部通
 
 ## Column Type Mapping
 
-Doris external table types are completely identical to local Doris types.
+### Arrow Flight Mode
+
+The column types and table models supported in this mode depend on the capabilities of Arrow Flight SQL. It currently has the following capabilities and limitations:
+
+- Supports all primitive types
+- Supports all nested types (Array, Map, Struct)
+- Does not support the HLL, BITMAP, and VARIANT types
+- Supports all table models (detail tables, aggregate tables, and primary key tables)
+
+### Virtual Cluster Mode
+
+In virtual cluster mode, all column types and all table models (detail tables, aggregate tables, and primary key tables) are supported.
 
 ## Query Operations
 
diff --git a/sidebars.ts b/sidebars.ts
index f3900051d8d..8575bdf6935 100644
--- a/sidebars.ts
+++ b/sidebars.ts
@@ -515,6 +515,7 @@ const sidebars: SidebarsConfig = {
                                 'lakehouse/best-practices/doris-gravitino',
                                 'lakehouse/best-practices/doris-onelake',
                                 'lakehouse/best-practices/doris-unity-catalog',
+                                'lakehouse/best-practices/kerberos',
                                 'lakehouse/best-practices/tpch',
                                 'lakehouse/best-practices/tpcds',
                             ],
diff --git a/versioned_docs/version-3.x/lakehouse/best-practices/kerberos.md 
b/versioned_docs/version-3.x/lakehouse/best-practices/kerberos.md
new file mode 100644
index 00000000000..dbc8170abe9
--- /dev/null
+++ b/versioned_docs/version-3.x/lakehouse/best-practices/kerberos.md
@@ -0,0 +1,264 @@
+---
+{
+    "title": "Kerberos Best Practices",
+    "language": "en"
+}
+---
+
+When users run federated analytical queries across multiple data sources with Doris, different clusters may use different Kerberos authentication credentials.
+
+Take a large fund company as an example. Its internal data platform is divided into multiple functional clusters, maintained by different technical or business teams, each configured with an independent Kerberos Realm for identity authentication and access control:
+
+- The production cluster is used for daily net asset value calculation and risk assessment; its data is strictly isolated and accessible only to authorized services (Realm: PROD.FUND.COM).
+- The analysis cluster is used for strategy research and model backtesting; Doris runs ad-hoc queries against it through TVFs (Realm: ANALYSIS.FUND.COM).
+- The data lake cluster integrates an Iceberg Catalog for archiving and analyzing large volumes of historical market data, logs, and other data (Realm: LAKE.FUND.COM).
+
+Since these clusters have not established cross-realm trust and cannot share authentication information, accessing these heterogeneous data sources in a unified way requires authenticating against, and managing the contexts of, multiple Kerberos instances at the same time.
+
+**This document focuses on how to configure and access data sources in multi-Kerberos environments.**
+
+> This feature is supported since version 3.1.
+
+## Multi-Kerberos Cluster Authentication Configuration
+
+### krb5.conf
+
+`krb5.conf` contains Kerberos configuration information, KDC locations, some 
**default values** for Kerberos services, and hostname-to-Realm mapping 
information.
+
+When deploying krb5.conf, ensure it is placed on every Doris node (FE and BE). The default location is `/etc/krb5.conf`.
+
+### realms
+
+A realm defines a Kerberos network consisting of a KDC and its many clients, for example EXAMPLE.COM.
+
+When configuring multiple clusters, you need to configure multiple Realms in 
one `krb5.conf`. KDC and `admin_server` can also be domain names.
+
+```
+[realms]
+EMR-IP.EXAMPLE = {
+    kdc = 172.21.16.8:88
+    admin_server = 172.21.16.8
+}
+EMR-HOST.EXAMPLE = {
+    kdc = emr_hostname
+    admin_server = emr_hostname
+}
+```
+
+### domain_realm
+
+Configures the mapping from domain to Realm for nodes where Kerberos services 
are located.
+
+```toml
+[libdefaults]
+dns_lookup_realm = true
+dns_lookup_kdc = true
+[domain_realm]
+172.21.16.8 = EMR-IP.EXAMPLE
+emr-host.example = EMR-HOST.EXAMPLE
+```
+
+For example, for the principal `emr1/[email protected]`, the domain part (`domain_name`) is used to look up the corresponding Realm when locating the KDC. If no mapping matches, the KDC for the Realm cannot be found.
+
+Two kinds of errors related to `domain_realm` typically appear in Doris's `log/be.out` or `log/fe.out`:
+
+```
+* Unable to locate KDC for realm/Cannot locate KDC
+
+* No service creds
+```
+
+### keytab and principal
+
+In multi-Kerberos cluster environments, the keytab files for different clusters usually reside at different paths, such as `/path/to/serverA.keytab` and `/path/to/serverB.keytab`. When accessing a cluster, you need to use its corresponding keytab.
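+
+Before wiring the keytabs into any configuration, you can confirm which principals each one actually contains. A minimal check, assuming the MIT Kerberos client tools are installed, using the placeholder paths above:
+
+```bash
+# List the principals (and key timestamps) stored in each keytab
+klist -kt /path/to/serverA.keytab
+klist -kt /path/to/serverB.keytab
+```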
+
+If Kerberos authentication is enabled on the HDFS cluster, the `core-site.xml` file generally contains the `hadoop.security.auth_to_local` property, which maps Kerberos principals to shorter local usernames; Hadoop reuses the Kerberos rule syntax for this.
+
+If the property is not configured properly, you may encounter a `NoMatchingRule("No rules applied to` exception. See the code:
+
+[hadoop/src/core/org/apache/hadoop/security/KerberosName.java](https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/KerberosName.java#L399)
+
+The `hadoop.security.auth_to_local` parameter contains a set of mapping rules. A principal is matched against the RULEs from top to bottom; the first matching rule produces the local username, and the remaining rules are ignored. The rule format is:
+
+```
+RULE:[<principal translation>](acceptance filter)<short name substitution>
+```
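+
+For example, the rule `RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//` (used in the catalog examples below) first translates a two-component principal of the form `hive/<host>@LABS.TERADATA.COM` into `hive@LABS.TERADATA.COM` (`$1` is the first component, `$0` is the realm), the acceptance filter matches it, and the sed-style substitution strips the realm, yielding the short name `hive`. If a Hadoop client is available, you can test your rules from the command line (a sketch, assuming a standard Hadoop installation whose loaded `core-site.xml` contains the rules; the principal is illustrative):
+
+```bash
+# Prints the short name produced by the configured auth_to_local rules
+hadoop org.apache.hadoop.security.HadoopKerberosName \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+```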
+
+To match principals used by different Kerberos services in multi-cluster 
environments, the recommended configuration is:
+
+```xml
+<property>
+    <name>hadoop.security.auth_to_local</name>
+    <value>RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           DEFAULT</value>
+</property>
+```
+
+The above configuration can be used to add or replace the 
`hadoop.security.auth_to_local` property in `core-site.xml`. Place 
`core-site.xml` in `fe/conf` and `be/conf` to make it effective in the Doris 
environment.
+
+If you need it to take effect separately in OUTFILE, EXPORT, Broker Load, 
Catalog (Hive, Iceberg, Hudi), TVF, and other functions, you can configure it 
directly in their properties:
+
+```sql
+"hadoop.security.auth_to_local" = "RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   DEFAULT"
+```
+
+To verify that the mapping rules match correctly, check whether the following error occurs when accessing the different clusters:
+
+```
+NoMatchingRule: No rules applied to hadoop/domain\[email protected]
+```
+
+If it appears, the principal was not matched by any rule.
+
+## Best Practices
+
+This section shows how to use the Docker environment provided by the [Apache Doris official repository](https://github.com/apache/doris/tree/master/docker/thirdparties) to start Kerberos-enabled Hive/HDFS services, and how to create Kerberos-enabled Hive Catalogs in Doris.
+
+### Environment Description
+
+* Use the Kerberos services provided by Doris (two Hive clusters, two KDCs):
+
+  * Docker startup directory: `docker/thirdparties`
+
+  * krb5.conf template:
+
+    
[`docker-compose/kerberos/common/conf/doris-krb5.conf`](https://github.com/apache/doris/blob/master/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf)
+
+### 1. Prepare keytab files and permissions
+
+Copy the keytab files to a local directory:
+
+```bash
+mkdir -p ~/doris-keytabs
+cp <hive-presto-master.keytab> ~/doris-keytabs/
+cp <other-hive-presto-master.keytab> ~/doris-keytabs/
+```
+
+Set file permissions to prevent authentication failure:
+
+```bash
+chmod 400 ~/doris-keytabs/*.keytab
+```
+
+### 2. Prepare krb5.conf file
+
+1. Use the `krb5.conf` template file provided by Doris
+
+2. If you need to access multiple Kerberos HDFS clusters simultaneously, you need to **merge the krb5.conf files** (a verification sketch follows this list). The basic requirements:
+
+   * `[realms]`: Write the Realms and KDC IPs of all clusters.
+
+   * `[domain_realm]`: Write the domain or IP to Realm mappings.
+
+   * `[libdefaults]`: Unify the encryption algorithms (such as des3-cbc-sha1).
+
+3. Example:
+
+    ```toml
+    [libdefaults]
+        default_realm = LABS.TERADATA.COM
+        allow_weak_crypto = true
+        dns_lookup_realm = true
+        dns_lookup_kdc = true
+
+    [realms]
+        LABS.TERADATA.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+        OTHERREALM.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+
+    [domain_realm]
+        presto-master.docker.cluster = LABS.TERADATA.COM
+        hadoop-master-2 = OTHERREALM.COM
+        .labs.teradata.com = LABS.TERADATA.COM
+        .otherrealm.com = OTHERREALM.COM
+    ```
+
+4. Copy `krb5.conf` to the corresponding Docker directory:
+
+    ```bash
+    cp doris-krb5.conf ~/doris-kerberos/krb5.conf
+    ```
+
+### 3. Start Docker Kerberos environment
+
+1. Enter directory:
+
+    ```bash
+    cd docker/thirdparties
+    ```
+
+2. Start Kerberos environment:
+
+    ```bash
+    ./run-thirdparties-docker.sh -c kerberos
+    ```
+
+3. After startup, the following services and ports are available:
+
+   * Hive Metastore 1: 9583
+   * Hive Metastore 2: 9683
+   * HDFS 1: 8520
+   * HDFS 2: 8620
+
+### 4. Get container IP
+
+Use the following command to view a container's IP address:
+
+```bash
+docker inspect <container-name> | grep IPAddress
+```
+
+Or use 127.0.0.1 directly (provided the service ports are mapped to the host network).
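+
+A more targeted variant prints only the address (assuming the container is attached to a single Docker network):
+
+```bash
+docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
+```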
+
+### 5. Create Kerberos Hive Catalog
+
+1. Hive Catalog1
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_one
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9583",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8520",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = 
"hive/[email protected]",
+    "hadoop.kerberos.keytab" = 
"/mnt/disk1/gq/keytabs/keytabs/hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled " = "true",
+    "hadoop.security.auth_to_local" = 
"RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                        DEFAULT",
+    "hive.metastore.kerberos.principal" = 
"hive/[email protected]"
+    );
+    ```
+
+2. Hive Catalog2
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_two
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9683",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8620",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = 
"hive/[email protected]",
+    "hadoop.kerberos.keytab" = 
"/mnt/disk1/gq/keytabs/keytabs/other-hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled " = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                        DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+At this point, the multi-Kerberos cluster access configuration is complete. You can now query data from both Hive clusters using their respective Kerberos credentials.
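+
+For a quick smoke test, you can switch between the two catalogs and query each one (a sketch; the database and table names depend on what exists in each Hive cluster):
+
+```sql
+-- Browse the first catalog
+SWITCH multi_kerberos_one;
+SHOW DATABASES;
+
+-- Query the second catalog with a fully qualified name
+SELECT * FROM multi_kerberos_two.your_db.your_table LIMIT 10;
+```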
diff --git a/versioned_docs/version-4.x/lakehouse/best-practices/kerberos.md 
b/versioned_docs/version-4.x/lakehouse/best-practices/kerberos.md
new file mode 100644
index 00000000000..dbc8170abe9
--- /dev/null
+++ b/versioned_docs/version-4.x/lakehouse/best-practices/kerberos.md
@@ -0,0 +1,264 @@
+---
+{
+    "title": "Kerberos Best Practices",
+    "language": "en"
+}
+---
+
+When users run federated analytical queries across multiple data sources with Doris, different clusters may use different Kerberos authentication credentials.
+
+Take a large fund company as an example. Its internal data platform is divided into multiple functional clusters, maintained by different technical or business teams, each configured with an independent Kerberos Realm for identity authentication and access control:
+
+- The production cluster is used for daily net asset value calculation and risk assessment; its data is strictly isolated and accessible only to authorized services (Realm: PROD.FUND.COM).
+- The analysis cluster is used for strategy research and model backtesting; Doris runs ad-hoc queries against it through TVFs (Realm: ANALYSIS.FUND.COM).
+- The data lake cluster integrates an Iceberg Catalog for archiving and analyzing large volumes of historical market data, logs, and other data (Realm: LAKE.FUND.COM).
+
+Since these clusters have not established cross-realm trust and cannot share authentication information, accessing these heterogeneous data sources in a unified way requires authenticating against, and managing the contexts of, multiple Kerberos instances at the same time.
+
+**This document focuses on how to configure and access data sources in multi-Kerberos environments.**
+
+> This feature is supported since version 3.1.
+
+## Multi-Kerberos Cluster Authentication Configuration
+
+### krb5.conf
+
+`krb5.conf` contains Kerberos configuration information, KDC locations, some 
**default values** for Kerberos services, and hostname-to-Realm mapping 
information.
+
+When deploying krb5.conf, ensure it is placed on every Doris node (FE and BE). The default location is `/etc/krb5.conf`.
+
+### realms
+
+A realm defines a Kerberos network consisting of a KDC and its many clients, for example EXAMPLE.COM.
+
+When configuring multiple clusters, you need to configure multiple Realms in 
one `krb5.conf`. KDC and `admin_server` can also be domain names.
+
+```
+[realms]
+EMR-IP.EXAMPLE = {
+    kdc = 172.21.16.8:88
+    admin_server = 172.21.16.8
+}
+EMR-HOST.EXAMPLE = {
+    kdc = emr_hostname
+    admin_server = emr_hostname
+}
+```
+
+### domain_realm
+
+Configures the mapping from domain to Realm for nodes where Kerberos services 
are located.
+
+```toml
+[libdefaults]
+dns_lookup_realm = true
+dns_lookup_kdc = true
+[domain_realm]
+172.21.16.8 = EMR-IP.EXAMPLE
+emr-host.example = EMR-HOST.EXAMPLE
+```
+
+For example, for the principal `emr1/[email protected]`, the domain part (`domain_name`) is used to look up the corresponding Realm when locating the KDC. If no mapping matches, the KDC for the Realm cannot be found.
+
+Two kinds of errors related to `domain_realm` typically appear in Doris's `log/be.out` or `log/fe.out`:
+
+```
+* Unable to locate KDC for realm/Cannot locate KDC
+
+* No service creds
+```
+
+### keytab and principal
+
+In multi-Kerberos cluster environments, the keytab files for different clusters usually reside at different paths, such as `/path/to/serverA.keytab` and `/path/to/serverB.keytab`. When accessing a cluster, you need to use its corresponding keytab.
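+
+Before wiring the keytabs into any configuration, you can confirm which principals each one actually contains. A minimal check, assuming the MIT Kerberos client tools are installed, using the placeholder paths above:
+
+```bash
+# List the principals (and key timestamps) stored in each keytab
+klist -kt /path/to/serverA.keytab
+klist -kt /path/to/serverB.keytab
+```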
+
+If Kerberos authentication is enabled on the HDFS cluster, the `core-site.xml` file generally contains the `hadoop.security.auth_to_local` property, which maps Kerberos principals to shorter local usernames; Hadoop reuses the Kerberos rule syntax for this.
+
+If the property is not configured properly, you may encounter a `NoMatchingRule("No rules applied to` exception. See the code:
+
+[hadoop/src/core/org/apache/hadoop/security/KerberosName.java](https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/security/KerberosName.java#L399)
+
+The `hadoop.security.auth_to_local` parameter contains a set of mapping rules. A principal is matched against the RULEs from top to bottom; the first matching rule produces the local username, and the remaining rules are ignored. The rule format is:
+
+```
+RULE:[<principal translation>](acceptance filter)<short name substitution>
+```
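+
+For example, the rule `RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//` (used in the catalog examples below) first translates a two-component principal of the form `hive/<host>@LABS.TERADATA.COM` into `hive@LABS.TERADATA.COM` (`$1` is the first component, `$0` is the realm), the acceptance filter matches it, and the sed-style substitution strips the realm, yielding the short name `hive`. If a Hadoop client is available, you can test your rules from the command line (a sketch, assuming a standard Hadoop installation whose loaded `core-site.xml` contains the rules; the principal is illustrative):
+
+```bash
+# Prints the short name produced by the configured auth_to_local rules
+hadoop org.apache.hadoop.security.HadoopKerberosName \
+    hive/presto-master.docker.cluster@LABS.TERADATA.COM
+```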
+
+To match principals used by different Kerberos services in multi-cluster 
environments, the recommended configuration is:
+
+```xml
+<property>
+    <name>hadoop.security.auth_to_local</name>
+    <value>RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+           DEFAULT</value>
+</property>
+```
+
+The above configuration can be used to add or replace the 
`hadoop.security.auth_to_local` property in `core-site.xml`. Place 
`core-site.xml` in `fe/conf` and `be/conf` to make it effective in the Doris 
environment.
+
+If you need it to take effect separately in OUTFILE, EXPORT, Broker Load, 
Catalog (Hive, Iceberg, Hudi), TVF, and other functions, you can configure it 
directly in their properties:
+
+```sql
+"hadoop.security.auth_to_local" = "RULE:[1:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   RULE:[2:$1@$0](^.*@.*$)s/^(.*)@.*$/$1/g
+                                   DEFAULT"
+```
+
+To verify that the mapping rules match correctly, check whether the following error occurs when accessing the different clusters:
+
+```
+NoMatchingRule: No rules applied to hadoop/domain\[email protected]
+```
+
+If it appears, the principal was not matched by any rule.
+
+## Best Practices
+
+This section shows how to use the Docker environment provided by the [Apache Doris official repository](https://github.com/apache/doris/tree/master/docker/thirdparties) to start Kerberos-enabled Hive/HDFS services, and how to create Kerberos-enabled Hive Catalogs in Doris.
+
+### Environment Description
+
+* Use the Kerberos services provided by Doris (two Hive clusters, two KDCs):
+
+  * Docker startup directory: `docker/thirdparties`
+
+  * krb5.conf template:
+
+    
[`docker-compose/kerberos/common/conf/doris-krb5.conf`](https://github.com/apache/doris/blob/master/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf)
+
+### 1. Prepare keytab files and permissions
+
+Copy the keytab files to a local directory:
+
+```bash
+mkdir -p ~/doris-keytabs
+cp <hive-presto-master.keytab> ~/doris-keytabs/
+cp <other-hive-presto-master.keytab> ~/doris-keytabs/
+```
+
+Set file permissions to prevent authentication failure:
+
+```bash
+chmod 400 ~/doris-keytabs/*.keytab
+```
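+
+Optionally, confirm that the files are readable by the user that runs the Doris FE/BE processes:
+
+```bash
+# Check ownership and permissions of the copied keytabs
+ls -l ~/doris-keytabs/
+```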
+
+### 2. Prepare krb5.conf file
+
+1. Use the `krb5.conf` template file provided by Doris
+
+2. If you need to access multiple Kerberos HDFS clusters simultaneously, you 
need to **merge krb5.conf**, with basic requirements:
+
+   * `[realms]`: Write Realms and KDC IPs for all clusters.
+
+   * `[domain_realm]`: Write domain or IP to Realm mappings.
+
+   * `[libdefaults]`: Unify the encryption algorithms (for example, des3-cbc-sha1).
+
+3. Example:
+
+    ```toml
+    [libdefaults]
+        default_realm = LABS.TERADATA.COM
+        allow_weak_crypto = true
+        dns_lookup_realm = true
+        dns_lookup_kdc = true
+
+    [realms]
+        LABS.TERADATA.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+        OTHERREALM.COM = {
+            kdc = 127.0.0.1
+            admin_server = 127.0.0.1
+        }
+
+    [domain_realm]
+        presto-master.docker.cluster = LABS.TERADATA.COM
+        hadoop-master-2 = OTHERREALM.COM
+        .labs.teradata.com = LABS.TERADATA.COM
+        .otherrealm.com = OTHERREALM.COM
+    ```
+
+4. Copy `krb5.conf` to the corresponding Docker directory:
+
+    ```bash
+    cp doris-krb5.conf ~/doris-kerberos/krb5.conf
+    ```
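+
+5. Optionally, smoke-test the merged configuration by obtaining a ticket from each realm. A minimal sketch assuming MIT Kerberos client tools; the principals are the ones used by the Docker environment:
+
+    ```bash
+    # Point the Kerberos tools at the merged configuration
+    export KRB5_CONFIG=~/doris-kerberos/krb5.conf
+    # Obtain (then discard) a ticket from each realm
+    kinit -kt ~/doris-keytabs/hive-presto-master.keytab hive/[email protected]
+    kdestroy
+    kinit -kt ~/doris-keytabs/other-hive-presto-master.keytab hive/[email protected]
+    kdestroy
+    ```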
+
+### 3. Start Docker Kerberos environment
+
+1. Enter directory:
+
+    ```bash
+    cd docker/thirdparties
+    ```
+
+2. Start Kerberos environment:
+
+    ```bash
+    ./run-thirdparties-docker.sh -c kerberos
+    ```
+
+3. After startup, the services include the following (you can verify them with the `docker ps` sketch after this list):
+
+   * Hive Metastore 1: 9583
+   * Hive Metastore 2: 9683
+   * HDFS 1: 8520
+   * HDFS 2: 8620
+
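+You can confirm the containers and their published ports with `docker ps` (a sketch; container names depend on the script version):
+
+```bash
+# List running containers with their port mappings
+docker ps --format "table {{.Names}}\t{{.Ports}}"
+```
+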
+### 4. Get container IP
+
+Use the following command to view the container's IP:
+
+```bash
+docker inspect <container-name> | grep IPAddress
+```
+
+Alternatively, use 127.0.0.1 directly (provided the service ports have been mapped to the host network).
+
+### 5. Create Kerberos Hive Catalog
+
+1. Hive Catalog1
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_one
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9583",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8520",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = 
"hive/[email protected]",
+    "hadoop.kerberos.keytab" = 
"/mnt/disk1/gq/keytabs/keytabs/hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled " = "true",
+    "hadoop.security.auth_to_local" = 
"RULE:[2:$1@$0](.*@LABS.TERADATA.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                        DEFAULT",
+    "hive.metastore.kerberos.principal" = 
"hive/[email protected]"
+    );
+    ```
+
+2. Hive Catalog2
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS multi_kerberos_two
+    PROPERTIES (
+    "type" = "hms",
+    "hive.metastore.uris" = "thrift://127.0.0.1:9683",
+    "fs.defaultFS" = "hdfs://127.0.0.1:8620",
+    "hadoop.kerberos.min.seconds.before.relogin" = "5",
+    "hadoop.security.authentication" = "kerberos",
+    "hadoop.kerberos.principal" = 
"hive/[email protected]",
+    "hadoop.kerberos.keytab" = 
"/mnt/disk1/gq/keytabs/keytabs/other-hive-presto-master.keytab",
+    "hive.metastore.sasl.enabled " = "true",
+    "hadoop.security.auth_to_local" = "RULE:[2:$1@$0](.*@OTHERREALM.COM)s/@.*//
+                                        
RULE:[2:$1@$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                                        DEFAULT",
+    "hive.metastore.kerberos.principal" = "hive/[email protected]"
+    );
+    ```
+
+At this point, the multi-Kerberos cluster access configuration is complete. You can now query data from both Hive clusters, each using its own Kerberos credentials.
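+
+To verify, switch between the two catalogs and list their databases (a sketch; the visible databases depend on what the Docker environment seeds):
+
+```sql
+SWITCH multi_kerberos_one;
+SHOW DATABASES;
+
+SWITCH multi_kerberos_two;
+SHOW DATABASES;
+```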
diff --git a/versioned_docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx 
b/versioned_docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
index a1055af020e..47d4233de44 100644
--- a/versioned_docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
+++ b/versioned_docs/version-4.x/lakehouse/catalogs/doris-catalog.mdx
@@ -89,6 +89,8 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
 
 ### Arrow Flight Mode
 
+> Supported since 4.0.2.
+
 When the `use_arrow_flight` property is `true`, it operates in Arrow Flight 
mode.
 
 ![arrow-flight-mode](/images/Lakehouse/doris-catalog/arrow-flight-mode.png)
@@ -101,6 +103,8 @@ In this mode, during cross-cluster queries, FEs synchronize 
schema and other met
 
 ### Virtual Cluster Mode
 
+> Supported since 4.0.3.
+
 When the `use_arrow_flight` property is `false`, it operates in virtual 
cluster mode.
 
> Currently, this mode only supports compute-storage coupled Doris clusters. 
@@ -117,7 +121,18 @@ FEs synchronize schema and other metadata through HTTP 
protocol. BEs directly tr
 
 ## Column Type Mapping
 
-Doris external table types are completely identical to local Doris types.
+### Arrow Flight Mode
+
+The supported column types and table types in this mode depend on the 
capabilities of Arrow Flight SQL. Currently, it has the following capabilities 
and limitations:
+
+- Supports all primitive types
+- Supports all nested types (Array, Map, Struct)
+- Does not support the HLL, Bitmap, or Variant types
+- Supports all table models (detail tables, aggregate tables, and primary key 
tables)
+
+### Virtual Cluster Mode
+
+In virtual cluster mode, all column types and all table models (detail tables, 
aggregate tables, and primary key tables) are supported.
 
 ## Query Operations
 
@@ -225,4 +240,4 @@ MySQL [(none)]> explain select * from demo.inner_table a 
join edoris.external.ex
 |      tablets=1/1, tabletList=1762481736238                                   
                                                             |
 |      cardinality=1, avgRowSize=7425.0, numNodes=1                            
                                                             |
 |      pushAggOp=NONE
-```
\ No newline at end of file
+```
diff --git a/versioned_sidebars/version-3.x-sidebars.json 
b/versioned_sidebars/version-3.x-sidebars.json
index 6a8816271ea..4466605e133 100644
--- a/versioned_sidebars/version-3.x-sidebars.json
+++ b/versioned_sidebars/version-3.x-sidebars.json
@@ -476,6 +476,7 @@
                                 "lakehouse/best-practices/doris-gravitino",
                                 "lakehouse/best-practices/doris-onelake",
                                 "lakehouse/best-practices/doris-unity-catalog",
+                                "lakehouse/best-practices/kerberos",
                                 "lakehouse/best-practices/tpch",
                                 "lakehouse/best-practices/tpcds"
                             ]
diff --git a/versioned_sidebars/version-4.x-sidebars.json 
b/versioned_sidebars/version-4.x-sidebars.json
index fae3202f110..bdbedd67b36 100644
--- a/versioned_sidebars/version-4.x-sidebars.json
+++ b/versioned_sidebars/version-4.x-sidebars.json
@@ -522,6 +522,7 @@
                                 "lakehouse/best-practices/doris-gravitino",
                                 "lakehouse/best-practices/doris-onelake",
                                 "lakehouse/best-practices/doris-unity-catalog",
+                                "lakehouse/best-practices/kerberos",
                                 "lakehouse/best-practices/tpch",
                                 "lakehouse/best-practices/tpcds"
                             ]


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
