This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 65973faf007 [opt](hive) opt hive metastore doc (#2784)
65973faf007 is described below

commit 65973faf0074d6e2b04986220ae46243bcc2d9ad
Author: Mingyu Chen (Rayner) <[email protected]>
AuthorDate: Sun Aug 24 11:55:10 2025 -0700

    [opt](hive) opt hive metastore doc (#2784)
    
    ## Versions
    
    - [x] dev
    - [x] 3.0
    - [x] 2.1
    - [ ] 2.0
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 docs/lakehouse/metastores/hive-metastore.md        | 287 +++++++++++++-----
 docs/lakehouse/storages/azure-blob.md              |   2 +-
 docs/lakehouse/storages/baidu-bos.md               |   2 +-
 docs/lakehouse/storages/gcs.md                     |   2 +-
 docs/lakehouse/storages/hdfs.md                    |  38 ++-
 .../current/lakehouse/metastores/hive-metastore.md | 331 +++++++++++++--------
 .../current/lakehouse/storages/baidu-bos.md        |   2 +-
 .../current/lakehouse/storages/gcs.md              |   2 +-
 .../current/lakehouse/storages/hdfs.md             |  38 ++-
 .../lakehouse/metastores/hive-metastore.md         | 241 ++++++++++++++-
 .../version-2.1/lakehouse/storages/baidu-bos.md    |   2 +-
 .../version-2.1/lakehouse/storages/gcs.md          |   2 +-
 .../version-2.1/lakehouse/storages/hdfs.md         |  38 ++-
 .../lakehouse/metastores/hive-metastore.md         | 241 ++++++++++++++-
 .../version-3.0/lakehouse/storages/baidu-bos.md    |   2 +-
 .../version-3.0/lakehouse/storages/gcs.md          |   2 +-
 .../version-3.0/lakehouse/storages/hdfs.md         |  38 ++-
 .../lakehouse/metastores/hive-metastore.md         | 241 ++++++++++++++-
 .../version-2.1/lakehouse/storages/azure-blob.md   |   3 +-
 .../version-2.1/lakehouse/storages/baidu-bos.md    |   2 +-
 .../version-2.1/lakehouse/storages/gcs.md          |   2 +-
 .../version-2.1/lakehouse/storages/hdfs.md         |  38 ++-
 .../lakehouse/metastores/hive-metastore.md         | 241 ++++++++++++++-
 .../version-3.0/lakehouse/storages/azure-blob.md   |   3 +-
 .../version-3.0/lakehouse/storages/baidu-bos.md    |   2 +-
 .../version-3.0/lakehouse/storages/gcs.md          |   2 +-
 .../version-3.0/lakehouse/storages/hdfs.md         |  38 ++-
 27 files changed, 1541 insertions(+), 301 deletions(-)

diff --git a/docs/lakehouse/metastores/hive-metastore.md b/docs/lakehouse/metastores/hive-metastore.md
index d47fb2c4329..5abefd52081 100644
--- a/docs/lakehouse/metastores/hive-metastore.md
+++ b/docs/lakehouse/metastores/hive-metastore.md
@@ -5,85 +5,238 @@
 }
 ---
 
-This document is used to introduce the parameters supported when connecting 
and accessing the Hive Metastore through the `CREATE CATALOG` statement.
-## Parameter Overview
-| Property Name                         | Alias | Description                  
                                                                                
                                                                                
                                               | Default | Required |
-|--------------------------------------|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|------|
-| `hive.metastore.uris`                | | The URI address of the Hive 
Metastore. Multiple URIs can be specified, separated by commas. The first URI 
is used by default, and if the first URI is unavailable, others will be tried. 
For example: `thrift://172.0.0.1:9083` or 
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083` | None   | Yes  |
-| `hive.conf.resources`                | | The location of the hive-site.xml 
file, used to load the parameters needed to connect to HMS from the 
hive-site.xml file. If the hive-site.xml file contains complete connection 
parameter information, only this parameter needs to be filled in. The 
configuration file must be placed in the FE deployment directory, with the 
default directory being `/plugins/hadoop_conf/` under the deployment directory 
(the default path can be changed by modifying `h [...]
-| `hive.metastore.authentication.type` | | The authentication method for the 
Hive Metastore. Supports `simple` and `kerberos`. In versions 2.1 and earlier, 
the authentication method is determined by the `hadoop.security.authentication` 
property. Starting from version 3.0, the authentication method for the Hive 
Metastore can be specified separately. | simple | No   |
-| `hive.metastore.service.principal`   | | When the authentication method is 
kerberos, used to specify the principal of the Hive Metastore server.           
                                                                                
                                                                                
          | Empty  | No   |
-| `hive.metastore.client.principal`    | | When the authentication method is 
kerberos, used to specify the principal of the Hive Metastore client. In 
versions 2.1 and earlier, this parameter is determined by the 
`hadoop.kerberos.principal` property.                                           
                                                                                
         | Empty  | No   |
-| `hive.metastore.client.keytab`       | | When the authentication method is 
kerberos, used to specify the keytab of the Hive Metastore client. The keytab 
file must be placed in the same directory on all FE nodes.                      
                                                                                
                                                    | Empty  | No   |
-
-## Authentication Parameters
-In Hive Metastore, there are two authentication methods: simple and kerberos.
-
-### `hive.metastore.authentication.type`
-
-- Description  
-    Specifies the authentication method for the Hive Metastore.
-
-- Optional Values
-    - `simple` (default): No authentication is used.
-    - `kerberos`: Enable Kerberos authentication
-
-- Version Differences
-    - Versions 2.1 and earlier: Relies on the global parameter 
`hadoop.security.authentication`
-    - Version 3.1+: Can be configured independently
-
-### Enabling Simple Authentication Related Parameters
-Simply specify `hive.metastore.authentication.type = simple`. **Not 
recommended for production environments**
-
-#### Complete Example
-```plaintext
-"hive.metastore.authentication.type" = "simple"
+This document describes all supported parameters when connecting to and 
accessing Hive MetaStore services through the `CREATE CATALOG` statement.
+
+## Supported Catalog Types
+
+| Catalog Type | Type Identifier (type) | Description                               |
+| ------------ | ---------------------- | ----------------------------------------- |
+| Hive         | hms                    | Catalog for connecting to Hive Metastore  |
+| Iceberg      | iceberg                | Catalog for Iceberg table format          |
+| Paimon       | paimon                 | Catalog for Apache Paimon table format    |
+
+## Common Parameters Overview
+
+The following parameters are common to different Catalog types.
+
+| Parameter Name                     | Former Name                       | Required | Default | Description |
+| ---------------------------------- | --------------------------------- | -------- | ------- | ----------- |
+| hive.metastore.uris                |                                   | Yes      | None    | URI address of the Hive Metastore; multiple URIs can be specified, separated by commas. Examples: 'hive.metastore.uris' = 'thrift://127.0.0.1:9083' or 'hive.metastore.uris' = 'thrift://127.0.0.1:9083,thrift://127.0.0.1:9084' |
+| hive.metastore.authentication.type | hadoop.security.authentication    | No       | simple  | Metastore authentication method: simple (default) or kerberos. In versions 3.0 and earlier, the authentication method was determined by the hadoop.security.authentication property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately. Example: 'hive.metastore.authentication.type' = 'kerberos' |
+| hive.metastore.service.principal   | hive.metastore.kerberos.principal | No       | Empty   | Hive server-side principal; supports the _HOST placeholder. Example: 'hive.metastore.service.principal' = 'hive/[email protected]' |
+| hive.metastore.client.principal    | hadoop.kerberos.principal         | No       | Empty   | Kerberos principal used by Doris to connect to the Hive MetaStore service. |
+| hive.metastore.client.keytab       | hadoop.kerberos.keytab            | No       | Empty   | Kerberos keytab file path |
+| hive.metastore.username            | hadoop.username                   | No       | hadoop  | Hive Metastore username, used in non-Kerberos mode |
+| hive.conf.resources                |                                   | No       | Empty   | Path to the hive-site.xml configuration file; must be a relative path |
+
+> Note:
+>
+> For versions before 3.1.0, please use the former names.
+
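+As an illustration, the same connection expressed with the former names (for versions before 3.1.0) might look like the following sketch; the URI, principals, and keytab path are placeholder values:
+
+```sql
+'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+'hadoop.security.authentication' = 'kerberos',
+'hive.metastore.kerberos.principal' = '<service_principal>',
+'hadoop.kerberos.principal' = '<client_principal>',
+'hadoop.kerberos.keytab' = '<keytab_path>'
+```
+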
+### Required Parameters
+
+* `hive.metastore.uris`: The URI address of the Hive Metastore; must be specified.
+
+### Optional Parameters
+
+* `hive.metastore.authentication.type`: Authentication method; defaults to `simple`, with `kerberos` as the alternative.
+
+* `hive.metastore.service.principal`: Kerberos principal of the Hive MetaStore service; must be specified when using Kerberos authentication.
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to connect to the Hive MetaStore service; must be specified when using Kerberos authentication.
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path; must be specified when using Kerberos authentication.
+
+* `hive.metastore.username`: Username for connecting to the Hive MetaStore service, used in non-Kerberos mode; defaults to `hadoop`.
+
+* `hive.conf.resources`: Path to the hive-site.xml configuration file; used when the connection configuration for the Hive Metastore service needs to be read from a configuration file.
+
+### Authentication Methods
+
+#### Simple Authentication
+
+* `simple`: Non-Kerberos mode; connects directly to the Hive Metastore service.
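+
+A minimal sketch of the simple mode (specifying the username is optional; it defaults to hadoop):
+
+```sql
+'hive.metastore.authentication.type' = 'simple',
+'hive.metastore.username' = 'hadoop'
+```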
+
+#### Kerberos Authentication
+
+To connect to the Hive Metastore service with Kerberos authentication, configure the following parameters:
+
+* `hive.metastore.authentication.type`: Set to `kerberos`
+
+* `hive.metastore.service.principal`: Kerberos principal of Hive MetaStore 
service
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to 
connect to Hive MetaStore service
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path
+
+```sql
+'hive.metastore.authentication.type' = 'kerberos',
+'hive.metastore.service.principal' = 'hive/[email protected]',
+'hive.metastore.client.principal' = 'hive/[email protected]',
+'hive.metastore.client.keytab' = '/etc/security/keytabs/hive.keytab'
 ```
 
-### Enabling Kerberos Authentication Related Parameters
+When using a Hive MetaStore service with Kerberos authentication enabled, ensure that the same keytab file exists on all FE nodes, that the user running the Doris process has read permission on the keytab file, and that the krb5 configuration file is configured correctly.
 
-#### `hive.metastore.service.principal`
-- Description  
-    The Kerberos principal of the Hive Metastore service, used for Doris to 
verify the identity of the Metastore.
+For detailed Kerberos configuration, refer to Kerberos Authentication.
 
-- Placeholder Support  
-    `_HOST` will automatically be replaced with the actual hostname of the 
connected Metastore (suitable for multi-node Metastore clusters).
+### Configuration File Parameters
 
-- Example
-    ```plaintext
-    hive/[email protected]
-    hive/[email protected]  # Dynamically resolve the actual hostname
-    ```
+#### `hive.conf.resources`
 
-#### `hive.metastore.client.principal`
-- Description
-    The Kerberos principal used when connecting to the Hive Metastore service. 
For example: `doris/[email protected]` or `doris/[email protected]`.
+If the connection configuration for the Hive Metastore service needs to be read from a configuration file, set the `hive.conf.resources` parameter to the file path.
 
-- Placeholder Support  
-    `_HOST` will automatically be replaced with the actual hostname of the 
connected Metastore (suitable for multi-node Metastore clusters).
+> Note: The `hive.conf.resources` parameter only supports relative paths; do not use absolute paths. The default base directory is `${DORIS_HOME}/plugins/hadoop_conf/`; you can specify a different directory by modifying hadoop_config_dir in fe.conf.
 
-- Example
-    ```plaintext
-    doris/[email protected]
-    doris/[email protected]  # Dynamically resolve the actual hostname
+Example: `'hive.conf.resources' = 'hms-1/hive-site.xml'`
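+
+For instance, a catalog whose connection settings are read from hive-site.xml might look like the following sketch; the catalog name is hypothetical, and the file is assumed to contain hive.metastore.uris:
+
+```sql
+CREATE CATALOG hive_conf_file_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.conf.resources' = 'hms-1/hive-site.xml'
+);
+```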
+
+## Catalog Type-Specific Parameters
+
+The following parameters are specific to each Catalog type, in addition to the 
common parameters.
+
+### Hive Catalog
+
+| Parameter Name      | Former Name | Required | Default | Description                                                  |
+| ------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                |             | Yes      | None    | Catalog type; fixed as hms for a Hive Catalog                |
+| hive.metastore.type |             | No       | 'hms'   | Metadata catalog type; must be hms when using Hive Metastore |
+
+#### Examples
+
+1. Create a Hive Catalog that uses an unauthenticated Hive Metastore as the metadata service, with S3 as the storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_s3_test_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 's3.ap-east-1.amazonaws.com'
+   );
+   ```
+
+2. Create a Hive Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service, with S3 as the storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 
'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Iceberg Catalog
+
+| Parameter Name       | Former Name | Required | Default | Description                                                  |
+| -------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                 |             | Yes      | None    | Catalog type; fixed as iceberg for an Iceberg Catalog        |
+| iceberg.catalog.type |             | No       | None    | Metadata catalog type; must be hms when using Hive Metastore |
+| warehouse            |             | No       | None    | Iceberg warehouse path                                       |
+
+#### Examples
+
+1. Create an Iceberg Catalog that uses Hive Metastore as the metadata service, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG iceberg_hms_s3_test_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/iceberg_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 's3.ap-east-1.amazonaws.com'
+    );
+    ```
+
+2. Create an Iceberg Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service in a multi-Kerberos environment, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS iceberg_hms_on_oss_kerberos_new_catalog 
PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/iceberg_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 
'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
     ```
 
-#### `hive.metastore.client.keytab`
-- Description
-    The path to the keytab file containing the key for the specified 
principal. The operating system user running all FEs must have permission to 
read this file.
+### Paimon Catalog
+
+| Parameter Name      | Former Name | Required | Default    | Description                                                                                        |
+| ------------------- | ----------- | -------- | ---------- | -------------------------------------------------------------------------------------------------- |
+| type                |             | Yes      | None       | Catalog type; fixed as paimon for a Paimon Catalog                                                 |
+| paimon.catalog.type |             | No       | filesystem | Metadata catalog type; must be hms when using Hive Metastore. The default, filesystem, stores metadata in the file system |
+| warehouse           |             | Yes      | None       | Paimon warehouse path                                                                               |
 
-- Example
-    ```plaintext
-    "hive.metastore.client.keytab" = "conf/doris.keytab"
+#### Examples
+
+1. Create a Paimon Catalog that uses Hive Metastore as the metadata service, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_s3_test_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/paimon_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 's3.ap-east-1.amazonaws.com'
+    );
     ```
 
-#### Complete Example  
+2. Create a Paimon Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service in a multi-Kerberos environment, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/paimon_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 
'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
 
-Enable Kerberos authentication
+## Frequently Asked Questions (FAQ)
 
-```plaintext
-"hive.metastore.authentication.type" = "kerberos",
-"hive.metastore.service.principal" = "hive/[email protected]",
-"hive.metastore.client.principal" = "doris/[email protected]",
-"hive.metastore.client.keytab" = "etc/doris/conf/doris.keytab"
-```
+- Q1: Is hive-site.xml mandatory?
+
+    No, it's only used when configuration needs to be read from it.
+
+- Q2: Must the keytab file exist on every node?
+
+    Yes, all FE nodes must be able to access the specified path.
+
+- Q3: What should be noted when using write-back functionality, i.e., creating 
Hive/Iceberg databases/tables in Doris?
+
+    Creating tables involves metadata operations on the storage side, i.e., it accesses the storage system. Therefore, the Hive MetaStore server must be configured with the corresponding storage parameters, such as the access parameters for S3, OSS, and other storage services. When using object storage as the underlying storage system, also ensure that the bucket being written to matches the configured region.
\ No newline at end of file
\ No newline at end of file
diff --git a/docs/lakehouse/storages/azure-blob.md b/docs/lakehouse/storages/azure-blob.md
index 77655569560..9acc6c6d6f3 100644
--- a/docs/lakehouse/storages/azure-blob.md
+++ b/docs/lakehouse/storages/azure-blob.md
@@ -5,5 +5,5 @@
 }
 ---
 
-The document is under development, please refer to versioned doc 2.1 or 3.0
+Azure Blob will be supported later.
 
diff --git a/docs/lakehouse/storages/baidu-bos.md b/docs/lakehouse/storages/baidu-bos.md
index 512f642a1f9..2465f67dc63 100644
--- a/docs/lakehouse/storages/baidu-bos.md
+++ b/docs/lakehouse/storages/baidu-bos.md
@@ -5,5 +5,5 @@
 }
 ---
 
-Baidu Cloud BOS will be supported later.
+The document is under development.
 
diff --git a/docs/lakehouse/storages/gcs.md b/docs/lakehouse/storages/gcs.md
index 5ffa14a7ae1..e99471b68dc 100644
--- a/docs/lakehouse/storages/gcs.md
+++ b/docs/lakehouse/storages/gcs.md
@@ -5,5 +5,5 @@
 }
 ---
 
-The document is under development, please refer to versioned doc 2.1 or 3.0
+The document is under development.
 
diff --git a/docs/lakehouse/storages/hdfs.md b/docs/lakehouse/storages/hdfs.md
index 201fe7bf00e..6e214cc85ba 100644
--- a/docs/lakehouse/storages/hdfs.md
+++ b/docs/lakehouse/storages/hdfs.md
@@ -41,7 +41,7 @@ Simple authentication is suitable for HDFS clusters that have not enabled Kerberos
 
 Using Simple authentication, you can set the following parameters or use the 
default values directly:
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -51,14 +51,14 @@ Examples:
 
 Using `lakers` username to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple",
 "hadoop.username" = "lakers"
 ```
 
 Using default system user to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -68,7 +68,7 @@ Kerberos authentication is suitable for HDFS clusters with Kerberos enabled.
 
 Using Kerberos authentication, you need to set the following parameters:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "<your_principal>",
 "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
@@ -84,12 +84,34 @@ Doris will access HDFS with the identity specified by the `hdfs.authentication.kerberos.principal` property
 
 Example:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
 "hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab",
 ```
 
+## HDFS HA Configuration
+
+If HDFS HA mode is enabled, you need to configure the `dfs.nameservices`-related parameters:
+
+```sql
+'dfs.nameservices' = '<your-nameservice>',
+'dfs.ha.namenodes.<your-nameservice>' = '<nn1>,<nn2>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn1>' = '<nn1_host:port>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn2>' = '<nn2_host:port>',
+'dfs.client.failover.proxy.provider.<your-nameservice>' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
+Example:
+
+```sql
+'dfs.nameservices' = 'nameservice1',
+'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+'dfs.client.failover.proxy.provider.nameservice1' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
 ## Configuration Files
 
 > This feature is supported since version 3.1.0
@@ -103,9 +125,9 @@ If the configuration files contain the above parameters mentioned in this document
 **Examples:**
 
 ```sql
-Multiple configuration files
+-- Multiple configuration files
 
'hadoop.config.resources'='hdfs-cluster-1/core-site.xml,hdfs-cluster-1/hdfs-site.xml'
-Single configuration file
+-- Single configuration file
 'hadoop.config.resources'='hdfs-cluster-2/hdfs-site.xml'
 ```
 
@@ -121,7 +143,7 @@ Note: This feature may increase the load on the HDFS cluster, please use it judiciously
 
 You can enable this feature in the following way:
 
-```plain
+```sql
 "dfs.client.hedged.read.threadpool.size" = "128",
 "dfs.client.hedged.read.threshold.millis" = "500"
 ```
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
index 7bb58e050cb..1145dc5ee47 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
@@ -4,154 +4,239 @@
   "language": "zh-CN"
 }
 ---
-# Parameter Reference for Connecting to External Metadata Services via `CREATE CATALOG`

-This document describes all parameters supported when connecting to and accessing external metadata services through the `CREATE CATALOG` statement. Three Catalog types are currently supported: Hive, Iceberg, and Paimon.
+This document describes all supported parameters when connecting to and accessing the Hive MetaStore service through the `CREATE CATALOG` statement.

-## ✅ Currently Supported Catalog Types
+## Supported Catalog Types

-| Catalog Type | Type Identifier (`type`)       | Description                               |
-|--------------|--------------------------------|-------------------------------------------|
-| Hive         | `hms`                          | Catalog for connecting to Hive Metastore  |
-| Iceberg      | `iceberg_hms` / `iceberg_rest` | Catalog for Iceberg table format          |
-| Paimon       | `paimon`                       | Catalog for Apache Paimon table format    |
+| Catalog Type | Type Identifier (type) | Description                               |
+| ------------ | ---------------------- | ----------------------------------------- |
+| Hive         | hms                    | Catalog for connecting to Hive Metastore  |
+| Iceberg      | iceberg                | Catalog for Iceberg table format          |
+| Paimon       | paimon                 | Catalog for Apache Paimon table format    |
 
----
-
-# 1. Hive Catalog
-
-Hive Catalog is used to connect to the Hive Metastore and read Hive table information. Kerberos authentication is supported.
-
-## 📋 Parameter Overview
-
-| Parameter Name                       | Required | Default | Description                                              |
-|--------------------------------------|----------|---------|----------------------------------------------------------|
-| `type`                               | ✅ Yes   | None    | Catalog type, fixed as `hms` for Hive                    |
-| `hive.metastore.uris`                | ✅ Yes   | None    | URI address of the Hive Metastore                        |
-| `hive.conf.resources`                | No       | Empty   | Relative path of the hive-site.xml configuration file    |
-| `hive.metastore.authentication.type` | No       | simple  | Metastore authentication method, `simple` or `kerberos`  |
-| `hive.metastore.service.principal`   | No       | Empty   | Kerberos server-side principal                           |
-| `hive.metastore.client.principal`    | No       | Empty   | Kerberos client-side principal                           |
-| `hive.metastore.client.keytab`       | No       | Empty   | Kerberos client keytab file path                         |
-
-## 📖 Detailed Parameter Description
+## Common Parameters Overview

-### `type`
-Catalog type, fixed as `hms` for Hive  
-Example: `"type" = "hms"`
+The following parameters are common to different Catalog types.

-### `hive.metastore.uris`
-URI address of the Hive Metastore; multiple URIs separated by commas are supported  
-Example: `"hive.metastore.uris" = "thrift://127.0.0.1:9083"`
+| Parameter Name                     | Former Name                       | Required | Default | Description |
+| ---------------------------------- | --------------------------------- | -------- | ------- | ----------- |
+| hive.metastore.uris                |                                   | Yes      | None    | URI address of the Hive Metastore; multiple URIs can be specified, separated by commas. Examples: 'hive.metastore.uris' = 'thrift://127.0.0.1:9083' or 'hive.metastore.uris' = 'thrift://127.0.0.1:9083,thrift://127.0.0.1:9084' |
+| hive.metastore.authentication.type | hadoop.security.authentication    | No       | simple  | Metastore authentication method: simple (default) or kerberos. In versions 3.0 and earlier, the authentication method was determined by the hadoop.security.authentication property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately. Example: 'hive.metastore.authentication.type' = 'kerberos' |
+| hive.metastore.service.principal   | hive.metastore.kerberos.principal | No       | Empty   | Hive server-side principal; supports the _HOST placeholder. Example: 'hive.metastore.service.principal' = 'hive/<[email protected]>' |
+| hive.metastore.client.principal    | hadoop.kerberos.principal         | No       | Empty   | Kerberos principal used by Doris to connect to the Hive MetaStore service. |
+| hive.metastore.client.keytab       | hadoop.kerberos.keytab            | No       | Empty   | Kerberos keytab file path |
+| hive.metastore.username            | hadoop.username                   | No       | hadoop  | Hive Metastore username, used in non-Kerberos mode |
+| hive.conf.resources                |                                   | No       | Empty   | Path to the hive-site.xml configuration file; must be a relative path |

-### `hive.conf.resources`
-Path to the hive-site.xml configuration file; the default directory is `/plugins/hadoop_conf/`  
-Example: `"hive.conf.resources" = "hms-1/hive-site.xml"`
+> Note:
+>
+> For versions before 3.1.0, please use the former names.
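+
+As an illustration, the same connection expressed with the former names (for versions before 3.1.0) might look like the following sketch; the URI, principals, and keytab path are placeholder values:
+
+```sql
+'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+'hadoop.security.authentication' = 'kerberos',
+'hive.metastore.kerberos.principal' = '<service_principal>',
+'hadoop.kerberos.principal' = '<client_principal>',
+'hadoop.kerberos.keytab' = '<keytab_path>'
+```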
 
-### `hive.metastore.authentication.type`
-Authentication method: `simple` (default) or `kerberos`. In versions 3.0 and earlier, the authentication method was determined by the `hadoop.security.authentication` property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately.
-Example: `"hive.metastore.authentication.type" = "kerberos"`
+### Required Parameters

-### `hive.metastore.service.principal`
-Hive server-side principal; supports the `_HOST` placeholder  
-Example: `"hive.metastore.service.principal" = "hive/[email protected]"`
+* `hive.metastore.uris`: The URI address of the Hive Metastore; must be specified.

-### `hive.metastore.client.principal`
-Client-side principal (Kerberos mode)  
-Example: `"hive.metastore.client.principal" = "doris/[email protected]"`
+### Optional Parameters

-### `hive.metastore.client.keytab`
-Keytab file path; must exist on all FE nodes  
-Example: `"hive.metastore.client.keytab" = "conf/doris.keytab"`
+* `hive.metastore.authentication.type`: Authentication method; defaults to `simple`, with `kerberos` as the alternative.

-## ✅ Example: Hive Catalog (Kerberos)
-
-```
-CREATE CATALOG hive_catalog WITH (
-  "type" = "hms",
-  "hive.metastore.uris" = "thrift://127.0.0.1:9083",
-  "hive.metastore.authentication.type" = "kerberos",
-  "hive.metastore.service.principal" = "hive/[email protected]",
-  "hive.metastore.client.principal" = "doris/[email protected]",
-  "hive.metastore.client.keytab" = "conf/doris.keytab"
-);
-```
-
----
+* `hive.metastore.service.principal`: Kerberos principal of the Hive MetaStore service; must be specified when using Kerberos authentication.

-# 2. Iceberg Catalog
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to connect to the Hive MetaStore service; must be specified when using Kerberos authentication.

-Using Hive Metastore is supported.
+* `hive.metastore.client.keytab`: Kerberos keytab file path; must be specified when using Kerberos authentication.

-## 📋 Parameter Overview
+* `hive.metastore.username`: Username for connecting to the Hive MetaStore service, used in non-Kerberos mode; defaults to `hadoop`.
 
-| Parameter Name                       | Required | Default | Description                                              |
-|--------------------------------------|----------|---------|----------------------------------------------------------|
-| `type`                               | ✅ Yes   | None    | Catalog type: fixed as `iceberg`                         |
-| `iceberg.catalog.type`               | ✅ Yes   | None    | Metadata Catalog type, fixed as `hms`                    |
-| `warehouse`                          | ✅ Yes   | None    | Iceberg warehouse path                                   |
-| `hive.metastore.uris`                | ✅ Yes   | None    | URI address of the Hive Metastore                        |
-| `hive.conf.resources`                | No       | Empty   | Relative path of the hive-site.xml configuration file    |
-| `hive.metastore.authentication.type` | No       | simple  | Metastore authentication method, `simple` or `kerberos`  |
-| `hive.metastore.service.principal`   | No       | Empty   | Kerberos server-side principal                           |
-| `hive.metastore.client.principal`    | No       | Empty   | Kerberos client-side principal                           |
-| `hive.metastore.client.keytab`       | No       | Empty   | Kerberos client keytab file path                         |
+* `hive.conf.resources`: Path to the hive-site.xml configuration file; used when the connection configuration for the Hive Metastore service needs to be read from a configuration file.
 
+### Authentication Methods

-### `type`
-Catalog type, fixed as `hms` for Hive  
-Example: `"type" = "hms"`
+#### Simple Authentication

-### `hive.metastore.uris`
-URI address of the Hive Metastore; multiple URIs separated by commas are supported  
-Example: `"hive.metastore.uris" = "thrift://127.0.0.1:9083"`
+* `simple`: Non-Kerberos mode; connects directly to the Hive Metastore service.
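+
+A minimal sketch of the simple mode (specifying the username is optional; it defaults to hadoop):
+
+```sql
+'hive.metastore.authentication.type' = 'simple',
+'hive.metastore.username' = 'hadoop'
+```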
 
-### `hive.conf.resources`
-Path to the hive-site.xml configuration file; the default directory is `/plugins/hadoop_conf/`  
-Example: `"hive.conf.resources" = "hms-1/hive-site.xml"`
+#### Kerberos Authentication

-### `hive.metastore.authentication.type`
-Authentication method: `simple` (default) or `kerberos`. In versions 3.0 and earlier, the authentication method was determined by the `hadoop.security.authentication` property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately.
-Example: `"hive.metastore.authentication.type" = "kerberos"`
+To connect to the Hive Metastore service with Kerberos authentication, configure the following parameters:

-### `hive.metastore.service.principal`
-Hive server-side principal; supports the `_HOST` placeholder  
-Example: `"hive.metastore.service.principal" = "hive/[email protected]"`
+* `hive.metastore.authentication.type`: Set to `kerberos`

-### `hive.metastore.client.principal`
-Client-side principal (Kerberos mode)  
-Example: `"hive.metastore.client.principal" = "doris/[email protected]"`
+* `hive.metastore.service.principal`: Kerberos principal of the Hive MetaStore service

-### `hive.metastore.client.keytab`
-Keytab file path; must exist on all FE nodes  
-Example: `"hive.metastore.client.keytab" = "conf/doris.keytab"`
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to connect to the Hive MetaStore service

-## ✅ Example
+* `hive.metastore.client.keytab`: Kerberos keytab file path
 
+```sql
+'hive.metastore.authentication.type' = 'kerberos',
+'hive.metastore.service.principal' = 'hive/[email protected]',
+'hive.metastore.client.principal' = 'hive/[email protected]',
+'hive.metastore.client.keytab' = '/etc/security/keytabs/hive.keytab'
 ```
-CREATE CATALOG iceberg_catalog WITH (
-  "type" = "iceberg_hms",
-  "iceberg.hive.metastore.uris" = "thrift://127.0.0.1:9083",
-  "warehouse" = "hdfs:///user/hive/warehouse"
-  ----
-  Standard Hive Metastore parameters
-);
-```
-
-
----
-
-# 3. Paimon Catalog
-
-To be added
-
-
----
-
-# 4. Frequently Asked Questions (FAQ)
-
-**Q1:** Is hive-site.xml mandatory?  
-No; it is only used when the connection configuration needs to be read from it.

-**Q2:** Must the keytab file exist on every node?  
-Yes, all FE nodes must be able to access the specified path.
+When using a Hive MetaStore service with Kerberos authentication enabled, ensure that the same keytab file exists on all FE nodes, that the user running the Doris process has read permission on the keytab file, and that the krb5 configuration file is configured correctly.
+
+For detailed Kerberos configuration, refer to Kerberos Authentication.
+
+### Configuration File Parameters
+
+#### `hive.conf.resources`
+
+If the connection configuration for the Hive Metastore service needs to be read from a configuration file, set the `hive.conf.resources` parameter to the file path.
+
+> Note: The `hive.conf.resources` parameter only supports relative paths; do not use absolute paths. The default base directory is `${DORIS_HOME}/plugins/hadoop_conf/`; you can specify a different directory by modifying hadoop_config_dir in fe.conf.
+
+Example: `'hive.conf.resources' = 'hms-1/hive-site.xml'`
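+
+For instance, a catalog whose connection settings are read from hive-site.xml might look like the following sketch; the catalog name is hypothetical, and the file is assumed to contain hive.metastore.uris:
+
+```sql
+CREATE CATALOG hive_conf_file_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.conf.resources' = 'hms-1/hive-site.xml'
+);
+```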
+
+## Catalog Type-Specific Parameters
+
+The following parameters are specific to each Catalog type, in addition to the common parameters.
+
+### Hive Catalog
+
+| Parameter Name      | Former Name | Required | Default | Description                                                  |
+| ------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                |             | Yes      | None    | Catalog type; fixed as hms for a Hive Catalog                |
+| hive.metastore.type |             | No       | 'hms'   | Metadata catalog type; must be hms when using Hive Metastore |
+
+#### Examples
+
+1. Create a Hive Catalog that uses an unauthenticated Hive Metastore as the metadata service, with S3 as the storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_s3_test_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 's3.ap-east-1.amazonaws.com'
+   );
+   ```
+
+2. Create a Hive Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service, with S3 as the storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 
'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Iceberg Catalog
+
+| Parameter Name       | Former Name | Required | Default | Description                                                  |
+| -------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                 |             | Yes      | None    | Catalog type; fixed as iceberg for an Iceberg Catalog        |
+| iceberg.catalog.type |             | No       | None    | Metadata catalog type; must be hms when using Hive Metastore |
+| warehouse            |             | No       | None    | Iceberg warehouse path                                       |
+
+#### Examples
+
+1. Create an Iceberg Catalog that uses Hive Metastore as the metadata service, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG iceberg_hms_s3_test_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/iceberg_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 's3.ap-east-1.amazonaws.com'
+    );
+    ```
+
+2. Create an Iceberg Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service in a multi-Kerberos environment, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS iceberg_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/iceberg_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
+
+### Paimon Catalog
+
+| Parameter Name      | Former Name | Required | Default    | Description                                                                                        |
+| ------------------- | ----------- | -------- | ---------- | -------------------------------------------------------------------------------------------------- |
+| type                |             | Yes      | None       | Catalog type; fixed as paimon for a Paimon Catalog                                                 |
+| paimon.catalog.type |             | No       | filesystem | Must be hms when using Hive Metastore; the default, filesystem, stores metadata in the file system |
+| warehouse           |             | Yes      | None       | Paimon warehouse path                                                                               |
+
+#### Examples
+
+1. Create a Paimon Catalog that uses Hive Metastore as the metadata service, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_s3_test_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/paimon_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 's3.ap-east-1.amazonaws.com'
+    );
+    ```
+
+2. Create a Paimon Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service in a multi-Kerberos environment, with S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/paimon_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
+
+## Frequently Asked Questions (FAQ)
+
+- Q1: Is hive-site.xml mandatory?
+
+    No, it's only used when configuration needs to be read from it.
+
+- Q2: Must the keytab file exist on every node?
+
+    Yes, all FE nodes must be able to access the specified path.
+
+- Q3: What should be noted when using write-back functionality, i.e., creating Hive/Iceberg databases/tables in Doris?
+
+    Creating tables involves metadata operations on the storage side, i.e., it accesses the storage system. Therefore, the Hive MetaStore server must be configured with the corresponding storage parameters, such as the access parameters for S3, OSS, and other storage services. When using object storage as the underlying storage system, also ensure that the bucket being written to matches the configured region.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/baidu-bos.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/baidu-bos.md
index 551eff90766..908c1d33cc6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/baidu-bos.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/baidu-bos.md
@@ -5,4 +5,4 @@
 }
 ---
 
-The article is being updated; please refer to the 2.1/3.0 versioned docs for now.
+The document is under development.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/gcs.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/gcs.md
index 8359fbe64d9..c3312250071 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/gcs.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/gcs.md
@@ -5,5 +5,5 @@
 }
 ---
 
-The article is being updated; please refer to the 2.1/3.0 versioned docs for now.
+The document is under development.
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
index b8d7b7ab6e5..780903fa97b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
@@ -41,7 +41,7 @@ Simple authentication is suitable for HDFS clusters that have not enabled Kerberos.
 
 Using Simple authentication, you can set the following parameters, or use the default values directly:
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -51,14 +51,14 @@ In Simple authentication mode, you can use the `hadoop.username` parameter to specify a username
 
 Using the `lakers` username to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple",
 "hadoop.username" = "lakers"
 ```
 
 Using the default system user to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -68,7 +68,7 @@ Kerberos authentication is suitable for HDFS clusters with Kerberos enabled.
 
 Using Kerberos authentication, you need to set the following parameters:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "<your_principal>",
 "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
@@ -84,12 +84,34 @@ Doris will access HDFS with the identity specified by the `hdfs.authentication.kerberos.principal` property
 
 Example:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
 "hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab",
 ```
 
+## High Availability Configuration (HDFS HA)
+
+If HDFS HA mode is enabled, you need to configure the `dfs.nameservices`-related parameters:
+
+```sql
+'dfs.nameservices' = '<your-nameservice>',
+'dfs.ha.namenodes.<your-nameservice>' = '<nn1>,<nn2>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn1>' = '<nn1_host:port>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn2>' = '<nn2_host:port>',
+'dfs.client.failover.proxy.provider.<your-nameservice>' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
+Example:
+
+```sql
+'dfs.nameservices' = 'nameservice1',
+'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+'dfs.client.failover.proxy.provider.nameservice1' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
 ## Configuration Files
 
 > This feature is supported since version 3.1.0
@@ -103,9 +125,9 @@ Doris supports specifying HDFS-related configuration files via the `hadoop.config.resources` parameter
 **Examples:**
 
 ```sql
-Multiple configuration files
+-- Multiple configuration files
 
'hadoop.config.resources'='hdfs-cluster-1/core-site.xml,hdfs-cluster-1/hdfs-site.xml'
-Single configuration file
+-- Single configuration file
 'hadoop.config.resources'='hdfs-cluster-2/hdfs-site.xml'
 ```
 
@@ -121,7 +143,7 @@ The HDFS Client provides the Hedged Read feature. This feature can, when a read request exceeds
 
 You can enable this feature in the following way:
 
-```plain
+```sql
 "dfs.client.hedged.read.threadpool.size" = "128",
 "dfs.client.hedged.read.threshold.millis" = "500"
 ```
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/hive-metastore.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/hive-metastore.md
index 27a274199fe..1145dc5ee47 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/hive-metastore.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/metastores/hive-metastore.md
@@ -5,21 +5,238 @@
 }
 ---
 
-This document describes the parameters supported when connecting to and accessing the Hive Metastore through the `CREATE CATALOG` statement.
+This document describes all supported parameters when connecting to and accessing the Hive MetaStore service through the `CREATE CATALOG` statement.

-## Parameter Overview
+## Supported Catalog Types

-| Property Name | Former Name | Description | Default | Required |
-|---|---|---|---|---|
-| `hive.metastore.uris` | | URI address of the Hive Metastore. Multiple URIs can be specified, separated by commas. The first URI is used by default, and if the first URI is unavailable, the others will be tried. For example: `thrift://172.0.0.1:9083` or `thrift://172.0.0.1:9083,thrift://172.0.0.2:9083` | None | Yes |
+| Catalog Type | Type Identifier (type) | Description                               |
+| ------------ | ---------------------- | ----------------------------------------- |
+| Hive         | hms                    | Catalog for connecting to Hive Metastore  |
+| Iceberg      | iceberg                | Catalog for Iceberg table format          |
+| Paimon       | paimon                 | Catalog for Apache Paimon table format    |
 
-## Parameters for Enabling Kerberos Authentication
+## Common Parameters Overview
 
-```plaintext
-"hadoop.authentication.type" = "kerberos",
-"hive.metastore.service.principal" = "hive/[email protected]",
-"hadoop.kerberos.principal" = "doris/[email protected]",
-"hadoop.kerberos.keytab" = "etc/doris/conf/doris.keytab"
+The following parameters are common to different Catalog types.
+
+| Parameter Name                     | Former Name                       | Required | Default | Description |
+| ---------------------------------- | --------------------------------- | -------- | ------- | ----------- |
+| hive.metastore.uris                |                                   | Yes      | None    | URI address of the Hive Metastore; multiple URIs can be specified, separated by commas. Examples: 'hive.metastore.uris' = 'thrift://127.0.0.1:9083' or 'hive.metastore.uris' = 'thrift://127.0.0.1:9083,thrift://127.0.0.1:9084' |
+| hive.metastore.authentication.type | hadoop.security.authentication    | No       | simple  | Metastore authentication method: simple (default) or kerberos. In versions 3.0 and earlier, the authentication method was determined by the hadoop.security.authentication property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately. Example: 'hive.metastore.authentication.type' = 'kerberos' |
+| hive.metastore.service.principal   | hive.metastore.kerberos.principal | No       | Empty   | Hive server-side principal; supports the _HOST placeholder. Example: 'hive.metastore.service.principal' = 'hive/<[email protected]>' |
+| hive.metastore.client.principal    | hadoop.kerberos.principal         | No       | Empty   | Kerberos principal used by Doris to connect to the Hive MetaStore service. |
+| hive.metastore.client.keytab       | hadoop.kerberos.keytab            | No       | Empty   | Kerberos keytab file path |
+| hive.metastore.username            | hadoop.username                   | No       | hadoop  | Hive Metastore username, used in non-Kerberos mode |
+| hive.conf.resources                |                                   | No       | Empty   | Path to the hive-site.xml configuration file; must be a relative path |
+
+> Note:
+>
+> For versions before 3.1.0, please use the former names.
+
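+As an illustration, the same connection expressed with the former names (for versions before 3.1.0) might look like the following sketch; the URI, principals, and keytab path are placeholder values:
+
+```sql
+'hive.metastore.uris' = 'thrift://127.0.0.1:9083',
+'hadoop.security.authentication' = 'kerberos',
+'hive.metastore.kerberos.principal' = '<service_principal>',
+'hadoop.kerberos.principal' = '<client_principal>',
+'hadoop.kerberos.keytab' = '<keytab_path>'
+```
+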
+### Required Parameters
+
+* `hive.metastore.uris`: The URI address of the Hive Metastore; must be specified.
+
+### Optional Parameters
+
+* `hive.metastore.authentication.type`: Authentication method; defaults to `simple`, with `kerberos` as the alternative.
+
+* `hive.metastore.service.principal`: Kerberos principal of the Hive MetaStore service; must be specified when using Kerberos authentication.
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to connect to the Hive MetaStore service; must be specified when using Kerberos authentication.
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path; must be specified when using Kerberos authentication.
+
+* `hive.metastore.username`: Username for connecting to the Hive MetaStore service, used in non-Kerberos mode; defaults to `hadoop`.
+
+* `hive.conf.resources`: Path to the hive-site.xml configuration file; used when the connection configuration for the Hive Metastore service needs to be read from a configuration file.
+
+### Authentication Methods
+
+#### Simple Authentication
+
+* `simple`: Non-Kerberos mode; connects directly to the Hive Metastore service.
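+
+A minimal sketch of the simple mode (specifying the username is optional; it defaults to hadoop):
+
+```sql
+'hive.metastore.authentication.type' = 'simple',
+'hive.metastore.username' = 'hadoop'
+```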
+
+#### Kerberos Authentication
+
+To connect to the Hive Metastore service with Kerberos authentication, configure the following parameters:
+
+* `hive.metastore.authentication.type`: Set to `kerberos`
+
+* `hive.metastore.service.principal`: Kerberos principal of the Hive MetaStore service
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to connect to the Hive MetaStore service
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path
+
+```sql
+'hive.metastore.authentication.type' = 'kerberos',
+'hive.metastore.service.principal' = 'hive/[email protected]',
+'hive.metastore.client.principal' = 'hive/[email protected]',
+'hive.metastore.client.keytab' = '/etc/security/keytabs/hive.keytab'
 ```
 
-> Note: in the current version, Hive's Kerberos authentication parameters are shared with HDFS's Kerberos authentication parameters.
+When using a Hive MetaStore service with Kerberos authentication enabled, ensure that the same keytab file exists on all FE nodes, that the user running the Doris process has read permission on the keytab file, and that the krb5 configuration file is configured correctly.
+
+For detailed Kerberos configuration, refer to Kerberos Authentication.
+
+### Configuration File Parameters
+
+#### `hive.conf.resources`
+
+If the connection configuration for the Hive Metastore service needs to be read from a configuration file, set the `hive.conf.resources` parameter to the file path.
+
+> Note: The `hive.conf.resources` parameter only supports relative paths; do not use absolute paths. The default base directory is `${DORIS_HOME}/plugins/hadoop_conf/`; you can specify a different directory by modifying hadoop_config_dir in fe.conf.
+
+Example: `'hive.conf.resources' = 'hms-1/hive-site.xml'`
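+
+For instance, a catalog whose connection settings are read from hive-site.xml might look like the following sketch; the catalog name is hypothetical, and the file is assumed to contain hive.metastore.uris:
+
+```sql
+CREATE CATALOG hive_conf_file_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.conf.resources' = 'hms-1/hive-site.xml'
+);
+```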
+
+## Catalog Type-Specific Parameters
+
+The following parameters are specific to each Catalog type, in addition to the common parameters.
+
+### Hive Catalog
+
+| Parameter Name      | Former Name | Required | Default | Description                                                  |
+| ------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                |             | Yes      | None    | Catalog type; fixed as hms for a Hive Catalog                |
+| hive.metastore.type |             | No       | 'hms'   | Metadata catalog type; must be hms when using Hive Metastore |
+
+#### Examples
+
+1. Create a Hive Catalog that uses an unauthenticated Hive Metastore as the metadata service, with S3 as the storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_s3_test_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 's3.ap-east-1.amazonaws.com'
+   );
+   ```
+
+2. Create a Hive Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service, with S3 as the storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 
'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Iceberg Catalog
+
+| 参数名称                 | 曾用名 | 是否必须 | 默认值 | 简要描述                                         |
+| -------------------- | --- | ---- | --- | -------------------------------------------- |
+| type                 |     | 是    | 无   | Catalog 类型,Iceberg 固定为 iceberg               |
+| iceberg.catalog.type |     | 否    | 无   | Metadata Catalog 类型,使用 Hive Metastore 时必须为 hms |
+| warehouse            |     | 否    | 无   | Iceberg 仓库路径                                 |
+
+#### 示例
+
+1. 创建一个使用 Hive Metastore 作为元数据服务的 Iceberg Catalog,存储使用 S3 存储服务。
+
+   ```sql
+   CREATE CATALOG iceberg_hms_s3_test_catalog PROPERTIES (
+       'type' = 'iceberg',
+       'iceberg.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       'warehouse' = 's3://doris/iceberg_warehouse/',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. 创建一个使用开启了 Kerberos 认证的 Hive Metastore 作为元数据服务的 Iceberg Catalog,并且处于多 Kerberos 环境下。存储使用 OSS 存储服务。
+
+   ```sql
+   CREATE CATALOG IF NOT EXISTS iceberg_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'iceberg',
+       'iceberg.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'warehouse' = 'oss://doris/iceberg_warehouse/',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Paimon Catalog
+
+| 参数名称                | 曾用名 | 是否必须 | 默认值        | 简要描述                                        |
+| ------------------- | --- | ---- | ---------- | ------------------------------------------- |
+| type                |     | 是    | 无          | Catalog 类型,Paimon 固定为 paimon                |
+| paimon.catalog.type |     | 否    | filesystem | 使用 Hive Metastore 时必须为 hms;默认值为 filesystem,即使用文件系统存储元数据 |
+| warehouse           |     | 是    | 无          | Paimon 仓库路径                                 |
+
+#### 示例
+
+1. 创建一个使用 Hive Metastore 作为元数据服务的 Paimon Catalog,存储使用 S3 存储服务。
+
+   ```sql
+   CREATE CATALOG IF NOT EXISTS paimon_hms_s3_test_catalog PROPERTIES (
+       'type' = 'paimon',
+       'paimon.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       'warehouse' = 's3://doris/paimon_warehouse/',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. 创建一个使用开启了 Kerberos 认证的 Hive Metastore 作为元数据服务的 Paimon Catalog,并且处于多 Kerberos 环境下。存储使用 OSS 存储服务。
+
+   ```sql
+   CREATE CATALOG IF NOT EXISTS paimon_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'paimon',
+       'paimon.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'warehouse' = 'oss://doris/paimon_warehouse/',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+## 常见问题 FAQ
+
+- Q1: hive-site.xml 是必须的吗?
+
+    不是,仅当需要从中读取连接配置时使用。
+
+- Q2: keytab 文件是否必须每个节点都存在?
+
+    是的,所有 FE 节点必须可访问指定路径。
+
+- Q3: 如使用回写功能,即在 Doris 中创建 Hive/Iceberg 库/表,需要注意什么?
+
+    由于创建表涉及存储端的元数据操作,即需要访问存储系统,因此 Hive MetaStore 服务 Server 端需要配置对应存储参数,如 S3、OSS 等存储服务的访问参数。如使用对象存储作为底层存储系统,还需要确保写入的 bucket 与配置的 Region 一致。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/baidu-bos.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/baidu-bos.md
index ffa67c45ec0..908c1d33cc6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/baidu-bos.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/baidu-bos.md
@@ -5,4 +5,4 @@
 }
 ---
 
-TODO
+文档更新中。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/gcs.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/gcs.md
index a6c334e2e63..c3312250071 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/gcs.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/gcs.md
@@ -5,5 +5,5 @@
 }
 ---
 
-TODO
+文档更新中。
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/hdfs.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/hdfs.md
index b8d7b7ab6e5..780903fa97b 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/hdfs.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/storages/hdfs.md
@@ -41,7 +41,7 @@ Simple 认证适用于未开启 Kerberos 的 HDFS 集群。
 
 使用 Simple 认证方式,可以设置以下参数,或直接使用默认值:
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -51,14 +51,14 @@ Simple 认证模式下,可以使用 `hadoop.username` 参数来指定用户名
 
 使用 `lakers` 用户名访问 HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple",
 "hadoop.username" = "lakers"
 ```
 
 使用默认系统用户访问 HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -68,7 +68,7 @@ Kerberos 认证适用于已开启 Kerberos 的 HDFS 集群。
 
 使用 Kerberos 认证方式,需要设置以下参数:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "<your_principal>",
 "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
@@ -84,12 +84,34 @@ Doris 将以该 `hdfs.authentication.kerberos.principal` 属性指定的主体
 
 示例:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
 "hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab",
 ```
 
+## 高可用配置(HDFS HA)
+
+如 HDFS 开启了 HA 模式,需要配置 `dfs.nameservices` 相关参数:
+
+```sql
+'dfs.nameservices' = '<your-nameservice>',
+'dfs.ha.namenodes.<your-nameservice>' = '<nn1>,<nn2>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn1>' = '<nn1_host:port>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn2>' = '<nn2_host:port>',
+'dfs.client.failover.proxy.provider.<your-nameservice>' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
+示例:
+
+```sql
+'dfs.nameservices' = 'nameservice1',
+'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+'dfs.client.failover.proxy.provider.nameservice1' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
 ## 配置文件
 
 > 该功能自 3.1.0 版本支持
@@ -103,9 +125,9 @@ Doris 支持通过 `hadoop.config.resources` 参数来指定 HDFS 相关配置
 **示例:**
 
 ```sql
-多个配置文件
+-- 多个配置文件
 
'hadoop.config.resources'='hdfs-cluster-1/core-site.xml,hdfs-cluster-1/hdfs-site.xml'
-单个配置文件
+-- 单个配置文件
 'hadoop.config.resources'='hdfs-cluster-2/hdfs-site.xml'
 ```
 
@@ -121,7 +143,7 @@ HDFS Client 提供了 Hedged Read 功能。该功能可以在一个读请求超
 
 可以通过以下方式开启这个功能:
 
-```plain
+```sql
 "dfs.client.hedged.read.threadpool.size" = "128",
 "dfs.client.hedged.read.threshold.millis" = "500"
 ```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/hive-metastore.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/hive-metastore.md
index 27a274199fe..1145dc5ee47 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/hive-metastore.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/metastores/hive-metastore.md
@@ -5,21 +5,238 @@
 }
 ---
 
-本文档用于介绍通过 `CREATE CATALOG` 语句连接并访问 Hive Metastore 时所支持的参数。
+本文档用于介绍通过 `CREATE CATALOG` 语句连接并访问 Hive MetaStore 服务时支持的所有参数。
 
-## 参数总览
+## 支持的 Catalog 类型
 
-| 属性名称                                 | 曾用名 | 描述                              
                                                                                
                                                                                
                                          | 默认值    | 是否必须 |
-|--------------------------------------|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|------|
-| `hive.metastore.uris`                | | Hive Metastore 的 URI 地址。支持指定多个 
URI,使用逗号分隔。默认使用第一个 URI,当第一个 URI 不可用时,会尝试使用其他的。如:`thrift://172.0.0.1:9083` 或 
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083`                               
                                               | 无      | 是    |
+| Catalog 类型 | 类型标识 (type) | 描述                          |
+| ---------- | ----------- | --------------------------- |
+| Hive       | hms         | 对接 Hive Metastore 的 Catalog |
+| Iceberg    | iceberg     | 对接 Iceberg 表格式              |
+| Paimon     | paimon      | 对接 Apache Paimon 表格式        |
 
-## 启用 Kerberos 认证相关参数
+## 通用参数总览
 
-```plaintext
-"hadoop.authentication.type" = "kerberos",
-"hive.metastore.service.principal" = "hive/[email protected]",
-"hadoop.kerberos.principal" = "doris/[email protected]",
-"hadoop.kerberos.keytab" = "etc/doris/conf/doris.keytab"
+以下参数为不同 Catalog 类型的通用参数。
+
+| 参数名称                               | 曾用名                               | 是否必须 | 默认值    | 简要描述 |
+| ---------------------------------- | --------------------------------- | ---- | ------ | ---- |
+| hive.metastore.uris                |                                   | 是    | 无      | Hive Metastore 的 URI 地址,支持指定多个 URI,使用逗号分隔。示例:'hive.metastore.uris' = 'thrift://127.0.0.1:9083' 或 'hive.metastore.uris' = 'thrift://127.0.0.1:9083,thrift://127.0.0.1:9084' |
+| hive.metastore.authentication.type | hadoop.security.authentication    | 否    | simple | Metastore 认证方式:支持 simple(默认)或 kerberos。3.0 及之前版本中,认证方式由 hadoop.security.authentication 属性决定;3.1 版本开始,可以单独指定 Hive Metastore 的认证方式。示例:'hive.metastore.authentication.type' = 'kerberos' |
+| hive.metastore.service.principal   | hive.metastore.kerberos.principal | 否    | 空      | Hive 服务端 principal,支持 _HOST 占位符。示例:'hive.metastore.service.principal' = 'hive/[email protected]' |
+| hive.metastore.client.principal    | hadoop.kerberos.principal         | 否    | 空      | Doris 连接到 Hive MetaStore 服务时使用的 Kerberos 主体 |
+| hive.metastore.client.keytab       | hadoop.kerberos.keytab            | 否    | 空      | Kerberos keytab 文件路径 |
+| hive.metastore.username            | hadoop.username                   | 否    | hadoop | Hive Metastore 用户名,非 Kerberos 模式下使用 |
+| hive.conf.resources                |                                   | 否    | 空      | hive-site.xml 配置文件路径,使用相对路径 |
+
+> 注:
+>
+> 3.1.0 版本之前,请使用曾用名。
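+
+例如,在 3.1.0 之前的版本中,可以按曾用名写出如下 Kerberos 配置(示意,其中主体与 keytab 路径均为假设值,具体参数以所用版本文档为准):
+
+```sql
+'hadoop.security.authentication' = 'kerberos',
+'hive.metastore.kerberos.principal' = 'hive/[email protected]',
+'hadoop.kerberos.principal' = 'doris/[email protected]',
+'hadoop.kerberos.keytab' = '/etc/doris/conf/doris.keytab'
+```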
+
+### 必填参数
+
+* `hive.metastore.uris`:必须指定 Hive Metastore 的 URI 地址
+
+### 可选参数
+
+* `hive.metastore.authentication.type`:认证方式,默认为 `simple`,可选 `kerberos`
+
+* `hive.metastore.service.principal`:Hive MetaStore 服务的 Kerberos 主体,当使用 
Kerberos 认证时必须指定。
+
+* `hive.metastore.client.principal`:Doris 连接到 Hive MetaStore 服务时使用的 Kerberos 
主体,当使用 Kerberos 认证时必须指定。
+
+* `hive.metastore.client.keytab`:Kerberos keytab 文件路径,当使用 Kerberos 认证时必须指定。
+
+* `hive.metastore.username`:连接 Hive MetaStore 服务的用户名,非 Kerberos 模式下使用,默认为 
`hadoop`。
+
+* `hive.conf.resources`:hive-site.xml 配置文件路径,当需要通过配置文件的方式读取连接 Hive Metastore 服务的配置时使用。
+
+### 认证方式
+
+#### Simple 认证
+
+* `simple`:非 Kerberos 模式,直接连接 Hive Metastore 服务。
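+
+例如,一个最小化的 simple 认证配置片段如下(示意,其中 `hive.metastore.username` 为可选的连接用户名):
+
+```sql
+'hive.metastore.authentication.type' = 'simple',
+'hive.metastore.username' = 'hadoop'
+```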
+
+#### Kerberos 认证
+
+使用 Kerberos 认证连接 Hive Metastore 服务,需要配置以下参数:
+
+* `hive.metastore.authentication.type`:设置为 `kerberos`
+
+* `hive.metastore.service.principal`:Hive MetaStore 服务的 Kerberos 主体
+
+* `hive.metastore.client.principal`:Doris 连接到 Hive MetaStore 服务时使用的 Kerberos 主体
+
+* `hive.metastore.client.keytab`:Kerberos keytab 文件路径
+
+```sql
+'hive.metastore.authentication.type' = 'kerberos',
+'hive.metastore.service.principal' = 'hive/[email protected]',
+'hive.metastore.client.principal' = 'hive/[email protected]',
+'hive.metastore.client.keytab' = '/etc/security/keytabs/hive.keytab'
 ```
 
-> 注意,当前版本中,hive 的 kerberos 认证参数和 hdfs 的 kerberos 认证参数共用。
+使用开启 Kerberos 认证的 Hive MetaStore 服务时,需要确保所有 FE 节点上都存在相同的 keytab 文件,运行 Doris 进程的用户具有该 keytab 文件的读权限,并且 krb5 配置文件配置正确。
+
+Kerberos 的详细配置可参考 Kerberos 认证相关文档。
+
+### 配置文件参数
+
+#### `hive.conf.resources`
+
+如需要通过配置文件的方式读取连接 Hive Metastore 服务的配置,可以配置 `hive.conf.resources` 参数来设置 conf 文件路径。
+
+> 注意:`hive.conf.resources` 参数仅支持相对路径,请勿使用绝对路径。默认目录为 `${DORIS_HOME}/plugins/hadoop_conf/`,可通过修改 fe.conf 中的 `hadoop_config_dir` 来指定其他目录。
+
+示例:`'hive.conf.resources' = 'hms-1/hive-site.xml'`
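+
+下面给出一个通过配置文件建 Catalog 的完整示意(Catalog 名称为虚构,并假设 `hms-1/hive-site.xml` 中已包含 `hive.metastore.uris` 等连接参数):
+
+```sql
+CREATE CATALOG hive_conf_file_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.conf.resources' = 'hms-1/hive-site.xml'
+);
+```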
+
+## Catalog 类型特有参数
+
+以下参数是除通用参数外,各个 Catalog 特有的参数说明。
+
+### Hive Catalog
+
+| 参数名称                | 曾用名 | 是否必须 | 默认值   | 简要描述                                         |
+| ------------------- | --- | ---- | ----- | -------------------------------------------- |
+| type                |     | 是    | 无     | Catalog 类型,Hive Catalog 固定为 hms              |
+| hive.metastore.type |     | 否    | 'hms' | Metadata Catalog 类型,使用 Hive Metastore 时必须为 hms |
+
+#### 示例
+
+1. 创建一个使用无认证的 Hive Metastore 作为元数据服务的 Hive Catalog,存储使用 S3 存储服务。
+
+   ```sql
+   CREATE CATALOG hive_hms_s3_test_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. 创建一个使用开启了 Kerberos 认证的 Hive Metastore 作为元数据服务的 Hive Catalog,存储使用 OSS 存储服务。
+
+   ```sql
+   CREATE CATALOG hive_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Iceberg Catalog
+
+| 参数名称                 | 曾用名 | 是否必须 | 默认值 | 简要描述                                         |
+| -------------------- | --- | ---- | --- | -------------------------------------------- |
+| type                 |     | 是    | 无   | Catalog 类型,Iceberg 固定为 iceberg               |
+| iceberg.catalog.type |     | 否    | 无   | Metadata Catalog 类型,使用 Hive Metastore 时必须为 hms |
+| warehouse            |     | 否    | 无   | Iceberg 仓库路径                                 |
+
+#### 示例
+
+1. 创建一个使用 Hive Metastore 作为元数据服务的 Iceberg Catalog,存储使用 S3 存储服务。
+
+   ```sql
+   CREATE CATALOG iceberg_hms_s3_test_catalog PROPERTIES (
+       'type' = 'iceberg',
+       'iceberg.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       'warehouse' = 's3://doris/iceberg_warehouse/',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. 创建一个使用开启了 Kerberos 认证的 Hive Metastore 作为元数据服务的 Iceberg Catalog,并且处于多 Kerberos 环境下。存储使用 OSS 存储服务。
+
+   ```sql
+   CREATE CATALOG IF NOT EXISTS iceberg_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'iceberg',
+       'iceberg.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'warehouse' = 'oss://doris/iceberg_warehouse/',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Paimon Catalog
+
+| 参数名称                | 曾用名 | 是否必须 | 默认值        | 简要描述                                        |
+| ------------------- | --- | ---- | ---------- | ------------------------------------------- |
+| type                |     | 是    | 无          | Catalog 类型,Paimon 固定为 paimon                |
+| paimon.catalog.type |     | 否    | filesystem | 使用 Hive Metastore 时必须为 hms;默认值为 filesystem,即使用文件系统存储元数据 |
+| warehouse           |     | 是    | 无          | Paimon 仓库路径                                 |
+
+#### 示例
+
+1. 创建一个使用 Hive Metastore 作为元数据服务的 Paimon Catalog,存储使用 S3 存储服务。
+
+   ```sql
+   CREATE CATALOG IF NOT EXISTS paimon_hms_s3_test_catalog PROPERTIES (
+       'type' = 'paimon',
+       'paimon.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       'warehouse' = 's3://doris/paimon_warehouse/',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. 创建一个使用开启了 Kerberos 认证的 Hive Metastore 作为元数据服务的 Paimon Catalog,并且处于多 Kerberos 环境下。存储使用 OSS 存储服务。
+
+   ```sql
+   CREATE CATALOG IF NOT EXISTS paimon_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'paimon',
+       'paimon.catalog.type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'warehouse' = 'oss://doris/paimon_warehouse/',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+## 常见问题 FAQ
+
+- Q1: hive-site.xml 是必须的吗?
+
+    不是,仅当需要从中读取连接配置时使用。
+
+- Q2: keytab 文件是否必须每个节点都存在?
+
+    是的,所有 FE 节点必须可访问指定路径。
+
+- Q3: 如使用回写功能,即在 Doris 中创建 Hive/Iceberg 库/表,需要注意什么?
+
+    由于创建表涉及存储端的元数据操作,即需要访问存储系统,因此 Hive MetaStore 服务 Server 端需要配置对应存储参数,如 S3、OSS 等存储服务的访问参数。如使用对象存储作为底层存储系统,还需要确保写入的 bucket 与配置的 Region 一致。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/baidu-bos.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/baidu-bos.md
index ffa67c45ec0..908c1d33cc6 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/baidu-bos.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/baidu-bos.md
@@ -5,4 +5,4 @@
 }
 ---
 
-TODO
+文档更新中。
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/gcs.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/gcs.md
index a6c334e2e63..c3312250071 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/gcs.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/gcs.md
@@ -5,5 +5,5 @@
 }
 ---
 
-TODO
+文档更新中。
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/hdfs.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/hdfs.md
index b8d7b7ab6e5..780903fa97b 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/hdfs.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/lakehouse/storages/hdfs.md
@@ -41,7 +41,7 @@ Simple 认证适用于未开启 Kerberos 的 HDFS 集群。
 
 使用 Simple 认证方式,可以设置以下参数,或直接使用默认值:
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -51,14 +51,14 @@ Simple 认证模式下,可以使用 `hadoop.username` 参数来指定用户名
 
 使用 `lakers` 用户名访问 HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple",
 "hadoop.username" = "lakers"
 ```
 
 使用默认系统用户访问 HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -68,7 +68,7 @@ Kerberos 认证适用于已开启 Kerberos 的 HDFS 集群。
 
 使用 Kerberos 认证方式,需要设置以下参数:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "<your_principal>",
 "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
@@ -84,12 +84,34 @@ Doris 将以该 `hdfs.authentication.kerberos.principal` 属性指定的主体
 
 示例:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
 "hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab",
 ```
 
+## 高可用配置(HDFS HA)
+
+如 HDFS 开启了 HA 模式,需要配置 `dfs.nameservices` 相关参数:
+
+```sql
+'dfs.nameservices' = '<your-nameservice>',
+'dfs.ha.namenodes.<your-nameservice>' = '<nn1>,<nn2>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn1>' = '<nn1_host:port>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn2>' = '<nn2_host:port>',
+'dfs.client.failover.proxy.provider.<your-nameservice>' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
+示例:
+
+```sql
+'dfs.nameservices' = 'nameservice1',
+'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+'dfs.client.failover.proxy.provider.nameservice1' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
 ## 配置文件
 
 > 该功能自 3.1.0 版本支持
@@ -103,9 +125,9 @@ Doris 支持通过 `hadoop.config.resources` 参数来指定 HDFS 相关配置
 **示例:**
 
 ```sql
-多个配置文件
+-- 多个配置文件
 
'hadoop.config.resources'='hdfs-cluster-1/core-site.xml,hdfs-cluster-1/hdfs-site.xml'
-单个配置文件
+-- 单个配置文件
 'hadoop.config.resources'='hdfs-cluster-2/hdfs-site.xml'
 ```
 
@@ -121,7 +143,7 @@ HDFS Client 提供了 Hedged Read 功能。该功能可以在一个读请求超
 
 可以通过以下方式开启这个功能:
 
-```plain
+```sql
 "dfs.client.hedged.read.threadpool.size" = "128",
 "dfs.client.hedged.read.threshold.millis" = "500"
 ```
diff --git a/versioned_docs/version-2.1/lakehouse/metastores/hive-metastore.md 
b/versioned_docs/version-2.1/lakehouse/metastores/hive-metastore.md
index b3b609b944d..5abefd52081 100644
--- a/versioned_docs/version-2.1/lakehouse/metastores/hive-metastore.md
+++ b/versioned_docs/version-2.1/lakehouse/metastores/hive-metastore.md
@@ -5,21 +5,238 @@
 }
 ---
 
-This document describes the supported parameters when connecting to and 
accessing Hive Metastore through the `CREATE CATALOG` statement.
+This document describes all supported parameters when connecting to and 
accessing Hive MetaStore services through the `CREATE CATALOG` statement.
 
-## Parameter Overview
+## Supported Catalog Types
 
-| Property Name                        | Former Name | Description             
                                                                                
                                                                                
                                                    | Default Value | Required |
-|--------------------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|----------|
-| `hive.metastore.uris`                |             | The URI address of Hive 
Metastore. Supports specifying multiple URIs separated by commas. Uses the 
first URI by default, and tries others when the first URI is unavailable. For 
example: `thrift://172.0.0.1:9083` or 
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083` | None          | Yes      |
+| Catalog Type | Type Identifier (type) | Description                               |
+| ------------ | ---------------------- | ----------------------------------------- |
+| Hive         | hms                    | Catalog for connecting to Hive Metastore  |
+| Iceberg      | iceberg                | Catalog for Iceberg table format          |
+| Paimon       | paimon                 | Catalog for Apache Paimon table format    |
 
-## Kerberos Authentication Related Parameters
+## Common Parameters Overview
 
-```plaintext
-"hadoop.authentication.type" = "kerberos",
-"hive.metastore.service.principal" = "hive/[email protected]",
-"hadoop.kerberos.principal" = "doris/[email protected]",
-"hadoop.kerberos.keytab" = "etc/doris/conf/doris.keytab"
+The following parameters are common to different Catalog types.
+
+| Parameter Name                     | Former Name                       | Required | Default | Description |
+| ---------------------------------- | --------------------------------- | -------- | ------- | ----------- |
+| hive.metastore.uris                |                                   | Yes      | None    | URI address of Hive Metastore, supports multiple URIs separated by commas. Example: 'hive.metastore.uris' = 'thrift://127.0.0.1:9083' or 'hive.metastore.uris' = 'thrift://127.0.0.1:9083,thrift://127.0.0.1:9084' |
+| hive.metastore.authentication.type | hadoop.security.authentication    | No       | simple  | Metastore authentication method: supports simple (default) or kerberos. In versions 3.0 and earlier, the authentication method was determined by the hadoop.security.authentication property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately. Example: 'hive.metastore.authentication.type' = 'kerberos' |
+| hive.metastore.service.principal   | hive.metastore.kerberos.principal | No       | Empty   | Hive server principal, supports the _HOST placeholder. Example: 'hive.metastore.service.principal' = 'hive/[email protected]' |
+| hive.metastore.client.principal    | hadoop.kerberos.principal         | No       | Empty   | Kerberos principal used by Doris to connect to the Hive MetaStore service |
+| hive.metastore.client.keytab       | hadoop.kerberos.keytab            | No       | Empty   | Kerberos keytab file path |
+| hive.metastore.username            | hadoop.username                   | No       | hadoop  | Hive Metastore username, used in non-Kerberos mode |
+| hive.conf.resources                |                                   | No       | Empty   | hive-site.xml configuration file path, using a relative path |
+
+> Note:
+>
+> For versions before 3.1.0, please use the former names.
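+
+For example, on versions before 3.1.0, the same Kerberos settings would be written with the former names (a sketch; the principals and keytab path are placeholder values, and the exact parameters should be checked against the docs for your version):
+
+```sql
+'hadoop.security.authentication' = 'kerberos',
+'hive.metastore.kerberos.principal' = 'hive/[email protected]',
+'hadoop.kerberos.principal' = 'doris/[email protected]',
+'hadoop.kerberos.keytab' = '/etc/doris/conf/doris.keytab'
+```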
+
+### Required Parameters
+
+* `hive.metastore.uris`: Must specify the URI address of Hive Metastore
+
+### Optional Parameters
+
+* `hive.metastore.authentication.type`: Authentication method, default is `simple`; `kerberos` is also supported
+
+* `hive.metastore.service.principal`: Kerberos principal of Hive MetaStore 
service, must be specified when using Kerberos authentication.
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to 
connect to Hive MetaStore service, must be specified when using Kerberos 
authentication.
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path, must be specified 
when using Kerberos authentication.
+
+* `hive.metastore.username`: Username for connecting to Hive MetaStore 
service, used in non-Kerberos mode, default is `hadoop`.
+
+* `hive.conf.resources`: hive-site.xml configuration file path, used when 
configuration for connecting to Hive Metastore service needs to be read from 
configuration files.
+
+### Authentication Methods
+
+#### Simple Authentication
+
+* `simple`: Non-Kerberos mode, directly connects to Hive Metastore service.
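+
+For example, a minimal simple-authentication snippet might look like this (a sketch; the optional `hive.metastore.username` sets the connecting user):
+
+```sql
+'hive.metastore.authentication.type' = 'simple',
+'hive.metastore.username' = 'hadoop'
+```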
+
+#### Kerberos Authentication
+
+To use Kerberos authentication to connect to Hive Metastore service, configure 
the following parameters:
+
+* `hive.metastore.authentication.type`: Set to `kerberos`
+
+* `hive.metastore.service.principal`: Kerberos principal of Hive MetaStore 
service
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to 
connect to Hive MetaStore service
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path
+
+```sql
+'hive.metastore.authentication.type' = 'kerberos',
+'hive.metastore.service.principal' = 'hive/[email protected]',
+'hive.metastore.client.principal' = 'hive/[email protected]',
+'hive.metastore.client.keytab' = '/etc/security/keytabs/hive.keytab'
 ```
 
-> Note: In the current version, Hive's Kerberos authentication parameters are 
shared with HDFS's
+When using Hive MetaStore service with Kerberos authentication enabled, ensure 
that the same keytab file exists on all FE nodes, the user running the Doris 
process has read permission to the keytab file, and the krb5 configuration file 
is properly configured.
+
+For detailed Kerberos configuration, refer to Kerberos Authentication.
+
+### Configuration File Parameters
+
+#### `hive.conf.resources`
+
+If you need to read configuration for connecting to Hive Metastore service 
through configuration files, you can configure the `hive.conf.resources` 
parameter to set the conf file path.
+
+> Note: The `hive.conf.resources` parameter only supports relative paths, do 
not use absolute paths. The default path is under the 
`${DORIS_HOME}/plugins/hadoop_conf/` directory. You can specify other 
directories by modifying hadoop_config_dir in fe.conf.
+
+Example: `'hive.conf.resources' = 'hms-1/hive-site.xml'`
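+
+As a minimal sketch, a catalog whose connection settings are read from a config file might look like the following (the catalog name is hypothetical, and it assumes `hms-1/hive-site.xml` already contains `hive.metastore.uris` and the other connection settings):
+
+```sql
+CREATE CATALOG hive_conf_file_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.conf.resources' = 'hms-1/hive-site.xml'
+);
+```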
+
+## Catalog Type-Specific Parameters
+
+The following parameters are specific to each Catalog type, in addition to the 
common parameters.
+
+### Hive Catalog
+
+| Parameter Name      | Former Name | Required | Default | Description                                                  |
+| ------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                |             | Yes      | None    | Catalog type, fixed as hms for Hive Catalog                   |
+| hive.metastore.type |             | No       | 'hms'   | Metadata catalog type; must be hms when using Hive Metastore  |
+
+#### Examples
+
+1. Create a Hive Catalog using unauthenticated Hive Metastore as metadata 
service, with S3 storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_s3_test_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. Create a Hive Catalog using Hive Metastore with Kerberos authentication enabled as metadata service, with OSS storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Iceberg Catalog
+
+| Parameter Name       | Former Name | Required | Default | Description                                                  |
+| -------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                 |             | Yes      | None    | Catalog type, fixed as iceberg for Iceberg                    |
+| iceberg.catalog.type |             | No       | None    | Metadata catalog type; must be hms when using Hive Metastore  |
+| warehouse            |             | No       | None    | Iceberg warehouse path                                        |
+
+#### Examples
+
+1. Create an Iceberg Catalog using Hive Metastore as metadata service, with S3 
storage service.
+
+    ```sql
+     CREATE CATALOG iceberg_hms_s3_test_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/iceberg_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 'ap-east-1'
+    );
+    ```
+
+2. Create an Iceberg Catalog using Hive Metastore with Kerberos authentication enabled as metadata service in a multi-Kerberos environment, with OSS storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS iceberg_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/iceberg_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
+
+### Paimon Catalog
+
+| Parameter Name      | Former Name | Required | Default    | Description |
+| ------------------- | ----------- | -------- | ---------- | ----------- |
+| type                |             | Yes      | None       | Catalog type, fixed as paimon for Paimon |
+| paimon.catalog.type |             | No       | filesystem | Must be hms when using Hive Metastore; default is filesystem, which stores metadata in the file system |
+| warehouse           |             | Yes      | None       | Paimon warehouse path |
+
+#### Examples
+
+1. Create a Paimon Catalog using Hive Metastore as metadata service, with S3 
storage service.
+
+    ```sql
+     CREATE CATALOG IF NOT EXISTS paimon_hms_s3_test_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/paimon_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 'ap-east-1'
+    );
+    ```
+
+2. Create a Paimon Catalog using Hive Metastore with Kerberos authentication enabled as metadata service in a multi-Kerberos environment, with OSS storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/paimon_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
+
+## Frequently Asked Questions (FAQ)
+
+- Q1: Is hive-site.xml mandatory?
+
+    No, it's only used when configuration needs to be read from it.
+
+- Q2: Must the keytab file exist on every node?
+
+    Yes, all FE nodes must be able to access the specified path.
+
+- Q3: What should be noted when using write-back functionality, i.e., creating 
Hive/Iceberg databases/tables in Doris?
+
+    Since creating tables involves metadata operations on the storage side, 
i.e., accessing the storage system, the Hive MetaStore service server side 
needs to configure corresponding storage parameters, such as access parameters 
for S3, OSS and other storage services. When using object storage as the 
underlying storage system, ensure that the bucket being written to matches the 
configured Region.
\ No newline at end of file
diff --git a/versioned_docs/version-2.1/lakehouse/storages/azure-blob.md 
b/versioned_docs/version-2.1/lakehouse/storages/azure-blob.md
index 9a28c796852..9acc6c6d6f3 100644
--- a/versioned_docs/version-2.1/lakehouse/storages/azure-blob.md
+++ b/versioned_docs/version-2.1/lakehouse/storages/azure-blob.md
@@ -5,4 +5,5 @@
 }
 ---
 
-TODO
+Azure Blob will be supported later.
+
diff --git a/versioned_docs/version-2.1/lakehouse/storages/baidu-bos.md 
b/versioned_docs/version-2.1/lakehouse/storages/baidu-bos.md
index 512f642a1f9..2465f67dc63 100644
--- a/versioned_docs/version-2.1/lakehouse/storages/baidu-bos.md
+++ b/versioned_docs/version-2.1/lakehouse/storages/baidu-bos.md
@@ -5,5 +5,5 @@
 }
 ---
 
-Baidu Cloud BOS will be supported later.
+The document is under development.
 
diff --git a/versioned_docs/version-2.1/lakehouse/storages/gcs.md 
b/versioned_docs/version-2.1/lakehouse/storages/gcs.md
index a6c334e2e63..e99471b68dc 100644
--- a/versioned_docs/version-2.1/lakehouse/storages/gcs.md
+++ b/versioned_docs/version-2.1/lakehouse/storages/gcs.md
@@ -5,5 +5,5 @@
 }
 ---
 
-TODO
+The document is under development.
 
diff --git a/versioned_docs/version-2.1/lakehouse/storages/hdfs.md 
b/versioned_docs/version-2.1/lakehouse/storages/hdfs.md
index 201fe7bf00e..6e214cc85ba 100644
--- a/versioned_docs/version-2.1/lakehouse/storages/hdfs.md
+++ b/versioned_docs/version-2.1/lakehouse/storages/hdfs.md
@@ -41,7 +41,7 @@ Simple authentication is suitable for HDFS clusters that have 
not enabled Kerber
 
 Using Simple authentication, you can set the following parameters or use the 
default values directly:
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -51,14 +51,14 @@ Examples:
 
 Using `lakers` username to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple",
 "hadoop.username" = "lakers"
 ```
 
 Using default system user to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -68,7 +68,7 @@ Kerberos authentication is suitable for HDFS clusters with 
Kerberos enabled.
 
 Using Kerberos authentication, you need to set the following parameters:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "<your_principal>",
 "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
@@ -84,12 +84,34 @@ Doris will access HDFS with the identity specified by the 
`hdfs.authentication.k
 
 Example:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
 "hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab",
 ```
 
+## HDFS HA Configuration
+
+If HDFS HA mode is enabled, you need to configure the `dfs.nameservices` related parameters:
+
+```sql
+'dfs.nameservices' = '<your-nameservice>',
+'dfs.ha.namenodes.<your-nameservice>' = '<nn1>,<nn2>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn1>' = '<nn1_host:port>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn2>' = '<nn2_host:port>',
+'dfs.client.failover.proxy.provider.<your-nameservice>' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
+Example:
+
+```sql
+'dfs.nameservices' = 'nameservice1',
+'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+'dfs.client.failover.proxy.provider.nameservice1' = 
'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
 ## Configuration Files
 
 > This feature is supported since version 3.1.0
@@ -103,9 +125,9 @@ If the configuration files contain the above parameters 
mentioned in this docume
 **Examples:**
 
 ```sql
-Multiple configuration files
+-- Multiple configuration files
 
'hadoop.config.resources'='hdfs-cluster-1/core-site.xml,hdfs-cluster-1/hdfs-site.xml'
-Single configuration file
+-- Single configuration file
 'hadoop.config.resources'='hdfs-cluster-2/hdfs-site.xml'
 ```
 
@@ -121,7 +143,7 @@ Note: This feature may increase the load on the HDFS 
cluster, please use it judi
 
 You can enable this feature in the following way:
 
-```plain
+```sql
 "dfs.client.hedged.read.threadpool.size" = "128",
 "dfs.client.hedged.read.threshold.millis" = "500"
 ```
diff --git a/versioned_docs/version-3.0/lakehouse/metastores/hive-metastore.md 
b/versioned_docs/version-3.0/lakehouse/metastores/hive-metastore.md
index b3b609b944d..5abefd52081 100644
--- a/versioned_docs/version-3.0/lakehouse/metastores/hive-metastore.md
+++ b/versioned_docs/version-3.0/lakehouse/metastores/hive-metastore.md
@@ -5,21 +5,238 @@
 }
 ---
 
-This document describes the supported parameters when connecting to and 
accessing Hive Metastore through the `CREATE CATALOG` statement.
+This document describes all supported parameters when connecting to and 
accessing Hive MetaStore services through the `CREATE CATALOG` statement.
 
-## Parameter Overview
+## Supported Catalog Types
 
-| Property Name                        | Former Name | Description             
                                                                                
                                                                                
                                                    | Default Value | Required |
-|--------------------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|----------|
-| `hive.metastore.uris`                |             | The URI address of Hive 
Metastore. Supports specifying multiple URIs separated by commas. Uses the 
first URI by default, and tries others when the first URI is unavailable. For 
example: `thrift://172.0.0.1:9083` or 
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083` | None          | Yes      |
+| Catalog Type | Type Identifier (type) | Description                               |
+| ------------ | ---------------------- | ----------------------------------------- |
+| Hive         | hms                    | Catalog for connecting to Hive Metastore  |
+| Iceberg      | iceberg                | Catalog for Iceberg table format          |
+| Paimon       | paimon                 | Catalog for Apache Paimon table format    |
 
-## Kerberos Authentication Related Parameters
+## Common Parameters Overview
 
-```plaintext
-"hadoop.authentication.type" = "kerberos",
-"hive.metastore.service.principal" = "hive/[email protected]",
-"hadoop.kerberos.principal" = "doris/[email protected]",
-"hadoop.kerberos.keytab" = "etc/doris/conf/doris.keytab"
+The following parameters are common to different Catalog types.
+
+| Parameter Name                     | Former Name                       | Required | Default | Description |
+| ---------------------------------- | --------------------------------- | -------- | ------- | ----------- |
+| hive.metastore.uris                |                                   | Yes      | None    | URI address of Hive Metastore, supports multiple URIs separated by commas. Example: 'hive.metastore.uris' = 'thrift://127.0.0.1:9083' or 'hive.metastore.uris' = 'thrift://127.0.0.1:9083,thrift://127.0.0.1:9084' |
+| hive.metastore.authentication.type | hadoop.security.authentication    | No       | simple  | Metastore authentication method: supports simple (default) or kerberos. In versions 3.0 and earlier, the authentication method was determined by the hadoop.security.authentication property. Starting from version 3.1, the Hive Metastore authentication method can be specified separately. Example: 'hive.metastore.authentication.type' = 'kerberos' |
+| hive.metastore.service.principal   | hive.metastore.kerberos.principal | No       | Empty   | Hive server principal, supports the _HOST placeholder. Example: 'hive.metastore.service.principal' = 'hive/[email protected]' |
+| hive.metastore.client.principal    | hadoop.kerberos.principal         | No       | Empty   | Kerberos principal used by Doris to connect to the Hive MetaStore service |
+| hive.metastore.client.keytab       | hadoop.kerberos.keytab            | No       | Empty   | Kerberos keytab file path |
+| hive.metastore.username            | hadoop.username                   | No       | hadoop  | Hive Metastore username, used in non-Kerberos mode |
+| hive.conf.resources                |                                   | No       | Empty   | hive-site.xml configuration file path, using a relative path |
+
+> Note:
+>
+> For versions before 3.1.0, please use the former names.
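+
+For example, on versions before 3.1.0, the same Kerberos settings would be written with the former names (a sketch; the principals and keytab path are placeholder values, and the exact parameters should be checked against the docs for your version):
+
+```sql
+'hadoop.security.authentication' = 'kerberos',
+'hive.metastore.kerberos.principal' = 'hive/[email protected]',
+'hadoop.kerberos.principal' = 'doris/[email protected]',
+'hadoop.kerberos.keytab' = '/etc/doris/conf/doris.keytab'
+```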
+
+### Required Parameters
+
+* `hive.metastore.uris`: Must specify the URI address of Hive Metastore
+
+### Optional Parameters
+
+* `hive.metastore.authentication.type`: Authentication method, default is `simple`; `kerberos` is also supported
+
+* `hive.metastore.service.principal`: Kerberos principal of Hive MetaStore 
service, must be specified when using Kerberos authentication.
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to 
connect to Hive MetaStore service, must be specified when using Kerberos 
authentication.
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path, must be specified 
when using Kerberos authentication.
+
+* `hive.metastore.username`: Username for connecting to Hive MetaStore 
service, used in non-Kerberos mode, default is `hadoop`.
+
+* `hive.conf.resources`: hive-site.xml configuration file path, used when 
configuration for connecting to Hive Metastore service needs to be read from 
configuration files.
+
+### Authentication Methods
+
+#### Simple Authentication
+
+* `simple`: Non-Kerberos mode, directly connects to Hive Metastore service.
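+
+For example, a minimal simple-authentication snippet might look like this (a sketch; the optional `hive.metastore.username` sets the connecting user):
+
+```sql
+'hive.metastore.authentication.type' = 'simple',
+'hive.metastore.username' = 'hadoop'
+```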
+
+#### Kerberos Authentication
+
+To use Kerberos authentication to connect to Hive Metastore service, configure 
the following parameters:
+
+* `hive.metastore.authentication.type`: Set to `kerberos`
+
+* `hive.metastore.service.principal`: Kerberos principal of Hive MetaStore 
service
+
+* `hive.metastore.client.principal`: Kerberos principal used by Doris to 
connect to Hive MetaStore service
+
+* `hive.metastore.client.keytab`: Kerberos keytab file path
+
+```sql
+'hive.metastore.authentication.type' = 'kerberos',
+'hive.metastore.service.principal' = 'hive/[email protected]',
+'hive.metastore.client.principal' = 'hive/[email protected]',
+'hive.metastore.client.keytab' = '/etc/security/keytabs/hive.keytab'
 ```
 
-> Note: In the current version, Hive's Kerberos authentication parameters are 
shared with HDFS's
+When using Hive MetaStore service with Kerberos authentication enabled, ensure 
that the same keytab file exists on all FE nodes, the user running the Doris 
process has read permission to the keytab file, and the krb5 configuration file 
is properly configured.
+
+For detailed Kerberos configuration, refer to Kerberos Authentication.
+
+### Configuration File Parameters
+
+#### `hive.conf.resources`
+
+If you need to read configuration for connecting to Hive Metastore service 
through configuration files, you can configure the `hive.conf.resources` 
parameter to set the conf file path.
+
+> Note: The `hive.conf.resources` parameter only supports relative paths, do 
not use absolute paths. The default path is under the 
`${DORIS_HOME}/plugins/hadoop_conf/` directory. You can specify other 
directories by modifying hadoop_config_dir in fe.conf.
+
+Example: `'hive.conf.resources' = 'hms-1/hive-site.xml'`
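+
+As a minimal sketch, a catalog whose connection settings are read from a config file might look like the following (the catalog name is hypothetical, and it assumes `hms-1/hive-site.xml` already contains `hive.metastore.uris` and the other connection settings):
+
+```sql
+CREATE CATALOG hive_conf_file_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.conf.resources' = 'hms-1/hive-site.xml'
+);
+```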
+
+## Catalog Type-Specific Parameters
+
+The following parameters are specific to each Catalog type, in addition to the 
common parameters.
+
+### Hive Catalog
+
+| Parameter Name      | Former Name | Required | Default | Description                                                  |
+| ------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                |             | Yes      | None    | Catalog type, fixed as hms for Hive Catalog                   |
+| hive.metastore.type |             | No       | 'hms'   | Metadata catalog type; must be hms when using Hive Metastore  |
+
+#### Examples
+
+1. Create a Hive Catalog using unauthenticated Hive Metastore as metadata 
service, with S3 storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_s3_test_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+       's3.access_key' = 'S3_ACCESS_KEY',
+       's3.secret_key' = 'S3_SECRET_KEY',
+       's3.region' = 'ap-east-1'
+   );
+   ```
+
+2. Create a Hive Catalog using Hive Metastore with Kerberos authentication enabled as metadata service, with OSS storage service.
+
+   ```sql
+   CREATE CATALOG hive_hms_on_oss_kerberos_new_catalog PROPERTIES (
+       'type' = 'hms',
+       'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+       'hive.metastore.client.principal' = 'hive/[email protected]',
+       'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+       'hive.metastore.service.principal' = 'hive/[email protected]',
+       'hive.metastore.authentication.type' = 'kerberos',
+       'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                          RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                          DEFAULT',
+       'oss.access_key' = 'OSS_ACCESS_KEY',
+       'oss.secret_key' = 'OSS_SECRET_KEY',
+       'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+   );
+   ```
+
+### Iceberg Catalog
+
+| Parameter Name       | Former Name | Required | Default | Description                                                  |
+| -------------------- | ----------- | -------- | ------- | ------------------------------------------------------------ |
+| type                 |             | Yes      | None    | Catalog type, fixed as iceberg for Iceberg                    |
+| iceberg.catalog.type |             | No       | None    | Metadata catalog type; must be hms when using Hive Metastore  |
+| warehouse            |             | No       | None    | Iceberg warehouse path                                        |
+
+#### Examples
+
+1. Create an Iceberg Catalog using Hive Metastore as metadata service, with S3 
storage service.
+
+    ```sql
+    CREATE CATALOG iceberg_hms_s3_test_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/iceberg_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 'ap-east-1'
+    );
+    ```
+
+2. Create an Iceberg Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service in a multi-Kerberos environment, and OSS as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS iceberg_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'iceberg',
+        'iceberg.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/iceberg_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
+
+### Paimon Catalog
+
+| Parameter Name      | Former Name | Required | Default    | Description                                                                                           |
+| ------------------- | ----------- | -------- | ---------- | ----------------------------------------------------------------------------------------------------- |
+| type                |             | Yes      | None       | Catalog type; fixed as `paimon` for Paimon Catalog                                                    |
+| paimon.catalog.type |             | No       | filesystem | Metastore type; defaults to `filesystem` (metadata stored in the file system); must be `hms` when using Hive Metastore |
+| warehouse           |             | Yes      | None       | Paimon warehouse path                                                                                 |
+
+#### Examples
+
+1. Create a Paimon Catalog that uses Hive Metastore as the metadata service and S3 as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_s3_test_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9383',
+        'warehouse' = 's3://doris/paimon_warehouse/',
+        's3.access_key' = 'S3_ACCESS_KEY',
+        's3.secret_key' = 'S3_SECRET_KEY',
+        's3.region' = 'ap-east-1'
+    );
+    ```
+
+2. Create a Paimon Catalog that uses a Kerberos-enabled Hive Metastore as the metadata service in a multi-Kerberos environment, and OSS as the storage service.
+
+    ```sql
+    CREATE CATALOG IF NOT EXISTS paimon_hms_on_oss_kerberos_new_catalog PROPERTIES (
+        'type' = 'paimon',
+        'paimon.catalog.type' = 'hms',
+        'hive.metastore.uris' = 'thrift://127.0.0.1:9583',
+        'warehouse' = 'oss://doris/paimon_warehouse/',
+        'hive.metastore.client.principal' = 'hive/[email protected]',
+        'hive.metastore.client.keytab' = '/mnt/keytabs/keytabs/hive-presto-master.keytab',
+        'hive.metastore.service.principal' = 'hive/[email protected]',
+        'hive.metastore.authentication.type' = 'kerberos',
+        'hadoop.security.auth_to_local' = 'RULE:[2:\$1@\$0](.*@LABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERLABS.TERADATA.COM)s/@.*//
+                           RULE:[2:\$1@\$0](.*@OTHERREALM.COM)s/@.*//
+                           DEFAULT',
+        'oss.access_key' = 'OSS_ACCESS_KEY',
+        'oss.secret_key' = 'OSS_SECRET_KEY',
+        'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com'
+    );
+    ```
+
+## Frequently Asked Questions (FAQ)
+
+- Q1: Is hive-site.xml mandatory?
+
+    No. It is needed only when configuration must be read from it.
+
+- Q2: Must the keytab file exist on every node?
+
+    Yes, all FE nodes must be able to access the specified path.
+
+- Q3: What should be noted when using the write-back functionality, i.e., creating Hive/Iceberg databases or tables from Doris?
+
+    Creating tables involves metadata operations on the storage side, which means the Hive Metastore server itself must access the storage system. Therefore, the Hive Metastore server must be configured with the corresponding storage parameters, such as access credentials for S3, OSS, and other storage services. When using object storage as the underlying storage system, also ensure that the bucket being written to matches the configured region. See the sketch below.
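+
+    As an illustration, a minimal write-back sketch against the Hive Catalog created in the examples above (the database, table, and `file_format` value are illustrative, and it is assumed the Hive Metastore server can reach the target storage):
+
+    ```sql
+    SWITCH hive_hms_s3_test_catalog;
+    -- Create a database and a table in the Hive Metastore through Doris.
+    CREATE DATABASE IF NOT EXISTS write_back_db;
+    CREATE TABLE write_back_db.sales_demo (
+        id INT,
+        amount DOUBLE
+    ) PROPERTIES ('file_format' = 'parquet');
+    ```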
\ No newline at end of file
diff --git a/versioned_docs/version-3.0/lakehouse/storages/azure-blob.md 
b/versioned_docs/version-3.0/lakehouse/storages/azure-blob.md
index 9a28c796852..9acc6c6d6f3 100644
--- a/versioned_docs/version-3.0/lakehouse/storages/azure-blob.md
+++ b/versioned_docs/version-3.0/lakehouse/storages/azure-blob.md
@@ -5,4 +5,5 @@
 }
 ---
 
-TODO
+Azure Blob will be supported later.
+
diff --git a/versioned_docs/version-3.0/lakehouse/storages/baidu-bos.md 
b/versioned_docs/version-3.0/lakehouse/storages/baidu-bos.md
index 512f642a1f9..2465f67dc63 100644
--- a/versioned_docs/version-3.0/lakehouse/storages/baidu-bos.md
+++ b/versioned_docs/version-3.0/lakehouse/storages/baidu-bos.md
@@ -5,5 +5,5 @@
 }
 ---
 
-Baidu Cloud BOS will be supported later.
+The document is under development.
 
diff --git a/versioned_docs/version-3.0/lakehouse/storages/gcs.md 
b/versioned_docs/version-3.0/lakehouse/storages/gcs.md
index a6c334e2e63..e99471b68dc 100644
--- a/versioned_docs/version-3.0/lakehouse/storages/gcs.md
+++ b/versioned_docs/version-3.0/lakehouse/storages/gcs.md
@@ -5,5 +5,5 @@
 }
 ---
 
-TODO
+The document is under development.
 
diff --git a/versioned_docs/version-3.0/lakehouse/storages/hdfs.md 
b/versioned_docs/version-3.0/lakehouse/storages/hdfs.md
index 201fe7bf00e..6e214cc85ba 100644
--- a/versioned_docs/version-3.0/lakehouse/storages/hdfs.md
+++ b/versioned_docs/version-3.0/lakehouse/storages/hdfs.md
@@ -41,7 +41,7 @@ Simple authentication is suitable for HDFS clusters that have 
not enabled Kerber
 
 Using Simple authentication, you can set the following parameters or use the 
default values directly:
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -51,14 +51,14 @@ Examples:
 
 Using `lakers` username to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple",
 "hadoop.username" = "lakers"
 ```
 
 Using default system user to access HDFS
 
-```plain
+```sql
 "hdfs.authentication.type" = "simple"
 ```
 
@@ -68,7 +68,7 @@ Kerberos authentication is suitable for HDFS clusters with 
Kerberos enabled.
 
 Using Kerberos authentication, you need to set the following parameters:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "<your_principal>",
 "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
@@ -84,12 +84,34 @@ Doris will access HDFS with the identity specified by the 
`hdfs.authentication.k
 
 Example:
 
-```plain
+```sql
 "hdfs.authentication.type" = "kerberos",
 "hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
 "hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab",
 ```
 
+## HDFS HA Configuration
+
+If HDFS HA mode is enabled, you need to configure the `dfs.nameservices` related parameters:
+
+```sql
+'dfs.nameservices' = '<your-nameservice>',
+'dfs.ha.namenodes.<your-nameservice>' = '<nn1>,<nn2>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn1>' = '<nn1_host:port>',
+'dfs.namenode.rpc-address.<your-nameservice>.<nn2>' = '<nn2_host:port>',
+'dfs.client.failover.proxy.provider.<your-nameservice>' = 'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
+Example:
+
+```sql
+'dfs.nameservices' = 'nameservice1',
+'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+'dfs.client.failover.proxy.provider.nameservice1' = 'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
+```
+
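+Putting it together, a minimal sketch of a catalog that combines simple authentication with the HA parameters above (the catalog name, metastore URI, nameservice, and host addresses are illustrative):
+
+```sql
+CREATE CATALOG hive_hdfs_ha_catalog PROPERTIES (
+    'type' = 'hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:9083',
+    'hdfs.authentication.type' = 'simple',
+    'dfs.nameservices' = 'nameservice1',
+    'dfs.ha.namenodes.nameservice1' = 'nn1,nn2',
+    'dfs.namenode.rpc-address.nameservice1.nn1' = '172.21.0.2:8088',
+    'dfs.namenode.rpc-address.nameservice1.nn2' = '172.21.0.3:8088',
+    'dfs.client.failover.proxy.provider.nameservice1' = 'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+);
+```
+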
 ## Configuration Files
 
 > This feature is supported since version 3.1.0
@@ -103,9 +125,9 @@ If the configuration files contain the above parameters 
mentioned in this docume
 **Examples:**
 
 ```sql
-Multiple configuration files
+-- Multiple configuration files
 'hadoop.config.resources'='hdfs-cluster-1/core-site.xml,hdfs-cluster-1/hdfs-site.xml'
-Single configuration file
+-- Single configuration file
 'hadoop.config.resources'='hdfs-cluster-2/hdfs-site.xml'
 ```
 
@@ -121,7 +143,7 @@ Note: This feature may increase the load on the HDFS 
cluster, please use it judi
 
 You can enable this feature in the following way:
 
-```plain
+```sql
 "dfs.client.hedged.read.threadpool.size" = "128",
 "dfs.client.hedged.read.threshold.millis" = "500"
 ```

