This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 2536b57590 [doc](catalog) optimize catalog doc (#19601)
2536b57590 is described below

commit 2536b57590fbce6d1d3d95e102fda766a029ae6f
Author: Mingyu Chen <[email protected]>
AuthorDate: Wed May 17 21:45:08 2023 +0800

    [doc](catalog) optimize catalog doc (#19601)
---
 docs/en/docs/lakehouse/multi-catalog/hive.md       | 282 +++++++++------------
 docs/en/docs/lakehouse/multi-catalog/iceberg.md    | 138 +++++-----
 docs/en/docs/lakehouse/multi-catalog/jdbc.md       |   2 +-
 .../docs/lakehouse/multi-catalog/multi-catalog.md  |   4 -
 .../Create/CREATE-CATALOG.md                       | 182 +------------
 docs/zh-CN/docs/lakehouse/multi-catalog/hive.md    | 176 +++++--------
 docs/zh-CN/docs/lakehouse/multi-catalog/iceberg.md |  96 +++----
 docs/zh-CN/docs/lakehouse/multi-catalog/jdbc.md    |   2 +-
 .../docs/lakehouse/multi-catalog/multi-catalog.md  |   6 -
 .../Create/CREATE-CATALOG.md                       | 210 ++-------------
 10 files changed, 321 insertions(+), 777 deletions(-)

diff --git a/docs/en/docs/lakehouse/multi-catalog/hive.md 
b/docs/en/docs/lakehouse/multi-catalog/hive.md
index 4bd2667799..52269a648c 100644
--- a/docs/en/docs/lakehouse/multi-catalog/hive.md
+++ b/docs/en/docs/lakehouse/multi-catalog/hive.md
@@ -1,4 +1,3 @@
-
 ---
 {
     "title": "Hive",
@@ -27,29 +26,16 @@ under the License.
 
 # Hive
 
-Once Doris is connected to Hive Metastore or made compatible with Hive 
Metastore metadata service, it can access databases and tables in Hive and 
conduct queries.
-
-Besides Hive, many other systems, such as Iceberg and Hudi, use Hive Metastore 
to keep their metadata. Thus, Doris can also access these systems via Hive 
Catalog. 
-
-## Usage
-
-When connnecting to Hive, Doris:
+By connecting to Hive Metastore, or to a metadata service compatible with Hive Metastore, Doris can automatically obtain Hive database and table information and query the data.
 
-1. Supports Hive version 1/2/3;
-2. Supports both Managed Table and External Table;
-3. Can identify metadata of Hive, Iceberg, and Hudi stored in Hive Metastore;
-4. Supports Hive tables with data stored in JuiceFS, which can be used the 
same way as normal Hive tables (put `juicefs-hadoop-x.x.x.jar` in `fe/lib/` and 
`apache_hdfs_broker/lib/`).
-5. Supports Hive tables with data stored in CHDFS, which can be used the same 
way as normal Hive tables. Follow below steps to prepare doris environment:
-    1. put chdfs_hadoop_plugin_network-x.x.jar in fe/lib/ and 
apache_hdfs_broker/lib/
-    2. copy core-site.xml and hdfs-site.xml from hive cluster to fe/conf/ and 
apache_hdfs_broker/conf
+In addition to Hive, many other systems, such as Iceberg and Hudi, also use Hive Metastore to store their metadata. So through the Hive Catalog, Doris can access not only Hive, but also any system that uses Hive Metastore as its metadata store.
 
-<version since="dev">
+## Limitations
 
-6. Supports Hive / Iceberg tables with data stored in GooseFS(GFS), which can 
be used the same way as normal Hive tables. Follow below steps to prepare doris 
environment:
-    1. put goosefs-x.x.x-client.jar in fe/lib/ and apache_hdfs_broker/lib/
-    2. add extra properties 'fs.AbstractFileSystem.gfs.impl' = 
'com.qcloud.cos.goosefs.hadoop.GooseFileSystem', 'fs.gfs.impl' = 
'com.qcloud.cos.goosefs.hadoop.FileSystem' when creating catalog
-
-</version>
+1. Put core-site.xml and hdfs-site.xml in the conf directories of both FE and BE.
+2. Hive versions 1/2/3 are supported.
+3. Both Managed Tables and External Tables are supported.
+4. Hive, Iceberg, and Hudi metadata stored in Hive Metastore can be identified.
 
 ## Create Catalog
 
@@ -66,9 +52,9 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
- In addition to `type` and  `hive.metastore.uris` , which are required, you 
can specify other parameters regarding the connection.
+In addition to the two required parameters `type` and `hive.metastore.uris`, you can pass additional parameters to provide the information required for the connection.
 
-For example, to specify HDFS HA:
+For example, to provide HDFS HA information:
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
@@ -83,7 +69,7 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-To specify HDFS HA and Kerberos authentication information:
+To provide both HDFS HA information and Kerberos authentication information:
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
@@ -102,12 +88,11 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-Remember `krb5.conf` and `keytab` file should be placed at all `BE` nodes and 
`FE` nodes. The location of `keytab` file should be equal to the value of 
`hadoop.kerberos.keytab`.
-As default, `krb5.conf` should be placed at `/etc/krb5.conf`.
+Please place the `krb5.conf` file and the `keytab` authentication file on all `BE` and `FE` nodes. The path of the `keytab` file must match the configured value. By default, the `krb5.conf` file is read from `/etc/krb5.conf`.
 
-Value of `hive.metastore.kerberos.principal` should be same with the same name 
property used by HMS you are connecting to, which can be found in 
`hive-site.xml`.
+The value of `hive.metastore.kerberos.principal` must match the property of the same name in the connected Hive Metastore, which can be found in `hive-site.xml`.
 
-To provide Hadoop KMS encrypted transmission information:
+To provide Hadoop KMS encrypted transmission information:
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
@@ -117,7 +102,11 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-Or to connect to Hive data stored on JuiceFS:
+### Hive On JuiceFS
+
+For data stored in JuiceFS, an example is as follows:
+
+(`juicefs-hadoop-x.x.x.jar` needs to be placed under `fe/lib/` and `apache_hdfs_broker/lib/`)
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
@@ -132,8 +121,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On S3
 
-Data stored on S3:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -145,16 +132,14 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-Optional:
+Optional properties (a tuning sketch follows the list):
 
-* s3.connection.maximum: s3最大连接数,默认50
-* s3.connection.request.timeout:s3请求超时时间,默认3000ms
-* s3.connection.timeout: s3连接超时时间,默认1000ms
+* s3.connection.maximum: maximum number of S3 connections, 50 by default
+* s3.connection.request.timeout: S3 request timeout, 3000 ms by default
+* s3.connection.timeout: S3 connection timeout, 1000 ms by default
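+
+A minimal sketch of a catalog that tunes these optional properties (the metastore URI, endpoint, and keys are placeholders; untuned values fall back to the defaults above):
+
+```sql
+CREATE CATALOG hive_s3_tuned PROPERTIES (
+    "type"="hms",
+    "hive.metastore.uris" = "thrift://172.0.0.1:9083",
+    "s3.endpoint" = "s3.us-east-1.amazonaws.com",
+    "s3.access_key" = "ak",
+    "s3.secret_key" = "sk",
+    "s3.connection.maximum" = "100",
+    "s3.connection.request.timeout" = "5000",
+    "s3.connection.timeout" = "2000"
+);
+```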
 
 ### Hive On OSS
 
-Data stored on OSS:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -167,8 +152,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On OBS
 
-Data stored on OBS:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -181,8 +164,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On COS
 
-Data stored on COS:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -195,8 +176,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive With Glue
 
-Connect to Glue:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -207,32 +186,9 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-In Doris 1.2.1 and newer, you can create a Resource that contains all these 
parameters, and reuse the Resource when creating new Catalogs. Here is an 
example:
-
-```sql
-# 1. Create Resource
-CREATE RESOURCE hms_resource PROPERTIES (
-    'type'='hms',
-    'hive.metastore.uris' = 'thrift://172.0.0.1:9083',
-    'hadoop.username' = 'hive',
-    'dfs.nameservices'='your-nameservice',
-    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
-    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.0.0.2:8088',
-    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.0.0.3:8088',
-    
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-);
-       
-# 2. Create Catalog and use an existing Resource. The key and value 
information in the followings will overwrite the corresponding information in 
the Resource.
-CREATE CATALOG hive WITH RESOURCE hms_resource PROPERTIES(
-    'key' = 'value'
-);
-```
-
-<version since="dev"></version> 
+## Metadata cache settings
 
-You can use the config `file.meta.cache.ttl-second` to set TTL(Time-to-Live) 
config of File Cache, so that the stale file info will be invalidated 
automatically after expiring. The unit of time is second.
-
-You can also set file_meta_cache_ttl_second to 0 to disable file cache.Here is 
an example:
+When creating a Catalog, you can use the parameter `file.meta.cache.ttl-second` to set the automatic expiration time of the metadata File Cache, or set this value to 0 to disable the File Cache. The time unit is seconds. An example is as follows:
 
 ```sql
 CREATE CATALOG hive PROPERTIES (
@@ -248,15 +204,9 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-You can also put the `hive-site.xml` file in the `conf`  directories of FE and 
BE. This will enable Doris to automatically read information from 
`hive-site.xml`. The relevant information will be overwritten based on the 
following rules :
-       
-
-* Information in Resource will overwrite that in  `hive-site.xml`. 
-* Information in `CREATE CATALOG PROPERTIES` will overwrite that in Resource.
+## Hive Version
 
-### Hive Versions
-
-Doris can access Hive Metastore in all Hive versions. By default, Doris uses 
the interface compatible with Hive 2.3 to access Hive Metastore. You can 
specify a certain Hive version when creating Catalogs, for example:
+Doris can correctly access Hive Metastores of different Hive versions. By default, Doris accesses the Hive Metastore through an interface compatible with Hive 2.3. You can also specify the Hive version when creating the Catalog. For example, to access Hive 1.1.0:
 
 ```sql 
 CREATE CATALOG hive PROPERTIES (
@@ -266,118 +216,116 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-## Column Type Mapping
-
-This is applicable for Hive/Iceberge/Hudi.
+## Column type mapping
 
-| HMS Type      | Doris Type    | Comment                                      
     |
-| ------------- | ------------- | 
------------------------------------------------- |
-| boolean       | boolean       |                                              
     |
-| tinyint       | tinyint       |                                              
     |
-| smallint      | smallint      |                                              
     |
-| int           | int           |                                              
     |
-| bigint        | bigint        |                                              
     |
-| date          | date          |                                              
     |
-| timestamp     | datetime      |                                              
     |
-| float         | float         |                                              
     |
-| double        | double        |                                              
     |
-| char          | char          |                                              
     |
-| varchar       | varchar       |                                              
     |
-| decimal       | decimal       |                                              
     |
-| `array<type>` | `array<type>` | Support nested array, such as 
`array<array<int>>` |
-| `map<KeyType, ValueType>` | `map<KeyType, ValueType>` | Not support nested 
map. KeyType and ValueType should be primitive types. |
-| `struct<col1: Type1, col2: Type2, ...>` | `struct<col1: Type1, col2: Type2, 
...>` | Not support nested struct. Type1, Type2, ... should be primitive types. 
|
-| other         | unsupported   |                                              
     |
+Applicable to Hive/Iceberg/Hudi (a quick usage sketch follows the table).
 
-## Use Ranger for permission verification
+| HMS Type | Doris Type | Comment |
+|---|---|---|
+| boolean| boolean | |
+| tinyint|tinyint | |
+| smallint| smallint| |
+| int| int | |
+| bigint| bigint | |
+| date| date| |
+| timestamp| datetime| |
+| float| float| |
+| double| double| |
+| char| char | |
+| varchar| varchar| |
+| decimal| decimal | |
+| `array<type>` | `array<type>`| Supports nested arrays, such as `array<array<int>>` |
+| `map<KeyType, ValueType>` | `map<KeyType, ValueType>` | Nesting is not supported; KeyType and ValueType must be primitive types |
+| `struct<col1: Type1, col2: Type2, ...>` | `struct<col1: Type1, col2: Type2, ...>` | Nesting is not supported; Type1, Type2, ... must be primitive types |
+| other | unsupported | |
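+
+As a quick check after creating the catalog, the mapped Doris types can be inspected directly (catalog, database, and table names below are hypothetical):
+
+```sql
+SWITCH hive;
+-- shows the Doris-side types the HMS columns were mapped to
+DESC db1.complex_tbl;
+-- complex columns can be selected like any other column
+SELECT c_array, c_map, c_struct FROM db1.complex_tbl LIMIT 10;
+```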
 
-<version since="dev">
+## Integrate with Apache Ranger
 
-Apache Ranger is a security framework for monitoring, enabling services, and 
managing comprehensive data security access on the Hadoop platform.
+Apache Ranger is a security framework for monitoring, enabling services, and managing comprehensive data security access on the Hadoop platform.
 
-Currently, Doris supports Ranger's library, table, and column permissions, but 
does not support encryption, row permissions, and so on.
+Currently, Doris supports Ranger database, table, and column permissions, but does not support encryption, row-level permissions, etc.
 
-</version>
+### Settings
 
+To connect to a Hive Metastore with Ranger permission verification enabled, you need to add the following configuration and configure the environment:
 
-### Environment configuration
-
-Connecting to Hive Metastore with Ranger permission verification enabled 
requires additional configuration&configuration environment:
-1. When creating a catalog, add:
+1. When creating a Catalog, add the following properties (a complete example follows this list):
 
 ```sql
 "access_controller.properties.ranger.service.name" = "hive",
 "access_controller.class" = 
"org.apache.doris.catalog.authorizer.RangerHiveAccessControllerFactory",
 ```
+
 2. Configure all FE environments:
 
-    1. Copy the configuration files ranger-live-audit.xml, 
ranger-live-security.xml, ranger-policymgr-ssl.xml under the HMS conf directory 
to<doris_ Home>/conf directory.
-
-    2. Modify the properties of ranger-live-security.xml. The reference 
configuration is as follows:
-
-    ```sql
-    <?xml version="1.0" encoding="UTF-8"?>
-    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-    <configuration>
-        #The directory for caching permission data, needs to be writable
-        <property>
-            <name>ranger.plugin.hive.policy.cache.dir</name>
-            <value>/mnt/datadisk0/zhangdong/rangerdata</value>
-        </property>
-        #The time interval for periodically pulling permission data
-        <property>
-            <name>ranger.plugin.hive.policy.pollIntervalMs</name>
-            <value>30000</value>
-        </property>
-    
-        <property>
-            
<name>ranger.plugin.hive.policy.rest.client.connection.timeoutMs</name>
-            <value>60000</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.rest.client.read.timeoutMs</name>
-            <value>60000</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.rest.ssl.config.file</name>
-            <value></value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.rest.url</name>
-            <value>http://172.21.0.32:6080</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.source.impl</name>
-            <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.service.name</name>
-            <value>hive</value>
-        </property>
-    
-        <property>
-            <name>xasecure.hive.update.xapolicies.on.grant.revoke</name>
-            <value>true</value>
-        </property>
-    
-    </configuration>
-    ```
-    3. To obtain the log of Ranger authentication itself, you can click<doris_ 
Add the configuration file log4j.properties under the home>/conf directory.
+    1. Copy the configuration files ranger-hive-audit.xml, 
ranger-hive-security.xml, and ranger-policymgr-ssl.xml under the HMS conf 
directory to the FE conf directory.
+
+    2. Modify the properties of ranger-hive-security.xml; a reference configuration is as follows:
+
+        ```xml
+        <?xml version="1.0" encoding="UTF-8"?>
+        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+        <configuration>
+            <!-- The directory for caching permission data; must be writable -->
+            <property>
+                <name>ranger.plugin.hive.policy.cache.dir</name>
+                <value>/mnt/datadisk0/zhangdong/rangerdata</value>
+            </property>
+            <!-- The interval in milliseconds for periodically pulling permission data -->
+            <property>
+                <name>ranger.plugin.hive.policy.pollIntervalMs</name>
+                <value>30000</value>
+            </property>
+        
+            <property>
+                
<name>ranger.plugin.hive.policy.rest.client.connection.timeoutMs</name>
+                <value>60000</value>
+            </property>
+        
+            <property>
+                
<name>ranger.plugin.hive.policy.rest.client.read.timeoutMs</name>
+                <value>60000</value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.policy.rest.ssl.config.file</name>
+                <value></value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.policy.rest.url</name>
+                <value>http://172.21.0.32:6080</value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.policy.source.impl</name>
+                
<value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.service.name</name>
+                <value>hive</value>
+            </property>
+        
+            <property>
+                <name>xasecure.hive.update.xapolicies.on.grant.revoke</name>
+                <value>true</value>
+            </property>
+        
+        </configuration>
+        ```
+
+    3. To obtain the logs of Ranger authentication itself, add the configuration file log4j.properties in the `<doris_home>/conf` directory.
 
     4. Restart FE.
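+
+Putting step 1 together, a complete statement might look like the following sketch (the metastore URI is a placeholder):
+
+```sql
+CREATE CATALOG hive_ranger PROPERTIES (
+    'type'='hms',
+    'hive.metastore.uris' = 'thrift://172.0.0.1:9083',
+    'access_controller.properties.ranger.service.name' = 'hive',
+    'access_controller.class' = 'org.apache.doris.catalog.authorizer.RangerHiveAccessControllerFactory'
+);
+```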
 
 ### Best Practices
 
-1.Create user user1 on the ranger side and authorize the query permission of 
db1.table1.col1 
+1. Create user user1 on the Ranger side and authorize it to query db1.table1.col1
 
-2.Create the role role1 on the ranger side and authorize the query permission 
of db1.table1.col2
+2. Create role role1 on the Ranger side and authorize it to query db1.table1.col2
 
-3.Create user user1 with the same name in Doris, and user1 will directly have 
the query permission of db1.table1.col1
+3. Create a user with the same name, user1, in Doris; user1 will then directly have permission to query db1.table1.col1
 
-4.Create the role role1 with the same name in Doris and assign role1 to user1. 
User1 will have query permissions for both db1.table1.col1 and col2
+4. Create a role with the same name, role1, in Doris, and assign role1 to user1; user1 will then have permission to query both db1.table1.col1 and col2 (a Doris-side sketch follows)
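+
+A minimal Doris-side sketch of steps 3 and 4 (the password is a placeholder; the column permissions themselves are defined in Ranger, not in these statements):
+
+```sql
+-- step 3: a user with the same name as the Ranger user
+CREATE USER 'user1'@'%' IDENTIFIED BY 'change_me';
+-- step 4: a role with the same name as the Ranger role, assigned to user1
+CREATE ROLE role1;
+GRANT 'role1' TO 'user1'@'%';
+```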
 
diff --git a/docs/en/docs/lakehouse/multi-catalog/iceberg.md 
b/docs/en/docs/lakehouse/multi-catalog/iceberg.md
index 2284eb945e..383a01b7b7 100644
--- a/docs/en/docs/lakehouse/multi-catalog/iceberg.md
+++ b/docs/en/docs/lakehouse/multi-catalog/iceberg.md
@@ -27,26 +27,16 @@ under the License.
 
 # Iceberg
 
-## Usage
+## Limitations
 
-When connecting to Iceberg, Doris:
-
-1. Supports Iceberg V1/V2 table formats;
-2. Supports Position Delete but not Equality Delete for V2 format;
-
-<version since="dev">
-
-3. Supports Hive / Iceberg tables with data stored in GooseFS(GFS), which can 
be used the same way as normal Hive tables. Follow below steps to prepare doris 
environment:
-    1. put goosefs-x.x.x-client.jar in fe/lib/ and apache_hdfs_broker/lib/
-    2. add extra properties 'fs.AbstractFileSystem.gfs.impl' = 
'com.qcloud.cos.goosefs.hadoop.GooseFileSystem', 'fs.gfs.impl' = 
'com.qcloud.cos.goosefs.hadoop.FileSystem' when creating catalog
-
-</version>
+1. Iceberg V1/V2 table formats are supported.
+2. The V2 format only supports Position Delete, not Equality Delete.
 
 ## Create Catalog
 
-### Hive Metastore Catalog
+### Create Catalog Based on Hive Metastore
 
-Same as creating Hive Catalogs. A simple example is provided here. See 
[Hive](./hive.md) for more information.
+This is basically the same as creating a Hive Catalog, so only a simple example is given here. See [Hive Catalog](./hive.md) for other examples.
 
 ```sql
 CREATE CATALOG iceberg PROPERTIES (
@@ -61,85 +51,77 @@ CREATE CATALOG iceberg PROPERTIES (
 );
 ```
 
-### Iceberg Native Catalog
-
-<version since="dev">
-
-Access metadata with the iceberg API. The Hive, REST, Glue and other services 
can serve as the iceberg catalog.
-
-</version>
-
-#### Using Iceberg Hive Catalog
-
-```sql
-CREATE CATALOG iceberg PROPERTIES (
-    'type'='iceberg',
-    'iceberg.catalog.type'='hms',
-    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
-    'hadoop.username' = 'hive',
-    'dfs.nameservices'='your-nameservice',
-    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
-    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
-    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
-    
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-);
-```
-
-#### Using Iceberg Glue Catalog
-
-```sql
-CREATE CATALOG glue PROPERTIES (
-"type"="iceberg",
-"iceberg.catalog.type" = "glue",
-"glue.endpoint" = "https://glue.us-east-1.amazonaws.com";,
-"glue.access_key" = "ak",
-"glue.secret_key" = "sk"
-);
-```
-
-The other properties can refer to [Iceberg Glue 
Catalog](https://iceberg.apache.org/docs/latest/aws/#glue-catalog)
-
-- Using Iceberg REST Catalog
-
-RESTful service as the server side. Implementing RESTCatalog interface of 
iceberg to obtain metadata.
-
-```sql
-CREATE CATALOG iceberg PROPERTIES (
-    'type'='iceberg',
-    'iceberg.catalog.type'='rest',
-    'uri' = 'http://172.21.0.1:8181',
-);
-```
-
-If you want to use S3 storage, the following properties need to be set.
+### Create Catalog based on Iceberg API
+
+This method uses the Iceberg API to access metadata, and supports services such as Hive Metastore, REST, and Glue as Iceberg's catalog.
+
+- Hive Metastore
+
+    ```sql
+    CREATE CATALOG iceberg PROPERTIES (
+        'type'='iceberg',
+        'iceberg.catalog.type'='hms',
+        'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+        'hadoop.username' = 'hive',
+        'dfs.nameservices'='your-nameservice',
+        'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+        'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+        'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+        
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+    );
+    ```
+
+- Glue Catalog
+
+    ```sql
+    CREATE CATALOG glue PROPERTIES (
+        "type"="iceberg",
+        "iceberg.catalog.type" = "glue",
+        "glue.endpoint" = "https://glue.us-east-1.amazonaws.com";,
+        "glue.access_key" = "ak",
+        "glue.secret_key" = "sk"
+    );
+    ```
+
+    For Iceberg properties, see [Iceberg Glue 
Catalog](https://iceberg.apache.org/docs/latest/aws/#glue-catalog)
+
+- REST Catalog
+
+    This method requires a REST service to be available in advance, and users need to implement the REST interface for obtaining Iceberg metadata.
+    
+    ```sql
+    CREATE CATALOG iceberg PROPERTIES (
+        'type'='iceberg',
+        'iceberg.catalog.type'='rest',
+        'uri' = 'http://172.21.0.1:8181'
+    );
+    ```
+
+If the data is stored on S3, the following parameters can be used in the properties:
 
 ```
 "s3.access_key" = "ak"
 "s3.secret_key" = "sk"
 "s3.endpoint" = "http://endpoint-uri";
-"s3.credentials.provider" = "provider-class-name" // Optional. The default 
credentials class is based on BasicAWSCredentials.
+"s3.credentials.provider" = "provider-class-name" // 
可选,默认凭证类基于BasicAWSCredentials实现。
 ```
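+
+A minimal sketch combining the two, for a REST catalog whose data lives on S3 (the URI, endpoint, and keys are placeholders):
+
+```sql
+CREATE CATALOG iceberg_rest_s3 PROPERTIES (
+    'type'='iceberg',
+    'iceberg.catalog.type'='rest',
+    'uri' = 'http://172.21.0.1:8181',
+    's3.access_key' = 'ak',
+    's3.secret_key' = 'sk',
+    's3.endpoint' = 'http://endpoint-uri'
+);
+```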
 
-## Column Type Mapping
+## Column type mapping
 
-Same as that in Hive Catalogs. See the relevant section in [Hive](./hive.md).
+Consistent with Hive Catalog; see the **Column type mapping** section in [Hive Catalog](./hive.md).
 
 ## Time Travel
 
-<version since="1.2.2">
-
-Doris supports reading the specified Snapshot of Iceberg tables.
-
-</version>
+Doris supports reading a specified snapshot of an Iceberg table.
 
-Each write operation to an Iceberg table will generate a new Snapshot.
+Every write operation to an Iceberg table generates a new snapshot.
 
-By default, a read request will only read the latest Snapshot.
+By default, read requests only read the latest snapshot.
 
-You can read data of historical table versions using the  `FOR TIME AS OF`  or 
 `FOR VERSION AS OF`  statements based on the Snapshot ID or the timepoint the 
Snapshot is generated. For example:
+You can use the `FOR TIME AS OF` and `FOR VERSION AS OF` clauses to read historical data based on the snapshot ID or the time when the snapshot was generated. Examples are as follows:
 
 `SELECT * FROM iceberg_tbl FOR TIME AS OF "2022-10-07 17:20:37";`
 
 `SELECT * FROM iceberg_tbl FOR VERSION AS OF 868895038966572;`
 
-You can use the 
[iceberg_meta](https://doris.apache.org/docs/dev/sql-manual/sql-functions/table-functions/iceberg_meta/)
 table function to view the Snapshot details of the specified table.
+In addition, you can use the [iceberg_meta](../../sql-manual/sql-functions/table-functions/iceberg_meta.md) table function to query the snapshot information of a specified table, as sketched below.
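+
+A minimal sketch of the table function (catalog, database, and table names are placeholders; see the linked page for the exact signature in your version):
+
+```sql
+SELECT * FROM iceberg_meta(
+    "table" = "iceberg.db1.iceberg_tbl",
+    "query_type" = "snapshots"
+);
+```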
diff --git a/docs/en/docs/lakehouse/multi-catalog/jdbc.md 
b/docs/en/docs/lakehouse/multi-catalog/jdbc.md
index ccefe84e64..4ad4ce493b 100644
--- a/docs/en/docs/lakehouse/multi-catalog/jdbc.md
+++ b/docs/en/docs/lakehouse/multi-catalog/jdbc.md
@@ -536,7 +536,7 @@ For Oracle mode, please refer to [Oracle type 
mapping](#Oracle)
    failed to load driver class com.mysql.jdbc.driver in either of hikariconfig 
class loader
    ```
 
-   Such errors occur because the `driver_class` has been wrongly put when 
creating the Resource. The problem with the above example is the letter case. 
It should be corrected as `"driver_class" = "com.mysql.jdbc.Driver"`.
+   Such errors occur because the `driver_class` was set incorrectly when creating the catalog. The problem in the above example is letter case; it should be corrected to `"driver_class" = "com.mysql.jdbc.Driver"`, as in the sketch below.
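+
+   A minimal sketch with the correctly cased driver class (the URL, credentials, and driver path are placeholders):
+
+   ```sql
+   CREATE CATALOG jdbc PROPERTIES (
+       "type"="jdbc",
+       "user"="root",
+       "password"="123456",
+       "jdbc_url" = "jdbc:mysql://127.0.0.1:3306/doris_test",
+       "driver_url" = "file:///path/to/mysql-connector-java-5.1.49.jar",
+       "driver_class" = "com.mysql.jdbc.Driver"
+   );
+   ```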
 
 5. How to fix communication link failures?
 
diff --git a/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md 
b/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md
index 92de24b727..934d62d255 100644
--- a/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md
+++ b/docs/en/docs/lakehouse/multi-catalog/multi-catalog.md
@@ -76,10 +76,6 @@ Multi-Catalog works as an additional and enhanced external 
table connection meth
     
     The deletion only means to remove the mapping in Doris to the 
corresponding catalog. It doesn't change the external catalog itself by all 
means.
     
-5. Resource
-
-       Resource is a set of configurations. Users can create a Resource using 
the [CREATE 
RESOURCE](https://doris.apache.org/docs/dev/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE/)
 command, and then apply this Resource for a newly created Catalog. One 
Resource can be reused for multiple Catalogs. 
-
 ## Examples
 
 ### Connect to Hive
diff --git 
a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
 
b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
index 92eb2345ee..b174e09d4d 100644
--- 
a/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
+++ 
b/docs/en/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
@@ -38,39 +38,7 @@ Syntax:
 
 ```sql
 CREATE CATALOG [IF NOT EXISTS] catalog_name
-       [WITH RESOURCE resource_name]
-       [PROPERTIES ("key"="value", ...)];
-```
-
-`RESOURCE` can be created from [CREATE 
RESOURCE](../../../sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md),
 current supports:
-
-* hms:Hive MetaStore
-* es:Elasticsearch
-* jdbc: Standard interface for database access (JDBC), currently supports 
MySQL and PostgreSQL
-
-### Create catalog
-
-**Create catalog through resource**
-
-In later versions of `1.2.0`, it is recommended to create a catalog through 
resource.
-```sql
-CREATE RESOURCE catalog_resource PROPERTIES (
-    'type'='hms|es|jdbc',
-    ...
-);
-CREATE CATALOG catalog_name WITH RESOURCE catalog_resource PROPERTIES (
-    'key' = 'value'
-);
-```
-
-**Create catalog through properties**
-
-Version `1.2.0` creates a catalog through properties.
-```sql
-CREATE CATALOG catalog_name PROPERTIES (
-    'type'='hms|es|jdbc',
-    ...
-);
+       PROPERTIES ("key"="value", ...);
 ```
 
 ### Example
@@ -78,19 +46,6 @@ CREATE CATALOG catalog_name PROPERTIES (
 1. Create catalog hive
 
        ```sql
-       -- 1.2.0+ Version
-       CREATE RESOURCE hms_resource PROPERTIES (
-               'type'='hms',
-               'hive.metastore.uris' = 'thrift://127.0.0.1:7004',
-               'dfs.nameservices'='HANN',
-               'dfs.ha.namenodes.HANN'='nn1,nn2',
-               'dfs.namenode.rpc-address.HANN.nn1'='nn1_host:rpc_port',
-               'dfs.namenode.rpc-address.HANN.nn2'='nn2_host:rpc_port',
-               
'dfs.client.failover.proxy.provider.HANN'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-       );
-       CREATE CATALOG hive WITH RESOURCE hms_resource;
-
-       -- 1.2.0 Version
        CREATE CATALOG hive PROPERTIES (
                'type'='hms',
                'hive.metastore.uris' = 'thrift://127.0.0.1:7004',
@@ -105,14 +60,6 @@ CREATE CATALOG catalog_name PROPERTIES (
 2. Create catalog es
 
        ```sql
-       -- 1.2.0+ Version
-       CREATE RESOURCE es_resource PROPERTIES (
-               "type"="es",
-               "hosts"="http://127.0.0.1:9200";
-       );
-       CREATE CATALOG es WITH RESOURCE es_resource;
-
-       -- 1.2.0 Version
        CREATE CATALOG es PROPERTIES (
                "type"="es",
                "hosts"="http://127.0.0.1:9200";
@@ -120,22 +67,10 @@ CREATE CATALOG catalog_name PROPERTIES (
        ```
 
 3. Create catalog jdbc
+
        **mysql**
 
        ```sql
-       -- 1.2.0+ Version
-       -- The first way 
-       CREATE RESOURCE mysql_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="root",
-               "password"="123456",
-               "jdbc_url" = 
"jdbc:mysql://127.0.0.1:3316/doris_test?useSSL=false",
-               "driver_url" = 
"https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar";,
-               "driver_class" = "com.mysql.cj.jdbc.Driver"
-       );
-       CREATE CATALOG jdbc WITH RESOURCE mysql_resource;
-
-       -- The second way
        CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="root",
@@ -144,33 +79,11 @@ CREATE CATALOG catalog_name PROPERTIES (
                "driver_url" = 
"https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar";,
                "driver_class" = "com.mysql.cj.jdbc.Driver"
        );
-       
-       -- 1.2.0 Version
-       CREATE CATALOG jdbc PROPERTIES (
-               "type"="jdbc",
-               "jdbc.user"="root",
-               "jdbc.password"="123456",
-               "jdbc.jdbc_url" = 
"jdbc:mysql://127.0.0.1:3316/doris_test?useSSL=false",
-               "jdbc.driver_url" = 
"https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar";,
-               "jdbc.driver_class" = "com.mysql.cj.jdbc.Driver"
-       );
        ```
 
        **postgresql**
 
        ```sql
-       -- The first way
-       CREATE RESOURCE pg_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="postgres",
-               "password"="123456",
-               "jdbc_url" = "jdbc:postgresql://127.0.0.1:5432/demo",
-               "driver_url" = "file:///path/to/postgresql-42.5.1.jar",
-               "driver_class" = "org.postgresql.Driver"
-       );
-       CREATE CATALOG jdbc WITH RESOURCE pg_resource;
-
-       -- The second way
        CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="postgres",
@@ -184,18 +97,6 @@ CREATE CATALOG catalog_name PROPERTIES (
        **clickhouse**
 
        ```sql
-       -- The first way
-       CREATE RESOURCE clickhouse_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="default",
-               "password"="123456",
-               "jdbc_url" = "jdbc:clickhouse://127.0.0.1:8123/demo",
-               "driver_url" = 
"file:///path/to/clickhouse-jdbc-0.3.2-patch11-all.jar",
-               "driver_class" = "com.clickhouse.jdbc.ClickHouseDriver"
-       )
-       CREATE CATALOG jdbc WITH RESOURCE clickhouse_resource;
-       
-       -- The second way
        CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="default",
@@ -208,18 +109,6 @@ CREATE CATALOG catalog_name PROPERTIES (
 
        **oracle**
        ```sql
-       -- The first way
-       CREATE RESOURCE oracle_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="doris",
-               "password"="123456",
-               "jdbc_url" = "jdbc:oracle:thin:@127.0.0.1:1521:helowin",
-               "driver_url" = "file:///path/to/ojdbc6.jar",
-               "driver_class" = "oracle.jdbc.driver.OracleDriver"
-       );
-       CREATE CATALOG jdbc WITH RESOURCE oracle_resource;
-
-       -- The second way
        CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="doris",
@@ -232,18 +121,6 @@ CREATE CATALOG catalog_name PROPERTIES (
 
        **SQLServer**
        ```sql
-       -- The first way
-       CREATE RESOURCE sqlserver_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="SA",
-               "password"="Doris123456",
-               "jdbc_url" = 
"jdbc:sqlserver://localhost:1433;DataBaseName=doris_test",
-               "driver_url" = "file:///path/to/mssql-jdbc-11.2.3.jre8.jar",
-               "driver_class" = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
-       );
-       CREATE CATALOG sqlserver_catalog WITH RESOURCE sqlserver_resource;
-
-       -- The second way
        CREATE CATALOG sqlserver_catalog PROPERTIES (
                "type"="jdbc",
                "user"="SA",
@@ -254,20 +131,8 @@ CREATE CATALOG catalog_name PROPERTIES (
        );      
        ```
 
-   **SAP HANA**
-   ```sql
-   -- The first way
-   CREATE RESOURCE saphana_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="SYSTEM",
-       "password"="SAPHANA",
-       "jdbc_url" = "jdbc:sap://localhost:31515/TEST",
-       "driver_url" = "file:///path/to/ngdbc.jar",
-       "driver_class" = "com.sap.db.jdbc.Driver"
-   );
-   CREATE CATALOG saphana_catalog WITH RESOURCE saphana_resource;
-
-   -- The second way
+    **SAP HANA**
+    ```sql
        CREATE CATALOG saphana_catalog PROPERTIES (
        "type"="jdbc",
        "user"="SYSTEM",
@@ -276,22 +141,10 @@ CREATE CATALOG catalog_name PROPERTIES (
        "driver_url" = "file:///path/to/ngdbc.jar",
        "driver_class" = "com.sap.db.jdbc.Driver"
        );
-   ```
-
-   **Trino**
-   ```sql
-   -- The first way
-       CREATE EXTERNAL RESOURCE trino_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="hadoop",
-       "password"="",
-       "jdbc_url" = "jdbc:trino://localhost:8080/hive",
-       "driver_url" = "file:///path/to/trino-jdbc-389.jar",
-       "driver_class" = "io.trino.jdbc.TrinoDriver"
-       );
-   CREATE CATALOG trino_catalog WITH RESOURCE trino_resource;
+    ```
 
-   -- The second way
+    **Trino**
+    ```sql
        CREATE CATALOG trino_catalog PROPERTIES (
        "type"="jdbc",
        "user"="hadoop",
@@ -300,23 +153,10 @@ CREATE CATALOG catalog_name PROPERTIES (
        "driver_url" = "file:///path/to/trino-jdbc-389.jar",
        "driver_class" = "io.trino.jdbc.TrinoDriver"
        );
-   ```
-
-   **OceanBase**
-   ```sql
-   -- The first way
-       CREATE EXTERNAL RESOURCE oceanbase_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="root",
-       "password"="",
-       "jdbc_url" = "jdbc:oceanbase://localhost:2881/demo",
-       "driver_url" = "file:///path/to/oceanbase-client-2.4.2.jar",
-       "driver_class" = "com.oceanbase.jdbc.Driver",
-          "oceanbase_mode" = "mysql" or "oracle"
-       );
-   CREATE CATALOG oceanbase_catalog WITH RESOURCE oceanbase_resource;
+    ```
 
-   -- The second way
+    **OceanBase**
+    ```sql
        CREATE CATALOG oceanbase_catalog PROPERTIES (
        "type"="jdbc",
        "user"="root",
@@ -326,7 +166,7 @@ CREATE CATALOG catalog_name PROPERTIES (
        "driver_class" = "com.oceanbase.jdbc.Driver",
           "oceanbase_mode" = "mysql" or "oracle"
        );
-   ```
+    ```
 
 ### Keywords
 
diff --git a/docs/zh-CN/docs/lakehouse/multi-catalog/hive.md 
b/docs/zh-CN/docs/lakehouse/multi-catalog/hive.md
index 0d2d9928af..ff0ebdd157 100644
--- a/docs/zh-CN/docs/lakehouse/multi-catalog/hive.md
+++ b/docs/zh-CN/docs/lakehouse/multi-catalog/hive.md
@@ -32,10 +32,10 @@ under the License.
 
 ## 使用限制
 
-1. hive 支持 1/2/3 版本。
-2. 支持 Managed Table 和 External Table。
-3. 可以识别 Hive Metastore 中存储的 hive、iceberg、hudi 元数据。
-4. 支持数据存储在 Juicefs 上的 hive 表,用法如下(需要把juicefs-hadoop-x.x.x.jar放在 fe/lib/ 和 
apache_hdfs_broker/lib/ 下)。
+1. 需将 core-site.xml,hdfs-site.xml 放到 FE 和 BE 的 conf 目录下。
+2. hive 支持 1/2/3 版本。
+3. 支持 Managed Table 和 External Table。
+4. 可以识别 Hive Metastore 中存储的 hive、iceberg、hudi 元数据。
 
 ## 创建 Catalog
 
@@ -54,11 +54,6 @@ CREATE CATALOG hive PROPERTIES (
 
 除了 `type` 和 `hive.metastore.uris` 两个必须参数外,还可以通过更多参数来传递连接所需要的信息。
 
-> `specified_database_list`:
->
-> 支持只同步指定的同步多个database,以','分隔。默认为'',同步所有database。db名称是大小写敏感的。
->
-
 如提供 HDFS HA 信息,示例如下:
 
 ```sql
@@ -110,6 +105,8 @@ CREATE CATALOG hive PROPERTIES (
 
 数据存储在JuiceFS,示例如下:
 
+(需要把 `juicefs-hadoop-x.x.x.jar` 放在 `fe/lib/` 和 `apache_hdfs_broker/lib/` 下)
+
 ```sql
 CREATE CATALOG hive PROPERTIES (
     'type'='hms',
@@ -123,8 +120,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On S3
 
-数据存储在S3,示例如下:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -144,8 +139,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On OSS
 
-数据存储在OSS,示例如下:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -158,8 +151,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On OBS
 
-数据存储在OBS,示例如下:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -172,8 +163,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive On COS
 
-数据存储在COS,示例如下:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -186,8 +175,6 @@ CREATE CATALOG hive PROPERTIES (
 
 ### Hive With Glue
 
-元数据存储在Glue,示例如下:
-
 ```sql
 CREATE CATALOG hive PROPERTIES (
     "type"="hms",
@@ -198,30 +185,10 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-### Hive Resource
+## 元数据缓存设置
 
-在 1.2.1 版本之后,我们也可以将这些信息通过创建一个 Resource 统一存储,然后在创建 Catalog 时使用这个 Resource。示例如下:
+创建 Catalog 时可以采用参数 `file.meta.cache.ttl-second` 来设置元数据 File Cache 
自动失效时间,也可以将该值设置为 0 来禁用 File Cache。时间单位为:秒。示例如下:
 
-```sql
-# 1. 创建 Resource
-CREATE RESOURCE hms_resource PROPERTIES (
-    'type'='hms',
-    'hive.metastore.uris' = 'thrift://172.0.0.1:9083',
-    'hadoop.username' = 'hive',
-    'dfs.nameservices'='your-nameservice',
-    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
-    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.0.0.2:8088',
-    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.0.0.3:8088',
-    
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-);
-       
-# 2. 创建 Catalog 并使用 Resource,这里的 Key Value 信息会覆盖 Resource 中的信息。
-CREATE CATALOG hive WITH RESOURCE hms_resource PROPERTIES(
-       'key' = 'value'
-);
-```
-<version since="dev"></version>
-创建 Catalog 时可以采用参数 `file.meta.cache.ttl-second` 来设置 File Cache 
自动失效时间,也可以将该值设置为 0 来禁用 File Cache。时间单位为:秒。示例如下:
 ```sql
 CREATE CATALOG hive PROPERTIES (
     'type'='hms',
@@ -236,13 +203,7 @@ CREATE CATALOG hive PROPERTIES (
 );
 ```
 
-
-我们也可以直接将 hive-site.xml 放到 FE 和 BE 的 conf 目录下,系统也会自动读取 hive-site.xml 
中的信息。信息覆盖的规则如下:
-
-* Resource 中的信息覆盖 hive-site.xml 中的信息。
-* CREATE CATALOG PROPERTIES 中的信息覆盖 Resource 中的信息。
-
-### Hive 版本
+## Hive 版本
 
 Doris 可以正确访问不同 Hive 版本中的 Hive Metastore。在默认情况下,Doris 会以 Hive 2.3 版本的兼容接口访问 
Hive Metastore。你也可以在创建 Catalog 时指定 hive 的版本。如访问 Hive 1.1.0 版本:
 
@@ -277,84 +238,83 @@ CREATE CATALOG hive PROPERTIES (
 | `struct<col1: Type1, col2: Type2, ...>` | `struct<col1: Type1, col2: Type2, 
...>` | 暂不支持嵌套,Type1, Type2, ... 需要为基础类型 |
 | other | unsupported | |
 
-## 使用Ranger进行权限校验
-
-<version since="dev">
+## 使用 Ranger 进行权限校验
 
 Apache Ranger是一个用来在Hadoop平台上进行监控,启用服务,以及全方位数据安全访问管理的安全框架。
 
 目前doris支持ranger的库、表、列权限,不支持加密、行权限等。
 
-</version>
-
 ### 环境配置
 
 连接开启 Ranger 权限校验的 Hive Metastore 需要增加配置 & 配置环境:
+
 1. 创建 Catalog 时增加:
 
 ```sql
 "access_controller.properties.ranger.service.name" = "hive",
 "access_controller.class" = 
"org.apache.doris.catalog.authorizer.RangerHiveAccessControllerFactory",
 ```
+
 2. 配置所有 FE 环境:
 
-    1. 将 HMS conf 
目录下的配置文件ranger-hive-audit.xml,ranger-hive-security.xml,ranger-policymgr-ssl.xml复制到
 <doris_home>/conf 目录下。
+    1. 将 HMS conf 
目录下的配置文件ranger-hive-audit.xml,ranger-hive-security.xml,ranger-policymgr-ssl.xml复制到
 FE 的 conf 目录下。
 
     2. 修改 ranger-hive-security.xml 的属性,参考配置如下:
 
-    ```sql
-    <?xml version="1.0" encoding="UTF-8"?>
-    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-    <configuration>
-        #The directory for caching permission data, needs to be writable
-        <property>
-            <name>ranger.plugin.hive.policy.cache.dir</name>
-            <value>/mnt/datadisk0/zhangdong/rangerdata</value>
-        </property>
-        #The time interval for periodically pulling permission data
-        <property>
-            <name>ranger.plugin.hive.policy.pollIntervalMs</name>
-            <value>30000</value>
-        </property>
-    
-        <property>
-            
<name>ranger.plugin.hive.policy.rest.client.connection.timeoutMs</name>
-            <value>60000</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.rest.client.read.timeoutMs</name>
-            <value>60000</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.rest.ssl.config.file</name>
-            <value></value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.rest.url</name>
-            <value>http://172.21.0.32:6080</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.policy.source.impl</name>
-            <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
-        </property>
-    
-        <property>
-            <name>ranger.plugin.hive.service.name</name>
-            <value>hive</value>
-        </property>
-    
-        <property>
-            <name>xasecure.hive.update.xapolicies.on.grant.revoke</name>
-            <value>true</value>
-        </property>
-    
-    </configuration>
-    ```
-    3. 为获取到 Ranger 鉴权本身的日志,可在 <doris_home>/conf 目录下添加配置文件 log4j.properties。
+        ```xml
+        <?xml version="1.0" encoding="UTF-8"?>
+        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+        <configuration>
+            <!-- The directory for caching permission data; must be writable -->
+            <property>
+                <name>ranger.plugin.hive.policy.cache.dir</name>
+                <value>/mnt/datadisk0/zhangdong/rangerdata</value>
+            </property>
+            <!-- The interval in milliseconds for periodically pulling permission data -->
+            <property>
+                <name>ranger.plugin.hive.policy.pollIntervalMs</name>
+                <value>30000</value>
+            </property>
+        
+            <property>
+                
<name>ranger.plugin.hive.policy.rest.client.connection.timeoutMs</name>
+                <value>60000</value>
+            </property>
+        
+            <property>
+                
<name>ranger.plugin.hive.policy.rest.client.read.timeoutMs</name>
+                <value>60000</value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.policy.rest.ssl.config.file</name>
+                <value></value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.policy.rest.url</name>
+                <value>http://172.21.0.32:6080</value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.policy.source.impl</name>
+                
<value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
+            </property>
+        
+            <property>
+                <name>ranger.plugin.hive.service.name</name>
+                <value>hive</value>
+            </property>
+        
+            <property>
+                <name>xasecure.hive.update.xapolicies.on.grant.revoke</name>
+                <value>true</value>
+            </property>
+        
+        </configuration>
+        ```
+
+    3. 为获取到 Ranger 鉴权本身的日志,可在 `<doris_home>/conf` 目录下添加配置文件 log4j.properties。
 
     4. 重启 FE。
 
@@ -369,5 +329,3 @@ Apache Ranger是一个用来在Hadoop平台上进行监控,启用服务,以
 4.在doris创建同名角色role1,并将role1分配给user1,user1将同时拥有db1.table1.col1和col2的查询权限
 
 
-
-
diff --git a/docs/zh-CN/docs/lakehouse/multi-catalog/iceberg.md 
b/docs/zh-CN/docs/lakehouse/multi-catalog/iceberg.md
index 79d2198ce2..77b959c239 100644
--- a/docs/zh-CN/docs/lakehouse/multi-catalog/iceberg.md
+++ b/docs/zh-CN/docs/lakehouse/multi-catalog/iceberg.md
@@ -51,60 +51,51 @@ CREATE CATALOG iceberg PROPERTIES (
 );
 ```
 
-> `specified_database_list`:
->
-> 支持只同步指定的同步多个database,以','分隔。默认为'',同步所有database。db名称是大小写敏感的。
->
-
 ### 基于Iceberg API创建Catalog
 
-<version since="dev">
-
 使用Iceberg API访问元数据的方式,支持Hive、REST、Glue等服务作为Iceberg的Catalog。
 
-</version>
-
-#### Hive Metastore作为元数据服务
-
-```sql
-CREATE CATALOG iceberg PROPERTIES (
-    'type'='iceberg',
-    'iceberg.catalog.type'='hms',
-    'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
-    'hadoop.username' = 'hive',
-    'dfs.nameservices'='your-nameservice',
-    'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
-    'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
-    'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
-    
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-);
-```
-
-#### Glue Catalog作为元数据服务
-
-```sql
-CREATE CATALOG glue PROPERTIES (
-"type"="iceberg",
-"iceberg.catalog.type" = "glue",
-"glue.endpoint" = "https://glue.us-east-1.amazonaws.com";,
-"glue.access_key" = "ak",
-"glue.secret_key" = "sk"
-);
-```
-
-Iceberg属性详情参见 [Iceberg Glue 
Catalog](https://iceberg.apache.org/docs/latest/aws/#glue-catalog)
-
-- REST Catalog作为元数据服务
-
-该方式需要预先提供REST服务,用户需实现获取Iceberg元数据的REST接口。
-
-```sql
-CREATE CATALOG iceberg PROPERTIES (
-    'type'='iceberg',
-    'iceberg.catalog.type'='rest',
-    'uri' = 'http://172.21.0.1:8181',
-);
-```
+- Hive Metastore 作为元数据服务
+
+    ```sql
+    CREATE CATALOG iceberg PROPERTIES (
+        'type'='iceberg',
+        'iceberg.catalog.type'='hms',
+        'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
+        'hadoop.username' = 'hive',
+        'dfs.nameservices'='your-nameservice',
+        'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
+        'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
+        'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
+        
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
+    );
+    ```
+
+- Glue Catalog 作为元数据服务
+
+    ```sql
+    CREATE CATALOG glue PROPERTIES (
+        "type"="iceberg",
+        "iceberg.catalog.type" = "glue",
+        "glue.endpoint" = "https://glue.us-east-1.amazonaws.com";,
+        "glue.access_key" = "ak",
+        "glue.secret_key" = "sk"
+    );
+    ```
+
+    Iceberg 属性详情参见 [Iceberg Glue 
Catalog](https://iceberg.apache.org/docs/latest/aws/#glue-catalog)
+
+- REST Catalog 作为元数据服务
+
+    该方式需要预先提供REST服务,用户需实现获取Iceberg元数据的REST接口。
+    
+    ```sql
+    CREATE CATALOG iceberg PROPERTIES (
+        'type'='iceberg',
+        'iceberg.catalog.type'='rest',
+        'uri' = 'http://172.21.0.1:8181'
+    );
+    ```
 
 若数据存放在S3上,properties中可以使用以下参数
 
@@ -121,12 +112,8 @@ CREATE CATALOG iceberg PROPERTIES (
 
 ## Time Travel
 
-<version since="dev">
-
 支持读取 Iceberg 表指定的 Snapshot。
 
-</version>
-
 每一次对iceberg表的写操作都会产生一个新的快照。
 
 默认情况下,读取请求只会读取最新版本的快照。
@@ -138,3 +125,4 @@ CREATE CATALOG iceberg PROPERTIES (
 `SELECT * FROM iceberg_tbl FOR VERSION AS OF 868895038966572;`
 
 另外,可以使用 
[iceberg_meta](../../sql-manual/sql-functions/table-functions/iceberg_meta.md) 
表函数查询指定表的 snapshot 信息。
+
diff --git a/docs/zh-CN/docs/lakehouse/multi-catalog/jdbc.md 
b/docs/zh-CN/docs/lakehouse/multi-catalog/jdbc.md
index 192dc99a27..10bc6286a6 100644
--- a/docs/zh-CN/docs/lakehouse/multi-catalog/jdbc.md
+++ b/docs/zh-CN/docs/lakehouse/multi-catalog/jdbc.md
@@ -534,7 +534,7 @@ Oracle 模式请参考 [Oracle类型映射](#Oracle)
     failed to load driver class com.mysql.jdbc.driver in either of 
hikariconfig class loader
     ```
  
-    这是因为在创建resource时,填写的driver_class不正确,需要正确填写,如上方例子为大小写问题,应填写为 
`"driver_class" = "com.mysql.jdbc.Driver"`
+    这是因为在创建 catalog 时,填写的driver_class不正确,需要正确填写,如上方例子为大小写问题,应填写为 
`"driver_class" = "com.mysql.jdbc.Driver"`
 
 5. 读取 MySQL 问题出现通信链路异常
 
diff --git a/docs/zh-CN/docs/lakehouse/multi-catalog/multi-catalog.md 
b/docs/zh-CN/docs/lakehouse/multi-catalog/multi-catalog.md
index 39e78f9d78..d9465efec1 100644
--- a/docs/zh-CN/docs/lakehouse/multi-catalog/multi-catalog.md
+++ b/docs/zh-CN/docs/lakehouse/multi-catalog/multi-catalog.md
@@ -76,12 +76,6 @@ under the License.
     
     该操作仅会删除 Doris 中该 Catalog 的映射信息,并不会修改或变更任何外部数据目录的内容。
     
-5. Resource
-
-       Resource 是一组配置的集合。用户可以通过 [CREATE 
RESOURCE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md)
 命令创建一个 Resource。之后可以在创建 Catalog 时使用这个 Resource。
-       
-       一个 Resource 可以被多个 Catalog 使用,以复用其中的配置。
-
 ## 连接示例
 
 ### 连接 Hive
diff --git 
a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
 
b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
index 30ad218161..ad4e6e0c53 100644
--- 
a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
+++ 
b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-CATALOG.md
@@ -28,12 +28,8 @@ under the License.
 
 ### Name
 
-<version since="1.2">
-
 CREATE CATALOG
 
-</version>
-
 ### Description
 
 该语句用于创建外部数据目录(catalog)
@@ -42,61 +38,18 @@ CREATE CATALOG
 
 ```sql
 CREATE CATALOG [IF NOT EXISTS] catalog_name
-       [WITH RESOURCE resource_name]
-       [PROPERTIES ("key"="value", ...)];
+       PROPERTIES ("key"="value", ...);
 ```
 
-`RESOURCE` 可以通过 [CREATE 
RESOURCE](../../../sql-reference/Data-Definition-Statements/Create/CREATE-RESOURCE.md)
 创建,目前支持三种 Resource,分别连接三种外部数据源:
-
 * hms:Hive MetaStore
 * es:Elasticsearch
 * jdbc:数据库访问的标准接口(JDBC), 当前支持 MySQL 和 PostgreSQL
 
-### 创建 catalog
-
-**通过 resource 创建 catalog**
-
-`1.2.0` 以后的版本推荐通过 resource 创建 catalog,多个使用场景可以复用相同的 resource。
-```sql
-CREATE RESOURCE catalog_resource PROPERTIES (
-    'type'='hms|es|jdbc',
-    ...
-);
-
-// 在 PROERPTIES 中指定的配置,将会覆盖 Resource 中的配置。
-CREATE CATALOG catalog_name WITH RESOURCE catalog_resource PROPERTIES(
-    'key' = 'value'
-)
-```
-
-**通过 properties 创建 catalog**
-
-`1.2.0` 版本通过 properties 创建 catalog。
-```sql
-CREATE CATALOG catalog_name PROPERTIES (
-    'type'='hms|es|jdbc',
-    ...
-);
-```
-
 ### Example
 
 1. 新建数据目录 hive
 
        ```sql
-       -- 1.2.0+ 版本
-       CREATE RESOURCE hms_resource PROPERTIES (
-               'type'='hms',
-               'hive.metastore.uris' = 'thrift://127.0.0.1:7004',
-               'dfs.nameservices'='HANN',
-               'dfs.ha.namenodes.HANN'='nn1,nn2',
-               'dfs.namenode.rpc-address.HANN.nn1'='nn1_host:rpc_port',
-               'dfs.namenode.rpc-address.HANN.nn2'='nn2_host:rpc_port',
-               
'dfs.client.failover.proxy.provider.HANN'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
-       );
-       CREATE CATALOG hive WITH RESOURCE hms_resource;
-
-       -- 1.2.0 版本
        CREATE CATALOG hive PROPERTIES (
                'type'='hms',
                'hive.metastore.uris' = 'thrift://127.0.0.1:7004',
@@ -111,14 +64,6 @@ CREATE CATALOG catalog_name PROPERTIES (
 2. 新建数据目录 es
 
        ```sql
-       -- 1.2.0+ 版本
-       CREATE RESOURCE es_resource PROPERTIES (
-               "type"="es",
-               "hosts"="http://127.0.0.1:9200";
-       );
-       CREATE CATALOG es WITH RESOURCE es_resource;
-
-       -- 1.2.0 版本
        CREATE CATALOG es PROPERTIES (
                "type"="es",
                "hosts"="http://127.0.0.1:9200";
@@ -126,23 +71,11 @@ CREATE CATALOG catalog_name PROPERTIES (
        ```
 
 3. 新建数据目录 jdbc
+
        **mysql**
 
        ```sql
-       -- 1.2.0+ 版本
-       -- 方式一 
-       CREATE RESOURCE mysql_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="root",
-               "password"="123456",
-               "jdbc_url" = 
"jdbc:mysql://127.0.0.1:3316/doris_test?useSSL=false",
-               "driver_url" = 
"https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar";,
-               "driver_class" = "com.mysql.cj.jdbc.Driver"
-       );
-       CREATE CATALOG jdbc WITH RESOURCE mysql_resource;
-
-       -- 方式二
-       CREATE CATALOG jdbc PROPERTIES (
+       CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="root",
                "password"="123456",
@@ -150,33 +83,11 @@ CREATE CATALOG catalog_name PROPERTIES (
                "driver_url" = 
"https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar";,
                "driver_class" = "com.mysql.cj.jdbc.Driver"
        );
-       
-       -- 1.2.0 版本
-       CREATE CATALOG jdbc PROPERTIES (
-               "type"="jdbc",
-               "jdbc.user"="root",
-               "jdbc.password"="123456",
-               "jdbc.jdbc_url" = 
"jdbc:mysql://127.0.0.1:3316/doris_test?useSSL=false",
-               "jdbc.driver_url" = 
"https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/jdbc_driver/mysql-connector-java-8.0.25.jar";,
-               "jdbc.driver_class" = "com.mysql.cj.jdbc.Driver"
-       );
        ```
 
        **postgresql**
 
        ```sql
-       -- 方式一
-       CREATE RESOURCE pg_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="postgres",
-               "password"="123456",
-               "jdbc_url" = "jdbc:postgresql://127.0.0.1:5432/demo",
-               "driver_url" = "file:///path/to/postgresql-42.5.1.jar",
-               "driver_class" = "org.postgresql.Driver"
-       );
-       CREATE CATALOG jdbc WITH RESOURCE pg_resource;
-
-       -- 方式二
        CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="postgres",
@@ -187,45 +98,21 @@ CREATE CATALOG catalog_name PROPERTIES (
        );
        ```
  
-   **clickhouse**
-
-   ```sql
-   -- 方式一
-   CREATE RESOURCE clickhouse_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="default",
-       "password"="123456",
-       "jdbc_url" = "jdbc:clickhouse://127.0.0.1:8123/demo",
-       "driver_url" = "file:///path/to/clickhouse-jdbc-0.3.2-patch11-all.jar",
-       "driver_class" = "com.clickhouse.jdbc.ClickHouseDriver"
-   )
-   CREATE CATALOG jdbc WITH RESOURCE clickhouse_resource;
-   
-   -- 方式一
-   CREATE CATALOG jdbc PROPERTIES (
-       "type"="jdbc",
-       "user"="default",
-       "password"="123456",
-       "jdbc_url" = "jdbc:clickhouse://127.0.0.1:8123/demo",
-       "driver_url" = "file:///path/to/clickhouse-jdbc-0.3.2-patch11-all.jar",
-       "driver_class" = "com.clickhouse.jdbc.ClickHouseDriver"
-   )
-   ```
+    **clickhouse**
+
+    ```sql
+    CREATE CATALOG jdbc PROPERTIES (
+        "type"="jdbc",
+        "user"="default",
+        "password"="123456",
+        "jdbc_url" = "jdbc:clickhouse://127.0.0.1:8123/demo",
+        "driver_url" = "file:///path/to/clickhouse-jdbc-0.3.2-patch11-all.jar",
+        "driver_class" = "com.clickhouse.jdbc.ClickHouseDriver"
+    )
+    ```
 
        **oracle**
        ```sql
-       -- 方式一
-       CREATE RESOURCE oracle_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="doris",
-               "password"="123456",
-               "jdbc_url" = "jdbc:oracle:thin:@127.0.0.1:1521:helowin",
-               "driver_url" = "file:///path/to/ojdbc6.jar",
-               "driver_class" = "oracle.jdbc.driver.OracleDriver"
-       );
-       CREATE CATALOG jdbc WITH RESOURCE oracle_resource;
-
-       -- 方式二
        CREATE CATALOG jdbc PROPERTIES (
                "type"="jdbc",
                "user"="doris",
@@ -238,18 +125,6 @@ CREATE CATALOG catalog_name PROPERTIES (
 
        **SQLServer**
        ```sql
-       -- 方式一
-       CREATE RESOURCE sqlserver_resource PROPERTIES (
-               "type"="jdbc",
-               "user"="SA",
-               "password"="Doris123456",
-               "jdbc_url" = 
"jdbc:sqlserver://localhost:1433;DataBaseName=doris_test",
-               "driver_url" = "file:///path/to/mssql-jdbc-11.2.3.jre8.jar",
-               "driver_class" = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
-       );
-       CREATE CATALOG sqlserver_catalog WITH RESOURCE sqlserver_resource;
-
-       -- 方式二
        CREATE CATALOG sqlserver_catalog PROPERTIES (
                "type"="jdbc",
                "user"="SA",
@@ -260,20 +135,8 @@ CREATE CATALOG catalog_name PROPERTIES (
        );      
        ```
 
-   **SAP HANA**
-   ```sql
-   -- 方式一
-   CREATE RESOURCE saphana_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="SYSTEM",
-       "password"="SAPHANA",
-       "jdbc_url" = "jdbc:sap://localhost:31515/TEST",
-       "driver_url" = "file:///path/to/ngdbc.jar",
-       "driver_class" = "com.sap.db.jdbc.Driver"
-   );
-   CREATE CATALOG saphana_catalog WITH RESOURCE saphana_resource;
-
-   -- 方式二
+    **SAP HANA**
+    ```sql
        CREATE CATALOG saphana_catalog PROPERTIES (
        "type"="jdbc",
        "user"="SYSTEM",
@@ -282,22 +145,10 @@ CREATE CATALOG catalog_name PROPERTIES (
        "driver_url" = "file:///path/to/ngdbc.jar",
        "driver_class" = "com.sap.db.jdbc.Driver"
        );
-   ```
-
-   **Trino**
-   ```sql
-   -- 方式一
-       CREATE EXTERNAL RESOURCE trino_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="hadoop",
-       "password"="",
-       "jdbc_url" = "jdbc:trino://localhost:8080/hive",
-       "driver_url" = "file:///path/to/trino-jdbc-389.jar",
-       "driver_class" = "io.trino.jdbc.TrinoDriver"
-       );
-   CREATE CATALOG trino_catalog WITH RESOURCE trino_resource;
+    ```
 
-   -- 方式二
+    **Trino**
+    ```sql
        CREATE CATALOG trino_catalog PROPERTIES (
        "type"="jdbc",
        "user"="hadoop",
@@ -306,23 +157,10 @@ CREATE CATALOG catalog_name PROPERTIES (
        "driver_url" = "file:///path/to/trino-jdbc-389.jar",
        "driver_class" = "io.trino.jdbc.TrinoDriver"
        );
-   ```
-
-   **OceanBase**
-   ```sql
-   -- 方式一
-       CREATE EXTERNAL RESOURCE oceanbase_resource PROPERTIES (
-       "type"="jdbc",
-       "user"="root",
-       "password"="",
-       "jdbc_url" = "jdbc:oceanbase://localhost:2881/demo",
-       "driver_url" = "file:///path/to/oceanbase-client-2.4.2.jar",
-       "driver_class" = "com.oceanbase.jdbc.Driver",
-          "oceanbase_mode" = "mysql" or "oracle"
-       );
-   CREATE CATALOG oceanbase_catalog WITH RESOURCE oceanbase_resource;
+    ```
 
-   -- 方式二
+    **OceanBase**
+    ```sql
        CREATE CATALOG oceanbase_catalog PROPERTIES (
        "type"="jdbc",
        "user"="root",
@@ -332,7 +170,7 @@ CREATE CATALOG catalog_name PROPERTIES (
        "driver_class" = "com.oceanbase.jdbc.Driver",
           "oceanbase_mode" = "mysql" or "oracle"
        );
-   ```
+    ```
 
 ### Keywords
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
