morningman commented on code in PR #54114:
URL: https://github.com/apache/doris/pull/54114#discussion_r2246570715


##########
fe/fe-core/src/main/java/org/apache/doris/datasource/paimon/PaimonExternalCatalogFactory.java:
##########
@@ -36,12 +35,9 @@ public static ExternalCatalog createCatalog(long catalogId, String name, String
         metastoreType = metastoreType.toLowerCase();
         switch (metastoreType) {
             case PaimonExternalCatalog.PAIMON_HMS:
-                return new PaimonHMSExternalCatalog(catalogId, name, resource, props, comment);
             case PaimonExternalCatalog.PAIMON_FILESYSTEM:
-                return new PaimonFileExternalCatalog(catalogId, name, resource, props, comment);
             case PaimonExternalCatalog.PAIMON_DLF:
-                props.put(HMSProperties.HIVE_METASTORE_TYPE, HMSProperties.DLF_TYPE);
-                return new PaimonDLFExternalCatalog(catalogId, name, resource, props, comment);
+                return  new PaimonExternalCatalog(catalogId, name, resource, props, comment);

Review Comment:
   ```suggestion
                   return new PaimonExternalCatalog(catalogId, name, resource, props, comment);
   ```



##########
fe/fe-core/src/main/java/org/apache/doris/datasource/paimon/PaimonFileExternalCatalog.java:
##########
@@ -43,51 +36,7 @@ protected void initLocalObjectsImpl() {
         super.initLocalObjectsImpl();

Review Comment:
   No need to override this `initLocalObjectsImpl()` method; everything is already done in the parent class.
   Also, do we still need these subclasses like `PaimonFileExternalCatalog`? There is no extra info in these subclasses.



##########
fe/fe-core/src/main/java/org/apache/doris/datasource/paimon/PaimonExternalCatalog.java:
##########
@@ -133,120 +125,87 @@ public List<String> listTableNames(SessionContext ctx, String dbName) {
                 }
                 return tableNames;
             });
-        } catch (IOException e) {
+        } catch (Exception e) {
             throw new RuntimeException("Failed to list table names, catalog name: " + getName(), e);
         }
     }
 
     public org.apache.paimon.table.Table getPaimonTable(NameMapping nameMapping) {
         makeSureInitialized();
         try {
-            return hadoopAuthenticator.doAs(() -> catalog.getTable(
-                    Identifier.create(nameMapping.getRemoteDbName(), nameMapping.getRemoteTblName())));
+            return executionAuthenticator.execute(() -> catalog.getTable(Identifier.create(nameMapping
+                    .getRemoteDbName(), nameMapping.getRemoteTblName())));
         } catch (Exception e) {
-            throw new RuntimeException("Failed to get Paimon table:" + getName() + "."
-                    + nameMapping.getLocalDbName() + "." + nameMapping.getLocalTblName() + ", because "
-                    + e.getMessage(), e);
+            throw new RuntimeException("Failed to get Paimon table:" + getName() + "." + nameMapping.getLocalDbName()
+                    + "." + nameMapping.getLocalTblName() + ", because " + ExceptionUtils.getRootCauseMessage(e), e);
         }
     }
 
     public List<Partition> getPaimonPartitions(NameMapping nameMapping) {
         makeSureInitialized();
         try {
-            return hadoopAuthenticator.doAs(() -> {
+            return executionAuthenticator.execute(() -> {
                 List<Partition> partitions = new ArrayList<>();
                 try {
-                    partitions = catalog.listPartitions(
-                            Identifier.create(nameMapping.getRemoteDbName(), nameMapping.getRemoteTblName()));
+                    partitions = catalog.listPartitions(Identifier.create(nameMapping.getRemoteDbName(),
+                            nameMapping.getRemoteTblName()));
                 } catch (Catalog.TableNotExistException e) {
                     LOG.warn("TableNotExistException", e);
                 }
                 return partitions;
             });
-        } catch (IOException e) {
+        } catch (Exception e) {
             throw new RuntimeException("Failed to get Paimon table partitions:" + getName() + "."
-                    + nameMapping.getRemoteDbName() + "." + nameMapping.getRemoteTblName()
-                    + ", because " + e.getMessage(), e);
+                    + nameMapping.getRemoteDbName() + "." + nameMapping.getRemoteTblName() + ", because "
+                    + ExceptionUtils.getRootCauseMessage(e), e);
         }
     }
 
     public org.apache.paimon.table.Table getPaimonSystemTable(NameMapping nameMapping, String queryType) {
         return getPaimonSystemTable(nameMapping, null, queryType);
     }
 
-    public org.apache.paimon.table.Table getPaimonSystemTable(NameMapping nameMapping, String branch,
-            String queryType) {
+    public org.apache.paimon.table.Table getPaimonSystemTable(NameMapping nameMapping,
+                                                              String branch, String queryType) {
         makeSureInitialized();
         try {
-            return hadoopAuthenticator.doAs(() -> catalog.getTable(new Identifier(nameMapping.getRemoteDbName(),
+            return executionAuthenticator.execute(() -> catalog.getTable(new Identifier(nameMapping.getRemoteDbName(),
                     nameMapping.getRemoteTblName(), branch, queryType)));
         } catch (Exception e) {
             throw new RuntimeException("Failed to get Paimon system table:" + getName() + "."
                     + nameMapping.getRemoteDbName() + "." + nameMapping.getRemoteTblName() + "$" + queryType
-                    + ", because " + e.getMessage(), e);
+                    + ", because " + ExceptionUtils.getRootCauseMessage(e), e);
         }
     }
 
-    protected String getPaimonCatalogType(String catalogType) {
-        if (PAIMON_HMS.equalsIgnoreCase(catalogType)) {
-            return PaimonProperties.PAIMON_HMS_CATALOG;
-        } else {
-            return PaimonProperties.PAIMON_FILESYSTEM_CATALOG;
-        }
-    }
 
     protected Catalog createCatalog() {
         try {
-            return hadoopAuthenticator.doAs(() -> {
-                Options options = new Options();
-                Map<String, String> paimonOptionsMap = getPaimonOptionsMap();
-                for (Map.Entry<String, String> kv : paimonOptionsMap.entrySet()) {
-                    options.set(kv.getKey(), kv.getValue());
-                }
-                CatalogContext context = CatalogContext.create(options, getConfiguration());
-                return createCatalogImpl(context);
-            });
-        } catch (IOException e) {
-            throw new RuntimeException("Failed to create catalog, catalog name: " + getName(), e);
+            return paimonProperties.initializeCatalog(getName(), new ArrayList<>(catalogProperty
+                    .getStoragePropertiesMap().values()));
+        } catch (Exception e) {
+            throw new RuntimeException("Failed to create catalog, catalog name: " + getName() + ", exception: "
+                    + ExceptionUtils.getRootCauseMessage(e), e);
         }
     }
 
-    protected Catalog createCatalogImpl(CatalogContext context) {
-        return CatalogFactory.createCatalog(context);
-    }
-
     public Map<String, String> getPaimonOptionsMap() {
-        Map<String, String> properties = catalogProperty.getHadoopProperties();
-        Map<String, String> options = Maps.newHashMap();
-        options.put(PaimonProperties.WAREHOUSE, properties.get(PaimonProperties.WAREHOUSE));
-        setPaimonCatalogOptions(properties, options);
-        setPaimonExtraOptions(properties, options);
-        return options;
-    }
-
-    protected abstract void setPaimonCatalogOptions(Map<String, String> properties, Map<String, String> options);
-
-    protected void setPaimonExtraOptions(Map<String, String> properties, Map<String, String> options) {
-        for (Map.Entry<String, String> kv : properties.entrySet()) {
-            if (kv.getKey().startsWith(PaimonProperties.PAIMON_PREFIX)) {
-                options.put(kv.getKey().substring(PaimonProperties.PAIMON_PREFIX.length()), kv.getValue());
-            }
-        }
-
-        // hive version.
-        // This property is used for both FE and BE, so it has no "paimon." prefix.
-        // We need to handle it separately.
-        if (properties.containsKey(HMSProperties.HIVE_VERSION)) {
-            options.put(HMSProperties.HIVE_VERSION, properties.get(HMSProperties.HIVE_VERSION));
-        }
+        makeSureInitialized();
+        Map<String, String> optionsMap = Maps.newHashMap();

Review Comment:
   I think this `optionsMap` can be built just once, e.g. in `initLocalObjectsImpl`?
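If the option values only depend on catalog properties that are fixed at init time, the build-once pattern could look roughly like this (a minimal sketch with hypothetical class and field names, not the actual Doris code):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch only: build the options map once at init time instead of on every call.
// PaimonCatalogSketch and its members are illustrative stand-ins, not Doris classes.
class PaimonCatalogSketch {
    private Map<String, String> paimonOptionsMap;

    // In the real code this would run once from initLocalObjectsImpl().
    void initLocalObjectsImpl() {
        Map<String, String> options = new HashMap<>();
        options.put("warehouse", "/tmp/paimon-warehouse"); // placeholder value
        paimonOptionsMap = Collections.unmodifiableMap(options);
    }

    Map<String, String> getPaimonOptionsMap() {
        return paimonOptionsMap; // no per-call rebuild
    }

    public static void main(String[] args) {
        PaimonCatalogSketch c = new PaimonCatalogSketch();
        c.initLocalObjectsImpl();
        // Same immutable instance is returned on every call.
        System.out.println(c.getPaimonOptionsMap() == c.getPaimonOptionsMap()); // prints "true"
    }
}
```

Wrapping the map in `unmodifiableMap` also guards against callers mutating the shared instance.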



##########
fe/fe-core/src/main/java/org/apache/doris/datasource/paimon/source/PaimonScanNode.java:
##########
@@ -422,13 +422,13 @@ public TableIf getTargetTable() {
 
     @Override
     public Map<String, String> getLocationProperties() throws MetaNotFoundException, DdlException {
-        HashMap<String, String> map = new HashMap<>(source.getCatalog().getProperties());
-        source.getCatalog().getCatalogProperty().getHadoopProperties().forEach((k, v) -> {
-            if (!map.containsKey(k)) {
-                map.put(k, v);
-            }
-        });
-        return map;
+        return source.getCatalog().getCatalogProperty()

Review Comment:
   Can this `LocationProperties` map be initialized only once?
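For reference, one way to compute the map once and reuse it is a lazy, thread-safe memoization like the sketch below (class and method names are hypothetical, not the real `PaimonScanNode` API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch only: compute the location properties lazily and cache the result.
// LocationPropsSketch and loadFromCatalog are illustrative stand-ins.
class LocationPropsSketch {
    private volatile Map<String, String> cachedLocationProperties;
    private int loadCount = 0; // only here to demonstrate the single computation

    // Stand-in for the merge of catalog + storage properties done in the PR.
    private Map<String, String> loadFromCatalog() {
        loadCount++;
        Map<String, String> m = new HashMap<>();
        m.put("fs.defaultFS", "hdfs://placeholder:8020"); // placeholder entry
        return Collections.unmodifiableMap(m);
    }

    Map<String, String> getLocationProperties() {
        Map<String, String> local = cachedLocationProperties;
        if (local == null) {
            synchronized (this) {
                if (cachedLocationProperties == null) {
                    cachedLocationProperties = loadFromCatalog();
                }
                local = cachedLocationProperties;
            }
        }
        return local;
    }

    int getLoadCount() {
        return loadCount;
    }

    public static void main(String[] args) {
        LocationPropsSketch n = new LocationPropsSketch();
        n.getLocationProperties();
        n.getLocationProperties();
        System.out.println(n.getLoadCount()); // prints "1": loaded only once
    }
}
```

This only holds if the catalog properties are immutable after init; if they can change at runtime (e.g. via ALTER CATALOG), the cache would need invalidation.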



##########
fe/fe-core/src/main/java/org/apache/doris/datasource/property/metastore/AbstractPaimonProperties.java:
##########
@@ -0,0 +1,160 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.datasource.property.metastore;
+
+import org.apache.doris.common.security.authentication.ExecutionAuthenticator;
+import org.apache.doris.datasource.property.ConnectorProperty;
+import org.apache.doris.datasource.property.storage.S3Properties;
+import org.apache.doris.datasource.property.storage.StorageProperties;
+
+import lombok.Getter;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.paimon.catalog.Catalog;
+import org.apache.paimon.options.CatalogOptions;
+import org.apache.paimon.options.Options;
+
+import java.util.List;
+import java.util.Map;
+
+public abstract class AbstractPaimonProperties extends MetastoreProperties {
+    @ConnectorProperty(
+            names = {"warehouse"},
+            description = "The location of the Paimon warehouse. This is where the tables will be stored."
+    )
+    protected String warehouse;
+

Review Comment:
   Watch out for the redundant empty line.



##########
fe/fe-core/src/main/java/org/apache/doris/datasource/paimon/source/PaimonScanNode.java:
##########
@@ -422,13 +422,13 @@ public TableIf getTargetTable() {
 
     @Override
     public Map<String, String> getLocationProperties() throws MetaNotFoundException, DdlException {

Review Comment:
   ```suggestion
       protected Map<String, String> getLocationProperties() throws MetaNotFoundException, DdlException {
   ```



##########
fe/fe-core/src/main/java/org/apache/doris/datasource/property/storage/COSProperties.java:
##########
@@ -130,14 +129,17 @@ public AwsCredentialsProvider getAwsCredentialsProvider() {
 
     @Override
     public void initializeHadoopStorageConfig() {
-        hadoopStorageConfig = new Configuration();
+        super.initializeHadoopStorageConfig();

Review Comment:
   `cos` and `cosn` are two different storage schemes: `cos` uses `s3`, while `cosn` uses the Hadoop FS.
   But here you put both `s3` and `cos` in the configuration?
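To illustrate the split, a hedged sketch of routing the two schemes to different key families; the `fs.cosn.*` / `fs.s3a.*` key names follow the hadoop-cos and S3A conventions, while the method itself and the endpoint pattern are illustrative assumptions, not the actual `COSProperties` code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: cos (S3-compatible endpoint) and cosn (native Hadoop-COS connector)
// should populate different config key families. Method name and structure are
// illustrative assumptions, not the actual COSProperties implementation.
class CosConfigSketch {
    static Map<String, String> buildConfig(String scheme, String ak, String sk, String region) {
        Map<String, String> conf = new HashMap<>();
        if ("cosn".equals(scheme)) {
            // cosn goes through the hadoop-cos connector, which reads fs.cosn.* keys.
            conf.put("fs.cosn.userinfo.secretId", ak);
            conf.put("fs.cosn.userinfo.secretKey", sk);
            conf.put("fs.cosn.bucket.region", region);
        } else {
            // Plain cos goes through the S3-compatible endpoint via S3A keys.
            conf.put("fs.s3a.access.key", ak);
            conf.put("fs.s3a.secret.key", sk);
            conf.put("fs.s3a.endpoint", "cos." + region + ".myqcloud.com"); // assumed endpoint pattern
        }
        return conf;
    }

    public static void main(String[] args) {
        System.out.println(buildConfig("cosn", "AK", "SK", "ap-guangzhou").keySet());
        System.out.println(buildConfig("cos", "AK", "SK", "ap-guangzhou").keySet());
    }
}
```

Keeping the two key families separate avoids leaking S3A settings into a cosn-backed filesystem and vice versa.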



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

