jerryshao commented on code in PR #5244:
URL: https://github.com/apache/gravitino/pull/5244#discussion_r1816220170


##########
catalogs/catalog-common/src/main/java/org/apache/gravitino/storage/GCSProperties.java:
##########
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.storage;
+
+public class GCSProperties {
+
+  // The path of service account JSON file of Google Cloud Storage.
+  public static final String GCS_SERVICE_ACCOUNT_JSON_PATH = "gcs-service-account-file";
+
+  public GCSProperties() {}

Review Comment:
   Please change this to `private`.
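
   For illustration, a minimal sketch of the suggested change (a constants-only holder whose constructor is private so it cannot be instantiated):

```java
// Sketch of the suggested fix: GCSProperties is a constants holder,
// so its constructor should be private to prevent instantiation.
public class GCSProperties {

  // The path of the service account JSON file of Google Cloud Storage.
  public static final String GCS_SERVICE_ACCOUNT_JSON_PATH = "gcs-service-account-file";

  // Private: no instances of this class are needed.
  private GCSProperties() {}
}
```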



##########
catalogs/catalog-common/src/main/java/org/apache/gravitino/storage/S3Properties.java:
##########
@@ -31,5 +31,8 @@ public class S3Properties {
   // The region of the S3 service.
   public static final String GRAVITINO_S3_REGION = "s3-region";
 
+  // The S3 credentials provider class name.
+  public static final String GRAVITINO_S3_CREDS_PROVIDER = "s3-creds-provider";
+
   private S3Properties() {}
 }

Review Comment:
   > The key s3-creds-provider in Gravitino server and use s3.creds.providers for GVFS (replace '-' with '.', other keys are similar), is that OK?
   
   I'm OK with this.
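
   To make the convention concrete, a hypothetical helper (the class and method names are illustrative, not from the PR) that derives the GVFS form of a server-side key by replacing '-' with '.':

```java
// Hypothetical helper illustrating the agreed naming convention:
// Gravitino server keys use '-', GVFS keys use '.' (other keys are similar).
public final class KeyConvention {

  private KeyConvention() {}

  // e.g. "s3-creds-provider" -> "s3.creds.provider"
  public static String toGvfsKey(String serverKey) {
    return serverKey.replace('-', '.');
  }
}
```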



##########
catalogs/catalog-hadoop/src/main/java/org/apache/gravitino/catalog/hadoop/config/HadoopGCSFileSystemConfig.java:
##########
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.catalog.hadoop.config;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.Map;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.gravitino.Config;
+import org.apache.gravitino.config.ConfigBuilder;
+import org.apache.gravitino.config.ConfigConstants;
+import org.apache.gravitino.config.ConfigEntry;
+import org.apache.gravitino.connector.PropertyEntry;
+import org.apache.gravitino.storage.GCSProperties;
+
+public class HadoopGCSFileSystemConfig extends Config {
+
+  public static final Map<String, String> GRAVITINO_KEY_TO_GCS_HADOOP_KEY =
+      ImmutableMap.of(
+          GCSProperties.GCS_SERVICE_ACCOUNT_JSON_PATH, "fs.gs.auth.service.account.json.keyfile");
+
+  public HadoopGCSFileSystemConfig(Map<String, String> properties) {
+    super(false);
+    loadFromMap(properties, k -> true);
+  }
+
+  public static final ConfigEntry<String> HADOOP_OSS_ENDPOINT_ENTRY =
+      new ConfigBuilder(GCSProperties.GCS_SERVICE_ACCOUNT_JSON_PATH)
+          .doc("The path of service account json file of Google Cloud Storage")
+          .version(ConfigConstants.VERSION_0_7_0)
+          .stringConf()
+          .checkValue(StringUtils::isNotBlank, ConfigConstants.NOT_BLANK_ERROR_MSG)
+          .create();
+
+  public String getServiceAccountJsonPath() {
+    return get(HADOOP_OSS_ENDPOINT_ENTRY);
+  }
+
+  public static final Map<String, PropertyEntry<?>> GCS_FILESYSTEM_PROPERTY_ENTRIES =

Review Comment:
   Also here.



##########
catalogs/catalog-hadoop/src/main/java/org/apache/gravitino/catalog/hadoop/HadoopCatalogOperations.java:
##########
@@ -771,6 +774,23 @@ FileSystem getFileSystem(Path path, Map<String, String> config) throws IOException
              scheme, path, fileSystemProvidersMap.keySet(), fileSystemProvidersMap.values()));
    }
 
-    return provider.getFileSystem(path, config);
+    return provider.getFileSystem(path, convertGravitinoToHadoopConf(config));
+  }
+
+  private Map<String, String> convertGravitinoToHadoopConf(Map<String, String> gravitinoConf) {
+    Map<String, String> hadoopConf = Maps.newHashMap();
+
+    gravitinoConf.forEach(
+        (k, v) -> {
+          if (HadoopS3FileSystemConfig.GRAVITINO_KEY_TO_S3_HADOOP_KEY.containsKey(k)) {
+            hadoopConf.put(HadoopS3FileSystemConfig.GRAVITINO_KEY_TO_S3_HADOOP_KEY.get(k), v);
+          } else if (HadoopGCSFileSystemConfig.GRAVITINO_KEY_TO_GCS_HADOOP_KEY.containsKey(k)) {
+            hadoopConf.put(HadoopGCSFileSystemConfig.GRAVITINO_KEY_TO_GCS_HADOOP_KEY.get(k), v);
+          } else
+            hadoopConf.put(
+                HadoopOSSFileSystemConfig.GRAVITINO_KEY_TO_OSS_HADOOP_KEY.getOrDefault(k, k), v);
+        });
+
+    return hadoopConf;

Review Comment:
   I think we can split this up and do it in each provider; we don't have to expose the `if...else if...else` chain here.
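
   For illustration, one way to read this suggestion (the interface and names below are hypothetical, not the actual Gravitino provider API): let each provider own its Gravitino-to-Hadoop key mapping and apply it itself, so no branching is needed at the call site:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each provider carries its own key mapping, so the
// catalog can delegate conversion instead of branching on storage type.
interface KeyMappingProvider {
  // Gravitino property key -> Hadoop configuration key for this provider.
  Map<String, String> gravitinoToHadoopKeys();

  // Converts a Gravitino config map, passing unknown keys through unchanged.
  default Map<String, String> toHadoopConf(Map<String, String> gravitinoConf) {
    Map<String, String> hadoopConf = new HashMap<>();
    Map<String, String> mapping = gravitinoToHadoopKeys();
    gravitinoConf.forEach((k, v) -> hadoopConf.put(mapping.getOrDefault(k, k), v));
    return hadoopConf;
  }
}

class GcsKeyMappingProvider implements KeyMappingProvider {
  @Override
  public Map<String, String> gravitinoToHadoopKeys() {
    return Map.of("gcs-service-account-file", "fs.gs.auth.service.account.json.keyfile");
  }
}
```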



##########
catalogs/catalog-hadoop/src/main/java/org/apache/gravitino/catalog/hadoop/config/HadoopGCSFileSystemConfig.java:
##########
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.catalog.hadoop.config;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.Map;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.gravitino.Config;
+import org.apache.gravitino.config.ConfigBuilder;
+import org.apache.gravitino.config.ConfigConstants;
+import org.apache.gravitino.config.ConfigEntry;
+import org.apache.gravitino.connector.PropertyEntry;
+import org.apache.gravitino.storage.GCSProperties;
+
+public class HadoopGCSFileSystemConfig extends Config {
+
+  public static final Map<String, String> GRAVITINO_KEY_TO_GCS_HADOOP_KEY =
+      ImmutableMap.of(
+          GCSProperties.GCS_SERVICE_ACCOUNT_JSON_PATH, "fs.gs.auth.service.account.json.keyfile");
+
+  public HadoopGCSFileSystemConfig(Map<String, String> properties) {
+    super(false);
+    loadFromMap(properties, k -> true);
+  }
+
+  public static final ConfigEntry<String> HADOOP_OSS_ENDPOINT_ENTRY =

Review Comment:
   Do we need to expose this as public?
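
   If it doesn't need to be part of the public surface, one option (sketched below with illustrative names, not the real `ConfigEntry` types) is to keep the entry private and expose only the getter:

```java
// Illustrative sketch only: the config key stays private and callers read
// the value through an accessor, so the entry is an implementation detail.
public class GcsFileSystemConfigSketch {

  // Private: not part of the class's public API.
  private static final String SERVICE_ACCOUNT_JSON_PATH_KEY = "gcs-service-account-file";

  private final java.util.Map<String, String> properties;

  public GcsFileSystemConfigSketch(java.util.Map<String, String> properties) {
    this.properties = properties;
  }

  public String getServiceAccountJsonPath() {
    return properties.get(SERVICE_ACCOUNT_JSON_PATH_KEY);
  }
}
```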



##########
catalogs/catalog-hadoop/src/main/java/org/apache/gravitino/catalog/hadoop/config/HadoopGCSFileSystemConfig.java:
##########
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.catalog.hadoop.config;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.Map;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.gravitino.Config;
+import org.apache.gravitino.config.ConfigBuilder;
+import org.apache.gravitino.config.ConfigConstants;
+import org.apache.gravitino.config.ConfigEntry;
+import org.apache.gravitino.connector.PropertyEntry;
+import org.apache.gravitino.storage.GCSProperties;
+
+public class HadoopGCSFileSystemConfig extends Config {

Review Comment:
   I think each configuration belongs to its provider; we can move it into the provider rather than managing them in a new place.



##########
catalogs/catalog-hadoop/src/main/java/org/apache/gravitino/catalog/hadoop/config/HadoopGCSFileSystemConfig.java:
##########
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.gravitino.catalog.hadoop.config;
+
+import com.google.common.collect.ImmutableMap;
+import java.util.Map;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.gravitino.Config;
+import org.apache.gravitino.config.ConfigBuilder;
+import org.apache.gravitino.config.ConfigConstants;
+import org.apache.gravitino.config.ConfigEntry;
+import org.apache.gravitino.connector.PropertyEntry;
+import org.apache.gravitino.storage.GCSProperties;
+
+public class HadoopGCSFileSystemConfig extends Config {
+
+  public static final Map<String, String> GRAVITINO_KEY_TO_GCS_HADOOP_KEY =
+      ImmutableMap.of(
+          GCSProperties.GCS_SERVICE_ACCOUNT_JSON_PATH, "fs.gs.auth.service.account.json.keyfile");
+
+  public HadoopGCSFileSystemConfig(Map<String, String> properties) {
+    super(false);
+    loadFromMap(properties, k -> true);
+  }
+
+  public static final ConfigEntry<String> HADOOP_OSS_ENDPOINT_ENTRY =
+      new ConfigBuilder(GCSProperties.GCS_SERVICE_ACCOUNT_JSON_PATH)
+          .doc("The path of service account json file of Google Cloud Storage")
+          .version(ConfigConstants.VERSION_0_7_0)
+          .stringConf()
+          .checkValue(StringUtils::isNotBlank, ConfigConstants.NOT_BLANK_ERROR_MSG)
+          .create();
+
+  public String getServiceAccountJsonPath() {
+    return get(HADOOP_OSS_ENDPOINT_ENTRY);
+  }
+
+  public static final Map<String, PropertyEntry<?>> GCS_FILESYSTEM_PROPERTY_ENTRIES =

Review Comment:
   We should also define the catalog properties in a centralized place.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
