Copilot commented on code in PR #10425:
URL: https://github.com/apache/gravitino/pull/10425#discussion_r2928853833


##########
flink-connector/flink/src/main/java/org/apache/gravitino/flink/connector/CatalogPropertiesConverter.java:
##########
@@ -81,7 +82,11 @@ default Map<String, String> toFlinkCatalogProperties(Map<String, String> graviti
          } else {
            String convertedKey = transformPropertyToFlinkCatalog(flinkConfigKey);
            if (convertedKey != null) {
-              allProperties.put(convertedKey, value);
+              if (convertedKey.equals(PaimonConstants.OSS_FILESYSTEM_IMPLEMENTATION)) {
+                allProperties.put("hadoop." + convertedKey, value);
+              } else {
+                allProperties.put(convertedKey, value);
+              }

Review Comment:
   `CatalogPropertiesConverter` is a shared Flink-connector interface, but it 
now hard-codes Paimon-specific behavior by importing `PaimonConstants` and 
special-casing `fs.oss.impl`. This couples all catalog converters to Paimon and 
makes future maintenance harder. Move the `fs.oss.impl` -> `hadoop.fs.oss.impl` 
rewrite into the Paimon-specific converter (e.g., 
`flink.connector.paimon.PaimonPropertiesConverter`) or into that catalog’s 
mapping logic, and consider using the existing `PropertyUtils.HADOOP_PREFIX` 
constant instead of the string literal.
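   A minimal, self-contained sketch of the suggested refactor (the constant values and the `transformPropertyToFlinkCatalog` hook name here are stand-ins based on this review, not the actual Gravitino classes):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: keep the shared CatalogPropertiesConverter generic and let only the
// Paimon-specific converter know that fs.oss.impl needs the "hadoop." prefix.
// All names below are illustrative stand-ins for the real Gravitino code.
public class PaimonOssRewriteSketch {
  static final String HADOOP_PREFIX = "hadoop.";   // cf. PropertyUtils.HADOOP_PREFIX
  static final String OSS_FS_IMPL = "fs.oss.impl"; // cf. PaimonConstants.OSS_FILESYSTEM_IMPLEMENTATION

  // Paimon-specific override of the mapping hook: only this converter
  // special-cases fs.oss.impl; every other key passes through unchanged.
  static String transformPropertyToFlinkCatalog(String key) {
    if (OSS_FS_IMPL.equals(key)) {
      return HADOOP_PREFIX + OSS_FS_IMPL;
    }
    return key;
  }

  public static void main(String[] args) {
    Map<String, String> in = new HashMap<>();
    in.put(OSS_FS_IMPL, "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem");
    in.put("fs.oss.endpoint", "oss-cn-hangzhou.aliyuncs.com");

    Map<String, String> out = new HashMap<>();
    in.forEach((k, v) -> out.put(transformPropertyToFlinkCatalog(k), v));
    System.out.println(out);
  }
}
```

   With this shape the shared interface needs no `PaimonConstants` import at all; the prefixing lives entirely in the Paimon converter's override.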



##########
spark-connector/spark-common/src/main/java/org/apache/gravitino/spark/connector/paimon/GravitinoPaimonCatalog.java:
##########
@@ -42,6 +43,13 @@ protected TableCatalog createAndInitSparkCatalog(
     TableCatalog paimonCatalog = new SparkCatalog();
     Map<String, String> all =
         getPropertiesConverter().toSparkCatalogProperties(options, properties);
+    all.entrySet().stream()
+        .forEach(
+            entry ->
+                SparkSession.active()
+                    .sessionState()
+                    .conf()
+                    .setConfString(entry.getKey(), entry.getValue()));

Review Comment:
   `all` includes sensitive catalog properties (e.g., OSS access key/secret). 
Writing every entry into the global `SparkSession` SQLConf via `setConfString` 
risks leaking credentials (users can read them via Spark SQL configs) and may 
also fail for non-Spark SQL config keys (many `fs.*` keys are not SQLConf 
entries). Instead, avoid mutating SQLConf and only propagate the needed Hadoop 
filesystem setting (e.g., `fs.oss.impl`) via Spark’s Hadoop configuration 
(`sparkContext().hadoopConfiguration().set(...)`) or by passing the correct 
`spark.hadoop.*` / `hadoop.*`-prefixed options into the underlying catalog.
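   A self-contained sketch of the filtering this suggests, using a plain map as a stand-in for `sparkContext().hadoopConfiguration()` (the key names are assumptions, and the `fs.` filter is one possible policy, not the actual Gravitino logic):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggested fix: instead of pushing every catalog property into
// the global SQLConf, forward only the Hadoop filesystem keys into (a stand-in
// for) the Spark context's hadoopConfiguration. Credentials such as the OSS
// access key/secret are deliberately NOT copied into session-visible config.
public class HadoopConfPropagationSketch {
  static Map<String, String> propagate(Map<String, String> all) {
    Map<String, String> hadoopConf = new HashMap<>(); // stand-in for Hadoop Configuration
    all.forEach((k, v) -> {
      if (k.startsWith("fs.")) { // e.g. fs.oss.impl; illustrative filter only
        hadoopConf.put(k, v);
      }
    });
    return hadoopConf;
  }

  public static void main(String[] args) {
    Map<String, String> all = new HashMap<>();
    all.put("fs.oss.impl", "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem");
    all.put("oss-access-key-id", "AKIA...");
    System.out.println(propagate(all));
  }
}
```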
   



##########
catalogs/catalog-lakehouse-paimon/src/main/java/org/apache/gravitino/catalog/lakehouse/paimon/utils/CatalogUtils.java:
##########
@@ -160,7 +161,11 @@ public static Map<String, String> toInnerProperty(
           } else if (S3_CONFIGURATION.containsKey(key)) {
             gravitinoConfig.put(S3_CONFIGURATION.get(key), value);
           } else if (OSS_CONFIGURATION.containsKey(key)) {
-            gravitinoConfig.put(OSS_CONFIGURATION.get(key), value);
+            if (key.equals(OSSProperties.GRAVITINO_OSS_FILESYSTEM_IMPLEMENTATION)) {
+              gravitinoConfig.put("hadoop." + OSS_CONFIGURATION.get(key), value);
+            } else {
+              gravitinoConfig.put(OSS_CONFIGURATION.get(key), value);
+            }

Review Comment:
   This new `oss-filesystem-impl` handling changes the effective keys sent into 
Paimon by adding a `hadoop.` prefix for `fs.oss.impl`, but there is no unit 
test asserting this conversion. Please add a focused test for 
`CatalogUtils.toInnerProperty(...)` to verify that 
`OSSProperties.GRAVITINO_OSS_FILESYSTEM_IMPLEMENTATION` is converted to 
`hadoop.fs.oss.impl` (and that other OSS properties are unaffected).
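   The shape of such a test, with a local stand-in for `CatalogUtils.toInnerProperty(...)` so the sketch is self-contained (the property strings mirror the diff above but are assumptions; the real test should call the actual `CatalogUtils` and `OSSProperties`):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the conversion under review, plus the assertions the
// requested unit test would make. Constant values are illustrative.
public class ToInnerPropertyTestSketch {
  static final String GRAVITINO_OSS_FS_IMPL = "oss-filesystem-impl";
  static final Map<String, String> OSS_CONFIGURATION = new HashMap<>();
  static {
    OSS_CONFIGURATION.put(GRAVITINO_OSS_FS_IMPL, "fs.oss.impl");
    OSS_CONFIGURATION.put("oss-endpoint", "fs.oss.endpoint");
  }

  // Mirrors the hadoop.-prefixing branch added in the diff above.
  static Map<String, String> toInnerProperty(Map<String, String> props) {
    Map<String, String> out = new HashMap<>();
    props.forEach((key, value) -> {
      if (OSS_CONFIGURATION.containsKey(key)) {
        if (key.equals(GRAVITINO_OSS_FS_IMPL)) {
          out.put("hadoop." + OSS_CONFIGURATION.get(key), value);
        } else {
          out.put(OSS_CONFIGURATION.get(key), value);
        }
      }
    });
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> in = new HashMap<>();
    in.put(GRAVITINO_OSS_FS_IMPL, "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem");
    in.put("oss-endpoint", "oss-cn-hangzhou.aliyuncs.com");
    System.out.println(toInnerProperty(in));
  }
}
```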



##########
catalogs/catalog-common/src/main/java/org/apache/gravitino/catalog/lakehouse/paimon/PaimonPropertiesUtils.java:
##########
@@ -60,6 +60,9 @@ public class PaimonPropertiesUtils {
        OSSProperties.GRAVITINO_OSS_ACCESS_KEY_ID, PaimonConstants.OSS_ACCESS_KEY);
    gravitinoConfigToPaimon.put(
        OSSProperties.GRAVITINO_OSS_ACCESS_KEY_SECRET, PaimonConstants.OSS_SECRET_KEY);
+    gravitinoConfigToPaimon.put(
+        OSSProperties.GRAVITINO_OSS_FILESYSTEM_IMPLEMENTATION,
+        PaimonConstants.OSS_FILESYSTEM_IMPLEMENTATION);

Review Comment:
   A new mapping for `oss-filesystem-impl` -> `fs.oss.impl` is introduced here, 
but there are no unit tests validating the conversion behavior in the 
Spark/Flink Paimon property converters (especially given some paths require 
`hadoop.` prefixing for `fs.oss.impl`). Please extend the existing Paimon 
converter tests to cover this new property end-to-end so regressions are caught.
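   A minimal sketch of the mapping assertion such a converter test would add (the Paimon-side key strings are assumptions standing in for the `PaimonConstants` values; the real test should assert against `PaimonPropertiesUtils` and the Spark/Flink Paimon converters directly):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the gravitinoConfigToPaimon table built in
// PaimonPropertiesUtils; only the last entry is the new mapping under review.
public class PaimonMappingTestSketch {
  static Map<String, String> gravitinoConfigToPaimon() {
    Map<String, String> m = new HashMap<>();
    m.put("oss-access-key-id", "fs.oss.accessKeyId");         // illustrative
    m.put("oss-secret-access-key", "fs.oss.accessKeySecret"); // illustrative
    m.put("oss-filesystem-impl", "fs.oss.impl");              // new mapping
    return m;
  }

  public static void main(String[] args) {
    System.out.println(gravitinoConfigToPaimon());
  }
}
```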



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to