openinx commented on a change in pull request #1586:
URL: https://github.com/apache/iceberg/pull/1586#discussion_r509863370
##########
File path: flink/src/main/java/org/apache/iceberg/flink/FlinkCatalogFactory.java
##########
@@ -71,12 +72,16 @@ protected CatalogLoader createCatalogLoader(String name, Map<String, String> properties,
     String catalogType = properties.getOrDefault(ICEBERG_CATALOG_TYPE, "hive");
     switch (catalogType) {
       case "hive":
-        int clientPoolSize = Integer.parseInt(properties.getOrDefault(HIVE_CLIENT_POOL_SIZE, "2"));
+        // The values of the 'uri', 'warehouse' and 'hive-conf-dir' properties are allowed to be null; in that case
+        // they fall back to the values parsed from the Hadoop configuration loaded from the classpath.
         String uri = properties.get(HIVE_URI);
-        return CatalogLoader.hive(name, hadoopConf, uri, clientPoolSize);
+        String warehouse = properties.get(WAREHOUSE_LOCATION);
+        int clientPoolSize = Integer.parseInt(properties.getOrDefault(HIVE_CLIENT_POOL_SIZE, "2"));
+        String hiveConfDir = properties.get(HIVE_CONF_DIR);
+        return CatalogLoader.hive(name, hadoopConf, uri, warehouse, clientPoolSize, hiveConfDir);
Review comment:
Oh, I almost forgot that `loadCatalog` will be executed on the task manager side in Flink. We should merge the Hive conf into the Hadoop conf before initializing the catalog loader, so that we don't try to load a `hive-conf-dir` path that doesn't exist on the task manager node.
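
For illustration, a minimal sketch of what that merge could look like. This is not the actual Iceberg implementation: the helper name `mergeHiveConf` and the way `hive-site.xml` is located are assumptions. The point is only that the overlay happens on the client before the catalog loader is created, so the serialized loader never depends on a local `hive-conf-dir` path being present on the task manager nodes.

```java
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class HiveConfMergeSketch {

  // Hypothetical helper: overlay hive-site.xml from hiveConfDir (if present)
  // onto a copy of the Hadoop configuration and return the merged conf.
  static Configuration mergeHiveConf(Configuration hadoopConf, String hiveConfDir) {
    Configuration merged = new Configuration(hadoopConf);
    if (hiveConfDir != null) {
      File hiveSite = new File(hiveConfDir, "hive-site.xml");
      if (hiveSite.exists()) {
        // addResource parses the XML file and adds its key/value pairs to the conf.
        merged.addResource(new Path(hiveSite.toURI()));
      }
    }
    return merged;
  }
}
```

With something like this in `createCatalogLoader`, the task managers only ever deserialize the already-merged `Configuration` and never need to read the `hive-conf-dir` path themselves.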