CTTY commented on code in PR #9221:
URL: https://github.com/apache/hudi/pull/9221#discussion_r1276970132
##########
hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HiveSyncConfig.java:
##########
@@ -98,8 +98,9 @@ public HiveSyncConfig(Properties props) {
public HiveSyncConfig(Properties props, Configuration hadoopConf) {
super(props, hadoopConf);
- HiveConf hiveConf = hadoopConf instanceof HiveConf
- ? (HiveConf) hadoopConf : new HiveConf(hadoopConf, HiveConf.class);
+ HiveConf hiveConf = new HiveConf();
+ // HiveConf needs to load Hadoop conf to allow instantiation via AWSGlueClientFactory
+ hiveConf.addResource(hadoopConf);
Review Comment:
Loading the `AWSGlueClientFactory` property specifically should solve the issue
on the AWS side, but other configs/custom configs may also be passed in via the
Spark session, and those won't be picked up by hard-coded logic. I still think
loading the entire Hadoop conf here is the safer choice.
Also, this change could alter the order in which resources are added to the
Hive conf. I've gone through the `HiveConf` constructor and didn't see any use
of the resources during construction, so I don't think the ordering should
matter here, but maybe I've overlooked something?
An alternative would be to always pass `hadoopConf` to the `HiveConf`
constructor. Wdyt?
```suggestion
HiveConf hiveConf = new HiveConf(hadoopConf, HiveConf.class);
```
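To make the ordering question above concrete, here is a toy, self-contained sketch (these are *not* the real Hadoop `Configuration`/`HiveConf` classes; the `Conf` class, its lookup rule, and all keys/values are illustrative assumptions) showing why layering a base conf in at construction time versus via `addResource` afterwards yields the same effective lookups when the constructor itself never reads the resources:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for a Hadoop-style layered configuration:
// lookups scan the resource list in order and the last resource
// that defines a key wins.
class Conf {
    private final List<Map<String, String>> resources = new ArrayList<>();

    Conf() {}

    // Mimics the constructor-based style, e.g. new HiveConf(hadoopConf, HiveConf.class):
    // the base conf's resources are layered in at construction time.
    Conf(Conf base) {
        resources.addAll(base.resources);
    }

    // Mimics hiveConf.addResource(hadoopConf): resources are appended later.
    void addResource(Conf other) {
        resources.addAll(other.resources);
    }

    void addResource(Map<String, String> res) {
        resources.add(res);
    }

    // Last-added resource defining the key wins.
    String get(String key) {
        String value = null;
        for (Map<String, String> res : resources) {
            if (res.containsKey(key)) {
                value = res.get(key);
            }
        }
        return value;
    }
}

public class ConfOrderSketch {
    public static void main(String[] args) {
        Conf hadoopConf = new Conf();
        hadoopConf.addResource(Map.of("hive.metastore.uris", "thrift://from-hadoop:9083"));

        // Style A: base conf passed at construction (the suggested change).
        Conf viaCtor = new Conf(hadoopConf);

        // Style B: empty conf, then addResource (the PR's current change).
        Conf viaAdd = new Conf();
        viaAdd.addResource(hadoopConf);

        // Both styles resolve to the same effective value, since nothing
        // reads the resources during construction.
        System.out.println(viaCtor.get("hive.metastore.uris"));
        System.out.println(viaAdd.get("hive.metastore.uris"));
    }
}
```

The sketch only captures the case the reviewer describes: if the constructor consumed resources while constructing (which the reviewer did not find in `HiveConf`), the two styles could diverge.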
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]