dheerajpanangat commented on code in PR #7097:
URL: https://github.com/apache/hudi/pull/7097#discussion_r1010174663


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableFactory.java:
##########
@@ -347,4 +351,12 @@ private static void inferAvroSchema(Configuration conf, LogicalType rowType) {
       conf.setString(FlinkOptions.SOURCE_AVRO_SCHEMA, inferredSchema);
     }
   }
+
+  private static void setupRootOptions(Configuration conf, ReadableConfig configuration) {
+    if (configuration instanceof TableConfig) {
+      ((Configuration) ((TableConfig) configuration).getRootConfiguration()).toMap().forEach((rootConfigKey, rootConfigValue) -> {
+        conf.setString(rootConfigKey, rootConfigValue);

Review Comment:
   Hi @danny0405 ,
   Thanks for taking a look at this.
   
   The issue surfaces when using Hudi with Flink against Azure storage.
   The Flink configuration includes properties that Hadoop needs in order to connect to Azure,
   but in the flow from Flink -> Hudi -> Hadoop -> storage for the table, those configurations are not passed along.
   
   As a result, the hadoop-azure library never receives the configs for ClientId, Credentials, AuthType, etc.
   This change passes those configurations from the Flink layer through to the Hadoop layer.
   
   Another workaround is to send the configs as part of the CatalogTable options, but that does not seem correct.
   Let me know your thoughts though.
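   The propagation in the diff above can be sketched with plain maps standing in for Flink's `Configuration`/`TableConfig` (a minimal illustration, not the actual Hudi code; the `fs.azure.*` keys are the ABFS OAuth settings hadoop-azure reads, and the values here are placeholders):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   public class RootOptionsSketch {
     // Copy every root-level (flink-conf) entry into the per-table configuration,
     // mirroring the conf.setString(...) loop in the diff, so keys like the Azure
     // auth settings reach the Hadoop layer downstream.
     static void setupRootOptions(Map<String, String> tableConf, Map<String, String> rootConf) {
       rootConf.forEach(tableConf::put);
     }
   
     public static void main(String[] args) {
       Map<String, String> root = new HashMap<>();
       root.put("fs.azure.account.auth.type", "OAuth");
       root.put("fs.azure.account.oauth2.client.id", "<client-id>");
   
       Map<String, String> table = new HashMap<>();
       table.put("hoodie.table.name", "t1");
   
       setupRootOptions(table, root);
       // The table-scoped config now also carries the Azure auth entries.
       System.out.println(table.get("fs.azure.account.auth.type")); // OAuth
     }
   }
   ```
   
   Note that a plain `put` (like `conf.setString`) lets a root-level key overwrite a same-named table-level one; if table options should win, the copy would need to skip existing keys.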



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
