zentol commented on a change in pull request #13027:
URL: https://github.com/apache/flink/pull/13027#discussion_r466868396



##########
File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSink.java
##########
@@ -241,12 +242,14 @@ public final DataStreamSink consumeDataStream(DataStream dataStream) {
 			formatTypes[i] = tableSchema.getFieldDataType(i).get().getLogicalType();
 		}
 		RowType formatType = RowType.of(formatTypes, formatNames);
-		Configuration formatConf = new Configuration(jobConf);
-		sd.getSerdeInfo().getParameters().forEach(formatConf::set);
 		if (serLib.contains("parquet")) {
+			Configuration formatConf = new Configuration(jobConf);
+			sd.getSerdeInfo().getParameters().forEach(formatConf::set);
 			return Optional.of(ParquetRowDataBuilder.createWriterFactory(
 					formatType, formatConf, hiveVersion.startsWith("3.")));
 		} else if (serLib.contains("orc")) {
+			Configuration formatConf = new ThreadLocalClassLoaderConfiguration(jobConf);
Review comment:
       If we were to use a normal Configuration here, what would happen? Would subsequent jobs fail to run?
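
       For context, the general idea behind a thread-local-classloader configuration can be sketched like this. This is a minimal, self-contained illustration of the pattern only; the class and method names below are hypothetical stand-ins, not Flink's actual ThreadLocalClassLoaderConfiguration, and the leak mechanism described in the comments is my reading of why such a wrapper exists, not something verified against the ORC code:

```java
/**
 * Sketch of the thread-local-classloader pattern: instead of a
 * Configuration remembering the classloader that was current when it
 * was constructed, class lookups are resolved against the classloader
 * of whichever thread is calling. If a long-lived holder (e.g. a
 * writer's internal state) keeps a normal Configuration alive, it can
 * pin a finished job's user classloader; resolving lazily via the
 * current thread's context classloader avoids that.
 */
public class ThreadLocalClassLoaderConfSketch {

	// Hypothetical analogue of a getClassByName-style lookup: resolve
	// against the *current* thread's context classloader on every call,
	// rather than a classloader captured at construction time.
	public static Class<?> getClassByName(String name) throws ClassNotFoundException {
		return Class.forName(name, false, Thread.currentThread().getContextClassLoader());
	}

	public static void main(String[] args) throws Exception {
		// Resolves using whatever classloader the calling thread has set.
		System.out.println(getClassByName("java.lang.String").getName());
	}
}
```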




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

