luoyuxia commented on code in PR #20377:
URL: https://github.com/apache/flink/pull/20377#discussion_r941121136


##########
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/connectors/hive/PartitionMonitorTest.java:
##########
@@ -99,6 +99,10 @@ private void commitPartitionWithGivenCreateTime(
     private void preparePartitionMonitor() {
         List<List<String>> seenPartitionsSinceOffset = new ArrayList<>();
         JobConf jobConf = new JobConf();

Review Comment:
   Also, when adding the test, I found the current implementation only works for the ORC format. For other formats, even though we set `mapreduce.input.fileinputformat.split.maxsize`, they won't take this configuration into account when `InputFormat#getSplits` is called to compute splits.
   
   So, I added a note in the doc saying these related configurations only work for the ORC format, and I changed the code so that we only try to set `mapreduce.input.fileinputformat.split.maxsize` when the format is ORC. As a result, the PR title also changed.
   
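   To illustrate the idea: a minimal sketch of setting the split max size only for the ORC format. This is not the actual patch; the helper name and the `Map` stand-in for Hadoop's `JobConf` are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class SplitMaxSizeConfig {
    // Hadoop config key that only the ORC input format honors in getSplits()
    static final String SPLIT_MAX_SIZE = "mapreduce.input.fileinputformat.split.maxsize";

    // Set the split max size only when the table's storage format is ORC;
    // other formats ignore this key when computing splits, so skip them.
    static void configureSplitMaxSize(
            Map<String, String> conf, String storageFormat, long maxSizeBytes) {
        if ("orc".equalsIgnoreCase(storageFormat)) {
            conf.put(SPLIT_MAX_SIZE, Long.toString(maxSizeBytes));
        }
    }

    public static void main(String[] args) {
        Map<String, String> orcConf = new HashMap<>();
        configureSplitMaxSize(orcConf, "orc", 128L * 1024 * 1024);
        System.out.println(orcConf.containsKey(SPLIT_MAX_SIZE));     // true

        Map<String, String> parquetConf = new HashMap<>();
        configureSplitMaxSize(parquetConf, "parquet", 128L * 1024 * 1024);
        System.out.println(parquetConf.containsKey(SPLIT_MAX_SIZE)); // false
    }
}
```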



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
