[
https://issues.apache.org/jira/browse/HUDI-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17375072#comment-17375072
]
ASF GitHub Bot commented on HUDI-2106:
--------------------------------------
danny0405 commented on a change in pull request #3192:
URL: https://github.com/apache/hudi/pull/3192#discussion_r664168074
##########
File path:
hudi-flink/src/main/java/org/apache/hudi/sink/compact/HoodieFlinkCompactor.java
##########
@@ -112,7 +112,8 @@ public static void main(String[] args) throws Exception {
     }
     // get compactionParallelism.
-    int compactionParallelism = Math.min(conf.getInteger(FlinkOptions.COMPACTION_TASKS), compactionPlan.getOperations().size());
+    int compactionParallelism = conf.getInteger(FlinkOptions.COMPACTION_TASKS) == -1
Review comment:
It’s fine because Flink doesn’t allow negative parallelism.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
> Fix Flink batch compaction bug when the user doesn't set compaction tasks
> -------------------------------------------------------------------------
>
> Key: HUDI-2106
> URL: https://issues.apache.org/jira/browse/HUDI-2106
> Project: Apache Hudi
> Issue Type: Bug
> Components: Flink Integration
> Reporter: Zheng yunhong
> Priority: Major
> Labels: pull-request-available
> Fix For: 0.9.0
>
>
> There is a bug in Flink batch compaction when the user does not set the
> compaction tasks option: the compaction parallelism always falls back to
> the default value instead of the compactionPlan operations size.
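The intended parallelism selection can be illustrated with a minimal standalone sketch. This is a hypothetical, simplified model of the fix, not the actual `HoodieFlinkCompactor` code: it assumes `-1` represents "compaction tasks not set by the user", and the method and class names below are illustrative only (the real code reads `FlinkOptions.COMPACTION_TASKS` from the Flink `Configuration`).

```java
// Hypothetical sketch of the fixed parallelism selection in HUDI-2106.
// -1 models the default when the user has not configured compaction tasks.
public class CompactionParallelismSketch {

    static int compactionParallelism(int configuredTasks, int planOperations) {
        return configuredTasks == -1
                // Unset: size the compaction job from the plan itself.
                ? planOperations
                // Set: never use more tasks than there are plan operations.
                : Math.min(configuredTasks, planOperations);
    }

    public static void main(String[] args) {
        // Unset (-1): parallelism follows the compaction plan size.
        System.out.println(compactionParallelism(-1, 5));
        // Set explicitly: capped by the plan's operation count.
        System.out.println(compactionParallelism(4, 10));
        System.out.println(compactionParallelism(10, 3));
    }
}
```

The point of the review comment above is that a sentinel of `-1` is safe here, since Flink rejects negative parallelism values, so `-1` can never be a legitimate user setting.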
--
This message was sent by Atlassian Jira
(v8.3.4#803005)