[
https://issues.apache.org/jira/browse/FLINK-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313320#comment-16313320
]
ASF GitHub Bot commented on FLINK-8037:
---------------------------------------
Github user pnowojski commented on a diff in the pull request:
https://github.com/apache/flink/pull/5205#discussion_r159904126
--- Diff:
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java
---
@@ -742,7 +742,7 @@ public void snapshotState(FunctionSnapshotContext context) throws Exception {
			// case we adjust nextFreeTransactionalId by the range of transactionalIds that could be used for this
			// scaling up.
			if (getRuntimeContext().getNumberOfParallelSubtasks() > nextTransactionalIdHint.lastParallelism) {
-				nextFreeTransactionalId += getRuntimeContext().getNumberOfParallelSubtasks() * kafkaProducersPoolSize;
+				nextFreeTransactionalId += (long) getRuntimeContext().getNumberOfParallelSubtasks() * kafkaProducersPoolSize;
--- End diff --
Good change, although this is a rather theoretical bug. To trigger it there
would need to be more than 1_000_000 subtasks and more than 2000 parallel
ongoing checkpoints.
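A minimal sketch of the overflow the cast guards against (the variable names mirror the producer's fields, but the values are illustrative, not taken from any real deployment): multiplying two `int` operands is done in 32-bit arithmetic and wraps around *before* the result is widened to `long`, while casting one operand to `long` promotes the whole product to 64 bits first.

```java
public class CastOverflowDemo {

	// Without a cast, the product is computed as a 32-bit int and wraps
	// around before being widened to long for the return.
	static long productWithoutCast(int subtasks, int poolSize) {
		return subtasks * poolSize;
	}

	// Casting one operand forces the multiplication itself into 64-bit
	// long arithmetic, so no wrap-around occurs.
	static long productWithCast(int subtasks, int poolSize) {
		return (long) subtasks * poolSize;
	}

	public static void main(String[] args) {
		// Illustrative values large enough for the int product to
		// exceed Integer.MAX_VALUE (2_147_483_647).
		int numberOfParallelSubtasks = 1_500_000;
		int kafkaProducersPoolSize = 2_000;

		System.out.println(productWithoutCast(numberOfParallelSubtasks, kafkaProducersPoolSize));
		// prints -1294967296 (3_000_000_000 wrapped modulo 2^32)
		System.out.println(productWithCast(numberOfParallelSubtasks, kafkaProducersPoolSize));
		// prints 3000000000
	}
}
```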
> Missing cast in integer arithmetic in
> TransactionalIdsGenerator#generateIdsToAbort
> ----------------------------------------------------------------------------------
>
> Key: FLINK-8037
> URL: https://issues.apache.org/jira/browse/FLINK-8037
> Project: Flink
> Issue Type: Bug
> Reporter: Ted Yu
> Assignee: Greg Hogan
> Priority: Minor
>
> {code}
> public Set<String> generateIdsToAbort() {
> 	Set<String> idsToAbort = new HashSet<>();
> 	for (int i = 0; i < safeScaleDownFactor; i++) {
> 		idsToAbort.addAll(generateIdsToUse(i * poolSize * totalNumberOfSubtasks));
> {code}
> The operands are integers, while generateIdsToUse() expects a long parameter.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)