pnowojski commented on a change in pull request #15221:
URL: https://github.com/apache/flink/pull/15221#discussion_r604177096
##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
##########
@@ -719,7 +721,9 @@ private void updatePartitionConsumers(final IntermediateResultPartition partitio
// Consumer is deploying => cache the partition info which would be
// sent after switching to running
// ----------------------------------------------------------------
- if (consumerState == DEPLOYING || consumerState == RUNNING) {
+ if (consumerState == DEPLOYING
Review comment:
I think @dawidwys is right, the logic is different compared to what it used to be. However, I think it is now wrong if we ever enter this method for unaligned checkpoints. With UC, the consumer might be in the `RECOVERING` state and might already need the partition information to continue recovering.

So all in all, the original change as proposed by @akalash makes the most sense to me. If the consumer is deploying, just cache the update. If the consumer is running or recovering, send the RPC; in both cases `StreamTask`/`Task` will be able to process the request. Also, only such a change would preserve the previous behaviour, where on master `RECOVERING` is essentially/implicitly part of the `RUNNING` state.
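
To make the intended branching concrete, here is a minimal self-contained sketch of the dispatch described above. The state names and `sendUpdatePartitionInfoRpcCall` come from this thread; everything else (the class, the `String` payload, `onPartitionInfo`) is made up for illustration and is not the actual patch:
```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only; not the real Execution/Task code.
class PartitionUpdateDispatch {
    enum ExecutionState { DEPLOYING, RECOVERING, RUNNING }

    private final List<String> cachedPartitionInfos = new ArrayList<>();

    void onPartitionInfo(ExecutionState consumerState, String partitionInfo) {
        if (consumerState == ExecutionState.DEPLOYING) {
            // Consumer cannot process the update yet: cache it. The cache is
            // flushed on the DEPLOYING -> RECOVERING transition (see below).
            cachedPartitionInfos.add(partitionInfo);
        } else if (consumerState == ExecutionState.RECOVERING
                || consumerState == ExecutionState.RUNNING) {
            // With unaligned checkpoints the consumer may already need the
            // partition info while still RECOVERING, so send the RPC now.
            sendUpdatePartitionInfoRpcCall(partitionInfo);
        }
    }

    private void sendUpdatePartitionInfoRpcCall(String partitionInfo) {
        // Stand-in for the real RPC to the consumer's TaskManager.
        System.out.println("updatePartitionInfo RPC: " + partitionInfo);
    }
}
```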
> If I understand this PR well, after the PR `sendUpdatePartitionInfoRpcCall` will be sent twice. When switching to RECOVERING and a second time when switching over to RUNNING.
I don't think so. Here we will (and should) cache the update if the consumer is `DEPLOYING`, or send it right away if it's `RECOVERING` or `RUNNING`. Cached updates are sent to the consumer only once, when the consumer switches from `DEPLOYING` to `RECOVERING`, in lines 1162-1163:
```java
if (switchTo(DEPLOYING, RECOVERING)) {
    sendPartitionInfos();
}
```
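
To spell out why the update is not sent a second time on the `RECOVERING` -> `RUNNING` switch, here is a hedged, self-contained sketch. Only `switchTo` and `sendPartitionInfos` are the names quoted above; the compare-and-set state holder and the method around it are assumptions for illustration:
```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical illustration; not the real Task code.
class SwitchToSketch {
    enum ExecutionState { DEPLOYING, RECOVERING, RUNNING }

    private final AtomicReference<ExecutionState> state =
            new AtomicReference<>(ExecutionState.DEPLOYING);

    // Atomic compare-and-set: succeeds at most once per (from, to) transition.
    boolean switchTo(ExecutionState from, ExecutionState to) {
        return state.compareAndSet(from, to);
    }

    void finishDeployment() {
        // The cached updates are flushed exactly once, here. The later
        // RECOVERING -> RUNNING switch does NOT resend them, which is why
        // the consumer does not receive the partition infos twice.
        if (switchTo(ExecutionState.DEPLOYING, ExecutionState.RECOVERING)) {
            sendPartitionInfos();
        }
    }

    private void sendPartitionInfos() {
        // Stand-in: deliver all updates cached while DEPLOYING to the consumer.
    }
}
```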