StefanRRichter commented on a change in pull request #8322: [FLINK-12364]
Introduce a CheckpointFailureManager to centralized manage checkpoint failure
URL: https://github.com/apache/flink/pull/8322#discussion_r285062528
##########
File path: flink-end-to-end-tests/flink-streaming-kafka-test-base/src/main/java/org/apache/flink/streaming/kafka/test/base/KafkaExampleUtil.java
##########
@@ -45,6 +45,7 @@ public static StreamExecutionEnvironment prepareExecutionEnv(ParameterTool param
 		env.getConfig().disableSysoutLogging();
 		env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(4, 10000));
 		env.enableCheckpointing(5000); // create a checkpoint every 5 seconds
+		env.getCheckpointConfig().setTolerableCheckpointFailureNumber(Integer.MAX_VALUE);
Review comment:
Ok, then the question is through which failure reason this surfaces, and whether there was a higher tolerance for that reason before. Could it be that DECLINED was tolerated before, e.g. when tasks were not yet ready?
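
For readers following the thread, here is a minimal, self-contained sketch (not part of the PR; the class name and the placeholder pipeline are mine) of how the new setter is applied to a job's CheckpointConfig, mirroring the lines touched in KafkaExampleUtil. Exactly which failure reasons (e.g. DECLINED) the CheckpointFailureManager counts against this threshold is the open question in this thread.

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointToleranceSketch {

	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

		// Mirrors the test utility: a fixed-delay restart strategy and a
		// checkpoint every 5 seconds.
		env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(4, 10000));
		env.enableCheckpointing(5000);

		// Counted checkpoint failures are compared against this threshold;
		// Integer.MAX_VALUE effectively means the job is never failed because
		// of checkpoint failures (which reasons are counted is discussed above).
		env.getCheckpointConfig().setTolerableCheckpointFailureNumber(Integer.MAX_VALUE);

		// Placeholder pipeline, not from the PR, just to make the sketch complete.
		env.fromElements(1, 2, 3).print();

		env.execute("checkpoint-failure-tolerance-sketch");
	}
}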