[
https://issues.apache.org/jira/browse/FLINK-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364020#comment-15364020
]
ASF GitHub Bot commented on FLINK-3190:
---------------------------------------
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/1954#discussion_r69696777
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FixedDelayRestartStrategy.java ---
@@ -108,22 +87,7 @@ public static FixedDelayRestartStrategyFactory createFactory(Configuration confi
 				timeoutString
 			);
-		long delay;
-
-		try {
-			delay = Duration.apply(delayString).toMillis();
-		} catch (NumberFormatException nfe) {
-			if (delayString.equals(timeoutString)) {
-				throw new Exception("Invalid config value for " +
-					ConfigConstants.AKKA_WATCH_HEARTBEAT_PAUSE + ": " + timeoutString +
-					". Value must be a valid duration (such as '10 s' or '1 min')");
-			} else {
-				throw new Exception("Invalid config value for " +
-					ConfigConstants.EXECUTION_RETRY_DELAY_KEY + ": " + delayString +
-					". Value must be a valid duration (such as '100 milli' or '10 s')");
-			}
-		}
-
--- End diff ---
No, we cannot remove this exception handling here, because this method and
the code in `RestartStrategyFactory` are two different code paths. Please
reintroduce the exception handling.
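For reference, a minimal sketch of what reintroducing that handling inside createFactory would look like, taken directly from the removed lines in the diff above (the surrounding reads of delayString and timeoutString from the Configuration are assumed to happen earlier in the method):

    // Sketch: the block removed by the diff, placed back into createFactory(...).
    // delayString / timeoutString are assumed to have been read from the Configuration above.
    long delay;
    try {
        // scala.concurrent.duration.Duration parses values such as "10 s" or "100 milli"
        delay = Duration.apply(delayString).toMillis();
    } catch (NumberFormatException nfe) {
        if (delayString.equals(timeoutString)) {
            throw new Exception("Invalid config value for " +
                ConfigConstants.AKKA_WATCH_HEARTBEAT_PAUSE + ": " + timeoutString +
                ". Value must be a valid duration (such as '10 s' or '1 min')");
        } else {
            throw new Exception("Invalid config value for " +
                ConfigConstants.EXECUTION_RETRY_DELAY_KEY + ": " + delayString +
                ". Value must be a valid duration (such as '100 milli' or '10 s')");
        }
    }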
> Retry rate limits for DataStream API
> ------------------------------------
>
> Key: FLINK-3190
> URL: https://issues.apache.org/jira/browse/FLINK-3190
> Project: Flink
> Issue Type: Improvement
> Reporter: Sebastian Klemke
> Assignee: Michał Fijołek
> Priority: Minor
>
> For a long-running stream processing job, absolute numbers of retries don't
> make much sense: the job will accumulate transient errors over time and will
> eventually die once the threshold is exceeded. Rate limits are better suited
> to this scenario: a job should only die if it fails too often within a given
> time frame. To better overcome transient errors, retry delays could be used,
> as suggested in other issues.
> Absolute numbers of retries can still make sense if failing operators don't
> make any progress at all. We can measure progress by OperatorState changes
> and by observing output, as long as the operator in question is not a sink.
> If the operator's state changes and/or the operator produces output, we can
> assume it is making progress.
> As an example, let's say we configure a retry rate limit of 10 retries per
> hour for a non-sink operator A. If the operator fails once every 10 minutes
> and produces output between failures, this should not lead to job termination.
> But if the operator fails 11 times within an hour, or does not produce output
> between 11 consecutive failures, the job should be terminated.
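A minimal, hypothetical sketch of the proposed rate-limit check (plain Java; none of the names below are Flink API, and the progress-detection part of the proposal is left out):

    import java.util.ArrayDeque;

    // Hypothetical helper: allow at most maxFailures restarts within a sliding window.
    // This only illustrates the rate-limit idea from the issue description above.
    class FailureRateLimiter {
        private final int maxFailures;                  // e.g. 10 retries
        private final long windowMillis;                // e.g. one hour
        private final ArrayDeque<Long> failureTimestamps = new ArrayDeque<>();

        FailureRateLimiter(int maxFailures, long windowMillis) {
            this.maxFailures = maxFailures;
            this.windowMillis = windowMillis;
        }

        // Records a failure and returns true if a restart is still allowed.
        boolean recordFailureAndCheck(long nowMillis) {
            failureTimestamps.addLast(nowMillis);
            // Drop failures that have fallen out of the sliding window.
            while (!failureTimestamps.isEmpty()
                    && nowMillis - failureTimestamps.peekFirst() > windowMillis) {
                failureTimestamps.removeFirst();
            }
            return failureTimestamps.size() <= maxFailures;
        }
    }

With maxFailures = 10 and a one-hour window, an operator failing once every 10 minutes stays well under the limit and keeps restarting, while an 11th failure within the same hour exceeds the limit and the job terminates, matching the example above.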
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)