[
https://issues.apache.org/jira/browse/FLINK-21884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Flink Jira Bot updated FLINK-21884:
-----------------------------------
Labels: auto-deprioritized-critical auto-deprioritized-major reactive
(was: auto-deprioritized-critical reactive stale-major)
Priority: Minor (was: Major)
This issue was labeled "stale-major" 7 days ago and has not received any
updates, so it is being deprioritized. If this ticket is actually Major, please
raise the priority and ask a committer to assign you the issue or revive the
public discussion.
> Reduce TaskManager failure detection time
> -----------------------------------------
>
> Key: FLINK-21884
> URL: https://issues.apache.org/jira/browse/FLINK-21884
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Coordination
> Reporter: Robert Metzger
> Priority: Minor
> Labels: auto-deprioritized-critical, auto-deprioritized-major,
> reactive
> Attachments: image-2021-03-19-20-10-40-324.png
>
>
> In Flink 1.13 (and older versions), TaskManager failures stall the processing
> for a significant amount of time, even though the system gets indications of
> the failure almost immediately through network connection losses.
> This is due to a high default heartbeat timeout of 50 seconds [1], chosen to
> accommodate GC pauses, transient network disruptions, or generally slow
> environments (otherwise, we would unregister healthy TaskManagers).
> Such a high timeout can lead to disruptions in the processing (no processing
> for certain periods, high latencies, buildup of consumer lag, etc.). In
> Reactive Mode (FLINK-10407), the issue surfaces on scale-down events: the
> loss of a TaskManager is immediately visible in the logs, but the job is
> stuck in "FAILING" for quite a while until the TaskManager is actually
> deregistered. (Note that this issue is not as critical in an autoscaling
> setup, because Flink controls the scale-down events and can trigger them
> proactively.)
> On the attached metrics dashboard, one can see that the job suffers
> significant throughput drops / consumer lag during scale-down (and also CPU
> usage spikes while processing the queued events, which in turn trigger
> incorrect scale-up events again).
> !image-2021-03-19-20-10-40-324.png|thumbnail!
> One idea to solve this problem is to:
> - Score TaskManagers based on certain signals (number of exceptions
> reported, exception types (connection losses, akka failures), failure
> frequencies, ...) and blacklist them accordingly.
> - Introduce a best-effort TaskManager unregistration mechanism: when a
> TaskManager receives a SIGTERM, it sends a final "goodbye" message to the
> JobManager, which can then immediately remove the TM from its bookkeeping
> (a minimal sketch follows below).
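> For the second idea, here is a minimal, hypothetical sketch of what the
> "goodbye" path could look like. None of these class or method names exist in
> Flink today; JobManagerGateway and disconnectTaskManager are placeholders
> for whatever RPC we would actually add:
> {code:java}
> // Hypothetical sketch: a JVM shutdown hook runs on SIGTERM, so the
> // TaskManager can use one to send a best-effort "goodbye" before exiting.
> final class GracefulDisconnect {
>
>     /** Placeholder for the RPC the TM would invoke on the JobManager. */
>     interface JobManagerGateway {
>         void disconnectTaskManager(String taskManagerId, Exception cause);
>     }
>
>     static void install(JobManagerGateway jobManager, String taskManagerId) {
>         Runtime.getRuntime().addShutdownHook(new Thread(() -> {
>             try {
>                 // Fire-and-forget: the JM can drop the TM from its
>                 // bookkeeping right away instead of waiting ~50s for the
>                 // heartbeat timeout.
>                 jobManager.disconnectTaskManager(
>                         taskManagerId,
>                         new Exception("TaskManager received SIGTERM"));
>             } catch (Throwable t) {
>                 // Best effort only: if the message is lost, the heartbeat
>                 // timeout remains the fallback detection mechanism.
>             }
>         }, "taskmanager-goodbye-hook"));
>     }
> }
> {code}
> Since the message is best-effort, it only shortens detection in the graceful
> case (e.g. Reactive Mode scale-down); hard crashes would still rely on the
> heartbeat timeout.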
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#heartbeat-timeout
--
This message was sent by Atlassian Jira
(v8.3.4#803005)