[
https://issues.apache.org/jira/browse/FLINK-31482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706781#comment-17706781
]
Martijn Visser commented on FLINK-31482:
----------------------------------------
[~Fei Feng] I would think that you detect that a job is restarting by looking
at your metrics and then investigate the cause; that would probably also
require you to look into logging etc. If it's a customer-initiated restart,
you would have a different lifecycle state than you would for a failing job
imho, which you can already determine from the metrics.
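
As a concrete illustration of the kind of detection described above, the sketch
below polls the job-level `numRestarts` metric through the JobManager REST API
(`/jobs/:jobid/metrics?get=numRestarts`). The host, port, and job id are
placeholders, the JSON handling is deliberately naive, and treating a drop in
the counter as a sign of a JobManager failover is only a heuristic assumption
based on the behaviour described in this ticket, not an official mechanism.

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: poll numRestarts via the Flink REST API and flag both job-level
// restarts (counter goes up) and a suspected JobManager failover (counter
// drops back, e.g. to 0, because a new leader starts counting from scratch).
// Host, port and job id are placeholders for illustration only.
public class NumRestartsWatcher {

    private static final String ENDPOINT =
            "http://localhost:8081/jobs/%s/metrics?get=numRestarts";

    public static void main(String[] args) throws Exception {
        String jobId = args.length > 0 ? args[0] : "<job-id>";
        HttpClient client = HttpClient.newHttpClient();
        long previous = -1;

        while (true) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(String.format(ENDPOINT, jobId)))
                    .GET()
                    .build();
            String body =
                    client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Response looks like: [{"id":"numRestarts","value":"3"}]
            long current = parseValue(body);
            if (current < 0) {
                // Metric not reported (yet); skip this round.
                Thread.sleep(10_000);
                continue;
            }

            if (previous >= 0 && current > previous) {
                System.out.println("Job-level restart detected: numRestarts=" + current);
            } else if (previous >= 0 && current < previous) {
                // Heuristic assumption: a drop suggests a new JobManager became
                // leader and the metric started again from zero (FLINK-31482).
                System.out.println("numRestarts dropped from " + previous + " to "
                        + current + "; possible JobManager failover / HA recovery");
            }
            previous = current;
            Thread.sleep(10_000);
        }
    }

    // Naive extraction of the "value" field; a real client would use a JSON parser.
    private static long parseValue(String json) {
        int i = json.indexOf("\"value\":\"");
        if (i < 0) {
            return -1;
        }
        int start = i + "\"value\":\"".length();
        int end = json.indexOf('"', start);
        return Long.parseLong(json.substring(start, end));
    }
}
{code}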
> support count jobmanager-failed failover times
> ----------------------------------------------
>
> Key: FLINK-31482
> URL: https://issues.apache.org/jira/browse/FLINK-31482
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / Coordination, Runtime / Metrics
> Affects Versions: 1.16.1
> Reporter: Fei Feng
> Priority: Major
>
> We have a metric `numRestarts` which indicates how many times a job has
> failed over, but we don't have a metric that indicates the job was recovered
> from HA (high availability).
> There are two problems:
> 1. When a JobManager process crashes, we have no way of knowing from the
> metric system that the JobManager crashed and the job was recovered.
> 2. When a new JobManager becomes leader, `numRestarts` starts again from
> zero, which sometimes misleads our users. Most users consider a failover
> caused by a JM failure and one caused by a job failure to be the same; the
> effect, at least, is the same.
>
> I suggest that we:
> 1. Add a new metric that indicates how many times the job was recovered from
> HA (see the sketch below the quoted description).
> 2. Have the metric `numRestarts` also count recoveries from HA.
>
>
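
A minimal sketch of what the first suggestion could look like, assuming a
hypothetical hook that is invoked when a newly elected JobManager restores the
job from HA state. The class, method, and metric names below are illustrative
only and are not Flink's actual JobManager internals; only the
`MetricGroup#counter(String)` call is the existing metrics API.

{code:java}
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

// Hypothetical sketch only: where the job-level metric group is obtained and
// when the hook fires are assumptions, not Flink's actual internals. The idea
// is a counter that is bumped each time the job is restored from
// high-availability state by a newly elected JobManager leader.
public class HaRecoveryMetrics {

    public static final String NUM_RESTARTS_FROM_HA = "numRestartsFromHa";

    private final Counter restartsFromHa;

    public HaRecoveryMetrics(MetricGroup jobMetricGroup) {
        // MetricGroup#counter(String) registers and returns a simple counter.
        this.restartsFromHa = jobMetricGroup.counter(NUM_RESTARTS_FROM_HA);
    }

    /** Hypothetical hook, called when the job is recovered from HA storage. */
    public void onJobRecoveredFromHa() {
        restartsFromHa.inc();
    }
}
{code}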