[ 
https://issues.apache.org/jira/browse/FLINK-21309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17281058#comment-17281058
 ] 

Chesnay Schepler commented on FLINK-21309:
------------------------------------------

Imagine a Flink session cluster against which new jobs are continuously 
submitted, or any cluster running long-lived streaming jobs.

Whenever a new job is submitted, or a restart occurs, the number of metrics 
stored in the PushGateway grows. Existing metrics are never deleted, as 
deletion only occurs when the JM/TM process (== the Prometheus "job") shuts 
down. That job you ran 2 months ago? Its metrics are still around. Your job 
restarted a thousand times? The metrics from the first run are also still there.
If enough of these events occur, the PushGateway will crash, be it due to 
running out of memory or disk space.
We can neither guard against this by cleaning up metrics (because you can only 
delete by grouping key, not by labels), nor can users guard against it, because 
the PushGateway provides no hooks to clean up stale data.
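To make the grouping-key limitation concrete, here is a toy in-memory model of the Pushgateway's grouping semantics (an illustrative sketch only, not the real implementation): PUT replaces the entire group, POST merges by metric name, and deletion is only possible for a whole grouping key, never for a single stale label set inside it.

```python
class PushGatewaySketch:
    """Hypothetical in-memory model of Pushgateway grouping-key behavior."""

    def __init__(self):
        # grouping key (e.g. ("job", "myjob")) -> {metric name: value}
        self.groups = {}

    def put(self, key, metrics):
        # PUT replaces *all* metrics stored under the grouping key.
        self.groups[key] = dict(metrics)

    def post(self, key, metrics):
        # POST only replaces metrics with the same name; others survive.
        self.groups.setdefault(key, {}).update(metrics)

    def delete(self, key):
        # Deletion works only per grouping key; there is no way to drop
        # stale series for a single label set inside the group.
        self.groups.pop(key, None)


gw = PushGatewaySketch()
key = ("job", "myjob")                # shared by the JM and all TMs
gw.post(key, {"jm_heap_used": 100})   # JobManager report
gw.post(key, {"tm_heap_used": 200})   # TaskManager report: both coexist
assert gw.groups[key] == {"jm_heap_used": 100, "tm_heap_used": 200}

gw.put(key, {"jm_heap_used": 150})    # a PUT wipes the TM metric
assert gw.groups[key] == {"jm_heap_used": 150}
```

Under this model, once a job restarts under a new label set within the same grouping key, its old series can only be removed by deleting the whole group, which also destroys the live metrics.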

The underlying issue is that the PushGateway is not meant for long-running 
applications, but Flink jobs are usually exactly that. It is not a good fit; 
some friction and inconveniences are to be expected, and this is one of them.


What you are proposing essentially boils down to consciously leaking resources 
on the _assumption_ that it won't crash. And indeed it may work fine until, at 
a certain point, it fails in the worst possible way.

> Metrics of JobManager and TaskManager overwrite each other in pushgateway
> -------------------------------------------------------------------------
>
>                 Key: FLINK-21309
>                 URL: https://issues.apache.org/jira/browse/FLINK-21309
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Metrics
>    Affects Versions: 1.9.0, 1.10.0, 1.11.0
>         Environment: 1. Components :
> Flink 1.9.0/1.10.0/1.11.0 + Prometheus + Pushgateway + Yarn
> 2. Metrics Configuration in flink-conf.yaml :
> {code:java}
> metrics.reporter.promgateway.class: 
> org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
> metrics.reporter.promgateway.jobName: myjob
> metrics.reporter.promgateway.randomJobNameSuffix: false{code}
>  
>            Reporter: jiguodai
>            Priority: Major
>         Attachments: image-2021-02-05-21-07-42-292.png
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
>     When a Flink job runs on YARN, the metrics of the JobManager and 
> TaskManagers overwrite each other. The phenomenon is that in one second you 
> can find only JobManager metrics in the Pushgateway web UI, while in the next 
> second you can find only TaskManager metrics; the two kinds of metrics appear 
> alternately. A TaskManager metric in Grafana will intermittently look like 
> the graph below (the TaskManager metric disappears from Grafana whenever the 
> JobManager metrics overwrite it):
> !image-2021-02-05-21-07-42-292.png!
>     The real reason is that the Flink PrometheusPushGatewayReporter uses PUT 
> instead of POST to push metrics to the Pushgateway; moreover, the 
> TaskManagers and the JobManager use the same jobName (the only grouping key), 
> which we configured in flink-conf.yaml.
>     Although the REST URLs are the same, as below,
> {code:java}
> /metrics/job/<JOB_NAME>{/<LABEL_NAME>/<LABEL_VALUE>}
> {code}
> PUT and POST cause different results, as we can see below:
>  * PUT is used to push a group of metrics. All metrics with the grouping key 
> specified in the URL are replaced by the metrics pushed with PUT.
>  * POST works exactly like the PUT method but only metrics with the same name 
> as the newly pushed metrics are replaced.
>     For these reasons, it is better to use POST to push metrics to the 
> Pushgateway, preventing JobManager and TaskManager metrics from overwriting 
> each other, so that we get continuous graphs in Grafana. One might say that 
> we can set
> {code:java}
> metrics.reporter.promgateway.randomJobNameSuffix: true{code}
> in flink-conf.yaml; this way, the jobName from different nodes will have a 
> random suffix and metrics will no longer overwrite each other. However, we 
> should be aware that most users tend to use jobName as a filter condition in 
> PromQL, and using regular expressions to match the exact jobName will degrade 
> the speed of data retrieval in Prometheus.
>     Every time somebody asks on the Flink mailing list why their metrics in 
> Grafana are discontinuous, I tell them to change the push style from PUT to 
> POST and then repackage the flink-metrics-prometheus module. So, why don't we 
> solve the problem permanently now? I sincerely hope to have the chance to 
> solve it.
> Related links:
> [https://github.com/prometheus/pushgateway#put-method]
> [https://github.com/prometheus/pushgateway/issues/308]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)