I don't think there would be any issues with multiple jobs with one target OR one job with multiple targets.
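For reference, scraping several Pushgateways directly (no load balancer) is just a matter of listing them as targets in a single scrape job. A minimal sketch, with hypothetical hostnames:

```yaml
scrape_configs:
  - job_name: pushgateway
    # honor_labels keeps the job/instance labels that clients pushed,
    # instead of overwriting them with the Pushgateway's own target labels
    honor_labels: true
    static_configs:
      - targets:
          # hypothetical hostnames - list each Pushgateway instance directly
          - pushgateway-1.example.com:9091
          - pushgateway-2.example.com:9091
```

Each instance is scraped independently, so Prometheus sees every instance's data on every scrape cycle rather than alternating between them.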
Is the data being exposed by each PG different?

On Sun, Nov 15, 2020 at 9:57 PM Andy Pan <[email protected]> wrote:

> Is there a right path forward for Prometheus to collect metrics from
> multiple push gateways directly instead of going through a load balancer?
>
> On Monday, November 16, 2020 at 10:51:06 AM UTC+8, Andy Pan wrote:
>
>> Can't Prometheus absorb metrics from multiple Pushgateways?
>> My initial thought was to deploy multiple Pushgateways, put a
>> load balancer in front of them, and then have Prometheus consume all
>> of the Pushgateways. Does Prometheus support consuming multiple
>> Pushgateways?
>>
>> On Friday, November 13, 2020 at 5:17:42 PM UTC+8, Stuart Clark wrote:
>>
>>> On 13/11/2020 08:34, Andy Pan wrote:
>>> > I've been investigating the push gateway for the past few days and I
>>> > read this page https://prometheus.io/docs/practices/pushing/ which
>>> > shows some pitfalls when using the push gateway. I noticed there is a
>>> > single-point-of-failure issue with the push gateway, but I was
>>> > confused: wouldn't deploying multiple machines and load balancing
>>> > from the business side solve this problem? Or is it impossible for
>>> > Prometheus to collect metrics from multiple push gateways, only from
>>> > a single one?
>>>
>>> If you have multiple Pushgateway servers behind a load balancer you
>>> would quickly get meaningless data returned.
>>>
>>> For example, if you have 2 servers with Prometheus scraping through the
>>> load balancer, Prometheus would probably alternate between scraping
>>> each one (assuming round-robin balancing). If a system pushes a set of
>>> new metrics, it would only update one of the two servers. After that,
>>> only some scrapes would return the new data, with the old data (on the
>>> other server) being returned the rest of the time.
>>> You could have the scrapes come via the load balancer and then have
>>> the metrics-creating process push to both servers, but there would be
>>> quite a bit of complexity, as you'd need to handle things like service
>>> discovery (how do you know which servers to push to, which might
>>> include dynamic changes if one totally fails and is removed from the
>>> load balancer pool) and retries (if there is a temporary failure you
>>> need to retry so the servers don't contain inconsistent data, again
>>> causing meaningless data on the Prometheus side).
>>>
>>> The Pushgateway does allow you to persist the stored data to disk, so
>>> in the event of a failure a restart wouldn't lose anything, just leave
>>> an availability gap (which could of course mean some pushes of new
>>> data are missed). That sort of failure can be fairly easily detected
>>> and rectified automatically by many orchestration systems - for
>>> example, liveness probes failing in Kubernetes causing a pod to be
>>> rescheduled.
>>>
>>> I have also tied the Pushgateway more closely to the source of the
>>> metrics. So instead of having a single central service which has to
>>> have 100% uptime, have several instances which are used by different
>>> pieces of functionality (e.g. one per namespace or per type of
>>> non-scrapable metrics source). This then reduces the impact of a
>>> temporary failure. There is a small overhead to running multiple
>>> instances, but it is fairly lightweight.
>>>
>>> --
>
> You received this message because you are subscribed to the Google Groups
> "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-users/8972d5ee-17fc-402d-a03f-7c5c66f5ff63n%40googlegroups.com

