It is marathon_sd_config (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#marathon_sd_config). Prometheus gets the instances from service discovery (Marathon) and scrapes them directly, so I think something is wrong with the Prometheus marathon_sd module.
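For context, a minimal sketch of what such a scrape job might look like (the job name and Marathon server URL here are assumptions for illustration, not taken from the thread):

```yaml
scrape_configs:
  - job_name: 'marathon-apps'        # hypothetical job name
    marathon_sd_configs:
      - servers:
          # Marathon API endpoint; adjust for your cluster (assumption)
          - 'http://marathon.mesos:8080'
```

With this setup Prometheus discovers every running task from the Marathon API and scrapes each task endpoint directly, which is why both the old and the new task appear as targets during a rolling update.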
On Saturday, April 4, 2020 at 16:30:41 UTC+3, Ben Kochie wrote:
> I assume you're hitting the metrics through some kind of load balancer. Prometheus assumes direct access to each instance of an application, rather than through a load balancer.
>
> On Sat, Apr 4, 2020 at 2:53 PM Ivan Pohodnya <[email protected]> wrote:
>>
>> Hello everyone, I have a question about rolling updates of services (Mesos/Marathon orchestration).
>>
>> When we upgrade a service (or just restart it), two instances of the service can be running on the same server simultaneously for a short period, and each one reports metrics: the new one with reset counters and the old one with the old counter values. Prometheus starts scraping both services, which results in spikes in the metrics (see attachments).
>>
>> Is there any solution other than adding a unique id to each scraped service?
>>
>> --
>> You received this message because you are subscribed to the Google Groups "Prometheus Users" group.
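The spikes described above typically show up in queries that aggregate away the `instance` label, since during the overlap window both the old and the new task contribute series. A hedged illustration (the metric name `http_requests_total` is an assumption, not from the thread):

```promql
# While two instances of the same app run on one host, both series are
# summed here; the extra instance briefly inflates the aggregated rate,
# and the new instance's reset counter can also look like a reset/spike
# in dashboards that graph the raw counter rather than rate().
sum by (job) (rate(http_requests_total[5m]))
```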

