So, since I didn't receive any suggestions and couldn't find an existing solution, I created a tool that helps with this issue: https://github.com/juliofalbo/docker-compose-prometheus-service-discovery
On Monday, May 18, 2020 at 20:19:58 UTC+2, Júlio Falbo wrote:

> "I have a Prometheus setup that monitors metrics exposed by my own
> services. This works fine for a single instance, but once I start scaling
> them, Prometheus gets completely confused and starts tracking incorrect
> values.
>
> All services are running on a single node, through docker-compose.
>
> This is the job in the scrape_configs:
>
>     - job_name: 'wowanalyzer'
>       static_configs:
>         - targets: ['prod:8000']
>
> Each instance of prod tracks metrics in its memory and serves them at
> /metrics. I'm guessing Prometheus picks a random container each time it
> scrapes, which leads to the huge increase in counts recorded, building up
> over time. Instead, I'd like Prometheus to read /metrics on all instances
> simultaneously, regardless of the number of instances active at that time."
>
> Reference:
> https://stackoverflow.com/questions/53308951/with-prometheus-how-to-monitor-a-scaled-docker-service-where-each-instance-serve
>
> This question was asked almost two years ago and still has no answer. I think
> this feature would be really nice and useful!
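For reference, one built-in approach worth mentioning alongside the tool: on a docker-compose network, Compose's internal DNS resolves a service name to the IPs of all its replicas, so Prometheus can discover every scaled instance via DNS A-record service discovery instead of a single static target. A minimal sketch (the service name `prod` and port `8000` are taken from the question above; the refresh interval is an illustrative value):

```yaml
scrape_configs:
  - job_name: 'wowanalyzer'
    dns_sd_configs:
      # Compose's embedded DNS returns one A record per running
      # replica of the 'prod' service, so all instances are scraped.
      - names: ['prod']
        type: A
        port: 8000
        refresh_interval: 30s
```

Note that each scrape target is then labeled by container IP, so metrics from different replicas stay separate rather than being mixed into one series. The main caveat, which the linked tool also tries to address, is that bare container IPs are not stable identifiers across restarts.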

