"I have a Prometheus setup that monitors metrics exposed by my own
services. This works fine for a single instance, but once I start scaling
them, Prometheus gets completely confused and starts tracking incorrect
values.
All services are running on a single node, through docker-compose.
This is the job in the scrape_configs:
  - job_name: 'wowanalyzer'
    static_configs:
      - targets: ['prod:8000']
Each instance of prod tracks metrics in its own memory and serves them at
/metrics. I'm guessing Prometheus picks a random container each time it
scrapes, which leads to the huge increase in counts recorded, building up
over time. Instead, I'd like Prometheus to read /metrics on all instances
simultaneously, regardless of the number of instances active at that time."
Reference:
https://stackoverflow.com/questions/53308951/with-prometheus-how-to-monitor-a-scaled-docker-service-where-each-instance-serve
This question was asked almost 2 years ago and has no solution. I think
this feature would be really nice and useful!
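For what it's worth, one workaround for docker-compose on a single node is
DNS-based service discovery instead of a static target. Docker's embedded DNS
returns one A record per replica of a compose service, so Prometheus can
discover every container and scrape each one as a separate target. A minimal
sketch, assuming the compose service is named `prod` and each replica serves
/metrics on port 8000 as in the config above:

scrape_configs:
  - job_name: 'wowanalyzer'
    dns_sd_configs:
      - names: ['prod']   # compose service name; resolves to one A record per replica
        type: 'A'
        port: 8000        # required for A records, since they carry no port

With this, each replica gets its own `instance` label (its container IP), so
the per-container counters are no longer conflated into one series, and targets
come and go automatically as the service is scaled up or down.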
--
You received this message because you are subscribed to the Google Groups
"Prometheus Users" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/prometheus-users/306c7cd7-f60f-45af-bbe8-89624d783574%40googlegroups.com.