It's a sharded config. That means each target is only scraped by one of
the three nodes (or, put another way, each node only scrapes roughly one
third of the configured targets).
All three nodes have the same config, containing this one scrape job with a
single target:
- job_name: 'node_exporter'
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ['localhost:9100']
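For the sharding itself, that job presumably also carries a hashmod relabel
rule along these lines (a sketch only; the modulus of 3 and the per-node
shard number are assumptions based on your three-node setup):

  relabel_configs:
    - source_labels: [__address__]
      modulus: 3               # number of Prometheus nodes in the shard
      target_label: __tmp_hash
      action: hashmod
    - source_labels: [__tmp_hash]
      regex: '0'               # set to '0', '1' or '2' on each node
      action: keep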
But the string "localhost:9100" always hashes to the same value, so only
the one node whose shard number matches that hash will keep the target; the
other two drop it and never scrape their own node_exporter.
You should use your hashmod config for a big list of remote targets. You
could either include all three nodes as named targets (in which case
they'll all be scraped, but not necessarily each by itself), or you could
have a separate job for scraping localhost which *doesn't* use the hashmod
(sketched below).
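A sketch of that second layout; the job names and remote hostnames here are
placeholders, swap in your own:

scrape_configs:
  - job_name: 'remote_nodes'          # sharded: each Prometheus keeps ~1/3 of these
    static_configs:
      - targets: ['host-a:9100', 'host-b:9100', 'host-c:9100']  # placeholder list
    relabel_configs:
      - source_labels: [__address__]
        modulus: 3
        target_label: __tmp_hash
        action: hashmod
      - source_labels: [__tmp_hash]
        regex: '0'                    # '0', '1' or '2', different on each node
        action: keep

  - job_name: 'self_node_exporter'    # not sharded: every node scrapes its own exporter
    static_configs:
      - targets: ['localhost:9100']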