I'm setting up a Kubernetes cluster for a user. They need to run multiple
deployments, each with its own metrics for pods/deployments/services, and
each with its own bearer token for writing to remote storage. Think of a
big company that owns the cluster and has multiple departments, where each
department owns a separate bearer token and remote storage endpoint.
What's the best way to set up Prometheus to scrape the metrics and send
them to the remote storage?
I can think of two ways:
1. Set up a Prometheus pod for each deployment (see the first sketch below).
   1. Pro: easy and safe; different deployments get their own tokens and
      remote storage.
   2. Con: possibly bad performance? I'm not sure. Each Prometheus would
      collect metrics from all the cAdvisors, so with many deployments
      there would be many Prometheus instances each scraping the whole
      cluster's metrics, which could put a lot of pressure on the cluster.
2. Set up an HA Prometheus for the whole cluster and write my own program
   that scrapes each department's metrics and sends them to its remote
   storage (but see the second sketch below).
   1. Pro: seems lighter? I'm not sure; such a program would also be doing
      something similar to what Prometheus already does.
   2. Con: needs a lot of code to implement the scraping and remote-write
      parts.
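
For option 1, I imagine each department's Prometheus could be scoped to that
department's namespace so that it doesn't scrape the whole cluster. A rough
sketch of what I mean, assuming one namespace per department (the namespace
name, remote storage URL, and token path are all placeholders):

```yaml
# Hypothetical config for one department's Prometheus instance.
# "dept-a", the URL, and the token path are placeholders.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: "dept-a-pods"
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - dept-a   # discover pods only in this department's namespace
    relabel_configs:
      # keep only pods that opt in via the usual scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"

remote_write:
  - url: "https://storage.dept-a.example.com/api/v1/write"
    bearer_token_file: /etc/prometheus/secrets/dept-a-token
```

With the namespace restriction, each instance would only discover its own
pods, which might avoid the cluster-wide scraping pressure I described above.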
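
For option 2, maybe the custom program isn't even needed: as far as I know, a
single Prometheus supports multiple remote_write entries, each with its own
bearer token and with write_relabel_configs to filter which series go to
which endpoint. A sketch, assuming every series carries a "namespace" label
added during scrape relabeling (URLs and token paths are placeholders):

```yaml
# Hypothetical remote_write section for one central (HA) Prometheus.
remote_write:
  - url: "https://storage.dept-a.example.com/api/v1/write"
    bearer_token_file: /etc/prometheus/secrets/dept-a-token
    write_relabel_configs:
      - source_labels: [namespace]
        action: keep       # ship only dept-a's series to this endpoint
        regex: dept-a
  - url: "https://storage.dept-b.example.com/api/v1/write"
    bearer_token_file: /etc/prometheus/secrets/dept-b-token
    write_relabel_configs:
      - source_labels: [namespace]
        action: keep
        regex: dept-b
```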
Does anyone have any suggestions? Or are there any other better solutions?