The exporter consumes almost no resources; running one per node should not
have a noticeable resource impact.
Just run the exporter 1:1 with your nodes; this is the correct design. It
means you are communicating over localhost with the mariadb process, which
also lets you improve security by restricting the exporter user to
connections from localhost.
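For example, the localhost restriction can be expressed directly in the
grant for the monitoring user. This is a minimal sketch: the user name and
password are placeholders, and the exact grants you need depend on which
collectors you enable (the mysqld_exporter README documents the
recommended set).

```sql
-- Sketch: a monitoring user that can only connect from the local machine.
-- MAX_USER_CONNECTIONS caps how many sessions the exporter may hold open.
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'changeme'
  WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'exporter'@'localhost';
```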
I see. I did it with relabeling. But this option could live in the exporter,
because sometimes we have no access to the MySQL host, or there are other
reasons. The multi-target idea would work as well: I have 3 MariaDB nodes,
so I am forced to run one exporter per MariaDB instance, which means 3
exporters.
And to add to this: for the case of managed MySQL, where a sidecar is not
possible, we could add multi-target exporter support.
https://prometheus.io/docs/guides/multi-target-exporter/
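For reference, the pattern described in that guide looks roughly like the
following Prometheus scrape config. This is only a sketch of the general
multi-target pattern (as used by e.g. the blackbox_exporter); the hostnames
and the exporter address are placeholders, and as noted below this is not
something mysqld_exporter itself supports.

```yaml
scrape_configs:
  - job_name: mysql
    static_configs:
      - targets:
          - db1.example.com:3306   # placeholder database hosts
          - db2.example.com:3306
    relabel_configs:
      # Pass the database address to the exporter as ?target=...
      - source_labels: [__address__]
        target_label: __param_target
      # Keep the database, not the exporter, as the instance label.
      - source_labels: [__param_target]
        target_label: instance
      # Actually scrape the single shared exporter.
      - target_label: __address__
        replacement: exporter.example.com:9104  # placeholder exporter address
```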
There's a partial implementation of this in a PR on the exporter, but the
author has not responded to feedback.
This is not supported in the exporter and we have no plans to add it. Most
exporters use a different approach, which we recommend for exporters in
general.
Deploy the exporter as a sidecar alongside the MySQL instance. In Kubernetes,
this means an additional container in the MySQL pod. This solves the
mapping problem, since each exporter serves exactly one MySQL instance.
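A minimal sketch of that sidecar layout is below. The image tags, pod name,
and credentials are placeholders; `DATA_SOURCE_NAME` is the DSN environment
variable used by older mysqld_exporter releases (newer releases use a config
file instead).

```yaml
# Sketch: mysqld-exporter as a sidecar sharing localhost with MySQL.
apiVersion: v1
kind: Pod
metadata:
  name: mysql            # placeholder pod name
spec:
  containers:
    - name: mysql
      image: mysql:8.0   # placeholder image
    - name: exporter
      image: prom/mysqld-exporter
      ports:
        - containerPort: 9104
      env:
        - name: DATA_SOURCE_NAME
          value: "exporter:changeme@(127.0.0.1:3306)/"  # placeholder DSN
```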
I installed mysqld-exporter on Kubernetes, and when I scrape it with
Prometheus the instance label shows the pod IP of the mysqld-exporter
instance, so when the MySqlIsDown alert fires I don't know which MySQL
instance it refers to. I want to add a label to the exposed metrics to show
the MySQL instance it belongs to.
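One way to do this, assuming Prometheus discovers the exporter pods via
Kubernetes service discovery, is to rewrite the instance label from the pod
metadata at scrape time. This is a sketch only; the job name is a
placeholder, and you would normally also add `keep` rules so that only the
exporter pods match.

```yaml
scrape_configs:
  - job_name: mysql-pods   # placeholder job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Use the pod name (which identifies the MySQL instance the sidecar
      # sits next to) instead of the exporter's pod IP as the instance label.
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: instance
```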