> If you solve this problem with virtual threshold metrics, you have to
create a virtual metric per disk you add. You may end up with many metrics.
That's the point: Prometheus *does* scale to millions of metrics, easily.
Personally, I'd also prefer that Prometheus were able to have "virtual"
timeseries for thresholds, where the value is whatever it is right now,
is assumed to be the same for all points forwards and backwards in
history, and is never saved. The idea has been raised, and it has been
rejected. If you want to build this yourself, you could probably do so
using the Remote Read protocol.
But you'll find your life much easier if you just do what everyone
recommends, which is to scrape new timeseries for the thresholds. In
practice, the amount of disk space used is minuscule, because Prometheus
compresses so well: adjacent threshold values in the same timeseries are
identical, so the delta between samples is zero.
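A minimal sketch of that pattern, assuming a hypothetical
disk_free_threshold metric that you export yourself, with instance and
mountpoint labels matching node_exporter's filesystem metrics:

    groups:
      - name: disk
        rules:
          - alert: DiskSpaceLow
            # Compare free space against the threshold scraped as its
            # own timeseries. disk_free_threshold is a hypothetical
            # metric; any small exporter of your own could provide it.
            expr: |
              node_filesystem_avail_bytes
                < on(instance, mountpoint) disk_free_threshold
            for: 10m
            labels:
              severity: warning

When you change a threshold, the alert picks up the new value on the
next evaluation, and the stored timeseries tells you what the threshold
was at any point in the past.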
> switch ( computer-instance, disk-volume )
> {
> case PC1, Volume1: Assign to Alert A.
> case PC3, Volume3: Assign to Alert K.
> default: Assign to Alert M.
> }
You can write that switch statement as a series of alerting rules too;
see the sketch below. It's no different. If the rules get repetitive,
generate them from a templating language of your choice.
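A sketch of that translation, reusing the hypothetical
disk_free_threshold metric from above; label values like PC1 and
Volume1 are placeholders taken straight from the quoted pseudocode:

    groups:
      - name: disk-routing
        rules:
          - alert: AlertA
            # case PC1, Volume1
            expr: |
              node_filesystem_avail_bytes{instance="PC1", mountpoint="Volume1"}
                < on(instance, mountpoint) disk_free_threshold
          - alert: AlertK
            # case PC3, Volume3
            expr: |
              node_filesystem_avail_bytes{instance="PC3", mountpoint="Volume3"}
                < on(instance, mountpoint) disk_free_threshold
          - alert: AlertM
            # default: every volume not handled by a specific case above
            expr: |
              (node_filesystem_avail_bytes
                 < on(instance, mountpoint) disk_free_threshold)
              unless on(instance, mountpoint)
                ( node_filesystem_avail_bytes{instance="PC1", mountpoint="Volume1"}
                  or node_filesystem_avail_bytes{instance="PC3", mountpoint="Volume3"} )

With many such cases, the rule file itself is the natural target for
templating: emit one rule per case plus one default rule from whatever
generator you already use.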