Correct, per-image RBD stats collection is disabled by default for performance reasons [0].
To enable per-pool statistics, you can either provide a list of pools,
or use a wildcard as described in [1]:
ceph config set mgr mgr/prometheus/rbd_stats_pools "*"
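For example, to limit collection to a couple of specific pools instead of all of them (the pool names below are just placeholders), and then check that per-image metrics actually show up on the active mgr's Prometheus endpoint (port 9283 by default, adjust host/port for your setup):
ceph config set mgr mgr/prometheus/rbd_stats_pools "vms images"
curl -s http://<active-mgr>:9283/metrics | grep ceph_rbd_
If collection is working you should see counters like ceph_rbd_read_ops, ceph_rbd_write_ops and ceph_rbd_read_bytes/ceph_rbd_write_bytes, labeled with pool and image name (at least that's how the prometheus module docs describe them).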
And you might need to enable the iostat mgr module:
ceph mgr module enable iostat
But I assume that the module is already enabled since you are able to
see the stats with rbd perf image iostat.
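If you want to double-check before changing anything, you can list the enabled mgr modules and look at the current value of the pool setting (exact output format differs a bit between releases):
ceph mgr module ls | grep iostat
ceph config get mgr mgr/prometheus/rbd_stats_pools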
Grafana should then display the stats.
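As a starting point for a panel, a simple PromQL query along these lines should surface the busiest images, assuming the metric and label names from the prometheus module docs (just a sketch, not taken from an existing dashboard):
topk(10, rate(ceph_rbd_write_ops[5m]) + rate(ceph_rbd_read_ops[5m]))
The result is keyed by pool/image labels, which maps fairly naturally to an "rbd perf image iotop"-style view.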
[0] https://docs.ceph.com/en/latest/cephadm/services/monitoring/#setting-up-rbd-image-monitoring
[1] https://docs.ceph.com/en/latest/mgr/prometheus/#prometheus-rbd-io-statistics
Quoting Marc <m...@f1-outsourcing.eu>:
That can't be true, I have a Grafana dashboard somewhere that shows
this. I think you need to enable a plugin in the mgr or something.
What do people use to store per-RBD-image I/O stats?
To retrospectively see which VM was keeping a Ceph cluster busy in terms
of I/O, I'd like to track `rbd perf image iostat` and `rbd perf image
iotop`-like metrics.
I've looked at the LibreNMS plugin [1], Prometheus metrics in the
manager [2], and Ceph ingestion in Elasticsearch [3].
None of them seem to export per-RBD metrics. I can't find much
information about this use case on the mailing list either, though it's
hard for me to imagine I'm the only one with this need.
How do other users handle this?
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io