Hi all,
We have a 10-node cluster used for S3 with radosgw.
The cluster was initially installed with ceph-ansible at version 18.
Some months ago an external consultant converted it to cephadm and
upgraded it to 19.2.3.
Almost everything is fine, but we have some issues with the dashboard
and Grafana graphs.
In Dashboard → Cluster → OSDs → Overall Performance we can't see the
"OSD Read Latencies" and "OSD Write Latencies" graphs; they show red
triangle icons with this error popup:
"execution: found duplicate series for the match group
{ceph_daemon=\"osd.0\"} on the right hand-side of the operation:
[{ceph_daemon=\"osd.0\",
cluster=\"6e06959e-3ef3-4017-a467-b1d482bc7269\",
instance=\"ceph_cluster\", job=\"ceph\"}, {ceph_daemon=\"osd.0\",
cluster=\"6e06959e-3ef3-4017-a467-b1d482bc7269\", instance=\"node1\",
job=\"ceph-exporter\"}];many-to-many matching not allowed: matching
labels must be unique on one side"
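
From what I can tell, the failing panels divide a latency sum by a
count and join the two series on ceph_daemon, so when the same OSD is
scraped from two jobs each side of the join matches two series. As a
sketch of what I tried in a copied dashboard (the metric names are from
the error above; pinning to job="ceph" and the 5m rate window are my
assumptions):

```
# Restricting both sides to a single job label makes the
# ceph_daemon match unique again:
rate(ceph_osd_op_r_latency_sum{job="ceph"}[5m])
  / on (ceph_daemon)
rate(ceph_osd_op_r_latency_count{job="ceph"}[5m])
```

This renders correctly in my editable copy, but as mentioned below I
don't know how to plug such a copy back into the Ceph dashboard.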
After some investigation I found that there are duplicate metrics in
the Prometheus DB, one coming from Ceph (the mgr prometheus module, I
think) and one from ceph-exporter. For example, the metric
ceph_osd_op_r_latency_sum
is present twice with different job labels (the values themselves are
identical).
Every node runs a ceph-exporter container, and three nodes run mgr
services.
I tried modifying the Grafana dashboard queries to use only one of the
two metrics by specifying the job label, but the dashboards are
read-only. (I can make an editable copy and that works, but I don't
know how to make the Ceph dashboard use the new Grafana dashboard.)
I read in the Ceph documentation that ceph-exporter is no longer
required, but on another cluster recently set up with cephadm 19.2.3
from the very start, ceph-exporter containers are present on every
node, so I'm not sure whether I should keep the ceph-exporter
containers running.
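
If it comes to dropping one of the two sources, my assumption (not yet
tried against production, so please correct me) is that the relevant
knobs would be something like:

```
# List the services cephadm manages; ceph-exporter shows up here:
ceph orch ls

# Remove the ceph-exporter service if it is redundant:
ceph orch rm ceph-exporter

# ...or, alternatively, disable the mgr-side exporter instead:
ceph mgr module disable prometheus
```

But I don't know which of the two is the supported source going
forward, hence my question.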
What is the right way to extract these metrics from Ceph?
Thank you all
Francesco Usseglio
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]