To conclude this story: we finally discovered that one of our users was
running a Prometheus exporter (s3_exporter) that constantly listed the
contents of their buckets, which hold millions of objects. That really
didn't play well with Ceph. Two of these exporters were generating
~700k read IOPS on the
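For context, an exporter like that effectively re-walks the entire bucket
on every Prometheus scrape. Below is a minimal sketch of that behaviour
using boto3; the endpoint, credentials, and bucket name are hypothetical
placeholders, not details from this thread.

    import boto3

    # Hypothetical endpoint and credentials; replace with real values.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    def count_objects(bucket: str) -> int:
        # ListObjectsV2 returns at most 1000 keys per call, so a bucket with
        # millions of objects costs thousands of requests per scrape; each
        # one is served from the RGW bucket index, i.e. cluster read IOPS.
        total = 0
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            total += page.get("KeyCount", 0)
        return total

    print(count_objects("some-huge-bucket"))  # hypothetical bucket name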
Hi ceph-users,
I'm not sure if this mail got sent correctly; my colleague seems not to
have received it.
Either way, we've managed to replicate this issue with a local http_check
test. The ceph-mgr seems to go down on every first visit, and works
perfectly fine right after a couple of re-visits.
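For anyone trying to reproduce: a simple probe loop along these lines will
show whether the first hit fails while subsequent ones succeed. The mgr URL
is a hypothetical placeholder, since the message does not say which
endpoint the http_check targets.

    import time
    import urllib.error
    import urllib.request

    MGR_URL = "http://ceph-mgr.example.com:9283/metrics"  # hypothetical

    for attempt in range(1, 6):
        try:
            with urllib.request.urlopen(MGR_URL, timeout=5) as resp:
                print(f"attempt {attempt}: HTTP {resp.status}")
        except (urllib.error.URLError, TimeoutError) as exc:
            # With the behaviour described above, attempt 1 would land here,
            # while later attempts succeed once the mgr has respawned.
            print(f"attempt {attempt}: failed ({exc})")
        time.sleep(2)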
Dear Kai Stian Olstad,
Thank you for the information, it's good knowledge for me.
On Thu, 28 Dec 2023 at 15:06, Kai Stian Olstad <
ceph+l...@olstad.com> wrote:
> On 27.12.2023 04:54, Phong Tran Thanh wrote:
> > Thank you for your knowledge. I have a question. Which pool is affected
On 27.12.2023 04:54, Phong Tran Thanh wrote:
> Thank you for your knowledge. I have a question. Which pool is affected
> when the PG is down, and how can I show it?
> When a PG is down, is only one pool affected, or are multiple pools
> affected?

If only one PG is down, only one pool is affected.
The name of the pool
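As for showing which pool is affected: Ceph PG ids have the form
{pool-id}.{pg-hash}, so the number before the dot identifies the pool, and
"ceph osd lspools" maps that id to a name. A minimal sketch, assuming the
ceph CLI is on PATH and emits the usual poolnum/poolname JSON keys:

    import json
    import subprocess

    def pool_name_for_pg(pg_id: str) -> str:
        # PG ids look like "5.1a"; the part before the dot is the pool id.
        pool_id = int(pg_id.split(".")[0])
        out = subprocess.check_output(
            ["ceph", "osd", "lspools", "--format", "json"]
        )
        # Expected shape: [{"poolnum": 5, "poolname": "rbd"}, ...]
        for pool in json.loads(out):
            if pool["poolnum"] == pool_id:
                return pool["poolname"]
        raise KeyError(f"no pool with id {pool_id}")

    # Example: feed it a PG id reported down by "ceph health detail".
    print(pool_name_for_pg("5.1a"))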