[ceph-users] Re: CEPH Cluster performance review

2023-11-13 Thread Alexander E. Patrakov
Hello Mosharaf,

There is an automated service available that will criticize your cluster:
https://analyzer.clyso.com/#/analyzer

On Sun, Nov 12, 2023 at 12:03 PM Mosharaf Hossain
<mosharaf.hoss...@bol-online.com> wrote:
> Hello Community
>
> Currently, I operate a CEPH Cluster utilizing Ceph

[ceph-users] Re: CEPH Cluster performance review

2023-11-12 Thread Peter Grandi
>>> during scrubbing, OSD latency spikes to 300-600 ms,
>> I have seen Ceph clusters spike to several seconds per IO
>> operation as they were designed for the same goals.
>>> resulting in sluggish performance for all VMs. Additionally,
>>> some OSDs fail during the scrubbing process.
>> Most

[ceph-users] Re: CEPH Cluster performance review

2023-11-12 Thread Peter Grandi
> during scrubbing, OSD latency spikes to 300-600 ms,

I have seen Ceph clusters spike to several seconds per IO
operation as they were designed for the same goals.

> resulting in sluggish performance for all VMs. Additionally,
> some OSDs fail during the scrubbing process.

Most likely they time
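[For readers hitting the same symptom: scrub impact on client latency is commonly reduced by throttling scrubbing. A hedged sketch, using standard Ceph config options; the values shown are illustrative starting points, not recommendations from this thread, and should be validated against your own workload.]

```shell
# Throttle scrubbing so it competes less with client IO (illustrative values).
# osd_max_scrubs: concurrent scrub operations per OSD (default is low already;
# keeping it at 1 avoids overlapping scrubs on one OSD).
ceph config set osd osd_max_scrubs 1

# osd_scrub_sleep: seconds to sleep between scrub chunks; increasing this
# spreads scrub work out over time at the cost of longer scrub duration.
ceph config set osd osd_scrub_sleep 0.2

# Optionally confine deep scrubs to off-peak hours.
ceph config set osd osd_scrub_begin_hour 1
ceph config set osd osd_scrub_end_hour 6
```

These commands require a running cluster; verify the resulting values with `ceph config get osd osd_scrub_sleep` before relying on them.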

[ceph-users] Re: CEPH Cluster performance review

2023-11-11 Thread Anthony D'Atri
I'm going to assume that ALL of your pools are replicated with size 3, since
you didn't provide that info, and that all but the *hdd pools are on SSDs.

`ceph osd dump | grep pool`

Let me know if that isn't the case. With that assumption, I make your pg
ratio to be ~ 57, which is way too low.
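[The PG ratio mentioned above is PG replicas per OSD: total PGs times replication size, divided by the OSD count. A minimal sketch of the arithmetic; the pool and OSD counts below are hypothetical placeholders, since the original message does not include them — substitute the numbers from `ceph osd dump | grep pool` and `ceph osd stat` on your cluster.]

```shell
# Hypothetical example figures (not from the thread):
total_pgs=1216   # sum of pg_num across all pools
replication=3    # assumed replicated size, per the message above
num_osds=64      # total number of OSDs

# PG ratio = PG replicas hosted per OSD
echo $(( total_pgs * replication / num_osds ))   # → 57
```

A ratio near 57 means each OSD holds far fewer PGs than the commonly cited target of roughly 100-200 PG replicas per OSD, which is why it is called "way too low" here.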