[ceph-users] Re: ceph PGs issues

2021-06-15 Thread Aly, Adel
Hi Reed, Thank you for getting back to us. We did indeed have several disk failures at the same time. Regarding the OSD map: we had an OSD that failed and needed to be removed, but we didn't update the crushmap. The question here is: is it safe to update the OSD crushmap without affecting the data
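The question above is about removing a dead OSD from the CRUSH map. A sketch of the standard removal sequence is below; it assumes the OSD is already stopped and marked down, and `<id>` is a placeholder for the actual OSD id. This needs a live cluster to run and should only be applied after confirming the OSD is truly dead.

```shell
# Standard sequence for removing a failed OSD (run against a live cluster).
# <id> is a placeholder for the real OSD id.

ceph osd out osd.<id>            # ensure data is remapped away from the OSD
ceph osd crush remove osd.<id>   # remove the OSD from the CRUSH map
ceph auth del osd.<id>           # delete its authentication key
ceph osd rm osd.<id>             # remove it from the OSD map
```

Note that removing an entry from the CRUSH map changes the map weights and will trigger rebalancing/backfill traffic; it does not by itself touch the data on other OSDs.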

[ceph-users] Re: ceph PGs issues

2021-06-15 Thread Reed Dier
Note: I am not entirely sure here, and would love other input from the ML about this, so take this with a grain of salt. You don't show any unfound objects, which I think is excellent news as far as data loss goes.

> 96 active+clean+scrubbing+deep+repair

The deep scrub + repair seems
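To double-check the "no unfound objects" observation and watch the scrubbing/repair PGs mentioned above, a few standard ceph status commands can be used (this is a sketch; `<pgid>` is a placeholder, and the commands need a live cluster):

```shell
# Confirm there are no unfound objects and inspect scrub/repair progress.
ceph health detail | grep -i unfound   # no output here is good news
ceph pg dump_stuck inactive            # any PGs stuck inactive?
ceph pg ls scrubbing                   # PGs currently being scrubbed
ceph pg <pgid> query                   # detailed per-PG state; <pgid> is a placeholder
```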

[ceph-users] Re: ceph PGs issues

2021-06-15 Thread Reed Dier
You have incomplete PGs, which means you have inactive data, because the data isn't there. This will typically only happen when you have multiple concurrent disk failures, or something like that, so I think there is some missing info.

> 1 osds exist in the crush map but not in the
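Diagnosing incomplete PGs usually starts with listing them and asking each PG what is blocking peering. A sketch (live cluster required; `<pgid>` is a placeholder):

```shell
# List incomplete PGs and find which missing OSDs are blocking peering.
ceph pg ls incomplete                  # which PGs are incomplete
ceph pg <pgid> query                   # look for "blocked_by" in recovery_state
ceph osd tree                          # spot CRUSH entries for OSDs that no longer exist
```

Comparing `ceph osd tree` against the actual OSD list is one way to find the stale CRUSH entry the warning above ("osds exist in the crush map but not in the …") is referring to.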