We recently did some work on the Ceph cluster, and a few disks ended up
offline at the same time. There are now 6 PGs stuck in a "remapped"
state, and they all report the same recovery state:

recovery_state:
  0:
    name: Started/Primary/WaitActingChange
    enter_time: 2020-10-21 18:48:02.034430
    comment: waiting for pg acting set to change
  1:
    name: Started
    enter_time: 2020-10-21 18:48:01.752957
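
For reference, the block above is the recovery_state section as returned by
a per-PG query. A minimal sketch of how to pull it, with <pgid> standing in
as a placeholder for one of the stuck placement group IDs, looks roughly like:

    ceph pg dump_stuck unclean
    ceph pg <pgid> query | jq '.recovery_state'

The first command lists PGs that are stuck unclean; the second dumps the full
recovery state for a single PG.
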
Any ideas?

Mac Wynkoop