Hello Ceph users,

We updated our cluster from 10.2.7 to 10.2.11. A few hours after the
update, one OSD crashed.
When we tried to add that OSD back to the cluster, two other OSDs
started crashing with segmentation faults. We had to mark all three
OSDs down because we had stuck PGs and blocked operations, and the
cluster status was HEALTH_ERR.

We have tried various ways to re-add the OSDs to the cluster, but
after a while they start crashing and won't start anymore. Some time
later they can be started again and marked in, but after some
rebalancing they start crashing immediately after being started.
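
For reference, the down/in marking and the daemon restarts were done
with the standard commands, roughly like this (<ID> stands for the
affected OSD's id):

  # mark a crashed OSD down and make sure its daemon is stopped
  ceph osd down osd.<ID>
  systemctl stop ceph-osd@<ID>

  # later, when trying to bring it back
  systemctl start ceph-osd@<ID>
  ceph osd in osd.<ID>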

Here are some logs:
https://pastebin.com/nCRamgRU

Do you know of any existing bug report that might be related? (I
couldn't find anything).

I will happily provide any information that would help solve this issue.

Thank you,
Alex Cucu