http://tracker.ceph.com/issues/22796

I was curious if anyone here had any ideas or experience with this
problem.  I created the tracker for this yesterday when I woke up to find
all 3 of my SSD OSDs not running and unable to start due to this segfault.
These OSDs are in my small home cluster and hold the cephfs_cache and
cephfs_metadata pools.

To recap: I upgraded from 10.2.10 to 12.2.2, successfully swapped out my 9
OSDs to Bluestore, reconfigured my crush rules to use OSD device classes,
failed to remove the CephFS cache tier due to
http://tracker.ceph.com/issues/22754, created these 3 SSD OSDs, and updated
the cephfs_cache and cephfs_metadata pools to use the replicated_ssd crush
rule.  Fast forward two days of this working great, and I woke up to all 3
of them crashed and unable to start.  An OSD log with debug bluestore = 5
is attached to the tracker linked at the top of this email.
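
For reference, the device-class change was roughly the following (rule and
pool names as I set them; treat the exact commands as approximate, from
memory):

    ceph osd crush rule create-replicated replicated_ssd default host ssd
    ceph osd pool set cephfs_cache crush_rule replicated_ssd
    ceph osd pool set cephfs_metadata crush_rule replicated_ssd

and the extra logging came from setting this in ceph.conf on the OSD hosts:

    [osd]
    debug bluestore = 5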

My CephFS is completely down while these 2 pools are inaccessible.  The
OSDs themselves are intact if I need to move the data off to the HDDs
manually or something.  Any help is appreciated.
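
If it does come to pulling the data off by hand, my assumption is that I
would export the PGs from the stopped SSD OSDs with ceph-objectstore-tool
and import them on other OSDs, something along these lines (the OSD id and
pgid below are just placeholders; I have not actually tried this yet):

    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 --op list-pgs
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 \
        --pgid 5.0 --op export --file /root/5.0.export

with a matching --op import run against the target OSD (also stopped) for
each exported PG.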
