Hello,

We have a test project where we are using ceph+openstack.

Today we had some problems with this setup and had to force-reboot the
server. Afterwards, the partition where we keep the ceph journal could no
longer be mounted.

When we checked it, we got this:

btrfsck /dev/mapper/vg_ssd-ceph_ssd
Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
checking extents
checking fs roots
root 5 inode 257 errors 80
Segmentation fault


Considering that we are running ceph on btrfs, could we reformat the
journal partition and continue our work? Or would that kill the entire
node? Losing the last few minutes of data before the crash is acceptable
to us.
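In case it helps the discussion, here is a rough sketch of what "format the journal and continue" might look like for a single OSD. This is only an assumption about our intended procedure, not something we have tested; the OSD id, mount path, and init commands below are placeholders, and only the journal device name comes from the btrfsck output above.

```shell
# Hypothetical recovery sketch -- OSD id and mount path are assumptions.
# This discards the journal contents; recent un-flushed writes are lost.
OSD_ID=0                                  # assumed OSD id
JOURNAL_DEV=/dev/mapper/vg_ssd-ceph_ssd   # journal device from the btrfsck output

# 1. Stop the affected OSD so nothing touches the journal.
sudo service ceph stop osd.$OSD_ID

# 2. Re-create the filesystem on the damaged journal partition.
sudo mkfs.btrfs -f "$JOURNAL_DEV"

# 3. Mount it back where the OSD expects its journal (path is an assumption).
sudo mount "$JOURNAL_DEV" /var/lib/ceph/osd/ceph-$OSD_ID/journal-mount

# 4. Build a fresh, empty journal for this OSD.
sudo ceph-osd -i "$OSD_ID" --mkjournal

# 5. Restart the OSD and let the cluster recover via peering/backfill.
sudo service ceph start osd.$OSD_ID
```

Is this roughly the right approach, or is there a step here that would endanger the rest of the node?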

Best regards,
Cristian Falcas
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
