Hi,

I have a BTRFS RAID1 filesystem on 6x 3TB HDDs; the HBA card failed and
caused some corruption.
When I try to mount it, I get:
$ mount /dev/sdt /mnt
mount: /mnt/: wrong fs type, bad option, bad superblock on /dev/sdt, missing codepage or helper program, or other error
$ dmesg | tail -n 9
[  617.158962] BTRFS info (device sdt): disk space caching is enabled
[  617.158965] BTRFS info (device sdt): has skinny extents
[  617.756924] BTRFS info (device sdt): bdev /dev/sdl errs: wr 0, rd 0, flush 0, corrupt 473, gen 0
[  617.756929] BTRFS info (device sdt): bdev /dev/sdj errs: wr 31626, rd 18765, flush 178, corrupt 5841, gen 0
[  617.756933] BTRFS info (device sdt): bdev /dev/sdg errs: wr 6867, rd 2640, flush 178, corrupt 1066, gen 0
[  631.353725] BTRFS warning (device sdt): sdt checksum verify failed on 21057101103104 wanted 0x753cdd5f found 0x9c0ba035 level 0
[  631.376024] BTRFS warning (device sdt): sdt checksum verify failed on 21057101103104 wanted 0x753cdd5f found 0xb908effa level 0
[  631.376038] BTRFS error (device sdt): failed to read block groups: -5
[  631.422811] BTRFS error (device sdt): open_ctree failed

$ uname -r
5.9.14-arch1-1
$ btrfs --version
btrfs-progs v5.9
$ btrfs check /dev/sdt
Opening filesystem to check...
checksum verify failed on 21057101103104 found 000000B9 wanted 00000075
checksum verify failed on 21057101103104 found 0000009C wanted 00000075
checksum verify failed on 21057101103104 found 000000B9 wanted 00000075
Csum didn't match
ERROR: failed to read block groups: Input/output error
ERROR: cannot open file system

$ btrfs filesystem show
Label: 'RAID'  uuid: 8aef11a9-beb6-49ea-9b2d-7876611a39e5
	Total devices 6 FS bytes used 4.69TiB
	devid    1 size 2.73TiB used 1.71TiB path /dev/sdt
	devid    2 size 2.73TiB used 1.70TiB path /dev/sdl
	devid    3 size 2.73TiB used 1.71TiB path /dev/sdj
	devid    4 size 2.73TiB used 1.70TiB path /dev/sds
	devid    5 size 2.73TiB used 1.69TiB path /dev/sdg
	devid    6 size 2.73TiB used 1.69TiB path /dev/sdc


My guess is that some drives dropped out while the kernel was still
writing to the rest, causing the inconsistency.
There should be some way to find out which drives have the most
up-to-date data and assume those copies are correct.
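For example, I assume something like this (just an untested sketch) could
compare the superblock generation on each device, to see which copies are
the newest:

$ for d in /dev/sdt /dev/sdl /dev/sdj /dev/sds /dev/sdg /dev/sdc; do
>   printf '%s: ' "$d"
>   btrfs inspect-internal dump-super "$d" | grep '^generation'
> done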
I tried mounting with
$ mount -o ro,degraded,rescue=usebackuproot /dev/sdt /mnt
but that made no difference.
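If the filesystem can't be repaired in place, I guess I could at least
try to pull the data off with btrfs restore, starting with a dry run to
see what's reachable (/mnt/recovery is just a placeholder destination):

$ btrfs restore -D -v /dev/sdt /mnt/recovery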

So, any ideas on how to fix this filesystem?

Thanks!

Best regards,
Dāvis
