On 2018-07-16 16:15, Udo Waechter wrote:
> Hello,
> 
> Does no one have any ideas? Do you need more information?
> 
> Cheers,
> udo.
> 
> On 11/07/18 17:37, Udo Waechter wrote:
>> Hello everyone,
>>
>> I have a corrupted filesystem which I can't seem to recover.
>>
>> The machine is:
>> Debian Linux, kernel 4.9 and btrfs-progs v4.13.3
>>
>> I have an HDD RAID5 with LVM, and the volume in question is an LVM
>> volume. On top of that I had a RAID1 SSD cache with lvm-cache.
>>
>> Yesterday both (!) SSDs died within minutes. This led to the
>> corrupted filesystem that I have now.
>>
>> I hope I followed the procedure correctly.
>>
>> What I tried so far:
>> * "mount -o usebackuproot,ro " and "nospace_cache" "clear_cache" and all
>> permutations of these mount options
>>
>> I'm getting:
>>
>> [96926.830400] BTRFS info (device dm-2): trying to use backup root at
>> mount time
>> [96926.830406] BTRFS info (device dm-2): disk space caching is enabled
>> [96926.927978] BTRFS error (device dm-2): parent transid verify failed
>> on 321269628928 wanted 3276017 found 3275985
>> [96926.938619] BTRFS error (device dm-2): parent transid verify failed
>> on 321269628928 wanted 3276017 found 3275985
>> [96926.940705] BTRFS error (device dm-2): failed to recover balance: -5

This means your fs failed to recover the balance.

It is most likely caused by the transid error just one line above.
Normally this means your fs is more or less corrupted, which could be
caused by power loss or something else.
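
The gap between "wanted 3276017" and "found 3275985" suggests the last
~32 transactions never made it to the HDDs, which would fit a dying
writeback cache. As a quick check, the superblock generation can be
compared against those values with:

# btrfs inspect-internal dump-super -f <device> | grep generation

If it matches the "wanted" value, the trees it points to were never
fully written to the backing disks.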

>> [96926.985801] BTRFS error (device dm-2): open_ctree failed
>>
>> The weird thing is that I can't really find information about the
>> "failed to recover balance: -5" error. - There was no rebalancing
>> running when during the crash.

That can only be determined by a tree dump:

# btrfs ins dump-tree -t root <device>
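
If a balance really was interrupted, the root tree will contain a
balance item; its exact key name varies between btrfs-progs versions,
but grepping the dump for it should be enough:

# btrfs ins dump-tree -t root <device> | grep -i balance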

>>
>> * btrfs-find-root: https://pastebin.com/qkjnSUF7 - It bothers me that I
>> don't see any "good generations" as described here:
>> https://btrfs.wiki.kernel.org/index.php/Restore
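
For what it's worth, any tree root bytenr that btrfs-find-root does
report can be fed to "btrfs restore" to attempt extraction without
mounting; a dry run first shows what would be recovered:

# btrfs restore -t <bytenr> -D -v <device> /mnt/recovery
# btrfs restore -t <bytenr> -v -i <device> /mnt/recovery

Here <bytenr> is one of the find-root candidates and /mnt/recovery is
a placeholder destination on a healthy filesystem.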
>>
>> * "btrfs rescue" - it starts, then goes to "looping on XYZ" then stops
>>
>> * "btrfs rescue super-recover -v" gives:
>>
>> All Devices:
>>      Device: id = 1, name = /dev/vg00/...
>> Before Recovering:
>>      [All good supers]:
>>              device name = /dev/vg00/...
>>              superblock bytenr = 65536
>>
>>              device name = /dev/vg00/...
>>              superblock bytenr = 67108864
>>
>>              device name = /dev/vg00/...
>>              superblock bytenr = 274877906944
>>
>>      [All bad supers]:
>>
>> All supers are valid, no need to recover
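
That is expected: the corruption here is in the metadata trees, not in
the superblocks themselves. For completeness, all superblock copies
can be dumped directly with:

# btrfs inspect-internal dump-super -fa <device>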
>>
>>
>> * Unfortunately I did a "btrfs rescue zero-log" at some point :( -
>> as it turns out, that might have been a bad idea
>>
>>
>> * Also, a "btrfs  check --init-extent-tree" - https://pastebin.com/jATDCFZy

That made things worse; fortunately, it should have terminated before
causing more damage.

I'm just curious why people don't try the safest option, "btrfs
check" without any arguments, but go straight for the most dangerous
one.

And "btrfs check" output please.
If possible, "btrfs check --mode=lowmem" is also good for debug.
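
For example, capturing both to files so they can be attached:

# btrfs check <device> 2>&1 | tee btrfs-check.log
# btrfs check --mode=lowmem <device> 2>&1 | tee btrfs-check-lowmem.log

Without --repair, both runs are read-only, so they are safe at this
point.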

Thanks,
Qu

>>
>> The volume contained qcow2 images for VMs. I need only one of
>> those, since one piece of important software decided not to do
>> backups :(
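
If a read-only mount keeps failing, "btrfs restore" can also target a
single file via --path-regex. Assuming (purely as an example) the
image lived at /vms/important.qcow2, something like:

# btrfs restore -v --path-regex '^/(|vms(|/important\.qcow2))$' \
      <device> /mnt/recovery

The nested "(|...)" groups are needed because the regex is matched
against every parent directory of the path as well.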
>>
>> Any help is highly appreciated.
>>
>> Many thanks,
>> udo.
>>
> 
