On January 30, 2018 02:16, ^m'e wrote:
> Thanks!
> 
> Got these
> 
>   # ./btrfs.static inspect dump-super -fFa /dev/sdb3 |grep
> backup_tree_root: | sort -u
>         backup_tree_root:    180410073088    gen: 463765    level: 1
>         backup_tree_root:    180415758336    gen: 463766    level: 1
>         backup_tree_root:    180416364544    gen: 463767    level: 1
>         backup_tree_root:    4194304    gen: 463764    level: 1
> 
> but, nada: all have transid failures...

That's why I called it a "small chance".

> 
> The backup snapshots are OK as per original check.

Then you should be OK to restore.

Thanks,
Qu
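
For reference, the backup-root probing suggested further down this thread (dump the superblock, collect the backup_tree_root offsets, then try each with `btrfs check --tree-root`) can be sketched as a small shell script. This is my own sketch, not a command from the thread: the device path /dev/sdb3 and the sample dump-super lines are taken from the messages below, and the `get_backup_roots` helper name is mine.

```shell
#!/bin/sh
# Sketch: pull the backup_tree_root byte offsets out of dump-super
# output, then try each one as an alternate tree root with btrfs check.
# The device /dev/sdb3 matches this thread; adjust for your system.

# Extract the offset column and de-duplicate numerically.
get_backup_roots() {
    grep 'backup_tree_root:' | awk '{ print $2 }' | sort -un
}

# Demonstrate the parsing step on two lines captured in this thread:
sample='        backup_tree_root:    180410073088    gen: 463765    level: 1
        backup_tree_root:    4194304    gen: 463764    level: 1'
printf '%s\n' "$sample" | get_backup_roots

# Against a real (unmounted) device the whole probe would look like:
#   btrfs inspect dump-super -fFa /dev/sdb3 | get_backup_roots |
#   while read -r root; do
#       echo "=== trying tree root $root ==="
#       btrfs check --tree-root "$root" /dev/sdb3
#   done
```

Any offset whose check completes without transid errors is a candidate for recovery; in this particular thread all four backups turned out to be bad.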

> 
> 
> On Mon, Jan 29, 2018 at 3:09 PM, Qu Wenruo <quwenruo.bt...@gmx.com> wrote:
>>
>>
>> On January 29, 2018 22:49, ^m'e wrote:
>>> On Mon, Jan 29, 2018 at 2:04 PM, Qu Wenruo <quwenruo.bt...@gmx.com> wrote:
>>>>
>>>>
>>>> On January 29, 2018 21:58, ^m'e wrote:
>>>>> Thanks for the advice, Qu!
>>>>>
>>>>> I used the system for a while, did some package upgrades -- writing in
>>>>> the suspect corrupted area. Then tried a btrfs-send to my backup vol,
>>>>> and it failed miserably with a nice kernel oops.
>>>>>
>>>>> So I went for a lowmem repair:
>>>>> ----------------------------------------------------------------------------------------
>>>>> # ./btrfsck.static check --repair --mode=lowmem /dev/sdb3 2>&1 | tee
>>>>> /mnt/custom/rescue/btrfs-recovery/btrfs-repair.BTR-POOL.1.log
>>>>> WARNING: low-memory mode repair support is only partial
>>>>> Fixed 0 roots.
>>>>> checking extents
>>>>> checking free space cache
>>>>> checking fs roots
>>>>> ERROR: failed to add inode 28891726 as orphan item root 257
>>>>> ERROR: root 257 INODE[28891726] is orphan item
>>>>
>>>> At least I need to dig into the kernel code further to determine whether
>>>> the orphan inode handling in btrfs-progs is correct or not.
>>>>
>>>> So there won't be any more dirty fixes soon.
>>>>
>>>> Hopefully you can restore the system from a good backup.
>>>>
>>>> At least the problem is limited to a very small range, and it's
>>>> something we could handle easily.
>>>>
>>>> Thanks for all your reports,
>>>> Qu
>>>>
>>>>
>>>
>>> Right.
>>>
>>> Meanwhile, could you please suggest the best course of action? btrfs
>>> rescue or restore?
>>> I have snapshots of my two subvols (rootfs, home -- now fs-checking
>>> them just in case...)
>>
>> Don't run --repair any more.
>> It seems to make the case worse.
>>
>> Meanwhile, the RW mount with the aborted orphan cleanup seems to have
>> screwed up the filesystem.
>>
>> In this case it's pretty hard to recover, but there is still a small chance.
>>
>> Use btrfs inspect dump-super to get the backup roots:
>>
>> # btrfs inspect dump-super -fFa <device> |grep backup_tree_root: | sort
>> | uniq
>>
>> Then try each of the 4 numbers in the following command:
>>
>> # btrfs check --tree-root <number> <device>
>>
>> to see whether any of them is good, i.e. has no transid errors.
>>
>> Thanks,
>> Qu
>>
>>>
>>> Cheers,
>>>
>>>   Marco
>>>
>>
> 
> 
> 
