On 2019/3/11 11:20 AM, Chris Murphy wrote:
> On Sun, Mar 10, 2019 at 7:18 PM Qu Wenruo <quwenruo.bt...@gmx.com>
> wrote:
>> 
>> 
>> 
>> On 2019/3/11 7:09 AM, Chris Murphy wrote:
>>> In the case where superblock 0 at 65536 is valid but stale (older
>>> than the others):
>> 
>> Then this means either the fs is fuzzed, or the FUA implementation
>> of the disk is completely screwed up.
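
(As a side note, whether a copy is stale can be seen by comparing the
generation field across the superblock copies; a minimal sketch, with
/dev/sdX standing in for the real device:

    # dump every superblock copy and pick out its offset and generation
    btrfs inspect-internal dump-super -a /dev/sdX | grep -E '^superblock:|^generation'

The copy reporting the lowest generation is the stale one.)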
> 
> Fuzzed in this case by me.
> 
> (Backstory: On the linux-raid@ list, a user accidentally zeroed the
> first 1MiB of an mdadm array which contains Btrfs, but has a backup of
> that 1MiB. So I was testing in advance the behavior of restoring this
> 1MiB backup; but I'm guessing the working file system may have changed
> after the zeroing, since it was never unmounted, and in fact it
> probably wrote a good replacement super very soon after the zeroing
> anyway. It seems the only thing still missing is the LVM metadata,
> maybe.)
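
(For reference, the backup and restore being described boils down to
copying the first 1MiB with dd; a rough sketch, with /dev/md0 as a
stand-in for the actual array:

    # save the first 1MiB of the array before anything destructive
    dd if=/dev/md0 of=first-mib.img bs=1M count=1
    # write the saved 1MiB back, flushing it to the device
    dd if=first-mib.img of=/dev/md0 bs=1M count=1 conv=fsync

Restoring over a file system that kept writing in the meantime is
exactly what creates the stale-primary situation above.)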
> 
> 
>> So IMHO always using the primary superblock is the designed
>> behavior.
> 
> OK, interesting. So in what cases are the backup supers used? Only by
> `btrfs rescue super`, or by explicit request? E.g. I notice that even
> with an erased primary super signature, a `btrfs check -S1 --repair`
> does not cause the S0 super to be fixed up;

This is because there is nothing to repair, thus no need to commit a
transaction.
If --repair modified anything, then it should fix all the supers.

But indeed, this behavior is a problem.
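
To illustrate the explicit-request case, a backup copy can be selected
by hand; assuming /dev/sdX as a placeholder:

    # read-only check using the first backup superblock (at 64MiB)
    btrfs check --super 1 /dev/sdX

Since a plain check commits nothing, none of the superblock copies get
rewritten.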

> and `btrfs rescue super` lacks an -S flag, so fixing an accidentally
> wiped Btrfs super requires manual intervention.

Normally 'btrfs rescue super' should be enough for an accidentally
wiped btrfs superblock.

If not, then of course we should fix it.
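
For example, something like the following (with /dev/sdX as a
placeholder, and noting the full subcommand name is 'super-recover')
should restore a wiped primary from a good backup copy:

    # verbosely recover bad superblock copies from the good ones
    btrfs rescue super-recover -v /dev/sdX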

Thanks,
Qu
