On Wed, Jan 16, 2019 at 11:22 AM Christian Schneider <[email protected]> wrote:
>
> Thx for your hints!
>
>  > Your problem could be due to a block layer bug that's been discovered.
>  > It should be fixed in 4.19.8+
>  > https://lwn.net/Articles/774440/
> I looked into the article, and it mentions that the bug occurs when
> no I/O scheduler is used, which is not the case for me. So I would
> rule this out.
>
>  > 'btrfs insp dump-t -b 448888832 <dev>'
>  >
>  > and remove file names before posting it; this might help a dev sort
>  > out what the problem is.
> Have done this, though no file names appear there. This is the output:
>
>
> btrfs inspect-internal dump-tree -b 448888832 /dev/md42
> btrfs-progs v4.19.1
> leaf 448888832 items 29 free space 1298 generation 68768 owner CSUM_TREE
> leaf 448888832 flags 0x1(WRITTEN) backref revision 1

OK, so for some reason that leaf is considered stale. I can't tell if
it really is stale or if the complaint is bogus. The generation for
this leaf is 68768, but the current good tree expects it to be 68773,
which isn't that far off. There's a decent chance it can be repaired,
depending on what was happening at the time of the power failure.

What do you get for:

btrfs rescue super -v <dev>
btrfs insp dump-s -fa <dev>

These are read-only commands and do not change anything on disk; just
to reiterate, I don't recommend 'btrfs check --repair' yet. If the
first command reports that all supers are good, with no bad supers,
then you can try:

btrfs check -b <dev>

which will use the previous backup roots and see if there's anything
that can be done, or whether it falls over with the same complaint. It
is also possible to use your btrfs-find-root results to plug in
specific root addresses with 'btrfs check --tree-root <address> <dev>',
working from the top of the list (the highest generation number) down.
But for starters, just the first two commands above might reveal a
clue.
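If you end up walking the btrfs-find-root candidates by hand, a small
shell helper can order them for you. This is only a sketch: the "Well
block X(gen: Y level: Z)" line format is an assumption about what your
btrfs-progs version prints, and list_roots_by_gen is a hypothetical
helper name, so adjust the sed pattern to match your actual output.

```shell
# Hypothetical helper: read btrfs-find-root output on stdin and print
# candidate tree root addresses sorted by generation, highest first.
# Assumes lines shaped like "Well block 448888832(gen: 68768 level: 0)".
list_roots_by_gen() {
    # pull out "<generation> <block>" pairs, sort by generation
    # descending, then keep just the block addresses
    sed -n 's/.*block \([0-9]*\)(gen: \([0-9]*\).*/\2 \1/p' |
        sort -rn |
        awk '{print $2}'
}

# Usage against the real device (read-only; do NOT add --repair):
#   btrfs-find-root /dev/md42 | list_roots_by_gen |
#   while read -r addr; do
#       echo "=== trying tree root $addr ==="
#       btrfs check --readonly --tree-root "$addr" /dev/md42 && break
#   done
```

That way each candidate root gets a read-only check, starting from the
newest generation, and the loop stops at the first one check is happy
with.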


-- 
Chris Murphy
