On Mon, Dec 12, 2016 at 5:12 PM, Chris Murphy wrote:
> Another idea is btrfs-find-root -a. This is slow for me, it took about
> a minute for less than 1GiB of metadata. But I've got over 50
> candidate tree roots and generations.
Same behaviour with the newer version
Ok, some news. I chrooted into the old OS-root (Jessie), and lo and
behold, the old-version btrfs-find-root seemed to work:
# btrfs-find-root /dev/mapper/think--big-home
Superblock thinks the tree root is at 138821632, chunk root 21020672
Well block 4194304 seems great, but generation doesn't match,
Tomasz - try using 'btrfs-find-root -a'. I totally forgot about
this option. It goes through the extent tree and might have a chance
of finding additional generations that aren't otherwise being found.
You can then plug those tree roots into 'btrfs restore -t <bytenr>'
and do it with the -D and -v options.
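The workflow described above - collect candidate tree roots from
btrfs-find-root, then dry-run btrfs restore against each - can be
scripted. A sketch follows; the 'Well block ...' sample lines and the
device path are illustrative assumptions, since btrfs-find-root's exact
output format varies between btrfs-progs versions:

```shell
# Hypothetical sample of btrfs-find-root output, one candidate per line:
candidates='Well block 29360128(gen: 593817 level: 1) seems good
Well block 4194304(gen: 591139 level: 1) seems good'

# Extract "generation bytenr" pairs, newest generation first:
echo "$candidates" | sed -n 's/^Well block \([0-9]*\)(gen: \([0-9]*\).*/\2 \1/p' | sort -rn

# For each candidate bytenr, a dry run (-D) with verbose listing (-v) shows
# what 'btrfs restore' would recover, without writing anything to disk:
#   btrfs restore -t <bytenr> -D -v /dev/mapper/think--big-home /tmp/out
```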
Another idea is btrfs-find-root -a. This is slow for me, it took about
a minute for less than 1GiB of metadata. But I've got over 50
candidate tree roots and generations.
But still you can try the tree root for the oldest generation in your
full superblock listing, like I described. If that
I don't know, maybe. This is not a new file system, clearly; it has
half a million+ generations.
backup_roots[4]:
backup 0:
		backup_tree_root:	29360128	gen: 593817	level: 1
		backup_chunk_root:	20971520	gen: 591139	level: 1
		backup_extent_root:	29376512
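The backup_roots array above keeps the last four committed tree roots;
the slot with the highest gen is normally the most recent candidate to
feed to 'btrfs restore -t'. A minimal sketch of picking it, using
hypothetical dump lines in the same shape as the listing above (not
taken from the attached output):

```shell
# Two hypothetical backup_tree_root lines (shape as in 'btrfs ins dump-s -f'):
backups='backup_tree_root:  29360128  gen: 593817  level: 1
backup_tree_root:  29356032  gen: 593816  level: 1'

# Field $2 is the bytenr, $4 the generation; keep the bytenr whose gen is highest:
echo "$backups" | awk '$4 + 0 > g { g = $4; root = $2 } END { print root }'
```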
> This is what I'd expect if the volume has only had a mkfs done and
> then mounted and umounted. No files. What do you get for
>
> btrfs ins dump-s -fa /dev/mapper/think--big-home
(attached)
Also tried btrfs check -b /dev/mapper/think--big-home, but that errored:
# btrfs check -b
On Sun, Dec 11, 2016 at 5:56 PM, Tomasz Kusmierz wrote:
> Chris, for all the time you helped so far I have to really apologize
> I've led you astray ... so, the reason the subvolumes were deleted has
> nothing to do with btrfs itself, I'm using "Rockstor" to ease
> management
On Sun, Dec 11, 2016 at 5:12 PM, Markus Binsteiner wrote:
>> You might try 'btrfs check' without repairing, using a recent version
>> of btrfs-progs and see if it finds anything unusual.
>
> Not quite sure what that output means, but btrfs check returns instantly:
>
> $ sudo
Chris, for all the time you helped so far I have to really apologize
I've led you astray ... so, the reason the subvolumes were deleted has
nothing to do with btrfs itself, I'm using "Rockstor" to ease
management tasks. This tool / environment / distribution treats a
singular btrfs FS as a "pool" (
> You might try 'btrfs check' without repairing, using a recent version
> of btrfs-progs and see if it finds anything unusual.
Not quite sure what that output means, but btrfs check returns instantly:
$ sudo btrfs check /dev/mapper/think--big-home
Checking filesystem on
On Sun, Dec 11, 2016 at 4:30 PM, Markus Binsteiner wrote:
>> OK when I do it on a file system with just 14GiB of metadata it's
>> maybe 15 seconds. So a few minutes sounds sorta suspicious to me but,
>> *shrug* I don't have a file system the same size to try it on, maybe
>> it's
> OK when I do it on a file system with just 14GiB of metadata it's
> maybe 15 seconds. So a few minutes sounds sorta suspicious to me but,
> *shrug* I don't have a file system the same size to try it on, maybe
> it's a memory intensive task and once the system gets low on RAM while
> traversing
Yes. Command and device only.
>
> I've tried that initially, but it ran for a few hours with no output
> beside the initial 'Superblock...'.
OK when I do it on a file system with just 14GiB of metadata it's
maybe 15 seconds. So a few minutes sounds sorta suspicious to me but,
*shrug* I don't
I reckon it took me about 5 minutes to realise what I'd done, then I
unmounted the volume. I don't think I wrote anything in between, but
there were a few applications open at that time, so there might have
been some i/o.
When you say 'by itself', you mean without the '-o 5'?
I've tried that
Hi Zygo,
Since the corruption happens after I/O and checksumming,
could it be possible to add some bug-catcher code in that code path for debug builds,
to help narrow down the issue?
Thanks,
Xin
Sent: Saturday, December 10, 2016 at 9:16 PM
From: "Zygo Blaxell"
To:
On Sun, Dec 11, 2016 at 10:40 AM, Tomasz Kusmierz
wrote:
> Hi,
>
> So, I've found myself in a pickle after following these steps:
> 1. trying to migrate an array to a different system, it became apparent
> that it was not possible to import the array there because I've
Hi,
So, I've found myself in a pickle after following these steps:
1. trying to migrate an array to a different system, it became apparent
that it was not possible to import the array there because I've
had a very large amount of snapshots (every 15 minutes during office
hours amounting to