Re: mount btrfs takes 30 minutes, btrfs check runs out of memory

2015-08-01 Thread Georgi Georgiev
Quoting John Ettedgui at 2015-07-30-21:10:27(-0700):
> On Thu, Jul 30, 2015 at 7:34 PM, Qu Wenruo  wrote:
> >
> > Hi John,
> > Thanks for the trace output.
> You are welcome, thank you for looking at it!
> >
> > But it seems that, your root partition is also btrfs, causing a lot of btrfs
> > trace from your systemd journal.
> >
> Oh yes sorry about that.
> I actually have 3 partitions in btrfs, the problematic one being the
> only big one.
> > Would you mind re-collecting the ftrace without such logging system caused
> > btrfs trace?
> Sure, how would I do that?
> This is my first time using ftrace.
> >
> > BTW, although I'm not quite familiar with ftrace, would you please consider
> > collect ftrace with function_graph tracer?
> Sure, how would I do that one as well?

You can use set_ftrace_pid to trace only a single process (for example,
the mount command). There is a sample script I found in the ftrace
documentation that goes something like this:

# These files live in the tracefs directory (typically /sys/kernel/debug/tracing)
# First disable tracing, to clear the trace buffer
echo nop > current_tracer
echo 0  > tracing_on
echo 0  > tracing_enabled

# Then re-enable it after setting the filters
echo $$ > set_ftrace_pid
echo '*btrfs*'  > set_ftrace_filter
echo function_graph > current_tracer
echo 1  > tracing_enabled
echo 1  > tracing_on

# And finally *exec* the command to trace:
exec mount 
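
Assuming the script above is run from the tracefs directory (typically
/sys/kernel/debug/tracing), the captured output can then be pulled from a
second shell once the mount returns; a minimal sketch, with the output file
name chosen only for illustration:

# from a second shell, after the traced mount has finished
cd /sys/kernel/debug/tracing

# stop the trace so the buffer no longer changes while copying
echo 0 > tracing_on

# save the function_graph output somewhere it can be posted/compressed
cat trace > /tmp/btrfs-mount-trace.txt

# reset the tracer when done
echo nop > current_tracer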

I tried it, but the logs were way too large, and I was still fiddling
with the trace_options to set. If someone has good advice, we can try it
again.

-- 
Georgi


Re: Data single *and* raid?

2015-08-01 Thread Duncan
Chris Murphy posted on Sat, 01 Aug 2015 19:14:13 -0600 as excerpted:

> On Sat, Aug 1, 2015 at 6:27 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> 
>> 1) If this fs was created with btrfs-progs v4.1.1, get what you need to
>> retrieve off of it immediately, then blow it away and start over, as
>> the thing isn't stable and all data is at risk.
> 
> Agreed. But I'd go so far as to say at this point it looks like it's
> wedged itself into a kind of self-induced faux-ENOSPC state because
> there isn't room to allocate more raid5 chunks. So, I think it's stuck
> in any case.

Well, yes and no.  If it was set up with progs v4.1.1, save what you can 
and blow it away as it's not stable enough to try anything else.

If it was set up with something earlier (not sure about 4.1.0, was it 
affected? but 4.0.x and earlier should be fine for setup), however, once 
on a new kernel the usual ENOSPC workarounds can be given a try.  That 
would include a first balance start -dusage=0 -musage=0, and if that 
didn't free up at least a gig on a second device, I'd try the old add-a-
device trick and see what happens.  (A few GiB thumb drive should work in 
a pinch, or even a ramdrive if you're willing to risk loss in the event 
of a crash vaporizing everything in memory including the ramdrive.)
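
For reference, a rough sketch of those two workarounds, using the mount point
from the thread; /dev/sdX is only a stand-in for whatever spare device is used:

# free chunks that are completely empty first
btrfs balance start -dusage=0 -musage=0 /mnt/new_storage/

# if that isn't enough, temporarily add a small extra device...
btrfs device add /dev/sdX /mnt/new_storage/

# ...retry the balance/conversion, then drop the helper device again
btrfs device delete /dev/sdX /mnt/new_storage/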

After all, even if he didn't know the risk of the still very new raid56 
mode before, he does after reading my earlier message, and anything of 
value should be backed up before he attempts anything, so at that point, 
there's nothing to lose and it's upto him whether he wants to simply blow 
away the current setup and start over with either raid1 or (with another 
device) raid10, avoiding the current risk of raid56, or blow away and 
start over with raid56 again, knowing the risks now, or try to recover 
what's there, viewing it not as the easiest way but as practice in 
disaster recovery, again, with anything of value backed up so there's 
nothing to lose besides the time of fiddling with it and ending up having 
to blow away and restart from backups anyway, regardless of how it goes.

> It'd be great to reproduce this with a current kernel and see if it's
> still happening.

Absolutely.

Tho at this point I believe the chances are pretty good it's simply 
either that bad 4.1.1 mkfs.btrfs, or an older pre-full-raid56-support 
kernel that didn't handle balance to raid56 so well, yet, and that on the 
latest userspace and kernel the problem shouldn't reoccur.

But it'd be nice to *KNOW* that, by trying to reproduce, absolutely.  He 
may well have stumbled upon yet another confirmation of my recommendation 
to wait on raid56 unless you're deliberately testing it, and confirmation 
thereof would be halfway to getting it fixed, so those who /do/ choose to 
wait won't be dealing with it. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Data single *and* raid?

2015-08-01 Thread Chris Murphy
On Sat, Aug 1, 2015 at 6:27 PM, Duncan <1i5t5.dun...@cox.net> wrote:

> 1) If this fs was created with btrfs-progs v4.1.1, get what you need to
> retrieve off of it immediately, then blow it away and start over, as the
> thing isn't stable and all data is at risk.

Agreed. But I'd go so far as to say at this point it looks like it's
wedged itself into a kind of self-induced faux-ENOSPC state because
there isn't room to allocate more raid5 chunks. So, I think it's stuck
in any case.

It'd be great to reproduce this with a current kernel and see if it's
still happening.


-- 
Chris Murphy


Re: Data single *and* raid?

2015-08-01 Thread Duncan
Hugo Mills posted on Sat, 01 Aug 2015 22:34:13 + as excerpted:

>> Yes and that also puts it in the realm of kernels that weren't
>> releasing/deallocating empty chunks; although I don't know if that's a
>> factor, if dconvert forcibly deals with this..
> 
>It does -- you only have to look at the btrfs fi df output to see
> that there's no empty block groups (to within 0.1% or so)

Exactly.  The allocations are all full.

And the fi show says there's little to no room to allocate more, as 
well.  There's room on one device, but that's not going to help except 
with single, which shouldn't be allocated any more.

I'd say...

1) If this fs was created with btrfs-progs v4.1.1, get what you need to 
retrieve off of it immediately, then blow it away and start over, as the 
thing isn't stable and all data is at risk.

2) If it wasn't created with progs v4.1.1, the next issue is that kernel, 
since it's obviously from before raid56 was fully functional (well either 
that or there's a more serious bug going on).  Existing data should at 
least be reasonably stable, but with raid56 mode being so new, the newer 
the kernel you're using to work with it, the better.  4.1.x at LEAST, if 
not 4.2-rc, as we're nearing the end of the 4.2 development cycle.  And 
plan on keeping even better than normal backups and on staying current on kernels 
for at least another several kernel cycles, if you're going to use raid56 
mode, as while it's complete now, it's going to take a bit to stabilize 
to the level of the rest of btrfs itself, which of course is stabilizing 
now, but not really fully stable and mature yet, so the sysadmin's rule 
that data with any value is backed up, or by definition it's throw-away 
data, despite any claims to the contrary, continues to apply double on 
btrfs, compared to more mature and stable filesystems.

So definitely upgrade the kernel.  Then see where things stand.

3) Meanwhile, based on raid56 mode's newness, I've been recommending that 
people stay off it until 4.5-ish or so, basically a year after initial 
nominal full support, unless of course their intent is to be a leading-/
bleeding-edge testing and reporting deployment.  Otherwise, use raid1 or 
raid10 mode until then, and evaluate raid56 mode stability around 4.5, 
before deploying.

And if you're one of the brave doing current raid56 deployment, testing 
and bug reporting in full knowledge of its newness and lack of current 
stability and maturity, THANKS, it's your work that's helping to 
stabilize it for others, when they do switch to it. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: Data single *and* raid?

2015-08-01 Thread Hugo Mills
On Sat, Aug 01, 2015 at 04:26:25PM -0600, Chris Murphy wrote:
> On Sat, Aug 1, 2015 at 3:45 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> > Does fi usage deal with raid5 yet?
> 
> Now that you mention it, I think it doesn't. But if it did, it would
> show this problem better than df I think.
> 
> > But, somewhere along the line he got 6 GiB of raid1 metadata.  Either he
> > added/rewrote a *BUNCH* of files after adding at least one device before
> > the conversion that he didn't tell us about, or the conversion screwed
> > up, because that's a lot of raid1 metadata coming out of nowhere!
> 
> Yeah it's a little confusing, that's why I asked about the kernel version.

   It's 6 GiB of metadata for 5.7 TiB of data, or thereabouts. 0.1% is
about the expected size.
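
For a quick sanity check of that ratio (plain arithmetic, nothing specific to
this filesystem):

echo 'scale=4; 6 * 100 / (5.7 * 1024)' | bc
# prints .1027, i.e. roughly 0.1% metadata relative to data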

   Now, given the original description, it's not clear at all why the
data has suddenly doubled in size -- unless there are some snapshots,
and the OP did a defrag as well.

> > But I'm *strongly* suspecting a pre-full-raid56-support kernel, because
> > btrfs-progs is certainly reasonably new (v4.1.1, as of yesterday 4.1.2 is
> > the newest as I mention above), but the fi df doesn't report global
> > reserve.  The only way I know of that it wouldn't report that with a new
> > userspace is if the kernelspace is too old.
> 
> Yes and that also puts it in the realm of kernels that weren't
> releasing/deallocating empty chunks; although I don't know if that's a
> factor, if dconvert forcibly deals with this..

   It does -- you only have to look at the btrfs fi df output to see
that there's no empty block groups (to within 0.1% or so)

> > Finally, that's btrfs-progs 4.1.1, the one that's blacklisted due to a
> > buggy mkfs.btrfs.  If he created the filesystem with that mkfs.btrfs...
> > maybe that explains the funky results, as well.
> 
> Good catch. That really ought to be filed as a bug with that distro to
> flat out remove 4.1.1 from the repos.

   If they picked up 4.1.1 fast enough, they should pick up 4.1.2 just
as quickly...

   Hugo.

-- 
Hugo Mills | "What are we going to do tonight?"
hugo@... carfax.org.uk | "The same thing we do every night, Pinky. Try to
http://carfax.org.uk/  | take over the world!"
PGP: E2AB1DE4  |




Re: Data single *and* raid?

2015-08-01 Thread Chris Murphy
On Sat, Aug 1, 2015 at 3:45 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> Does fi usage deal with raid5 yet?

Now that you mention it, I think it doesn't. But if it did, it would
show this problem better than df I think.

> But, somewhere along the line he got 6 GiB of raid1 metadata.  Either he
> added/rewrote a *BUNCH* of files after adding at least one device before
> the conversion that he didn't tell us about, or the conversion screwed
> up, because that's a lot of raid1 metadata coming out of nowhere!

Yeah it's a little confusing, that's why I asked about the kernel version.
>
>
> But I'm *strongly* suspecting a pre-full-raid56-support kernel, because
> btrfs-progs is certainly reasonably new (v4.1.1, as of yesterday 4.1.2 is
> the newest as I mention above), but the fi df doesn't report global
> reserve.  The only way I know of that it wouldn't report that with a new
> userspace is if the kernelspace is too old.

Yes and that also puts it in the realm of kernels that weren't
releasing/deallocating empty chunks; although I don't know if that's a
factor, if dconvert forcibly deals with this..


> Finally, that's btrfs-progs 4.1.1, the one that's blacklisted due to a
> buggy mkfs.btrfs.  If he created the filesystem with that mkfs.btrfs...
> maybe that explains the funky results, as well.

Good catch. That really ought to be filed as a bug with that distro to
flat out remove 4.1.1 from the repos.



-- 
Chris Murphy


Re: Data single *and* raid?

2015-08-01 Thread Duncan
Chris Murphy posted on Sat, 01 Aug 2015 14:44:52 -0600 as excerpted:

> On Sat, Aug 1, 2015 at 2:32 PM, Hugo Mills  wrote:
>> On Sat, Aug 01, 2015 at 10:09:35PM +0200, Hendrik Friedel wrote:
>>> Hello,
>>>
>>> I converted an array to raid5 by
>>> btrfs device add /dev/sdd /mnt/new_storage
>>> btrfs device add /dev/sdc /mnt/new_storage
>>> btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/new_storage/
>>>
>>> The Balance went through. But now:
>>> Label: none  uuid: a8af3832-48c7-4568-861f-e80380dd7e0b
>>> Total devices 3 FS bytes used 5.28TiB
>>> devid1 size 2.73TiB used 2.57TiB path /dev/sde
>>> devid2 size 2.73TiB used 2.73TiB path /dev/sdc
>>> devid3 size 2.73TiB used 2.73TiB path /dev/sdd
>>> btrfs-progs v4.1.1
>>>
>>> Already the 2.57TiB is a bit surprising:
>>> root@homeserver:/mnt# btrfs fi df /mnt/new_storage/
>>> Data, single: total=2.55TiB, used=2.55TiB
>>> Data, RAID5: total=2.73TiB, used=2.72TiB
>>> System, RAID5: total=32.00MiB, used=736.00KiB
>>> Metadata, RAID1: total=6.00GiB, used=5.33GiB
>>> Metadata, RAID5: total=3.00GiB, used=2.99GiB
>>
>>Looking at the btrfs fi show output, you've probably run out of
>> space during the conversion, probably due to an uneven distribution of
>> the original "single" chunks.
>>
>>I think I would suggest balancing the single chunks, and trying the
>> conversion (of the unconverted parts) again:
>>
>> # btrfs balance start -dprofiles=single -mprofiles=raid1
>> /mnt/new_storage/
>> # btrfs balance start -dconvert=raid5,soft -mconvert=raid5,soft
>> /mnt/new_storage/
>>
>>
> Yep I bet that's it also. btrfs fi usage might be better at exposing
> this case.

Does fi usage deal with raid5 yet?

Last I knew it didn't deal with raid56 (which I'm not using) or with 
mixed-bg (which I am using on one btrfs).  On the mixed-bg btrfs it still 
warns it doesn't handle that, reporting unallocated of 16 EiB on a 256 
MiB filesystem, so clearly the warning is valid.  16 EiB unallocated on a 
256 MiB btrfs, I wish!  progs v4.1.2, latest as of yesterday, I believe.


Meanwhile, three devices the same size.  He just added two of them and 
didn't do a rebalance after that until the raid5 conversion, so except 
for anything added since, all data/metadata should have started on a 
single device, with single data and dup metadata.  The conversion was 
specifically to raid5 for both data/metadata, so that's what he should 
have ended up with.

But, somewhere along the line he got 6 GiB of raid1 metadata.  Either he 
added/rewrote a *BUNCH* of files after adding at least one device before 
the conversion that he didn't tell us about, or the conversion screwed 
up, because that's a lot of raid1 metadata coming out of nowhere!


But I'm *strongly* suspecting a pre-full-raid56-support kernel, because 
btrfs-progs is certainly reasonably new (v4.1.1, as of yesterday 4.1.2 is 
the newest as I mention above), but the fi df doesn't report global 
reserve.  The only way I know of that it wouldn't report that with a new 
userspace is if the kernelspace is too old.  And AFAIK, the kernel was 
reporting global reserve (with fi df listing it as unknown if it was too 
old) _well_ before full raid56 support.  So it's gotta be an old kernel, 
with only partial raid56 support, which might explain the weird to-raid56 
conversion results.
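
A quick way to check for that tell-tale line on any given system (standard
commands; the mount point here is the one from the thread):

# a sufficiently new kernel/progs pair shows a GlobalReserve line here
btrfs fi df /mnt/new_storage/ | grep -i reserve

# and the running kernel itself
uname -r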


Finally, that's btrfs-progs 4.1.1, the one that's blacklisted due to a 
buggy mkfs.btrfs.  If he created the filesystem with that mkfs.btrfs... 
maybe that explains the funky results, as well.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Re: ext4 convert bugs, wiki warning?

2015-08-01 Thread Chris Murphy
On Sat, Aug 1, 2015 at 2:42 PM, Hugo Mills  wrote:
> On Sat, Aug 01, 2015 at 11:29:40AM -0600, Chris Murphy wrote:
>> Does someone with wiki edit capability want to put up a warning about
>> btrfs-convert problems? I don't think it needs to be a lengthy write
>> up since the scope of the problem is not clear yet. But since there's
>> definitely reproduced problems that have been going on for some time
>> now maybe it'd be a good idea to put up a warning until this is more
>> stable?
>>
>> I'm thinking of just a yellow warning "sidebar" like thing that maybe
>> just says there's been a regression and it's breaking file systems,
>> sometimes irreparably?
>
>You mean a warning like the very first sentence, in bold, that's
> already on the wiki page?
>
> https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3

Doh!

-- 
Chris Murphy


Re: Data single *and* raid?

2015-08-01 Thread Chris Murphy
On Sat, Aug 1, 2015 at 2:32 PM, Hugo Mills  wrote:
> On Sat, Aug 01, 2015 at 10:09:35PM +0200, Hendrik Friedel wrote:
>> Hello,
>>
>> I converted an array to raid5 by
>> btrfs device add /dev/sdd /mnt/new_storage
>> btrfs device add /dev/sdc /mnt/new_storage
>> btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/new_storage/
>>
>> The Balance went through. But now:
>> Label: none  uuid: a8af3832-48c7-4568-861f-e80380dd7e0b
>> Total devices 3 FS bytes used 5.28TiB
>> devid1 size 2.73TiB used 2.57TiB path /dev/sde
>> devid2 size 2.73TiB used 2.73TiB path /dev/sdc
>> devid3 size 2.73TiB used 2.73TiB path /dev/sdd
>> btrfs-progs v4.1.1
>>
>> Already the 2.57TiB is a bit surprising:
>> root@homeserver:/mnt# btrfs fi df /mnt/new_storage/
>> Data, single: total=2.55TiB, used=2.55TiB
>> Data, RAID5: total=2.73TiB, used=2.72TiB
>> System, RAID5: total=32.00MiB, used=736.00KiB
>> Metadata, RAID1: total=6.00GiB, used=5.33GiB
>> Metadata, RAID5: total=3.00GiB, used=2.99GiB
>
>Looking at the btrfs fi show output, you've probably run out of
> space during the conversion, probably due to an uneven distribution of
> the original "single" chunks.
>
>I think I would suggest balancing the single chunks, and trying the
> conversion (of the unconverted parts) again:
>
> # btrfs balance start -dprofiles=single -mprofiles=raid1 /mnt/new_storage/
> # btrfs balance start -dconvert=raid5,soft -mconvert=raid5,soft 
> /mnt/new_storage/
>

Yep I bet that's it also. btrfs fi usage might be better at exposing this case.


-- 
Chris Murphy


Re: ext4 convert bugs, wiki warning?

2015-08-01 Thread Hugo Mills
On Sat, Aug 01, 2015 at 11:29:40AM -0600, Chris Murphy wrote:
> Does someone with wiki edit capability want to put up a warning about
> btrfs-convert problems? I don't think it needs to be a lengthy write
> up since the scope of the problem is not clear yet. But since there's
> definitely reproduced problems that have been going on for some time
> now maybe it'd be a good idea to put up a warning until this is more
> stable?
> 
> I'm thinking of just a yellow warning "sidebar" like thing that maybe
> just says there's been a regression and it's breaking file systems,
> sometimes irreparably?

   You mean a warning like the very first sentence, in bold, that's
already on the wiki page?

https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3

   Hugo.

-- 
Hugo Mills | You've read the project plan. Forget that. We're
hugo@... carfax.org.uk | going to Do Stuff and Have Fun doing it.
http://carfax.org.uk/  |
PGP: E2AB1DE4  |   Jeremy Frey




Re: Data single *and* raid?

2015-08-01 Thread Hugo Mills
On Sat, Aug 01, 2015 at 10:09:35PM +0200, Hendrik Friedel wrote:
> Hello,
> 
> I converted an array to raid5 by
> btrfs device add /dev/sdd /mnt/new_storage
> btrfs device add /dev/sdc /mnt/new_storage
> btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/new_storage/
> 
> The Balance went through. But now:
> Label: none  uuid: a8af3832-48c7-4568-861f-e80380dd7e0b
> Total devices 3 FS bytes used 5.28TiB
> devid1 size 2.73TiB used 2.57TiB path /dev/sde
> devid2 size 2.73TiB used 2.73TiB path /dev/sdc
> devid3 size 2.73TiB used 2.73TiB path /dev/sdd
> btrfs-progs v4.1.1
> 
> Already the 2.57TiB is a bit surprising:
> root@homeserver:/mnt# btrfs fi df /mnt/new_storage/
> Data, single: total=2.55TiB, used=2.55TiB
> Data, RAID5: total=2.73TiB, used=2.72TiB
> System, RAID5: total=32.00MiB, used=736.00KiB
> Metadata, RAID1: total=6.00GiB, used=5.33GiB
> Metadata, RAID5: total=3.00GiB, used=2.99GiB

   Looking at the btrfs fi show output, you've probably run out of
space during the conversion, probably due to an uneven distribution of
the original "single" chunks.

   I think I would suggest balancing the single chunks, and trying the
conversion (of the unconverted parts) again:

# btrfs balance start -dprofiles=single -mprofiles=raid1 /mnt/new_storage/
# btrfs balance start -dconvert=raid5,soft -mconvert=raid5,soft 
/mnt/new_storage/

   You may have to do this more than once.
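
A rough sketch of that repeat-until-clean loop, using the same mount point
(purely illustrative, not verified against this filesystem):

# after each pass, see whether any single/RAID1 chunks are left
btrfs fi df /mnt/new_storage/

# re-run the soft conversion until only RAID5 lines remain
btrfs balance start -dconvert=raid5,soft -mconvert=raid5,soft /mnt/new_storage/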

   Hugo.

> Why is there Data single and Raid?
> Why is Metadata RAID1 and Raid5?
> 
> A scrub is currently running and showed no errors yet.

-- 
Hugo Mills | You've read the project plan. Forget that. We're
hugo@... carfax.org.uk | going to Do Stuff and Have Fun doing it.
http://carfax.org.uk/  |
PGP: E2AB1DE4  |   Jeremy Frey




Re: Data single *and* raid?

2015-08-01 Thread Chris Murphy
On Sat, Aug 1, 2015 at 2:09 PM, Hendrik Friedel  wrote:
> Hello,
>
> I converted an array to raid5 by
> btrfs device add /dev/sdd /mnt/new_storage
> btrfs device add /dev/sdc /mnt/new_storage
> btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/new_storage/
>
> The Balance went through. But now:
> Label: none  uuid: a8af3832-48c7-4568-861f-e80380dd7e0b
> Total devices 3 FS bytes used 5.28TiB
> devid1 size 2.73TiB used 2.57TiB path /dev/sde
> devid2 size 2.73TiB used 2.73TiB path /dev/sdc
> devid3 size 2.73TiB used 2.73TiB path /dev/sdd
> btrfs-progs v4.1.1
>
> Already the 2.57TiB is a bit surprising:
> root@homeserver:/mnt# btrfs fi df /mnt/new_storage/
> Data, single: total=2.55TiB, used=2.55TiB
> Data, RAID5: total=2.73TiB, used=2.72TiB
> System, RAID5: total=32.00MiB, used=736.00KiB
> Metadata, RAID1: total=6.00GiB, used=5.33GiB
> Metadata, RAID5: total=3.00GiB, used=2.99GiB
>
> Why is there Data single and Raid?
> Why is Metadata RAID1 and Raid5?


What kernel version?
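
For reference, a minimal set of commands that would answer that and the
related questions in this thread, using the mount point from the report:

uname -r                          # running kernel
btrfs --version                   # btrfs-progs (userspace) version
btrfs fi show /mnt/new_storage/   # per-device allocation
btrfs fi df /mnt/new_storage/     # per-profile chunk usage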

-- 
Chris Murphy


Data single *and* raid?

2015-08-01 Thread Hendrik Friedel

Hello,

I converted an array to raid5 by
btrfs device add /dev/sdd /mnt/new_storage
btrfs device add /dev/sdc /mnt/new_storage
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/new_storage/

The Balance went through. But now:
Label: none  uuid: a8af3832-48c7-4568-861f-e80380dd7e0b
Total devices 3 FS bytes used 5.28TiB
devid1 size 2.73TiB used 2.57TiB path /dev/sde
devid2 size 2.73TiB used 2.73TiB path /dev/sdc
devid3 size 2.73TiB used 2.73TiB path /dev/sdd
btrfs-progs v4.1.1

Already the 2.57TiB is a bit surprising:
root@homeserver:/mnt# btrfs fi df /mnt/new_storage/
Data, single: total=2.55TiB, used=2.55TiB
Data, RAID5: total=2.73TiB, used=2.72TiB
System, RAID5: total=32.00MiB, used=736.00KiB
Metadata, RAID1: total=6.00GiB, used=5.33GiB
Metadata, RAID5: total=3.00GiB, used=2.99GiB

Why is there Data single and Raid?
Why is Metadata RAID1 and Raid5?

A scrub is currently running and showed no errors yet.

Greetings,
Hendrik




Re: Filesystem unmountable

2015-08-01 Thread Chris Murphy
There appear to be some regressions in btrfs-convert in the list
archive over the last month in particular - but I think the problems
started to surface about 6ish months ago. When I filed the bugs I was
seeing with converted ext4 file systems, they were consistently
reproducible. But then I went back a few
days later to the same source qcow2 that I was snapshotting (qcow2
snapshot, not a btrfs snapshot) to do the testing, and the problem did
not always trigger. But fortunately Qu has reproduced this and is
working on it.

In your case it might be possible to mount with ro,recovery and get
data off of it. Or you might have to resort to btrfs restore if it's
badly broken and at least get /home data out of it.  If you have
backups, it's easier to just obliterate it and start over, unless you
want to experiment and learn more with btrfs restore...
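
A hedged sketch of those two fallbacks; /dev/sdX and the destination path are
placeholders, not taken from the report:

# try a read-only recovery mount first
mount -o ro,recovery /dev/sdX /mnt

# failing that, copy out what btrfs restore can still reach
# (-D is a dry run; drop it to actually write the files)
btrfs restore -D /dev/sdX /srv/recovered
btrfs restore /dev/sdX /srv/recovered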


Chris Murphy


ext4 convert bugs, wiki warning?

2015-08-01 Thread Chris Murphy
Does someone with wiki edit capability want to put up a warning about
btrfs-convert problems? I don't think it needs to be a lengthy write
up since the scope of the problem is not clear yet. But since there's
definitely reproduced problems that have been going on for some time
now maybe it'd be a good idea to put up a warning until this is more
stable?

I'm thinking of just a yellow warning "sidebar" like thing that maybe
just says there's been a regression and it's breaking file systems,
sometimes irreparably?


-- 
Chris Murphy


Filesystem unmountable

2015-08-01 Thread Cornelius van Rooyen
Good day,

This might be something you are already aware of, but I thought I should
inform someone of what happened to my system.

I was running ArchLinux with kernel 4.1.3-1-ARCH #1 SMP PREEMPT x86_64
GNU/Linux and executed btrfs-convert /mnt from an ArchLinux live disk to
convert my ext4 filesystem to btrfs.

The conversion was seemingly successful, but after booting journalctl
--verify reported some corrupt files. I saw in an online post that I
should try to do a btrfs balance start after converting from ext4, and so I
executed the command and left the laptop to complete the task. The next
morning however I found the system frozen and had to do a hard reset. After
this the filesystem was unmountable.

The initial boot after the hard reset left me in an emergency shell at
which point I tried to mount the system and the shell stopped responding.
Booting a live disk and running btrfsck exited with a segmentation fault.
Attempts to mount the filesystem also exited with a segmentation fault, or
the shell seemed to hang waiting for the command to complete and did not
respond to CTRL-C.

I know nothing of kernels or filesystem drivers and I hope I am not
wasting anybody's time with something that was probably my own fault. I
have since reinstalled my OS and in hindsight I should probably have
attempted to save some of the kernel log files.

Kind regards,
Neels van Rooyen


Re: mount btrfs takes 30 minutes, btrfs check runs out of memory

2015-08-01 Thread Russell Coker
On Sat, 1 Aug 2015 02:35:39 PM John Ettedgui wrote:
> >> It seems that you're using Chromium while doing the dump. :)
> >> If no CD drive, I'll recommend to use Archlinux installation iso to make
> >> a bootable USB stick and do the dump.
> >> (just download and dd would do the trick)
> >> As its kernel and tools is much newer than most distribution.
> 
> So I did not have any usb sticks large enough for this task (only 4Gb)
> so I restarted into emergency runlevel with only / mounted and as ro,
> I hope that'll do.
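
The dd step mentioned in the quote is typically along these lines (ISO name
and device are placeholders; double-check the device, since dd overwrites it):

# write the installer image to the whole USB device, not a partition
dd if=archlinux.iso of=/dev/sdX bs=4M && sync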

The Debian/Jessie Netinst image is about 120M and allows you to launch a 
shell.  If you want a newer kernel you could rebuild the Debian Netinst 
yourself.

Also a basic text-only Linux installation takes a lot less than 4G of storage.  
I have a couple of 1G USB sticks with Debian installed that I use to fix 
things.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/