Re: btrfs balance start -dconvert=raid5 on raid1 not converting to raid5

2015-04-25 Thread Omar Sandoval
On Sat, Apr 25, 2015 at 08:18:30AM +0000, Duncan wrote:
 [...]
 
 FWIW, while I've not been following this /too/ closely (as my use-case 
 has zero reason to do a convert), I believe I saw Chris and others 
 discussing the patch, and I /think/ it (or an update) may have actually 
 been in the 4.1 upgrade window pull.
 
 Presumably they'll queue it for stable too, once it's in 4.1 (or whatever 
 if I'm wrong on the 4.1 pull).
 
 A verify either way would be useful, but as I said I've not been 
 following it /too/ closely, and I'm still on 4.0 here, so...
 

Just reproduced the bug on Chris' integration-4.1 branch, which appears
to have everything that was pulled for 4.1-rc1 plus some more. Maybe
you're thinking of the original bisect report [1]? I never saw anything
else come out of that.

[1]: http://thread.gmane.org/gmane.comp.file-systems.btrfs/43117
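
(For anyone wanting to verify a fixed kernel, a quick reproduction sketch
on loop devices -- sizes, paths, and loop names below are illustrative,
not from this thread:)

for i in 0 1 2; do
  truncate -s 3G /tmp/btrfs-dev$i          # sparse backing files
  losetup /dev/loop$i /tmp/btrfs-dev$i     # assumes loop0..loop2 are free
done
mkfs.btrfs -f -d raid1 -m raid1 /dev/loop0 /dev/loop1 /dev/loop2
mount /dev/loop0 /mnt
dd if=/dev/urandom of=/mnt/fill bs=1M count=512
btrfs balance start -dconvert=raid5 /mnt
btrfs fi df /mnt     # on an affected kernel, Data stays RAID1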

-- 
Omar


Re: btrfs balance start -dconvert=raid5 on raid1 not converting to raid5

2015-04-25 Thread Duncan
None None posted on Sat, 25 Apr 2015 05:56:50 +0200 as excerpted:

 Omar Sandoval osan...@osandov.com wrote:
 [...]

 I'll give it a try, thanks for the fast reply.

FWIW, while I've not been following this /too/ closely (as my use-case 
has zero reason to do a convert), I believe I saw Chris and others 
discussing the patch, and I /think/ it (or an update) may have actually 
been in the 4.1 upgrade window pull.

Presumably they'll queue it for stable too, once it's in 4.1 (or whatever 
if I'm wrong on the 4.1 pull).

A verify either way would be useful, but as I said I've not been 
following it /too/ closely, and I'm still on 4.0 here, so...

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman



btrfs balance start -dconvert=raid5 on raid1 not converting to raid5

2015-04-24 Thread None None
I tried to convert my btrfs from raid1 to raid5 but after the balance command 
it's still raid1.
Also, for raid56 the wiki says "Parity may be inconsistent after a crash (the 
write hole)".
Does that mean that if I convert metadata to raid5/6 and the parity becomes 
inconsistent, my btrfs will be lost?

Kernel is v4.0 on debian/sid
The filesystem was created with nodesize 8k, if I remember correctly.
Mount options for /srv/: noatime,nodev,space_cache,subvol=@
No snapshots and only a few subvolumes
Free space is ~450GiB

To convert the data profile to raid5 (with btrfs-progs v3.17) I did
btrfs balance start -v -dconvert=raid5 /srv/
but after the command was done (after 10 days)
btrfs fi sho /srv/
still shows data as raid1; free space is also what would be expected for raid1.
No errors, no problems, no raid5.


So I compiled the newer btrfs-progs v3.19.1 and did (I also tried raid6, same 
result: still raid1)
btrfs balance start -v -dconvert=raid5 -dlimit=1 /srv/
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x120): converting, target=128, soft is off, limit=1
Done, had to relocate 1 out of 12071 chunks

dmesg shows only this, no errors
[170427.207107] BTRFS info (device sdj): relocating block group 65294058848256 
flags 17
[170461.591056] BTRFS info (device sdj): found 129 extents
[170476.270765] BTRFS info (device sdj): found 129 extents
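
(Side note: the numbers in the two outputs above decode consistently -- the
bit values below are the kernel's block-group and balance-args constants,
quoted from memory, so double-check against ctree.h:)

# "flags 17" (dmesg)  -> 0x11 = DATA (0x1) | RAID1 (0x10): a raid1 data
#                        block group was picked for relocation
# "target=128"        -> 0x80 = RAID5: the requested target profile
# "flags 0x120"       -> CONVERT (0x100) | LIMIT (0x20): both the -dconvert
#                        and -dlimit filters were accepted
printf '17 = 0x%x, 128 = 0x%x\n' 17 128   # prints: 17 = 0x11, 128 = 0x80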

btrfs fi sho /srv/
shows all data as raid1


btrfs fi sho
Label: none  uuid: ---x
Total devices 9 FS bytes used 11.78TiB
devid1 size 2.73TiB used 2.62TiB path /dev/sdh
devid2 size 2.73TiB used 2.62TiB path /dev/sdj
devid3 size 2.73TiB used 2.62TiB path /dev/sdg
devid4 size 2.73TiB used 2.62TiB path /dev/sdi
devid5 size 2.73TiB used 2.62TiB path /dev/sdf
devid6 size 2.73TiB used 2.62TiB path /dev/sde
devid7 size 2.73TiB used 2.62TiB path /dev/sdc
devid9 size 2.73TiB used 2.62TiB path /dev/sdd
devid   10 size 2.73TiB used 2.62TiB path /dev/sda

btrfs-progs v3.19.1


btrfs fi df /srv/
Data, RAID1: total=11.76TiB, used=11.76TiB
System, RAID1: total=32.00MiB, used=1.62MiB
Metadata, RAID1: total=17.06GiB, used=14.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
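
(The free-space observation checks out arithmetically -- a rough capacity
sketch, assuming btrfs raid1 keeps exactly two copies and raid5 spends one
device per stripe on parity:)

awk 'BEGIN {
  n = 9; dev = 2.73        # device count and size (TiB), from fi show below
  raw = n * dev
  printf "raw %.2f TiB, raid1 usable %.2f TiB, raid5 usable %.2f TiB\n", raw, raw / 2, raw * (n - 1) / n
}'
# -> roughly: raw 24.57 TiB, raid1 12.29 TiB, raid5 21.84 TiB;
# 11.78 TiB used plus ~450 GiB free lands on the raid1 figure, not raid5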


Re: btrfs balance start -dconvert=raid5 on raid1 not converting to raid5

2015-04-24 Thread Omar Sandoval
On Sat, Apr 25, 2015 at 04:47:31AM +0200, None None wrote:
 I tried to convert my btrfs from raid1 to raid5 but after the balance command 
 it's still raid1.
 [...]

This is a known bug in v4.0. I sent in a patch [1] to revert the commit
that caused the regression, but it didn't get any response. You could
apply that or just revert 2f0810880f08 ("btrfs: delete chunk allocation
attemp when setting block group ro") to fix your problem for now.

[1]: https://patchwork.kernel.org/patch/6238111/
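
(For completeness, the revert route is roughly the following -- a sketch
assuming you build your own kernel; the install step varies by distro:)

cd linux                            # your kernel source tree, v4.0 based
git revert 2f0810880f08             # the commit named above
make -j"$(nproc)"
sudo make modules_install install   # then reboot into the patched kernel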

-- 
Omar


Re: btrfs balance start -dconvert=raid5 on raid1 not converting to raid5

2015-04-24 Thread None None
 
Omar Sandoval osan...@osandov.com wrote:
 [...]

This is a known bug in v4.0. I sent in a patch [1] to revert the commit
that caused the regression, but it didn't get any response. You could
apply that or just revert 2f0810880f08 ("btrfs: delete chunk allocation
attemp when setting block group ro") to fix your problem for now.

[1]: https://patchwork.kernel.org/patch/6238111/

-- 
Omar

I'll give it a try, thanks for the fast reply.


Re: converting to raid5

2013-03-19 Thread Remco Hosman - Yerf-IT

On 15-3-2013 13:47, David Sterba wrote:

 On Mon, Mar 11, 2013 at 09:15:44PM +0100, Remco Hosman wrote:
 
  First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to 
  convert the mostly empty chunks.
  This resulted in a lot of allocated space (tens of gigs), with only a few 
  hundred meg used.
 
 Matches my expectation: converting to the new profile needs to allocate
 full 1G chunks, but the usage=1 filter allows them to be filled only
 partially.
 
 After this step, several ~empty raid1 chunks should disappear.

That happened not only when I added usage=1, but also without it.

  I did `btrfs bal start -dusage=75` to clean things up.
 
  Then I ran `btrfs bal start -dconvert=raid5,soft`.
  I noticed how the difference between total and used for raid5 kept growing.
 
 Do you remember if this was temporary or if the difference was
 unexpectedly big after the whole operation finished?

It did not finish; the filesystem did not have that much free space, so I 
canceled it (even before it ran out of space) and ran `btrfs bal start 
-dusage=1` to clean up the unused space.

  My guess is that it's taking 1 raid1 chunk (2x1 gig disk space, 1 gig
  data), and moving it to 1 raid5 chunk (4 gig disk space, 3 gig data),
  leaving all chunks 33% used.
 
 Why 3G of data in the raid5 case? I assume you are talking about the
 actually used data, and this should be the same as in the raid1 case, but
 spread over 3x 1GB chunks and leaving them 33% utilized; that makes sense,
 but is not clear from your description.

I assumed that with raid5, btrfs allocates 1 GiB on each disk and uses one 
disk for parity, giving 3 GiB of data in 4 GiB of disk space.
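
(A quick back-of-envelope check of that theory -- purely illustrative
numbers for a 4-disk raid5 with 1 GiB per-disk stripes:)

awk 'BEGIN {
  raid1_data = 1           # GiB of data per raid1 chunk (2 GiB raw, 2 copies)
  raid5_data = 3           # GiB of data per 4-disk raid5 chunk (3 data + 1 parity)
  # data from one raid1 chunk moved into a fresh raid5 chunk:
  printf "raid5 chunk utilization: %.0f%%\n", 100 * raid1_data / raid5_data
}'
# prints 33%, matching the observed total-vs-used gap while converting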

  This is what 3 calls of `btrfs file df /` look like a few minutes after 
  each other, with the balance still running:
 
  Data, RAID1: total=807.00GB, used=805.70GB
  Data, RAID5: total=543.00GB, used=192.81GB
  --
  Data, RAID1: total=800.00GB, used=798.70GB
  Data, RAID5: total=564.00GB, used=199.30GB
  --
  Data, RAID1: total=795.00GB, used=793.70GB
  Data, RAID5: total=579.00GB, used=204.81GB
 
 raid1 numbers going down, raid5 going up, all ok.
 
 david




Re: converting to raid5

2013-03-15 Thread David Sterba
On Mon, Mar 11, 2013 at 09:15:44PM +0100, Remco Hosman wrote:
 First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to 
 convert the mostly empty chunks.
 This resulted in a lot of allocated space (tens of gigs), with only a few 
 hundred meg used.

Matches my expectation: converting to the new profile needs to allocate
full 1G chunks, but the usage=1 filter allows them to be filled only
partially.

After this step, several ~empty raid1 chunks should disappear.
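
(Put together, the staged conversion discussed in this thread looks roughly
like the following -- a sketch, with /mnt standing in for the real mount
point:)

btrfs balance start -dconvert=raid5,usage=1 /mnt   # convert near-empty chunks first
btrfs balance start -dusage=75 /mnt                # compact chunks under 75% full
btrfs balance start -dconvert=raid5,soft /mnt      # convert the rest; soft skips
                                                   # chunks already in the target profile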

 I did `btrfs bal start -dusage=75` to clean things up.
 
 Then I ran `btrfs bal start -dconvert=raid5,soft`.
 I noticed how the difference between total and used for raid5 kept growing.

Do you remember if this was temporary or if the difference was
unexpectedly big after the whole operation finished?

 My guess is that it's taking 1 raid1 chunk (2x1 gig disk space, 1 gig
 data), and moving it to 1 raid5 chunk (4 gig disk space, 3 gig data),
 leaving all chunks 33% used.

Why 3G of data in the raid5 case? I assume you are talking about the
actually used data, and this should be the same as in the raid1 case, but
spread over 3x 1GB chunks and leaving them 33% utilized; that makes sense,
but is not clear from your description.

 This is what 3 calls of `btrfs file df /` look like a few minutes after 
 each other, with the balance still running:

 Data, RAID1: total=807.00GB, used=805.70GB
 Data, RAID5: total=543.00GB, used=192.81GB
 --
 Data, RAID1: total=800.00GB, used=798.70GB
 Data, RAID5: total=564.00GB, used=199.30GB
 --
 Data, RAID1: total=795.00GB, used=793.70GB
 Data, RAID5: total=579.00GB, used=204.81GB

raid1 numbers going down, raid5 going up, all ok.

david


converting to raid5

2013-03-11 Thread Remco Hosman
Hi,

Just installed 3.9.0-rc2 and the latest btrfs-progs. 

The filesystem is a 4-disk raid1 array.

First, I did the following: `btrfs bal start -dconvert=raid5,usage=1` to 
convert the mostly empty chunks.
This resulted in a lot of allocated space (tens of gigs), with only a few 
hundred meg used.
I did `btrfs bal start -dusage=75` to clean things up.

Then I ran `btrfs bal start -dconvert=raid5,soft`.
I noticed how the difference between total and used for raid5 kept growing.
My guess is that it's taking 1 raid1 chunk (2x1 gig disk space, 1 gig data), and 
moving it to 1 raid5 chunk (4 gig disk space, 3 gig data), leaving all chunks 
33% used.

This is what 3 calls of `btrfs file df /` look like a few minutes after each 
other, with the balance still running:

Data, RAID1: total=807.00GB, used=805.70GB
Data, RAID5: total=543.00GB, used=192.81GB
System, RAID1: total=32.00MB, used=192.00KB
Metadata, RAID1: total=6.00GB, used=3.54GB
--
Data, RAID1: total=800.00GB, used=798.70GB
Data, RAID5: total=564.00GB, used=199.30GB
System, RAID1: total=32.00MB, used=192.00KB
Metadata, RAID1: total=6.00GB, used=3.53GB
--
Data, RAID1: total=795.00GB, used=793.70GB
Data, RAID5: total=579.00GB, used=204.81GB
System, RAID1: total=32.00MB, used=192.00KB
Metadata, RAID1: total=6.00GB, used=3.54GB


Remco