Re: RAID 6 full, but there is still space left on some devices

2016-03-01 Thread Gareth Pye
When I've been converting from RAID1 to RAID5 I've been getting
stripes that only contain 1G regardless of how wide the stripe is. So
when I've done a large convert I've had to limit the number of block
groups per pass, then do a balance to the target profile, and repeat
until finished.
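
Roughly, each pass looks something like this (the limit value and
mount point are only examples, and "soft" skips chunks that already
have the target profile):

  # convert at most 10 data block groups per pass
  btrfs balance start -dconvert=raid5,soft,limit=10 /mnt

  # repeat until 'btrfs fi df /mnt' no longer shows any Data,RAID1 chunks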

Has anyone else seen similar?

On Wed, Mar 2, 2016 at 1:13 AM, Dan Blazejewski
 wrote:
> Hey all,
>
> Just wanted to follow up with this for anyone experiencing the same issue.
>
> First, I tried Qu's suggestion of re-balancing to single, then
> re-balancing to RAID 6. I noticed when I completed the conversion to
> single, that a few drives didn't receive an identical amount of data.
> Balancing back to RAID 6 didn't totally work either. It definitely
> made it better, but I still had multiple stripes of varying widths.
> IIRC, I had one ~1.7TB stripe that went across all 7 drives, and then
> a conglomerate of stripes ranging from 2-5 drives wide, and sizes 30GB
> - 1TB. The majority of data was striped across all 7, but I was
> concerned that as I added data, I'd run into the same situation as
> before.
>
> This process took quite a long time, as you guys expected. About 11
> days for RAID 6 -> Single -> RAID 6. Patience is a virtue with large
> arrays.
>
>
>
> Henk, for some reason I didn't receive the email suggesting using the
> -dstripes= filter until I was well into the conversion to single. Once
> I finished the RAID 6 -> Single -> RAID 6, I attempted your method.
> I'm happy to say that it worked, using -dstripes="1..6". This only
> took about 30 hours, as most of the data was striped correctly. When
> it finished, I was left with one RAID 6 profile, roughly 2.50 TB
> striped across all 7 drives. As I understand, running a balance with
> the -dstripes="1..$drivecount-1" filter will force BTRFS to balance
> chunks that are not evenly striped across all drives. I will
> definitely have to keep this trick in mind in the future.
>
>
> On a side note, I'm happy with how robust BTRFS is becoming. I had a
> sustained power outage while I wasn't home that resulted in an unclean
> shutdown in the middle of the balance. (I had previously disconnected
> my UPS' USB connector to move the server to a different room and
> forgot to reconnect it. Doh!). When power was returned, it started
> right back up where it left off with no corruption or data loss. I
> have backups, but I wasn't looking forward to the idea of restoring 11
> TB of data.
>
> Thank you, everyone, for your help, and thank you for putting all this
> work into BTRFS. Your efforts are truly appreciated.
>
> Regards,
> Dan
>
> On Thu, Feb 18, 2016 at 8:36 PM, Qu Wenruo  wrote:
>>
>>
>> Henk Slager wrote on 2016/02/19 00:27 +0100:
>>>
>>> On Thu, Feb 18, 2016 at 3:03 AM, Qu Wenruo 
>>> wrote:



 Dan Blazejewski wrote on 2016/02/17 18:04 -0500:
>
>
> Hello,
>
> I upgraded my kernel to 4.4.2, and btrfs-progs to 4.4. I also added
> another 4TB disk and kicked off a full balance (currently 7x4TB
> RAID6). I'm interested to see what an additional drive will do to
> this. I'll also have to wait and see if a full system balance on a
> newer version of BTRFS tools does the trick or not.
>
> I also noticed that "btrfs device usage" shows multiple entries for
> Data, RAID 6 on some drives. Is this normal? Please note that /dev/sdh
> is the new disk, and I only just started the balance.
>
> # btrfs dev usage /mnt/data
> /dev/sda, ID: 5
>  Device size: 3.64TiB
>  Data,RAID6:  1.43TiB
>  Data,RAID6:  1.48TiB
>  Data,RAID6:320.00KiB
>  Metadata,RAID6:  2.55GiB
>  Metadata,RAID6:  1.50GiB
>  System,RAID6:   16.00MiB
>  Unallocated:   733.67GiB
>
> /dev/sdb, ID: 6
>  Device size: 3.64TiB
>  Data,RAID6:  1.48TiB
>  Data,RAID6:320.00KiB
>  Metadata,RAID6:  1.50GiB
>  System,RAID6:   16.00MiB
>  Unallocated: 2.15TiB
>
> /dev/sdc, ID: 7
>  Device size: 3.64TiB
>  Data,RAID6:  1.43TiB
>  Data,RAID6:732.69GiB
>  Data,RAID6:  1.48TiB
>  Data,RAID6:320.00KiB
>  Metadata,RAID6:  2.55GiB
>  Metadata,RAID6:982.00MiB
>  Metadata,RAID6:  1.50GiB
>  System,RAID6:   16.00MiB
>  Unallocated:25.21MiB
>
> /dev/sdd, ID: 1
>  Device size: 3.64TiB
>  Data,RAID6:  1.43TiB
>  Data,RAID6:732.69GiB
>  Data,RAID6:  1.48TiB
>  Data,RAID6:320.00KiB
>  Metadata,RAID6:  2.55GiB
>  Metadata,RAID6:

Re: RAID 6 full, but there is still space left on some devices

2016-03-01 Thread Dan Blazejewski
Hey all,

Just wanted to follow up with this for anyone experiencing the same issue.

First, I tried Qu's suggestion of re-balancing to single, then
re-balancing to RAID 6. I noticed when I completed the conversion to
single, that a few drives didn't receive an identical amount of data.
Balancing back to RAID 6 didn't totally work either. It definitely
made it better, but I still had multiple stripes of varying widths.
IIRC, I had one ~1.7TB stripe that went across all 7 drives, and then
a conglomerate of stripes ranging from 2-5 drives wide, and sizes 30GB
- 1TB. The majority of data was striped across all 7, but I was
concerned that as I added data, I'd run into the same situation as
before.

This process took quite a long time, as you guys expected. About 11
days for RAID 6 -> Single -> RAID 6. Patience is a virtue with large
arrays.



Henk, for some reason I didn't receive the email suggesting using the
-dstripes= filter until I was well into the conversion to single. Once
I finished the RAID 6 -> Single -> RAID 6, I attempted your method.
I'm happy to say that it worked, using -dstripes="1..6". This only
took about 30 hours, as most of the data was striped correctly. When
it finished, I was left with one RAID 6 profile, roughly 2.50 TB
striped across all 7 drives. As I understand, running a balance with
the -dstripes="1..$drivecount-1" filter will force BTRFS to balance
chunks that are not evenly striped across all drives. I will
definitely have to keep this trick in mind in the future.
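
For anyone wanting to do the same, the command was along these lines
(on a 7-device array, so the chunks that are not full width are 1 to 6
stripes wide; adjust the range and mount point for your own setup):

  # rebalance only data chunks striped across fewer than all 7 devices
  btrfs balance start -dstripes="1..6" /mnt/data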


On a side note, I'm happy with how robust BTRFS is becoming. I had a
sustained power outage while I wasn't home that resulted in an unclean
shutdown in the middle of the balance. (I had previously disconnected
my UPS' USB connector to move the server to a different room and
forgot to reconnect it. Doh!). When power was returned, it started
right back up where it left off with no corruption or data loss. I
have backups, but I wasn't looking forward to the idea of restoring 11
TB of data.

Thank you, everyone, for your help, and thank you for putting all this
work into BTRFS. Your efforts are truly appreciated.

Regards,
Dan

On Thu, Feb 18, 2016 at 8:36 PM, Qu Wenruo  wrote:
>
>
> Henk Slager wrote on 2016/02/19 00:27 +0100:
>>
>> On Thu, Feb 18, 2016 at 3:03 AM, Qu Wenruo 
>> wrote:
>>>
>>>
>>>
>>> Dan Blazejewski wrote on 2016/02/17 18:04 -0500:


 Hello,

 I upgraded my kernel to 4.4.2, and btrfs-progs to 4.4. I also added
 another 4TB disk and kicked off a full balance (currently 7x4TB
 RAID6). I'm interested to see what an additional drive will do to
 this. I'll also have to wait and see if a full system balance on a
 newer version of BTRFS tools does the trick or not.

 I also noticed that "btrfs device usage" shows multiple entries for
 Data, RAID 6 on some drives. Is this normal? Please note that /dev/sdh
 is the new disk, and I only just started the balance.

 # btrfs dev usage /mnt/data
 /dev/sda, ID: 5
  Device size: 3.64TiB
  Data,RAID6:  1.43TiB
  Data,RAID6:  1.48TiB
  Data,RAID6:320.00KiB
  Metadata,RAID6:  2.55GiB
  Metadata,RAID6:  1.50GiB
  System,RAID6:   16.00MiB
  Unallocated:   733.67GiB

 /dev/sdb, ID: 6
  Device size: 3.64TiB
  Data,RAID6:  1.48TiB
  Data,RAID6:320.00KiB
  Metadata,RAID6:  1.50GiB
  System,RAID6:   16.00MiB
  Unallocated: 2.15TiB

 /dev/sdc, ID: 7
  Device size: 3.64TiB
  Data,RAID6:  1.43TiB
  Data,RAID6:732.69GiB
  Data,RAID6:  1.48TiB
  Data,RAID6:320.00KiB
  Metadata,RAID6:  2.55GiB
  Metadata,RAID6:982.00MiB
  Metadata,RAID6:  1.50GiB
  System,RAID6:   16.00MiB
  Unallocated:25.21MiB

 /dev/sdd, ID: 1
  Device size: 3.64TiB
  Data,RAID6:  1.43TiB
  Data,RAID6:732.69GiB
  Data,RAID6:  1.48TiB
  Data,RAID6:320.00KiB
  Metadata,RAID6:  2.55GiB
  Metadata,RAID6:982.00MiB
  Metadata,RAID6:  1.50GiB
  System,RAID6:   16.00MiB
  Unallocated:25.21MiB

 /dev/sdf, ID: 3
  Device size: 3.64TiB
  Data,RAID6:  1.43TiB
  Data,RAID6:732.69GiB
  Data,RAID6:  1.48TiB
  Data,RAID6:320.00KiB
  Metadata,RAID6:  2.55GiB
  Metadata,RAID6:982.00MiB
  Metadata,RAID6:  1.50GiB
  System,RAID6:  

Re: RAID 6 full, but there is still space left on some devices

2016-02-18 Thread Qu Wenruo



Henk Slager wrote on 2016/02/19 00:27 +0100:

On Thu, Feb 18, 2016 at 3:03 AM, Qu Wenruo  wrote:



Dan Blazejewski wrote on 2016/02/17 18:04 -0500:


Hello,

I upgraded my kernel to 4.4.2, and btrfs-progs to 4.4. I also added
another 4TB disk and kicked off a full balance (currently 7x4TB
RAID6). I'm interested to see what an additional drive will do to
this. I'll also have to wait and see if a full system balance on a
newer version of BTRFS tools does the trick or not.

I also noticed that "btrfs device usage" shows multiple entries for
Data, RAID 6 on some drives. Is this normal? Please note that /dev/sdh
is the new disk, and I only just started the balance.

# btrfs dev usage /mnt/data
/dev/sda, ID: 5
 Device size: 3.64TiB
 Data,RAID6:  1.43TiB
 Data,RAID6:  1.48TiB
 Data,RAID6:320.00KiB
 Metadata,RAID6:  2.55GiB
 Metadata,RAID6:  1.50GiB
 System,RAID6:   16.00MiB
 Unallocated:   733.67GiB

/dev/sdb, ID: 6
 Device size: 3.64TiB
 Data,RAID6:  1.48TiB
 Data,RAID6:320.00KiB
 Metadata,RAID6:  1.50GiB
 System,RAID6:   16.00MiB
 Unallocated: 2.15TiB

/dev/sdc, ID: 7
 Device size: 3.64TiB
 Data,RAID6:  1.43TiB
 Data,RAID6:732.69GiB
 Data,RAID6:  1.48TiB
 Data,RAID6:320.00KiB
 Metadata,RAID6:  2.55GiB
 Metadata,RAID6:982.00MiB
 Metadata,RAID6:  1.50GiB
 System,RAID6:   16.00MiB
 Unallocated:25.21MiB

/dev/sdd, ID: 1
 Device size: 3.64TiB
 Data,RAID6:  1.43TiB
 Data,RAID6:732.69GiB
 Data,RAID6:  1.48TiB
 Data,RAID6:320.00KiB
 Metadata,RAID6:  2.55GiB
 Metadata,RAID6:982.00MiB
 Metadata,RAID6:  1.50GiB
 System,RAID6:   16.00MiB
 Unallocated:25.21MiB

/dev/sdf, ID: 3
 Device size: 3.64TiB
 Data,RAID6:  1.43TiB
 Data,RAID6:732.69GiB
 Data,RAID6:  1.48TiB
 Data,RAID6:320.00KiB
 Metadata,RAID6:  2.55GiB
 Metadata,RAID6:982.00MiB
 Metadata,RAID6:  1.50GiB
 System,RAID6:   16.00MiB
 Unallocated:25.21MiB

/dev/sdg, ID: 2
 Device size: 3.64TiB
 Data,RAID6:  1.43TiB
 Data,RAID6:732.69GiB
 Data,RAID6:  1.48TiB
 Data,RAID6:320.00KiB
 Metadata,RAID6:  2.55GiB
 Metadata,RAID6:982.00MiB
 Metadata,RAID6:  1.50GiB
 System,RAID6:   16.00MiB
 Unallocated:25.21MiB

/dev/sdh, ID: 8
 Device size: 3.64TiB
 Data,RAID6:320.00KiB
 Unallocated: 3.64TiB



Not sure how that multiple chunk type shows up.
Maybe all these RAID6 entries shown have different numbers of stripes?


Indeed, it's 4 different sets of stripe-widths, i.e. how many drives a
chunk is striped across. Someone suggested indicating this in the
output of the btrfs de us command some time ago.

The fs has only the RAID6 profile and I am not fully sure if the
'Unallocated' numbers are correct (on RAID10 they are 2x too high
with unpatched v4.4 progs), but anyhow the lower devids are way too
full.

From the size, one can derive how many devices (or stripe-width):
732.69GiB 4, 1.43TiB 5, 1.48TiB 6, 320.00KiB 7


Qu, in regards to your question, I ran RAID 1 on multiple disks of
different sizes. I believe I had a mix of 2x4TB, 1x2TB, and 1x3TB
drive. I replaced the 2TB drive first with a 4TB, and balanced it.
Later on, I replaced the 3TB drive with another 4TB, and balanced,
yielding an array of 4x4TB RAID1. A little while later, I wound up
sticking a fifth 4TB drive in, and converting to RAID6. The sixth 4TB
drive was added some time after that. The seventh was added just a few
minutes ago.



Personally speaking, I just came up with one method to balance all these
disks, and in fact you don't need to add a disk.

1) Balance all data chunks to the single profile
2) Balance all metadata chunks to the single or RAID1 profile
3) Balance all data chunks back to the RAID6 profile
4) Balance all metadata chunks back to the RAID6 profile
The system chunk is so small that normally you don't need to bother.

The trick is that, as single is the most flexible chunk type, it only
needs one disk with unallocated space.
And the btrfs chunk allocator will allocate chunks to the device with the
most unallocated space.

So after 1) and 2) you should find that chunk allocation is almost
perfectly balanced across all devices, as long as they are the same size.

Now you have a balanced base layout for the RAID6 allocation. That should
make things go quite smoothly and result in a balanced RAID6 chunk layout.


This is a good trick to get out 

Re: RAID 6 full, but there is still space left on some devices

2016-02-18 Thread Dan Blazejewski
Qu, thanks for your input. I cancelled the existing balance, and
kicked off a balance set to dconvert=single. Should be busy for the
next few days, but I already see the multiple RAID 6 stripes
disappearing, and the chunk distribution across all drives is starting
to normalize. I'll let you know if it works once it's done. Thanks!
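
(For reference, the conversion was kicked off with something like the
following, and the running balance can be checked on the same way; the
mount point is the one from the earlier output.)

  # convert all data chunks to the single profile
  btrfs balance start -dconvert=single /mnt/data

  # see how far the running balance has got
  btrfs balance status /mnt/data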

On Wed, Feb 17, 2016 at 9:03 PM, Qu Wenruo  wrote:
>
>
> Dan Blazejewski wrote on 2016/02/17 18:04 -0500:
>>
>> Hello,
>>
>> I upgraded my kernel to 4.4.2, and btrfs-progs to 4.4. I also added
>> another 4TB disk and kicked off a full balance (currently 7x4TB
>> RAID6). I'm interested to see what an additional drive will do to
>> this. I'll also have to wait and see if a full system balance on a
>> newer version of BTRFS tools does the trick or not.
>>
>> I also noticed that "btrfs device usage" shows multiple entries for
>> Data, RAID 6 on some drives. Is this normal? Please note that /dev/sdh
>> is the new disk, and I only just started the balance.
>>
>> # btrfs dev usage /mnt/data
>> /dev/sda, ID: 5
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:   733.67GiB
>>
>> /dev/sdb, ID: 6
>> Device size: 3.64TiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated: 2.15TiB
>>
>> /dev/sdc, ID: 7
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdd, ID: 1
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdf, ID: 3
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdg, ID: 2
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdh, ID: 8
>> Device size: 3.64TiB
>> Data,RAID6:320.00KiB
>> Unallocated: 3.64TiB
>>
>
> Not sure how that multiple chunk type shows up.
> Maybe all these RAID6 entries shown have different numbers of stripes?
>
>>
>>
>> Qu, in regards to your question, I ran RAID 1 on multiple disks of
>> different sizes. I believe I had a mix of 2x4TB, 1x2TB, and 1x3TB
>> drive. I replaced the 2TB drive first with a 4TB, and balanced it.
>> Later on, I replaced the 3TB drive with another 4TB, and balanced,
>> yielding an array of 4x4TB RAID1. A little while later, I wound up
>> sticking a fifth 4TB drive in, and converting to RAID6. The sixth 4TB
>> drive was added some time after that. The seventh was added just a few
>> minutes ago.
>
>
> Personally speaking, I just came up with one method to balance all these
> disks, and in fact you don't need to add a disk.
>
> 1) Balance all data chunks to the single profile
> 2) Balance all metadata chunks to the single or RAID1 profile
> 3) Balance all data chunks back to the RAID6 profile
> 4) Balance all metadata chunks back to the RAID6 profile
> The system chunk is so small that normally you don't need to bother.
>
> The trick is that, as single is the most flexible chunk type, it only
> needs one disk with unallocated space.
> And the btrfs chunk allocator will allocate chunks to the device with the
> most unallocated space.
>
> So after 1) and 2) you should find that chunk allocation is almost
> perfectly balanced across all devices, as long as they are the same size.
>
> Now you have a balanced base layout for the RAID6 allocation. That should
> make things go quite smoothly and result in a balanced RAID6 chunk 

Re: RAID 6 full, but there is still space left on some devices

2016-02-18 Thread Henk Slager
On Thu, Feb 18, 2016 at 3:03 AM, Qu Wenruo  wrote:
>
>
> Dan Blazejewski wrote on 2016/02/17 18:04 -0500:
>>
>> Hello,
>>
>> I upgraded my kernel to 4.4.2, and btrfs-progs to 4.4. I also added
>> another 4TB disk and kicked off a full balance (currently 7x4TB
>> RAID6). I'm interested to see what an additional drive will do to
>> this. I'll also have to wait and see if a full system balance on a
>> newer version of BTRFS tools does the trick or not.
>>
>> I also noticed that "btrfs device usage" shows multiple entries for
>> Data, RAID 6 on some drives. Is this normal? Please note that /dev/sdh
>> is the new disk, and I only just started the balance.
>>
>> # btrfs dev usage /mnt/data
>> /dev/sda, ID: 5
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:   733.67GiB
>>
>> /dev/sdb, ID: 6
>> Device size: 3.64TiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated: 2.15TiB
>>
>> /dev/sdc, ID: 7
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdd, ID: 1
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdf, ID: 3
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdg, ID: 2
>> Device size: 3.64TiB
>> Data,RAID6:  1.43TiB
>> Data,RAID6:732.69GiB
>> Data,RAID6:  1.48TiB
>> Data,RAID6:320.00KiB
>> Metadata,RAID6:  2.55GiB
>> Metadata,RAID6:982.00MiB
>> Metadata,RAID6:  1.50GiB
>> System,RAID6:   16.00MiB
>> Unallocated:25.21MiB
>>
>> /dev/sdh, ID: 8
>> Device size: 3.64TiB
>> Data,RAID6:320.00KiB
>> Unallocated: 3.64TiB
>>
>
> Not sure how that multiple chunk type shows up.
> Maybe all these RAID6 entries shown have different numbers of stripes?

Indeed, it's 4 different sets of stripe-widths, i.e. how many drives a
chunk is striped across. Someone suggested indicating this in the
output of the btrfs de us command some time ago.

The fs has only the RAID6 profile and I am not fully sure if the
'Unallocated' numbers are correct (on RAID10 they are 2x too high
with unpatched v4.4 progs), but anyhow the lower devids are way too
full.

From the size, one can derive how many devices (or stripe-width):
732.69GiB 4, 1.43TiB 5, 1.48TiB 6, 320.00KiB 7
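
A quick way to double-check this is to count on how many devices each
Data,RAID6 allocation size shows up; that count is the stripe width of
the corresponding set of chunks (only a heuristic, it assumes no two
sets happen to have the same per-device size):

  btrfs dev usage /mnt/data | grep 'Data,RAID6' | sort | uniq -c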

>> Qu, in regards to your question, I ran RAID 1 on multiple disks of
>> different sizes. I believe I had a mix of 2x4TB, 1x2TB, and 1x3TB
>> drive. I replaced the 2TB drive first with a 4TB, and balanced it.
>> Later on, I replaced the 3TB drive with another 4TB, and balanced,
>> yielding an array of 4x4TB RAID1. A little while later, I wound up
>> sticking a fifth 4TB drive in, and converting to RAID6. The sixth 4TB
>> drive was added some time after that. The seventh was added just a few
>> minutes ago.
>
>
> Personally speaking, I just came up with one method to balance all these
> disks, and in fact you don't need to add a disk.
>
> 1) Balance all data chunks to the single profile
> 2) Balance all metadata chunks to the single or RAID1 profile
> 3) Balance all data chunks back to the RAID6 profile
> 4) Balance all metadata chunks back to the RAID6 profile
> The system chunk is so small that normally you don't need to bother.
>
> The trick is that, as single is the most flexible chunk type, it only
> needs one disk with unallocated space.
> And the btrfs chunk allocator will allocate chunks to the device with the
> most unallocated space.
>
> So after 1) and 2) you should find that chunk allocation is almost
> perfectly balanced across all devices, as 

Re: RAID 6 full, but there is still space left on some devices

2016-02-17 Thread Qu Wenruo



Dan Blazejewski wrote on 2016/02/17 18:04 -0500:

Hello,

I upgraded my kernel to 4.4.2, and btrfs-progs to 4.4. I also added
another 4TB disk and kicked off a full balance (currently 7x4TB
RAID6). I'm interested to see what an additional drive will do to
this. I'll also have to wait and see if a full system balance on a
newer version of BTRFS tools does the trick or not.

I also noticed that "btrfs device usage" shows multiple entries for
Data, RAID 6 on some drives. Is this normal? Please note that /dev/sdh
is the new disk, and I only just started the balance.

# btrfs dev usage /mnt/data
/dev/sda, ID: 5
Device size: 3.64TiB
Data,RAID6:  1.43TiB
Data,RAID6:  1.48TiB
Data,RAID6:320.00KiB
Metadata,RAID6:  2.55GiB
Metadata,RAID6:  1.50GiB
System,RAID6:   16.00MiB
Unallocated:   733.67GiB

/dev/sdb, ID: 6
Device size: 3.64TiB
Data,RAID6:  1.48TiB
Data,RAID6:320.00KiB
Metadata,RAID6:  1.50GiB
System,RAID6:   16.00MiB
Unallocated: 2.15TiB

/dev/sdc, ID: 7
Device size: 3.64TiB
Data,RAID6:  1.43TiB
Data,RAID6:732.69GiB
Data,RAID6:  1.48TiB
Data,RAID6:320.00KiB
Metadata,RAID6:  2.55GiB
Metadata,RAID6:982.00MiB
Metadata,RAID6:  1.50GiB
System,RAID6:   16.00MiB
Unallocated:25.21MiB

/dev/sdd, ID: 1
Device size: 3.64TiB
Data,RAID6:  1.43TiB
Data,RAID6:732.69GiB
Data,RAID6:  1.48TiB
Data,RAID6:320.00KiB
Metadata,RAID6:  2.55GiB
Metadata,RAID6:982.00MiB
Metadata,RAID6:  1.50GiB
System,RAID6:   16.00MiB
Unallocated:25.21MiB

/dev/sdf, ID: 3
Device size: 3.64TiB
Data,RAID6:  1.43TiB
Data,RAID6:732.69GiB
Data,RAID6:  1.48TiB
Data,RAID6:320.00KiB
Metadata,RAID6:  2.55GiB
Metadata,RAID6:982.00MiB
Metadata,RAID6:  1.50GiB
System,RAID6:   16.00MiB
Unallocated:25.21MiB

/dev/sdg, ID: 2
Device size: 3.64TiB
Data,RAID6:  1.43TiB
Data,RAID6:732.69GiB
Data,RAID6:  1.48TiB
Data,RAID6:320.00KiB
Metadata,RAID6:  2.55GiB
Metadata,RAID6:982.00MiB
Metadata,RAID6:  1.50GiB
System,RAID6:   16.00MiB
Unallocated:25.21MiB

/dev/sdh, ID: 8
Device size: 3.64TiB
Data,RAID6:320.00KiB
Unallocated: 3.64TiB



Not sure how that multiple chunk type shows up.
Maybe all these RAID6 entries shown have different numbers of stripes?




Qu, in regards to your question, I ran RAID 1 on multiple disks of
different sizes. I believe I had a mix of 2x4TB, 1x2TB, and 1x3TB
drive. I replaced the 2TB drive first with a 4TB, and balanced it.
Later on, I replaced the 3TB drive with another 4TB, and balanced,
yielding an array of 4x4TB RAID1. A little while later, I wound up
sticking a fifth 4TB drive in, and converting to RAID6. The sixth 4TB
drive was added some time after that. The seventh was added just a few
minutes ago.


Personally speaking, I just came up with one method to balance all these
disks, and in fact you don't need to add a disk.


1) Balance all data chunks to the single profile
2) Balance all metadata chunks to the single or RAID1 profile
3) Balance all data chunks back to the RAID6 profile
4) Balance all metadata chunks back to the RAID6 profile
The system chunk is so small that normally you don't need to bother.

The trick is that, as single is the most flexible chunk type, it only
needs one disk with unallocated space.
And the btrfs chunk allocator will allocate chunks to the device with the
most unallocated space.


So after 1) and 2) you should find that chunk allocation is almost
perfectly balanced across all devices, as long as they are the same size.


Now you have a balanced base layout for the RAID6 allocation. That should
make things go quite smoothly and result in a balanced RAID6 chunk layout.
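
As a rough sketch, assuming the filesystem is mounted at /mnt/data (and
picking RAID1 for the metadata step), the four balances would look like:

  # 1) and 2): spread data and metadata evenly as single / RAID1
  btrfs balance start -dconvert=single /mnt/data
  btrfs balance start -mconvert=raid1 /mnt/data

  # 3) and 4): convert both back to RAID6
  btrfs balance start -dconvert=raid6 /mnt/data
  btrfs balance start -mconvert=raid6 /mnt/data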


Thanks,
Qu




Thanks!

On Wed, Feb 17, 2016 at 12:58 AM, Qu Wenruo  wrote:



Dan Blazejewski wrote on 2016/02/16 15:20 -0500:


Hello,

I've searched high and low about my issue, but have been unable to
turn up anything like what I'm seeing right now.

A little background: I started using BTRFS over a year ago, in RAID 1
with mixed size drives. A few months ago, I started replacing the
disks with 4 TB drives, and eventually switched over to RAID 6. I am
currently running a 6x4TB RAID6 drive configuration, which should give
me ~14.5 TB
usable, but I'm only getting around 11.

The weird thing is that it seems to completely fill 4/6 of the disks,
while leaving lots of space 

Re: RAID 6 full, but there is still space left on some devices

2016-02-16 Thread Duncan
Dan Blazejewski posted on Tue, 16 Feb 2016 15:20:12 -0500 as excerpted:

> A little background: I started using BTRFS over a year ago, in RAID 1
> with mixed size drives. A few months ago, I started replacing the disks
> with 4 TB drives, and eventually switched over to RAID 6. I am currently
> running a 6x4TB RAID6 drive configuration, which should give me ~14.5 TB
> usable, but I'm only getting around 11.
> 
> The weird thing is that it seems to completely fill 4/6 of the disks,
> while leaving lots of space free on 2 of the disks. I've tried full
> filesystem balances, yet the problem continues.
> 
> # btrfs fi show
> 
> Label: none  uuid: 78733087-d597-4301-8efa-8e1df800b108
> Total devices 6 FS bytes used 11.59TiB
> devid1 size 3.64TiB used 3.64TiB path /dev/sdd
> devid2 size 3.64TiB used 3.64TiB path /dev/sdg
> devid3 size 3.64TiB used 3.64TiB path /dev/sdf
> devid5 size 3.64TiB used 2.92TiB path /dev/sda
> devid6 size 3.64TiB used 1.48TiB path /dev/sdb
> devid7 size 3.64TiB used 3.64TiB path /dev/sdc
> 
> btrfs-progs v4.2.3
> 
> 
> 
> # btrfs fi df /mnt/data
> 
> Data, RAID6: total=11.67TiB, used=11.58TiB
> System, RAID6: total=64.00MiB, used=1.70MiB
> Metadata, RAID6: total=15.58GiB, used=13.89GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
> 
> # btrfs fi usage /mnt/data
> 
> WARNING: RAID56 detected, not implemented

Your btrfs-progs is old and I don't see any indication of the kernel 
version at all, but I'll guess it's old as well.  Particularly for raid56 
mode, which still isn't at the maturity level of the rest of btrfs, using 
a current kernel and btrfs-progs is *very* strongly recommended.

Among other things, current userspace 4.4 btrfs fi usage should support 
raid56 mode properly, now.  Also, with newer userspace and kernel, btrfs 
balance supports the stripes= filter, which appears to be what you're 
looking for, to rebalance to full-width stripes anything that's not yet 
full width, thereby evening out your usage.
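
On your six-device raid6 that would be something like the below (untested
here, mount point taken from your own commands; 1..5 selects anything
narrower than the full six-device width):

  btrfs balance start -dstripes=1..5 /mnt/data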

A full balance /should/ do it as well, I believe, but with raid56 support 
still not yet at the maturity level of btrfs in general, it's likely your 
version is old and buggy in that regard.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
