Re: wrong values in df and btrfs filesystem df

2011-04-12 Thread Miao Xie
On Mon, 11 Apr 2011 08:29:46 +0100, Stephane Chazelas wrote:
 2011-04-10 18:13:51 +0800, Miao Xie:
 [...]
 # df /srv/MM

 Filesystem   1K-blocks  Used Available Use% Mounted on
 /dev/sdd1  5846053400 1593436456 2898463184  36% /srv/MM

 # btrfs filesystem df /srv/MM

 Data, RAID0: total=1.67TB, used=1.48TB
 System, RAID1: total=16.00MB, used=112.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=3.75GB, used=2.26GB

 # btrfs-show

 Label: MMedia  uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
Total devices 3 FS bytes used 1.48TB
devid3 size 1.81TB used 573.76GB path /dev/sdb1
devid2 size 1.81TB used 573.77GB path /dev/sde1
devid1 size 1.82TB used 570.01GB path /dev/sdd1

 Btrfs Btrfs v0.19

 

 df shows an Available value which isn't related to any real value.  

I _think_ that value is the amount of space not allocated to any
 block group. If that's so, then Available (from df) plus the three
 total values (from btrfs fi df) should equal the size value from df.

 This value excludes the space that can not be allocated to any block group.
 This feature was implemented to fix a bug where the df command added disk
 space, which can never be allocated to any block group, into the Available
 value. (See the changelog of commit 6d07bcec969af335d4e35b3921131b7929bd634e.)

 The implementation works like a fake chunk allocation, but this fake
 allocation only takes space from two of the three disks; it doesn't spread
 the stripes over all the disks that have enough space.
 [...]
 
 Hi Miao,
 
 would you care to expand a bit on that. In Helmut's case above
 where all the drives have at least 1.2TB free, how would there
 be un-allocatable space?
 
 What's the implication of having disks of differing sizes? Does
 that mean that the extra space on larger disks is lost?

I'm sorry that I couldn't explain it clearly.

As we know, Btrfs introduced RAID functionality: it can allocate stripes from
different disks to make up a RAID block group. But if there is not enough disk
space to allocate all the required stripes, btrfs can't make up a new block
group, and the leftover disk space can never be used.

For example, suppose we have two disks, one 5GB and the other 10GB, and we use
RAID0 block groups to store the file data. A RAID0 block group needs at least
two stripes, which must be on different disks. After all the space on the 5GB
disk is allocated, about 5GB of free space remains on the 10GB disk. This
space can not be used, because there is no free space left on the other disk
from which to allocate the second stripe, so we can't make up a new RAID0
block group.
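(A quick shell calculation of that two-disk example -- illustrative only,
assuming pure RAID0 data with one stripe per disk and ignoring metadata:)

$ small=$(( 5 * 2**30 ))    # the 5GB disk
$ large=$(( 10 * 2**30 ))   # the 10GB disk
$ # RAID0 needs a stripe on each disk, so usable data space is limited to
$ # twice the smaller disk; the rest of the larger disk is stranded
$ echo "usable: $(( 2 * small )), stranded: $(( large - small ))"
usable: 10737418240, stranded: 5368709120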

Besides the two-stripe minimum, the chunk allocator will allocate stripes from
as many disks as possible when it makes up a new RAID0 block group. That is,
if all the disks have enough free space, the allocator will allocate stripes
from all of them.

In Helmut's case, every time btrfs allocates new chunks (block groups), the
chunk allocator will allocate three same-size stripes, one from each of the
three disks, to make up the new RAID0 block group, until there is no free
space left on two of the disks. So btrfs can use most of the disk space for
RAID0 block groups.

But the algorithm of the df command doesn't simulate the above allocation
correctly. Its simulated allocation takes the stripes from only two disks;
once those two disks have no free space left, the third disk still has 1.2TB
free, but the df command thinks this space can not be used to make up a new
RAID0 block group and ignores it. This is a bug, I think.
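(Illustrative only, with round numbers -- say roughly 1.2TB of free space on
each of the three disks:)

$ free_per_disk=1200000000000    # ~1.2TB free per disk, rough figure
$ # real allocator: stripes come from all three disks, so nearly all of
$ # the free space can end up in RAID0 block groups
$ echo $(( 3 * free_per_disk ))
3600000000000
$ # df's simulation: stripes come from only two disks, so roughly one
$ # disk's worth of free space is left out of Available
$ echo $(( 2 * free_per_disk ))
2400000000000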

BTW: the Available value is the size of the free space that we may use to
store file data. In a btrfs filesystem it is hard to calculate, because the
block groups are allocated dynamically: not all of the free space on the disks
will be allocated to data block groups, since some of the space may be
allocated to metadata block groups. So we just tell the users the size of the
free space they can probably use to store file data.

Thanks
Miao

 
 Thanks,
 Stephane
 



Re: wrong values in df and btrfs filesystem df

2011-04-12 Thread Stephane Chazelas
2011-04-12 15:22:57 +0800, Miao Xie:
[...]
 But the algorithm of the df command doesn't simulate the above allocation
 correctly. Its simulated allocation takes the stripes from only two disks;
 once those two disks have no free space left, the third disk still has 1.2TB
 free, but the df command thinks this space can not be used to make up a new
 RAID0 block group and ignores it. This is a bug, I think.
[...]

Thanks a lot Miao for the detailed explanation. So, the disk
space is not lost, it's just df not reporting the available
space correctly. That's me relieved.

It explains why I'm getting:

# blockdev --getsize64 /dev/sda4
2967698087424
# blockdev --getsize64 /dev/sdb
3000592982016
# blockdev --getsize64 /dev/sdc
3000592982016
# truncate -s 2967698087424 a
# truncate -s 3000592982016 b
# truncate -s 3000592982016 c
# losetup /dev/loop0 ./a
# losetup /dev/loop1 ./b
# losetup /dev/loop2 ./c
# mkfs.btrfs a b c
# btrfs device scan /dev/loop[0-2]
Scanning for Btrfs filesystems in '/dev/loop0'
Scanning for Btrfs filesystems in '/dev/loop1'
Scanning for Btrfs filesystems in '/dev/loop2'
# mount  /dev/loop0 /mnt/1
# df -k /mnt/1
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/loop0   8758675828        56 5859474304   1% /mnt/1
# echo $(((8758675828 - 5859474304)*2**10))
2968782360576

One disk's worth of space lost, according to df.

While it should have been more like
$(((3000592982016-2967698087424)*2)) (about 60GB), or about 0
after the quasi-round-robin allocation patch, right?
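(For reference, that expression works out to about 61GiB:)

$ echo $(( (3000592982016 - 2967698087424) * 2 ))   # roughly 61GiB
65789789184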

Best regards,
Stephane


Re: wrong values in df and btrfs filesystem df

2011-04-11 Thread Stephane Chazelas
2011-04-10 18:13:51 +0800, Miao Xie:
[...]
  # df /srv/MM
 
  Filesystem   1K-blocks  Used Available Use% Mounted on
  /dev/sdd1  5846053400 1593436456 2898463184  36% /srv/MM
 
  # btrfs filesystem df /srv/MM
 
  Data, RAID0: total=1.67TB, used=1.48TB
  System, RAID1: total=16.00MB, used=112.00KB
  System: total=4.00MB, used=0.00
  Metadata, RAID1: total=3.75GB, used=2.26GB
 
  # btrfs-show
 
  Label: MMedia  uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
 Total devices 3 FS bytes used 1.48TB
 devid3 size 1.81TB used 573.76GB path /dev/sdb1
 devid2 size 1.81TB used 573.77GB path /dev/sde1
 devid1 size 1.82TB used 570.01GB path /dev/sdd1
 
  Btrfs Btrfs v0.19
 
  
 
  df shows an Available value which isn't related to any real value.  
  
 I _think_ that value is the amount of space not allocated to any
  block group. If that's so, then Available (from df) plus the three
  total values (from btrfs fi df) should equal the size value from df.
 
 This value excludes the space that can not be allocated to any block group.
 This feature was implemented to fix a bug where the df command added disk
 space, which can never be allocated to any block group, into the Available
 value. (See the changelog of commit 6d07bcec969af335d4e35b3921131b7929bd634e.)

 The implementation works like a fake chunk allocation, but this fake
 allocation only takes space from two of the three disks; it doesn't spread
 the stripes over all the disks that have enough space.
[...]

Hi Miao,

would you care to expand a bit on that. In Helmut's case above
where all the drives have at least 1.2TB free, how would there
be un-allocatable space?

What's the implication of having disks of differing sizes? Does
that mean that the extra space on larger disks is lost?

Thanks,
Stephane


Re: wrong values in df and btrfs filesystem df

2011-04-11 Thread Arne Jansen
On 11.04.2011 09:29, Stephane Chazelas wrote:
 2011-04-10 18:13:51 +0800, Miao Xie:
 [...]

 
 What's the implication of having disks of differing sizes? Does
 that mean that the extra space on larger disks is lost?

Yes. Currently the allocator cannot handle different sizes well,
especially when mirroring is involved. I sent a patch for this
to the list some weeks ago (see quasi-round-robin), but it hasn't
been merged yet.

-Arne

 
 Thanks,
 Stephane


Re: wrong values in df and btrfs filesystem df

2011-04-11 Thread Helmut Hullen
Hello, Stephane,

You wrote on 11.04.11:

 What's the implication of having disks of differing sizes? Does
 that mean that the extra space on larger disks is lost?

Seems to work.
I've tried:

/dev/sda   140 GByte
/dev/sdb   140 GByte
/dev/sdc70 GByte

mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdc1

mounted, more than 140 GByte free

Filled with more than 140 GByte (/dev/sdc1 was full to the brim)

btrfs device add /dev/sda1 ...
btrfs filesystem balance ...

Needed many hours, but then more than 210 GByte were usable.

Filled up to about 220 GByte; /dev/sdc1 was again full to the brim

btrfs device delete /dev/sdc1
umount
mount

All looks as expected, only the 2 bigger devices are seen, and they  
contain the expected files.

And that looks good: my major interest in btrfs is working in that way -  
adding a bigger device, deleting a smaller device.

Kernel 2.6.38.1
btrfs from november 2010


Only the values shown by df and btrfs filesystem df take some getting used
to; maybe "available" has to be read as "at least available".

Best regards!
Helmut


wrong values in df and btrfs filesystem df

2011-04-09 Thread Helmut Hullen
Hello, linux-btrfs,

First I create an array of 2 disks with

  mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1

and mount it at /srv/MM.

Then I fill it with about 1,6 TByte.
And then I add /dev/sde1 via

  btrfs device add /dev/sde1 /srv/MM
  btrfs filesystem balance /srv/MM
(it ran about 20 hours)

Then I work on it, copy some new files, delete some old files - all  
works well. Only

  df /srv/MM
  btrfs filesystem df /srv/MM

show some completely wrong values:

# df /srv/MM

Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/sdd1  5846053400 1593436456 2898463184  36% /srv/MM

# btrfs filesystem df /srv/MM

Data, RAID0: total=1.67TB, used=1.48TB
System, RAID1: total=16.00MB, used=112.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=3.75GB, used=2.26GB

# btrfs-show

Label: MMedia  uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
Total devices 3 FS bytes used 1.48TB
devid3 size 1.81TB used 573.76GB path /dev/sdb1
devid2 size 1.81TB used 573.77GB path /dev/sde1
devid1 size 1.82TB used 570.01GB path /dev/sdd1

Btrfs Btrfs v0.19



df shows an Available value which isn't related to any real value.  
The sum of used and Available is far away from the really existent  
disk space. When I copy additional files to /srv/MM then used still  
shows the right value, and the sum grows (slowly) to the max. available  
space.

In btrfs filesystem df /srv/MM the line

  Data, RAID0: total=1.67TB, used=1.48TB

shows a total value which isn't related to any existent value; maybe  
it still shows the used space before adding the third partition.
This (wrong) value seems not to change.

Kernel 2.6.38.1
btrfs from november 2010

Best regards!
Helmut


Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Hugo Mills
On Sat, Apr 09, 2011 at 08:25:00AM +0200, Helmut Hullen wrote:
 Hello, linux-btrfs,
 
 First I create an array of 2 disks with
 
   mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
 
 and mount it at /srv/MM.
 
 Then I fill it with about 1,6 TByte.
 And then I add /dev/sde1 via
 
   btrfs device add /dev/sde1 /srv/MM
   btrfs filesystem balance /srv/MM
 (it ran about 20 hours)
 
 Then I work on it, copy some new files, delete some old files - all  
 works well. Only
 
   df /srv/MM
   btrfs filesystem df /srv/MM
 
 show some completely wrong values:
 
 # df /srv/MM
 
 Filesystem   1K-blocks  Used Available Use% Mounted on
  /dev/sdd1  5846053400 1593436456 2898463184  36% /srv/MM
 
 # btrfs filesystem df /srv/MM
 
 Data, RAID0: total=1.67TB, used=1.48TB
 System, RAID1: total=16.00MB, used=112.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=3.75GB, used=2.26GB
 
 # btrfs-show
 
 Label: MMedia  uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
   Total devices 3 FS bytes used 1.48TB
   devid3 size 1.81TB used 573.76GB path /dev/sdb1
   devid2 size 1.81TB used 573.77GB path /dev/sde1
   devid1 size 1.82TB used 570.01GB path /dev/sdd1
 
 Btrfs Btrfs v0.19
 
 
 
 df shows an Available value which isn't related to any real value.  

   I _think_ that value is the amount of space not allocated to any
block group. If that's so, then Available (from df) plus the three
total values (from btrfs fi df) should equal the size value from df.

 The sum of used and Available is far away from the really existent  
 disk space. When I copy additional files to /srv/MM then used still  
 shows the right value, and the sum grows (slowly) to the max. available  
 space.
 
 In btrfs filesystem df /srv/MM the line
 
   Data, RAID0: total=1.67TB, used=1.48TB
 
 shows a total value which isn't related to any existent value; maybe  
 it still shows the used space before adding the third partition.
 This (wrong) value seems not to change.

   It's not wrong -- it simply doesn't mean what you think it does. :)

   The total value in the output of btrfs fi df is the total space
allocated to block groups. As the filesystem needs more space, it
will allocate more block groups from the available raw storage pool,
and the number will go up.

   This is explained on the wiki at [1].
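   (As a rough sanity check against Helmut's numbers above -- illustrative
only, using floating-point shell arithmetic as in zsh/ksh -- the per-device
"used" values from btrfs-show should add up to roughly the same figure as the
block-group totals from btrfs fi df:)

$ # per-device allocations from btrfs-show, in GB
$ echo $((573.76 + 573.77 + 570.01))                   # about 1717.5
$ # block-group totals from btrfs fi df, converted to GB
$ echo $((1.67*2**10 + 2*3.75 + (2*16.0 + 4)/2**10))   # about 1717.6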

   HTH,
   Hugo.

[1] 
https://btrfs.wiki.kernel.org/index.php/FAQ#Why_does_df_show_incorrect_free_space_for_my_RAID_volume.3F

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- There are three things you should never see being made: laws, standards, and sausages. ---




Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Stephane Chazelas
2011-04-09 10:11:41 +0100, Hugo Mills:
[...]
  # df /srv/MM
  
  Filesystem   1K-blocks  Used Available Use% Mounted on
  /dev/sdd1  5846053400 1593436456 2898463184  36% /srv/MM
  
  # btrfs filesystem df /srv/MM
  
  Data, RAID0: total=1.67TB, used=1.48TB
  System, RAID1: total=16.00MB, used=112.00KB
  System: total=4.00MB, used=0.00
  Metadata, RAID1: total=3.75GB, used=2.26GB
  
  # btrfs-show
  
  Label: MMedia  uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
  Total devices 3 FS bytes used 1.48TB
  devid3 size 1.81TB used 573.76GB path /dev/sdb1
  devid2 size 1.81TB used 573.77GB path /dev/sde1
  devid1 size 1.82TB used 570.01GB path /dev/sdd1
  
  Btrfs Btrfs v0.19
  
  
  
  df shows an Available value which isn't related to any real value.  
 
I _think_ that value is the amount of space not allocated to any
 block group. If that's so, then Available (from df) plus the three
 total values (from btrfs fi df) should equal the size value from df.
[...]

Well,

$ echo $((2898463184 + 1.67*2**30 + 4*2**10 + 16*2**10*2 + 3.75*2**20*2))
4699513214.079

I do get the same kind of discrepancy:

$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb  8.2T  3.5T  3.2T  53% /mnt
$ sudo btrfs fi show
Label: none  uuid: ...
Total devices 3 FS bytes used 3.43TB
devid4 size 2.73TB used 1.17TB path /dev/sdc
devid3 size 2.73TB used 1.17TB path /dev/sdb
devid2 size 2.70TB used 1.14TB path /dev/sda4
$ sudo btrfs fi df /mnt
Data, RAID0: total=3.41TB, used=3.41TB
System, RAID1: total=16.00MB, used=232.00KB
Metadata, RAID1: total=35.25GB, used=20.55GB


$ echo $((3.2 + 3.41 + 2*16/2**20 + 2*35.25/2**10))
6.678847656253

-- 
Stephane



Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Helmut Hullen
Hello, Hugo,

You wrote on 09.04.11:

   df /srv/MM
   btrfs filesystem df /srv/MM

 show some completely wrong values:

 # df /srv/MM

 Filesystem   1K-blocks  Used Available Use% Mounted on
 /dev/sdd1  5846053400 1593436456 2898463184  36% /srv/MM

 # btrfs filesystem df /srv/MM

 Data, RAID0: total=1.67TB, used=1.48TB
 System, RAID1: total=16.00MB, used=112.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=3.75GB, used=2.26GB

 # btrfs-show

 Label: MMedia  uuid: 120b036a-883f-46aa-bd9a-cb6a1897c8d2
  Total devices 3 FS bytes used 1.48TB
  devid3 size 1.81TB used 573.76GB path /dev/sdb1
  devid2 size 1.81TB used 573.77GB path /dev/sde1
  devid1 size 1.82TB used 570.01GB path /dev/sdd1

 Btrfs Btrfs v0.19

 

 df shows an Available value which isn't related to any real
 value.

I _think_ that value is the amount of space not allocated to any
 block group. If that's so, then Available (from df) plus the three
 total values (from btrfs fi df) should equal the size value from
 df.

I'm not convinced - sorry.

"used" plus "available" should be nearly the same value as the "total" in
df; in my example 1.6 TB + 2.9 TB is far away from the total of 5.8 TB
(here 1 "T" = 10^9 1K-blocks).

The total value in the output of btrfs fi df is the total space
 allocated to block groups. As the filesystem needs more space, it
 will allocate more block groups from the available raw storage pool,
 and the number will go up.

This is explained on the wiki at [1].

I've studied the page - the data shown there looks to be consistent.  
Especially: used plus avail = size

-

The data seems (mostly) to be consistent on a new array of disks/partitions.
But when I work with device add and device delete then there are holes.

Another machine:

/dev/sda about 140 GB
/dev/sdb about 140 GB
/dev/sdc about  70 GB

# mkfs.btrfs -L SCSI -d raid0 -m raid1 /dev/sdb1 /dev/sdc1
# filled with 70 GByte

# df -t btrfs
Filesystem   Type   1K-blocks      Used Available Use% Mounted on
/dev/sdb1    btrfs  214058944  73956996  65522124  54% /mnt/SCSI

# btrfs filesystem show
Label: 'SCSI'  uuid: 1932d11d-021a-4054-8429-db25a0204221
Total devices 2 FS bytes used 70.43GB
devid1 size 136.73GB used 37.03GB path /dev/sdb1
devid2 size 67.41GB used 37.01GB path /dev/sdc1

# btrfs filesystem df /mnt/SCSI
Data, RAID0: total=72.00GB, used=70.33GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=104.78MB
Metadata: total=8.00MB, used=0.00

#
# btrfs device add /dev/sda1 /mnt/SCSI

# df -t btrfs
Filesystem   Type   1K-blocks      Used Available Use% Mounted on
/dev/sdb1    btrfs  356234160  84177056 200667344  30% /mnt/SCSI

# btrfs filesystem show
Label: 'SCSI'  uuid: 1932d11d-021a-4054-8429-db25a0204221
Total devices 3 FS bytes used 80.16GB
devid1 size 136.73GB used 42.03GB path /dev/sdb1
devid3 size 135.59GB used 0.00 path /dev/sda1
devid2 size 67.41GB used 42.01GB path /dev/sdc1

# btrfs filesystem df /mnt/SCSI
Data, RAID0: total=82.00GB, used=80.04GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=119.54MB
Metadata: total=8.00MB, used=0.00

#
# btrfs fi balance /mnt/SCSI

# df -t btrfs
Filesystem   Type   1K-blocks      Used Available Use% Mounted on
/dev/sdb1    btrfs  356234160  84125344 228211536  27% /mnt/SCSI

# btrfs filesystem show
Label: 'SCSI'  uuid: 1932d11d-021a-4054-8429-db25a0204221
Total devices 3 FS bytes used 80.14GB
devid1 size 136.73GB used 28.28GB path /dev/sdb1
devid3 size 135.59GB used 27.00GB path /dev/sda1
devid2 size 67.41GB used 29.01GB path /dev/sdc1

# btrfs filesystem df /mnt/SCSI
Data, RAID0: total=81.00GB, used=80.04GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.25GB, used=94.29MB
Metadata: total=8.00MB, used=0.00

#
# btrfs device delete /dev/sdc1 /mnt/SCSI
# umount /mnt/SCSI
# mount LABEL=SCSI /mnt/SCSI

# df -t btrfs
Filesystem   Type   1K-blocks      Used Available Use% Mounted on
/dev/sdb1    btrfs  285548192 232024076  51758540  82% /mnt/SCSI

# btrfs filesystem show
Label: 'SCSI'  uuid: 1932d11d-021a-4054-8429-db25a0204221
Total devices 2 FS bytes used 221.04GB
devid1 size 136.73GB used 111.51GB path /dev/sdb1
devid3 size 135.59GB used 111.51GB path /dev/sda1

# btrfs filesystem df /mnt/SCSI
Data, RAID0: total=222.00GB, used=220.80GB
System, RAID1: total=8.00MB, used=24.00KB

Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Calvin Walton
On Sat, 2011-04-09 at 10:11 +0100, Hugo Mills wrote:
 On Sat, Apr 09, 2011 at 08:25:00AM +0200, Helmut Hullen wrote:
  Hello, linux-btrfs,
  
  First I create an array of 2 disks with
  
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
  
  and mount it at /srv/MM.
  
  Then I fill it with about 1,6 TByte.
  And then I add /dev/sde1 via
  
btrfs device add /dev/sde1 /srv/MM
btrfs filesystem balance /srv/MM
  (it ran about 20 hours)
  
  Then I work on it, copy some new files, delete some old files - all  
  works well. Only
  
df /srv/MM
btrfs filesystem df /srv/MM
  
  show some completely wrong values:

It's not wrong -- it simply doesn't mean what you think it does. :)
 
The total value in the output of btrfs fi df is the total space
 allocated to block groups. As the filesystem needs more space, it
 will allocate more block groups from the available raw storage pool,
 and the number will go up.
 
This is explained on the wiki at [1].

And I just drew up a picture which I think should help explain it a bit,
too: http://www.kepstin.ca/dump/btrfs-alloc.png

If I can figure out how to add images to the btrfs wiki, and find a good
place to put it, do you think this would be a helpful addition?

 [1] 
 https://btrfs.wiki.kernel.org/index.php/FAQ#Why_does_df_show_incorrect_free_space_for_my_RAID_volume.3F

-- 
Calvin Walton calvin.wal...@kepstin.ca



Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Helmut Hullen
Hello, Calvin,

You wrote on 09.04.11:

 Then I work on it, copy some new files, delete some old files - all
 works well. Only

   df /srv/MM
   btrfs filesystem df /srv/MM

 show some completely wrong values:

[...]

 And I just drew up a picture which I think should help explain it a
 bit, too: http://www.kepstin.ca/dump/btrfs-alloc.png

Nice picture. But it doesn't solve the problem that I need reliable
information about the free/available space. And I prefer asking for this
information with df - df should work in the same way for all filesystems.

Best regards!
Helmut


Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Calvin Walton
On Sat, 2011-04-09 at 19:05 +0200, Helmut Hullen wrote:
  Then I work on it, copy some new files, delete some old files - all
  works well. Only
 
df /srv/MM
btrfs filesystem df /srv/MM
 
  show some completely wrong values:

  And I just drew up a picture which I think should help explain it a
  bit, too: http://www.kepstin.ca/dump/btrfs-alloc.png
 
 Nice picture. But it doesn't solve the problem that I need reliable
 information about the free/available space. And I prefer asking for this
 information with df - df should work in the same way for all filesystems.

The problem is that the answer to the seemingly simple question "How much
more data can I put onto this filesystem?" gets pretty hard with btrfs.

Your case is one of the simpler ones - To calculate the remaining space
for files, you take the unused allocated data space (light blue on my
picture), add the unallocated space (white), divide by the raid mode
redundancy, and subtract some percentage (this is only an estimate, of
course...) of that unallocated space for the additional metadata
overhead.
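(A rough back-of-the-envelope version of that calculation in shell --
illustrative only, with made-up numbers, data in RAID0 (so redundancy 1) and
the metadata overhead guessed at 2%:)

$ unused_data=$((2 * 2**30))    # unused but already-allocated data space (made up)
$ unallocated=$((50 * 2**30))   # unallocated raw space (made up)
$ redundancy=1                  # RAID0 keeps a single copy of the data
$ overhead=2                    # guessed metadata overhead, in percent
$ echo $(( unused_data + unallocated / redundancy - unallocated * overhead / 100 ))
54760833024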

Now imagine the case where your btrfs filesystem has files stored in
multiple raid modes: e.g. some files are raid5, others raid0.
The amount of data you can write to the filesystem then depends on how
you write the data!

You might be able to fit 64gb if you use raid0, but only 48gb with
raid5; and only 16gb with raid1!

There isn't a single number that btrfs can report which does what you
want.

-- 
Calvin Walton calvin.wal...@kepstin.ca



Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Helmut Hullen
Hello, Calvin,

You wrote on 09.04.11:

 Nice picture. But it doesn't solve the problem that I need reliable
 information about the free/available space. And I prefer asking for this
 information with df - df should work in the same way for all filesystems.

 The problem is that the answer to the seemingly simple question "How much
 more data can I put onto this filesystem?" gets pretty hard with btrfs.

 Your case is one of the simpler ones - To calculate the remaining
 space for files, you take the unused allocated data space (light blue
 on my picture), add the unallocated space (white), divide by the raid
 mode redundancy, and subtract some percentage (this is only an
 estimate, of course...) of that unallocated space for the additional
 metadata overhead.

That's simple?

Maybe I'm simple-minded. But I expect the same meaning of the data shown
with

df mountpoint

regardless of what kind of filesystem is mounted.
It shouldn't be an item on an IQ test.

If the value of "available" is unresolvable, then btrfs should not show
any value.


Ok - there's a problem slightly similar to the available value of  
compressed partitions.

Best regards!
Helmut


Re: wrong values in df and btrfs filesystem df

2011-04-09 Thread Peter Stuge
Helmut Hullen wrote:
 If the value of "available" is unresolvable, then btrfs should not
 show any value.

Disagree strongly. I think a pessimistic estimate would be much better to
show than no value at all. This may be what is currently shown.

As for solving this with a high degree of usability, that's not
really possible when constrained to the traditional paradigm that one
filesystem will have completely consistent behavior backing all of
it.

I think the only answer is to have btrfs-specific tools that know
more about the filesystem, and can present the relevant facts.

Taking the example of part fs being raid0 and part being raid5, such
a tool would then list calculated values for both those parts of the
fs. One showing how much could go into the raid0 part, the other how
much could go into the raid5 part.

But for such filesystems, Linux can't do what Helmut would like.

Maybe it would be possible to optimize the reported numbers, to be
what the user actually wants as often as possible. Ie. if there is
only one type of backing storage (sorry, don't know the terms) then
the calculation would be easier to get right, following the simple
formula that was just given. This is all eye candy however,
completely irrelevant IMO as long as the filesystem oopses, or eats
root nodes. :)


//Peter