On Mon, 11 Apr 2011 08:29:46 +0100, Stephane Chazelas wrote:
2011-04-10 18:13:51 +0800, Miao Xie:
[...]
# df /srv/MM
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdd1 5846053400 1593436456 2898463184 36% /srv/MM
# btrfs filesystem df /srv/MM
Data, RAID0: total=1.67TB, used=1.48TB
2011-04-12 15:22:57 +0800, Miao Xie:
[...]
But the algorithm of the df command doesn't simulate the above allocation correctly: the simulated allocation just allocates the stripes from two disks, and then these two disks have no free space, but the third disk still has 1.2TB of free space, df
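The mis-simulation described above can be sketched in a few lines of Python (illustrative numbers and a simplified greedy allocator, not the btrfs source): striping each chunk across every device that still has free space counts the third disk's space, while pinning the stripes to two disks strands it.

```python
# Minimal sketch of a RAID0 allocation simulation (hypothetical sizes,
# not btrfs code): repeatedly stripe one chunk across every device that
# still has free space, as long as at least min_stripes devices qualify.

def raid0_available(free_per_dev, min_stripes=2, chunk=1):
    """Greedy simulation; returns total allocatable RAID0 data bytes."""
    free = list(free_per_dev)
    total = 0
    while True:
        usable = [f for f in free if f >= chunk]
        if len(usable) < min_stripes:
            break
        # Take one stripe from each device that can still contribute.
        free = [f - chunk if f >= chunk else f for f in free]
        total += chunk * len(usable)
    return total

# Two disks with 2TB free each plus one with 1.2TB free (illustrative):
# striping over all three counts the 1.2TB, the two-disk simulation
# stops early and strands it.
print(raid0_available([2000, 2000, 1200]))  # 5200
print(raid0_available([2000, 2000]))        # 4000
```

The two-disk result is exactly the under-reported figure the complaint is about: the third device's free space never enters the estimate.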
2011-04-10 18:13:51 +0800, Miao Xie:
[...]
# df /srv/MM
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdd1 5846053400 1593436456 2898463184 36% /srv/MM
# btrfs filesystem df /srv/MM
Data, RAID0: total=1.67TB, used=1.48TB
System, RAID1:
On 11.04.2011 09:29, Stephane Chazelas wrote:
2011-04-10 18:13:51 +0800, Miao Xie:
[...]
What's the implication of having disks of differing sizes? Does
that mean that the extra space on larger disks is lost?
Yes. Currently the allocator cannot handle different sizes well,
especially when
Hello, Stephane,
You wrote on 11.04.11:
What's the implication of having disks of differing sizes? Does
that mean that the extra space on larger disks is lost?
Seems to work.
I've tried:
/dev/sda 140 GByte
/dev/sdb 140 GByte
/dev/sdc 70 GByte
mkfs.btrfs -d raid0 -m raid1
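The arithmetic behind "seems to work" can be checked directly (a back-of-the-envelope sketch, not btrfs code; it assumes the two-phase allocation pattern these three sizes happen to produce): raid0 stripes over all three disks until the 70 GByte disk is full, then over the remaining two, so nothing is lost.

```python
# Capacity check for -d raid0 over 140/140/70 GByte (illustrative):
# phase 1 stripes over all three disks until the smallest is full,
# phase 2 stripes over the two equal remainders.

sizes = [140, 140, 70]                        # GByte, from the test above
smallest = min(sizes)
phase1 = len(sizes) * smallest                # 3 * 70 = 210
rest = [s - smallest for s in sizes if s > smallest]
phase2 = len(rest) * min(rest)                # 2 * 70 = 140

print(phase1 + phase2)                        # 350 = sum(sizes)
```

With these particular sizes the full 350 GByte is usable, which matches the observation; with three *distinct* sizes the same stripe-until-exhausted reasoning needs one phase per size level.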
Hello, linux-btrfs,
First I create an array of 2 disks with
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
and mount it at /srv/MM.
Then I fill it with about 1.6 TByte.
And then I add /dev/sde1 via
btrfs device add /dev/sde1 /srv/MM
btrfs filesystem balance /srv/MM
(it ran about
On Sat, Apr 09, 2011 at 08:25:00AM +0200, Helmut Hullen wrote:
Hello, linux-btrfs,
First I create an array of 2 disks with
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
and mount it at /srv/MM.
Then I fill it with about 1.6 TByte.
And then I add /dev/sde1 via
btrfs device
2011-04-09 10:11:41 +0100, Hugo Mills:
[...]
# df /srv/MM
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdd1 5846053400 1593436456 2898463184 36% /srv/MM
# btrfs filesystem df /srv/MM
Data, RAID0: total=1.67TB, used=1.48TB
System, RAID1:
Hello, Hugo,
You wrote on 09.04.11:
df /srv/MM
btrfs filesystem df /srv/MM
show some completely wrong values:
# df /srv/MM
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdd1 5846053400 1593436456 2898463184 36% /srv/MM
# btrfs filesystem df
On Sat, 2011-04-09 at 10:11 +0100, Hugo Mills wrote:
On Sat, Apr 09, 2011 at 08:25:00AM +0200, Helmut Hullen wrote:
Hello, linux-btrfs,
First I create an array of 2 disks with
mkfs.btrfs -d raid0 -m raid1 /dev/sdb1 /dev/sdd1
and mount it at /srv/MM.
Then I fill it with
Hello, Calvin,
You wrote on 09.04.11:
Then I work on it, copy some new files, delete some old files - all
works well. Only
df /srv/MM
btrfs filesystem df /srv/MM
show some completely wrong values:
[...]
And I just drew up a picture which I think should help explain it a
bit,
On Sat, 2011-04-09 at 19:05 +0200, Helmut Hullen wrote:
Then I work on it, copy some new files, delete some old files - all
works well. Only
df /srv/MM
btrfs filesystem df /srv/MM
show some completely wrong values:
And I just drew up a picture which I think should help explain
Hello, Calvin,
You wrote on 09.04.11:
Nice picture. But it doesn't solve the problem that I need reliable information about the free/available space. And I prefer asking with df for this information - df should work in the same way for all filesystems.
The problem is that the answer to
Helmut Hullen wrote:
If the value of available is unresolvable, then btrfs should not
show any value.
Disagree strongly. I think a pessimistic estimate would be much
better to show than no value at all. This may be what is currently
shown.
As for solving this with a high degree of usability,
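One way such a pessimistic estimate could be computed (purely illustrative; the ratio table and function are assumptions, not btrfs behaviour) is to assume all unallocated raw space goes to whichever profile in use has the worst usable-to-raw ratio, so df can never over-promise.

```python
# Sketch of a pessimistic "Available" lower bound (assumed ratios,
# not btrfs code): usable bytes per raw byte for each profile.
RATIO = {"single": 1.0, "raid0": 1.0, "raid1": 0.5, "dup": 0.5}

def pessimistic_available(unallocated_bytes, profiles_in_use):
    """Assume every future chunk uses the worst-ratio profile in use."""
    worst = min(RATIO[p] for p in profiles_in_use)
    return int(unallocated_bytes * worst)

# raid0 data + raid1 metadata, 100 GiB raw unallocated: in the worst
# case everything becomes raid1, so only promise 50 GiB.
print(pessimistic_available(100 * 2**30, ["raid0", "raid1"]))
```

The estimate is deliberately conservative: with mixed raid0/raid1 most space will in practice go to data at ratio 1.0, but the lower bound is the one number the user can always rely on.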