[PATCH] btrfs: fix race in reada

2012-02-25 Thread Arne Jansen
When inserting into the radix tree returns EEXIST, get the existing
entry without giving up the spinlock in between.
There was a race for both the zones trees and the extent tree.

Signed-off-by: Arne Jansen <sensi...@gmx.net>
---
 fs/btrfs/reada.c |   36 ++++++++++++++++--------------------
 1 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index 2373b39..85ce503 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -250,14 +250,12 @@ static struct reada_zone *reada_find_zone(struct btrfs_fs_info *fs_info,
 					  struct btrfs_bio *bbio)
 {
 	int ret;
-	int looped = 0;
 	struct reada_zone *zone;
 	struct btrfs_block_group_cache *cache = NULL;
 	u64 start;
 	u64 end;
 	int i;
 
-again:
 	zone = NULL;
 	spin_lock(&fs_info->reada_lock);
 	ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone,
@@ -274,9 +272,6 @@ again:
 		spin_unlock(&fs_info->reada_lock);
 	}
 
-	if (looped)
-		return NULL;
-
 	cache = btrfs_lookup_block_group(fs_info, logical);
 	if (!cache)
 		return NULL;
@@ -307,13 +302,14 @@ again:
 	ret = radix_tree_insert(&dev->reada_zones,
 				(unsigned long)(zone->end >> PAGE_CACHE_SHIFT),
 				zone);
-	spin_unlock(&fs_info->reada_lock);
-
-	if (ret) {
+	if (ret == -EEXIST) {
 		kfree(zone);
-		looped = 1;
-		goto again;
+		ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone,
+					     logical >> PAGE_CACHE_SHIFT, 1);
+		if (ret == 1)
+			kref_get(&zone->refcnt);
 	}
+	spin_unlock(&fs_info->reada_lock);
 
 	return zone;
 }
@@ -323,8 +319,8 @@ static struct reada_extent *reada_find_extent(struct btrfs_root *root,
 					  struct btrfs_key *top, int level)
 {
 	int ret;
-	int looped = 0;
 	struct reada_extent *re = NULL;
+	struct reada_extent *re_exist = NULL;
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
 	struct btrfs_bio *bbio = NULL;
@@ -335,14 +331,13 @@ static struct reada_extent *reada_find_extent(struct btrfs_root *root,
 	int i;
 	unsigned long index = logical >> PAGE_CACHE_SHIFT;
 
-again:
 	spin_lock(&fs_info->reada_lock);
 	re = radix_tree_lookup(&fs_info->reada_tree, index);
 	if (re)
 		kref_get(&re->refcnt);
 	spin_unlock(&fs_info->reada_lock);
 
-	if (re || looped)
+	if (re)
 		return re;
 
 	re = kzalloc(sizeof(*re), GFP_NOFS);
@@ -398,12 +393,15 @@ again:
 	/* insert extent in reada_tree + all per-device trees, all or nothing */
 	spin_lock(&fs_info->reada_lock);
 	ret = radix_tree_insert(&fs_info->reada_tree, index, re);
+	if (ret == -EEXIST) {
+		re_exist = radix_tree_lookup(&fs_info->reada_tree, index);
+		BUG_ON(!re_exist);
+		kref_get(&re_exist->refcnt);
+		spin_unlock(&fs_info->reada_lock);
+		goto error;
+	}
 	if (ret) {
 		spin_unlock(&fs_info->reada_lock);
-		if (ret != -ENOMEM) {
-			/* someone inserted the extent in the meantime */
-			looped = 1;
-		}
 		goto error;
 	}
 	for (i = 0; i < nzones; ++i) {
@@ -450,9 +448,7 @@ error:
 	}
 	kfree(bbio);
 	kfree(re);
-	if (looped)
-		goto again;
-	return re_exist;
 }
 
 static void reada_kref_dummy(struct kref *kr)
-- 
1.7.3.4
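[Editor's note] The pattern the patch moves to -- resolve the -EEXIST case and take a reference on the winning entry before the lock is ever dropped -- can be modeled outside the kernel. The sketch below is an illustrative Python analogue only (a dict and a mutex standing in for the radix tree and fs_info->reada_lock; all names are invented), not btrfs code:

```python
import threading

class Entry:
    """Toy stand-in for a refcounted object such as reada_zone."""
    def __init__(self, key):
        self.key = key
        self.refs = 1              # creator holds the first reference

_tree = {}                         # stands in for the radix tree
_lock = threading.Lock()           # stands in for fs_info->reada_lock

def insert_or_get(key):
    # Resolve the "already exists" case and take the reference while
    # still holding the lock, as the patch does. Dropping the lock
    # between the failed insert and the retry (the old looped/goto
    # scheme) opens a window in which the existing entry can be freed
    # by a concurrent thread.
    new = Entry(key)
    with _lock:
        existing = _tree.get(key)  # the -EEXIST path
        if existing is not None:
            existing.refs += 1     # kref_get() before unlocking
            return existing
        _tree[key] = new
        return new
```

Both the lookup and the reference bump happen inside one critical section, which is exactly what the patch restores for the zone and extent trees.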

--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] btrfs: don't add both copies of DUP to reada extent tree

2012-02-25 Thread Arne Jansen
Normally when there are 2 copies of a block, we add both to the
reada extent tree and prefetch only the one that is easier to reach.
This way we can better utilize multiple devices.
In case of DUP this makes no sense as both copies reside on the
same device.

Signed-off-by: Arne Jansen <sensi...@gmx.net>
---
 fs/btrfs/reada.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index 85ce503..8127ae9 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -325,6 +325,7 @@ static struct reada_extent *reada_find_extent(struct btrfs_root *root,
 	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
 	struct btrfs_bio *bbio = NULL;
 	struct btrfs_device *dev;
+	struct btrfs_device *prev_dev;
 	u32 blocksize;
 	u64 length;
 	int nzones = 0;
@@ -404,8 +405,20 @@ static struct reada_extent *reada_find_extent(struct btrfs_root *root,
 		spin_unlock(&fs_info->reada_lock);
 		goto error;
 	}
+	prev_dev = NULL;
 	for (i = 0; i < nzones; ++i) {
 		dev = bbio->stripes[i].dev;
+		if (dev == prev_dev) {
+			/*
+			 * in case of DUP, just add the first zone. As both
+			 * are on the same device, there's nothing to gain
+			 * from adding both.
+			 * Also, it wouldn't work, as the tree is per device
+			 * and adding would fail with EEXIST
+			 */
+			continue;
+		}
+		prev_dev = dev;
 		ret = radix_tree_insert(&dev->reada_extents, index, re);
 		if (ret) {
 			while (--i >= 0) {
-- 
1.7.3.4
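[Editor's note] The logic the patch adds -- collapse consecutive stripes that live on the same device so only the first is inserted into the per-device tree -- amounts to a one-pass filter. An illustrative Python sketch (the stripe dicts are invented stand-ins for btrfs_bio stripes, not a real API):

```python
def devices_to_add(stripes):
    """Keep one stripe per run of identical devices, mirroring the
    patch's prev_dev check: for DUP both copies share a device, so
    inserting the second copy into the per-device radix tree would
    fail with EEXIST and gain nothing anyway."""
    kept = []
    prev_dev = None
    for stripe in stripes:
        if stripe["dev"] == prev_dev:
            continue               # second copy of a DUP: skip it
        prev_dev = stripe["dev"]
        kept.append(stripe)
    return kept
```

For a RAID1 layout the two stripes sit on different devices and both survive the filter; only the DUP case collapses to one.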



Re: 3.2-rc4: scrubbing locks up the kernel, then hung tasks on boot

2012-02-25 Thread Arne Jansen

Hi Martin,

I just sent 2 patches to the list. Could you please test if these
fix your problem with scrub?

Thanks,
Arne


On 02/24/12 16:51, Martin Steigerwald wrote:

Am Samstag, 21. Januar 2012 schrieb Martin Steigerwald:

Am Samstag, 21. Januar 2012 schrieb Martin Steigerwald:

I still have this with 3.2.0-1-pae - which is a debian kernel based
on 3.2.1.

When I do btrfs scrub start / the machine locks immediately up hard.

Then usually on next boot it stops on space_cache enabled message,
but not  the one for /, but the one for /home which is mounted
later.

When I then boot with 3.1 it works. BTRFS redoes the space_cache then
while the machine takes ages to boot - I mean ages - 10 minutes till
KDM prompt is no problem there.


I now tested scrubbing /home which is a different BTRFS filesystem on
the same machine.

Then the scrub is started, scrub status tells me so, but nothing
happens, no block in/out activity in vmstat, no CPU related activity
in top.

btrfs scrub cancel then hangs, but not the complete machine, only the
process.

I had this once on my T520 with the internal Intel SSD 320 as well. The
other time it worked.

Well maybe that is due to BTRFS doing something else on my T23 now:

deepdance:~  ps aux | grep ino-cache | grep -v grep
root      1992  5.5  0.0      0     0 ?        D    12:15   0:09 [btrfs-ino-cache]

Hmmm, so I just let it sit for a while, maybe eventually it will scrub
/home.

At least it doesn´t lock up hard, so there might really be something
strange with /.


FWIW a btrfs filesystem balance / does work. After this a btrfs scrub start
/ still locks the kernel.

Anyway, I might be waiting for the new fsck and try it on the partition.
Or redo the filesystem from scratch, cause I think trying to debug this
will take way more time.

I might also as well redo /home as well. Two fresh 3.2 or 3.3 kernel BTRFS
and see whether they work better.





Re: [PATCH] btrfs: don't add both copies of DUP to reada extent tree

2012-02-25 Thread Duncan
Arne Jansen posted on Sat, 25 Feb 2012 09:09:47 +0100 as excerpted:

 Normally when there are 2 copies of a block, we add both to the reada
 extent tree and prefetch only the one that is easier to reach.
 This way we can better utilize multiple devices.
 In case of DUP this makes no sense as both copies reside on the same
 device.

I'm not a coder and thus can't really read the code to know, but wouldn't 
the same easier-to-reach logic apply there, only to seeking?  One of the 
DUPs should be easier to reach (less seeking) than the other.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman



Re: [PATCH] btrfs: don't add both copies of DUP to reada extent tree

2012-02-25 Thread Arne Jansen

On 02/25/12 09:33, Duncan wrote:

Arne Jansen posted on Sat, 25 Feb 2012 09:09:47 +0100 as excerpted:


Normally when there are 2 copies of a block, we add both to the reada
extent tree and prefetch only the one that is easier to reach.
This way we can better utilize multiple devices.
In case of DUP this makes no sense as both copies reside on the same
device.


I'm not a coder and thus can't really read the code to know, but wouldn't
the same easier-to-reach logic apply there, only to seeking?  One of the
DUPs should be easier to reach (less seeking) than the other.



Well, the commit message is kind of sloppy. The reada code collects
as many blocks near to each other as it can and reads them sequentially. From
time to time it moves on to the next filled zone. That's when a longer
seek happens. It doesn't try to keep these longer seeks as short as
possible, instead it picks a zone on the disk where it has many
blocks to read.
As both parts of a DUP are filled equally, it doesn't matter which one
it picks, so it is sufficient to keep track of only one half of the
mirror. Things change in a multi-disk setup, as the heads can move
independently.

(this explanation is kind of sloppy, too)

-Arne
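[Editor's note] Arne's description of the zone picker -- move to whichever zone has the most blocks queued, rather than the nearest one -- can be modeled in a few lines. A hedged sketch (the zone dicts are invented for illustration; the real code works on reada_zone structs):

```python
def pick_next_zone(zones):
    """Pick the zone with the most queued read-ahead blocks: the reader
    accepts one longer seek in exchange for a long sequential run,
    rather than minimizing the distance of the seek itself."""
    return max(zones, key=lambda z: len(z["pending"]))

# Two hypothetical zones: the farther one wins because it has more work,
# which is why it doesn't matter which half of a DUP mirror is tracked.
zones = [
    {"start": 0,     "pending": [100, 101]},
    {"start": 10**9, "pending": [500, 501, 502, 503]},
]
```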


Re: 3.2-rc4: scrubbing locks up the kernel, then hung tasks on boot

2012-02-25 Thread Martin Steigerwald
Am Samstag, 25. Februar 2012 schrieb Arne Jansen:
 Hi Martin,
 
 I just sent 2 patches to the list. Could you please test if these
 fix your problem with scrub?

I saved them and I'll try to. But no promises as to when. The machine is slow 
at compiling kernels. And there are two annoying Pulseaudio issues I´d 
like to take time to gather more infos about that Pulseaudio developers 
requested. So there is some sort of a backlog.

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: [PATCH] btrfs: don't add both copies of DUP to reada extent tree

2012-02-25 Thread Duncan
Arne Jansen posted on Sat, 25 Feb 2012 14:09:50 +0100 as excerpted:

 On 02/25/12 09:33, Duncan wrote:
 Arne Jansen posted on Sat, 25 Feb 2012 09:09:47 +0100 as excerpted:

 Normally when there are 2 copies of a block, we add both to the reada
 extent tree and prefetch only the one that is easier to reach.
 This way we can better utilize multiple devices.
 In case of DUP this makes no sense as both copies reside on the same
 device.

 I'm not a coder and thus can't really read the code to know, but
 wouldn't the same easier-to-reach logic apply there, only to seeking? 
 One of the DUPs should be easier to reach (less seeking) than the
 other.


 Well, the commit message is kind of sloppy. The reada code collects as
 many blocks near to each other and reads them sequentially. From time to
 time it moves on to the next filled zone. That's when a longer seek
 happens. It doesn't try to keep these longer seeks as short as possible,
 instead it picks a zone on the disk where it has many blocks to read.
 As both parts of a DUP are filled equally, it doesn't matter which one
 it picks, so it is sufficient to keep track of only one half of the
 mirror. Things change in a multi-disk setup, as the heads can move
 independently.
 
 (this explanation is kind of sloppy, too)

Thanks.  Makes sense, now.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman



filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Brian J. Murrell
I have a 5G /usr btrfs filesystem on a 3.0.0-12-generic kernel that is
returning ENOSPC when it's only 75% full:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
  5.0G  2.8G  967M  75% /usr

And yet I can't even unpack a linux-headers package on to it, which
should be nowhere near 967MB.  dpkg says it will need 10MB:

$ sudo apt-get install -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
dpkg: error processing 
/var/cache/apt/archives/linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb 
(--unpack):
 unable to install new version of 
`/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/tuner/dib0070.h': 
No space left on device

And indeed, using dd I am able to create a 967MB file:

$ sudo dd if=/dev/zero of=/usr/bigfile bs=1M count=1000
dd: writing `/usr/bigfile': No space left on device
967+0 records in
966+0 records out
1012924416 bytes (1.0 GB) copied, 16.1545 s, 62.7 MB/s

strace yields this as the cause of the ENOSPC:

8213  rename("/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/tuner/dib0070.h.dpkg-new",
 "/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/tuner/dib0070.h" <unfinished ...>
...
8213  <... rename resumed> ) = -1 ENOSPC (No space left on device)

So this starts to feel like some kind of inode count limitation.  But I
didn't think btrfs had inode count limitations.  Here's the df stats on
the filesystem:

$ btrfs filesystem df /usr
Data: total=3.22GB, used=3.22GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=896.00MB, used=251.62MB
Metadata: total=8.00MB, used=0.00

I don't know if that's useful or not.

Any ideas?

Cheers
b.





Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Fahrzin Hemmati
btrfs is horrible for small filesystems (like a 5GB drive). df -h says 
you have 967MB available, but btrfs (at least by default) allocates 1GB 
at a time to data/metadata. This means that your 10MB file is too big 
for the current allocation and requires a new data chunk, or another 
1GB, which you don't have.


Others might know of a way of changing the allocation size to less than 
1GB, but otherwise I recommend switching to something more stable like 
ext4/reiserfs/etc.
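[Editor's note] Back-of-the-envelope arithmetic with the numbers reported in this thread shows why a new 1 GiB data chunk cannot be carved out even though df reports 967 MB available. This is a sketch under assumptions: the chunk sizes are the defaults of that era, and DUP block groups are counted twice on disk.

```python
# All sizes in MiB, taken from the df / "btrfs filesystem df" output above.
fs_size     = 5 * 1024            # 5 GiB filesystem
data        = int(3.22 * 1024)    # data chunks, reported completely full
metadata    = 2 * 896 + 8         # DUP metadata occupies 2x on disk, plus one single 8M chunk
system      = 2 * 8 + 4           # DUP system chunks plus one single 4M chunk
allocated   = data + metadata + system
unallocated = fs_size - allocated
data_chunk  = 1024                # default data chunk size: 1 GiB

print(unallocated)                # only a handful of MiB remain unallocated
print(unallocated < data_chunk)   # True -> the next data allocation fails with ENOSPC
```

df's 967 MB "available" is mostly slack inside already-allocated metadata chunks, which cannot hold file data; the unallocated pool the next data chunk must come from is far smaller.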


On 2/25/2012 5:55 PM, Brian J. Murrell wrote:

I have a 5G /usr btrfs filesystem on a 3.0.0-12-generic kernel that is
returning ENOSPC when it's only 75% full:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
                      5.0G  2.8G  967M  75% /usr
[...]
So this starts to feel like some kind of inode count limitation.  But I
didn't think btrfs had inode count limitations.





Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Fahrzin Hemmati

On 2/25/2012 6:16 PM, Brian J. Murrell wrote:

Others might know of a way of changing the allocation size to less than
1GB, but otherwise I recommend switching to something more stable like
ext4/reiserfs/etc.

So btrfs is still not yet suitable to be a root/usr/var filesystem, even
in kernel 3.0.0?

b.

Nope, still in heavy development, though you should upgrade to 3.2. 
Also, the devs mentioned in several places it's not friendly to small 
drives, and I'm pretty sure 5GB is considered tiny.


I don't think you need to separate /usr out to its own disk. You could 
instead create a single drive with multiple subvolumes for /, /var, 
/usr, etc. When you have Ubuntu use btrfs for /, it creates @ and @home 
for / and /home, respectively, so it's a common phenomenon if you look 
for help.


--Farz


Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Brian J. Murrell
On 12-02-25 09:37 PM, Fahrzin Hemmati wrote:

 Nope, still in heavy development, though you should upgrade to 3.2.

I recall being told I should upgrade to 2.6.36 (or was it .37 or .38) at
one time.  Seems like one should always upgrade.  :-/

 Also, the devs mentioned in several places it's not friendly to small
 drives, and I'm pretty sure 5GB is considered tiny.

But it won't ever get taken seriously if it can't be used on regular
filesystems.  I shouldn't have to allocate an 80G filesystem for 3G of
data just so that the filesystem isn't tiny.

 I don't think you need to separate /usr out to its own disk. You could
 instead create a single drive with multiple subvolumes for /, /var,
 /usr, etc.

The point is to separate filesystems which can easily fill with
application data growth from filesystems that can have more fatal
effects by being filled.

That said, I don't think having /var as a subvolume in the same pool as
/ and /usr achieves that usage isolation, does it?  Isn't /var still
allowed to consume all of the space that it, / and /usr share with them
all being subvolumes in the same pool?

 When you have Ubuntu use btrfs for /, it creates @ and @home
 for / and /home, respectively,

Yes, I had noticed that.  I also didn't immediately see anything that
prevents /home from filling / as I describe above.

Cheers,
b.





Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Brian J. Murrell
On 12-02-25 09:10 PM, Fahrzin Hemmati wrote:
 btrfs is horrible for small filesystems (like a 5GB drive). df -h says
 you have 967MB available, but btrfs (at least by default) allocates 1GB
 at a time to data/metadata. This means that your 10MB file is too big
 for the current allocation and requires a new data chunk, or another
 1GB, which you don't have.

So increasing the size of the filesystem should suffice then?  How much
bigger?  10G?  Nope.  still not big enough:

# lvextend -L+1G /dev/rootvol/mint_usr; btrfs fi resize max /usr; df -h /usr
  Extending logical volume mint_usr to 10.00 GiB
  Logical volume mint_usr successfully resized
Resize '/usr' of 'max'
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
   10G  2.8G  6.0G  32% /usr
test ~ # apt-get install -y -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
dpkg: error processing 
/var/cache/apt/archives/linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb 
(--unpack):
 unable to install new version of 
`/usr/src/linux-headers-3.0.0-16-generic/include/config/dvb/usb.h': No space 
left on device

20G maybe?  Nope:

# lvextend -L20G /dev/rootvol/mint_usr; btrfs fi resize max /usr; df -h /usr
  Extending logical volume mint_usr to 20.00 GiB
  Logical volume mint_usr successfully resized
Resize '/usr' of 'max'
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
   20G  2.8G   16G  15% /usr
test ~ # apt-get install -y -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
dpkg: error processing 
/var/cache/apt/archives/linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb 
(--unpack):
 unable to install new version of 
`/usr/src/linux-headers-3.0.0-16-generic/include/config/ncpfs/packet/signing.h':
 No space left on device

Maybe 50G?  Yup:

# apt-get install -y -f
Reading package lists... Done
Building dependency tree   
Reading state information... Done
Correcting dependencies... Done
The following extra packages will be installed:
  linux-headers-3.0.0-16-generic
The following NEW packages will be installed:
  linux-headers-3.0.0-16-generic
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
264 not fully installed or removed.
Need to get 0 B/851 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
(Reading database ... 180246 files and directories currently installed.)
Unpacking linux-headers-3.0.0-16-generic (from 
.../linux-headers-3.0.0-16-generic_3.0.0-16.28_i386.deb) ...
Setting up linux-image-3.0.0-16-generic (3.0.0-16.28) ...
...
# df -h /usr
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr
   50G  2.8G   43G   7% /usr

So I guess I need a 50G btrfs filesystem for 2.8G worth of data?

Does that really seem right?  I suppose to be fair it could have been
some other value between 20G and 50G since I didn't test values in
between.  So still, I need some amount more than 20G of space to store
2.8G of data?

Surely there is something going on here other than just btrfs sucks for
small filesystems.

b.








Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Fahrzin Hemmati

On 2/25/2012 9:45 PM, Brian J. Murrell wrote:

On 12-02-25 09:10 PM, Fahrzin Hemmati wrote:

btrfs is horrible for small filesystems (like a 5GB drive). [...]

So increasing the size of the filesystem should suffice then?  How much
bigger?  10G?  Nope.  still not big enough:
[...]
So I guess I need a 50G btrfs filesystem for 2.8G worth of data?

Surely there is something going on here other than just btrfs sucks for
small filesystems.

b.

You should have been fine with adding 1GB (really only 57MB), or at 
worst 2GB in case you were on the edge of both data and metadata.


A btrfs dev might be able to debug the problem there, since your 
original problem seemed only that you couldn't allocate a new chunk. It 
might be a problem with btrfs filesystem resize?



Re: filesystem full when it's not? out of inodes? huh?

2012-02-25 Thread Jérôme Poulin
On Sun, Feb 26, 2012 at 1:14 AM, Brian J. Murrell br...@interlinx.bc.ca wrote:

 # btrfs fi resize 5G /usr; df -h /usr
 Resize '/usr' of '5G'


What would be interesting is getting an eye on btrfs fi df of your
filesystem to see what part is getting full, or maybe just do a
balance.

I have been running 3.0.0 for quite a while without any problem,
metadata grew a bit too much (1.5 TB for 2 TB of data) and balance
fixed it back to 50 GB of metadata then 20 GB after deleting some
snapshots.