On 12-02-26 06:00 AM, Hugo Mills wrote:
> The option that nobody's mentioned yet is to use mixed mode. This
> is the -M or --mixed option when you create the filesystem. It's
> designed specifically for small filesystems, and removes the
> data/metadata split for more efficient packing.
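For anyone who wants to try mixed mode, a minimal sketch on a loopback image (the image path and size are just examples, and the mkfs call is guarded in case btrfs-progs isn't installed):

```shell
# Create a small backing file and format it in mixed mode.
# --mixed (or -M) stores data and metadata in shared chunks, avoiding
# the usual separate-chunk split that wastes space on tiny filesystems.
rm -f /tmp/btrfs-mixed.img
truncate -s 512M /tmp/btrfs-mixed.img

if command -v mkfs.btrfs >/dev/null 2>&1; then
    mkfs.btrfs --mixed /tmp/btrfs-mixed.img
fi
```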
Cool.
I seem to have the following subvolumes of my filesystem:
# btrfs sub li /
ID 256 top level 5 path @
ID 257 top level 5 path @home
ID 258 top level 5 path @/etc/apt/oneiric
I *think* the last one is there due to a:
# btrfsctl -s oneiric /
that I did prior to doing an upgrade. I can't seem to
On 12-03-02 08:36 AM, cwillu wrote:
> Try btrfs sub delete /etc/apt/oneiric, assuming that that's the path
> where you actually see it.
Well, there is a root filesystem at /etc/apt/oneiric:
# ls /etc/apt/oneiric/
bin etc initrd.img.old mnt root selinux tmp vmlinuz
boot home
On 12-02-26 02:37 PM, Daniel Lee wrote:
> 3.22GB + (896MB * 2) = 5GB
> There's no mystery here, you're simply out of space.
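The arithmetic being quoted is data plus metadata counted twice, since btrfs defaults to DUP metadata on a single device (every metadata chunk is written to two locations). Checking the quoted figures:

```python
# btrfs space accounting on a single device: metadata defaults to DUP,
# so each allocated metadata chunk occupies twice its size on disk.
data_gb = 3.22          # allocated data (the figure quoted above)
metadata_mb = 896       # allocated metadata, before duplication

total_gb = data_gb + (metadata_mb * 2) / 1024
print(round(total_gb, 2))  # comes out just under the 5GB filesystem size
```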
Except the mystery that I had to expand the filesystem to something
between 20GB and 50GB in order to complete the operation, after which I
could reduce it back down to 5GB.
On 12-02-26 02:19 AM, Jérôme Poulin wrote:
> What would be interesting is getting an eye on btrfs fi df of your
> filesystem to see what part is getting full, or maybe just do a
> balance.
I did try a balance. As I had mentioned subsequently, I ended up having
to grow the filesystem to 10x
On 12-02-26 02:52 PM, Daniel Lee wrote:
> What's mysterious about that?
What's mysterious about needing to grow the filesystem to over 20GB to
unpack 10MB of (small, so yes, many) files?
> When you shrink it btrfs is going to throw
> away unused data to cram it all in the requested space and you
I have a 5G /usr btrfs filesystem on a 3.0.0-12-generic kernel that is
returning ENOSPC when it's only 75% full:
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/rootvol-mint_usr  5.0G  2.8G  967M  75% /usr
And yet I can't even unpack a linux-headers
On 12-02-25 09:37 PM, Fahrzin Hemmati wrote:
> Nope, still in heavy development, though you should upgrade to 3.2.
I recall being told I should upgrade to 2.6.36 (or was it .37 or .38) at
one time. Seems like one should always upgrade. :-/
> Also, the devs mentioned in several places it's not
On 12-02-25 09:10 PM, Fahrzin Hemmati wrote:
> btrfs is horrible for small filesystems (like a 5GB drive). df -h says
> you have 967MB available, but btrfs (at least by default) allocates 1GB
> at a time to data/metadata. This means that your 10MB file is too big
> for the current allocation and
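A rough model of that failure mode (the chunk sizes below are typical defaults used for illustration, not values read from any real filesystem):

```python
# Toy model of btrfs chunk allocation on a small single-device filesystem.
# Raw space is handed out in large chunks (typically 1GB for data); once
# the device is fully chunked, a write needing the *other* chunk type can
# fail with ENOSPC even though df still shows free space inside the
# existing chunks.

DEVICE_GB = 5
DATA_CHUNK_GB = 1
METADATA_CHUNK_GB = 0.25   # and DUP writes each metadata chunk twice

# Suppose 4 data chunks and 2 metadata chunks (x2 for DUP) are allocated:
allocated_gb = 4 * DATA_CHUNK_GB + 2 * 2 * METADATA_CHUNK_GB
unallocated_gb = DEVICE_GB - allocated_gb
print(unallocated_gb)       # 0.0 -> no room left to allocate a new chunk

# df can still report free space, because the data chunks are not full:
data_used_gb = 3.2
df_avail_gb = 4 * DATA_CHUNK_GB - data_used_gb
print(df_avail_gb)          # roughly 0.8GB "available", yet allocation fails
```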
On 11-03-06 11:06 AM, Calvin Walton wrote:
> To see exactly what's going on, you should use the btrfs filesystem df
> command to see how space is being allocated for data and metadata
> separately:
OK. So with an empty filesystem, before my first copy (i.e. the base on
which the next copy will
On 11-03-23 11:53 AM, Chester wrote:
> I'm not a developer, but I think it goes something like this:
> btrfs doesn't write the filesystem on the entire device/partition at
> format time, rather, it dynamically increases the size of the
> filesystem as data is used. That's why formatting a disk in btrfs
I notice when I issue a btrfs fi df the result is in units of GB (for a
large filesystem -- maybe it's smaller for smaller filesystems). Is
there any way to force the units? I'd like to see the granularity of
KBs if possible.
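For what it's worth, later btrfs-progs releases grew unit flags for exactly this; they were not available in the era discussed here, so treat the flags as version-dependent:

```shell
# Newer btrfs-progs accept fixed-unit flags instead of auto-scaling:
#   btrfs filesystem df -b /    # bytes
#   btrfs filesystem df -k /    # KiB
#   btrfs filesystem df -m /    # MiB
# (--si / --iec select the multiplier convention.)
# Guarded so this sketch runs even where btrfs-progs is absent:
if command -v btrfs >/dev/null 2>&1; then
    btrfs filesystem df -k /
else
    echo "btrfs-progs not installed; plain df fallback:"
    df -k /
fi
```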
Cheers,
b.
I have a backup volume on an ext4 filesystem that is using rsync and
its --link-dest option to create hard-linked incremental backups. I
am sure everyone here is familiar with the technique but in case anyone
isn't basically it's effectively doing (each backup):
# cp -al
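The technique is easy to demonstrate with plain coreutils (the paths below are throwaway examples; rsync --link-dest does the same thing per-file, hard-linking unchanged files against the previous backup):

```shell
# Build a tiny "previous backup", then hard-link-copy it the way
# cp -al (or rsync --link-dest) would for an incremental run.
rm -rf /tmp/bk
mkdir -p /tmp/bk/prev
echo "unchanged contents" > /tmp/bk/prev/file.txt

cp -al /tmp/bk/prev /tmp/bk/next   # every file is a hard link, not a copy

# Both directory entries now point at the same inode, so the file's
# link count is 2 and the "copy" consumed almost no space.
stat -c %h /tmp/bk/next/file.txt   # prints 2
```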
On 11-03-06 11:06 AM, Calvin Walton wrote:
> There actually is such a periodic jump in overhead,
Ahh. So my instincts were correct.
> caused by the way in which btrfs dynamically allocates space for
> metadata as needed by the creation of new files, which it does
> whenever the free metadata space
On 11-03-06 11:17 AM, Calvin Walton wrote:
> To add a bit to this: if you *do not* use the --inplace option on rsync,
> rsync will rewrite the entire file, instead of updating the existing
> file!
Of course. As I mentioned to Fajar previously, I am indeed using
--inplace when copying from the
On 11-03-06 11:02 AM, Fajar A. Nugraha wrote:
> If you have snapshots anyway, why not:
> - create a snapshot before each backup run
> - use the same directory (e.g. just /backup), no need to cp anything
> - add --inplace to rsync
Which is exactly what I am doing. There is no cp involved in making
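Sketched out, that workflow looks roughly like this (all paths are illustrative, and the btrfs and rsync calls are guarded so the sketch is harmless where those tools are absent):

```shell
BACKUP=/backup            # illustrative: a btrfs subvolume
SOURCE=/home/             # illustrative source tree

# 1. Freeze the previous backup state as a point-in-time snapshot.
if command -v btrfs >/dev/null 2>&1; then
    btrfs subvolume snapshot "$BACKUP" "$BACKUP/snap-$(date +%F)"
fi

# 2. Update the live backup in place; --inplace rewrites only changed
#    blocks, so unchanged extents stay shared with the snapshots.
if command -v rsync >/dev/null 2>&1; then
    rsync -a --inplace --delete "$SOURCE" "$BACKUP/current/"
fi
```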
For some time after I issue a snapshot delete, the space in the volume
is freed. It starts to free quite fast and then the progress slows and
speeds up again.
Given that the return from the snapshot delete command is immediate and
the space is freed asynchronously, how can I determine absolutely
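On the tools of this era you could only poll; later btrfs-progs added a blocking subcommand for exactly this question, so treat its availability as version-dependent:

```shell
# Newer btrfs-progs: block until all deleted subvolumes are fully
# cleaned up and their space actually returned:
#   btrfs subvolume sync /mnt
# Older tools: poll the list of not-yet-cleaned deleted subvolumes:
#   btrfs subvolume list -d /mnt
# Guarded so the sketch is a no-op where btrfs-progs is absent:
if command -v btrfs >/dev/null 2>&1; then
    btrfs subvolume sync / || true
    status="synced"
else
    status="btrfs-progs absent"
fi
```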
On Wed, 01 Apr 2009 21:13:19 +1100, Dmitri Nikulin wrote:
> I assume you mean read bandwidth, since write bandwidth cannot be
> increased by mirroring, only striping.
No, I mean write bandwidth. You can get increased write bandwidth with