> 
> BTRFS is much better for changing things.  It supports things like a "RAID-1"
> array where the disks have different sizes (as long as no single disk is
> larger than all the others combined there will be no wasted space).  Also it
> allows removing disks while online.

Is there somewhere that describes how it allocates data across the drives? I had 
an array of 4 bcache volumes - 2 x 500GB + 2 x 2TB - and the 2TB drives were 
filling up while the 500GB drives stayed empty. Then I had problems with bcache, 
so I progressively removed each drive and re-added it without bcache, with a 2TB 
drive as the last one, so now it looks like this:

size 448.76GiB used 252.00GiB path /dev/sda3
size 448.76GiB used 251.03GiB path /dev/sdb3
size 1.80TiB used 1.19TiB path /dev/sdd3
size 1.80TiB used 713.00GiB path /dev/sdc3

And I assume it's ultimately aiming for some optimal ratio of disk use, but I'm 
curious about what that ratio is...
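As far as I can tell (my understanding, not something I've verified against the docs), btrfs allocates each new chunk on whichever devices currently have the most unallocated space - for RAID-1, the two devices with the most free space - which would explain the 2TB drives filling while the 500GB ones sat idle. A toy simulation of that greedy rule, assuming 1GB chunks:

```python
# Toy model of btrfs RAID-1 chunk allocation, assuming each new chunk
# pair goes to the two devices with the most unallocated space.
def allocate(sizes, chunk=1):
    used = [0] * len(sizes)
    while True:
        # Devices ordered by unallocated space, largest first.
        order = sorted(range(len(sizes)),
                       key=lambda i: sizes[i] - used[i], reverse=True)
        a, b = order[0], order[1]
        if sizes[a] - used[a] < chunk or sizes[b] - used[b] < chunk:
            break  # can't place both copies of the next chunk
        used[a] += chunk
        used[b] += chunk
    return used

# 2 x 500GB + 2 x 2TB (in GB); mirrored data stored = sum(used) / 2
print(allocate([500, 500, 2000, 2000]))  # -> [500, 500, 2000, 2000]
```

Under that rule the usable RAID-1 capacity works out to min(total/2, total - largest device), so the quoted "no wasted space unless one disk is bigger than the rest combined" claim falls out of it.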

> 
> On Sat, 6 Dec 2014, Chris Samuel <[email protected]> wrote:
> > > Can't comment on BTRFS.
> >
> > The RAID5/6 code is very experimental and I wouldn't suggest trying to use
> > that functionality for any data you're attached to.  Stick to 2 drives,
> > and  good backups, for btrfs.
> 
> I have servers running BTRFS RAID-1, but so far I haven't even tested BTRFS
> RAID-5/6.  Right at this moment they are merging patches that should make it
> theoretically usable, but I'd rather have someone else test it first.
> 

I'm kind of surprised that RAID[56] is getting more attention than any SSD 
caching project, which I see as a necessity before I'd contemplate using 
striped RAID levels. bcache solves the problem to some extent, but BTRFS really 
needs to fully control the cache...

Do you know if BTRFS RAID[56] is "proper" RAID, e.g. striped across all the 
disks, with small writes requiring read-modify-write, etc.? Or is it some fancy 
RAID[56]-like implementation that avoids some of these shortcomings?
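For conventional RAID-5 at least, the read-modify-write penalty is pure XOR arithmetic: updating one block means reading the old data and old parity, then writing new data plus new_parity = old_parity ^ old_data ^ new_data - two reads and two writes for what was logically one write. A sketch of that update (not btrfs code, just the textbook scheme):

```python
# Read-modify-write for a single-block update in conventional RAID-5:
# the new parity is old_parity XOR old_data XOR new_data.
def rmw_update(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    assert len(old_data) == len(new_data) == len(old_parity)
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))

# Sanity check: after the update, parity still equals the XOR of all
# data blocks in the stripe.
d = [b'\x0f\x0f', b'\xf0\xf0', b'\x55\x55']   # three data blocks
parity = bytes(a ^ b ^ c for a, b, c in zip(*d))
new_block = b'\xaa\xaa'                        # overwrite block d[1]
parity2 = rmw_update(d[1], new_block, parity)
full = bytes(a ^ b ^ c for a, b, c in zip(d[0], new_block, d[2]))
print(parity2 == full)  # -> True
```

A full-stripe write avoids the two extra I/Os, which is why striped RAID levels like sequential workloads and hate small random writes.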

I'm using btrfs on all my new home machines now. cp --reflink is really really 
awesome!

James

_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
