On Sat, Jan 22, 2011 at 5:45 AM, Hugo Mills <hugo-l...@carfax.org.uk> wrote:
> On Fri, Jan 21, 2011 at 11:28:19AM -0800, Freddie Cash wrote:
>> So, is Btrfs pooled storage or not?  Do you throw 24 disks into a
>> single Btrfs filesystem, and then split that up into separate
>> sub-volumes as needed?
>
>   Yes, except that the subvolumes aren't quite as separate as you
> seem to think that they are. There's no preallocation of storage to a
> subvolume (in the way that LVM works), so you're only limited by the
> amount of free space in the whole pool. Also, data stored in the pool
> is actually free for use by any subvolume, and can be shared (see the
> deeper explanation below).
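That model can be sketched with a couple of commands (device and mount-point names here are hypothetical, and everything needs root):

```shell
# One btrfs filesystem spans all the disks -- no per-subvolume sizing.
mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/pool

# Subvolumes look like directories; each one can use any of the
# pool's free space, with no preallocation.
btrfs subvolume create /mnt/pool/home
btrfs subvolume create /mnt/pool/var
```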

Ah, perfect, that I understand.  :)  It's the same with ZFS:  you add
storage to a pool, filesystems in the pool are free to use as much
space as is available, and you don't have to pre-allocate or partition
or anything like that.  ZFS does support quotas and reservations,
though, so you can (if you want/need) allocate bytes to specific
filesystems.
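For example (pool and dataset names are hypothetical; needs root and the ZFS tools):

```shell
# Cap one filesystem at 10G of the pool...
zfs set quota=10G tank/home
# ...and guarantee another filesystem at least 5G.
zfs set reservation=5G tank/var
```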

>>  From the looks of things, you don't have to
>> partition disks or worry about sizes before formatting (if the space
>> is available, Btrfs will use it).  But it also looks like you still
>> have to manage disks.
>>
>> Or, maybe it's just that the initial creation is done via mkfs (as in,
>> formatting a partition with a filesystem) that's tripping me up after
>> using ZFS for so long (zpool creates the storage pool, manages the
>> disks, sets up redundancy levels, etc;  zfs creates filesystems and
>> volumes, and sets properties; no newfs/mkfs involved).
>
>   So potentially zpool -> mkfs.btrfs, and zfs -> btrfs. However, I
> don't know enough about ZFS internals to know whether this is a
> reasonable analogy to make or not.

That's what I figured.  It's not a perfect analogue, but it's close
enough.  Clears things up a bit.
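The rough command-level analogy might look like this (device names hypothetical, commands need root; the ZFS side uses raidz since btrfs parity RAID isn't merged yet):

```shell
# ZFS: pool creation and filesystem creation are separate tools.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
zfs create tank/home

# Btrfs: mkfs builds the "pool" as one filesystem, and the
# btrfs tool manages subvolumes inside it.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool
btrfs subvolume create /mnt/pool/home
```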

The big difference is that ZFS separates storage management (the pool)
from filesystem management, while btrfs "creates a pool" underneath a
single filesystem and lets you split it up via sub-volumes.

I think I'm figuring this out.  :)

>   Note that the actual file data, and the management of its location
> on the disk (and its replication), is completely shared across
> subvolumes. The same extent may be used multiple times by different
> files, and those files may be in any subvolumes on the filesystem. In
> theory, the same extent could even appear several times in the same
> file. This sharing is how snapshots and COW copies are implemented.
> It's also the basis for Josef's dedup implementation.

That's similar to how ZFS works, except that ZFS shares "blocks"
rather than "extents"; the sharing works in much the same way.
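That extent sharing is visible from the command line (paths hypothetical; needs root on a mounted btrfs filesystem):

```shell
# A snapshot shares every extent with its source subvolume;
# only blocks written afterwards get new space (COW).
btrfs subvolume snapshot /mnt/pool/home /mnt/pool/home-snap

# A reflink copy shares extents file-by-file, within or across
# subvolumes of the same filesystem.
cp --reflink=always /mnt/pool/home/big.img /mnt/pool/home/big-copy.img
```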

I think I've got this mostly figured out.

Now, to just wait for multiple parity redundancy (RAID5/6/+) support
to hit the tree, so I can start playing around with it.  :)

Thanks for taking the time to explain some things.  Sorry if I came
across as being harsh or whatnot.

-- 
Freddie Cash
fjwc...@gmail.com