On Thu, May 23, 2013 at 11:01:47AM +0000, James Harper wrote:
> > http://etbe.coker.com.au/2012/12/17/using-btrfs/
> 
> You say "One is to use a single BTRFS filesystem with RAID-1 for
> all the storage and then have each VM use a file on that big BTRFS
> filesystem for all it's storage" 

note that unless you have some other particular reason to use btrfs
rather than zfs, zfs is the better choice for this job.

zfs allows you to create disk volumes (called "zvols") as well as
filesystems from the pool. a zvol is similar to a disk partition or
an LVM logical volume, but with better performance and all the other
benefits of being part of a zpool (snapshots, clones, compression,
etc).

http://zfsonlinux.org/example-zvol.html

http://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/
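for example, creating a zvol and handing it to a VM looks roughly like
this (the pool name "tank" and volume name "vm1-disk" are just
placeholders):

```shell
# create a sparse 20G zvol in the pool "tank" (names are hypothetical)
zfs create -s -V 20G tank/vm1-disk

# it shows up as an ordinary block device on the host
ls -l /dev/zvol/tank/vm1-disk

# hand the whole device to a kvm guest as its virtual disk
kvm -m 1024 -drive file=/dev/zvol/tank/vm1-disk,if=virtio

# and snapshot it like any other zfs dataset
zfs snapshot tank/vm1-disk@before-upgrade
```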

a zvol can also be exported via iscsi, so a VM on a compute node could
use a zvol exported from a zfs file-server. you could even use two or
more zvols from different servers and raid them with mdadm (i haven't
tried this myself but there's no reason why it shouldn't work -
synchronised snapshotting may be problematic, though; you'd probably
want to pause the VM briefly so you can snapshot the zvols on all the
file servers at the same moment).
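a rough sketch of that iscsi + mdadm idea (completely untested; target
setup varies a lot - this assumes the LIO targetcli tool on the
file-servers, skips LUN/ACL details, and the /dev/sdb and /dev/sdc
names on the compute node are hypothetical):

```shell
# on each file-server: export the zvol as an iscsi block backstore
targetcli /backstores/block create name=vm1 dev=/dev/zvol/tank/vm1-disk
targetcli /iscsi create iqn.2013-05.au.example:vm1

# on the compute node: discover and log in to both targets...
iscsiadm -m discovery -t st -p fileserver1
iscsiadm -m discovery -t st -p fileserver2
iscsiadm -m node --login

# ...then mirror the two imported disks with mdadm
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
```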


btrfs doesn't yet have a zvol-like feature (i have no idea when or even
if one is planned), so the only option there for a KVM or Xen VM is to
use a large file on the filesystem as a raw, qcow2, or similar disk
image.
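if you do go the image-file-on-btrfs route, it's worth disabling
copy-on-write for the image directory, otherwise the disk images
fragment badly under VM write loads. a minimal sketch (the path is
just an example):

```shell
# +C (nodatacow) only takes effect for files created after it's set,
# so set it on the (empty) directory before creating any images
mkdir -p /var/lib/vmimages
chattr +C /var/lib/vmimages

# then create the disk image for the guest
qemu-img create -f qcow2 /var/lib/vmimages/vm1.qcow2 20G
```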

the btrfs wiki mentions "btrvols" on the Project Ideas page but it looks
like no-one's even working on it yet.

https://btrfs.wiki.kernel.org/index.php/Project_ideas#block_devices_.27btrvols.27


and, of course, with container-style VMs (e.g. openvz or lxc), you
could just use a btrfs subvolume or a zfs filesystem as the
container's root.
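e.g. giving each container its own subvolume/filesystem, which makes
per-container snapshots trivial (paths and dataset names are
hypothetical):

```shell
# btrfs: one subvolume per container, snapshottable in place
btrfs subvolume create /containers/web1
btrfs subvolume snapshot /containers/web1 /containers/web1-backup

# zfs: one filesystem per container, same idea
zfs create -p -o mountpoint=/containers/web2 tank/containers/web2
zfs snapshot tank/containers/web2@backup
```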

craig

-- 
craig sanders <[email protected]>
_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
