On Sat, May 25, 2013 at 05:54:44PM +1000, Russell Coker wrote:
> On Sat, 25 May 2013, Craig Sanders <[email protected]> wrote:
> > note that if you don't have any other particular reasons to use
> > btrfs rather than zfs, then zfs is a better choice for this job.
>
> I noted in the first paragraph that 4G of RAM for ZFS alone seems
> inadequate.

yeah, well, that counts as a particular reason for using
btrfs...although tuning the zfs_arc_min and zfs_arc_max module options
is worthwhile on a low-RAM zfs server.

e.g. i have the following /etc/modprobe.d/zfs.conf file on my 16GB
system...it's a desktop workstation as well as a ZFS fileserver, so I
need to limit how much RAM zfs takes.

    # use a minimum of 1GB and a maximum of 4GB of RAM for the ZFS ARC
    options zfs zfs_arc_min=1073741824 zfs_arc_max=4294967296
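(those values are plain byte counts - the module options take bytes, so
the arithmetic is just GiB * 1024^3. quick sanity check:)

```shell
# the zfs_arc_min/zfs_arc_max values above are GiB expressed in bytes
echo $((1 * 1024 * 1024 * 1024))   # 1 GiB -> zfs_arc_min, prints 1073741824
echo $((4 * 1024 * 1024 * 1024))   # 4 GiB -> zfs_arc_max, prints 4294967296
```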



does btrfs use significantly less RAM than zfs? i suppose it would, as
it uses the normal linux page cache whereas ZFS has its own separate ARC.

> [...problems with only 4GB RAM...] so I upgraded it to 12G of RAM
> (8G of RAM was a lot cheaper for the client than paying me to figure
> out the ZFS problem).

yep, adding RAM is a cheap and easy fix.

> I've got a Xen server that uses ZVols for the DomU block devices.     
> I've been wondering if it really gives a benefit.                     

in my experience, qcow2 files are slow, and especially slow over NFS.

if shared storage for live migration isn't important, it would be
worthwhile doing some benchmarking of zvol vs qcow on zfs vs qcow on
btrfs.
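fwiw, a crude starting point for that is just a timed sequential write
with dd (fio would give much better numbers, but dd is everywhere).
TESTFILE below is a placeholder - point it at a file on each backing
store in turn (a filesystem on a zvol, a qcow2 image dir on zfs, the
same on btrfs) and compare the MB/s figures dd reports:

```shell
#!/bin/sh
# crude sequential-write test: write 64MB with an fsync at the end so
# the result isn't just page-cache speed.  TESTFILE is a placeholder --
# set it to a path on the storage you want to measure.
TESTFILE=${TESTFILE:-./ddtest.tmp}
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

sequential writes only tell you so much, though - for VM workloads
you'd also want random read/write numbers, which is where fio earns
its keep.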

> > a zvol can also be exported via iscsi, so a VM on a compute node
> > could use a zvol exported from a zfs file-server. could even use 2
> > or more zvols from different servers and raid them with mdadm (i
> > haven't tried this myself but there's no reason why it shouldn't
> > work - synchronised snapshotting may be problematic, you'd probably
> > want to pause the VM briefly so you can snapshot the zvols on the
> > file servers).
>
> Why would pausing the VM be necessary?

it's not, as a general rule.

i was speculating that with an mdadm raid array of iscsi zvols, the
snapshots taken on the different servers could be inconsistent with
each other - it would be almost impossible to guarantee that they all
run at exactly the same instant.

whether that's actually important or not, I don't know - but it doesn't
sound like a desirable thing to happen.

if the VM is paused briefly, that would prevent the VM from writing to
the raid array while it was being snapshotted.

e.g. 'virsh suspend <domain>', snapshot on the zfs servers, followed
by 'virsh resume <domain>' - similar to what happens when you 'virsh
migrate' a VM.

zfs snapshots are fast, so the VM would only pause for a matter of
seconds or perhaps even less than a second.
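something like this, i imagine (untested sketch - the domain, host and
dataset names are all made up, and it only echoes the commands unless
you clear DRYRUN):

```shell
#!/bin/sh
# pause the guest, snapshot its zvols on both file servers using the
# same snapshot name, then resume it.  guest1, zfs-a, zfs-b and the
# tank/... datasets are hypothetical examples.
DRYRUN=${DRYRUN:-echo}   # clear this to run the commands for real
SNAP="guest1-$(date +%Y%m%d-%H%M%S)"   # one name reused on every server

$DRYRUN virsh suspend guest1
$DRYRUN ssh zfs-a zfs snapshot tank/guest1-disk0@"$SNAP"
$DRYRUN ssh zfs-b zfs snapshot tank/guest1-disk1@"$SNAP"
$DRYRUN virsh resume guest1
```

using one shared snapshot name also makes it easy to pair up the
matching snapshots later if you ever need to roll both halves of the
array back together.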


i really ought to set up iscsi on my home zfs servers and experiment
with this...i'll put it on my TODO list.

craig

-- 
craig sanders <[email protected]>

BOFH excuse #434:

Please state the nature of the technical emergency
_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
