> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Karl Wagner
> I am just wondering why you export the ZFS system through NFS?
> I have had much better results (albeit spending more time setting up) using
> iSCSI. I found that performance was much better,
A couple of years ago, I tested and benchmarked both configurations on the same
system. I found that performance was equal both ways (which surprised me,
because I expected NFS to be slower due to filesystem overhead). I cannot say
whether CPU utilization differed, but the IO measurements were the same.
Based on those findings, I opted to use NFS, for several admittedly weak
reasons. If I wanted to, I could export NFS to a wider range of systems. I know
nearly everything nowadays can act as an iSCSI initiator, but it's not as easy
to set up as an NFS client. And if you want to expand the guest disk over
iSCSI... I'm not completely sure you *can* expand a zvol, but even if you can,
you at least have to shut everything down, expand it, bring it all back up, and
then have the iSCSI initiator expand to occupy the new space. With NFS, the
client can simply expand, no hassle.
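(For the record, a zvol can in fact be grown by raising its volsize property. A minimal sketch, with a hypothetical pool/volume name; whether the iSCSI target and guest notice the new size without a rescan depends on the target software and initiator:)

```shell
# Grow an existing zvol to 100G (tank/vm-disk0 is a hypothetical name).
# volsize can be increased online; shrinking risks data loss.
zfs set volsize=100G tank/vm-disk0

# Verify the new size and its reservation.
zfs get volsize,refreservation tank/vm-disk0
```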
I also like being able to look in a filesystem and see the guests listed there
as files, knowing I could, if I wanted to, copy them out to any type of storage
I wish. Someday, perhaps, I'll want to move some guest VMs over to a BTRFS
server instead of ZFS. That would be more difficult with iSCSI.
For what it's worth, in more recent times I've opted to use iSCSI, and here
are the reasons:
When you create a guest file in a ZFS filesystem, it doesn't automatically get
a refreservation. That means that if you run out of disk space (thanks to
snapshots and so on), the guest OS suddenly can't write to disk, and that's a
hard guest crash/failure. Yes, you can set the refreservation manually if
you're clever, but it's easy to get wrong.
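(Setting it by hand might look like the following, with a hypothetical dataset name; the hard part is picking a size that actually covers the guest images plus metadata:)

```shell
# Reserve space for a filesystem holding roughly 40G of guest images
# (tank/vmstore is a hypothetical dataset name; 45G is a guess with headroom).
zfs set refreservation=45G tank/vmstore
```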
If you create a zvol, by default it gets an appropriately sized refreservation
that guarantees the guest will always be able to write to disk.
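(You can see this at creation time; pool and volume names below are hypothetical:)

```shell
# Create a 20G zvol; ZFS sets a refreservation automatically,
# slightly larger than volsize to account for metadata.
zfs create -V 20G tank/vm-disk0
zfs get volsize,refreservation tank/vm-disk0
```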
Although I got the same performance using iSCSI or NFS with ESXi... I did NOT
get the same result using VirtualBox.
In VirtualBox, if I use a *.vdi file, the performance is *way* slower than
using a *.vmdk wrapper for a physical device (zvol), created with VBoxManage
internalcommands createrawvmdk.
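(For anyone who wants to try it, the wrapper is created roughly like this; the paths are hypothetical, and on Solaris the zvol device shows up under /dev/zvol/:)

```shell
# Wrap a raw zvol device in a vmdk descriptor that VirtualBox can attach
# (file path and zvol name are examples only).
VBoxManage internalcommands createrawvmdk \
    -filename /vms/guest1.vmdk \
    -rawdisk /dev/zvol/dsk/tank/vm-disk0
```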
The only problem with the zvol/vmdk idea in VirtualBox is that on every reboot
(or remount) the zvol device becomes owned by root again, so I have to manually
chown the zvol for each guest each time I restart the host.
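(One way to take the manual step out of it is a loop run as root at boot, e.g. from an init script; volume names and the vbox user are hypothetical:)

```shell
# Re-grant the VirtualBox user access to each guest's zvol after reboot
# (tank/vm-disk0, tank/vm-disk1, and vboxuser are example names).
for vol in tank/vm-disk0 tank/vm-disk1; do
    chown vboxuser "/dev/zvol/dsk/$vol"
done
```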