On 2012-11-08 05:43, Edward Ned Harvey wrote:
> you've got a Linux or Windows VM inside of ESX, which is writing to a virtual
> disk, which ESX is then wrapping up inside NFS and TCP, talking on the virtual
> LAN to the ZFS server, which unwraps the TCP and NFS, pushes it all through the
> ZFS/Zpool layer, writing back to the virtual disk that ESX gave it, which is
> itself a layer on top of Ext3
I think this is the part where we disagree. As I understand all-in-ones,
the VM running "a ZFS OS" enjoys PCI pass-through, so it gets dedicated
hardware access to the HBA(s) and hard disks at raw speeds, with no
extra layers of lag in between. So there are a couple of OS disks
where ESXi itself is installed, along with distros, logging and such,
while the other disks are managed by ZFS in a VM and served back to
ESXi to store the other VMs on the system.
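To make the loop concrete, here is a hypothetical sketch of what the storage VM side might look like. The pool name (tank), filesystem name (vmstore) and device names are all illustrative assumptions, not taken from any actual setup; device naming varies by OS (this uses illumos/Solaris-style names).

```shell
# Inside the storage VM, the passed-through HBA's disks appear as raw
# devices, so a pool can be built directly on them (names illustrative).
zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0

# Create a filesystem for VM images and export it over NFS so the
# ESXi host can mount it as a datastore.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
```

On the ESXi side, that export would then be added as an NFS datastore, with traffic staying on the internal virtual switch between the host and the storage VM.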
Also, VMware does not (AFAIK) use ext3, but their own VMFS, which is,
among other things, cluster-aware (the same storage can be shared by
several VMware hosts).
That said, on older ESX (with its minimized RHEL userspace interface),
which was picky about using only certified hardware with virtualization-enabled
drivers, I did combine some disks attached to the motherboard into a
Linux mdadm array (within the RHEL-based management OS) and exported
that to the vmkernel over NFS. Back then disk performance was
abysmal whatever you did, so in the end the NFS disks were not used to
store virtual disks, but rather distros and backups.
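For reference, that kind of setup might have looked roughly like the following. This is a hypothetical sketch, not the actual commands used back then; the device names, mount point, and the vmkernel address placeholder are all assumptions.

```shell
# Build a RAID-5 array from three onboard disks (device names illustrative),
# put ext3 on it, and mount it inside the RHEL-based management OS.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext3 /dev/md0
mkdir -p /export/md0
mount /dev/md0 /export/md0

# Export it to the vmkernel over NFS; VMKERNEL_IP is a placeholder for
# the host's vmkernel interface address.
echo '/export/md0 VMKERNEL_IP(rw,no_root_squash,sync)' >> /etc/exports
exportfs -a
```

The NFS export would then be mounted by the vmkernel as a datastore, which is where the performance problem showed up.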
zfs-discuss mailing list