We currently use ZFS (Nexenta's Developer Edition, www.nexenta.com) as an NFS
datastore. All works well; however, copying VMDKs during ESX provisioning
bothers me a bit (we use cheap SATA-based hardware, so scaling to a large
number of simultaneous requests is an issue). With ZFS's unlimited
snapshots/clones (a clone is a read-write snapshot), copying is almost free.
I'm creating an esxzfs.pm that uses ZFS snapshots and clones to "thin
provision" (clone) VMs for ESX servers, basically the same as esxthin.pm does
for NetApp hardware. Since ZFS has limitations on cloning (no file-level
cloning, only filesystem-level snapshots/clones), things are not so
straightforward.
The ESX NFS client (and Nexenta's NFS server) doesn't support automatic
mounting of subfilesystems, so you have to mount each VCL slot (computer)
individually. In the ordinary case (one filesystem, vcl, shared over NFS), you
mount [VCL] in ESX and see golden, inuse, and all the other subfolders, and you
use ssh with cp and rm for image manipulation; all is ok.
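For contrast, the ordinary single-filesystem case could be sketched roughly as below. This is a dry run that only prints the commands instead of executing them; the server name and the image/slot file names are my placeholders, not names taken from the actual VCL modules:

```shell
#!/bin/sh
# Ordinary case: one NFS export, file-level copies of VMDKs.
# "nfs-server.example.com" and the image/slot names are placeholders.
run() { echo "$@"; }   # dry run: print instead of execute

# mount the single shared filesystem as datastore [VCL] on the ESX host
run esxcfg-nas -a -o nfs-server.example.com -s /vcl VCL

# provisioning copies the golden VMDK into the slot folder -- a full
# file copy, which is slow on cheap SATA under many concurrent requests
run ssh nfs-server.example.com cp /vcl/golden/imagex/imagex.vmdk /vcl/inuse/slotx/
# cleanup removes it again
run ssh nfs-server.example.com rm /vcl/inuse/slotx/imagex.vmdk
```

The full cp is exactly the cost the snapshot/clone approach below is meant to avoid.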
ZFS supports only filesystem-level snapshotting/cloning, but snapshotting/cloning
EVERYTHING for a single VM clone is not practical. So the basic idea is:
* every folder is a distinct filesystem (in ZFS/Nexenta, folder=separate FS)
* snapshot golden/imagex (e.g. golden/ima...@now) and clone it to inuse/sloty
* mount /inuse/sloty on an ESX host as [sloty]
* register and start VM from [sloty]
* ESX supports only 64 NFS mounts, so the maximum number of slots (computers)
on a single ESX server is 64 (which is OK in my situation).
* you can't mix "normal" esx and esxzfs provisioning on a single NFS server (if
you mount only the "root" vcl, you don't see the files in golden/imagex or
inuse/slotx; you have to mount slotx or imagey explicitly). I'm OK with that too.
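The per-slot steps above could be sketched as shell commands. Again this is a dry run that just prints what would be executed; the pool name tank and the image/slot names are my placeholders, and the exact commands esxzfs.pm will issue may differ:

```shell
#!/bin/sh
# Dry-run sketch of the per-slot snapshot/clone provisioning steps.
# Pool "tank", image "imagex", and slot "sloty" are placeholders.
POOL=tank
IMAGE=imagex
SLOT=sloty

run() { echo "$@"; }   # dry run: print instead of execute

# 1. snapshot the golden image filesystem (near-instant, copy-on-write)
run zfs snapshot "$POOL/golden/$IMAGE@now"
# 2. clone the snapshot into the slot's own filesystem (also near-instant)
run zfs clone "$POOL/golden/$IMAGE@now" "$POOL/inuse/$SLOT"
# 3. export the clone over NFS so the ESX host can see it
run zfs set sharenfs=on "$POOL/inuse/$SLOT"
# 4. on the ESX host: mount the slot as its own datastore [sloty]
#    (one of the maximum 64 NFS mounts per host)
run esxcfg-nas -a -o nfs-server.example.com -s "/$POOL/inuse/$SLOT" "$SLOT"
```

Teardown would be the reverse: remove the ESX mount, then `zfs destroy` the clone and the snapshot.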