> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
> the VM running "a ZFS OS" enjoys PCI-pass-through, so it gets dedicated
> hardware access to the HBA(s) and harddisks at raw speeds, with no
> extra layers of lags in between.
Ah. But even with PCI pass-through, you're still limited by the virtual LAN
switch that connects ESXi to the ZFS guest via NFS. When I connected ESXi and
a guest this way, bandwidth between host and guest was purely CPU- and
memory-limited, because no real network interface is involved; the LAN is
emulated entirely in software. I streamed data as fast as I could between
ESXi and a guest, and measured only about 2-3 Gbit. That was over a year ago,
so I forget precisely how I measured it ... NFS read/write perhaps, or wget
or something. I know I didn't use ssh or scp, because those tend to slow
network streams down considerably. The virtual network is a bottleneck
(unless you're only using 2 disks, in which case 2-3 Gbit is fine.)
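I forget the exact tool, but the general approach is to stream a large amount of data over a raw TCP connection and divide bytes by elapsed time, the way iperf does. Here's a minimal sketch of that measurement in Python; the loopback interface stands in for the ESXi virtual switch, and the port, chunk size, and total size are all illustrative assumptions, not what I originally used:

```python
# Sketch: measure raw TCP stream throughput between two endpoints.
# Loopback stands in for the ESXi virtual switch (an assumption for
# illustration); sizes below are arbitrary, not from the original test.
import socket
import threading
import time

CHUNK = 64 * 1024          # 64 KiB per send
TOTAL = 128 * 1024 * 1024  # stream 128 MiB in total

def sender(port):
    # Connect to the receiver and push TOTAL bytes as fast as possible.
    with socket.create_connection(("127.0.0.1", port)) as s:
        buf = b"\x00" * CHUNK
        sent = 0
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK

def measure():
    # Receiver side: accept one connection, time how long it takes
    # to drain TOTAL bytes, and report throughput in Gbit/s.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=sender, args=(port,))
    t.start()
    conn, _ = srv.accept()
    received = 0
    start = time.monotonic()
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.monotonic() - start
    conn.close()
    srv.close()
    t.join()
    return received * 8 / elapsed / 1e9  # Gbit/s

if __name__ == "__main__":
    print(f"{measure():.2f} Gbit/s over loopback")
```

On real hardware you'd run the two halves on the ESXi host and the guest; since the software switch copies every frame through the CPU, the number you get tracks CPU and memory speed rather than any NIC's line rate.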
I think THIS is where we're disagreeing: I'm saying "only 2-3 Gbit," but I see
Dan's email said "since the traffic never leaves the host (I get 3gb/sec or so
usable thruput.)" and "No offense, but quite a few people are doing exactly
what I describe and it works just fine..."
It would seem we simply have different definitions of "fine" and "abysmal."
> Also, VMWare does not (AFAIK) use ext3, but their own VMFS which is,
> among other things, cluster-aware (same storage can be shared by
> several VMware hosts).
I didn't know vmfs3 had extensions - I thought vmfs3 was based on ext3. At
least, all the performance characteristics I've ever observed are on par with
ext3. But it makes sense that they would extend it in some way.
zfs-discuss mailing list