> From: Dan Swartzendruber [mailto:dswa...@druber.com]
> I'm curious here. Your experience is 180 degrees opposite from mine. I
> run an all in one in production and I get native disk performance, and
> ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS
> datastore, since the traffic never leaves the host (I get 3gb/sec or so
> usable thruput.)
What is an "all in one"?
I wonder if we crossed wires somehow... I thought Tiernan said he was running
Nexenta inside of ESXi, where Nexenta exports NFS back to the ESXi machine, so
ESXi will have the benefit of ZFS underneath its storage.
That's what I used to do.
When I said performance was abysmal, I meant: if you dig right down and
pressure the system for throughput to disk, you've got a Linux or Windows VM
inside of ESX, which is writing to a virtual disk, which ESX is then wrapping
up inside NFS and TCP, talking on the virtual LAN to the ZFS server, which
unwraps the TCP and NFS, pushes it all through the ZFS/zpool layer, writing
back to the virtual disk that ESX gave it, which is itself a layer on top of
Ext3, before it finally hits disk. Limited purely by CPU and memory throughput,
my VM guests were seeing a max throughput of around 2-3 Gbit/sec. That's not
*horribly* abysmal. But it's bad to be CPU/memory/bus limited when you can just
eliminate all those extra layers and do the virtualization directly inside a
system that supports ZFS.
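A quick way to see that ceiling for yourself is to time a large sequential
write inside the guest. A rough sketch, not a rigorous benchmark — the target
path defaults to /tmp here so it runs anywhere; point TARGET at a file on the
datastore-backed filesystem to measure the real path:

```shell
# Rough sequential-write check: push 64 MiB through to stable storage and
# let dd report the rate. Note that all-zero data can be compressed away
# by ZFS underneath, so pre-generate random data if you want an honest
# number rather than this quick sanity check.
TARGET="${TARGET:-/tmp/throughput-test.bin}"
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync
```

Remember to remove the test file afterwards.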
> > I have abandoned ESXi in favor of openindiana or solaris running as the
> > host, with virtualbox running the guests. I am SOOOO much happier now.
> > But it takes a higher level of expertise than running ESXi, but the
> > results are much better.
> in what respect? due to the 'abysmal performance'?
No - mostly just the fact that I am no longer constrained by ESXi. In ESXi,
you have such limited capabilities for monitoring, storage, and how you
interface with it ... You need a Windows client, and you only have a few
options in terms of guest autostart and so forth. If you manage all that in a
shell script (or whatever), you can literally do anything you want. Start up
one guest, then launch something that polls the first guest for an operational
XMPP interface (or whatever service you happen to care about) before launching
the second guest, etc. Obviously you can still do brain-dead timeouts, or
monitor for the existence of late-boot-cycle services such as vmware-tools,
but that's no longer your only option.
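A startup script along those lines can be sketched in a few lines of bash.
The guest names and the polled port are hypothetical; the VBoxManage calls
assume VirtualBox's standard CLI, and the port probe uses bash's /dev/tcp
redirection:

```shell
#!/usr/bin/env bash
# Poll a TCP port until it answers, or give up after a timeout (seconds,
# default 120). Uses bash's built-in /dev/tcp, so no extra tools needed.
wait_for_service() {
  local host=$1 port=$2 deadline=$(( $(date +%s) + ${3:-120} ))
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 2
  done
}

# Hypothetical guests: start the XMPP server first, then wait for its
# client port (5222) to come up before launching anything that depends
# on it. Uncomment on a host that actually runs VirtualBox:
# VBoxManage startvm "xmpp-server" --type headless
# wait_for_service 192.168.1.10 5222 || exit 1
# VBoxManage startvm "app-server" --type headless
```

The same polling function works for any late-boot service you care about —
just change the port.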
Of particular interest: I formerly had ESXi running a guest that was a DHCP
and DNS server, and everything else had to wait for it. Now I run DHCP and DNS
directly inside the openindiana host (so I eliminated one VM). I am now able
to connect to guest consoles via VNC or RDP (which works fine on Mac and
Linux), whereas with ESXi your only choice is to connect via the vSphere
Client from Windows.
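For the RDP side, VirtualBox exposes a guest's console through its VRDE
module. A sketch — the guest name and port are hypothetical, and this needs
the VirtualBox Extension Pack installed for VRDE:

```shell
# Sketch only: "dns-guest" and port 5012 are illustrative.
command -v VBoxManage >/dev/null || exit 0   # skip where VirtualBox is absent

VBoxManage modifyvm "dns-guest" --vrde on --vrdeport 5012
VBoxManage startvm "dns-guest" --type headless
# Any RDP client can then reach the console, e.g.: rdesktop host:5012
```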
In ESXi, you cannot use a removable USB drive for your rotating backup
storage. I was using an eSATA drive, and I needed to reboot the whole system
every time I rotated backups offsite. But with openindiana as the host, I can
add and remove removable storage, perform my zpool imports/exports, etc., all
without any rebooting.
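The rotation then reduces to a plain export/import cycle — a sketch, assuming
a backup pool named "backup" (the pool name is illustrative):

```shell
# Sketch: the pool name "backup" is illustrative.
command -v zpool >/dev/null || exit 0   # skip on systems without ZFS

zpool export backup      # flush and cleanly detach before pulling the drive
# ...physically swap the disk, attach the replacement, then:
zpool import backup
zpool status backup      # confirm the pool is healthy before the next run
```

No reboot at any point — the export flushes everything, so the drive is safe
to pull as soon as the command returns.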
Stuff like that. I could go on, but it basically comes down to: with
openindiana, you can do a lot more than you can with ESXi, because it's a
complete OS. You simply have more freedom, better performance, less
maintenance, less complexity. IMHO, it's better in every way.
I say "less complexity," but maybe not. It depends. I have greater complexity
in the host OS, but I have less confusion and fewer VM dependencies, so to me
that's less complexity.
zfs-discuss mailing list