On 11/8/2012 1:41 PM, Edward Ned Harvey wrote:
Doesn't vbox have to do some sort of virtual switch? I think you're making a
distinction that doesn't exist. What you're saying is that write performance
is marginally better, but read performance is 2x? You have me curious enough
to try the vmxnet3 driver again (it's been over a year since the last time -
maybe they've fixed the perf bugs...)
From: Dan Swartzendruber [mailto:dswa...@druber.com]
Now you have me totally confused. How does your setup get data from the
guest to the OI box? If it goes over a physical wire and it's gig-e, it's going
to be 1/3 to 1/2 the speed of the other way. If you're saying you use 10GbE or
some such, we're talking about a whole different animal.
In the old setup, I had an ESXi host with a Solaris 10 guest exporting NFS back to
the host, so ESXi created the other guests inside that NFS storage pool. In this
setup, the bottleneck is the virtual LAN, which maxes out around 2-3 Gbit/sec,
plus TCP/IP and NFS overhead that degrades the usable performance a bit more.
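For anyone trying to picture that all-in-one loop, here is a rough sketch of the
two halves of it. The pool/dataset names and IP addresses are placeholders I've
made up, and the esxcli syntax assumes ESXi 5.x (older hosts would use
esxcfg-nas instead):

    # On the Solaris/OI storage guest: carve out a dataset and share it over NFS
    # ("tank", "vmstore", and the ESXi host address are made-up examples)
    zfs create tank/vmstore
    zfs set sharenfs=rw,root=@192.168.1.10 tank/vmstore

    # On the ESXi host: mount that export as an NFS datastore for the other guests
    esxcli storage nfs add --host=192.168.1.20 --share=/tank/vmstore --volume-name=zfs-nfs

Every block the other guests read or write makes that round trip over the
virtual switch, which is where the 2-3 Gbit ceiling comes from.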
In the new setup, I have OpenIndiana running directly on the hardware (OI is the
host) and virtualization is managed by VirtualBox. I would use zones if I wanted
Solaris/OI guests, but it just so happens I want linux & windows guests. There
is no bottleneck: my linux guest can read 6 Gbit/sec and write 3 Gbit/sec (the pool
is 3 disks mirrored with another 3 disks, and each disk can read/write about 1 Gbit/sec).
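For comparison, a rough sketch of what that local layout might look like; the
pool name, device names, and VM name below are placeholders I've invented, not
details from the original post:

    # Hypothetical pool of three 2-way mirrors (the c1t*d0 names are placeholders)
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
    zfs create tank/vbox

    # The guest's disk image sits directly on the local pool, so guest I/O hits
    # ZFS without any NFS or virtual-LAN hop in between
    VBoxManage createhd --filename /tank/vbox/linux-guest.vdi --size 40960
    VBoxManage createvm --name linux-guest --ostype Linux_64 --register
    VBoxManage storagectl linux-guest --name SATA --add sata
    VBoxManage storageattach linux-guest --storagectl SATA --port 0 --device 0 \
        --type hdd --medium /tank/vbox/linux-guest.vdi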