Edward Ned Harvey <[email protected]> writes:

> > From: Brad Knowles [mailto:[email protected]]
> > 
> > I strongly suspect that this performance between ESX clients is bus and
> > software limited, and that IB, FC, or 10GigE is not going to be able to
> > move any faster than what you can do over the bus of the machine.
> 
> What bus are you talking about?  To me, this whole thing should be
> RAM-speed and CPU limited only.  Granted, there's a lot of overhead ... If
> client 1 wants to read a file, it creates a file-open request, which goes
> to the NFS client, which encapsulates it in TCP, which goes to the virtual
> network interface, which thinks about the MAC address and instructs the
> virtual hardware to send a packet; ESX then thinks about a virtual switch
> and passes the packet to some other virtual hardware ... etc.  And
> finally, client 2 reads the disk and sends the response packet back
> through virtual hardware.

I believe you are correct in saying that the system will be CPU/RAM limited,
not bus limited; but I think what the other poster meant was that going
from guest to guest on the same box will probably always be faster than
going from guest -> network -> somewhere else.

Personally, I'm not sure.  In the case of dedicating a CPU to the software
switch, he's probably right.  In the case of giving all guests all CPUs?
Eh, it could go either way; context switching can get expensive.
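The wrap/unwrap overhead Harvey enumerates can be sketched as nested
encapsulation.  This is purely an illustration of where the CPU/RAM cost
accumulates; the layer names and helper functions are mine, not ESX or NFS
internals:

```python
# Illustrative sketch of the guest-to-guest NFS read path described above.
# Each hop wraps or unwraps the payload; the tags and functions here are
# invented for illustration and do not correspond to real ESX internals --
# they only show how many copies the request goes through.

def send_from_guest(payload: bytes) -> bytes:
    """Client 1: NFS request -> TCP segment -> frame on the virtual NIC."""
    nfs = b"NFS|" + payload                  # file-open request, NFS-encoded
    tcp = b"TCP|" + nfs                      # wrapped in a TCP segment
    frame = b"ETH|dst-mac|" + tcp            # virtual NIC adds MAC framing
    return frame

def virtual_switch(frame: bytes) -> bytes:
    """ESX: the software switch inspects the MAC and forwards the frame."""
    assert frame.startswith(b"ETH|")         # switch only handles frames
    return frame                             # delivered to the other vNIC

def receive_in_guest(frame: bytes) -> bytes:
    """Client 2: strip each layer back off to recover the NFS request."""
    tcp = frame.removeprefix(b"ETH|dst-mac|")   # needs Python 3.9+
    nfs = tcp.removeprefix(b"TCP|")
    return nfs.removeprefix(b"NFS|")

request = b"open /export/data"
delivered = receive_in_guest(virtual_switch(send_from_guest(request)))
print(delivered == request)  # the original request survives the round trip
```

Every one of those wrap/unwrap steps is memory traffic plus CPU work and
never touches a physical bus, which is consistent with the point that the
guest-to-guest path is RAM/CPU bound.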

> Of course, there is another way to give ZFS control of disks for some
> other machine ... if Solaris is the primary OS, and the other machines are
> VM guests inside Solaris.  But the only way I'm aware of to virtualize
> Windows/Linux inside Solaris is VirtualBox, which is not amazingly
> attractive.

OpenSolaris had rather nice Xen support back in the day.  They called it
xVM.

I've been talking about paying someone to forward-port the xen0 stuff into
OpenIndiana, but I am poor.
_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
