On Nov 16, 2010, at 7:51 AM, Edward Ned Harvey wrote:

>> From: Sean Lutner [mailto:[email protected]]
>> Sent: Monday, November 15, 2010 11:20 PM
>>
>> ESX absolutely supports bonding (teaming). It's a standard best-practice
>> configuration for your vSwitches if you want failover or outbound load
>> balancing. See the esxcfg-vswitch command for details. If you can and
>> want to set up port channels (in IOS land), you can also control inbound
>> traffic. There are plenty of blog and VMware community posts about how
>> to set this all up.
>
> This is true. In fact, yesterday while I was doing something unrelated, I
> stumbled upon those controls. So it's pretty obvious if you just give it
> a try.
>
> However, the fact remains that bonding/teaming is still limited to one
> wire speed per connection. So bonding/teaming might be useful if you have
> a bunch of machines talking to a bunch of machines, but it won't be
> useful if your ESX host is talking directly to one other host for the
> sake of storage.
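For anyone following along, a minimal sketch of the esxcfg-vswitch workflow referred to above. The vmnic and vSwitch names here are placeholders (check `esxcfg-nics -l` and `esxcfg-vswitch -l` on your own host); note that the load-balancing policy itself (e.g. IP hash to match a Cisco port channel) is normally set through the vSphere client rather than on this command line.

```shell
# List the physical NICs and the existing vSwitches so you know
# which vmnic names are free to team.
esxcfg-nics -l
esxcfg-vswitch -l

# Attach a second uplink (placeholder name vmnic1) to vSwitch0,
# giving the vSwitch a bonded/teamed pair of physical NICs.
esxcfg-vswitch -L vmnic1 vSwitch0

# Verify that both uplinks now appear under vSwitch0.
esxcfg-vswitch -l
```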
I run a somewhat sizable virtual environment with almost 50 ESX hosts and up to 50 guests per host. The environment has over 1,000 VMs, and we don't have a single ESX host with anything faster than 1GbE NICs. Our storage devices (NetApp) all have 10GbE out the back. We have never seen a bottleneck on the host side using 1GbE connections.

We split the hosts into three networks: one for the vmkernel and storage (all NFS), one for guest networking, and one for the service console/vMotion. These are all bonded/teamed failover pairs. Separating your traffic like this is also a best practice and something I highly recommend you do. Having a full 1GbE connection dedicated to each of these is almost certainly more than enough. What are you doing in your environment that you need 10GbE?

_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
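To make the "1GbE is enough" sizing claim concrete, here's a quick back-of-envelope calculation. All the numbers are illustrative assumptions (the overhead fraction in particular), not measurements from the environment described above:

```python
# Rough check of how much storage bandwidth each guest would get if every
# guest on a fully loaded host did I/O at once over one dedicated 1GbE uplink.

GBE_LINE_RATE_MBPS = 1000   # nominal 1GbE line rate
USABLE_FRACTION = 0.9       # assumed allowance for TCP/IP + NFS overhead
GUESTS_PER_HOST = 50        # from the post: up to 50 guests per host

usable_mbps = GBE_LINE_RATE_MBPS * USABLE_FRACTION   # ~900 Mb/s usable
per_guest_mbps = usable_mbps / GUESTS_PER_HOST       # average share per guest

print(f"usable per host:  {usable_mbps:.0f} Mb/s")
print(f"share per guest:  {per_guest_mbps:.1f} Mb/s "
      f"(~{per_guest_mbps / 8:.2f} MB/s)")
```

In practice guests rarely all do sustained I/O simultaneously, so the real per-guest share is usually far better than this worst case, which is why a dedicated 1GbE uplink per traffic class tends to hold up.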
