We've had great luck with Intel dual-port 10GbE NICs and Arista 7050S
switches. We have the NICs configured as an 802.3ad bond with each port
going to a different switch.
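
In case it's useful, here's a minimal sketch of the bond setup
(Debian/Ubuntu ifupdown style; the interface names and address are
placeholders, so adjust for your environment). Note that for an 802.3ad
bond to span two switches, the Aristas have to be set up as an MLAG pair
so both ports can be active at once:

    # /etc/network/interfaces (sketch)
    auto bond0
    iface bond0 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        bond-slaves eth4 eth5
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast
        # layer3+4 spreads flows across both slaves by IP/port
        bond-xmit-hash-policy layer3+4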

I haven't noticed any performance issues. I remember we did some benchmarks
last summer and were a little disappointed with the transfer rates of
various protocols, but then saw that we were able to have multiple sessions
running at the same peak rate, so we figured it was a software limitation
of some sort.
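
For anyone curious, the effect is easy to see with something like iperf
(I don't recall the exact tools we used; the address below is a
placeholder). It's also worth remembering that 802.3ad hashes each flow
onto a single slave, so one session can never exceed one link's worth of
bandwidth anyway:

    # single TCP session -- peaked below what we expected
    iperf -c 10.0.0.10 -t 30

    # four parallel sessions (-P 4) -- each ran at the same peak rate,
    # which is what pointed us at a per-session software limit
    iperf -c 10.0.0.10 -t 30 -P 4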

On Mon, Oct 21, 2013 at 3:19 PM, matthew zeier <[email protected]> wrote:

> Wondering what others have used for multi homed OpenStack nodes on 10GbE.
> Linux bonding? Something else?
>
> In past lives I've encountered performance issues with 10GbE on Linux
> bonding and have used Myricom cards for active/failover.
>
> What are others using?
>
> --
> matthew zeier | Dir. Operations | Lookout | https://twitter.com/mrz
>

-- 
Joe Topjian
Systems Architect
Cybera Inc.

www.cybera.ca

Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use
of cyberinfrastructure.
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
