We don't use Linux bonding at all; we use OVS bonding.

Actually, mixing Linux bridges and OVS bridges didn't work for us with dual
10GbE NICs in the compute nodes, so we dropped all Linux bridges and our
compute nodes now run entirely on top of OVS (including the management and
service networks).
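For reference, a minimal sketch of what an OVS bond over two 10GbE NICs can look like. The bridge, bond, and interface names (br0, bond0, eth0, eth1) and the balance-slb mode are placeholder assumptions, not StackOps' actual configuration:

```shell
# Create an OVS bridge and bond the two 10GbE interfaces into it.
# Names and bond mode here are illustrative only.
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 bond_mode=balance-slb

# If the upstream switch supports 802.3ad, LACP can be enabled instead:
ovs-vsctl set port bond0 lacp=active bond_mode=balance-tcp

# Inspect bond status and failover behaviour:
ovs-appctl bond/show bond0
```

With balance-slb no switch-side configuration is needed; balance-tcp with LACP generally gives better per-flow distribution but requires a matching port-channel on the switch.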

We use Intel cards mainly, and we have used other manufacturers in the
past.

Cheers
Diego

 --
Diego Parrilla
CEO
http://www.stackops.com/ | [email protected]
US: +1 (512) 646-0068 | EU: +34 91 005-2164 | skype:diegoparrilla


On Mon, Oct 21, 2013 at 11:19 PM, matthew zeier <[email protected]> wrote:

> Wondering what others have used for multi homed OpenStack nodes on 10GbE.
> Linux bonding? Something else?
>
> In past lives I've encountered performance issues with 10GbE on Linux
> bonding and have used Myricom cards for active/failover.
>
> What are others using?
>
> --
> matthew zeier | Dir. Operations | Lookout | https://twitter.com/mrz
>
>
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : [email protected]
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
