[openstack-dev] [neutron] openvswitch integration

2014-10-02 Thread Andreas Scheuring
Hi all,
I'm wondering why OVS was integrated into OpenStack the way it is
today
(http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html).

In particular, I would like to understand:
- Why does every physical interface have its own bridge? (I guess you
could also plug it directly into br-int.)
- Why does br-int use VLAN separation rather than the configured
tenant-network-type separation (e.g. VXLAN or something else)
directly? Tagging a packet with an internal VLAN and then converting
it to the external VLAN again looks strange to me at first glance.

It's just a feeling, but this surely has an impact on performance; I
would expect latency and CPU consumption to go up with this design.
Are there any technical or historical reasons for it? Or is it just
to reduce complexity?
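
The double-tagging in question can be made concrete with a toy model (purely illustrative; in a real deployment the translation is done by OpenFlow rules on the bridges, not Python, and the VLAN numbers here are invented):

```python
# Each network gets a node-local VLAN on br-int; flows on the physical
# bridge swap the local tag for the provider VLAN on the way out, and
# the reverse rewrite happens on the way back in.

LOCAL_TO_PROVIDER = {1: 100, 2: 200}  # local vlan -> provider vlan
PROVIDER_TO_LOCAL = {v: k for k, v in LOCAL_TO_PROVIDER.items()}

def egress(frame):
    # br-ethX flow: rewrite local tag to provider tag
    frame = dict(frame)
    frame["vlan"] = LOCAL_TO_PROVIDER[frame["vlan"]]
    return frame

def ingress(frame):
    # br-int flow: rewrite provider tag back to local tag
    frame = dict(frame)
    frame["vlan"] = PROVIDER_TO_LOCAL[frame["vlan"]]
    return frame

f = {"src": "vm1", "vlan": 1}
on_wire = egress(f)      # vlan becomes 100 on the wire
back = ingress(on_wire)  # vlan restored to the local value, 1
```

So every frame crossing the node boundary goes through one tag rewrite in each direction, which is the extra work the question is about.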

Thanks!


-- 
Andreas 
(irc: scheuran)




_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] openvswitch integration

2014-10-02 Thread Kevin Benton
> - why does every physical interface have its own bridge? (I guess you
> could also plug it directly into the br-int)

This is where the iptables rules are applied. Until they are
implemented in OVS directly, this bridge is necessary.

> - and why does the br-int use vlan separation and not directly the
> configured tenant-network-type separation…

The process plugging into the vswitch (Nova) has no idea what the
network segmentation method will be; that is set by Neutron via the
Neutron agent.
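
That decoupling can be sketched in a few lines (an illustrative simulation only; the class and function names below are invented and are not Neutron's actual code):

```python
# Nova plugs a port knowing only its Neutron port ID; the L2 agent
# later maps the network's real segmentation (VXLAN VNI, provider
# VLAN, ...) onto a node-local VLAN tag.

class LocalVlanManager:
    """Assigns a node-local VLAN per network, independent of how the
    network is segmented on the wire."""
    def __init__(self):
        self.available = list(range(1, 4095))
        self.mappings = {}  # network_id -> local vlan

    def get(self, network_id):
        if network_id not in self.mappings:
            self.mappings[network_id] = self.available.pop(0)
        return self.mappings[network_id]

def nova_plug(bridge, port_id):
    # Nova only attaches the tap and records the port ID; it never
    # needs to know whether the network is VXLAN, GRE, or VLAN.
    bridge[port_id] = {"tag": None}

def agent_wire_up(bridge, port_id, network_id, segmentation, lvm):
    # The Neutron agent resolves the segmentation details and tags the
    # port with the local VLAN; separate flows on the tunnel/physical
    # bridge translate local tag <-> segmentation id.
    local = lvm.get(network_id)
    bridge[port_id]["tag"] = local
    return (local, segmentation)

br_int = {}
lvm = LocalVlanManager()
nova_plug(br_int, "port-1")
assert br_int["port-1"]["tag"] is None  # Nova is segmentation-agnostic
local, seg = agent_wire_up(br_int, "port-1", "net-A", ("vxlan", 5001), lvm)
```

The local VLAN is thus a stable, segmentation-neutral handle on br-int, which is exactly why Nova can plug ports before Neutron decides anything.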


On Thu, Oct 2, 2014 at 12:20 AM, Andreas Scheuring 
scheu...@linux.vnet.ibm.com wrote:




-- 
Kevin Benton