Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
Can we move the discussion of deprecating veth pairs to here? https://bugs.launchpad.net/neutron/+bug/1587296 https://review.openstack.org/323310 As you can see in the related bugs and linked patches there are some complications. Some of the veth config options were already deprecated and the change had to be reverted recently. It would be good to hear your opinions on how to solve the remaining problems. Bence Romsics On Wed, Jun 15, 2016 at 8:01 PM, Peters, Rawlin wrote: > On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) wrote: >> >which generates an arbitrary name >> >> I'm not a fan of this approach because it requires coordinated assumptions. >> With the OVS hybrid plug strategy we have to make guesses on the agent side >> about the presence of bridges with specific names that we never explicitly >> requested and that we were never explicitly told about. So we end up with >> code >> like [1] that is looking for a particular end of a veth pair it just hopes is >> there so the rules have an effect. > > I don't think this should be viewed as a downside of Strategy 1 because, at > least when we use patch port pairs, we can easily get the peer name from the > port on br-int, then use the equivalent of "ovs-vsctl iface-to-br <iface name>" > to get the name of the bridge. If we allow supporting veth pairs to implement > the subports, then getting the arbitrary trunk bridge/veth names isn't as > trivial. > > This also brings up the question: do we even need to support veth pairs over > patch port pairs anymore? Are there any distros out there that support > openstack but not OVS patch ports? > >> >> >it seems that the LinuxBridge implementation can simply use an L2 agent >> >extension for creating the vlan interfaces for the subports >> >> LinuxBridge implementation is the same regardless of the strategy for OVS. 
>> The >> whole reason we have to come up with these alternative approaches for OVS is >> because we can't use the obvious architecture of letting it plug into the >> integration bridge due to VLANs already being used for network isolation. I'm >> not sure pushing complexity out to os-vif to deal with this is a great >> long-term strategy. > > The complexity we'd be pushing out to os-vif is not much worse than the > current > complexity of the hybrid_ovs strategy already in place today. > >> >> >Also, we didn’t make the OVS agent monitor for new linux bridges in the >> >hybrid_ovs strategy so that Neutron could be responsible for creating the >> >veth >> >pair. >> >> Linux Bridges are outside of the domain of OVS and even its agent. The L2 >> agent >> doesn't actually do anything with the bridge itself, it just needs a veth >> device it can put iptables rules on. That's in contrast to these new OVS >> bridges that we will be managing rules for, creating additional patch ports, >> etc. > > I wouldn't say linux bridges are totally outside of its domain because it > relies > on them for security groups. Rather than relying on an arbitrary naming > convention between Neutron and Nova, we could've implemented monitoring for > new > linux bridges to create veth pairs and firewall rules on. I'm glad we didn't, > because that logic is specific to that particular firewall driver, similar to > how this trunk bridge monitoring would be specific to only vlan-aware-vms. I > think the logic lives best within an L2 agent extension, outside of the core > of the OVS agent. > >> >> >Why shouldn't we use the tools that are already available to us? >> >> Because we're trying to build a house and all we have are paint brushes. :) > > To me it seems like we already have a house that just needs a little paint :) > >> >> >> 1. 
>> https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
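The bridge lookup Rawlin describes can be sketched in a few lines. The ovs-vsctl subcommands used here ("get Interface ... options:peer" and "iface-to-br") are real, but the port/bridge names and the injected `run` callable are hypothetical stand-ins so the logic can be exercised without an OVS installation:

```python
def trunk_bridge_for_patch_port(run, patch_port):
    """Resolve the bridge holding the peer of a br-int patch port.

    `run` executes an ovs-vsctl argv and returns its stdout.
    """
    # options:peer names the other end of the patch pair
    # (ovs-vsctl prints string values quoted)
    peer = run(['ovs-vsctl', 'get', 'Interface', patch_port,
                'options:peer']).strip('"')
    # iface-to-br maps an interface name to the bridge it belongs to
    return run(['ovs-vsctl', 'iface-to-br', peer])

# Exercise it with a fake runner in place of a real OVS database:
def fake_run(argv):
    fake_db = {
        ('get', 'Interface', 'tpi-1234', 'options:peer'): '"tpt-1234"',
        ('iface-to-br', 'tpt-1234'): 'tbr-1234',
    }
    return fake_db[tuple(argv[1:])]
```

With a real deployment, `run` would shell out to ovs-vsctl; the point is only that the agent can walk from the br-int port to the trunk bridge without any shared naming assumptions.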
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On 16 June 2016 at 03:33, Matt Riedemann wrote: > On 6/13/2016 3:35 AM, Daniel P. Berrange wrote: > >> On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: >> >>> Hi, >>> >>> You may or may not be aware of the vlan-aware-vms effort [1] in >>> Neutron. If not, there is a spec and a fair number of patches in >>> progress for this. Essentially, the goal is to allow a VM to connect >>> to multiple Neutron networks by tagging traffic on a single port with >>> VLAN tags. >>> >>> This effort will have some effect on vif plugging because the datapath >>> will include some changes that will affect how vif plugging is done >>> today. >>> >>> The design proposal for trunk ports with OVS adds a new bridge for >>> each trunk port. This bridge will demux the traffic and then connect >>> to br-int with patch ports for each of the networks. Rawlin Peters >>> has some ideas for expanding the vif capability to include this >>> wiring. >>> >>> There is also a proposal for connecting to linux bridges by using >>> kernel vlan interfaces. >>> >>> This effort is pretty important to Neutron in the Newton timeframe. I >>> wanted to send this out to start rounding up the reviewers and other >>> participants we need to see how we can start putting together a plan >>> for nova integration of this feature (via os-vif?). >>> >> >> I've not taken a look at the proposal, but on the timing side of things >> it is really way too late to start this email thread asking for design >> input from os-vif or nova. We're way past the spec proposal deadline >> for Nova in the Newton cycle, so nothing is going to happen until the >> Ocata cycle no matter what Neutron wants in Newton. For os-vif our >> focus right now is exclusively on getting existing functionality ported >> over, and integrated into Nova in Newton. So again we're not really >> looking >> to spend time on further os-vif design work right now. 
>> >> In the Ocata cycle we'll be looking to integrate os-vif into Neutron to >> let it directly serialize VIF objects and send them over to Nova, instead >> of using the ad-hoc port-binding dicts. From the Nova side, we're not >> likely to want to support any new functionality that affects port-binding >> data until after Neutron is converted to os-vif. So Ocata at the earliest, >> but probably more like P, unless the Neutron conversion to os-vif gets >> completed unexpectedly quickly. >> >> Regards, >> Daniel >> >> > +1. Nova is past non-priority spec approval freeze for Newton. With > respect to os-vif it's a priority to integrate that into Nova in Newton [1]. > > We're also working on refactoring how we allocate and bind ports when > creating a server [2]. This is a dependency for the routed networks work > and it's also going to bump up against the changes I'm making in nova for > get-me-a-network in Newton (which is another priority). > > So if vlan-aware-vms changes how nova allocates/binds ports, that's going > to be dependent on this also, and will have to be worked into the Ocata > release from Nova's POV. > If my understanding is correct, everything that was required in Nova was done in the context of [1], which completed in Mitaka. What's left is the os-vif part: if os-vif is not tied to the Nova release cycle or the spec/blueprint approval and freeze process and the change in question is trivial, then I hope we can make an effort to pull it off. After all, the review effort required would be roughly the same as the one involved in participating in this thread. Now, if the review process unveiled loose ends and changes that are indeed required in Nova, then I'd agree we should not change priorities. 
Thanks, Armando [1] https://blueprints.launchpad.net/nova/+spec/neutron-ovs-bridge-name > > [1] > https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html#os-vif-integration > [2] > http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/prep-for-network-aware-scheduling.html > > -- > > Thanks, > > Matt Riedemann
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On 16 June 2016 at 00:31, Carl Baldwin wrote: > I know I've been pretty quiet since I started this thread. Y'all have > been doing so well, I've just been reading the thread every day and > enjoying it. I thought I'd top post here to kind of summarize. > > I see wisdom in the strategy suggested by Sean Mooney to make a very > minimal change to os-vif to merely create a bridge if it doesn't exist > already and otherwise plug as normal. I agree that this is the > strategy that we should take and I think there is a lot of support for > it in this thread. I'm assuming at this point that this is the way > we're going to move forward unless we hear something soon. > +1 > > There were a few side conversations about deprecating veth, linux > bridge topics, and possibly something else. I think those are their > own topics and we don't necessarily need to wrap anything up on those > at the moment. > > Thank you for all of your thoughts. > > Carl
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On 6/13/2016 3:35 AM, Daniel P. Berrange wrote: On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: Hi, You may or may not be aware of the vlan-aware-vms effort [1] in Neutron. If not, there is a spec and a fair number of patches in progress for this. Essentially, the goal is to allow a VM to connect to multiple Neutron networks by tagging traffic on a single port with VLAN tags. This effort will have some effect on vif plugging because the datapath will include some changes that will affect how vif plugging is done today. The design proposal for trunk ports with OVS adds a new bridge for each trunk port. This bridge will demux the traffic and then connect to br-int with patch ports for each of the networks. Rawlin Peters has some ideas for expanding the vif capability to include this wiring. There is also a proposal for connecting to linux bridges by using kernel vlan interfaces. This effort is pretty important to Neutron in the Newton timeframe. I wanted to send this out to start rounding up the reviewers and other participants we need to see how we can start putting together a plan for nova integration of this feature (via os-vif?). I've not taken a look at the proposal, but on the timing side of things it is really way too late to start this email thread asking for design input from os-vif or nova. We're way past the spec proposal deadline for Nova in the Newton cycle, so nothing is going to happen until the Ocata cycle no matter what Neutron wants in Newton. For os-vif our focus right now is exclusively on getting existing functionality ported over, and integrated into Nova in Newton. So again we're not really looking to spend time on further os-vif design work right now. In the Ocata cycle we'll be looking to integrate os-vif into Neutron to let it directly serialize VIF objects and send them over to Nova, instead of using the ad-hoc port-binding dicts. 
From the Nova side, we're not likely to want to support any new functionality that affects port-binding data until after Neutron is converted to os-vif. So Ocata at the earliest, but probably more like P, unless the Neutron conversion to os-vif gets completed unexpectedly quickly. Regards, Daniel +1. Nova is past non-priority spec approval freeze for Newton. With respect to os-vif it's a priority to integrate that into Nova in Newton [1]. We're also working on refactoring how we allocate and bind ports when creating a server [2]. This is a dependency for the routed networks work and it's also going to bump up against the changes I'm making in nova for get-me-a-network in Newton (which is another priority). So if vlan-aware-vms changes how nova allocates/binds ports, that's going to be dependent on this also, and will have to be worked into the Ocata release from Nova's POV. [1] https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html#os-vif-integration [2] http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/prep-for-network-aware-scheduling.html -- Thanks, Matt Riedemann
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
I know I've been pretty quiet since I started this thread. Y'all have been doing so well, I've just been reading the thread every day and enjoying it. I thought I'd top post here to kind of summarize. I see wisdom in the strategy suggested by Sean Mooney to make a very minimal change to os-vif to merely create a bridge if it doesn't exist already and otherwise plug as normal. I agree that this is the strategy that we should take and I think there is a lot of support for it in this thread. I'm assuming at this point that this is the way we're going to move forward unless we hear something soon. There were a few side conversations about deprecating veth, linux bridge topics, and possibly something else. I think those are their own topics and we don't necessarily need to wrap anything up on those at the moment. Thank you for all of your thoughts. Carl
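The minimal os-vif change Carl summarizes can be sketched as an idempotent plug: create the trunk bridge only when it does not already exist, then plug the VIF into it exactly as a normal port. The --may-exist flag is real ovs-vsctl syntax; the bridge/port names and the injected `vsctl` callable are assumptions for illustration:

```python
def plug_trunk_vif(vsctl, bridge, port):
    # no-op if the per-trunk bridge is already there
    vsctl('--may-exist', 'add-br', bridge)
    # then plug as normal: attach the VM's port to that bridge
    vsctl('--may-exist', 'add-port', bridge, port)

# Record the ovs-vsctl arguments instead of shelling out:
calls = []
plug_trunk_vif(lambda *args: calls.append(args), 'tbr-1234', 'tap-abcd')
```

Because both commands are idempotent, re-plugging after an agent restart is harmless, which is what makes this such a small delta over the existing plug path.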
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
>I wouldn't say linux bridges are totally outside of its domain because it relies on them for security groups. It relies on a side effect of their existence - iptables rules being applied to the veth interface. It does nothing to the actual linux bridge itself. If there was a way to plug a veth directly between the VM and OVS and have iptables be applied, there would be *no* code changes on the OVS agent because it doesn't do anything with the bridge. >then use the equivalent of "ovs-vsctl iface-to-br <iface name>" to get the name of the bridge. So then we have to have logic to guess which bridges connected by patch ports 'belong' to trunk ports because we weren't explicit anywhere. On Wed, Jun 15, 2016 at 11:01 AM, Peters, Rawlin wrote: > On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) wrote: > > >which generates an arbitrary name > > > > I'm not a fan of this approach because it requires coordinated > assumptions. > > With the OVS hybrid plug strategy we have to make guesses on the agent > side > > about the presence of bridges with specific names that we never > explicitly > > requested and that we were never explicitly told about. So we end up > with code > > like [1] that is looking for a particular end of a veth pair it just > hopes is > > there so the rules have an effect. > > I don't think this should be viewed as a downside of Strategy 1 because, at > least when we use patch port pairs, we can easily get the peer name from > the > port on br-int, then use the equivalent of "ovs-vsctl iface-to-br <iface name>" > to get the name of the bridge. If we allow supporting veth pairs to > implement > the subports, then getting the arbitrary trunk bridge/veth names isn't as > trivial. > > This also brings up the question: do we even need to support veth pairs > over > patch port pairs anymore? Are there any distros out there that support > openstack but not OVS patch ports? 
> > > > > >it seems that the LinuxBridge implementation can simply use an L2 agent > > >extension for creating the vlan interfaces for the subports > > > > LinuxBridge implementation is the same regardless of the strategy for > OVS. The > > whole reason we have to come up with these alternative approaches for > OVS is > > because we can't use the obvious architecture of letting it plug into the > > integration bridge due to VLANs already being used for network > isolation. I'm > > not sure pushing complexity out to os-vif to deal with this is a great > > long-term strategy. > > The complexity we'd be pushing out to os-vif is not much worse than the > current > complexity of the hybrid_ovs strategy already in place today. > > > > > >Also, we didn’t make the OVS agent monitor for new linux bridges in the > > >hybrid_ovs strategy so that Neutron could be responsible for creating > the veth > > >pair. > > > > Linux Bridges are outside of the domain of OVS and even its agent. The > L2 agent > > doesn't actually do anything with the bridge itself, it just needs a veth > > device it can put iptables rules on. That's in contrast to these new OVS > > bridges that we will be managing rules for, creating additional patch > ports, > > etc. > > I wouldn't say linux bridges are totally outside of its domain because it > relies > on them for security groups. Rather than relying on an arbitrary naming > convention between Neutron and Nova, we could've implemented monitoring > for new > linux bridges to create veth pairs and firewall rules on. I'm glad we > didn't, > because that logic is specific to that particular firewall driver, similar > to > how this trunk bridge monitoring would be specific to only vlan-aware-vms. > I > think the logic lives best within an L2 agent extension, outside of the > core > of the OVS agent. > > > > > >Why shouldn't we use the tools that are already available to us? > > > > Because we're trying to build a house and all we have are paint brushes. 
> :) > > To me it seems like we already have a house that just needs a little paint > :) > > > > > > 1. > > > https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923
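One way to avoid the guessing Kevin objects to is to tag the trunk bridge with an OVSDB external-id at creation time and then look it up explicitly, rather than inferring ownership from names. The external_ids column is a real OVSDB feature; the key name "trunk-id", the sample output, and the injected runner are assumptions for illustration:

```python
def find_trunk_bridge(run, trunk_port_id):
    """Return the name of the bridge tagged with the given trunk ID."""
    out = run(['ovs-vsctl', '--columns=name', 'find', 'Bridge',
               'external-ids:trunk-id=%s' % trunk_port_id])
    # ovs-vsctl "find" prints e.g. 'name  : "tbr-1234"';
    # take the value after the colon and drop the quotes
    return out.split(':', 1)[1].strip().strip('"')

# A fake runner returning a plausible ovs-vsctl "find" output:
fake_out = 'name                : "tbr-1234"'
```

With the tag set when os-vif creates the bridge, the agent never needs to decode a naming convention: the ownership relation lives in the database both sides already share.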
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Wednesday, June 15, 2016 12:45 PM, Mooney, Sean K [sean.k.moo...@intel.com] wrote: > > On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) > > wrote: > > > >which generates an arbitrary name > > > > > > I'm not a fan of this approach because it requires coordinated > > assumptions. > > > With the OVS hybrid plug strategy we have to make guesses on the > > > agent side about the presence of bridges with specific names that we > > > never explicitly requested and that we were never explicitly told > > > about. So we end up with code like [1] that is looking for a > > > particular end of a veth pair it just hopes is there so the rules have an > effect. > [Mooney, Sean K] I really would like to avoid encoding knowledge to > generate the names the same way in both neutron and os-vif/nova or having > any other special casing to figure out the bridge or interface names. > > > > > I don't think this should be viewed as a downside of Strategy 1 > > because, at least when we use patch port pairs, we can easily get the > > peer name from the port on br-int, then use the equivalent of > > "ovs-vsctl iface-to-br <iface name>" > > to get the name of the bridge. If we allow supporting veth pairs to > > implement the subports, then getting the arbitrary trunk bridge/veth > > names isn't as trivial. > > > > This also brings up the question: do we even need to support veth > > pairs over patch port pairs anymore? Are there any distros out there > > that support openstack but not OVS patch ports? > [Mooney, Sean K] That is a separate discussion. In general I'm in favor of > deprecating support for veth interconnect with OVS and removing it in Ocata. 
> I believe it was originally added in Juno for CentOS and SUSE, as they did not > support OVS 2.0 or their kernel OVS module did not support patch ports. > As far as I'm aware there is no major linux os version that does not have patch > support in OVS and also meets the minimum python version of 2.7 required > by OpenStack, so this functionality could safely be removed. > Ok, we should follow up on this to have it deprecated for removal in Ocata. > > > > > > > > >it seems that the LinuxBridge implementation can simply use an L2 > > > >agent extension for creating the vlan interfaces for the subports > > > > > > LinuxBridge implementation is the same regardless of the strategy > > > for OVS. The whole reason we have to come up with these alternative > > > approaches for OVS is because we can't use the obvious architecture > > > of letting it plug into the integration bridge due to VLANs already > > > being used for network isolation. I'm not sure pushing complexity > > > out to os-vif to deal with this is a great long-term strategy. > > > > The complexity we'd be pushing out to os-vif is not much worse than > > the current complexity of the hybrid_ovs strategy already in place today. > [Mooney, Sean K] I don't think strategy 1 is the correct course of action long-term with the trunk bridge approach. I honestly think that the patch port > creation should be the responsibility of the ovs agent alone. > > I think the DRY principle applies in this respect also. The ovs agent will be > required to add or remove patch ports after the vm is booted if subports are > added/removed from the trunk port. I don't think it makes sense to write the > code to do that both in the ovs agent and separately in os-vif. > > Having os-vif simply create the bridge if it does not exist and add the port to > it is a much simpler solution in that respect as you can reuse the patch port > code that is already in neutron and not duplicate it in os-vif. 
> https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L368-L371 I don't think we'd be in too much danger in terms of DRY for creating a patch port pair in os-vif, because it already has [1] from Nova which is just some basic wrapping around a shell command. [1] could be extended to allow creating an OVS patch port rather than just a regular port, and I think we can rely on the shell command to not produce any bugs that would need to be fixed in both places. [1] https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/linux_net.py#L49-L60 > > > > > > > > > > >Also, we didn’t make the OVS agent monitor for new linux bridges in > > > >the hybrid_ovs strategy so that Neutron could be responsible for > > > >creating the veth pair. > > > > > > Linux Bridges are outside of the domain of OVS and even its agent. The > > > L2 agent doesn't actually do anything with the bridge itself, it just > > > needs a veth device it can put iptables rules on. That's in contrast > > > to these new OVS bridges that we will be managing rules for, creating > > > additional patch ports, etc. > > > > I wouldn't say linux bridges are totally outside of its domain because > > it relies on them for security groups. Rather than relying on an > > arbitrary naming convention between Neutron and Nova, we could've > > implemented monitoring
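A hedged sketch of what extending the linux_net-style helper Rawlin cites for patch ports might look like. The function name, the port names, and the injected `execute` callable are hypothetical; the ovs-vsctl syntax (add-port plus type=patch and options:peer) is the standard way to create a patch port pair:

```python
def create_patch_port_pair(execute, bridge_a, port_a, bridge_b, port_b):
    """Create a pair of OVS patch ports linking two bridges."""
    for bridge, port, peer in ((bridge_a, port_a, port_b),
                               (bridge_b, port_b, port_a)):
        execute('ovs-vsctl', '--may-exist', 'add-port', bridge, port,
                '--', 'set', 'Interface', port,
                'type=patch', 'options:peer=%s' % peer)

# Record the commands instead of shelling out:
calls = []
create_patch_port_pair(lambda *args: calls.append(args),
                       'br-int', 'tpi-1234', 'tbr-1234', 'tpt-1234')
```

Each side of the pair is a single ovs-vsctl invocation, so the wrapping really is as thin as the existing linux_net helpers.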
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
> -----Original Message----- > From: Peters, Rawlin [mailto:rawlin.pet...@hpe.com] > Sent: Wednesday, June 15, 2016 7:02 PM > To: Kevin Benton <ke...@benton.pub> > Cc: OpenStack Development Mailing List (not for usage questions) > <openstack-dev@lists.openstack.org> > Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability > for wiring trunk ports > > On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) > wrote: > > >which generates an arbitrary name > > > > I'm not a fan of this approach because it requires coordinated > assumptions. > > With the OVS hybrid plug strategy we have to make guesses on the agent > > side about the presence of bridges with specific names that we never > > explicitly requested and that we were never explicitly told about. So > > we end up with code like [1] that is looking for a particular end of a > > veth pair it just hopes is there so the rules have an effect. [Mooney, Sean K] I really would like to avoid encoding knowledge to generate the names the same way in both neutron and os-vif/nova or having any other special casing to figure out the bridge or interface names. > > I don't think this should be viewed as a downside of Strategy 1 because, > at least when we use patch port pairs, we can easily get the peer name > from the port on br-int, then use the equivalent of "ovs-vsctl iface-to-br <iface name>" > to get the name of the bridge. If we allow supporting veth pairs to > implement the subports, then getting the arbitrary trunk bridge/veth > names isn't as trivial. > > This also brings up the question: do we even need to support veth pairs > over patch port pairs anymore? Are there any distros out there that > support openstack but not OVS patch ports? [Mooney, Sean K] That is a separate discussion. In general I'm in favor of deprecating support for veth interconnect with OVS and removing it in Ocata. 
I believe it was originally added in Juno for CentOS and SUSE, as they did not support OVS 2.0 or their kernel OVS module did not support patch ports. As far as I'm aware there is no major linux os version that does not have patch support in OVS and also meets the minimum python version of 2.7 required by OpenStack, so this functionality could safely be removed. > > > > > >it seems that the LinuxBridge implementation can simply use an L2 > > >agent extension for creating the vlan interfaces for the subports > > > > LinuxBridge implementation is the same regardless of the strategy for > > OVS. The whole reason we have to come up with these alternative > > approaches for OVS is because we can't use the obvious architecture of > > letting it plug into the integration bridge due to VLANs already being > > used for network isolation. I'm not sure pushing complexity out to > > os-vif to deal with this is a great long-term strategy. > > The complexity we'd be pushing out to os-vif is not much worse than the > current complexity of the hybrid_ovs strategy already in place today. [Mooney, Sean K] I don't think strategy 1 is the correct course of action long-term with the trunk bridge approach. I honestly think that the patch port creation should be the responsibility of the ovs agent alone. I think the DRY principle applies in this respect also. The ovs agent will be required to add or remove patch ports after the vm is booted if subports are added/removed from the trunk port. I don't think it makes sense to write the code to do that both in the ovs agent and separately in os-vif. Having os-vif simply create the bridge if it does not exist and add the port to it is a much simpler solution in that respect as you can reuse the patch port code that is already in neutron and not duplicate it in os-vif. 
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L368-L371 > > > > > >Also, we didn’t make the OVS agent monitor for new linux bridges in > > >the hybrid_ovs strategy so that Neutron could be responsible for > > >creating the veth pair. > > > > Linux Bridges are outside of the domain of OVS and even its agent. The > > L2 agent doesn't actually do anything with the bridge itself, it just > > needs a veth device it can put iptables rules on. That's in contrast > > to these new OVS bridges that we will be managing rules for, creating > > additional patch ports, etc. > > I wouldn't say linux bridges are totally outside of its domain because > it relies on them for security groups. Rather than relying on an > arbitrary naming convention between Neutron and Nova, we could've > implemented monitoring for new linux bridges to create veth pairs and > firewall rules on. I'm glad we didn't, because that logic is specific to >
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Wed, Jun 15, 2016 at 2:01 PM, Peters, Rawlinwrote: > On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) wrote: >> >which generates an arbitrary name >> >> I'm not a fan of this approach because it requires coordinated assumptions. >> With the OVS hybrid plug strategy we have to make guesses on the agent side >> about the presence of bridges with specific names that we never explicitly >> requested and that we were never explicitly told about. So we end up with >> code >> like [1] that is looking for a particular end of a veth pair it just hopes is >> there so the rules have an effect. > > I don't think this should be viewed as a downside of Strategy 1 because, at > least when we use patch port pairs, we can easily get the peer name from the > port on br-int, then use the equivalent of "ovs-vsctl iface-to-br " > to get the name of the bridge. If we allow supporting veth pairs to implement > the subports, then getting the arbitrary trunk bridge/veth names isn't as > trivial. > > This also brings up the question: do we even need to support veth pairs over > patch port pairs anymore? Are there any distros out there that support > openstack but not OVS patch ports? I really doubt it. This stopped being an issue in Fedora/CentOS/RHEL like 18~ months ago. > >> >> >it seems that the LinuxBridge implementation can simply use an L2 agent >> >extension for creating the vlan interfaces for the subports >> >> LinuxBridge implementation is the same regardless of the strategy for OVS. >> The >> whole reason we have to come up with these alternative approaches for OVS is >> because we can't use the obvious architecture of letting it plug into the >> integration bridge due to VLANs already being used for network isolation. I'm >> not sure pushing complexity out to os-vif to deal with this is a great >> long-term strategy. 
> > The complexity we'd be pushing out to os-vif is not much worse than the > current > complexity of the hybrid_ovs strategy already in place today. > >> >> >Also, we didn’t make the OVS agent monitor for new linux bridges in the >> >hybrid_ovs strategy so that Neutron could be responsible for creating the >> >veth >> >pair. >> >> Linux Bridges are outside of the domain of OVS and even its agent. The L2 >> agent >> doesn't actually do anything with the bridge itself, it just needs a veth >> device it can put iptables rules on. That's in contrast to these new OVS >> bridges that we will be managing rules for, creating additional patch ports, >> etc. > > I wouldn't say linux bridges are totally outside of its domain because it > relies > on them for security groups. Rather than relying on an arbitrary naming > convention between Neutron and Nova, we could've implemented monitoring for > new > linux bridges to create veth pairs and firewall rules on. I'm glad we didn't, > because that logic is specific to that particular firewall driver, similar to > how this trunk bridge monitoring would be specific to only vlan-aware-vms. I > think the logic lives best within an L2 agent extension, outside of the core > of the OVS agent. > >> >> >Why shouldn't we use the tools that are already available to us? >> >> Because we're trying to build a house and all we have are paint brushes. :) > > To me it seems like we already have a house that just needs a little paint :) > >> >> >> 1. 
>> https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923 > __ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) wrote: > >which generates an arbitrary name > > I'm not a fan of this approach because it requires coordinated assumptions. > With the OVS hybrid plug strategy we have to make guesses on the agent side > about the presence of bridges with specific names that we never explicitly > requested and that we were never explicitly told about. So we end up with code > like [1] that is looking for a particular end of a veth pair it just hopes is > there so the rules have an effect. I don't think this should be viewed as a downside of Strategy 1 because, at least when we use patch port pairs, we can easily get the peer name from the port on br-int, then use the equivalent of "ovs-vsctl iface-to-br " to get the name of the bridge. If we allow supporting veth pairs to implement the subports, then getting the arbitrary trunk bridge/veth names isn't as trivial. This also brings up the question: do we even need to support veth pairs over patch port pairs anymore? Are there any distros out there that support openstack but not OVS patch ports? > > >it seems that the LinuxBridge implementation can simply use an L2 agent > >extension for creating the vlan interfaces for the subports > > LinuxBridge implementation is the same regardless of the strategy for OVS. The > whole reason we have to come up with these alternative approaches for OVS is > because we can't use the obvious architecture of letting it plug into the > integration bridge due to VLANs already being used for network isolation. I'm > not sure pushing complexity out to os-vif to deal with this is a great > long-term strategy. The complexity we'd be pushing out to os-vif is not much worse than the current complexity of the hybrid_ovs strategy already in place today. > > >Also, we didn’t make the OVS agent monitor for new linux bridges in the > >hybrid_ovs strategy so that Neutron could be responsible for creating the > >veth > >pair. 
> > Linux Bridges are outside of the domain of OVS and even its agent. The L2 > agent > doesn't actually do anything with the bridge itself, it just needs a veth > device it can put iptables rules on. That's in contrast to these new OVS > bridges that we will be managing rules for, creating additional patch ports, > etc. I wouldn't say linux bridges are totally outside of its domain because it relies on them for security groups. Rather than relying on an arbitrary naming convention between Neutron and Nova, we could've implemented monitoring for new linux bridges to create veth pairs and firewall rules on. I'm glad we didn't, because that logic is specific to that particular firewall driver, similar to how this trunk bridge monitoring would be specific to only vlan-aware-vms. I think the logic lives best within an L2 agent extension, outside of the core of the OVS agent. > > >Why shouldn't we use the tools that are already available to us? > > Because we're trying to build a house and all we have are paint brushes. :) To me it seems like we already have a house that just needs a little paint :) > > > 1. > https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923 __ OpenStack Development Mailing List (not for usage questions) Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
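The peer-name lookup Rawlin describes above can be sketched as follows. This is an illustrative helper, not actual Neutron code: given the br-int end of a patch port pair, read its `options:peer` and then ask OVS which bridge the peer belongs to (the `ovs-vsctl iface-to-br` equivalent). The function and wrapper names are assumptions, and it only works for patch ports, not veths, which is exactly the asymmetry discussed above.

```python
import subprocess

def _ovs_vsctl(*args):
    # Thin wrapper around the ovs-vsctl CLI; strips whitespace and the
    # quotes OVSDB puts around string values.
    out = subprocess.check_output(("ovs-vsctl",) + args)
    return out.decode().strip().strip('"')

def trunk_bridge_for(patch_port_on_br_int, run=_ovs_vsctl):
    # 1. Read the configured peer of the patch port plugged into br-int.
    peer = run("get", "Interface", patch_port_on_br_int, "options:peer")
    # 2. Ask OVS which bridge that peer interface belongs to -- the
    #    equivalent of "ovs-vsctl iface-to-br <iface>".
    return run("iface-to-br", peer)
```

The `run` parameter is only there so the command construction can be exercised without a live OVS instance.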
>which generates an arbitrary name I'm not a fan of this approach because it requires coordinated assumptions. With the OVS hybrid plug strategy we have to make guesses on the agent side about the presence of bridges with specific names that we never explicitly requested and that we were never explicitly told about. So we end up with code like [1] that is looking for a particular end of a veth pair it just hopes is there so the rules have an effect. >it seems that the LinuxBridge implementation can simply use an L2 agent extension for creating the vlan interfaces for the subports LinuxBridge implementation is the same regardless of the strategy for OVS. The whole reason we have to come up with these alternative approaches for OVS is because we can't use the obvious architecture of letting it plug into the integration bridge due to VLANs already being used for network isolation. I'm not sure pushing complexity out to os-vif to deal with this is a great long-term strategy. >Also, we didn’t make the OVS agent monitor for new linux bridges in the hybrid_ovs strategy so that Neutron could be responsible for creating the veth pair. Linux Bridges are outside of the domain of OVS and even its agent. The L2 agent doesn't actually do anything with the bridge itself, it just needs a veth device it can put iptables rules on. That's in contrast to these new OVS bridges that we will be managing rules for, creating additional patch ports, etc. >Why shouldn't we use the tools that are already available to us? Because we're trying to build a house and all we have are paint brushes. :) 1. https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923 On Tue, Jun 14, 2016 at 9:49 AM, Peters, Rawlinwrote: > On Tuesday, June 14, 2016 3:43 AM, Daniel P. Berrange (berra...@redhat.com) > wrote: > > > > On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote: > > > In strategy 2 we just pass 1 bridge name to Nova. 
That's the one that > > > it ensures is created and plumbs the VM to. Since it's not responsible > > > for patch ports it doesn't need to know anything about the other > bridge. > > > > Ok, so we're already passing that bridge name - all we need change is > make > > sure it is actually created if it doesn't already exist ? If so that > sounds simple > > enough to add to os-vif - we already have exactly the same logic for the > > linux_bridge plugin > > Neutron doesn't actually pass the bridge name in the vif_details today, > but Nova will use that bridge rather than br-int if it's passed in the > vif_details. > > In terms of strategy 1, I was still only envisioning one bridge name > getting passed in the vif_details (br-int). The "plug" action is only a > variation of the hybrid_ovs strategy I mentioned earlier, which generates > an arbitrary name for the linux bridge, uses that bridge in the instance's > libvirt XML config file, then creates a veth pair between the linux bridge > and br-int. Like hybrid_ovs, the only bridge Nova/os-vif needs to know > about is br-int for Strategy 1. > > In terms of architecture, we get KISS with Strategy 1 (W.R.T. the OVS > agent, which is the most complex piece of this IMO). Using an L2 agent > extension, we will also get DRY as well because it seems that the > LinuxBridge implementation can simply use an L2 agent extension for > creating the vlan interfaces for the subports. Similar to how QoS has > different drivers for its L2 agent extension, we could have different > drivers for OVS and LinuxBridge within the 'trunk' L2 agent extension. Each > driver will want to make use of the same RPC calls/push mechanisms for > subport creation/deletion. > > Also, we didn’t make the OVS agent monitor for new linux bridges in the > hybrid_ovs strategy so that Neutron could be responsible for creating the > veth pair. Was that a mistake or just an instance of KISS? Why shouldn't we > use the tools that are already available to us? 
> > Regards, > Rawlin
On Tuesday, June 14, 2016 3:43 AM, Daniel P. Berrange (berra...@redhat.com) wrote: > > On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote: > > In strategy 2 we just pass 1 bridge name to Nova. That's the one that > > it ensures is created and plumbs the VM to. Since it's not responsible > > for patch ports it doesn't need to know anything about the other bridge. > > Ok, so we're already passing that bridge name - all we need change is make > sure it is actually created if it doesn't already exist ? If so that sounds > simple > enough to add to os-vif - we already have exactly the same logic for the > linux_bridge plugin Neutron doesn't actually pass the bridge name in the vif_details today, but Nova will use that bridge rather than br-int if it's passed in the vif_details. In terms of strategy 1, I was still only envisioning one bridge name getting passed in the vif_details (br-int). The "plug" action is only a variation of the hybrid_ovs strategy I mentioned earlier, which generates an arbitrary name for the linux bridge, uses that bridge in the instance's libvirt XML config file, then creates a veth pair between the linux bridge and br-int. Like hybrid_ovs, the only bridge Nova/os-vif needs to know about is br-int for Strategy 1. In terms of architecture, we get KISS with Strategy 1 (W.R.T. the OVS agent, which is the most complex piece of this IMO). Using an L2 agent extension, we will also get DRY as well because it seems that the LinuxBridge implementation can simply use an L2 agent extension for creating the vlan interfaces for the subports. Similar to how QoS has different drivers for its L2 agent extension, we could have different drivers for OVS and LinuxBridge within the 'trunk' L2 agent extension. Each driver will want to make use of the same RPC calls/push mechanisms for subport creation/deletion. 
Also, we didn’t make the OVS agent monitor for new linux bridges in the hybrid_ovs strategy so that Neutron could be responsible for creating the veth pair. Was that a mistake or just an instance of KISS? Why shouldn't we use the tools that are already available to us? Regards, Rawlin
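The "arbitrary name" generation in the hybrid_ovs strategy referred to above can be made concrete with a small sketch: all three device names are derived from the Neutron port ID with the qbr/qvb/qvo prefixes, truncated to fit the Linux interface-name limit. This is a hedged illustration of the convention, not Nova's actual implementation; the helper name and return shape are assumptions.

```python
NIC_NAME_LEN = 14  # Linux interface names are capped at IFNAMSIZ

def hybrid_plug_names(port_id):
    # All three devices share a truncated port-ID suffix so the agent can
    # correlate them later; only the 3-char prefix differs.
    suffix = port_id[:NIC_NAME_LEN - 3]
    return {
        "linux_bridge": "qbr" + suffix,     # bridge carrying iptables rules
        "veth_bridge_end": "qvb" + suffix,  # veth end attached to that bridge
        "veth_ovs_end": "qvo" + suffix,     # veth end plugged into br-int
    }
```

This shared-suffix convention is exactly the "coordinated assumption" Kevin objects to: the agent side has to re-derive the same names from the port ID rather than being told them.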
On 06/14/2016 11:43 AM, Daniel P. Berrange wrote: Ok, so we're already passing that bridge name - all we need change is make sure it is actually created if it doesn't already exist ? If so that sounds simple enough to add to os-vif - we already have exactly the same logic for the linux_bridge plugin That's exactly what I wanted to ask, thanks Daniel! Note that in the devref [1] about the OVS agent implementation for trunk ports we were already assuming strategy 2. I am glad to see we are on the same page! cheers, Rossella [1] https://review.openstack.org/#/c/318317
Well in terms of the ovs plugin change for strategy 2, that requires a single function call here: https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/ovs.py#L109 and here: https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/ovs.py#L84 and one new function in https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/linux_net.py with unit tests it is probably <100 lines of code. For strategy 1 we would need to do a little more work as we would have to pass two bridges, but as you said creating the bridge if it does not exist is needed in either case. From: Kevin Benton [mailto:ke...@benton.pub] Sent: Tuesday, June 14, 2016 10:49 AM To: Daniel P. Berrange <berra...@redhat.com> Cc: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports Yep, and both strategies depend on that "create if not exists" logic so it makes sense to at least get that implemented while we continue to argue about which strategy to use. On Jun 14, 2016 02:43, "Daniel P. Berrange" <berra...@redhat.com> wrote: On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote: > In strategy 2 we just pass 1 bridge name to Nova. That's the one that it > ensures is created and plumbs the VM to. Since it's not responsible for > patch ports it doesn't need to know anything about the other bridge. Ok, so we're already passing that bridge name - all we need change is make sure it is actually created if it doesn't already exist ?
If so that sounds simple enough to add to os-vif - we already have exactly the same logic for the linux_bridge plugin Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
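The "create the bridge if it does not exist" step that both strategies depend on really can be tiny, as Sean suggests above. A hedged sketch of what such a helper in vif_plug_ovs/linux_net.py could look like (the function name is an assumption; `--may-exist` is the ovs-vsctl flag that makes bridge creation idempotent):

```python
import subprocess

def ensure_ovs_bridge(bridge, run=subprocess.check_call):
    # Idempotent bridge creation: with --may-exist, an already-present
    # bridge is a no-op instead of an error.
    cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", bridge]
    run(cmd)
    return cmd
```

The injectable `run` callable keeps the command construction testable without a live Open vSwitch.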
Yep, and both strategies depend on that "create if not exists" logic so it makes sense to at least get that implemented while we continue to argue about which strategy to use. On Jun 14, 2016 02:43, "Daniel P. Berrange" wrote: > On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote: > > In strategy 2 we just pass 1 bridge name to Nova. That's the one that it > > ensures is created and plumbs the VM to. Since it's not responsible for > > patch ports it doesn't need to know anything about the other bridge. > > Ok, so we're already passing that bridge name - all we need change is > make sure it is actually created if it doesn't already exist ? If so > that sounds simple enough to add to os-vif - we already have exactly > the same logic for the linux_bridge plugin > > > Regards, > Daniel
On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote: > In strategy 2 we just pass 1 bridge name to Nova. That's the one that it > ensures is created and plumbs the VM to. Since it's not responsible for > patch ports it doesn't need to know anything about the other bridge. Ok, so we're already passing that bridge name - all we need change is make sure it is actually created if it doesn't already exist ? If so that sounds simple enough to add to os-vif - we already have exactly the same logic for the linux_bridge plugin Regards, Daniel
In strategy 2 we just pass 1 bridge name to Nova. That's the one that it ensures is created and plumbs the VM to. Since it's not responsible for patch ports it doesn't need to know anything about the other bridge. On Tue, Jun 14, 2016 at 2:30 AM, Daniel P. Berrange wrote: > On Tue, Jun 14, 2016 at 02:10:52AM -0700, Kevin Benton wrote: > > Strategy 1 is being pitched to make it easier to implement with the > current > > internals of the Neutron OVS agent (using integration bridge plugging > > events). I'm not sure that's better architecturally long term because the > > OVS agent has to have logic to wire up patch ports for the sub-interfaces > > anyway, so having the logic to make it wire up patch port for the parent > > interface is not out of place. > > > > Also consider that we will now have to tell os-vif two bridges to use if > we > > go with strategy 1. One bridge to create and attach the VM to, and > another > > for the other half of the patch port. This means that we are going to > have > > to leak more details of what Neutron is doing into the VIF details of the > > neutron port data model and relay that to Nova... > > It sounds like strategy 2 also requires you to pass a second bridge > name to nova/os-vif, unless I'm mis-understanding the description > below. > > > > On Tue, Jun 14, 2016 at 1:29 AM, Daniel P. Berrange > > > wrote: > > > > > On Mon, Jun 13, 2016 at 11:35:17PM +, Peters, Rawlin wrote: > > > > That said, there are currently a couple of vif-plugging strategies > > > > we could go with for wiring trunk ports for OVS, each of them > > > > requiring varying levels of os-vif augmentation: > > > > > > > > Strategy 1) When Nova is plugging a trunk port, it creates the OVS > > > > trunk bridge, attaches the tap to it, and creates one patch port > > > > pair from the trunk bridge to br-int. 
> > > > > > > > Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this > > > > bridge name to create the OVS trunk bridge and attach the tap to it > > > > (no patch port pair plugging into br-int). > > > > > > [snip] > > > > > > > If neither of these strategies would be in danger of not making it > > > > into the Newton release, then I think we should definitely opt for > > > > Strategy 1 because it leads to a simpler overall solution. If only > > > > Strategy 2 is feasible enough to make it into os-vif for Newton, > > > > then we need to know ASAP so that we can start implementing the > > > > required functionality for the OVS agent to monitor for dynamic trunk > > > > bridge creation/deletion. > > > > > > IMHO the answer should always be to go for the right long term > > > architectural > > > solution, not take short cuts just to meet some arbitrary deadline, > because > > > that will compromise the code over the long term. From what you are > saying > > > it sounds like strategy 1 is the optimal long term solution, so that > should > > > be where effort is focused regardless. 
> > > Regards, > > > Daniel > > Regards, > Daniel
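The vif_details mechanism this subthread keeps returning to can be sketched in a few lines. The `bridge_name` key is what lets Neutron hand Nova a bridge other than br-int; the dictionary below is an assumed example payload for a trunk parent port, not something Neutron emits today (the thread notes it does not pass the bridge name yet), and the bridge value is illustrative.

```python
# Hypothetical binding:vif_details payload for a trunk parent port.
# "bridge_name" is the key Nova honours when choosing the plug target.
vif_details = {
    "port_filter": True,
    "ovs_hybrid_plug": False,
    "bridge_name": "tbr-a1b2c3d4",  # per-trunk bridge instead of br-int
}

# Nova-side selection, mirroring the fallback to the integration bridge
# when no explicit bridge is supplied.
bridge = vif_details.get("bridge_name") or "br-int"
```

With strategy 2, only this one name crosses the Neutron/Nova boundary; with strategy 1, a second bridge (or the patch port names) would also have to leak into vif_details, which is Kevin's objection above.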
On Tue, Jun 14, 2016 at 02:10:52AM -0700, Kevin Benton wrote: > Strategy 1 is being pitched to make it easier to implement with the current > internals of the Neutron OVS agent (using integration bridge plugging > events). I'm not sure that's better architecturally long term because the > OVS agent has to have logic to wire up patch ports for the sub-interfaces > anyway, so having the logic to make it wire up patch port for the parent > interface is not out of place. > > Also consider that we will now have to tell os-vif two bridges to use if we > go with strategy 1. One bridge to create and attach the VM to, and another > for the other half of the patch port. This means that we are going to have > to leak more details of what Neutron is doing into the VIF details of the > neutron port data model and relay that to Nova... It sounds like strategy 2 also requires you to pass a second bridge name to nova/os-vif, unless I'm mis-understanding the description below. > On Tue, Jun 14, 2016 at 1:29 AM, Daniel P. Berrange> wrote: > > > On Mon, Jun 13, 2016 at 11:35:17PM +, Peters, Rawlin wrote: > > > That said, there are currently a couple of vif-plugging strategies > > > we could go with for wiring trunk ports for OVS, each of them > > > requiring varying levels of os-vif augmentation: > > > > > > Strategy 1) When Nova is plugging a trunk port, it creates the OVS > > > trunk bridge, attaches the tap to it, and creates one patch port > > > pair from the trunk bridge to br-int. > > > > > > Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this > > > bridge name to create the OVS trunk bridge and attach the tap to it > > > (no patch port pair plugging into br-int). > > > > [snip] > > > > > If neither of these strategies would be in danger of not making it > > > into the Newton release, then I think we should definitely opt for > > > Strategy 1 because it leads to a simpler overall solution. 
If only > > > Strategy 2 is feasible enough to make it into os-vif for Newton, > > > then we need to know ASAP so that we can start implementing the > > > required functionality for the OVS agent to monitor for dynamic trunk > > > bridge creation/deletion. > > > > IMHO the answer should always be to go for the right long term > > architectural > > solution, not take short cuts just to meet some arbitrary deadline, because > > that will compromise the code over the long term. From what you are saying > > it sounds like strategy 1 is the optimal long term solution, so that should > > be where effort is focused regardless. > > Regards, > > Daniel Regards, Daniel
Strategy 1 is being pitched to make it easier to implement with the current internals of the Neutron OVS agent (using integration bridge plugging events). I'm not sure that's better architecturally long term because the OVS agent has to have logic to wire up patch ports for the sub-interfaces anyway, so having the logic to make it wire up patch port for the parent interface is not out of place. Also consider that we will now have to tell os-vif two bridges to use if we go with strategy 1. One bridge to create and attach the VM to, and another for the other half of the patch port. This means that we are going to have to leak more details of what Neutron is doing into the VIF details of the neutron port data model and relay that to Nova... On Tue, Jun 14, 2016 at 1:29 AM, Daniel P. Berrangewrote: > On Mon, Jun 13, 2016 at 11:35:17PM +, Peters, Rawlin wrote: > > That said, there are currently a couple of vif-plugging strategies > > we could go with for wiring trunk ports for OVS, each of them > > requiring varying levels of os-vif augmentation: > > > > Strategy 1) When Nova is plugging a trunk port, it creates the OVS > > trunk bridge, attaches the tap to it, and creates one patch port > > pair from the trunk bridge to br-int. > > > > Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this > > bridge name to create the OVS trunk bridge and attach the tap to it > > (no patch port pair plugging into br-int). > > [snip] > > > If neither of these strategies would be in danger of not making it > > into the Newton release, then I think we should definitely opt for > > Strategy 1 because it leads to a simpler overall solution. If only > > Strategy 2 is feasible enough to make it into os-vif for Newton, > > then we need to know ASAP so that we can start implementing the > > required functionality for the OVS agent to monitor for dynamic trunk > > bridge creation/deletion. 
> > IMHO the answer should always be to go for the right long term > architectural > solution, not take short cuts just to meet some arbitrary deadline, because > that will compromise the code over the long term. From what you are saying > it sounds like strategy 1 is the optimal long term solution, so that should > be where effort is focused regardless. > > Regards, > Daniel
On Mon, Jun 13, 2016 at 11:35:17PM +, Peters, Rawlin wrote: > That said, there are currently a couple of vif-plugging strategies > we could go with for wiring trunk ports for OVS, each of them > requiring varying levels of os-vif augmentation: > > Strategy 1) When Nova is plugging a trunk port, it creates the OVS > trunk bridge, attaches the tap to it, and creates one patch port > pair from the trunk bridge to br-int. > > Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this > bridge name to create the OVS trunk bridge and attach the tap to it > (no patch port pair plugging into br-int). [snip] > If neither of these strategies would be in danger of not making it > into the Newton release, then I think we should definitely opt for > Strategy 1 because it leads to a simpler overall solution. If only > Strategy 2 is feasible enough to make it into os-vif for Newton, > then we need to know ASAP so that we can start implementing the > required functionality for the OVS agent to monitor for dynamic trunk > bridge creation/deletion. IMHO the answer should always be to go for the right long term architectural solution, not take short cuts just to meet some arbitrary deadline, because that will compromise the code over the long term. From what you are saying it sounds like strategy 1 is the optimal long term solution, so that should be where effort is focused regardless. Regards, Daniel
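The two strategies quoted above differ only in who wires the patch ports. Expressed as ovs-vsctl command sequences, the difference is a hedged sketch like the one below; the tap, bridge, and patch port names are illustrative, not names either project actually generates.

```python
def strategy_2_cmds(tap, trunk_br):
    # Strategy 2: Nova only creates the named trunk bridge and attaches
    # the tap; the OVS agent wires the patch ports to br-int later.
    return [
        ["ovs-vsctl", "--may-exist", "add-br", trunk_br],
        ["ovs-vsctl", "--may-exist", "add-port", trunk_br, tap],
    ]

def strategy_1_cmds(tap, trunk_br, int_patch="spi-x", trunk_patch="spt-x"):
    # Strategy 1: everything strategy 2 does, plus Nova itself creates a
    # patch port pair from the trunk bridge to br-int.
    return strategy_2_cmds(tap, trunk_br) + [
        ["ovs-vsctl", "--may-exist", "add-port", trunk_br, trunk_patch, "--",
         "set", "Interface", trunk_patch, "type=patch",
         "options:peer=" + int_patch],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", int_patch, "--",
         "set", "Interface", int_patch, "type=patch",
         "options:peer=" + trunk_patch],
    ]
```

The extra two commands in strategy 1 are precisely the wiring the thread argues should live in the OVS agent (which must manipulate those patch ports anyway when subports change) rather than in os-vif.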
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
That is already supported in stable/mitaka – please see https://review.openstack.org/#/c/260700/ I agree with Kevin From: Kevin Benton <ke...@benton.pub> Reply-To: OpenStack List <openstack-dev@lists.openstack.org> Date: Monday, June 13, 2016 at 11:59 PM To: OpenStack List <openstack-dev@lists.openstack.org> Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports +1. Neutron should already be able to tell Nova which bridge to use for an OVS port.[1] For the Linux bridge implementation it's a matter of creating vlan interfaces and plugging them into bridges like regular VM ports, which is all the responsibility of the L2 agent. We shouldn't need any changes from Nova or os-vif from what I can see. 1. https://github.com/openstack/nova/blob/6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b/nova/network/neutronv2/api.py#L1618 On Mon, Jun 13, 2016 at 5:26 AM, Mooney, Sean K <sean.k.moo...@intel.com> wrote: > -Original Message- > From: Daniel P. Berrange [mailto:berra...@redhat.com] > Sent: Monday, June 13, 2016 1:12 PM > To: Armando M.
<arma...@gmail.com> > Cc: Carl Baldwin <c...@ecbaldwin.net>; OpenStack Development Mailing List <openstack-dev@lists.openstack.org>; Jay Pipes <jaypi...@gmail.com>; Maxime Leroy <maxime.le...@6wind.com>; Moshe Levi <mosh...@mellanox.com>; Russell Bryant <rbry...@redhat.com>; sahid <sahid.ferdja...@redhat.com>; Mooney, Sean K <sean.k.moo...@intel.com> > Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk ports > > On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote: > > On 13 June 2016 at 10:35, Daniel P. Berrange <berra...@redhat.com> wrote: > > > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > > > Hi, > > > > > > > > You may or may not be aware of the vlan-aware-vms effort [1] in > > > > Neutron. If not, there is a spec and a fair number of patches in > > > > progress for this. Essentially, the goal is to allow a VM to > > > > connect to multiple Neutron networks by tagging traffic on a > > > > single port with VLAN tags. > > > > > > > > This effort will have some effect on vif plugging because the > > > > datapath will include some changes that will effect how vif > > > > plugging is done today. > > > > > > > > The design proposal for trunk ports with OVS adds a new bridge for > > > > each trunk port. This bridge will demux the traffic and then > > > > connect to br-int with patch ports for each of the networks. > > > > Rawlin Peters has some ideas for expanding the vif capability to > > > > include this wiring. > > > > > > > > There is also a proposal for connecting to linux bridges by using > > > > kernel vlan interfaces.
> > > > > > > > This effort is pretty important to Neutron in the Newton > > > > timeframe. I wanted to send this out to start rounding up the > > > > reviewers and other participants we need to see how we can start > > > > putting together a plan for nova integration of this feature (via > os-vif?). > > > > > > I've not taken a look at the proposal, but on the timing side of > > > things it is really way to late to start this email thread asking > > > for design input from os-vif or nova. We're way past the spec > > > proposal deadline for Nova in the Newton cycle, so nothing is going > > > to happen until the Ocata cycle no matter what Neutron want in > Newton. > > > > > > For sake of clarity, does this mean that the management of the os-vif > > project matches exactly Nova's, e.g. same deadlines and processes > > apply, even though the core team and its release model are different from Nova's?
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Monday, June 13, 2016 6:28 AM, Daniel P. Berrange wrote: > > On Mon, Jun 13, 2016 at 07:39:29AM -0400, Assaf Muller wrote: > > On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange >wrote: > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > >> Hi, > > >> > > >> You may or may not be aware of the vlan-aware-vms effort [1] in > > >> Neutron. If not, there is a spec and a fair number of patches in > > >> progress for this. Essentially, the goal is to allow a VM to > > >> connect to multiple Neutron networks by tagging traffic on a single > > >> port with VLAN tags. > > >> > > >> This effort will have some effect on vif plugging because the > > >> datapath will include some changes that will effect how vif > > >> plugging is done today. > > >> > > >> The design proposal for trunk ports with OVS adds a new bridge for > > >> each trunk port. This bridge will demux the traffic and then > > >> connect to br-int with patch ports for each of the networks. > > >> Rawlin Peters has some ideas for expanding the vif capability to > > >> include this wiring. > > >> > > >> There is also a proposal for connecting to linux bridges by using > > >> kernel vlan interfaces. > > >> > > >> This effort is pretty important to Neutron in the Newton timeframe. > > >> I wanted to send this out to start rounding up the reviewers and > > >> other participants we need to see how we can start putting together > > >> a plan for nova integration of this feature (via os-vif?). > > > > > > I've not taken a look at the proposal, but on the timing side of > > > things it is really way to late to start this email thread asking > > > for design input from os-vif or nova. We're way past the spec > > > proposal deadline for Nova in the Newton cycle, so nothing is going > > > to happen until the Ocata cycle no matter what Neutron want in > > > Newton. 
For os-vif our focus right now is exclusively on getting > > > existing functionality ported over, and integrated into Nova in > > > Newton. So again we're not really looking to spend time on further os-vif > design work right now. > > > > > > In the Ocata cycle we'll be looking to integrate os-vif into Neutron > > > to let it directly serialize VIF objects and send them over to Nova, > > > instead of using the ad-hoc port-binding dicts. From the Nova side, > > > we're not likely to want to support any new functionality that > > > affects port-binding data until after Neutron is converted to > > > os-vif. So Ocata at the earliest, but probably more like P, > > > unless the Neutron conversion to os-vif gets completed unexpectedly > quickly. > > > > In light of this feature being requested by the NFV, container and > > baremetal communities, and that Neutron's os-vif integration work > > hasn't begun, does it make sense to block Nova VIF work? Are we > > comfortable, from a wider OpenStack perspective, to wait until > > possibly the P release? I think it's our collective responsibility as > > developers to find creative ways to meet deadlines, not serializing > > work on features and letting processes block us. > > Everyone has their own personal set of features that are their personal > priority items. Nova evaluates all the competing demands and decides on > what the project's priorities are for the given cycle. For Newton Nova's > priority is to convert existing VIF functionality to use os-vif. Anything > else vif > related takes a backseat to this project priority. This formal modelling of > VIFs > and developing a plugin facility has already been strung out over at least 3 > release cycles now. We're finally in a position to get it completed, and we're > not going to divert attention away from this, to other new features requests > until its done as that'll increase the chances of it getting strung out for > yet > another release which is in no ones interests. 
I think we are all in agreement that integrating os-vif into Nova during the Newton cycle is the highest priority. The question is, once os-vif has been integrated into Nova are we going to have any problem augmenting the current os-vif OvsPlugin in order to support vlan-aware-vms in the Newton release? Based upon the current Nova integration patch [1] I believe that any vif-plugging changes required to implement vlan-aware-vms could be entirely localized to the os-vif OvsPlugin, so Nova wouldn't directly need to be involved there. That said, there are currently a couple of vif-plugging strategies we could go with for wiring trunk ports for OVS, each of them requiring varying levels of os-vif augmentation: Strategy 1) When Nova is plugging a trunk port, it creates the OVS trunk bridge, attaches the tap to it, and creates one patch port pair from the trunk bridge to br-int. Strategy 2) Neutron passes a bridge name to Nova, and Nova uses this bridge name to create the OVS trunk bridge and attach the tap to it (no patch port pair plugging into br-int). Strategy 1 requires
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
+1. Neutron should already be able to tell Nova which bridge to use for an OVS port.[1] For the Linux bridge implementation it's a matter of creating vlan interfaces and plugging them into bridges like regular VM ports, which is all the responsibility of the L2 agent. We shouldn't need any changes from Nova or os-vif from what I can see. 1. https://github.com/openstack/nova/blob/6e2e1dc912199e057e5c3a5e07d39f26cbbbdd5b/nova/network/neutronv2/api.py#L1618 On Mon, Jun 13, 2016 at 5:26 AM, Mooney, Sean Kwrote: > > > > -Original Message- > > From: Daniel P. Berrange [mailto:berra...@redhat.com] > > Sent: Monday, June 13, 2016 1:12 PM > > To: Armando M. > > Cc: Carl Baldwin ; OpenStack Development Mailing > > List ; Jay Pipes > > ; Maxime Leroy ; Moshe Levi > > ; Russell Bryant ; sahid > > ; Mooney, Sean K > > Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk > > ports > > > > On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote: > > > On 13 June 2016 at 10:35, Daniel P. Berrange > > wrote: > > > > > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > > > > Hi, > > > > > > > > > > You may or may not be aware of the vlan-aware-vms effort [1] in > > > > > Neutron. If not, there is a spec and a fair number of patches in > > > > > progress for this. Essentially, the goal is to allow a VM to > > > > > connect to multiple Neutron networks by tagging traffic on a > > > > > single port with VLAN tags. > > > > > > > > > > This effort will have some effect on vif plugging because the > > > > > datapath will include some changes that will effect how vif > > > > > plugging is done today. > > > > > > > > > > The design proposal for trunk ports with OVS adds a new bridge for > > > > > each trunk port. This bridge will demux the traffic and then > > > > > connect to br-int with patch ports for each of the networks. > > > > > Rawlin Peters has some ideas for expanding the vif capability to > > > > > include this wiring. 
> > > > > > > > > > There is also a proposal for connecting to linux bridges by using > > > > > kernel vlan interfaces. > > > > > > > > > > This effort is pretty important to Neutron in the Newton > > > > > timeframe. I wanted to send this out to start rounding up the > > > > > reviewers and other participants we need to see how we can start > > > > > putting together a plan for nova integration of this feature (via > > os-vif?). > > > > > > > > I've not taken a look at the proposal, but on the timing side of > > > > things it is really way to late to start this email thread asking > > > > for design input from os-vif or nova. We're way past the spec > > > > proposal deadline for Nova in the Newton cycle, so nothing is going > > > > to happen until the Ocata cycle no matter what Neutron want in > > Newton. > > > > > > > > > For sake of clarity, does this mean that the management of the os-vif > > > project matches exactly Nova's, e.g. same deadlines and processes > > > apply, even though the core team and its release model are different > > from Nova's? > > > I may have erroneously implied that it wasn't, also from past talks I > > > had with johnthetubaguy. > > > > No, we don't intend to force ourselves to only release at milestones > > like nova does. We'll release the os-vif library whenever there is new > > functionality in its code that we need to make available to > > nova/neutron. > > This could be as frequently as once every few weeks. > [Mooney, Sean K] > I have been tracking contributing to the vlan aware vm work in > neutron since the Vancouver summit so I am quite familiar with what would > have > to be modified to support the vlan trucking. Provided the modifications do > not > delay the conversion to os-vif in nova this cycle I would be happy to > review > and help develop the code to support this use case. 
> > In the ovs case at lease which we have been discussing here > > https://review.openstack.org/#/c/318317/4/doc/source/devref/openvswitch_agent.rst > no changes should be required for nova and all changes will be confined to > the ovs > plugin. In is essence check if bridge exists, if not create it with port > id, > Then plug as normal. > > Again though I do agree that we should focus on completing the initial > nova integration > But I don't think that mean we have to exclude other feature enhancements > as long as they > do not prevent us achieving that goal. > > Regards, > > Daniel
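Kevin's point at the top of this message, that the Linux bridge implementation only needs the L2 agent to create kernel vlan interfaces and plug them into bridges like regular VM ports, can be sketched in the same command-building style. The helper and all names are illustrative assumptions, not the agent's actual naming scheme:

```python
def linuxbridge_subport_commands(trunk_if, vlan_id, bridge):
    """Sketch of the commands an L2 agent would run to wire one subport:
    create a kernel vlan interface on the trunk port's tap, bring it up,
    and plug it into the subport network's Linux bridge."""
    vlan_if = f"{trunk_if}.{vlan_id}"
    return [
        ["ip", "link", "add", "link", trunk_if, "name", vlan_if,
         "type", "vlan", "id", str(vlan_id)],
        ["ip", "link", "set", vlan_if, "up"],
        # iproute2 equivalent of `brctl addif <bridge> <vlan_if>`
        ["ip", "link", "set", vlan_if, "master", bridge],
    ]
```

This is why the thread concludes the Linux bridge case needs nothing from Nova or os-vif: every step above is ordinary L2-agent work on interfaces that already exist after the normal plug.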
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
> -Original Message- > From: Daniel P. Berrange [mailto:berra...@redhat.com] > Sent: Monday, June 13, 2016 1:12 PM > To: Armando M.> Cc: Carl Baldwin ; OpenStack Development Mailing > List ; Jay Pipes > ; Maxime Leroy ; Moshe Levi > ; Russell Bryant ; sahid > ; Mooney, Sean K > Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk > ports > > On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote: > > On 13 June 2016 at 10:35, Daniel P. Berrange > wrote: > > > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > > > Hi, > > > > > > > > You may or may not be aware of the vlan-aware-vms effort [1] in > > > > Neutron. If not, there is a spec and a fair number of patches in > > > > progress for this. Essentially, the goal is to allow a VM to > > > > connect to multiple Neutron networks by tagging traffic on a > > > > single port with VLAN tags. > > > > > > > > This effort will have some effect on vif plugging because the > > > > datapath will include some changes that will effect how vif > > > > plugging is done today. > > > > > > > > The design proposal for trunk ports with OVS adds a new bridge for > > > > each trunk port. This bridge will demux the traffic and then > > > > connect to br-int with patch ports for each of the networks. > > > > Rawlin Peters has some ideas for expanding the vif capability to > > > > include this wiring. > > > > > > > > There is also a proposal for connecting to linux bridges by using > > > > kernel vlan interfaces. > > > > > > > > This effort is pretty important to Neutron in the Newton > > > > timeframe. I wanted to send this out to start rounding up the > > > > reviewers and other participants we need to see how we can start > > > > putting together a plan for nova integration of this feature (via > os-vif?). 
> > > > > > I've not taken a look at the proposal, but on the timing side of > > > things it is really way to late to start this email thread asking > > > for design input from os-vif or nova. We're way past the spec > > > proposal deadline for Nova in the Newton cycle, so nothing is going > > > to happen until the Ocata cycle no matter what Neutron want in > Newton. > > > > > > For sake of clarity, does this mean that the management of the os-vif > > project matches exactly Nova's, e.g. same deadlines and processes > > apply, even though the core team and its release model are different > from Nova's? > > I may have erroneously implied that it wasn't, also from past talks I > > had with johnthetubaguy. > > No, we don't intend to force ourselves to only release at milestones > like nova does. We'll release the os-vif library whenever there is new > functionality in its code that we need to make available to > nova/neutron. > This could be as frequently as once every few weeks. [Mooney, Sean K] I have been tracking and contributing to the vlan aware vm work in neutron since the Vancouver summit, so I am quite familiar with what would have to be modified to support the vlan trunking. Provided the modifications do not delay the conversion to os-vif in nova this cycle, I would be happy to review and help develop the code to support this use case. In the ovs case at least, which we have been discussing here https://review.openstack.org/#/c/318317/4/doc/source/devref/openvswitch_agent.rst no changes should be required for nova and all changes will be confined to the ovs plugin. In essence: check if the bridge exists; if not, create it with the port id, then plug as normal. Again, though, I do agree that we should focus on completing the initial nova integration, but I don't think that means we have to exclude other feature enhancements as long as they do not prevent us achieving that goal.
> > Regards, > Daniel
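Sean's "check if bridge exists, if not create it with port id, then plug as normal" flow is small enough to sketch. The helper below is a hypothetical illustration of what an os-vif OVS plugin might do; it returns the commands rather than executing them, and takes the existence check as a callable so the decision logic stands alone:

```python
def plug_trunk_vif(bridge_name, tap_name, bridge_exists):
    """Idempotent plug sketch: create the Neutron-named trunk bridge only
    if it is missing, then attach the tap as a normal port."""
    cmds = []
    if not bridge_exists(bridge_name):
        # bridge name was supplied by Neutron via port binding details
        cmds.append(["ovs-vsctl", "add-br", bridge_name])
    cmds.append(["ovs-vsctl", "--may-exist", "add-port",
                 bridge_name, tap_name])
    return cmds
```

Because Neutron chooses the bridge name, the OVS agent can later find the bridge and add the per-subport patch ports to br-int without any coordinated naming assumptions on the Nova side.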
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Mon, Jun 13, 2016 at 07:39:29AM -0400, Assaf Muller wrote: > On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange> wrote: > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > >> Hi, > >> > >> You may or may not be aware of the vlan-aware-vms effort [1] in > >> Neutron. If not, there is a spec and a fair number of patches in > >> progress for this. Essentially, the goal is to allow a VM to connect > >> to multiple Neutron networks by tagging traffic on a single port with > >> VLAN tags. > >> > >> This effort will have some effect on vif plugging because the datapath > >> will include some changes that will effect how vif plugging is done > >> today. > >> > >> The design proposal for trunk ports with OVS adds a new bridge for > >> each trunk port. This bridge will demux the traffic and then connect > >> to br-int with patch ports for each of the networks. Rawlin Peters > >> has some ideas for expanding the vif capability to include this > >> wiring. > >> > >> There is also a proposal for connecting to linux bridges by using > >> kernel vlan interfaces. > >> > >> This effort is pretty important to Neutron in the Newton timeframe. I > >> wanted to send this out to start rounding up the reviewers and other > >> participants we need to see how we can start putting together a plan > >> for nova integration of this feature (via os-vif?). > > > > I've not taken a look at the proposal, but on the timing side of things > > it is really way to late to start this email thread asking for design > > input from os-vif or nova. We're way past the spec proposal deadline > > for Nova in the Newton cycle, so nothing is going to happen until the > > Ocata cycle no matter what Neutron want in Newton. For os-vif our > > focus right now is exclusively on getting existing functionality ported > > over, and integrated into Nova in Newton. So again we're not really looking > > to spend time on further os-vif design work right now. 
> > > > In the Ocata cycle we'll be looking to integrate os-vif into Neutron to > > let it directly serialize VIF objects and send them over to Nova, instead > > of using the ad-hoc port-binding dicts. From the Nova side, we're not > > likely to want to support any new functionality that affects port-binding > > data until after Neutron is converted to os-vif. So Ocata at the earliest, > > but probably more like P, unless the Neutron conversion to os-vif gets > > completed unexpectedly quickly. > > In light of this feature being requested by the NFV, container and > baremetal communities, and that Neutron's os-vif integration work > hasn't begun, does it make sense to block Nova VIF work? Are we > comfortable, from a wider OpenStack perspective, to wait until > possibly the P release? I think it's our collective responsibility as > developers to find creative ways to meet deadlines, not serializing > work on features and letting processes block us. Everyone has their own personal set of features that are their personal priority items. Nova evaluates all the competing demands and decides on what the project's priorities are for the given cycle. For Newton Nova's priority is to convert existing VIF functionality to use os-vif. Anything else vif-related takes a backseat to this project priority. This formal modelling of VIFs and developing a plugin facility has already been strung out over at least 3 release cycles now. We're finally in a position to get it completed, and we're not going to divert attention away from this to other new feature requests until it's done, as that'll increase the chances of it getting strung out for yet another release, which is in no one's interests.
Regards, Daniel
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On 13 June 2016 at 14:11, Daniel P. Berrangewrote: > On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote: > > On 13 June 2016 at 10:35, Daniel P. Berrange > wrote: > > > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > > > Hi, > > > > > > > > You may or may not be aware of the vlan-aware-vms effort [1] in > > > > Neutron. If not, there is a spec and a fair number of patches in > > > > progress for this. Essentially, the goal is to allow a VM to connect > > > > to multiple Neutron networks by tagging traffic on a single port with > > > > VLAN tags. > > > > > > > > This effort will have some effect on vif plugging because the > datapath > > > > will include some changes that will effect how vif plugging is done > > > > today. > > > > > > > > The design proposal for trunk ports with OVS adds a new bridge for > > > > each trunk port. This bridge will demux the traffic and then connect > > > > to br-int with patch ports for each of the networks. Rawlin Peters > > > > has some ideas for expanding the vif capability to include this > > > > wiring. > > > > > > > > There is also a proposal for connecting to linux bridges by using > > > > kernel vlan interfaces. > > > > > > > > This effort is pretty important to Neutron in the Newton timeframe. > I > > > > wanted to send this out to start rounding up the reviewers and other > > > > participants we need to see how we can start putting together a plan > > > > for nova integration of this feature (via os-vif?). > > > > > > I've not taken a look at the proposal, but on the timing side of things > > > it is really way to late to start this email thread asking for design > > > input from os-vif or nova. We're way past the spec proposal deadline > > > for Nova in the Newton cycle, so nothing is going to happen until the > > > Ocata cycle no matter what Neutron want in Newton. > > > > > > For sake of clarity, does this mean that the management of the os-vif > > project matches exactly Nova's, e.g. 
same deadlines and processes apply, > > even though the core team and its release model are different from > Nova's? > > I may have erroneously implied that it wasn't, also from past talks I had > > with johnthetubaguy. > > No, we don't intend to force ourselves to only release at milestones > like nova does. We'll release the os-vif library whenever there is new > functionality in its code that we need to make available to nova/neutron. > This could be as frequently as once every few weeks. Thanks, but I could get this answer from [1]. I was asking about specs and deadlines. [1] https://governance.openstack.org/reference/projects/nova.html > Regards, > Daniel
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote: > On 13 June 2016 at 10:35, Daniel P. Berrangewrote: > > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > > Hi, > > > > > > You may or may not be aware of the vlan-aware-vms effort [1] in > > > Neutron. If not, there is a spec and a fair number of patches in > > > progress for this. Essentially, the goal is to allow a VM to connect > > > to multiple Neutron networks by tagging traffic on a single port with > > > VLAN tags. > > > > > > This effort will have some effect on vif plugging because the datapath > > > will include some changes that will effect how vif plugging is done > > > today. > > > > > > The design proposal for trunk ports with OVS adds a new bridge for > > > each trunk port. This bridge will demux the traffic and then connect > > > to br-int with patch ports for each of the networks. Rawlin Peters > > > has some ideas for expanding the vif capability to include this > > > wiring. > > > > > > There is also a proposal for connecting to linux bridges by using > > > kernel vlan interfaces. > > > > > > This effort is pretty important to Neutron in the Newton timeframe. I > > > wanted to send this out to start rounding up the reviewers and other > > > participants we need to see how we can start putting together a plan > > > for nova integration of this feature (via os-vif?). > > > > I've not taken a look at the proposal, but on the timing side of things > > it is really way to late to start this email thread asking for design > > input from os-vif or nova. We're way past the spec proposal deadline > > for Nova in the Newton cycle, so nothing is going to happen until the > > Ocata cycle no matter what Neutron want in Newton. > > > For sake of clarity, does this mean that the management of the os-vif > project matches exactly Nova's, e.g. same deadlines and processes apply, > even though the core team and its release model are different from Nova's? 
> I may have erroneously implied that it wasn't, also from past talks I had > with johnthetubaguy. No, we don't intend to force ourselves to only release at milestones like nova does. We'll release the os-vif library whenever there is new functionality in its code that we need to make available to nova/neutron. This could be as frequently as once every few weeks. Regards, Daniel
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On 13 June 2016 at 10:35, Daniel P. Berrangewrote: > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote: > > Hi, > > > > You may or may not be aware of the vlan-aware-vms effort [1] in > > Neutron. If not, there is a spec and a fair number of patches in > > progress for this. Essentially, the goal is to allow a VM to connect > > to multiple Neutron networks by tagging traffic on a single port with > > VLAN tags. > > > > This effort will have some effect on vif plugging because the datapath > > will include some changes that will effect how vif plugging is done > > today. > > > > The design proposal for trunk ports with OVS adds a new bridge for > > each trunk port. This bridge will demux the traffic and then connect > > to br-int with patch ports for each of the networks. Rawlin Peters > > has some ideas for expanding the vif capability to include this > > wiring. > > > > There is also a proposal for connecting to linux bridges by using > > kernel vlan interfaces. > > > > This effort is pretty important to Neutron in the Newton timeframe. I > > wanted to send this out to start rounding up the reviewers and other > > participants we need to see how we can start putting together a plan > > for nova integration of this feature (via os-vif?). > > I've not taken a look at the proposal, but on the timing side of things > it is really way to late to start this email thread asking for design > input from os-vif or nova. We're way past the spec proposal deadline > for Nova in the Newton cycle, so nothing is going to happen until the > Ocata cycle no matter what Neutron want in Newton. For sake of clarity, does this mean that the management of the os-vif project matches exactly Nova's, e.g. same deadlines and processes apply, even though the core team and its release model are different from Nova's? I may have erroneously implied that it wasn't, also from past talks I had with johnthetubaguy. 
Perhaps the answer to this question is clearly stated somewhere else, but I must have missed it. I want to make sure I ask explicitly now to avoid future confusion. > For os-vif our focus right now is exclusively on getting existing functionality ported > over, and integrated into Nova in Newton. So again we're not really looking > to spend time on further os-vif design work right now. > > In the Ocata cycle we'll be looking to integrate os-vif into Neutron to > let it directly serialize VIF objects and send them over to Nova, instead > of using the ad-hoc port-binding dicts. From the Nova side, we're not > likely to want to support any new functionality that affects port-binding > data until after Neutron is converted to os-vif. So Ocata at the earliest, > but probably more like P, unless the Neutron conversion to os-vif gets > completed unexpectedly quickly. > > Regards, > Daniel
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange wrote:
> On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
>> Hi,
>>
>> You may or may not be aware of the vlan-aware-vms effort [1] in Neutron. If not, there is a spec and a fair number of patches in progress for this. Essentially, the goal is to allow a VM to connect to multiple Neutron networks by tagging traffic on a single port with VLAN tags.
>>
>> This effort will have some effect on vif plugging because the datapath will include some changes that will affect how vif plugging is done today.
>>
>> The design proposal for trunk ports with OVS adds a new bridge for each trunk port. This bridge will demux the traffic and then connect to br-int with patch ports for each of the networks. Rawlin Peters has some ideas for expanding the vif capability to include this wiring.
>>
>> There is also a proposal for connecting to linux bridges by using kernel vlan interfaces.
>>
>> This effort is pretty important to Neutron in the Newton timeframe. I wanted to send this out to start rounding up the reviewers and other participants we need to see how we can start putting together a plan for nova integration of this feature (via os-vif?).
>
> I've not taken a look at the proposal, but on the timing side of things it is really way too late to start this email thread asking for design input from os-vif or nova. We're way past the spec proposal deadline for Nova in the Newton cycle, so nothing is going to happen until the Ocata cycle no matter what Neutron wants in Newton. For os-vif our focus right now is exclusively on getting existing functionality ported over, and integrated into Nova in Newton. So again we're not really looking to spend time on further os-vif design work right now.
> In the Ocata cycle we'll be looking to integrate os-vif into Neutron to let it directly serialize VIF objects and send them over to Nova, instead of using the ad-hoc port-binding dicts. From the Nova side, we're not likely to want to support any new functionality that affects port-binding data until after Neutron is converted to os-vif. So Ocata at the earliest, but probably more like P, unless the Neutron conversion to os-vif gets completed unexpectedly quickly.

Given that this feature is being requested by the NFV, container, and baremetal communities, and that Neutron's os-vif integration work hasn't even begun, does it make sense to block the Nova VIF work on it? Are we comfortable, from a wider OpenStack perspective, waiting until possibly the P release? I think it's our collective responsibility as developers to find creative ways to meet deadlines, rather than serializing work on features and letting process block us.

> Regards,
> Daniel
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> Hi,
>
> You may or may not be aware of the vlan-aware-vms effort [1] in Neutron. If not, there is a spec and a fair number of patches in progress for this. Essentially, the goal is to allow a VM to connect to multiple Neutron networks by tagging traffic on a single port with VLAN tags.
>
> This effort will have some effect on vif plugging because the datapath will include some changes that will affect how vif plugging is done today.
>
> The design proposal for trunk ports with OVS adds a new bridge for each trunk port. This bridge will demux the traffic and then connect to br-int with patch ports for each of the networks. Rawlin Peters has some ideas for expanding the vif capability to include this wiring.
>
> There is also a proposal for connecting to linux bridges by using kernel vlan interfaces.
>
> This effort is pretty important to Neutron in the Newton timeframe. I wanted to send this out to start rounding up the reviewers and other participants we need to see how we can start putting together a plan for nova integration of this feature (via os-vif?).

I've not taken a look at the proposal, but on the timing side of things it is really way too late to start this email thread asking for design input from os-vif or nova. We're way past the spec proposal deadline for Nova in the Newton cycle, so nothing is going to happen until the Ocata cycle no matter what Neutron wants in Newton. For os-vif our focus right now is exclusively on getting existing functionality ported over, and integrated into Nova in Newton. So again we're not really looking to spend time on further os-vif design work right now. In the Ocata cycle we'll be looking to integrate os-vif into Neutron to let it directly serialize VIF objects and send them over to Nova, instead of using the ad-hoc port-binding dicts.
From the Nova side, we're not likely to want to support any new functionality that affects port-binding data until after Neutron is converted to os-vif. So Ocata at the earliest, but probably more like P, unless the Neutron conversion to os-vif gets completed unexpectedly quickly.

Regards,
Daniel
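To make the quoted OVS design concrete: one bridge per trunk port, demuxing each subport's VLAN into br-int over a patch port pair. The sketch below is purely illustrative — the `tbr-`/`spt-`/`spi-` naming, the ID truncation, and the exact flag layout are assumptions for the example, not the spec's actual conventions.

```python
def trunk_wiring_commands(port_id, subports):
    """Return ovs-vsctl commands to wire one trunk port (hypothetical sketch).

    subports: list of (subport_id, vlan_id) tuples.
    """
    # One trunk bridge per port; the VM's tap device would plug into it.
    tbr = "tbr-%s" % port_id[:8]
    cmds = ["ovs-vsctl add-br %s" % tbr]
    for sub_id, vlan in subports:
        patch_trunk = "spt-%s" % sub_id[:8]  # end on the trunk bridge
        patch_int = "spi-%s" % sub_id[:8]    # end on br-int
        cmds += [
            # Trunk-bridge end carries the subport's VLAN tag, so only that
            # VLAN's traffic crosses this patch pair (the demux step).
            "ovs-vsctl add-port %s %s tag=%d "
            "-- set interface %s type=patch options:peer=%s"
            % (tbr, patch_trunk, vlan, patch_trunk, patch_int),
            # br-int end is untagged; br-int applies its own local VLAN
            # for the subport's network as it does for any other port.
            "ovs-vsctl add-port br-int %s "
            "-- set interface %s type=patch options:peer=%s"
            % (patch_int, patch_int, patch_trunk),
        ]
    return cmds
```

For example, `trunk_wiring_commands("0aa1b2c3d4", [("deadbeef01", 101)])` yields an `add-br` for the trunk bridge plus one tagged/untagged patch pair per subport. This is also where the naming question from the thread bites: the agent either has to guess these names or be told them, which is exactly the coordination problem being debated.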
Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports
Here's a link directly to the current design proposal [1] that might be of interest.

[1] https://review.openstack.org/#/c/318317/4/doc/source/devref/openvswitch_agent.rst@463

On Thu, Jun 9, 2016 at 5:31 PM, Carl Baldwin wrote:
> Hi,
>
> You may or may not be aware of the vlan-aware-vms effort [1] in Neutron. If not, there is a spec and a fair number of patches in progress for this. Essentially, the goal is to allow a VM to connect to multiple Neutron networks by tagging traffic on a single port with VLAN tags.
>
> This effort will have some effect on vif plugging because the datapath will include some changes that will affect how vif plugging is done today.
>
> The design proposal for trunk ports with OVS adds a new bridge for each trunk port. This bridge will demux the traffic and then connect to br-int with patch ports for each of the networks. Rawlin Peters has some ideas for expanding the vif capability to include this wiring.
>
> There is also a proposal for connecting to linux bridges by using kernel vlan interfaces.
>
> This effort is pretty important to Neutron in the Newton timeframe. I wanted to send this out to start rounding up the reviewers and other participants we need to see how we can start putting together a plan for nova integration of this feature (via os-vif?).
>
> Carl Baldwin
>
> [1] https://review.openstack.org/#/q/topic:bp/vlan-aware-vms+-status:abandoned
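The Linux bridge proposal mentioned above avoids the extra bridge entirely: each subport becomes a kernel VLAN interface on top of the trunk port's tap device, enslaved to that network's bridge. The sketch below is a hypothetical illustration only — device and bridge names are invented, and the real wiring would live in the agent, not in ad-hoc commands.

```python
def linuxbridge_subport_commands(tap_dev, subports):
    """Return ip(8) commands to wire subports on a Linux bridge trunk
    (hypothetical sketch).

    subports: list of (vlan_id, network_bridge) tuples.
    """
    cmds = []
    for vlan, bridge in subports:
        # Kernel VLAN sub-interface of the trunk tap, e.g. tap123.100;
        # the kernel does the demux that the OVS design needs a bridge for.
        vif = "%s.%d" % (tap_dev, vlan)
        cmds += [
            "ip link add link %s name %s type vlan id %d"
            % (tap_dev, vif, vlan),
            "ip link set %s up" % vif,
            # Enslave the VLAN interface to the subport network's bridge.
            "ip link set %s master %s" % (vif, bridge),
        ]
    return cmds
```

For example, `linuxbridge_subport_commands("tapabc", [(100, "brq-net1")])` produces the three `ip link` commands for one subport. This is why the thread notes the Linux bridge side is comparatively simple and could live in an L2 agent extension.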