Re: [openstack-dev] [Magnum] Containers and networking
I agree, that is my take too. Russell, since you are leading the OVN session in Vancouver, would it be possible to include the VLAN-aware-vms BP in that session?

Thanks,
Bob

From: Ian Wells <ijw.ubu...@cack.org.uk>
Reply-To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Date: Friday, 3 April 2015 13:17
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Containers and networking

> This puts me in mind of a previous proposal, from the Neutron side of things. Specifically, I would look at Erik Moe's proposal for VM ports attached to multiple networks: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
>
> I believe that you want logical ports hiding behind a conventional port (which that has); the logical ports attached to a variety of Neutron networks despite coming through the same VM interface (ditto); and an encap on the logical port with a segmentation ID (that uses exclusively VLANs, which probably suits here, though there's no particular reason why it has to be VLANs or why it couldn't be selectable). The original concept didn't require multiple ports attached to the same incoming subnetwork, but that's a comparatively minor adaptation.
> --
> Ian.
>
> On 2 April 2015 at 11:35, Russell Bryant <rbry...@redhat.com> wrote:
>> On 04/02/2015 01:45 PM, Kevin Benton wrote:
>>> +1. I added a suggestion for container networking to the etherpad for Neutron. It would be sad if the container solution built yet another overlay on top of the Neutron networks, with yet another network management workflow. By the time the packets are traveling across the wires, it would be nice not to have double encapsulation from completely different systems.
>>
>> Yeah, that's what I like about this proposal. Most of the existing work in this space seems to result in double encapsulation. Now we just need to finish building it ...
>>
>> --
>> Russell Bryant

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [Magnum] Containers and networking
This puts me in mind of a previous proposal, from the Neutron side of things. Specifically, I would look at Erik Moe's proposal for VM ports attached to multiple networks: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

I believe that you want logical ports hiding behind a conventional port (which that has); the logical ports attached to a variety of Neutron networks despite coming through the same VM interface (ditto); and an encap on the logical port with a segmentation ID (that uses exclusively VLANs, which probably suits here, though there's no particular reason why it has to be VLANs or why it couldn't be selectable). The original concept didn't require multiple ports attached to the same incoming subnetwork, but that's a comparatively minor adaptation.
--
Ian.

On 2 April 2015 at 11:35, Russell Bryant <rbry...@redhat.com> wrote:
> On 04/02/2015 01:45 PM, Kevin Benton wrote:
>> +1. I added a suggestion for container networking to the etherpad for Neutron. It would be sad if the container solution built yet another overlay on top of the Neutron networks, with yet another network management workflow. By the time the packets are traveling across the wires, it would be nice not to have double encapsulation from completely different systems.
>
> Yeah, that's what I like about this proposal. Most of the existing work in this space seems to result in double encapsulation. Now we just need to finish building it ...
>
> --
> Russell Bryant
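[Editor's sketch] The model Ian describes — logical ports hiding behind one conventional port, demultiplexed by a VLAN segmentation ID, each attached to its own Neutron network — can be illustrated with a small toy data structure. All class and attribute names here are invented for illustration; this is not the Neutron API:

```python
from dataclasses import dataclass, field


@dataclass
class Subport:
    """A logical port carried inside the parent port's link, distinguished
    by a VLAN segmentation ID and attached to its own Neutron network."""
    network_id: str
    vlan_id: int


@dataclass
class TrunkPort:
    """The conventional Neutron port the VM actually plugs into; the
    logical subports hide behind it."""
    port_id: str
    subports: dict = field(default_factory=dict)  # vlan_id -> Subport

    def attach(self, network_id: str, vlan_id: int) -> Subport:
        # One segmentation ID maps to at most one network on this port.
        if vlan_id in self.subports:
            raise ValueError(f"VLAN {vlan_id} already in use on {self.port_id}")
        self.subports[vlan_id] = Subport(network_id, vlan_id)
        return self.subports[vlan_id]

    def classify(self, vlan_id: int) -> str:
        """Given the VLAN tag on an incoming frame, return the Neutron
        network the traffic belongs to."""
        return self.subports[vlan_id].network_id


trunk = TrunkPort("vm-eth0")
trunk.attach("net-a", 100)
trunk.attach("net-b", 200)
```

The point of the sketch is the classification step: traffic from several Neutron networks shares one VM interface, and only the VLAN tag tells them apart. Whether the encap has to be VLAN is, as Ian notes, a policy choice rather than a structural one.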
Re: [openstack-dev] [Magnum] Containers and networking
On 04/02/2015 12:36 PM, Adrian Otto wrote:
> What to expect in the Liberty release cycle:
> ...
> * Overlay networking
> ...

This is totally unrelated to your PTL email, but on this point, I'd be curious what the Magnum team thinks of this proposal:

http://openvswitch.org/pipermail/dev/2015-March/052663.html

It's a proposed (and now merged) design for how containers that live inside OpenStack-managed VMs can be natively connected to virtual networks managed by Neutron. Some parts of the process are handled by the container orchestration system being used.

--
Russell Bryant
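[Editor's sketch] The linked design boils down to bookkeeping: each container interface becomes a logical port whose traffic arrives through the VM's own port (its "parent") and is demultiplexed by a VLAN tag chosen by the container orchestration system, so the container lands natively on a Neutron-managed network with no second overlay. A minimal toy model of that bookkeeping, with invented names (this is not the OVN schema):

```python
# Toy registry of logical ports. A VM port stands alone; a container
# port records which VM port it rides inside ("parent") and which VLAN
# tag distinguishes its traffic on that shared link.
logical_ports = {}


def add_vm_port(name: str, network: str) -> None:
    logical_ports[name] = {"network": network, "parent": None, "tag": None}


def add_container_port(name: str, network: str, parent: str, tag: int) -> None:
    """In the real system the virtual switch strips/adds `tag` at the
    edge, so the container is directly on its own virtual network."""
    if parent not in logical_ports:
        raise KeyError(f"unknown parent port {parent}")
    logical_ports[name] = {"network": network, "parent": parent, "tag": tag}


add_vm_port("vm1-port", "tenant-net")
add_container_port("c1-port", "web-net", parent="vm1-port", tag=42)
add_container_port("c2-port", "db-net", parent="vm1-port", tag=43)
```

Note that the two container ports can sit on different virtual networks even though both physically share vm1-port's link — which is exactly the property that makes a second encapsulation layer unnecessary.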
Re: [openstack-dev] [Magnum] Containers and networking
On 04/02/2015 01:45 PM, Kevin Benton wrote:
> +1. I added a suggestion for container networking to the etherpad for Neutron. It would be sad if the container solution built yet another overlay on top of the Neutron networks, with yet another network management workflow. By the time the packets are traveling across the wires, it would be nice not to have double encapsulation from completely different systems.

Yeah, that's what I like about this proposal. Most of the existing work in this space seems to result in double encapsulation. Now we just need to finish building it ...

--
Russell Bryant
Re: [openstack-dev] [Magnum] Containers and networking
Russell,

On Apr 2, 2015, at 9:51 AM, Russell Bryant <rbry...@redhat.com> wrote:
> On 04/02/2015 12:36 PM, Adrian Otto wrote:
>> What to expect in the Liberty release cycle:
>> ...
>> * Overlay networking
>> ...
>
> This is totally unrelated to your PTL email, but on this point, I'd be curious what the Magnum team thinks of this proposal:
>
> http://openvswitch.org/pipermail/dev/2015-March/052663.html
>
> It's a proposed (and now merged) design for how containers that live inside OpenStack-managed VMs can be natively connected to virtual networks managed by Neutron. Some parts of the process are handled by the container orchestration system being used.

This sounds a lot like what we talked about at the midcycle when we discussed this topic. We’ll take a good look at this. I’m planning for this topic to be at least one of our design session topics, and I would love to coordinate so that OVS contributors and Neutron stakeholders can join us. Thanks for sharing this proposal with us!

Keep in mind that my characterization of “overlay networking” does not necessarily mean double encapsulation. It means having a reliable means for containers to communicate across hosts using a user-supplied IP addressing scheme.

Cheers,
Adrian
Re: [openstack-dev] [Magnum] Containers and networking
+1. I added a suggestion for container networking to the etherpad for Neutron. It would be sad if the container solution built yet another overlay on top of the Neutron networks, with yet another network management workflow. By the time the packets are traveling across the wires, it would be nice not to have double encapsulation from completely different systems.

On Thu, Apr 2, 2015 at 9:51 AM, Russell Bryant <rbry...@redhat.com> wrote:
> On 04/02/2015 12:36 PM, Adrian Otto wrote:
>> What to expect in the Liberty release cycle:
>> ...
>> * Overlay networking
>> ...
>
> This is totally unrelated to your PTL email, but on this point, I'd be curious what the Magnum team thinks of this proposal:
>
> http://openvswitch.org/pipermail/dev/2015-March/052663.html
>
> It's a proposed (and now merged) design for how containers that live inside OpenStack-managed VMs can be natively connected to virtual networks managed by Neutron. Some parts of the process are handled by the container orchestration system being used.
>
> --
> Russell Bryant

--
Kevin Benton
Re: [openstack-dev] [Magnum] Containers and networking
On 4/2/15, 9:51 AM, "Russell Bryant" <rbry...@redhat.com> wrote:
> On 04/02/2015 12:36 PM, Adrian Otto wrote:
>> What to expect in the Liberty release cycle:
>> ...
>> * Overlay networking
>> ...
>
> This is totally unrelated to your PTL email, but on this point, I'd be curious what the Magnum team thinks of this proposal:
>
> http://openvswitch.org/pipermail/dev/2015-March/052663.html
>
> It's a proposed (and now merged) design for how containers that live inside OpenStack-managed VMs can be natively connected to virtual networks managed by Neutron. Some parts of the process are handled by the container orchestration system being used.

Looks like a fantastic solution to getting rid of overlay networks. I don't prefer the overlay network model because it introduces memcpy overhead and latency from copying the packets around. I prefer that the container have direct access to Neutron, so this proposal seems spot on.

Note that the Magnum community takes COEs as they come, and at the moment COEs come built, by design, with overlay networking. We don't do a whole lot of R&D in this area. Instead, we just set up the system as it should be via the heat-coe-templates stackforge repo. I suspect that as COEs adopt these new OVS models, we will just set them up according to the upstream documentation, as we do now. There is still a use case for VXLAN that requires overlay networks, which we will have to sort out.

Regards
-steve
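[Editor's sketch] For a back-of-envelope sense of what "double encapsulation" costs on the wire: each VXLAN layer prepends outer Ethernet, IPv4, UDP, and VXLAN headers to the inner frame, so stacking a container overlay on top of the Neutron overlay pays that tax twice. The header sizes below are the commonly cited textbook values, not measurements from any of the systems discussed:

```python
# Per-header sizes in bytes (Ethernet without VLAN tag, IPv4 without
# options). One VXLAN encapsulation layer adds all four.
ETH, IPV4, UDP, VXLAN = 14, 20, 8, 8
overlay_overhead = ETH + IPV4 + UDP + VXLAN   # 50 bytes per encap layer

physical_mtu = 1500
single_encap_mtu = physical_mtu - overlay_overhead      # one overlay, e.g. Neutron's
double_encap_mtu = physical_mtu - 2 * overlay_overhead  # overlay on top of overlay

print(overlay_overhead)    # 50
print(single_encap_mtu)    # 1450
print(double_encap_mtu)    # 1400
```

Beyond the 100 bytes of headers per packet, every extra layer also means another round of encapsulation work per packet on both ends — which is the memcpy/latency cost Steve mentions, and why terminating container traffic natively in the existing Neutron overlay is attractive.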