v2-v3:
 - Dropped the first patch, which was merged.
 - Reordered the remaining patches to put everything directly related
   to the feature at the end and all of the prerequisites at the
   beginning.
 - Updated flows to ensure that a packet that comes in via a localnet
   port never goes out over a tunnel, to avoid having multiple copies
   of the same packet arrive at a chassis.
 - This is still only tested using ovs-sandbox, but I think it's fine
   to go ahead and review.  Neutron integration and testing will
   follow soon after.
 - Follow-up work to do:
   - Add VLAN support for "localnet" ports.
   - Create an OVN admin manual that discusses OVN deployment and
     configuration.
   - Make ovn-controller automatically create br-int if needed.
   - Consider plain-text file configuration support for
     ovn-controller.
Russell Bryant (10):
  lib: Add smap_equal().
  ovn: Fix uninit access warning from valgrind.
  ovn: Make column comparisons more generic.
  ovn: Drop unnecessary br_int local variable.
  ovn: Add bridge mappings to ovn-controller.
  ovn: Add patch ports for ovn bridge mappings.
  ovn: Set up some bridge mappings in ovs-sandbox.
  ovn: Add type and options to logical port.
  ovn: Get/set lport type and options in ovn-nbctl.
  ovn: Add "localnet" logical port type.

 lib/smap.c                      |  34 ++++++++
 lib/smap.h                      |   2 +
 ovn/controller/ofctrl.c         |   2 +-
 ovn/controller/ovn-controller.c | 173 +++++++++++++++++++++++++++++++++++++++-
 ovn/controller/ovn-controller.h |   7 ++
 ovn/controller/physical.c       | 141 +++++++++++++++++++++++++-------
 ovn/northd/ovn-northd.c         |  42 +++++++---
 ovn/ovn-nb.ovsschema            |   6 ++
 ovn/ovn-nb.xml                  |  27 +++++++
 ovn/ovn-nbctl.8.xml             |  24 +++++-
 ovn/ovn-nbctl.c                 | 111 ++++++++++++++++++++++++++
 ovn/ovn-sb.ovsschema            |   6 ++
 ovn/ovn-sb.xml                  |  41 ++++++++++
 tutorial/ovs-sandbox            |   3 +
 14 files changed, 570 insertions(+), 49 deletions(-)

This series implements support for something that exists in OpenStack
Neutron as an API extension called "provider networks", which allows an
administrator to specify that they would like ports directly attached
to some pre-existing network in their environment.  There was a
previous thread where we got into the details of this here:

  http://openvswitch.org/pipermail/dev/2015-June/056765.html

The case where this would be used is an environment that isn't actually
interested in virtual networks and just wants all of its compute
resources connected up to externally managed networks.  Even in this
environment, OVN still has a lot of value to add.  OVN implements port
security and ACLs for all ports connected to these networks.  OVN also
provides the configuration interface and control plane to manage this
across many hypervisors.

Let's start from how this would be used in Neutron and go down through
OVN to show how it works.
Imagine an environment where every hypervisor has a NIC attached to the
same physical network that you would like all of your VMs connected to.
We'll refer to this physical network as "physnet1".  Let's also assume
that the interface to "physnet1" is eth1 on every hypervisor.  You
would first need to create an OVS bridge and add eth1 to it by doing
something like:

  $ ovs-vsctl add-br br-eth1
  $ ovs-vsctl add-port br-eth1 eth1

You must also configure ovn-controller to tell it that it can get
traffic to "physnet1" by sending it to the bridge "br-eth1":

  $ ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

When ovn-controller starts up, it parses the bridge mappings and
automatically creates patch ports between the OVN integration bridge
and the bridges specified in the bridge mappings.

Now that ovn-controller on every hypervisor understands what "physnet1"
is, you can create this network in Neutron.  The following command
defines a network in Neutron called "provnet1", which is implemented as
a connection to a physical network called "physnet1".  The type is set
to "flat", meaning that the traffic is not tagged.

  $ neutron net-create provnet1 --shared \
  >     --provider:physical_network physnet1 \
  >     --provider:network_type flat

(Note that the Neutron API supports specifying a VLAN tag here.  That
is not yet supported in this patch series, but will be added later.)

At this point, an OpenStack user can start creating Neutron ports for
VMs to be attached to this network:

  $ neutron port-create provnet1

When the Neutron network is defined, nothing is actually created in
OVN_Northbound.  Instead, every time a Neutron port is created on this
Neutron provider network, the connection is modeled as a 2-port OVN
logical switch.

At this point, we can model what would happen by using ovn-nbctl.
Consider the following script, which sets up what Neutron would create
for 2 Neutron ports connected to the same Neutron provider network.
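For reference, "ovn-bridge-mappings" is a comma-separated list of
network:bridge pairs.  Here is an illustrative Python sketch (not the
actual ovn-controller code, which is in C) of how the mapping string
can be parsed and how the patch port names in the example output below
are formed; the helper names are hypothetical:

```python
def parse_bridge_mappings(value):
    """Parse "physnet1:br-eth1,physnet2:br-eth2" into a dict
    mapping physical network name -> local OVS bridge."""
    mappings = {}
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        network, sep, bridge = entry.partition(":")
        if not sep or not network or not bridge:
            raise ValueError("invalid bridge mapping entry: %r" % entry)
        mappings[network] = bridge
    return mappings

def patch_port_names(int_bridge, provider_bridge):
    """Names for the pair of patch ports connecting the integration
    bridge to a provider bridge, following the "patch-<src>-to-<dst>"
    convention seen in the example output later in this mail."""
    return ("patch-%s-to-%s" % (int_bridge, provider_bridge),
            "patch-%s-to-%s" % (provider_bridge, int_bridge))

mappings = parse_bridge_mappings("physnet1:br-eth1")
print(mappings)                                   # {'physnet1': 'br-eth1'}
print(patch_port_names("br-int", mappings["physnet1"]))
```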
for n in 1 2 ; do
    ovn-nbctl lswitch-add provnet1-$n

    ovn-nbctl lport-add provnet1-$n provnet1-$n-port1
    ovn-nbctl lport-set-macs provnet1-$n-port1 00:00:00:00:00:0$n
    ovn-nbctl lport-set-port-security provnet1-$n-port1 00:00:00:00:00:0$n

    ovs-vsctl add-port br-int lport$n -- \
        set Interface lport$n external_ids:iface-id=provnet1-$n-port1

    ovn-nbctl lport-add provnet1-$n provnet1-$n-physnet1
    ovn-nbctl lport-set-macs provnet1-$n-physnet1 unknown
    ovn-nbctl lport-set-type provnet1-$n-physnet1 localnet
    ovn-nbctl lport-set-options provnet1-$n-physnet1 network_name=physnet1
done

This creates 2 OVN logical switches.  One port on each logical switch
is a "normal" port to be used by a VM or container.  The other is a
special type of port that represents the connection to the provider
network.  The special port has a type of "localnet" and a type-specific
option called "network_name", which matches the network name we used in
"ovn-bridge-mappings".

When ovn-northd processes this, the logical pipeline is no different
than it would be for 2 "normal" logical ports on a logical switch.  As
a result, the OpenFlow flows that implement the logical pipeline also
remain unchanged.  ovn-northd copies the "type" and "options" columns
from the logical port in OVN_Northbound to the Binding table in
OVN_Southbound.  With that information, ovn-controller can wire things
up appropriately.  Specifically, the changes are in ovn-controller's
code that does the logical-to-physical mappings and creates the
associated OpenFlow flows.

Here is the final state of the system in ovs-sandbox using this
example.
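The copy that ovn-northd performs can be pictured with a toy model.
This is a hedged Python sketch, not the real C implementation; rows are
modeled as plain dicts and the field names mirror the schema columns
mentioned above:

```python
def northd_sync_binding(nb_lport):
    """Sketch: the OVN_Southbound Binding fields that ovn-northd
    derives from an OVN_Northbound logical port row.  "type" and
    "options" are copied over verbatim."""
    return {
        "logical_port": nb_lport["name"],
        "type": nb_lport.get("type", ""),            # e.g. "localnet"
        "options": dict(nb_lport.get("options", {})),
        "mac": list(nb_lport.get("macs", [])),
    }

# The localnet port created by the script above, for n=1:
localnet_port = {
    "name": "provnet1-1-physnet1",
    "type": "localnet",
    "options": {"network_name": "physnet1"},
    "macs": ["unknown"],
}

binding = northd_sync_binding(localnet_port)
print(binding["type"], binding["options"]["network_name"])
# localnet physnet1
```

With "type" and "options" present in the Binding table, ovn-controller
has everything it needs to wire the port to the right local bridge.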
First, here's a list of the bridges and ports:

  $ ovs-vsctl show
  1500b021-0bd4-447c-b79f-4ca91a982c46
      Bridge "br-eth1"
          Port "patch-br-eth1-to-br-int"
              Interface "patch-br-eth1-to-br-int"
                  type: patch
                  options: {peer=br-int}
          Port "br-eth1"
              Interface "br-eth1"
                  type: internal
      Bridge br-int
          fail_mode: secure
          Port "lport1"
              Interface "lport1"
          Port "lport2"
              Interface "lport2"
          Port br-int
              Interface br-int
                  type: internal
          Port "patch-br-int-to-br-eth1"
              Interface "patch-br-int-to-br-eth1"
                  type: patch
                  options: {peer="br-eth1"}

Before showing the flows, here are the OpenFlow port numbers for the
ports on br-int:

  patch-br-int-to-br-eth1 -- 1
  lport1 -- 2
  lport2 -- 3

Finally, here are the flows (with unimportant pieces stripped) related
to physical-to-logical and logical-to-physical translation:

  table=0, priority=100,in_port=2 actions=set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
  table=0, priority=100,in_port=3 actions=set_field:0x2->metadata,set_field:0x3->reg6,resubmit(,16)
  table=0, priority=100,in_port=1 actions=set_field:0x1->reg5,set_field:0x2->metadata,set_field:0x4->reg6,resubmit(,16),set_field:0x1->reg5,set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)
  table=0, priority=50,tun_id=0x1 actions=output:2
  table=0, priority=50,tun_id=0x3 actions=output:3
  ...
  table=64, priority=100,reg6=0x1,reg7=0x1 actions=drop
  table=64, priority=100,reg6=0x2,reg7=0x2 actions=drop
  table=64, priority=100,reg6=0x3,reg7=0x3 actions=drop
  table=64, priority=100,reg6=0x4,reg7=0x4 actions=drop
  table=64, priority=50,reg7=0x1 actions=output:2
  table=64, priority=50,reg7=0x2 actions=output:1
  table=64, priority=50,reg7=0x3 actions=output:3
  table=64, priority=50,reg7=0x4 actions=output:1

As a variation on this example, consider the same set of OVN logical
switches and ports, but on a hypervisor where the logical ports are on
a different chassis.  When a packet comes in on a localnet port, a bit
is set in reg5 to remember that it came in on a localnet port.
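The decision that table 64 encodes can be summarized with a small
model.  This is an illustrative Python sketch of the logic, under the
assumptions visible in the flows above (metadata holds the logical
datapath, reg6/reg7 the logical input/output ports, and a reg5 bit
marks localnet ingress); it is not the actual ovn-controller code:

```python
LOCALNET_BIT = 0x1  # bit set in reg5 when the packet entered via a localnet port

def table64_action(reg5, reg6, reg7, local_ofports, remote_tun_keys):
    """Toy model of the table=64 (logical-to-physical output) decision.

    local_ofports:   logical output port -> OpenFlow port on this chassis
    remote_tun_keys: logical output port -> tunnel key of a remote chassis
    """
    if reg6 == reg7:
        # Never send a packet back out its logical input port
        # (the priority=100 drop flows).
        return ("drop",)
    if reg7 in local_ofports:
        return ("output", local_ofports[reg7])
    if reg5 & LOCALNET_BIT:
        # Arrived via a localnet port: the remote chassis will get its
        # own copy from the physical network, so don't also tunnel it.
        return ("drop",)
    return ("tunnel", remote_tun_keys[reg7])

# Hypothetical setup mirroring the remote-chassis variation below:
# logical ports 0x2/0x4 are the local localnet patch (OpenFlow port 1),
# logical ports 0x1/0x3 live on a remote chassis.
local_ofports = {0x2: 1, 0x4: 1}
remote_tun_keys = {0x1: 0x1, 0x3: 0x3}
print(table64_action(0x0, 0x2, 0x1, local_ofports, remote_tun_keys))
# ('tunnel', 1)
print(table64_action(0x1, 0x2, 0x1, local_ofports, remote_tun_keys))
# ('drop',)
```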
When doing the logical-to-physical translation for output, a packet is
only sent out on a tunnel if it did *not* arrive on a localnet port.

  table=0, priority=100,in_port=1 actions=set_field:0x1->reg5,set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16),set_field:0x1->reg5,set_field:0x2->metadata,set_field:0x4->reg6,resubmit(,16)
  ...
  table=64, priority=100,reg6=0x2,reg7=0x2 actions=drop
  table=64, priority=100,reg6=0x4,reg7=0x4 actions=drop
  table=64, priority=100,reg6=0x1,reg7=0x1 actions=drop
  table=64, priority=100,reg6=0x3,reg7=0x3 actions=drop
  table=64, priority=50,reg7=0x2 actions=output:1
  table=64, priority=50,reg7=0x4 actions=output:1
  table=64, priority=50,reg5=0/0x1,reg7=0x1 actions=set_field:0x1->tun_id,output:2
  table=64, priority=50,reg5=0/0x1,reg7=0x3 actions=set_field:0x3->tun_id,output:2

Thanks!

--
2.4.3

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev