On Tue, Jul 23, 2024 at 4:08 PM Numan Siddique <[email protected]> wrote:
>
> On Tue, Jul 23, 2024 at 2:16 PM Mark Michelson <[email protected]> wrote:
> >
> > Hi Numan,
> >
> > The code change itself is pretty simple, and the new tests are fantastic.
> >
> > Before giving this an ack, I have a couple of questions:
> >
> > 1) Is the name "provider_network_overlay" something that is going to be
> > inherently understood? The documentation makes it clear what it does,
> > but the name seems odd to me, as someone who is not an OpenStack admin.
> > Something as simple as "always_tunnel" would give a clearer indication
> > of what the option does, IMO.
>
> Thanks for the reviews.  I struggled to come up with a good name.
> "always_tunnel" sounds good to me.  I'll submit v2.
> >
> > 2) Can you see a situation where this behavior would be desired on one
> > logical switch but not another? I ask because it seems possible to
> > enable the option per logical switch instead of globally.
>
> I thought about it.  Let's say logical switches S1 and S2 (both with
> localnet ports) are connected to a logical router, and only S1 has this
> option set.  In that case, traffic from S1 to S2 will be sent out of the
> localnet port of S2, while the reply traffic from S2 to S1 will be
> tunnelled.  This asymmetry is undesirable.  We could argue that it's the
> CMS's responsibility to configure the option consistently and ensure
> symmetry.  Also, the SB Datapath table doesn't have an "options" or
> "other_config" column where we could store a per-datapath option; the NB
> Logical_Switch table does have an "other_config" column.
> I'd prefer a global option for now.  What do you think?
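>
> For illustration only, a per-switch variant would have looked roughly
> like the hypothetical sketch below (the per-switch key name is made up;
> the patch implements a global option instead):
>
>     ovn-nbctl set Logical_Switch S1 other_config:provider_network_overlay=true
>
> With only S1 configured, S1 to S2 traffic would still exit via S2's
> localnet port while the S2 to S1 replies would be tunnelled -- exactly
> the asymmetry described above.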
>

Submitted v2 changing the config option to 'always_tunnel' -
https://patchwork.ozlabs.org/project/ovn/patch/[email protected]/

Please take a look.
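
For reference, enabling the renamed option is a one-liner; this sketch
mirrors the v1 tests, with only the option name changed to the v2 name:

    ovn-nbctl set NB_Global . options:always_tunnel=true
    ovn-nbctl --wait=hv sync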

Numan

> Thanks
> Numan
>
>
> >
> > Thanks,
> > Mark Michelson
> >
> > On 5/31/24 12:15, [email protected] wrote:
> > > From: Numan Siddique <[email protected]>
> > >
> > > This patch adds a global config option - 'provider_network_overlay'.
> > > When set to true, any traffic destined to a VIF logical port of a
> > > provider logical switch (having localnet port(s)) is tunnelled to
> > > the destination chassis instead of being sent out via the localnet
> > > port.  This feature is useful for the following reasons:
> > >
> > > 1.  CMS can add both provider logical switches and overlay logical
> > >      switches to a logical router.  With this option set, E-W routing
> > >      between these logical switches will be tunnelled all the time.  The
> > >      router port mac addresses are not leaked from multiple chassis to
> > >      the upstream switches anymore.
> > >
> > > 2.  NATting will work as expected either in the gateway chassis or on
> > >      the source VIF chassis (if external_mac and logical_port are set).
> > >
> > > 3.  With this option set, there is no need to centralize routing
> > >      for provider logical switches ('reside-on-redirect-chassis').
> > >
> > > 4.  With the commits [1] now merged, MTU issues arising due to tunnel
> > >      overhead will be handled gracefully.
> > >
> > > [1] - 3faadc76ad71 ("northd: Fix pmtud for non routed traffic.")
> > >        221476a01f26 ("ovn: Add tunnel PMTUD support.")
> > >
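> > > The option is enabled globally in the NB database; the tests added
> > > below exercise exactly this pair of commands:
> > >
> > >     ovn-nbctl set NB_Global . options:provider_network_overlay=true
> > >     ovn-nbctl --wait=hv sync
> > >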
> > > Reported-at: https://issues.redhat.com/browse/FDP-209
> > > Signed-off-by: Numan Siddique <[email protected]>
> > > ---
> > >   controller/ovn-controller.c |  27 +++
> > >   controller/physical.c       |  10 +-
> > >   controller/physical.h       |   1 +
> > >   northd/en-global-config.c   |   5 +
> > >   ovn-nb.xml                  |  16 ++
> > >   tests/multinode-macros.at   |  19 ++
> > >   tests/multinode.at          | 358 ++++++++++++++++++++++++++++++++++++
> > >   tests/ovn.at                | 156 ++++++++++++++++
> > >   8 files changed, 591 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
> > > index 6b38f113dc..a1954d7870 100644
> > > --- a/controller/ovn-controller.c
> > > +++ b/controller/ovn-controller.c
> > > @@ -3719,6 +3719,11 @@ non_vif_data_ovs_iface_handler(struct engine_node *node, void *data OVS_UNUSED)
> > >   struct ed_type_northd_options {
> > >       bool lb_hairpin_use_ct_mark;
> > >       bool explicit_arp_ns_output;
> > > +    bool provider_network_overlay; /* Indicates if the traffic to the
> > > +                                    * logical port of a bridged logical
> > > +                                    * switch (i.e. with localnet port) should
> > > +                                    * be tunnelled or sent via the localnet
> > > +                                    * port.  Default value is 'false'. */
> > >   };
> > >
> > >
> > > @@ -3756,6 +3761,12 @@ en_northd_options_run(struct engine_node *node, void *data)
> > >                               false)
> > >               : false;
> > >
> > > +    n_opts->provider_network_overlay =
> > > +            sb_global
> > > +            ? smap_get_bool(&sb_global->options, "provider_network_overlay",
> > > +                            false)
> > > +            : false;
> > > +
> > >       engine_set_node_state(node, EN_UPDATED);
> > >   }
> > >
> > > @@ -3790,6 +3801,17 @@ en_northd_options_sb_sb_global_handler(struct engine_node *node, void *data)
> > >           engine_set_node_state(node, EN_UPDATED);
> > >       }
> > >
> > > +    bool provider_network_overlay =
> > > +            sb_global
> > > +            ? smap_get_bool(&sb_global->options, "provider_network_overlay",
> > > +                            false)
> > > +            : false;
> > > +
> > > +    if (provider_network_overlay != n_opts->provider_network_overlay) {
> > > +        n_opts->provider_network_overlay = provider_network_overlay;
> > > +        engine_set_node_state(node, EN_UPDATED);
> > > +    }
> > > +
> > >       return true;
> > >   }
> > >
> > > @@ -4691,6 +4713,9 @@ static void init_physical_ctx(struct engine_node *node,
> > >           engine_get_input_data("ct_zones", node);
> > >       struct simap *ct_zones = &ct_zones_data->current;
> > >
> > > +    struct ed_type_northd_options *n_opts =
> > > +        engine_get_input_data("northd_options", node);
> > > +
> > >       parse_encap_ips(ovs_table, &p_ctx->n_encap_ips, &p_ctx->encap_ips);
> > >       p_ctx->sbrec_port_binding_by_name = sbrec_port_binding_by_name;
> > >       p_ctx->sbrec_port_binding_by_datapath = sbrec_port_binding_by_datapath;
> > > @@ -4708,6 +4733,7 @@ static void init_physical_ctx(struct engine_node *node,
> > >       p_ctx->local_bindings = &rt_data->lbinding_data.bindings;
> > >       p_ctx->patch_ofports = &non_vif_data->patch_ofports;
> > >       p_ctx->chassis_tunnels = &non_vif_data->chassis_tunnels;
> > > +    p_ctx->provider_network_overlay = n_opts->provider_network_overlay;
> > >
> > >       struct controller_engine_ctx *ctrl_ctx = engine_get_context()->client_ctx;
> > >       p_ctx->if_mgr = ctrl_ctx->if_mgr;
> > > @@ -5376,6 +5402,7 @@ main(int argc, char *argv[])
> > >        */
> > >       engine_add_input(&en_pflow_output, &en_non_vif_data,
> > >                        NULL);
> > > +    engine_add_input(&en_pflow_output, &en_northd_options, NULL);
> > >       engine_add_input(&en_pflow_output, &en_ct_zones,
> > >                        pflow_output_ct_zones_handler);
> > >       engine_add_input(&en_pflow_output, &en_sb_chassis,
> > > diff --git a/controller/physical.c b/controller/physical.c
> > > index 25da789f0b..c4526cae13 100644
> > > --- a/controller/physical.c
> > > +++ b/controller/physical.c
> > > @@ -1488,6 +1488,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
> > >                         const struct if_status_mgr *if_mgr,
> > >                         size_t n_encap_ips,
> > >                         const char **encap_ips,
> > > +                      bool provider_network_overlay,
> > >                         struct ovn_desired_flow_table *flow_table,
> > >                         struct ofpbuf *ofpacts_p)
> > >   {
> > > @@ -1921,7 +1922,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
> > >                               binding->header_.uuid.parts[0], &match,
> > >                               ofpacts_p, &binding->header_.uuid);
> > >           }
> > > -    } else if (access_type == PORT_LOCALNET) {
> > > +    } else if (access_type == PORT_LOCALNET && !provider_network_overlay) {
> > >           /* Remote port connected by localnet port */
> > >           /* Table 40, priority 100.
> > >            * =======================
> > > @@ -1929,6 +1930,11 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
> > >            * Implements switching to localnet port. Each flow matches a
> > >            * logical output port on remote hypervisor, switch the output port
> > >            * to connected localnet port and resubmits to same table.
> > > +         *
> > > +         * Note: If 'provider_network_overlay' is true, then
> > > +         * put_remote_port_redirect_overlay() called from below takes care
> > > +         * of adding the flow in OFTABLE_REMOTE_OUTPUT table to tunnel to
> > > +         * the destination chassis.
> > >            */
> > >
> > >           ofpbuf_clear(ofpacts_p);
> > > @@ -2354,6 +2360,7 @@ physical_eval_port_binding(struct physical_ctx *p_ctx,
> > >                             p_ctx->if_mgr,
> > >                             p_ctx->n_encap_ips,
> > >                             p_ctx->encap_ips,
> > > +                          p_ctx->provider_network_overlay,
> > >                             flow_table, &ofpacts);
> > >       ofpbuf_uninit(&ofpacts);
> > >   }
> > > @@ -2481,6 +2488,7 @@ physical_run(struct physical_ctx *p_ctx,
> > >                                 p_ctx->if_mgr,
> > >                                 p_ctx->n_encap_ips,
> > >                                 p_ctx->encap_ips,
> > > +                              p_ctx->provider_network_overlay,
> > >                                 flow_table, &ofpacts);
> > >       }
> > >
> > > diff --git a/controller/physical.h b/controller/physical.h
> > > index 7fe8ee3c18..d171d25829 100644
> > > --- a/controller/physical.h
> > > +++ b/controller/physical.h
> > > @@ -69,6 +69,7 @@ struct physical_ctx {
> > >       size_t n_encap_ips;
> > >       const char **encap_ips;
> > >       struct physical_debug debug;
> > > +    bool provider_network_overlay;
> > >   };
> > >
> > >   void physical_register_ovs_idl(struct ovsdb_idl *);
> > > diff --git a/northd/en-global-config.c b/northd/en-global-config.c
> > > index 28c78a12c1..bab805ca2d 100644
> > > --- a/northd/en-global-config.c
> > > +++ b/northd/en-global-config.c
> > > @@ -533,6 +533,11 @@ check_nb_options_out_of_sync(const struct nbrec_nb_global *nb,
> > >           return true;
> > >       }
> > >
> > > +    if (config_out_of_sync(&nb->options, &config_data->nb_options,
> > > +                           "provider_network_overlay", false)) {
> > > +        return true;
> > > +    }
> > > +
> > >       return false;
> > >   }
> > >
> > > diff --git a/ovn-nb.xml b/ovn-nb.xml
> > > index 7bc77da684..2972867e06 100644
> > > --- a/ovn-nb.xml
> > > +++ b/ovn-nb.xml
> > > @@ -381,6 +381,22 @@
> > >           of SB changes would be very noticeable.
> > >         </column>
> > >
> > > +      <column name="options" key="provider_network_overlay"
> > > +           type='{"type": "boolean"}'>
> > > +        <p>
> > > +          If set to true, then the traffic destined to a VIF of a provider
> > > +          logical switch (having a localnet port) will be tunnelled instead
> > > +          of sending it via the localnet port.  This option is useful
> > > +          if the CMS wants to connect overlay logical switches (without a
> > > +          localnet port) and provider logical switches to a router.  Without
> > > +          this option set, the traffic path will be a mix of tunnelling and
> > > +          localnet ports (since routing is distributed), resulting in the
> > > +          leakage of the router port mac address to the upstream switches
> > > +          and undefined behavior if NATting is involved.  This option is
> > > +          disabled by default.
> > > +        </p>
> > > +      </column>
> > > +
> > >         <group title="Options for configuring interconnection route advertisement">
> > >           <p>
> > >             These options control how routes are advertised between OVN
> > > diff --git a/tests/multinode-macros.at b/tests/multinode-macros.at
> > > index ef41087ae3..786e564860 100644
> > > --- a/tests/multinode-macros.at
> > > +++ b/tests/multinode-macros.at
> > > @@ -22,6 +22,25 @@ m4_define([M_NS_CHECK_EXEC],
> > >     [ AT_CHECK([M_NS_EXEC([$1], [$2], [$3])], m4_shift(m4_shift(m4_shift($@)))) ]
> > >   )
> > >
> > > +# M_DAEMONIZE([fake_node],[command],[pidfile])
> > > +m4_define([M_DAEMONIZE],
> > > +    [podman exec $1 $2 & echo $! > $3
> > > +     echo "kill \`cat $3\`" >> cleanup
> > > +    ]
> > > +)
> > > +
> > > +# M_START_TCPDUMP([fake_node], [params], [name])
> > > +#
> > > +# Helper to properly start tcpdump and wait for the startup.
> > > +# The tcpdump output is available in <name>.tcpdump file.
> > > +m4_define([M_START_TCPDUMP],
> > > +    [
> > > +     podman exec $1 tcpdump -l $2 >$3.tcpdump 2>$3.stderr &
> > > +     OVS_WAIT_UNTIL([grep -q "listening" $3.stderr])
> > > +    ]
> > > +)
> > > +
> > > +
> > >   OVS_START_SHELL_HELPERS
> > >
> > >   m_as() {
> > > diff --git a/tests/multinode.at b/tests/multinode.at
> > > index 1e6eeb6610..d0ea4aa4f6 100644
> > > --- a/tests/multinode.at
> > > +++ b/tests/multinode.at
> > > @@ -1034,3 +1034,361 @@ done
> > >   M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ip route get 10.0.0.1 dev sw0p1 | grep -q 'mtu 942'])
> > >
> > >   AT_CLEANUP
> > > +
> > > +AT_SETUP([ovn provider network overlay])
> > > +
> > > +# Check that ovn-fake-multinode setup is up and running
> > > +check_fake_multinode_setup
> > > +
> > > +# Delete the multinode NB and OVS resources before starting the test.
> > > +cleanup_multinode_resources
> > > +
> > > +m_as ovn-chassis-1 ip link del sw0p1-p
> > > +m_as ovn-chassis-2 ip link del sw0p2-p
> > > +
> > > +# Reset geneve tunnels
> > > +for c in ovn-chassis-1 ovn-chassis-2 ovn-gw-1
> > > +do
> > > +    m_as $c ovs-vsctl set open . external-ids:ovn-encap-type=geneve
> > > +done
> > > +
> > > +OVS_WAIT_UNTIL([m_as ovn-chassis-1 ip link show | grep -q genev_sys])
> > > +OVS_WAIT_UNTIL([m_as ovn-chassis-2 ip link show | grep -q genev_sys])
> > > +OVS_WAIT_UNTIL([m_as ovn-gw-1 ip link show | grep -q genev_sys])
> > > +
> > > +# The goal of this test case is to verify that traffic works for
> > > +# E-W switching and routing when the logical switches have localnet ports
> > > +# and the option provider_network_overlay=true is set.  When this option
> > > +# is set, traffic is tunneled to the destination chassis instead of using
> > > +# localnet ports.
> > > +
> > > +check multinode_nbctl ls-add sw0
> > > +check multinode_nbctl lsp-add sw0 sw0-port1
> > > +check multinode_nbctl lsp-set-addresses sw0-port1 "50:54:00:00:00:03 10.0.0.3 1000::3"
> > > +check multinode_nbctl lsp-add sw0 sw0-port2
> > > +check multinode_nbctl lsp-set-addresses sw0-port2 "50:54:00:00:00:04 10.0.0.4 1000::4"
> > > +
> > > +m_as ovn-chassis-1 /data/create_fake_vm.sh sw0-port1 sw0p1 50:54:00:00:00:03 10.0.0.3 24 10.0.0.1 1000::3/64 1000::a
> > > +m_as ovn-chassis-2 /data/create_fake_vm.sh sw0-port2 sw0p2 50:54:00:00:00:04 10.0.0.4 24 10.0.0.1 1000::4/64 1000::a
> > > +
> > > +m_wait_for_ports_up
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 10.0.0.4 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +# Create the second logical switch with one port
> > > +check multinode_nbctl ls-add sw1
> > > +check multinode_nbctl lsp-add sw1 sw1-port1
> > > +check multinode_nbctl lsp-set-addresses sw1-port1 "40:54:00:00:00:03 20.0.0.3 2000::3"
> > > +
> > > +# Create a logical router and attach both logical switches
> > > +check multinode_nbctl lr-add lr0
> > > +check multinode_nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24 1000::a/64
> > > +check multinode_nbctl lsp-add sw0 sw0-lr0
> > > +check multinode_nbctl lsp-set-type sw0-lr0 router
> > > +check multinode_nbctl lsp-set-addresses sw0-lr0 router
> > > +check multinode_nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0
> > > +
> > > +check multinode_nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 20.0.0.1/24 2000::a/64
> > > +check multinode_nbctl lsp-add sw1 sw1-lr0
> > > +check multinode_nbctl lsp-set-type sw1-lr0 router
> > > +check multinode_nbctl lsp-set-addresses sw1-lr0 router
> > > +check multinode_nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1
> > > +
> > > +m_as ovn-chassis-2 /data/create_fake_vm.sh sw1-port1 sw1p1 40:54:00:00:00:03 20.0.0.3 24 20.0.0.1 2000::3/64 2000::a
> > > +
> > > +# create external connection for N/S traffic
> > > +check multinode_nbctl ls-add public
> > > +check multinode_nbctl lsp-add public ln-public
> > > +check multinode_nbctl lsp-set-type ln-public localnet
> > > +check multinode_nbctl lsp-set-addresses ln-public unknown
> > > +check multinode_nbctl lsp-set-options ln-public network_name=public
> > > +
> > > +check multinode_nbctl lrp-add lr0 lr0-public 00:11:22:00:ff:01 172.20.0.100/24
> > > +check multinode_nbctl lsp-add public public-lr0
> > > +check multinode_nbctl lsp-set-type public-lr0 router
> > > +check multinode_nbctl lsp-set-addresses public-lr0 router
> > > +check multinode_nbctl lsp-set-options public-lr0 router-port=lr0-public
> > > +check multinode_nbctl lrp-set-gateway-chassis lr0-public ovn-gw-1 10
> > > +check multinode_nbctl lr-route-add lr0 0.0.0.0/0 172.20.0.1
> > > +
> > > +check multinode_nbctl lr-nat-add lr0 snat 172.20.0.100 10.0.0.0/24
> > > +check multinode_nbctl lr-nat-add lr0 snat 172.20.0.100 20.0.0.0/24
> > > +
> > > +# create localnet ports for sw0 and sw1
> > > +check multinode_nbctl lsp-add sw0 ln-sw0
> > > +check multinode_nbctl lsp-set-type ln-sw0 localnet
> > > +check multinode_nbctl lsp-set-addresses ln-sw0 unknown
> > > +check multinode_nbctl lsp-set-options ln-sw0 network_name=public
> > > +check multinode_nbctl set logical_switch_port ln-sw0 tag_request=100
> > > +
> > > +check multinode_nbctl lsp-add sw1 ln-sw1
> > > +check multinode_nbctl lsp-set-type ln-sw1 localnet
> > > +check multinode_nbctl lsp-set-addresses ln-sw1 unknown
> > > +check multinode_nbctl lsp-set-options ln-sw1 network_name=public
> > > +check multinode_nbctl set logical_switch_port ln-sw1 tag_request=101
> > > +
> > > +check multinode_nbctl --wait=hv sync
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 10.0.0.4 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump | cut -d  ' ' -f2-22], [0], [dnl
> > > +50:54:00:00:00:03 > 50:54:00:00:00:04, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 10.0.0.3 > 10.0.0.4: ICMP echo request,
> > > +50:54:00:00:00:04 > 50:54:00:00:00:03, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 10.0.0.4 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump], [0], [dnl
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 20.0.0.3 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump | cut -d  ' ' -f2-22], [0], [dnl
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype 802.1Q (0x8100), length 102: vlan 101, p 0, ethertype IPv4 (0x0800), 10.0.0.3 > 20.0.0.3: ICMP echo request,
> > > +00:00:00:00:ff:01 > 50:54:00:00:00:03, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 20.0.0.3 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump], [0], [dnl
> > > +])
> > > +
> > > +# Set the option provider_network_overlay=true.
> > > +# Traffic from sw0p1 to sw0p2 should be tunneled.
> > > +check multinode_nbctl set NB_Global . options:provider_network_overlay=true
> > > +check multinode_nbctl --wait=hv sync
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 10.0.0.4 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +50:54:00:00:00:03 > 50:54:00:00:00:04, ethertype IPv4 (0x0800), length 98: 10.0.0.3 > 10.0.0.4: ICMP echo request,
> > > +50:54:00:00:00:04 > 50:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 10.0.0.4 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump], [0], [dnl
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 20.0.0.3 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 10.0.0.3 > 20.0.0.3: ICMP echo request,
> > > +00:00:00:00:ff:01 > 50:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 20.0.0.3 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump], [0], [dnl
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +# Delete ln-sw1.
> > > +check multinode_nbctl --wait=hv lsp-del ln-sw1
> > > +# Traffic from sw0p1 to sw1p1 should be tunneled.
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 20.0.0.3 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 10.0.0.3 > 20.0.0.3: ICMP echo request,
> > > +00:00:00:00:ff:01 > 50:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 20.0.0.3 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump], [0], [dnl
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +# Make sure that traffic from sw0 still goes out of localnet port
> > > +# for IPs not managed by OVN.
> > > +# Create a fake vm in br-ex on ovn-gw-1 with IP - 10.0.0.10
> > > +m_as ovn-gw-1 ip netns add sw0-p10
> > > +m_as ovn-gw-1 ovs-vsctl add-port br-ex sw0-p10 -- set interface sw0-p10 type=internal
> > > +m_as ovn-gw-1 ovs-vsctl set port sw0-p10 tag=100
> > > +m_as ovn-gw-1 ip link set sw0-p10 netns sw0-p10
> > > +m_as ovn-gw-1 ip netns exec sw0-p10 ip link set sw0-p10 up
> > > +m_as ovn-gw-1 ip netns exec sw0-p10 ip link set sw0-p10 address 32:31:8c:da:64:4f
> > > +m_as ovn-gw-1 ip netns exec sw0-p10 ip addr add 10.0.0.10/24 dev sw0-p10
> > > +
> > > +# Ping from sw0p1 (on ovn-chassis-1) to sw0-p10, which is in ovn-gw-1 on
> > > +# the external bridge.  The traffic path is
> > > +# sw0p1 -> br-int -> localnet port (vlan tagged 100) -> br-ex -> eth2 of ovn-chassis-1 to
> > > +# eth2 of ovn-gw-1 -> br-ex -> sw0-p10
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +M_START_TCPDUMP([ovn-gw-1], [-c 2 -neei eth2 icmp], [gw1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-1], [sw0p1], [ping -q -c 3 -i 0.3 -w 2 10.0.0.10 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +m_as ovn-gw-1 killall tcpdump
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump | cut -d  ' ' -f2-22], [0], [dnl
> > > +50:54:00:00:00:03 > 32:31:8c:da:64:4f, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 10.0.0.3 > 10.0.0.10: ICMP echo request,
> > > +32:31:8c:da:64:4f > 50:54:00:00:00:03, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 10.0.0.10 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump], [0], [dnl
> > > +
> > > +])
> > > +
> > > +AT_CHECK([cat gw1_eth2.tcpdump | cut -d  ' ' -f2-22], [0], [dnl
> > > +50:54:00:00:00:03 > 32:31:8c:da:64:4f, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 10.0.0.3 > 10.0.0.10: ICMP echo request,
> > > +32:31:8c:da:64:4f > 50:54:00:00:00:03, ethertype 802.1Q (0x8100), length 102: vlan 100, p 0, ethertype IPv4 (0x0800), 10.0.0.10 > 10.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +# Add dnat_and_snat entry for 10.0.0.3 <-> 172.20.0.110
> > > +check multinode_nbctl --wait=hv lr-nat-add lr0 dnat_and_snat 172.20.0.110 10.0.0.3 sw0-port1 30:54:00:00:00:03
> > > +
> > > +# Ping from sw1-p1 to 172.20.0.110
> > > +# Traffic path is
> > > +# sw1-p1 in ovn-chassis-2 -> tunnel -> ovn-gw-1 -> In ovn-gw-1 SNAT 20.0.0.3 to 172.20.0.100 ->
> > > +#  -> ln-public -> br-ex -> eth2 -> ovn-chassis-1 -> br-ex -> ln-public -> br-int ->
> > > +#  -> DNAT 172.20.0.110 to 10.0.0.3 -> sw0-p1 with src ip 172.20.0.100 and dst ip 10.0.0.3.
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-2], [-c 2 -neei genev_sys_6081 icmp], [ch2_genev])
> > > +M_START_TCPDUMP([ovn-chassis-2], [-c 2 -neei eth2 icmp], [ch2_eth2])
> > > +M_START_TCPDUMP([ovn-gw-1], [-c 2 -neei genev_sys_6081 icmp], [gw1_geneve])
> > > +M_START_TCPDUMP([ovn-gw-1], [-c 2 -neei eth2 icmp], [gw1_eth2])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-2], [sw1p1], [ping -q -c 3 -i 0.3 -w 2 172.20.0.110 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +m_as ovn-chassis-2 killall tcpdump
> > > +m_as ovn-gw-1 killall tcpdump
> > > +
> > > +AT_CHECK([cat ch2_genev.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:11:22:00:ff:01 > 30:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 20.0.0.3 > 172.20.0.110: ICMP echo request,
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.110 > 20.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat gw1_geneve.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:11:22:00:ff:01 > 30:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 20.0.0.3 > 172.20.0.110: ICMP echo request,
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.110 > 20.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat gw1_eth2.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:11:22:00:ff:01 > 30:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.100 > 172.20.0.110: ICMP echo request,
> > > +30:54:00:00:00:03 > 00:11:22:00:ff:01, ethertype IPv4 (0x0800), length 98: 172.20.0.110 > 172.20.0.100: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:11:22:00:ff:01 > 30:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.100 > 172.20.0.110: ICMP echo request,
> > > +30:54:00:00:00:03 > 00:11:22:00:ff:01, ethertype IPv4 (0x0800), length 98: 172.20.0.110 > 172.20.0.100: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump], [0], [dnl
> > > +
> > > +])
> > > +
> > > +rm -f *.tcpdump
> > > +rm -f *.stderr
> > > +
> > > +# Now clear the logical_port of the dnat_and_snat entry.  ovn-gw-1 should handle the DNAT.
> > > +check multinode_nbctl lr-nat-del lr0 dnat_and_snat 172.20.0.110
> > > +check multinode_nbctl --wait=hv lr-nat-add lr0 dnat_and_snat 172.20.0.110 10.0.0.3
> > > +# Ping from sw1-p1 to 172.20.0.110
> > > +# Traffic path is
> > > +# sw1-p1 in ovn-chassis-2 -> tunnel -> ovn-gw-1 -> In ovn-gw-1 SNAT 20.0.0.3 to 172.20.0.100 ->
> > > +#  DNAT 172.20.0.110 -> 10.0.0.3 -> tunnel -> ovn-chassis-1 -> br-int -> sw0p1
> > > +
> > > +M_START_TCPDUMP([ovn-chassis-2], [-c 2 -neei genev_sys_6081 icmp], [ch2_genev])
> > > +M_START_TCPDUMP([ovn-chassis-2], [-c 2 -neei eth2 icmp], [ch2_eth2])
> > > +M_START_TCPDUMP([ovn-gw-1], [-c 4 -neei genev_sys_6081 icmp], [gw1_geneve])
> > > +M_START_TCPDUMP([ovn-gw-1], [-c 4 -neei eth2 icmp], [gw1_eth2])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei genev_sys_6081 icmp], [ch1_genev])
> > > +M_START_TCPDUMP([ovn-chassis-1], [-c 2 -neei eth2 icmp], [ch1_eth2])
> > > +
> > > +M_NS_CHECK_EXEC([ovn-chassis-2], [sw1p1], [ping -q -c 3 -i 0.3 -w 2 172.20.0.110 | FORMAT_PING], \
> > > +[0], [dnl
> > > +3 packets transmitted, 3 received, 0% packet loss, time 0ms
> > > +])
> > > +
> > > +m_as ovn-chassis-1 killall tcpdump
> > > +m_as ovn-chassis-2 killall tcpdump
> > > +m_as ovn-gw-1 killall tcpdump
> > > +
> > > +AT_CHECK([cat ch2_genev.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:11:22:00:ff:01 > 00:11:22:00:ff:01, ethertype IPv4 (0x0800), length 98: 20.0.0.3 > 172.20.0.110: ICMP echo request,
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.110 > 20.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump], [0], [dnl
> > > +
> > > +])
> > > +
> > > +AT_CHECK([cat gw1_geneve.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:11:22:00:ff:01 > 00:11:22:00:ff:01, ethertype IPv4 (0x0800), length 98: 20.0.0.3 > 172.20.0.110: ICMP echo request,
> > > +00:00:00:00:ff:01 > 50:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.100 > 10.0.0.3: ICMP echo request,
> > > +00:11:22:00:ff:01 > 00:11:22:00:ff:01, ethertype IPv4 (0x0800), length 98: 10.0.0.3 > 172.20.0.100: ICMP echo reply,
> > > +00:00:00:00:ff:02 > 40:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.110 > 20.0.0.3: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat gw1_eth2.tcpdump], [0], [dnl
> > > +
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_genev.tcpdump | cut -d  ' ' -f2-15], [0], [dnl
> > > +00:00:00:00:ff:01 > 50:54:00:00:00:03, ethertype IPv4 (0x0800), length 98: 172.20.0.100 > 10.0.0.3: ICMP echo request,
> > > +00:11:22:00:ff:01 > 00:11:22:00:ff:01, ethertype IPv4 (0x0800), length 98: 10.0.0.3 > 172.20.0.100: ICMP echo reply,
> > > +])
> > > +
> > > +AT_CHECK([cat ch1_eth2.tcpdump], [0], [dnl
> > > +
> > > +])
> > > +
> > > +AT_CLEANUP
> > > diff --git a/tests/ovn.at b/tests/ovn.at
> > > index 061e7764e5..7a5082bdd1 100644
> > > --- a/tests/ovn.at
> > > +++ b/tests/ovn.at
> > > @@ -38276,3 +38276,159 @@ OVN_CLEANUP([hv1
> > >   ])
> > >   AT_CLEANUP
> > >   ])
> > > +
> > > +OVN_FOR_EACH_NORTHD([
> > > +AT_SETUP([Provider network overlay])
> > > +ovn_start
> > > +net_add n1
> > > +
> > > +for hv in 1 2; do
> > > +    sim_add hv${hv}
> > > +    as hv${hv}
> > > +    ovs-vsctl add-br br-phys
> > > +    ovn_attach n1 br-phys 192.168.0.${hv}
> > > +    ovs-vsctl set open . external_ids:ovn-bridge-mappings=physnet1:br-phys
> > > +done
> > > +
> > > +check ovn-nbctl ls-add sw0
> > > +check ovn-nbctl lsp-add sw0 sw0-p1 -- lsp-set-addresses sw0-p1 "00:00:10:01:02:03 10.0.0.3"
> > > +check ovn-nbctl lsp-add sw0 sw0-p2 -- lsp-set-addresses sw0-p2 "00:00:04:01:02:04 10.0.0.4"
> > > +
> > > +check ovn-nbctl ls-add sw1
> > > +check ovn-nbctl lsp-add sw1 sw1-p1 -- lsp-set-addresses sw1-p1 "00:00:20:01:02:03 20.0.0.3"
> > > +check ovn-nbctl lsp-add sw1 sw1-p2 -- lsp-set-addresses sw1-p2 "00:00:20:01:02:04 20.0.0.4"
> > > +
> > > +check ovn-nbctl lr-add lr0
> > > +check ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
> > > +check ovn-nbctl lsp-add sw0 sw0-lr0
> > > +check ovn-nbctl lsp-set-type sw0-lr0 router
> > > +check ovn-nbctl lsp-set-addresses sw0-lr0 router
> > > +check ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0
> > > +
> > > +check ovn-nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 20.0.0.1/24
> > > +check ovn-nbctl lsp-add sw1 sw1-lr0
> > > +check ovn-nbctl lsp-set-type sw1-lr0 router
> > > +check ovn-nbctl lsp-set-addresses sw1-lr0 router
> > > +check ovn-nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1
> > > +
> > > +as hv1
> > > +ovs-vsctl add-port br-int vif11 -- \
> > > +    set Interface vif11 external-ids:iface-id=sw0-p1 \
> > > +                              options:tx_pcap=hv1/vif11-tx.pcap \
> > > +                              options:rxq_pcap=hv1/vif11-rx.pcap \
> > > +                              ofport-request=11
> > > +ovs-vsctl add-port br-int vif12 -- \
> > > +    set Interface vif12 external-ids:iface-id=sw1-p1 \
> > > +                              options:tx_pcap=hv1/vif12-tx.pcap \
> > > +                              options:rxq_pcap=hv1/vif12-rx.pcap \
> > > +                              ofport-request=12
> > > +
> > > +as hv2
> > > +ovs-vsctl add-port br-int vif21 -- \
> > > +    set Interface vif21 external-ids:iface-id=sw0-p2 \
> > > +                              options:tx_pcap=hv2/vif21-tx.pcap \
> > > +                              options:rxq_pcap=hv2/vif21-rx.pcap \
> > > +                              ofport-request=21
> > > +ovs-vsctl add-port br-int vif22 -- \
> > > +    set Interface vif22 external-ids:iface-id=sw1-p2 \
> > > +                              options:tx_pcap=hv2/vif22-tx.pcap \
> > > +                              options:rxq_pcap=hv2/vif22-rx.pcap \
> > > +                              ofport-request=22
> > > +
> > > +check ovn-nbctl --wait=hv sync
> > > +wait_for_ports_up
> > > +
> > > +sw0_dp_key=$(printf "%x" $(fetch_column Datapath_Binding tunnel_key external_ids:name=sw0))
> > > +sw0p1_key=$(printf "%x" $(fetch_column Port_Binding tunnel_key logical_port=sw0-p1))
> > > +sw0p2_key=$(printf "%x" $(fetch_column Port_Binding tunnel_key logical_port=sw0-p2))
> > > +
> > > +sw1_dp_key=$(printf "%x" $(fetch_column Datapath_Binding tunnel_key external_ids:name=sw1))
> > > +sw1p1_key=$(printf "%x" $(fetch_column Port_Binding tunnel_key logical_port=sw1-p1))
> > > +sw1p2_key=$(printf "%x" $(fetch_column Port_Binding tunnel_key logical_port=sw1-p2))
> > > +
> > > +check_output_flows_tunnelled() {
> > > +  hv=$1
> > > +  dp_key=$2
> > > +  dp_rport=$3
> > > +  AT_CHECK_UNQUOTED([as $hv ovs-ofctl dump-flows br-int table=OFTABLE_REMOTE_OUTPUT,metadata=0x${dp_key},reg15=0x${dp_rport} | ofctl_strip_all | grep -v NXST_FLOW], [0], [dnl
> > > + table=OFTABLE_REMOTE_OUTPUT, priority=100,reg13=0/0xffff0000,reg15=0x${dp_rport},metadata=0x${dp_key} actions=load:0x${dp_key}->NXM_NX_TUN_ID[[0..23]],set_field:0x${dp_rport}->tun_metadata0,move:NXM_NX_REG14[[0..14]]->NXM_NX_TUN_METADATA0[[16..30]],output:1,resubmit(,OFTABLE_LOCAL_OUTPUT)
> > > +])
> > > +}
> > > +
> > > +check_output_flows_via_localnet() {
> > > +  hv=$1
> > > +  dp_key=$2
> > > +  dp_rport=$3
> > > +  lnport_key=$4
> > > +  AT_CHECK_UNQUOTED([as $hv ovs-ofctl dump-flows br-int table=OFTABLE_REMOTE_OUTPUT,metadata=0x${dp_key},reg15=0x${dp_rport} | ofctl_strip_all | grep -v NXST_FLOW], [1], [dnl
> > > +])
> > > +
> > > +  AT_CHECK_UNQUOTED([as $hv ovs-ofctl dump-flows br-int table=OFTABLE_LOCAL_OUTPUT,metadata=0x${dp_key},reg15=0x${dp_rport} | ofctl_strip_all | grep -v NXST_FLOW], [0], [dnl
> > > + table=OFTABLE_LOCAL_OUTPUT, priority=100,reg15=0x${dp_rport},metadata=0x${dp_key} actions=load:0x${lnport_key}->NXM_NX_REG15[[]],resubmit(,OFTABLE_LOCAL_OUTPUT)
> > > +])
> > > +}
> > > +
> > > +# There are no localnet ports in sw0 and sw1.  So the pkts are tunnelled.
> > > +check_output_flows_tunnelled hv1 ${sw0_dp_key} ${sw0p2_key}
> > > +check_output_flows_tunnelled hv1 ${sw1_dp_key} ${sw1p2_key}
> > > +check_output_flows_tunnelled hv2 ${sw0_dp_key} ${sw0p1_key}
> > > +check_output_flows_tunnelled hv2 ${sw1_dp_key} ${sw1p1_key}
> > > +
> > > +# Add localnet port to sw0
> > > +check ovn-nbctl lsp-add sw0 ln-sw0 -- lsp-set-addresses ln-sw0 unknown -- lsp-set-type ln-sw0 localnet
> > > +check ovn-nbctl --wait=hv lsp-set-options ln-sw0 network_name=physnet1 -- set logical_switch_port ln-sw0 tag_request=100
> > > +lnsw0_key=$(printf "%x" $(fetch_column Port_Binding tunnel_key logical_port=ln-sw0))
> > > +
> > > +# Flows should be installed to use localnet port for sw0.
> > > +check_output_flows_via_localnet hv1 ${sw0_dp_key} ${sw0p2_key} ${lnsw0_key}
> > > +check_output_flows_tunnelled hv1 ${sw1_dp_key} ${sw1p2_key}
> > > +check_output_flows_via_localnet hv2 ${sw0_dp_key} ${sw0p1_key} ${lnsw0_key}
> > > +check_output_flows_tunnelled hv2 ${sw1_dp_key} ${sw1p1_key}
> > > +
> > > +# Add localnet port to sw1
> > > +check ovn-nbctl lsp-add sw1 ln-sw1 -- lsp-set-addresses ln-sw1 unknown -- lsp-set-type ln-sw1 localnet
> > > +check ovn-nbctl --wait=hv lsp-set-options ln-sw1 network_name=physnet1 -- set logical_switch_port ln-sw1 tag_request=101
> > > +lnsw1_key=$(printf "%x" $(fetch_column Port_Binding tunnel_key logical_port=ln-sw1))
> > > +
> > > +# Flows should be installed to use localnet port.
> > > +check_output_flows_via_localnet hv1 ${sw0_dp_key} ${sw0p2_key} ${lnsw0_key}
> > > +check_output_flows_via_localnet hv1 ${sw1_dp_key} ${sw1p2_key} ${lnsw1_key}
> > > +check_output_flows_via_localnet hv2 ${sw0_dp_key} ${sw0p1_key} ${lnsw0_key}
> > > +check_output_flows_via_localnet hv2 ${sw1_dp_key} ${sw1p1_key} ${lnsw1_key}
> > > +
> > > +# Set the provider network overlay option to true.
> > > +check ovn-nbctl set NB_Global . options:provider_network_overlay=true
> > > +check ovn-nbctl --wait=hv sync
> > > +
> > > +# Flows should be installed to tunnel.
> > > +check_output_flows_tunnelled hv1 ${sw0_dp_key} ${sw0p2_key}
> > > +check_output_flows_tunnelled hv1 ${sw1_dp_key} ${sw1p2_key}
> > > +check_output_flows_tunnelled hv2 ${sw0_dp_key} ${sw0p1_key}
> > > +check_output_flows_tunnelled hv2 ${sw1_dp_key} ${sw1p1_key}
> > > +
> > > +# Set the provider network overlay option to false.
> > > +check ovn-nbctl set NB_Global . options:provider_network_overlay=false
> > > +check ovn-nbctl --wait=hv sync
> > > +
> > > +# Flows should be installed to use localnet port.
> > > +check_output_flows_via_localnet hv1 ${sw0_dp_key} ${sw0p2_key} ${lnsw0_key}
> > > +check_output_flows_via_localnet hv1 ${sw1_dp_key} ${sw1p2_key} ${lnsw1_key}
> > > +check_output_flows_via_localnet hv2 ${sw0_dp_key} ${sw0p1_key} ${lnsw0_key}
> > > +check_output_flows_via_localnet hv2 ${sw1_dp_key} ${sw1p1_key} ${lnsw1_key}
> > > +
> > > +check ovn-nbctl --wait=hv lsp-del ln-sw0
> > > +
> > > +# Flows should be installed to tunnel for sw0
> > > +check_output_flows_tunnelled hv1 ${sw0_dp_key} ${sw0p2_key}
> > > +check_output_flows_tunnelled hv2 ${sw0_dp_key} ${sw0p1_key}
> > > +
> > > +check ovn-nbctl --wait=hv lsp-del ln-sw1
> > > +# Flows should be installed to tunnel.
> > > +check_output_flows_tunnelled hv1 ${sw0_dp_key} ${sw0p2_key}
> > > +check_output_flows_tunnelled hv1 ${sw1_dp_key} ${sw1p2_key}
> > > +check_output_flows_tunnelled hv2 ${sw0_dp_key} ${sw0p1_key}
> > > +check_output_flows_tunnelled hv2 ${sw1_dp_key} ${sw1p1_key}
> > > +
> > > +OVN_CLEANUP([hv1],[hv2])
> > > +AT_CLEANUP
> > > +])
> >
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
