On Tue, Jun 14, 2022 at 4:19 PM Numan Siddique <[email protected]> wrote:
> > > > > > If I understand correctly, the major benefit of this feature is to
> > > > > > activate the port immediately when ready, without waiting for the
> > > > > > control plane to converge through SB DB. So I think here it
> > > > > > shouldn't check ovnsb_idl_txn for the list
> > > > > > "ports_to_activate_in_engine". It can be:
> > > > >
> > > > > Yes, that's the intended benefit. Though note the second test scenario
> > > > > included with the patch (currently skipped) where I attempt to
> > > > > demonstrate the scenario but fail: for a reason not immediately clear
> > > > > to me, having ovsdb-server down makes vswitchd not trigger the
> > > > > controller() action handler. I was hoping someone had an explanation
> > > > > for the behavior, but so far I have no idea whether that's a bug in
> > > > > vswitchd or elsewhere.
> > > > >
> > > >
> > > > For the test case, I think you intended to stop the SB DB, but the
> > > > test code actually stopped the local OVSDB. Does this explain the
> > > > behavior you were seeing? (sorry that I didn't review the test case
> > > > carefully)
> > > > To stop the SB DB, you should use something like:
> > > >
> > > > as ovn-sb
> > > > OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> > >
> > > Hi Ihar,
> > >
> > > As Han mentioned, is the intention to stop the Southbound ovsdb-server
> > > and see if ovn-controller handles packet-in properly for RARP?
> > >
> > > If so, would you like to respin another version?  I was about to
> > > apply this patch, so I'll wait for your reply.
> > >
> >
> > Yes, that's the intention, and I would be happy to respin with a version
> > of the test that demonstrates the proper behavior. But as I alluded to in
> > the other email, right now ovn-controller effectively stops all
> > incremental handling on an OVSDB commit failure, meaning flows are not
> > removed while the SB DB is down.
> >
> > That's a shame, because the original version of the patch, which directly
> > issued flow mod commands to vswitchd, worked as expected; now that we go
> > through the I-P engine, it doesn't.
> >
> > Let's understand what we can do about the engine issue before merging.
> > I don't think we are in a rush.
>
> I'm pretty sure we can do this with the I-P engine too.  To program
> flows we don't need ovnsb_idl_txn to be set, so I think it should be
> possible.  Looks like your patch already does it right?
>

This is included, yes: run_activated_ports() returns early when
ovnsb_idl_txn is NULL, while the engine-side list is handled
independently of the SB transaction.

> Let's say the SB ovsdb-server is down and libvirt generates a RARP.
> The packet is received by ovn-controller, and
> pinctrl_rarp_activation_strategy_handler() stores the dp_key and
> port_key in the lists ports_to_activate_in_db and
> ports_to_activate_in_engine.

Yes.

> The main thread wakes up and calls physical_handle_flows_for_lport().

It is woken up, but then this happens:

2022-06-15T00:40:25.724Z|00025|main|INFO|OVNSB commit failed, force recompute next time.

That stops execution of all incremental handlers. Instead, the main
thread is now looping, trying to reconnect:

2022-06-15T00:40:25.727Z|00026|reconnect|INFO|ssl:127.0.0.1:34503: connection closed by peer
2022-06-15T00:40:26.728Z|00027|reconnect|INFO|ssl:127.0.0.1:34503: connecting...
2022-06-15T00:40:26.728Z|00028|reconnect|INFO|ssl:127.0.0.1:34503: connection attempt failed (Connection refused)
2022-06-15T00:40:26.728Z|00029|reconnect|INFO|ssl:127.0.0.1:34503: waiting 2 seconds before reconnect
2022-06-15T00:40:28.731Z|00030|reconnect|INFO|ssl:127.0.0.1:34503: connecting...
2022-06-15T00:40:28.731Z|00031|reconnect|INFO|ssl:127.0.0.1:34503: connection attempt failed (Connection refused)
2022-06-15T00:40:28.731Z|00032|reconnect|INFO|ssl:127.0.0.1:34503: waiting 4 seconds before reconnect
2022-06-15T00:40:32.734Z|00033|memory|INFO|8348 kB peak resident set size after 10.4 seconds
2022-06-15T00:40:32.734Z|00034|memory|INFO|idl-cells:1378 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:3 lflow-cache-entries-cache-matches:59 lflow-cache-size-KB:5 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:25 ofctrl_installed_flow_usage-KB:18 ofctrl_sb_flow_ref_usage-KB:13
2022-06-15T00:40:32.734Z|00035|reconnect|INFO|ssl:127.0.0.1:34503: connecting...
2022-06-15T00:40:32.735Z|00036|reconnect|INFO|ssl:127.0.0.1:34503: connection attempt failed (Connection refused)
2022-06-15T00:40:32.735Z|00037|reconnect|INFO|ssl:127.0.0.1:34503: continuing to reconnect in the background but suppressing further logging

If I disable the recompute enforcement, then the I-P handler for
activated_ports is eventually triggered and flows are indeed removed.

--- a/controller/ovn-controller.c
+++ b/controller/ovn-controller.c
@@ -4190,7 +4190,7 @@ main(int argc, char *argv[])

         if (!ovsdb_idl_loop_commit_and_wait(&ovnsb_idl_loop)) {
             VLOG_INFO("OVNSB commit failed, force recompute next time.");
-            engine_set_force_recompute(true);
+            //engine_set_force_recompute(true);
         }

         int ovs_txn_status = ovsdb_idl_loop_commit_and_wait(&ovs_idl_loop);

I am not sure this call to engine_set_force_recompute is really needed.
I believe a recompute will be forced anyway when the SB IDL reconnects.
Perhaps we are also handling other commit failure types here, not only
those resulting from ovsdb-server going down. I'd love to hear from
others about the intent of this engine_set_force_recompute call.
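
To make the idea concrete, a minimal sketch (untested; assuming
ovsdb_idl_is_connected() is an appropriate probe here) would force the
recompute only when the commit failed while the session was still up:

    if (!ovsdb_idl_loop_commit_and_wait(&ovnsb_idl_loop)) {
        VLOG_INFO("OVNSB commit failed.");
        /* Untested assumption: on a disconnect, the recompute that
         * happens when the IDL reconnects should cover us, so only
         * force it while the session is still alive. */
        if (ovsdb_idl_is_connected(ovnsb_idl_loop.idl)) {
            engine_set_force_recompute(true);
        }
    }

That would keep the incremental handlers running while the SB is down,
yet still recompute on other kinds of transaction errors.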

Even with the above snippet applied, ovn-controller fails to exit
gracefully at the OVN_CLEANUP phase of the test case.

hv1: clean up sandbox
./ovn.at:15136: test -e $OVS_RUNDIR/ovn-controller.pid
./ovn.at:15136: ovs-appctl --timeout=10 -t ovn-controller exit
ovn.at:15136: waiting while kill -0 $TMPPID 2>/dev/null...
ovn.at:15136: wait failed after 30 seconds

This seems to be a problem unrelated to RARP: ovn-controller keeps
trying to reconnect to the same remote port of ovn-sb even after
ovn-sb's ovsdb-server is restarted. (The restarted server picks a new
port.) I've tried to restart the ovn-sb ovsdb-server before OVN_CLEANUP
as follows:

--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -15284,6 +15283,14 @@ echo $request >> hv1/first.expected

 check_packets

+as ovn-sb start_daemon ovsdb-server \
+        -vjsonrpc \
+        --remote=punix:$ovs_base/ovn-sb/$1.sock \
+        --remote=db:OVN_Southbound,SB_Global,connections \
+        --private-key=$PKIDIR/testpki-test-privkey.pem \
+        --certificate=$PKIDIR/testpki-test-cert.pem \
+        --ca-cert=$PKIDIR/testpki-cacert.pem \
+        $ovs_base/ovn-sb/ovn-sb.db
 OVN_CLEANUP([hv1],[hv2])

 AT_CLEANUP

This change starts ovsdb-server, but it doesn't make ovn-controller
reconnect to the new instance, even after I put a sleep 20 after it. It
keeps trying to reconnect to the same old remote port.
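
One untested idea I may try next: since the restarted server binds a new
port recorded in the Connection table, re-point the hypervisors at it
explicitly, e.g. (hypothetical sketch; assumes a single Connection row
and that the restarted server comes up on the same unix ctl socket):

    sb_port=$(ovn-sbctl get Connection . status:bound_port | tr -d '"')
    as hv1 check ovs-vsctl set Open_vSwitch . \
        external-ids:ovn-remote=ssl:127.0.0.1:$sb_port

This would only work around the stale ovn-remote, though, not explain
the reconnect behavior.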

> This function deletes all the flows in the physical flow table for the
> port_binding uuid.  The function pinctrl_is_port_activated() will
> return true, so the RARP flows are not added back, and ofctrl.c will
> delete the RARP flows from ovs-vswitchd.
>
> I think you can update the test as per Han's suggestion and test it out.
>

I'll send a new revision with the recommendation applied; even though
the test still doesn't pass, at least it's closer to what I had in mind
in the first place.
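
Concretely, the new revision will replace the kill -9 block in the
second test with Han's version, so that it is the Southbound DB that is
stopped rather than hv2's local ovsdb-server:

    # Stop the SB DB, not the hypervisor's local OVSDB
    as ovn-sb
    OVS_APP_EXIT_AND_WAIT([ovsdb-server])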

> Thanks
> Numan
>
>
>
> >
> > Ihar
> >
> > > Thanks
> > > Numan
> > >
> > > >
> > > > Thanks,
> > > > Han
> > > >
> > > > > >
> > > > > >     if ((ovnsb_idl_txn && !ovs_list_is_empty(&ports_to_activate_in_db)) ||
> > > > > >         !ovs_list_is_empty(ports_to_activate_in_engine)) {
> > > > > >
> > > > > > so that whenever ports_to_activate_in_engine is not empty, the I-P
> > > > > > engine run can be triggered and the blocking flows are removed ASAP.
> > > > > >
> > > > > > In addition, I wonder if the check for ports_to_activate_in_db is
> > > > > > really necessary.
> > > > > > - If the ovnsb_idl_txn was non-null in this iteration, the earlier
> > > > > > call to run_activated_ports() would have sent the updates to SB DB,
> > > > > > so there is no need for poll_immediate_wake() because the main loop
> > > > > > will wake up when the response comes back.
> > > > > > - If the ovnsb_idl_txn was null in this iteration, it means some
> > > > > > transaction is in progress, so no need for poll_immediate_wake()
> > > > > > either.
> > > > > >
> > > > > > So I think it can simply be:
> > > > > >     if (!ovs_list_is_empty(ports_to_activate_in_engine)) {
> > > > > >
> > > > >
> > > > > The semantics of ovnsb_idl_txn were not immediately clear to me, so I
> > > > > mimicked the other wait_* functions. I'll adjust.
> > > > >
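
For the record, with Han's simplification the hook shrinks to something
like this sketch (keeping the wait_activated_ports() name and the
pinctrl_mutex convention from the patch):

    static void
    wait_activated_ports(void)
        OVS_REQUIRES(pinctrl_mutex)
    {
        /* Wake the main loop whenever the engine-side list still has
         * pending ports, regardless of the SB transaction state. */
        if (!ovs_list_is_empty(ports_to_activate_in_engine)) {
            poll_immediate_wake();
        }
    }
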
> > > > > > > +        poll_immediate_wake();
> > > > > > > +    }
> > > > > > > +}
> > > > > > > +
> > > > > > > +bool pinctrl_is_port_activated(int64_t dp_key, int64_t port_key)
> > > > > > > +{
> > > > > > > +    const struct activated_port *pp;
> > > > > > > +    ovs_mutex_lock(&pinctrl_mutex);
> > > > > > > +    LIST_FOR_EACH (pp, list, &ports_to_activate_in_db) {
> > > > > > > +        if (pp->dp_key == dp_key && pp->port_key == port_key) {
> > > > > > > +            ovs_mutex_unlock(&pinctrl_mutex);
> > > > > > > +            return true;
> > > > > > > +        }
> > > > > > > +    }
> > > > > > > +    LIST_FOR_EACH (pp, list, ports_to_activate_in_engine) {
> > > > > > > +        if (pp->dp_key == dp_key && pp->port_key == port_key) {
> > > > > > > +            ovs_mutex_unlock(&pinctrl_mutex);
> > > > > > > +            return true;
> > > > > > > +        }
> > > > > > > +    }
> > > > > > > +    ovs_mutex_unlock(&pinctrl_mutex);
> > > > > > > +    return false;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void
> > > > > > > +run_activated_ports(struct ovsdb_idl_txn *ovnsb_idl_txn,
> > > > > > > +                    struct ovsdb_idl_index *sbrec_datapath_binding_by_key,
> > > > > > > +                    struct ovsdb_idl_index *sbrec_port_binding_by_key,
> > > > > > > +                    const struct sbrec_chassis *chassis)
> > > > > > > +    OVS_REQUIRES(pinctrl_mutex)
> > > > > > > +{
> > > > > > > +    if (!ovnsb_idl_txn) {
> > > > > > > +        return;
> > > > > > > +    }
> > > > > > > +
> > > > > > > +    struct activated_port *pp;
> > > > > > > +    LIST_FOR_EACH_SAFE (pp, list, &ports_to_activate_in_db) {
> > > > > > > +        const struct sbrec_port_binding *pb = lport_lookup_by_key(
> > > > > > > +            sbrec_datapath_binding_by_key, sbrec_port_binding_by_key,
> > > > > > > +            pp->dp_key, pp->port_key);
> > > > > > > +        if (!pb || lport_is_activated_by_activation_strategy(pb, chassis)) {
> > > > > > > +            ovs_list_remove(&pp->list);
> > > > > > > +            free(pp);
> > > > > > > +            continue;
> > > > > > > +        }
> > > > > > > +        const char *activated_chassis = smap_get(
> > > > > > > +            &pb->options, "additional-chassis-activated");
> > > > > > > +        char *activated_str;
> > > > > > > +        if (activated_chassis) {
> > > > > > > +            activated_str = xasprintf(
> > > > > > > +                "%s,%s", activated_chassis, chassis->name);
> > > > > > > +            sbrec_port_binding_update_options_setkey(
> > > > > > > +                pb, "additional-chassis-activated", activated_str);
> > > > > > > +            free(activated_str);
> > > > > > > +        } else {
> > > > > > > +            sbrec_port_binding_update_options_setkey(
> > > > > > > +                pb, "additional-chassis-activated", chassis->name);
> > > > > > > +        }
> > > > > >
> > > > > > I have a concern here, but I think it is ok to be addressed as a
> > > > > > TODO for the future: if ovn-controller is restarted after the RARP
> > > > > > but before the change is sent to SB DB, would ovn-controller still
> > > > > > *think* the port is not activated and still block it?
> > > > > >
> > > > >
> > > > > Yes, that's the case, though what would be the solution here?
> > > > > Persistence that is not backed by the remote db? File-based?..
> > > > >
> > > > > > Thanks again for the revisions. With the comment in
> > > > > > wait_activated_ports() addressed:
> > > > > > Acked-by: Han Zhou <[email protected]>
> > > > > >
> > > > > > Regards,
> > > > > > Han
> > > > > >
> > > > > > > +    }
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void
> > > > > > > +pinctrl_rarp_activation_strategy_handler(const struct match *md)
> > > > > > > +    OVS_REQUIRES(pinctrl_mutex)
> > > > > > > +{
> > > > > > > +    /* Tag the port as activated in-memory. */
> > > > > > > +    struct activated_port *pp = xmalloc(sizeof *pp);
> > > > > > > +    pp->port_key = md->flow.regs[MFF_LOG_INPORT - MFF_REG0];
> > > > > > > +    pp->dp_key = ntohll(md->flow.metadata);
> > > > > > > +    ovs_list_push_front(&ports_to_activate_in_db, &pp->list);
> > > > > > > +
> > > > > > > +    pp = xmalloc(sizeof *pp);
> > > > > > > +    pp->port_key = md->flow.regs[MFF_LOG_INPORT - MFF_REG0];
> > > > > > > +    pp->dp_key = ntohll(md->flow.metadata);
> > > > > > > +    ovs_list_push_front(ports_to_activate_in_engine, &pp->list);
> > > > > > > +
> > > > > > > +    /* Notify main thread on pending additional-chassis-activated updates. */
> > > > > > > +    notify_pinctrl_main();
> > > > > > > +}
> > > > > > > +
> > > > > > >  static struct hmap put_fdbs;
> > > > > > >
> > > > > > >  /* MAC learning (fdb) related functions.  Runs within the main
> > > > > > > diff --git a/controller/pinctrl.h b/controller/pinctrl.h
> > > > > > > index 88f18e983..0b6523baa 100644
> > > > > > > --- a/controller/pinctrl.h
> > > > > > > +++ b/controller/pinctrl.h
> > > > > > > @@ -20,6 +20,7 @@
> > > > > > >  #include <stdint.h>
> > > > > > >
> > > > > > >  #include "lib/sset.h"
> > > > > > > +#include "openvswitch/list.h"
> > > > > > >  #include "openvswitch/meta-flow.h"
> > > > > > >
> > > > > > >  struct hmap;
> > > > > > > @@ -33,6 +34,7 @@ struct sbrec_dns_table;
> > > > > > >  struct sbrec_controller_event_table;
> > > > > > >  struct sbrec_service_monitor_table;
> > > > > > >  struct sbrec_bfd_table;
> > > > > > > +struct sbrec_port_binding;
> > > > > > >
> > > > > > >  void pinctrl_init(void);
> > > > > > >  void pinctrl_run(struct ovsdb_idl_txn *ovnsb_idl_txn,
> > > > > > > @@ -56,4 +58,13 @@ void pinctrl_run(struct ovsdb_idl_txn *ovnsb_idl_txn,
> > > > > > >  void pinctrl_wait(struct ovsdb_idl_txn *ovnsb_idl_txn);
> > > > > > >  void pinctrl_destroy(void);
> > > > > > >  void pinctrl_set_br_int_name(char *br_int_name);
> > > > > > > +
> > > > > > > +struct activated_port {
> > > > > > > +    uint32_t dp_key;
> > > > > > > +    uint32_t port_key;
> > > > > > > +    struct ovs_list list;
> > > > > > > +};
> > > > > > > +
> > > > > > > +struct ovs_list *get_ports_to_activate_in_engine(void);
> > > > > > > +bool pinctrl_is_port_activated(int64_t dp_key, int64_t port_key);
> > > > > > >  #endif /* controller/pinctrl.h */
> > > > > > > diff --git a/include/ovn/actions.h b/include/ovn/actions.h
> > > > > > > index 1ae496960..33c319f1c 100644
> > > > > > > --- a/include/ovn/actions.h
> > > > > > > +++ b/include/ovn/actions.h
> > > > > > > @@ -683,6 +683,9 @@ enum action_opcode {
> > > > > > >      /* put_fdb(inport, eth.src).
> > > > > > >       */
> > > > > > >      ACTION_OPCODE_PUT_FDB,
> > > > > > > +
> > > > > > > +    /* activation_strategy_rarp() */
> > > > > > > +    ACTION_OPCODE_ACTIVATION_STRATEGY_RARP,
> > > > > > >  };
> > > > > > >
> > > > > > >  /* Header. */
> > > > > > > diff --git a/northd/northd.c b/northd/northd.c
> > > > > > > index 0d6ebccde..4d6193589 100644
> > > > > > > --- a/northd/northd.c
> > > > > > > +++ b/northd/northd.c
> > > > > > > @@ -3499,6 +3499,16 @@ ovn_port_update_sbrec(struct northd_input *input_data,
> > > > > > >                  smap_add(&options, "vlan-passthru", "true");
> > > > > > >              }
> > > > > > >
> > > > > > > +            /* Retain activated chassis flags. */
> > > > > > > +            if (op->sb->requested_additional_chassis) {
> > > > > > > +                const char *activated_str = smap_get(
> > > > > > > +                    &op->sb->options, "additional-chassis-activated");
> > > > > > > +                if (activated_str) {
> > > > > > > +                    smap_add(&options, "additional-chassis-activated",
> > > > > > > +                             activated_str);
> > > > > > > +                }
> > > > > > > +            }
> > > > > > > +
> > > > > > >              sbrec_port_binding_set_options(op->sb, &options);
> > > > > > >              smap_destroy(&options);
> > > > > > >              if (ovn_is_known_nb_lsp_type(op->nbsp->type)) {
> > > > > > > diff --git a/northd/ovn-northd.c b/northd/ovn-northd.c
> > > > > > > index e4e980720..ab28756af 100644
> > > > > > > --- a/northd/ovn-northd.c
> > > > > > > +++ b/northd/ovn-northd.c
> > > > > > > @@ -107,7 +107,10 @@ static const char *rbac_port_binding_auth[] =
> > > > > > >  static const char *rbac_port_binding_update[] =
> > > > > > >      {"chassis", "additional_chassis",
> > > > > > >       "encap", "additional_encap",
> > > > > > > -     "up", "virtual_parent"};
> > > > > > > +     "up", "virtual_parent",
> > > > > > > +     /* NOTE: we only need to update the
> > > > > > > +      * additional-chassis-activated key, but RBAC_Role doesn't
> > > > > > > +      * support mutate operation for subkeys. */
> > > > > > > +     "options"};
> > > > > > >
> > > > > > >  static const char *rbac_mac_binding_auth[] =
> > > > > > >      {""};
> > > > > > > diff --git a/ovn-nb.xml b/ovn-nb.xml
> > > > > > > index 14a624c16..9c09de8d8 100644
> > > > > > > --- a/ovn-nb.xml
> > > > > > > +++ b/ovn-nb.xml
> > > > > > > @@ -1052,6 +1052,17 @@
> > > > > > >            </p>
> > > > > > >          </column>
> > > > > > >
> > > > > > > +        <column name="options" key="activation-strategy">
> > > > > > > +          If used with multiple chassis set in
> > > > > > > +          <ref column="requested-chassis"/>, specifies an activation
> > > > > > > +          strategy for all additional chassis. By default, no
> > > > > > > +          activation strategy is used, meaning additional port
> > > > > > > +          locations are immediately available for use. When set to
> > > > > > > +          "rarp", the port is blocked for ingress and egress
> > > > > > > +          communication until a RARP packet is sent from a new
> > > > > > > +          location. The "rarp" strategy is useful in live migration
> > > > > > > +          scenarios for virtual machines.
> > > > > > > +        </column>
> > > > > > > +
> > > > > > >          <column name="options" key="iface-id-ver">
> > > > > > >            If set, this port will be bound by <code>ovn-controller</code>
> > > > > > >            only if this same key and value is configured in the
> > > > > > > diff --git a/ovn-sb.xml b/ovn-sb.xml
> > > > > > > index 898f3676a..59ad3aa2d 100644
> > > > > > > --- a/ovn-sb.xml
> > > > > > > +++ b/ovn-sb.xml
> > > > > > > @@ -3374,6 +3374,21 @@ tcp.flags = RST;
> > > > > > >          </p>
> > > > > > >        </column>
> > > > > > >
> > > > > > > +      <column name="options" key="activation-strategy">
> > > > > > > +        If used with multiple chassis set in
> > > > > > > +        <ref column="requested-chassis"/>, specifies an activation
> > > > > > > +        strategy for all additional chassis. By default, no activation
> > > > > > > +        strategy is used, meaning additional port locations are
> > > > > > > +        immediately available for use. When set to "rarp", the port is
> > > > > > > +        blocked for ingress and egress communication until a RARP
> > > > > > > +        packet is sent from a new location. The "rarp" strategy is
> > > > > > > +        useful in live migration scenarios for virtual machines.
> > > > > > > +      </column>
> > > > > > > +
> > > > > > > +      <column name="options" key="additional-chassis-activated">
> > > > > > > +        When <ref column="activation-strategy"/> is set, this option
> > > > > > > +        indicates that the port was activated using the strategy
> > > > > > > +        specified.
> > > > > > > +      </column>
> > > > > > > +
> > > > > > >        <column name="options" key="iface-id-ver">
> > > > > > >          If set, this port will be bound by <code>ovn-controller</code>
> > > > > > >          only if this same key and value is configured in the
> > > > > > > diff --git a/tests/ovn.at b/tests/ovn.at
> > > > > > > index 59d51f3e0..3215e9dc2 100644
> > > > > > > --- a/tests/ovn.at
> > > > > > > +++ b/tests/ovn.at
> > > > > > > @@ -14924,6 +14924,371 @@ OVN_CLEANUP([hv1],[hv2],[hv3])
> > > > > > >  AT_CLEANUP
> > > > > > >  ])
> > > > > > >
> > > > > > > +OVN_FOR_EACH_NORTHD([
> > > > > > > +AT_SETUP([options:activation-strategy for logical port])
> > > > > > > +ovn_start
> > > > > > > +
> > > > > > > +net_add n1
> > > > > > > +
> > > > > > > +sim_add hv1
> > > > > > > +as hv1
> > > > > > > +check ovs-vsctl add-br br-phys
> > > > > > > +ovn_attach n1 br-phys 192.168.0.11
> > > > > > > +
> > > > > > > +sim_add hv2
> > > > > > > +as hv2
> > > > > > > +check ovs-vsctl add-br br-phys
> > > > > > > +ovn_attach n1 br-phys 192.168.0.12
> > > > > > > +
> > > > > > > +sim_add hv3
> > > > > > > +as hv3
> > > > > > > +check ovs-vsctl add-br br-phys
> > > > > > > +ovn_attach n1 br-phys 192.168.0.13
> > > > > > > +
> > > > > > > +# Disable local ARP responder to pass ARP requests through tunnels
> > > > > > > +check ovn-nbctl ls-add ls0 -- add Logical_Switch ls0 other_config vlan-passthru=true
> > > > > > > +
> > > > > > > +check ovn-nbctl lsp-add ls0 migrator
> > > > > > > +check ovn-nbctl lsp-set-options migrator requested-chassis=hv1,hv2 \
> > > > > > > +                                         activation-strategy=rarp
> > > > > > > +
> > > > > > > +check ovn-nbctl lsp-add ls0 first
> > > > > > > +check ovn-nbctl lsp-set-options first requested-chassis=hv1
> > > > > > > +check ovn-nbctl lsp-add ls0 second
> > > > > > > +check ovn-nbctl lsp-set-options second requested-chassis=hv2
> > > > > > > +check ovn-nbctl lsp-add ls0 outside
> > > > > > > +check ovn-nbctl lsp-set-options outside requested-chassis=hv3
> > > > > > > +
> > > > > > > +check ovn-nbctl lsp-set-addresses migrator "00:00:00:00:00:10 10.0.0.10"
> > > > > > > +check ovn-nbctl lsp-set-addresses first "00:00:00:00:00:01 10.0.0.1"
> > > > > > > +check ovn-nbctl lsp-set-addresses second "00:00:00:00:00:02 10.0.0.2"
> > > > > > > +check ovn-nbctl lsp-set-addresses outside "00:00:00:00:00:03 10.0.0.3"
> > > > > > > +
> > > > > > > +for hv in hv1 hv2; do
> > > > > > > +    as $hv check ovs-vsctl -- add-port br-int migrator -- \
> > > > > > > +        set Interface migrator external-ids:iface-id=migrator \
> > > > > > > +                               options:tx_pcap=$hv/migrator-tx.pcap \
> > > > > > > +                               options:rxq_pcap=$hv/migrator-rx.pcap
> > > > > > > +done
> > > > > > > +
> > > > > > > +as hv1 check ovs-vsctl -- add-port br-int first -- \
> > > > > > > +    set Interface first external-ids:iface-id=first
> > > > > > > +as hv2 check ovs-vsctl -- add-port br-int second -- \
> > > > > > > +    set Interface second external-ids:iface-id=second
> > > > > > > +as hv3 check ovs-vsctl -- add-port br-int outside -- \
> > > > > > > +    set Interface outside external-ids:iface-id=outside
> > > > > > > +
> > > > > > > +for hv in hv1 hv2 hv3; do
> > > > > > > +    wait_row_count Chassis 1 name=$hv
> > > > > > > +done
> > > > > > > +hv1_uuid=$(fetch_column Chassis _uuid name=hv1)
> > > > > > > +hv2_uuid=$(fetch_column Chassis _uuid name=hv2)
> > > > > > > +hv3_uuid=$(fetch_column Chassis _uuid name=hv3)
> > > > > > > +
> > > > > > > +wait_column "$hv1_uuid" Port_Binding chassis logical_port=migrator
> > > > > > > +wait_column "$hv1_uuid" Port_Binding requested_chassis logical_port=migrator
> > > > > > > +wait_column "$hv2_uuid" Port_Binding additional_chassis logical_port=migrator
> > > > > > > +wait_column "$hv2_uuid" Port_Binding requested_additional_chassis logical_port=migrator
> > > > > > > +
> > > > > > > +wait_column "$hv1_uuid" Port_Binding chassis logical_port=first
> > > > > > > +wait_column "$hv2_uuid" Port_Binding chassis logical_port=second
> > > > > > > +wait_column "$hv3_uuid" Port_Binding chassis logical_port=outside
> > > > > > > +
> > > > > > > +OVN_POPULATE_ARP
> > > > > > > +
> > > > > > > +send_arp() {
> > > > > > > +    local hv=$1 inport=$2 eth_src=$3 eth_dst=$4 spa=$5 tpa=$6
> > > > > > > +    local request=${eth_dst}${eth_src}08060001080006040001${eth_src}${spa}${eth_dst}${tpa}
> > > > > > > +    as ${hv} ovs-appctl netdev-dummy/receive $inport $request
> > > > > > > +    echo "${request}"
> > > > > > > +}
> > > > > > > +
> > > > > > > +send_rarp() {
> > > > > > > +    local hv=$1 inport=$2 eth_src=$3 eth_dst=$4 spa=$5 tpa=$6
> > > > > > > +    local request=${eth_dst}${eth_src}80350001080006040001${eth_src}${spa}${eth_dst}${tpa}
> > > > > > > +    as ${hv} ovs-appctl netdev-dummy/receive $inport $request
> > > > > > > +    echo "${request}"
> > > > > > > +}
> > > > > > > +
> > > > > > > +reset_pcap_file() {
> > > > > > > +    local hv=$1
> > > > > > > +    local iface=$2
> > > > > > > +    local pcap_file=$3
> > > > > > > +    as $hv check ovs-vsctl -- set Interface $iface options:tx_pcap=dummy-tx.pcap \
> > > > > > > +                                                   options:rxq_pcap=dummy-rx.pcap
> > > > > > > +    check rm -f ${pcap_file}*.pcap
> > > > > > > +    as $hv check ovs-vsctl -- set Interface $iface options:tx_pcap=${pcap_file}-tx.pcap \
> > > > > > > +                                                   options:rxq_pcap=${pcap_file}-rx.pcap
> > > > > > > +}
> > > > > > > +
> > > > > > > +reset_env() {
> > > > > > > +    reset_pcap_file hv1 migrator hv1/migrator
> > > > > > > +    reset_pcap_file hv2 migrator hv2/migrator
> > > > > > > +    reset_pcap_file hv1 first hv1/first
> > > > > > > +    reset_pcap_file hv2 second hv2/second
> > > > > > > +    reset_pcap_file hv3 outside hv3/outside
> > > > > > > +
> > > > > > > +    for port in hv1/migrator hv2/migrator hv1/first hv2/second hv3/outside; do
> > > > > > > +        : > $port.expected
> > > > > > > +    done
> > > > > > > +}
> > > > > > > +
> > > > > > > +check_packets() {
> > > > > > > +    OVN_CHECK_PACKETS([hv1/migrator-tx.pcap], [hv1/migrator.expected])
> > > > > > > +    OVN_CHECK_PACKETS([hv2/migrator-tx.pcap], [hv2/migrator.expected])
> > > > > > > +    OVN_CHECK_PACKETS([hv3/outside-tx.pcap], [hv3/outside.expected])
> > > > > > > +    OVN_CHECK_PACKETS([hv1/first-tx.pcap], [hv1/first.expected])
> > > > > > > +    OVN_CHECK_PACKETS([hv2/second-tx.pcap], [hv2/second.expected])
> > > > > > > +}
> > > > > > > +
> > > > > > > +migrator_spa=$(ip_to_hex 10 0 0 10)
> > > > > > > +first_spa=$(ip_to_hex 10 0 0 1)
> > > > > > > +second_spa=$(ip_to_hex 10 0 0 2)
> > > > > > > +outside_spa=$(ip_to_hex 10 0 0 3)
> > > > > > > +
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# Packet from hv3:Outside arrives to hv1:Migrator
> > > > > > > +# hv3:Outside cannot reach hv2:Migrator because it is blocked by RARP strategy
> > > > > > > +request=$(send_arp hv3 outside 000000000003 000000000010 $outside_spa $migrator_spa)
> > > > > > > +echo $request >> hv1/migrator.expected
> > > > > > > +
> > > > > > > +# Packet from hv1:First arrives to hv1:Migrator
> > > > > > > +# hv1:First cannot reach hv2:Migrator because it is blocked by RARP strategy
> > > > > > > +request=$(send_arp hv1 first 000000000001 000000000010 $first_spa $migrator_spa)
> > > > > > > +echo $request >> hv1/migrator.expected
> > > > > > > +
> > > > > > > +# Packet from hv2:Second arrives to hv1:Migrator
> > > > > > > +# hv2:Second cannot reach hv2:Migrator because it is blocked by RARP strategy
> > > > > > > +request=$(send_arp hv2 second 000000000002 000000000010 $second_spa $migrator_spa)
> > > > > > > +echo $request >> hv1/migrator.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# Packet from hv1:Migrator arrives to hv3:Outside
> > > > > > > +request=$(send_arp hv1 migrator 000000000010 000000000003 $migrator_spa $outside_spa)
> > > > > > > +echo $request >> hv3/outside.expected
> > > > > > > +
> > > > > > > +# Packet from hv1:Migrator arrives to hv1:First
> > > > > > > +request=$(send_arp hv1 migrator 000000000010 000000000001 $migrator_spa $first_spa)
> > > > > > > +echo $request >> hv1/first.expected
> > > > > > > +
> > > > > > > +# Packet from hv1:Migrator arrives to hv2:Second
> > > > > > > +request=$(send_arp hv1 migrator 000000000010 000000000002 $migrator_spa $second_spa)
> > > > > > > +echo $request >> hv2/second.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# hv2:Migrator cannot reach hv3:Outside because it is blocked by RARP strategy
> > > > > > > +request=$(send_arp hv2 migrator 000000000010 000000000003 $migrator_spa $outside_spa)
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +AT_CHECK([ovn-sbctl find port_binding logical_port=migrator | grep -q additional-chassis-activated], [1])
> > > > > > > +
> > > > > > > +# Now activate hv2:Migrator location
> > > > > > > +request=$(send_rarp hv2 migrator 000000000010 ffffffffffff $migrator_spa $migrator_spa)
> > > > > > > +
> > > > > > > +# RARP was reinjected into the pipeline
> > > > > > > +echo $request >> hv3/outside.expected
> > > > > > > +echo $request >> hv1/first.expected
> > > > > > > +echo $request >> hv2/second.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +pb_uuid=$(ovn-sbctl --bare --columns _uuid find Port_Binding logical_port=migrator)
> > > > > > > +OVS_WAIT_UNTIL([test xhv2 = x$(ovn-sbctl get Port_Binding $pb_uuid options:additional-chassis-activated | tr -d '""')])
> > > > > > > +
> > > > > > > +# Now packet arrives to both locations
> > > > > > > +request=$(send_arp hv3 outside 000000000003 000000000010 $outside_spa $migrator_spa)
> > > > > > > +echo $request >> hv1/migrator.expected
> > > > > > > +echo $request >> hv2/migrator.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# Packet from hv1:Migrator still arrives to hv3:Outside
> > > > > > > +request=$(send_arp hv1 migrator 000000000010 000000000003 $migrator_spa $outside_spa)
> > > > > > > +echo $request >> hv3/outside.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# hv2:Migrator can now reach hv3:Outside because RARP strategy activated it
> > > > > > > +request=$(send_arp hv2 migrator 000000000010 000000000003 $migrator_spa $outside_spa)
> > > > > > > +echo $request >> hv3/outside.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +
> > > > > > > +# complete port migration and check that -activated flag is reset
> > > > > > > +check ovn-nbctl lsp-set-options migrator requested-chassis=hv2
> > > > > > > +OVS_WAIT_UNTIL([test x = x$(ovn-sbctl get Port_Binding $pb_uuid options:additional-chassis-activated)])
> > > > > > > +
> > > > > > > +OVN_CLEANUP([hv1],[hv2],[hv3])
> > > > > > > +
> > > > > > > +AT_CLEANUP
> > > > > > > +])
> > > > > > > +
> > > > > > > +OVN_FOR_EACH_NORTHD([
> > > > > > > +AT_SETUP([options:activation-strategy=rarp is not waiting for southbound db])
> > > > > > > +# TODO: remove it when we find a way to make vswitchd forward packets to
> > > > > > > +# controller() handler when ovsdb-server is down
> > > > > > > +AT_SKIP_IF([true])
> > > > > > > +ovn_start
> > > > > > > +
> > > > > > > +net_add n1
> > > > > > > +
> > > > > > > +sim_add hv1
> > > > > > > +as hv1
> > > > > > > +check ovs-vsctl add-br br-phys
> > > > > > > +ovn_attach n1 br-phys 192.168.0.11
> > > > > > > +
> > > > > > > +sim_add hv2
> > > > > > > +as hv2
> > > > > > > +check ovs-vsctl add-br br-phys
> > > > > > > +ovn_attach n1 br-phys 192.168.0.12
> > > > > > > +
> > > > > > > +# Disable local ARP responder to pass ARP requests through tunnels
> > > > > > > +check ovn-nbctl ls-add ls0 -- add Logical_Switch ls0 other_config vlan-passthru=true
> > > > > > > +
> > > > > > > +check ovn-nbctl lsp-add ls0 migrator
> > > > > > > +check ovn-nbctl lsp-set-options migrator requested-chassis=hv1,hv2 \
> > > > > > > +                                         activation-strategy=rarp
> > > > > > > +
> > > > > > > +check ovn-nbctl lsp-add ls0 first
> > > > > > > +check ovn-nbctl lsp-set-options first requested-chassis=hv1
> > > > > > > +
> > > > > > > +check ovn-nbctl lsp-set-addresses migrator "00:00:00:00:00:10 10.0.0.10"
> > > > > > > +check ovn-nbctl lsp-set-addresses first "00:00:00:00:00:01 10.0.0.1"
> > > > > > > +
> > > > > > > +for hv in hv1 hv2; do
> > > > > > > +    as $hv check ovs-vsctl -- add-port br-int migrator -- \
> > > > > > > +        set Interface migrator external-ids:iface-id=migrator \
> > > > > > > +                               options:tx_pcap=$hv/migrator-tx.pcap \
> > > > > > > +                               options:rxq_pcap=$hv/migrator-rx.pcap
> > > > > > > +done
> > > > > > > +
> > > > > > > +as hv1 check ovs-vsctl -- add-port br-int first -- \
> > > > > > > +    set Interface first external-ids:iface-id=first
> > > > > > > +
> > > > > > > +for hv in hv1 hv2; do
> > > > > > > +    wait_row_count Chassis 1 name=$hv
> > > > > > > +done
> > > > > > > +hv1_uuid=$(fetch_column Chassis _uuid name=hv1)
> > > > > > > +hv2_uuid=$(fetch_column Chassis _uuid name=hv2)
> > > > > > > +
> > > > > > > +wait_column "$hv1_uuid" Port_Binding chassis logical_port=migrator
> > > > > > > +wait_column "$hv1_uuid" Port_Binding requested_chassis logical_port=migrator
> > > > > > > +wait_column "$hv2_uuid" Port_Binding additional_chassis logical_port=migrator
> > > > > > > +wait_column "$hv2_uuid" Port_Binding requested_additional_chassis logical_port=migrator
> > > > > > > +
> > > > > > > +wait_column "$hv1_uuid" Port_Binding chassis logical_port=first
> > > > > > > +
> > > > > > > +OVN_POPULATE_ARP
> > > > > > > +
> > > > > > > +send_arp() {
> > > > > > > +    local hv=$1 inport=$2 eth_src=$3 eth_dst=$4 spa=$5 tpa=$6
> > > > > > > +    local request=${eth_dst}${eth_src}08060001080006040001${eth_src}${spa}${eth_dst}${tpa}
> > > > > > > +    as ${hv} ovs-appctl netdev-dummy/receive $inport $request
> > > > > > > +    echo "${request}"
> > > > > > > +}
> > > > > > > +
> > > > > > > +send_rarp() {
> > > > > > > +    local hv=$1 inport=$2 eth_src=$3 eth_dst=$4 spa=$5 tpa=$6
> > > > > > > +    local request=${eth_dst}${eth_src}80350001080006040001${eth_src}${spa}${eth_dst}${tpa}
> > > > > > > +    as ${hv} ovs-appctl netdev-dummy/receive $inport $request
> > > > > > > +    echo "${request}"
> > > > > > > +}
> > > > > > > +
> > > > > > > +reset_pcap_file() {
> > > > > > > +    local hv=$1
> > > > > > > +    local iface=$2
> > > > > > > +    local pcap_file=$3
> > > > > > > +    as $hv check ovs-vsctl -- set Interface $iface options:tx_pcap=dummy-tx.pcap \
> > > > > > > +                                                   options:rxq_pcap=dummy-rx.pcap
> > > > > > > +    check rm -f ${pcap_file}*.pcap
> > > > > > > +    as $hv check ovs-vsctl -- set Interface $iface options:tx_pcap=${pcap_file}-tx.pcap \
> > > > > > > +                                                   options:rxq_pcap=${pcap_file}-rx.pcap
> > > > > > > +}
> > > > > > > +
> > > > > > > +reset_env() {
> > > > > > > +    reset_pcap_file hv1 migrator hv1/migrator
> > > > > > > +    reset_pcap_file hv2 migrator hv2/migrator
> > > > > > > +    reset_pcap_file hv1 first hv1/first
> > > > > > > +
> > > > > > > +    for port in hv1/migrator hv2/migrator hv1/first; do
> > > > > > > +        : > $port.expected
> > > > > > > +    done
> > > > > > > +}
> > > > > > > +
> > > > > > > +check_packets() {
> > > > > > > +    OVN_CHECK_PACKETS([hv1/migrator-tx.pcap], [hv1/migrator.expected])
> > > > > > > +    OVN_CHECK_PACKETS([hv2/migrator-tx.pcap], [hv2/migrator.expected])
> > > > > > > +    OVN_CHECK_PACKETS([hv1/first-tx.pcap], [hv1/first.expected])
> > > > > > > +}
> > > > > > > +
> > > > > > > +migrator_spa=$(ip_to_hex 10 0 0 10)
> > > > > > > +first_spa=$(ip_to_hex 10 0 0 1)
> > > > > > > +
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# Packet from hv1:First arrives to hv1:Migrator
> > > > > > > +# hv1:First cannot reach hv2:Migrator because it is blocked by RARP strategy
> > > > > > > +request=$(send_arp hv1 first 000000000001 000000000010 $first_spa $migrator_spa)
> > > > > > > +echo $request >> hv1/migrator.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# Packet from hv1:Migrator arrives to hv1:First
> > > > > > > +request=$(send_arp hv1 migrator 000000000010 000000000001 $migrator_spa $first_spa)
> > > > > > > +echo $request >> hv1/first.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# hv2:Migrator cannot reach hv1:First because it is blocked by RARP strategy
> > > > > > > +request=$(send_arp hv2 migrator 000000000010 000000000001 $migrator_spa $first_spa)
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +reset_env
> > > > > > > +
> > > > > > > +# Before proceeding, stop ovsdb-server to make sure we test in the
> > > > > > > +# environment that can't remove flows triggered by updates to database
> > > > > > > +as hv2
> > > > > > > +SVCPID=$(cat $OVS_RUNDIR/ovsdb-server.pid)
> > > > > > > +kill -9 $SVCPID
> > > > > > > +
> > > > > > > +# Now activate hv2:Migrator location
> > > > > > > +request=$(send_rarp hv2 migrator 000000000010 ffffffffffff $migrator_spa $migrator_spa)
> > > > > > > +
> > > > > > > +# RARP was reinjected into the pipeline
> > > > > > > +echo $request >> hv1/first.expected
> > > > > > > +
> > > > > > > +# Now packet from hv1:First arrives to both locations
> > > > > > > +request=$(send_arp hv1 first 000000000001 000000000010 $first_spa $migrator_spa)
> > > > > > > +echo $request >> hv1/migrator.expected
> > > > > > > +echo $request >> hv2/migrator.expected
> > > > > > > +
> > > > > > > +# Packet from hv1:Migrator still arrives to hv1:First
> > > > > > > +request=$(send_arp hv1 migrator 000000000010 000000000001 $migrator_spa $first_spa)
> > > > > > > +echo $request >> hv1/first.expected
> > > > > > > +
> > > > > > > +# hv2:Migrator can now reach hv1:First because RARP strategy activated it
> > > > > > > +request=$(send_arp hv2 migrator 000000000010 000000000001 $migrator_spa $first_spa)
> > > > > > > +echo $request >> hv1/first.expected
> > > > > > > +
> > > > > > > +check_packets
> > > > > > > +
> > > > > > > +OVN_CLEANUP([hv1],[hv2])
> > > > > > > +
> > > > > > > +AT_CLEANUP
> > > > > > > +])
> > > > > > > +
> > > > > > >  OVN_FOR_EACH_NORTHD([
> > > > > > >  AT_SETUP([options:requested-chassis for logical port])
> > > > > > >  ovn_start
> > > > > > > --
> > > > > > > 2.34.1
> > > > > > >
> > > > > > >
> > > > >
> > >
> >
>


_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
