On Sun, Nov 27, 2016 at 1:08 PM, Darrell Ball <dlu...@gmail.com> wrote:

> This patch adds datapaths of interest support where only datapaths of
> local interest are monitored by the ovn-controller ovsdb client.  The
> idea is to do a flood fill in ovn-controller of datapath associations
> calculated by northd. A new column is added to the SB database
> datapath_binding table - related_datapaths to facilitate this so all
> datapath associations are known quickly in ovn-controller.  This
> allows monitoring to adapt quickly with a single new monitor setting
> for all datapaths of interest locally.
>
> To reduce risk and minimize latency on network churn, only logical
> flows are conditionally monitored for now.  This should provide
> the bulk of the benefit.  This will be re-evaluated later to
> possibly add additional conditional monitoring such as for
> logical ports.
>
> Liran Schour contributed enhancements to the conditional
> monitoring granularity in ovs and also submitted patches
> for ovn usage of conditional monitoring, which aided this patch
> through sharing of concepts during code review.
>
> Ben Pfaff suggested that northd could be used to pre-populate
> related datapaths for ovn-controller to use.  That idea is
> used as part of this patch.
>

I see a few major issues. I don't think this patch in its current state
does what you want it to do. If I understand it correctly, in the
presence of distributed routers it will still populate all flows
everywhere, except for gateway router flows and the corresponding
localnet flows. I have not run it manually yet, but I did play with the
automated tests a little to verify some of my understanding. See
comments inline.

>
> CC: Ben Pfaff <b...@ovn.org>
> CC: Liran Schour <lir...@il.ibm.com>
> Signed-off-by: Darrell Ball <dlu...@gmail.com>
> ---
>
> v7->v8: Start with logical flow conditional monitoring off.
>         Remove deprecated code.
>         Recover SB DB version number change.
>         Change test to be more definitive.
>
> v6->v7: Rebase
>
> v5->v6: Rebase; fix stale handling.
>
> v4->v5: Correct cleanup of monitors.
>         Fix warning.
>
> v3->v4: Refactor after incremental processing backout.
>         Limit filtering to logical flows to limit risk.
>
> v2->v3: Line length violation fixups :/
>
> v1->v2: Added logical port removal monitoring handling, factoring
>         in recent changes for incremental processing.  Added some
>         comments in the code regarding initial conditions for
>         database conditional monitoring.
>
>  ovn/controller/binding.c        | 10 +++--
>  ovn/controller/lflow.c          |  5 +++
>  ovn/controller/ovn-controller.c | 92 +++++++++++++++++++++++++++++++++++++++++
>  ovn/controller/ovn-controller.h |  3 ++
>  ovn/controller/patch.c          |  6 ++-
>  ovn/northd/ovn-northd.c         | 76 ++++++++++++++++++++++++++++++++++
>  ovn/ovn-sb.ovsschema            | 11 +++--
>  ovn/ovn-sb.xml                  |  9 ++++
>  tests/ovn.at                    |  8 ++++
>  9 files changed, 211 insertions(+), 9 deletions(-)
>
> diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
> index d7b175e..2aff69a 100644
> --- a/ovn/controller/binding.c
> +++ b/ovn/controller/binding.c
> @@ -106,7 +106,8 @@ get_local_iface_ids(const struct ovsrec_bridge *br_int,
>
>  static void
>  add_local_datapath(struct hmap *local_datapaths,
> -        const struct sbrec_port_binding *binding_rec)
> +        const struct sbrec_port_binding *binding_rec,
> +        const struct controller_ctx *ctx)
>  {
>      if (get_local_datapath(local_datapaths,
>                             binding_rec->datapath->tunnel_key)) {
> @@ -118,6 +119,7 @@ add_local_datapath(struct hmap *local_datapaths,
>      memcpy(&ld->uuid, &binding_rec->header_.uuid, sizeof ld->uuid);
>      hmap_insert(local_datapaths, &ld->hmap_node,
>                  binding_rec->datapath->tunnel_key);
> +    do_datapath_flood_fill(ctx, binding_rec->datapath);
>  }
>
>  static void
> @@ -295,7 +297,7 @@ consider_local_datapath(struct controller_ctx *ctx,
>              /* Add child logical port to the set of all local ports. */
>              sset_add(all_lports, binding_rec->logical_port);
>          }
> -        add_local_datapath(local_datapaths, binding_rec);
> +        add_local_datapath(local_datapaths, binding_rec, ctx);
>          if (iface_rec && qos_map && ctx->ovs_idl_txn) {
>              get_qos_params(binding_rec, qos_map);
>          }
> @@ -330,7 +332,7 @@ consider_local_datapath(struct controller_ctx *ctx,
>          }
>
>          sset_add(all_lports, binding_rec->logical_port);
> -        add_local_datapath(local_datapaths, binding_rec);
> +        add_local_datapath(local_datapaths, binding_rec, ctx);
>          if (binding_rec->chassis == chassis_rec) {
>              return;
>          }
> @@ -344,7 +346,7 @@ consider_local_datapath(struct controller_ctx *ctx,
>          const char *chassis = smap_get(&binding_rec->options,
>                                         "l3gateway-chassis");
>          if (!strcmp(chassis, chassis_rec->name) && ctx->ovnsb_idl_txn) {
> -            add_local_datapath(local_datapaths, binding_rec);
> +            add_local_datapath(local_datapaths, binding_rec, ctx);
>          }
>      } else if (chassis_rec && binding_rec->chassis == chassis_rec) {
>          if (ctx->ovnsb_idl_txn) {
> diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
> index 4e67365..765eae4 100644
> --- a/ovn/controller/lflow.c
> +++ b/ovn/controller/lflow.c
> @@ -167,6 +167,11 @@ consider_logical_flow(const struct lport_index *lports,
>      if (!ldp) {
>          return;
>      }
> +
> +    VLOG_DBG("consider logical flow: tunnel_key %lu "
> +             "\"match\" \"%s\", \"actions\" \"%s\"",
> +             ldp->tunnel_key, lflow->match, lflow->actions);
> +
>      if (is_switch(ldp)) {
>          /* For a logical switch datapath, local_datapaths tells us if there
>           * are any local ports for this datapath.  If not, we can skip
> diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
> index 52eb491..b3ae037 100644
> --- a/ovn/controller/ovn-controller.c
> +++ b/ovn/controller/ovn-controller.c
> @@ -69,6 +69,14 @@ OVS_NO_RETURN static void usage(void);
>
>  static char *ovs_remote;
>
> +struct dp_of_interest_node {
> +    struct hmap_node hmap_node;
> +    const struct sbrec_datapath_binding *sb_db;
> +    bool stale;
> +};
> +
> +static struct hmap dps_of_interest = HMAP_INITIALIZER(&dps_of_interest);
> +
>  struct local_datapath *
>  get_local_datapath(const struct hmap *local_datapaths, uint32_t tunnel_key)
>  {
> @@ -128,6 +136,69 @@ get_bridge(struct ovsdb_idl *ovs_idl, const char *br_name)
>      return NULL;
>  }
>
> +static bool
> +add_datapath_of_interest(const struct controller_ctx *ctx,
> +                         const struct sbrec_datapath_binding *dp)
> +{
> +    struct dp_of_interest_node *node;
> +    HMAP_FOR_EACH_WITH_HASH (node, hmap_node, uuid_hash(&dp->header_.uuid),
> +                             &dps_of_interest) {
> +        if (uuid_equals(&node->sb_db->header_.uuid, &dp->header_.uuid)) {
> +            node->stale = false;
> +            return false;
> +        }
> +    }
> +
>

This version has no logic for weeding out chassis-specific datapaths
that are not targeted at this chassis. It is not clear to me whether
adding such logic is really worth it. However, the current tests are
based on that goal. I think the only reason the tests pass is because
of some issues populating related_datapaths in northd. See comments
below.
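For illustration only, here is roughly where such a check would slot in
if it were wanted. datapath_pinned_elsewhere() is a hypothetical helper,
not something in the tree, and chassis_id would have to be plumbed down
through do_datapath_flood_fill(); today there is no cheap way to answer
the question from the datapath row alone, since l3gateway-chassis lives
in the port binding options:

static bool
datapath_pinned_elsewhere(const struct sbrec_datapath_binding *dp OVS_UNUSED,
                          const char *chassis_id OVS_UNUSED)
{
    /* Hypothetical: return true for a gateway-router datapath whose
     * l3gateway-chassis option names a chassis other than 'chassis_id'.
     * Placeholder only. */
    return false;
}

static bool
add_datapath_of_interest(const struct controller_ctx *ctx,
                         const struct sbrec_datapath_binding *dp,
                         const char *chassis_id)
{
    struct dp_of_interest_node *node;
    HMAP_FOR_EACH_WITH_HASH (node, hmap_node, uuid_hash(&dp->header_.uuid),
                             &dps_of_interest) {
        if (uuid_equals(&node->sb_db->header_.uuid, &dp->header_.uuid)) {
            node->stale = false;
            return false;
        }
    }

    /* Weed out datapaths pinned to some other chassis before asking the
     * server to monitor their flows. */
    if (datapath_pinned_elsewhere(dp, chassis_id)) {
        return false;
    }

    sbrec_logical_flow_add_clause_logical_datapath(ctx->ovnsb_idl, OVSDB_F_EQ,
                                                   &dp->header_.uuid);
    node = xzalloc(sizeof *node);
    hmap_insert(&dps_of_interest, &node->hmap_node,
                uuid_hash(&dp->header_.uuid));
    node->sb_db = dp;
    return true;
}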


> +    sbrec_logical_flow_add_clause_logical_datapath(ctx->ovnsb_idl,
> +                                                   OVSDB_F_EQ,
> +                                                   &dp->header_.uuid);
> +    node = xzalloc(sizeof *node);
> +    hmap_insert(&dps_of_interest, &node->hmap_node,
> +                uuid_hash(&dp->header_.uuid));
> +    node->sb_db = dp;
> +    return true;
> +}
> +
> +void
> +do_datapath_flood_fill(const struct controller_ctx *ctx,
> +                       const struct sbrec_datapath_binding *dp_start)
> +{
> +    struct interest_dps_list_elem {
> +        struct ovs_list list_node;
> +        const struct sbrec_datapath_binding * dp;
> +    };
> +
> +    struct ovs_list interest_datapaths;
> +    ovs_list_init(&interest_datapaths);
> +
> +    struct interest_dps_list_elem *dp_list_elem =
> +        xzalloc(sizeof *dp_list_elem);
> +    dp_list_elem->dp = dp_start;
> +    ovs_list_push_back(&interest_datapaths, &dp_list_elem->list_node);
> +
> +    while (!ovs_list_is_empty(&interest_datapaths)) {
> +
> +        struct ovs_list *list_node_ptr =
> +            ovs_list_pop_front(&interest_datapaths);
> +        dp_list_elem = CONTAINER_OF(list_node_ptr,
> +            struct interest_dps_list_elem, list_node);
> +
> +       size_t num_related_datapaths = dp_list_elem->dp->n_related_datapaths;
> +       struct sbrec_datapath_binding **related_datapaths =
> +        dp_list_elem->dp->related_datapaths;
> +        if (!add_datapath_of_interest(ctx, dp_list_elem->dp)) {
> +            free(dp_list_elem);
> +            continue;
> +        }
> +        free(dp_list_elem);
> +        for (int i = 0; i < num_related_datapaths; i++) {
> +            dp_list_elem = xzalloc(sizeof *dp_list_elem);
> +            dp_list_elem->dp = related_datapaths[i];
> +            ovs_list_push_back(&interest_datapaths, &dp_list_elem->list_node);
> +        }
> +    }
> +}
> +
>  static const struct ovsrec_bridge *
>  create_br_int(struct controller_ctx *ctx,
>                const struct ovsrec_open_vswitch *cfg,
> @@ -465,6 +536,10 @@ main(int argc, char *argv[])
>          ovsdb_idl_create(ovnsb_remote, &sbrec_idl_class, true, true));
>      ovsdb_idl_omit_alert(ovnsb_idl_loop.idl, &sbrec_chassis_col_nb_cfg);
>
> +    /* Start with suppressing all logical flows and then later request those
> +     * that are specifically needed. */
> +    sbrec_logical_flow_add_clause_false(ovnsb_idl_loop.idl);
> +
>      /* Track the southbound idl. */
>      ovsdb_idl_track_add_all(ovnsb_idl_loop.idl);
>
> @@ -509,6 +584,12 @@ main(int argc, char *argv[])
>          struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
>          struct sset all_lports = SSET_INITIALIZER(&all_lports);
>
> +        struct dp_of_interest_node *dp_cur_node, *dp_next_node;
> +        HMAP_FOR_EACH_SAFE (dp_cur_node, dp_next_node, hmap_node,
> +                            &dps_of_interest) {
> +            dp_cur_node->stale = true;
> +        }
> +
>          const struct ovsrec_bridge *br_int = get_br_int(&ctx);
>          const char *chassis_id = get_chassis_id(ctx.ovs_idl);
>
> @@ -559,6 +640,17 @@ main(int argc, char *argv[])
>              }
>              mcgroup_index_destroy(&mcgroups);
>              lport_index_destroy(&lports);
> +
> +            HMAP_FOR_EACH_SAFE (dp_cur_node, dp_next_node, hmap_node,
> +                                &dps_of_interest) {
> +                if(dp_cur_node->stale == true) {
> +                    sbrec_logical_flow_remove_clause_logical_datapath(
> +                        ctx.ovnsb_idl, OVSDB_F_EQ,
> +                        &dp_cur_node->sb_db->header_.uuid);
> +                    hmap_remove(&dps_of_interest, &dp_cur_node->hmap_node);
> +                    free(dp_cur_node);
> +                }
> +            }
>          }
>
>          sset_destroy(&all_lports);
> diff --git a/ovn/controller/ovn-controller.h b/ovn/controller/ovn-controller.h
> index 4dcf4e5..a474c45 100644
> --- a/ovn/controller/ovn-controller.h
> +++ b/ovn/controller/ovn-controller.h
> @@ -81,6 +81,9 @@ const struct ovsrec_bridge *get_bridge(struct ovsdb_idl *,
>  const struct sbrec_chassis *get_chassis(struct ovsdb_idl *,
>                                          const char *chassis_id);
>
> +void do_datapath_flood_fill(const struct controller_ctx *,
> +                            const struct sbrec_datapath_binding *);
> +
>  /* Must be a bit-field ordered from most-preferred (higher number) to
>   * least-preferred (lower number). */
>  enum chassis_tunnel_type {
> diff --git a/ovn/controller/patch.c b/ovn/controller/patch.c
> index c9a5dd9..2af4436 100644
> --- a/ovn/controller/patch.c
> +++ b/ovn/controller/patch.c
> @@ -252,7 +252,8 @@ add_bridge_mappings(struct controller_ctx *ctx,
>
>  static void
>  add_patched_datapath(struct hmap *patched_datapaths,
> -                     const struct sbrec_port_binding *binding_rec, bool local)
> +                     const struct sbrec_port_binding *binding_rec, bool local,
> +                     struct controller_ctx *ctx)
>  {
>      struct patched_datapath *pd = get_patched_datapath(patched_datapaths,
>                                        binding_rec->datapath->tunnel_key);
> @@ -269,6 +270,7 @@ add_patched_datapath(struct hmap *patched_datapaths,
>      /* stale is set to false. */
>      hmap_insert(patched_datapaths, &pd->hmap_node,
>                  binding_rec->datapath->tunnel_key);
> +    do_datapath_flood_fill(ctx, binding_rec->datapath);
>

Patch ports other than l3gateway ports are added everywhere, on all
chassis. If you do a flood fill here, then all flows on distributed
routers, and on any datapaths related to those routers, will be
installed everywhere.

There should be no need for any of the changes to patch.c. Logical router
datapaths should be picked up as part of the datapath flood fill from local
datapaths.

I removed this line and found that all the logical router automated
tests failed, which I expected due to issues in northd. See comments
below.
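To make the point concrete, I would expect the tail of
add_patched_datapath() to keep its pre-patch shape, roughly (excerpt
reconstructed from the context lines above, not a full function):

    /* stale is set to false. */
    hmap_insert(patched_datapaths, &pd->hmap_node,
                binding_rec->datapath->tunnel_key);
    /* No do_datapath_flood_fill() here: once northd populates
     * related_datapaths correctly, the flood fill that binding.c runs
     * for each local datapath already reaches the distributed router
     * and any switches behind it. */
}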


>  }
>
>  static void
> @@ -362,7 +364,7 @@ add_logical_patch_ports(struct controller_ctx *ctx,
>                                existing_ports);
>              free(dst_name);
>              free(src_name);
> -            add_patched_datapath(patched_datapaths, binding, local_port);
> +            add_patched_datapath(patched_datapaths, binding, local_port, ctx);
>              if (local_port) {
>                  if (binding->chassis != chassis_rec && ctx->ovnsb_idl_txn) {
>                      sbrec_port_binding_set_chassis(binding, chassis_rec);
> diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
> index 437da9f..b714799 100644
> --- a/ovn/northd/ovn-northd.c
> +++ b/ovn/northd/ovn-northd.c
> @@ -231,6 +231,42 @@ struct tnlid_node {
>      uint32_t tnlid;
>  };
>
> +struct related_datapath_node {
> +    struct hmap_node hmap_node;
> +    const struct sbrec_datapath_binding *sb_db;
> +};
> +
> +static void
> +add_related_datapath(struct hmap *related_datapaths,
> +                     const struct sbrec_datapath_binding *sb,
> +                     size_t *n_related_datapaths)
> +{
> +    struct related_datapath_node *node;
> +    HMAP_FOR_EACH_WITH_HASH (node, hmap_node,
> +                             uuid_hash(&sb->header_.uuid),
> +                             related_datapaths) {
> +        if (uuid_equals(&sb->header_.uuid, &node->sb_db->header_.uuid)) {
> +            return;
> +        }
> +    }
> +
> +    node = xzalloc(sizeof *node);
> +    hmap_insert(related_datapaths, &node->hmap_node,
> +                uuid_hash(&sb->header_.uuid));
> +    node->sb_db = sb;
> +    (*n_related_datapaths)++;
> +}
> +
> +static void
> +destroy_related_datapaths(struct hmap *related_datapaths)
> +{
> +    struct related_datapath_node *node;
> +    HMAP_FOR_EACH_POP (node, hmap_node, related_datapaths) {
> +        free(node);
> +    }
> +    hmap_destroy(related_datapaths);
> +}
> +
>  static void
>  destroy_tnlids(struct hmap *tnlids)
>  {
> @@ -376,6 +412,8 @@ struct ovn_datapath {
>      size_t n_router_ports;
>
>      struct hmap port_tnlids;
> +    struct hmap related_datapaths;
> +    size_t n_related_datapaths;
>      uint32_t port_key_hint;
>
>      bool has_unknown;
> @@ -426,6 +464,7 @@ ovn_datapath_create(struct hmap *datapaths, const struct uuid *key,
>      od->nbr = nbr;
>      hmap_init(&od->port_tnlids);
>      hmap_init(&od->ipam);
> +    hmap_init(&od->related_datapaths);
>      od->port_key_hint = 0;
>      hmap_insert(datapaths, &od->key_node, uuid_hash(&od->key));
>      return od;
> @@ -441,6 +480,7 @@ ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
>          hmap_remove(datapaths, &od->key_node);
>          destroy_tnlids(&od->port_tnlids);
>          destroy_ipam(&od->ipam);
> +        destroy_related_datapaths(&od->related_datapaths);
>          free(od->router_ports);
>          free(od);
>      }
> @@ -624,6 +664,28 @@ build_datapaths(struct northd_context *ctx, struct hmap *datapaths)
>              smap_destroy(&ids);
>
>              sbrec_datapath_binding_set_tunnel_key(od->sb, tunnel_key);
> +
> +            struct sbrec_datapath_binding **sb_related_datapaths
> +                = xmalloc(sizeof(*sb_related_datapaths) *
> +                          od->n_related_datapaths);
> +            int rdi = 0;
> +            struct related_datapath_node *related_datapath;
> +            HMAP_FOR_EACH (related_datapath, hmap_node,
> +                           &od->related_datapaths) {
> +                if (rdi >= od->n_related_datapaths) {
> +                    static struct vlog_rate_limit rl
> +                        = VLOG_RATE_LIMIT_INIT(5, 1);
> +                    VLOG_ERR_RL(&rl, "related datapaths accounting error"
> +                                UUID_FMT, UUID_ARGS(&od->key));
> +                    break;
> +                }
> +                sb_related_datapaths[rdi] = CONST_CAST(
> +                    struct sbrec_datapath_binding *, related_datapath->sb_db);
> +                rdi++;
> +            }
> +            sbrec_datapath_binding_set_related_datapaths(od->sb,
> +                sb_related_datapaths, od->n_related_datapaths);
> +            free(sb_related_datapaths);
> +
>

The code above needs to be moved. There are two issues:
1. This code runs under build_datapaths(), which happens before
   build_ports(), while all of the calls to add_related_datapath() are
   under build_ports(). Given this order, I fail to see how the sb
   related_datapaths column ever gets populated; my expectation is that
   it is always empty. I have not run the code manually to check this
   yet, but the fact that the automated router tests fail when the
   flood fill is removed from patched_datapaths does provide some
   evidence that this might be the case.
2. This code only runs when a datapath exists in nb but not yet in sb,
   i.e. when a datapath is first created. If the ports on an existing
   datapath change later, the nb_only list is empty and
   related_datapaths is never updated.
A rough sketch of the kind of pass I have in mind follows.
>          }
>          destroy_tnlids(&dp_tnlids);
>      }
> @@ -1359,6 +1421,12 @@ ovn_port_update_sbrec(const struct ovn_port *op,
>              sbrec_port_binding_set_type(op->sb, "patch");
>          }
>
> +        if (op->peer && op->peer->od && op->peer->od->sb) {
> +            add_related_datapath(&op->od->related_datapaths,
> +                                 op->peer->od->sb,
> +                                 &op->od->n_related_datapaths);
> +        }
> +
>          const char *peer = op->peer ? op->peer->key : "<error>";
>          struct smap new;
>          smap_init(&new);
> @@ -1411,6 +1479,12 @@ ovn_port_update_sbrec(const struct ovn_port *op,
>                  sbrec_port_binding_set_type(op->sb, "patch");
>              }
>
> +            if (op->peer && op->peer->od && op->peer->od->sb) {
> +                add_related_datapath(&op->od->related_datapaths,
> +                                     op->peer->od->sb,
> +                                     &op->od->n_related_datapaths);
> +            }
> +
>              const char *router_port = smap_get_def(&op->nbsp->options,
>                                                     "router-port", "<error>");
>              struct smap new;
> @@ -4825,6 +4899,8 @@ main(int argc, char *argv[])
>      add_column_noalert(ovnsb_idl_loop.idl,
>                         &sbrec_datapath_binding_col_tunnel_key);
>      add_column_noalert(ovnsb_idl_loop.idl,
> +                       &sbrec_datapath_binding_col_related_datapaths);
> +    add_column_noalert(ovnsb_idl_loop.idl,
>                         &sbrec_datapath_binding_col_external_ids);
>
>      ovsdb_idl_add_table(ovnsb_idl_loop.idl, &sbrec_table_port_binding);
> diff --git a/ovn/ovn-sb.ovsschema b/ovn/ovn-sb.ovsschema
> index 89342fe..51c222d 100644
> --- a/ovn/ovn-sb.ovsschema
> +++ b/ovn/ovn-sb.ovsschema
> @@ -1,7 +1,7 @@
>  {
>      "name": "OVN_Southbound",
> -    "version": "1.9.0",
> -    "cksum": "239060528 9012",
> +    "version": "1.10.0",
> +    "cksum": "1139190282 9320",
>      "tables": {
>          "SB_Global": {
>              "columns": {
> @@ -93,7 +93,12 @@
>                                        "maxInteger": 16777215}}},
>                  "external_ids": {
>                      "type": {"key": "string", "value": "string",
> -                             "min": 0, "max": "unlimited"}}},
> +                             "min": 0, "max": "unlimited"}},
> +                "related_datapaths": {"type": {"key": {"type": "uuid",
> +                                      "refTable": "Datapath_Binding",
> +                                      "refType": "weak"},
> +                                      "min": 0,
> +                                      "max": "unlimited"}}},
>              "indexes": [["tunnel_key"]],
>              "isRoot": true},
>          "Port_Binding": {
> diff --git a/ovn/ovn-sb.xml b/ovn/ovn-sb.xml
> index 65191ed..c20861f 100644
> --- a/ovn/ovn-sb.xml
> +++ b/ovn/ovn-sb.xml
> @@ -1556,6 +1556,15 @@ tcp.flags = RST;
>        constructed for each supported encapsulation.
>      </column>
>
> +    <column name="related_datapaths">
> +      The related_datapaths column is used to keep track of which datapaths
> +      are connected to each other.  This is leveraged in ovn-controller to
> +      do a flood fill from each local logical switch to determine the full
> +      set of datapaths needed on a given ovn-controller.  This set of
> +      datapaths can be used to determine which logical ports and logical
> +      flows an ovn-controller should monitor.
> +    </column>
> +
>      <group title="OVN_Northbound Relationship">
>        <p>
>          Each row in <ref table="Datapath_Binding"/> is associated with some
> diff --git a/tests/ovn.at b/tests/ovn.at
> index 69f5277..04bc3da 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -4079,6 +4079,7 @@ ovs-vsctl -- add-port br-int hv1-vif1 -- \
>      options:tx_pcap=hv1/vif1-tx.pcap \
>      options:rxq_pcap=hv1/vif1-rx.pcap \
>      ofport-request=1
> +ovs-appctl -t ovn-controller vlog/set ANY:file:DBG
>
>
>  sim_add hv2
> @@ -4090,6 +4091,7 @@ ovs-vsctl -- add-port br-int hv2-vif1 -- \
>      options:tx_pcap=hv2/vif1-tx.pcap \
>      options:rxq_pcap=hv2/vif1-rx.pcap \
>      ofport-request=1
> +ovs-appctl -t ovn-controller vlog/set ANY:file:DBG
>
>  # Pre-populate the hypervisors' ARP tables so that we don't lose any
>  # packets for ARP resolution (native tunneling doesn't queue packets
> @@ -4195,6 +4197,12 @@ as hv2 ovs-ofctl show br-int
>  as hv2 ovs-ofctl dump-flows br-int
>  echo "----------------------------"
>
> +# tunnel key 2 represents the gateway router and the associated
> +# logical flows should only be on hv2 not hv1 when conditional
> +# monitoring of flows is being used.
> +AT_CHECK([cat hv2/ovn-controller.log | grep 'tunnel_key 2'], [0], [ignore-nolog])
> +AT_CHECK([cat hv1/ovn-controller.log | grep 'tunnel_key 2'], [1], [ignore-nolog])
> +
>  echo $expected > expected
>  OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
>

This test checks that gateway router flows are not installed on the
other hypervisor. As I mentioned above, I am not sure that this should
even be a goal of this patch.

The main thing that should be checked is unrelated sets of datapaths on
different hypervisors. Such tests should include distributed routers, so
that the related_datapaths part of the code is exercised.

Mickey



>
> --
> 1.9.1
>