On 11/30/16, 3:07 AM, "ovs-dev-boun...@openvswitch.org on behalf of Liran 
Schour" <ovs-dev-boun...@openvswitch.org on behalf of lir...@il.ibm.com> wrote:

    Darrell Ball <dlu...@gmail.com> wrote on 30/11/2016 02:29:01 AM:
    
    > On Tue, Nov 29, 2016 at 4:10 AM, Liran Schour <lir...@il.ibm.com> wrote:
    > Ben Pfaff <b...@ovn.org> wrote on 29/11/2016 12:51:51 AM:
    > 
    > > I hope that this version is ready to go in.  Liran, are you happy with
    > > this version?
    > > 
    > 
    > I did a short evaluation and got the following results:
    > Simulated 50 hosts, each host serving 20 logical ports, with each logical
    > network spread over 6 of the 50 hosts.
    > 
    > Thanks for testing Liran
    > 
    > Can a few questions be asked so the differences are understood.
    > Not monitoring logical ports is expected to make a difference, but a
    > difference this large is not expected.
    > 
    > 1) This topology seems to only use logical switches in a flat 
    > topology for each tenant;
    >     is that correct ?
    
    Right. We do not have any logical routers.
    
    > 2)  What is the distribution of logical switches per HV ?
    
    You have 20 logical ports from 20 different logical switches per HV.

Without incremental processing, port processing, which typically should not
change that often, becomes more important.
Without port conditional monitoring, each HV will monitor all 1000 ports
instead of the 20 it hosts in this test.
I reinstated the port-related part of my V3 patch sooner rather than later:
since we are in between incremental processing versions, port conditional
monitoring becomes more important to do early, as far as I can see.
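
To illustrate (a sketch only, not the posted code): extending the conditional
monitoring in the patch below from Logical_Flow to Port_Binding would amount to
adding one more clause per datapath of interest. The helper name
sbrec_port_binding_add_clause_datapath() is assumed here by analogy with the
idl-generated sbrec_logical_flow_add_clause_logical_datapath() used in the
patch; the real generated name may differ.

    /* Sketch: restrict Port_Binding monitoring to one datapath of
     * interest, mirroring what the patch below does for Logical_Flow.
     * sbrec_port_binding_add_clause_datapath() is assumed to be the
     * idl-generated clause helper for Port_Binding's "datapath" column. */
    static void
    monitor_ports_on_datapath(struct ovsdb_idl *ovnsb_idl,
                              const struct sbrec_datapath_binding *dp)
    {
        sbrec_port_binding_add_clause_datapath(ovnsb_idl, OVSDB_F_EQ,
                                               &dp->header_.uuid);
    }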
    
    > 3) What is the total number of logical switches we end up with ?
    
    In that specific test: 167 logical switches.
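
That is consistent with the setup: 50 hosts x 20 ports gives 1000 ports total,
and with each logical network spread over 6 hosts at one port per host,
1000 / 6 is roughly 167 networks.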
    
    > 
    >  
    > 
    > 
    > Darrell patch:
    > --------------
    > Host 1 # flows 1504
    > Host 2 # flows 1626
    > Host 3 # flows 1443
    > ...
    > Logical flows = 8016
    > Elapsed time 207,990ms
    > total of 1002 (in 208ms per port)
    > Ports created and bound in 5,648,773,788,428 cycles
    > 1% south 69,936,025,644
    > 1% north 77,863,484,163
    > 97% clients 5,500,974,278,621
    > 
    > Conditional monitor V16 patch:
    > ------------------------------
    > Host 1 # flows 1358
    > Host 2 # flows 1482
    > Host 3 # flows 1296
    > ...
    > Logical flows = 8016
    > Elapsed time 207,998ms
    > total of 1002 (in 208ms per port)
    > Ports created and bound in 1,216,848,309,516 cycles
    > 
    >  
    > 
    > 4) This is only the CPU time to initially create and bind logical 
    > ports to each HV ?
    >      How exactly is this time measured ?
    
    You can ignore the time parameter; it is a leftover trace from the
    evaluation system.

My main question was: what are the defined start and end points?
i.e., when do we start measuring and when do we stop measuring?

Some of these numbers are really hard to explain.
Hence, I added a separate, simple unit test that just tracks the number of
events (port and flow) seen by the HVs with conditional monitoring. That is
easier to interpret and straightforward.
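
Roughly (a sketch of the idea, not the exact test): with the DBG logging the
patch below enables via ovs-appctl, the per-HV event counts can be read
straight out of the controller logs, e.g.:

    # Sketch: count logical-flow events seen per simulated HV, using the
    # "consider logical flow" debug message added by the patch below.
    # With conditional monitoring, each HV should only see events for its
    # own datapaths of interest.
    for hv in hv1 hv2; do
        n=$(grep -c 'consider logical flow' $hv/ovn-controller.log)
        echo "$hv saw $n logical flow events"
    done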

    
    > 5) These total cycles are in the trillions.
    > The previous tests from V10 and V15 below seem consistent and in the
    > two-digit billions, which is at least 30 times less than now.
    > Is this the same test as before, using the 50 hosts configuration ?
    > 
    
    The number shown in the tables below is the total number of cycles of the
    SB ovsdb-server, not the overall cycles of the whole DC. And as you can
    see, (2% south 30,653,838,360) is still in the two-digit billions.

I see, ok.
Most of the proportional benefit will be on the HV clients anyway.
    
    > Evaluation on a simulated environment of 50 hosts and 1000 logical
    > ports shows the following results (cycles #):
    > LN spread over # hosts|    master    | patch        | change
    > -------------------------------------------------------------
    >             1         | 24597200127  | 24339235374  |  1.0%
    >             6         | 23788521572  | 19145229352  | 19.5%
    >            12         | 23886405758  | 17913143176  | 25.0%
    >            18         | 25812686279  | 23675094540  |  8.2%
    >            24         | 28414671499  | 24770202308  | 12.8%
    >            30         | 31487218890  | 28397543436  |  9.8%
    >            36         | 36116993930  | 34105388739  |  5.5%
    >            42         | 37898342465  | 38647139083  | -1.9%
    >            48         | 41637996229  | 41846616306  | -0.5%
    >            50         | 41679995357  | 43455565977  | -4.2%
    > 
    > and from V15
    > 
    > Evaluation on a simulated environment of 50 hosts and 1000 logical
    > ports shows the following results (cycles #):
    > 
    > LN spread over # hosts|    master    | patch        | change
    > -------------------------------------------------------------
    >             1         | 24597200127  | 24339235374  |  1.0%
    >             6         | 23788521572  | 19145229352  | 19.5%
    >            12         | 23886405758  | 17913143176  | 25.0%
    >            18         | 25812686279  | 23675094540  |  8.2%
    >            24         | 28414671499  | 24770202308  | 12.8%
    >            30         | 31487218890  | 28397543436  |  9.8%
    >            36         | 36116993930  | 34105388739  |  5.5%
    >            42         | 37898342465  | 38647139083  | -1.9%
    >            48         | 41637996229  | 41846616306  | -0.5%
    >            50         | 41679995357  | 43455565977  | -4.2%
    > 
    >  
    > 
    > 2% south 30,653,838,360
    >  
    > 
    > 3% north 42,543,336,355
    > 
    > 6) Are these ovsdb NB database server CPU cycles? (It is hard to see
    > why the cycles would differ here, although the numbers are small
    > compared to the HV.)
    
    Yes, it is. I also do not know exactly why there is a difference here.

Thanks.
I know these tests are done by a different group.
Some data points are really hard to explain; maybe there is a lot of variance
and/or only an initial condition is measured.
I added a unit test to track port and flow event counts, as a surrogate.
I made the test easy to interpret and to reason about, i.e. how the numbers
would change with more complex topologies and longer-running tests without
startup skew.
    
    >  
    > 
    > 93% clients 1,143,651,134,801
    > 
    > You can see that this patch has significantly higher computation
    > overhead compared to my original patch (rebased version).
    > 
    > An obvious difference is that Darrell's patch has conditions only on
    > the Logical_Flow table while the original patch had them on the
    > Port_Binding, Logical_Flow, Multicast_Group and MAC_Binding tables.
    > 
    >  
    > 
    > 
    > I can resubmit my rebased patch again and Darrell can base his
    > changes on top of it or just investigate the problem.
    > 
    > The code is here: https://github.com/liranschour/ovs
    > Branch: monitor_cond_ovn_v16
    > 
    > Thanks,
    > - Liran
    > 
    > > On Sun, Nov 27, 2016 at 01:08:59PM -0800, Darrell Ball wrote:
    > > > This patch adds datapaths of interest support where only datapaths of
    > > > local interest are monitored by the ovn-controller ovsdb client.  The
    > > > idea is to do a flood fill in ovn-controller of datapath associations
    > > > calculated by northd.  A new column is added to the SB database
    > > > datapath_binding table - related_datapaths to facilitate this so all
    > > > datapath associations are known quickly in ovn-controller.  This
    > > > allows monitoring to adapt quickly with a single new monitor setting
    > > > for all datapaths of interest locally.
    > > > 
    > > > To reduce risk and minimize latency on network churn, only logical
    > > > flows are conditionally monitored for now.  This should provide
    > > > the bulk of the benefit.  This will be re-evaluated later to
    > > > possibly add additional conditional monitoring such as for
    > > > logical ports.
    > > > 
    > > > Liran Schour contributed enhancements to the conditional
    > > > monitoring granularity in ovs and also submitted patches
    > > > for ovn usage of conditional monitoring, which aided this patch
    > > > through sharing of concepts during code review work.
    > > > 
    > > > Ben Pfaff suggested that northd could be used to pre-populate
    > > > related datapaths for ovn-controller to use.  That idea is
    > > > used as part of this patch.
    > > > 
    > > > CC: Ben Pfaff <b...@ovn.org>
    > > > CC: Liran Schour <lir...@il.ibm.com>
    > > > Signed-off-by: Darrell Ball <dlu...@gmail.com>
    > > > ---
    > > > 
    > > > v7->v8: Start with logical flow conditional monitoring off.
    > > >         Remove deprecated code.
    > > >         Recover SB DB version number change.
    > > >         Change test to be more definitive. 
    > > > 
    > > > v6->v7: Rebase
    > > > 
    > > > v5->v6: Rebase; fix stale handling.
    > > > 
    > > > v4->v5: Correct cleanup of monitors.
    > > >         Fix warning.
    > > > 
    > > > v3->v4: Refactor after incremental processing backout.
    > > >         Limit filtering to logical flows to limit risk.
    > > > 
    > > > v2->v3: Line length violation fixups :/
    > > > 
    > > > v1->v2: Added logical port removal monitoring handling, factoring
    > > >         in recent changes for incremental processing.  Added some
    > > >         comments in the code regarding initial conditions for
    > > >         database conditional monitoring.
    > > > 
    > > >  ovn/controller/binding.c        | 10 +++--
    > > >  ovn/controller/lflow.c          |  5 +++
    > > >  ovn/controller/ovn-controller.c | 92 +++++++++++++++++++++++++++++++++++++++++
    > > >  ovn/controller/ovn-controller.h |  3 ++
    > > >  ovn/controller/patch.c          |  6 ++-
    > > >  ovn/northd/ovn-northd.c         | 76 ++++++++++++++++++++++++++++++++++
    > > >  ovn/ovn-sb.ovsschema            | 11 +++--
    > > >  ovn/ovn-sb.xml                  |  9 ++++
    > > >  tests/ovn.at                    |  8 ++++
    > > >  9 files changed, 211 insertions(+), 9 deletions(-)
    > > > 
    > > > diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
    > > > index d7b175e..2aff69a 100644
    > > > --- a/ovn/controller/binding.c
    > > > +++ b/ovn/controller/binding.c
    > > > @@ -106,7 +106,8 @@ get_local_iface_ids(const struct ovsrec_bridge *br_int,
    > > >  
    > > >  static void
    > > >  add_local_datapath(struct hmap *local_datapaths,
    > > > -        const struct sbrec_port_binding *binding_rec)
    > > > +        const struct sbrec_port_binding *binding_rec,
    > > > +        const struct controller_ctx *ctx)
    > > >  {
    > > >      if (get_local_datapath(local_datapaths,
    > > >                             binding_rec->datapath->tunnel_key)) {
    > > > @@ -118,6 +119,7 @@ add_local_datapath(struct hmap *local_datapaths,
    > > >      memcpy(&ld->uuid, &binding_rec->header_.uuid, sizeof ld->uuid);
    > > >      hmap_insert(local_datapaths, &ld->hmap_node,
    > > >                  binding_rec->datapath->tunnel_key);
    > > > +    do_datapath_flood_fill(ctx, binding_rec->datapath);
    > > >  }
    > > >  
    > > >  static void
    > > > @@ -295,7 +297,7 @@ consider_local_datapath(struct controller_ctx *ctx,
    > > >              /* Add child logical port to the set of all local ports. */
    > > >              sset_add(all_lports, binding_rec->logical_port);
    > > >          }
    > > > -        add_local_datapath(local_datapaths, binding_rec);
    > > > +        add_local_datapath(local_datapaths, binding_rec, ctx);
    > > >          if (iface_rec && qos_map && ctx->ovs_idl_txn) {
    > > >              get_qos_params(binding_rec, qos_map);
    > > >          }
    > > > @@ -330,7 +332,7 @@ consider_local_datapath(struct controller_ctx *ctx,
    > > >          }
    > > >  
    > > >          sset_add(all_lports, binding_rec->logical_port);
    > > > -        add_local_datapath(local_datapaths, binding_rec);
    > > > +        add_local_datapath(local_datapaths, binding_rec, ctx);
    > > >          if (binding_rec->chassis == chassis_rec) {
    > > >              return;
    > > >          }
    > > > @@ -344,7 +346,7 @@ consider_local_datapath(struct controller_ctx *ctx,
    > > >          const char *chassis = smap_get(&binding_rec->options,
    > > >                                         "l3gateway-chassis");
    > > >          if (!strcmp(chassis, chassis_rec->name) && ctx->ovnsb_idl_txn) {
    > > > -            add_local_datapath(local_datapaths, binding_rec);
    > > > +            add_local_datapath(local_datapaths, binding_rec, ctx);
    > > >          }
    > > >      } else if (chassis_rec && binding_rec->chassis == chassis_rec) {
    > > >          if (ctx->ovnsb_idl_txn) {
    > > > diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
    > > > index 4e67365..765eae4 100644
    > > > --- a/ovn/controller/lflow.c
    > > > +++ b/ovn/controller/lflow.c
    > > > @@ -167,6 +167,11 @@ consider_logical_flow(const struct lport_index *lports,
    > > >      if (!ldp) {
    > > >          return;
    > > >      }
    > > > +
    > > > +    VLOG_DBG("consider logical flow: tunnel_key %lu "
    > > > +             "\"match\" \"%s\", \"actions\" \"%s\"",
    > > > +             ldp->tunnel_key, lflow->match, lflow->actions);
    > > > +
    > > >      if (is_switch(ldp)) {
    > > >          /* For a logical switch datapath, local_datapaths tells us if there
    > > >           * are any local ports for this datapath.  If not, we can skip
    > > > diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
    > > > index 52eb491..b3ae037 100644
    > > > --- a/ovn/controller/ovn-controller.c
    > > > +++ b/ovn/controller/ovn-controller.c
    > > > @@ -69,6 +69,14 @@ OVS_NO_RETURN static void usage(void);
    > > >  
    > > >  static char *ovs_remote;
    > > >  
    > > > +struct dp_of_interest_node {
    > > > +    struct hmap_node hmap_node;
    > > > +    const struct sbrec_datapath_binding *sb_db;
    > > > +    bool stale;
    > > > +};
    > > > +
    > > > +static struct hmap dps_of_interest = HMAP_INITIALIZER(&dps_of_interest);
    > > > +
    > > >  struct local_datapath *
    > > >  get_local_datapath(const struct hmap *local_datapaths, uint32_t tunnel_key)
    > > >  {
    > > > @@ -128,6 +136,69 @@ get_bridge(struct ovsdb_idl *ovs_idl, const char *br_name)
    > > >      return NULL;
    > > >  }
    > > >  
    > > > +static bool
    > > > +add_datapath_of_interest(const struct controller_ctx *ctx,
    > > > +                         const struct sbrec_datapath_binding *dp)
    > > > +{
    > > > +    struct dp_of_interest_node *node;
    > > > +    HMAP_FOR_EACH_WITH_HASH (node, hmap_node, uuid_hash(&dp->header_.uuid),
    > > > +                             &dps_of_interest) {
    > > > +        if (uuid_equals(&node->sb_db->header_.uuid, &dp->header_.uuid)) {
    > > > +            node->stale = false;
    > > > +            return false;
    > > > +        }
    > > > +    }
    > > > +
    > > > +    sbrec_logical_flow_add_clause_logical_datapath(ctx->ovnsb_idl,
    > > > +                                                   OVSDB_F_EQ,
    > > > +                                                   &dp->header_.uuid);
    > > > +    node = xzalloc(sizeof *node);
    > > > +    hmap_insert(&dps_of_interest, &node->hmap_node,
    > > > +                uuid_hash(&dp->header_.uuid));
    > > > +    node->sb_db = dp;
    > > > +    return true;
    > > > +}
    > > > +
    > > > +void
    > > > +do_datapath_flood_fill(const struct controller_ctx *ctx,
    > > > +                       const struct sbrec_datapath_binding *dp_start)
    > > > +{
    > > > +    struct interest_dps_list_elem {
    > > > +        struct ovs_list list_node;
    > > > +        const struct sbrec_datapath_binding * dp;
    > > > +    };
    > > > +
    > > > +    struct ovs_list interest_datapaths;
    > > > +    ovs_list_init(&interest_datapaths);
    > > > +
    > > > +    struct interest_dps_list_elem *dp_list_elem =
    > > > +        xzalloc(sizeof *dp_list_elem);
    > > > +    dp_list_elem->dp = dp_start;
    > > > +    ovs_list_push_back(&interest_datapaths, &dp_list_elem->list_node);
    > > > +
    > > > +    while (!ovs_list_is_empty(&interest_datapaths)) {
    > > > +
    > > > +        struct ovs_list *list_node_ptr =
    > > > +            ovs_list_pop_front(&interest_datapaths);
    > > > +        dp_list_elem = CONTAINER_OF(list_node_ptr,
    > > > +            struct interest_dps_list_elem, list_node);
    > > > +
    > > > +        size_t num_related_datapaths =
    > > > +            dp_list_elem->dp->n_related_datapaths;
    > > > +        struct sbrec_datapath_binding **related_datapaths =
    > > > +            dp_list_elem->dp->related_datapaths;
    > > > +        if (!add_datapath_of_interest(ctx, dp_list_elem->dp)) {
    > > > +            free(dp_list_elem);
    > > > +            continue;
    > > > +        }
    > > > +        free(dp_list_elem);
    > > > +        for (int i = 0; i < num_related_datapaths; i++) {
    > > > +            dp_list_elem = xzalloc(sizeof *dp_list_elem);
    > > > +            dp_list_elem->dp = related_datapaths[i];
    > > > +            ovs_list_push_back(&interest_datapaths, &dp_list_elem->list_node);
    > > > +        }
    > > > +    }
    > > > +}
    > > > +
    > > >  static const struct ovsrec_bridge *
    > > >  create_br_int(struct controller_ctx *ctx,
    > > >                const struct ovsrec_open_vswitch *cfg,
    > > > @@ -465,6 +536,10 @@ main(int argc, char *argv[])
    > > >          ovsdb_idl_create(ovnsb_remote, &sbrec_idl_class, true, true));
    > > >      ovsdb_idl_omit_alert(ovnsb_idl_loop.idl, &sbrec_chassis_col_nb_cfg);
    > > >  
    > > > +    /* Start with suppressing all logical flows and then later request those
    > > > +     * that are specifically needed. */
    > > > +    sbrec_logical_flow_add_clause_false(ovnsb_idl_loop.idl);
    > > > +
    > > >      /* Track the southbound idl. */
    > > >      ovsdb_idl_track_add_all(ovnsb_idl_loop.idl);
    > > >  
    > > > @@ -509,6 +584,12 @@ main(int argc, char *argv[])
    > > >          struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
    > > >          struct sset all_lports = SSET_INITIALIZER(&all_lports);
    > > >  
    > > > +        struct dp_of_interest_node *dp_cur_node, *dp_next_node;
    > > > +        HMAP_FOR_EACH_SAFE (dp_cur_node, dp_next_node, hmap_node,
    > > > +                            &dps_of_interest) {
    > > > +            dp_cur_node->stale = true;
    > > > +        }
    > > > +
    > > >          const struct ovsrec_bridge *br_int = get_br_int(&ctx);
    > > >          const char *chassis_id = get_chassis_id(ctx.ovs_idl);
    > > >  
    > > > @@ -559,6 +640,17 @@ main(int argc, char *argv[])
    > > >              }
    > > >              mcgroup_index_destroy(&mcgroups);
    > > >              lport_index_destroy(&lports);
    > > > +
    > > > +            HMAP_FOR_EACH_SAFE (dp_cur_node, dp_next_node, hmap_node,
    > > > +                                &dps_of_interest) {
    > > > +                if (dp_cur_node->stale) {
    > > > +                    sbrec_logical_flow_remove_clause_logical_datapath(
    > > > +                        ctx.ovnsb_idl, OVSDB_F_EQ,
    > > > +                        &dp_cur_node->sb_db->header_.uuid);
    > > > +                    hmap_remove(&dps_of_interest, &dp_cur_node->hmap_node);
    > > > +                    free(dp_cur_node);
    > > > +                }
    > > > +            }
    > > >          }
    > > >  
    > > >          sset_destroy(&all_lports);
    > > > diff --git a/ovn/controller/ovn-controller.h b/ovn/controller/ovn-controller.h
    > > > index 4dcf4e5..a474c45 100644
    > > > --- a/ovn/controller/ovn-controller.h
    > > > +++ b/ovn/controller/ovn-controller.h
    > > > @@ -81,6 +81,9 @@ const struct ovsrec_bridge *get_bridge(struct ovsdb_idl *,
    > > >  const struct sbrec_chassis *get_chassis(struct ovsdb_idl *,
    > > >                                          const char *chassis_id);
    > > >  
    > > > +void do_datapath_flood_fill(const struct controller_ctx *,
    > > > +                            const struct sbrec_datapath_binding *);
    > > > +
    > > >  /* Must be a bit-field ordered from most-preferred (higher number) to
    > > >   * least-preferred (lower number). */
    > > >  enum chassis_tunnel_type {
    > > > diff --git a/ovn/controller/patch.c b/ovn/controller/patch.c
    > > > index c9a5dd9..2af4436 100644
    > > > --- a/ovn/controller/patch.c
    > > > +++ b/ovn/controller/patch.c
    > > > @@ -252,7 +252,8 @@ add_bridge_mappings(struct controller_ctx *ctx,
    > > >  
    > > >  static void
    > > >  add_patched_datapath(struct hmap *patched_datapaths,
    > > > -                     const struct sbrec_port_binding *binding_rec, bool local)
    > > > +                     const struct sbrec_port_binding *binding_rec, bool local,
    > > > +                     struct controller_ctx *ctx)
    > > >  {
    > > >      struct patched_datapath *pd = get_patched_datapath(patched_datapaths,
    > > >                                         binding_rec->datapath->tunnel_key);
    > > > @@ -269,6 +270,7 @@ add_patched_datapath(struct hmap *patched_datapaths,
    > > >      /* stale is set to false. */
    > > >      hmap_insert(patched_datapaths, &pd->hmap_node,
    > > >                  binding_rec->datapath->tunnel_key);
    > > > +    do_datapath_flood_fill(ctx, binding_rec->datapath);
    > > >  }
    > > >  
    > > >  static void
    > > > @@ -362,7 +364,7 @@ add_logical_patch_ports(struct controller_ctx *ctx,
    > > >                                existing_ports);
    > > >              free(dst_name);
    > > >              free(src_name);
    > > > -            add_patched_datapath(patched_datapaths, binding, local_port);
    > > > +            add_patched_datapath(patched_datapaths, binding, local_port, ctx);
    > > >              if (local_port) {
    > > >                  if (binding->chassis != chassis_rec && ctx->ovnsb_idl_txn) {
    > > >                      sbrec_port_binding_set_chassis(binding, chassis_rec);
    > > > diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
    > > > index 437da9f..b714799 100644
    > > > --- a/ovn/northd/ovn-northd.c
    > > > +++ b/ovn/northd/ovn-northd.c
    > > > @@ -231,6 +231,42 @@ struct tnlid_node {
    > > >      uint32_t tnlid;
    > > >  };
    > > >  
    > > > +struct related_datapath_node {
    > > > +    struct hmap_node hmap_node;
    > > > +    const struct sbrec_datapath_binding *sb_db;
    > > > +};
    > > > +
    > > > +static void
    > > > +add_related_datapath(struct hmap *related_datapaths,
    > > > +                     const struct sbrec_datapath_binding *sb,
    > > > +                     size_t *n_related_datapaths)
    > > > +{
    > > > +    struct related_datapath_node *node;
    > > > +    HMAP_FOR_EACH_WITH_HASH (node, hmap_node,
    > > > +                             uuid_hash(&sb->header_.uuid),
    > > > +                             related_datapaths) {
    > > > +        if (uuid_equals(&sb->header_.uuid, &node->sb_db->header_.uuid)) {
    > > > +            return;
    > > > +        }
    > > > +    }
    > > > +
    > > > +    node = xzalloc(sizeof *node);
    > > > +    hmap_insert(related_datapaths, &node->hmap_node,
    > > > +                uuid_hash(&sb->header_.uuid));
    > > > +    node->sb_db = sb;
    > > > +    (*n_related_datapaths)++;
    > > > +}
    > > > +
    > > > +static void
    > > > +destroy_related_datapaths(struct hmap *related_datapaths)
    > > > +{
    > > > +    struct related_datapath_node *node;
    > > > +    HMAP_FOR_EACH_POP (node, hmap_node, related_datapaths) {
    > > > +        free(node);
    > > > +    }
    > > > +    hmap_destroy(related_datapaths);
    > > > +}
    > > > +
    > > >  static void
    > > >  destroy_tnlids(struct hmap *tnlids)
    > > >  {
    > > > @@ -376,6 +412,8 @@ struct ovn_datapath {
    > > >      size_t n_router_ports;
    > > >  
    > > >      struct hmap port_tnlids;
    > > > +    struct hmap related_datapaths;
    > > > +    size_t n_related_datapaths;
    > > >      uint32_t port_key_hint;
    > > >  
    > > >      bool has_unknown;
    > > > @@ -426,6 +464,7 @@ ovn_datapath_create(struct hmap *datapaths, const struct uuid *key,
    > > >      od->nbr = nbr;
    > > >      hmap_init(&od->port_tnlids);
    > > >      hmap_init(&od->ipam);
    > > > +    hmap_init(&od->related_datapaths);
    > > >      od->port_key_hint = 0;
    > > >      hmap_insert(datapaths, &od->key_node, uuid_hash(&od->key));
    > > >      return od;
    > > > @@ -441,6 +480,7 @@ ovn_datapath_destroy(struct hmap *datapaths, struct ovn_datapath *od)
    > > >          hmap_remove(datapaths, &od->key_node);
    > > >          destroy_tnlids(&od->port_tnlids);
    > > >          destroy_ipam(&od->ipam);
    > > > +        destroy_related_datapaths(&od->related_datapaths);
    > > >          free(od->router_ports);
    > > >          free(od);
    > > >      }
    > > > @@ -624,6 +664,28 @@ build_datapaths(struct northd_context *ctx, struct hmap *datapaths)
    > > >              smap_destroy(&ids);
    > > >  
    > > >              sbrec_datapath_binding_set_tunnel_key(od->sb, tunnel_key);
    > > > +
    > > > +            struct sbrec_datapath_binding **sb_related_datapaths
    > > > +                = xmalloc(sizeof(*sb_related_datapaths) * od->n_related_datapaths);
    > > > +            int rdi = 0;
    > > > +            struct related_datapath_node *related_datapath;
    > > > +            HMAP_FOR_EACH (related_datapath, hmap_node,
    > > > +                           &od->related_datapaths) {
    > > > +                if (rdi >= od->n_related_datapaths) {
    > > > +                    static struct vlog_rate_limit rl
    > > > +                        = VLOG_RATE_LIMIT_INIT(5, 1);
    > > > +                    VLOG_ERR_RL(&rl, "related datapaths accounting error "
    > > > +                                UUID_FMT, UUID_ARGS(&od->key));
    > > > +                    break;
    > > > +                }
    > > > +                sb_related_datapaths[rdi] = CONST_CAST(
    > > > +                    struct sbrec_datapath_binding *, related_datapath->sb_db);
    > > > +                rdi++;
    > > > +            }
    > > > +            sbrec_datapath_binding_set_related_datapaths(od->sb,
    > > > +                sb_related_datapaths, od->n_related_datapaths);
    > > > +            free(sb_related_datapaths);
    > > > +
    > > >          }
    > > >          destroy_tnlids(&dp_tnlids);
    > > >      }
    > > > @@ -1359,6 +1421,12 @@ ovn_port_update_sbrec(const struct ovn_port *op,
    > > >              sbrec_port_binding_set_type(op->sb, "patch");
    > > >          }
    > > >  
    > > > +        if (op->peer && op->peer->od && op->peer->od->sb) {
    > > > +            add_related_datapath(&op->od->related_datapaths,
    > > > +                                 op->peer->od->sb,
    > > > +                                 &op->od->n_related_datapaths);
    > > > +        }
    > > > +
    > > >          const char *peer = op->peer ? op->peer->key : "<error>";
    > > >          struct smap new;
    > > >          smap_init(&new);
    > > > @@ -1411,6 +1479,12 @@ ovn_port_update_sbrec(const struct ovn_port *op,
    > > >                  sbrec_port_binding_set_type(op->sb, "patch");
    > > >              }
    > > >  
    > > > +            if (op->peer && op->peer->od && op->peer->od->sb) {
    > > > +                add_related_datapath(&op->od->related_datapaths,
    > > > +                                     op->peer->od->sb,
    > > > +                                     &op->od->n_related_datapaths);
    > > > +            }
    > > > +
    > > >              const char *router_port = smap_get_def(&op->nbsp->options,
    > > >                                                     "router-port", "<error>");
    > > >              struct smap new;
    > > > @@ -4825,6 +4899,8 @@ main(int argc, char *argv[])
    > > >      add_column_noalert(ovnsb_idl_loop.idl,
    > > >                         &sbrec_datapath_binding_col_tunnel_key);
    > > >      add_column_noalert(ovnsb_idl_loop.idl,
    > > > +                       &sbrec_datapath_binding_col_related_datapaths);
    > > > +    add_column_noalert(ovnsb_idl_loop.idl,
    > > >                         &sbrec_datapath_binding_col_external_ids);
    > > >  
    > > >      ovsdb_idl_add_table(ovnsb_idl_loop.idl, &sbrec_table_port_binding);
    > > > diff --git a/ovn/ovn-sb.ovsschema b/ovn/ovn-sb.ovsschema
    > > > index 89342fe..51c222d 100644
    > > > --- a/ovn/ovn-sb.ovsschema
    > > > +++ b/ovn/ovn-sb.ovsschema
    > > > @@ -1,7 +1,7 @@
    > > >  {
    > > >      "name": "OVN_Southbound",
    > > > -    "version": "1.9.0",
    > > > -    "cksum": "239060528 9012",
    > > > +    "version": "1.10.0",
    > > > +    "cksum": "1139190282 9320",
    > > >      "tables": {
    > > >          "SB_Global": {
    > > >              "columns": {
    > > > @@ -93,7 +93,12 @@
    > > >                                        "maxInteger": 16777215}}},
    > > >                  "external_ids": {
    > > >                      "type": {"key": "string", "value": "string",
    > > > -                             "min": 0, "max": "unlimited"}}},
    > > > +                             "min": 0, "max": "unlimited"}},
    > > > +                "related_datapaths": {"type": {"key": {"type": 
    "uuid",
    > > > +                                      "refTable": 
    "Datapath_Binding",
    > > > +                                      "refType": "weak"},
    > > > +                                      "min": 0,
    > > > +                                      "max": "unlimited"}}},
    > > >              "indexes": [["tunnel_key"]],
    > > >              "isRoot": true},
    > > >          "Port_Binding": {
    > > > diff --git a/ovn/ovn-sb.xml b/ovn/ovn-sb.xml
    > > > index 65191ed..c20861f 100644
    > > > --- a/ovn/ovn-sb.xml
    > > > +++ b/ovn/ovn-sb.xml
    > > > @@ -1556,6 +1556,15 @@ tcp.flags = RST;
    > > >        constructed for each supported encapsulation.
    > > >      </column>
    > > >  
    > > > +    <column name="related_datapaths">
    > > > +      The related_datapaths column is used to keep track of which
    > > > +      datapaths are connected to each other.  This is leveraged in
    > > > +      ovn-controller to do a flood fill from each local logical switch
    > > > +      to determine the full set of datapaths needed on a given
    > > > +      ovn-controller.  This set of datapaths can be used to determine
    > > > +      which logical ports and logical flows an ovn-controller should
    > > > +      monitor.
    > > > +    </column>
    > > > +
    > > >      <group title="OVN_Northbound Relationship">
    > > >        <p>
    > > >          Each row in <ref table="Datapath_Binding"/> is associated with some
    > > > diff --git a/tests/ovn.at b/tests/ovn.at
    > > > index 69f5277..04bc3da 100644
    > > > --- a/tests/ovn.at
    > > > +++ b/tests/ovn.at
    > > > @@ -4079,6 +4079,7 @@ ovs-vsctl -- add-port br-int hv1-vif1 -- \
    > > >      options:tx_pcap=hv1/vif1-tx.pcap \
    > > >      options:rxq_pcap=hv1/vif1-rx.pcap \
    > > >      ofport-request=1
    > > > +ovs-appctl -t ovn-controller vlog/set ANY:file:DBG
    > > >  
    > > >  
    > > >  sim_add hv2
    > > > @@ -4090,6 +4091,7 @@ ovs-vsctl -- add-port br-int hv2-vif1 -- \
    > > >      options:tx_pcap=hv2/vif1-tx.pcap \
    > > >      options:rxq_pcap=hv2/vif1-rx.pcap \
    > > >      ofport-request=1
    > > > +ovs-appctl -t ovn-controller vlog/set ANY:file:DBG
    > > >  
    > > >  # Pre-populate the hypervisors' ARP tables so that we don't lose any
    > > >  # packets for ARP resolution (native tunneling doesn't queue packets
    > > > @@ -4195,6 +4197,12 @@ as hv2 ovs-ofctl show br-int
    > > >  as hv2 ovs-ofctl dump-flows br-int
    > > >  echo "----------------------------"
    > > >  
    > > > +# tunnel key 2 represents the gateway router and the associated
    > > > +# logical flows should only be on hv2 not hv1 when conditional
    > > > +# monitoring of flows is being used.
    > > > +AT_CHECK([cat hv2/ovn-controller.log | grep 'tunnel_key 2'], [0], [ignore-nolog])
    > > > +AT_CHECK([cat hv1/ovn-controller.log | grep 'tunnel_key 2'], [1], [ignore-nolog])
    > > > +
    > > >  echo $expected > expected
    > > >  OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
    > > >  
    > > > -- 
    > > > 1.9.1
    > > > 
    > > 
    
    
