On Tue, Jan 20, 2026 at 5:50 AM Ales Musil <[email protected]> wrote:

>
>
> On Mon, Jan 19, 2026 at 2:53 PM Erlon Rodrigues Cruz <[email protected]>
> wrote:
>
>> Hey Ales,
>>
>
> Hi Erlon,
>

>
>>
>> Thanks for the thorough review. I'll address your suggestions in the next
>> patch version.
>> Regarding the testing, I'll change your proposed tests to also do
>> bi-directional flows (an earlier iteration of my patch was breaking
>> whenever any bidirectional traffic was sent, which is why I added
>> bidirectional traffic).
>>
>
> thank you for working on this!
>
>
>>
>> I got sidetracked by an issue in the first version of the fragmentation
>> patch. The problem is that DHCP traffic does not pass through the
>> logical switch, whether OVN serves DHCP or it's just DHCP crossing the
>> switch. We conducted tests, and the issue also occurs with the
>> southbound version of the patch. I mentioned this problem in the
>> previous weekly meeting and wanted to get your thoughts on what might
>> be causing it. We tested DHCP and regular UDP with translation on and
>> off to compare the datapath flows and traces.
>>
>> udp_ct_translation = on:
>> DHCP: datapath-flows
>> <https://gist.github.com/sombrafam/f7ceffaa51b031e1052d55cf516d5ab0> /
>> ofproto-trace
>> <https://gist.github.com/sombrafam/c846dc0c354282d0e6a98d648f1013e8>
>> UDP: datapath-flows
>> <https://gist.github.com/sombrafam/41f926afaea9c30c08c79bae5d2f3f76> /
>> ofproto-trace
>> <https://gist.github.com/sombrafam/883fff76b7b785d279cf52c554aa3da1>
>>
>> udp_ct_translation = off:
>> DHCP: datapath-flow
>> <https://gist.github.com/sombrafam/ccaa138def0edbf8c69d8be5bdeec864> /
>> ofproto-trace
>> <https://gist.github.com/sombrafam/bd4ad5e599ea87372b7c6983bf5a8d5b>
>> UDP: datapath-flow
>> <https://gist.github.com/sombrafam/9c8bb19b7cb4bfd9005066021ceddda3> /
>> ofproto-trace
>> <https://gist.github.com/sombrafam/68bee093613ed8a92c6d412db0e7edfd>
>>
>
> Could you please share the DBs from the test? The flows do not paint the
> full picture. Trying to understand what's going on from the traces, it
> seems the packets took a different path. Also, just an FYI: you can
> combine ofproto/trace with ovn-detrace, which makes the output more
> readable.
>

I have attached the traces with the DBs from another run. Thanks for the
tip on ovn-detrace. I'll definitely try it out later.
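
For reference, ovn-detrace reads ofproto/trace output on stdin and annotates
each OpenFlow rule with the logical flow it was generated from. A typical
invocation could look like the sketch below (the flow string and the DB
socket paths are illustrative; adjust them to your sandbox):

```shell
# Trace a UDP packet through br-int, then annotate the trace with the
# originating logical flows. Flow fields and socket paths are examples.
ovs-appctl ofproto/trace br-int \
    'in_port=1,udp,nw_src=172.16.1.3,nw_dst=172.16.1.2,tp_dst=4369' \
  | ovn-detrace --ovnsb=unix:$ovs_base/ovn-sb/ovn-sb.sock \
                --ovnnb=unix:$ovs_base/ovn-nb/ovn-nb.sock
```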


>
>
>> You didn't mention TCP/SCTP. We briefly discussed in the meeting that
>> these should be handled by the MSS. Are we okay with removing them and
>> leaving only UDP coverage?
>>
>
> Let's keep it consistent and do it for all.
>
>
>>
>> Erlon
>>
>
> Regards,
> Ales
>
>
>>
>> On Fri, Jan 9, 2026 at 4:43 AM Ales Musil <[email protected]> wrote:
>>
>>>
>>>
>>> On Thu, Jan 8, 2026 at 4:53 PM Erlon R. Cruz <[email protected]>
>>> wrote:
>>>
>>>> In its current state, OVN cannot handle fragmented traffic for ACLs
>>>> in the userspace datapath (DPDK). Just like in the case of LB
>>>> (commit 20a96b9), the kernel DP will try to reassemble the fragments
>>>> during CT lookup; the userspace DP, however, won't reassemble them.
>>>>
>>>> This patch allows OVN to handle fragmented traffic by defining a
>>>> translation table on the southbound side that leverages OpenFlow
>>>> connection tracking capabilities. When a stateful flow is created in
>>>> the NB, we add a hint to the flow. This hint is read in the SB and,
>>>> if connection tracking is set to be used, the SB will use the
>>>> alternative translation table based on the connection tracking
>>>> information.
>>>>
>>>> This approach should not change the current behavior; it is only
>>>> enabled if acl_udp_ct_translation is set:
>>>>
>>>> ovn-nbctl set NB_Global . options:acl_udp_ct_translation=true
>>>>
>>>> Signed-off-by: Erlon R. Cruz <[email protected]>
>>>> ---
>>>>
>>>
>>> Hi Erlon,
>>>
>>> thank you for the patch. It is missing a NEWS entry about the
>>> new option and also documentation in ovn-nb.xml. The documentation
>>> should also clearly state that this breaks HW offloading.
>>>
>>>
>>
>>
>>
>>> I have a couple of additional comments down below.
>>>
>>>
>>>>  controller/lflow.c        |  86 +++++++++-
>>>>  controller/lflow.h        |   1 +
>>>>  include/ovn/expr.h        |   1 +
>>>>  lib/expr.c                |  16 +-
>>>>  northd/en-global-config.c |   5 +
>>>>  northd/lflow-mgr.c        |  50 +++++-
>>>>  northd/lflow-mgr.h        |   8 +-
>>>>  northd/northd.c           |  50 ++++--
>>>>  tests/automake.mk         |   4 +-
>>>>  tests/system-ovn.at       | 273 ++++++++++++++++++++++++++++++
>>>>  tests/tcp_simple.py       | 338 ++++++++++++++++++++++++++++++++++++++
>>>>  tests/udp_client.py       | 113 +++++++++++++
>>>>  12 files changed, 925 insertions(+), 20 deletions(-)
>>>>  create mode 100755 tests/tcp_simple.py
>>>>  create mode 100755 tests/udp_client.py
>>>>
>>>> diff --git a/controller/lflow.c b/controller/lflow.c
>>>> index e84fb2486..85b6d61aa 100644
>>>> --- a/controller/lflow.c
>>>> +++ b/controller/lflow.c
>>>> @@ -51,10 +51,73 @@ COVERAGE_DEFINE(consider_logical_flow);
>>>>  /* Contains "struct expr_symbol"s for fields supported by OVN lflows.
>>>> */
>>>>  static struct shash symtab;
>>>>
>>>> +/* Alternative symbol table for ACL CT translation.
>>>> + * This symbol table is used when processing ACLs that need to match on
>>>> + * fragmented packets. It maps L4 protocol fields to their connection
>>>> + * tracking equivalents, allowing fragments to be matched correctly.
>>>> + *
>>>> + * For example, "udp.dst" is mapped to MFF_CT_TP_DST instead of
>>>> MFF_UDP_DST,
>>>> + * so that all fragments (not just the first) can be matched based on
>>>> the
>>>> + * connection tracking state.
>>>> + */
>>>> +static struct shash acl_ct_symtab;
>>>> +
>>>> +void
>>>> +ovn_init_acl_ct_symtab(struct shash *acl_symtab)
>>>> +{
>>>>
>>>
>>> This function should be in logical-fields.c.
>>>
>>>
>>>> +    /* Initialize with the standard symbol table. */
>>>> +    ovn_init_symtab(acl_symtab);
>>>> +
>>>> +    /* Remove the original tcp/udp/sctp symbols that we will override.
>>>> +     * Must remove subfields first since they reference the parent. */
>>>> +    expr_symtab_remove(acl_symtab, "tcp.src");
>>>> +    expr_symtab_remove(acl_symtab, "tcp.dst");
>>>> +    expr_symtab_remove(acl_symtab, "tcp");
>>>> +    expr_symtab_remove(acl_symtab, "udp.src");
>>>> +    expr_symtab_remove(acl_symtab, "udp.dst");
>>>> +    expr_symtab_remove(acl_symtab, "udp");
>>>> +    expr_symtab_remove(acl_symtab, "sctp.src");
>>>> +    expr_symtab_remove(acl_symtab, "sctp.dst");
>>>> +    expr_symtab_remove(acl_symtab, "sctp");
>>>> +
>>>> +    /* Add ct_proto field - CT original direction protocol.
>>>> +     * This is used in the tcp/udp/sctp predicate expansions below. */
>>>> +    expr_symtab_add_field(acl_symtab, "ct_proto", MFF_CT_NW_PROTO,
>>>> +                          "ct.trk", false);
>>>> +
>>>> +    /* Override TCP protocol and port fields to use CT equivalents.
>>>> +     * When "tcp" is used as a predicate, it expands to "ct_proto == 6"
>>>> +     * instead of "ip.proto == 6". This ensures we match on the CT
>>>> state
>>>> +     * which is available for all fragments. */
>>>> +    expr_symtab_add_predicate(acl_symtab, "tcp",
>>>> +                              "ct.trk && !ct.inv && ct_proto == 6");
>>>> +    expr_symtab_add_field(acl_symtab, "tcp.src", MFF_CT_TP_SRC,
>>>> +                          "tcp", false);
>>>> +    expr_symtab_add_field(acl_symtab, "tcp.dst", MFF_CT_TP_DST,
>>>> +                          "tcp", false);
>>>> +
>>>> +    /* Override UDP protocol and port fields */
>>>> +    expr_symtab_add_predicate(acl_symtab, "udp",
>>>> +                              "ct.trk && !ct.inv && ct_proto == 17");
>>>> +    expr_symtab_add_field(acl_symtab, "udp.src", MFF_CT_TP_SRC,
>>>> +                          "udp", false);
>>>> +    expr_symtab_add_field(acl_symtab, "udp.dst", MFF_CT_TP_DST,
>>>> +                          "udp", false);
>>>> +
>>>> +    /* Override SCTP protocol and port fields */
>>>> +    expr_symtab_add_predicate(acl_symtab, "sctp",
>>>> +                              "ct.trk && !ct.inv && ct_proto == 132");
>>>> +    expr_symtab_add_field(acl_symtab, "sctp.src", MFF_CT_TP_SRC,
>>>> +                          "sctp", false);
>>>> +    expr_symtab_add_field(acl_symtab, "sctp.dst", MFF_CT_TP_DST,
>>>> +                          "sctp", false);
>>>> +}
>>>> +
>>>>  void
>>>>  lflow_init(void)
>>>>  {
>>>>      ovn_init_symtab(&symtab);
>>>> +    ovn_init_acl_ct_symtab(&acl_ct_symtab);
>>>>  }
>>>>
>>>>  struct lookup_port_aux {
>>>> @@ -984,7 +1047,24 @@ convert_match_to_expr(const struct
>>>> sbrec_logical_flow *lflow,
>>>>                       lflow->match);
>>>>          return NULL;
>>>>      }
>>>> -    struct expr *e = expr_parse_string(lex_str_get(&match_s), &symtab,
>>>> +
>>>> +    /* Check if this logical flow requires ACL CT translation.
>>>> +     * If the tags contains "acl_ct_trans"="true", we use the
>>>> alternative
>>>> +     * symbol table that maps L4 fields (tcp/udp/sctp ports) to their
>>>> CT
>>>> +     * equivalents. */
>>>> +    const char *ct_trans = smap_get(&lflow->tags, "acl_ct_trans");
>>>>
>>>
>>> I wouldn't abbreviate the option name, let's use "acl_ct_translation".
>>> Also we have a helper smap_get_bool(), which gets you bool directly.
>>>
>>>
>>>> +    struct shash *symtab_to_use = (ct_trans && !strcmp(ct_trans,
>>>> "true")
>>>> +                                   ? &acl_ct_symtab : &symtab);
>>>> +
>>>> +    if (ct_trans && !strcmp(ct_trans, "true")) {
>>>> +        VLOG_DBG("ACL CT Translation enabled for logical flow:
>>>> match='%s' "
>>>> +                 "acl_ct_trans='%s' table=%"PRId64" pipeline=%s",
>>>> +                 lflow->match, ct_trans,
>>>> +                 lflow->table_id,
>>>> +                 lflow->pipeline);
>>>> +    }
>>>>
>>>
>>> Let's remove this debug log; it doesn't bring extra value, as the
>>> option/tag can be checked directly on the lflow in the DB.
>>>
>>>
>>>> +
>>>> +    struct expr *e = expr_parse_string(lex_str_get(&match_s),
>>>> symtab_to_use,
>>>>                                         addr_sets, port_groups,
>>>> &addr_sets_ref,
>>>>                                         &port_groups_ref,
>>>>                                         ldp->datapath->tunnel_key,
>>>> @@ -1016,7 +1096,7 @@ convert_match_to_expr(const struct
>>>> sbrec_logical_flow *lflow,
>>>>              e = expr_combine(EXPR_T_AND, e, *prereqs);
>>>>              *prereqs = NULL;
>>>>          }
>>>> -        e = expr_annotate(e, &symtab, &error);
>>>> +        e = expr_annotate(e, symtab_to_use, &error);
>>>>      }
>>>>      if (error) {
>>>>          static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
>>>> @@ -2045,6 +2125,8 @@ lflow_destroy(void)
>>>>  {
>>>>      expr_symtab_destroy(&symtab);
>>>>      shash_destroy(&symtab);
>>>> +    expr_symtab_destroy(&acl_ct_symtab);
>>>> +    shash_destroy(&acl_ct_symtab);
>>>>  }
>>>>
>>>>  bool
>>>> diff --git a/controller/lflow.h b/controller/lflow.h
>>>> index 30c6dbebd..3e762d49b 100644
>>>> --- a/controller/lflow.h
>>>> +++ b/controller/lflow.h
>>>> @@ -156,6 +156,7 @@ struct lflow_ctx_out {
>>>>  };
>>>>
>>>>  void lflow_init(void);
>>>> +void ovn_init_acl_ct_symtab(struct shash *acl_symtab);
>>>>
>>>
>>> This should be in logical-fields.h.
>>>
>>>
>>>>  void lflow_run(struct lflow_ctx_in *, struct lflow_ctx_out *);
>>>>  void lflow_handle_cached_flows(struct lflow_cache *,
>>>>                                 const struct sbrec_logical_flow_table
>>>> *);
>>>> diff --git a/include/ovn/expr.h b/include/ovn/expr.h
>>>> index ae5ab1aab..a96c89d1b 100644
>>>> --- a/include/ovn/expr.h
>>>> +++ b/include/ovn/expr.h
>>>> @@ -319,6 +319,7 @@ struct expr_symbol
>>>> *expr_symtab_add_ovn_field(struct shash *symtab,
>>>>                                                const char *name,
>>>>                                                enum ovn_field_id id);
>>>>  void expr_symtab_destroy(struct shash *symtab);
>>>> +void expr_symtab_remove(struct shash *symtab, const char *name);
>>>>
>>>>  /* Expression type. */
>>>>  enum expr_type {
>>>> diff --git a/lib/expr.c b/lib/expr.c
>>>> index 288e245c6..2071b3a65 100644
>>>> --- a/lib/expr.c
>>>> +++ b/lib/expr.c
>>>> @@ -1818,7 +1818,21 @@ expr_symtab_destroy(struct shash *symtab)
>>>>          free(symbol);
>>>>      }
>>>>  }
>>>> -
>>>>
>>>
>>> Let's keep the FF (form feed) separator.
>>>
>>>
>>>> +
>>>> +/* Removes the symbol with 'name' from 'symtab', freeing its memory.
>>>> + * Does nothing if the symbol doesn't exist. */
>>>> +void
>>>> +expr_symtab_remove(struct shash *symtab, const char *name)
>>>> +{
>>>> +    struct expr_symbol *symbol = shash_find_and_delete(symtab, name);
>>>> +    if (symbol) {
>>>> +        free(symbol->name);
>>>> +        free(symbol->prereqs);
>>>> +        free(symbol->predicate);
>>>> +        free(symbol);
>>>> +    }
>>>> +}
>>>> +
>>>>  /* Cloning. */
>>>>
>>>>  static struct expr *
>>>> diff --git a/northd/en-global-config.c b/northd/en-global-config.c
>>>> index 2556b2888..d5499a259 100644
>>>> --- a/northd/en-global-config.c
>>>> +++ b/northd/en-global-config.c
>>>> @@ -717,6 +717,11 @@ check_nb_options_out_of_sync(
>>>>          return true;
>>>>      }
>>>>
>>>> +    if (config_out_of_sync(&nb->options, &config_data->nb_options,
>>>> +                           "acl_udp_ct_translation", false)) {
>>>> +        return true;
>>>> +    }
>>>> +
>>>>      return false;
>>>>  }
>>>>
>>>> diff --git a/northd/lflow-mgr.c b/northd/lflow-mgr.c
>>>> index a942f287b..a302b6c99 100644
>>>> --- a/northd/lflow-mgr.c
>>>> +++ b/northd/lflow-mgr.c
>>>> @@ -184,6 +184,7 @@ struct ovn_lflow {
>>>>      struct ovn_dp_group *dpg;    /* Link to unique Sb datapath group.
>>>> */
>>>>      const char *where;
>>>>      const char *flow_desc;
>>>> +    bool acl_ct_trans;           /* Use CT-based L4 field translation.
>>>> */
>>>>
>>>>      struct uuid sb_uuid;         /* SB DB row uuid, specified by
>>>> northd. */
>>>>      struct ovs_list referenced_by;  /* List of struct lflow_ref_node.
>>>> */
>>>> @@ -700,7 +701,7 @@ lflow_ref_sync_lflows(struct lflow_ref *lflow_ref,
>>>>   * then it may corrupt the hmap.  Caller should ensure thread safety
>>>>   * for such scenarios.
>>>>   */
>>>> -void
>>>> +struct ovn_lflow *
>>>>  lflow_table_add_lflow(struct lflow_table *lflow_table,
>>>>                        const struct ovn_datapath *od,
>>>>                        const unsigned long *dp_bitmap, size_t
>>>> dp_bitmap_len,
>>>> @@ -771,6 +772,13 @@ lflow_table_add_lflow(struct lflow_table
>>>> *lflow_table,
>>>>      ovn_dp_group_add_with_reference(lflow, od, dp_bitmap,
>>>> dp_bitmap_len);
>>>>
>>>>      lflow_hash_unlock(hash_lock);
>>>> +    return lflow;
>>>> +}
>>>> +
>>>> +void
>>>> +ovn_lflow_set_acl_ct_trans(struct ovn_lflow *lflow, bool acl_ct_trans)
>>>> +{
>>>>
>>>
>>> To make it aligned with the rest we should add a new macro similar to
>>> ovn_lflow_add_with_lport_and_hint(),
>>> e.g. ovn_lflow_add_with_ct_trans_and_hint().
>>> This whole macro situation should be improved with the refactor, but
>>> until then
>>> we don't have a better choice.
>>>
>>> +    lflow->acl_ct_trans = acl_ct_trans;
>>>>  }
>>>>
>>>>  struct ovn_dp_group *
>>>> @@ -904,6 +912,7 @@ ovn_lflow_init(struct ovn_lflow *lflow, struct
>>>> ovn_datapath *od,
>>>>      lflow->stage_hint = stage_hint;
>>>>      lflow->ctrl_meter = ctrl_meter;
>>>>      lflow->flow_desc = flow_desc;
>>>> +    lflow->acl_ct_trans = false;
>>>>      lflow->dpg = NULL;
>>>>      lflow->where = where;
>>>>      lflow->sb_uuid = sbuuid;
>>>> @@ -1096,12 +1105,26 @@ sync_lflow_to_sb(struct ovn_lflow *lflow,
>>>>          sbrec_logical_flow_set_match(sbflow, lflow->match);
>>>>          sbrec_logical_flow_set_actions(sbflow, lflow->actions);
>>>>          sbrec_logical_flow_set_flow_desc(sbflow, lflow->flow_desc);
>>>> -        if (lflow->io_port) {
>>>> +
>>>> +        /* Set tags for io_port and/or acl_ct_trans if needed. */
>>>> +        if (lflow->io_port || lflow->acl_ct_trans) {
>>>>              struct smap tags = SMAP_INITIALIZER(&tags);
>>>> -            smap_add(&tags, "in_out_port", lflow->io_port);
>>>> +            if (lflow->io_port) {
>>>> +                smap_add(&tags, "in_out_port", lflow->io_port);
>>>> +            }
>>>> +            if (lflow->acl_ct_trans) {
>>>> +                smap_add(&tags, "acl_ct_trans", "true");
>>>> +            }
>>>>              sbrec_logical_flow_set_tags(sbflow, &tags);
>>>>              smap_destroy(&tags);
>>>>          }
>>>> +
>>>> +        VLOG_DBG("Creating new logical flow: "
>>>> +                 "pipeline=%s table=%d priority=%d match='%s' "
>>>> +                 "actions='%s' acl_ct_trans=%d",
>>>> +                 pipeline, table, lflow->priority, lflow->match,
>>>> +                 lflow->actions, lflow->acl_ct_trans);
>>>>
>>>
>>> Same as above, we can see the tags in the DB, let's remove this.
>>>
>>>
>>>> +
>>>>          sbrec_logical_flow_set_controller_meter(sbflow,
>>>> lflow->ctrl_meter);
>>>>
>>>>          /* Trim the source locator lflow->where, which looks something
>>>> like
>>>> @@ -1167,6 +1190,27 @@ sync_lflow_to_sb(struct ovn_lflow *lflow,
>>>>                  }
>>>>              }
>>>>          }
>>>> +
>>>> +        /* Update acl_ct_trans marker in tags if needed.
>>>> +         * This must be outside ovn_internal_version_changed check
>>>> because
>>>> +         * the option can be enabled/disabled at runtime. */
>>>>
>>> +        const char *cur_ct_trans = smap_get_def(&sbflow->tags,
>>>> +                                                "acl_ct_trans", "");
>>>
>>>
>>> Can be replaced with smap_get_bool().
>>>
>>>
>>>> +        bool cur_has_ct_trans = !strcmp(cur_ct_trans, "true");
>>>> +        if (lflow->acl_ct_trans != cur_has_ct_trans) {
>>>> +            VLOG_DBG("Updating logical flow: pipeline=%s "
>>>> +                     "table=%"PRId64" priority=%"PRId64" match='%s' "
>>>> +                     "old_acl_ct_trans=%d new_acl_ct_trans=%d",
>>>> +                     sbflow->pipeline, sbflow->table_id,
>>>> sbflow->priority,
>>>> +                     sbflow->match, cur_has_ct_trans,
>>>> lflow->acl_ct_trans);
>>>>
>>>
>>> Let's drop the DBG log here.
>>>
>>>
>>>> +            if (lflow->acl_ct_trans) {
>>>> +                sbrec_logical_flow_update_tags_setkey(
>>>> +                    sbflow, "acl_ct_trans", "true");
>>>
>>> +            } else {
>>>> +                sbrec_logical_flow_update_tags_delkey(
>>>> +                    sbflow, "acl_ct_trans");
>>>> +            }
>>>> +        }
>>>>
>>>      }
>>>>
>>>>      if (lflow->od) {
>>>> diff --git a/northd/lflow-mgr.h b/northd/lflow-mgr.h
>>>> index c1e72d1be..790498b0b 100644
>>>> --- a/northd/lflow-mgr.h
>>>> +++ b/northd/lflow-mgr.h
>>>> @@ -24,6 +24,7 @@
>>>>  struct ovsdb_idl_txn;
>>>>  struct ovn_datapath;
>>>>  struct ovsdb_idl_row;
>>>> +struct ovn_lflow;
>>>>
>>>>  /* lflow map which stores the logical flows. */
>>>>  struct lflow_table {
>>>> @@ -77,7 +78,8 @@ bool lflow_ref_sync_lflows(struct lflow_ref *,
>>>>                             const struct sbrec_logical_dp_group_table
>>>> *);
>>>>
>>>>
>>>> -void lflow_table_add_lflow(struct lflow_table *, const struct
>>>> ovn_datapath *,
>>>> +struct ovn_lflow *lflow_table_add_lflow(struct lflow_table *,
>>>> +                           const struct ovn_datapath *,
>>>>                             const unsigned long *dp_bitmap,
>>>>                             size_t dp_bitmap_len, enum ovn_stage stage,
>>>>                             uint16_t priority, const char *match,
>>>> @@ -87,6 +89,10 @@ void lflow_table_add_lflow(struct lflow_table *,
>>>> const struct ovn_datapath *,
>>>>                             const char *where, const char *flow_desc,
>>>>                             struct lflow_ref *);
>>>>
>>>> +/* Set the acl_ct_trans flag on an lflow. When true, ovn-controller
>>>> will
>>>> + * use a symbol table that maps L4 port fields to CT equivalents. */
>>>> +void ovn_lflow_set_acl_ct_trans(struct ovn_lflow *lflow, bool
>>>> acl_ct_trans);
>>>> +
>>>>  /* Adds a row with the specified contents to the Logical_Flow table. */
>>>>  #define ovn_lflow_add_with_hint__(LFLOW_TABLE, OD, STAGE, PRIORITY,
>>>> MATCH, \
>>>>                                    ACTIONS, IN_OUT_PORT, CTRL_METER, \
>>>> diff --git a/northd/northd.c b/northd/northd.c
>>>> index 011f449ec..7542eea1a 100644
>>>> --- a/northd/northd.c
>>>> +++ b/northd/northd.c
>>>> @@ -88,6 +88,12 @@ static bool use_common_zone = false;
>>>>   * Otherwise, it will avoid using it.  The default is true. */
>>>>  static bool use_ct_inv_match = true;
>>>>
>>>> +/* If this option is 'true' northd will rewrite stateful ACL rules
>>>> that match
>>>> + * on L4 port fields (tcp/udp/sctp) to use connection tracking fields
>>>> to
>>>> + * properly handle IP fragments. By default this option is set to
>>>> 'false'.
>>>> + */
>>>> +static bool acl_udp_ct_translation = false;
>>>> +
>>>>  /* If this option is 'true' northd will implicitly add a
>>>> lowest-priority
>>>>   * drop rule in the ACL stage of logical switches that have at least
>>>> one
>>>>   * ACL.
>>>> @@ -7178,6 +7184,19 @@ consider_acl(struct lflow_table *lflows, const
>>>> struct ovn_datapath *od,
>>>>          match_tier_len = match->length;
>>>>      }
>>>>
>>>> +    /* Check if this ACL needs CT translation for fragment handling.
>>>> +     * All stateful ACLs are marked when the option is enabled; the
>>>> actual
>>>> +     * translation only affects L4 port fields in ovn-controller. */
>>>> +    bool needs_ct_trans = has_stateful && acl_udp_ct_translation;
>>>> +    struct ovn_lflow *lflow;
>>>> +
>>>> +    VLOG_DBG("consider_acl: ACL %s action=%s - "
>>>> +              "stage=%s priority=%"PRId64" match='%s' "
>>>> +              "has_stateful=%d needs_ct_trans=%d",
>>>> +              acl->name ? acl->name : "(unnamed)", acl->action,
>>>> +              ovn_stage_to_str(stage), acl->priority,
>>>> +              acl->match, has_stateful, needs_ct_trans);
>>>>
>>>
>>> Again let's drop this DBG log.
>>>
>>>
>>>> +
>>>>      if (!has_stateful
>>>>          || !strcmp(acl->action, "pass")
>>>>          || !strcmp(acl->action, "allow-stateless")) {
>>>> @@ -7195,6 +7214,7 @@ consider_acl(struct lflow_table *lflows, const
>>>> struct ovn_datapath *od,
>>>>
>>>>          ds_put_cstr(actions, "next;");
>>>>          ds_put_format(match, "(%s)", acl->match);
>>>> +        /* Stateless ACLs don't need CT translation. */
>>>>          ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>>                                  ds_cstr(match), ds_cstr(actions),
>>>>                                  &acl->header_, lflow_ref);
>>>> @@ -7264,9 +7284,10 @@ consider_acl(struct lflow_table *lflows, const
>>>> struct ovn_datapath *od,
>>>>                            (uint8_t) acl->network_function_group->id);
>>>>          }
>>>>          ds_put_cstr(actions, "next;");
>>>> -        ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> -                                ds_cstr(match), ds_cstr(actions),
>>>> -                                &acl->header_, lflow_ref);
>>>> +        lflow = ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> +                                        ds_cstr(match),
>>>> ds_cstr(actions),
>>>> +                                        &acl->header_, lflow_ref);
>>>> +        ovn_lflow_set_acl_ct_trans(lflow, needs_ct_trans);
>>>>
>>>>          /* Match on traffic in the request direction for an established
>>>>           * connection tracking entry that has not been marked for
>>>> @@ -7295,9 +7316,10 @@ consider_acl(struct lflow_table *lflows, const
>>>> struct ovn_datapath *od,
>>>>                            (uint8_t) acl->network_function_group->id);
>>>>          }
>>>>          ds_put_cstr(actions, "next;");
>>>> -        ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> -                                ds_cstr(match), ds_cstr(actions),
>>>> -                                &acl->header_, lflow_ref);
>>>> +        lflow = ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> +                                        ds_cstr(match),
>>>> ds_cstr(actions),
>>>> +                                        &acl->header_, lflow_ref);
>>>> +        ovn_lflow_set_acl_ct_trans(lflow, needs_ct_trans);
>>>>      } else if (!strcmp(acl->action, "drop")
>>>>                 || !strcmp(acl->action, "reject")) {
>>>>          if (acl->network_function_group) {
>>>> @@ -7322,9 +7344,10 @@ consider_acl(struct lflow_table *lflows, const
>>>> struct ovn_datapath *od,
>>>>          build_acl_sample_label_action(actions, acl, acl->sample_new,
>>>> NULL,
>>>>                                        obs_stage);
>>>>          ds_put_cstr(actions, "next;");
>>>> -        ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> -                                ds_cstr(match), ds_cstr(actions),
>>>> -                                &acl->header_, lflow_ref);
>>>> +        lflow = ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> +                                        ds_cstr(match),
>>>> ds_cstr(actions),
>>>> +                                        &acl->header_, lflow_ref);
>>>> +        ovn_lflow_set_acl_ct_trans(lflow, needs_ct_trans);
>>>>          /* For an existing connection without ct_mark.blocked set,
>>>> we've
>>>>           * encountered a policy change. ACLs previously allowed
>>>>           * this connection and we committed the connection tracking
>>>> @@ -7349,9 +7372,10 @@ consider_acl(struct lflow_table *lflows, const
>>>> struct ovn_datapath *od,
>>>>          ds_put_format(actions,
>>>>                        "ct_commit { ct_mark.blocked = 1; "
>>>>                        "ct_label.obs_point_id = %"PRIu32"; }; next;",
>>>> obs_pid);
>>>> -        ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> -                                ds_cstr(match), ds_cstr(actions),
>>>> -                                &acl->header_, lflow_ref);
>>>> +        lflow = ovn_lflow_add_with_hint(lflows, od, stage, priority,
>>>> +                                        ds_cstr(match),
>>>> ds_cstr(actions),
>>>> +                                        &acl->header_, lflow_ref);
>>>> +        ovn_lflow_set_acl_ct_trans(lflow, needs_ct_trans);
>>>>      }
>>>>  }
>>>>
>>>> @@ -20273,6 +20297,8 @@ ovnnb_db_run(struct northd_input *input_data,
>>>>
>>>>      use_ct_inv_match = smap_get_bool(input_data->nb_options,
>>>>                                       "use_ct_inv_match", true);
>>>> +    acl_udp_ct_translation = smap_get_bool(input_data->nb_options,
>>>> +                                           "acl_udp_ct_translation",
>>>> false);
>>>>
>>>>      /* deprecated, use --event instead */
>>>>      controller_event_en = smap_get_bool(input_data->nb_options,
>>>> diff --git a/tests/automake.mk b/tests/automake.mk
>>>> index 8ae310547..6c4f8f10c 100644
>>>> --- a/tests/automake.mk
>>>> +++ b/tests/automake.mk
>>>> @@ -332,7 +332,9 @@ CHECK_PYFILES = \
>>>>         tests/check_acl_log.py \
>>>>         tests/scapy-server.py \
>>>>         tests/client.py \
>>>> -       tests/server.py
>>>> +       tests/server.py \
>>>> +       tests/udp_client.py \
>>>> +       tests/tcp_simple.py
>>>>
>>>>  EXTRA_DIST += $(CHECK_PYFILES)
>>>>  PYCOV_CLEAN_FILES += $(CHECK_PYFILES:.py=.py,cover) .coverage
>>>> diff --git a/tests/system-ovn.at b/tests/system-ovn.at
>>>> index 8af7cb058..445d540bc 100644
>>>> --- a/tests/system-ovn.at
>>>> +++ b/tests/system-ovn.at
>>>> @@ -18680,3 +18680,276 @@ OVS_TRAFFIC_VSWITCHD_STOP(["/.*error
>>>> receiving.*/d
>>>>  /.*terminating with signal 15.*/d"])
>>>>  AT_CLEANUP
>>>>  ])
>>>> +
>>>> +OVN_FOR_EACH_NORTHD([
>>>> +AT_SETUP([ACL dual direction with fragmentation])
>>>>
>>>
>>> On the topic of testing, we should add a test in ovn.at that checks
>>> the translated flows:
>>>
>>> 1) Set up a few ACLs with a tcp/udp match.
>>> 2) Check their lflows and OpenFlows without the global option being set.
>>> 3) Set the option to true, check the lflows and OpenFlows again.
>>> 4) Set the option back to false, then check the lflows and OpenFlows.
>>>
>>> This also covers the runtime switching and synchronization.
>>>
>>>
>>>> +AT_KEYWORDS([ovnacl])
>>>> +
>>>> +CHECK_CONNTRACK()
>>>> +
>>>> +ovn_start
>>>> +OVS_TRAFFIC_VSWITCHD_START()
>>>> +ADD_BR([br-int])
>>>> +ADD_BR([br-ext])
>>>> +
>>>> +# Logical network:
>>>> +# 2 logical switches "public" (192.168.1.0/24) and "internal" (
>>>> 172.16.1.0/24)
>>>> +# connected to a router lr. internal has a server. client and server
>>>> are
>>>> +# connected through localnet.
>>>> +
>>>> +check ovs-ofctl add-flow br-ext action=normal
>>>> +# Set external-ids in br-int needed for ovn-controller
>>>> +check ovs-vsctl \
>>>> +        -- set Open_vSwitch . external-ids:system-id=hv1 \
>>>> +        -- set Open_vSwitch .
>>>> external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
>>>> +        -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
>>>> +        -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
>>>> +        -- set bridge br-int fail-mode=secure
>>>> other-config:disable-in-band=true \
>>>> +        -- set Open_vSwitch .
>>>> external-ids:ovn-bridge-mappings=phynet:br-ext
>>>> +
>>>> +
>>>> +# Start ovn-controller
>>>> +start_daemon ovn-controller
>>>> +check sleep 3
>>>>
>>>
>>> Let's not put any sleep in here.
>>>
>>>
>>>> +
>>>> +# Set the minimal fragment size for userspace DP.
>>>> +# Note that this call will fail for system DP as this setting is not
>>>> supported there.
>>>> +ovs-appctl dpctl/ipf-set-min-frag v4 500
>>>> +
>>>> +check ovn-nbctl lr-add lr
>>>> +check ovn-nbctl ls-add internal
>>>>
>>>
>>> Looking down at the test we really only need internal, the traffic never
>>> leaves
>>> that switch.
>>>
>>>
>>>> +check ovn-nbctl ls-add public
>>>> +
>>>> +check ovn-nbctl lrp-add lr lr-pub 00:00:01:01:02:03 192.168.1.1/24
>>>> +check ovn-nbctl lsp-add  public pub-lr -- set Logical_Switch_Port
>>>> pub-lr \
>>>> +    type=router options:router-port=lr-pub
>>>> addresses=\"00:00:01:01:02:03\"
>>>>
>>>
>>> This can be replaced by lsp-add-router-port public pub-lr lr-pub.
>>>
>>>
>>>> +
>>>> +check ovn-nbctl lrp-add lr lr-internal 00:00:01:01:02:04 172.16.1.1/24
>>>> +check ovn-nbctl lsp-add internal internal-lr -- set
>>>> Logical_Switch_Port internal-lr \
>>>> +    type=router options:router-port=lr-internal
>>>> addresses=\"00:00:01:01:02:04\"
>>>>
>>>
>>> Same here: lsp-add-router-port internal internal-lr lr-internal.
>>>
>>>
>>>> +
>>>> +check ovn-nbctl lsp-add public ln_port \
>>>> +                -- lsp-set-addresses ln_port unknown \
>>>> +                -- lsp-set-type ln_port localnet \
>>>> +                -- lsp-set-options ln_port network_name=phynet
>>>>
>>>
>>> This can be replaced by lsp-add-localnet-port public ln_port phynet.
>>> Also why do we need to add physnet when both the client and server
>>> are in br-int?
>>>
>>>
>>>> +
>>>> +ADD_NAMESPACES(client)
>>>> +ADD_VETH(client, client, br-int, "172.16.1.3/24",
>>>> "f0:00:00:01:02:03", \
>>>> +         "172.16.1.1")
>>>> +check ovn-nbctl lsp-add internal client \
>>>> +-- lsp-set-addresses client "f0:00:00:01:02:03 172.16.1.3"
>>>> +NS_EXEC([client], [ip l set dev client mtu 900])
>>>> +NS_EXEC([client], [ip addr add 127.0.0.1/24 dev lo])
>>>>
>>>
>>> Why do we need to set an address for loopback?
>>>
>>>
>>>> +NS_EXEC([client], [ip link set up dev lo])
>>>> +NS_EXEC([client], [ip a])
>>>> +
>>>> +ADD_NAMESPACES(server)
>>>> +ADD_VETH(server, server, br-int, "172.16.1.2/24",
>>>> "f0:00:0f:01:02:03", \
>>>> +         "172.16.1.1")
>>>> +check ovn-nbctl lsp-add internal server \
>>>> +-- lsp-set-addresses server "f0:00:0f:01:02:03 172.16.1.2"
>>>> +NS_EXEC([server], [ip addr add 127.0.0.1/24 dev lo])
>>>> +NS_EXEC([server], [ip link set up dev lo])
>>>> +NS_EXEC([server], [ip a])
>>>> +NS_CHECK_EXEC([server], [ip route add 192.168.1.0/24 via 172.16.1.1
>>>> dev server])
>>>> +
>>>> +check ovn-nbctl set logical_router lr options:chassis=hv1
>>>> +
>>>> +dump_ovs_info() {
>>>> +    echo ====== ovs_info $1 ======
>>>> +    ovs-ofctl show br-int
>>>> +    echo && echo
>>>> +    echo =======ovn-nbctl show========
>>>> +    ovn-nbctl show
>>>> +    echo && echo
>>>> +    echo ========ACL===========
>>>> +    ovn-nbctl list ACL
>>>> +    echo ========Logical_Flow===========
>>>> +    ovn-sbctl list Logical_Flow
>>>> +    echo && echo
>>>> +    echo ========lflow-list===========
>>>> +    ovn-sbctl lflow-list
>>>> +    echo && echo
>>>> +    echo ============br-int===========
>>>> +    ovs-ofctl dump-flows -O OpenFlow15 br-int
>>>> +    echo && echo
>>>> +    echo ============br-int===========
>>>> +    ovs-ofctl -O OpenFlow15 dump-flows br-int | grep -E
>>>> "ct_tp|ct_state|ct_nw_proto" | head -10
>>>> +    echo && echo
>>>> +    echo ============br-ext===========
>>>> +    ovs-ofctl dump-flows br-ext
>>>> +    echo && echo
>>>> +
>>>> +}
>>>> +
>>>> +dump_data_plane_flows() {
>>>> +    echo ====== dataplane_flows $1 ======
>>>> +    echo ============dp flow-table==============
>>>> +    ovs-appctl dpctl/dump-flows
>>>> +    echo =======================================
>>>> +    echo ============dp flow-table -m===========
>>>> +    ovs-appctl dpctl/dump-flows -m
>>>> +    echo =======================================
>>>> +
>>>> +    ovs-appctl dpctl/dump-flows -m > flows-m.txt
>>>> +    pmd=$(cat flows-m.txt | awk -F ': ' '/flow-dump from
>>>> pmd/{print$2}')
>>>> +    for ufid in $(awk -F ',' '/ufid:/{print$1}' flows-m.txt); do
>>>> +        grep ^$ufid flows-m.txt
>>>> +        echo ========= ofproto/detrace $1 $ufid =============
>>>> +        ovs-appctl ofproto/detrace $ufid pmd=$pmd
>>>> +        echo =======================================
>>>> +    done
>>>> +
>>>> +    echo ============conntrack-table============
>>>> +    ovs-appctl dpctl/dump-conntrack
>>>> +    echo =======================================
>>>> +
>>>> +}
>>>>
>>>
>>> I think we can remove this dumping; if the test fails, we still have the DBs.
>>>
>>>
>>>> +
>>>> +test_tcp_traffic() {
>>>> +    local src_port=${1:-8080}
>>>> +    local dst_port=${2:-8080}
>>>> +    local payload_bytes=${3:-10000}
>>>>
>>>
>>> The default for payload bytes is always used; we can just hardcode it.
>>>
>>>
>>>> +
>>>> +    check ovn-nbctl --wait=hv sync
>>>> +    check ovs-appctl dpctl/flush-conntrack
>>>> +
>>>> +    # Start TCP server in background
>>>> +    NETNS_DAEMONIZE([server], [tcp_simple.py --mode server --bind-ip
>>>> 172.16.1.2 -p $dst_port], [server.pid])
>>>> +
>>>> +    # Give server time to start
>>>> +    sleep 1
>>>> +
>>>> +    # Collect only inbound TCP packets on client and server sides
>>>> +    NETNS_START_TCPDUMP([client], [-U -i client -Q in -nn -e -q tcp],
>>>> [tcpdump-tcp-client])
>>>> +    NETNS_START_TCPDUMP([server], [-U -i server -Q in -nn -e -q tcp],
>>>> [tcpdump-tcp-server])
>>>> +
>>>> +    # Run TCP client test and capture output
>>>> +    NS_EXEC([client], [tcp_simple.py --mode client -s 172.16.1.3 -d
>>>> 172.16.1.2 -p $dst_port -B $payload_bytes -n 5 -I 0.2 >
>>>> tcp_client_output.log 2>&1])
>>>> +
>>>> +    sleep 1
>>>> +    dump_data_plane_flows [tcp]
>>>> +
>>>> +    # Wait for client to complete and check success
>>>> +    OVS_WAIT_UNTIL([test -f tcp_client_output.log])
>>>> +    OVS_WAIT_UNTIL([grep -q "Client summary:" tcp_client_output.log])
>>>> +
>>>> +    # Verify client reported success
>>>> +    if ! grep -q "success=0" tcp_client_output.log; then
>>>> +        echo "TCP client test failed - checking output:"
>>>> +        cat tcp_client_output.log
>>>> +        AT_FAIL_IF([true])
>>>> +    fi
>>>> +
>>>> +    # Clean up
>>>> +    kill $(cat tcpdump-tcp-client.pid) $(cat tcpdump-tcp-server.pid)
>>>> $(cat server.pid) 2>/dev/null || true
>>>> +}
>>>> +
>>>> +test_fragmented_udp_traffic() {
>>>> +    local src_port=${1:-5353}
>>>> +    local dst_port=${2:-4242}
>>>> +
>>>> +    check ovn-nbctl --wait=hv sync
>>>> +    check ovs-appctl dpctl/flush-conntrack
>>>> +
>>>> +    NETNS_DAEMONIZE([server], [nc -l -u 172.16.1.2 $dst_port >
>>>> /dev/null], [server.pid])
>>>> +    NETNS_DAEMONIZE([client], [nc -l -u 172.16.1.3 $src_port >
>>>> /dev/null], [client.pid])
>>>> +
>>>> +    # Collect only inbound UDP packets on client and server sides
>>>> +    NETNS_START_TCPDUMP([client], \
>>>> +      [-U -i client -Q in -nn -e -q udp], [tcpdump-client])
>>>> +    NETNS_START_TCPDUMP([server], \
>>>> +      [-U -i server -Q in -nn -e -q udp], [tcpdump-server])
>>>> +
>>>> +    NS_EXEC([client], [udp_client.py -s 172.16.1.3 -d 172.16.1.2 -S
>>>> $src_port -D $dst_port -M 900 -B 1500 -i client -n 4 -I 0.5])
>>>> +    NS_EXEC([server], [udp_client.py -s 172.16.1.2 -d 172.16.1.3 -S
>>>> $dst_port -D $src_port -M 900 -B 1500 -i server -n 4 -I 0.5])
>>>> +
>>>> +    sleep 1
>>>> +    dump_data_plane_flows [udp]
>>>> +
>>>> +    OVS_WAIT_UNTIL([test "$(cat tcpdump-server.tcpdump | wc -l)" =
>>>> "8"])
>>>> +    OVS_WAIT_UNTIL([test "$(cat tcpdump-client.tcpdump | wc -l)" =
>>>> "8"])
>>>> +
>>>> +    kill $(cat tcpdump-client.pid) $(cat tcpdump-server.pid) $(cat
>>>> server.pid) $(cat client.pid)
>>>> +}
>>>> +
>>>> +ovn-appctl -t ovn-northd vlog/set debug
>>>> +ovn-appctl -t ovn-controller vlog/set debug
>>>> +ovn-appctl vlog/set file:dbg
>>>> +ovs-appctl vlog/set dpif:dbg ofproto:dbg conntrack:dbg
>>>
>>> +
>>>> +check ovn-nbctl set NB_Global . options:acl_udp_ct_translation=true
>>>> +
>>>> +check ovn-nbctl --wait=hv acl-del internal
>>>>
>>>
>>> We don't have any ACLs set on the internal switch; let's drop this.
>>>
>>>
>>>> +
>>>> +# Create port group with both client and server to trigger conjunction
>>>> flows
>>>> +client_uuid=$(ovn-nbctl --bare --columns=_uuid find
>>>> logical_switch_port name=client)
>>>> +server_uuid=$(ovn-nbctl --bare --columns=_uuid find
>>>> logical_switch_port name=server)
>>>> +check ovn-nbctl pg-add internal_vms $client_uuid $server_uuid
>>>> +
>>>> +# dump_ovs_info
>>>>
>>>
>>> nit: Leftover
>>>
>>>
>>>> +
>>>> +# Use port group in ACL rules to trigger conjunction flow generation
>>>> +
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms from-lport 1002 "inport
>>>> == @internal_vms && ip4 && ip4.dst == 0.0.0.0/0 && udp" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms from-lport 1002 "inport
>>>> == @internal_vms && ip4" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms from-lport 1002 "inport
>>>> == @internal_vms && ip6" allow-related
>>>> +
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst ==
>>>> 22" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst ==
>>>> 123" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst ==
>>>> 1666" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst ==
>>>> 9090" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst ==
>>>> 40004" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst ==
>>>> 51204" allow-related
>>>> +
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst >=
>>>> 3002 && tcp.dst <=13002" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst >=
>>>> 5002 && udp.dst <=10010" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.src >=
>>>> 3002 && tcp.src <=13002" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.src >=
>>>> 5002 && udp.src <=10010" allow-related
>>>> +
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst ==
>>>> 123" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst ==
>>>> 5060" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst ==
>>>> 40004" allow-related
>>>> +
>>>> +
>>>> +# Add the drop rule using port group
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && icmp4" allow-related
>>>> +check ovn-nbctl --wait=hv acl-add internal_vms to-lport 1002 "outport
>>>> == @internal_vms && arp" allow-related
>>>> +
>>>> +check ovn-nbctl --wait=hv --log --severity=info acl-add internal_vms
>>>> to-lport 100 "outport == @internal_vms" drop
>>>>
>>>
>>> nit: Only one --wait=hv is needed, on the last command.
>>>
>>>
>>>> +
>>>> +AS_BOX([Testing ACL: test_fragmented_udp_traffic])
>>>> +dump_ovs_info test_fragmented_udp_traffic
>>>> +test_fragmented_udp_traffic 5060 5060
>>>> +
>>>> +AS_BOX([Testing ACL: test_tcp_traffic])
>>>> +dump_ovs_info test_tcp_traffic
>>>> +test_tcp_traffic 9090 9090
>>>> +
>>>> +AS_BOX([Testing ACL: test_fragmented_udp_traffic_inside_range])
>>>> +dump_ovs_info test_fragmented_udp_traffic_inside_range
>>>> +test_fragmented_udp_traffic 5005 5005
>>>> +
>>>> +AS_BOX([Testing ACL: test_tcp_traffic_range])
>>>> +dump_ovs_info test_tcp_traffic_range
>>>> +test_tcp_traffic 3003 3003
>>>> +
>>>> +# Reset the option back to default
>>>> +as ovn-sb
>>>> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
>>>> +
>>>> +as ovn-nb
>>>> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
>>>> +
>>>> +as northd
>>>> +OVS_APP_EXIT_AND_WAIT([ovn-northd])
>>>> +
>>>> +as
>>>> +OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
>>>> +/connection dropped.*/d
>>>> +/WARN|netdev@ovs-netdev: execute.*/d
>>>> +/dpif|WARN|system@ovs-system: execute.*/d
>>>> +"])
>>>> +AT_CLEANUP
>>>> +])
>>>> +
>>>> diff --git a/tests/tcp_simple.py b/tests/tcp_simple.py
>>>> new file mode 100755
>>>> index 000000000..c5bbed08f
>>>> --- /dev/null
>>>> +++ b/tests/tcp_simple.py
>>>> @@ -0,0 +1,338 @@
>>>> +#!/usr/bin/env python3
>>>> +"""
>>>> +Simple TCP client/server for testing network connectivity and data
>>>> integrity.
>>>> +
>>>> +This module provides TCP echo server and client functionality for
>>>> testing
>>>> +network connections, data transmission, and MD5 checksum verification.
>>>> +"""
>>>> +import socket
>>>> +import time
>>>> +import argparse
>>>> +import sys
>>>> +import hashlib
>>>> +import random
>>>> +
>>>> +DEFAULT_SRC_IP = "0.0.0.0"
>>>> +DEFAULT_DST_IP = "0.0.0.0"
>>>> +DEFAULT_PORT = 8080
>>>> +
>>>> +
>>>> +def calculate_md5(data):
>>>> +    """Calculate MD5 checksum of data."""
>>>> +    if isinstance(data, str):
>>>> +        data = data.encode('utf-8')
>>>> +    return hashlib.md5(data).hexdigest()
>>>> +
>>>> +
>>>> +def generate_random_bytes(size):
>>>> +    """Generate random bytes that are valid UTF-8."""
>>>> +    # Generate random printable ASCII characters (32-126) which are
>>>> valid UTF-8
>>>> +    return bytes([random.randint(32, 126) for _ in range(size)])
>>>> +
>>>> +
>>>> +class TCPServer:
>>>> +    """Simple TCP echo server using standard sockets."""
>>>> +
>>>> +    def __init__(self, bind_ip, port):
>>>> +        self.bind_ip = bind_ip
>>>> +        self.port = port
>>>> +        self.running = False
>>>> +        self.server_socket = None
>>>> +
>>>> +    def start(self):
>>>> +        """Start the TCP server."""
>>>> +        exit_code = 0
>>>> +        self.running = True
>>>> +        self.server_socket = socket.socket(socket.AF_INET,
>>>> socket.SOCK_STREAM)
>>>> +        self.server_socket.setsockopt(socket.SOL_SOCKET,
>>>> +                                      socket.SO_REUSEADDR, 1)
>>>> +
>>>> +        try:
>>>> +            self.server_socket.bind((self.bind_ip, self.port))
>>>> +            self.server_socket.listen(5)
>>>> +            print(f"TCP Server listening on
>>>> {self.bind_ip}:{self.port}")
>>>> +
>>>> +            while self.running:
>>>> +                try:
>>>> +                    client_socket, client_addr =
>>>> self.server_socket.accept()
>>>> +                    print(f"Connection from
>>>> {client_addr[0]}:{client_addr[1]}")
>>>> +
>>>> +                    # Handle client directly (single-threaded)
>>>> +                    self._handle_client(client_socket, client_addr)
>>>> +
>>>> +                except socket.error as e:
>>>> +                    if self.running:
>>>> +                        print(f"Socket error: {e}")
>>>> +
>>>> +        except OSError as e:
>>>> +            if e.errno == 99:  # Cannot assign requested address
>>>> +                print(f"Error: Cannot bind to
>>>> {self.bind_ip}:{self.port}")
>>>> +            elif e.errno == 98:  # Address already in use
>>>> +                print(f"Error: Port {self.port} is already in use.")
>>>> +            else:
>>>> +                print(f"Error binding to {self.bind_ip}:{self.port}:
>>>> {e}")
>>>> +        except KeyboardInterrupt:
>>>> +            print("\nServer shutting down...")
>>>> +            exit_code = 0
>>>> +        except Exception as e:
>>>> +            print(f"Unexpected server error: {e}")
>>>> +            exit_code = 1
>>>> +        finally:
>>>> +            self.running = False
>>>> +            if self.server_socket:
>>>> +                self.server_socket.close()
>>>> +        sys.exit(exit_code)
>>>> +
>>>> +    def _handle_client(self, client_socket, client_addr):
>>>> +        """Handle individual client connection."""
>>>> +        total_bytes_received = 0
>>>> +        try:
>>>> +            while self.running:
>>>> +                data = client_socket.recv(4096)
>>>> +                if not data:
>>>> +                    break
>>>> +                client_socket.send(data)
>>>> +                total_bytes_received += len(data)
>>>> +
>>>> +            print(f"Total bytes received: {total_bytes_received}")
>>>> +        except socket.error as e:
>>>> +            print(f"Client {client_addr[0]}:{client_addr[1]} error:
>>>> {e}")
>>>> +        finally:
>>>> +            client_socket.close()
>>>> +            print(f"Connection closed with
>>>> {client_addr[0]}:{client_addr[1]}")
>>>> +
>>>> +
>>>> +class TCPClient:
>>>> +    """Simple TCP client using standard sockets."""
>>>> +
>>>> +    def __init__(self, src_ip, dst_ip, dst_port):
>>>> +        self.src_ip = src_ip
>>>> +        self.dst_ip = dst_ip
>>>> +        self.dst_port = dst_port
>>>> +
>>>> +    def connect_and_send(self, data_len, iterations=1, interval=0.1,
>>>> +                         unique_data=False):
>>>> +        """Connect to server and send data."""
>>>> +        print(f"TCP Client connecting to
>>>> {self.dst_ip}:{self.dst_port}")
>>>> +
>>>> +        success_count = 0
>>>> +        correct_responses = 0
>>>> +
>>>> +        # Generate data once if not using unique data per iteration
>>>> +        shared_data_bytes = None
>>>> +        shared_md5 = None
>>>> +        if not unique_data:
>>>> +            shared_data_bytes = generate_random_bytes(data_len)
>>>> +            shared_md5 = calculate_md5(shared_data_bytes)
>>>> +            print(f"Generated shared buffer: {len(shared_data_bytes)}
>>>> bytes, "
>>>> +                  f"MD5: {shared_md5}")
>>>> +
>>>> +        return_code = 0
>>>> +        for i in range(iterations):
>>>> +            iteration_result = self._process_iteration(
>>>> +                i, data_len, unique_data, shared_data_bytes,
>>>> shared_md5)
>>>> +            if iteration_result['success']:
>>>> +                success_count += 1
>>>> +                if iteration_result['checksum_match']:
>>>> +                    correct_responses += 1
>>>> +            else:
>>>> +                return_code = 1
>>>> +
>>>> +            if i < iterations - 1:
>>>> +                time.sleep(interval)
>>>> +
>>>> +        print(f"Client completed: {success_count}/{iterations} "
>>>> +              f"successful connections")
>>>> +        print(f"MD5 checksum verification: "
>>>> +              f"{correct_responses}/{success_count} correct")
>>>> +
>>>> +        return_code = 0 if (correct_responses ==
>>>> +                           success_count and success_count > 0) else 1
>>>> +        return return_code
>>>> +
>>>> +    def _process_iteration(self, iteration_idx, data_len, unique_data,
>>>> +                          shared_data_bytes, shared_md5):
>>>> +        """Process a single iteration of the client test."""
>>>> +        try:
>>>> +            # Create socket and connect
>>>> +            client_socket = socket.socket(socket.AF_INET,
>>>> +                                          socket.SOCK_STREAM)
>>>> +
>>>> +            # Bind to specific source IP if provided
>>>> +            if self.src_ip != "0.0.0.0":
>>>> +                client_socket.bind((self.src_ip, 0))
>>>> +
>>>> +            client_socket.connect((self.dst_ip, self.dst_port))
>>>> +            print(f"Iteration {iteration_idx + 1}: Connected to
>>>> server")
>>>> +
>>>> +            # Prepare data and calculate MD5 checksum
>>>> +            if unique_data:
>>>> +                data_bytes = generate_random_bytes(data_len)
>>>> +                original_md5 = calculate_md5(data_bytes)
>>>> +                print(f"Iteration {iteration_idx + 1}: Generated
>>>> unique data, "
>>>> +                      f"MD5: {original_md5}")
>>>> +            else:
>>>> +                data_bytes = shared_data_bytes
>>>> +                original_md5 = shared_md5
>>>> +                print(f"Iteration {iteration_idx + 1}: Using shared
>>>> buffer, "
>>>> +                      f"MD5: {original_md5}")
>>>> +
>>>> +            client_socket.send(data_bytes)
>>>> +            print(f"Iteration {iteration_idx + 1}: Sent
>>>> {len(data_bytes)} "
>>>> +                  f"bytes")
>>>> +
>>>> +            # Receive echo response
>>>> +            response = self._receive_response(client_socket,
>>>> len(data_bytes))
>>>> +            print(f"Iteration {iteration_idx + 1}: Received
>>>> {len(response)} "
>>>> +                  f"bytes")
>>>> +
>>>> +            # Calculate MD5 of received data
>>>> +            received_md5 = calculate_md5(response)
>>>> +            print(f"Iteration {iteration_idx + 1}: Received MD5: "
>>>> +                  f"{received_md5}")
>>>> +
>>>> +            # Verify checksum
>>>> +            checksum_match = original_md5 == received_md5
>>>> +
>>>> +            if checksum_match:
>>>> +                print(f"Iteration {iteration_idx + 1}: ✓ MD5 checksum "
>>>> +                      f"verified correctly")
>>>> +            else:
>>>> +                print(f"Iteration {iteration_idx + 1}: ✗ MD5 checksum "
>>>> +                      f"mismatch!")
>>>> +
>>>> +            client_socket.close()
>>>> +            return {'success': True, 'checksum_match': checksum_match}
>>>> +
>>>> +        except ConnectionRefusedError:
>>>> +            print(f"Iteration {iteration_idx + 1}: Connection refused
>>>> to "
>>>> +                  f"{self.dst_ip}:{self.dst_port}")
>>>> +            return {'success': False, 'checksum_match': False}
>>>> +        except socket.timeout:
>>>> +            print(f"Iteration {iteration_idx + 1}: Connection timeout
>>>> to "
>>>> +                  f"{self.dst_ip}:{self.dst_port}")
>>>> +            return {'success': False, 'checksum_match': False}
>>>> +        except OSError as os_error:
>>>> +            self._handle_os_error(iteration_idx, os_error)
>>>> +            return {'success': False, 'checksum_match': False}
>>>> +        except socket.error as sock_error:
>>>> +            print(f"Iteration {iteration_idx + 1}: Socket error:
>>>> {sock_error}")
>>>> +            return {'success': False, 'checksum_match': False}
>>>> +        except Exception as general_error:
>>>> +            print(f"Iteration {iteration_idx + 1}: Unexpected error: "
>>>> +                  f"{general_error}")
>>>> +            return {'success': False, 'checksum_match': False}
>>>> +
>>>> +    def _receive_response(self, client_socket, bytes_to_receive):
>>>> +        """Receive response data from server."""
>>>> +        response = b""
>>>> +        while len(response) < bytes_to_receive:
>>>> +            chunk = client_socket.recv(
>>>> +                min(4096, bytes_to_receive - len(response)))
>>>> +            if not chunk:
>>>> +                break
>>>> +            response += chunk
>>>> +        return response
>>>> +
>>>> +    def _handle_os_error(self, iteration_idx, os_error):
>>>> +        """Handle OS-specific errors."""
>>>> +        if os_error.errno == 99:  # Cannot assign requested address
>>>> +            print(f"Iteration {iteration_idx + 1}: Cannot bind to
>>>> source IP "
>>>> +                  f"{self.src_ip}")
>>>> +        elif os_error.errno == 101:  # Network is unreachable
>>>> +            print(f"Iteration {iteration_idx + 1}: Network unreachable
>>>> to "
>>>> +                  f"{self.dst_ip}:{self.dst_port}")
>>>> +        else:
>>>> +            print(f"Iteration {iteration_idx + 1}: Network error:
>>>> {os_error}")
>>>> +
>>>> +
>>>> +def main():
>>>> +    """Main function to parse arguments and run TCP client or
>>>> server."""
>>>> +    parser = argparse.ArgumentParser(
>>>> +        description="Simple TCP client/server for testing",
>>>> +        formatter_class=argparse.RawDescriptionHelpFormatter,
>>>> +        epilog="""
>>>> +COMMON OPTIONS:
>>>> +  --mode {client,server}  Run in client or server mode (required)
>>>> +  -p, --port PORT         TCP port (default: 8080)
>>>> +
>>>> +CLIENT MODE OPTIONS:
>>>> +  -s, --src-ip IP         Source IPv4 address (default: 0.0.0.0)
>>>> +  -d, --dst-ip IP         Destination IPv4 address (default: 0.0.0.0)
>>>> +  -n, --iterations N      Number of connections to make (default: 1)
>>>> +  -I, --interval SECS     Seconds between connections (default: 0.1)
>>>> +  -B, --payload-bytes N   Total TCP payload bytes (minimum: 500)
>>>> +  --unique-data           Generate unique random data for each
>>>> iteration
>>>> +
>>>> +SERVER MODE OPTIONS:
>>>> +  --bind-ip IP            Server bind IP address (default: 172.16.1.2)
>>>> +
>>>> +""")
>>>> +
>>>> +    # Mode selection (required)
>>>> +    parser.add_argument("--mode", choices=['client', 'server'],
>>>> required=True,
>>>> +                       help=argparse.SUPPRESS)
>>>> +
>>>> +    # Common arguments
>>>> +    parser.add_argument("-p", "--port", type=int, default=DEFAULT_PORT,
>>>> +                       help=argparse.SUPPRESS)
>>>> +
>>>> +    # Client mode arguments
>>>> +    parser.add_argument("-B", "--payload-bytes", type=int,
>>>> default=None,
>>>> +                       help=argparse.SUPPRESS)
>>>> +    parser.add_argument("-s", "--src-ip", default=DEFAULT_SRC_IP,
>>>> +                       help=argparse.SUPPRESS)
>>>> +    parser.add_argument("-d", "--dst-ip", default=DEFAULT_DST_IP,
>>>> +                       help=argparse.SUPPRESS)
>>>> +    parser.add_argument("-I", "--interval", type=float, default=0.1,
>>>> +                       help=argparse.SUPPRESS)
>>>> +    parser.add_argument("-n", "--iterations", type=int, default=1,
>>>> +                       help=argparse.SUPPRESS)
>>>> +    parser.add_argument("--unique-data", action="store_true",
>>>> +                       help=argparse.SUPPRESS)
>>>> +
>>>> +    # Server mode arguments
>>>> +    parser.add_argument("--bind-ip", default="0.0.0.0",
>>>> +                       help=argparse.SUPPRESS)
>>>> +
>>>> +    args = parser.parse_args()
>>>> +
>>>> +    # Validate arguments
>>>> +    if args.port < 1 or args.port > 65535:
>>>> +        print(f"Error: Port {args.port} is out of valid range
>>>> (1-65535)")
>>>> +        sys.exit(1)
>>>> +
>>>> +    if args.mode == 'client':
>>>> +        if args.iterations < 1:
>>>> +            print(f"Error: Iterations must be at least 1, "
>>>> +                  f"got {args.iterations}")
>>>> +            sys.exit(1)
>>>> +        if args.interval < 0:
>>>> +            print(f"Error: Interval cannot be negative, "
>>>> +                  f"got {args.interval}")
>>>> +            sys.exit(1)
>>>> +        if args.payload_bytes is not None and args.payload_bytes < 500:
>>>> +            print(f"Error: Payload bytes must be at least 500, "
>>>> +                  f"got {args.payload_bytes}")
>>>> +            sys.exit(1)
>>>> +
>>>> +    if args.mode == 'server':
>>>> +        # Run server
>>>> +        server = TCPServer(args.bind_ip, args.port)
>>>> +        server.start()
>>>> +    elif args.mode == 'client':
>>>> +        # Run client
>>>> +        client = TCPClient(args.src_ip, args.dst_ip, args.port)
>>>> +        success = client.connect_and_send(
>>>> +            max(0, int(args.payload_bytes)), args.iterations,
>>>> +            args.interval, args.unique_data)
>>>> +
>>>> +        # Summary
>>>> +        print(f"Client summary: payload_bytes={args.payload_bytes} "
>>>> +              f"iterations={args.iterations} success={success}")
>>>> +
>>>> +        sys.exit(success)
>>>> +
>>>> +
>>>> +if __name__ == "__main__":
>>>> +    main()
>>>> diff --git a/tests/udp_client.py b/tests/udp_client.py
>>>> new file mode 100755
>>>> index 000000000..cc67275f8
>>>> --- /dev/null
>>>> +++ b/tests/udp_client.py
>>>> @@ -0,0 +1,113 @@
>>>> +#!/usr/bin/env python3
>>>> +"""
>>>> +Scapy UDP client with MTU/payload control for testing fragmented UDP
>>>> traffic.
>>>> +
>>>> +This module provides functionality to send fragmented UDP packets
>>>> using Scapy,
>>>> +allowing control over MTU, payload size, and fragmentation behavior.
>>>> +"""
>>>> +import time
>>>> +import argparse
>>>> +import sys
>>>> +from scapy.all import IP, UDP, Raw, send, fragment
>>>> +
>>>> +# Defaults for same logical switch topology
>>>> +DEFAULT_SRC_IP = "172.16.1.3"
>>>> +DEFAULT_DST_IP = "172.16.1.2"
>>>> +
>>>> +
>>>> +def read_iface_mtu(ifname: str):
>>>> +    """Return MTU of interface or None if not available."""
>>>> +    try:
>>>> +        path = f"/sys/class/net/{ifname}/mtu"
>>>> +        with open(path, "r", encoding="utf-8") as f:
>>>> +            return int(f.read().strip())
>>>> +    except Exception:
>>>> +        return None
>>>> +
>>>> +
>>>> +def main():
>>>> +    """Main function to parse arguments and send fragmented UDP
>>>> packets."""
>>>> +    parser = argparse.ArgumentParser(
>>>> +        description="Scapy UDP client with MTU/payload control")
>>>> +    parser.add_argument("-M", "--mtu", type=int, default=None,
>>>> +        help="Target MTU; payload ~ mtu-28 (IPv4+UDP).")
>>>> +    parser.add_argument("-B", "--payload-bytes", type=int,
>>>> default=None,
>>>> +        help="Total UDP payload bytes (override default).")
>>>> +    parser.add_argument("-S", "--sport", type=int, default=4242,
>>>> +        help="UDP source port")
>>>> +    parser.add_argument("-D", "--dport", type=int, default=4242,
>>>> +        help="UDP destination port")
>>>> +    parser.add_argument("-s", "--src-ip", default=DEFAULT_SRC_IP,
>>>> +        help="Source IPv4 address")
>>>> +    parser.add_argument("-d", "--dst-ip", default=DEFAULT_DST_IP,
>>>> +        help="Destination IPv4 address")
>>>> +    parser.add_argument("-i", "--iface", default="client",
>>>> +        help="Egress interface (default: 'client').")
>>>> +    parser.add_argument("-I", "--interval", type=float, default=0.1,
>>>> +        help="Seconds between sends.")
>>>> +    parser.add_argument("-n", "--iterations", type=int, default=1,
>>>> +        help="Number of datagrams to send.")
>>>> +    args = parser.parse_args()
>>>> +
>>>> +    # Derive fragment size: for link MTU M, fragsize should typically
>>>> be M-20
>>>> +    # (IP header)
>>>> +    if args.mtu is not None:
>>>> +        fragsize = max(68, int(args.mtu) - 20)
>>>> +    else:
>>>> +        fragsize = 1480  # default for 1500 MTU
>>>> +
>>>> +    # Build payload; size can be user-controlled or synthesized to
>>>> ensure
>>>> +    # fragmentation
>>>> +    if args.payload_bytes is not None:
>>>> +        payload_len = max(0, int(args.payload_bytes))
>>>> +        payload = b"x" * payload_len
>>>> +    elif args.mtu is not None:
>>>> +        # Ensure payload exceeds fragsize-8 (UDP header) so we actually
>>>> +        # fragment.
>>>> +        base_len = max(0, int(args.mtu) - 28)
>>>> +        min_len_to_fragment = max(0, fragsize - 8 + 1)
>>>> +        payload_len = (base_len if base_len > min_len_to_fragment else
>>>> +                       fragsize * 2)
>>>> +        payload = b"x" * payload_len
>>>> +    else:
>>>> +        # No explicit size; synthesize a payload big enough to
>>>> fragment at
>>>> +        # default fragsize
>>>> +        payload_len = fragsize * 2
>>>> +        payload = b"x" * payload_len
>>>> +
>>>> +    # Construct full IP/UDP packet and fragment it explicitly
>>>> +    ip_layer = IP(src=args.src_ip, dst=args.dst_ip)
>>>> +    udp_layer = UDP(sport=args.sport, dport=args.dport)
>>>> +    full_packet = ip_layer / udp_layer / Raw(load=payload)
>>>> +    frags = fragment(full_packet, fragsize=fragsize)
>>>> +
>>>> +    total_fragments_per_send = len(frags)
>>>> +    for _ in range(max(0, args.iterations)):
>>>> +        try:
>>>> +            send(frags, iface=args.iface, return_packets=True,
>>>> verbose=False)
>>>> +        except OSError as e:
>>>> +            # Errno 90: Message too long (likely iface MTU < chosen
>>>> --mtu)
>>>> +            if getattr(e, "errno", None) == 90:
>>>> +                iface_mtu = read_iface_mtu(args.iface)
>>>> +                mtu_note = (f"iface_mtu={iface_mtu}" if iface_mtu is
>>>> not None
>>>> +                            else "iface_mtu=unknown")
>>>> +                print("ERROR: packet exceeds interface MTU. "
>>>> +                    f"iface={args.iface} {mtu_note} chosen_mtu="
>>>> +                    f"{args.mtu if args.mtu is not None else 1500} "
>>>> +                    f"fragsize={fragsize}. Set --mtu to the interface
>>>> MTU.",
>>>> +                    file=sys.stderr)
>>>> +                sys.exit(2)
>>>> +            raise
>>>> +        time.sleep(args.interval)
>>>> +
>>>> +    # Summary
>>>> +    mtu_print = args.mtu if args.mtu is not None else 1500
>>>> +    total_frags = total_fragments_per_send * max(0, args.iterations)
>>>> +    print(f"payload_bytes={payload_len} mtu={mtu_print}
>>>> fragsize={fragsize} "
>>>> +        f"fragments_per_datagram={total_fragments_per_send} "
>>>> +        f"total_fragments_sent={total_frags}")
>>>> +
>>>> +
>>>> +if __name__ == "__main__":
>>>> +    main()
>>>> +    sys.exit(0)
>>>> --
>>>> 2.43.0
>>>>
>>>>
>>> I adjusted the test to not use any external Python scripts; see
>>> below:
>>> OVN_FOR_EACH_NORTHD([
>>> AT_SETUP([ACL dual direction with fragmentation])
>>> AT_KEYWORDS([ovnacl])
>>> AT_SKIP_IF([test $HAVE_TCPDUMP = no])
>>>
>>> CHECK_CONNTRACK()
>>>
>>> ovn_start
>>> OVS_TRAFFIC_VSWITCHD_START()
>>> ADD_BR([br-int])
>>>
>>> # Set external-ids in br-int needed for ovn-controller
>>> check ovs-vsctl \
>>>         -- set Open_vSwitch . external-ids:system-id=hv1 \
>>>         -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
>>>         -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
>>>         -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
>>>         -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
>>>
>>> # Start ovn-controller
>>> start_daemon ovn-controller
>>>
>>> # Set the minimal fragment size for userspace DP.
>>> # Note that this call will fail for system DP as this setting is not
>>> # supported there.
>>> ovs-appctl dpctl/ipf-set-min-frag v4 500
>>>
>>> check ovn-nbctl ls-add internal
>>> check ovn-nbctl lsp-add internal client \
>>>     -- lsp-set-addresses client "f0:00:00:01:02:03 172.16.1.3"
>>> check ovn-nbctl lsp-add internal server \
>>>     -- lsp-set-addresses server "f0:00:0f:01:02:03 172.16.1.2"
>>>
>>> ADD_NAMESPACES(client)
>>> ADD_VETH(client, client, br-int, "172.16.1.3/24", "f0:00:00:01:02:03", \
>>>          "172.16.1.1")
>>> NS_EXEC([client], [ip l set dev client mtu 900])
>>>
>>> ADD_NAMESPACES(server)
>>> ADD_VETH(server, server, br-int, "172.16.1.2/24", "f0:00:0f:01:02:03", \
>>>          "172.16.1.1")
>>> NS_EXEC([server], [ip l set dev server mtu 900])
>>>
>>> printf %8000s > datafile
>>> printf %32000s > test_srv.expected
>>> printf %16000s > test_client.expected
>>>
>>> NETNS_START_TCPDUMP([client], [-U -i client -vnne ip], [tcpdump-client])
>>> NETNS_START_TCPDUMP([server], [-U -i server -vnne ip], [tcpdump-server])
>>>
>>> test_tcp_traffic() {
>>>     local src_port=${1:-8080}
>>>     local dst_port=${2:-8080}
>>>
>>>     check ovn-nbctl --wait=hv sync
>>>     check ovs-appctl dpctl/flush-conntrack
>>>
>>>     # Start TCP server in background
>>>     NETNS_DAEMONIZE([server], [nc -e /bin/cat -v -l -k 172.16.1.2 $dst_port -o tcp_server.rcvd], [server.pid])
>>>
>>>     # Send 2 client requests.
>>>     NS_CHECK_EXEC([client], [(cat datafile; sleep 3) | nc 172.16.1.2 $dst_port -p $src_port -o tcp_test_c1.recvd], [0], [ignore], [ignore])
>>>     NS_CHECK_EXEC([client], [(cat datafile; sleep 3) | nc 172.16.1.2 $dst_port -p $src_port -o tcp_test_c2.recvd], [0], [ignore], [ignore])
>>>
>>>     check cmp test_srv.expected tcp_server.rcvd
>>>     check cmp test_client.expected tcp_test_c1.recvd
>>>     check cmp test_client.expected tcp_test_c2.recvd
>>>
>>>     # Clean up
>>>     kill $(cat server.pid) 2>/dev/null || true
>>>     rm tcp_server.rcvd tcp_test_c1.recvd tcp_test_c2.recvd
>>> }
>>>
>>> test_fragmented_udp_traffic() {
>>>     local src_port=${1:-8080}
>>>     local dst_port=${2:-8080}
>>>
>>>     check ovn-nbctl --wait=hv sync
>>>     check ovs-appctl dpctl/flush-conntrack
>>>
>>>     # Start UDP server in background
>>>     NETNS_DAEMONIZE([server], [nc -e /bin/cat -u -v -l 172.16.1.2 $dst_port -o udp_server.rcvd], [server.pid])
>>>
>>>     # Send 2 client requests.
>>>     NS_CHECK_EXEC([client], [(cat datafile; sleep 3) | nc -u 172.16.1.2 $dst_port -p $src_port -o udp_test_c1.recvd], [0], [ignore], [ignore])
>>>     NS_CHECK_EXEC([client], [(cat datafile; sleep 3) | nc -u 172.16.1.2 $dst_port -p $src_port -o udp_test_c2.recvd], [0], [ignore], [ignore])
>>>
>>>     check cmp test_srv.expected udp_server.rcvd
>>>     check cmp test_client.expected udp_test_c1.recvd
>>>     check cmp test_client.expected udp_test_c2.recvd
>>>
>>>     # Clean up
>>>     kill $(cat server.pid) 2>/dev/null || true
>>>     rm udp_server.rcvd udp_test_c1.recvd udp_test_c2.recvd
>>> }
>>>
>>> # Create port group with both client and server to trigger conjunction flows
>>> client_uuid=$(ovn-nbctl --bare --columns=_uuid find logical_switch_port name=client)
>>> server_uuid=$(ovn-nbctl --bare --columns=_uuid find logical_switch_port name=server)
>>> check ovn-nbctl pg-add internal_vms $client_uuid $server_uuid
>>>
>>> # Use port group in ACL rules to trigger conjunction flow generation
>>>
>>> check ovn-nbctl acl-add internal_vms from-lport 1002 "inport == @internal_vms && ip4 && ip4.dst == 0.0.0.0/0 && udp" allow-related
>>> check ovn-nbctl acl-add internal_vms from-lport 1002 "inport == @internal_vms && ip4" allow-related
>>> check ovn-nbctl acl-add internal_vms from-lport 1002 "inport == @internal_vms && ip6" allow-related
>>>
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 22" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 123" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 1666" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 9090" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 40004" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst == 51204" allow-related
>>>
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.dst >= 3002 && tcp.dst <= 13002" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst >= 5002 && udp.dst <= 10010" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && tcp && tcp.src >= 3002 && tcp.src <= 13002" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.src >= 5002 && udp.src <= 10010" allow-related
>>>
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst == 123" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst == 5060" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && udp && udp.dst == 40004" allow-related
>>>
>>>
>>> # Allow ICMP and ARP, then add the catch-all drop rule using the port group
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && ip4 && ip4.src == 0.0.0.0/0 && icmp4" allow-related
>>> check ovn-nbctl acl-add internal_vms to-lport 1002 "outport == @internal_vms && arp" allow-related
>>>
>>> check ovn-nbctl --log --severity=info acl-add internal_vms to-lport 100 "outport == @internal_vms" drop
>>>
>>> check ovn-nbctl --wait=hv set NB_Global . options:acl_udp_ct_translation=true
>>>
>>> AS_BOX([Testing ACL: test_fragmented_udp_traffic])
>>> test_fragmented_udp_traffic 5060 5060
>>>
>>> AS_BOX([Testing ACL: test_tcp_traffic])
>>> test_tcp_traffic 9090 9090
>>>
>>> AS_BOX([Testing ACL: test_fragmented_udp_traffic_inside_range])
>>> test_fragmented_udp_traffic 5005 5005
>>>
>>> AS_BOX([Testing ACL: test_tcp_traffic_range])
>>> test_tcp_traffic 3003 3003
>>>
>>> OVN_CLEANUP_CONTROLLER([hv1])
>>>
>>> OVN_CLEANUP_NORTHD
>>>
>>> as
>>> OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
>>> /connection dropped.*/d
>>> /WARN|netdev@ovs-netdev: execute.*/d
>>> /dpif|WARN|system@ovs-system: execute.*/d
>>> "])
>>> AT_CLEANUP
>>> ])
>>>
>>>
>>> Regards,
>>> Ales
>>>
>>
_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
