On 3/6/25 2:58 AM, Rosemarie O'Riorden wrote:
> When a gateway router has a load balancer configured, the option
> lb_force_snat_ip=routerip can be set so that OVN SNATs load balanced
> packets to the logical router's egress interface's IP address, that is, the
> port chosen as "outport" in the lr_in_ip_routing stage.
> 
> However, this was only designed to work when one network was configured
> on the logical router outport. When multiple networks were configured,
> OVN's behavior was to simply choose the lexicographically first IP address for
> SNAT. This often led to an incorrect address being used for SNAT.
> 
> To fix this, two main components have been added:
>  1. A new flag, flags.network_id. It is 4 bits and stores an index.
>  2. A new stage in the router ingress pipeline, lr_in_network_id.
> 
> Now in the stage lr_in_network_id, OVN generates flows that assign
> flags.network_id with an index, which is chosen by looping through the
> networks on the port, and assigning flags.network_id = i. flags.network_id
> is then
> matched on later in lr_out_snat and the network at that index will be chosen
> for SNAT.
> 
> However, if there are more than 16 networks, flags.network_id will be 0
> for networks 17 and up. Then, for those networks, the first
> lexicographical network will be chosen for SNAT.
> 
> There is also a lower priority flow with the old behavior to make upgrades
> smooth.
> 
> Two tests have been added to verify that:
>  1. The correct network is chosen for SNAT.
>  2. The new and updated flows with flags.network_id are correct.
> 
> And tests that were broken by this new behavior have been updated.
> 
> Reported-at: https://issues.redhat.com/browse/FDP-871
> Reported-at: 
> https://mail.openvswitch.org/pipermail/ovs-dev/2024-October/417717.html
> Signed-off-by: Rosemarie O'Riorden <rosema...@redhat.com>
> ---
> v2:
> - Improve wording of commit message and add missing explanation
> - Add Reported-at tag
> - Correct hex value for MLF_NETWORK_ID in include/ovn/logical-fields.h
> - In northd/northd.c:
>   - Change behavior to set flags.network_id=0 for networks 16 and up
>   - Add flow with old behavior for consistency during upgrades
>   - Fix redundant loop conditions
>   - Remove code I accidentally included in v1
>   - Fix indentation in multiple spots
>   - Remove incorrect changes in build_lrouter_nat_defrag_and_lb()
> - Fix documentation, add new stage
> - Move end-to-end test from ovn-northd.at to ovn.at
> - Add check before all ovn and ovs commands in test
> - Remove unnecessary section included in test
> - Updated all flows in tests that were affected by new changes
> - Added more IP addresses to the test to test behavior with >16 networks
> ---

Hi Rosemarie,

Thanks for this new version!

>  include/ovn/logical-fields.h |   9 +-
>  lib/logical-fields.c         |   3 +
>  northd/northd.c              | 175 +++++++++++++++++++++++++++++------
>  northd/northd.h              |   3 +-
>  northd/ovn-northd.8.xml      |  59 ++++++++++--
>  tests/ovn-northd.at          | 135 ++++++++++++++++++++++++---
>  tests/ovn.at                 |  71 ++++++++++++++
>  7 files changed, 401 insertions(+), 54 deletions(-)
> 
> diff --git a/include/ovn/logical-fields.h b/include/ovn/logical-fields.h
> index 196ac9dd8..31562df4d 100644
> --- a/include/ovn/logical-fields.h
> +++ b/include/ovn/logical-fields.h
> @@ -97,6 +97,8 @@ enum mff_log_flags_bits {
>      MLF_FROM_CTRL_BIT = 19,
>      MLF_UNSNAT_NEW_BIT = 20,
>      MLF_UNSNAT_NOT_TRACKED_BIT = 21,
> +    MLF_NETWORK_ID_START_BIT = 28,
> +    MLF_NETWORK_ID_END_BIT = 31,
>  };
>  
>  /* MFF_LOG_FLAGS_REG flag assignments */
> @@ -159,7 +161,12 @@ enum mff_log_flags {
>      MLF_UNSNAT_NEW = (1 << MLF_UNSNAT_NEW_BIT),
>  
>      /* Indicate that the packet didn't go through unSNAT. */
> -    MLF_UNSNAT_NOT_TRACKED = (1 << MLF_UNSNAT_NOT_TRACKED_BIT)
> +    MLF_UNSNAT_NOT_TRACKED = (1 << MLF_UNSNAT_NOT_TRACKED_BIT),
> +
> +    /* Assign network ID to packet to choose correct network for snat when
> +     * lb_force_snat_ip=routerip. */
> +    MLF_NETWORK_ID = ((1 << (MLF_NETWORK_ID_END_BIT - MLF_NETWORK_ID_START_BIT
> +                       + 1)) - 1),

I think that, when he was reviewing v1, Ilya said this should be
0xf0000000 (the 4 most significant bits set).  This expression, however,
yields the value 15, i.e. 0x0000000f.

If we really want it to be correct we need something like this (kind of
ugly):

MLF_NETWORK_ID = (((uint64_t) 1 << (MLF_NETWORK_ID_END_BIT + 1)) - 1)
                 ^ ((1 << MLF_NETWORK_ID_START_BIT) - 1)

i.e., 0xffffffff ^ 0x0fffffff = 0xf0000000.

>  };

However, 15 is actually the maximum network ID we support, so I'd also
add a definition here, e.g.:

#define OVN_MAX_NETWORK_ID \
    ((1 << (MLF_NETWORK_ID_END_BIT - MLF_NETWORK_ID_START_BIT + 1)) - 1)

And use OVN_MAX_NETWORK_ID instead of the hardcoded 15 in northd.c.
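
FWIW, with that macro the checks in northd.c could then look something like
this (just an untested sketch reusing the names from the patch):

    if (i > OVN_MAX_NETWORK_ID) {
        network_id = 0;
        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
        VLOG_WARN_RL(&rl, "Logical router port %s already has the max of "
                          "%d networks configured, so for network \"%s/%d\" "
                          "the first IP [%s] will be considered as SNAT for "
                          "load balancer.", op->json_key,
                          OVN_MAX_NETWORK_ID + 1,
                          op->lrp_networks.ipv4_addrs[i].addr_s,
                          op->lrp_networks.ipv4_addrs[i].plen,
                          op->lrp_networks.ipv4_addrs[0].addr_s);
    } else {
        network_id = i;
    }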

>  
>  /* OVN logical fields
> diff --git a/lib/logical-fields.c b/lib/logical-fields.c
> index ed287f42b..db2d08ada 100644
> --- a/lib/logical-fields.c
> +++ b/lib/logical-fields.c
> @@ -147,6 +147,9 @@ ovn_init_symtab(struct shash *symtab)
>               MLF_UNSNAT_NOT_TRACKED_BIT);
>      expr_symtab_add_subfield(symtab, "flags.unsnat_not_tracked", NULL,
>                               flags_str);
> +    snprintf(flags_str, sizeof flags_str, "flags[%d..%d]",
> +             MLF_NETWORK_ID_START_BIT, MLF_NETWORK_ID_END_BIT);
> +    expr_symtab_add_subfield(symtab, "flags.network_id", NULL, flags_str);
>  
>      snprintf(flags_str, sizeof flags_str, "flags[%d]", MLF_FROM_CTRL_BIT);
>      expr_symtab_add_subfield(symtab, "flags.from_ctrl", NULL, flags_str);
> diff --git a/northd/northd.c b/northd/northd.c
> index 1d3e132d4..467999607 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -12985,74 +12985,112 @@ build_lrouter_force_snat_flows_op(struct ovn_port 
> *op,
>                                    struct ds *match, struct ds *actions,
>                                    struct lflow_ref *lflow_ref)
>  {
> +    size_t network_id;
>      ovs_assert(op->nbrp);
>      if (!op->peer || !lrnat_rec->lb_force_snat_router_ip) {
>          return;
>      }
>  
> -    if (op->lrp_networks.n_ipv4_addrs) {
> +    for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
>          ds_clear(match);
>          ds_clear(actions);
>  
>          ds_put_format(match, "inport == %s && ip4.dst == %s",
> -                      op->json_key, op->lrp_networks.ipv4_addrs[0].addr_s);
> +                      op->json_key, op->lrp_networks.ipv4_addrs[i].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
>                        ds_cstr(match), "ct_snat;", lflow_ref);
>  
>          ds_clear(match);
>  
> +        /* Since flags.network_id is 16 bits, assign flags.network_id = 0 for

Typo: 4 bits.

> +         * networks above 16. */
> +        if (i > 15) {
> +            network_id = 0;
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Logical router port %s already has the max of 
> "
> +                              "16 networks configured, so for network "
> +                              "\"%s/%d\" the first IP [%s] will be 
> considered "
> +                              "as SNAT for load balancer.", op->json_key,
> +                              op->lrp_networks.ipv4_addrs[i].addr_s,
> +                              op->lrp_networks.ipv4_addrs[i].plen,
> +                              op->lrp_networks.ipv4_addrs[0].addr_s);
> +        } else {
> +            network_id = i;
> +        }
> +
>          /* Higher priority rules to force SNAT with the router port ip.
>           * This only takes effect when the packet has already been
>           * load balanced once. */
> -        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
> -                      "outport == %s", op->json_key);
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && "
> +                      "flags.network_id == %"PRIuSIZE" && ip4 && "
> +                      "outport == %s", network_id, op->json_key);
>          ds_put_format(actions, "ct_snat(%s);",
> -                      op->lrp_networks.ipv4_addrs[0].addr_s);
> +                      op->lrp_networks.ipv4_addrs[network_id].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
>                        ds_cstr(match), ds_cstr(actions),
>                        lflow_ref);
> -        if (op->lrp_networks.n_ipv4_addrs > 1) {
> -            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> -                              "multiple IPv4 addresses.  Only the first "
> -                              "IP [%s] is considered as SNAT for load "
> -                              "balancer", op->json_key,
> -                              op->lrp_networks.ipv4_addrs[0].addr_s);
> -        }
>      }
>  
>      /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
> -     * last in the list. So add the flows only if n_ipv6_addrs > 1. */
> -    if (op->lrp_networks.n_ipv6_addrs > 1) {
> +     * last in the list. So loop to add flows n_ipv6_addrs - 1 times. */
> +    for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs - 1; i++) {
>          ds_clear(match);
>          ds_clear(actions);
>  
>          ds_put_format(match, "inport == %s && ip6.dst == %s",
> -                      op->json_key, op->lrp_networks.ipv6_addrs[0].addr_s);
> +                      op->json_key, op->lrp_networks.ipv6_addrs[i].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_IN_UNSNAT, 110,
>                        ds_cstr(match), "ct_snat;", lflow_ref);
> -
>          ds_clear(match);
>  
> +        /* Since flags.network_id is 16 bits, assign flags.network_id = 0 for

Typo: 4 bits.

> +         * networks above 16. */
> +        if (i > 15) {
> +            network_id = 0;
> +            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> +            VLOG_WARN_RL(&rl, "Logical router port %s already has the max of 
> "
> +                              "16 networks configured, so for network "
> +                              "\"%s/%d\" the first IP [%s] will be 
> considered "
> +                              "as SNAT for load balancer.", op->json_key,
> +                              op->lrp_networks.ipv4_addrs[i].addr_s,
> +                              op->lrp_networks.ipv4_addrs[i].plen,
> +                              op->lrp_networks.ipv4_addrs[0].addr_s);
> +        } else {
> +            network_id = i;
> +        }
> +
>          /* Higher priority rules to force SNAT with the router port ip.
>           * This only takes effect when the packet has already been
>           * load balanced once. */
> -        ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
> -                      "outport == %s", op->json_key);
> +        ds_put_format(match, "flags.force_snat_for_lb == 1 && "
> +                      "flags.network_id == %"PRIuSIZE" && ip6 && "
> +                      "outport == %s", network_id, op->json_key);
>          ds_put_format(actions, "ct_snat(%s);",
> -                      op->lrp_networks.ipv6_addrs[0].addr_s);
> +                      op->lrp_networks.ipv6_addrs[network_id].addr_s);
>          ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 110,
>                        ds_cstr(match), ds_cstr(actions),
>                        lflow_ref);
> -        if (op->lrp_networks.n_ipv6_addrs > 2) {
> -            static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
> -            VLOG_WARN_RL(&rl, "Logical router port %s is configured with "
> -                              "multiple IPv6 addresses.  Only the first "
> -                              "IP [%s] is considered as SNAT for load "
> -                              "balancer", op->json_key,
> -                              op->lrp_networks.ipv6_addrs[0].addr_s);
> -        }
>      }
> +
> +    /* This lower-priority flow matches the old behavior for if northd is
> +     * upgraded before controller and flags.network_id is not recognized. */
> +    ds_clear(match);
> +    ds_clear(actions);
> +    ds_put_format(match, "flags.force_snat_for_lb == 1 && ip4 && "
> +                  "outport == \"%s\"", op->json_key);

This generates invalid flows, e.g., from the CI run logs:

2025-03-06T02:32:51.714Z|00049|lflow|WARN|error parsing match
"flags.force_snat_for_lb == 1 && ip4 && outport == ""rp-sw0""": Syntax
error at `rp' expecting end of input.

The additional double quotes are not needed.  It can be:
        "outport == %s", op->json_key)


> +    ds_put_format(actions, "ct_snat(%s);",
> +                  op->lrp_networks.ipv4_addrs[0].addr_s);
> +    ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 100,
> +                  ds_cstr(match), ds_cstr(actions), lflow_ref);
> +
> +    ds_clear(match);
> +    ds_clear(actions);
> +    ds_put_format(match, "flags.force_snat_for_lb == 1 && ip6 && "
> +                  "outport == \"%s\"", op->json_key);

Same as above, no need for additional quotes.

> +    ds_put_format(actions, "ct_snat(%s);",
> +                  op->lrp_networks.ipv6_addrs[0].addr_s);
> +    ovn_lflow_add(lflows, op->od, S_ROUTER_OUT_SNAT, 100,
> +                  ds_cstr(match), ds_cstr(actions), lflow_ref);
>  }
>  
>  static void
> @@ -14930,6 +14968,85 @@ build_arp_request_flows_for_lrouter(
>                    lflow_ref);
>  }
>  
> +static void
> +build_lr_force_snat_network_id_flows(
> +            struct ovn_datapath *od, struct lflow_table *lflows,
> +            struct ds *match, struct ds *actions, struct lflow_ref 
> *lflow_ref)
> +{
> +    const struct ovn_port *op;
> +    size_t network_id;
> +    HMAP_FOR_EACH (op, dp_node, &od->ports) {
> +        for (size_t i = 0; i < op->lrp_networks.n_ipv4_addrs; i++) {
> +            /* Since flags.network_id is 16 bits, assign a value of 0 for 
> network

Typo: 4 bits.

> +             * 16 and up. */
> +            if (i > 15) {
> +                network_id = 0;
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 
> 5);
> +                VLOG_WARN_RL(&rl, "Logical router port %s already has the 
> max "
> +                                  "of 16 networks configured, so network "

Nit: if we add the OVN_MAX_NETWORK_ID macro we could use
(OVN_MAX_NETWORK_ID + 1) instead of hardcoding 16 here.

> +                                  "\"%s/%d\" is assigned "
> +                                  "flags.network_id = 0.", op->json_key,
> +                                  op->lrp_networks.ipv4_addrs[i].addr_s,
> +                                  op->lrp_networks.ipv4_addrs[i].plen);
> +            } else {
> +                network_id = i;
> +            }
> +
> +            ds_clear(match);
> +            ds_clear(actions);
> +
> +            ds_put_format(match, "flags.force_snat_for_lb == 1 && "

I forget now why this additional match is needed.  IIUC the purpose of
the stage is to store the network-id of the router port network that was
used for forwarding the IP packet.  It doesn't really have anything to
do with NAT and/or LB (please also see my comment on the documentation
change, ovn-northd.8.xml).

If we do that we should also change the function name to something like
build_lrouter_network_id_flows().
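
Just to illustrate what I mean (untested sketch), the IPv4 match would then
simply become:

    ds_put_format(match, "outport == %s && " REG_NEXT_HOP_IPV4 " == %s/%d",
                  op->json_key, op->lrp_networks.ipv4_addrs[i].addr_s,
                  op->lrp_networks.ipv4_addrs[i].plen);

and the same for IPv6 with REG_NEXT_HOP_IPV6 and ipv6_addrs.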

> +                          "outport == %s && " REG_NEXT_HOP_IPV4 " == %s/%d",
> +                          op->json_key, 
> op->lrp_networks.ipv4_addrs[i].addr_s,
> +                          op->lrp_networks.ipv4_addrs[i].plen);
> +
> +            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; ",
> +                          network_id);
> +            ds_put_format(actions, "next;");
> +
> +            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_NETWORK_ID, 110,
> +                          ds_cstr(match), ds_cstr(actions),
> +                          lflow_ref);
> +        }
> +
> +        /* op->lrp_networks.ipv6_addrs will always have LLA and that will be
> +         * last in the list. So add the flows only if n_ipv6_addrs > 1, and
> +         * loop n_ipv6_addrs - 1 times. */
> +        for (size_t i = 0; i < op->lrp_networks.n_ipv6_addrs - 1; i++) {
> +            /* Since flags.network_id is 16 bits, assign a value of 0 for 
> network

Typo: 4 bits.

> +             * 16 and up. */
> +            if (i > 15) {
> +                network_id = 0;
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 
> 5);
> +                VLOG_WARN_RL(&rl, "Logical router port %s already has the 
> max "
> +                                  "of 16 networks configured, so network "

Same nit as above about not hardcoding 16.

> +                                  "\"%s/%d\" is assigned "
> +                                  "flags.network_id = 0.", op->json_key,
> +                                  op->lrp_networks.ipv6_addrs[i].addr_s,
> +                                  op->lrp_networks.ipv6_addrs[i].plen);
> +            } else {
> +                network_id = i;
> +            }
> +
> +            ds_clear(match);
> +            ds_clear(actions);
> +
> +            ds_put_format(match, "flags.force_snat_for_lb == 1 && "

Same comment about removing the force_snat_for_lb match.

> +                          "outport == %s && " REG_NEXT_HOP_IPV6 " == %s/%d",
> +                          op->json_key, 
> op->lrp_networks.ipv6_addrs[i].addr_s,
> +                          op->lrp_networks.ipv6_addrs[i].plen);
> +
> +            ds_put_format(actions, "flags.network_id = %"PRIuSIZE"; ", i);
> +            ds_put_format(actions, "next;");
> +
> +            ovn_lflow_add(lflows, op->od, S_ROUTER_IN_NETWORK_ID, 110,
> +                          ds_cstr(match), ds_cstr(actions), lflow_ref);
> +        }
> +    }
> +    ovn_lflow_add(lflows, od, S_ROUTER_IN_NETWORK_ID, 0,
> +                  "1", "next;", lflow_ref);
> +}
> +
>  /* Logical router egress table DELIVERY: Delivery (priority 100-110).
>   *
>   * Priority 100 rules deliver packets to enabled logical ports.
> @@ -17328,6 +17445,8 @@ build_lswitch_and_lrouter_iterate_by_lr(struct 
> ovn_datapath *od,
>                                          &lsi->actions,
>                                          lsi->meter_groups,
>                                          NULL);
> +    build_lr_force_snat_network_id_flows(od, lsi->lflows, &lsi->match,
> +                                         &lsi->actions, NULL);
>      build_misc_local_traffic_drop_flows_for_lrouter(od, lsi->lflows, NULL);
>  
>      build_lr_nat_defrag_and_lb_default_flows(od, lsi->lflows, NULL);
> diff --git a/northd/northd.h b/northd/northd.h
> index 1a7afe902..388bac6df 100644
> --- a/northd/northd.h
> +++ b/northd/northd.h
> @@ -547,7 +547,8 @@ enum ovn_stage {
>      PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     22, "lr_in_chk_pkt_len")    
>  \
>      PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     23, "lr_in_larger_pkts")    
>  \
>      PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     24, "lr_in_gw_redirect")    
>  \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     25, "lr_in_arp_request")    
>  \
> +    PIPELINE_STAGE(ROUTER, IN,  NETWORK_ID,      25, "lr_in_network_id")     
>  \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     26, "lr_in_arp_request")    
>  \
>                                                                        \
>      /* Logical router egress stages. */                               \
>      PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       
> \
> diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
> index 155ba8a49..aa5d6714b 100644
> --- a/northd/ovn-northd.8.xml
> +++ b/northd/ovn-northd.8.xml
> @@ -5066,7 +5066,44 @@ icmp6 {
>        </li>
>      </ul>
>  
> -   <h3>Ingress Table 25: ARP Request</h3>
> +   <h3>Ingress Table 25: Network ID</h3>
> +
> +    <p>
> +      This table generates flows that set flags.network_id. It holds the
> +      following flows:
> +    </p>
> +
> +    <ul>
> +      <li>
> +        <p>
> +          A priority-110 flow for IPv4 packets with match
> +          <code>flags.force_snat_for_lb == 1 &amp;&amp; outport == 
> <var>P</var>

I forgot why we need to populate the network id value only if
force_snat_for_lb is set.  I think we should remove the extra
"flags.force_snat_for_lb == 1" match.  We can set the network id for all
IP packets.  That means in any later stage we can match on flags.network_id.

> +          &amp;&amp; REG_NEXT_HOP_IPV4 == <var>I</var>/<var>C</var></code>, 
> and
> +          actions <code>flags.network_id = <var>N</var>; next;</code>.
> +        </p>
> +
> +        <p>
> +          Where <var>P</var> is the outport, <var>I</var> is the next-hop IP,
> +          <var>C</var> is the next-hop network CIDR, and <var>N</var> is the
> +          network id (index).
> +        </p>
> +
> +        <p>
> +        <code>flags.network_id</code> is 16 bits, and thus only 16 networks 
> can

Nit: This should be "is 4 bits, and thus only 16 networks can".

> +        be indexed. If the number of networks is greater than 16, networks 17
> +        and up will have the actions <code>flags.network_id = 0; next;</code>
> +        and only the first lexicographical IP will be considered for SNAT for
> +        those networks.
> +        </p>
> +      </li>
> +
> +      <li>
> +        Catch-all: A priority-0 flow with match <code>1</code> has
> +        actions <code>next;</code>.
> +      </li>
> +    </ul>
> +
> +   <h3>Ingress Table 26: ARP Request</h3>
>  
>      <p>
>        In the common case where the Ethernet destination has been resolved, 
> this
> @@ -5330,18 +5367,22 @@ nd_ns {
>            table="Logical_Router"/>:lb_force_snat_ip=router_ip), then for
>            each logical router port <var>P</var> attached to the Gateway
>            router, a priority-110 flow matches
> -          <code>flags.force_snat_for_lb == 1 &amp;&amp; outport == 
> <var>P</var>
> -          </code> with an action <code>ct_snat(<var>R</var>);</code>
> -          where <var>R</var> is the IP configured on the router port.
> -          If <code>R</code> is an IPv4 address then the match will also
> -          include <code>ip4</code> and if it is an IPv6 address, then the
> -          match will also include <code>ip6</code>.
> +          <code>flags.force_snat_for_lb == 1 &amp;&amp; flags.network_id ==
> +          <var>N</var> &amp;&amp; outport == <var>P</var></code>, where
> +          <var>N</var> is the network index, with an action
> +          <code>ct_snat(<var>R</var>);</code> where <var>R</var> is the IP
> +          configured on the router port. If <code>R</code> is an IPv4 address
> +          then the match will also include <code>ip4</code> and if it is an
> +          IPv6 address, then the match will also include <code>ip6</code>.
> +          <var>N</var>, the network index, will be 0 for networks 17 and up.
>          </p>
>  
>          <p>
>            If the logical router port <var>P</var> is configured with multiple
> -          IPv4 and multiple IPv6 addresses, only the first IPv4 and first 
> IPv6
> -          address is considered.
> +          IPv4 and multiple IPv6 addresses, the IPv4 and IPv6 address within
> +          the same network as the next-hop will be chosen. However, if there
> +          are more than 16 networks configured, the first lexicographical IP
> +          will be considered for SNAT for networks 17 and up.
>          </p>
>        </li>
>  
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index cfaba19bf..2a79627b7 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -4493,9 +4493,15 @@ AT_CHECK([grep "lr_in_dnat" lr0flows | 
> ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), 
> action=(ct_snat(172.168.0.100);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), 
> action=(ct_snat(10.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), 
> action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-public""), 
> action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw0""), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw1""), 
> action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-public""), 
> action=(ct_snat(fe80::200:20ff:fe20:1213);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw0""), 
> action=(ct_snat(fe80::200:ff:fe00:ff01);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw1""), 
> action=(ct_snat(fe80::200:ff:fe00:ff02);)

These double quotes above are incorrect.  The flows are not valid.
That's also why the system tests fail.  This comment applies to a bunch
of changes in ovn-northd.at.

> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), 
> action=(next;)
>  ])
>  
> @@ -4558,10 +4564,16 @@ AT_CHECK([grep "lr_in_dnat" lr0flows | 
> ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), 
> action=(ct_snat(172.168.0.100);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), 
> action=(ct_snat(10.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw1"), 
> action=(ct_snat(20.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw1"), 
> action=(ct_snat(bef0::1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-public""), 
> action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw0""), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw1""), 
> action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-public""), 
> action=(ct_snat(fe80::200:20ff:fe20:1213);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw0""), 
> action=(ct_snat(fe80::200:ff:fe00:ff01);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw1""), 
> action=(ct_snat(bef0::1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-public"), action=(ct_snat(172.168.0.100);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw1"), action=(ct_snat(20.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && 
> outport == "lr0-sw1"), action=(ct_snat(bef0::1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), 
> action=(next;)
>  ])
>  
> @@ -6268,8 +6280,12 @@ AT_CHECK([grep "lr_out_post_undnat" lr0flows | 
> ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), 
> action=(ct_snat(172.168.0.10);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-public""), 
> action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw0""), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-public""), 
> action=(ct_snat(fe80::200:ff:fe00:ff02);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw0""), 
> action=(ct_snat(fe80::200:ff:fe00:ff01);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), 
> action=(next;)
>    table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 
> 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
>    table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 
> 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
> @@ -6334,8 +6350,12 @@ AT_CHECK([grep "lr_out_post_undnat" lr0flows | 
> ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), 
> action=(ct_snat(172.168.0.10);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-public""), 
> action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw0""), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-public""), 
> action=(ct_snat(fe80::200:ff:fe00:ff02);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw0""), 
> action=(ct_snat(fe80::200:ff:fe00:ff01);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), 
> action=(next;)
>    table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 
> 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
>    table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 
> 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
> @@ -6412,10 +6432,14 @@ AT_CHECK([grep "lr_out_post_undnat" lr0flows | 
> ovn_strip_lflows], [0], [dnl
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | ovn_strip_lflows], [0], [dnl
>    table=??(lr_out_snat        ), priority=0    , match=(1), action=(next;)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-public"), 
> action=(ct_snat(172.168.0.10);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == "lr0-sw0"), 
> action=(ct_snat(10.0.0.1);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-public"), 
> action=(ct_snat(def0::10);)
> -  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == "lr0-sw0"), 
> action=(ct_snat(aef0::1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-public""), 
> action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip4 && outport == ""lr0-sw0""), 
> action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-public""), 
> action=(ct_snat(def0::10);)
> +  table=??(lr_out_snat        ), priority=100  , 
> match=(flags.force_snat_for_lb == 1 && ip6 && outport == ""lr0-sw0""), 
> action=(ct_snat(aef0::1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-public"), action=(ct_snat(172.168.0.10);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lr0-sw0"), action=(ct_snat(10.0.0.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && 
> outport == "lr0-public"), action=(ct_snat(def0::10);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && 
> outport == "lr0-sw0"), action=(ct_snat(aef0::1);)
>    table=??(lr_out_snat        ), priority=120  , match=(nd_ns), 
> action=(next;)
>    table=??(lr_out_snat        ), priority=25   , match=(ip && ip4.src == 
> 10.0.0.0/24 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.10);)
>    table=??(lr_out_snat        ), priority=33   , match=(ip && ip4.src == 
> 10.0.0.10 && (!ct.trk || !ct.rpl)), action=(ct_snat(172.168.0.30);)
> @@ -16748,3 +16772,84 @@ check grep -q "Bad configuration: The peer of the 
> switch port 'ls-lrp1' (LRP pee
>  
>  AT_CLEANUP
>  ])
> +
> +AT_SETUP([lb_force_snat_ip=routerip generate flags.network_id flows])
> +ovn_start
> +
> +check ovn-nbctl lr-add lr
> +check ovn-nbctl set logical_router lr options:chassis=hv1
> +check ovn-nbctl set logical_router lr options:lb_force_snat_ip=router_ip
> +check ovn-nbctl lrp-add lr lrp-client 02:00:00:00:00:02 1.1.1.1/24 ff01::01
> +check ovn-nbctl lrp-add lr lrp-server 02:00:00:00:00:03 1.1.2.1/24 
> 7.7.7.1/24 \
> +            8.8.8.1/24 1.2.1.1/24 1.2.2.1/24 3.3.3.1/24 4.4.4.1/24 
> 5.5.5.1/24 \
> +            6.6.6.1/24 6.2.1.1/24 6.3.1.1/24 6.4.1.1/24 6.5.1.1/24 
> 6.6.1.1/24 \
> +                                  6.7.1.1/24 6.8.1.1/24 6.9.1.1/24 
> 7.2.1.1/24 \
> +                                                   ff01::02 ff01::03 ff01::06
> +check ovn-nbctl ls-add ls-client
> +check ovn-nbctl ls-add ls-server
> +check ovn-nbctl lsp-add ls-client lsp-client-router
> +check ovn-nbctl lsp-set-type lsp-client-router router
> +check ovn-nbctl lsp-add ls-server lsp-server-router
> +check ovn-nbctl lsp-set-type lsp-server-router router
> +check ovn-nbctl set logical_switch_port lsp-client-router 
> options:router-port=lrp-client
> +check ovn-nbctl set logical_switch_port lsp-server-router 
> options:router-port=lrp-server
> +check ovn-nbctl lsp-add ls-client client
> +check ovn-nbctl lsp-add ls-server server
> +check ovn-nbctl lsp-set-addresses client "02:00:00:00:00:01 1.1.1.10"
> +check ovn-nbctl lsp-set-addresses server "02:00:00:00:00:04 2.2.2.10"
> +check ovn-nbctl lsp-set-addresses lsp-client-router router
> +check ovn-nbctl lsp-set-addresses lsp-server-router router
> +check ovn-nbctl lb-add lb 42.42.42.42:80 2.2.2.10:80 udp
> +check ovn-nbctl lr-lb-add lr lb
> +check ovn-nbctl --wait=hv sync
> +
> +ovn-sbctl dump-flows lr > lrflows
> +AT_CAPTURE_FILE([lrflows])
> +
> +AT_CHECK([grep -E flags.network_id lrflows | ovn_strip_lflows], [0], [dnl
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-client" && reg0 == 
> 1.1.1.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-client" && xxreg0 == 
> ff01::1/128), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 1.1.2.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 1.2.1.1/24), action=(flags.network_id = 1; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 1.2.2.1/24), action=(flags.network_id = 2; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 3.3.3.1/24), action=(flags.network_id = 3; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 4.4.4.1/24), action=(flags.network_id = 4; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 5.5.5.1/24), action=(flags.network_id = 5; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.2.1.1/24), action=(flags.network_id = 6; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.3.1.1/24), action=(flags.network_id = 7; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.4.1.1/24), action=(flags.network_id = 8; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.5.1.1/24), action=(flags.network_id = 9; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.6.1.1/24), action=(flags.network_id = 10; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.6.6.1/24), action=(flags.network_id = 11; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.7.1.1/24), action=(flags.network_id = 12; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.8.1.1/24), action=(flags.network_id = 13; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 6.9.1.1/24), action=(flags.network_id = 14; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 7.2.1.1/24), action=(flags.network_id = 15; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 7.7.7.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && reg0 == 
> 8.8.8.1/24), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && xxreg0 == 
> ff01::2/128), action=(flags.network_id = 0; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && xxreg0 == 
> ff01::3/128), action=(flags.network_id = 1; next;)
> +  table=??(lr_in_network_id   ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && outport == "lrp-server" && xxreg0 == 
> ff01::6/128), action=(flags.network_id = 2; next;)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lrp-client"), action=(ct_snat(1.1.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(1.1.2.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && 
> outport == "lrp-client"), action=(ct_snat(ff01::1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 0 && ip6 && 
> outport == "lrp-server"), action=(ct_snat(ff01::2);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 1 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(1.2.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 1 && ip6 && 
> outport == "lrp-server"), action=(ct_snat(ff01::3);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 10 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.6.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 11 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.6.6.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 12 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.7.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 13 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.8.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 14 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.9.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 15 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(7.2.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 2 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(1.2.2.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 2 && ip6 && 
> outport == "lrp-server"), action=(ct_snat(ff01::6);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 3 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(3.3.3.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 4 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(4.4.4.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 5 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(5.5.5.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 6 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.2.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 7 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.3.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 8 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.4.1.1);)
> +  table=??(lr_out_snat        ), priority=110  , 
> match=(flags.force_snat_for_lb == 1 && flags.network_id == 9 && ip4 && 
> outport == "lrp-server"), action=(ct_snat(6.5.1.1);)
> +])
> +AT_CLEANUP
> diff --git a/tests/ovn.at b/tests/ovn.at
> index d8c06cd73..f1ea1c64e 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -42163,3 +42163,74 @@ wait_row_count ACL_ID 0
>  
>  AT_CLEANUP
>  ])
> +
> +AT_SETUP([lb_force_snat_ip=routerip select correct network for snat])
> +AT_SKIP_IF([test $HAVE_SCAPY = no])
> +ovn_start
> +
> +check ovn-nbctl lr-add lr
> +check ovn-nbctl set logical_router lr options:chassis=hv1
> +check ovn-nbctl set logical_router lr options:lb_force_snat_ip=router_ip
> +check ovn-nbctl lrp-add lr lrp-client 02:00:00:00:00:02 1.1.1.1/24
> +check ovn-nbctl lrp-add lr lrp-server 02:00:00:00:00:03 1.1.2.1/24 
> 1.2.1.1/24 \
> +            1.1.3.1/24 2.2.2.1/24 7.7.7.1/24 8.8.8.1/24 6.7.1.1/24 
> 6.8.1.1/24 \
> +            8.8.8.1/24 1.2.1.1/24 1.2.2.1/24 3.3.3.1/24 4.4.4.1/24 
> 5.5.5.1/24 \
> +            6.6.6.1/24 6.2.1.1/24 6.3.1.1/24 6.4.1.1/24 6.5.1.1/24 6.6.1.1/24
> +check ovn-nbctl ls-add ls-client
> +check ovn-nbctl ls-add ls-server
> +check ovn-nbctl lsp-add ls-client lsp-client-router
> +check ovn-nbctl lsp-set-type lsp-client-router router
> +check ovn-nbctl lsp-add ls-server lsp-server-router
> +check ovn-nbctl lsp-set-type lsp-server-router router
> +check ovn-nbctl set logical_switch_port lsp-client-router 
> options:router-port=lrp-client
> +check ovn-nbctl set logical_switch_port lsp-server-router 
> options:router-port=lrp-server
> +check ovn-nbctl lsp-add ls-client client
> +check ovn-nbctl lsp-add ls-server server
> +check ovn-nbctl lsp-set-addresses client "02:00:00:00:00:01 1.1.1.10"
> +check ovn-nbctl lsp-set-addresses server "02:00:00:00:00:04 2.2.2.10"
> +check ovn-nbctl lsp-set-addresses lsp-client-router router
> +check ovn-nbctl lsp-set-addresses lsp-server-router router
> +check ovn-nbctl lb-add lb 42.42.42.42:80 2.2.2.10:80 udp
> +check ovn-nbctl lr-lb-add lr lb
> +
> +# Create a hypervisor and create OVS ports corresponding to logical ports.
> +net_add n1
> +sim_add hv1
> +as hv1
> +check ovs-vsctl add-br br-phys
> +ovn_attach n1 br-phys 192.168.0.1
> +
> +check ovs-vsctl -- add-port br-int hv1-vif1 -- \
> +    set interface hv1-vif1 external-ids:iface-id=client \
> +    options:tx_pcap=hv1/client-tx.pcap \
> +    options:rxq_pcap=hv1/client-rx.pcap
> +
> +check ovs-vsctl -- add-port br-int hv1-vif2 -- \
> +    set interface hv1-vif2 external-ids:iface-id=server \
> +    options:tx_pcap=hv1/server-tx.pcap \
> +    options:rxq_pcap=hv1/server-rx.pcap
> +
> +wait_for_ports_up
> +check ovn-nbctl --wait=hv sync
> +
> +tx_src_mac="02:00:00:00:00:01"
> +tx_dst_mac="02:00:00:00:00:02"
> +tx_src_ip=1.1.1.10
> +tx_dst_ip=42.42.42.42
> +request=$(fmt_pkt "Ether(dst='${tx_dst_mac}', src='${tx_src_mac}')/ \
> +                  IP(src='${tx_src_ip}', dst='${tx_dst_ip}')/ \
> +                  UDP(sport=20001, dport=80)")
> +
> +check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 $request
> +
> +rx_src_mac="02:00:00:00:00:03"
> +rx_dst_mac="02:00:00:00:00:04"
> +rx_src_ip=2.2.2.1
> +rx_dst_ip=2.2.2.10
> +expected=$(fmt_pkt "Ether(dst='${rx_dst_mac}', src='${rx_src_mac}')/ \
> +                  IP(src='${rx_src_ip}', dst='${rx_dst_ip}', ttl=0x3F)/ \
> +                  UDP(sport=20001, dport=80)")
> +
> +echo $expected > expected
> +OVN_CHECK_PACKETS([hv1/server-tx.pcap], [expected])
> +AT_CLEANUP

Thanks,
Dumitru
