On 11/2/22 16:57, Lorenzo Bianconi wrote:
> Rely on the following new actions to introduce affinity-timeout
> support for load-balancer sessions:
> - commit_lb_aff
> - chk_lb_aff
> 
> Introduce the following tables in switch and router pipelines
> respectively:
> - S_SWITCH_IN_LB_AFF_CHECK
> - S_SWITCH_IN_LB_AFF_LEARN
> - S_ROUTER_IN_LB_AFF_CHECK
> - S_ROUTER_IN_LB_AFF_LEARN
> 
> This way OVN can DNAT connections received from the same client to a
> given load balancer to the same backend, as long as they arrive within
> the affinity timeslot.
> Please note that load-balancer affinity support has a scalability
> limitation, since we need to create a new logical flow per backend.
> 
> Acked-by: Mark Michelson <[email protected]>
> Signed-off-by: Lorenzo Bianconi <[email protected]>
> ---

Hi Lorenzo,

Thanks for this new revision.

I think there's still a problem when we use LB affinity in combination
with the hairpin_snat_ip option.

I set up a simple topology (VM1 <-> LS (LB) <-> LR).

ovn-nbctl lr-add rtr
ovn-nbctl lrp-add rtr rtr-ls 00:00:00:00:01:00 42.42.42.1/24 4242::1/64
ovn-nbctl ls-add ls

ovn-nbctl lsp-add ls ls-rtr
ovn-nbctl lsp-set-addresses ls-rtr 00:00:00:00:01:00
ovn-nbctl lsp-set-type ls-rtr router
ovn-nbctl lsp-set-options ls-rtr router-port=rtr-ls
ovn-nbctl lsp-add ls vm1
ovn-nbctl lsp-set-addresses vm1 00:00:00:00:00:01
ip netns add vm1
ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 00:00:00:00:00:01
ip netns exec vm1 ip addr add 42.42.42.2/24 dev vm1
ip netns exec vm1 ip link set vm1 up
ip netns exec vm1 ip r a default via 42.42.42.1
ovs-vsctl set Interface vm1 external_ids:iface-id=vm1
ovn-nbctl lsp-add ls vm2
ovn-nbctl lsp-set-addresses vm2 00:00:00:00:00:02

ovn-nbctl lb-add lb-test 66.66.66.66:666 42.42.42.2:4242 tcp
ovn-nbctl ls-lb-add ls lb-test
ovn-nbctl set load_balancer lb-test options:affinity_timeout=60
ovn-nbctl set load_balancer lb-test options:hairpin_snat_ip="66.66.66.100"

I started a nc listener on vm1:

ip netns exec vm1 nc -k -l -v 0.0.0.0 4242

Then I tried to access the LB service from the backend itself (VM1):
while sleep 1; do ip netns exec vm1 nc -z -v 66.66.66.66 666; done

The first connection succeeds and the affinity flow gets created in
table 78, but subsequent connections fail.  That's because the hairpin
TCP SYN packet is no longer SNAT-ed:

12:29:58.578150 00:00:00:00:00:01 > 00:00:00:00:01:00, ethertype IPv4
(0x0800), length 74: (tos 0x0, ttl 64, id 37159, offset 0, flags [DF],
proto TCP (6), length 60)

    42.42.42.2.43156 > 66.66.66.66.666: Flags [S], cksum 0xd8de
(incorrect -> 0x23bd), seq 3216328469, win 29200, options [mss
1460,sackOK,TS val 1691730611 ecr 0,nop,wscale 7], length 0

12:29:58.578381 00:00:00:00:01:00 > 00:00:00:00:00:01, ethertype IPv4
(0x0800), length 74: (tos 0x0, ttl 63, id 37159, offset 0, flags [DF],
proto TCP (6), length 60)

    42.42.42.2.43156 > 42.42.42.2.4242: Flags [S], cksum 0x461d
(correct), seq 3216328469, win 29200, options [mss 1460,sackOK,TS val
1691730611 ecr 0,nop,wscale 7], length 0  <<< This should use
hairpin_snat_ip as source IP.
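To double-check which path the hairpin packet takes, one can compare the
learned affinity flows with the hairpin SNAT flows (table 78 is the
OpenFlow table number on my build; it may differ on yours):

```
# Learned LB affinity flows (the affinity check OpenFlow table):
ovs-ofctl dump-flows br-int table=78

# Logical hairpin SNAT flows for the switch:
ovn-sbctl dump-flows ls | grep ls_in_nat_hairpin
```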

Could you please check why this is happening?

If possible, it would be great if you could add a system test for this too.

Thanks,
Dumitru

>  lib/lb.c                |  10 +
>  lib/lb.h                |   1 +
>  northd/northd.c         | 390 +++++++++++++++++++++++++--------
>  northd/ovn-northd.8.xml | 208 +++++++++++++++---
>  ovn-nb.xml              |   7 +
>  tests/ovn-northd.at     | 346 ++++++++++++++++++------------
>  tests/ovn.at            |  54 ++---
>  tests/system-ovn.at     | 465 +++++++++++++++++++++++++++++++++++++++-
>  8 files changed, 1198 insertions(+), 283 deletions(-)
> 
> diff --git a/lib/lb.c b/lib/lb.c
> index ab5de38a8..c08ccceda 100644
> --- a/lib/lb.c
> +++ b/lib/lb.c
> @@ -208,6 +208,16 @@ ovn_northd_lb_create(const struct nbrec_load_balancer *nbrec_lb)
>          smap_get_def(&nbrec_lb->options, "neighbor_responder", "reachable");
>      lb->neigh_mode = strcmp(mode, "all") ? LB_NEIGH_RESPOND_REACHABLE
>                                           : LB_NEIGH_RESPOND_ALL;
> +    uint32_t affinity_timeout =
> +        smap_get_uint(&nbrec_lb->options, "affinity_timeout", 0);
> +    if (affinity_timeout > UINT16_MAX) {
> +        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> +        VLOG_WARN_RL(&rl, "max affinity_timeout timeout value is %u",
> +                     UINT16_MAX);
> +        affinity_timeout = UINT16_MAX;
> +    }
> +    lb->affinity_timeout = affinity_timeout;
> +
>      sset_init(&lb->ips_v4);
>      sset_init(&lb->ips_v6);
>      struct smap_node *node;
> diff --git a/lib/lb.h b/lib/lb.h
> index c1aadd6dd..62843e471 100644
> --- a/lib/lb.h
> +++ b/lib/lb.h
> @@ -67,6 +67,7 @@ struct ovn_northd_lb {
>      bool controller_event;
>      bool routable;
>      bool skip_snat;
> +    uint16_t affinity_timeout;
>  
>      struct sset ips_v4;
>      struct sset ips_v6;
> diff --git a/northd/northd.c b/northd/northd.c
> index b7388afc5..7011d01ca 100644
> --- a/northd/northd.c
> +++ b/northd/northd.c
> @@ -121,20 +121,22 @@ enum ovn_stage {
>      PIPELINE_STAGE(SWITCH, IN,  ACL,            8, "ls_in_acl")           \
>      PIPELINE_STAGE(SWITCH, IN,  QOS_MARK,       9, "ls_in_qos_mark")      \
>      PIPELINE_STAGE(SWITCH, IN,  QOS_METER,     10, "ls_in_qos_meter")     \
> -    PIPELINE_STAGE(SWITCH, IN,  LB,            11, "ls_in_lb")            \
> -    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB,  12, "ls_in_acl_after_lb")  \
> -    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      13, "ls_in_stateful")      \
> -    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   14, "ls_in_pre_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   15, "ls_in_nat_hairpin")   \
> -    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       16, "ls_in_hairpin")       \
> -    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    17, "ls_in_arp_rsp")       \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  18, "ls_in_dhcp_options")  \
> -    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 19, "ls_in_dhcp_response") \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    20, "ls_in_dns_lookup")    \
> -    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  21, "ls_in_dns_response")  \
> -    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 22, "ls_in_external_port") \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       23, "ls_in_l2_lkup")       \
> -    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    24, "ls_in_l2_unknown")    \
> +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_CHECK,  11, "ls_in_lb_aff_check")  \
> +    PIPELINE_STAGE(SWITCH, IN,  LB,            12, "ls_in_lb")            \
> +    PIPELINE_STAGE(SWITCH, IN,  LB_AFF_LEARN,  13, "ls_in_lb_aff_learn")  \
> +    PIPELINE_STAGE(SWITCH, IN,  ACL_AFTER_LB,  14, "ls_in_acl_after_lb")  \
> +    PIPELINE_STAGE(SWITCH, IN,  STATEFUL,      15, "ls_in_stateful")      \
> +    PIPELINE_STAGE(SWITCH, IN,  PRE_HAIRPIN,   16, "ls_in_pre_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  NAT_HAIRPIN,   17, "ls_in_nat_hairpin")   \
> +    PIPELINE_STAGE(SWITCH, IN,  HAIRPIN,       18, "ls_in_hairpin")       \
> +    PIPELINE_STAGE(SWITCH, IN,  ARP_ND_RSP,    19, "ls_in_arp_rsp")       \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_OPTIONS,  20, "ls_in_dhcp_options")  \
> +    PIPELINE_STAGE(SWITCH, IN,  DHCP_RESPONSE, 21, "ls_in_dhcp_response") \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_LOOKUP,    22, "ls_in_dns_lookup")    \
> +    PIPELINE_STAGE(SWITCH, IN,  DNS_RESPONSE,  23, "ls_in_dns_response")  \
> +    PIPELINE_STAGE(SWITCH, IN,  EXTERNAL_PORT, 24, "ls_in_external_port") \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_LKUP,       25, "ls_in_l2_lkup")       \
> +    PIPELINE_STAGE(SWITCH, IN,  L2_UNKNOWN,    26, "ls_in_l2_unknown")    \
>                                                                            \
>      /* Logical switch egress stages. */                                   \
>      PIPELINE_STAGE(SWITCH, OUT, PRE_LB,       0, "ls_out_pre_lb")         \
> @@ -155,20 +157,22 @@ enum ovn_stage {
>      PIPELINE_STAGE(ROUTER, IN,  IP_INPUT,        3, "lr_in_ip_input")     \
>      PIPELINE_STAGE(ROUTER, IN,  UNSNAT,          4, "lr_in_unsnat")       \
>      PIPELINE_STAGE(ROUTER, IN,  DEFRAG,          5, "lr_in_defrag")       \
> -    PIPELINE_STAGE(ROUTER, IN,  DNAT,            6, "lr_in_dnat")         \
> -    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   7, "lr_in_ecmp_stateful") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   8, "lr_in_nd_ra_options") \
> -    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  9, "lr_in_nd_ra_response") \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  10, "lr_in_ip_routing_pre")  \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      11, "lr_in_ip_routing")      \
> -    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 12, "lr_in_ip_routing_ecmp") \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY,          13, "lr_in_policy")          \
> -    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     14, "lr_in_policy_ecmp")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     15, "lr_in_arp_resolve")     \
> -    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     16, "lr_in_chk_pkt_len")     \
> -    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     17, "lr_in_larger_pkts")     \
> -    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     18, "lr_in_gw_redirect")     \
> -    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     19, "lr_in_arp_request")     \
> +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_CHECK,    6, "lr_in_lb_aff_check") \
> +    PIPELINE_STAGE(ROUTER, IN,  DNAT,            7, "lr_in_dnat")         \
> +    PIPELINE_STAGE(ROUTER, IN,  LB_AFF_LEARN,    8, "lr_in_lb_aff_learn") \
> +    PIPELINE_STAGE(ROUTER, IN,  ECMP_STATEFUL,   9, "lr_in_ecmp_stateful") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_OPTIONS,   10, "lr_in_nd_ra_options") \
> +    PIPELINE_STAGE(ROUTER, IN,  ND_RA_RESPONSE,  11, "lr_in_nd_ra_response") \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_PRE,  12, "lr_in_ip_routing_pre")  \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING,      13, "lr_in_ip_routing")      \
> +    PIPELINE_STAGE(ROUTER, IN,  IP_ROUTING_ECMP, 14, "lr_in_ip_routing_ecmp") \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY,          15, "lr_in_policy")          \
> +    PIPELINE_STAGE(ROUTER, IN,  POLICY_ECMP,     16, "lr_in_policy_ecmp")     \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_RESOLVE,     17, "lr_in_arp_resolve")     \
> +    PIPELINE_STAGE(ROUTER, IN,  CHK_PKT_LEN,     18, "lr_in_chk_pkt_len")     \
> +    PIPELINE_STAGE(ROUTER, IN,  LARGER_PKTS,     19, "lr_in_larger_pkts")     \
> +    PIPELINE_STAGE(ROUTER, IN,  GW_REDIRECT,     20, "lr_in_gw_redirect")     \
> +    PIPELINE_STAGE(ROUTER, IN,  ARP_REQUEST,     21, "lr_in_arp_request")     \
>                                                                        \
>      /* Logical router egress stages. */                               \
>      PIPELINE_STAGE(ROUTER, OUT, CHECK_DNAT_LOCAL,   0,                       
> \
> @@ -215,8 +219,17 @@ enum ovn_stage {
>  #define REG_ORIG_DIP_IPV6         "xxreg1"
>  #define REG_ORIG_TP_DPORT         "reg2[0..15]"
>  
> +/* Register used to store backend ipv6 address
> + * for load balancer affinity. */
> +#define REG_LB_L2_AFF_BACKEND_IP6 "xxreg0"
> +
>  /* Register definitions for switches and routers. */
>  
> +/* Register used to store backend ipv4 address
> + * for load balancer affinity. */
> +#define REG_LB_AFF_BACKEND_IP4  "reg4"
> +#define REG_LB_AFF_MATCH_PORT   "reg8[0..15]"
> +
>  /* Indicate that this packet has been recirculated using egress
>   * loopback.  This allows certain checks to be bypassed, such as a
>   * logical router dropping packets with source IP address equals
> @@ -228,6 +241,8 @@ enum ovn_stage {
>  #define REGBIT_LOOKUP_NEIGHBOR_IP_RESULT "reg9[3]"
>  #define REGBIT_DST_NAT_IP_LOCAL "reg9[4]"
>  #define REGBIT_KNOWN_ECMP_NH    "reg9[5]"
> +#define REGBIT_KNOWN_LB_SESSION "reg9[6]"
>  
>  /* Register to store the eth address associated to a router port for packets
>   * received in S_ROUTER_IN_ADMISSION.
> @@ -245,6 +260,10 @@ enum ovn_stage {
>  #define REG_SRC_IPV6 "xxreg1"
>  #define REG_ROUTE_TABLE_ID "reg7"
>  
> +/* Register used to store backend ipv6 address
> + * for load balancer affinity. */
> +#define REG_LB_L3_AFF_BACKEND_IP6  "xxreg1"
> +
>  #define REG_ORIG_TP_DPORT_ROUTER   "reg9[16..31]"
>  
>  /* Register used for setting a label for ACLs in a Logical Switch. */
> @@ -267,73 +286,75 @@ enum ovn_stage {
>   * OVS register usage:
>   *
>   * Logical Switch pipeline:
> - * +----+----------------------------------------------+---+------------------+
> - * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |   |                  |
> - * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |   |                  |
> - * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |   |                  |
> - * |    |     REGBIT_ACL_LABEL                         | X |                  |
> - * +----+----------------------------------------------+ X |                  |
> - * | R1 |         ORIG_DIP_IPV4 (>= IN_PRE_STATEFUL)   | R |                  |
> - * +----+----------------------------------------------+ E |                  |
> - * | R2 |         ORIG_TP_DPORT (>= IN_PRE_STATEFUL)   | G |                  |
> - * +----+----------------------------------------------+ 0 |                  |
> - * | R3 |                  ACL LABEL                   |   |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R4 |                   UNUSED                     |   |                  |
> - * +----+----------------------------------------------+ X | ORIG_DIP_IPV6(>= |
> - * | R5 |                   UNUSED                     | X | IN_PRE_STATEFUL) |
> - * +----+----------------------------------------------+ R |                  |
> - * | R6 |                   UNUSED                     | E |                  |
> - * +----+----------------------------------------------+ G |                  |
> - * | R7 |                   UNUSED                     | 1 |                  |
> - * +----+----------------------------------------------+---+------------------+
> - * | R8 |                   UNUSED                     |
> + * +----+----------------------------------------------+---+-----------------------------------+
> + * | R0 |     REGBIT_{CONNTRACK/DHCP/DNS}              |   |                                   |
> + * |    |     REGBIT_{HAIRPIN/HAIRPIN_REPLY}           |   |                                   |
> + * |    | REGBIT_ACL_HINT_{ALLOW_NEW/ALLOW/DROP/BLOCK} |   |                                   |
> + * |    |     REGBIT_ACL_LABEL                         | X |                                   |
> + * +----+----------------------------------------------+ X |                                   |
> + * | R1 |         ORIG_DIP_IPV4 (>= IN_PRE_STATEFUL)   | R |       LB_L2_AFF_BACKEND_IP6       |
> + * +----+----------------------------------------------+ E |                                   |
> + * | R2 |         ORIG_TP_DPORT (>= IN_PRE_STATEFUL)   | G |                                   |
> + * +----+----------------------------------------------+ 0 |                                   |
> + * | R3 |                  ACL LABEL                   |   |                                   |
> + * +----+----------------------------------------------+---+-----------------------------------+
> + * | R4 |            REG_LB_AFF_BACKEND_IP4            |   |                                   |
> + * +----+----------------------------------------------+ X |                                   |
> + * | R5 |                   UNUSED                     | X | ORIG_DIP_IPV6(>= IN_PRE_STATEFUL) |
> + * +----+----------------------------------------------+ R |                                   |
> + * | R6 |                   UNUSED                     | E |                                   |
> + * +----+----------------------------------------------+ G |                                   |
> + * | R7 |                   UNUSED                     | 1 |                                   |
> + * +----+----------------------------------------------+---+-----------------------------------+
> + * | R8 |              LB_AFF_MATCH_PORT               |
>   * +----+----------------------------------------------+
>   * | R9 |                   UNUSED                     |
>   * +----+----------------------------------------------+
>   *
>   * Logical Router pipeline:
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R0  | REGBIT_ND_RA_OPTS_RESULT |   |                 |   |               |
> - * |     |   (= IN_ND_RA_OPTIONS)   | X |                 |   |               |
> - * |     |      NEXT_HOP_IPV4       | R |                 |   |               |
> - * |     |      (>= IP_INPUT)       | E | INPORT_ETH_ADDR | X |               |
> - * +-----+--------------------------+ G |   (< IP_INPUT)  | X |               |
> - * | R1  |   SRC_IPV4 for ARP-REQ   | 0 |                 | R |               |
> - * |     |      (>= IP_INPUT)       |   |                 | E | NEXT_HOP_IPV6 |
> - * +-----+--------------------------+---+-----------------+ G | ( >= DEFRAG ) |
> - * | R2  |        UNUSED            | X |                 | 0 |               |
> - * |     |                          | R |                 |   |               |
> - * +-----+--------------------------+ E |     UNUSED      |   |               |
> - * | R3  |        UNUSED            | G |                 |   |               |
> - * |     |                          | 1 |                 |   |               |
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R4  |        UNUSED            | X |                 |   |               |
> - * |     |                          | R |                 |   |               |
> - * +-----+--------------------------+ E |     UNUSED      | X |               |
> - * | R5  |        UNUSED            | G |                 | X |               |
> - * |     |                          | 2 |                 | R |SRC_IPV6 for NS|
> - * +-----+--------------------------+---+-----------------+ E | ( >=          |
> - * | R6  |        UNUSED            | X |                 | G | IN_IP_ROUTING)|
> - * |     |                          | R |                 | 1 |               |
> - * +-----+--------------------------+ E |     UNUSED      |   |               |
> - * | R7  |      ROUTE_TABLE_ID      | G |                 |   |               |
> - * |     | (>= IN_IP_ROUTING_PRE && | 3 |                 |   |               |
> - * |     |  <= IN_IP_ROUTING)       |   |                 |   |               |
> - * +-----+--------------------------+---+-----------------+---+---------------+
> - * | R8  |     ECMP_GROUP_ID        |   |                 |
> - * |     |     ECMP_MEMBER_ID       | X |                 |
> - * +-----+--------------------------+ R |                 |
> - * |     | REGBIT_{                 | E |                 |
> - * |     |   EGRESS_LOOPBACK/       | G |     UNUSED      |
> - * | R9  |   PKT_LARGER/            | 4 |                 |
> - * |     |   LOOKUP_NEIGHBOR_RESULT/|   |                 |
> - * |     |   SKIP_LOOKUP_NEIGHBOR/  |   |                 |
> - * |     |   KNOWN_ECMP_NH}         |   |                 |
> - * |     |                          |   |                 |
> - * |     | REG_ORIG_TP_DPORT_ROUTER |   |                 |
> - * |     |                          |   |                 |
> - * +-----+--------------------------+---+-----------------+
> + * +-----+---------------------------+---+-----------------+---+------------------------------------+
> + * | R0  | REGBIT_ND_RA_OPTS_RESULT  |   |                 |   |                                    |
> + * |     |   (= IN_ND_RA_OPTIONS)    | X |                 |   |                                    |
> + * |     |      NEXT_HOP_IPV4        | R |                 |   |                                    |
> + * |     |      (>= IP_INPUT)        | E | INPORT_ETH_ADDR | X |                                    |
> + * +-----+---------------------------+ G |   (< IP_INPUT)  | X |                                    |
> + * | R1  |   SRC_IPV4 for ARP-REQ    | 0 |                 | R |                                    |
> + * |     |      (>= IP_INPUT)        |   |                 | E |     NEXT_HOP_IPV6 (>= DEFRAG )     |
> + * +-----+---------------------------+---+-----------------+ G |                                    |
> + * | R2  |        UNUSED             | X |                 | 0 |                                    |
> + * |     |                           | R |                 |   |                                    |
> + * +-----+---------------------------+ E |     UNUSED      |   |                                    |
> + * | R3  |        UNUSED             | G |                 |   |                                    |
> + * |     |                           | 1 |                 |   |                                    |
> + * +-----+---------------------------+---+-----------------+---+------------------------------------+
> + * | R4  |  REG_LB_AFF_BACKEND_IP4   | X |                 |   |                                    |
> + * |     |                           | R |                 |   |                                    |
> + * +-----+---------------------------+ E |     UNUSED      | X |                                    |
> + * | R5  |        UNUSED             | G |                 | X |                                    |
> + * |     |                           | 2 |                 | R |        LB_L3_AFF_BACKEND_IP6       |
> + * +-----+---------------------------+---+-----------------+ E |           (<= IN_DNAT)             |
> + * | R6  |        UNUSED             | X |                 | G |                                    |
> + * |     |                           | R |                 | 1 |                                    |
> + * +-----+---------------------------+ E |     UNUSED      |   |                                    |
> + * | R7  |      ROUTE_TABLE_ID       | G |                 |   |                                    |
> + * |     | (>= IN_IP_ROUTING_PRE &&  | 3 |                 |   |                                    |
> + * |     |  <= IN_IP_ROUTING)        |   |                 |   |                                    |
> + * +-----+---------------------------+---+-----------------+---+------------------------------------+
> + * | R8  |     ECMP_GROUP_ID         |   |                 |
> + * |     |     ECMP_MEMBER_ID        |   |                 |
> + * |     |    LB_AFF_MATCH_PORT      | X |                 |
> + * +-----+---------------------------+ R |                 |
> + * |     | REGBIT_{                  | E |                 |
> + * |     |   EGRESS_LOOPBACK/        | G |     UNUSED      |
> + * | R9  |   PKT_LARGER/             | 4 |                 |
> + * |     |   LOOKUP_NEIGHBOR_RESULT/ |   |                 |
> + * |     |   SKIP_LOOKUP_NEIGHBOR/   |   |                 |
> + * |     |   KNOWN_ECMP_NH}          |   |                 |
> + * |     |                           |   |                 |
> + * |     | REG_ORIG_TP_DPORT_ROUTER  |   |                 |
> + * |     |                           |   |                 |
> + * +-----+---------------------------+---+-----------------+
>   *
>   */
>  
> @@ -6943,6 +6964,190 @@ build_lb_rules_pre_stateful(struct hmap *lflows, struct ovn_northd_lb *lb,
>      }
>  }
>  
> +static void
> +build_lb_affinity_flows(struct hmap *lflows, struct ovn_northd_lb *lb,
> +                        struct ovn_lb_vip *lb_vip, char *match,
> +                        bool router_pipeline)
> +{
> +    if (!lb->affinity_timeout) {
> +        return;
> +    }
> +
> +    enum ovn_stage stage0 = router_pipeline ?
> +        S_ROUTER_IN_LB_AFF_CHECK : S_SWITCH_IN_LB_AFF_CHECK;
> +    struct ovn_lflow *lflow_ref_aff_check = NULL;
> +    /* Check if we already have an established connection for this
> +     * tuple and we are in the affinity timeslot. */
> +    uint32_t hash_aff_check = ovn_logical_flow_hash(
> +            ovn_stage_get_table(stage0), ovn_stage_get_pipeline(stage0), 100,
> +            match, REGBIT_KNOWN_LB_SESSION" = chk_lb_aff(); next;");
> +
> +    size_t n_dp = router_pipeline ? lb->n_nb_lr : lb->n_nb_ls;
> +    for (size_t i = 0; i < n_dp; i++) {
> +        struct ovn_datapath *od = router_pipeline
> +            ? lb->nb_lr[i] : lb->nb_ls[i];
> +        if (!ovn_dp_group_add_with_reference(lflow_ref_aff_check, od)) {
> +            lflow_ref_aff_check = ovn_lflow_add_at_with_hash(
> +                    lflows, od, stage0, 100, match,
> +                    REGBIT_KNOWN_LB_SESSION" = chk_lb_aff(); next;",
> +                    NULL, NULL, &lb->nlb->header_,
> +                    OVS_SOURCE_LOCATOR, hash_aff_check);
> +        }
> +    }
> +
> +    struct ds aff_action_learn = DS_EMPTY_INITIALIZER;
> +    struct ds aff_match_learn = DS_EMPTY_INITIALIZER;
> +    struct ds aff_action_lb = DS_EMPTY_INITIALIZER;
> +    struct ds aff_match = DS_EMPTY_INITIALIZER;
> +
> +    stage0 = router_pipeline
> +        ? S_ROUTER_IN_LB_AFF_LEARN : S_SWITCH_IN_LB_AFF_LEARN;
> +    enum ovn_stage stage1 = router_pipeline
> +        ? S_ROUTER_IN_DNAT : S_SWITCH_IN_LB;
> +    bool ipv6 = !IN6_IS_ADDR_V4MAPPED(&lb_vip->vip);
> +    for (size_t i = 0; i < lb_vip->n_backends; i++) {
> +        struct ovn_lb_backend *backend = &lb_vip->backends[i];
> +
> +        /* Forward to OFTABLE_CHK_LB_AFFINITY table to store flow tuple. */
> +        ds_put_format(&aff_match_learn, "ct.new && %s.dst == %s",
> +                      ipv6 ? "ip6" : "ip4", backend->ip_str);
> +        if (backend->port) {
> +            ds_put_format(&aff_match_learn, " && %s.dst == %d",
> +                          lb->proto, backend->port);
> +        }
> +
> +        if (lb_vip->vip_port) {
> +            ds_put_format(&aff_action_learn,
> +                          "commit_lb_aff(vip = \"%s%s%s:%d\"",
> +                          ipv6 ? "[" : "", lb_vip->vip_str, ipv6 ? "]" : "",
> +                          lb_vip->vip_port);
> +        } else {
> +            ds_put_format(&aff_action_learn, "commit_lb_aff(vip = \"%s\"",
> +                          lb_vip->vip_str);
> +        }
> +
> +        if (backend->port) {
> +            ds_put_format(&aff_action_learn,", backend = \"%s%s%s:%d\"",
> +                          ipv6 ? "[" : "", backend->ip_str,
> +                          ipv6 ? "]" : "", backend->port);
> +        } else {
> +            ds_put_format(&aff_action_learn,", backend = \"%s\"",
> +                          backend->ip_str);
> +        }
> +
> +        if (lb_vip->vip_port) {
> +            ds_put_format(&aff_action_learn, ", proto = %s", lb->proto);
> +        }
> +
> +        ds_put_format(&aff_action_learn, ", timeout = %d); next;",
> +                      lb->affinity_timeout);
> +
> +        struct ovn_lflow *lflow_ref_aff_learn = NULL;
> +        uint32_t hash_aff_learn = ovn_logical_flow_hash(
> +                ovn_stage_get_table(stage0), ovn_stage_get_pipeline(stage0),
> +                100, ds_cstr(&aff_match_learn), ds_cstr(&aff_action_learn));
> +
> +        /* Use the already selected backend if we are within the
> +         * affinity timeslot. */
> +        if (backend->port) {
> +            ds_put_format(&aff_match,
> +                REGBIT_KNOWN_LB_SESSION" == 1 && %s && %s == %s "
> +                "&& "REG_LB_AFF_MATCH_PORT" == %d",
> +                IN6_IS_ADDR_V4MAPPED(&lb_vip->vip) ? "ip4" : "ip6",
> +                IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)
> +                    ? REG_LB_AFF_BACKEND_IP4
> +                    : router_pipeline
> +                        ? REG_LB_L3_AFF_BACKEND_IP6
> +                        : REG_LB_L2_AFF_BACKEND_IP6,
> +                backend->ip_str, backend->port);
> +            ds_put_format(&aff_action_lb, "ct_lb_mark(backends=%s%s%s:%d);",
> +                          ipv6 ? "[" : "", backend->ip_str,
> +                          ipv6 ? "]" : "", backend->port);
> +        } else {
> +            ds_put_format(&aff_match,
> +                REGBIT_KNOWN_LB_SESSION" == 1 && %s && %s == %s",
> +                IN6_IS_ADDR_V4MAPPED(&lb_vip->vip) ? "ip4" : "ip6",
> +                IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)
> +                    ? REG_LB_AFF_BACKEND_IP4
> +                    : router_pipeline
> +                        ? REG_LB_L3_AFF_BACKEND_IP6
> +                        : REG_LB_L2_AFF_BACKEND_IP6,
> +                backend->ip_str);
> +            ds_put_format(&aff_action_lb, "ct_lb_mark(backends=%s);",
> +                          backend->ip_str);
> +        }
> +
> +        struct ovn_lflow *lflow_ref_aff_lb = NULL;
> +        uint32_t hash_aff_lb = ovn_logical_flow_hash(
> +                ovn_stage_get_table(stage1), ovn_stage_get_pipeline(stage1),
> +                150, ds_cstr(&aff_match), ds_cstr(&aff_action_lb));
> +
> +        for (size_t j = 0; j < n_dp; j++) {
> +            struct ovn_datapath *od = router_pipeline
> +                ? lb->nb_lr[j] : lb->nb_ls[j];
> +            if (!ovn_dp_group_add_with_reference(lflow_ref_aff_learn, od)) {
> +                lflow_ref_aff_learn = ovn_lflow_add_at_with_hash(
> +                        lflows, od, stage0, 100,  ds_cstr(&aff_match_learn),
> +                        ds_cstr(&aff_action_learn), NULL, NULL,
> +                        &lb->nlb->header_, OVS_SOURCE_LOCATOR,
> +                        hash_aff_learn);
> +            }
> +            if (!ovn_dp_group_add_with_reference(lflow_ref_aff_lb, od)) {
> +                lflow_ref_aff_lb = ovn_lflow_add_at_with_hash(
> +                        lflows, od, stage1, 150, ds_cstr(&aff_match),
> +                        ds_cstr(&aff_action_lb), NULL, NULL,
> +                        &lb->nlb->header_, OVS_SOURCE_LOCATOR,
> +                        hash_aff_lb);
> +            }
> +        }
> +
> +        ds_clear(&aff_action_learn);
> +        ds_clear(&aff_match_learn);
> +        ds_clear(&aff_action_lb);
> +        ds_clear(&aff_match);
> +    }
> +
> +    ds_destroy(&aff_action_learn);
> +    ds_destroy(&aff_match_learn);
> +    ds_destroy(&aff_action_lb);
> +    ds_destroy(&aff_match);
> +}
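As a concrete illustration (hand-derived from the match/action builders
above, using the lb-test example from my setup: VIP 66.66.66.66:666, TCP
backend 42.42.42.2:4242, affinity_timeout=60), the switch pipeline should
end up with flows roughly like:

```
table=11(ls_in_lb_aff_check), priority=100
  match=(ct.new && ip4 && reg1 == 66.66.66.66 && reg2[0..15] == 666)
  action=(reg9[6] = chk_lb_aff(); next;)
table=13(ls_in_lb_aff_learn), priority=100
  match=(ct.new && ip4.dst == 42.42.42.2 && tcp.dst == 4242)
  action=(commit_lb_aff(vip = "66.66.66.66:666",
          backend = "42.42.42.2:4242", proto = tcp, timeout = 60); next;)
table=12(ls_in_lb), priority=150
  match=(reg9[6] == 1 && ip4 && reg4 == 42.42.42.2 && reg8[0..15] == 4242)
  action=(ct_lb_mark(backends=42.42.42.2:4242);)
```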
> +
> +static void
> +build_lb_affinity_ls_flows(struct hmap *lflows, struct ovn_northd_lb *lb,
> +                           struct ovn_lb_vip *lb_vip)
> +{
> +    struct ds match = DS_EMPTY_INITIALIZER;
> +    if (IN6_IS_ADDR_V4MAPPED(&lb_vip->vip)) {
> +        ds_put_format(&match, "ct.new && ip4 && "REG_ORIG_DIP_IPV4 " == %s",
> +                      lb_vip->vip_str);
> +    } else {
> +        ds_put_format(&match, "ct.new && ip6 && "REG_ORIG_DIP_IPV6 " == %s",
> +                      lb_vip->vip_str);
> +    }
> +
> +    if (lb_vip->vip_port) {
> +        ds_put_format(&match, " && "REG_ORIG_TP_DPORT " == %"PRIu16,
> +                      lb_vip->vip_port);
> +    }
> +
> +    build_lb_affinity_flows(lflows, lb, lb_vip, ds_cstr(&match), false);
> +    ds_destroy(&match);
> +}
> +
> +static void
> +build_lb_affinity_default_flows(struct ovn_datapath *od, struct hmap *lflows)
> +{
> +    if (od->nbs) {
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_CHECK, 0, "1", "next;");
> +        ovn_lflow_add(lflows, od, S_SWITCH_IN_LB_AFF_LEARN, 0, "1", "next;");
> +    }
> +    if (od->nbr) {
> +        ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_CHECK, 0, "1", "next;");
> +        ovn_lflow_add(lflows, od, S_ROUTER_IN_LB_AFF_LEARN, 0, "1", "next;");
> +    }
> +}
> +
>  static void
>  build_lb_rules(struct hmap *lflows, struct ovn_northd_lb *lb, bool 
> ct_lb_mark,
>                 struct ds *match, struct ds *action,
> @@ -6992,6 +7197,8 @@ build_lb_rules(struct hmap *lflows, struct 
> ovn_northd_lb *lb, bool ct_lb_mark,
>              priority = 120;
>          }
>  
> +        build_lb_affinity_ls_flows(lflows, lb, lb_vip);
> +
>          struct ovn_lflow *lflow_ref = NULL;
>          uint32_t hash = ovn_logical_flow_hash(
>                  ovn_stage_get_table(S_SWITCH_IN_LB),
> @@ -10074,6 +10281,8 @@ build_lrouter_nat_flows_for_lb(struct ovn_lb_vip 
> *lb_vip,
>          xcalloc(lb->n_nb_lr, sizeof *distributed_router);
>      int n_distributed_router = 0;
>  
> +    build_lb_affinity_flows(lflows, lb, lb_vip, new_match, true);
> +
>      /* Group gw router since we do not have datapath dependency in
>       * lflow generation for them.
>       */
> @@ -13981,6 +14190,7 @@ build_lswitch_and_lrouter_iterate_by_od(struct 
> ovn_datapath *od,
>      build_lrouter_nat_defrag_and_lb(od, lsi->lflows, lsi->ports, &lsi->match,
>                                      &lsi->actions, lsi->meter_groups,
>                                      lsi->features->ct_no_masked_label);
> +    build_lb_affinity_default_flows(od, lsi->lflows);
>  }
>  
>  /* Helper function to combine all lflow generation which is iterated by port.
> diff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml
> index a70f2e678..e1a6ca871 100644
> --- a/northd/ovn-northd.8.xml
> +++ b/northd/ovn-northd.8.xml
> @@ -853,9 +853,55 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 11: LB</h3>
> +    <h3>Ingress Table 11: Load balancing affinity check</h3>
> +
> +    <p>
> +      The load balancing affinity check table contains the following
> +      logical flows:
> +    </p>
>  
>      <ul>
> +      <li>
> +        For all the configured load balancing rules for a switch in the
> +        <code>OVN_Northbound</code> database where a positive affinity
> +        timeout is specified in the <code>options</code> column, that
> +        include an L4 port <var>PORT</var> of protocol <var>P</var> and IP
> +        address <var>VIP</var>, a priority-100 flow is added.  For IPv4
> +        <var>VIPs</var>, the flow matches <code>ct.new &amp;&amp; ip
> +        &amp;&amp; ip4.dst == <var>VIP</var> &amp;&amp; <var>P</var>.dst ==
> +        <var>PORT</var></code>.  For IPv6 <var>VIPs</var>, the flow matches
> +        <code>ct.new &amp;&amp; ip &amp;&amp; ip6.dst == <var>VIP</var>
> +        &amp;&amp; <var>P</var> &amp;&amp; <var>P</var>.dst ==
> +        <var>PORT</var></code>.  The flow's action is
> +        <code>reg9[6] = chk_lb_aff(); next;</code>.
> +      </li>
> +
> +      <li>
> +        A priority 0 flow is added which matches on all packets and applies
> +        the action <code>next;</code>.
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress Table 12: LB</h3>
> +
> +    <ul>
> +      <li>
> +        For all the configured load balancing rules for a switch in the
> +        <code>OVN_Northbound</code> database where a positive affinity
> +        timeout is specified in the <code>options</code> column, that
> +        include an L4 port <var>PORT</var> of protocol <var>P</var> and IP
> +        address <var>VIP</var>, a priority-150 flow is added.  For IPv4
> +        <var>VIPs</var>, the flow matches <code>reg9[6] == 1 &amp;&amp; ip
> +        &amp;&amp; ip4.dst == <var>VIP</var> &amp;&amp; <var>P</var>.dst ==
> +        <var>PORT</var></code>.  For IPv6 <var>VIPs</var>, the flow matches
> +        <code>reg9[6] == 1 &amp;&amp; ip &amp;&amp; ip6.dst ==
> +        <var>VIP</var> &amp;&amp; <var>P</var> &amp;&amp; <var>P</var>.dst
> +        == <var>PORT</var></code>.  The flow's action is
> +        <code>ct_lb_mark(<var>args</var>)</code>, where <var>args</var>
> +        contains comma-separated IP addresses (and optional port numbers)
> +        to load balance to.  The address family of the IP addresses of
> +        <var>args</var> is the same as the address family of
> +        <var>VIP</var>.
> +      </li>
> +
>        <li>
>          For all the configured load balancing rules for a switch in
>          <code>OVN_Northbound</code> database that includes a L4 port
> @@ -914,7 +960,38 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress table 12: <code>from-lport</code> ACLs after LB</h3>
> +    <h3>Ingress Table 13: Load balancing affinity learn</h3>
> +
> +    <p>
> +      The load balancing affinity learn table contains the following
> +      logical flows:
> +    </p>
> +
> +    <ul>
> +      <li>
> +        For all the configured load balancing rules for a switch in the
> +        <code>OVN_Northbound</code> database where a positive affinity
> +        timeout <var>T</var> is specified in the <code>options</code>
> +        column, that include an L4 port <var>PORT</var> of protocol
> +        <var>P</var> and IP address <var>VIP</var>, a priority-100 flow is
> +        added.  For IPv4 <var>VIPs</var>, the flow matches
> +        <code>ct.new &amp;&amp; ip &amp;&amp; ip4.dst == <var>VIP</var>
> +        &amp;&amp; <var>P</var>.dst == <var>PORT</var></code>.  For IPv6
> +        <var>VIPs</var>, the flow matches <code>ct.new &amp;&amp; ip
> +        &amp;&amp; ip6.dst == <var>VIP</var> &amp;&amp; <var>P</var>
> +        &amp;&amp; <var>P</var>.dst == <var>PORT</var></code>.  The flow's
> +        action is <code>commit_lb_aff(vip = <var>VIP</var>:<var>PORT</var>,
> +        backend = <var>backend ip</var>:<var>backend port</var>,
> +        proto = <var>P</var>, timeout = <var>T</var>); next;</code>.
> +      </li>
> +
> +      <li>
> +        A priority 0 flow is added which matches on all packets and applies
> +        the action <code>next;</code>.
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress table 14: <code>from-lport</code> ACLs after LB</h3>
>  
>      <p>
>        Logical flows in this table closely reproduce those in the
> @@ -976,7 +1053,7 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 13: Stateful</h3>
> +    <h3>Ingress Table 15: Stateful</h3>
>  
>      <ul>
>        <li>
> @@ -999,7 +1076,7 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 14: Pre-Hairpin</h3>
> +    <h3>Ingress Table 16: Pre-Hairpin</h3>
>      <ul>
>        <li>
>          If the logical switch has load balancer(s) configured, then a
> @@ -1017,7 +1094,7 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 15: Nat-Hairpin</h3>
> +    <h3>Ingress Table 17: Nat-Hairpin</h3>
>      <ul>
>        <li>
>           If the logical switch has load balancer(s) configured, then a
> @@ -1052,7 +1129,7 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 16: Hairpin</h3>
> +    <h3>Ingress Table 18: Hairpin</h3>
>      <ul>
>        <li>
>          <p>
> @@ -1086,7 +1163,7 @@
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 17: ARP/ND responder</h3>
> +    <h3>Ingress Table 19: ARP/ND responder</h3>
>  
>      <p>
>        This table implements ARP/ND responder in a logical switch for known
> @@ -1388,7 +1465,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 18: DHCP option processing</h3>
> +    <h3>Ingress Table 20: DHCP option processing</h3>
>  
>      <p>
>        This table adds the DHCPv4 options to a DHCPv4 packet from the
> @@ -1449,7 +1526,7 @@ next;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 19: DHCP responses</h3>
> +    <h3>Ingress Table 21: DHCP responses</h3>
>  
>      <p>
>        This table implements DHCP responder for the DHCP replies generated by
> @@ -1530,7 +1607,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 20 DNS Lookup</h3>
> +    <h3>Ingress Table 22 DNS Lookup</h3>
>  
>      <p>
>        This table looks up and resolves the DNS names to the corresponding
> @@ -1559,7 +1636,7 @@ reg0[4] = dns_lookup(); next;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 21 DNS Responses</h3>
> +    <h3>Ingress Table 23 DNS Responses</h3>
>  
>      <p>
>        This table implements DNS responder for the DNS replies generated by
> @@ -1594,7 +1671,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress table 22 External ports</h3>
> +    <h3>Ingress table 24 External ports</h3>
>  
>      <p>
>        Traffic from the <code>external</code> logical ports enter the ingress
> @@ -1637,7 +1714,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 23 Destination Lookup</h3>
> +    <h3>Ingress Table 25 Destination Lookup</h3>
>  
>      <p>
>        This table implements switching behavior.  It contains these logical
> @@ -1806,7 +1883,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 24 Destination unknown</h3>
> +    <h3>Ingress Table 26 Destination unknown</h3>
>  
>      <p>
>        This table handles the packets whose destination was not found or
> @@ -3172,7 +3249,33 @@ icmp6 {
>        packet de-fragmentation and tracking before sending it to the next 
> table.
>      </p>
>  
> -    <h3>Ingress Table 6: DNAT</h3>
> +    <h3>Ingress Table 6: Load balancing affinity check</h3>
> +
> +    <p>
> +      The load balancing affinity check table contains the following
> +      logical flows:
> +    </p>
> +
> +    <ul>
> +      <li>
> +        For all the configured load balancing rules for a logical router
> +        where a positive affinity timeout is specified in the
> +        <code>options</code> column, that include an L4 port
> +        <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
> +        <var>VIP</var>, a priority-100 flow is added that matches on
> +        <code>ct.new &amp;&amp; ip &amp;&amp; reg0 == <var>VIP</var>
> +        &amp;&amp; <var>P</var> &amp;&amp; reg9[16..31] ==
> +        <var>PORT</var></code> (<code>xxreg0 == <var>VIP</var></code> in
> +        the IPv6 case) with an action of
> +        <code>reg9[6] = chk_lb_aff(); next;</code>.
> +      </li>
> +
> +      <li>
> +        A priority 0 flow is added which matches on all packets and applies
> +        the action <code>next;</code>.
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress Table 7: DNAT</h3>
>  
>      <p>
>        Packets enter the pipeline with destination IP address that needs to
> @@ -3180,7 +3283,7 @@ icmp6 {
>        in the reverse direction needs to be unDNATed.
>      </p>
>  
> -    <p>Ingress Table 6: Load balancing DNAT rules</p>
> +    <p>Ingress Table 7: Load balancing DNAT rules</p>
>  
>      <p>
>        Following load balancing DNAT flows are added for Gateway router or
> @@ -3190,6 +3293,21 @@ icmp6 {
>      </p>
>  
>      <ul>
> +      <li>
> +        For all the configured load balancing rules for a logical router
> +        where a positive affinity timeout is specified in the
> +        <code>options</code> column, that include an L4 port
> +        <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
> +        <var>VIP</var>, a priority-150 flow is added that matches on
> +        <code>reg9[6] == 1 &amp;&amp; ip &amp;&amp; reg0 == <var>VIP</var>
> +        &amp;&amp; <var>P</var> &amp;&amp; reg9[16..31] ==
> +        <var>PORT</var></code> (<code>xxreg0 == <var>VIP</var></code> in
> +        the IPv6 case) with an action of
> +        <code>ct_lb_mark(<var>args</var>)</code>, where <var>args</var>
> +        contains comma-separated IP addresses (and optional port numbers)
> +        to load balance to.  The address family of the IP addresses of
> +        <var>args</var> is the same as the address family of
> +        <var>VIP</var>.
> +      </li>
> +
>        <li>
>          If controller_event has been enabled for all the configured load
>          balancing rules for a Gateway router or Router with gateway port
> @@ -3319,7 +3437,7 @@ icmp6 {
>        </li>
>      </ul>
>  
> -    <p>Ingress Table 6: DNAT on Gateway Routers</p>
> +    <p>Ingress Table 7: DNAT on Gateway Routers</p>
>  
>      <ul>
>        <li>
> @@ -3361,7 +3479,7 @@ icmp6 {
>        </li>
>      </ul>
>  
> -    <p>Ingress Table 6: DNAT on Distributed Routers</p>
> +    <p>Ingress Table 7: DNAT on Distributed Routers</p>
>  
>      <p>
>        On distributed routers, the DNAT table only handles packets
> @@ -3416,7 +3534,35 @@ icmp6 {
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 7: ECMP symmetric reply processing</h3>
> +    <h3>Ingress Table 8: Load balancing affinity learn</h3>
> +
> +    <p>
> +      The load balancing affinity learn table contains the following
> +      logical flows:
> +    </p>
> +
> +    <ul>
> +      <li>
> +        For all the configured load balancing rules for a logical router
> +        where a positive affinity timeout <var>T</var> is specified in the
> +        <code>options</code> column, that include an L4 port
> +        <var>PORT</var> of protocol <var>P</var> and IPv4 or IPv6 address
> +        <var>VIP</var>, a priority-100 flow is added that matches on
> +        <code>ct.new &amp;&amp; ip &amp;&amp; reg0 == <var>VIP</var>
> +        &amp;&amp; <var>P</var> &amp;&amp; reg9[16..31] ==
> +        <var>PORT</var></code> (<code>xxreg0 == <var>VIP</var></code> in
> +        the IPv6 case) with an action of
> +        <code>commit_lb_aff(vip = <var>VIP</var>:<var>PORT</var>,
> +        backend = <var>backend ip</var>:<var>backend port</var>,
> +        proto = <var>P</var>, timeout = <var>T</var>); next;</code>.
> +      </li>
> +
> +      <li>
> +        A priority 0 flow is added which matches on all packets and applies
> +        the action <code>next;</code>.
> +      </li>
> +    </ul>
> +
> +    <h3>Ingress Table 9: ECMP symmetric reply processing</h3>
>      <ul>
>        <li>
>          If ECMP routes with symmetric reply are configured in the
> @@ -3435,7 +3581,7 @@ icmp6 {
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 8: IPv6 ND RA option processing</h3>
> +    <h3>Ingress Table 10: IPv6 ND RA option processing</h3>
>  
>      <ul>
>        <li>
> @@ -3465,7 +3611,7 @@ reg0[5] = put_nd_ra_opts(<var>options</var>);next;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 9: IPv6 ND RA responder</h3>
> +    <h3>Ingress Table 11: IPv6 ND RA responder</h3>
>  
>      <p>
>        This table implements IPv6 ND RA responder for the IPv6 ND RA replies
> @@ -3510,7 +3656,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 10: IP Routing Pre</h3>
> +    <h3>Ingress Table 12: IP Routing Pre</h3>
>  
>      <p>
>        If a packet arrived at this table from Logical Router Port <var>P</var>
> @@ -3540,7 +3686,7 @@ output;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 11: IP Routing</h3>
> +    <h3>Ingress Table 13: IP Routing</h3>
>  
>      <p>
>        A packet that arrives at this table is an IP packet that should be
> @@ -3741,7 +3887,7 @@ select(reg8[16..31], <var>MID1</var>, <var>MID2</var>, 
> ...);
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 12: IP_ROUTING_ECMP</h3>
> +    <h3>Ingress Table 14: IP_ROUTING_ECMP</h3>
>  
>      <p>
>        This table implements the second part of IP routing for ECMP routes
> @@ -3793,7 +3939,7 @@ outport = <var>P</var>;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 13: Router policies</h3>
> +    <h3>Ingress Table 15: Router policies</h3>
>      <p>
>        This table adds flows for the logical router policies configured
>        on the logical router. Please see the
> @@ -3865,7 +4011,7 @@ next;
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 14: ECMP handling for router policies</h3>
> +    <h3>Ingress Table 16: ECMP handling for router policies</h3>
>      <p>
>        This table handles the ECMP for the router policies configured
>        with multiple nexthops.
> @@ -3909,7 +4055,7 @@ outport = <var>P</var>
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 15: ARP/ND Resolution</h3>
> +    <h3>Ingress Table 17: ARP/ND Resolution</h3>
>  
>      <p>
>        Any packet that reaches this table is an IP packet whose next-hop
> @@ -4127,7 +4273,7 @@ outport = <var>P</var>
>  
>      </ul>
>  
> -    <h3>Ingress Table 16: Check packet length</h3>
> +    <h3>Ingress Table 18: Check packet length</h3>
>  
>      <p>
>        For distributed logical routers or gateway routers with gateway
> @@ -4164,7 +4310,7 @@ REGBIT_PKT_LARGER = check_pkt_larger(<var>L</var>); 
> next;
>        and advances to the next table.
>      </p>
>  
> -    <h3>Ingress Table 17: Handle larger packets</h3>
> +    <h3>Ingress Table 19: Handle larger packets</h3>
>  
>      <p>
>        For distributed logical routers or gateway routers with gateway port
> @@ -4227,7 +4373,7 @@ icmp6 {
>        and advances to the next table.
>      </p>
>  
> -    <h3>Ingress Table 18: Gateway Redirect</h3>
> +    <h3>Ingress Table 20: Gateway Redirect</h3>
>  
>      <p>
>        For distributed logical routers where one or more of the logical router
> @@ -4295,7 +4441,7 @@ icmp6 {
>        </li>
>      </ul>
>  
> -    <h3>Ingress Table 19: ARP Request</h3>
> +    <h3>Ingress Table 21: ARP Request</h3>
>  
>      <p>
>        In the common case where the Ethernet destination has been resolved, 
> this
> diff --git a/ovn-nb.xml b/ovn-nb.xml
> index f41e9d7c0..de0243a96 100644
> --- a/ovn-nb.xml
> +++ b/ovn-nb.xml
> @@ -1908,6 +1908,13 @@
>          requests only for VIPs that are part of a router's subnet.  The 
> default
>          value of this option, if not specified, is <code>reachable</code>.
>        </column>
> +
> +      <column name="options" key="affinity_timeout">
> +        If the CMS provides a positive value (in seconds) for
> +        <code>affinity_timeout</code>, OVN will DNAT connections received
> +        from the same client to this load balancer to the same backend if
> +        they are received within the affinity timeslot.  The maximum
> +        supported value of <code>affinity_timeout</code> is 65535 seconds.
> +      </column>
>      </group>
>    </table>
>  
> diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
> index 4f399eccb..686667e47 100644
> --- a/tests/ovn-northd.at
> +++ b/tests/ovn-northd.at
> @@ -2158,9 +2158,9 @@ AT_CAPTURE_FILE([sw1flows])
>  
>  AT_CHECK(
>    [grep -E 'ls_(in|out)_acl' sw0flows sw1flows | grep pg0 | sort], [0], [dnl
> -sw0flows:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> +sw0flows:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
>  sw0flows:  table=8 (ls_in_acl          ), priority=2002 , match=(inport == 
> @pg0 && ip4 && tcp && tcp.dst == 80), action=(reg0 = 0; reject { /* eth.dst 
> <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=egress,table=5); };)
> -sw1flows:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> +sw1flows:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
>  sw1flows:  table=8 (ls_in_acl          ), priority=2002 , match=(inport == 
> @pg0 && ip4 && tcp && tcp.dst == 80), action=(reg0 = 0; reject { /* eth.dst 
> <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=egress,table=5); };)
>  ])
>  
> @@ -2174,10 +2174,10 @@ ovn-sbctl dump-flows sw1 > sw1flows2
>  AT_CAPTURE_FILE([sw1flows2])
>  
>  AT_CHECK([grep "ls_out_acl" sw0flows2 sw1flows2 | grep pg0 | sort], [0], [dnl
> -sw0flows2:  table=4 (ls_out_acl         ), priority=2002 , match=(outport == 
> @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw0flows2:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw1flows2:  table=4 (ls_out_acl         ), priority=2002 , match=(outport == 
> @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw1flows2:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> +sw0flows2:  table=4 (ls_out_acl         ), priority=2002 , match=(outport == 
> @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw0flows2:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw1flows2:  table=4 (ls_out_acl         ), priority=2002 , match=(outport == 
> @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw1flows2:  table=4 (ls_out_acl         ), priority=2003 , match=(outport == 
> @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; 
> ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
>  ])
>  
>  AS_BOX([3])
> @@ -2192,16 +2192,16 @@ AT_CAPTURE_FILE([sw1flows3])
>  AT_CHECK([grep "ls_out_acl" sw0flows3 sw1flows3 | grep pg0 | sort], [0], [dnl
>  sw0flows3:  table=4 (ls_out_acl         ), priority=2001 , match=(reg0[[7]] 
> == 1 && (outport == @pg0 && ip)), action=(reg0[[1]] = 1; next;)
>  sw0flows3:  table=4 (ls_out_acl         ), priority=2001 , match=(reg0[[8]] 
> == 1 && (outport == @pg0 && ip)), action=(next;)
> -sw0flows3:  table=4 (ls_out_acl         ), priority=2002 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip4 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw0flows3:  table=4 (ls_out_acl         ), priority=2002 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw0flows3:  table=4 (ls_out_acl         ), priority=2003 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip6 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw0flows3:  table=4 (ls_out_acl         ), priority=2003 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> +sw0flows3:  table=4 (ls_out_acl         ), priority=2002 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip4 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw0flows3:  table=4 (ls_out_acl         ), priority=2002 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw0flows3:  table=4 (ls_out_acl         ), priority=2003 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip6 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw0flows3:  table=4 (ls_out_acl         ), priority=2003 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
>  sw1flows3:  table=4 (ls_out_acl         ), priority=2001 , match=(reg0[[7]] 
> == 1 && (outport == @pg0 && ip)), action=(reg0[[1]] = 1; next;)
>  sw1flows3:  table=4 (ls_out_acl         ), priority=2001 , match=(reg0[[8]] 
> == 1 && (outport == @pg0 && ip)), action=(next;)
> -sw1flows3:  table=4 (ls_out_acl         ), priority=2002 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip4 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw1flows3:  table=4 (ls_out_acl         ), priority=2002 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw1flows3:  table=4 (ls_out_acl         ), priority=2003 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip6 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> -sw1flows3:  table=4 (ls_out_acl         ), priority=2003 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=23); };)
> +sw1flows3:  table=4 (ls_out_acl         ), priority=2002 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip4 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw1flows3:  table=4 (ls_out_acl         ), priority=2002 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip4 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw1flows3:  table=4 (ls_out_acl         ), priority=2003 , 
> match=((reg0[[10]] == 1) && outport == @pg0 && ip6 && udp), action=(ct_commit 
> { ct_mark.blocked = 1; };  reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst 
> <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
> +sw1flows3:  table=4 (ls_out_acl         ), priority=2003 , match=((reg0[[9]] 
> == 1) && outport == @pg0 && ip6 && udp), action=(reg0 = 0; reject { /* 
> eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; 
> next(pipeline=ingress,table=25); };)
>  ])
>  AT_CLEANUP
>  ])
> @@ -2373,7 +2373,7 @@ check ovn-nbctl --wait=sb \
>      -- ls-lb-add ls lb
>  
>  AT_CHECK([ovn-sbctl lflow-list ls | grep -e ls_in_acl_hint -e 
> ls_out_acl_hint -e ls_in_acl -e ls_out_acl | sort], [0], [dnl
> -  table=12(ls_in_acl_after_lb ), priority=0    , match=(1), action=(next;)
> +  table=14(ls_in_acl_after_lb ), priority=0    , match=(1), action=(next;)
>    table=3 (ls_out_acl_hint    ), priority=0    , match=(1), action=(next;)
>    table=3 (ls_out_acl_hint    ), priority=1    , match=(ct.est && 
> ct_mark.blocked == 0), action=(reg0[[10]] = 1; next;)
>    table=3 (ls_out_acl_hint    ), priority=2    , match=(ct.est && 
> ct_mark.blocked == 1), action=(reg0[[9]] = 1; next;)
> @@ -2416,7 +2416,7 @@ ovn-nbctl --wait=sb clear logical_switch ls acls
>  ovn-nbctl --wait=sb clear logical_switch ls load_balancer
>  
>  AT_CHECK([ovn-sbctl lflow-list ls | grep -e ls_in_acl_hint -e 
> ls_out_acl_hint -e ls_in_acl -e ls_out_acl | sort], [0], [dnl
> -  table=12(ls_in_acl_after_lb ), priority=0    , match=(1), action=(next;)
> +  table=14(ls_in_acl_after_lb ), priority=0    , match=(1), action=(next;)
>    table=3 (ls_out_acl_hint    ), priority=65535, match=(1), action=(next;)
>    table=4 (ls_out_acl         ), priority=65535, match=(1), action=(next;)
>    table=7 (ls_in_acl_hint     ), priority=65535, match=(1), action=(next;)
> @@ -3659,11 +3659,11 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(ct_lb_mark(backends=10.0.0.40:8080);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(ct_lb_mark(backends=10.0.0.40:8080);)
>  ])
>  
>  AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], 
> [0], [dnl
> @@ -3695,11 +3695,11 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
>  ])
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], 
> [0], [dnl
> @@ -3741,11 +3741,11 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
>  ])
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], 
> [0], [dnl
> @@ -3801,11 +3801,11 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.100 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
>  ])
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | sed 's/table=./table=?/' | sort], 
> [0], [dnl
> @@ -3848,8 +3848,8 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | grep skip_snat_for_lb | sort], [0], 
> [dnl
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.20 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.skip_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.20 && tcp && reg9[[16..31]] == 80), 
> action=(flags.skip_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.20 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.skip_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.20 && tcp && reg9[[16..31]] == 80), 
> action=(flags.skip_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.40:8080);)
>  ])
>  
>  AT_CHECK([grep "lr_out_snat" lr0flows | grep skip_snat_for_lb | sed 
> 's/table=./table=?/' | sort], [0], [dnl
> @@ -4019,7 +4019,7 @@ check_stateful_flows() {
>    table=? (ls_in_pre_stateful ), priority=120  , match=(ip4.dst == 10.0.0.20 
> && tcp.dst == 80), action=(reg1 = 10.0.0.20; reg2[[0..15]] = 80; ct_lb_mark;)
>  ])
>  
> -    AT_CHECK([grep "ls_in_lb" sw0flows | sort | sed 's/table=../table=??/'], 
> [0], [dnl
> +    AT_CHECK([grep "ls_in_lb " sw0flows | sort | sed 
> 's/table=../table=??/'], [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>    table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 10.0.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.4:8080);)
>    table=??(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 10.0.0.20 && tcp.dst == 80), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.40:8080);)
> @@ -4085,7 +4085,7 @@ AT_CHECK([grep "ls_in_pre_stateful" sw0flows | sort | 
> sed 's/table=./table=?/'],
>    table=? (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb_mark;)
>  ])
>  
> -AT_CHECK([grep "ls_in_lb" sw0flows | sort | sed 's/table=../table=??/'], 
> [0], [dnl
> +AT_CHECK([grep "ls_in_lb " sw0flows | sort | sed 's/table=../table=??/'], 
> [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>  ])
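
The trailing space added to this grep pattern looks right to me: with the new affinity stages in place, a bare "ls_in_lb" would also match their stage names (I'm assuming they render as something like "ls_in_lb_aff_check"/"ls_in_lb_aff_learn", per the S_SWITCH_IN_LB_AFF_* stages this patch adds). A minimal sketch of the difference, with hypothetical stage lines:

```shell
# Hypothetical flow lines, only to illustrate the grep anchoring;
# the real stage names come from the new S_SWITCH_IN_LB_AFF_* stages.
printf '%s\n' \
  'table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)' \
  'table=12(ls_in_lb_aff_learn ), priority=0    , match=(1), action=(next;)' \
  > sw0flows

grep -c 'ls_in_lb' sw0flows    # matches both stages
grep -c 'ls_in_lb ' sw0flows   # trailing space keeps only the ls_in_lb stage

rm sw0flows
```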
>  
> @@ -4946,7 +4946,7 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
>  ])
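
Nit, not a blocker: quite a bit of the churn in this patch is the lr_in_dnat table index bumping from 6 to 7 because of the new affinity stage. Several neighboring AT_CHECKs already sidestep this with `sed 's/table=./table=?/'` (and `'s/table=../table=??/'` for two-digit tables); applying the same normalization to these lr_in_dnat checks would make them immune to future stage renumbering. A sketch of the idea:

```shell
# Suggestion sketch: normalize the stage index the way other checks in this
# file already do, so expectations don't change when stages are inserted.
# [0-9][0-9]* also covers two-digit table numbers in one pattern.
printf '%s\n' \
  '  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)' \
  | sed 's/table=[0-9][0-9]*/table=?/'
```

This keeps the match/action content under test while dropping the index that the affinity stages shifted.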
>  
>  AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], 
> [0], [dnl
> @@ -4982,7 +4982,7 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
>  ])
>  
>  AT_CHECK([grep "lr_out_undnat" lr0flows | sed 's/table=./table=?/' | sort], 
> [0], [dnl
> @@ -5034,8 +5034,8 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20 && inport == "lr0-public" && 
> is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone(10.0.0.3);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20 && inport == "lr0-public" && 
> is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone(10.0.0.3);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -5101,16 +5101,16 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20 && inport == "lr0-public" && 
> is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone(10.0.0.3);)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1 && 
> is_chassis_resident("cr-lr0-public")), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200 && is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1 && 
> is_chassis_resident("cr-lr0-public")), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 1 
> && is_chassis_resident("cr-lr0-public")), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1 
> && is_chassis_resident("cr-lr0-public")), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && 
> is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && 
> is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && 
> is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20 && inport == "lr0-public" && 
> is_chassis_resident("cr-lr0-public")), action=(ct_dnat_in_czone(10.0.0.3);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1 && 
> is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200 && is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1 && 
> is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 1 
> && is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1 
> && is_chassis_resident("cr-lr0-public")), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && 
> is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && 
> is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && 
> is_chassis_resident("cr-lr0-public")), 
> action=(ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -5170,16 +5170,16 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -5231,16 +5231,16 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -5295,18 +5295,18 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -5372,20 +5372,20 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip6 && 
> xxreg0 == def0::2 && tcp && reg9[[16..31]] == 8000 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip6 && 
> xxreg0 == def0::2 && tcp && reg9[[16..31]] == 8000), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=[[aef0::2]]:80,[[aef0::3]]:80);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=100  , match=(ip && ip4.dst == 
> 172.168.0.20), action=(flags.loopback = 1; ct_dnat(10.0.0.3);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.200 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.200), action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.80,10.0.0.81);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082 && ct_mark.natted == 
> 1), action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip6 && 
> xxreg0 == def0::2 && tcp && reg9[[16..31]] == 8000 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 10.0.0.10 && tcp && reg9[[16..31]] == 80), 
> action=(flags.force_snat_for_lb = 1; ct_lb_mark(backends=10.0.0.4:8080);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.10 && tcp && reg9[[16..31]] == 9082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.100 && tcp && reg9[[16..31]] == 8082), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:82,10.0.0.60:82);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip6 && 
> xxreg0 == def0::2 && tcp && reg9[[16..31]] == 8000), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=[[aef0::2]]:80,[[aef0::3]]:80);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -5440,11 +5440,11 @@ AT_CHECK([grep "lr_in_defrag" lr0flows | sort], [0], 
> [dnl
>  ])
>  
>  AT_CHECK([grep "lr_in_dnat" lr0flows | sort], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && tcp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && tcp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> -  table=6 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=0    , match=(1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && tcp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.est && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60 && ct_mark.natted == 1), 
> action=(flags.force_snat_for_lb = 1; next;)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && tcp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
> +  table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && ip4 && 
> reg0 == 172.168.0.210 && udp && reg9[[16..31]] == 60), 
> action=(flags.force_snat_for_lb = 1; 
> ct_lb_mark(backends=10.0.0.50:6062,10.0.0.60:6062);)
>  ])
>  
>  AT_CHECK([grep "lr_out_chk_dnat_local" lr0flows | sed 's/table=./table=?/' | 
> sort], [0], [dnl
> @@ -6537,7 +6537,7 @@ AT_CHECK([grep -e "ls_in_acl" lsflows | sed 
> 's/table=../table=??/' | sort], [0],
>    table=??(ls_in_acl_hint     ), priority=7    , match=(ct.new && !ct.est), 
> action=(reg0[[7]] = 1; reg0[[9]] = 1; next;)
>  ])
>  
> -AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], 
> [0], [dnl
> +AT_CHECK([grep -e "ls_in_lb " lsflows | sed 's/table=../table=??/' | sort], 
> [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>    table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
>  ])
> @@ -6590,7 +6590,7 @@ AT_CHECK([grep -e "ls_in_acl" lsflows | sed 
> 's/table=../table=??/' | sort], [0],
>    table=??(ls_in_acl_hint     ), priority=7    , match=(ct.new && !ct.est), 
> action=(reg0[[7]] = 1; reg0[[9]] = 1; next;)
>  ])
>  
> -AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], 
> [0], [dnl
> +AT_CHECK([grep -e "ls_in_lb " lsflows | sed 's/table=../table=??/' | sort], 
> [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>    table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
>  ])
> @@ -6643,7 +6643,7 @@ AT_CHECK([grep -e "ls_in_acl" lsflows | sed 
> 's/table=../table=??/' | sort], [0],
>    table=??(ls_in_acl_hint     ), priority=7    , match=(ct.new && !ct.est), 
> action=(reg0[[7]] = 1; reg0[[9]] = 1; next;)
>  ])
>  
> -AT_CHECK([grep -e "ls_in_lb" lsflows | sed 's/table=../table=??/' | sort], 
> [0], [dnl
> +AT_CHECK([grep -e "ls_in_lb " lsflows | sed 's/table=../table=??/' | sort], 
> [0], [dnl
>    table=??(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>    table=??(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 10.0.0.2), action=(reg0[[1]] = 0; ct_lb_mark(backends=10.0.0.10);)
>  ])
> @@ -7611,7 +7611,7 @@ sort | sed 's/table=../table=??/' ], [0], [dnl
>    table=??(ls_in_check_port_sec), priority=100  , match=(vlan.present), 
> action=(drop;)
>    table=??(ls_in_check_port_sec), priority=50   , match=(1), 
> action=(reg0[[15]] = check_in_port_sec(); next;)
>    table=??(ls_in_check_port_sec), priority=70   , match=(inport == 
> "localnetport"), action=(set_queue(10); reg0[[15]] = check_in_port_sec(); 
> next;)
> -  table=??(ls_in_check_port_sec), priority=70   , match=(inport == "sw0p1"), 
> action=(reg0[[14]] = 1; next(pipeline=ingress, table=16);)
> +  table=??(ls_in_check_port_sec), priority=70   , match=(inport == "sw0p1"), 
> action=(reg0[[14]] = 1; next(pipeline=ingress, table=18);)
>    table=??(ls_in_check_port_sec), priority=70   , match=(inport == "sw0p2"), 
> action=(set_queue(10); reg0[[15]] = check_in_port_sec(); next;)
>    table=??(ls_in_apply_port_sec), priority=0    , match=(1), action=(next;)
>    table=??(ls_in_apply_port_sec), priority=50   , match=(reg0[[15]] == 1), 
> action=(drop;)
> @@ -7648,11 +7648,11 @@ check ovn-nbctl                                       
>         \
>  AS_BOX([No chassis registered - use ct_lb_mark and ct_mark.natted])
>  check ovn-nbctl --wait=sb sync
>  AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
>    table=6 (ls_in_pre_stateful ), priority=120  , match=(ip4.dst == 
> 66.66.66.66), action=(reg1 = 66.66.66.66; ct_lb_mark;)
>    table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb_mark;)
> -  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
> +  table=12(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
>    table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb_mark;)
>  ])
>  
> @@ -7660,11 +7660,11 @@ AS_BOX([Chassis registered that doesn't support 
> ct_lb_mark - use ct_lb and ct_la
>  check ovn-sbctl chassis-add hv geneve 127.0.0.1
>  check ovn-nbctl --wait=sb sync
>  AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 66.66.66.66 && ct_label.natted == 1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 66.66.66.66), action=(ct_lb(backends=42.42.42.2);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 66.66.66.66 && ct_label.natted == 1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 66.66.66.66), action=(ct_lb(backends=42.42.42.2);)
>    table=6 (ls_in_pre_stateful ), priority=120  , match=(ip4.dst == 
> 66.66.66.66), action=(reg1 = 66.66.66.66; ct_lb;)
>    table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb;)
> -  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 66.66.66.66), action=(reg0[[1]] = 0; ct_lb(backends=42.42.42.2);)
> +  table=12(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 66.66.66.66), action=(reg0[[1]] = 0; ct_lb(backends=42.42.42.2);)
>    table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb;)
>  ])
>  
> @@ -7672,11 +7672,11 @@ AS_BOX([Chassis upgrades and supports ct_lb_mark - 
> use ct_lb_mark and ct_mark.na
>  check ovn-sbctl set chassis hv other_config:ct-no-masked-label=true
>  check ovn-nbctl --wait=sb sync
>  AT_CHECK([ovn-sbctl lflow-list | grep -e natted -e ct_lb], [0], [dnl
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
> -  table=6 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.est && ip4 && 
> reg0 == 66.66.66.66 && ct_mark.natted == 1), action=(next;)
> +  table=7 (lr_in_dnat         ), priority=110  , match=(ct.new && ip4 && 
> reg0 == 66.66.66.66), action=(ct_lb_mark(backends=42.42.42.2);)
>    table=6 (ls_in_pre_stateful ), priority=120  , match=(ip4.dst == 
> 66.66.66.66), action=(reg1 = 66.66.66.66; ct_lb_mark;)
>    table=6 (ls_in_pre_stateful ), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb_mark;)
> -  table=11(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
> +  table=12(ls_in_lb           ), priority=110  , match=(ct.new && ip4.dst == 
> 66.66.66.66), action=(reg0[[1]] = 0; ct_lb_mark(backends=42.42.42.2);)
>    table=2 (ls_out_pre_stateful), priority=110  , match=(reg0[[2]] == 1), 
> action=(ct_lb_mark;)
>  ])
>  
> @@ -7838,11 +7838,11 @@ ovn-sbctl dump-flows S1 > S1flows
>  AT_CAPTURE_FILE([S0flows])
>  AT_CAPTURE_FILE([S1flows])
>  
> -AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
> -  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> +AT_CHECK([grep "ls_in_lb " S0flows | sort], [0], [dnl
> +  table=12(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>  ])
> -AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
> -  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> +AT_CHECK([grep "ls_in_lb " S1flows | sort], [0], [dnl
> +  table=12(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>  ])
>  
>  ovn-nbctl --wait=sb set NB_Global . options:install_ls_lb_from_router=true
> @@ -7854,14 +7854,14 @@ AT_CAPTURE_FILE([S0flows])
>  AT_CAPTURE_FILE([S1flows])
>  
>  AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
> -  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:80);)
> -  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.11 && tcp.dst == 8080), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:8080);)
> +  table=12(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> +  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:80);)
> +  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.11 && tcp.dst == 8080), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:8080);)
>  ])
>  AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
> -  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> -  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:80);)
> -  table=11(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.11 && tcp.dst == 8080), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:8080);)
> +  table=12(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> +  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.10 && tcp.dst == 80), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:80);)
> +  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 
> 172.16.0.11 && tcp.dst == 8080), action=(reg0[[1]] = 0; 
> ct_lb_mark(backends=10.0.0.2:8080);)
>  ])
>  
>  ovn-sbctl get datapath S0 _uuid > dp_uuids
> @@ -7880,11 +7880,11 @@ ovn-sbctl dump-flows S1 > S1flows
>  AT_CAPTURE_FILE([S0flows])
>  AT_CAPTURE_FILE([S1flows])
>  
> -AT_CHECK([grep "ls_in_lb" S0flows | sort], [0], [dnl
> -  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> +AT_CHECK([grep "ls_in_lb " S0flows | sort], [0], [dnl
> +  table=12(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>  ])
> -AT_CHECK([grep "ls_in_lb" S1flows | sort], [0], [dnl
> -  table=11(ls_in_lb           ), priority=0    , match=(1), action=(next;)
> +AT_CHECK([grep "ls_in_lb " S1flows | sort], [0], [dnl
> +  table=12(ls_in_lb           ), priority=0    , match=(1), action=(next;)
>  ])
>  
>  check_column "" sb:load_balancer datapaths name=lb0
> @@ -7923,8 +7923,86 @@ ovn-sbctl dump-flows R1 > R1flows
>  AT_CAPTURE_FILE([R1flows])
>  
>  AT_CHECK([grep "lr_in_arp_resolve" R1flows | grep priority=90 | sort], [0], 
> [dnl
> -  table=15(lr_in_arp_resolve  ), priority=90   , match=(outport == "R1-PUB" 
> && ip4.src == 10.0.0.3 && is_chassis_resident("S0-P0")), 
> action=(get_arp(outport, reg0); next;)
> -  table=15(lr_in_arp_resolve  ), priority=90   , match=(outport == "R1-PUB" 
> && ip6.src == 1000::3 && is_chassis_resident("S0-P0")), 
> action=(get_nd(outport, xxreg0); next;)
> +  table=17(lr_in_arp_resolve  ), priority=90   , match=(outport == "R1-PUB" 
> && ip4.src == 10.0.0.3 && is_chassis_resident("S0-P0")), 
> action=(get_arp(outport, reg0); next;)
> +  table=17(lr_in_arp_resolve  ), priority=90   , match=(outport == "R1-PUB" 
> && ip6.src == 1000::3 && is_chassis_resident("S0-P0")), 
> action=(get_nd(outport, xxreg0); next;)
> +])
> +
> +AT_CLEANUP
> +])
> +
> +AT_SETUP([check lb-affinity flows])
> +AT_KEYWORDS([lb-affinity-flows])
> +ovn_start
> +
> +ovn-nbctl lr-add R1
> +ovn-nbctl set logical_router R1 options:chassis=hv1
> +ovn-nbctl lrp-add R1 R1-S0 02:ac:10:01:00:01 10.0.0.1/24
> +ovn-nbctl lrp-add R1 R1-S1 02:ac:10:01:01:01 20.0.0.1/24
> +ovn-nbctl lrp-add R1 R1-PUB 02:ac:20:01:01:01 172.16.0.1/24
> +
> +ovn-nbctl ls-add S0
> +ovn-nbctl lsp-add S0 S0-R1
> +ovn-nbctl lsp-set-type S0-R1 router
> +ovn-nbctl lsp-set-addresses S0-R1 02:ac:10:01:00:01
> +ovn-nbctl lsp-set-options S0-R1 router-port=R1-S0
> +
> +ovn-nbctl ls-add S1
> +ovn-nbctl lsp-add S1 S1-R1
> +ovn-nbctl lsp-set-type S1-R1 router
> +ovn-nbctl lsp-set-addresses S1-R1 02:ac:10:01:01:01
> +ovn-nbctl lsp-set-options S1-R1 router-port=R1-S1
> +
> +# Add load balancers on the logical router R1
> +ovn-nbctl lb-add lb0 172.16.0.10:80 10.0.0.2:80,20.0.0.2:80 tcp
> +ovn-nbctl lr-lb-add R1 lb0
> +ovn-nbctl ls-lb-add S0 lb0
> +
> +ovn-sbctl dump-flows S0 > S0flows
> +ovn-sbctl dump-flows R1 > R1flows
> +
> +AT_CAPTURE_FILE([S0flows])
> +AT_CAPTURE_FILE([S1flows])
> +
> +AT_CHECK([grep "ls_in_lb_aff_check" S0flows | sort], [0], [dnl
> +  table=11(ls_in_lb_aff_check ), priority=0    , match=(1), action=(next;)
> +])
> +AT_CHECK([grep "ls_in_lb_aff_learn" S0flows | sort], [0], [dnl
> +  table=13(ls_in_lb_aff_learn ), priority=0    , match=(1), action=(next;)
> +])
> +
> +AT_CHECK([grep "lr_in_lb_aff_check" R1flows | sort], [0], [dnl
> +  table=6 (lr_in_lb_aff_check ), priority=0    , match=(1), action=(next;)
> +])
> +AT_CHECK([grep "lr_in_lb_aff_learn" R1flows | sort], [0], [dnl
> +  table=8 (lr_in_lb_aff_learn ), priority=0    , match=(1), action=(next;)
> +])
> +
> +ovn-nbctl --wait=sb set load_balancer lb0 options:affinity_timeout=60
> +
> +ovn-sbctl dump-flows S0 > S0flows
> +ovn-sbctl dump-flows R1 > R1flows
> +
> +AT_CAPTURE_FILE([S0flows])
> +AT_CAPTURE_FILE([S1flows])
> +
> +AT_CHECK([grep "ls_in_lb_aff_check" S0flows | sort], [0], [dnl
> +  table=11(ls_in_lb_aff_check ), priority=0    , match=(1), action=(next;)
> +  table=11(ls_in_lb_aff_check ), priority=100  , match=(ct.new && ip4 && 
> reg1 == 172.16.0.10 && reg2[[0..15]] == 80), action=(reg9[[6]] = 
> chk_lb_aff(); next;)
> +])
> +AT_CHECK([grep "ls_in_lb_aff_learn" S0flows | sort], [0], [dnl
> +  table=13(ls_in_lb_aff_learn ), priority=0    , match=(1), action=(next;)
> +  table=13(ls_in_lb_aff_learn ), priority=100  , match=(ct.new && ip4.dst == 
> 10.0.0.2 && tcp.dst == 80), action=(commit_lb_aff(vip = "172.16.0.10:80", 
> backend = "10.0.0.2:80", proto = tcp, timeout = 60); next;)
> +  table=13(ls_in_lb_aff_learn ), priority=100  , match=(ct.new && ip4.dst == 
> 20.0.0.2 && tcp.dst == 80), action=(commit_lb_aff(vip = "172.16.0.10:80", 
> backend = "20.0.0.2:80", proto = tcp, timeout = 60); next;)
> +])
> +
> +AT_CHECK([grep "lr_in_lb_aff_check" R1flows | sort], [0], [dnl
> +  table=6 (lr_in_lb_aff_check ), priority=0    , match=(1), action=(next;)
> +  table=6 (lr_in_lb_aff_check ), priority=100  , match=(ct.new && ip4 && 
> reg0 == 172.16.0.10 && tcp && reg9[[16..31]] == 80), action=(reg9[[6]] = 
> chk_lb_aff(); next;)
> +])
> +AT_CHECK([grep "lr_in_lb_aff_learn" R1flows | sort], [0], [dnl
> +  table=8 (lr_in_lb_aff_learn ), priority=0    , match=(1), action=(next;)
> +  table=8 (lr_in_lb_aff_learn ), priority=100  , match=(ct.new && ip4.dst == 
> 10.0.0.2 && tcp.dst == 80), action=(commit_lb_aff(vip = "172.16.0.10:80", 
> backend = "10.0.0.2:80", proto = tcp, timeout = 60); next;)
> +  table=8 (lr_in_lb_aff_learn ), priority=100  , match=(ct.new && ip4.dst == 
> 20.0.0.2 && tcp.dst == 80), action=(commit_lb_aff(vip = "172.16.0.10:80", 
> backend = "20.0.0.2:80", proto = tcp, timeout = 60); next;)
>  ])
>  
>  AT_CLEANUP
> diff --git a/tests/ovn.at b/tests/ovn.at
> index f90bd02bb..5cc08e114 100644
> --- a/tests/ovn.at
> +++ b/tests/ovn.at
> @@ -16090,7 +16090,7 @@ ovn-sbctl dump-flows sw0 > sw0-flows
>  AT_CAPTURE_FILE([sw0-flows])
>  
>  AT_CHECK([grep -E 'ls_(in|out)_acl' sw0-flows |grep reject| sed 
> 's/table=../table=??/' | sort], [0], [dnl
> -  table=??(ls_out_acl         ), priority=2002 , match=(ip), action=(reg0 = 
> 0; reject { /* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ 
> outport <-> inport; next(pipeline=ingress,table=23); };)
> +  table=??(ls_out_acl         ), priority=2002 , match=(ip), action=(reg0 = 
> 0; reject { /* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ 
> outport <-> inport; next(pipeline=ingress,table=25); };)
>  ])
>  
>  
> @@ -18684,7 +18684,7 @@ wait_for_ports_up ls1-lp_ext1
>  # There should be a flow in hv2 to drop traffic from ls1-lp_ext1 destined
>  # to router mac.
>  AT_CHECK([as hv2 ovs-ofctl dump-flows br-int \
> -table=30,dl_src=f0:00:00:00:00:03,dl_dst=a0:10:00:00:00:01 | \
> +table=32,dl_src=f0:00:00:00:00:03,dl_dst=a0:10:00:00:00:01 | \
>  grep -c "actions=drop"], [0], [1
>  ])
>  # Stop ovn-controllers on hv1 and hv3.
> @@ -20355,7 +20355,7 @@ check_row_count Port_Binding 1 logical_port=sw0-vir 
> virtual_parent=sw0-p1
>  wait_for_ports_up sw0-vir
>  check ovn-nbctl --wait=hv sync
>  AT_CHECK([test 2 = `cat hv1/ovn-controller.log | grep "pinctrl received  
> packet-in" | \
> -grep opcode=BIND_VPORT | grep OF_Table_ID=25 | wc -l`])
> +grep opcode=BIND_VPORT | grep OF_Table_ID=27 | wc -l`])
>  
>  wait_row_count Port_Binding 1 logical_port=sw0-vir6 chassis=$hv1_ch_uuid
>  check_row_count Port_Binding 1 logical_port=sw0-vir6 virtual_parent=sw0-p1
> @@ -20404,7 +20404,7 @@ eth_dst=00000000ff01
>  ip_src=$(ip_to_hex 10 0 0 10)
>  ip_dst=$(ip_to_hex 172 168 0 101)
>  send_icmp_packet 1 1 $eth_src $eth_dst $ip_src $ip_dst c4c9 
> 0000000000000000000000
> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int metadata=0x$lr0_dp_key | awk 
> '/table=26, n_packets=1, n_bytes=45/{print $7" "$8}'],[0],[dnl
> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int metadata=0x$lr0_dp_key | awk 
> '/table=28, n_packets=1, n_bytes=45/{print $7" "$8}'],[0],[dnl
>  priority=80,ip,reg15=0x3,metadata=0x3,nw_src=10.0.0.10 actions=drop
>  ])
>  
> @@ -26401,7 +26401,7 @@ ovn-sbctl dump-flows > sbflows
>  AT_CAPTURE_FILE([sbflows])
>  AT_CAPTURE_FILE([offlows])
>  OVS_WAIT_UNTIL([
> -    as hv1 ovs-ofctl dump-flows br-int table=21 > offlows
> +    as hv1 ovs-ofctl dump-flows br-int table=23 > offlows
>      test $(grep -c "load:0x64->NXM_NX_PKT_MARK" offlows) = 1 && \
>      test $(grep -c "load:0x3->NXM_NX_PKT_MARK" offlows) = 1 && \
>      test $(grep -c "load:0x4->NXM_NX_PKT_MARK" offlows) = 1 && \
> @@ -26494,12 +26494,12 @@ send_ipv4_pkt hv1 hv1-vif1 505400000003 
> 00000000ff01 \
>      $(ip_to_hex 10 0 0 3) $(ip_to_hex 172 168 0 120)
>  
>  OVS_WAIT_UNTIL([
> -    test 1 -eq $(as hv1 ovs-ofctl dump-flows br-int table=21 | \
> +    test 1 -eq $(as hv1 ovs-ofctl dump-flows br-int table=23 | \
>      grep "load:0x2->NXM_NX_PKT_MARK" -c)
>  ])
>  
>  AT_CHECK([
> -    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=21 | \
> +    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=23 | \
>      grep "load:0x64->NXM_NX_PKT_MARK" -c)
>  ])
>  
> @@ -27191,23 +27191,23 @@ check ovn-nbctl --wait=hv sync
>  
>  # Ensure ECMP symmetric reply flows are not present on any hypervisor.
>  AT_CHECK([
> -    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=15 | \
> +    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=17 | \
>      grep "priority=100" | \
>      grep 
> "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[80..95\\]]))"
>  -c)
>  ])
>  AT_CHECK([
> -    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=21 | \
> +    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=25 | \
>      grep "priority=200" | \
>      grep 
> "actions=move:NXM_NX_CT_LABEL\\[[32..79\\]]->NXM_OF_ETH_DST\\[[\\]]" -c)
>  ])
>  
>  AT_CHECK([
> -    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=15 | \
> +    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=17 | \
>      grep "priority=100" | \
>      grep 
> "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[80..95\\]]))"
>  -c)
>  ])
>  AT_CHECK([
> -    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=21 | \
> +    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=25 | \
>      grep "priority=200" | \
>      grep 
> "actions=move:NXM_NX_CT_LABEL\\[[32..79\\]]->NXM_OF_ETH_DST\\[[\\]]" -c)
>  ])
> @@ -27225,11 +27225,11 @@ AT_CAPTURE_FILE([hv2flows])
>  
>  AT_CHECK([
>      for hv in 1 2; do
> -        grep table=15 hv${hv}flows | \
> +        grep table=17 hv${hv}flows | \
>          grep "priority=100" | \
>          grep -c 
> "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]]))"
>  
> -        grep table=23 hv${hv}flows | \
> +        grep table=25 hv${hv}flows | \
>          grep "priority=200" | \
>          grep -c 
> "move:NXM_NX_CT_LABEL\\[[\\]]->NXM_NX_XXREG1\\[[\\]],move:NXM_NX_XXREG1\\[[32..79\\]]->NXM_OF_ETH_DST"
>      done; :], [0], [dnl
> @@ -27317,23 +27317,23 @@ check ovn-nbctl --wait=hv sync
>  
>  # Ensure ECMP symmetric reply flows are not present on any hypervisor.
>  AT_CHECK([
> -    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=15 | \
> +    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=17 | \
>      grep "priority=100" | \
>      grep 
> "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_LABEL\\[[80..95\\]]))"
>  -c)
>  ])
>  AT_CHECK([
> -    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=21 | \
> +    test 0 -eq $(as hv1 ovs-ofctl dump-flows br-int table=25 | \
>      grep "priority=200" | \
>      grep 
> "actions=move:NXM_NX_CT_LABEL\\[[32..79\\]]->NXM_OF_ETH_DST\\[[\\]]" -c)
>  ])
>  
>  AT_CHECK([
> -    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=15 | \
> +    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=17 | \
>      grep "priority=100" | \
>      grep 
> "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]]))"
>  -c)
>  ])
>  AT_CHECK([
> -    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=21 | \
> +    test 0 -eq $(as hv2 ovs-ofctl dump-flows br-int table=25 | \
>      grep "priority=200" | \
>      grep "actions=move:NXM_NX_CT_LABEL\\[[\\]]->NXM_OF_ETH_DST\\[[\\]]" -c)
>  ])
> @@ -27350,11 +27350,11 @@ AT_CAPTURE_FILE([hv2flows])
>  
>  AT_CHECK([
>      for hv in 1 2; do
> -        grep table=15 hv${hv}flows | \
> +        grep table=17 hv${hv}flows | \
>          grep "priority=100" | \
>          grep -c 
> "ct(commit,zone=NXM_NX_REG11\\[[0..15\\]],.*exec(move:NXM_OF_ETH_SRC\\[[\\]]->NXM_NX_CT_LABEL\\[[32..79\\]],load:0x[[0-9]]->NXM_NX_CT_MARK\\[[16..31\\]]))"
>  
> -        grep table=23 hv${hv}flows | \
> +        grep table=25 hv${hv}flows | \
>          grep "priority=200" | \
>          grep -c 
> "move:NXM_NX_CT_LABEL\\[[\\]]->NXM_NX_XXREG1\\[[\\]],move:NXM_NX_XXREG1\\[[32..79\\]]->NXM_OF_ETH_DST"
>      done; :], [0], [dnl
> @@ -27818,7 +27818,7 @@ AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | grep 
> "actions=controller" | grep
>  ])
>  
>  # The packet should've been dropped in the lr_in_arp_resolve stage.
> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | grep -E "table=23, 
> n_packets=1,.* priority=1,ip,metadata=0x${sw_key},nw_dst=10.0.1.1 
> actions=drop" -c], [0], [dnl
> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int | grep -E "table=25, 
> n_packets=1,.* priority=1,ip,metadata=0x${sw_key},nw_dst=10.0.1.1 
> actions=drop" -c], [0], [dnl
>  1
>  ])
>  
> @@ -31331,15 +31331,15 @@ done
>  check ovn-nbctl --wait=hv sync
>  
>  # hv0 should see flows for lsp1 but not lsp2
> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=25 | grep 10.0.1.2], [0], 
> [ignore])
> -AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=25 | grep 10.0.2.2], [1])
> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=27 | grep 10.0.1.2], [0], 
> [ignore])
> +AT_CHECK([as hv1 ovs-ofctl dump-flows br-int table=27 | grep 10.0.2.2], [1])
>  # hv2 should see flows for lsp2 but not lsp1
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=25 | grep 10.0.2.2], [0], 
> [ignore])
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=25 | grep 10.0.1.2], [1])
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=27 | grep 10.0.2.2], [0], 
> [ignore])
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=27 | grep 10.0.1.2], [1])
>  
>  # Change lrp_lr_ls1 to a regular lrp, hv2 should see flows for lsp1
>  check ovn-nbctl --wait=hv lrp-del-gateway-chassis lrp_lr_ls1 hv1
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=25 | grep 10.0.1.2], [0], 
> [ignore])
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=27 | grep 10.0.1.2], [0], 
> [ignore])
>  
>  # Change it back, and trigger recompute to make sure extra flows are removed
>  # from hv2 (recompute is needed because currently I-P adds local datapaths 
> but
> @@ -31347,11 +31347,11 @@ AT_CHECK([as hv2 ovs-ofctl dump-flows br-int 
> table=25 | grep 10.0.1.2], [0], [ig
>  check ovn-nbctl --wait=hv lrp-set-gateway-chassis lrp_lr_ls1 hv1 1
>  as hv2 check ovn-appctl -t ovn-controller recompute
>  ovn-nbctl --wait=hv sync
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=25 | grep 10.0.1.2], [1])
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=27 | grep 10.0.1.2], [1])
>  
>  # Enable dnat_and_snat on lr, and now hv2 should see flows for lsp1.
>  AT_CHECK([ovn-nbctl --wait=hv --gateway-port=lrp_lr_ls1 lr-nat-add lr 
> dnat_and_snat 192.168.0.1 10.0.1.3 lsp1 f0:00:00:00:00:03])
> -AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=25 | grep 10.0.1.2], [0], 
> [ignore])
> +AT_CHECK([as hv2 ovs-ofctl dump-flows br-int table=27 | grep 10.0.1.2], [0], 
> [ignore])
>  
>  OVN_CLEANUP([hv1],[hv2])
>  AT_CLEANUP
> diff --git a/tests/system-ovn.at b/tests/system-ovn.at
> index 20c058415..b0273331d 100644
> --- a/tests/system-ovn.at
> +++ b/tests/system-ovn.at
> @@ -8323,7 +8323,7 @@ ovn-sbctl list ip_multicast
>  
>  wait_igmp_flows_installed()
>  {
> -    OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int table=31 | \
> +    OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int table=33 | \
>      grep 'priority=90' | grep "nw_dst=$1"])
>  }
>  
> @@ -8477,7 +8477,470 @@ OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
>  as
>  OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
>  /connection dropped.*/d"])
> +AT_CLEANUP
> +])
> +
> +OVN_FOR_EACH_NORTHD([
> +AT_SETUP([load balancing affinity sessions - IPv4])
> +AT_KEYWORDS([ovnlb])
> +
> +CHECK_CONNTRACK()
> +CHECK_CONNTRACK_NAT()
> +ovn_start
> +OVS_TRAFFIC_VSWITCHD_START()
> +ADD_BR([br-int])
> +
> +# Set external-ids in br-int needed for ovn-controller
> +ovs-vsctl \
> +        -- set Open_vSwitch . external-ids:system-id=hv1 \
> +        -- set Open_vSwitch . 
> external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
> +        -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
> +        -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
> +        -- set bridge br-int fail-mode=secure 
> other-config:disable-in-band=true
> +
> +# Start ovn-controller
> +start_daemon ovn-controller
> +
> +# Logical network:
> +# Two LRs - R1 and R2 that are connected to each other via LS "join"
> +# in 20.0.0.0/24 network. R1 has switches foo (192.168.1.0/24) and
> +# bar (192.168.2.0/24) connected to it. R2 has alice (172.16.1.0/24) 
> connected
> +# to it.  R2 is a gateway router on which we add load-balancing rules.
> +#
> +#    foo -- R1 -- join - R2 -- alice
> +#           |
> +#    bar ----
> +
> +ovn-nbctl create Logical_Router name=R1
> +ovn-nbctl create Logical_Router name=R2 options:chassis=hv1
> +
> +ovn-nbctl ls-add foo
> +ovn-nbctl ls-add bar
> +ovn-nbctl ls-add alice
> +ovn-nbctl ls-add join
> +
> +# Connect foo to R1
> +ovn-nbctl lrp-add R1 foo 00:00:01:01:02:03 192.168.1.1/24
> +ovn-nbctl lsp-add foo rp-foo -- set Logical_Switch_Port rp-foo \
> +    type=router options:router-port=foo addresses=\"00:00:01:01:02:03\"
> +
> +# Connect bar to R1
> +ovn-nbctl lrp-add R1 bar 00:00:01:01:02:04 192.168.2.1/24
> +ovn-nbctl lsp-add bar rp-bar -- set Logical_Switch_Port rp-bar \
> +    type=router options:router-port=bar addresses=\"00:00:01:01:02:04\"
> +
> +# Connect alice to R2
> +ovn-nbctl lrp-add R2 alice 00:00:02:01:02:03 172.16.1.1/24
> +ovn-nbctl lsp-add alice rp-alice -- set Logical_Switch_Port rp-alice \
> +    type=router options:router-port=alice addresses=\"00:00:02:01:02:03\"
> +
> +# Connect R1 to join
> +ovn-nbctl lrp-add R1 R1_join 00:00:04:01:02:03 20.0.0.1/24
> +ovn-nbctl lsp-add join r1-join -- set Logical_Switch_Port r1-join \
> +    type=router options:router-port=R1_join addresses='"00:00:04:01:02:03"'
> +
> +# Connect R2 to join
> +ovn-nbctl lrp-add R2 R2_join 00:00:04:01:02:04 20.0.0.2/24
> +ovn-nbctl lsp-add join r2-join -- set Logical_Switch_Port r2-join \
> +    type=router options:router-port=R2_join addresses='"00:00:04:01:02:04"'
> +
> +# Static routes.
> +ovn-nbctl lr-route-add R1 172.16.1.0/24 20.0.0.2
> +ovn-nbctl lr-route-add R2 192.168.0.0/16 20.0.0.1
> +
> +# Logical port 'foo1' in switch 'foo'.
> +ADD_NAMESPACES(foo1)
> +ADD_VETH(foo1, foo1, br-int, "192.168.1.2/24", "f0:00:00:01:02:03", \
> +         "192.168.1.1")
> +ovn-nbctl lsp-add foo foo1 \
> +-- lsp-set-addresses foo1 "f0:00:00:01:02:03 192.168.1.2"
> +
> +# Logical port 'alice1' in switch 'alice'.
> +ADD_NAMESPACES(alice1)
> +ADD_VETH(alice1, alice1, br-int, "172.16.1.2/24", "f0:00:00:01:02:04", \
> +         "172.16.1.1")
> +ovn-nbctl lsp-add alice alice1 \
> +-- lsp-set-addresses alice1 "f0:00:00:01:02:04 172.16.1.2"
> +
> +# Logical port 'bar1' in switch 'bar'.
> +ADD_NAMESPACES(bar1)
> +ADD_VETH(bar1, bar1, br-int, "192.168.2.2/24", "f0:00:00:01:02:05", \
> +"192.168.2.1")
> +ovn-nbctl lsp-add bar bar1 \
> +-- lsp-set-addresses bar1 "f0:00:00:01:02:05 192.168.2.2"
> +
> +ADD_NAMESPACES(bar2)
> +ADD_VETH(bar2, bar2, br-int, "192.168.2.3/24", "e0:00:00:01:02:05", \
> +"192.168.2.1")
> +ovn-nbctl lsp-add bar bar2 \
> +-- lsp-set-addresses bar2 "e0:00:00:01:02:05 192.168.2.3"
> +
> +# Config OVN load-balancer with a VIP.
> +
> +ovn-nbctl lb-add lb0 172.16.1.100:8080 192.168.1.2:80,192.168.2.2:80
> +ovn-nbctl lr-lb-add R2 lb0
> +
> +# Start webservers in 'foo1', 'bar1'.
> +OVS_START_L7([foo1], [http])
> +OVS_START_L7([bar1], [http])
> +
> +# Wait for ovn-controller to catch up.
> +ovn-nbctl --wait=hv sync
> +OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-groups br-int | \
> +grep 'nat(dst=192.168.2.2:80)'])
> +
> +dnl Should work with the virtual IP address through NAT
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([alice1], [wget 172.16.1.100:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +done
> +
> +dnl Each server should have at least one connection.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.100) |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +tcp,orig=(src=172.16.1.2,dst=172.16.1.100,sport=<cleared>,dport=<cleared>),reply=(src=192.168.1.2,dst=172.16.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +tcp,orig=(src=172.16.1.2,dst=172.16.1.100,sport=<cleared>,dport=<cleared>),reply=(src=192.168.2.2,dst=172.16.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
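Side note for readers of these checks: the sed expression above only blanks the kernel-assigned conntrack zone so the expected output stays stable across runs. A standalone sketch of the same substitution (the sample entry is made up, not real dpctl/dump-conntrack output; inside the .at file the brackets are doubled to [[0-9]] for m4 quoting):

```shell
# Demo of the zone-normalization used by the conntrack checks.
# The sample entry below is illustrative only.
sample='tcp,orig=(src=172.16.1.2,dst=172.16.1.100,sport=45678,dport=8080),zone=37,mark=2'
printf '%s\n' "$sample" | sed -e 's/zone=[0-9]*/zone=<cleared>/'
# prints the same line with zone=<cleared> in place of zone=37
```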
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +# Enable lb affinity
> +ovn-nbctl --wait=sb set load_balancer lb0 options:affinity_timeout=60
> +
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([alice1], [wget 172.16.1.100:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    sleep 1
> +done
> +
> +dnl Here we should have just one entry in the conntrack table.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.100) |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/; s/src=192.168.[[0-9]].2/src=192.168.<cleared>.2/'], [0], [dnl
> +tcp,orig=(src=172.16.1.2,dst=172.16.1.100,sport=<cleared>,dport=<cleared>),reply=(src=192.168.<cleared>.2,dst=172.16.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +AT_CHECK([ovs-ofctl dump-flows br-int table=78 |grep cookie |sed -e 's/duration=[[0-9]]*.[[0-9]]*s/duration=<cleared>/; s/load:0xc0a80[[0-9]]02/load:0xc0a80<cleared>02/; s/n_packets=[[0-9]]*/n_packets=<cleared>/; s/n_bytes=[[0-9]]*/n_bytes=<cleared>/; s/idle_age=[[0-9]]*/idle_age=<cleared>/; s/hard_age=[[0-9]]*, //'], [0], [dnl
> + cookie=0x0, duration=<cleared>, table=78, n_packets=<cleared>, n_bytes=<cleared>, idle_timeout=60, idle_age=<cleared>, tcp,metadata=0x2,nw_src=172.16.1.2,nw_dst=172.16.1.100,tp_dst=8080 actions=load:0x1->NXM_NX_REG10[[14]],load:0xc0a80<cleared>02->NXM_NX_REG4[[]],load:0x50->NXM_NX_REG8[[0..15]]
> +])
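For anyone decoding the expected flow above: the value loaded into NXM_NX_REG4 is simply the backend IPv4 address as a 32-bit hex word (0xc0a80102 is 192.168.1.2, 0xc0a80202 is 192.168.2.2), which is why the sed clears the third octet. A small helper to see that (hex_to_ip is my name for illustration, not part of the test suite):

```shell
# hex_to_ip: hypothetical helper, converts a 32-bit hex word (as loaded
# into NXM_NX_REG4 by the learned affinity flow) to dotted-quad notation.
hex_to_ip () {
    v=$(( $1 ))
    printf '%d.%d.%d.%d\n' $(( v >> 24 & 255 )) $(( v >> 16 & 255 )) \
                           $(( v >> 8 & 255 )) $(( v & 255 ))
}
hex_to_ip 0xc0a80102   # prints 192.168.1.2
hex_to_ip 0xc0a80202   # prints 192.168.2.2
```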
> +
> +check_affinity_flows () {
> +n1=$(ovs-ofctl dump-flows br-int table=15 |awk '/priority=150,ip,reg4=0xc0a80102/{print substr($4,11,length($4)-11)}')
> +n2=$(ovs-ofctl dump-flows br-int table=15 |awk '/priority=150,ip,reg4=0xc0a80202/{print substr($4,11,length($4)-11)}')
> +[[ $n1 -gt 0 -a $n2 -eq 0 ]] || [[ $n1 -eq 0 -a $n2 -gt 0 ]]
> +echo $?
> +}
> +AT_CHECK([test $(check_affinity_flows) -eq 0])
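The substr() arithmetic in check_affinity_flows is a bit opaque: field $4 of a dump-flows line is "n_packets=N,", so substr($4,11,length($4)-11) drops the 10-character "n_packets=" prefix and the trailing comma. A standalone sketch (the flow line is fabricated for illustration but follows the ovs-ofctl dump-flows field layout):

```shell
# Demo of the n_packets extraction used by check_affinity_flows above.
# The flow line is made up; only the field layout matters.
flow=' cookie=0x0, duration=12.345s, table=15, n_packets=7, n_bytes=518, priority=150,ip,reg4=0xc0a80102 actions=ct(commit)'
printf '%s\n' "$flow" | \
    awk '/priority=150,ip,reg4=0xc0a80102/{print substr($4,11,length($4)-11)}'
# prints 7
```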
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +ovn-nbctl lb-add lb1 172.16.1.101:8080 192.168.1.2:80,192.168.2.2:80
> +# Enable lb affinity
> +ovn-nbctl --wait=sb set load_balancer lb1 options:affinity_timeout=1
> +ovn-nbctl lr-lb-add R2 lb1
> +
> +# Check that we use both backends.
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([alice1], [wget 172.16.1.101:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    OVS_WAIT_UNTIL([test "$(ovs-ofctl dump-flows br-int table=78 |grep 172.16.1.101)" = ""])
> +done
> +
> +dnl Each server should have at least one connection.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(172.16.1.101) |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +tcp,orig=(src=172.16.1.2,dst=172.16.1.101,sport=<cleared>,dport=<cleared>),reply=(src=192.168.1.2,dst=172.16.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +tcp,orig=(src=172.16.1.2,dst=172.16.1.101,sport=<cleared>,dport=<cleared>),reply=(src=192.168.2.2,dst=172.16.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +OVS_START_L7([bar2], [http])
> +
> +ovn-nbctl lb-add lb2 192.168.2.100:8080 192.168.2.2:80,192.168.2.3:80
> +ovn-nbctl --wait=sb set load_balancer lb2 options:affinity_timeout=60
> +ovn-nbctl ls-lb-add foo lb2
> +
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([foo1], [wget 192.168.2.100:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    sleep 1
> +done
> +
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(192.168.2.100) |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/; s/src=192.168.2.[[0-9]]/src=192.168.2.<cleared>/'], [0], [dnl
> +tcp,orig=(src=192.168.1.2,dst=192.168.2.100,sport=<cleared>,dport=<cleared>),reply=(src=192.168.2.<cleared>,dst=192.168.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +ovn-nbctl lb-add lb3 192.168.2.101:8080 192.168.2.2:80,192.168.2.3:80
> +ovn-nbctl --wait=sb set load_balancer lb3 options:affinity_timeout=1
> +ovn-nbctl ls-lb-add foo lb3
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([foo1], [wget 192.168.2.101:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    OVS_WAIT_UNTIL([test "$(ovs-ofctl dump-flows br-int table=78 |grep 192.168.2.101)" = ""])
> +done
> +
> +dnl Each server should have at least one connection.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(192.168.2.101) |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +tcp,orig=(src=192.168.1.2,dst=192.168.2.101,sport=<cleared>,dport=<cleared>),reply=(src=192.168.2.2,dst=192.168.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +tcp,orig=(src=192.168.1.2,dst=192.168.2.101,sport=<cleared>,dport=<cleared>),reply=(src=192.168.2.3,dst=192.168.1.2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +OVS_APP_EXIT_AND_WAIT([ovn-controller])
> +
> +as ovn-sb
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as ovn-nb
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as northd
> +OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +
> +as
> +OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d
> +/connection dropped.*/d"])
> +AT_CLEANUP
> +])
> +
> +OVN_FOR_EACH_NORTHD([
> +AT_SETUP([load balancing affinity sessions - IPv6])
> +AT_KEYWORDS([ovnlb])
> +
> +CHECK_CONNTRACK()
> +CHECK_CONNTRACK_NAT()
> +ovn_start
> +OVS_TRAFFIC_VSWITCHD_START()
> +ADD_BR([br-int])
> +
> +# Set external-ids in br-int needed for ovn-controller
> +ovs-vsctl \
> +        -- set Open_vSwitch . external-ids:system-id=hv1 \
> +        -- set Open_vSwitch . external-ids:ovn-remote=unix:$ovs_base/ovn-sb/ovn-sb.sock \
> +        -- set Open_vSwitch . external-ids:ovn-encap-type=geneve \
> +        -- set Open_vSwitch . external-ids:ovn-encap-ip=169.0.0.1 \
> +        -- set bridge br-int fail-mode=secure other-config:disable-in-band=true
> +
> +# Start ovn-controller
> +start_daemon ovn-controller
> +
> +# Logical network:
> +# Two LRs - R1 and R2 that are connected to each other via LS "join"
> +# in fd20::/64 network. R1 has switches foo (fd11::/64) and
> +# bar (fd12::/64) connected to it. R2 has alice (fd72::/64) connected
> +# to it.  R2 is a gateway router on which we add load-balancing rules.
> +#
> +#    foo -- R1 -- join - R2 -- alice
> +#           |
> +#    bar ----
> +
> +ovn-nbctl create Logical_Router name=R1
> +ovn-nbctl create Logical_Router name=R2 options:chassis=hv1
> +
> +ovn-nbctl ls-add foo
> +ovn-nbctl ls-add bar
> +ovn-nbctl ls-add alice
> +ovn-nbctl ls-add join
> +
> +# Connect foo to R1
> +ovn-nbctl lrp-add R1 foo 00:00:01:01:02:03 fd11::1/64
> +ovn-nbctl lsp-add foo rp-foo -- set Logical_Switch_Port rp-foo \
> +    type=router options:router-port=foo addresses=\"00:00:01:01:02:03\"
> +
> +# Connect bar to R1
> +ovn-nbctl lrp-add R1 bar 00:00:01:01:02:04 fd12::1/64
> +ovn-nbctl lsp-add bar rp-bar -- set Logical_Switch_Port rp-bar \
> +    type=router options:router-port=bar addresses=\"00:00:01:01:02:04\"
> +
> +# Connect alice to R2
> +ovn-nbctl lrp-add R2 alice 00:00:02:01:02:03 fd72::1/64
> +ovn-nbctl lsp-add alice rp-alice -- set Logical_Switch_Port rp-alice \
> +    type=router options:router-port=alice addresses=\"00:00:02:01:02:03\"
> +
> +# Connect R1 to join
> +ovn-nbctl lrp-add R1 R1_join 00:00:04:01:02:03 fd20::1/64
> +ovn-nbctl lsp-add join r1-join -- set Logical_Switch_Port r1-join \
> +    type=router options:router-port=R1_join addresses='"00:00:04:01:02:03"'
> +
> +# Connect R2 to join
> +ovn-nbctl lrp-add R2 R2_join 00:00:04:01:02:04 fd20::2/64
> +ovn-nbctl lsp-add join r2-join -- set Logical_Switch_Port r2-join \
> +    type=router options:router-port=R2_join addresses='"00:00:04:01:02:04"'
> +
> +# Static routes.
> +ovn-nbctl lr-route-add R1 fd72::/64 fd20::2
> +ovn-nbctl lr-route-add R2 fd11::/64 fd20::1
> +ovn-nbctl lr-route-add R2 fd12::/64 fd20::1
> +
> +# Logical port 'foo1' in switch 'foo'.
> +ADD_NAMESPACES(foo1)
> +ADD_VETH(foo1, foo1, br-int, "fd11::2/64", "f0:00:00:01:02:03", \
> +         "fd11::1")
> +OVS_WAIT_UNTIL([test "$(ip -n foo1 a | grep fd11::2 | grep tentative)" = ""])
> +ovn-nbctl lsp-add foo foo1 \
> +-- lsp-set-addresses foo1 "f0:00:00:01:02:03 fd11::2"
> +
> +# Logical port 'alice1' in switch 'alice'.
> +ADD_NAMESPACES(alice1)
> +ADD_VETH(alice1, alice1, br-int, "fd72::2/64", "f0:00:00:01:02:04", \
> +         "fd72::1")
> +OVS_WAIT_UNTIL([test "$(ip -n alice1 a | grep fd72::2 | grep tentative)" = ""])
> +ovn-nbctl lsp-add alice alice1 \
> +-- lsp-set-addresses alice1 "f0:00:00:01:02:04 fd72::2"
> +
> +# Logical port 'bar1' in switch 'bar'.
> +ADD_NAMESPACES(bar1)
> +ADD_VETH(bar1, bar1, br-int, "fd12::2/64", "f0:00:00:01:02:05", \
> +"fd12::1")
> +OVS_WAIT_UNTIL([test "$(ip -n bar1 a | grep fd12::2 | grep tentative)" = ""])
> +ovn-nbctl lsp-add bar bar1 \
> +-- lsp-set-addresses bar1 "f0:00:00:01:02:05 fd12::2"
> +
> +ADD_NAMESPACES(bar2)
> +ADD_VETH(bar2, bar2, br-int, "fd12::3/64", "e0:00:00:01:02:05", \
> +"fd12::1")
> +OVS_WAIT_UNTIL([test "$(ip -n bar2 a | grep fd12::3 | grep tentative)" = ""])
> +ovn-nbctl lsp-add bar bar2 \
> +-- lsp-set-addresses bar2 "e0:00:00:01:02:05 fd12::3"
> +
> +ovn-nbctl lb-add lb0 [[fd30::1]]:8080 [[fd11::2]]:80,[[fd12::2]]:80
> +ovn-nbctl lr-lb-add R2 lb0
> +
> +
> +# Wait for ovn-controller to catch up.
> +ovn-nbctl --wait=hv sync
> +OVS_WAIT_UNTIL([ovs-ofctl -O OpenFlow13 dump-groups br-int | \
> +grep 'nat(dst=\[[fd11::2\]]:80)'])
>  
> +# Start webservers in 'foo1', 'bar1'.
> +OVS_START_L7([foo1], [http6])
> +OVS_START_L7([bar1], [http6])
> +
> +dnl Should work with the virtual IP address through NAT
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([alice1], [wget http://[[fd30::1]]:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +done
> +
> +dnl Each server should have at least one connection.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd30::1) | grep -v fe80 |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +tcp,orig=(src=fd72::2,dst=fd30::1,sport=<cleared>,dport=<cleared>),reply=(src=fd11::2,dst=fd72::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +tcp,orig=(src=fd72::2,dst=fd30::1,sport=<cleared>,dport=<cleared>),reply=(src=fd12::2,dst=fd72::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +# Enable lb affinity
> +ovn-nbctl --wait=sb set load_balancer lb0 options:affinity_timeout=60
> +
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([alice1], [wget http://[[fd30::1]]:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    sleep 1
> +done
> +
> +dnl Here we should have just one entry in the conntrack table.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd30::1) | grep -v fe80 |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/; s/src=fd1[[0-9]]::2/src=fd1<cleared>::2/'], [0], [dnl
> +tcp,orig=(src=fd72::2,dst=fd30::1,sport=<cleared>,dport=<cleared>),reply=(src=fd1<cleared>::2,dst=fd72::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +AT_CHECK([ovs-ofctl dump-flows br-int table=78 |grep cookie |sed -e 's/duration=[[0-9]]*.[[0-9]]*s/duration=<cleared>/; s/load:0xfd1[[0-9]]000000000000/load:0xfd1<cleared>000000000000/; s/n_packets=[[0-9]]*/n_packets=<cleared>/; s/n_bytes=[[0-9]]*/n_bytes=<cleared>/; s/idle_age=[[0-9]]*/idle_age=<cleared>/; s/hard_age=[[0-9]]*, //'], [0], [dnl
> + cookie=0x0, duration=<cleared>, table=78, n_packets=<cleared>, n_bytes=<cleared>, idle_timeout=60, idle_age=<cleared>, tcp6,metadata=0x2,ipv6_src=fd72::2,ipv6_dst=fd30::1,tp_dst=8080 actions=load:0x1->NXM_NX_REG10[[14]],load:0x2->NXM_NX_XXREG1[[0..63]],load:0xfd1<cleared>000000000000->NXM_NX_XXREG1[[64..127]],load:0x50->NXM_NX_REG8[[0..15]]
> +])
> +
> +check_affinity_flows () {
> +n1=$(ovs-ofctl dump-flows br-int table=15 |awk '/priority=150,ipv6,reg4=0xfd110000/{print substr($4,11,length($4)-11)}')
> +n2=$(ovs-ofctl dump-flows br-int table=15 |awk '/priority=150,ipv6,reg4=0xfd120000/{print substr($4,11,length($4)-11)}')
> +[[ $n1 -gt 0 -a $n2 -eq 0 ]] || [[ $n1 -eq 0 -a $n2 -gt 0 ]]
> +echo $?
> +}
> +AT_CHECK([test $(check_affinity_flows) -eq 0])
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +ovn-nbctl lb-add lb1 [[fd30::2]]:8080 [[fd11::2]]:80,[[fd12::2]]:80
> +# Enable lb affinity
> +ovn-nbctl --wait=sb set load_balancer lb1 options:affinity_timeout=1
> +ovn-nbctl lr-lb-add R2 lb1
> +
> +# Check that we use both backends.
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([alice1], [wget http://[[fd30::2]]:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    OVS_WAIT_UNTIL([test "$(ovs-ofctl dump-flows br-int table=78 |grep fd30::2)" = ""])
> +done
> +
> +dnl Each server should have at least one connection.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd30::2) | grep -v fe80 |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +tcp,orig=(src=fd72::2,dst=fd30::2,sport=<cleared>,dport=<cleared>),reply=(src=fd11::2,dst=fd72::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +tcp,orig=(src=fd72::2,dst=fd30::2,sport=<cleared>,dport=<cleared>),reply=(src=fd12::2,dst=fd72::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +# Flush conntrack entries for easier output parsing of next test.
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +OVS_START_L7([bar2], [http6])
> +
> +ovn-nbctl lb-add lb2 [[fd12::a]]:8080 [[fd12::2]]:80,[[fd12::3]]:80
> +ovn-nbctl --wait=sb set load_balancer lb2 options:affinity_timeout=60
> +ovn-nbctl ls-lb-add foo lb2
> +
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([foo1], [wget http://[[fd12::a]]:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    sleep 1
> +done
> +
> +dnl Here we should have just one entry in the conntrack table.
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd12::a) | grep -v fe80 |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/; s/src=fd12::[[0-9]]/src=fd12::<cleared>/'], [0], [dnl
> +tcp,orig=(src=fd11::2,dst=fd12::a,sport=<cleared>,dport=<cleared>),reply=(src=fd12::<cleared>,dst=fd11::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +AT_CHECK([ovs-appctl dpctl/flush-conntrack])
> +
> +ovn-nbctl lb-add lb3 [[fd12::b]]:8080 [[fd12::2]]:80,[[fd12::3]]:80
> +ovn-nbctl --wait=sb set load_balancer lb3 options:affinity_timeout=1
> +ovn-nbctl ls-lb-add foo lb3
> +
> +for i in $(seq 1 20); do
> +    echo Request $i
> +    NS_CHECK_EXEC([foo1], [wget http://[[fd12::b]]:8080 -t 5 -T 1 --retry-connrefused -v -o wget$i.log])
> +    OVS_WAIT_UNTIL([test "$(ovs-ofctl dump-flows br-int table=78 |grep fd12::b)" = ""])
> +done
> +
> +AT_CHECK([ovs-appctl dpctl/dump-conntrack | FORMAT_CT(fd12::b) | grep -v fe80 |
> +sed -e 's/zone=[[0-9]]*/zone=<cleared>/'], [0], [dnl
> +tcp,orig=(src=fd11::2,dst=fd12::b,sport=<cleared>,dport=<cleared>),reply=(src=fd12::2,dst=fd11::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +tcp,orig=(src=fd11::2,dst=fd12::b,sport=<cleared>,dport=<cleared>),reply=(src=fd12::3,dst=fd11::2,sport=<cleared>,dport=<cleared>),zone=<cleared>,mark=2,protoinfo=(state=<cleared>)
> +])
> +
> +OVS_APP_EXIT_AND_WAIT([ovn-controller])
> +
> +as ovn-sb
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as ovn-nb
> +OVS_APP_EXIT_AND_WAIT([ovsdb-server])
> +
> +as northd
> +OVS_APP_EXIT_AND_WAIT([NORTHD_TYPE])
> +
> +as
> +OVS_TRAFFIC_VSWITCHD_STOP(["/failed to query port patch-.*/d"])
>  AT_CLEANUP
>  ])
>  

