>
>
> ---------- Forwarded message ---------
> From: Li,Rongqing <[email protected]>
> Date: Thu, Nov 28, 2019 at 12:17 AM
> Subject: Re: ipf question
> To: Darrell Ball <[email protected]>
> Cc: ovs dev <[email protected]>
>
>
> It does not work for me.
>
>
>
> Normally it should look like the output below
>
>
>
>
> But when the size is 50000,
>
>
>
>
> 50% of the packets are lost
>
>
>
>
>
>
> Thanks
>
>
>
> -RongQing
>

Hi Li,Rongqing

This is what I see

ping -s 50000 -q -c 20 -i 0.1 -w 2

20 packets transmitted, 17 received, 15% packet loss, time 0ms

Note that the fragments of a given packet do not all arrive into OVS in
sequence; they are interleaved with fragments of other packets. Hence more
than one packet's fragments are still waiting to be sent back out of ipf
when the test ends, so this result is about what I expect.
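
For reference, here is a hypothetical helper (not part of OVS) that does the
fragment arithmetic behind this: assuming a standard 1500-byte MTU and a
20-byte IPv4 header, each 50000-byte ping fragments into 34 on-wire
fragments, i.e. more than one NETDEV_MAX_BURST (32) batch.

#include <stdio.h>

/* Estimate how many IPv4 fragments a ping with the given ICMP payload size
 * produces at the given MTU; non-final fragment data lengths are multiples
 * of 8. */
static unsigned int
frags_for_ping(unsigned int icmp_payload, unsigned int mtu)
{
    unsigned int ip_payload = icmp_payload + 8;  /* 8-byte ICMP header. */
    unsigned int per_frag = (mtu - 20) & ~7u;    /* 1480 for a 1500 MTU. */

    return (ip_payload + per_frag - 1) / per_frag;
}

int
main(void)
{
    printf("%u\n", frags_for_ping(50000, 1500));  /* Prints 34. */
    return 0;
}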

A couple things to watch for in your testing:

1/ Make sure you are not filling up the maximum number of fragments stored
(which is for DOS protection). That might lead to long RTTs until these are
flushed out; otherwise you have something else going on in the src or dest
VM's reassembly. I.e. check

ovs-appctl dpctl/ipf-get-status

and look at these fields:
max num frags (v4/v6):
num frag:

(A rough sketch of this limit is included after item 2/ below.)

2/ Watch for datapath flows being revalidated out due to inactivity and set
up again; packets of a stream are not normally spaced 1 sec apart, so
revalidation may kick in; use a short '-i' (see the ping command above).
New flows will change the recirc id, and we are now doing strict checking
for resumed flows.
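
Here is a minimal sketch of the DOS-protection limit mentioned in item 1/,
using simplified stand-in types rather than the real ipf structures; the
field names mirror the dpctl/ipf-get-status fields above:

#include <stdbool.h>

struct ipf_status_sketch {
    unsigned int nfrag;      /* "num frag": fragments currently held. */
    unsigned int nfrag_max;  /* "max num frags (v4/v6)": configured limit. */
};

/* When the limit is reached, new fragments are refused until held fragments
 * are sent out or expire, which can show up as long RTTs or loss. */
static bool
ipf_accepts_new_frag_sketch(const struct ipf_status_sketch *status)
{
    return status->nfrag < status->nfrag_max;
}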

Thanks Darrell




>
>
> From: Darrell Ball [mailto:[email protected]]
> Sent: November 26, 2019, 10:38
> To: Li,Rongqing <[email protected]>
> Cc: ovs dev <[email protected]>
> Subject: Re: ipf question
>
>
>
> Thanks Li,Rongqing
>
>
>
> On Mon, Nov 18, 2019 at 9:35 PM Li,Rongqing <[email protected]> wrote:
>
> Thanks, Darrell
>
>
>
> I can try to test it
>
>
>
> Thanks! Can you try this 2-patch series?
>
>
>
> dball@ubuntu:~/openvswitch/ovs$ cat outgoing2/0001-dp-packet-Cache-batch-action-list-in-batch.patch
> From 80436fb57a54ecfd532e99086dbb0e5142415070 Mon Sep 17 00:00:00 2001
> From: Darrell Ball <[email protected]>
> Date: Sun, 24 Nov 2019 17:29:07 -0800
> Subject: [patch v1 1/2] dp-packet: Cache batch action list in batch.
> To: [email protected]
>
> Cache the batch action list in the batch itself. This will be
> used in a subsequent patch.
>
> Signed-off-by: Darrell Ball <[email protected]>
> ---
>  lib/dp-packet.h | 27 ++++++++++++++++++++++++++-
>  1 file changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/lib/dp-packet.h b/lib/dp-packet.h
> index 14f0897..77df801 100644
> --- a/lib/dp-packet.h
> +++ b/lib/dp-packet.h
> @@ -775,9 +775,11 @@ enum { NETDEV_MAX_BURST = 32 }; /* Maximum number packets in a batch. */
>
>  struct dp_packet_batch {
>      size_t count;
> +    size_t actions_len;
> +    const struct nlattr *actions;
> +    struct dp_packet *packets[NETDEV_MAX_BURST];
>      bool trunc; /* true if the batch needs truncate. */
>      bool do_not_steal; /* Indicate that the packets should not be stolen. */
> -    struct dp_packet *packets[NETDEV_MAX_BURST];
>  };
>
>  static inline void
> @@ -786,6 +788,8 @@ dp_packet_batch_init(struct dp_packet_batch *batch)
>      batch->count = 0;
>      batch->trunc = false;
>      batch->do_not_steal = false;
> +    batch->actions = NULL;
> +    batch->actions_len = 0;
>  }
>
>  static inline void
> @@ -930,6 +934,27 @@ dp_packet_batch_reset_cutlen(struct dp_packet_batch *batch)
>      }
>  }
>
> +static inline void
> +dp_packet_batch_set_action_ctx(struct dp_packet_batch *batch,
> +                               const struct nlattr *actions,
> +                               size_t actions_len)
> +{
> +    batch->actions = actions;
> +    batch->actions_len = actions_len;
> +}
> +
> +static inline const struct nlattr *
> +dp_packet_batch_get_actions(struct dp_packet_batch *batch)
> +{
> +    return batch->actions;
> +}
> +
> +static inline size_t
> +dp_packet_batch_get_action_len(struct dp_packet_batch *batch)
> +{
> +    return batch->actions_len;
> +}
> +
>  #ifdef  __cplusplus
>  }
>  #endif
> --
> 1.9.1
>
>
>
> dball@ubuntu:~/openvswitch/ovs$ cat outgoing2/0002-ipf-Resume-fragments-in-same-action-list.patch
> From 4aefd4e791f9d7b6ef916e41136cb59e6020bb22 Mon Sep 17 00:00:00 2001
> From: Darrell Ball <[email protected]>
> Date: Sun, 24 Nov 2019 17:33:58 -0800
> Subject: [patch v1 2/2] ipf: Resume fragments in same action list.
> To: [email protected]
>
> Once fragments are reassembled and go through conntrack, the fragments
> need to resume processing in the same action list.  There is a
> semantic requirement that the list have, at most, one conntrack action,
> so this specifies the resume point in the list as well.  A memcmp is
> used to compare pre and post processing lists as padding is zeroed out,
> therefore having predictable values.
>
> Signed-off-by: Darrell Ball <[email protected]>
> ---
>  lib/dpif-netdev.c |  1 +
>  lib/ipf.c         | 49 ++++++++++++++++++++++++++++++++++++-------------
>  2 files changed, 37 insertions(+), 13 deletions(-)
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 5142bad..3ddca5b 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -7363,6 +7363,7 @@ dp_netdev_execute_actions(struct dp_netdev_pmd_thread *pmd,
>  {
>      struct dp_netdev_execute_aux aux = { pmd, flow };
>
> +    dp_packet_batch_set_action_ctx(packets, actions, actions_len);
>      odp_execute_actions(&aux, packets, should_steal, actions,
>                          actions_len, dp_execute_cb);
>  }
> diff --git a/lib/ipf.c b/lib/ipf.c
> index 45c4891..ad82620 100644
> --- a/lib/ipf.c
> +++ b/lib/ipf.c
> @@ -91,6 +91,8 @@ union ipf_addr {
>  /* Represents a single fragment; part of a list of fragments. */
>  struct ipf_frag {
>      struct dp_packet *pkt;
> +    struct nlattr *actions;
> +    size_t actions_len;
>      uint16_t start_data_byte;
>      uint16_t end_data_byte;
>      bool dnsteal; /* 'do not steal': if true, ipf should not free packet. */
> @@ -261,7 +263,12 @@ ipf_list_clean(struct hmap *frag_lists,
>  {
>      ovs_list_remove(&ipf_list->list_node);
>      hmap_remove(frag_lists, &ipf_list->node);
> -    free(ipf_list->frag_list);
> +    struct ipf_frag *frag_list = ipf_list->frag_list;
> +    ovs_assert(frag_list);
> +    for (int i = 0; i <= ipf_list->last_inuse_idx; i++) {
> +        free(frag_list[i].actions);
> +    }
> +    free(frag_list);
>      free(ipf_list);
>  }
>
> @@ -793,7 +800,7 @@ static bool
>  ipf_process_frag(struct ipf *ipf, struct ipf_list *ipf_list,
>                   struct dp_packet *pkt, uint16_t start_data_byte,
>                   uint16_t end_data_byte, bool ff, bool lf, bool v6,
> -                 bool dnsteal)
> +                 struct dp_packet_batch *pb)
>      OVS_REQUIRES(ipf->ipf_lock)
>  {
>      bool duped_frag = ipf_is_frag_duped(ipf_list->frag_list,
> @@ -811,7 +818,11 @@ ipf_process_frag(struct ipf *ipf, struct ipf_list *ipf_list,
>              frag->pkt = pkt;
>              frag->start_data_byte = start_data_byte;
>              frag->end_data_byte = end_data_byte;
> -            frag->dnsteal = dnsteal;
> +            frag->dnsteal = pb->do_not_steal;
> +            frag->actions_len = dp_packet_batch_get_action_len(pb);
> +            ovs_assert(dp_packet_batch_get_actions(pb));
> +            frag->actions = xmemdup(dp_packet_batch_get_actions(pb),
> +                                    frag->actions_len);
>              ipf_list->last_inuse_idx++;
>              atomic_count_inc(&ipf->nfrag);
>              ipf_count(ipf, v6, IPF_NFRAGS_ACCEPTED);
> @@ -849,7 +860,7 @@ ipf_list_init(struct ipf_list *ipf_list, struct ipf_list_key *key,
>  static bool
>  ipf_handle_frag(struct ipf *ipf, struct dp_packet *pkt, ovs_be16 dl_type,
>                  uint16_t zone, long long now, uint32_t hash_basis,
> -                bool dnsteal)
> +                struct dp_packet_batch *pb)
>      OVS_REQUIRES(ipf->ipf_lock)
>  {
>      struct ipf_list_key key;
> @@ -918,7 +929,7 @@ ipf_handle_frag(struct ipf *ipf, struct dp_packet *pkt, ovs_be16 dl_type,
>      }
>
>      return ipf_process_frag(ipf, ipf_list, pkt, start_data_byte,
> -                            end_data_byte, ff, lf, v6, dnsteal);
> +                            end_data_byte, ff, lf, v6, pb);
>  }
>
>  /* Filters out fragments from a batch of fragments and adjust the batch. */
> @@ -940,7 +951,7 @@ ipf_extract_frags_from_batch(struct ipf *ipf, struct dp_packet_batch *pb,
>
>              ovs_mutex_lock(&ipf->ipf_lock);
>              if (!ipf_handle_frag(ipf, pkt, dl_type, zone, now, hash_basis,
> -                                 pb->do_not_steal)) {
> +                                 pb)) {
>                  dp_packet_batch_refill(pb, pkt, pb_idx);
>              }
>              ovs_mutex_unlock(&ipf->ipf_lock);
> @@ -954,20 +965,30 @@ ipf_extract_frags_from_batch(struct ipf *ipf, struct dp_packet_batch *pb,
>   * management has trouble dealing with multiple source types.  The
>   * check_source paramater is used to indicate when this check is needed. */
>  static bool
> -ipf_dp_packet_batch_add(struct dp_packet_batch *pb , struct dp_packet *pkt,
> -                        bool check_source OVS_UNUSED)
> +ipf_dp_packet_batch_add(struct dp_packet_batch *pb, struct dp_packet *pkt,
> +                        struct ipf_frag *frag, bool check_source OVS_UNUSED)
>  {
>  #ifdef DPDK_NETDEV
>      if ((dp_packet_batch_is_full(pb)) ||
>          /* DPDK cannot handle multiple sources in a batch. */
>          (check_source && !dp_packet_batch_is_empty(pb)
> -         && pb->packets[0]->source != pkt->source)) {
> +         && pb->packets[0]->source != frag->pkt->source)) {
>  #else
>      if (dp_packet_batch_is_full(pb)) {
>  #endif
>          return false;
>      }
>
> +    /* Make sure that the fragment resumes processing in the same action list.
> +     * There is a semantic requirement that an action list have, at most, one
> +     * conntrack action, which could be enforced when the action list is
> +     * generated.  Netlink padding is predictably zero, hence memcmp should
> +     * be reliable to compare action lists. */
> +    if ((dp_packet_batch_get_action_len(pb) != frag->actions_len) ||
> +        (memcmp(dp_packet_batch_get_actions(pb), frag->actions,
> +                frag->actions_len))) {
> +        return false;
> +    }
>      dp_packet_batch_add(pb, pkt);
>      return true;
>  }
> @@ -1020,9 +1041,11 @@ ipf_send_frags_in_list(struct ipf *ipf, struct ipf_list *ipf_list,
>      }
>
>      while (ipf_list->last_sent_idx < ipf_list->last_inuse_idx) {
> -        struct dp_packet *pkt
> -            = ipf_list->frag_list[ipf_list->last_sent_idx + 1].pkt;
> -        if (ipf_dp_packet_batch_add(pb, pkt, true)) {
> +        struct ipf_frag *frag =
> +            &ipf_list->frag_list[ipf_list->last_sent_idx + 1];
> +        struct dp_packet *pkt = frag->pkt;
> +
> +        if (ipf_dp_packet_batch_add(pb, pkt, frag, true)) {
>              ipf_list->last_sent_idx++;
>              atomic_count_dec(&ipf->nfrag);
>
> @@ -1122,7 +1145,7 @@ ipf_execute_reass_pkts(struct ipf *ipf, struct dp_packet_batch *pb)
>
>      LIST_FOR_EACH_SAFE (rp, next, rp_list_node, &ipf->reassembled_pkt_list) {
>          if (!rp->list->reass_execute_ctx &&
> -            ipf_dp_packet_batch_add(pb, rp->pkt, false)) {
> +            ipf_dp_packet_batch_add(pb, rp->pkt, rp->list->frag_list, false)) {
>              rp->list->reass_execute_ctx = rp->pkt;
>          }
>      }
> --
> 1.9.1
>
>
>
>
>
> Another question: it seems that ipf is driven by incoming packets, with no
> timer. If a packet has 40 fragments, 32 of them are received in the first
> batch and the remaining 8 in the next batch; reassembly then succeeds and
> 32 fragments are sent out. After that, if no new packets arrive, the
> remaining 8 fragments stay in the completed list and cannot be sent out
> until they expire and are cleaned up. Is that true?
>
>
>
> Fragments will continually be retried for sending, so that they resume in
> the same type of action-list execution batch, and at the same point of
> processing, as where they were received; at a minimum there would be
> subsequent batches that provide those retries.
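>
> A minimal sketch, using simplified stand-in types rather than the real ipf
> structures, of that batch-driven model: there is no ipf timer; each batch
> that passes through conntrack gives ipf's post-processing step another
> chance to hand held fragments back to a batch, so the remaining fragments
> go out on a later batch rather than only at expiry.
>
> #include <stdbool.h>
> #include <stddef.h>
>
> #define BATCH_MAX 32    /* Stand-in for NETDEV_MAX_BURST. */
>
> struct batch_sketch {
>     size_t count;
>     void *pkts[BATCH_MAX];
> };
>
> struct frag_list_sketch {
>     size_t nfrags;        /* Fragments held by ipf for this list. */
>     size_t next_to_send;  /* Index of the next fragment to send. */
>     void **frag_pkts;     /* The held fragment packets. */
> };
>
> /* Called for every batch processed; returns true once all held fragments
>  * of the list have been handed back to a batch. */
> static bool
> ipf_retry_send_sketch(struct frag_list_sketch *list,
>                       struct batch_sketch *batch)
> {
>     while (list->next_to_send < list->nfrags && batch->count < BATCH_MAX) {
>         batch->pkts[batch->count++] = list->frag_pkts[list->next_to_send++];
>     }
>     return list->next_to_send == list->nfrags;
> }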
>
>
>
>
>
> -RongQing
>
>
>
> From: Darrell Ball [mailto:[email protected]]
> Sent: November 19, 2019, 10:22
> To: Li,Rongqing <[email protected]>
> Cc: ovs dev <[email protected]>
> Subject: Re: ipf question
>
>
>
> Thanks; I had a look and I noticed ipf does not keep all the context it
> needs
>
> to properly resume fragment processing in the general case; I have a
> potential fix,
>
> but won't get to it this week.
>
>
>
>
>
> On Sun, Nov 17, 2019 at 11:08 PM Darrell Ball <[email protected]> wrote:
>
>
>
>
>
> On Fri, Nov 15, 2019 at 6:03 PM Li,Rongqing <[email protected]> wrote:
>
> From: Darrell Ball <[email protected]>
> Sent: November 15, 2019, 22:58
> To: Li,Rongqing
> Cc: ovs dev
> Subject: Re: ipf question
>
>
> > Let me paraphrase, just to confirm we are on the same page.
> > IIUC, for example, in the case of a 33-fragment packet, in the first pass
> > all 33 fragments enter ipf, then are reassembled, pass thru conntrack and
> > then the frags are sent out, while in the second pass only 32 fragments
> > enter conntrack/ipf, while the index 32 fragment is being forwarded out
> > without going thru conntrack/ipf?
>
> True. The second pass is recirculation, and the index 32 fragment does not
> go into conntrack/ipf; it is sent out to the VM directly.
>
>
>
> Can you check which rule is being hit by that fragment packet vs the
> others, and then compare the pkt metadata?
>
>
>
>
> If I change NETDEV_MAX_BURST to 64, it works.
>
>
>
> good test
>
>
>
>
> thanks
>
> -RongQing
>
>
>
>
>
>