2017-01-12 8:49 GMT-08:00 Ciara Loftus <[email protected]>:
> Unconditional insertion of EMC entries results in EMC thrashing at high
> numbers of parallel flows. When this occurs, the performance of the EMC
> often falls below that of the dpcls classifier, rendering the EMC
> practically useless.
>
> Instead of unconditionally inserting entries into the EMC when a miss
> occurs, use a 1% probability of insertion. This ensures that the most
> frequent flows have the highest chance of creating an entry in the EMC,
> and the probability of thrashing the EMC is also greatly reduced.
>
> Signed-off-by: Ciara Loftus <[email protected]>
> Signed-off-by: Georg Schmuecking <[email protected]>
> Co-authored-by: Georg Schmuecking <[email protected]>

Thanks for the patch
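
By the way, the arithmetic behind the 1% claim looks right to me:
EM_FLOW_INSERT_MIN is UINT32_MAX / 100, so a uniformly distributed
32-bit value lands at or below it about once in a hundred. A quick
standalone check (my own snippet, not part of the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Mirrors the patch: EM_FLOW_INSERT_PROB = 100. */
    const uint32_t insert_min = UINT32_MAX / 100;

    /* (insert_min + 1) of the 2^32 possible 32-bit values satisfy
     * "x <= insert_min". */
    double rate = (insert_min + 1.0) / ((double) UINT32_MAX + 1);
    printf("acceptance rate: %f\n", rate);   /* prints ~0.010000 */
    return 0;
}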

> ---
>  lib/dpif-netdev.c | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 546a1e9..8d55ba2 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -144,6 +144,9 @@ struct netdev_flow_key {
>  #define EM_FLOW_HASH_MASK (EM_FLOW_HASH_ENTRIES - 1)
>  #define EM_FLOW_HASH_SEGS 2
>
> +#define EM_FLOW_INSERT_PROB 100
> +#define EM_FLOW_INSERT_MIN (UINT32_MAX / EM_FLOW_INSERT_PROB)
> +
>  struct emc_entry {
>      struct dp_netdev_flow *flow;
>      struct netdev_flow_key key;   /* key.hash used for emc hash value. */
> @@ -1994,6 +1997,19 @@ emc_insert(struct emc_cache *cache, const struct netdev_flow_key *key,
>      emc_change_entry(to_be_replaced, flow, key);
>  }
>
> +static inline void
> +emc_probabilistic_insert(struct dp_netdev_pmd_thread *pmd,
> +                         struct emc_cache *cache,
> +                         const struct netdev_flow_key *key,
> +                         struct dp_netdev_flow *flow)
> +{
> +    /* Only one in every EM_FLOW_INSERT_PROB packets is inserted, to
> +     * reduce thrashing. */
> +    if ((key->hash ^ (uint32_t)pmd->last_cycles) <= EM_FLOW_INSERT_MIN) {

pmd->last_cycles is always 0 when OVS is compiled without DPDK. In that
case key->hash ^ 0 is just key->hash, so the insertion decision becomes
fully deterministic: the same ~1% of flows always pass the test, and
the rest never do. While we currently don't require high throughput
from this code unless DPDK is enabled, I think that depending only on
the hash might decrease the coverage of the exact match cache in the
unit tests.

Have you thought about just using a counter?
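
Something along these lines, say (just a sketch; 'emc_insert_counter'
here is a hypothetical new field on struct dp_netdev_pmd_thread, and
since each PMD thread owns its flow cache no synchronization should be
needed):

static inline void
emc_probabilistic_insert(struct dp_netdev_pmd_thread *pmd,
                         struct emc_cache *cache,
                         const struct netdev_flow_key *key,
                         struct dp_netdev_flow *flow)
{
    /* Insert one in every EM_FLOW_INSERT_PROB upcalled packets.  The
     * counter (emc_insert_counter is hypothetical, not an existing
     * field) makes the decision independent of both the hash and
     * last_cycles, so the EMC is still exercised without DPDK. */
    if (pmd->emc_insert_counter++ % EM_FLOW_INSERT_PROB == 0) {
        emc_insert(cache, key, flow);
    }
}

That would also spread insertions round-robin across packets instead of
pinning them to a fixed subset of hashes.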

> +        emc_insert(cache, key, flow);
> +    }
> +}
> +
>  static inline struct dp_netdev_flow *
>  emc_lookup(struct emc_cache *cache, const struct netdev_flow_key *key)
>  {
> @@ -4092,7 +4108,7 @@ handle_packet_upcall(struct dp_netdev_pmd_thread *pmd, struct dp_packet *packet,
>          }
>          ovs_mutex_unlock(&pmd->flow_mutex);
>
> -        emc_insert(&pmd->flow_cache, key, netdev_flow);
> +        emc_probabilistic_insert(pmd, &pmd->flow_cache, key, netdev_flow);
>      }
>  }
>
> @@ -4187,7 +4203,7 @@ fast_path_processing(struct dp_netdev_pmd_thread *pmd,
>
>          flow = dp_netdev_flow_cast(rules[i]);
>
> -        emc_insert(flow_cache, &keys[i], flow);
> +        emc_probabilistic_insert(pmd, flow_cache, &keys[i], flow);
>          dp_netdev_queue_batches(packet, flow, &keys[i].mf, batches, 
> n_batches);
>      }
>
> --
> 2.4.3
>