Re: [ovs-dev] [RFC 1/4] dpif-netdev: Refactor datapath flow cache

2018-02-26 Thread Wang, Yipeng1
Bhanu, thanks for the comments. Please see my responses inline.

And Jan, please feel free to add more.

>-----Original Message-----
>From: Bodireddy, Bhanuprakash
>Sent: Tuesday, February 20, 2018 1:10 PM
>To: Wang, Yipeng1 <yipeng1.w...@intel.com>; d...@openvswitch.org; 
>jan.scheur...@ericsson.com
>Cc: Tai, Charlie <charlie@intel.com>
>Subject: RE: [ovs-dev] [RFC 1/4] dpif-netdev: Refactor datapath flow cache
>
>Hi Yipeng,
>
>Thanks for the RFC series. This patch series needs to be rebased.
>I applied this on an older commit to do initial testing. Some comments below.
>
>I see that the DFC cache is implemented along similar lines to the EMC cache,
>except that it holds a million entries and uses more bits of the RSS hash to
>index into the cache.
[Wang, Yipeng] 
Conceptually, DFC/CD is much more memory efficient than the EMC since it does
not store the full key.

>DPCLS lookup is expensive and consumes 30% of total cycles in some test cases,
>and the DFC cache will definitely reduce some pain there.
>
>On the memory foot print:
>
>On Master,
>EMC  entry size = 592 bytes
>   8k entries = ~4MB.
>
>With this patch,
> EMC entry size = 256 bytes
>  16k entries = ~4MB.
>
>I like the above reduction in flow key size, keeping the entry size to a
>multiple of a cache line while still keeping the overall EMC size at ~4MB
>with more EMC entries.
>
>However my concern is the DFC cache size. As the DFC cache has a million
>entries and consumes ~12 MB for each PMD thread, it might not fit into the
>L3 cache. Also note that in newer platforms the L3 cache is shrinking and L2
>is slightly larger (e.g. Skylake has only 1MB L2 and 19MB L3 cache).
>
[Wang, Yipeng] 
Yes, this is also our concern. The later patch (4/4) introduces an indirect
table for this reason.

>In spite of the memory footprint I still think the DFC cache improves
>switching performance, as it is a lot less expensive than invoking
>dpcls_lookup(); the latter involves more expensive hash computation and
>subtable traversal. It would be nice if more testing were done with real
>VNFs to confirm that this patch doesn't cause cache thrashing or suffer
>from memory bottlenecks.
>
[Wang, Yipeng] 
I don't have a real VNF benchmarking setup, but would it be useful if we test
with a synthetic cache-pirating application to emulate the effect?
 
>
>[BHANU]
>I am not sure it is a good idea to simplify the EMC by making it 1-way
>associative instead of the current 2-way associative implementation.
>I prefer to leave the current approach as-is unless we have strong data to
>prove otherwise.
>This comment applies to the code changes below w.r.t. EMC lookup and insert.
>
[Wang, Yipeng] 
I have synthetic tests showing that the simpler EMC works much better, and
other synthetic tests showing that the 2-way EMC works much better. It
eventually depends on the use case. We had some discussion in this thread:
https://mail.openvswitch.org/pipermail/ovs-dev/2017-December/342197.html

>>The maximum size of the EMC flow key is limited to 256 bytes to reduce the
>>memory footprint. This should be sufficient to hold most real life packet flow
>>keys. Larger flows are not installed in the EMC.
>
>+1
>
>>
>> reload:
>> pmd_alloc_static_tx_qid(pmd);
>>
>>@@ -4166,8 +4282,7 @@ reload:
>> }
>>
>> if (lc++ > 1024) {
>>-bool reload;
>>-
>>+dfc_slow_sweep(&pmd->flow_cache);
>[BHANU] I need to better understand the usage of RCU. But I am wondering why
>the sweep function isn't under the !rcu_try_quiesce() condition below?

[Wang, Yipeng] 
The condition checks whether this PMD was able to quiesce successfully once in
this iteration. Sweeping evicts EMC entries and registers RCU callbacks to
free the megaflow entries later. Putting this function under the condition
therefore means that we sweep and register new callbacks only if this PMD
successfully quiesced this round.
I think functionally it should be fine to put the sweep either inside or
outside the condition. But if it is outside the condition, callbacks may
accumulate and never be invoked, since this PMD might never manage to enter
the quiesced state.
So I think we may want to change the code to keep the sweep under the
condition.
Others may know this better; please comment.
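
For reference, a rough sketch of keeping the sweep under the condition,
following the structure of pmd_thread_main() in master (an abbreviated
fragment for illustration, not the exact patch code):

    if (lc++ > 1024) {
        bool reload;

        lc = 0;
        coverage_try_clear();
        dp_netdev_pmd_try_optimize(pmd, poll_list, poll_cnt);

        /* ovsrcu_try_quiesce() returns 0 only when this PMD managed to
         * quiesce, so the sweep -- which evicts cache entries and schedules
         * RCU callbacks to free megaflows -- runs only in iterations where
         * those callbacks can actually make progress. */
        if (!ovsrcu_try_quiesce()) {
            dfc_slow_sweep(&pmd->flow_cache);
        }

        atomic_read_relaxed(&pmd->reload, &reload);
        if (reload) {
            break;
        }
    }
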
>
>>@@ -5039,7 +5154,7 @@ emc_processing(struct dp_netdev_pmd_thread
>>*pmd,
>> }
>> miniflow_extract(packet, &key->mf);
>> key->len = 0; /* Not computed yet. */
>>-/* If EMC is disabled skip hash computation and emc_lookup */
>>+/* If DFC is disabled skip hash computation and DFC lookup */
>[BHANU] Why would DFC ever be disabled by user? This was done e

Re: [ovs-dev] [RFC 1/4] dpif-netdev: Refactor datapath flow cache

2018-02-20 Thread Bodireddy, Bhanuprakash
Hi Yipeng,

Thanks for the RFC series. This patch series needs to be rebased.
I applied this on an older commit to do initial testing. Some comments below.

I see that the DFC cache is implemented along similar lines to the EMC cache,
except that it holds a million entries and uses more bits of the RSS hash to
index into the cache. I agree that DPCLS lookup is expensive and consumes 30%
of total cycles in some test cases, and the DFC cache will definitely reduce
some pain there.

On the memory foot print:

On Master, 
EMC  entry size = 592 bytes
   8k entries = ~4MB.

With this patch,
 EMC entry size = 256 bytes
  16k entries = ~4MB.

I like the above reduction in flow key size, keeping the entry size to a
multiple of a cache line while still keeping the overall EMC size at ~4MB with
more EMC entries.

However my concern is the DFC cache size. As the DFC cache has a million
entries and consumes ~12 MB for each PMD thread, it might not fit into the L3
cache. Also note that in newer platforms the L3 cache is shrinking and L2 is
slightly larger (e.g. Skylake has only 1MB L2 and 19MB L3 cache).
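
A quick check of the arithmetic behind those figures (a standalone sketch; the
entry sizes and counts are the ones quoted in this thread, not measured
values):

#include <stdio.h>

int main(void)
{
    /* Master: 592-byte EMC entries, 8K entries (prints ~4.6 MB). */
    printf("EMC (master): %.1f MB\n", 592.0 * 8192 / (1024 * 1024));
    /* This patch: 256-byte EMC entries, 16K entries (exactly 4 MB). */
    printf("EMC (patch):  %.1f MB\n", 256.0 * 16384 / (1024 * 1024));
    /* DFC: ~12 MB per PMD thread for 1M entries, i.e. roughly 12 B/entry. */
    printf("DFC (patch):  ~12 MB per PMD thread (1M entries)\n");
    return 0;
}
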

In spite of the memory footprint I still think the DFC cache improves switching
performance, as it is a lot less expensive than invoking dpcls_lookup(); the
latter involves more expensive hash computation and subtable traversal. It
would be nice if more testing were done with real VNFs to confirm that this
patch doesn't cause cache thrashing or suffer from memory bottlenecks.

Some more comments below.

>This is a rebase of Jan's previous patch [PATCH] dpif-netdev: Refactor
>datapath flow cache https://mail.openvswitch.org/pipermail/ovs-dev/2017-
>November/341066.html
>
>So far the netdev datapath uses an 8K EMC to speed up the lookup of
>frequently used flows by comparing the parsed packet headers against the
>miniflow of a cached flow, using 13 bits of the packet RSS hash as index. The
>EMC is too small for many applications with 100K or more parallel packet flows
>so that EMC thrashing actually degrades performance.
>Furthermore, the size of struct miniflow and the flow copying cost prevents us
>from making it much larger.
>
>At the same time the lookup cost of the megaflow classifier (DPCLS) is
>increasing as the number of frequently hit subtables grows with the
>complexity of pipeline and the number of recirculations.
>
>To close the performance gap for many parallel flows, this patch introduces
>the datapath flow cache (DFC) with 1M entries as lookup stage between EMC
>and DPCLS. It directly maps 20 bits of the RSS hash to a pointer to the last 
>hit
>megaflow entry and performs a masked comparison of the packet flow with
>the megaflow key to confirm the hit. This avoids the costly DPCLS lookup even
>for very large number of parallel flows with a small memory overhead.
>
>Due to the large size of the DFC and the low risk of DFC thrashing, any DPCLS
>hit immediately inserts an entry in the DFC so that subsequent packets get
>sped up. The DFC thus also accelerates short-lived flows.
>
>To further accelerate the lookup of few elephant flows, every DFC hit triggers
>a probabilistic EMC insertion of the flow. As the DFC entry is already in place
>the default EMC insertion probability can be reduced to
>1/1000 to minimize EMC thrashing should there still be many fat flows.
>The inverse EMC insertion probability remains configurable.
>
>The EMC implementation is simplified by removing the possibility to store a
>flow in two slots, as there is no particular reason why two flows should
>systematically collide (the RSS hash is not symmetric).

[BHANU]
I am not sure it is a good idea to simplify the EMC by making it 1-way
associative instead of the current 2-way associative implementation.
I prefer to leave the current approach as-is unless we have strong data to
prove otherwise.
This comment applies to the code changes below w.r.t. EMC lookup and insert.

>The maximum size of the EMC flow key is limited to 256 bytes to reduce the
>memory footprint. This should be sufficient to hold most real life packet flow
>keys. Larger flows are not installed in the EMC.

+1 

>
>The pmd-stats-show command is enhanced to show both EMC and DFC hits
>separately.
>
>The sweep speed for cleaning up obsolete EMC and DFC flow entries and
>freeing dead megaflow entries is increased. With a typical PMD cycle duration
>of 100us under load and checking one DFC entry per cycle, the DFC sweep
>should normally complete within 100s.
>
>In PVP performance tests with an L3 pipeline over VXLAN we determined the
>optimal EMC size to be 16K entries to obtain a uniform speedup compared to
>the master branch over the full range of parallel flows. The measurement
>below is for 64 byte packets and the average number of subtable lookups per
>DPCLS hit in this pipeline is 1.0, i.e. the acceleration already starts for a 
>single
>busy mask. Tests with many visited subtables should show a strong increase
>of the gain through DFC.
>
>Flows   master  DFC+EMC  Gain
>   

[ovs-dev] [RFC 1/4] dpif-netdev: Refactor datapath flow cache

2018-01-18 Thread Yipeng Wang
From: Jan Scheurich 

This is a rebase of Jan's previous patch
[PATCH] dpif-netdev: Refactor datapath flow cache
https://mail.openvswitch.org/pipermail/ovs-dev/2017-November/341066.html

So far the netdev datapath uses an 8K EMC to speed up the lookup of
frequently used flows by comparing the parsed packet headers against
the miniflow of a cached flow, using 13 bits of the packet RSS hash
as index. The EMC is too small for many applications with 100K or more
parallel packet flows so that EMC thrashing actually degrades performance.
Furthermore, the size of struct miniflow and the flow copying cost
prevents us from making it much larger.

At the same time the lookup cost of the megaflow classifier (DPCLS) is
increasing as the number of frequently hit subtables grows with the
complexity of pipeline and the number of recirculations.

To close the performance gap for many parallel flows, this patch
introduces the datapath flow cache (DFC) with 1M entries as lookup
stage between EMC and DPCLS. It directly maps 20 bits of the RSS hash
to a pointer to the last hit megaflow entry and performs a masked
comparison of the packet flow with the megaflow key to confirm the
hit. This avoids the costly DPCLS lookup even for very large number of
parallel flows with a small memory overhead.
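
As a rough illustration of that lookup path (a sketch only: the entry layout
and the dfc_lookup()/dfc_entry names are assumptions for this description,
while dpcls_rule_matches_key() is the existing helper in dpif-netdev.c):

#define DFC_MASK_LEN 20
#define DFC_ENTRIES  (1u << DFC_MASK_LEN)   /* 1M entries */
#define DFC_MASK     (DFC_ENTRIES - 1)

struct dfc_entry {
    /* Pointer to the last megaflow that hit this bucket; no key is stored. */
    struct dp_netdev_flow *flow;
};

struct dfc_cache {
    struct dfc_entry entries[DFC_ENTRIES];
};

static inline struct dp_netdev_flow *
dfc_lookup(struct dfc_cache *cache, const struct netdev_flow_key *key)
{
    struct dfc_entry *e = &cache->entries[key->hash & DFC_MASK];
    struct dp_netdev_flow *flow = e->flow;

    /* Confirm the hit by a masked comparison of the packet's flow key
     * against the cached megaflow's key and mask. */
    if (flow && dpcls_rule_matches_key(&flow->cr, key)) {
        return flow;
    }
    return NULL;
}
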

Due to the large size of the DFC and the low risk of DFC thrashing, any
DPCLS hit immediately inserts an entry in the DFC so that subsequent
packets get sped up. The DFC thus also accelerates short-lived flows.

To further accelerate the lookup of few elephant flows, every DFC hit
triggers a probabilistic EMC insertion of the flow. As the DFC entry is
already in place the default EMC insertion probability can be reduced to
1/1000 to minimize EMC thrashing should there still be many fat flows.
The inverse EMC insertion probability remains configurable.
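
The promotion on a DFC hit can be pictured along the lines of the existing
emc_probabilistic_insert() in master (a sketch; the combined cache layout and
the emc_cache member name are assumptions, not the exact patch code):

static inline void
dfc_emc_promote(struct dp_netdev_pmd_thread *pmd,
                const struct netdev_flow_key *key,
                struct dp_netdev_flow *flow)
{
    uint32_t min;

    /* 'emc_insert_min' is UINT32_MAX / inverse-probability; with the new
     * default of 1000 roughly one DFC hit in a thousand is promoted. */
    atomic_read_relaxed(&pmd->dp->emc_insert_min, &min);

    if (min && random_uint32() <= min) {
        emc_insert(&pmd->flow_cache.emc_cache, key, flow); /* assumed member */
    }
}
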

The EMC implementation is simplified by removing the possibility to
store a flow in two slots, as there is no particular reason why two
flows should systematically collide (the RSS hash is not symmetric).
The maximum size of the EMC flow key is limited to 256 bytes to reduce
the memory footprint. This should be sufficient to hold most real life
packet flow keys. Larger flows are not installed in the EMC.
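
A sketch of the simplified single-slot EMC described here (sizes and field
names follow master's conventions but are illustrative; netdev_flow_key_equal()
is the existing helper in dpif-netdev.c):

#define EM_FLOW_HASH_SHIFT   14
#define EM_FLOW_HASH_ENTRIES (1u << EM_FLOW_HASH_SHIFT)  /* 16K entries */
#define EMC_KEY_MAX_BYTES    256    /* larger keys are simply not cached */

struct emc_entry {
    struct dp_netdev_flow *flow;
    struct netdev_flow_key key;     /* miniflow data, capped at 256 bytes */
};

struct emc_cache {
    struct emc_entry entries[EM_FLOW_HASH_ENTRIES];
};

static inline struct dp_netdev_flow *
emc_lookup(struct emc_cache *cache, const struct netdev_flow_key *key)
{
    /* 1-way associative: the hash selects exactly one candidate slot. */
    struct emc_entry *e
        = &cache->entries[key->hash & (EM_FLOW_HASH_ENTRIES - 1)];

    if (e->flow && e->key.hash == key->hash
        && netdev_flow_key_equal(&e->key, key)) {
        return e->flow;
    }
    return NULL;
}
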

The pmd-stats-show command is enhanced to show both EMC and DFC hits
separately.

The sweep speed for cleaning up obsolete EMC and DFC flow entries and
freeing dead megaflow entries is increased. With a typical PMD cycle
duration of 100us under load and checking one DFC entry per cycle, the
DFC sweep should normally complete within 100s.
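
That figure is simple arithmetic (a hypothetical helper just to spell it out;
the 100us iteration time is the assumption stated above):

/* 2^20 DFC entries, one checked per iteration of ~100 us, gives ~104.9 s,
 * i.e. roughly 100 seconds for a full sweep. */
static inline double
dfc_full_sweep_seconds(void)
{
    const double entries = 1 << 20;   /* DFC size */
    const double iter_us = 100.0;     /* assumed loaded PMD iteration time */

    return entries * iter_us / 1e6;
}
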

In PVP performance tests with an L3 pipeline over VXLAN we determined the
optimal EMC size to be 16K entries to obtain a uniform speedup compared
to the master branch over the full range of parallel flows. The measurement
below is for 64 byte packets and the average number of subtable lookups
per DPCLS hit in this pipeline is 1.0, i.e. the acceleration already starts for
a single busy mask. Tests with many visited subtables should show a strong
increase of the gain through DFC.

Flows    master   DFC+EMC   Gain
         [Mpps]   [Mpps]
---------------------------------
8        4.45     4.62       3.8%
100      4.17     4.47       7.2%
1000     3.88     4.34      12.0%
2000     3.54     4.17      17.8%
5000     3.01     3.82      27.0%
10000    2.75     3.63      31.9%
20000    2.64     3.50      32.8%
50000    2.60     3.33      28.1%
100000   2.59     3.23      24.7%
500000   2.59     3.16      21.9%

Signed-off-by: Jan Scheurich 
---
 lib/dpif-netdev.c | 350 --
 1 file changed, 235 insertions(+), 115 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index c7d157a..b9f4b6d 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -128,19 +128,19 @@ struct netdev_flow_key {
 uint64_t buf[FLOW_MAX_PACKET_U64S];
 };
 
-/* Exact match cache for frequently used flows
+/* Datapath flow cache (DFC) for frequently used flows
  *
- * The cache uses a 32-bit hash of the packet (which can be the RSS hash) to
- * search its entries for a miniflow that matches exactly the miniflow of the
- * packet. It stores the 'dpcls_rule' (rule) that matches the miniflow.
+ * The cache uses the 32-bit hash of the packet (which can be the RSS hash) to
+ * directly look up a pointer to the matching megaflow. To check for a match
+ * the packet's flow key is compared against the key and mask of the megaflow.
  *
- * A cache entry holds a reference to its 'dp_netdev_flow'.
- *
- * A miniflow with a given hash can be in one of EM_FLOW_HASH_SEGS different
- * entries. The 32-bit hash is split into EM_FLOW_HASH_SEGS values (each of
- * them is EM_FLOW_HASH_SHIFT bits wide and the remainder is thrown away). Each
- * value is the index of a cache entry where the miniflow could be.
+ * For even faster lookup, the most frequently used packet flows are also
+ * inserted into a small exact match cache