[dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.

2015-08-11 Thread Mcnamara, John
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Weglicki, MichalX
> Sent: Tuesday, August 11, 2015 11:40 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.
> 
> Hello,
> 
> Currently I'm integrating OVS head with DPDK 2.1. Based on my tests
> performance in all scenarios (confirmed on Phy2Phy and Vhostuser) has
> dropped about 10%. Please find example results below:

Also:

> Michal:
> It seems I can fix it on the OVS side by passing the old hardcoded
> size (2048 + RTE_PKTMBUF_HEADROOM) as the argument instead of NULL.

Hi,

In commit 1d493a49490fa the behaviour of rte_pktmbuf_pool_init() changed:

commit 1d493a49490fa90e09689d49280cff0d51d0193e
Author: Olivier Matz 
Date:   Wed Apr 22 11:57:18 2015 +0200

mbuf: fix data room size calculation in pool init

Previously passing opaque_arg == NULL initialized mbuf_data_room_size = 2048 + 
RTE_PKTMBUF_HEADROOM.

Now it is set as follows:

+	/* if no structure is provided, assume no mbuf private area */
+	user_mbp_priv = opaque_arg;
+	if (user_mbp_priv == NULL) {
+		default_mbp_priv.mbuf_priv_size = 0;
+		if (mp->elt_size > sizeof(struct rte_mbuf))
+			roomsz = mp->elt_size - sizeof(struct rte_mbuf);
+		else
+			roomsz = 0;
+		default_mbp_priv.mbuf_data_room_size = roomsz;
+		user_mbp_priv = &default_mbp_priv;
+	}

A workaround, for OVS, would be to pass the new opaque_arg struct with the 
required default set. However, perhaps this should be fixed in DPDK.

The updated doc in the same patch says:

+DPDK 2.0 to DPDK 2.1
+--------------------
+
+*   The second argument of rte_pktmbuf_pool_init(mempool, opaque) is now a
+pointer to a struct rte_pktmbuf_pool_private instead of a uint16_t
+casted into a pointer. Backward compatibility is preserved when the
+argument was NULL which is the majority of use cases, but not if the
+opaque pointer was not NULL, as it is not technically feasible. In
+this case, the application has to be modified to properly fill a
+rte_pktmbuf_pool_private structure and pass it to
+rte_pktmbuf_pool_init().
+

I think the OVS issue shows that backward compatibility isn't preserved (in the 
strictest sense).

Should this be fixed? Opinions?

John.
-- 




[dpdk-dev] DPDK2.1 (rc3 & rc4) major performance drop.

2015-08-11 Thread Weglicki, MichalX
Hello,

Currently I'm integrating OVS head with DPDK 2.1. Based on my tests performance 
in all scenarios (confirmed on Phy2Phy and Vhostuser) has dropped about 10%. 
Please find example results below:

PHY2PHY (Bidirectional)
DPDK 2.1:
Iteration 1: 14,164,987 pps
Iteration 2: 13,866,386 pps
Iteration 3: 14,093,153 pps
DPDK 2.0:
Iteration 1: 15,406,646 pps
Iteration 2: 15,410,182 pps
Iteration 3: 15,404,678 pps

VHOSTUser:
DPDK 2.1:
Iteration 1: 3,328,678 pps
Iteration 2: 3,367,130 pps
Iteration 3: 3,259,899 pps
DPDK 2.0:
Iteration 1: 3,786,969 pps
Iteration 2: 3,736,117 pps
Iteration 3: 3,557,150 pps

Based on my tests, the last commit where performance was unchanged was:
37f9a7270e800df5c603b2c76c73ed3bca3328d9
Bad commit: 1d493a49490fa90e09689d49280cff0d51d0193e

You can find the diff in the attachment.

The change still works with unmodified OVS because NULL is passed as the second
argument to rte_pktmbuf_pool_init().

Has anyone noticed such a performance drop? Could anyone give me some
information on why this implementation changed? Any hints on how to get back to
the previous performance would be much appreciated.

Br,
Michal.

