We don't have anything right now, but it's on my test list to run this week. I 
can provide some input on the results I'm getting then. 


----- Original Message -----

From: "Anita Tragler" <atrag...@redhat.com> 
To: "Thomas F Herbert" <therb...@redhat.com>, "Christian Trautman" 
<ctrau...@redhat.com>, "Rashid Khan" <rk...@redhat.com>, "Flavio Leitner" 
Cc: "Billy McFall" <bmcf...@redhat.com>, "Karl Rister" <kris...@redhat.com>, 
"Douglas Shakshober" <dsh...@redhat.com>, "Franck Baudin" <fbau...@redhat.com>, 
"Andrew Theurer" <atheu...@redhat.com>, "Maciek Konstantynowicz (mkonstan)" 
<mkons...@cisco.com>, csit-...@lists.fd.io, "vpp-dev" <vpp-dev@lists.fd.io> 
Sent: Monday, March 5, 2018 12:36:17 PM 
Subject: Re: vpp-comparison-18.01 - Invitation to comment 

+ Christian, Flavio 

Hi Christian, Flavio, Karl 

It seems OVS-DPDK performance degrades at higher flow counts based on Karl's 
testing. Do we have any recent OVS 2.9 PVP zero-loss or 0.0001% loss tests with 
1K, 10K, and 100K flows? Don't we need to disable the EMC for better performance 
above 8K flows? 
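
If it helps, EMC insertion can be tuned at runtime via other_config; a minimal 
sketch, assuming OVS 2.7+ where the emc-insert-inv-prob knob exists (0 disables 
EMC insertion entirely, the default is 100, i.e. insert with probability 1/100): 

    # Disable EMC insertion so all lookups go to the dpcls classifier
    ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=0

    # Restore the default insertion probability (1/100)
    ovs-vsctl set Open_vSwitch . other_config:emc-insert-inv-prob=100

Worth comparing both settings at the 10K and 100K flow points, since the EMC is 
only expected to help while the active flow count fits in its ~8K entries. 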


Anita Tragler 
Technical Product Manager - Networking/NFV | Platform BU 
atrag...@redhat.com | IRC: atragler | +1 (919) 830-7342 / (919) 754-4300 

On Wed, Feb 28, 2018 at 9:58 AM, Thomas F Herbert < therb...@redhat.com > wrote: 

Resending email to cc csit and vpp lists 

+csit-dev vpp-dev 

We had a discussion about the variability in today's CSIT meeting. Initial 
testing has shown that addressing CSIT-925 shows the best promise for solving 
the problem. The first recommendation is to disable all unused plugins. 

The CSIT team is encouraging you to repeat the tests with all but the needed 
plugins disabled and is interested in hearing the results. 
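
For reference, a minimal sketch of the relevant startup.conf stanza, assuming 
the stock dpdk_plugin.so name (re-enable whichever plugins your tests actually 
exercise): 

    plugins {
        # Disable everything by default...
        plugin default { disable }
        # ...then re-enable only what the tests need.
        plugin dpdk_plugin.so { enable }
    }

With this in place, "show plugins" at the VPP CLI should confirm that only the 
enabled plugins were loaded. 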

In addition, Ray Kinsella from Intel and I spoke during the meeting. He is 
interested in reviewing your test setup and comparing it with what they are 
doing internally at Intel and in FD.io CSIT. 

I will start a separate thread with you, Ray, et al. about comparing 
configs, etc. 


On 02/20/2018 06:20 PM, Billy McFall wrote: 


Hey Karl, 

Thomas was going to follow up with you and Andrew at Andrew's next NetPerf 
meeting on the variability you are seeing in your VPP testing. Not sure if that 
meeting has happened or not, but I wanted to touch base with you because there 
was a lot of discussion in today's VPP call around some of the variability they 
are seeing in CSIT. Most of what they are reporting is seen in VPP 18.01 
and not in VPP 17.10. I think you are seeing it across the last couple of 
releases, so some of the points below may not address your issue. One thought: I 
wonder how long their tests run? I think your tests run for 5 min, if I 
remember correctly. 

Couple of points: 

    * First, not sure if you saw, but VPP 18.01.1 was released on 2/7/2018. I 
attached a diff of the CLI vs. VPP 17.10. Probably too late since you already 
ran VPP 18.01 through its paces. I'll try to get it out earlier next release. 

    * The FD.io CSIT 18.01 Report has been released (based on VPP 18.01.1): 

        * https://docs.fd.io/csit/rls1801/doc/ 

    * During the VPP 18.01 testing, some performance degradation was 
discovered. In CSIT, they always test with all plugins installed (the default 
for VPP). They tracked down an issue in NAT where a NAT worker thread was 
doing periodic work even though NAT wasn't enabled ( VPP-1162 ). That, 
along with a VTS fix, was pushed for VPP 18.01.1. 

    * They have identified a few additional issues that seem to be causing some 
variability in the CSIT environment that have NOT been fixed. Not sure if these 
could be causing the deviation you are seeing: 

        * Known Issues - Particularly: 

            * CSIT-925 - With all plugins loaded (the default VPP startup 
config), rates vary intermittently by 3% to 5% across multiple test executions. 
Not seen in VPP 17.10 (so it may not be what you are seeing) and not seen if 
all plugins except DPDK are disabled. 
            * CSIT-926 - NDR, PDR and MaxRates of -3%..-1% vs. rls1710. 
            * CSIT-927 - vhost-user lower NDR: the virtio vring size is not 
properly negotiated to 1024; instead it's set to the default of 256. They don't 
think the code changed, so they are looking into the test setup or environment. 

    * Sections of the report have links (for example, see the 
pretty ASCII format for 1t1c ) to a text file with rates and stdev for the 
tests. There are links for NDR and PDR, and for 1t1c and 2t2c. 

    * I remember from previous VPP calls that the FD.io CSIT 18.01 Report was 
also held up to complete some pre- and post-fix Meltdown and Spectre tests, 
comparing performance before and after the OS patches. I searched for Spectre 
in the report and came up with this link, but the tests it points to don't 
exist, so this may still be a work in progress: 

        * Impact of SpectreAndMeltdown Patches 
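
On the CSIT-927 vring-size point above: one way to sanity-check the negotiation 
from the hypervisor side is to request 1024 descriptors explicitly when 
launching the guest. A hedged sketch, assuming a vhost-user guest started via 
QEMU (the socket path, id names, and device layout here are placeholders): 

    -chardev socket,id=char0,path=/tmp/vhost-user1.sock
    -netdev type=vhost-user,id=net0,chardev=char0
    -device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024

rx_queue_size and tx_queue_size are standard virtio-net-pci properties in 
recent QEMU; if the negotiated size still comes back as 256, that would point 
at the test environment rather than the VPP code, as they suspect. 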
Billy McFall 

On Mon, Feb 5, 2018 at 2:36 PM, Karl Rister (via Google Sheets) < 
drive-shares-nore...@google.com > wrote: 


kris...@redhat.com has invited you to comment on the following spreadsheet: 
Here is the latest set of results we have for VPP testing with release 18.01. 
We did a bunch of cleanup on how the results are presented to hopefully make it 
easier to comprehend. 

One thing that stands out to me is that VPP in general has much higher 
variability between the recorded samples than OVS (the exception being tests 
where OVS scored very low; the variability there is quite high since small 
differences between each sample are magnified). The general trend is that VPP 
variability increases at 1M flows and is a bit mixed at 256 and 10K flows. 







Billy McFall 
Networking Group 
CTO Office 
Red Hat 


Thomas F Herbert 
NFV and Fast Data Planes 
Networking Group Office of the CTO 
Red Hat 

