[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-29 Thread Yuanhan Liu
On Thu, Nov 24, 2016 at 08:35:51AM +0100, Maxime Coquelin wrote:
> 
> 
> On 11/24/2016 06:07 AM, Yuanhan Liu wrote:
> >First of all, thanks for the doc! It's a great one.
> Thanks.
I would be interested to know if you have any other tunings I don't
mention in this doc.

I was thinking we may need to document the performance impact of some
features: say, we observed that indirect descriptors may be good for some
cases, while bad for others. Also, the non-mergeable Rx path outperforms
the mergeable Rx path. If a user cares about performance and is certain
all packets fit into a typical MTU, he will likely want to disable the
mergeable feature, which is enabled by default.

Maybe we could start a new doc, or maybe we could add a new section here?

--yliu


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-29 Thread Maxime Coquelin
Hi Yuanhan,

On 11/29/2016 11:16 AM, Yuanhan Liu wrote:
> On Thu, Nov 24, 2016 at 08:35:51AM +0100, Maxime Coquelin wrote:
>>
>>
>> On 11/24/2016 06:07 AM, Yuanhan Liu wrote:
>>> First of all, thanks for the doc! It's a great one.
>> Thanks.
>> I would be interested to know if you have any other tunings I don't
>> mention in this doc.
>
> I was thinking we may need to document the performance impact of some
> features: say, we observed that indirect descriptors may be good for some
> cases, while bad for others. Also, the non-mergeable Rx path outperforms
> the mergeable Rx path. If a user cares about performance and is certain
> all packets fit into a typical MTU, he will likely want to disable the
> mergeable feature, which is enabled by default.
>
> Maybe we could start a new doc, or maybe we could add a new section here?

I agree that we should document the impact of Virtio features on traffic
profiles.
My opinion is that it deserves a dedicated document.

For this PVP doc, I suggest we add a section stating that one could try
with different Virtio features, and that in Kevin's result template
proposal we add a line for enabled/disabled Virtio features.
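
For instance (a sketch extending Kevin's template; the feature values
shown are only illustrative):

    Traffic Generator: IXIA
    Acceptable Loss: 100% (i.e. raw throughput test)
    DPDK version/commit: v16.11
    QEMU version/commit: v2.7.0
    Virtio features: mrg_rxbuf=off, indirect_desc=on
    CPU: E5-2680 v3, 2.8GHz
    NIC: ixgbe 82599
    Result: x mpps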

Thanks,
Maxime
>
>   --yliu
>


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-28 Thread Thomas Monjalon
2016-11-28 15:02, Maxime Coquelin:
> 
> On 11/28/2016 12:22 PM, Thomas Monjalon wrote:
> > 2016-11-23 22:00, Maxime Coquelin:
> >> +You can use this qmp-vcpu-pin script to pin vCPUs:
> >> +
> >> +   .. code-block:: python
> >> +
> >> +#!/usr/bin/python
> >> +# QEMU vCPU pinning tool
> >> +#
> >> +# Copyright (C) 2016 Red Hat Inc.
> >> +#
> >> +# Authors:
> >> +#  Maxime Coquelin 
> >> +#
> >> +# This work is licensed under the terms of the GNU GPL, version 2.  See
> >> +# the COPYING file in the top-level directory
> >> +import argparse
> >> +import json
> >> +import os
> >> +
> >> +from subprocess import call
> >> +from qmp import QEMUMonitorProtocol
> >> +
> >> +pinned = []
> >> +
> >> +parser = argparse.ArgumentParser(description='Pin QEMU vCPUs to physical CPUs')
> >> +parser.add_argument('-s', '--server', type=str, required=True,
> >> +                    help='QMP server path or address:port')
> >> +parser.add_argument('cpu', type=int, nargs='+',
> >> +                    help='Physical CPUs IDs')
> >> +args = parser.parse_args()
> >> +
> >> +devnull = open(os.devnull, 'w')
> >> +
> >> +srv = QEMUMonitorProtocol(args.server)
> >> +srv.connect()
> >> +
> >> +for vcpu in srv.command('query-cpus'):
> >> +    vcpuid = vcpu['CPU']
> >> +    tid = vcpu['thread_id']
> >> +    if tid in pinned:
> >> +        print 'vCPU{}\'s tid {} already pinned, skipping'.format(vcpuid, tid)
> >> +        continue
> >> +
> >> +    cpuid = args.cpu[vcpuid % len(args.cpu)]
> >> +    print 'Pin vCPU {} (tid {}) to physical CPU {}'.format(vcpuid, tid, cpuid)
> >> +    try:
> >> +        call(['taskset', '-pc', str(cpuid), str(tid)], stdout=devnull)
> >> +        pinned.append(tid)
> >> +    except OSError:
> >> +        print 'Failed to pin vCPU{} to CPU{}'.format(vcpuid, cpuid)
> >>
> >
> >
> > No, please do not introduce such a useful script in a doc.
> > I think it must be a separate file in the DPDK repository or
> > in the QEMU repository.
> 
> Ok. The patch is under review on the Qemu ML.
> Until it gets merged, I can add a link to its patchwork ID.
> Ok for you?

Perfect, thanks
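
For reference, a usage sketch of the script (assuming it is saved as
qmp-vcpu-pin next to QEMU's scripts/qmp/qmp.py, which provides the
imported QEMUMonitorProtocol, and using the QMP socket opened in the
doc's QEMU command line to pin vCPUs round-robin to host CPUs 2, 3 and 4):

    ./qmp-vcpu-pin -s /tmp/qmp.socket 2 3 4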


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-28 Thread Maxime Coquelin


On 11/28/2016 12:22 PM, Thomas Monjalon wrote:
> 2016-11-23 22:00, Maxime Coquelin:
>> +You can use this qmp-vcpu-pin script to pin vCPUs:
>> +
>> +   .. code-block:: python
>> +
>> +#!/usr/bin/python
>> +# QEMU vCPU pinning tool
>> +#
>> +# Copyright (C) 2016 Red Hat Inc.
>> +#
>> +# Authors:
>> +#  Maxime Coquelin 
>> +#
>> +# This work is licensed under the terms of the GNU GPL, version 2.  See
>> +# the COPYING file in the top-level directory
>> +import argparse
>> +import json
>> +import os
>> +
>> +from subprocess import call
>> +from qmp import QEMUMonitorProtocol
>> +
>> +pinned = []
>> +
>> +parser = argparse.ArgumentParser(description='Pin QEMU vCPUs to physical CPUs')
>> +parser.add_argument('-s', '--server', type=str, required=True,
>> +                    help='QMP server path or address:port')
>> +parser.add_argument('cpu', type=int, nargs='+',
>> +                    help='Physical CPUs IDs')
>> +args = parser.parse_args()
>> +
>> +devnull = open(os.devnull, 'w')
>> +
>> +srv = QEMUMonitorProtocol(args.server)
>> +srv.connect()
>> +
>> +for vcpu in srv.command('query-cpus'):
>> +    vcpuid = vcpu['CPU']
>> +    tid = vcpu['thread_id']
>> +    if tid in pinned:
>> +        print 'vCPU{}\'s tid {} already pinned, skipping'.format(vcpuid, tid)
>> +        continue
>> +
>> +    cpuid = args.cpu[vcpuid % len(args.cpu)]
>> +    print 'Pin vCPU {} (tid {}) to physical CPU {}'.format(vcpuid, tid, cpuid)
>> +    try:
>> +        call(['taskset', '-pc', str(cpuid), str(tid)], stdout=devnull)
>> +        pinned.append(tid)
>> +    except OSError:
>> +        print 'Failed to pin vCPU{} to CPU{}'.format(vcpuid, cpuid)
>>
>
>
> No, please do not introduce such a useful script in a doc.
> I think it must be a separate file in the DPDK repository or
> in the QEMU repository.

Ok. The patch is under review on the Qemu ML.
Until it gets merged, I can add a link to its patchwork ID.
Ok for you?

Thanks,
Maxime


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-28 Thread Thomas Monjalon
2016-11-23 22:00, Maxime Coquelin:
> +You can use this qmp-vcpu-pin script to pin vCPUs:
> +
> +   .. code-block:: python
> +
> +#!/usr/bin/python
> +# QEMU vCPU pinning tool
> +#
> +# Copyright (C) 2016 Red Hat Inc.
> +#
> +# Authors:
> +#  Maxime Coquelin 
> +#
> +# This work is licensed under the terms of the GNU GPL, version 2.  See
> +# the COPYING file in the top-level directory
> +import argparse
> +import json
> +import os
> +
> +from subprocess import call
> +from qmp import QEMUMonitorProtocol
> +
> +pinned = []
> +
> +parser = argparse.ArgumentParser(description='Pin QEMU vCPUs to physical CPUs')
> +parser.add_argument('-s', '--server', type=str, required=True,
> +                    help='QMP server path or address:port')
> +parser.add_argument('cpu', type=int, nargs='+',
> +                    help='Physical CPUs IDs')
> +args = parser.parse_args()
> +
> +devnull = open(os.devnull, 'w')
> +
> +srv = QEMUMonitorProtocol(args.server)
> +srv.connect()
> +
> +for vcpu in srv.command('query-cpus'):
> +    vcpuid = vcpu['CPU']
> +    tid = vcpu['thread_id']
> +    if tid in pinned:
> +        print 'vCPU{}\'s tid {} already pinned, skipping'.format(vcpuid, tid)
> +        continue
> +
> +    cpuid = args.cpu[vcpuid % len(args.cpu)]
> +    print 'Pin vCPU {} (tid {}) to physical CPU {}'.format(vcpuid, tid, cpuid)
> +    try:
> +        call(['taskset', '-pc', str(cpuid), str(tid)], stdout=devnull)
> +        pinned.append(tid)
> +    except OSError:
> +        print 'Failed to pin vCPU{} to CPU{}'.format(vcpuid, cpuid)
> 


No, please do not introduce such a useful script in a doc.
I think it must be a separate file in the DPDK repository or
in the QEMU repository.


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-25 Thread Maxime Coquelin
Hi John,

On 11/24/2016 06:38 PM, Mcnamara, John wrote:
>> -Original Message-
>> From: Maxime Coquelin [mailto:maxime.coquelin at redhat.com]
>> Sent: Wednesday, November 23, 2016 9:00 PM
>> To: yuanhan.liu at linux.intel.com; thomas.monjalon at 6wind.com; Mcnamara, John; Yang, Zhiyong; dev at dpdk.org
>> Cc: fbaudin at redhat.com; Maxime Coquelin 
>> Subject: [PATCH] doc: introduce PVP reference benchmark
>>
>> Having reference benchmarks is important in order to obtain reproducible
>> performance figures.
>>
>> This patch describes the required steps to configure a PVP setup using
>> testpmd in both host and guest.
>>
>> Not relying on an external vSwitch eases integration in a CI loop by not
>> being impacted by DPDK API changes.
>
> Hi Maxime,
>
> Thanks for the detailed doc and this initiative. Some minor documentation
> comments below.
>
>
>
>> +
>> +Setup overview
>> +..
>
> This level header should be -, even if it looks like dots in the
> contribution guide:
>
> http://dpdk.org/doc/guides/contributing/documentation.html#section-headers
>
>
>> +
>> +.. figure:: img/pvp_2nics.svg
>> +
>> +  PVP setup using 2 NICs
>> +
>
> The figure needs a target so it can be used with :numref:, like this:
>
> .. _figure_pvp_2nics:
>
> .. figure:: img/pvp_2nics.*
>
>PVP setup using 2 NICs
>
>
>> +DPDK build
>> +~~
>> +
>
> Put a one-line description at the start of each section, even if it is
> just: Build DPDK:
Ok.
>
>
>
>> +Testpmd launch
>> +~~
>> +
>> +#. Assign NICs to DPDK:
>> +
>> +   .. code-block:: console
>> +
>> +modprobe vfio-pci
>> +$RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1
>> +
>> +*Note: Sandy Bridge family seems to have some limitations wrt its
>> +IOMMU, giving poor performance results. To achieve good performance on
>> +these machines, consider using UIO instead.*
>
> This would be better as an RST note:
>
> #. Assign NICs to DPDK:
>
>.. code-block:: console
>
>   modprobe vfio-pci
>   $RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1
>
>.. Note::
>
>   The Sandy Bridge family seems to have some IOMMU limitations giving poor
>   performance results. To achieve good performance on these machines
>   consider using UIO instead.
This is indeed better, thanks for the tip!

About this note, I couldn't find official information about this
problem.

Do you confirm the issue, or did I misconfigure something?

I'll also add something about security implications of using UIO.
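
For reference, a minimal sketch of the UIO alternative (assuming the
generic uio_pci_generic kernel module; igb_uio would work similarly, and
the NIC addresses are the doc's examples):

    modprobe uio_pci_generic
    $RTE_SDK/install/sbin/dpdk-devbind -b uio_pci_generic 0000:11:00.0 0000:11:00.1
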
>
>
>
>> +First, SELinux policy needs to be set to permissiven, as testpmd is run
>> +as root (reboot required):
>
> s/permissiven/permissive/
>
>
> There are a couple of trailing whitespace errors at build time as well.

Ok, I will rework all this.

Thanks for the review,
Maxime


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-24 Thread Mcnamara, John
> -Original Message-
> From: Maxime Coquelin [mailto:maxime.coquelin at redhat.com]
> Sent: Wednesday, November 23, 2016 9:00 PM
> To: yuanhan.liu at linux.intel.com; thomas.monjalon at 6wind.com; Mcnamara, John; Yang, Zhiyong; dev at dpdk.org
> Cc: fbaudin at redhat.com; Maxime Coquelin 
> Subject: [PATCH] doc: introduce PVP reference benchmark
> 
> Having reference benchmarks is important in order to obtain reproducible
> performance figures.
> 
> This patch describes the required steps to configure a PVP setup using
> testpmd in both host and guest.
> 
> Not relying on an external vSwitch eases integration in a CI loop by not
> being impacted by DPDK API changes.

Hi Maxime,

Thanks for the detailed doc and this initiative. Some minor documentation
comments below.



> +
> +Setup overview
> +..

This level header should be -, even if it looks like dots in the
contribution guide:

http://dpdk.org/doc/guides/contributing/documentation.html#section-headers


> +
> +.. figure:: img/pvp_2nics.svg
> +
> +  PVP setup using 2 NICs
> +

The figure needs a target so it can be used with :numref:, like this:

.. _figure_pvp_2nics:

.. figure:: img/pvp_2nics.*

   PVP setup using 2 NICs


> +DPDK build
> +~~
> +

Put a one-line description at the start of each section, even if it is
just: Build DPDK:



> +Testpmd launch
> +~~
> +
> +#. Assign NICs to DPDK:
> +
> +   .. code-block:: console
> +
> +modprobe vfio-pci
> +$RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1
> +
> +*Note: Sandy Bridge family seems to have some limitations wrt its
> +IOMMU, giving poor performance results. To achieve good performance on
> +these machines, consider using UIO instead.*

This would be better as an RST note:

#. Assign NICs to DPDK:

   .. code-block:: console

  modprobe vfio-pci
  $RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1

   .. Note::

  The Sandy Bridge family seems to have some IOMMU limitations giving poor
  performance results. To achieve good performance on these machines
  consider using UIO instead.



> +First, SELinux policy needs to be set to permissiven, as testpmd is run
> +as root (reboot required):

s/permissiven/permissive/


There are a couple of trailing whitespace errors at build time as well.


John


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-24 Thread Maxime Coquelin


On 11/24/2016 12:58 PM, Kevin Traynor wrote:
> On 11/23/2016 09:00 PM, Maxime Coquelin wrote:
>> Having reference benchmarks is important in order to obtain
>> reproducible performance figures.
>>
>> This patch describes the required steps to configure a PVP setup
>> using testpmd in both host and guest.
>>
>> Not relying on an external vSwitch eases integration in a CI loop by
>> not being impacted by DPDK API changes.
>>
>> Signed-off-by: Maxime Coquelin 
>
> A short template/hint of the main things to report after running could
> be useful to help ML discussions about results, e.g.:
>
> Traffic Generator: IXIA
> Acceptable Loss: 100% (i.e. raw throughput test)
> DPDK version/commit: v16.11
> QEMU version/commit: v2.7.0
> Patches applied: 
> CPU: E5-2680 v3, 2.8GHz
> Result: x mpps
> NIC: ixgbe 82599

Good idea, I'll add a section in the end providing this template.

>
>> ---
>>  doc/guides/howto/img/pvp_2nics.svg   | 556 +++
>>  doc/guides/howto/index.rst   |   1 +
>>  doc/guides/howto/pvp_reference_benchmark.rst | 389 +++
>>  3 files changed, 946 insertions(+)
>>  create mode 100644 doc/guides/howto/img/pvp_2nics.svg
>>  create mode 100644 doc/guides/howto/pvp_reference_benchmark.rst
>>
>
> 
>
>> +Host tuning
>> +~~~
>
> I would add turbo boost = disabled in the BIOS.
>
+1, will be in next revision.

>> +
>> +#. Append these options to Kernel command line:
>> +
>> +   .. code-block:: console
>> +
>> +intel_pstate=disable mce=ignore_ce default_hugepagesz=1G hugepagesz=1G hugepages=6 isolcpus=2-7 rcu_nocbs=2-7 nohz_full=2-7 iommu=pt intel_iommu=on
>> +
>> +#. Disable hyper-threads at runtime, if necessary and the BIOS is not accessible:
>> +
>> +   .. code-block:: console
>> +
>> +cat /sys/devices/system/cpu/cpu*[0-9]/topology/thread_siblings_list \
>> +| sort | uniq \
>> +| awk -F, '{system("echo 0 > /sys/devices/system/cpu/cpu"$2"/online")}'
>> +
>> +#. Disable NMIs:
>> +
>> +   .. code-block:: console
>> +
>> +echo 0 > /proc/sys/kernel/nmi_watchdog
>> +
>> +#. Exclude isolated CPUs from the writeback cpumask:
>> +
>> +   .. code-block:: console
>> +
>> +echo ff03 > /sys/bus/workqueue/devices/writeback/cpumask
>> +
>> +#. Isolate CPUs from IRQs:
>> +
>> +   .. code-block:: console
>> +
>> +clear_mask=0xfc #Isolate CPU2 to CPU7 from IRQs
>> +for i in /proc/irq/*/smp_affinity
>> +do
>> + echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
>> +done
>> +
>> +Qemu build
>> +~~
>> +
>> +   .. code-block:: console
>> +
>> +git clone git://dpdk.org/dpdk
>> +cd dpdk
>> +export RTE_SDK=$PWD
>> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
>> +
>> +DPDK build
>> +~~
>> +
>> +   .. code-block:: console
>> +
>> +git clone git://dpdk.org/dpdk
>> +cd dpdk
>> +export RTE_SDK=$PWD
>> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
>> +
>> +Testpmd launch
>> +~~
>> +
>> +#. Assign NICs to DPDK:
>> +
>> +   .. code-block:: console
>> +
>> +modprobe vfio-pci
>> +$RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1
>> +
>> +*Note: Sandy Bridge family seems to have some limitations wrt its IOMMU,
>> +giving poor performance results. To achieve good performance on these 
>> machines,
>> +consider using UIO instead.*
>> +
>> +#. Launch testpmd application:
>> +
>> +   .. code-block:: console
>> +
>> +$RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
>> +--vdev 'net_vhost0,iface=/tmp/vhost-user1' \
>> +--vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
>> +--portmask=f --disable-hw-vlan -i --rxq=1 --txq=1 \
>> +--nb-cores=4 --forward-mode=io
>> +
>> +#. In testpmd interactive mode, set the portlist to obtain the right chaining:
>> +
>> +   .. code-block:: console
>> +
>> +set portlist 0,2,1,3
>> +start
>> +
>> +VM launch
>> +~
>> +
>> +The VM may be launched ezither by calling directly QEMU, or by using libvirt.
>
> s/ezither/either
>
>> +
>> +#. Qemu way:
>> +
>> +Launch QEMU with two Virtio-net devices paired to the vhost-user sockets created by testpmd:
>> +
>> +   .. code-block:: console
>> +
>> +/bin/x86_64-softmmu/qemu-system-x86_64 \
>> +-enable-kvm -cpu host -m 3072 -smp 3 \
>> +-chardev socket,id=char0,path=/tmp/vhost-user1 \
>> +-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>> +-device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,addr=0x10 \
>> +-chardev socket,id=char1,path=/tmp/vhost-user2 \
>> +-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
>> +-device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02,addr=0x11 \
>> +-object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
>> +-numa node,memdev=mem -mem-prealloc \
>> +-net user,hostfwd=tcp::1002$1-:22 

[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-24 Thread Yuanhan Liu
First of all, thanks for the doc! It's a great one.

On Wed, Nov 23, 2016 at 10:00:06PM +0100, Maxime Coquelin wrote:
> +Qemu build
> +~~
> +
> +   .. code-block:: console
> +
> +git clone git://dpdk.org/dpdk
> +cd dpdk
> +export RTE_SDK=$PWD
> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install

It's actually the DPDK build.

I will take a closer look at it and also render it to see how it looks
when I get back to the office next week.

--yliu
> +
> +DPDK build
> +~~
> +
> +   .. code-block:: console
> +
> +git clone git://dpdk.org/dpdk
> +cd dpdk
> +export RTE_SDK=$PWD
> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
> +


[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-24 Thread Kevin Traynor
On 11/23/2016 09:00 PM, Maxime Coquelin wrote:
> Having reference benchmarks is important in order to obtain
> reproducible performance figures.
> 
> This patch describes the required steps to configure a PVP setup
> using testpmd in both host and guest.
> 
> Not relying on an external vSwitch eases integration in a CI loop by
> not being impacted by DPDK API changes.
> 
> Signed-off-by: Maxime Coquelin 

A short template/hint of the main things to report after running could
be useful to help ML discussions about results, e.g.:

Traffic Generator: IXIA
Acceptable Loss: 100% (i.e. raw throughput test)
DPDK version/commit: v16.11
QEMU version/commit: v2.7.0
Patches applied: 
CPU: E5-2680 v3, 2.8GHz
Result: x mpps
NIC: ixgbe 82599

> ---
>  doc/guides/howto/img/pvp_2nics.svg   | 556 +++
>  doc/guides/howto/index.rst   |   1 +
>  doc/guides/howto/pvp_reference_benchmark.rst | 389 +++
>  3 files changed, 946 insertions(+)
>  create mode 100644 doc/guides/howto/img/pvp_2nics.svg
>  create mode 100644 doc/guides/howto/pvp_reference_benchmark.rst
> 



> +Host tuning
> +~~~

I would add turbo boost = disabled in the BIOS.

> +
> +#. Append these options to Kernel command line:
> +
> +   .. code-block:: console
> +
> +intel_pstate=disable mce=ignore_ce default_hugepagesz=1G hugepagesz=1G hugepages=6 isolcpus=2-7 rcu_nocbs=2-7 nohz_full=2-7 iommu=pt intel_iommu=on
> +
> +#. Disable hyper-threads at runtime, if necessary and the BIOS is not accessible:
> +
> +   .. code-block:: console
> +
> +cat /sys/devices/system/cpu/cpu*[0-9]/topology/thread_siblings_list \
> +| sort | uniq \
> +| awk -F, '{system("echo 0 > /sys/devices/system/cpu/cpu"$2"/online")}'
> +
> +#. Disable NMIs:
> +
> +   .. code-block:: console
> +
> +echo 0 > /proc/sys/kernel/nmi_watchdog
> +
> +#. Exclude isolated CPUs from the writeback cpumask:
> +
> +   .. code-block:: console
> +
> +echo ff03 > /sys/bus/workqueue/devices/writeback/cpumask
> +
> +#. Isolate CPUs from IRQs:
> +
> +   .. code-block:: console
> +
> +clear_mask=0xfc #Isolate CPU2 to CPU7 from IRQs
> +for i in /proc/irq/*/smp_affinity
> +do
> + echo "obase=16;$(( 0x$(cat $i) & ~$clear_mask ))" | bc > $i
> +done
> +
> +Qemu build
> +~~
> +
> +   .. code-block:: console
> +
> +git clone git://dpdk.org/dpdk
> +cd dpdk
> +export RTE_SDK=$PWD
> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
> +
> +DPDK build
> +~~
> +
> +   .. code-block:: console
> +
> +git clone git://dpdk.org/dpdk
> +cd dpdk
> +export RTE_SDK=$PWD
> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
> +
> +Testpmd launch
> +~~
> +
> +#. Assign NICs to DPDK:
> +
> +   .. code-block:: console
> +
> +modprobe vfio-pci
> +$RTE_SDK/install/sbin/dpdk-devbind -b vfio-pci 0000:11:00.0 0000:11:00.1
> +
> +*Note: Sandy Bridge family seems to have some limitations wrt its IOMMU,
> +giving poor performance results. To achieve good performance on these 
> machines,
> +consider using UIO instead.*
> +
> +#. Launch testpmd application:
> +
> +   .. code-block:: console
> +
> +$RTE_SDK/install/bin/testpmd -l 0,2,3,4,5 --socket-mem=1024 -n 4 \
> +--vdev 'net_vhost0,iface=/tmp/vhost-user1' \
> +--vdev 'net_vhost1,iface=/tmp/vhost-user2' -- \
> +--portmask=f --disable-hw-vlan -i --rxq=1 --txq=1 \
> +--nb-cores=4 --forward-mode=io
> +
> +#. In testpmd interactive mode, set the portlist to obtain the right chaining:
> +
> +   .. code-block:: console
> +
> +set portlist 0,2,1,3
> +start
> +
> +VM launch
> +~
> +
> +The VM may be launched ezither by calling directly QEMU, or by using libvirt.

s/ezither/either

> +
> +#. Qemu way:
> +
> +Launch QEMU with two Virtio-net devices paired to the vhost-user sockets created by testpmd:
> +
> +   .. code-block:: console
> +
> +/bin/x86_64-softmmu/qemu-system-x86_64 \
> +-enable-kvm -cpu host -m 3072 -smp 3 \
> +-chardev socket,id=char0,path=/tmp/vhost-user1 \
> +-netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> +-device virtio-net-pci,netdev=mynet1,mac=52:54:00:02:d9:01,addr=0x10 \
> +-chardev socket,id=char1,path=/tmp/vhost-user2 \
> +-netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \
> +-device virtio-net-pci,netdev=mynet2,mac=52:54:00:02:d9:02,addr=0x11 \
> +-object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
> +-numa node,memdev=mem -mem-prealloc \
> +-net user,hostfwd=tcp::1002$1-:22 -net nic \
> +-qmp unix:/tmp/qmp.socket,server,nowait \
> +-monitor stdio .qcow2

The mergeable Rx data path (=off) would probably also want to be tested
when evaluating any performance improvements/regressions.

> +
> +You can use this qmp-vcpu-pin script to pin vCPUs:
> +
> +   .. code-block:: python
> +
> +

[dpdk-dev] [PATCH] doc: introduce PVP reference benchmark

2016-11-24 Thread Maxime Coquelin


On 11/24/2016 06:07 AM, Yuanhan Liu wrote:
> First of all, thanks for the doc! It's a great one.
Thanks.
I would be interested to know if you have any other tunings I don't
mention in this doc.

>
> On Wed, Nov 23, 2016 at 10:00:06PM +0100, Maxime Coquelin wrote:
>> +Qemu build
>> +~~
>> +
>> +   .. code-block:: console
>> +
>> +git clone git://dpdk.org/dpdk
>> +cd dpdk
>> +export RTE_SDK=$PWD
>> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
>
> It's actually the DPDK build.
>
Oh right! Copy/paste mistake...
This is the Qemu build block:

Qemu build
~~~~~~~~~~

.. code-block:: console

 git clone git://git.qemu.org/qemu.git
 cd qemu
 mkdir bin
 cd bin
 ../configure --target-list=x86_64-softmmu
 make

> I will take a closer look at it and also render it to see how it looks
> when I get back to the office next week.
>
>   --yliu
>> +
>> +DPDK build
>> +~~
>> +
>> +   .. code-block:: console
>> +
>> +git clone git://dpdk.org/dpdk
>> +cd dpdk
>> +export RTE_SDK=$PWD
>> +make install T=x86_64-native-linuxapp-gcc DESTDIR=install
>> +