Please refer below for my comments.

From: Damon Wang [mailto:damon.dev...@gmail.com]
Sent: Monday, November 21, 2016 2:46 AM
To: Alec Hothan (ahothan) <ahot...@cisco.com>
Cc: Thomas F Herbert <therb...@redhat.com>; Maciek Konstantynowicz (mkonstan) 
<mkons...@cisco.com>; Andrew Theurer <atheu...@redhat.com>; Douglas Shakshober 
<dsh...@redhat.com>; csit-...@lists.fd.io; vpp-dev <vpp-dev@lists.fd.io>; 
Rashid Khan <rk...@redhat.com>; Liew, Irene <irene.l...@intel.com>; Karl Rister 
<kris...@redhat.com>
Subject: Re: [vpp-dev] [csit-dev] vHost user test scenarios for CSIT

About

VPP in guest with layer 2 and layer 3 vRouted traffic.

Does this mean VPP runs in a guest VM? There is a lot of demand for running 
VPP in a VM as a VNF; the point is testing VPP routing performance in a VM 
with vhost-user.

+-------------------------------------+
|                                     |
|                                     |
|        +--------------------+       |
|        |   VPP in Guest VM  |       |
|        |                    |       |
|        |     Routing from   |       |
|        |     eth0 to eth1   |       |
|        |                    |       |
|        ++-------+--+-------++       |
|         |       |  |       |        |
|         | eth0  |  |  eth1 |        |
|         |       |  |       |        |
|         +---+---+  +---+---+        |
|             |          |            |
|             |          |            |
|             |          |            |
|             |          |            |
|     +-------+----------+-------+    |
|     |                          |    |
|     |                          |    |
|     |   vSwitch, eg. OVS DPDK  |    |
|     |                          |    |
|     |                          |    |
+--+--+-------+---------+--------+-+--+
   |          |         |          |
   |  eth0    |         |   eth1   |
   |          |         |          |
   +----+-----+         +-----+----+
        |                     |
        |                     |
        |                     |
        |                     |
   +----+-----+         +-----+----+
   |          |         |          |
   |  eth0    |         |   eth1   |
   |          |         |          |
+--+----------+---------+----------+---+
|                                      |
|          Traffic Generator           |
|                                      |
|                                      |
+--------------------------------------+
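For illustration, a minimal VPP configuration inside the guest for the routing 
path in the diagram above might look like the following; the interface names, 
addresses and next hops are hypothetical (the guest-side ports would show up 
as virtio devices bound to the DPDK virtio PMD):

  # Inside the guest VM, via the VPP CLI (vppctl)
  set interface state GigabitEthernet0/4/0 up
  set interface state GigabitEthernet0/5/0 up
  set interface ip address GigabitEthernet0/4/0 10.0.0.1/24
  set interface ip address GigabitEthernet0/5/0 10.0.1.1/24
  # Static routes back toward the traffic generator ports
  ip route add 192.168.0.0/24 via 10.0.0.2 GigabitEthernet0/4/0
  ip route add 192.168.1.0/24 via 10.0.1.2 GigabitEthernet0/5/0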


2016-11-17 9:06 GMT+08:00 Alec Hothan (ahothan) <ahot...@cisco.com>:
Few comments inline…


On 11/16/16, 8:18 AM, "vpp-dev-boun...@lists.fd.io on behalf of Thomas F 
Herbert" <vpp-dev-boun...@lists.fd.io on behalf of therb...@redhat.com> wrote:

    +Irene Liew from Intel

    On 11/15/2016 02:06 PM, Maciek Konstantynowicz (mkonstan) wrote:

    On 11 Nov 2016, at 13:58, Thomas F Herbert <therb...@redhat.com> wrote:


    On 11/09/2016 07:39 AM, Maciek Konstantynowicz (mkonstan) wrote:


    Some inputs from my side with MK.


    On 8 Nov 2016, at 21:25, Thomas F Herbert <therb...@redhat.com> wrote:

    All:

    Soliciting opinions from people as to vhost-user testing scenarios and 
guest modes in fd.io CSIT testing of VPP vhost-user.
    I will forward to this mailing list as well as summarize any additional 
feedback.

    I asked some people who happen to be here at OVSCON, as well as some Red 
Hat and Intel people. I am also including some people who are involved in 
upstream vhost-user work in DPDK.
    So far, I have the following feedback, condensed in an attempt to keep the 
list small. If I left out anything, let me know.

    In addition to the PVP tests done now with small packets:

We should standardize on a basic, limited set of sizes: 64 bytes, IMIX, and 
1518 bytes (this can be extended if needed to the frame sizes defined in 
RFC 2544).


    Testpmd in guest is OK for now.


I’d like to suggest defining/documenting the testpmd config used for testing: 
testpmd options and config, VM sizing (vCPU, RAM).
Having a testpmd image capable of auto-configuring itself on the virtual 
interfaces at init time would also be good to have.

[Irene] Testpmd in guest is OK. I would suggest using the MAC forwarding mode 
("fwd mac") in testpmd to do the test in the VM.
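For reference, a testpmd invocation along these lines could serve as a 
starting point; the coremask, memory, descriptor counts and peer MACs below 
are placeholders to be adapted to whatever VM sizing we agree on:

  # Hypothetical testpmd launch inside the guest: two virtio ports, MAC forwarding
  testpmd -c 0x3 -n 4 --socket-mem 1024 -- \
      --burst=64 --rxd=2048 --txd=2048 \
      --forward-mode=mac \
      --eth-peer=0,00:11:22:33:44:55 --eth-peer=1,00:11:22:33:44:66 \
      --auto-start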



    MK: vhost should also be tested with IRQ drivers, not only PMDs, e.g. a 
Linux guest with kernel IP routing. This is done today in CSIT functional 
tests in VIRL (no testpmd there).
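As a point of comparison for the IRQ-driver case, the guest would use plain 
kernel forwarding instead of a PMD; a minimal sketch, with interface names, 
addresses and next hops as placeholders:

  # Inside a Linux guest using virtio-net with the standard kernel driver
  sysctl -w net.ipv4.ip_forward=1
  ip addr add 10.0.0.1/24 dev eth0
  ip addr add 10.0.1.1/24 dev eth1
  ip route add 192.168.0.0/24 via 10.0.0.2
  ip route add 192.168.1.0/24 via 10.0.1.2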


    Yes, as long as testpmd in the guest stays in the suite to maximize the 
perf test results.


    Agree. testpmd is already used in csit perf tests with vhost.



    1 Add multiple VMs (How many?)





    MK: For performance tests, we should aim for a box-full, so for 1-vCPU 
VMs fill up all cores :)
[Irene] Does that mean that in a dual-socket system with 12 cores per socket, 
we would use up all 24 cores in the system, with VPP accessing the remote 
socket? We would need to carefully define the configuration for the remote 
socket.

This will depend on the testpmd settings (mostly the number of vCPUs).
I’d suggest a minimum of 10 chains (10 x PVP) and 2 networks per chain.

[Irene] I have yet to run with 1 vCPU in the VM. The testpmd PMD would need 1 
core to itself for polling, and there should be at least 1 more core for the 
VM Linux processes and the testpmd application. Shall we go with 2 vCPUs in 
the VM, just to make sure VM CPU resources won’t be the bottleneck?
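For what it is worth, a 2-vCPU VM with two vhost-user ports could be brought 
up roughly as follows; socket paths, memory sizes, MACs and the disk image are 
placeholders, and the exact options depend on the QEMU version in use:

  qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 2048 \
      -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem -mem-prealloc \
      -chardev socket,id=char0,path=/tmp/vhost-user-0 \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:01 \
      -chardev socket,id=char1,path=/tmp/vhost-user-1 \
      -netdev type=vhost-user,id=net1,chardev=char1 \
      -device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:02 \
      -drive file=guest.qcow2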

    2 Both multi-queue and single-queue




    MK: vhost single-queue for sure. vhost multi-queue seems to matter only 
for huge VMs that generate lots of traffic and come close to overloading the 
worker thread dealing with them.

[Irene] I agree with single-queue for scaling to 10 chains. It is not 
feasible to run multiple queues when you do not have enough cores to support 
them on the vhost side. With 1 or 2 VMs running, it would make sense to run 
multiple queues. From past test experience, adding multi-queue on vhost does 
not guarantee a higher throughput gain if you have the same limited number of 
OVS PMD cores; it may even hurt performance. I would recommend single queue 
for scaling up to 10 VM chains on the system.

+1 for both single and multi-queue
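For the multi-queue variant, the queue count has to line up between the 
hypervisor and the guest; a sketch with 2 queues per vhost port (all values 
are placeholders):

  # QEMU side: enable multiqueue on the vhost-user netdev and the virtio device
  -netdev type=vhost-user,id=net0,chardev=char0,queues=2 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=6,mac=52:54:00:00:00:01

  # Guest side: testpmd with matching rx/tx queue counts and enough forwarding cores
  testpmd -c 0x7 -n 4 -- --rxq=2 --txq=2 --nb-cores=2 --forward-mode=mac --auto-start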


    3 Tests that cause the equivalent of multiple flows in OVS, with a varied 
mix of traffic including layer 2 and layer 3 traffic.




    MK: Yes. Many flows is a must.

[Irene] Many flows is a must. Shall we define the number of flows to test? 
10,000 flows, 100,000 flows and 1 million flows? Is there any recommended 
number of flows preferred in an NFV setup?
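One simple way to express flow scale in the routed tests is to cover a whole 
destination range with a single route on the DUT and have the traffic 
generator sweep addresses (and/or UDP ports) across that range; a hypothetical 
sketch, with the next hop and interface name as placeholders:

  # On the DUT (VPP in host or guest): one route covering a /16
  ip route add 20.0.0.0/16 via 10.0.0.2 GigabitEthernet0/4/0
  # The traffic generator then sweeps destination IPs 20.0.0.1 .. 20.0.255.254,
  # giving roughly 65k distinct flows against that single route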


    4 Multiple IF's (Guest or Host or Both?)




    MK: What do you mean by multiple IF’s (interfaces)? With multiple VMs we 
surely have multiple vhost interfaces, minimum 2 vhost interfaces per VM. What 
matters IMV is the ratio and speed between: i) physical interfaces, 10GE, 
40GE; and ii) vhost interfaces with slow or fast VMs. I suggest we work out a 
few scenarios covering both i) and ii), and the number of VMs, based on the 
use cases folks have.

[Irene] In our tests, we usually have 2 physical NIC interfaces and 2 vhost 
interfaces for each VM.

Most deployments will have a limited number of physical interfaces per compute 
node. One interface or 2 bonded interfaces per compute node.
The number of vhost interfaces is going to be an order of magnitude larger. 
With the example of 10 VMs and 2 networks per VM, that’s 20 vhost interfaces 
for 1 phys interface.
Of course there might be special configs with very different requirements 
(large oversubscription of VMs, or a larger number of phys interfaces), but I 
think the 10 x PVP use case with 20 vhost interfaces and 1 phys interface 
looks like a good starting point.


    I am copying this to Franck. I am not sure whether he was asking for 
multiple PHY PMDs or more than 2 IFs per guest. I think that multiple guests 
with 2 IFs each should be a pretty good test to start with.


    OK. Any more feedback here from anybody?



    The following might not be doable by 17.01; if not, consider it a wish 
list for the future:

    1 VXLAN tunneled traffic




    MK: Do you mean VXLAN on the wire, with VPP (running in the host) doing 
VXLAN tunnel termination (VTEP) into an L2BD, and then L2 switching into the 
VMs via vhost? If so, that’s the most common requirement I hear from folks, 
e.g. OPNFV/FDS.



    I am not sure whether Franck was suggesting a VTEP, whether he wanted 
encap and decap of L3 VXLAN, or whether he was asking for forwarding rules in 
the guest and not just layer 2 MAC forwarding.


We need to cover the OpenStack VXLAN overlay case: VTEP in the vswitch, 
everything below the vswitch is VXLAN traffic, everything above the VTEP is 
straight L2 forwarding to the vhost interfaces.
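To make this scenario concrete, a hedged sketch of the VPP-in-host side; the 
addresses, VNI, bridge domain id and vhost interface name are placeholders:

  # Terminate VXLAN from the wire into an L2 bridge domain, then L2-switch to vhost
  create vxlan tunnel src 10.1.1.1 dst 10.1.1.2 vni 100
  set interface l2 bridge vxlan_tunnel0 10
  set interface l2 bridge VirtualEthernet0/0/0 10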



    OK. Any more feedback here from anybody?


    2 VPP in guest with layer 2 and layer 3 vRouted traffic.


    MK: What do you mean here? VPP in the guest with dpdk-virtio (instead of 
testpmd), and VPP in the host with vhost?

    Yes, VPP in the host. I think some folks are looking for a test that 
approximates a routing VNF, but I am forwarding this for Franck's comment.



    OK. Any more feedback here from anybody?

    3 Additional Overlay/Underlay: MPLS

    MK: MPLSoEthernet?, MPLSoGRE? VPNv4, VPNv6? Else?
    MK: L2oLISP, IPv4oLISP, IPv6oLISP.



    MPLSoEthernet


    But what VPP configuration: just MPLS label switching (LSR), or VPN edge 
(LER, aka PE)?



    I don't have the answer. Maybe Franck or Anita may want to comment.

    In general, the context for my comment is perf testing of VPP vs DPDK/OVS 
and other vSwitches/data planes. Current testing is optimized for multiple 
layer 2 flows. If we are passing and forwarding tunneled or encapped traffic 
in the VM, even if we don't terminate a VTEP, we are closer to real-world VNF 
use cases, and may provide a better basis for perf comparisons for Telcos and 
similar users.



On the OpenStack front, we need to stay focused first on L2 switching 
performance in the vswitch between physical interfaces, potentially virtual 
interfaces such as VXLAN tunnels, and vhost interfaces.

Thanks

   Alec



_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
