-----Original Message-----
From: "Bodireddy, Bhanuprakash" <[email protected]>
Date: Monday, June 26, 2017 at 5:56 AM
To: Darrell Ball <[email protected]>, "[email protected]" 
<[email protected]>
Subject: RE: [ovs-dev] [PATCH v10] netdev-dpdk: Increase pmd thread priority.

    >With this change and CFS in effect, it effectively means that the dpdk
    >control threads need to be on different cores than the PMD threads, or the
    >response latency may be too long for their control work?
    >Have we tested having the control threads on the same cpu with -20 nice for
    >the pmd thread?
    
    Yes, I did some testing, and that is the reason I added the comment
    recommending that dpdk-lcore-mask and pmd-cpu-mask be non-overlapping.
    The testing was done with a simple script that adds and deletes 750 vHost
    User ports (script copied below). Time statistics were captured for each
    case.
    
       dpdk-lcore-mask | PMD thread | PMD nice | Time statistics
       unspecified     | Core 3     | -20      | real 1m5.610s  / user 0m0.706s / sys 0m0.023s   [with patch]
       Core 3          | Core 3     | -20      | real 2m14.089s / user 0m0.717s / sys 0m0.017s   [with patch]
       unspecified     | Core 3     |   0      | real 1m5.209s  / user 0m0.711s / sys 0m0.020s   [master]
       Core 3          | Core 3     |   0      | real 1m7.209s  / user 0m0.711s / sys 0m0.020s   [master]
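    For illustration, a non-overlapping setup on this machine could look like
    the following (the mask values here are examples only; any two disjoint
    masks work):

    $ # Pin the dpdk control threads to core 1 (mask 0x2) ...
    $ ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
    $ # ... and the PMD thread to core 3 (mask 0x8), so the two never conflict
    $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8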
    

[Darrell]
So if the lcore mask is either unspecified or specified to be non-conflicting,
then the advantage is basically nil.
We should usually be able to do this, and when we cannot, I am not sure that
favoring throughput over management tasks such as port add is good, since the
potential relative impact of the management task is high while its share of
total cpu time is low.
///////////


    In all cases, if the dpdk-lcore-mask is 'unspecified', the main thread
    floats between the available cores (0-27 in my case).
    
    With this patch (PMD nice value at -20), and with the main & pmd threads
    pinned to core 3, port addition and deletion took twice the time. However,
    the most important thing to notice is that with active traffic and with
    port addition/deletion in progress, throughput drops instantly *without*
    the patch. In this case the vswitchd thread consumes 7% of the CPU time at
    one stage, thereby impacting the forwarding performance.
    
    With the patch the throughput is still affected, but the drop happens
    gradually. In this case the vswitchd thread was consuming no more than 2%
    of the CPU time, which is why port addition/deletion took longer.
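    The per-thread CPU figures above (7% vs. 2% for the vswitchd thread) can
    be reproduced while the script runs; these monitoring commands are just
    one way to take the measurement, not part of the patch:

    $ # Per-thread CPU% of ovs-vswitchd, refreshed interactively
    $ top -H -p $(pidof ovs-vswitchd)
    $ # Or, with sysstat installed, sample thread-level stats once per second
    $ pidstat -t -p $(pidof ovs-vswitchd) 1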
    
    >
    >I see the comment is added below
    >+    It is recommended that the OVS control thread and pmd thread shouldn't
    >+    be pinned to the same core, i.e. 'dpdk-lcore-mask' and 'pmd-cpu-mask'
    >+    cpu mask settings should be non-overlapping.
    >
    >
    >I understand that other heavy threads would be a problem for PMD threads
    >and we want to effectively encourage these to be on different cores in the
    >situation where we are using a pmd-cpu-mask.
    >However, here we are almost shutting down other threads by default on the
    >same core as PMD threads using -20 nice, even those with little cpu load
    >but just needing a reasonable latency.
    
    In early versions of this patch I had the logic of completely shutting
    down other threads by assigning real-time priority to the PMD thread. But
    that seemed too dangerous, and changing the nice value is a safer bet. I
    agree that latency can go up for non-pmd threads with this patch, but it's
    the same problem posed by other kernel threads that run at -20 nice, and
    some with 'rt' priority.
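    (The patch adjusts the PMD thread's nice value internally; the roughly
    equivalent shell commands below, with <pmd_tid> as a placeholder thread
    id, only illustrate the difference between the two approaches:)

    $ # Nice-based approach: lower the nice value to -20; CFS still lets
    $ # other threads on the core run, just far less often
    $ sudo renice -n -20 -p <pmd_tid>
    $ # Real-time approach from the early patch versions: SCHED_FIFO can
    $ # starve everything else on the core, hence "too dangerous"
    $ sudo chrt -f -p 50 <pmd_tid>
    $ # Inspect the resulting policy and priority
    $ chrt -p <pmd_tid>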
    
    >
    >Will this aggravate the argument from some quarters that using dpdk
    >requires too much cpu reservation?
    At least for the PMD threads, which are the heart of packet processing in
    OvS-DPDK, that reservation is justified.
    
    
    More information on commands:
    
    Script to test port addition and deletion:
    
    $ cat port_test.sh
       # Build one ovs-vsctl invocation that adds 750 vhost-user ports
       cmds=; for i in {1..750}; do cmds+=" -- add-port br0 dpdkvhostuser$i -- set Interface dpdkvhostuser$i type=dpdkvhostuser"; done
       ovs-vsctl $cmds

       sleep 1;

       # Build one ovs-vsctl invocation that deletes all 750 ports again
       cmds=; for i in {1..750}; do cmds+=" -- del-port br0 dpdkvhostuser$i"; done
       ovs-vsctl $cmds
    
    $ time ./port_test.sh
    
    dpdk-lcore-mask and pmd-cpu-mask explicitly set to core 3.
    
-------------------------------------------------------------------------------------------
    $ ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=8
    $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=8
    $ ps -eLo tid,psr,comm | grep -e revalidator -e handler -e ovs -e pmd -e urc -e eal
       110881  20 ovsdb-server
       110892   3 ovs-vswitchd
       110976   3 pmd61
       110898   3 eal-intr-thread
       110903   3 urcu3
       110947   3 handler60
    
    dpdk-lcore-mask unspecified, pmd-cpu-mask explicitly set to core 3.
    
---------------------------------------------------------------------------------------------
    $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=8
    $ ps -eLo tid,psr,comm | grep -e revalidator -e handler -e ovs -e pmd -e urc -e eal
        111474  14 ovsdb-server
        111483   6 ovs-vswitchd
        111566   3 pmd61
        111564  10 revalidator60
        111489   0 eal-intr-thread
        111493   8 urcu3
    
    Regards,
    Bhanuprakash.
    
