> On 13 Feb 2017, at 17:34, yusuf khan <yusuf.at...@gmail.com> wrote:
> 
> Hi,
> 
> Comments inline.
> 
> Br,
> Yusuf
> 
> On Mon, Feb 13, 2017 at 9:20 PM, Damjan Marion <dmarion.li...@gmail.com> wrote:
> 
> > On 10 Feb 2017, at 18:03, yusuf khan <yusuf.at...@gmail.com> wrote:
> >
> > Hi,
> >
> > I am testing VPP performance for L3 routing. I am pumping traffic from 
> > moongen, which is sending packets at 10 Gbps line rate with an 84-byte 
> > packet size.
> > If I start VPP with a single worker thread (in addition to the main 
> > thread), VPP is able to route at almost line rate. Almost, because I see 
> > some drops at the receive side of the NIC.
> > Average vectors per node is 97 in this case.
> >
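For reference, a single-worker placement like this normally comes from the cpu stanza in startup.conf. A minimal sketch (main-core is illustrative; worker core 11 matches the lcore in the stats below):

  cpu {
    main-core 1
    corelist-workers 11
  }
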
> > Success case stats from moongen below...
> >
> > Thread 1 vpp_wk_0 (lcore 11)
> > Time 122.6, average vectors/node 96.78, last 128 main loops 12.00 per node 256.00
> >   vector rates in 3.2663e6, out 3.2660e6, drop 1.6316e-2, punt 0.0000e0
> > ------------------------Moongen output------------------------------------------------------------------
> > [Device: id=5] TX: 11.57 Mpps, 8148 Mbit/s (10000 Mbit/s with framing)
> > [Device: id=6] RX: 11.41 Mpps, 8034 Mbit/s (9860 Mbit/s with framing)
> 
> It seems that moongen is not able to send faster…
>     [Yusuf] Here moongen is sending 10000 Mbit/s but the receive side is 
> somewhat less, maybe due to NIC drops… 

Yeah, I wanted to say that VPP is not the limiting factor here.
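
As a quick sanity check, the TX figure is 10GE line rate for this frame size, assuming the 84-byte size excludes the 4-byte FCS:

  on-wire size = 84 B + 4 B FCS + 20 B preamble/SFD/IFG = 108 B
  line rate    = 10e9 / (108 * 8) ≈ 11.57 Mpps

which is exactly what moongen reports on TX, so the sender is already maxed out.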


> 
> >
> >
> > But when I start VPP with 2 worker threads, each polling a separate NIC, 
> > I see the throughput reduced by almost 40%! The other thread is not 
> > receiving any packets; it is just polling an idle NIC, yet it impacts the 
> > other thread?
> 
> Looks like one worker is polling both interfaces and the other one is idle. 
> That’s why you see the drop in performance.
> 
> Can you provide output of “show dpdk interface placement” command?
> 
>     [Yusuf] Each thread is polling an individual interface. Please find the 
> output below:
>     Thread 1 (vpp_wk_0 at lcore 11):
>       TenGigabitEthernet5/0/1 queue 0
>     Thread 2 (vpp_wk_1 at lcore 24):
>       TenGigabitEthernet5/0/0 queue 0
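
For completeness, this two-worker placement corresponds to a cpu stanza in startup.conf along these lines (worker cores taken from the output above; main-core is illustrative):

  cpu {
    main-core 1
    corelist-workers 11,24
  }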

You have both ports on the same card. Have you tried with two different cards?
The 82599 has some hardware limitations; if I remember correctly, it is around 
23 Mpps per card with 64B packets.
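
A quick back-of-the-envelope check against that limit: one 10GE port at 64B line rate is ~14.88 Mpps, so two ports on the same card ask for more than the card can deliver:

  per port  = 10e9 / ((64 + 20) * 8) ≈ 14.88 Mpps
  two ports = ~29.76 Mpps  >  ~23 Mpps per-card limit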

Can you also capture the following outputs while traffic is running:

clear hardware
clear run
[wait 1-2 sec]
show run
show hardware
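
If it is easier, the capture can be scripted from the shell with vppctl (file names and the exact sleep are just illustrative):

  vppctl clear hardware
  vppctl clear run
  sleep 2
  vppctl show run > show-run.txt
  vppctl show hardware > show-hardware.txt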


