Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-29 Thread Akash S R
Hi Matthew,

Thank you for your valuable response!
Yes, I have only one tunnel, with the full 1 Gbps of traffic flowing to the
interface. So my testing covers only one tunnel, not multiple tunnels.
How can I avoid this rx-miss, or split the work among all the available
worker threads, or solve this problem some other way?
Are there any NIC-related considerations for this issue?

Please let me know if there are any solutions, mates.

Thanks,
Akash

On Wed, Sep 29, 2021 at 8:13 PM Matthew Smith  wrote:

>
> I saw two noteworthy items in your 'vppctl show runtime' output:
>
> 1. All of the packets received/processed appear to be handled by the
> gtpu4-input node. They also all appear to be received/handled by a single
> worker thread.
> 2. Nearly all of the packets are being dropped. I mean the packets that
> were actually received and processed - not the packets that were counted as
> an rx-miss.
>
> Regarding #1 -
>
> I imagine that one thread might be receiving all inbound packets because
> you're sending across a single GTPU tunnel (i.e. a single stream). If this
> is true, AFAIK most NICs will hash all of the packets to the same queue.
> This means you will likely be constrained to handling however many packets
> can be handled by a single thread and increasing the number of workers or
> rx queues won't help. You might be able to utilize multiple workers/queues
> by sending across multiple tunnels. I know very little about GTPU use
> cases so I don't know whether it's practical for you to use multiple
> tunnels.
>
> Regarding #2 -
>
> Out of 8004674 packets received by dpdk-input, 7977696 packets end up
> being dropped by error-drop. It would probably be useful to look at a
> packet trace and see what node is sending the packets to error-drop. If
> packets are being passed to error-drop by gtpu4-input, maybe they do not
> match any configured tunnel.
>
> -Matt
>
>
> On Tue, Sep 28, 2021 at 9:24 AM Akash S R  wrote:
>
>> Hello,
>>
>> I have tried increasing the workers from 2 to 5 and the rx/tx queue sizes
>> from 1024 (the default) to 2048, and also decreasing them to 256, but to
>> no avail.
>> As you said, the vector size is very high (> 100). Please let us know if
>> there is any other cause or anything else we can try.
>>
>>
>> Thread 1 vpp_wk_0 (lcore 2)
>> Time 41.3, 10 sec internal node vector rate 79.42 loops/sec 3628.25
>>   vector rates in 1.9388e5, out 6.5319e2, drop 1.9322e5, punt 0.e0
>>  Name                            State       Calls     Vectors  Suspends   Clocks  Vectors/Call
>> TenGigabitEthernet4/0/0-output   active       19735       26978         0   2.12e3          1.37
>> TenGigabitEthernet4/0/0-tx       active       19735       26969         0   1.43e3          1.37
>> dpdk-input                       polling   10241833     8004674         0   2.81e3           .78
>> drop                             active       92422     7977696         0   1.31e3         86.32
>> error-drop                       active       92422     7977696         0   6.82e1         86.32
>> ethernet-input                   active       93109     8004674         0   1.84e2         85.97
>> gtpu4-input                      active       93106     8004671         0   3.29e2         85.97
>> ip4-drop                         active           2           2         0   1.33e4          1.00
>> ip4-input                        active       93106     8004671         0   6.82e2         85.97
>> ip4-input-no-checksum            active       93106     8004673         0   2.37e2         85.97
>> ip4-local                        active       93106     8004671         0   2.66e2         85.97
>> ip4-lookup                       active      112841     8031651         0   3.55e2         71.18
>> ip4-policer-classify             active       93106     8004671         0   1.35e3         85.97
>> ip4-rewrite                      active       19735       26978         0   2.43e3          1.37
>> ip4-udp-lookup                   active       93106     8004671         0   3.17e2         85.97
>> ip6-input                        active           1           1         0   1.99e4          1.00
>> ip6-not-enabled                  active           1           1         0   2.59e4          1.00
>> unix-epoll-input                 polling       9998           0         0   3.30e3          0.00
>> ---
>> Thread 2 vpp_wk_1 (lcore 3)
>> Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 988640.31
>>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>>  Name                            State       Calls     Vectors  Suspends   Clocks  Vectors/Call

Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-29 Thread Akash S R
Hey Satish,

The interface is bound to vfio-pci (10-Gigabit SFI/SFP+ Network Connection,
device 10fb), with the hardware rx and tx queue sizes set to 4096 (the maximum).

Let me know if anything comes to mind. :)

Thanks,
Akash

On Wed, Sep 29, 2021 at 6:09 PM  wrote:

> Hi Akash,
>
> Please let me know what type of interface and driver you are using, for
> example igb_uio, vfio, SR-IOV, etc.?
> --
> Regards,
> Satish Singh
> 
>
>




Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-29 Thread Matthew Smith via lists.fd.io
I saw two noteworthy items in your 'vppctl show runtime' output:

1. All of the packets received/processed appear to be handled by the
gtpu4-input node. They also all appear to be received/handled by a single
worker thread.
2. Nearly all of the packets are being dropped. I mean the packets that
were actually received and processed - not the packets that were counted as
an rx-miss.

Regarding #1 -

I imagine that one thread might be receiving all inbound packets because
you're sending across a single GTPU tunnel (i.e. a single stream). If this
is true, AFAIK most NICs will hash all of the packets to the same queue.
This means you will likely be constrained to handling however many packets
can be handled by a single thread and increasing the number of workers or
rx queues won't help. You might be able to utilize multiple workers/queues
by sending across multiple tunnels. I know very little about GTPU use
cases so I don't know whether it's practical for you to use multiple
tunnels.
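
If multiple tunnels are an option, a rough, untested sketch of the CLI would
look something like the following (the addresses and TEIDs are placeholders,
and the exact options can vary between VPP versions). Note that RSS normally
hashes on the outer IP/UDP headers, so the tunnels need to differ in their
outer addresses (or ports) for inbound packets to land on different rx queues:

  vppctl create gtpu tunnel src 10.0.0.1 dst 10.0.1.1 teid 100
  vppctl create gtpu tunnel src 10.0.0.1 dst 10.0.1.2 teid 101
  vppctl show gtpu tunnel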

Regarding #2 -

Out of 8004674 packets received by dpdk-input, 7977696 packets end up being
dropped by error-drop. It would probably be useful to look at a packet
trace and see what node is sending the packets to error-drop. If packets
are being passed to error-drop by gtpu4-input, maybe they do not match any
configured tunnel.
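
For reference, a minimal trace capture from the VPP CLI would look something
like this (the packet count is arbitrary; capture while the traffic is running):

  vppctl clear trace
  vppctl trace add dpdk-input 50
  vppctl show trace

Each traced packet shows the full node path it took, so the node that handed
it to error-drop, and the drop reason, should be visible at the end of the
trace output.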

-Matt


On Tue, Sep 28, 2021 at 9:24 AM Akash S R  wrote:

> Hello,
>
> I have tried increasing the workers from 2 to 5 and the rx/tx queue sizes
> from 1024 (the default) to 2048, and also decreasing them to 256, but to
> no avail.
> As you said, the vector size is very high (> 100). Please let us know if
> there is any other cause or anything else we can try.
>
>
> Thread 1 vpp_wk_0 (lcore 2)
> Time 41.3, 10 sec internal node vector rate 79.42 loops/sec 3628.25
>   vector rates in 1.9388e5, out 6.5319e2, drop 1.9322e5, punt 0.e0
>  Name                            State       Calls     Vectors  Suspends   Clocks  Vectors/Call
> TenGigabitEthernet4/0/0-output   active       19735       26978         0   2.12e3          1.37
> TenGigabitEthernet4/0/0-tx       active       19735       26969         0   1.43e3          1.37
> dpdk-input                       polling   10241833     8004674         0   2.81e3           .78
> drop                             active       92422     7977696         0   1.31e3         86.32
> error-drop                       active       92422     7977696         0   6.82e1         86.32
> ethernet-input                   active       93109     8004674         0   1.84e2         85.97
> gtpu4-input                      active       93106     8004671         0   3.29e2         85.97
> ip4-drop                         active           2           2         0   1.33e4          1.00
> ip4-input                        active       93106     8004671         0   6.82e2         85.97
> ip4-input-no-checksum            active       93106     8004673         0   2.37e2         85.97
> ip4-local                        active       93106     8004671         0   2.66e2         85.97
> ip4-lookup                       active      112841     8031651         0   3.55e2         71.18
> ip4-policer-classify             active       93106     8004671         0   1.35e3         85.97
> ip4-rewrite                      active       19735       26978         0   2.43e3          1.37
> ip4-udp-lookup                   active       93106     8004671         0   3.17e2         85.97
> ip6-input                        active           1           1         0   1.99e4          1.00
> ip6-not-enabled                  active           1           1         0   2.59e4          1.00
> unix-epoll-input                 polling       9998           0         0   3.30e3          0.00
> ---
> Thread 2 vpp_wk_1 (lcore 3)
> Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 988640.31
>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>  Name                            State       Calls     Vectors  Suspends   Clocks  Vectors/Call
> dpdk-input                       polling   37858263           0         0   1.14e3          0.00
> unix-epoll-input                 polling      36940           0         0   3.54e3          0.00
> ---
> Thread 3 vpp_wk_2 (lcore 4)
> Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 983700.23
>   vector rates in 4.8441e-2, out 0.e0, drop 4.8441e-2, punt 0.e0
>   

Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-29 Thread satishsept7
Hi Akash,

Please let me know what type of interface and driver you are using, for
example igb_uio, vfio, SR-IOV, etc.?
--
Regards,
Satish Singh




Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-28 Thread Akash S R
Hello,

I have tried increasing the workers from 2 to 5 and the rx/tx queue sizes
from 1024 (the default) to 2048, and also decreasing them to 256, but to
no avail.
As you said, the vector size is very high (> 100). Please let us know if
there is any other cause or anything else we can try.


Thread 1 vpp_wk_0 (lcore 2)
Time 41.3, 10 sec internal node vector rate 79.42 loops/sec 3628.25
  vector rates in 1.9388e5, out 6.5319e2, drop 1.9322e5, punt 0.e0
 Name                             State       Calls     Vectors  Suspends   Clocks  Vectors/Call
TenGigabitEthernet4/0/0-output   active       19735       26978         0   2.12e3          1.37
TenGigabitEthernet4/0/0-tx       active       19735       26969         0   1.43e3          1.37
dpdk-input                       polling   10241833     8004674         0   2.81e3           .78
drop                             active       92422     7977696         0   1.31e3         86.32
error-drop                       active       92422     7977696         0   6.82e1         86.32
ethernet-input                   active       93109     8004674         0   1.84e2         85.97
gtpu4-input                      active       93106     8004671         0   3.29e2         85.97
ip4-drop                         active           2           2         0   1.33e4          1.00
ip4-input                        active       93106     8004671         0   6.82e2         85.97
ip4-input-no-checksum            active       93106     8004673         0   2.37e2         85.97
ip4-local                        active       93106     8004671         0   2.66e2         85.97
ip4-lookup                       active      112841     8031651         0   3.55e2         71.18
ip4-policer-classify             active       93106     8004671         0   1.35e3         85.97
ip4-rewrite                      active       19735       26978         0   2.43e3          1.37
ip4-udp-lookup                   active       93106     8004671         0   3.17e2         85.97
ip6-input                        active           1           1         0   1.99e4          1.00
ip6-not-enabled                  active           1           1         0   2.59e4          1.00
unix-epoll-input                 polling       9998           0         0   3.30e3          0.00
---
Thread 2 vpp_wk_1 (lcore 3)
Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 988640.31
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name                             State       Calls     Vectors  Suspends   Clocks  Vectors/Call
dpdk-input                       polling   37858263           0         0   1.14e3          0.00
unix-epoll-input                 polling      36940           0         0   3.54e3          0.00
---
Thread 3 vpp_wk_2 (lcore 4)
Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 983700.23
  vector rates in 4.8441e-2, out 0.e0, drop 4.8441e-2, punt 0.e0
 Name                             State       Calls     Vectors  Suspends   Clocks  Vectors/Call
dpdk-input                       polling   37820862           2         0  2.16e10          0.00
drop                             active           2           2         0   9.34e3          1.00
error-drop                       active           2           2         0   7.78e3          1.00
ethernet-input                   active           2           2         0   2.44e4          1.00
ip6-input                        active           2           2         0   1.06e4          1.00
ip6-not-enabled                  active           2           2         0   8.75e3          1.00
unix-epoll-input                 polling      36906           0         0   3.54e3          0.00
---
Thread 4 vpp_wk_3 (lcore 5)
Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 985307.67
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name                             State       Calls     Vectors  Suspends   Clocks  Vectors/Call
dpdk-input                       polling   37834354           0         0   1.14e3          0.00
unix-epoll-input                 polling      36918           0         0   3.47e3

Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-28 Thread Akash S R
Thanks mate, that makes sense. Will check it out and get back.


Thanks,
Akash

On Tue, Sep 28, 2021, 16:00 Benoit Ganne (bganne)  wrote:

> Rx-miss means the NIC must drop packets on RX because the rx queue was
> full, usually because VPP cannot keep up with the incoming packet rate.
> You can check it with the output of 'show run'. If you see a big average
> vector size (100 or more) it means VPP is busy.
> To improve that you must increase the number of VPP workers (and rx
> queues).
> Rx-misses can also be caused by traffic spikes. In that case you can
> increase the rx queues size to absorb the bursts.
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Akash S R
> > Sent: mardi 28 septembre 2021 11:43
> > To: vpp-dev 
> > Subject: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)
> >
> > Hello mates,
> >
> > It's been a long time since I raised a query out here to you guys :)
> > (Nah, please ignore this junk.)
> >
> > I have a question about rx-miss. Whenever we fire packets at a high rate,
> > say 1 Gbps or more, I see an rx-miss count on the interface.
> > I referred to this thread from the VPP mailing list:
> > https://lists.fd.io/g/vpp-dev/topic/78179815#17985
> >
> > TenGigabitEthernet4/0/1   3   up   1500/0/0/0
> >   rx packets      8622948
> >   rx bytes     1103737344
> >   ip4             8622948
> >   rx-miss       283721416
> >
> > But buffers are available and only a few are allocated. I reduced the
> > queue size to 256 and the threads to 4, but the issue is still not
> > resolved.
> > DBGvpp# show dpdk buffer
> > name="vpp pool 0"  available =   13984 allocated =2816 total =
>  16800
> >
> > May I know the reason for this rx-miss, and whether there is any
> > resolution for this issue?
> >
> > Thanks,
> > Akash
>
>




Re: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)

2021-09-28 Thread Benoit Ganne (bganne) via lists.fd.io
Rx-miss means the NIC must drop packets on RX because the rx queue was full, 
usually because VPP cannot keep up with the incoming packet rate.
You can check it with the output of 'show run'. If you see a big average vector 
size (100 or more) it means VPP is busy.
To improve that you must increase the number of VPP workers (and rx queues).
Rx-misses can also be caused by traffic spikes. In that case you can increase 
the rx queues size to absorb the bursts.
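
As a rough illustration only (the PCI address, core list and sizes below are
placeholders to adapt to your system), workers, rx queues and the rx
descriptor ring size are set in startup.conf along these lines:

  cpu {
    main-core 1
    corelist-workers 2-5      # 4 worker threads
  }
  dpdk {
    dev 0000:04:00.0 {
      num-rx-queues 4         # ideally one rx queue per worker
      num-rx-desc 2048        # larger ring to absorb bursts
    }
  }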

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Akash S R
> Sent: mardi 28 septembre 2021 11:43
> To: vpp-dev 
> Subject: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)
> 
> Hello mates,
> 
> It's been a long time since I raised a query out here to you guys :)
> (Nah, please ignore this junk.)
> 
> I have a question about rx-miss. Whenever we fire packets at a high rate,
> say 1 Gbps or more, I see an rx-miss count on the interface.
> I referred to this thread from the VPP mailing list:
> https://lists.fd.io/g/vpp-dev/topic/78179815#17985
> 
> TenGigabitEthernet4/0/1   3   up   1500/0/0/0
>   rx packets      8622948
>   rx bytes     1103737344
>   ip4             8622948
>   rx-miss       283721416
> 
> But buffers are available and only a few are allocated. I reduced the
> queue size to 256 and the threads to 4, but the issue is still not resolved.
> DBGvpp# show dpdk buffer
> name="vpp pool 0"  available =   13984 allocated =2816 total =   16800
> 
> May I know the reason for this rx-miss, and whether there is any
> resolution for this issue?
> 
> Thanks,
> Akash

