Hi Matthew,

Thank you for your valuable response!
Yes, I have only one tunnel, with the full 1 Gbps of traffic flowing to the
interface. So my testing is with a single tunnel only, not multiple tunnels.
How can I avoid this rx-miss, split this work among all the available
worker threads, or otherwise solve this problem?
Are there any NIC-related considerations for this issue?

Please let me know if there are any solutions, mates.

Thanks,
Akash

On Wed, Sep 29, 2021 at 8:13 PM Matthew Smith <mgsm...@netgate.com> wrote:

>
> I saw two noteworthy items in your 'vppctl show runtime' output:
>
> 1. All of the packets received/processed appear to be handled by the
> gtpu4-input node. They also all appear to be received/handled by a single
> worker thread.
> 2. Nearly all of the packets are being dropped. I mean the packets that
> were actually received and processed - not the packets that were counted as
> an rx-miss.
>
> Regarding #1 -
>
> I imagine that one thread might be receiving all inbound packets because
> you're sending across a single GTPU tunnel (i.e. a single stream). If this
> is true, AFAIK most NICs will hash all of the packets to the same queue.
> This means you will likely be constrained to however many packets a single
> thread can handle, and increasing the number of workers or rx queues won't
> help. You might be able to utilize multiple workers/queues by sending
> across multiple tunnels. I know very little about GTPU use cases, so I
> don't know whether it's practical for you to use multiple tunnels.
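>
> If you do want to experiment with that, a rough sketch of what it might
> look like (the addresses and TEIDs below are placeholders, untested):
>
>   create gtpu tunnel src 10.0.0.1 dst 10.0.0.2 teid 100
>   create gtpu tunnel src 10.0.0.1 dst 10.0.0.3 teid 101
>
> Keep in mind that RSS typically hashes on the outer IP/UDP headers, so the
> tunnels need to differ in their outer addresses or ports to land on
> different rx queues.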
>
> Regarding #2 -
>
> Out of 8004674 packets received by dpdk-input, 7977696 packets end up
> being dropped by error-drop. It would probably be useful to look at a
> packet trace and see what node is sending the packets to error-drop. If
> packets are being passed to error-drop by gtpu4-input, maybe they do not
> match any configured tunnel.
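>
> For example (the packet count is arbitrary):
>
>   vppctl clear trace
>   vppctl trace add dpdk-input 100
>   vppctl show trace
>
> 'vppctl show errors' should also give a per-node breakdown of the drop
> reasons without needing a trace.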
>
> -Matt
>
>
> On Tue, Sep 28, 2021 at 9:24 AM Akash S R <akashsr.akas...@gmail.com>
> wrote:
>
>> Hello,
>>
>> I have tried increasing the workers from 2 to 5 and the rx/tx queue sizes
>> from 1024 (the default) to 2048, and also decreasing them to 256, but to
>> no avail.
>> As you said, the average vector size is very high (close to 100). Please
>> let us know if there is any other way to address this, or another reason
>> for it.
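>>
>> For reference, the settings I was adjusting in startup.conf looked roughly
>> like this (the PCI address is a placeholder for my NIC):
>>
>>   cpu {
>>     main-core 1
>>     workers 5
>>   }
>>   dpdk {
>>     dev 0000:04:00.0 {
>>       num-rx-queues 5
>>       num-rx-desc 2048
>>       num-tx-desc 2048
>>     }
>>   }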
>>
>>
>> Thread 1 vpp_wk_0 (lcore 2)
>> Time 41.3, 10 sec internal node vector rate 79.42 loops/sec 3628.25
>>   vector rates in 1.9388e5, out 6.5319e2, drop 1.9322e5, punt 0.0000e0
>>              Name                 State      Calls     Vectors  Suspends   Clocks  Vectors/Call
>> TenGigabitEthernet4/0/0-output   active      19735       26978         0   2.12e3          1.37
>> TenGigabitEthernet4/0/0-tx       active      19735       26969         0   1.43e3          1.37
>> dpdk-input                       polling  10241833     8004674         0   2.81e3           .78
>> drop                             active      92422     7977696         0   1.31e3         86.32
>> error-drop                       active      92422     7977696         0   6.82e1         86.32
>> ethernet-input                   active      93109     8004674         0   1.84e2         85.97
>> gtpu4-input                      active      93106     8004671         0   3.29e2         85.97
>> ip4-drop                         active          2           2         0   1.33e4          1.00
>> ip4-input                        active      93106     8004671         0   6.82e2         85.97
>> ip4-input-no-checksum            active      93106     8004673         0   2.37e2         85.97
>> ip4-local                        active      93106     8004671         0   2.66e2         85.97
>> ip4-lookup                       active     112841     8031651         0   3.55e2         71.18
>> ip4-policer-classify             active      93106     8004671         0   1.35e3         85.97
>> ip4-rewrite                      active      19735       26978         0   2.43e3          1.37
>> ip4-udp-lookup                   active      93106     8004671         0   3.17e2         85.97
>> ip6-input                        active          1           1         0   1.99e4          1.00
>> ip6-not-enabled                  active          1           1         0   2.59e4          1.00
>> unix-epoll-input                 polling      9998           0         0   3.30e3          0.00
>> ---------------
>> Thread 2 vpp_wk_1 (lcore 3)
>> Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 988640.31
>>   vector rates in 0.0000e0, out 0.0000e0, drop 0.0000e0, punt 0.0000e0
>>              Name                 State      Calls     Vectors  Suspends   Clocks  Vectors/Call
>> dpdk-input                       polling  37858263           0         0   1.14e3          0.00
>> unix-epoll-input                 polling     36940           0         0   3.54e3          0.00
>> ---------------
>> Thread 3 vpp_wk_2 (lcore 4)
>> Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 983700.23
>>   vector rates in 4.8441e-2, out 0.0000e0, drop 4.8441e-2, punt 0.0000e0
>>              Name                 State      Calls     Vectors  Suspends   Clocks  Vectors/Call
>> dpdk-input                       polling  37820862           2         0  2.16e10          0.00
>> drop                             active          2           2         0   9.34e3          1.00
>> error-drop                       active          2           2         0   7.78e3          1.00
>> ethernet-input                   active          2           2         0   2.44e4          1.00
>> ip6-input                        active          2           2         0   1.06e4          1.00
>> ip6-not-enabled                  active          2           2         0   8.75e3          1.00
>> unix-epoll-input                 polling     36906           0         0   3.54e3          0.00
>> ---------------
>> Thread 4 vpp_wk_3 (lcore 5)
>> Time 41.3, 10 sec internal node vector rate 0.00 loops/sec 985307.67
>>   vector rates in 0.0000e0, out 0.0000e0, drop 0.0000e0, punt 0.0000e0
>>              Name                 State      Calls     Vectors  Suspends   Clocks  Vectors/Call
>> dpdk-input                       polling  37834354           0         0   1.14e3          0.00
>> unix-epoll-input                 polling     36918           0         0   3.47e3          0.00
>>
>> Thanks,
>> Akash
>>
>> On Tue, Sep 28, 2021 at 4:09 PM Akash S R via lists.fd.io
>> <akashsr.akashsr=gmail....@lists.fd.io> wrote:
>>
>>> Thanks mate, that makes sense. Will check it out and get back.
>>>
>>>
>>> Thanks,
>>> Akash
>>>
>>> On Tue, Sep 28, 2021, 16:00 Benoit Ganne (bganne) <bga...@cisco.com>
>>> wrote:
>>>
>>>> Rx-miss means the NIC had to drop packets on RX because the rx queue was
>>>> full, usually because VPP cannot keep up with the incoming packet rate.
>>>> You can check this in the output of 'show run': if you see a big average
>>>> vector size (100 or more), it means VPP is busy.
>>>> To improve that, you must increase the number of VPP workers (and rx
>>>> queues).
>>>> Rx-misses can also be caused by traffic spikes. In that case you can
>>>> increase the rx queue size to absorb the bursts.
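>>>>
>>>> You can also check how the rx queues are spread across workers, e.g.:
>>>>
>>>>   vppctl show interface rx-placement
>>>>
>>>> and look at 'vppctl show hardware-interfaces' for the per-interface rx
>>>> counters.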
>>>>
>>>> Best
>>>> ben
>>>>
>>>> > -----Original Message-----
>>>> > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Akash S R
>>>> > Sent: Tuesday, 28 September 2021 11:43
>>>> > To: vpp-dev <vpp-dev@lists.fd.io>
>>>> > Subject: [vpp-dev] rx-miss while sending packets to Interface (IMPORTANT)
>>>> >
>>>> > Hello mates,
>>>> >
>>>> > It's been a long time since I raised a query out here to you guys :)
>>>> > (Nah, please ignore this junk.)
>>>> >
>>>> > I have a question on rx-miss. Whenever we fire packets at a high rate,
>>>> > say 1 Gbps or more, I get an rx-miss count on the interface. I referred
>>>> > to this thread from the VPP list:
>>>> > https://lists.fd.io/g/vpp-dev/topic/78179815#17985
>>>> >
>>>> > TenGigabitEthernet4/0/1           3      up          1500/0/0/0
>>>> >     rx packets        8622948
>>>> >     rx bytes       1103737344
>>>> >     ip4               8622948
>>>> >     rx-miss         283721416
>>>> >
>>>> > But buffers are available and the allocated count is low. I reduced the
>>>> > queue size to 256 and the threads to 4, but the issue is still not
>>>> > resolved.
>>>> > DBGvpp# show dpdk buffer
>>>> > name="vpp pool 0"  available = 13984  allocated = 2816  total = 16800
>>>> >
>>>> > May I know the reason for this rx-miss, and is there any resolution for
>>>> > this issue?
>>>> >
>>>> > Thanks,
>>>> > Akash
>>>>
>>>>
>>>
>>>
>>>
>> 
>>
>>