Re: [vpp-dev] rx misses observed on dpdk interface

2020-05-07 Thread Damjan Marion via lists.fd.io

ok, this is quite common to see. What is your question / concern?

— 
Damjan
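
As background for this thread: rx misses usually mean the thread polling the queue cannot keep up. A minimal sketch of the vppctl commands typically used to confirm that (exact command output varies by VPP version):

  vpp# show threads               # which cores the main and worker threads are pinned to
  vpp# show runtime               # per-node vectors/call; values near 256 mean the thread is saturated
  vpp# show hardware-interfaces   # per-interface counters, including "rx missed"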


> On 7 May 2020, at 04:42, zhangt...@inspur.com wrote:
> 
> Hi all;
> I am using an Intel 82599 (10G) NIC, running VPP v20.01-release at a line 
> rate of 10G with 128-byte packets, and I am observing Rx misses on the 
> interfaces.
> 
> The related VPP configuration is as follows:
> vpp# show hardware-interfaces 
>   Name                       Idx   Link  Hardware
> TenGigabitEthernet3/0/1       1     up   TenGigabitEthernet3/0/1
>   Link speed: 10 Gbps
>   Ethernet address 6c:92:bf:4d:e2:fb
>   Intel 82599
> carrier up full duplex mtu 9206 
> flags: admin-up pmd maybe-multiseg subif tx-offload intel-phdr-cksum 
> rx-ip4-cksum
> rx: queues 1 (max 128), desc 4096 (min 32 max 4096 align 8)
> tx: queues 1 (max 64), desc 4096 (min 32 max 4096 align 8)
> pci: device 8086:15ab subsystem 8086: address :03:00.01 numa 0
> max rx packet len: 15872
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum 
> outer-ipv4-cksum 
>vlan-filter vlan-extend jumbo-frame scatter keep-crc 
> rx offload active: ipv4-cksum jumbo-frame scatter 
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum 
>tcp-tso outer-ipv4-cksum multi-segs 
> tx offload active: udp-cksum tcp-cksum tcp-tso multi-segs 
> rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex 
> ipv6-tcp 
>ipv6-udp ipv6-ex ipv6 
> rss active:        none
> tx burst function: ixgbe_xmit_pkts
> rx burst function: ixgbe_recv_scattered_pkts_vec
>
> tx frames ok                                      61218472
> tx bytes ok                                     7591090528
> rx frames ok                                      61218472
> rx bytes ok                                     7591090528
> rx missed                                            59536
> extended stats:
>   rx good packets   61218472
>   tx good packets   61218472
>   rx good bytes   7591090528
>   tx good bytes   7591090528
>   rx missed errors 59536
>   rx q0packets  61218472
>   rx q0bytes  7591090528
>   tx q0packets  61218472
>   tx q0bytes  7591090528
>   rx size 128 to 255 packets                      66097941
>   rx total packets  66097927
>   rx total bytes  8196143588
>   tx total packets  61218472
>   tx size 128 to 255 packets                      61218472
>   rx l3 l4 xsum error  101351297
>   out pkts untagged 61218472
>   rx priority0 dropped 59536
> local0                        0    down  local0
>   Link speed: unknown
>   local
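The output above shows a single rx queue ("rx: queues 1") and "rss active: none", so all
traffic lands on one queue polled by one thread. As a hedged sketch, not taken from this
report, a startup.conf dpdk stanza that spreads the load over more queues could look like
this (the PCI address and counts are illustrative):

  dpdk {
    dev 0000:03:00.1 {
      num-rx-queues 2
      num-rx-desc 4096
    }
  }

Extra rx queues only help if there are enough worker threads to poll them; see the cpu
section quoted below.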
> 
> cpu {
>  ## In the VPP there is one main thread and optionally the user can create 
> worker(s)
>  ## The main thread and worker thread(s) can be pinned to CPU core(s) 
> manually or automatically
>
>  ## Manual pinning of thread(s) to CPU core(s)
>
>  ## Set logical CPU core where main thread runs, if main core is not set
>  ## VPP will use core 1 if available
>  #main-core 1
>
>  ## Set logical CPU core(s) where worker threads are running
>  #corelist-workers 2-3,18-19
>  #corelist-workers 4-3,5-7
>
>  ## Automatic pinning of thread(s) to CPU core(s)
>
>  ## Sets number of CPU core(s) to be skipped (1 ... N-1)
>  ## Skipped CPU core(s) are not used for pinning main thread and working 
> thread(s).
>  ## The main thread is automatically pinned to the first available CPU core 
> and worker(s)
>  ## are pinned to next free CPU core(s) after core assigned to main thread
>  #skip-cores 4
>
>  ## Specify a number of workers to be created
>  ## Workers are pinned to N consecutive CPU cores while skipping "skip-cores" 
> CPU core(s)
>  ## and main thread's CPU core
>  # workers 4
>
>  ## Set scheduling policy and priority of main and worker threads
>
>  ## Scheduling policy options are: other (SCHED_OTHER), batch (SCHED_BATCH)
>  ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
>  scheduler-policy fifo
>
>  ## Scheduling priority is used only for "real-time policies (fifo and rr),
>  ## and has to be in the range of priorities supported for a particular policy
>  scheduler-priority 50
> }
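As a reference point, a minimal sketch of what an uncommented pinning setup might look
like (core numbers are illustrative and must match the host topology):

  cpu {
    main-core 1
    corelist-workers 2-3
  }

With workers configured, VPP polls the rx queues from the worker threads instead of the
main thread, which is usually what keeps rx misses down at line rate.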
>
> buffers {
>  ## Increase number of buffers allocated, needed only in scenarios with
>  ## large number of interfaces and worker threads. Value is per numa node.
>  ## Default is 16384 (8192 if running unprivileged)
>   
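The buffers section is cut off above; for completeness, a hedged example of the option the
comment describes (the value is illustrative):

  buffers {
    buffers-per-numa 128000
  }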

Re: [vpp-dev] rx misses observed on dpdk interface

2017-11-29 Thread Abhilash Lakshmeshwar
Thanks Damjan.

Yes, correct, there were other jobs on the core too. After setting the CPU
affinity to specific cores, it is working fine.

Thanks,
Abhilash

On Wed, Nov 29, 2017 at 5:17 PM, Damjan Marion wrote:

>
> Are you sure that the kernel is not de-scheduling VPP and running something
> else on the same core?
> rx-miss typically happens when VPP is not able to service the rx queue (either
> because it is too busy or because the kernel is not giving it the opportunity
> to do the job).
> In this case I bet on the second...
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] rx misses observed on dpdk interface

2017-11-29 Thread Damjan Marion

Are you sure that the kernel is not de-scheduling VPP and running something else on
the same core?
rx-miss typically happens when VPP is not able to service the rx queue (either
because it is too busy or because the kernel is not giving it the opportunity to do
the job).
In this case I bet on the second...
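
A hedged way to check this hypothesis from a shell on the host (core number 2 is only an
example):

  # list all threads currently scheduled on core 2; ideally only the VPP worker shows up
  ps -eLo psr,pid,comm | awk '$1 == 2'

If other busy threads share the core, pinning VPP via the cpu section of startup.conf
(main-core / corelist-workers) and isolating those cores with the isolcpus kernel boot
parameter is the usual remedy.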

> On 29 Nov 2017, at 10:35, Abhilash Lakshmeshwar wrote:
> 
> Hello,
> 
> I am using an Intel X710/XL710 (10G) NIC, running VPP 17.07 at a line rate of 
> 1G with 1500-byte packets, using 1G huge pages. I am observing Rx misses on 
> the interfaces.
> 
> Initially I tried the default configuration with a single core, with the uio 
> driver as igb_uio and also uio_pci_generic. I also tried a couple of CPU 
> parameters like skip-cores and increasing the Rx queues for the interface, but 
> there was no improvement.
> 
> vpp# show int TenGigabitEthernet1/0/0
>   Name                      Idx   State   Counter          Count
> TenGigabitEthernet1/0/0      1     up     rx packets       1100478
>                                           rx bytes         1646315088
>                                           rx-miss          2842
> 
> Below is my startup config.
> 
> unix {
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
> }
> 
> api-trace {
>   on
> }
> 
> api-segment {
>   gid vpp
> }
> 
> dpdk {
>   dev :01:00.0
>   dev :01:00.1
>   dev :01:00.2
>   dev :01:00.3
> 
>   uio-driver uio_pci_generic
>   num-mbufs 16384
> }
> 
> heapsize 3G
> 
> Could you please let me know if I am missing anything?
> 
> Thanks,
> Abhilash

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev