Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-24 Thread Raj Kumar
Hi Florin,
After fixing the UDP checksum offload issue and using a 64K tx buffer, I
am able to send 35 Gbps (half duplex).
In the DPDK code (./plugins/dpdk/device/init.c), the DEV_TX_OFFLOAD_TCP_CKSUM
and DEV_TX_OFFLOAD_UDP_CKSUM offload bits were not being set for the MLX5
PMD.
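
For reference, those are standard DPDK TX offload capability bits. The sketch
below is only a generic illustration of how a DPDK application requests them
when configuring a port; it is not the actual VPP init.c change, the helper
name is made up for the example, and it uses the pre-21.11 DEV_TX_OFFLOAD_*
flag names that were current at the time of this thread.

/* Generic DPDK sketch (NOT the actual VPP patch): request TCP/UDP TX
 * checksum offload when configuring a port, if the PMD advertises it. */
#include <string.h>
#include <rte_ethdev.h>

static int
configure_port_with_l4_csum (uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
  struct rte_eth_dev_info dev_info;
  struct rte_eth_conf port_conf;

  memset (&port_conf, 0, sizeof (port_conf));
  rte_eth_dev_info_get (port_id, &dev_info);

  /* only request what the driver reports as supported */
  if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM)
    port_conf.txmode.offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
  if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM)
    port_conf.txmode.offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;

  return rte_eth_dev_configure (port_id, nb_rxq, nb_txq, &port_conf);
}
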
In the UDP tx application I am using vppcom_session_write() to write to the
session, and the write length is the same as the buffer size (64K).
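
For readers unfamiliar with VCL, a minimal sketch of a UDP sender along these
lines is shown below: one vppcom session and 64K writes with
vppcom_session_write(). The destination address and port are placeholders,
and the vppcom_endpt_t field usage follows vppcom.h of this vintage, so treat
it as a sketch rather than the exact test application.

/* Minimal VCL UDP sender sketch (not the exact test app). */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <vcl/vppcom.h>

#define TX_BUF_SZ (64 * 1024)

int
main (void)
{
  static char buf[TX_BUF_SZ];
  struct in_addr dst_ip;
  vppcom_endpt_t ep;
  int sh;

  if (vppcom_app_create ("udp_tx_test") != VPPCOM_OK)
    return -1;

  sh = vppcom_session_create (VPPCOM_PROTO_UDP, 0 /* blocking */);

  inet_pton (AF_INET, "192.168.1.2", &dst_ip);  /* placeholder receiver */
  memset (&ep, 0, sizeof (ep));
  ep.is_ip4 = 1;
  ep.ip = (uint8_t *) &dst_ip;
  ep.port = htons (5000);                       /* placeholder port */

  if (vppcom_session_connect (sh, &ep) != VPPCOM_OK)
    return -1;

  /* stream datagrams; write length equals the buffer size (64K) */
  for (;;)
    if (vppcom_session_write (sh, buf, TX_BUF_SZ) < 0)
      break;

  vppcom_session_close (sh);
  vppcom_app_destroy ();
  return 0;
}

As usual for VCL apps, it has to be pointed at a vcl.conf (e.g. via the
VCL_CONFIG environment variable) so it can attach to VPP.
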

Btw, I ran all the tests with the patch you provided
(https://gerrit.fd.io/r/c/vpp/+/24462).

If I run a single UDP tx connection, the throughput is 35 Gbps. But when I
start additional UDP rx connections (20 Gbps), the tx throughput drops to
12 Gbps.
Even if I run 2 UDP tx connections, I am still not able to scale up the
throughput; the overall throughput stays the same.
I first tried this test with 4 worker threads and then with 1 worker
thread.

I have the following 2 points:
1) With my UDP tx test application, I am getting this throughput after
using a 64K tx buffer. But in the actual product I have to send variable-size
UDP packets (max length 9000 bytes). That means the maximum tx buffer
size would be 9K, and with that buffer size I am getting 15 Gbps, which is
fine if I can somehow scale it up by running multiple applications. But
that does not seem to work with UDP (I am not using udpc).

2) My target is to achieve at least 30 Gbps rx and 30 Gbps tx UDP
throughput on one NUMA node. I tried running multiple VPP instances on
VFs (SR-IOV), and I can scale up the throughput (rx and tx) with the
number of VPP instances.
Here are the throughput results with VFs:
1 VPP instance (15 Gbps rx and 15 Gbps tx)
2 VPP instances (30 Gbps rx and 30 Gbps tx)
3 VPP instances (45 Gbps rx and 35 Gbps tx)

I have 2 NUMA nodes on the server, so I am expecting to get 60 Gbps rx and
60 Gbps tx total throughput.

Btw, I also tested TCP without VFs. It seems to scale up properly, as the
connections are distributed across different threads.

vpp# sh thread
ID  Name      Type     LWP    Sched Policy (Priority)  lcore  Core  Socket  State
0   vpp_main           22181  other (0)                1      0     0
1   vpp_wk_0  workers  22183  other (0)                2      2     0
2   vpp_wk_1  workers  22184  other (0)                3      3     0
3   vpp_wk_2  workers  22185  other (0)                4      4     0
4   vpp_wk_3  workers  22186  other (0)                5      8     0



4 worker threads

Iperf3 TCP tests - 8000-byte packets

1 Connection:

Rx only

18 Gbps

vpp# sh session verbose 1
Connection                                        State        Rx-f  Tx-f
[0:0][T] fd0d:edc4::2001::203:6669->:::0          LISTEN       0     0
Thread 0: active sessions 1

Connection                                        State        Rx-f  Tx-f
[1:0][T] fd0d:edc4::2001::203:6669->fd0d:edc4:    ESTABLISHED  0     0
Thread 1: active sessions 1
Thread 2: no sessions
Thread 3: no sessions

Connection                                        State        Rx-f  Tx-f
[4:0][T] fd0d:edc4::2001::203:6669->fd0d:edc4:    ESTABLISHED  0     0
Thread 4: active sessions 1



2 connections:



Rx only

32 Gbps

vpp# sh session verbose 1
Connection                                        State        Rx-f  Tx-f
[0:0][T] fd0d:edc4::2001::203:6669->:::0          LISTEN       0     0
[0:1][T] fd0d:edc4::2001::203:6679->:::0          LISTEN       0     0
Thread 0: active sessions 2

Connection                                        State        Rx-f  Tx-f
[1:0][T] fd0d:edc4::2001::203:6669->fd0d:edc4:    ESTABLISHED  0     0
Thread 1: active sessions 1
Thread 2: no sessions
Thread 3: no sessions

Connection                                        State        Rx-f  Tx-f
[4:0][T] fd0d:edc4::2001::203:6669->fd0d:edc4:    ESTABLISHED  0     0
[4:1][T] fd0d:edc4::2001::203:6679->fd0d:edc4:    ESTABLISHED  0     0
[4:2][T] fd0d:edc4::2001::203:6679->fd0d:edc4:    ESTABLISHED  0     0
Thread 4: active sessions 3

3 connections:

Rx only

43 Gbps

vpp# sh session verbose 1
Connection                                        State        Rx-f  Tx-f
[0:0][T] fd0d:edc4::2001::203:6669->:::0          LISTEN       0     0
[0:1][T] fd0d:edc4::2001::203:6679->:::0          LISTEN       0     0
[0:2][T] fd0d:edc4::2001::203:6689->:::0          LISTEN       0     0
Thread 0: active sessions 3

Connection                                        State        Rx-f  Tx-f
[1:0][T] fd0d:edc4::2001::203:6669->fd0d:edc4:    ESTABLISHED  0     0
Thread 1: active sessions 1
Thread 2: no sessions

Connection                                        State        Rx-f  Tx-f
[3:0][T] fd0d:edc4::2001::203:6689->fd0d:edc4:    ESTABLISHED  0     0
Thread 3: active sessions 1

Connection                                        State        Rx-f  Tx-f
[4:0][T] 

Re: [vpp-dev] issue with ARP and classify packet forwarding #classify

2020-01-24 Thread Balaji Venkatraman via Lists.Fd.Io
Hi Po,

Could you ensure the memifs and the ACLs you have configured are consistent with
the tables you have? Are they all under 100?

--
Regards,
Balaji.


From:  on behalf of Po 
Date: Thursday, January 23, 2020 at 11:14 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] issue with ARP and classify packet forwarding #classify

Hi,

I would like to classify the packet and forward to desired destination
- Classify hits the rules
- ARP proxy enabled

I expect the acl hits to go to [ip4-arp] and then be handed over to
[memif0/3-output], but the packets end up dropped with "ip4-arp: no source
address for ARP request".

Could any expert share what is missing from the debug CLI?

Thank you.
Po



Topology: (diagram attached to the original message; not reproduced here)

Commands:
vpp# create interface memif id 2 slave
vpp# create interface memif id 3 slave
vpp# set interface state memif0/2 up
vpp# set interface state memif0/3 up
vpp# classify table mask hex 00ff buckets 16 skip 1
vpp# classify session opaque-index 0 table-index 0 match hex 00060a0a02010a0a0202 action set-ip4-fib-id 100
vpp# classify session opaque-index 1 table-index 0 match hex 00060a0a02010a0a0203 action set-ip4-fib-id 200
vpp# ip route add 10.10.2.2/32 table 100 via memif0/3
vpp# ip route add 10.10.2.0/24 table 100 via memif0/2
vpp# ip route add 10.10.2.0/24 via memif0/2
vpp# set int input acl intfc memif0/2 ip4-table 0
vpp# set int ip address memif0/2 10.10.2.0/24
vpp# set ip arp proxy 10.10.2.1 - 10.10.2.11
vpp# set ip arp fib-id 100 proxy 10.10.2.1 - 10.10.2.11
vpp# set interface proxy-arp memif0/2 enable
vpp# set interface proxy-arp memif0/3 enable
vpp#


vpp# show classify table

[6]: heap offset 1200, elts 2, normal
0: [1200]: next_index -1 advance 0 opaque 0 action 1 metadata 1
k: 00060a0a02010a0a0202
hits 3, last_heard 494.07


vpp# show vlib graph ip4-arp
           Name                 Next                    Previous
           ip4-arp              error-drop [0]          nsh-adj-incomplete
                                memif0/3-output [1]     lookup-ip4-src
                                                        lookup-ip4-dst-itf
                                                        lookup-ip4-dst
                                                        mpls-adj-incomplete
                                                        tcp4-output
                                                        bfd-udp-echo4-input
                                                        bfd-udp4-input
                                                        ip4-punt-redirect
                                                        ip4-load-balance
                                                        ip4-lookup
                                                        ip4-classify
vpp# show ip fib index 1
ipv4-VRF:100, fib_index:1, flow hash:[src dst sport dport proto ] 
locks:[src:classify:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:7 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:8 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.10.2.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:20 buckets:1 uRPF:18 to:[0:0]]
[0] [@4]: ipv4-glean: memif0/2: mtu:9000 02fea803ab310806
10.10.2.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:22 to:[0:0]]
[0] [@2]: dpo-receive
10.10.2.2/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:19 buckets:1 uRPF:17 to:[3:180]]
[0] [@3]: arp-ipv4: via 10.10.2.2 memif0/3
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:10 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:9 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:11 to:[0:0]]
[0] [@0]: dpo-drop ip4



Trace
Packet 12

00:08:13:344609: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
00:08:13:344620: ethernet-input
  IP4: b2:5f:84:5e:0b:43 -> 02:fe:a8:03:ab:31
00:08:13:344628: ip4-input
  TCP: 10.10.2.1 -> 10.10.2.2
tos 0x00, ttl 64, length 60, checksum 0xbb1c
fragment id 0x6789, flags DONT_FRAGMENT
  TCP: 59057 -> 12345
seq. 0x692e832e ack 0x
flags 0x02 SYN, tcp header: 40 bytes
window 29200, checksum 0xf2cc
00:08:13:344634: ip4-inacl
  INACL: sw_if_index 1, next_index 1, table 0, offset 1200
00:08:13:344639: ip4-lookup
  fib 1 dpo-idx 0 flow hash: 0x
  TCP: 10.10.2.1 -> 10.10.2.2
tos 0x00, ttl 64, length 60, checksum 

[vpp-dev] Coverity run FAILED as of 2020-01-24 14:00:15 UTC

2020-01-24 Thread Noreply Jenkins
Coverity run failed today.

The current number of outstanding issues is 3
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin

2020-01-24 Thread siddarth rai
Hi,

Here is the output:

0040-004ca000 r-xp  fd:01 188780472
 /root/vpp/build-root/install-vpp-native/vpp/bin/vpp
006c9000-006ca000 r--p 000c9000 fd:01 188780472
 /root/vpp/build-root/install-vpp-native/vpp/bin/vpp
006ca000-006cb000 rw-p 000ca000 fd:01 188780472
 /root/vpp/build-root/install-vpp-native/vpp/bin/vpp
006cb000-006cc000 rw-p  00:00 0
01b4b000-01c8b000 rw-p  00:00 0
 [heap]
1-10002f000 rw-p  00:00 0
130008000-130029000 rw-s  00:13 779982
/dev/shm/global_vm
130029000-13104a000 rw-s  00:13 779983
/dev/shm/vpe-api
13104a000-134008000 rw-s 01042000 00:13 779982
/dev/shm/global_vm
14000-94000 r--p  00:00 0
94000-940001000 rw-p  00:00 0
ac0001000-ac0002000 rw-p  00:00 0
c40002000-c40003000 rw-p  00:00 0
dc0003000-dc0064000 rw-p  00:00 0
dc0c64000-dc0cc5000 rw-p  00:00 0
dc18c5000-dc1926000 rw-p  00:00 0
dc2526000-dc2587000 rw-p  00:00 0
10-100280 rw-s  00:27 779981
/buffers-numa-0
100280-14 ---p  00:00 0
7ef98864f000-7efa3f80 rw-p  00:00 0
7efa3f80-7efe3f80 r--p  00:00 0
7efe3fa0-7f023fa0 r--p  00:00 0
7f023fc0-7f063fc0 r--p  00:00 0
7f063fe0-7f064000 rw-p  00:0e 780006
/anon_hugepage (deleted)
7f064000-7f064020 rw-p  00:0e 780008
/anon_hugepage (deleted)
7f064020-7f064040 rw-p  00:0e 780009
/anon_hugepage (deleted)
7f064040-7f0a3fe0 r--p  00:00 0
7f0a4000-7f124000 r--p  00:00 0
7f128000-7f1a8000 r--p  00:00 0
7f1a8c00-7f1a8c021000 rw-p  00:00 0
7f1a8c021000-7f1a9000 ---p  00:00 0
7f1a93ff4000-7f1a9800 rw-p  00:00 0
7f1a9800-7f1a98021000 rw-p  00:00 0
7f1a98021000-7f1a9c00 ---p  00:00 0
7f1a9c00-7f1a9c021000 rw-p  00:00 0
7f1a9c021000-7f1aa000 ---p  00:00 0
7f1aa000-7f1aa0021000 rw-p  00:00 0
7f1aa0021000-7f1aa400 ---p  00:00 0
7f1aa49b8000-7f1aa52da000 rw-p  00:00 0
7f1aa52da000-7f1aa52db000 ---p  00:00 0
7f1aa52db000-7f1ac000 rw-p  00:00 0
7f1ac000-7f22c000 r--p  00:00 0
7f22c000-7f22c0001000 rw-s febd2000 00:12 11698
 /sys/devices/pci:00/:00:04.0/resource1
7f22c0001000-7f22c0005000 rw-s fe004000 00:12 11699
 /sys/devices/pci:00/:00:04.0/resource4
7f22c043a000-7f22c1a59000 rw-p  00:00 0
7f22c1a59000-7f22c1a5c000 r-xp  fd:01 138716319
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so
7f22c1a5c000-7f22c1c5b000 ---p 3000 fd:01 138716319
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so
7f22c1c5b000-7f22c1c5c000 r--p 2000 fd:01 138716319
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so
7f22c1c5c000-7f22c1c5d000 rw-p 3000 fd:01 138716319
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/vmxnet3_test_plugin.so
7f22c1c5d000-7f22c1c5f000 r-xp  fd:01 138561791
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so
7f22c1c5f000-7f22c1e5e000 ---p 2000 fd:01 138561791
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so
7f22c1e5e000-7f22c1e5f000 r--p 1000 fd:01 138561791
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so
7f22c1e5f000-7f22c1e6 rw-p 2000 fd:01 138561791
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/stn_test_plugin.so
7f22c1e6-7f22c1e63000 r-xp  fd:01 138561788
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so
7f22c1e63000-7f22c2062000 ---p 3000 fd:01 138561788
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so
7f22c2062000-7f22c2063000 r--p 2000 fd:01 138561788
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so
7f22c2063000-7f22c2064000 rw-p 3000 fd:01 138561788
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsim_test_plugin.so
7f22c2064000-7f22c2067000 r-xp  fd:01 138561787
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so
7f22c2067000-7f22c2266000 ---p 3000 fd:01 138561787
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so
7f22c2266000-7f22c2267000 r--p 2000 fd:01 138561787
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so
7f22c2267000-7f22c2268000 rw-p 3000 fd:01 138561787
 
/root/vpp/build-root/install-vpp-native/vpp/lib/vpp_api_test_plugins/nsh_test_plugin.so
7f22c2268000-7f22c2271000 r-xp  fd:01 138561786
 

[vpp-dev] VPP 20.01 draft release notes available for your edits

2020-01-24 Thread Andrew Yourtchenko
Hi all,

https://gerrit.fd.io/r/#/c/vpp/+/24505/ is the new change that I have
pushed with the draft 20.01
release notes.

I would like to request that your edits be in BEFORE 23:59 UTC on Tuesday,
28 January 2020.

I will do some final tweaking and clean up the formatting quirks after
that in preparation for the Wednesday release.

You will notice two things different from our previous formats of the
release notes:

1) mentions of a feature include the commit ID of that feature's commit -
this will make it easier to find the respective commits from the release
notes.

2) the bullet points of the features are sorted alphabetically.

You will also notice that the wording of the release notes might seem
familiar - this is because I used the existing classification and
content information from your commits and MAINTAINERS, to
*automatically* generate the contents.

Thanks a lot for your help in keeping this information precise and up to date!

--a (your friendly 20.01 release manager)


Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin

2020-01-24 Thread Damjan Marion via Lists.Fd.Io

Can you capture "cat /proc/$(pgrep vpp)/maps" and put it into pastebin?
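
As a side note, a small standalone helper can total the ranges in a maps dump
like the one posted earlier in this thread and show which mappings dominate
the VSZ. This is not VPP code; the program name and 1 GB threshold are just
for illustration.

/* Sum the virtual address ranges in a /proc/<pid>/maps dump.
 * Usage (hypothetical): ./vsz_sum /proc/$(pgrep vpp)/maps */
#include <stdio.h>

int
main (int argc, char **argv)
{
  if (argc < 2)
    {
      fprintf (stderr, "usage: %s <maps-file>\n", argv[0]);
      return 1;
    }
  FILE *f = fopen (argv[1], "r");
  if (!f)
    {
      perror ("fopen");
      return 1;
    }
  char line[1024];
  unsigned long long total = 0;
  while (fgets (line, sizeof (line), f))
    {
      unsigned long long start, end;
      /* each line starts with "start-end perms offset dev inode [path]" */
      if (sscanf (line, "%llx-%llx", &start, &end) == 2 && end > start)
        {
          unsigned long long len = end - start;
          total += len;
          if (len >= (1ULL << 30))      /* print mappings of 1 GB or more */
            printf ("%6llu MB  %s", len >> 20, line);
        }
    }
  fclose (f);
  printf ("total VSZ: %llu MB\n", total >> 20);
  return 0;
}
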

— 
Damjan

> On 24 Jan 2020, at 09:07, siddarth rai  wrote:
> 
> Hi, 
> 
> Thanks a lot for the tip.
> 
> I would still like to know what is causing the DPDK plugin to take up so much
> VSZ.
> If anyone can give me any pointers, I will try to debug it further and
> hopefully control the VSZ.
> 
> Regards,
> Siddarth
> 
> On Tue, Dec 17, 2019 at 2:57 PM Benoit Ganne (bganne) wrote:
> Hi Siddarth,
> 
> > The issue here is that huge core files are generated, which take up a lot
> > of space and the system down time is huge too.
> > Even if I compress it, I will have to de-compress wherever I try to debug
> > it and the disk space requirement will be huge.
> 
> I know this will not fix your issue, however that might help:
>  - when the core file is generated, if the VA is not in use it should not 
> take space on the disk because it should be stored as a sparse file. Here is 
> an example I have locally (note the 117M allocated on disk vs the 2.6G 
> "virtual" size):
> bganne@ubuntu1804:~$ ls -lsh core
> 117M -rw-rw-r-- 1 bganne bganne 2.6G Nov 12 15:34 core
> Also, if you do not compress it at generation time (via 
> /proc/sys/kernel/core_pattern or similar) it should not impact the downtime 
> as it is simply not written nor processed
>  - if you compress/decompress it with gzip, it will not produce a sparse file 
> but you can 're-sparse' it using eg. dd:
> bganne@ubuntu1804:~$ zcat core.gz | dd conv=sparse of=core
> 
> Ben



Re: [vpp-dev] VPP VSZ shoots to 200GB because of DPDK plugin

2020-01-24 Thread siddarth rai
Hi,

Thanks a lot for the tip.

I would still like to know what is causing the DPDK plugin to take up so much
VSZ.
If anyone can give me any pointers, I will try to debug it further and
hopefully control the VSZ.

Regards,
Siddarth

On Tue, Dec 17, 2019 at 2:57 PM Benoit Ganne (bganne) 
wrote:

> Hi Siddarth,
>
> > The issue here is that huge core files are generated, which take up a lot
> > of space and the system down time is huge too.
> > Even if I compress it, I will have to de-compress wherever I try to debug
> > it and the disk space requirement will be huge.
>
> I know this will not fix your issue, however that might help:
>  - when the core file is generated, if the VA is not in use it should not
> take space on the disk because it should be stored as a sparse file. Here
> is an example I have locally (note the 117M allocated on disk vs the 2.6G
> "virtual" size):
> bganne@ubuntu1804:~$ ls -lsh core
> 117M -rw-rw-r-- 1 bganne bganne 2.6G Nov 12 15:34 core
> Also, if you do not compress it at generation time (via
> /proc/sys/kernel/core_pattern or similar) it should not impact the downtime
> as it is simply not written nor processed
>  - if you compress/decompress it with gzip, it will not produce a sparse
> file but you can 're-sparse' it using eg. dd:
> bganne@ubuntu1804:~$ zcat core.gz | dd conv=sparse of=core
>
> Ben
>
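
To double-check the sparseness Ben describes (117M allocated on disk vs the
2.6G apparent size), the same comparison that 'ls -lsh' makes can be done
directly with stat(): st_size is the apparent size, while st_blocks counts
the 512-byte blocks actually allocated. A minimal sketch:

/* Compare a core file's apparent size with the space allocated on disk. */
#include <stdio.h>
#include <sys/stat.h>

int
main (int argc, char **argv)
{
  struct stat st;
  if (argc < 2)
    {
      fprintf (stderr, "usage: %s <core-file>\n", argv[0]);
      return 1;
    }
  if (stat (argv[1], &st) != 0)
    {
      perror ("stat");
      return 1;
    }
  long long apparent = st.st_size;
  long long on_disk = (long long) st.st_blocks * 512; /* st_blocks is in 512-byte units */
  printf ("%s: apparent %lld MB, allocated on disk %lld MB%s\n",
          argv[1], apparent >> 20, on_disk >> 20,
          on_disk < apparent ? " (sparse)" : "");
  return 0;
}
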