Re: (Q about fixing endianness bugs in handlers) Re: [vpp-dev] Proposal for VPP binary API stability

2020-05-18 Thread Ole Troan
> API messages in network byte order made sense 10 years ago when I worked with 
> a mixed x86_64 / ppc32 system. As Damjan points out, API interoperability 
> between big-endian and little-endian systems is a boutique use-case these 
> days.
>  
> Timing is key. We won’t be able to cherry-pick API message handler fixes 
> across an endian-order flag-day. If we decide to switch to native byte order, 
> we’d better switch right before we pull our next LTS release.

Here is a draft patch that moves endian conversion into the API infrastructure:
https://gerrit.fd.io/r/c/vpp/+/27119

If we first move all the API handlers to native endian and leave it to the API
infrastructure to do endian conversion, we can even allow clients to negotiate
what endianness they want.
We can make that move without affecting API clients, as the on-the-wire encoding
will stay the same.

Regarding that patch: I might want to hide the endian conversion inside
vl_api_send_msg() instead of in the REPLY macro(s).
There is some mess to tidy up; for example, client_index is kept in native endian
for shared-memory clients, but not for socket clients.
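
To illustrate the idea with a small, self-contained toy (none of the names below
come from the actual patch; it only shows the principle): handlers fill the
message in host byte order, and a single boundary function converts only when the
peer has negotiated network byte order.

  #include <arpa/inet.h>
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Toy stand-in for an API reply: handlers fill it in host byte order. */
  typedef struct
  {
    uint32_t context;
    uint32_t retval;
  } toy_reply_t;

  /* Boundary conversion: swap only if the peer negotiated network order.
   * In VPP this decision would live in the API infrastructure (e.g. around
   * vl_api_send_msg ()), not in every handler / REPLY macro. */
  static void
  toy_send (toy_reply_t * r, bool peer_wants_net_order)
  {
    if (peer_wants_net_order)
      {
        r->context = htonl (r->context);
        r->retval = htonl (r->retval);
      }
    /* ... hand the buffer to the transport here ... */
  }

  int
  main (void)
  {
    toy_reply_t r = { .context = 42, .retval = 0 };
    toy_send (&r, true);
    printf ("context on the wire: 0x%08x\n", (unsigned) r.context);
    return 0;
  }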

Cheers,
Ole



[vpp-dev] Troubleshooting IPSec in VPP

2020-05-18 Thread Muthu Raj
Hello,

I am trying out IPSec on VPP, and used the wiki[1] to create an IPSec tunnel
between an AWS instance (remote) and my home. The tunnel was established
successfully, and when pinging an IP on the remote side, the ICMP request flows
over the tunnel, is seen by the remote box, and is answered as well. I also see
that the packets do indeed end up reaching my home VPP instance - however, they
do not reach the last hop. When I run show int, the ipip0 interface does not show
the rx counter at all, and when running `show errors` I do not see a counter for
the `ipsec4-tun-input` node either. Neither do I see the `esp4-encrypt-tun`
counter.

My preliminary guess is that it has something to do with the fact that on AWS we
cannot see the public IP inside the instance, so it cannot be assigned to the
interface itself; the ESP packets are therefore probably generated with the
private IP address corresponding to the public IP as their source. With
strongSwan, we specify an explicit source IP parameter, as in the snippet below:

  left=1.2.3.4
  leftid=1.2.3.4
  leftsubnet=172.16.0.0/16
  right=4.5.6.7 #AWS public IP
  rightsourceip=10.6.82.34 # AWS private IP for that public IP, seen inside the instance
  rightsubnet=10.6.0.0/16

I am attaching the ikev2 sa as seen from both sides.
How can I fix this issue?
Any help is very much appreciated.

Thanks in advance.
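
For what it's worth, a packet trace on the home side should show where the
inbound ESP packets end up (assuming dpdk-input is the input node there):

  vpp# clear trace
  vpp# trace add dpdk-input 50
  ... ping from the remote side ...
  vpp# show trace

That should make it clear whether the packets reach the ipsec4-tun-input node or
are dropped earlier in the graph.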


This is from the home side. I've changed the IPs on the home and remote sides.
The private IP addresses have been left as they are.

vpp# show ikev2 sa
 iip 1.2.3.4 ispi d8607eea97ac12a9 rip 4.5.6.7 rspi d6726c2768b2420
 encr:aes-cbc-256 prf:hmac-sha2-256 integ:sha1-96 dh-group:modp-2048
  nonce i:eb01e6ef107ba7018679bd239e25d4557f2465323caf0d3213b453ca59af3deb
r:1d9cb8f11cd69d4b2f73b182028d8aa8854a49bb3c99797f3994575c2994154c
  SK_d  1eee29fff1ff234f1452006a79a7e27787e83331b29954300a70a9d6061f2fde
  SK_a  i:7f86a547c2d9cb2a4035e4926ca6e23c745c6c8c
r:04a71f139f2076058ceafb9be73eb359e43bc308
  SK_e  i:281c47cd100f69a3425031667150d3054124ff887d77a4a1f43fd7dece7486fc
r:3f72f8e973ee62962dc9dffd64d80af9e83993acbcd3690adf85044a23310409
  SK_p  i:79c096024c45499bd43b5d716c56e5152252c433b112195201dd5c4c23a1f1c7
r:fb7e3b35d57b2987bf61f04858a4afaeee10045c6001594f9f2e505b94d950d8
  identifier (i) fqdn vpp.aws
  identifier (r) fqdn vpp.home
  child sa 0:
encr:aes-cbc-256 integ:sha1-96 esn:yes
spi(i) 147e7a05 spi(r) de36dcbc
SK_e  i:31e22be618e3fe60faf935759e75fdc699f743486dd18f07de8b78747d10d229
  r:30b10195fdb1cd5b7384a2db92d5a51fd9fab7f6fc7db775957e3dc862d72532
SK_a  i:f98f3539966a66afec330c7cdf85fbe2794e01d3
  r:9504182eb614d90aa8fe742122ec9d98c1b6e224
traffic selectors (i):
  0 type 7 protocol_id 0 addr 172.30.0.0 - 172.30.255.255 port 0 - 65535
traffic selectors (r):
  0 type 7 protocol_id 0 addr 10.6.0.0 - 10.6.255.255 port 0 - 65535
 iip 1.2.3.4 ispi d8607eea97ac12a9 rip 4.5.6.7 rspi d6726c2768b2420


Here is the AWS side

vpp# show ikev2 sa
 iip 1.2.3.4 ispi a72ae3cef809725c rip  rspi b8b7b8ef09266a6d
 encr:aes-cbc-256 prf:hmac-sha2-256 integ:sha1-96 dh-group:modp-2048
  nonce i:d3e4299761fd93edd3df16456cb0ca9f717f67e57155fa7cb4cd0b9a1d371019
r:e9a33a33b901366438e262d225a418e9489839415562d3e3673107e0d81d830f
  SK_d  7e4f795db87a02c5b4d5ea738945521473f5e449b783f3ac4b954be7716b7909
  SK_a  i:13639a11b6e96e65dd38d095a87fc1b5ceefdc6b
r:97c96809563dfe39c3d2762c1ff1bf0a8fbc3576
  SK_e  i:114661a058686bd4362d8515ce83a7d7de098af11b08084c407ad51843316135
r:d812542cfa988e6c302fc52d848fb2d7b7321d6c3e77ee04134338a21c0ccba8
  SK_p  i:a65ea61c70b3cb749dedc205b7715b4c278a4bc630c6508d89a55a00cd00a2cd
r:9e23352bac4d21f6f0d2ec8de82e556db3ddaba0ade0c4d664a020da3986d17b
  identifier (i) fqdn vpp.home
  identifier (r) fqdn vpp.aws
  child sa 0:
encr:aes-cbc-256 integ:sha1-96 esn:yes
spi(i) 31c649f8 spi(r) 967b11c4
SK_e  i:6a1b5898746bc922af1beba021768cd6417a0e8a4c555e5544781fee302cf633
  r:2035a8be8fae47c284cef445381cef487bcd670bddc31558109c0303bc0f5399
SK_a  i:da119e539529803a3d2a883c01a825211c782bd2
  r:2330bc2dd9eb3741e3df649bcc3f7e5320fba512
traffic selectors (i):
  0 type 7 protocol_id 0 addr 172.30.0.0 - 172.30.255.255 port 0 - 65535
traffic selectors (r):
  0 type 7 protocol_id 0 addr 10.6.0.0 - 10.6.255.255 port 0 - 65535
 iip 1.2.3.4 ispi a72ae3cef809725c rip  rspi b8b7b8ef09266a6d
 iip 1.2.3.4 ispi d8607eea97ac12a9 rip  rspi d6726c2768b2420
 encr:aes-cbc-256 prf:hmac-sha2-256 integ:sha1-96 dh-group:modp-2048
  nonce i:eb01e6ef107ba7018679bd239e25d4557f2465323caf0d3213b453ca59af3deb
r:1d9cb8f11cd69d4b2f73b182028d8aa8854a49bb3c99797f3994575c2994154c
  SK_d  1eee29fff1ff234f1452006a79a7e27787e83331b29954300a70a9d6061f2fde
  SK_a  i:7f86a547c2d9cb2a4035e4926ca6e23c745c6c8c
r:04a71f139f2076058ceafb9be73eb359e43bc308
  SK_e  

Re: [vpp-dev] vpp-merge-master-centos7 jobs are broken

2020-05-18 Thread Dave Wallace
FYI, I've seen this particular failure signature previously when the download of
vpp-ext-deps from packagecloud fails during the install step.  At that point, the
vpp-ext-deps package gets built by the executor, and the subsequent push to
packagecloud fails because the package already exists.


If the script(s) verified that the package exists on packagecloud prior to the
download, they would know enough to skip the upload at the end.  I didn't find a
simple way to implement this fix at the time, and since the failure happens
infrequently, it never got fixed.
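
Roughly the kind of guard I had in mind, as a sketch only (the packagecloud URL
below is a placeholder rather than the real endpoint, and the repo/distro path
would come from the job environment):

  # Hypothetical pre-check: skip the final push if the package is already published.
  PKG=vpp-ext-deps-20.09-0.x86_64.rpm
  if curl -sfI "https://packagecloud.io/<org>/<repo>/packages/el/7/${PKG}" >/dev/null; then
      echo "${PKG} already on packagecloud - skipping upload"
      SKIP_EXT_DEPS_PUSH=yes   # hypothetical flag checked later by the push step
  fi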


Thanks,
-daw-


On 5/16/2020 5:42 AM, Andrew Yourtchenko wrote:


1) “jobs are broken” is a bit of a strong assessment for this case - it would be 
more precise to say “sporadic failures” - there are plenty of blue bullets before 
and after that job:

https://jenkins.fd.io/view/vpp/job/vpp-merge-master-centos7/

(The overall trend is not stellar - 
https://jenkins.fd.io/view/vpp/job/vpp-merge-master-centos7/buildTimeTrend - 
but you can see the big red dip when we were debugging the issue last weekend, 
and that was rather broken. :-))


2) About the nature of the failure: this is what I see in 
https://jenkins.fd.io/view/vpp/job/vpp-merge-master-centos7/9496/console:

**
*05:14:22* ***
*05:14:22* + echo '* VPP BUILD SUCCESSFULLY COMPLETED'
*05:14:22* * VPP BUILD SUCCESSFULLY COMPLETED
*05:14:22* + echo 
'***'
*05:14:22* ***
...

*05:15:22* Pushing 
./build-root/vpp-selinux-policy-20.09-rc0~22_gf3a522f~b9496.x86_64.rpm... 
success!
*05:15:22* Pushing ./build/external/vpp-ext-deps-20.09-0.x86_64.rpm... error:
*05:15:22*
*05:15:22*  filename: has already been taken
*05:15:22*
*05:15:22* Build step 'Execute shell' marked build as failure
*05:15:22* $ ssh-agent -k
*05:15:22* unset SSH_AUTH_SOCK;
*05:15:22* unset SSH_AGENT_PID;
*05:15:22* echo Agent pid 72 killed;
*05:15:22* [ssh-agent] Stopped.

It means that the job expected to push vpp-ext-deps (which is only updated every 
once in a while), but found that a file with that name is already in the repo. If 
the file has the right content, it's not the end of the world at all. It is odd, 
though, that it happens only once in a while; I will try to look at it - probably 
after the 20.05 release is done.


In the memif issues you mention, there are some *warnings* that don’t 
affect the state of the job.


It's worth having a person who is the expert in that area look at them and see 
whether they are real or bogus. I suppose git blame will tell who that is - 
probably Damjan?


--a

On 16 May 2020, at 06:14, Paul Vinciguerra wrote:


I'm not sure who this should go to, nor the impact, so I'm posting 
it here.


vpp-merge-master-centos7 is failing due to libmemif and [-Wstringop-overflow=]; 
see [0].


[0] 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-merge-master-centos7/9496/console.log.gz







Re: (Q about fixing endianness bugs in handlers) Re: [vpp-dev] Proposal for VPP binary API stability

2020-05-18 Thread Andrew Yourtchenko
FYI - I have added an editorial blurb to the draft of the 20.05 release notes, 
based on the discussions in this thread:

https://gerrit.fd.io/r/c/vpp/+/27128/

Please feel free to review.

--a

> On 17 May 2020, at 20:13, Jon Loeliger  wrote:
> 
> 
>> On Sat, May 16, 2020 at 10:02 AM Christian Hopps  wrote:
> 
>> 
>> I know we use the binary APIs, I believe Netgate does as well. I'm sure 
>> there are others too (might be good to collect a list of these folks if one 
>> doesn't exist yet).
> 
> Indeed, Netgate uses the binary APIs extensively.
> 
> jdl
> 


[vpp-dev] Reminder: VPP 20.05 RC2 milestone is this Wednesday 18.00 UTC.

2020-05-18 Thread Andrew Yourtchenko
Hi all,

Just a kind reminder: we are laying down the RC2 tag this Wednesday, 20 May.

After that milestone, we will only accept critical bugfixes in the stable/2005 
branch in preparation for the release.

--a
/* your friendly 20.05 release manager */


[vpp-dev] VPP 20.05 DRAFT release notes

2020-05-18 Thread Andrew Yourtchenko
Hi all,

This email is to let you know that the DRAFT of the VPP 20.05 Release Notes is 
available for your review at https://gerrit.fd.io/r/c/vpp/+/27128/

Please have a look before the end of this week and feel free to contact me if 
any related questions arise.

--a
/* your friendly 20.05 release manager */


[vpp-dev] vpp master branch can not build with ubuntu 16.04

2020-05-18 Thread Pei, Yulong
Dear All,

The vpp master branch cannot be built on Ubuntu 16.04 - can anyone kindly help 
with a fix?

The error info is below:


VPP version : 20.09-rc0~28-g53b8dc8
VPP library version : 20.09
GIT toplevel dir: /root/download_vpp/vpp
Build type  : release
C flags : -Wno-address-of-packed-member -g -fPIC -Werror -Wall 
-march=corei7 -mtune=corei7-avx -O2 -fstack-protector -DFORTIFY_SOURCE=2 
-fno-common
Linker flags (apps) : -pie
Linker flags (libs) :
Host processor  : x86_64
Target processor: x86_64
Prefix path : 
/opt/vpp/external/x86_64;/root/download_vpp/vpp/build-root/install-vpp-native/external
Install prefix  : /root/download_vpp/vpp/build-root/install-vpp-native/vpp
-- Configuring done
-- Generating done
-- Build files have been written to: 
/root/download_vpp/vpp/build-root/build-vpp-native/vpp
 Building vpp in /root/download_vpp/vpp/build-root/build-vpp-native/vpp 
[796/2008] Building C object vat/CMakeFiles/vpp_api_test.dir/types.c.o
FAILED: ccache /usr/lib/ccache/gcc-9  -Dvpp_api_test_EXPORTS 
-I/root/download_vpp/vpp/src -I. -Iinclude -Wno-address-of-packed-member -g 
-fPIC -Werror -Wall -march=corei7 -mtune=corei7-avx  -O2 -fstack-protector 
-DFORTIFY_SOURCE=2 -fno-common-pthread -MMD -MT 
vat/CMakeFiles/vpp_api_test.dir/types.c.o -MF 
vat/CMakeFiles/vpp_api_test.dir/types.c.o.d -o 
vat/CMakeFiles/vpp_api_test.dir/types.c.o   -c 
/root/download_vpp/vpp/src/vat/types.c
/root/download_vpp/vpp/src/vat/types.c: In function 
'format_vl_api_address_family':
/root/download_vpp/vpp/src/vat/types.c:28:31: error: 'vl_api_address_family_t' 
{aka 'enum '} is promoted to 'int' when passed through '...' 
[-Werror]
   28 |   vl_api_address_family_t af = va_arg (*args, vl_api_address_family_t);
/root/download_vpp/vpp/src/vat/types.c:28:31: note: (so you should pass 'int' 
not 'vl_api_address_family_t' {aka 'enum '} to 'va_arg')
/root/download_vpp/vpp/src/vat/types.c:28:31: note: if this code is reached, 
the program will abort
/root/download_vpp/vpp/src/vat/types.c: In function 
'format_vl_api_address_union':
/root/download_vpp/vpp/src/vat/types.c:56:31: error: 'vl_api_address_family_t' 
{aka 'enum '} is promoted to 'int' when passed through '...' 
[-Werror]
   56 |   vl_api_address_family_t af = va_arg (*args, vl_api_address_family_t);
/root/download_vpp/vpp/src/vat/types.c:56:31: note: if this code is reached, 
the program will abort
cc1: all warnings being treated as errors
[796/2008] Building C object vnet/CMakeFiles/vnet_icl.dir/ipsec/esp_decrypt.c.o
ninja: build stopped: subcommand failed.
Makefile:693: recipe for target 'vpp-build' failed
make[1]: *** [vpp-build] Error 1
make[1]: Leaving directory '/root/download_vpp/vpp/build-root'
Makefile:403: recipe for target 'build-release' failed
make: *** [build-release] Error 2


root@fdio-vpp:~/download_vpp/vpp# cat /etc/issue
Ubuntu 16.04.4 LTS \n \l

root@fdio-vpp:~/download_vpp/vpp# uname -r
4.13.0-36-generic

root@fdio-vpp:~/download_vpp/vpp# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 
9.3.0-10ubuntu2~16.04' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs 
--enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr 
--with-gcc-major-version-only --program-suffix=-9 
--program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug 
--enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new 
--enable-gnu-unique-object --disable-vtable-verify --enable-plugin 
--with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto 
--enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 
--with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic 
--enable-offload-targets=nvptx-none,hsa --without-cuda-driver 
--enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu 
--target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-10ubuntu2~16.04)
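
For context: gcc 9 is flagging the standard C rule that a small enum is promoted
to int when passed through "...", so the likely fix in src/vat/types.c is to read
an int from va_arg and cast it back (I have not checked what fix actually landed).
A self-contained illustration of the pattern:

  #include <stdarg.h>
  #include <stdio.h>

  typedef enum { TOY_AF_IP4 = 0, TOY_AF_IP6 = 1 } toy_af_t;

  /* Small enums are promoted to int when passed through "...", so the
   * portable pattern is to read an int and cast back to the enum type. */
  static void
  print_af (int count, ...)
  {
    va_list ap;
    va_start (ap, count);
    for (int i = 0; i < count; i++)
      {
        toy_af_t af = (toy_af_t) va_arg (ap, int); /* not va_arg (ap, toy_af_t) */
        printf ("af=%d\n", af);
      }
    va_end (ap);
  }

  int
  main (void)
  {
    print_af (2, TOY_AF_IP4, TOY_AF_IP6);
    return 0;
  }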



[vpp-dev] DPDK packets received by NIC but not delivered to engine

2020-05-18 Thread Mohammed Alshohayeb
I am having an issue where I see packets in the `show hardware-interfaces` output, 
but only a very small fraction is delivered to the vlib engine.

Here are the things I've tried:

* Using different packet generators (pktgen/trex/tcpreplay)
* Using a variety of physical servers
* All versions from 19.01 to 20.01
* Multiple NICs: Mellanox ConnectX-5 and Chelsio T6 (cxgb)
* Made sure checksums are OK, since some NICs drop bad frames in the PMD

The vpp.conf is straightforward:
unix {
nodaemon
log /var/log/vpp/vpp.log
cli-listen /run/vpp/cli.sock
interactive
}
dpdk {
dev :86:00.0
dev :86:00.1
}

Notes:
- When connecting the two interfaces via xconnect, things work well.
- I tried the macswap plugin and enabled it, but it exhibited the same very slow
behaviour.

Here is the interface counter output after pushing ~100 packets:

vpp# sh hardware-interfaces
              Name                Idx   Link  Hardware
HundredGigabitEthernet86/0/0       1     up   HundredGigabitEthernet86/0/0
  Link speed: 100 Gbps
  Ethernet address ec:0d:9a:cd:94:8a
  Mellanox ConnectX-4 Family
    carrier up full duplex mtu 9206
    flags: admin-up pmd maybe-multiseg rx-ip4-cksum
    rx: queues 1 (max 65535), desc 1024 (min 0 max 65535 align 1)
    tx: queues 1 (max 65535), desc 1024 (min 0 max 65535 align 1)
    pci: device 15b3:1017 subsystem 15b3:0007 address :86:00.00 numa 1
    module: unknown
    max rx packet len: 65536
    promiscuous: unicast off all-multicast on
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
                       jumbo-frame scatter timestamp keep-crc
    rx offload active: ipv4-cksum jumbo-frame scatter
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
                       outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso multi-segs
                       udp-tnl-tso ip-tnl-tso
    tx offload active: multi-segs
    rss avail:         ipv4 ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6 ipv6-frag
                       ipv6-tcp ipv6-udp ipv6-other ipv6-tcp-ex ipv6-udp-ex
                       ipv6-ex ipv6-tcp-ex ipv6-udp-ex
    rss active:        none
    tx burst function: mlx5_tx_burst_vec
    rx burst function: mlx5_rx_burst
    rx frames ok                                          1847
    rx bytes ok                                         465972
    extended stats:
      rx good packets                                     1847
      rx good bytes                                     465972
      rx q0packets                                        1847
      rx q0bytes                                        465972
      rx port unicast packets                               16
      rx port unicast bytes                         1034369007
      rx port multicast packets                           1838
      rx port multicast bytes                           462894
      rx port broadcast packets                              9
      rx port broadcast bytes                             3078
      rx packets phy                                       142
      rx bytes phy                                  1038835147

vpp# sh int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
HundredGigabitEthernet86/0/0      1      up          9000/0/0/0     rx packets                1856
                                                                    rx bytes                467272
                                                                    drops                     1856
                                                                    ip4                       1816
                                                                    ip6                         26
local0                            0     down          0/0/0/0
vpp#

You can see that the packets are received (in the unicast/multicast port counters), 
but for some reason they are not being forwarded any further.

Is there something obvious I am missing?
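
For reference, a packet trace should show which node is dropping these, e.g.:

  vpp# clear trace
  vpp# trace add dpdk-input 50
  ... send a few packets from the generator ...
  vpp# show trace
  vpp# show errors

The drop reason usually shows up either in the trace or in the per-node error
counters.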


Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-18 Thread Raj Kumar
Hi Florin,
I tried the path [1] , but still VPP is crashing when  application is using
listen with UDPC.

[1] https://gerrit.fd.io/r/c/vpp/+/27111



On a different topic, I have some questions. Could you please provide
your inputs?

1) I am using a Mellanox NIC. Any idea how I can enable Tx checksum offload
(for UDP)? Also, how can I change the Tx burst mode and Rx burst mode to the
vectorized variants?

HundredGigabitEthernet12/0/1   3 up   HundredGigabitEthernet12/0/1
  Link speed: 100 Gbps
  Ethernet address b8:83:03:9e:98:81
 * Mellanox ConnectX-4 Family*
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg rx-ip4-cksum
rx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
tx: queues 5 (max 1024), desc 1024 (min 0 max 65535 align 1)
pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.01 numa 0
switch info: name :12:00.1 domain id 1 port id 65535
max rx packet len: 65536
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
   jumbo-frame scatter timestamp keep-crc rss-hash
rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso
geneve-tnl-tso
   multi-segs udp-tnl-tso ip-tnl-tso
   * tx offload active: multi-segs*
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only
l3-src-only
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4
ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6

    *tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
    rx burst mode: Scalar*

2) My application needs to send a routing header (SRv6) and a Destination
Options extension header. On RedHat 8.1, I was using socket options to add
the routing and destination options extension headers.
With VPP, I can use an SRv6 policy to let VPP add the routing header, but I
am not sure whether there is any option in VPP or the host stack to add the
Destination Options header.

Coming back to the original problem, here are the traces:

VCL<39673>: configured VCL debug level (2) from VCL_DEBUG!
VCL<39673>: using default heapsize 268435456 (0x1000)
VCL<39673>: allocated VCL heap = 0x7f6b40221010, size 268435456 (0x1000)
VCL<39673>: using default configuration.
vppcom_connect_to_vpp:487: vcl<39673:0>: app (udp6_rx) connecting to VPP
api (/vpe-api)...
vppcom_connect_to_vpp:502: vcl<39673:0>: app (udp6_rx) is connected to VPP!
vppcom_app_create:1200: vcl<39673:0>: sending session enable
vppcom_app_create:1208: vcl<39673:0>: sending app attach
vppcom_app_create:1217: vcl<39673:0>: app_name 'udp6_rx', my_client_index 0
(0x0)
vppcom_connect_to_vpp:487: vcl<39673:1>: app (udp6_rx-wrk-1) connecting to
VPP api (/vpe-api)...
vppcom_connect_to_vpp:502: vcl<39673:1>: app (udp6_rx-wrk-1) is connected
to VPP!
vcl_worker_register_with_vpp:262: vcl<39673:1>: added worker 1
vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1
vpp-worker 1 added
vppcom_epoll_create:2558: vcl<39673:1>: Created vep_idx 0
vppcom_session_create:1279: vcl<39673:1>: created session 1
vppcom_session_bind:1426: vcl<39673:1>: session 1 handle 16777217: binding
to local IPv6 address 2001:5b0::700:b883:31f:29e:9880 port 6677, proto
UDPC
vppcom_session_listen:1458: vcl<39673:1>: session 16777217: sending vpp
listen request...

#1  0x77761259 in session_listen (ls=, sep=sep@entry
=0x7fffb575ad50)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_types.h:247
#2  0x77788b5f in app_listener_alloc_and_init
(app=app@entry=0x7fffb7273038,
sep=sep@entry=0x7fffb575ad50,
listener=listener@entry=0x7fffb575ad28) at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/application.c:196
#3  0x77788ef8 in vnet_listen (a=a@entry=0x7fffb575ad50)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/application.c:1005
#4  0x77779e20 in session_mq_listen_handler (data=0x13007ec01)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:64
#5  session_mq_listen_handler (data=data@entry=0x13007ec01)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_node.c:36
#6  0x77bbcdd9 in vl_api_rpc_call_t_handler (mp=0x13007ebe8)
at
/usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vlibmemory/vlib_api.c:520
#7  0x77bc5ecd in vl_msg_api_handler_with_vm_node
(am=am@entry=0x77dd2ea0
, vlib_rp=,
the_msg=0x13007ebe8, vm=vm@entry=0x76d7c200 ,
node=node@entry=0x7fffb571a000, 

Re: [vpp-dev] Segmentation fault in VPP 20.05 release when using VCL VPPCOM_PROTO_UDPC #vpp-hoststack

2020-05-18 Thread Florin Coras
Hi Raj, 

By the looks of it, something’s not right because in the logs VCL still reports 
it’s binding using UDPC. You probably cherry-picked [1] but it needs [2] as 
well. More inline.

[1] https://gerrit.fd.io/r/c/vpp/+/27111
[2] https://gerrit.fd.io/r/c/vpp/+/27106
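
For completeness, pulling both changes onto a local branch typically looks like
this (the trailing patchset number in each ref is a placeholder; use the exact
ref shown in the download box of each Gerrit change):

  git fetch https://gerrit.fd.io/r/vpp refs/changes/11/27111/1 && git cherry-pick FETCH_HEAD
  git fetch https://gerrit.fd.io/r/vpp refs/changes/06/27106/1 && git cherry-pick FETCH_HEAD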

> On May 18, 2020, at 8:42 PM, Raj Kumar  wrote:
> 
> 
> Hi Florin,
> I tried the path [1] , but still VPP is crashing when  application is using 
> listen with UDPC.
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/27111 
>  
> 
> 
> 
> On a different topic , I have some questions. Could you please  provide your 
> inputs - 
> 
> 1) I am using Mellanox NIC. Any idea how can I enable Tx checksum offload ( 
> for udp).  Also, how to change the Tx burst mode and Rx burst mode to the 
> Vector .
> 
> HundredGigabitEthernet12/0/1   3 up   HundredGigabitEthernet12/0/1
>   Link speed: 100 Gbps
>   Ethernet address b8:83:03:9e:98:81
>   Mellanox ConnectX-4 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd maybe-multiseg rx-ip4-cksum
> rx: queues 4 (max 1024), desc 1024 (min 0 max 65535 align 1)
> tx: queues 5 (max 1024), desc 1024 (min 0 max 65535 align 1)
> pci: device 15b3:1013 subsystem 1590:00c8 address :12:00.01 numa 0
> switch info: name :12:00.1 domain id 1 port id 65535
> max rx packet len: 65536
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
>jumbo-frame scatter timestamp keep-crc rss-hash
> rx offload active: ipv4-cksum udp-cksum tcp-cksum jumbo-frame scatter
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso 
> geneve-tnl-tso
>multi-segs udp-tnl-tso ip-tnl-tso
> tx offload active: multi-segs
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only 
> l3-src-only
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
>ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
>ipv6-ex ipv6
> tx burst mode: No MPW + MULTI + TSO + INLINE + METADATA
> rx burst mode: Scalar

FC: Not sure why (it might not be supported), but the offloads are not enabled in 
dpdk_lib_init for VNET_DPDK_PMD_MLX* NICs. You could try replicating what's done 
for the Intel cards and see if that works. Alternatively, you might want to try 
the rdma driver, although I don't know whether it supports csum offloading 
(cc Ben and Damjan). 

>
> 2) My application needs to send routing header (SRv6) and Destination option 
> extension header. On RedHat 8.1 , I was using socket option to add routing 
> and destination option extension header.
> With VPP , I can use SRv6 policy to let VPP add the routing header. But, I am 
> not sure if there is any option in VPP or HostStack to add the destination 
> option header.

FC: We don’t currently support this. 

Regards,
Florin

> 
> 
> Coming back to the original problem, here are the traces- 
> 
> VCL<39673>: configured VCL debug level (2) from VCL_DEBUG!
> VCL<39673>: using default heapsize 268435456 (0x1000)
> VCL<39673>: allocated VCL heap = 0x7f6b40221010, size 268435456 (0x1000)
> VCL<39673>: using default configuration.
> vppcom_connect_to_vpp:487: vcl<39673:0>: app (udp6_rx) connecting to VPP api 
> (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<39673:0>: app (udp6_rx) is connected to VPP!
> vppcom_app_create:1200: vcl<39673:0>: sending session enable
> vppcom_app_create:1208: vcl<39673:0>: sending app attach
> vppcom_app_create:1217: vcl<39673:0>: app_name 'udp6_rx', my_client_index 0 
> (0x0)
> vppcom_connect_to_vpp:487: vcl<39673:1>: app (udp6_rx-wrk-1) connecting to 
> VPP api (/vpe-api)...
> vppcom_connect_to_vpp:502: vcl<39673:1>: app (udp6_rx-wrk-1) is connected to 
> VPP!
> vcl_worker_register_with_vpp:262: vcl<39673:1>: added worker 1
> vl_api_app_worker_add_del_reply_t_handler:235: vcl<94:-1>: worker 1 
> vpp-worker 1 added
> vppcom_epoll_create:2558: vcl<39673:1>: Created vep_idx 0
> vppcom_session_create:1279: vcl<39673:1>: created session 1
> vppcom_session_bind:1426: vcl<39673:1>: session 1 handle 16777217: binding to 
> local IPv6 address 2001:5b0::700:b883:31f:29e:9880 port 6677, proto UDPC
> vppcom_session_listen:1458: vcl<39673:1>: session 16777217: sending vpp 
> listen request...
> 
> #1  0x77761259 in session_listen (ls=, 
> sep=sep@entry=0x7fffb575ad50)
> at 
> /usr/src/debug/vpp-20.05-rc0~748_g83d129837.x86_64/src/vnet/session/session_types.h:247
> #2  0x77788b5f in app_listener_alloc_and_init 
> (app=app@entry=0x7fffb7273038, sep=sep@entry=0x7fffb575ad50,
>