Re: [vpp-dev] Jenkins failed

2018-09-27 Thread Lijian Zhang
Hi All,
Jenkins always fails, but the failure does not seem to be caused by my patch.
Could anyone help take a look at this failure?

https://gerrit.fd.io/r/#/c/14905/

10:06:01 mv rpm/RPMS/x86_64/*.rpm .
10:06:02 Removing rpm/BUILD/
10:06:02 Removing rpm/BUILDROOT/
10:06:02 Removing rpm/RPMS/
10:06:02 Removing rpm/SOURCES/
10:06:02 Removing rpm/SPECS/
10:06:02 Removing rpm/SRPMS/
10:06:02 Removing rpm/tmp/
10:06:02 make[2]: Leaving directory 
`/w/workspace/vpp-verify-master-centos7/build/external'
10:06:02 sudo rpm -Uih vpp-ext-deps-18.10-3.x86_64.rpm
10:06:02 
10:06:02    package vpp-ext-deps-18.10-5.x86_64 (which is newer than 
vpp-ext-deps-18.10-3.x86_64) is already installed
10:06:02 make[1]: *** [install-rpm] Error 2
10:06:02 make[1]: Leaving directory 
`/w/workspace/vpp-verify-master-centos7/build/external'
10:06:02 make: *** [install-ext-deps] Error 2
10:06:02 Build step 'Execute shell' marked build as failure
10:06:02 $ ssh-agent -k
10:06:02 unset SSH_AUTH_SOCK;
10:06:02 unset SSH_AGENT_PID;
10:06:02 echo Agent pid 109 killed;
10:06:02 [ssh-agent] Stopped.
10:06:02 Skipped archiving because build is not successful
10:06:02 [PostBuildScript] - Executing post build scripts.
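
The root cause is visible in the log: the build executor already has
vpp-ext-deps-18.10-5 installed, and a plain `rpm -Uih' refuses to downgrade to
the -3 package built by this change, so `install-rpm' fails regardless of the
patch under verification. A minimal sketch of a guard the install step could
use (package name and versions are taken from the log above; whether CI should
downgrade at all is a separate policy question):

  # Install the freshly built vpp-ext-deps rpm even if a newer build
  # is already present on the executor (rpm refuses plain downgrades).
  pkg=vpp-ext-deps-18.10-3.x86_64.rpm
  if rpm -q vpp-ext-deps >/dev/null 2>&1; then
      # --oldpackage lets rpm "upgrade" to an older version; alternatively,
      # erase the stale package first with: sudo rpm -e vpp-ext-deps
      sudo rpm -Uih --oldpackage "$pkg"
  else
      sudo rpm -ih "$pkg"
  fi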


From: vpp-dev@lists.fd.io  On Behalf Of Lijian Zhang
Sent: Friday, September 28, 2018 10:20 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Jenkins failed

Hi All,
Jenkins always fails, but the failure does not seem to be caused by my patch.
Could anyone help take a look at this failure?

https://jenkins.fd.io/view/vpp/job/vpp-verify-master-osleap15/3360/console

09:55:49 Checking for unpackaged file(s): /usr/lib/rpm/check-files 
/w/workspace/vpp-verify-master-osleap15/build/external/rpm/BUILDROOT/vpp-ext-deps-18.10-3.x86_64
09:58:10 Wrote: 
/w/workspace/vpp-verify-master-osleap15/build/external/rpm/RPMS/x86_64/vpp-ext-deps-18.10-3.x86_64.rpm
09:58:10 Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.1FKrVn
09:58:10 + umask 022
09:58:10 + cd /w/workspace/vpp-verify-master-osleap15/build/external/rpm/BUILD
09:58:10 + /usr/bin/rm -rf 
/w/workspace/vpp-verify-master-osleap15/build/external/rpm/BUILDROOT/vpp-ext-deps-18.10-3.x86_64
09:58:10 + exit 0
09:58:10 mv rpm/RPMS/x86_64/*.rpm .
09:58:10 Removing rpm/BUILD/
09:58:10 Removing rpm/BUILDROOT/
09:58:10 Removing rpm/RPMS/
09:58:10 Removing rpm/SOURCES/
09:58:10 Removing rpm/SPECS/
09:58:10 Removing rpm/SRPMS/
09:58:10 Removing rpm/tmp/
09:58:10 make[2]: Leaving directory 
'/w/workspace/vpp-verify-master-osleap15/build/external'
09:58:10 sudo rpm -Uih vpp-ext-deps-18.10-3.x86_64.rpm
09:58:10 
09:58:11    package vpp-ext-deps-18.10-5.x86_64 (which is newer than 
vpp-ext-deps-18.10-3.x86_64) is already installed
09:58:11 make[1]: *** [Makefile:115: install-rpm] Error 2
09:58:11 make[1]: Leaving directory 
'/w/workspace/vpp-verify-master-osleap15/build/external'
09:58:11 make: *** [Makefile:503: install-ext-deps] Error 2
09:58:11 Build step 'Execute shell' marked build as failure
09:58:11 $ ssh-agent -k
09:58:11 unset SSH_AUTH_SOCK;
09:58:11 unset SSH_AGENT_PID;
09:58:11 echo Agent pid 82 killed;
09:58:11 [ssh-agent] Stopped.
09:58:11 Skipped archiving because build is not successful
09:58:11 [PostBuildScript] - Executing post build scripts.
09:58:11 [vpp-verify-master-osleap15] $ /bin/bash 
/tmp/jenkins1414970379125314929.sh



[vpp-dev] Jenkins failed

2018-09-27 Thread Lijian Zhang
Hi All,
Jenkins always fails, but the failure does not seem to be caused by my patch.
Could anyone help take a look at this failure?

https://jenkins.fd.io/view/vpp/job/vpp-verify-master-osleap15/3360/console

09:55:49 Checking for unpackaged file(s): /usr/lib/rpm/check-files 
/w/workspace/vpp-verify-master-osleap15/build/external/rpm/BUILDROOT/vpp-ext-deps-18.10-3.x86_64
09:58:10 Wrote: 
/w/workspace/vpp-verify-master-osleap15/build/external/rpm/RPMS/x86_64/vpp-ext-deps-18.10-3.x86_64.rpm
09:58:10 Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.1FKrVn
09:58:10 + umask 022
09:58:10 + cd /w/workspace/vpp-verify-master-osleap15/build/external/rpm/BUILD
09:58:10 + /usr/bin/rm -rf 
/w/workspace/vpp-verify-master-osleap15/build/external/rpm/BUILDROOT/vpp-ext-deps-18.10-3.x86_64
09:58:10 + exit 0
09:58:10 mv rpm/RPMS/x86_64/*.rpm .
09:58:10 Removing rpm/BUILD/
09:58:10 Removing rpm/BUILDROOT/
09:58:10 Removing rpm/RPMS/
09:58:10 Removing rpm/SOURCES/
09:58:10 Removing rpm/SPECS/
09:58:10 Removing rpm/SRPMS/
09:58:10 Removing rpm/tmp/
09:58:10 make[2]: Leaving directory 
'/w/workspace/vpp-verify-master-osleap15/build/external'
09:58:10 sudo rpm -Uih vpp-ext-deps-18.10-3.x86_64.rpm
09:58:10 
09:58:11    package vpp-ext-deps-18.10-5.x86_64 (which is newer than 
vpp-ext-deps-18.10-3.x86_64) is already installed
09:58:11 make[1]: *** [Makefile:115: install-rpm] Error 2
09:58:11 make[1]: Leaving directory 
'/w/workspace/vpp-verify-master-osleap15/build/external'
09:58:11 make: *** [Makefile:503: install-ext-deps] Error 2
09:58:11 Build step 'Execute shell' marked build as failure
09:58:11 $ ssh-agent -k
09:58:11 unset SSH_AUTH_SOCK;
09:58:11 unset SSH_AGENT_PID;
09:58:11 echo Agent pid 82 killed;
09:58:11 [ssh-agent] Stopped.
09:58:11 Skipped archiving because build is not successful
09:58:11 [PostBuildScript] - Executing post build scripts.
09:58:11 [vpp-verify-master-osleap15] $ /bin/bash 
/tmp/jenkins1414970379125314929.sh



Re: [vpp-dev] libvppapiclient.so.0 missing (govpp)

2018-09-27 Thread carlito nueno
I also tried installing vpp from https://packagecloud.io/fdio/master 
(18.10-rc0~521-g09cce66~b5292).
I am encountering the same error.

Thanks


[vpp-dev] libvppapiclient.so.0 missing (govpp)

2018-09-27 Thread carlito nueno
Hi all,

I pulled the latest vpp master (as of September 27 2018) and am using
the Vagrantfile to build vpp. Afterwards I transferred the .deb
packages out of the vagrant box and installed vpp:

sudo dpkg -i *.deb

When I try to run a govpp application I get this error:

error while loading shared libraries: libvppapiclient.so.0: cannot
open shared object file: No such file or directory

Thanks
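
This error usually means the runtime linker cannot find the library: either
the package shipping libvppapiclient.so.0 was not installed, or the linker
cache was not refreshed after installation. A hedged checklist, assuming a
standard Debian/Ubuntu layout (the binary path below is a placeholder):

  # Does any installed package provide the library?
  dpkg -S libvppapiclient.so.0 || echo "library not installed"

  # Which library path does the govpp binary resolve?
  ldd /path/to/govpp-app | grep vppapiclient

  # If the .so exists under /usr/lib (or /usr/lib/x86_64-linux-gnu),
  # refresh the dynamic linker cache and retry:
  sudo ldconfig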


Re: [vpp-dev] Unable to build router plugin

2018-09-27 Thread carlito nueno
Thanks for the tip Mehran. I will take a look and report back.
On Thu, Sep 27, 2018 at 12:16 AM Mehran Memarnejad
 wrote:
>
> Hi carlito,
>
> I've had problems much like yours. Sometimes VPP updates its functions while 
> vppsb stays the same, so you need to update vppsb to make it work.
> In my case I just updated vppsb's outdated function call to the new one and 
> it worked.
> As you know, vppsb is a plugin for vpp and it calls vpp's functions, so any 
> change in a vpp function affects vppsb, e.g. a function signature change.
>
>
> On Thursday, September 27, 2018, carlito nueno  wrote:
>>
>> Hi all,
>>
>> I am trying to build the router-plugin:
>> make V=0 PLATFORM=vpp TAG=vpp_debug install-deb netlink-install 
>> router-install
>>
>> I am using the Vagrantfile present in vpp repo and am pulling the
>> current master (as of September 26 2018). I am also pulling the
>> current master of vppsb.
>>
>> But I am getting this error:
>>
>>  Building router in /vpp/build-root/build-vpp_debug-native/router 
>> make[1]: Entering directory '/vpp/build-root/build-vpp_debug-native/router'
>>   CC   router/tap_inject.lo
>>   CC   router/tap_inject_netlink.lo
>> /vpp/build-data/../router/router/tap_inject_netlink.c: In function
>> ‘add_del_neigh’:
>> /vpp/build-data/../router/router/tap_inject_netlink.c:140:9: error:
>> too many arguments to function ‘vnet_unset_ip6_ethernet_neighbor’
>>  vnet_unset_ip6_ethernet_neighbor (vm, sw_if_index,
>>  ^~~~
>> In file included from
>> /vpp/build-data/../router/router/tap_inject_netlink.c:19:0:
>> /vpp/build-root/install-vpp_debug-native/vpp/include/vnet/ip/ip6_neighbor.h:84:12:
>> note: declared here
>>  extern int vnet_unset_ip6_ethernet_neighbor (vlib_main_t * vm,
>> ^~~~
>> Makefile:483: recipe for target 'router/tap_inject_netlink.lo' failed
>> make[1]: *** [router/tap_inject_netlink.lo] Error 1
>> make[1]: *** Waiting for unfinished jobs
>> make[1]: Leaving directory '/vpp/build-root/build-vpp_debug-native/router'
>> Makefile:691: recipe for target 'router-build' failed
>> make: *** [router-build] Error 2
>>
>> Any advice?
>>
>> Thanks


Re: [vpp-dev] Master branch l2bd test perf drop

2018-09-27 Thread Zhang Yuwei
Hi Neale,
     I assume the replications should be related to the interfaces in the 
bridge, right? I have just 2 interfaces in the bridge, which means one 
interface receives traffic and the other sends it out. In the 64K packet-size 
test case, the performance drops by almost 35%. I haven't run the other cases 
yet.

Regards,
Yuwei
From: Neale Ranns (nranns) [mailto:nra...@cisco.com]
Sent: Thursday, September 27, 2018 8:31 PM
To: Zhang, Yuwei1 ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Master branch l2bd test perf drop


Hi Yuwei,

There was a change to the l2flood node recently:
  https://gerrit.fd.io/r/#/c/13578/
where we use the buffer clone mechanism rather than free-recycle. I would 
expect the CPU cycles per invocation of the l2-flood node to increase, but the 
number of invocations of l2flood to decrease (w.r.t. the interface-tx node).
How many replications does your test perform and is there a trend for perf 
change versus number of replications?

Thanks,
Neale


From: <vpp-dev@lists.fd.io> on behalf of Zhang Yuwei <yuwei1.zh...@intel.com>
Date: Thursday, 27 September 2018 at 05:02
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Master branch l2bd test perf drop

Hi All,
     In our recent testing, I found a performance drop on the master branch. I 
ran the l2bd test case on a 2.5 GHz CPU and found an almost 35% drop compared 
to the 18.07 release. My test puts two NIC ports in the same bridge domain and 
sends traffic to measure l2 forwarding performance. On the master branch, the 
l2flood function consumes many more CPU cycles than in 18.07, which means any 
test that uses the l2flood function will also show a performance drop. Can 
anybody kindly help check this issue? Thanks a lot.

Regards,
Yuwei



Re: [vpp-dev] Setup the Ipsec environment with VPP

2018-09-27 Thread tianye
Hi Gaofeng

Thank you very much!
Let me read it first.

From: Feng Gao [mailto:gfree.w...@gmail.com]
Sent: Friday, September 28, 2018 8:40 AM
To: tian...@twsz.com
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Setup the Ipsec environment with VPP

You could reference this doc: https://wiki.fd.io/view/VPP/IPSec_and_IKEv2#IKEv2

It should work well if you follow it step by step.

On Thu, Sep 27, 2018 at 7:46 PM Tian Ye2 (田野) <tian...@twsz.com> wrote:

Hello VPP developers:



Is there someone who can tell me how to set up the IPsec environment with VPP?

Given the following topology, let's assume "Gateway moon" is running VPP.

I need to set up an IPsec server on "Gateway moon", and I will use "client 
alice" (let's say it is a common Ubuntu desktop with strongswan) as the IPsec 
client to connect to "moon".

A link to a reference document would also be helpful to me.



Thank you very much!



[alice moon carol winnetou]


Re: [vpp-dev] Setup the Ipsec environment with VPP

2018-09-27 Thread Feng Gao
You could reference this doc:
https://wiki.fd.io/view/VPP/IPSec_and_IKEv2#IKEv2

It should work well if you follow it step by step.
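
For a flavour of what the wiki walks through, here is a trimmed sketch of a
responder-side IKEv2 profile modelled on that page (command syntax follows the
wiki; the profile name, key, identities and subnets are placeholders to adapt
to the "moon"/"alice" topology):

  # On "Gateway moon", with VPP acting as IKEv2 responder:
  vppctl ikev2 profile add pr1
  vppctl ikev2 profile set pr1 auth shared-key-mic string Vpp123
  vppctl ikev2 profile set pr1 id local fqdn vpp.home
  vppctl ikev2 profile set pr1 id remote fqdn alice.home
  # Traffic selectors: which subnets are protected on each side
  vppctl ikev2 profile set pr1 traffic-selector local ip-range \
      192.168.125.0 - 192.168.125.255 port-range 0 - 65535 protocol 0
  vppctl ikev2 profile set pr1 traffic-selector remote ip-range \
      192.168.124.0 - 192.168.124.255 port-range 0 - 65535 protocol 0

strongswan on "alice" then initiates with a matching pre-shared key and
identities.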

On Thu, Sep 27, 2018 at 7:46 PM Tian Ye2(田野)  wrote:

> Hello VPP developers:
>
>
>
> Is there someone who can tell me how to set up the IPsec environment with
> VPP?
>
> Given the following topology, let's assume "Gateway moon" is running VPP.
>
> I need to set up an IPsec server on "Gateway moon", and I will use "client
> alice" (let's say it is a common Ubuntu desktop with strongswan) as the
> IPsec client to connect to "moon".
>
> A link to a reference document would also be helpful to me.
>
>
>
> Thank you very much!
>
>
>
> [image: alice moon carol winnetou]


Re: [vpp-dev] Want to switch to dpdk 17.11.4, using vpp 18.0.1

2018-09-27 Thread chetan bhasin
Hello Everyone,

Please suggest the right approach.

Thanks,
Chetan Bhasin

On Wed, Sep 26, 2018, 10:41 chetan bhasin 
wrote:

> Hi everyone,
>
> We are using VPP 18.0.1, which internally uses dpdk 17.11. We want
> to switch to dpdk 17.11.4 as it has Mellanox fixes.
>
> Can anybody suggest the steps to do so? Does it have any impact?
>
> Thanks,
> Chetan Bhasin
>
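
For 18.01-era trees the bundled DPDK version is pinned in the dpdk makefile,
so the usual approach is to bump the version and the matching tarball checksum
there and rebuild. A hedged sketch (file names, variables and make targets
vary between releases, so check your tree first):

  # Locate the pinned version and tarball checksum:
  grep -n 'DPDK_VERSION\|MD5' dpdk/Makefile

  # Bump 17.11 -> 17.11.4 and update the checksum beside it by hand,
  # then rebuild so the new DPDK is compiled in:
  make wipe-release && make build-release

Since 17.11.4 is a stable-branch bugfix release of the same 17.11 ABI, the
impact should be limited to the fixes it carries, but a different tarball is
downloaded and compiled, so re-verify your NIC bring-up afterwards.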


Re: [vpp-dev] Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, SPAN, ACL

2018-09-27 Thread Neale Ranns via Lists.Fd.Io


From:  on behalf of Juraj Linkeš
Date: Thursday, 27 September 2018 at 09:21
To: "Neale Ranns (nranns)"
Cc: vpp-dev
Subject: Re: [vpp-dev] Make test failures on ARM - IP4, L2, ECMP, Multicast, 
GRE, SCTP, SPAN, ACL

Hi Neale,

I had a debugging session with Andrew about failing ACL testcases and he 
uncovered that the root cause is in l2 and ip4:

1) the timeout and big files

For some reason, in the bridged setup done by a testcase, VPP reinjects the 
packet being sent onto one of the interfaces of the bridge, in a loop.

The following crude diff eliminates the problem and the tests pass: 
https://paste.ubuntu.com/p/CSMYjXsZyX/

[nr] Can we please see the packet trace with that patch in place?

2) there is a failure of a mac acl testcase in the routed scenario, where the 
ip lookup picks up incorrect next index:

The following shows the problem for the properly and improperly routed packet:

https://paste.ubuntu.com/p/wTWWNhwSKY/

That's bizarre. I'm not sure where to start debugging that other than attaching 
GDB and having a poke around.

/neale


Could you advise on the first issue (Andrew wasn't sure the diff is a proper 
fix) and help debug the other issue (or, most likely related, issues 
https://jira.fd.io/browse/VPP-1432 and https://jira.fd.io/browse/VPP-1433?) If 
not, could you suggest someone so I can ask them?

Thanks,
Juraj

From: Juraj Linkeš
Sent: Tuesday, September 25, 2018 10:07 AM
To: 'Juraj Linkeš' ; vpp-dev 
Cc: csit-dev 
Subject: RE: Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, 
SPAN, ACL

I created the new tickets under CSIT, which is an oversight, but I fixed it and 
now the tickets are under VPP:

· GRE crash

· SCTP failure/crash

o   Marco and I resolved a similar issue in the past, but this could be 
something different

· SPAN crash

· IP4 failures

o   These are multiple failures and I'm not sure that grouping them together 
is correct

· L2 failures/crash

o   As in IP4, these are multiple failures and I'm not sure that grouping them 
together is correct

· ECMP failure

· Multicast failure

· ACL failure

o   I'm already working with Andrew on fixing this

There seem to be a lot of people who touched the code. I would like to ask the 
authors to tell me who to turn to (at least for IP and L2).

Regards,
Juraj

From: Juraj Linkeš [mailto:juraj.lin...@pantheon.tech]
Sent: Monday, September 24, 2018 6:26 PM
To: vpp-dev mailto:vpp-dev@lists.fd.io>>
Cc: csit-dev mailto:csit-...@lists.fd.io>>
Subject: [vpp-dev] Make test failures on ARM

Hi vpp-devs,

Especially ARM vpp devs ☺

We're experiencing a number of failures on Cavium ThunderX and we'd like to fix 
the issues. I've created a number of Jira tickets:

· GRE crash

· SCTP failure/crash

o   Marco and I resolved a similar issue in the past, but this could be 
something different

· SPAN crash

· IP4 failures

o   These are multiple failures and I'm not sure that grouping them together 
is correct

· L2 failures/crash

o   As in IP4, these are multiple failures and I'm not sure that grouping them 
together is correct

· ECMP failure

· Multicast failure

· ACL failure

o   I'm already working with Andrew on fixing this

The reason I didn't reach out to all authors individually is that I wanted 
someone to look at the issues and assess whether there's an overlap (or I 
grouped the failures improperly), since some of the failures look similar.

Then there's the issue of hardware availability - if anyone willing to help has 
access to fd.io lab, I can setup access to a Cavium ThunderX, otherwise we 
could set up a call if further debugging is needed.

Thanks,
Juraj


Re: [vpp-dev] Master branch l2bd test perf drop

2018-09-27 Thread Neale Ranns via Lists.Fd.Io

Hi Yuwei,

There was a change to the l2flood node recently:
  https://gerrit.fd.io/r/#/c/13578/
where we use the buffer clone mechanism rather than free-recycle. I would 
expect the CPU cycles per invocation of the l2-flood node to increase, but the 
number of invocations of l2flood to decrease (w.r.t. the interface-tx node).
How many replications does your test perform and is there a trend for perf 
change versus number of replications?

Thanks,
Neale
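
One way to quantify the expectation above is the debug CLI's runtime
statistics, which report calls, vectors and clocks per node; comparing the
clocks/vector and call counts of l2-flood against the tx node on 18.07 and
master shows whether the clone-based flood is the regression. A sketch (the
interface name is a placeholder):

  # While test traffic is running:
  vppctl clear runtime
  sleep 10          # accumulate stats over a fixed interval
  vppctl show runtime | grep -E 'Name|l2-flood|TenGigabitEthernet.*-tx'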


From:  on behalf of Zhang Yuwei
Date: Thursday, 27 September 2018 at 05:02
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] Master branch l2bd test perf drop

Hi All,
     In our recent testing, I found a performance drop on the master branch. I 
ran the l2bd test case on a 2.5 GHz CPU and found an almost 35% drop compared 
to the 18.07 release. My test puts two NIC ports in the same bridge domain and 
sends traffic to measure l2 forwarding performance. On the master branch, the 
l2flood function consumes many more CPU cycles than in 18.07, which means any 
test that uses the l2flood function will also show a performance drop. Can 
anybody kindly help check this issue? Thanks a lot.

Regards,
Yuwei



Re: [vpp-dev] Master branch l2bd test perf drop

2018-09-27 Thread Damjan Marion via Lists.Fd.Io

-- 
Damjan

> On 27 Sep 2018, at 05:02, Zhang Yuwei  wrote:
> 
> Hi All,
> In our recent testing, I found a performance drop on the master branch. I 
> ran the l2bd test case on a 2.5 GHz CPU and found an almost 35% drop compared 
> to the 18.07 release. My test puts two NIC ports in the same bridge domain 
> and sends traffic to measure l2 forwarding performance. On the master branch, 
> the l2flood function consumes many more CPU cycles than in 18.07, which means 
> any test that uses the l2flood function will also show a performance drop. 
> Can anybody kindly help check this issue? Thanks a lot.
>  
> Regards,
> Yuwei
> 

Is that drop visible in CSIT trending graphs?




[vpp-dev] Setup the Ipsec environment with VPP

2018-09-27 Thread 田野
Hello VPP developers:



Is there someone who can tell me how to set up the IPsec environment with VPP?

Given the following topology, let's assume "Gateway moon" is running VPP.

I need to set up an IPsec server on "Gateway moon", and I will use "client 
alice" (let's say it is a common Ubuntu desktop with strongswan) as the IPsec 
client to connect to "moon".

A link to a reference document would also be helpful to me.



Thank you very much!



[alice moon carol winnetou]



Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread hujie....@chinatelecom.cn
Hi Ole,

Thanks. We tried changing the MTU value to send big packets of more than 1500 
or 1460 bytes, but that is only a temporary workaround limited to the lab. We 
hope to find a tool or some code that can be put into VPP or DPDK to fragment 
big packets to less than 1460 bytes automatically.

Best Regards.

Jie HU



hujie@chinatelecom.cn
 
From: Yang, Zhiyong
Date: 2018-09-27 15:37
To: Ole Troan
CC: vpp-dev@lists.fd.io; Kinsella, Ray; hujie@chinatelecom.cn; Liu, Frank M
Subject: RE: [vpp-dev] one question about IP fragment
Ole, thanks so much for your warm help.
 
> The next steps for tunnels, to help avoid fragmentation is to add some sort of
> tunnel path MTU discovery.
 
It looks very interesting and helpful. And I'm looking forward to seeing it.
 
However, we fail to send big packets greater than the MTU now. For example, 
we have MTU = 1500; when 1500-byte packets are encapsulated by the vxlan 
protocol, the size of the packets becomes bigger than the MTU, and it looks 
like sending those packets fails now.
  
BTW
Is DPDK IP fragment/reassembly supported in VPP now?
 
Thanks
Zhiyong
 
> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ole Troan
> Sent: Wednesday, September 26, 2018 7:06 PM
> To: Yang, Zhiyong 
> Cc: vpp-dev@lists.fd.io; Kinsella, Ray 
> Subject: Re: [vpp-dev] one question about IP fragment
> 
> Zhiyong,
> 
> > When I use vxlan in both VM and physical machine, a packet drop issue
> > comes up; after setting VPP MTU = 1600 this issue disappears. As we all
> > know, some routers on the network don't support packets of more than
> > 1518 bytes.
> > Should I try to use IP fragmentation in this case? Or any other better
> > solution? Does VPP IP fragmentation work now? If yes, could you show me
> > how to configure it? Thank you very much.
> 
> Fragmentation now works. Currently both for IPv4 and IPv6 packets are
> fragmented in ip{4,6}_rewrite. Note that only locally originated IPv6 packets 
> can
> be fragmented, and only IPv4 packets with DF = 0.
> Assuming the above two restrictions are adhered to, if the VXLAN node sent
> packets larger than the outgoing interface MTU they should be fragmented now.
> There is no configuration for fragmentation. (Although I am thinking of 
> adding a
> knob, with default disabled).
> 
> For reassembly, VPP has a short-coming, It can only reassemble as an input
> feature, meaning all fragments, even though not destined for the VPP instance
> itself are reassembled.
> I think Juraj is working on a fix for that.
> 
> The next steps for tunnels, to help avoid fragmentation is to add some sort of
> tunnel path MTU discovery.
> 
> But in short, you are much better off with a well managed MTU, than you are
> with fragmentation.
> See our draft in intarea for a list: 
> https://tools.ietf.org/html/draft-ietf-intarea-
> frag-fragile-00
> 
> Cheers,
> Ole


Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread Zhiyong Yang


> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ole Troan
> Sent: Thursday, September 27, 2018 5:19 PM
> To: Yang, Zhiyong 
> Cc: vpp-dev@lists.fd.io; Kinsella, Ray ;
> hujie@chinatelecom.cn; Liu, Frank M 
> Subject: Re: [vpp-dev] one question about IP fragment
> 
> > Got it.
> > Do you have a plan to support this DPDK feature in VPP, or will only the
> > VPP-native implementation be supported in future?
> 
> There are no plans as far as I’m aware.
> I don’t see any reason to do the fragmentation with DPDK code over VPP.
> Unless there is significant performance difference. Great if you could have a 
> go
> at measuring?
> 
> And of course, I’ll repeat my mantra. If you rely on IP fragmentation you are
> doing something wrong. ;-)
> 
> Best regards,
> Ole

No, I don't plan to try it either. And I like your mantra and sense of humor. ;-)


Thanks
Zhiyong



Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread Ole Troan
> Got it.
> Do you have a plan to support this DPDK feature in VPP, or will only the
> VPP-native implementation be supported in future?

There are no plans as far as I’m aware.
I don’t see any reason to do the fragmentation with DPDK code over VPP.
Unless there is significant performance difference. Great if you could have a 
go at measuring?

And of course, I’ll repeat my mantra. If you rely on IP fragmentation you are 
doing something wrong. ;-)

Best regards,
Ole


Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread Zhiyong Yang
Ole, 

Thank you for quick reply.
Reply inline.

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ole Troan
> Sent: Thursday, September 27, 2018 4:49 PM
> To: Yang, Zhiyong 
> Cc: vpp-dev@lists.fd.io; Kinsella, Ray ;
> hujie@chinatelecom.cn; Liu, Frank M 
> Subject: Re: [vpp-dev] one question about IP fragment
> 
> Zhiyong,
> 
> >> The next steps for tunnels, to help avoid fragmentation is to add
> >> some sort of tunnel path MTU discovery.
> >
> > It looks very interesting and helpful. And I'm looking forward to seeing it.
> >
> > However, we fail to send big packet greater than MTU now,  for
> > example, We have MTU = 1500, when 1500bytes packets are encapped by
> > vxlan protocol, of course,  the size of packets is bigger than MTU at
> > the time , it looks that sending packets fails now.
> 
> You are not master latest?
> This is for IPv4 or IPv6 transport?
> 
> Take a look at the packet trace and see how that looks like?
> And note that IPv4 with DF=1 or not locally originated IPv6 is not fragmented.
> 
> You can also do testing with https://gerrit.fd.io/r/#/c/14984/ included, which
> fixes a problem with tracing after fragmentation.

Hujie will check this test.

> 
> > BTW
> > Is DPDK IP fragment/reassembly supported in VPP now?
> 
> No. DPDK fragmentation uses indirect buffers which we don’t have in VPP.
> 

Got it.
Do you have a plan to support this DPDK feature in VPP, or will only the 
VPP-native implementation be supported in future?

> Best regards,
> Ole

Thanks
Zhiyong


Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread Ole Troan
Hujie,

> Thanks. We tried changing the MTU value to send big packets of more than
> 1500 or 1460 bytes, but that is only a temporary workaround limited to the
> lab. We hope to find a tool or some code that can be put into VPP or DPDK
> to fragment big packets to less than 1460 bytes automatically.

As I replied above, VPP will fragment automatically, given the restrictions 
above.
E.g. it might be that VXLAN tunnel head end sets DF=1 or doesn’t set the 
locally originated flag on the packet.
A packet trace should give you a better hint. Or just set a breakpoint in 
ip{4,6}_path_mtu_check()

Cheers,
Ole
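
Both of Ole's suggestions in shell form, as a sketch (the rx node name
depends on the driver; the function name is the one given above and needs a
debug build for the breakpoint):

  # 1) Packet trace: capture a few packets, send a >MTU packet
  #    through the vxlan tunnel, then inspect the path:
  vppctl trace add dpdk-input 10
  vppctl show trace

  # 2) Break on the MTU check inside a running vpp:
  gdb -p "$(pidof vpp)" -ex 'break ip4_path_mtu_check' -ex 'continue'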

> From: Yang, Zhiyong
> Date: 2018-09-27 15:37
> To: Ole Troan
> CC: vpp-dev@lists.fd.io; Kinsella, Ray; hujie@chinatelecom.cn; Liu, Frank 
> M
> Subject: RE: [vpp-dev] one question about IP fragment
> Ole, thanks so much for your warm help.
>  
> > The next steps for tunnels, to help avoid fragmentation is to add some sort 
> > of
> > tunnel path MTU discovery.
>  
> It looks very interesting and helpful. And I'm looking forward to seeing it.
>  
> However, we fail to send big packets greater than the MTU now. For example,
> we have MTU = 1500; when 1500-byte packets are encapsulated by the vxlan
> protocol, the size of the packets becomes bigger than the MTU, and it looks
> like sending those packets fails now.
>  
> BTW
> Is DPDK IP fragment/reassembly supported in VPP now?
>  
> Thanks
> Zhiyong
>  
> > -Original Message-
> > From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ole 
> > Troan
> > Sent: Wednesday, September 26, 2018 7:06 PM
> > To: Yang, Zhiyong 
> > Cc: vpp-dev@lists.fd.io; Kinsella, Ray 
> > Subject: Re: [vpp-dev] one question about IP fragment
> >
> > Zhiyong,
> >
> > > When I use vxlan in both VM and physical machine , packet 
> > > drop issue
> > is come across, after setting VPP MTU = 1600,  this issue will disappear.  
> > As we
> > all know, some of routes on the network doesn’t support more than 1518 bytes
> > packet.
> > > Should I try to use IP fragment in this case? Or any other better 
> > > solution?
> > Does VPP IP fragment work now? If yes, Could you show me how to configure?
> > Thank you very much.
> >
> > Fragmentation now works. Currently both for IPv4 and IPv6 packets are
> > fragmented in ip{4,6}_rewrite. Note that only locally originated IPv6 
> > packets can
> > be fragmented, and only IPv4 packets with DF = 0.
> > Assuming the above two restrictions are adhered to, if the VXLAN node sent
> > packets larger than the outgoing interface MTU they should be fragmented 
> > now.
> > There is no configuration for fragmentation. (Although I am thinking of 
> > adding a
> > knob, with default disabled).
> >
> > For reassembly, VPP has a short-coming, It can only reassemble as an input
> > feature, meaning all fragments, even though not destined for the VPP 
> > instance
> > itself are reassembled.
> > I think Juraj is working on a fix for that.
> >
> > The next steps for tunnels, to help avoid fragmentation is to add some sort 
> > of
> > tunnel path MTU discovery.
> >
> > But in short, you are much better off with a well managed MTU, than you are
> > with fragmentation.
> > See our draft in intarea for a list: 
> > https://tools.ietf.org/html/draft-ietf-intarea-
> > frag-fragile-00
> >
> > Cheers,
> > Ole



Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread Ole Troan
Zhiyong,

>> The next steps for tunnels, to help avoid fragmentation is to add some sort 
>> of
>> tunnel path MTU discovery.
> 
> It looks very interesting and helpful. And I'm looking forward to seeing it.
> 
> However, we fail to send big packets greater than the MTU now. For example,
> we have MTU = 1500; when 1500-byte packets are encapsulated by the vxlan
> protocol, the size of the packets becomes bigger than the MTU, and it looks
> like sending those packets fails now.

Are you not on the latest master?
Is this for IPv4 or IPv6 transport?

Take a look at the packet trace and see what it looks like.
And note that IPv4 with DF=1 or not locally originated IPv6 is not fragmented.
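
The DF restriction is easy to probe end-to-end from a host behind VPP with
plain iputils ping, which can set or clear DF explicitly (addresses and sizes
below are placeholders; 1572 bytes of payload plus 28 bytes of headers gives a
1600-byte IPv4 packet):

  # DF set: expect "Frag needed" / loss past a 1500-byte MTU hop
  ping -M do   -s 1572 192.0.2.1

  # DF clear: the same probe may be fragmented in ip4-rewrite
  ping -M dont -s 1572 192.0.2.1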

You can also do testing with https://gerrit.fd.io/r/#/c/14984/ included, which 
fixes a problem with tracing after fragmentation.

> BTW
> Is DPDK IP fragment/reassembly supported in VPP now?

No. DPDK fragmentation uses indirect buffers which we don’t have in VPP.

Best regards,
Ole


Re: [vpp-dev] Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, SPAN, ACL

2018-09-27 Thread Andrew Yourtchenko


> On 27 Sep 2018, at 09:21, Juraj Linkeš  wrote:
> 
> Hi Neale,
>  
> I had a debugging session with Andrew about failing ACL testcases and he 
> uncovered that the root cause is in l2 and ip4:
> 1) the timeout and big files
> 
> for some reason in the bridged setup done by a testcase, the VPP reinjects 
> the packet being sent onto one of the interfaces of the bridge, in a loop.
> 
> The following crude diff eliminates the problem and the tests pass: 
> https://paste.ubuntu.com/p/CSMYjXsZyX/
> 
> 2) there is a failure of a mac acl testcase in the routed scenario, where the 
> ip lookup picks up incorrect next index:
> 
> The following shows the problem for the properly and improperly routed packet:
> 
> https://paste.ubuntu.com/p/wTWWNhwSKY/
> 
> Could you advise on the first issue (Andrew wasn't sure the diff is a proper 
> fix) and help debug the other

To clarify: I am 100% sure it is NOT a proper fix, it was there to just 
demonstrate the issue.

—a



> issue (or, most likely related, issues https://jira.fd.io/browse/VPP-1432 and 
> https://jira.fd.io/browse/VPP-1433?) If not, could you suggest someone so I 
> can ask them?
>  
> Thanks,
> Juraj
>  
> From: Juraj Linkeš 
> Sent: Tuesday, September 25, 2018 10:07 AM
> To: 'Juraj Linkeš' ; vpp-dev 
> Cc: csit-dev 
> Subject: RE: Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, 
> SPAN, ACL
>  
> I created the new tickets under CSIT, which is an oversight, but I fixed it 
> and now the tickets are under VPP:
> ·GRE crash
> ·SCTP failure/crash
> o   Me and Marco resolved a similar issue in the past, but this could be 
> something different
> ·SPAN crash
> ·IP4 failures
> o   These are multiple failures and I'm not sure that grouping them together 
> is correct
> ·L2 failures/crash
> o   As in IP4, these are multiple failures and I'm not sure that grouping 
> them together is correct
> ·ECMP failure
> ·Multicast failure
> ·ACL failure
> o   I'm already working with Andrew on fixing this
>  
> There seem to be a lot of people who touched the code. I would like to ask 
> the authors to tell me who to turn to (at least for IP and L2).
>  
> Regards,
> Juraj
>  
> From: Juraj Linkeš [mailto:juraj.lin...@pantheon.tech] 
> Sent: Monday, September 24, 2018 6:26 PM
> To: vpp-dev 
> Cc: csit-dev 
> Subject: [vpp-dev] Make test failures on ARM
>  
> Hi vpp-devs,
>  
> Especially ARM vpp devs :)
>  
> We're experiencing a number of failures on Cavium ThunderX and we'd like to 
> fix the issues. I've created a number of Jira tickets:
> ·GRE crash
> ·SCTP failure/crash
> o   Me and Marco resolved a similar issue in the past, but this could be 
> something different
> ·SPAN crash
> ·IP4 failures
> o   These are multiple failures and I'm not sure that grouping them together 
> is correct
> ·L2 failures/crash
> o   As in IP4, these are multiple failures and I'm not sure that grouping 
> them together is correct
> ·ECMP failure
> ·Multicast failure
> ·ACL failure
> o   I'm already working with Andrew on fixing this
>  
> The reason I didn't reach out to all authors individually is that I wanted 
> someone to look at the issues and assess whether there's an overlap (or I 
> grouped the failures improperly), since some of the failures look similar.
>  
> Then there's the issue of hardware availability - if anyone willing to help 
> has access to fd.io lab, I can setup access to a Cavium ThunderX, otherwise 
> we could set up a call if further debugging is needed.
>  
> Thanks,
> Juraj


Re: [vpp-dev] one question about IP fragment

2018-09-27 Thread Zhiyong Yang
Ole, thanks so much for your warm help.

> The next steps for tunnels, to help avoid fragmentation is to add some sort of
> tunnel path MTU discovery.

It looks very interesting and helpful. And I'm looking forward to seeing it.

However, we fail to send big packets greater than the MTU now. For example, 
we have MTU = 1500; when 1500-byte packets are encapsulated by the vxlan 
protocol, the size of the packets becomes bigger than the MTU, and it looks 
like sending those packets fails now.
  
BTW
Is DPDK IP fragment/reassembly supported in VPP now?

Thanks
Zhiyong

> -Original Message-
> From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Ole Troan
> Sent: Wednesday, September 26, 2018 7:06 PM
> To: Yang, Zhiyong 
> Cc: vpp-dev@lists.fd.io; Kinsella, Ray 
> Subject: Re: [vpp-dev] one question about IP fragment
> 
> Zhiyong,
> 
> > When I use vxlan in both VM and physical machine, a packet drop issue
> > comes up; after setting VPP MTU = 1600 this issue disappears. As we all
> > know, some routers on the network don't support packets of more than
> > 1518 bytes.
> > Should I try to use IP fragmentation in this case? Or any other better
> > solution? Does VPP IP fragmentation work now? If yes, could you show me
> > how to configure it? Thank you very much.
> 
> Fragmentation now works. Currently both for IPv4 and IPv6 packets are
> fragmented in ip{4,6}_rewrite. Note that only locally originated IPv6 packets 
> can
> be fragmented, and only IPv4 packets with DF = 0.
> Assuming the above two restrictions are adhered to, if the VXLAN node sent
> packets larger than the outgoing interface MTU they should be fragmented now.
> There is no configuration for fragmentation. (Although I am thinking of 
> adding a
> knob, with default disabled).
> 
> For reassembly, VPP has a short-coming, It can only reassemble as an input
> feature, meaning all fragments, even though not destined for the VPP instance
> itself are reassembled.
> I think Juraj is working on a fix for that.
> 
> The next steps for tunnels, to help avoid fragmentation is to add some sort of
> tunnel path MTU discovery.
> 
> But in short, you are much better off with a well managed MTU, than you are
> with fragmentation.
> See our draft in intarea for a list: 
> https://tools.ietf.org/html/draft-ietf-intarea-
> frag-fragile-00
> 
> Cheers,
> Ole


Re: [vpp-dev] Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, SPAN, ACL

2018-09-27 Thread Juraj Linkeš
Hi Neale,

I had a debugging session with Andrew about failing ACL testcases and he 
uncovered that the root cause is in l2 and ip4:

1) the timeout and big files

For some reason, in the bridged setup done by a testcase, VPP reinjects the 
packet being sent onto one of the interfaces of the bridge, in a loop.

The following crude diff eliminates the problem and the tests pass: 
https://paste.ubuntu.com/p/CSMYjXsZyX/

2) there is a failure of a mac acl testcase in the routed scenario, where the 
ip lookup picks up incorrect next index:

The following shows the problem for the properly and improperly routed packet:

https://paste.ubuntu.com/p/wTWWNhwSKY/
Could you advise on the first issue (Andrew wasn't sure the diff is a proper 
fix) and help debug the other issue (or, most likely related, issues 
https://jira.fd.io/browse/VPP-1432 and https://jira.fd.io/browse/VPP-1433?) If 
not, could you suggest someone so I can ask them?

Thanks,
Juraj

From: Juraj Linkeš
Sent: Tuesday, September 25, 2018 10:07 AM
To: 'Juraj Linkeš' ; vpp-dev 
Cc: csit-dev 
Subject: RE: Make test failures on ARM - IP4, L2, ECMP, Multicast, GRE, SCTP, 
SPAN, ACL

I created the new tickets under CSIT, which is an oversight, but I fixed it and 
now the tickets are under VPP:

*  GRE crash

*  SCTP failure/crash

o   Marco and I resolved a similar issue in the past, but this could be 
something different

*  SPAN crash

*  IP4 failures

o   These are multiple failures and I'm not sure that grouping them together is 
correct

*  L2 failures/crash

o   As in IP4, these are multiple failures and I'm not sure that grouping them 
together is correct

*  ECMP failure

*  Multicast failure

*  ACL failure

o   I'm already working with Andrew on fixing this

There seem to be a lot of people who touched the code. I would like to ask the 
authors to tell me who to turn to (at least for IP and L2).

Regards,
Juraj

From: Juraj Linkeš [mailto:juraj.lin...@pantheon.tech]
Sent: Monday, September 24, 2018 6:26 PM
To: vpp-dev mailto:vpp-dev@lists.fd.io>>
Cc: csit-dev mailto:csit-...@lists.fd.io>>
Subject: [vpp-dev] Make test failures on ARM

Hi vpp-devs,

Especially ARM vpp devs :)

We're experiencing a number of failures on Cavium ThunderX and we'd like to fix 
the issues. I've created a number of Jira tickets:

*  GRE crash

*  SCTP failure/crash

o   Marco and I resolved a similar issue in the past, but this could be 
something different

*  SPAN crash

*  IP4 failures

o   These are multiple failures and I'm not sure that grouping them together is 
correct

*  L2 failures/crash

o   As in IP4, these are multiple failures and I'm not sure that grouping them 
together is correct

*  ECMP failure

*  Multicast failure

*  ACL failure

o   I'm already working with Andrew on fixing this

The reason I didn't reach out to all authors individually is that I wanted 
someone to look at the issues and assess whether there's an overlap (or I 
grouped the failures improperly), since some of the failures look similar.

Then there's the issue of hardware availability - if anyone willing to help has 
access to fd.io lab, I can setup access to a Cavium ThunderX, otherwise we 
could set up a call if further debugging is needed.

Thanks,
Juraj


Re: [vpp-dev] Unable to build router plugin

2018-09-27 Thread Mehran Memarnejad
Hi carlito,

I've had problems much like yours. Sometimes VPP updates its functions
while vppsb stays the same, so you need to update vppsb to make it work.
In my case I just updated vppsb's outdated function call to the new one
and it worked.
As you know, vppsb is a plugin for vpp and it calls vpp's functions, so any
change in a vpp function affects vppsb, e.g. a function signature change.
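
The compile error in this thread is exactly that case: the declaration of
vnet_unset_ip6_ethernet_neighbor in the installed headers no longer matches
the call in vppsb. A hedged way to find the current signature and the call
site to fix (paths are the ones from the error output):

  # See what arguments the function takes in today's vpp tree:
  grep -n -A3 'vnet_unset_ip6_ethernet_neighbor' \
      build-root/install-vpp_debug-native/vpp/include/vnet/ip/ip6_neighbor.h

  # Then edit the caller the compiler flagged, in the vppsb tree:
  #   router/router/tap_inject_netlink.c:140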


On Thursday, September 27, 2018, carlito nueno 
wrote:

> Hi all,
>
> I am trying to build the router-plugin:
> make V=0 PLATFORM=vpp TAG=vpp_debug install-deb netlink-install
> router-install
>
> I am using the Vagrantfile present in vpp repo and am pulling the
> current master (as of September 26 2018). I am also pulling the
> current master of vppsb.
>
> But I am getting this error:
>
>  Building router in /vpp/build-root/build-vpp_debug-native/router 
> make[1]: Entering directory '/vpp/build-root/build-vpp_
> debug-native/router'
>   CC   router/tap_inject.lo
>   CC   router/tap_inject_netlink.lo
> /vpp/build-data/../router/router/tap_inject_netlink.c: In function
> ‘add_del_neigh’:
> /vpp/build-data/../router/router/tap_inject_netlink.c:140:9: error:
> too many arguments to function ‘vnet_unset_ip6_ethernet_neighbor’
>  vnet_unset_ip6_ethernet_neighbor (vm, sw_if_index,
>  ^~~~
> In file included from
> /vpp/build-data/../router/router/tap_inject_netlink.c:19:0:
> /vpp/build-root/install-vpp_debug-native/vpp/include/vnet/
> ip/ip6_neighbor.h:84:12:
> note: declared here
>  extern int vnet_unset_ip6_ethernet_neighbor (vlib_main_t * vm,
> ^~~~
> Makefile:483: recipe for target 'router/tap_inject_netlink.lo' failed
> make[1]: *** [router/tap_inject_netlink.lo] Error 1
> make[1]: *** Waiting for unfinished jobs
> make[1]: Leaving directory '/vpp/build-root/build-vpp_debug-native/router'
> Makefile:691: recipe for target 'router-build' failed
> make: *** [router-build] Error 2
>
> Any advice?
>
> Thanks
>