Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Elias Rudberg
Ah. OK, now it's changed to the hopefully better name
"unformat_udp_port".
/ Elias
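
(For context, such a parser in VPP's unformat style might look roughly like
the sketch below. This is a minimal illustration assuming the usual unformat
conventions; the actual code merged for gerrit 27280 may differ.)

/* Sketch of a 16-bit port parser in VPP unformat style. */
uword
unformat_udp_port (unformat_input_t * input, va_list * args)
{
  u16 *result = va_arg (*args, u16 *);
  u32 port;

  /* Parse into a u32 first so values above 65535 are rejected
   * instead of being silently truncated. */
  if (!unformat (input, "%u", &port))
    return 0;
  if (port < 1 || port > 65535)
    return 0;
  *result = (u16) port;
  return 1;
}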

On Fri, 2020-05-29 at 00:32 +0200, Andrew Yourtchenko wrote:
> > On 29 May 2020, at 00:02, Elias Rudberg 
> > wrote:
> > 
> > I changed the fix using %U and a new unformat_l3_port function, as
> > suggested by Paul:
> > 
> > https://gerrit.fd.io/r/c/vpp/+/27280
> 
> My opinion is that it’s an incorrect and unnecessary
> generalization/abstraction:
> 
> 1) A port is an L4 concept, not L3. Cf. the name.
> 
> 2) No one said all L4 ports are/have to be a u16, or that an L4 protocol has
> to have a concept of port at all. Don’t let the TCP/UDP monoculture fool you.
> 
> But, 🤷‍♂️.
> 
> —a
> 



Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Andrew Yourtchenko


> On 29 May 2020, at 00:02, Elias Rudberg  wrote:
> 
> I changed the fix using %U and a new unformat_l3_port function, as
> suggested by Paul:
> 
> https://gerrit.fd.io/r/c/vpp/+/27280

My opinion is that it’s an incorrect and unnecessary generalization/abstraction:

1) A port is an L4 concept, not L3. Cf. the name.

2) No one said all L4 ports are/have to be a u16, or that an L4 protocol has to have a 
concept of port at all. Don’t let the TCP/UDP monoculture fool you.

But, 🤷‍♂️.

—a

> 
> This works fine, but I wasn't sure where to put the unformat_l3_port
> function. Now it's in vnet/udp/udp_format.c -- let me know if you have
> a better idea about where it should be.
> 
> / Elias


Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Elias Rudberg
I changed the fix using %U and a new unformat_l3_port function, as
suggested by Paul:

https://gerrit.fd.io/r/c/vpp/+/27280

This works fine, but I wasn't sure where to put the unformat_l3_port
function. Now it's in vnet/udp/udp_format.c -- let me know if you have
a better idea about where it should be.

/ Elias
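
(As a usage illustration: with such a function, the CLI handler parses the
port via %U, roughly as in the sketch below - assuming VPP's standard
unformat machinery, not the exact contents of the patch.)

/* Sketch of CLI parsing with %U and a custom unformat function. */
u16 collector_port = 0;

while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
  {
    if (unformat (input, "collector-port %U",
                  unformat_udp_port, &collector_port))
      ;
    else
      return clib_error_return (0, "unknown input `%U'",
                                format_unformat_error, input);
  }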



Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Andrew Yourtchenko
Ouch. Attempting to run “make test-debug” resulted in a lot of unrelated 
sadness... I will do some more testing to ensure it’s not a PEBKAC...

--a

> On 28 May 2020, at 21:25, Andrew Yourtchenko via lists.fd.io 
>  wrote:
> 
> Paul,
> 
> This is an excellent catch, thanks!! I will give it a go in test-debug...
> 
> --a
> 
>>> On 28 May 2020, at 16:15, Paul Vinciguerra  
>>> wrote:
>>> 
>> 
>> A few weeks back, I became aware of the following issue with the LISP tests:
>> 
>> /vpp/build-root/install-vpp_debug-native/vpp/bin/vpp[63989]: 
>> vnet_lisp_add_del_locator_set:2140: Can't delete a locator that supports a 
>> mapping!
>> /vpp/build-root/install-vpp_debug-native/vpp/bin/vpp[63989]: received signal 
>> SIGSEGV, PC 0xa0a0a00, faulting address 0xa0a0a00
>> 14:32:44,290 Child test runner process unresponsive and core-file exists in 
>> test temporary directory (last test running was `Test case for basic 
>> encapsulation' in `/tmp/vpp-unittest-TestLisp-wtuvdu4q')!
>> 
>> which seems to be triggered by the api trace commands in tearDown in 
>> framework.py.  I see it while running tests in a docker container.
>> 
>> 
>> 
>> On Thu, May 28, 2020 at 4:51 AM Andrew Yourtchenko  
>> wrote:
>>> Hi Elias,
>>> 
>>> Yeah it all does point to something like uninitialized data - I ran 
>>> yesterday the tests on two different machines for a while, apparently 
>>> without the issues...
>>> 
>>> The CI runtime environment is much more dynamic - it’s an ephemeral docker 
>>> container that is orchestrated by the nomad and is destroyed after the job 
>>> is run.
>>> 
>>> Could you push as a separate change the code that reliably gives you the 
>>> error in the LISP unit test in the CI, and let me know the change# ?
>>> 
>>> 
>>>  I will then test some tooling enhancement ideas I had for a while - to 
>>> check within the job whether the core exists, and if it does, to load it 
>>> into gdb and do some scripted processing of it and output the results... 
>>> (Iterate over the call stack and issue stuff like ‘info locals’, ‘info 
>>> regs’, etc).
>>> 
>>> I did some experiments with that approach earlier and it seemed like a 
>>> rather scalable technique for most of the issues, which should also save 
>>> disk space and the developer time ...
>>> 
>>> --a
>>> 
>>> > On 28 May 2020, at 10:33, Elias Rudberg  wrote:
>>> > 
>>> > Hi Andrew,
>>> > 
>>> > In my case it failed several times and appeared to be triggered by
>>> > seemingly harmless code changes, but it seemed like the problem was
>>> > reproducible for a given version of the code. What seemed to matter was
>>> > when I changed things related to local variables inside the
>>> > set_ipfix_exporter_command_fn() function. The test logs said "Core-file 
>>> > exists" which I suppose means that vpp crashed. The testing framework
>>> > repeats the test several times, saying "3 attempt(s) left", then "2
>>> > attempt(s) left" and so on, all those repeated attempts seemed to crash
>>> > in the same way.
>>> > 
>>> > It could be something with uninitialized variables, e.g. something that
>>> > is assumed to be zero but is never explicitly initialized so it can
>>> > work when it happens to be zero but depending on platform and compiler
>>> > details there could be some garbage there causing a problem. Then
>>> > unrelated code changes like adding variables somewhere making things
>>> > end up at slightly different memory locations could make the error come
>>> > and go. This is just guessing of course.
>>> > 
>>> > Is it possible to get login access to the machine where the
>>> > gerrit/jenkins tests are run, to debug it there where the issue can be
>>> > reproduced?
>>> > 
>>> > / Elias
>>> > 
>>> > 
>>> >> On Wed, 2020-05-27 at 19:03 +0200, Andrew Yourtchenko wrote:
>>> >> Yep, so it looks like we have an issue...
>>> >> 
>>> >> https://gerrit.fd.io/r/c/vpp/+/27305 has the same failures, I am
>>> >> rerunning it now to see how intermittent it is - as well as testing
>>> >> the latest master locally
>>> >> 
>>> >> --a
>>> >> 
>>> >>> On 27 May 2020, at 18:56, Elias Rudberg 
>>> >>> wrote:
>>> >>> 
>>> >>> Hi Andrew,
>>> >>> 
>>> >>> Yes, it was Basic LISP test. It looked like this in the
>>> >>> console.log.gz
>>> >>> for vpp-verify-master-ubuntu1804:
>>> >>> 
>>> >>> ===
>>> >>> 
>>> >>> ===
>>> >>> TEST RESULTS:
>>> >>>Scheduled tests: 1177
>>> >>> Executed tests: 1176
>>> >>>   Passed tests: 1039
>>> >>>  Skipped tests: 137
>>> >>> Not Executed tests: 1
>>> >>> Errors: 1
>>> >>> FAILURES AND ERRORS IN TESTS:
>>> >>> Testcase name: Basic LISP test 
>>> >>> ERROR: Test case for basic encapsulation
>>> >>> [test_lisp.TestLisp.test_lisp_basic_encap]
>>> >>> TESTCASES WHERE NO TESTS WERE SUCCESSFULLY EXECUTED:
>>> >>> Basic LISP test 
>>> >>> ===
>>> >>> 
>>> >>> ===
>>> >>> 
>>> >>> / Elias

Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Andrew Yourtchenko

Hi Elias,

I will let Ole merge the patch if he is happy with it.

Thanks a lot!

—a

> On 28 May 2020, at 16:19, Elias Rudberg  wrote:
> 
> Hi Andrew,
> 
>> Could you push as a separate change the code that reliably gives you
>> the error in the LISP unit test
> 
> I tried but today, whatever I do, I cannot reproduce the test failure
> anymore. All tests pass now even when I try exactly the same code for
> which the test failed yesterday.
> 
> For example, Patchset 4 for https://gerrit.fd.io/r/c/vpp/+/27280 failed
> yesterday, but now I created Patchset 8 which is identical to Patchset
> 4, and Patchset 8 passes all tests.
> 
> I don't know, maybe something changed in the testing environment since
> yesterday, or maybe the issue was never reproducible, it was just a
> coincidence that made it seem that way yesterday.
> 
> The good news is that the fix I wanted to do now passes the tests also
> when written as Ole suggested, with collector_port as u32 and a bounds
> check added:
> 
> https://gerrit.fd.io/r/c/vpp/+/27280
> 
> It would be great if that could get merged.
> 
> Best regards,
> Elias
> 


Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Andrew Yourtchenko
Paul,

This is an excellent catch, thanks!! I will give it a go in test-debug...

--a

> On 28 May 2020, at 16:15, Paul Vinciguerra  wrote:
> 
> 
> A few weeks back, I became aware of the following issue with the LISP tests:
> 
> /vpp/build-root/install-vpp_debug-native/vpp/bin/vpp[63989]: 
> vnet_lisp_add_del_locator_set:2140: Can't delete a locator that supports a 
> mapping!
> /vpp/build-root/install-vpp_debug-native/vpp/bin/vpp[63989]: received signal 
> SIGSEGV, PC 0xa0a0a00, faulting address 0xa0a0a00
> 14:32:44,290 Child test runner process unresponsive and core-file exists in 
> test temporary directory (last test running was `Test case for basic 
> encapsulation' in `/tmp/vpp-unittest-TestLisp-wtuvdu4q')!
> 
> which seems to be triggered by the api trace commands in tearDown in 
> framework.py.  I see it while running tests in a docker container.
> 
> 
> 
> On Thu, May 28, 2020 at 4:51 AM Andrew Yourtchenko  wrote:
>> Hi Elias,
>> 
>> Yeah it all does point to something like uninitialized data - I ran 
>> yesterday the tests on two different machines for a while, apparently 
>> without the issues...
>> 
>> The CI runtime environment is much more dynamic - it’s an ephemeral docker 
>> container that is orchestrated by the nomad and is destroyed after the job 
>> is run.
>> 
>> Could you push as a separate change the code that reliably gives you the 
>> error in the LISP unit test in the CI, and let me know the change# ?
>> 
>> 
>>  I will then test some tooling enhancement ideas I had for a while - to 
>> check within the job whether the core exists, and if it does, to load it 
>> into gdb and do some scripted processing of it and output the results... 
>> (Iterate over the call stack and issue stuff like ‘info locals’, ‘info 
>> regs’, etc).
>> 
>> I did some experiments with that approach earlier and it seemed like a 
>> rather scalable technique for most of the issues, which should also save 
>> disk space and the developer time ...
>> 
>> --a
>> 
>> > On 28 May 2020, at 10:33, Elias Rudberg  wrote:
>> > 
>> > Hi Andrew,
>> > 
>> > In my case it failed several times and appeared to be triggered by
>> > seemingly harmless code changes, but it seemed like the problem was
>> > reproducible for a given version of the code. What seemed to matter was
>> > when I changed things related to local variables inside the
>> > set_ipfix_exporter_command_fn() function. The test logs said "Core-file 
>> > exists" which I suppose means that vpp crashed. The testing framework
>> > repeats the test several times, saying "3 attempt(s) left", then "2
>> > attempt(s) left" and so on, all those repeated attempts seemed to crash
>> > in the same way.
>> > 
>> > It could be something with uninitialized variables, e.g. something that
>> > is assumed to be zero but is never explicitly initialized so it can
>> > work when it happens to be zero but depending on platform and compiler
>> > details there could be some garbage there causing a problem. Then
>> > unrelated code changes like adding variables somewhere making things
>> > end up at slightly different memory locations could make the error come
>> > and go. This is just guessing of course.
>> > 
>> > Is it possible to get login access to the machine where the
>> > gerrit/jenkins tests are run, to debug it there where the issue can be
>> > reproduced?
>> > 
>> > / Elias
>> > 
>> > 
>> >> On Wed, 2020-05-27 at 19:03 +0200, Andrew Yourtchenko wrote:
>> >> Yep, so it looks like we have an issue...
>> >> 
>> >> https://gerrit.fd.io/r/c/vpp/+/27305 has the same failures, I am
>> >> rerunning it now to see how intermittent it is - as well as testing
>> >> the latest master locally
>> >> 
>> >> --a
>> >> 
>> >>> On 27 May 2020, at 18:56, Elias Rudberg 
>> >>> wrote:
>> >>> 
>> >>> Hi Andrew,
>> >>> 
>> >>> Yes, it was Basic LISP test. It looked like this in the
>> >>> console.log.gz
>> >>> for vpp-verify-master-ubuntu1804:
>> >>> 
>> >>> ===
>> >>> 
>> >>> ===
>> >>> TEST RESULTS:
>> >>>Scheduled tests: 1177
>> >>> Executed tests: 1176
>> >>>   Passed tests: 1039
>> >>>  Skipped tests: 137
>> >>> Not Executed tests: 1
>> >>> Errors: 1
>> >>> FAILURES AND ERRORS IN TESTS:
>> >>> Testcase name: Basic LISP test 
>> >>> ERROR: Test case for basic encapsulation
>> >>> [test_lisp.TestLisp.test_lisp_basic_encap]
>> >>> TESTCASES WHERE NO TESTS WERE SUCCESSFULLY EXECUTED:
>> >>> Basic LISP test 
>> >>> ===
>> >>> 
>> >>> ===
>> >>> 
>> >>> / Elias
>> >>> 
>> >>> 
>> >>> 
>> >>> On Wed, 2020-05-27 at 18:42 +0200, Andrew Yourtchenko wrote:
>>  Basic LISP test - was it the one that was failing for you ?
>>  That particular test intermittently failed a couple of times for
>>  me
>>  as well, on a doc-only change, so we have an unrelated issue.
>>  I am running it locally to see what is going on.

[vpp-dev] How VPP knows that the link on an interface came up

2020-05-28 Thread Ahmed Bashandy
Hi,

When the carrier of an interface comes up, e.g. because someone plugs in a 
cable, the VPP function “send_sw_interface_event()” sends a message to clients 
indicating the event.

We are seeing a delay of 4-6 seconds from the time we plug in the cable until 
“send_sw_interface_event()” is called and the message is sent.

But when the carrier goes down, e.g. by pulling out the cable, the message is 
almost always sent within a second.

I am trying to figure out why it takes that much time to detect that the 
carrier came up.

Ahmed



Re: [vpp-dev] Segfault in 'vapi_type_msg_header1_t_ntoh()' with a C++ api client #vpp #vapi

2020-05-28 Thread Klement Sekera via lists.fd.io
Hey,

Swapping the context makes no sense, since the context is for the client: the 
client generates whatever context it wants, and VPP just copies it from request 
to response so that the client can match responses with requests. So there is 
no reason for the client to swap it when sending the message and then swap it 
back when receiving the message. The message ID, on the other hand, is always 
in network byte order and thus might need to be swapped.

Do you have or could you produce a piece of code which reproduces the issue, 
please?

Thanks,
Klement
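
(To illustrate the convention Klement describes, here is a small standalone
sketch in plain C; the struct mimics a VPP API message header and is
illustrative, not the real generated type.)

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
  uint16_t _vl_msg_id; /* always network byte order on the wire */
  uint32_t context;    /* opaque to VPP: copied back verbatim */
} msg_header_t;

int
main (void)
{
  msg_header_t req = { 0 }, reply = { 0 };

  /* Client: pick any context, in host order - no htonl() needed. */
  req.context = 13;
  req._vl_msg_id = htons (42); /* message IDs do get swapped */

  /* VPP (simulated): swaps its own message ID, copies context as-is. */
  reply._vl_msg_id = htons (43);
  reply.context = req.context;

  /* Client: match reply to request by comparing the raw values. */
  if (reply.context == req.context)
    printf ("matched reply to request with context %u\n", reply.context);
  return 0;
}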

> On 28 May 2020, at 19:14, pashinho1...@gmail.com wrote:
> 
> Hi all,
> 
> So, the problem is encountered with my C++ client when receiving a reply. The 
> strange thing is that this happens only with a specific type of api 
> request-reply.
> The segfault stack trace follows:
> Thread 1 "sample_plugin_client" hit Breakpoint 1, 
> vapi_type_msg_header1_t_ntoh (h=0x0)
> at /usr/include/vapi/vapi_internal.h:63
> 63    h->_vl_msg_id = be16toh (h->_vl_msg_id);
> (gdb) bt
> #0  vapi_type_msg_header1_t_ntoh (h=0x0) at 
> /usr/include/vapi/vapi_internal.h:63
> #1  0x00406bff in vapi_msg_sample_plugin_session_add_reply_ntoh 
> (msg=0x0) at /usr/include/vapi/sample_plugin.api.vapi.h:1215
> #2  0x00419869 in 
> vapi::vapi_swap_to_host<vapi_msg_sample_plugin_session_add_reply> (msg=0x0)
> at /usr/include/vapi/sample_plugin.api.vapi.hpp:260
> #3  0x00419ad3 in 
> vapi::Msg<vapi_msg_sample_plugin_session_add_reply>::assign_response 
> (this=0x7fffe1b8,
> resp_id=49, shm_data=0x0) at /usr/include/vapi/vapi.hpp:614
> #4  0x00419797 in vapi::Request<vapi_msg_sample_plugin_pfcp_add, vapi_msg_sample_plugin_pfcp_add_reply>::assign_response (
> this=0x7fffe170, id=49, shm_data=0x0) at 
> /usr/include/vapi/vapi.hpp:684
> #5  0x00410b1b in vapi::Connection::dispatch (this=0x7fffe2c0, 
> limit=0x7fffe170, time=5)
> at /usr/include/vapi/vapi.hpp:289
> #6  0x00410d9a in vapi::Connection::dispatch (this=0x7fffe2c0, 
> limit=...)
> at /usr/include/vapi/vapi.hpp:324
> #7  0x00410ddc in vapi::Connection::wait_for_response 
> (this=0x7fffe2c0, req=...)
> at /usr/include/vapi/vapi.hpp:340
> #8  0x0040ba58 in sample_plugin_pfcp_add (vpp_conn=..., msg_pload=...)
> at 
> /root/tmp/vpp/src/plugins/sample_plugin/rubbish/client/src/sample_plugin_client.cpp:213
> #9  0x0040c485 in main () at 
> /root/tmp/vpp/src/plugins/sample_plugin/rubbish/client/src/sample_plugin_client.cpp:388
> Here's the kaboom: "h->_vl_msg_id = be16toh (h->_vl_msg_id)", where 'h' is 
> NULL, yikes :O.
> I traced the root cause in the 'vapi::Connection::dispatch()' method, 
> specifically here:
> u32 context = *reinterpret_cast<u32 *> ((static_cast<u8 *> (shm_data) + 
> vapi_get_context_offset (id))); // 'context' here is 218103808 in my case, for 
> example
> const auto x = requests.front();
> matching_req = x;
> if (context == x->context)  // while 'x->context' here is 13, i.e. htonl(13) 
> is 218103808 (endianness inconsistency), so this branch here is not taken
> {
> std::tie (rv, break_dispatch) = x->assign_response (id, shm_data);
> }
> else // this one is taken, i.e. by passing 'nullptr' and subsequently 
> being dereferenced ==> BOOM!
> {
> std::tie (rv, break_dispatch) = x->assign_response (id, nullptr);
> }
> Also, I see REPLY_MACRO doing:
> rmp->_vl_msg_id = htons(REPLY_MSG_ID_BASE + VL_API_blablabla_REPLY);
> rmp->context = mp->context;
> So '_vl_msg_id' is put in network byte order, but 'context' is not - why is 
> that? Does this have something to do with the client's resulting segfault?
> Oh, and I'm on top of the latest 'stable/2005'.
> 
> Thank you 



[vpp-dev] Segfault in 'vapi_type_msg_header1_t_ntoh()' with a C++ api client #vpp #vapi

2020-05-28 Thread pashinho1990
[Edited Message Follows]

Hi all,

So, the problem is encountered with my C++ client when receiving a reply. The 
strange thing is that this happens only with a specific type of api 
request-reply.
The segfault stack trace follows:
> 
> Thread 1 "sample_plugin_client" hit Breakpoint 1,
> vapi_type_msg_header1_t_ntoh (h=0x0)
> at /usr/include/vapi/vapi_internal.h:63
> 63    h->_vl_msg_id = be16toh (h->_vl_msg_id);
> (gdb) bt
> #0  vapi_type_msg_header1_t_ntoh (h=0x0) at
> /usr/include/vapi/vapi_internal.h:63
> #1  0x00406bff in vapi_msg_sample_plugin_session_add_reply_ntoh
> (msg=0x0) at /usr/include/vapi/sample_plugin.api.vapi.h:1215
> #2  0x00419869 in
> vapi::vapi_swap_to_host<vapi_msg_sample_plugin_session_add_reply>
> (msg=0x0)
> at /usr/include/vapi/sample_plugin.api.vapi.hpp:260
> #3  0x00419ad3 in
> vapi::Msg<vapi_msg_sample_plugin_session_add_reply>::assign_response
> (this=0x7fffe1b8,
> resp_id=49, shm_data=0x0) at /usr/include/vapi/vapi.hpp:614
> #4  0x00419797 in
> vapi::Request<vapi_msg_sample_plugin_session_add, vapi_msg_sample_plugin_session_add_reply>::assign_response (
> this=0x7fffe170, id=49, shm_data=0x0) at
> /usr/include/vapi/vapi.hpp:684
> #5  0x00410b1b in vapi::Connection::dispatch (this=0x7fffe2c0,
> limit=0x7fffe170, time=5)
> at /usr/include/vapi/vapi.hpp:289
> #6  0x00410d9a in vapi::Connection::dispatch (this=0x7fffe2c0,
> limit=...)
> at /usr/include/vapi/vapi.hpp:324
> #7  0x00410ddc in vapi::Connection::wait_for_response
> (this=0x7fffe2c0, req=...)
> at /usr/include/vapi/vapi.hpp:340
> #8  0x0040ba58 in sample_plugin_session_add (vpp_conn=...,
> msg_pload=...)
> at
> /root/tmp/vpp/src/plugins/sample_plugin/rubbish/client/src/sample_plugin_client.cpp:213
> 
> #9  0x0040c485 in main () at
> /root/tmp/vpp/src/plugins/sample_plugin/rubbish/client/src/sample_plugin_client.cpp:388
> 

Here's the kaboom: "h->_vl_msg_id = be16toh (h->_vl_msg_id)", where 'h' is 
NULL, yikes :O.
I traced the root cause in the 'vapi::Connection::dispatch()' method, 
specifically here:
> 
> 
> u32 context = *reinterpret_cast<u32 *> ((static_cast<u8 *> (shm_data) +
> vapi_get_context_offset (id))); // 'context' here is 218103808 in my case,
> for example
> const auto x = requests.front();
> matching_req = x;
> if (context == x->context)  // while 'x->context' here is 13, i.e.
> htonl(13) is 218103808 (endianness inconsistency), so this branch here is
> not taken
> {
> std::tie (rv, break_dispatch) = x->assign_response (id, shm_data);
> }
> else     // this one is taken, i.e. by passing 'nullptr' and subsequently
> being dereferenced ==> BOOM!
> {
> std::tie (rv, break_dispatch) = x->assign_response (id, nullptr);
> }

Also, I see REPLY_MACRO doing:
> 
> rmp->_vl_msg_id = htons(REPLY_MSG_ID_BASE + VL_API_blablabla_REPLY);
> rmp->context = mp->context;

So '_vl_msg_id' is put in network byte order, but 'context' is not - why is that? 
Does this have something to do with the client's resulting segfault?
Oh, and I'm on top of the latest 'stable/2005'.

Thank you
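
(The numbers in the code comments above are consistent with an unwanted byte
swap: on a little-endian host, htonl(13) is 0x0D000000 = 218103808, exactly
the pair of values seen in the dispatch() comparison. A tiny standalone
check:)

#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint32_t context = 13;

  /* Holds on a little-endian host: 13 byte-swapped is 0x0D000000. */
  assert (htonl (context) == 218103808u);

  /* If one side swaps the context and the other compares raw values,
   * the match fails and dispatch() hands assign_response() a NULL
   * shm_data pointer - the crash described in this thread. */
  printf ("htonl(%u) = %u\n", context, htonl (context));
  return 0;
}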


Re: [vpp-dev] generic TCP MSS clamping

2020-05-28 Thread Miklos Tirpak
Thank you for the pointer, this is exactly what I was looking for. I will 
rebase the patch and add RX support.

Thanks,
Miklos

From: otr...@employees.org 
Sent: Thursday, May 28, 2020 12:43 PM
To: Mohsin Kazmi (sykazmi) 
Cc: Miklós Tirpák ; vpp-dev@lists.fd.io 

Subject: Re: [vpp-dev] generic TCP MSS clamping

Good find Mohsin. So it's only missing clamping on RX. I'm sure Miklos can add 
that.

Cheers,
Ole

> On 28 May 2020, at 12:23, Mohsin Kazmi (sykazmi)  wrote:
>
> Hi Miklos,
>
> Maybe it will help https://gerrit.fd.io/r/c/vpp/+/15144
>
> -br
> Mohsin
> From:  on behalf of Ole Troan 
> Date: Thursday, May 28, 2020 at 11:23 AM
> To: Miklos Tirpak 
> Cc: "vpp-dev@lists.fd.io" 
> Subject: Re: [vpp-dev] generic TCP MSS clamping
>
> Hi Miklos,
>
> > I see the NAT plugin already supports TCP MSS clamping but it is 
> > implemented only in in2out direction.
> >
> > We have endpoints with wrong MTUs behind tunnels and not all the traffic is 
> > NATed. Hence, it would be very nice to have generic support for MSS 
> > clamping that could be enabled on the tunnel interface.
> >
> > Do you think implementing this as a feature arch would make sense? Then it 
> > would not be limited to NAT or to one kind of tunnel for example.
> > If so, what is the best place? A new plugin?
>
> A bidirectional TCP MSS adjust would be fine.
> Putting it in a plugin is likely the simplest.
>
> I'm unsure if it should be generic or not. E.g. the NAT also needs to adjust 
> the TCP checksum, and it's likely better to do it only once.
>
> Best regards,
> Ole
> 
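
(For readers unfamiliar with the mechanics being discussed: MSS clamping
boils down to walking the TCP options of a SYN, lowering the MSS value if it
exceeds a limit, and patching the TCP checksum incrementally. A minimal
standalone sketch in plain C - illustrative only, not the code from gerrit
15144:)

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Clamp the MSS option in 'opts'/'opts_len' to 'max_mss' (host order),
 * updating the raw TCP checksum field. Returns 1 if rewritten. */
static int
clamp_tcp_mss (uint8_t * opts, int opts_len, uint16_t * tcp_checksum,
               uint16_t max_mss)
{
  int i = 0;
  while (i + 1 < opts_len)
    {
      uint8_t kind = opts[i];
      uint8_t len;
      if (kind == 0)             /* end-of-options */
        break;
      if (kind == 1)             /* NOP is a single byte */
        {
          i++;
          continue;
        }
      len = opts[i + 1];
      if (len < 2 || i + len > opts_len)
        break;                   /* malformed option list */
      if (kind == 2 && len == 4) /* MSS option */
        {
          uint16_t old_be, new_be;
          uint32_t sum;
          memcpy (&old_be, &opts[i + 2], 2);
          if (ntohs (old_be) <= max_mss)
            return 0;            /* already small enough */
          new_be = htons (max_mss);
          memcpy (&opts[i + 2], &new_be, 2);
          /* Incremental checksum update, RFC 1624: HC' = ~(~HC + ~m + m') */
          sum = (uint16_t) ~ntohs (*tcp_checksum);
          sum += (uint16_t) ~ntohs (old_be);
          sum += max_mss;
          sum = (sum & 0xffff) + (sum >> 16);
          sum = (sum & 0xffff) + (sum >> 16);
          *tcp_checksum = htons ((uint16_t) ~sum);
          return 1;
        }
      i += len;
    }
  return 0;
}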



Re: [vpp-dev] VPP release 20.05 is complete!

2020-05-28 Thread Dave Wallace

Most Excellent!

Congratulations to the VPP & CSIT community for all of the effort to 
complete another high quality VPP release.


Special thanks to Andrew for his management of the release and 
improvements to the release process!


-daw-

On 5/27/2020 5:27 PM, Andrew Yourtchenko wrote:

Dear all,

I am happy to announce that the release 20.05 is available on
packagecloud.io in the fdio/release repository.

I have verified that it is installable on Ubuntu 18.04 and Centos 7
distributions.

Special thanks to Vanessa Valderrama and Dave Wallace for the help
during the release.

--a (your friendly 20.05 release manager)

P.s. Branch stable/2005 is now open.


Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Elias Rudberg
Hi Andrew,

> Could you push as a separate change the code that reliably gives you
> the error in the LISP unit test

I tried but today, whatever I do, I cannot reproduce the test failure
anymore. All tests pass now even when I try exactly the same code for
which the test failed yesterday.

For example, Patchset 4 for https://gerrit.fd.io/r/c/vpp/+/27280 failed
yesterday, but now I created Patchset 8 which is identical to Patchset
4, and Patchset 8 passes all tests.

I don't know, maybe something changed in the testing environment since
yesterday, or maybe the issue was never reproducible, it was just a
coincidence that made it seem that way yesterday.

The good news is that the fix I wanted to do now passes the tests also
when written as Ole suggested, with collector_port as u32 and a bounds
check added:

https://gerrit.fd.io/r/c/vpp/+/27280

It would be great if that could get merged.

Best regards,
Elias
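
(For reference, the shape of that suggestion - parse into a wider type, then
range-check before narrowing - is roughly the following fragment of a CLI
handler; the merged change is in gerrit 27280 and may differ in detail:)

u32 collector_port = 0;

if (unformat (input, "collector-port %u", &collector_port))
  {
    if (collector_port == 0 || collector_port > 65535)
      return clib_error_return (0, "collector-port out of range");
  }

/* Only then narrow to the u16 that the exporter state expects. */
u16 port = (u16) collector_port;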



Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Paul Vinciguerra
A few weeks back, I became aware of the following issue with the LISP tests:

/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp[63989]:
vnet_lisp_add_del_locator_set:2140: Can't delete a locator that supports a
mapping!
/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp[63989]: received
signal SIGSEGV, PC 0xa0a0a00, faulting address 0xa0a0a00
14:32:44,290 Child test runner process unresponsive and core-file exists in
test temporary directory (last test running was `Test case for basic
encapsulation' in `/tmp/vpp-unittest-TestLisp-wtuvdu4q')!

which seems to be triggered by the api trace commands in tearDown in
framework.py.  I see it while running tests in a docker container.



On Thu, May 28, 2020 at 4:51 AM Andrew Yourtchenko 
wrote:

> Hi Elias,
>
> Yeah it all does point to something like uninitialized data - I ran
> yesterday the tests on two different machines for a while, apparently
> without the issues...
>
> The CI runtime environment is much more dynamic - it’s an ephemeral docker
> container that is orchestrated by the nomad and is destroyed after the job
> is run.
>
> Could you push as a separate change the code that reliably gives you the
> error in the LISP unit test in the CI, and let me know the change# ?
>
>
>  I will then test some tooling enhancement ideas I had for a while - to
> check within the job whether the core exists, and if it does, to load it
> into gdb and do some scripted processing of it and output the results...
> (Iterate over the call stack and issue stuff like ‘info locals’, ‘info
> regs’, etc).
>
> I did some experiments with that approach earlier and it seemed like a
> rather scalable technique for most of the issues, which should also save
> disk space and the developer time ...
>
> --a
>
> > On 28 May 2020, at 10:33, Elias Rudberg 
> wrote:
> >
> > Hi Andrew,
> >
> > In my case it failed several times and appeared to be triggered by
> > seemingly harmless code changes, but it seemed like the problem was
> > reproducible for a given version of the code. What seemed to matter was
> > when I changed things related to local variables inside the
> > set_ipfix_exporter_command_fn() function. The test logs said "Core-file
> > exists" which I suppose means that vpp crashed. The testing framework
> > repeats the test several times, saying "3 attempt(s) left", then "2
> > attempt(s) left" and so on, all those repeated attempts seemed to crash
> > in the same way.
> >
> > It could be something with uninitialized variables, e.g. something that
> > is assumed to be zero but is never explicitly initialized so it can
> > work when it happens to be zero but depending on platform and compiler
> > details there could be some garbage there causing a problem. Then
> > unrelated code changes like adding variables somewhere making things
> > end up at slightly different memory locations could make the error come
> > and go. This is just guessing of course.
> >
> > Is it possible to get login access to the machine where the
> > gerrit/jenkins tests are run, to debug it there where the issue can be
> > reproduced?
> >
> > / Elias
> >
> >
> >> On Wed, 2020-05-27 at 19:03 +0200, Andrew Yourtchenko wrote:
> >> Yep, so it looks like we have an issue...
> >>
> >> https://gerrit.fd.io/r/c/vpp/+/27305 has the same failures, I am
> >> rerunning it now to see how intermittent it is - as well as testing
> >> the latest master locally
> >>
> >> --a
> >>
> >>> On 27 May 2020, at 18:56, Elias Rudberg 
> >>> wrote:
> >>>
> >>> Hi Andrew,
> >>>
> >>> Yes, it was Basic LISP test. It looked like this in the
> >>> console.log.gz
> >>> for vpp-verify-master-ubuntu1804:
> >>>
> >>> ===
> >>> 
> >>> ===
> >>> TEST RESULTS:
> >>>Scheduled tests: 1177
> >>> Executed tests: 1176
> >>>   Passed tests: 1039
> >>>  Skipped tests: 137
> >>> Not Executed tests: 1
> >>> Errors: 1
> >>> FAILURES AND ERRORS IN TESTS:
> >>> Testcase name: Basic LISP test
> >>> ERROR: Test case for basic encapsulation
> >>> [test_lisp.TestLisp.test_lisp_basic_encap]
> >>> TESTCASES WHERE NO TESTS WERE SUCCESSFULLY EXECUTED:
> >>> Basic LISP test
> >>> ===
> >>> 
> >>> ===
> >>>
> >>> / Elias
> >>>
> >>>
> >>>
> >>> On Wed, 2020-05-27 at 18:42 +0200, Andrew Yourtchenko wrote:
>  Basic LISP test - was it the one that was failing for you ?
>  That particular test intermittently failed a couple of times for
>  me
>  as well, on a doc-only change, so we have an unrelated issue.
>  I am running it locally to see what is going on.
>  --a
> 
>

Re: [SUSPECTED SPAM] [vpp-dev] Troubleshooting IPsec peer behind NAT (AWS instance)

2020-05-28 Thread Muthu Raj
Hello,

I have just added a use case over at
https://wiki.fd.io/view/VPP/IPSec_and_IKEv2#IPSec_between_VPP_peers.2C_tunneling_IPv4_over_IPv6

It is pretty bare bones for now, but I hope to continue to improve it. Feel
free to point out mistakes if there are any.
I also will try to write a longer version explaining more things (more like
capturing what Neale explained to me) with traces in the VPP user docs.
Thanks Neale, Filip and everyone.

On Fri, May 15, 2020 at 3:11 PM Neale Ranns (nranns) 
wrote:

>
>
> Hi Muthu,
>
>
>
> *From: * on behalf of Muthu Raj <
> muthuraj.muth...@gmail.com>
> *Date: *Friday 15 May 2020 at 09:20
> *To: *"Neale Ranns (nranns)" 
> *Cc: *"Filip Tehlar -X (ftehlar - PANTHEON TECH SRO at Cisco)" <
> fteh...@cisco.com>, "vpp-dev@lists.fd.io" 
> *Subject: *Re: [SUSPECTED SPAM] [vpp-dev] Troubleshooting IPsec peer
> behind NAT (AWS instance)
>
>
>
> Hi Neale,
>
>
>
> Sorry about the trace.
>
>
>
> Not your fault at all. I was commenting that the trace VPP produced was
> not clear in indicating the miss.
>
>
>
> The match in the SPD is against SA 20’s tunnel addresses not the policy’s
> local/remote range.
>
>
>
> Thanks for clarifying this. I was confused here.
>
>
>
> I created the SA and policy with this in mind and got it to work
> successfully.
>
> Thanks a lot for your help.
>
>
>
> Glad to hear it.
>
>
>
> I will spend some time this coming week and try to get a small write up
> onto https://wiki.fd.io/
>
> It may be of help to someone.
>
>
>
> I’m sure it will be. Thank you!
>
>
>
> /neale
>
>
>
> Muthu
>
>
>
> On Thu, May 14, 2020 at 8:52 PM Neale Ranns (nranns) 
> wrote:
>
>
>
> Hi Muthu,
>
>
>
> The tracing is not great, but what you see indicates a miss in the SPD.
>
> The match in the SPD is against SA 20’s tunnel addresses not the policy’s
> local/remote range.
>
>
>
> /neale
>
>
>
>
>
> *From: *Muthu Raj 
> *Date: *Thursday 14 May 2020 at 15:51
> *To: *"muthuraj.muth...@gmail.com" 
> *Cc: *"Neale Ranns (nranns)" , "Filip Tehlar -X
> (ftehlar - PANTHEON TECH SRO at Cisco)" , "
> vpp-dev@lists.fd.io" 
> *Subject: *Re: [SUSPECTED SPAM] [vpp-dev] Troubleshooting IPsec peer
> behind NAT (AWS instance)
>
>
>
> Hi Neale,
>
>
>
> So I've since tried out setting SPD on the interface with the IPv6
> address, and even though I am not able to ping the interface, I see that it
> does receive and process packets (which I had erroneously assumed it did
> not when it became unpingable).
>
>
>
>
>
> I added a new SPD and added a policy like so
>
>  ipsec policy add spd 1 priority 10 inbound  action protect sa 20
> local-ip-range  -   remote-ip-range  -
> 
>
>
>
> This is what the trace looks like:
>
>
>
> Packet 10
>
> 01:02:05:902414: dpdk-input
>   lan0 rx queue 0
>   buffer 0x13daa8: current data 0, length 96, buffer-pool 1, ref-count 1,
> totlen-nifb 0, trace handle 0x9
>ext-hdr-valid
>l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 0, nb_segs 1, pkt_len 96
> buf_len 2176, data_len 96, ol_flags 0x181, data_off 128, phys_addr
> 0xe2f6aa80
> packet_type 0x211 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Offload Flags
> 01:02:05:868297: ethernet-input
>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>   IP6: 5c:5e:ab:d0:29:f0 -> b4:96:91:18:eb:be 802.1q vlan 67
> 01:02:05:868299: ip6-input
>   IPSEC_ESP: 2001::1 -> 2001::2
> tos 0x28, flow label 0x0, hop limit 238, payload length 132
> 01:02:05:868299: ipsec6-input-feature
>   IPSEC_ESP: sa_id 0 spd 2 policy 0 spi 1000 (0x03e8) seq 13693
> 01:02:05:868300: ip6-lookup
>   fib 0 dpo-idx 8 flow hash: 0x
>   IPSEC_ESP: 2001::1 -> 2001::2
> tos 0x28, flow label 0x0, hop limit 238, payload length 132
> 01:02:05:868301: ip6-local
> IPSEC_ESP: 2001::1 -> 2001::2
>   tos 0x28, flow label 0x0, hop limit 238, payload length 132
> 01:02:05:868301: ip6-punt
> IPSEC_ESP: 2001::1 -> 2001::2
>   tos 0x28, flow label 0x0, hop limit 238, payload length 132
> 01:02:05:868302: error-punt
>   rx:wan0.67
> 01:02:05:868302: punt
>   ip6-input: valid ip6 packets
>
>
>
>
>
>   IPSEC_ESP: sa_id 0 spd 2 policy 0 spi 1000 (0x03e8) seq 13693
>
>
>
> Here, the spd 2 actually does not have any policy in the 0 index.
>
> here is what show ipsec spd 2 looks like:
>
>
>
>  ip6-inbound-protect:
>[4] priority 100 action protect type ip6-inbound-protect protocol any
> sa 20
>  local addr range 2001::2 - 2001::2 port range 0 - 65535
>  remote addr range 2001::1 - 2001::1 port range 0 - 65535
>  packets 0 bytes 0
>
>
>
> Is there a mistake in the way SPD has been added?
>
> Or is something else the issue?
>
>
>
> Here is trace as seen by the sender:
>
>
>
> Packet 1
>
>
> 20:48:33:279228: dpdk-input
>   eth0 rx queue 0
>   buffer 0x8ee28: current data 0, length 98, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x0
>   ext-hdr-valid
>   l4-cksum-computed 

Re: [vpp-dev] generic TCP MSS clamping

2020-05-28 Thread Ole Troan
Good find Mohsin. So it's only missing clamping on RX. I'm sure Miklos can add 
that.

Cheers,
Ole

> On 28 May 2020, at 12:23, Mohsin Kazmi (sykazmi)  wrote:
> 
> Hi Miklos,
>
> Maybe it will help https://gerrit.fd.io/r/c/vpp/+/15144
>
> -br
> Mohsin
> From:  on behalf of Ole Troan 
> Date: Thursday, May 28, 2020 at 11:23 AM
> To: Miklos Tirpak 
> Cc: "vpp-dev@lists.fd.io" 
> Subject: Re: [vpp-dev] generic TCP MSS clamping
>
> Hi Miklos,
> 
> > I see the NAT plugin already supports TCP MSS clamping but it is 
> > implemented only in in2out direction.
> > 
> > We have endpoints with wrong MTUs behind tunnels and not all the traffic is 
> > NATed. Hence, it would be very nice to have generic support for MSS 
> > clamping that could be enabled on the tunnel interface.
> > 
> > Do you think implementing this as a feature arch would make sense? Then it 
> > would not be limited to NAT or to one kind of tunnel for example.
> > If so, what is the best place? A new plugin?
> 
> A bidirectional TCP MSS adjust would be fine.
> Putting it in a plugin is likely the simplest.
> 
> I'm unsure if it should be generic or not. E.g. the NAT also needs to adjust 
> the TCP checksum, and it's likely better to do it only once.
> 
> Best regards,
> Ole
> 



Re: [vpp-dev] generic TCP MSS clamping

2020-05-28 Thread Mohsin Kazmi via lists.fd.io
Hi Miklos,

Maybe it will help https://gerrit.fd.io/r/c/vpp/+/15144

-br
Mohsin
From:  on behalf of Ole Troan 
Date: Thursday, May 28, 2020 at 11:23 AM
To: Miklos Tirpak 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] generic TCP MSS clamping

Hi Miklos,

> I see the NAT plugin already supports TCP MSS clamping but it is implemented 
> only in in2out direction.
>
> We have endpoints with wrong MTUs behind tunnels and not all the traffic is 
> NATed. Hence, it would be very nice to have generic support for MSS clamping 
> that could be enabled on the tunnel interface.
>
> Do you think implementing this as a feature arch would make sense? Then it 
> would not be limited to NAT or to one kind of tunnel for example.
> If so, what is the best place? A new plugin?

A bidirectional TCP MSS adjust would be fine.
Putting it in a plugin is likely the simplest.

I'm unsure if it should be generic or not. E.g. the NAT also needs to adjust 
the TCP checksum, and it's likely better to do it only once.

Best regards,
Ole



Re: [vpp-dev] generic TCP MSS clamping

2020-05-28 Thread Ole Troan
Hi Miklos,

> I see the NAT plugin already supports TCP MSS clamping but it is implemented 
> only in in2out direction.
> 
> We have endpoints with wrong MTUs behind tunnels and not all the traffic is 
> NATed. Hence, it would be very nice to have generic support for MSS clamping 
> that could be enabled on the tunnel interface.
> 
> Do you think implementing this as a feature arch would make sense? Then it 
> would not be limited to NAT or to one kind of tunnel for example.
> If so, what is the best place? A new plugin?

A bidirectional TCP MSS adjust would be fine.
Putting it in a plugin is likely the simplest.

I'm unsure if it should be generic or not. E.g. the NAT also needs to adjust 
the TCP checksum, and it's likely better to do it only once.

Best regards,
Ole


Re: [vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-28 Thread Andrew Yourtchenko
Hi Elias,

Yeah it all does point to something like uninitialized data - I ran yesterday 
the tests on two different machines for a while, apparently without the 
issues...

The CI runtime environment is much more dynamic - it’s an ephemeral docker 
container that is orchestrated by the nomad and is destroyed after the job is 
run.

Could you push as a separate change the code that reliably gives you the error 
in the LISP unit test in the CI, and let me know the change# ?


 I will then test some tooling enhancement ideas I had for a while - to check 
within the job whether the core exists, and if it does, to load it into gdb and 
do some scripted processing of it and output the results... (Iterate over the 
call stack and issue stuff like ‘info locals’, ‘info regs’, etc).

I did some experiments with that approach earlier and it seemed like a rather 
scalable technique for most of the issues, which should also save disk space 
and the developer time ...

--a

> On 28 May 2020, at 10:33, Elias Rudberg  wrote:
> 
> Hi Andrew,
> 
> In my case it failed several times and appeared to be triggered by
> seemingly harmless code changes, but it seemed like the problem was
> reproducible for a given version of the code. What seemed to matter was
> when I changed things related to local variables inside the
> set_ipfix_exporter_command_fn() function. The test logs said "Core-file 
> exists" which I suppose means that vpp crashed. The testing framework
> repeats the test several times, saying "3 attempt(s) left", then "2
> attempt(s) left" and so on, all those repeated attempts seemed to crash
> in the same way.
> 
> It could be something with uninitialized variables, e.g. something that
> is assumed to be zero but is never explicitly initialized so it can
> work when it happens to be zero but depending on platform and compiler
> details there could be some garbage there causing a problem. Then
> unrelated code changes like adding variables somewhere making things
> end up at slightly different memory locations could make the error come
> and go. This is just guessing of course.
> 
> Is it possible to get login access to the machine where the
> gerrit/jenkins tests are run, to debug it there where the issue can be
> reproduced?
> 
> / Elias
> 
> 
>> On Wed, 2020-05-27 at 19:03 +0200, Andrew Yourtchenko wrote:
>> Yep, so it looks like we have an issue...
>> 
>> https://gerrit.fd.io/r/c/vpp/+/27305 has the same failures, I am
>> rerunning it now to see how intermittent it is - as well as testing
>> the latest master locally
>> 
>> --a
>> 
>>> On 27 May 2020, at 18:56, Elias Rudberg 
>>> wrote:
>>> 
>>> Hi Andrew,
>>> 
>>> Yes, it was Basic LISP test. It looked like this in the
>>> console.log.gz
>>> for vpp-verify-master-ubuntu1804:
>>> 
>>> ===
>>> 
>>> ===
>>> TEST RESULTS:
>>>Scheduled tests: 1177
>>> Executed tests: 1176
>>>   Passed tests: 1039
>>>  Skipped tests: 137
>>> Not Executed tests: 1
>>> Errors: 1
>>> FAILURES AND ERRORS IN TESTS:
>>> Testcase name: Basic LISP test 
>>> ERROR: Test case for basic encapsulation
>>> [test_lisp.TestLisp.test_lisp_basic_encap]
>>> TESTCASES WHERE NO TESTS WERE SUCCESSFULLY EXECUTED:
>>> Basic LISP test 
>>> ===
>>> 
>>> ===
>>> 
>>> / Elias
>>> 
>>> 
>>> 
>>> On Wed, 2020-05-27 at 18:42 +0200, Andrew Yourtchenko wrote:
 Basic LISP test - was it the one that was failing for you ?
 That particular test intermittently failed a couple of times for
 me
 as well, on a doc-only change, so we have an unrelated issue.
 I am running it locally to see what is going on.
 --a
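
(Elias's hypothesis quoted above is a classic failure mode. A tiny
standalone illustration of how an uninitialized local can "work" only while
the stack happens to hold zeros - names here are made up for the example:)

#include <stdio.h>

static int
parse_port (int have_port_arg)
{
  int port; /* BUG: never initialized */
  if (have_port_arg)
    port = 4739; /* the IPFIX port, for flavor */
  /* If have_port_arg is 0, 'port' is whatever was on the stack: often 0
   * in a fresh process, but adding or moving unrelated locals can change
   * which garbage it inherits - and the bug comes and goes. */
  return port;
}

int
main (void)
{
  printf ("port = %d\n", parse_port (0)); /* undefined behavior */
  return 0;
}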


Re: [vpp-dev] clang-9

2020-05-28 Thread Lijian Zhang
Hi Damjan,
I got some feedback regarding switching default compiler from gcc to clang in 
VPP, from compiler team.

" Neither LLVM nor GCC tune much for Neoverse N1, I think the difference is 
that GCC has better vectorization and a better optimized AArch64 backend. GCC10 
has just been released and will likely do even better (it is now ~20% ahead of 
LLVM on SPEC)."

" I was looking at the roadmap for LLVM (which is the backend for clang). I see 
plans for tuning for Zeus and Perseus, but just support for N1. I think they 
fixed some alignment issues and replaced some intrinsics, but no specific N1 
tuning. LLVM is behind gcc in performance, by a decent amount."

I did some benchmarking of L2/L3 single-flow throughput between gcc-9.2.0 
(-march=armv8.2-a+crc+crypto -mtune=neoverse-n1) and clang-10 
(-mcpu=neoverse-n1). From the results below, gcc-9.2.0 gives better throughput 
numbers (about 4%) than clang-10.

clang-10:
L3: 11.04Mpps/10.99Mpps/11.02Mpps
L2: 11.55Mpps/11.56Mpps

gcc-9.2.0
L3: 11.61Mpps/11.55Mpps/11.59Mpps
L2: 12.15Mpps/12.16Mpps

Is it possible to restore gcc as the default compiler for VPP, or at least for 
VPP builds on Arm CPUs?
Is it possible to remove the clang-9 dependency in the Makefile, or make it 
apply to x86 only?
This dependency forces VPP builds to always use clang-9.

diff --git a/Makefile b/Makefile
ifeq ($(OS_VERSION_ID),18.04)
DEB_DEPENDS += python-dev python-all python-pip python-virtualenv
DEB_DEPENDS += libssl-dev
-   DEB_DEPENDS += clang-9
diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
-set(CMAKE_C_COMPILER_NAMES clang-10 clang-9 gcc-9 cc)
+set(CMAKE_C_COMPILER_NAMES gcc-9 cc)

Thanks.


Re: [vpp-dev] RPM packaging on "stable/2005" branch maybe broken on suse leap 15.1 #vpp

2020-05-28 Thread Benoit Ganne (bganne) via lists.fd.io
I guess you are using extras/rpm/vpp-suse.spec (SUSE)? It looks like it is 
missing several changes compared to extras/rpm/vpp.spec (CentOS).
I am not familiar with SUSE, but you can try looking at the CentOS specfile to 
fix it.

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of
> pashinho1...@gmail.com
> Sent: mercredi 27 mai 2020 18:38
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] RPM packaging on "stable/2005" branch maybe broken on
> suse leap 15.1 #vpp
> 
> Hi all,
> 
> All worked just fine as I was on top of "stable/2001" branch, but today I
> rebased my work on top of "stable/2005" and did a "make pkg-rpm" which
> breaks right at the end at the "vpp-papi" build stage.
> Specifically:
> 
>   ...
>   running install_egg_info
>   running egg_info
>   creating vpp_papi.egg-info
>   writing requirements to vpp_papi.egg-info/requires.txt
>   writing vpp_papi.egg-info/PKG-INFO
>   writing top-level names to vpp_papi.egg-info/top_level.txt
>   writing dependency_links to vpp_papi.egg-info/dependency_links.txt
>   writing manifest file 'vpp_papi.egg-info/SOURCES.txt'
>   reading manifest file 'vpp_papi.egg-info/SOURCES.txt'
>   writing manifest file 'vpp_papi.egg-info/SOURCES.txt'
>   Copying vpp_papi.egg-info to /root/tmp/vpp/build-
> root/rpmbuild/BUILDROOT/vpp-20.05-
> rc2~10_g31ba8e8cb.x86_64/usr/lib/python2.7/site-packages/vpp_papi-1.6.2-
> py2.7.egg-info
>   running install_scripts
>   + mkdir -p -m755 /root/tmp/vpp/build-root/rpmbuild/BUILDROOT/vpp-
> 20.05-rc2~10_g31ba8e8cb.x86_64/usr/lib/python2.7/site-packages/vpp_papi
>   ++ find '/root/tmp/vpp/build-root/rpmbuild/BUILDROOT/vpp-20.05-
> rc2~10_g31ba8e8cb.x86_64/../../BUILD/vpp-20.05/build-root/install-vpp-
> native//*/lib/python2.7/site-packages/' -type f -print
>   ++ grep -v pyc
>   find: ‘/root/tmp/vpp/build-root/rpmbuild/BUILDROOT/vpp-20.05-
> rc2~10_g31ba8e8cb.x86_64/../../BUILD/vpp-20.05/build-root/install-vpp-
> native//*/lib/python2.7/site-packages/’: No such file or directory
> 
> Apparently, the "find" cmd is trying to look into "/build-
> root/rpmbuild/BUILD/vpp-20.05/build-root/install-vpp-
> native/[external|vom|vpp]/lib/" where, in fact, I don't see any
> "python2.7" folder. Is this an issue on my end, or how can I understand
> this? Please someone do let me know.
> 
> The system is an openSUSE Leap 15.1
> 
> Thank you


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-28 Thread Benoit Ganne (bganne) via lists.fd.io
Could you share the output of:
~# strace vpp -c /usr/share/vpp/vpp.conf
With DPDK enabled (i.e. with your targeted config which is failing)?

ben

> -Original Message-
> From: Manoj Iyer 
> Sent: jeudi 28 mai 2020 01:11
> To: Damjan Marion ; Manoj Iyer 
> Cc: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io; Rodney
> Schmidt ; Kshitij Sudan 
> Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL:
> Cannot get hugepage information.
> 
> Any more thoughts on this failure ?
> 
> Thanks
> Manoj Iyer
> 
> 
> From: vpp-dev@lists.fd.io  on behalf of Manoj Iyer
> via lists.fd.io 
> Sent: Tuesday, May 26, 2020 6:51 PM
> To: Damjan Marion 
> Cc: bga...@cisco.com ; vpp-dev@lists.fd.io  d...@lists.fd.io>; Rodney Schmidt ; Kshitij Sudan
> 
> Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL:
> Cannot get hugepage information.
> 
> $ lscpu
> 
> Architecture: aarch64
> 
> Byte Order:   Little Endian
> 
> CPU(s):   8
> 
> On-line CPU(s) list:  0
> 
> Off-line CPU(s) list: 1-7
> 
> Thread(s) per core:   1
> 
> Core(s) per socket:   1
> 
> Socket(s):1
> 
> NUMA node(s): 1
> 
> Vendor ID:ARM
> 
> Model:3
> 
> Model name:   Cortex-A72
> 
> Stepping: r0p3
> 
> BogoMIPS: 250.00
> 
> L1d cache:unknown size
> 
> L1i cache:unknown size
> 
> L2 cache: unknown size
> 
> NUMA node0 CPU(s):0
> 
> Flags:fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
> 
> 
> $ grep .  /sys/kernel/mm/hugepages/hugepages-*/*
> 
> /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages_mempolicy:0
> 
> /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-1048576kB/resv_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-1048576kB/surplus_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages:1024
> 
> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages:1024
> 
> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages_mempolicy:1024
> 
> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_overcommit_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-32768kB/free_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-32768kB/nr_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-32768kB/nr_hugepages_mempolicy:0
> 
> /sys/kernel/mm/hugepages/hugepages-32768kB/nr_overcommit_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-32768kB/resv_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-32768kB/surplus_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-64kB/free_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-64kB/nr_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-64kB/nr_hugepages_mempolicy:0
> 
> /sys/kernel/mm/hugepages/hugepages-64kB/nr_overcommit_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-64kB/resv_hugepages:0
> 
> /sys/kernel/mm/hugepages/hugepages-64kB/surplus_hugepages:0
> 
> 
> 
> $ grep . /sys/devices/system/node/node*/hugepages/hugepages-*/*
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 1048576kB/free_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 1048576kB/nr_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 1048576kB/surplus_hugepages:
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/free_hugepages:1024
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages:1024
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/surplus_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 32768kB/free_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-32768kB/nr_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 32768kB/surplus_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-64kB/free_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-64kB/nr_hugepages:0
> 
> /sys/devices/system/node/node0/hugepages/hugepages-
> 64kB/surplus_hugepages:0
> 
> ubuntu@sst100:~$
> 
> 
> 
> From: Damjan Marion 
> Sent: Tuesday, May 26, 2020 6:01 PM
> To: Manoj Iyer 
> Cc: bga...@cisco.com ; vpp-dev@lists.fd.io  d...@lists.fd.io>; Rodney Schmidt ; Kshitij Sudan
> 
> Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL:
> Cannot get hugepage information.
> 
> Can you capture:
> 
> lscpu
> 
> grep . /sys/kernel/mm/hugepages/hugepages-*/*
> 
> grep . /sys/devices/system/node/node*/hugepages/hugepages-*/*
> 
> -
> Damjan
> 
> > On 27 May 2020, at 00:50, Manoj Iyer  wrote:
> >
> > But the issue is VPP's dpdk plugin fails and the VPP service is not started
> as a 

Re: [vpp-dev] ACL plugin optimization

2020-05-28 Thread Neale Ranns via lists.fd.io

Hi Govind,

As well as removing the prefetches, you've also removed the per-packet call to 
acl_fa_find_session_with_hash(). So IIUC you've removed the per-packet session 
lookup and instead re-use the lookup of packet 0 each time. That'll make things 
quicker, but it's not functionally correct.

/neale

On 27/05/2020 23:51, "vpp-dev@lists.fd.io on behalf of Andrew Yourtchenko" 
 wrote:

Hi Govind,

1) According to Jenkins, this patch permits some of the packets that
should be denied, hence JJB voted "-1".

2) If you suspect merely the prefetches are the issue, just commenting
out the body of prefetch_session_entry() in the original code should
turn it into a no-op that doesn't break anything.

Hard to say anything else given the functionality is not correct.

In general - ensure you run "EXTENDED_TESTS=y TEST=acl* make test" as
a sanity check before extensive perf-tests. It's not a 100% guarantee
but it does catch a few naughty cases.

Also - take a look at f1cd92d8d9, which got about 30% improvement back
in the day, and is the source of much of the trickiness in that node.

--a
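
(Andrew's suggestion above - neutralizing only the prefetches while keeping
the per-packet session lookup that Neale points out is required - would look
roughly like this; type and helper names are illustrative, not the exact
plugin source:)

static inline void
prefetch_session_entry (acl_main_t * am, fa_full_session_id_t sess_id)
{
  /* Body intentionally left empty for the perf experiment; it previously
     prefetched the session entry's cache lines with CLIB_PREFETCH. The
     per-packet acl_fa_find_session_with_hash() call stays untouched. */
}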


On 5/27/20, Govindarajan Mohandoss  wrote:
> Hi Andrew,
>
>   While profiling the ACL plugin node with the perf tool on an ARM Neoverse
> platform, bihash-related prefetches showed up as a bottleneck.
>
> Performance improvement is seen in ARM N1, TX2 and Intel Skylake servers
> after removing those prefetches. Testing is done with Ingress ACL/IPv4
> forwarding in both SF and SL modes.
>
> As the code change is common for Ingress/Egress ACL for both IPv4 and 
IPv6,
> performance improvement is expected for those cases also.
>
> Following are the test results for Ingress ACL / IPv4 / 1 core / 64B @ MRR
> in ARM N1, TX2 and Intel Skylake servers:
>
>
>
> Legend:
>
> ===
>
> N1 - ARM Neoverse
>
> TX2 - ARM Thunder X2
>
> SKX - Intel Skylake
>
> SL: % imp - Performance improvement in stateless mode
>
> SF: % imp - Performance improvement in stateful mode
>
>
>
>
>
>
> Num Rules | Matching Rules | SKX SL: Avg % imp | SKX SF: Avg % imp | N1 SL: % imp | N1 SF: % imp | TX2 SL: % imp | TX2 SF: % imp
> 1    | 1          | 0.99 | 12.09 | 8.38 | 10.41 | 4.48 | 4.63
> 50   | 1 (50th)   | 0.79 |  9.63 | 8.76 | 10.06 | 5.32 | 4.63
> 100  | 1 (100th)  | 4.34 | 10.75 | 8.60 | 10.06 | 6.98 | 4.63
> 1000 | 1 (1000th) | 4.18 | 13.06 | 8.61 | 11.14 | 6.17 | 5.58
> 100  | 100        | 3.70 | 11.70 | 6.65 | 14    | 2.82 | 6.53
> 1000 | 1000       | 1.84 | 15.96 | 5.52 | 27.72 | 4.72 | 8.69
>
>
>
>
>
> Please find the patch here: https://gerrit.fd.io/r/c/vpp/+/27167
>
>
>
> I ran a per-patch regression on an ARM Taishan server in the CSIT lab. Following are
are
> the results for Stateless and Stateful modes:
>
> 1.  perftest-3n-tsh acl_statelessAND1cAND64b:
>
>
> https://jenkins.fd.io/job/vpp-csit-verify-perf-master-3n-tsh/23/consoleFull
>
>  In the log, I can see the comparative numbers between parent and
> current (my patch) for 45 test cases.
>
>  I searched for "Difference of averages relative to parent" in the log -
>  41/45 test cases showed around 4% improvement with the patch, and the
> remaining 4 test cases stayed neutral.
>
>
>
> 2. perftest-3n-tsh acl_statefulAND1cAND64b:
>
> https://jenkins.fd.io/job/vpp-csit-verify-perf-master-3n-tsh/25/
>
> Performance improvement is seen in all 36 test cases.
>
>
>
> Please provide your comments.
>
>
>
> Thanks
>
> Govind
>
>
>
