Re: [vpp-dev] Query about internal apps

2020-03-12 Thread Vivek Gupta
Hi Ole,

Please see inline.

Regards,
Vivek

-----Original Message-----
From: otr...@employees.org  
Sent: Thursday, March 12, 2020 1:39 PM
To: Vivek Gupta 
Cc: kusumanjal...@gmail.com; Florin Coras ; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Query about internal apps

Hi Vivek,

> We are trying to achieve a mechanism in VPP similar to the TAP interface.
>  
> So, the packets coming out of the TAP interface will be directed straight to 
> the application. The application will receive the packets coming via the TAP 
> interface, process them, and send them down via the host stack.
>  
> The possible options we could think of are:
> - Enhance the session layer to provide an L2 transport mechanism and add nodes 
> like tap-input and tap-out which would achieve the same.
> - Use the existing session layer by doing an IP/UDP encap, and send packets to 
> the app via the session layer using the existing mechanism.
>   This introduces the overhead of an additional encap/decap.
>  
> We wanted to check if there is any alternative option to directly transfer the 
> packets from the plugin to the VPP app, without even involving the session 
> layer and with no additional encap/decap overhead.

Is this similar to the idea of routing directly to the application?
I.e. give each application an IP address (easier with IPv6), and the 
application itself links in whatever transport layer it needs. In a VPP context 
the application could sit behind a memif interface. The application would need 
some support for IP address assignment, ARP/ND etc.
Userland networking taken to the extreme. ;-)
Vivek> It is similar to that. We need the application to define its own 
routing and packet processing. 
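
For illustration, the memif-backed variant described above could be wired up 
roughly like this on the VPP side (a hypothetical CLI sketch; interface names 
and the 192.0.2.x addresses are placeholders, and the application would still 
need the ARP/ND support mentioned above):

create interface memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 192.0.2.1/24
ip route add 192.0.2.2/32 via memif0/0

The application would attach as the memif slave (e.g. via libmemif), own 
192.0.2.2, and link in whatever transport layer it needs.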
Best regards,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15768): https://lists.fd.io/g/vpp-dev/message/15768
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Tap connect CLI

2020-03-12 Thread Gudimetla, Leela Sankar via Lists.Fd.Io
Hi,

I see that the tap connect CLI, which was used for creating a pair of 
interfaces between a container and the host, has been deprecated.
Is there any other CLI/mechanism available in VPP 19.08 to achieve the same?

Thanks,
Leela sankar Gudimetla
Embedded Software Engineer 3 |  Ciena
San Jose, CA, USA
M | +1.408.904.2160

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15767): https://lists.fd.io/g/vpp-dev/message/15767
Mute This Topic: https://lists.fd.io/mt/71914759/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Klement Sekera via Lists.Fd.Io
There is also test/scripts/test-loop.sh, which might suit some users better.

Regards,
Klement

> On 12 Mar 2020, at 19:08, Dave Wallace  wrote:
> 
> Hi Matt,
> 
> Your patch [0] verified, Ray +1'd it, and I merged it.
> 
> In my investigation on Naginator retries, I found an unrelated gerrit change 
> [1] with a VRRP test failure [2] which failed the 
> vpp-arm-verify-master-ubuntu1804 job but subsequently passed on both the 
> Naginator retry [3] and the verify of the next patch [4] to the gerrit 
> change.
> 
> This failure occurred on March 02, 2020, prior to the recent 
> timekeeping-related changes.
> 
> In case you are not aware, I wrote a bash function [5] which allows iterative 
> running of make test until it encounters a failure. This function has been 
> helpful in tracking down and fixing intermittent test failures in the quic 
> tests which were very hard to reproduce outside of 'make test'. Note that in 
> particular, I have seen many more intermittent failures with 'make test' 
> running tests in parallel (make test TEST_JOBS=auto) when running them 
> serially. Also, the grep (-g) option is most useful for detecting 
> clib_warning() instrumentation of suspected errant conditions in release 
> images.
> 
> Hope this helps,
> -daw-
> 
> [0] https://gerrit.fd.io/r/c/vpp/+/25834
> [1] https://gerrit.fd.io/r/c/vpp/+/25581
> [2] https://gerrit.fd.io/r/c/vpp/+/25581#message-cb3ca555_cb3c5e63
>   
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-arm-verify-master-ubuntu1804/8899/console-timestamp.log.gz
> [3] https://gerrit.fd.io/r/c/vpp/+/25581#message-b01de4c2_560ef9c6
> [4] https://gerrit.fd.io/r/c/vpp/+/25581#message-d2ecb27d_a9d52cc9
> [5] https://git.fd.io/vpp/tree/extras/bash/functions.bash
> - %< -
> Usage: vpp-make-test [-a][-d][-f][-g <text>][-r <retry count>] <testcase> 
> [<testcase>...]
>  -a                Run extended tests
>  -d                Run vpp debug image (i.e. with ASSERTS)
>  -f                Testcase is a feature set (e.g. tcp)
>  -g <text>         Text to grep for in log, FAIL on match.
>                    Enclose <text> in single quotes when it contains 
> any dashes:
>                    e.g.  vpp-make-test -g 'goof-bad-' test_xyz
>  -r <retry count>  Retry Count (default = 100 for individual | 1 for 
> feature)
> - %< -
> 
> 
> On 3/12/2020 12:41 PM, Matthew Smith wrote:
>> Hi Dave,
>> 
>> That sounds fine to me.
>> 
>> Thanks,
>> -Matt
>> 
>> 
>> On Thu, Mar 12, 2020 at 11:32 AM Dave Wallace  wrote:
>> Matt,
>> 
>> I will keep an eye on this gerrit and merge it once the verify jobs have 
>> completed.
>> If there are other tests which fail, are you ok if I add them to this patch 
>> and turn it into a generic 'disable failing tests' gerrit change?
>> 
>> The other possibility is that this is due to the recent disabling of the 
>> Naginator retry plugin.
>> 
>> I'm going to investigate if this issue may have been masked by Naginator...
>> 
>> Thanks for your help on keeping the CI operational!
>> -daw-
>> 
>> On 3/12/2020 12:09 PM, Matthew Smith via Lists.Fd.Io wrote:
>>> 
>>> Change submitted - https://gerrit.fd.io/r/c/vpp/+/25834. Verification jobs 
>>> are running. Hopefully they won't fail :)
>>> 
>>> -Matt
>>> 
>>> 
>>> On Thu, Mar 12, 2020 at 10:22 AM Matthew Smith via Lists.Fd.Io 
>>>  wrote:
>>> 
>>> I don't have a solution yet, but one observation has popped up quickly
>>> 
>>> Of the 2 failed jobs Ray sent links for, one had a test failure which 
>>> was not related to VRRP. There is a BFD6 test failure for the NAT change 
>>> https://gerrit.fd.io/r/c/vpp/+/25462:
>>> 
>>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/archives/
>>> 
>>> Looking back through a couple of recent failed runs of that job, there is 
>>> also a DHCP6 PD test failure for rdma change 
>>> https://gerrit.fd.io/r/c/vpp/+/25823:
>>> 
>>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2682/archives/
>>> 
>>> The most obvious common thread between BFD6, DHCP6 and VRRP to me seems to 
>>> be that they all maintain state which is dependent on timers. There could 
>>> be a more general issue with timing-sensitive tests. I am going to submit a 
>>> change which will prevent the VRRP tests from running temporarily while I 
>>> figure out a proper solution. Based on the above, other tests may need 
>>> the same treatment.
>>> 
>>> -Matt
>>>
>>> 
>>> 
>>> 
>>> On Thu, Mar 12, 2020 at 8:57 AM Matthew Smith  wrote:
>>> Hi Ray,
>>> 
>>> Thanks for bringing it to my attention. I'll look into it.
>>> 
>>> -Matt
>>> 
>>> 
>>> On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella  wrote:
>>> Anyone else noticing seemingly spurious failures related to the VRRP plugin's 
>>> unit tests?
>>> Some examples from unrelated commits.
>>> 
>>> Ray K
>>> 
>>> nat: timed out session scavenging upgrade 
>>> (https://gerrit.fd.io/r/c/vpp/+/25462)
>>> 

Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Matthew Smith via Lists.Fd.Io
Hi Dave,

That sounds fine to me.

Thanks,
-Matt


On Thu, Mar 12, 2020 at 11:32 AM Dave Wallace  wrote:

> Matt,
>
> I will keep an eye on this gerrit and merge it once the verify jobs have
> completed.
> If there are other tests which fail, are you ok if I add them to this
> patch and turn it into a generic 'disable failing tests' gerrit change?
>
> The other possibility is that this is due to the recent disabling of the
> Naginator retry plugin.
>
> I'm going to investigate if this issue may have been masked by Naginator...
>
> Thanks for your help on keeping the CI operational!
> -daw-
>
> On 3/12/2020 12:09 PM, Matthew Smith via Lists.Fd.Io wrote:
>
>
> Change submitted - https://gerrit.fd.io/r/c/vpp/+/25834. Verification
> jobs are running. Hopefully they won't fail :)
>
> -Matt
>
>
On Thu, Mar 12, 2020 at 10:22 AM Matthew Smith via Lists.Fd.Io wrote:
>
>>
>> I don't have a solution yet, but one observation has popped up quickly
>>
>> Of the 2 failed jobs Ray sent links for, one had a test failure
>> which was not related to VRRP. There is a BFD6 test failure for the NAT
>> change https://gerrit.fd.io/r/c/vpp/+/25462:
>>
>>
>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/archives/
>>
>> Looking back through a couple of recent failed runs of that job, there is
>> also a DHCP6 PD test failure for rdma change
>> https://gerrit.fd.io/r/c/vpp/+/25823:
>>
>>
>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2682/archives/
>>
>> The most obvious common thread between BFD6, DHCP6 and VRRP to me seems
>> to be that they all maintain state which is dependent on timers. There
>> could be a more general issue with timing-sensitive tests. I am going to
>> submit a change which will prevent the VRRP tests from running temporarily
>> while I figure out a proper solution. Based on the above, other tests
>> may need the same treatment.
>>
>> -Matt
>>
>>
>>
>>
>> On Thu, Mar 12, 2020 at 8:57 AM Matthew Smith 
>> wrote:
>>
>>> Hi Ray,
>>>
>>> Thanks for bringing it to my attention. I'll look into it.
>>>
>>> -Matt
>>>
>>>
>>> On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella  wrote:
>>>
 Anyone else noticing seemingly spurious failures related to the VRRP
 plugin's unit tests?
 Some examples from unrelated commits.

 Ray K

 nat: timed out session scavenging upgrade (
 https://gerrit.fd.io/r/c/vpp/+/25462)

 https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz


 ==
 TEST RESULTS:
  Scheduled tests: 1138
   Executed tests: 1138
 Passed tests: 1021
Skipped tests: 112
 Failures: 3
   Errors: 2
 FAILURES AND ERRORS IN TESTS:
   Testcase name: IPv4 VRRP Test Case
 FAILURE: IPv4 Master VR does not reply for VIP w/ accept mode off
 [test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
 FAILURE: IPv4 Master VR preempted by higher priority backup
 [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
   Testcase name: IPv6 VRRP Test Case
 FAILURE: IPv6 Master VR preempted by higher priority backup
 [test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
   ERROR: IPv6 Backup VR preempts lower priority master
 [test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
   Testcase name: Bidirectional Forwarding Detection (BFD) (IPv6)
   ERROR: echo function [test_bfd.BFD6TestCase.test_echo]

 ==

 vlib: startup multi-arch variant configuration (
 https://gerrit.fd.io/r/c/vpp/+/25798)

 https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz


 ==
 TEST RESULTS:
  Scheduled tests: 22
   Executed tests: 22
 Passed tests: 21
 Failures: 1
 FAILURES AND ERRORS IN TESTS:
   Testcase name: IPv4 VRRP Test Case
 FAILURE: IPv4 Master VR preempted by higher priority backup
 [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]

 ==




>>
> 
>
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15764): https://lists.fd.io/g/vpp-dev/message/15764
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Dave Wallace

Matt,

I will keep an eye on this gerrit and merge it once the verify jobs have 
completed.
If there are other tests which fail, are you ok if I add them to this 
patch and turn it into a generic 'disable failing tests' gerrit change?


The other possibility is that this is due to the recent disabling of the 
Naginator retry plugin.


I'm going to investigate if this issue may have been masked by Naginator...

Thanks for your help on keeping the CI operational!
-daw-

On 3/12/2020 12:09 PM, Matthew Smith via Lists.Fd.Io wrote:


Change submitted - https://gerrit.fd.io/r/c/vpp/+/25834. Verification 
jobs are running. Hopefully they won't fail :)


-Matt


On Thu, Mar 12, 2020 at 10:22 AM Matthew Smith via Lists.Fd.Io wrote:



I don't have a solution yet, but one observation has popped up
quickly

Of the 2 failed jobs Ray sent links for, one had a test
failure which was not related to VRRP. There is a BFD6 test failure
for the NAT change https://gerrit.fd.io/r/c/vpp/+/25462:


https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/archives/

Looking back through a couple of recent failed runs of that job,
there is also a DHCP6 PD test failure for rdma change
https://gerrit.fd.io/r/c/vpp/+/25823:


https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2682/archives/

The most obvious common thread between BFD6, DHCP6 and VRRP to me
seems to be that they all maintain state which is dependent on
timers. There could be a more general issue with timing-sensitive
tests. I am going to submit a change which will prevent the VRRP
tests from running temporarily while I can figure out a proper
solution. Based on the above, other tests may need the same treatment.

-Matt



On Thu, Mar 12, 2020 at 8:57 AM Matthew Smith <mgsm...@netgate.com> wrote:

Hi Ray,

Thanks for bringing it to my attention. I'll look into it.

-Matt


On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella <m...@ashroe.eu> wrote:

Anyone else noticing seemingly spurious failures related to
the VRRP plugin's unit tests?
Some examples from unrelated commits.

Ray K

nat: timed out session scavenging upgrade
(https://gerrit.fd.io/r/c/vpp/+/25462)

https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz


==
TEST RESULTS:
     Scheduled tests: 1138
      Executed tests: 1138
        Passed tests: 1021
       Skipped tests: 112
            Failures: 3
              Errors: 2
FAILURES AND ERRORS IN TESTS:
  Testcase name: IPv4 VRRP Test Case
    FAILURE: IPv4 Master VR does not reply for VIP w/
accept mode off
[test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
    FAILURE: IPv4 Master VR preempted by higher priority
backup [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
  Testcase name: IPv6 VRRP Test Case
    FAILURE: IPv6 Master VR preempted by higher priority
backup [test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
      ERROR: IPv6 Backup VR preempts lower priority master
[test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
  Testcase name: Bidirectional Forwarding Detection (BFD)
(IPv6)
      ERROR: echo function [test_bfd.BFD6TestCase.test_echo]

==

vlib: startup multi-arch variant configuration
(https://gerrit.fd.io/r/c/vpp/+/25798)

https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz


==
TEST RESULTS:
     Scheduled tests: 22
      Executed tests: 22
        Passed tests: 21
            Failures: 1
FAILURES AND ERRORS IN TESTS:
  Testcase name: IPv4 VRRP Test Case
    FAILURE: IPv4 Master VR preempted by higher priority
backup [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]

==








-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15763): https://lists.fd.io/g/vpp-dev/message/15763
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  

Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Matthew Smith via Lists.Fd.Io
Change submitted - https://gerrit.fd.io/r/c/vpp/+/25834. Verification jobs
are running. Hopefully they won't fail :)

-Matt


On Thu, Mar 12, 2020 at 10:22 AM Matthew Smith via Lists.Fd.Io  wrote:

>
> I don't have a solution yet, but one observation has popped up quickly
>
> Of the 2 failed jobs Ray sent links for, one had a test failure which
> was not related to VRRP. There is a BFD6 test failure for the NAT change
> https://gerrit.fd.io/r/c/vpp/+/25462:
>
>
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/archives/
>
> Looking back through a couple of recent failed runs of that job, there is
> also a DHCP6 PD test failure for rdma change
> https://gerrit.fd.io/r/c/vpp/+/25823:
>
>
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2682/archives/
>
> The most obvious common thread between BFD6, DHCP6 and VRRP to me seems to
> be that they all maintain state which is dependent on timers. There could
> be a more general issue with timing-sensitive tests. I am going to submit a
> change which will prevent the VRRP tests from running temporarily while I
> figure out a proper solution. Based on the above, other tests may need
> the same treatment.
>
> -Matt
>
>
>
>
> On Thu, Mar 12, 2020 at 8:57 AM Matthew Smith  wrote:
>
>> Hi Ray,
>>
>> Thanks for bringing it to my attention. I'll look into it.
>>
>> -Matt
>>
>>
>> On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella  wrote:
>>
>>> Anyone else noticing seemingly spurious failures related to the VRRP
>>> plugin's unit tests?
>>> Some examples from unrelated commits.
>>>
>>> Ray K
>>>
>>> nat: timed out session scavenging upgrade (
>>> https://gerrit.fd.io/r/c/vpp/+/25462)
>>>
>>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz
>>>
>>>
>>> ==
>>> TEST RESULTS:
>>>  Scheduled tests: 1138
>>>   Executed tests: 1138
>>> Passed tests: 1021
>>>Skipped tests: 112
>>> Failures: 3
>>>   Errors: 2
>>> FAILURES AND ERRORS IN TESTS:
>>>   Testcase name: IPv4 VRRP Test Case
>>> FAILURE: IPv4 Master VR does not reply for VIP w/ accept mode off
>>> [test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
>>> FAILURE: IPv4 Master VR preempted by higher priority backup
>>> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>>>   Testcase name: IPv6 VRRP Test Case
>>> FAILURE: IPv6 Master VR preempted by higher priority backup
>>> [test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
>>>   ERROR: IPv6 Backup VR preempts lower priority master
>>> [test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
>>>   Testcase name: Bidirectional Forwarding Detection (BFD) (IPv6)
>>>   ERROR: echo function [test_bfd.BFD6TestCase.test_echo]
>>>
>>> ==
>>>
>>> vlib: startup multi-arch variant configuration (
>>> https://gerrit.fd.io/r/c/vpp/+/25798)
>>>
>>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz
>>>
>>>
>>> ==
>>> TEST RESULTS:
>>>  Scheduled tests: 22
>>>   Executed tests: 22
>>> Passed tests: 21
>>> Failures: 1
>>> FAILURES AND ERRORS IN TESTS:
>>>   Testcase name: IPv4 VRRP Test Case
>>> FAILURE: IPv4 Master VR preempted by higher priority backup
>>> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>>>
>>> ==
>>>
>>>
>>>
>>> 
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15762): https://lists.fd.io/g/vpp-dev/message/15762
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Matthew Smith via Lists.Fd.Io
I don't have a solution yet, but one observation has popped up quickly

Of the 2 failed jobs Ray sent links for, one had a test failure which
was not related to VRRP. There is a BFD6 test failure for the NAT change
https://gerrit.fd.io/r/c/vpp/+/25462:

https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/archives/

Looking back through a couple of recent failed runs of that job, there is
also a DHCP6 PD test failure for rdma change
https://gerrit.fd.io/r/c/vpp/+/25823:

https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2682/archives/

The most obvious common thread between BFD6, DHCP6 and VRRP to me seems to
be that they all maintain state which is dependent on timers. There could
be a more general issue with timing-sensitive tests. I am going to submit a
change which will prevent the VRRP tests from running temporarily while I
figure out a proper solution. Based on the above, other tests may need
the same treatment.

-Matt




On Thu, Mar 12, 2020 at 8:57 AM Matthew Smith  wrote:

> Hi Ray,
>
> Thanks for bringing it to my attention. I'll look into it.
>
> -Matt
>
>
> On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella  wrote:
>
>> Anyone else noticing seemingly spurious failures related to the VRRP
>> plugin's unit tests?
>> Some examples from unrelated commits.
>>
>> Ray K
>>
>> nat: timed out session scavenging upgrade (
>> https://gerrit.fd.io/r/c/vpp/+/25462)
>>
>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz
>>
>>
>> ==
>> TEST RESULTS:
>>  Scheduled tests: 1138
>>   Executed tests: 1138
>> Passed tests: 1021
>>Skipped tests: 112
>> Failures: 3
>>   Errors: 2
>> FAILURES AND ERRORS IN TESTS:
>>   Testcase name: IPv4 VRRP Test Case
>> FAILURE: IPv4 Master VR does not reply for VIP w/ accept mode off
>> [test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
>> FAILURE: IPv4 Master VR preempted by higher priority backup
>> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>>   Testcase name: IPv6 VRRP Test Case
>> FAILURE: IPv6 Master VR preempted by higher priority backup
>> [test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
>>   ERROR: IPv6 Backup VR preempts lower priority master
>> [test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
>>   Testcase name: Bidirectional Forwarding Detection (BFD) (IPv6)
>>   ERROR: echo function [test_bfd.BFD6TestCase.test_echo]
>>
>> ==
>>
>> vlib: startup multi-arch variant configuration (
>> https://gerrit.fd.io/r/c/vpp/+/25798)
>>
>> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz
>>
>>
>> ==
>> TEST RESULTS:
>>  Scheduled tests: 22
>>   Executed tests: 22
>> Passed tests: 21
>> Failures: 1
>> FAILURES AND ERRORS IN TESTS:
>>   Testcase name: IPv4 VRRP Test Case
>> FAILURE: IPv4 Master VR preempted by higher priority backup
>> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>>
>> ==
>>
>>
>>
>>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15761): https://lists.fd.io/g/vpp-dev/message/15761
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Is there any Linux FD to poll for VCL message

2020-03-12 Thread Florin Coras
Hi Murthy, 

Yes it does, although we’re guilty of not having it properly documented. 

The message queue used between vpp and a vcl worker can do both mutex/condvar 
and eventfd notifications. The former is the default but you can switch to 
eventfds by adding "use-mq-eventfd" to vcl.conf. You can then use 
vppcom_worker_mqs_epfd to retrieve a vcl worker's epoll fd (it’s an epoll fd 
for historic reasons) which you should be able to nest into your own linux 
epoll fd. 

Note that you’ll also need to force memfd segments for vpp’s message queues, 
i.e., session { evt_qs_memfd_seg }, and use the socket transport for binary 
api, i.e., in vpp’s startup.conf add "socksvr { /path/to/api.sock }" and in 
vcl.conf "api-socket-name /path/to/api.sock”. 

Regards,
Florin

> On Mar 12, 2020, at 4:40 AM, Satya Murthy  wrote:
> 
> Hi ,
> 
> We have a TCP application trying to integrate with the VPP VCL framework.
> 
> Our application has its own dispatch loop with epoll and we would like to 
> know if the VCL framework has any linux fd (like an eventfd for the entire svm 
> message queue) that we can add into our epoll to poll for VCL session 
> messages.
> 
> Once we get an asynchronous indication that a message has arrived in the VCL 
> svm message queue, we can call vppcom_epoll_wait() function to read the 
> messages for sessions and handle them accordingly. 
> 
> Any inputs on how we can achieve this?
> 
> -- 
> Thanks & Regards,
> Murthy 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15760): https://lists.fd.io/g/vpp-dev/message/15760
Mute This Topic: https://lists.fd.io/mt/71899986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] A question about using packetdrill test vpp hoststack

2020-03-12 Thread Florin Coras
Hi Longfei, 

For protocol correctness, the only tool I’ve used is Defensics Codenomicon, 
which has about 1.2M tests and only needs an http terminator. Therefore, nginx 
+ ldp is enough. Having said that, it would be great to also support 
packetdrill. 

As for your issue, I’m not entirely sure why you’re hitting it. Could you try 
replacing the veth pair with a tap interface?
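
In case it is useful, creating such a tap could look roughly like this (a 
hypothetical sketch; the id, names and addresses are placeholders):

create tap id 0 host-if-name tap-host
set interface state tap0 up
set interface ip address tap0 192.0.2.1/24

The raw socket (or packetdrill) would then sit on the kernel-side tap-host 
interface instead of veth0.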

Regards,
Florin

> On Mar 12, 2020, at 2:53 AM, dailongfei  wrote:
> 
> Hi,
> 
> Recently, I have been trying to use packetdrill to test the vpp hoststack. I 
> connect vpp and the kernel protocol stack with a veth pair. The local client 
> runs on the vpp hoststack, and the remote end is on the kernel.
>  local <-> vcl <-> vpp <-> veth1 <-> veth0 <-> remote
> 
> At the remote end, I just want to receive the layer-2 or layer-3 packets, so 
> I use a raw socket. However, I hit a problem: the raw socket only gets a copy 
> of the packet sent by the local client, while the packet is still delivered 
> to the upper layer (layer 4). The upper layer then answers the packet, which 
> interferes with my test, since the local client only wants to receive the 
> packets sent by the raw socket.
>  local <-> vcl <-> vpp <-> veth1 <-> veth0 <-> raw sock
>                                        |
>                                        ---X--> upper layer
> 
> Did you run into the same problem when testing the vpp hoststack? And do you 
> have any good ideas about testing the vpp hoststack with packetdrill?
> 
> Regards,
> Longfei
> 
> dailongfei
> dailong...@corp.netease.com

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15759): https://lists.fd.io/g/vpp-dev/message/15759
Mute This Topic: https://lists.fd.io/mt/71898758/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query about internal apps

2020-03-12 Thread Florin Coras
Hi Ole, Vivek,

If I understand this right, you’re looking to intercept l2 packets in vpp 
(supposedly only from certain hosts), process them and maybe generate some 
return traffic. What is the payload of those l2 packets? 

You could write a feature that inspects all traffic on a certain interface and 
intercepts the packets that you’re interested in. 

Alternatively, session layer supports pure shared memory transports, i.e., 
cut-through connections (see slide 16-18 here [1]). For instance, a vpp builtin 
application could receive data directly over shared memory from an external 
application. However, currently session layer only knows how to lookup 
5-tuples, so the two peers (external app and vpp builtin app) need to agree on 
a shared “fake” 5-tuple.

Regards,
Florin

[1] https://wiki.fd.io/images/9/9c/Vpp-hoststack-kc-eu19.pdf

> On Mar 12, 2020, at 1:09 AM, Ole Troan  wrote:
> 
> Hi Vivek,
> 
>> We are trying to achieve a mechanism in VPP similar to the TAP interface.
>> 
>> So, the packets coming out of the TAP interface will be directed straight 
>> to the application. The application will receive the packets coming via the 
>> TAP interface, process them, and send them down via the host stack.
>> 
>> The possible options we could think of are:
>> - Enhance the session layer to provide an L2 transport mechanism and add 
>> nodes like tap-input and tap-out which would achieve the same.
>> - Use the existing session layer by doing an IP/UDP encap, and send packets 
>> to the app via the session layer using the existing mechanism.
>>  This introduces the overhead of an additional encap/decap.
>> 
>> We wanted to check if there is any alternative option to directly transfer 
>> the packets from the plugin to the VPP app, without even involving the 
>> session layer and with no additional encap/decap overhead.
> 
> Is this similar to the idea of routing directly to the application?
> I.e. give each application an IP address (easier with IPv6), and the 
> application itself links in whatever transport layer it needs. In a VPP 
> context the application could sit behind a memif interface. The application 
> would need some support for IP address assignment, ARP/ND etc.
> Userland networking taken to the extreme. ;-)
> 
> Best regards,
> Ole

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15758): https://lists.fd.io/g/vpp-dev/message/15758
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Paul Vinciguerra
Yes.
Has been for a few days.

On Thu, Mar 12, 2020 at 9:25 AM Ray Kinsella  wrote:

> Anyone else noticing seemingly spurious failures related to the VRRP
> plugin's unit tests?
> Some examples from unrelated commits.
>
> Ray K
>
> nat: timed out session scavenging upgrade (
> https://gerrit.fd.io/r/c/vpp/+/25462)
>
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz
>
>
> ==
> TEST RESULTS:
>  Scheduled tests: 1138
>   Executed tests: 1138
> Passed tests: 1021
>Skipped tests: 112
> Failures: 3
>   Errors: 2
> FAILURES AND ERRORS IN TESTS:
>   Testcase name: IPv4 VRRP Test Case
> FAILURE: IPv4 Master VR does not reply for VIP w/ accept mode off
> [test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
> FAILURE: IPv4 Master VR preempted by higher priority backup
> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>   Testcase name: IPv6 VRRP Test Case
> FAILURE: IPv6 Master VR preempted by higher priority backup
> [test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
>   ERROR: IPv6 Backup VR preempts lower priority master
> [test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
>   Testcase name: Bidirectional Forwarding Detection (BFD) (IPv6)
>   ERROR: echo function [test_bfd.BFD6TestCase.test_echo]
>
> ==
>
> vlib: startup multi-arch variant configuration (
> https://gerrit.fd.io/r/c/vpp/+/25798)
>
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz
>
>
> ==
> TEST RESULTS:
>  Scheduled tests: 22
>   Executed tests: 22
> Passed tests: 21
> Failures: 1
> FAILURES AND ERRORS IN TESTS:
>   Testcase name: IPv4 VRRP Test Case
> FAILURE: IPv4 Master VR preempted by higher priority backup
> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>
> ==
>
>
>
> 
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15757): https://lists.fd.io/g/vpp-dev/message/15757
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2020-03-12 14:00:24 UTC

2020-03-12 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 6
Newly detected: 4
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15756): https://lists.fd.io/g/vpp-dev/message/15756
Mute This Topic: https://lists.fd.io/mt/71902490/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] howto insert DPO after NAT?

2020-03-12 Thread Andreas Schultz
Hi,

My 3GPP UPF implementation uses a DPO to direct all UE IP traffic to my session
logic. This works well.

I now want to use deterministic NAT on the UE IPs. The NAT config inserts a
DPO in the FIB:

nat44 deterministic add in 10.106.0.0/16 out 10.116.0.0/24

show ip fib:

ipv4-VRF:2, fib_index:2, flow hash:[src dst sport dport proto ] epoch:0
flags:none locks:[CLI:3, adjacency:1, ]
[...]
10.116.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:35 buckets:1 uRPF:41 to:[0:0]]
[0] [@2]: dpo-receive: 0.0.0.0 on host-ens161

When my session now tries to insert a DPO with /32 route to the UE IP, VPP
crashes with:

(gdb) bt
#0  0x75ec29c4 in hash_memory64 (p=0x7fffad1302ef,
n_bytes=1881489448, state=0) at /usr/src/vpp/src/vppinfra/hash.c:141
#1  0x75ec2ea3 in hash_memory (p=0x7fffad1302ef,
n_bytes=1881489448, state=0) at /usr/src/vpp/src/vppinfra/hash.c:280
#2  0x75ec46ad in vec_key_sum (h=0x7fffb633cba0,
key=140736097092335) at /usr/src/vpp/src/vppinfra/hash.c:864
#3  0x75ec30ef in key_sum (h=0x7fffb633cba0, key=140736097092335)
at /usr/src/vpp/src/vppinfra/hash.c:341
#4  0x75ec3a2c in lookup (v=0x7fffb633cc60, key=140736097092335,
op=GET, new_value=0x0, old_value=0x0) at
/usr/src/vpp/src/vppinfra/hash.c:556
#5  0x75ec3d2a in _hash_get (v=0x7fffb633cc60, key=140736097092335)
at /usr/src/vpp/src/vppinfra/hash.c:641
#6  0x7641faf3 in vlib_get_node_by_name (vm=0x766b4680
, name=0x7fffad1302ef "upf-ip4-session-dpo") at
/usr/src/vpp/src/vlib/node.c:52
#7  0x775032e0 in dpo_default_get_next_node (dpo=0x7fffb7b1f650) at
/usr/src/vpp/src/vnet/dpo/dpo.c:298
#8  0x77504302 in dpo_get_next_node (child_type=DPO_LOAD_BALANCE,
child_proto=DPO_PROTO_IP4, parent_dpo=0x7fffb7b1f650) at
/usr/src/vpp/src/vnet/dpo/dpo.c:428
#9  0x775046d1 in dpo_stack (child_type=DPO_LOAD_BALANCE,
child_proto=DPO_PROTO_IP4, dpo=0x7fffb7a7dca0, parent=0x7fffb7b1f650) at
/usr/src/vpp/src/vnet/dpo/dpo.c:521
#10 0x77510a94 in load_balance_set_bucket_i (lb=0x7fffb7a7dc80,
bucket=0, buckets=0x7fffb7a7dca0, next=0x7fffb7b1f650) at
/usr/src/vpp/src/vnet/dpo/load_balance.c:252
#11 0x77511423 in load_balance_fill_buckets_norm
(lb=0x7fffb7a7dc80, nhs=0x7fffb7b1f650, buckets=0x7fffb7a7dca0,
n_buckets=1) at /usr/src/vpp/src/vnet/dpo/load_balance.c:525
#12 0x77511846 in load_balance_fill_buckets (lb=0x7fffb7a7dc80,
nhs=0x7fffb7b1f650, buckets=0x7fffb7a7dca0, n_buckets=1,
flags=LOAD_BALANCE_FLAG_NONE) at
/usr/src/vpp/src/vnet/dpo/load_balance.c:589
#13 0x77511bf3 in load_balance_multipath_update
(dpo=0x7fffb7a7d1d8, raw_nhs=0x7fffb7b1f600, flags=LOAD_BALANCE_FLAG_NONE)
at /usr/src/vpp/src/vnet/dpo/load_balance.c:654
#14 0x7749f1d4 in fib_entry_src_mk_lb (fib_entry=0x7fffb7a7d1b0,
esrc=0x7fffb7b1f570, fct=FIB_FORW_CHAIN_TYPE_UNICAST_IP4,
dpo_lb=0x7fffb7a7d1d8) at /usr/src/vpp/src/vnet/fib/fib_entry_src.c:645
#15 0x7749f348 in fib_entry_src_action_install
(fib_entry=0x7fffb7a7d1b0, source=FIB_SOURCE_FIRST) at
/usr/src/vpp/src/vnet/fib/fib_entry_src.c:705
#16 0x7749fe14 in fib_entry_src_action_activate
(fib_entry=0x7fffb7a7d1b0, source=FIB_SOURCE_FIRST) at
/usr/src/vpp/src/vnet/fib/fib_entry_src.c:1078
#17 0x77496170 in fib_entry_create_special (fib_index=2,
prefix=0x7fffb507c7f0, source=FIB_SOURCE_FIRST,
flags=(FIB_ENTRY_FLAG_EXCLUSIVE | FIB_ENTRY_FLAG_LOOSE_URPF_EXEMPT),
dpo=0x7fffb507c7d8) at /usr/src/vpp/src/vnet/fib/fib_entry.c:775
#18 0x7747d3be in fib_table_entry_special_dpo_add (fib_index=2,
prefix=0x7fffb507c7f0, source=FIB_SOURCE_FIRST,
flags=(FIB_ENTRY_FLAG_EXCLUSIVE | FIB_ENTRY_FLAG_LOOSE_URPF_EXEMPT),
dpo=0x7fffb507c7d8) at /usr/src/vpp/src/vnet/fib/fib_table.c:338
#19 0x7fffad0cdaac in pfcp_add_del_ue_ip (ip=0x7fffb7b1e730,
si=0x7fffb6827040, is_add=1) at /usr/src/vpp/src/plugins/upf/upf_pfcp.c:1192

The invocation is here:
https://github.com/travelping/vpp/blob/feature/2001/upf-liusa-pfcp-socket/src/plugins/upf/upf_pfcp.c#L1191
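
For reference, the node-name registration that feeds the failing 
vlib_get_node_by_name() lookup (frames #6-#8) normally follows this pattern 
(a hedged sketch; only the "upf-ip4-session-dpo" name is taken from the 
backtrace, everything else here is illustrative):

#include <vnet/dpo/dpo.h>

/* Per-protocol, NULL-terminated lists of node names. dpo_stack() /
 * dpo_default_get_next_node() walk these when stacking a child DPO. */
const static char *const upf_ip4_session_nodes[] = {
  "upf-ip4-session-dpo",
  NULL,
};
const static char *const *const upf_session_nodes[DPO_PROTO_NUM] = {
  [DPO_PROTO_IP4] = upf_ip4_session_nodes,
};

/* vft with the lock/unlock/format callbacks, defined elsewhere. */
extern const dpo_vft_t upf_session_dpo_vft;

static dpo_type_t upf_session_dpo_type;

static void
upf_session_dpo_module_init (void)
{
  upf_session_dpo_type =
    dpo_register_new_type (&upf_session_dpo_vft, upf_session_nodes);
}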

Any hints on what I might be doing wrong?

Many thanks,
Andreas

-- 

Andreas Schultz

-- 

Principal Engineer

t: +49 391 819099-224

--- enabling your networks
-

Travelping GmbH
Roentgenstraße 13
39108 Magdeburg
Germany

t: +49 391 819099-0
f: +49 391 819099-299

e: i...@travelping.com
w: https://www.travelping.com/
Company registration: Amtsgericht Stendal
Geschaeftsfuehrer: Holger Winkelmann
Reg. No.: HRB 10578
VAT ID: DE236673780
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15755): https://lists.fd.io/g/vpp-dev/message/15755
Mute This Topic: https://lists.fd.io/mt/71902419/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Matthew Smith via Lists.Fd.Io
Hi Ray,

Thanks for bringing it to my attention. I'll look into it.

-Matt


On Thu, Mar 12, 2020 at 8:24 AM Ray Kinsella  wrote:

> Anyone else noticing seemingly spurious failures related to the VRRP
> plugin's unit tests?
> Some examples from unrelated commits.
>
> Ray K
>
> nat: timed out session scavenging upgrade (
> https://gerrit.fd.io/r/c/vpp/+/25462)
>
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz
>
>
> ==
> TEST RESULTS:
>  Scheduled tests: 1138
>   Executed tests: 1138
> Passed tests: 1021
>Skipped tests: 112
> Failures: 3
>   Errors: 2
> FAILURES AND ERRORS IN TESTS:
>   Testcase name: IPv4 VRRP Test Case
> FAILURE: IPv4 Master VR does not reply for VIP w/ accept mode off
> [test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
> FAILURE: IPv4 Master VR preempted by higher priority backup
> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>   Testcase name: IPv6 VRRP Test Case
> FAILURE: IPv6 Master VR preempted by higher priority backup
> [test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
>   ERROR: IPv6 Backup VR preempts lower priority master
> [test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
>   Testcase name: Bidirectional Forwarding Detection (BFD) (IPv6)
>   ERROR: echo function [test_bfd.BFD6TestCase.test_echo]
>
> ==
>
> vlib: startup multi-arch variant configuration (
> https://gerrit.fd.io/r/c/vpp/+/25798)
>
> https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz
>
>
> ==
> TEST RESULTS:
>  Scheduled tests: 22
>   Executed tests: 22
> Passed tests: 21
> Failures: 1
> FAILURES AND ERRORS IN TESTS:
>   Testcase name: IPv4 VRRP Test Case
> FAILURE: IPv4 Master VR preempted by higher priority backup
> [test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
>
> ==
>
>
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15754): https://lists.fd.io/g/vpp-dev/message/15754
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP memif ping taking ~20ms

2020-03-12 Thread Aloys Augustin (aloaugus) via Lists.Fd.Io
Hello,

For what it’s worth, I observed ~10ms memif pings when both VPPs were scheduled 
on the same CPU, which happens with the default configuration (VPP takes CPU 1 
by default). You can try changing the configuration of one of your VPPs by 
setting the main-core in the cpu section: 
https://fd.io/docs/vpp/master/gettingstarted/users/configuring/startup.html 
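
For example, pinning the second VPP's main thread to a different core could 
look like this in its startup.conf (the core number is only illustrative):

cpu {
  main-core 2
}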
 

Cheers,
Aloÿs

> On 12 Mar 2020, at 14:30, vyshakh krishnan  wrote:
> 
> Hi Damjan,
> 
> Please find the trace on both side
> 
> vpp1 (10.1.1.1)  (10.1.1.2) vpp2
> 
> DBGvpp# ping 10.1.1.2
> 116 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=31.8174 ms
> 116 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=47.9716 ms
> 116 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=40.0259 ms
> 116 bytes from 10.1.1.2: icmp_seq=4 ttl=64 time=19.9210 ms
> 116 bytes from 10.1.1.2: icmp_seq=5 ttl=64 time=39.9568 ms
> 
> Statistics: 5 sent, 5 received, 0% packet loss
> 
> 
> VPP1:
> **
> 
> Packet 3
> 
> 01:35:02:872114: memif-input
>   memif: hw_if_index 1 next-index 4
> slot: ring 0
> 01:35:02:872132: ethernet-input
>   frame: flags 0x1, hw-if-index 1, sw-if-index 1
>   IP4: 02:dc:5c:30:00:00 -> 02:dc:5c:30:00:00
> 01:35:02:872142: ip4-input
>   ICMP: 10.1.1.2 -> 10.1.1.1
> tos 0x00, ttl 64, length 96, checksum 0x36e4
> fragment id 0x2db5
>   ICMP echo_reply checksum 0xf86f
> 01:35:02:872153: ip4-lookup
>   fib 0 dpo-idx 8 flow hash: 0x
>   ICMP: 10.1.1.2 -> 10.1.1.1
> tos 0x00, ttl 64, length 96, checksum 0x36e4
> fragment id 0x2db5
>   ICMP echo_reply checksum 0xf86f
> 01:35:02:872206: ip4-local
> ICMP: 10.1.1.2 -> 10.1.1.1
>   tos 0x00, ttl 64, length 96, checksum 0x36e4
>   fragment id 0x2db5
> ICMP echo_reply checksum 0xf86f
> 01:35:02:872215: ip4-icmp-input
>   ICMP: 10.1.1.2 -> 10.1.1.1
> tos 0x00, ttl 64, length 96, checksum 0x36e4
> fragment id 0x2db5
>   ICMP echo_reply checksum 0xf86f
> 01:35:02:872224: ip4-icmp-echo-reply
>   ICMP4 echo id 50293 seq 1 send to cli node 620
> 
> Packet 4
> 
> 01:35:02:872114: memif-input
>   memif: hw_if_index 1 next-index 4
> slot: ring 0
> 01:35:02:872132: ethernet-input
>   frame: flags 0x1, hw-if-index 1, sw-if-index 1
>   ARP: 02:dc:5c:30:00:00 -> ff:ff:ff:ff:ff:ff
> 01:35:02:872150: arp-input
>   request, type ethernet/IP4, address size 6/4
>   02:dc:5c:30:00:00/10.1.1.2  -> 00:00:00:00:00:00/10.1.1.1 
> 
> 01:35:02:872163: arp-reply
>   request, type ethernet/IP4, address size 6/4
>   02:dc:5c:30:00:00/10.1.1.2  -> 00:00:00:00:00:00/10.1.1.1 
> 
> 01:35:02:872219: memif11/11-output
>   memif11/11 l2_hdr_offset_valid l3_hdr_offset_valid 
>   ARP: 02:dc:5c:30:00:00 -> 02:dc:5c:30:00:00
>   reply, type ethernet/IP4, address size 6/4
>   02:dc:5c:30:00:00/10.1.1.1  -> 02:dc:5c:30:00:00/10.1.1.2 
> 
> 
> Packet 5
> 
> 01:35:03:136170: memif-input
>   memif: hw_if_index 1 next-index 4
> slot: ring 0
> 01:35:03:136186: ethernet-input
>   frame: flags 0x1, hw-if-index 1, sw-if-index 1
>   IP6: 02:dc:5c:30:00:00 -> 33:33:00:00:00:01
> 01:35:03:136195: ip6-input
>   ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
> tos 0x00, flow label 0x0, hop limit 255, payload length 32
>   ICMP router_advertisement checksum 0x57bf
> 01:35:03:136201: ip6-mfib-forward-lookup
>   fib 0 entry 4
> 01:35:03:136209: ip6-mfib-forward-rpf
>   entry 4 itf 1 flags Accept,
> 01:35:03:136212: ip6-replicate
>   replicate: 2 via [@1]: dpo-receive
> 01:35:03:136217: ip6-local
> ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
>   tos 0x00, flow label 0x0, hop limit 255, payload length 32
> ICMP router_advertisement checksum 0x57bf
> 01:35:03:136225: ip6-icmp-input
>   ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
> tos 0x00, flow label 0x0, hop limit 255, payload length 32
>   ICMP router_advertisement checksum 0x57bf
> 01:35:03:136228: icmp6-router-advertisement
>   ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
> tos 0x00, flow label 0x0, hop limit 255, payload length 32
>   ICMP router_advertisement checksum 0x57bf
> 01:35:03:136237: ip6-drop
> ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
>   tos 0x00, flow label 0x0, hop limit 255, payload length 32
> ICMP router_advertisement checksum 0x57bf
> 01:35:03:136242: error-drop
>   rx:memif11/11
> 01:35:03:136246: drop
>   ip6-icmp-input: valid packets
> 
> Packet 6
> 
> 01:35:03:860055: memif-input
>   memif: hw_if_index 1 next-index 4
> slot: ring 0
> 01:35:03:880103: ethernet-input
>   frame: flags 0x1, hw-if-index 1, sw-if-index 1
>   IP4: 02:dc:5c:30:00:00 -> 02:dc:5c:30:00:00
> 01:35:03:880114: ip4-input
>   ICMP: 10.1.1.2 -> 10.1.1.1
> tos 0x00, ttl 64, length 96, checksum 0xb43f
> fragment id 0xb059
>   ICMP echo_reply checksum 0x3092
> 01:35:03:880124: 

Re: [vpp-dev] VPP memif ping taking ~20ms

2020-03-12 Thread vyshakh krishnan
Hi Damjan,

Please find the trace on both side

vpp1 (10.1.1.1)  (10.1.1.2) vpp2

DBGvpp# ping 10.1.1.2
116 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=31.8174 ms
116 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=47.9716 ms
116 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=40.0259 ms
116 bytes from 10.1.1.2: icmp_seq=4 ttl=64 time=19.9210 ms
116 bytes from 10.1.1.2: icmp_seq=5 ttl=64 time=39.9568 ms

Statistics: 5 sent, 5 received, 0% packet loss


VPP1:
**

Packet 3

01:35:02:872114: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
01:35:02:872132: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  IP4: 02:dc:5c:30:00:00 -> 02:dc:5c:30:00:00
01:35:02:872142: ip4-input
  ICMP: 10.1.1.2 -> 10.1.1.1
tos 0x00, ttl 64, length 96, checksum 0x36e4
fragment id 0x2db5
  ICMP echo_reply checksum 0xf86f
01:35:02:872153: ip4-lookup
  fib 0 dpo-idx 8 flow hash: 0x
  ICMP: 10.1.1.2 -> 10.1.1.1
tos 0x00, ttl 64, length 96, checksum 0x36e4
fragment id 0x2db5
  ICMP echo_reply checksum 0xf86f
01:35:02:872206: ip4-local
ICMP: 10.1.1.2 -> 10.1.1.1
  tos 0x00, ttl 64, length 96, checksum 0x36e4
  fragment id 0x2db5
ICMP echo_reply checksum 0xf86f
01:35:02:872215: ip4-icmp-input
  ICMP: 10.1.1.2 -> 10.1.1.1
tos 0x00, ttl 64, length 96, checksum 0x36e4
fragment id 0x2db5
  ICMP echo_reply checksum 0xf86f
01:35:02:872224: ip4-icmp-echo-reply
  ICMP4 echo id 50293 seq 1 send to cli node 620

Packet 4

01:35:02:872114: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
01:35:02:872132: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  ARP: 02:dc:5c:30:00:00 -> ff:ff:ff:ff:ff:ff
01:35:02:872150: arp-input
  request, type ethernet/IP4, address size 6/4
  02:dc:5c:30:00:00/10.1.1.2 -> 00:00:00:00:00:00/10.1.1.1
01:35:02:872163: arp-reply
  request, type ethernet/IP4, address size 6/4
  02:dc:5c:30:00:00/10.1.1.2 -> 00:00:00:00:00:00/10.1.1.1
01:35:02:872219: memif11/11-output
  memif11/11 l2_hdr_offset_valid l3_hdr_offset_valid
  ARP: 02:dc:5c:30:00:00 -> 02:dc:5c:30:00:00
  reply, type ethernet/IP4, address size 6/4
  02:dc:5c:30:00:00/10.1.1.1 -> 02:dc:5c:30:00:00/10.1.1.2

Packet 5

01:35:03:136170: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
01:35:03:136186: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  IP6: 02:dc:5c:30:00:00 -> 33:33:00:00:00:01
01:35:03:136195: ip6-input
  ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0x57bf
01:35:03:136201: ip6-mfib-forward-lookup
  fib 0 entry 4
01:35:03:136209: ip6-mfib-forward-rpf
  entry 4 itf 1 flags Accept,
01:35:03:136212: ip6-replicate
  replicate: 2 via [@1]: dpo-receive
01:35:03:136217: ip6-local
ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
  tos 0x00, flow label 0x0, hop limit 255, payload length 32
ICMP router_advertisement checksum 0x57bf
01:35:03:136225: ip6-icmp-input
  ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0x57bf
01:35:03:136228: icmp6-router-advertisement
  ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0x57bf
01:35:03:136237: ip6-drop
ICMP6: fe80::dc:5cff:fe30:0 -> ff02::1
  tos 0x00, flow label 0x0, hop limit 255, payload length 32
ICMP router_advertisement checksum 0x57bf
01:35:03:136242: error-drop
  rx:memif11/11
01:35:03:136246: drop
  ip6-icmp-input: valid packets

Packet 6

01:35:03:860055: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
01:35:03:880103: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  IP4: 02:dc:5c:30:00:00 -> 02:dc:5c:30:00:00
01:35:03:880114: ip4-input
  ICMP: 10.1.1.2 -> 10.1.1.1
tos 0x00, ttl 64, length 96, checksum 0xb43f
fragment id 0xb059
  ICMP echo_reply checksum 0x3092
01:35:03:880124: ip4-lookup
  fib 0 dpo-idx 8 flow hash: 0x
  ICMP: 10.1.1.2 -> 10.1.1.1
tos 0x00, ttl 64, length 96, checksum 0xb43f
fragment id 0xb059
  ICMP echo_reply checksum 0x3092
01:35:03:880178: ip4-local
ICMP: 10.1.1.2 -> 10.1.1.1
  tos 0x00, ttl 64, length 96, checksum 0xb43f
  fragment id 0xb059
ICMP echo_reply checksum 0x3092
01:35:03:880185: ip4-icmp-input
  ICMP: 10.1.1.2 -> 10.1.1.1
tos 0x00, ttl 64, length 96, checksum 0xb43f
fragment id 0xb059
  ICMP echo_reply checksum 0x3092
01:35:03:880192: ip4-icmp-echo-reply
  ICMP4 echo id 50293 seq 2 send to cli node 620

Packet 7

01:35:03:860055: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
01:35:03:880103: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  ARP: 02:dc:5c:30:00:00 -> ff:ff:ff:ff:ff:ff
01:35:03:880120: arp-input
  request, type ethernet/IP4, address size 6/4
  02:dc:5c:30:00:00/10.1.1.2 -> 00:00:00:00:00:00/10.1.1.1
01:35:03:880130: 

[vpp-dev] VRRP Unit Tests failing on Master Ubuntu

2020-03-12 Thread Ray Kinsella
Anyone else noticing seemingly spurious failures related to the VRRP plugin's 
unit tests?
Some examples from unrelated commits.

Ray K

nat: timed out session scavenging upgrade (https://gerrit.fd.io/r/c/vpp/+/25462)
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2678/console.log.gz

==
TEST RESULTS:
 Scheduled tests: 1138
  Executed tests: 1138
Passed tests: 1021
   Skipped tests: 112
Failures: 3
  Errors: 2
FAILURES AND ERRORS IN TESTS:
  Testcase name: IPv4 VRRP Test Case 
FAILURE: IPv4 Master VR does not reply for VIP w/ accept mode off 
[test_vrrp.TestVRRP4.test_vrrp4_accept_mode_disabled]
FAILURE: IPv4 Master VR preempted by higher priority backup 
[test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
  Testcase name: IPv6 VRRP Test Case 
FAILURE: IPv6 Master VR preempted by higher priority backup 
[test_vrrp.TestVRRP6.test_vrrp6_master_preempted]
  ERROR: IPv6 Backup VR preempts lower priority master 
[test_vrrp.TestVRRP6.test_vrrp6_backup_preempts]
  Testcase name: Bidirectional Forwarding Detection (BFD) (IPv6) 
  ERROR: echo function [test_bfd.BFD6TestCase.test_echo]
==

vlib: startup multi-arch variant configuration 
(https://gerrit.fd.io/r/c/vpp/+/25798)
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1804/2675/console.log.gz

==
TEST RESULTS:
 Scheduled tests: 22
  Executed tests: 22
Passed tests: 21
Failures: 1
FAILURES AND ERRORS IN TESTS:
  Testcase name: IPv4 VRRP Test Case 
FAILURE: IPv4 Master VR preempted by higher priority backup 
[test_vrrp.TestVRRP4.test_vrrp4_master_preempted]
==





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15751): https://lists.fd.io/g/vpp-dev/message/15751
Mute This Topic: https://lists.fd.io/mt/71901798/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Is there any Linux FD to poll for VCL message

2020-03-12 Thread Satya Murthy
Hi ,

We have a TCP application trying to integrate with the VPP VCL framework.

Our application has its own dispatch loop with epoll and we would like to know 
if the VCL framework has any linux fd (like an eventfd for the entire svm 
message queue) that we can add into our epoll to poll for VCL session messages.

Once we get an asynchronous indication that a message has arrived in the VCL 
svm message queue, we can call vppcom_epoll_wait() function to read the 
messages for sessions and handle them accordingly.

Any inputs on how we can achieve this?

--
Thanks & Regards,
Murthy
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15750): https://lists.fd.io/g/vpp-dev/message/15750
Mute This Topic: https://lists.fd.io/mt/71899986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Ignore SIGPROF signal in VPP

2020-03-12 Thread Dave Barach via Lists.Fd.Io
+1 this seems OK to me.

From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
Lists.Fd.Io
Sent: Thursday, March 12, 2020 6:16 AM
To: Lijian Zhang 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Ignore SIGPROF signal in VPP



> On 12 Mar 2020, at 10:49, Lijian Zhang <lijian.zh...@arm.com> wrote:
>
> Hi Maintainers,
> We are profiling VPP with MAP (a software profiling suite for Arm CPUs; see 
> details 
> in https://www.arm.com/products/development-tools/server-and-hpc/forge/map) on 
> Arm CPUs.
>
> The MAP sampler runs inside the target process as a library because it does 
> many things that require access to the program's memory, like matching up 
> OpenMP threads etc. So it is a lot more invasive, and it uses the SIGPROF 
> signal to control the sample rate.
>
> VPP will receive the SIGPROF signal because MAP uses it to drive its sampler 
> when profiling VPP. However, the default action of VPP's signal handler 
> (unix_signal_handler()) on SIGPROF is process termination. To profile VPP 
> with MAP, we need to change the VPP signal handler to ignore the SIGPROF 
> signal.
>
> Can we upstream a patch to simply ignore the SIGPROF in VPP?

I think so. Please submit a patch, and unless somebody raises a concern here I 
will merge it...

>
> diff --git a/src/vlib/unix/main.c b/src/vlib/unix/main.c
> index e40a462..6138a6f 100755
> --- a/src/vlib/unix/main.c
> +++ b/src/vlib/unix/main.c
> @@ -218,6 +218,7 @@
>        /* ignore SIGPIPE, SIGCHLD */
>      case SIGPIPE:
>      case SIGCHLD:
> +    case SIGPROF:
>        sa.sa_sigaction = (void *) SIG_IGN;
>        break;

—
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15749): https://lists.fd.io/g/vpp-dev/message/15749
Mute This Topic: https://lists.fd.io/mt/71898718/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] impact of API requests on forwarding performance?

2020-03-12 Thread Ole Troan
Hi Elias,

> Thanks for explaining!
> I'm sorry if what I wrote before was wrong or confusing.

no, not at all!

>> Checking counters values in the stats segment has _no_ impact on VPP.
>> VPP writes those counters regardless of reader frequency.
> 
> That's great!
> 
> Just to be clear, to make sure I understand what this means, if we do
> the following in python:
> 
> from vpp_papi.vpp_stats import VPPStats
> stat = VPPStats("/run/vpp/stats.sock")
> dir = stat.ls(['^/nat44/total-users'])
> counters = stat.dump(dir)
> list_of_counters=counters.get('/nat44/total-users')
> 
> (followed by a loop in python to sum up the counter values from
> different vpp threads) then what we are doing is that we are checking
> counters values in the stats segment, so there should be no impact on
> VPP?

Yes, that is correct.
(Of course, depending on how pedantic you want to be, there is extra load on 
the memory hierarchy from the client reading, which may in turn affect VPP.)

The actual VPP counter is exposed directly in shared memory. No copying or 
locking on the VPP side.
The client uses optimistic locking, meaning that it will retry copying out the 
counter if the underlying stat segment directory structure has changed.
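
Conceptually, the client side does something like this (a simplified sketch 
of the idea only, not the actual vpp_papi code; the field and function names 
are illustrative):

/* The writer bumps an epoch counter whenever the directory changes;
 * the reader copies optimistically and retries if it raced a change. */
uint64_t epoch;
do
  {
    epoch = shared_header->epoch;
    copy_out_counters (dst, shared_header);  /* placeholder for the copy */
  }
while (shared_header->in_progress || shared_header->epoch != epoch);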

Best regards,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15748): https://lists.fd.io/g/vpp-dev/message/15748
Mute This Topic: https://lists.fd.io/mt/71882379/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP memif ping taking ~20ms

2020-03-12 Thread Damjan Marion via Lists.Fd.Io

That’s weird…. Can you capture packet trace on both sides?


> On 12 Mar 2020, at 11:16, vyshakh krishnan  wrote:
> 
> Hi Damjan,
> 
> We don't have any worker threads and memif is in polling mode:
> 
> DBGvpp# sh int rx-placement
> Thread 0 (vpp_main):
>   node memif-input:
> memif11/11 queue 0 (polling)
> memif222/222 queue 0 (polling)
> 
> Thanks
> Vyshakh
> 
> On Wed, Mar 11, 2020 at 9:03 PM Damjan Marion wrote:
> 
> Are you running VPP with worker threads and using interrupt mode in memif?
> 
> can you capture “sh int rx-placement” on both sides?
> 
> — 
> Damjan
> 
>> On 11 Mar 2020, at 15:44, vyshakh krishnan wrote:
>> 
>> Hi All,
>> 
>> When we try to ping a back-to-back connected memif interface, it's taking 
>> around 20 milliseconds:
>> 
>> vpp1 (10.1.1.1)  (10.1.1.2) vpp2
>> 
>> DBGvpp#  ping 10.1.1.2
>> 116 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=15.1229 ms
>> 116 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=20.1475 ms
>> 116 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=20.0371 ms
>> 116 bytes from 10.1.1.2: icmp_seq=4 ttl=64 time=14.9237 ms
>> 116 bytes from 10.1.1.2: icmp_seq=5 ttl=64 time=20.1059 ms
>> 
>> Statistics: 5 sent, 5 received, 0% packet loss
>> 
>> Is it expected to take 20 msecs for a direct ping? 
>> 
>> Thanks
>> Vyshakh
>> 
>> 
>> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15747): https://lists.fd.io/g/vpp-dev/message/15747
Mute This Topic: https://lists.fd.io/mt/71880617/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP memif ping taking ~20ms

2020-03-12 Thread vyshakh krishnan
Hi Damjan,

We don't have any worker threads and memif is in polling mode:

DBGvpp# sh int rx-placement
Thread 0 (vpp_main):
  node memif-input:
memif11/11 queue 0 (polling)
memif222/222 queue 0 (polling)

Thanks
Vyshakh

On Wed, Mar 11, 2020 at 9:03 PM Damjan Marion  wrote:

>
> Are you running VPP with worker threads and using interrupt mode in memif?
>
> can you capture “sh int rx-placement” on both sides?
>
> —
> Damjan
>
> On 11 Mar 2020, at 15:44, vyshakh krishnan  wrote:
>
> Hi All,
>
> When we try to ping a back-to-back connected memif interface, it's taking
> around 20 milliseconds:
>
> vpp1 (10.1.1.1)  (10.1.1.2) vpp2
>
> DBGvpp#  ping 10.1.1.2
> 116 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=15.1229 ms
> 116 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=20.1475 ms
> 116 bytes from 10.1.1.2: icmp_seq=3 ttl=64 time=20.0371 ms
> 116 bytes from 10.1.1.2: icmp_seq=4 ttl=64 time=14.9237 ms
> 116 bytes from 10.1.1.2: icmp_seq=5 ttl=64 time=20.1059 ms
>
> Statistics: 5 sent, 5 received, 0% packet loss
>
> Is it expected to take 20 msecs for a direct ping?
>
> Thanks
> Vyshakh
>
>
> 
>
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15746): https://lists.fd.io/g/vpp-dev/message/15746
Mute This Topic: https://lists.fd.io/mt/71880617/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Ignore SIGPROF signal in VPP

2020-03-12 Thread Damjan Marion via Lists.Fd.Io


> On 12 Mar 2020, at 10:49, Lijian Zhang  wrote:
> 
> Hi Maintainers,
> We are profiling VPP on Arm CPUs with MAP (a software profiling suite; see 
> details in 
> https://www.arm.com/products/development-tools/server-and-hpc/forge/map).
>  
> The MAP sampler runs inside the target process as a library because it does a 
> lot more things that require access to the program's memory, like matching up 
> OpenMP threads. So it is a lot more invasive and uses the SIGPROF signal to 
> control the sample rate.
>  
> VPP will receive the SIGPROF signal because MAP uses it to drive its sampler 
> while profiling VPP. However, the default action of the SIGPROF signal 
> handler in VPP, unix_signal_handler(), is process termination. To profile 
> VPP with MAP, we need to change the VPP signal handler to ignore the SIGPROF 
> signal.
>  
> Can we upstream a patch to simply ignore SIGPROF in VPP?

I think so, please submit patch and unless somebody raises his concern here I 
will merge it...

>  
> diff --git a/src/vlib/unix/main.c b/src/vlib/unix/main.c
> index e40a462..6138a6f 100755
> --- a/src/vlib/unix/main.c
> +++ b/src/vlib/unix/main.c
> @@ -218,6 +218,7 @@
>        /* ignore SIGPIPE, SIGCHLD */
>      case SIGPIPE:
>      case SIGCHLD:
> +    case SIGPROF:
>        sa.sa_sigaction = (void *) SIG_IGN;
>        break;

— 
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15745): https://lists.fd.io/g/vpp-dev/message/15745
Mute This Topic: https://lists.fd.io/mt/71898718/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Failure of creating avf interface on SMP system

2020-03-12 Thread Damjan Marion via Lists.Fd.Io


> On 12 Mar 2020, at 10:30, Lijian Zhang  wrote:
> 
> Hi Damjan,
> We observed a failure when creating avf interfaces on two types of Arm CPUs; 
> both are SMP systems with only one NUMA node.
>  
> Function vlib_pci_get_device_info() reads ‘/sys/bus/pci/devices/<dev-id>/numa_node’ to check which numa_node a NIC device resides in.
> But on SMP system, -1 is returned as below example, and then later VPP uses 
> -1 to access some arrays which causes memory out-of-bound issue.
>  
> It seems that -1 is returned when the kernel doesn’t have NUMA node 
> information, and the kernel ABI documents -1 as a valid return value here:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/testing/sysfs-bus-pci#n285
>  
> Is it ok to just set di->numa_node to 0, if ‘/sys/bus/pci/devices/<dev-id>/numa_node’ returns -1 and there is only one numa node by checking 
> ‘/sys/devices/system/node/online’?

sounds good to me. Please submit patch.

>   if (-1 == di->numa_node)

please change to:

if (di->numa_node == -1)

— 
Damjan




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15744): https://lists.fd.io/g/vpp-dev/message/15744
Mute This Topic: https://lists.fd.io/mt/71898582/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] impact of API requests on forwarding performance?

2020-03-12 Thread Elias Rudberg
Hi Ole,

Thanks for explaining!
I'm sorry if what I wrote before was wrong or confusing.

> Checking counter values in the stats segment has _no_ impact on VPP.
> VPP writes those counters regardless of reader frequency.

That's great!

Just to be clear, to make sure I understand what this means, if we do
the following in python:

from vpp_papi.vpp_stats import VPPStats
stat = VPPStats("/run/vpp/stats.sock")
dir = stat.ls(['^/nat44/total-users'])
counters = stat.dump(dir)
list_of_counters=counters.get('/nat44/total-users')

(followed by a loop in python to sum up the counter values from
different vpp threads) then what we are doing is that we are checking
counter values in the stats segment, so there should be no impact on
VPP?
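
(For reference, the summing loop itself is just something like the sketch
below; it assumes dump() returned one entry per VPP thread for this gauge,
and flattens nested per-index lists defensively:)

# Sum the '/nat44/total-users' gauge over all VPP threads.
total = 0
for per_thread in list_of_counters:
    if isinstance(per_thread, (list, tuple)):
        total += sum(per_thread)   # a per-thread entry may itself be a list
    else:
        total += per_thread
print("nat44 total-users:", total)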

Best regards,
Elias

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15743): https://lists.fd.io/g/vpp-dev/message/15743
Mute This Topic: https://lists.fd.io/mt/71882379/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] A question about using packetdrill test vpp hoststack

2020-03-12 Thread dailongfei

Hi,

Recently I want to use packetdrill to test the vpp hoststack. I connect vpp and
the kernel protocol stack with a veth pair. The local client runs on the vpp
hoststack, the remote is on the kernel:

local <-> vcl <-> vpp <-> veth1 <-> veth0 <-> remote

At the remote end I just want to receive the layer-2 or layer-3 packets, so I
use a raw socket. However, I meet a problem: the raw socket only gets a copy of
the packet sent by the local client, and the packet is still delivered to the
upper layer (layer 4). The upper layer will then answer the packet, which
influences my test, since the local client only wants to receive the packets
sent by the raw socket:

local <-> vcl <-> vpp <-> veth1 <-> veth0 <-> raw sock
                                      |
                                      ---X--> upper layer

Did you meet the same problem when testing the vpp hoststack? And do you have a
good idea about testing the vpp hoststack with packetdrill?

Regards,
Longfei
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15742): https://lists.fd.io/g/vpp-dev/message/15742
Mute This Topic: https://lists.fd.io/mt/71898758/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Ignore SIGPROF signal in VPP

2020-03-12 Thread Lijian Zhang
Hi Maintainers,
We are profiling VPP on Arm CPUs with MAP (a software profiling suite; see 
details in 
https://www.arm.com/products/development-tools/server-and-hpc/forge/map).

The MAP sampler runs inside the target process as a library because it does a 
lot more things that require access to the program's memory, like matching up 
OpenMP threads. So it is a lot more invasive and uses the SIGPROF signal to 
control the sample rate.

VPP will receive the SIGPROF signal because MAP uses it to drive its sampler 
while profiling VPP. However, the default action of the SIGPROF signal handler 
in VPP, unix_signal_handler(), is process termination. To profile VPP with MAP, 
we need to change the VPP signal handler to ignore the SIGPROF signal.

Can we upstream a patch to simply ignore SIGPROF in VPP?

diff --git a/src/vlib/unix/main.c b/src/vlib/unix/main.c
index e40a462..6138a6f 100755
--- a/src/vlib/unix/main.c
+++ b/src/vlib/unix/main.c
@@ -218,6 +218,7 @@
       /* ignore SIGPIPE, SIGCHLD */
     case SIGPIPE:
     case SIGCHLD:
+    case SIGPROF:
       sa.sa_sigaction = (void *) SIG_IGN;
       break;

Thanks.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15741): https://lists.fd.io/g/vpp-dev/message/15741
Mute This Topic: https://lists.fd.io/mt/71898718/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] impact of API requests on forwarding performance?

2020-03-12 Thread Ole Troan
Elias,

> I think you are right about the stop-the-world way it works.
> 
> We have seen a performance impact, but that was for a command that was
> quite slow, listing something with many lines of output (the "show
> nat44 sessions" command). So then the worker threads were stopped
> during that whole operation and we saw some packet drops each time.
> Later we were able to extract the info we needed in other ways (like
> getting number of sessions directly as a single number per thread via
> the python API instead of fetching a large output and counting lines in
> that), so we could avoid that performance problem.

The correct answer is of course "it depends".
Non-thread-safe API calls will block the workers.
Thread-safe ones may incur locking that affects the workers.
The dump/detail mechanism is not suitable for dumping large data structures.
Those typically would block the main thread for too long.
In general the expectation is that the control plane should know what it has 
configured VPP with, so there shouldn't be a need to query VPP for that 
information.

For frequently changing data structures like the NAT binding table, it's a 
little trickier.
The NAT binding table could have several hundred million entries with millions 
of adds/deletes a second.
Neither the stats segment nor the API is useful for that, although you could 
imagine doing something with snapshots.
For data structures like that there is ipfix or even syslog.
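
For NAT44, for example, the ipfix path is typically wired up along these lines
(an illustrative CLI sketch; the collector addresses are made up and the exact
syntax may differ between VPP versions):

set ipfix exporter collector 192.0.2.1 port 4739 src 192.0.2.2
nat ipfix logging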


> For small things like checking the values of some counters, we have not
> seen any performance impact. But then we only did those calls once
> every 30 seconds or so. If you do it very often, like many times times
> per second, maybe there could be a performance impact also for small
> things. I suppose you could test it by gradually increasing the
> frequency of your API calls and seeing when drops start to happen.

Checking counter values in the stats segment has _no_ impact on VPP.
VPP writes those counters regardless of reader frequency.

Best regards,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15740): https://lists.fd.io/g/vpp-dev/message/15740
Mute This Topic: https://lists.fd.io/mt/71882379/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Failure of creating avf interface on SMP system

2020-03-12 Thread Lijian Zhang
Hi Damjan,
We observed a failure when creating avf interfaces on two types of Arm CPUs; 
both are SMP systems with only one NUMA node.

Function vlib_pci_get_device_info() reads '/sys/bus/pci/devices/<dev-id>/numa_node' to check which numa_node a NIC device resides in.
But on an SMP system -1 is returned, as in the example below, and later VPP 
uses -1 to index some arrays, which causes a memory out-of-bounds access.

It seems that -1 is returned when the kernel doesn't have NUMA node 
information, and the kernel ABI documents -1 as a valid return value here:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/ABI/testing/sysfs-bus-pci#n285

Is it ok to just set di->numa_node to 0, if '/sys/bus/pci/devices/<dev-id>/numa_node' returns -1 and there is only one numa node by checking 
'/sys/devices/system/node/online'?
  if (-1 == di->numa_node)
    {
      if ((err = clib_sysfs_read ("/sys/devices/system/node/online", "%U",
                                  unformat_bitmap_list, &bmp)))
        clib_error_free (err);
      if (clib_bitmap_count_set_bits (bmp) == 1)
        di->numa_node = 0;
    }

$ cat /sys/bus/pci/devices/0000:03:00.0/numa_node
-1
$ cat /sys/devices/system/node/online
0
$ lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  2
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           ARM
Model:               0
Stepping:            r1p0
BogoMIPS:            100.00
L1d cache:           64K
L1i cache:           64K
L2 cache:            1024K
L3 cache:            1024K
L4 cache:            8192K
NUMA node0 CPU(s):   0-3
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp

Thanks.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15739): https://lists.fd.io/g/vpp-dev/message/15739
Mute This Topic: https://lists.fd.io/mt/71898582/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] impact of API requests on forwarding performance?

2020-03-12 Thread Elias Rudberg
Hi Andreas,

I think you are right about the stop-the-world way it works.

We have seen a performance impact, but that was for a command that was
quite slow, listing something with many lines of output (the "show
nat44 sessions" command). So then the worker threads were stopped
during that whole operation and we saw some packet drops each time.
Later we were able to extract the info we needed in other ways (like
getting number of sessions directly as a single number per thread via
the python API instead of fetching a large output and counting lines in
that), so we could avoid that performance problem.

For small things like checking the values of some counters, we have not
seen any performance impact. But then we only did those calls once
every 30 seconds or so. If you do it very often, like many times
per second, maybe there could be a performance impact also for small
things. I suppose you could test it by gradually increasing the
frequency of your API calls and seeing when drops start to happen.
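
Something along these lines could serve as a crude probe (an untested sketch
using vpp_papi; the socket path and the choice of show_version as the cheap
call are assumptions, so adjust for your setup):

# Drive a small, read-only API call at increasing rates while traffic
# runs, and watch the traffic generator for the onset of drops.
import time
from vpp_papi import VPPApiClient

vpp = VPPApiClient(server_address="/run/vpp/api.sock")
vpp.connect("api-load-probe")

for rate in (10, 100, 1000):          # target calls per second
    deadline = time.time() + 10.0     # hold each rate for ten seconds
    while time.time() < deadline:
        vpp.api.show_version()        # small request/reply round trip
        time.sleep(1.0 / rate)
    print("sustained %d calls/s" % rate)

vpp.disconnect()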

Best regards,
Elias


On Wed, 2020-03-11 at 17:03 +0100, Andreas Schultz wrote:
> Hi,
> 
> Has anyone benchmarked the impact of VPP API invocations on the
> forwarding performance?
> 
> Background: most calls on the VPP API run in a stop-the-world manner. That
> means all graph node worker threads are stopped at a barrier, the API
> call is executed and then the workers are released from the barrier.
> Right?
> 
> My question is now, when I do 1k, 10k or even 100k API invocations per
> second, how does that impact the forwarding performance of VPP?
> 
> Does anyone have a use-case running that is actually doing that?
> 
> Many thanks,
> Andreas

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15738): https://lists.fd.io/g/vpp-dev/message/15738
Mute This Topic: https://lists.fd.io/mt/71882379/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Query about internal apps

2020-03-12 Thread Ole Troan
Hi Vivek,

> We are trying to achieve the mechanism, something similar to TAP interface, 
> in VPP.
>  
> So, the packets coming out of the TAP interface, will be directed directly to 
> the application. The application will receive the packets, coming via TAP 
> interface, process them and send it down via the Host stack.
>  
> Possible options, we could think of are:-
> - Enhance the session layer to provide a L2 transport mechanism and add nodes 
> like tap-input and tap-out which would achieve the same.
> - Use the existing session layer by doing a IP/UDP encap and send it to the 
> APP, via session layer and use existing mechanism.
>   This introduces an overhead of additional encap/decap.
>  
> We wanted to check if there is any alternate option to directly transfer the 
> packets from the plugin to the VPP App, without even involving the session 
> layer and have no additional overhead encap/decap,

Is this similar to the idea of routing directly to the application?
I.e. give each application an IP address (easier with IPv6), and the 
application itself links in whatever transport layer it needs. In a VPP context 
the application could sit behind a memif interface. The application would need 
some support for IP address assignment, ARP/ND etc.
Userland networking taken to the extreme. ;-)
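
On the VPP side that could be as little as the following (an illustrative CLI
sketch; the addresses and the route are made up for the example):

create interface memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 2001:db8::1/64
ip route add 2001:db8:a::/64 via 2001:db8::2 memif0/0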

Best regards,
Ole
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15737): https://lists.fd.io/g/vpp-dev/message/15737
Mute This Topic: https://lists.fd.io/mt/71885250/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-