Re: [vpp-dev] vpp 19.08 failed to load in CENTOS 7

2020-04-06 Thread amitmulayoff
Thanks for the reply.
I edited the question; it is not working with uio and IOMMU is enabled, so I
need to work with vfio-pci.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16019): https://lists.fd.io/g/vpp-dev/message/16019
Mute This Topic: https://lists.fd.io/mt/72808669/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp 20.01 not properly handle ipv6 router advertisement and neighbor advertisement #vnet #vpp

2020-04-06 Thread guojx1998
Would anyone please confirm the issue with neighbor advertisement in vpp
20.01, namely the missing router bit?

Thanks in advance!
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16018): https://lists.fd.io/g/vpp-dev/message/16018
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] n_vectors...

2020-04-06 Thread Burt Silverman
I had to go through slowly and take lots of Dramamine to get my head to
stop spinning while I read through everything. I agree -- referring to a
frame as a vector, when it is not a vector in the sense of the
infrastructure vector, turns something relatively straightforward into
something unnecessarily abstruse.

I would change the comment

  /* Number of vector elements currently in frame. */

to

  /* Number of frame elements (vectors) in frame. */

or

  /* Number of frame elements (vectors [aka packets]) in frame. */

And the documentation should either pick another term for vector, or, if
that ruins the spirit of the project, should spell out each time that it is
using the concept loosely, i.e., not in the meaning of the infrastructure
code. I think the documentation should be maintained in a fashion where
there is an implicit belief that anybody reading the documentation is also
reading the source code.

Now, if I have the wrong picture at this point, then even more thorough
changes should be made. (I hope I have the correct picture.)

Burt
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16017): https://lists.fd.io/g/vpp-dev/message/16017
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] n_vectors...

2020-04-06 Thread Elias Rudberg
Hi Burt,

Thanks, but I think you mean vectors as in src/vppinfra/vec.h, whereas
the discussion here was about how the name "n_vectors" is used in, for
example, src/vlib/node.h and similar places. It's a different thing.

Suppose we have a situation like this, first described using a picture
that avoids the word "vector" entirely:

A : [ a1 a2 a3 ]
B : [ b1 b2 b3 b4 ]

Then the above can be described in different ways.

Alternative 1: we can say that A and B are vectors. A is a vector with
3 elements, B is a vector with 4 elements. The number of vectors is 2
(A and B). According to this view, if there was something called
n_vectors then we would say that n_vectors=2.

Alternative 2 (the VPP way): A consists of 3 vectors, and B consists of
4 vectors. The number of vectors for A is 3, and the number of vectors
for B is 4. A and B each have their own n_vectors values: A has
n_vectors=3 and B has n_vectors=4. At least this is how I think it is
in the VPP source code.

The VPP source code can be confusing if you assume the word "vector" is
used as in alternative 1.

I think the main scenario of interest in VPP is that there is a bunch
of packets that are processed together. You might think that this would
be described as a vector of packets, but the VPP source code instead
describes the individual packets as vectors, so that "number of
vectors" in effect means "number of packets". At least that is how I
think it is.

There is at least one comment in src/vlib/node.h that seems to say this;
it looks like this:

  /* Number of vector elements currently in frame. */
  u16 n_vectors;

So that variable is called n_vectors but according to the comment its
meaning is the number of vector elements rather than the number of
vectors.

Best regards,
Elias

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16016): https://lists.fd.io/g/vpp-dev/message/16016
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Andrew Yourtchenko
(“su -l ubuntu” rather than “si -l ubuntu”, obviously).

--a

> On 6 Apr 2020, at 18:21, Andrew  Yourtchenko  wrote:
> 
> Paul,
> 
> It’s still perfectly reproducible in a fresh LXD container for me.
> 
> lxc launch ubuntu:18.04 vpptest
> lxc exec vpptest -- si -l ubuntu
> # now we are a user “ubuntu” inside the container 
> 
> sudo apt-get update
> sudo apt-get upgrade
> git clone https://git.fd.io/vpp
> cd vpp
> # go to version before the pinning
> git checkout -b broken-doc 2e1fa54b
> make test-doc
> 
> The above steps reproduce an error.
> 
> --a
> 
>>> On 6 Apr 2020, at 17:35, Paul Vinciguerra  
>>> wrote:
>>> 
>> 
>> I have not been able to reproduce the problem from a fresh ubuntu 18.04 
>> container and for me, the build succeeds.
>> 
>> build succeeded, 240 warnings.
>> The HTML pages are in ../../build-root/build-test/doc/html.
>> make[2]: Leaving directory '/vpp/test/doc'
>> If someone can send me the error log:
>> The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log
>> I will gladly look into it.
>> 
>> Paul
>> 
>>> On Mon, Apr 6, 2020 at 11:15 AM Paul Vinciguerra via lists.fd.io 
>>>  wrote:
>>> Andrew submitted a changeset that backs out the updated Sphinx package.  I 
>>> am building the target 'test-doc' to try to learn the root cause.
>>> 
 On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io 
  wrote:
 Folks,
 

 
 It looks like jobs for all branches, 19.08, 20.01, and master, are failing 
 due to this inspect.py error. Could somebody who is familiar with the 
 issue please take a look at it?
 

 
 18:59:12 Exception occurred:
 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
 18:59:12 raise ValueError('wrapper loop when unwrapping 
 {!r}'.format(f))
 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
 18:59:12 The full traceback has been saved in 
 /tmp/sphinx-err-o2xo4j0j.log, if you want to report the issue to the 
 developers.
 18:59:12 Please also report this if it was a user error, so that a better 
 error message can be provided next time.
 18:59:12 A bug report can be filed in the tracker at 
 . Thanks!
 18:59:12 Makefile:71: recipe for target 'html' failed
 18:59:12 make[2]: *** [html] Error 2
 18:59:12 make[2]: Leaving directory 
 '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
 18:59:12 Makefile:237: recipe for target 'doc' failed
 18:59:12 make[1]: *** [doc] Error 2
 18:59:12 make[1]: Leaving directory 
 '/w/workspace/vpp-make-test-docs-verify-1908/test'
 18:59:12 Makefile:449: recipe for target 'test-doc' failed
 18:59:12 make: *** [test-doc] Error 2
 18:59:12 Build step 'Execute shell' marked build as failure
 18:59:12 $ ssh-agent -k

 

 
 Steven
 
 
>>> 
>> 
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16015): https://lists.fd.io/g/vpp-dev/message/16015
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Andrew Yourtchenko
Paul,

It’s still perfectly reproducible in a fresh LXD container for me.

lxc launch ubuntu:18.04 vpptest
lxc exec vpptest -- si -l ubuntu
# now we are a user “ubuntu” inside the container 

sudo apt-get update
sudo apt-get upgrade
git clone https://git.fd.io/vpp
cd vpp
# go to version before the pinning
git checkout -b broken-doc 2e1fa54b
make test-doc

The above steps reproduce an error.

--a

> On 6 Apr 2020, at 17:35, Paul Vinciguerra  wrote:
> 
> 
> I have not been able to reproduce the problem from a fresh ubuntu 18.04 
> container and for me, the build succeeds.
> 
> build succeeded, 240 warnings.
> The HTML pages are in ../../build-root/build-test/doc/html.
> make[2]: Leaving directory '/vpp/test/doc'
> If someone can send me the error log:
> The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log
> I will gladly look into it.
> 
> Paul
> 
>> On Mon, Apr 6, 2020 at 11:15 AM Paul Vinciguerra via lists.fd.io 
>>  wrote:
>> Andrew submitted a changeset that backs out the updated Sphinx package.  I 
>> am building the target 'test-doc' to try to learn the root cause.
>> 
>>> On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io 
>>>  wrote:
>>> Folks,
>>> 
>>>
>>> 
>>> It looks like jobs for all branches, 19.08, 20.01, and master, are failing 
>>> due to this inspect.py error. Could somebody who is familiar with the issue 
>>> please take a look at it?
>>> 
>>>
>>> 
>>> 18:59:12 Exception occurred:
>>> 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
>>> 18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
>>> 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
>>> 18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, 
>>> if you want to report the issue to the developers.
>>> 18:59:12 Please also report this if it was a user error, so that a better 
>>> error message can be provided next time.
>>> 18:59:12 A bug report can be filed in the tracker at 
>>> . Thanks!
>>> 18:59:12 Makefile:71: recipe for target 'html' failed
>>> 18:59:12 make[2]: *** [html] Error 2
>>> 18:59:12 make[2]: Leaving directory 
>>> '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
>>> 18:59:12 Makefile:237: recipe for target 'doc' failed
>>> 18:59:12 make[1]: *** [doc] Error 2
>>> 18:59:12 make[1]: Leaving directory 
>>> '/w/workspace/vpp-make-test-docs-verify-1908/test'
>>> 18:59:12 Makefile:449: recipe for target 'test-doc' failed
>>> 18:59:12 make: *** [test-doc] Error 2
>>> 18:59:12 Build step 'Execute shell' marked build as failure
>>> 18:59:12 $ ssh-agent -k
>>>
>>> 
>>>
>>> 
>>> Steven
>>> 
>>> 
>> 
> 
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16014): https://lists.fd.io/g/vpp-dev/message/16014
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Dear Andrew,

I confirm that master has been rescued and reverted from “lockdown” back to
“normal”. Please proceed with the “disinfection process” on 19.08 and 20.01
if you will.

Steven

From: Andrew  Yourtchenko 
Date: Monday, April 6, 2020 at 8:09 AM
To: "Steven Luong (sluong)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Jobs are failing due to inspect.py

Sphinx upgraded itself under the hood last night from 2.4.4 to a (crashing)
version 3.0.0.

I made a pin on master, so the master should be ok now - rebase and recheck 
please, and let me know if it works!

Will do the same on the other two branches later today if we are all happy 
about the fix on master...
--a


On 6 Apr 2020, at 17:03, steven luong via lists.fd.io 
 wrote:
Folks,

It looks like jobs for all branches, 19.08, 20.01, and master, are failing due 
to this inspect.py error. Could somebody who is familiar with the issue please 
take a look at it?


18:59:12 Exception occurred:

18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap

18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))

18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField

18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, if 
you want to report the issue to the developers.

18:59:12 Please also report this if it was a user error, so that a better error 
message can be provided next time.

18:59:12 A bug report can be filed in the tracker at 
. Thanks!

18:59:12 Makefile:71: recipe for target 'html' failed

18:59:12 make[2]: *** [html] Error 2

18:59:12 make[2]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test/doc'

18:59:12 Makefile:237: recipe for target 'doc' failed

18:59:12 make[1]: *** [doc] Error 2

18:59:12 make[1]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test'

18:59:12 Makefile:449: recipe for target 'test-doc' failed

18:59:12 make: *** [test-doc] Error 2

18:59:12 Build step 'Execute shell' marked build as failure

18:59:12 $ ssh-agent -k


Steven

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16013): https://lists.fd.io/g/vpp-dev/message/16013
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Paul Vinciguerra
Thanks,
but that only tells me where the traceback was written; I'm looking for the
contents of that file.

Paul

On Mon, Apr 6, 2020 at 11:43 AM Steven Luong (sluong) 
wrote:

> master
>
> https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/19049/console
>
>
>
> 20.01
>
> https://jenkins.fd.io/job/vpp-make-test-docs-verify-2001/61/console
>
>
>
> Steven
>
>
>
> From: Paul Vinciguerra 
> Date: Monday, April 6, 2020 at 8:35 AM
> To: Paul Vinciguerra 
> Cc: "Steven Luong (sluong)" , "vpp-dev@lists.fd.io" <
> vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] Jobs are failing due to inspect.py
>
>
>
> I have not been able to reproduce the problem from a fresh ubuntu 18.04
> container and for me, the build succeeds.
>
>
>
> build succeeded, 240 warnings.
>
> The HTML pages are in ../../build-root/build-test/doc/html.
> make[2]: Leaving directory '/vpp/test/doc'
>
> If someone can send me the error log:
>
> The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log
>
> I will gladly look into it.
>
>
>
> Paul
>
>
>
> On Mon, Apr 6, 2020 at 11:15 AM Paul Vinciguerra via lists.fd.io  vinciconsulting@lists.fd.io> wrote:
>
> Andrew submitted a changeset that backs out the updated Sphinx package.  I
> am building the target 'test-doc' to try to learn the root cause.
>
>
>
> On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io  cisco@lists.fd.io> wrote:
>
> Folks,
>
>
>
> It looks like jobs for all branches, 19.08, 20.01, and master, are failing
> due to this inspect.py error. Could somebody who is familiar with the issue
> please take a look at it?
>
>
>
> 18:59:12 Exception occurred:
>
> 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
>
> 18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
>
> 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
>
> 18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, 
> if you want to report the issue to the developers.
>
> 18:59:12 Please also report this if it was a user error, so that a better 
> error message can be provided next time.
>
> 18:59:12 A bug report can be filed in the tracker at 
> . Thanks!
>
> 18:59:12 Makefile:71: recipe for target 'html' failed
>
> 18:59:12 make[2]: *** [html] Error 2
>
> 18:59:12 make[2]: Leaving directory 
> '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
>
> 18:59:12 Makefile:237: recipe for target 'doc' failed
>
> 18:59:12 make[1]: *** [doc] Error 2
>
> 18:59:12 make[1]: Leaving directory 
> '/w/workspace/vpp-make-test-docs-verify-1908/test'
>
> 18:59:12 Makefile:449: recipe for target 'test-doc' failed
>
> 18:59:12 make: *** [test-doc] Error 2
>
> 18:59:12 Build step 'Execute shell' marked build as failure
>
> 18:59:12 $ ssh-agent -k
>
>
>
>
>
> Steven
>
>
>
> 
>
>
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16012): https://lists.fd.io/g/vpp-dev/message/16012
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
master
https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/19049/console

20.01
https://jenkins.fd.io/job/vpp-make-test-docs-verify-2001/61/console

Steven

From: Paul Vinciguerra 
Date: Monday, April 6, 2020 at 8:35 AM
To: Paul Vinciguerra 
Cc: "Steven Luong (sluong)" , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Jobs are failing due to inspect.py

I have not been able to reproduce the problem from a fresh ubuntu 18.04 
container and for me, the build succeeds.

build succeeded, 240 warnings.
The HTML pages are in ../../build-root/build-test/doc/html.
make[2]: Leaving directory '/vpp/test/doc'
If someone can send me the error log:
The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log
I will gladly look into it.

Paul

On Mon, Apr 6, 2020 at 11:15 AM Paul Vinciguerra via lists.fd.io wrote:
Andrew submitted a changeset that backs out the updated Sphinx package.  I am 
building the target 'test-doc' to try to learn the root cause.

On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io wrote:
Folks,

It looks like jobs for all branches, 19.08, 20.01, and master, are failing due 
to this inspect.py error. Could somebody who is familiar with the issue please 
take a look at it?


18:59:12 Exception occurred:

18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap

18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))

18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField

18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, if 
you want to report the issue to the developers.

18:59:12 Please also report this if it was a user error, so that a better error 
message can be provided next time.

18:59:12 A bug report can be filed in the tracker at 
. Thanks!

18:59:12 Makefile:71: recipe for target 'html' failed

18:59:12 make[2]: *** [html] Error 2

18:59:12 make[2]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test/doc'

18:59:12 Makefile:237: recipe for target 'doc' failed

18:59:12 make[1]: *** [doc] Error 2

18:59:12 make[1]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test'

18:59:12 Makefile:449: recipe for target 'test-doc' failed

18:59:12 make: *** [test-doc] Error 2

18:59:12 Build step 'Execute shell' marked build as failure

18:59:12 $ ssh-agent -k


Steven


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16011): https://lists.fd.io/g/vpp-dev/message/16011
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Paul Vinciguerra
I have not been able to reproduce the problem from a fresh ubuntu 18.04
container and for me, the build succeeds.

build succeeded, 240 warnings.
The HTML pages are in ../../build-root/build-test/doc/html.
make[2]: Leaving directory '/vpp/test/doc'
If someone can send me the error log:
The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log
I will gladly look into it.

Paul

On Mon, Apr 6, 2020 at 11:15 AM Paul Vinciguerra via lists.fd.io  wrote:

> Andrew submitted a changeset that backs out the updated Sphinx package.  I
> am building the target 'test-doc' to try to learn the root cause.
>
> On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io  cisco@lists.fd.io> wrote:
>
>> Folks,
>>
>>
>>
>> It looks like jobs for all branches, 19.08, 20.01, and master, are
>> failing due to this inspect.py error. Could somebody who is familiar with
>> the issue please take a look at it?
>>
>>
>>
>> 18:59:12 Exception occurred:
>>
>> 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
>>
>> 18:59:12 raise ValueError('wrapper loop when unwrapping 
>> {!r}'.format(f))
>>
>> 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
>>
>> 18:59:12 The full traceback has been saved in 
>> /tmp/sphinx-err-o2xo4j0j.log, if you want to report the issue to the 
>> developers.
>>
>> 18:59:12 Please also report this if it was a user error, so that a better 
>> error message can be provided next time.
>>
>> 18:59:12 A bug report can be filed in the tracker at 
>> . Thanks!
>>
>> 18:59:12 Makefile:71: recipe for target 'html' failed
>>
>> 18:59:12 make[2]: *** [html] Error 2
>>
>> 18:59:12 make[2]: Leaving directory 
>> '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
>>
>> 18:59:12 Makefile:237: recipe for target 'doc' failed
>>
>> 18:59:12 make[1]: *** [doc] Error 2
>>
>> 18:59:12 make[1]: Leaving directory 
>> '/w/workspace/vpp-make-test-docs-verify-1908/test'
>>
>> 18:59:12 Makefile:449: recipe for target 'test-doc' failed
>>
>> 18:59:12 make: *** [test-doc] Error 2
>>
>> 18:59:12 Build step 'Execute shell' marked build as failure
>>
>> 18:59:12 $ ssh-agent -k
>>
>>
>>
>>
>>
>> Steven
>>
>> 
>
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16010): https://lists.fd.io/g/vpp-dev/message/16010
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] n_vectors...

2020-04-06 Thread Burt Silverman
> If you imagine yourself in the position of someone who is a newcomer
> starting to use and learn about VPP. Perhaps someone with an
> engineering background who has an understanding of the "vector" concept
> from linear algebra courses and so on.

Speaking as someone who knows very little about the code: I do know
that vectors in C++ are not the same as vectors in linear algebra. Linear
algebra vectors are more like a valarray in C++, although I understand that
valarray is designed for high-performance floating point, so there is
not a direct match to linear algebra vectors either.

A vector in VPP's infrastructure is analogous to a vector in C++. I assume
the original VPP designer was more comfortable writing high performance
code in C rather than C++, so he wrote his own library in C.

Also, I think years ago I looked at the header file in VPP that defines
vectors, and for about 20 minutes I wondered why they were not like linear
algebra vectors. Luckily I had a C++ textbook nearby and opened it up and
said "oh, now I understand."

I hope that answers part of your concern, Elias, although like I say, I
have not studied much of the code, so I imagine there are more questions
you would like answers for.

Burt
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16009): https://lists.fd.io/g/vpp-dev/message/16009
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Paul Vinciguerra
Andrew submitted a changeset that backs out the updated Sphinx package.  I
am building the target 'test-doc' to try to learn the root cause.

On Mon, Apr 6, 2020 at 11:03 AM steven luong via lists.fd.io  wrote:

> Folks,
>
>
>
> It looks like jobs for all branches, 19.08, 20.01, and master, are failing
> due to this inspect.py error. Could somebody who is familiar with the issue
> please take a look at it?
>
>
>
> 18:59:12 Exception occurred:
>
> 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
>
> 18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
>
> 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
>
> 18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, 
> if you want to report the issue to the developers.
>
> 18:59:12 Please also report this if it was a user error, so that a better 
> error message can be provided next time.
>
> 18:59:12 A bug report can be filed in the tracker at 
> . Thanks!
>
> 18:59:12 Makefile:71: recipe for target 'html' failed
>
> 18:59:12 make[2]: *** [html] Error 2
>
> 18:59:12 make[2]: Leaving directory 
> '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
>
> 18:59:12 Makefile:237: recipe for target 'doc' failed
>
> 18:59:12 make[1]: *** [doc] Error 2
>
> 18:59:12 make[1]: Leaving directory 
> '/w/workspace/vpp-make-test-docs-verify-1908/test'
>
> 18:59:12 Makefile:449: recipe for target 'test-doc' failed
>
> 18:59:12 make: *** [test-doc] Error 2
>
> 18:59:12 Build step 'Execute shell' marked build as failure
>
> 18:59:12 $ ssh-agent -k
>
>
>
>
>
> Steven
> 
>
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16008): https://lists.fd.io/g/vpp-dev/message/16008
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Andrew Yourtchenko
Correction: just the *rebase* on master should be fine since it by itself will 
trigger a recheck.

--a

> On 6 Apr 2020, at 17:09, Andrew  Yourtchenko  wrote:
> 
> Sphinx upgraded itself under the hood last night from 2.4.4 to a
> (crashing) version 3.0.0.
> 
> I made a pin on master, so the master should be ok now - rebase and recheck 
> please, and let me know if it works!
> 
> Will do the same on the other two branches later today if we are all happy 
> about the fix on master...
> 
> --a
> 
>>> On 6 Apr 2020, at 17:03, steven luong via lists.fd.io 
>>>  wrote:
>>> 
>> 
>> Folks,
>>
>> It looks like jobs for all branches, 19.08, 20.01, and master, are failing 
>> due to this inspect.py error. Could somebody who is familiar with the issue 
>> please take a look at it?
>>
>> 18:59:12 Exception occurred:
>> 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
>> 18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
>> 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
>> 18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, 
>> if you want to report the issue to the developers.
>> 18:59:12 Please also report this if it was a user error, so that a better 
>> error message can be provided next time.
>> 18:59:12 A bug report can be filed in the tracker at 
>> . Thanks!
>> 18:59:12 Makefile:71: recipe for target 'html' failed
>> 18:59:12 make[2]: *** [html] Error 2
>> 18:59:12 make[2]: Leaving directory 
>> '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
>> 18:59:12 Makefile:237: recipe for target 'doc' failed
>> 18:59:12 make[1]: *** [doc] Error 2
>> 18:59:12 make[1]: Leaving directory 
>> '/w/workspace/vpp-make-test-docs-verify-1908/test'
>> 18:59:12 Makefile:449: recipe for target 'test-doc' failed
>> 18:59:12 make: *** [test-doc] Error 2
>> 18:59:12 Build step 'Execute shell' marked build as failure
>> 18:59:12 $ ssh-agent -k
>>
>>
>> Steven
>> 
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16007): https://lists.fd.io/g/vpp-dev/message/16007
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread Andrew Yourtchenko
Sphinx upgraded itself under the hood last night from 2.4.4 to a (crashing)
version 3.0.0.

I made a pin on master, so the master should be ok now - rebase and recheck 
please, and let me know if it works!
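For reference, such a pin is typically expressed as an exact version in the test framework's pip requirements file. The fragment below is an assumed sketch based on this thread (file contents and comment are not the actual change):

```text
# Keep Sphinx on the last known-good release; the 3.0.0 release that
# pip pulled in automatically breaks "make test-doc".
sphinx==2.4.4
```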

Will do the same on the other two branches later today if we are all happy 
about the fix on master...

--a

> On 6 Apr 2020, at 17:03, steven luong via lists.fd.io 
>  wrote:
> 
> 
> Folks,
>
> It looks like jobs for all branches, 19.08, 20.01, and master, are failing 
> due to this inspect.py error. Could somebody who is familiar with the issue 
> please take a look at it?
>
> 18:59:12 Exception occurred:
> 18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap
> 18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
> 18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField
> 18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, 
> if you want to report the issue to the developers.
> 18:59:12 Please also report this if it was a user error, so that a better 
> error message can be provided next time.
> 18:59:12 A bug report can be filed in the tracker at 
> . Thanks!
> 18:59:12 Makefile:71: recipe for target 'html' failed
> 18:59:12 make[2]: *** [html] Error 2
> 18:59:12 make[2]: Leaving directory 
> '/w/workspace/vpp-make-test-docs-verify-1908/test/doc'
> 18:59:12 Makefile:237: recipe for target 'doc' failed
> 18:59:12 make[1]: *** [doc] Error 2
> 18:59:12 make[1]: Leaving directory 
> '/w/workspace/vpp-make-test-docs-verify-1908/test'
> 18:59:12 Makefile:449: recipe for target 'test-doc' failed
> 18:59:12 make: *** [test-doc] Error 2
> 18:59:12 Build step 'Execute shell' marked build as failure
> 18:59:12 $ ssh-agent -k
>
>
> Steven
> 
-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#16006): https://lists.fd.io/g/vpp-dev/message/16006
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp 19.08 failed to load in CENTOS 7

2020-04-06 Thread Benoit Ganne (bganne) via lists.fd.io
Hi,

As it is working with uio but not vfio, are you sure IOMMU is enabled?
You can try the following before running VPP:
~# echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

Ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of
> amitmulay...@gmail.com
> Sent: lundi 6 avril 2020 12:07
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] vpp 19.08 failed to load in CENTOS 7
> 
> [Edited Message Follows]
> 
> 
> Hi all,
> I'm using VPP version 19.08 on CentOS 7, kernel 4.4.
> When I want to use the vfio-pci driver I get:
> 
> error allocating rte services array
> EAL: FATAL: rte_service_init() failed
> 
> 
> and VPP fails to load. I want to use vfio-pci because I'm working on an i7
> CPU with IOMMU.
> The vfio-pci driver is loaded and can be seen in lsmod.
> My vm.nr_hugepages = 1024. Is there anything I'm doing wrong regarding
> DPDK or something?
> 
> Please advise if you can.
> Thanks!!
> 
> 
> vpp# show version
> vpp v19.08.1-release built by root on localhost.localdomain at Sun Jan 26
> 10:08:45 EST 2020
> 
> vpp# show dpdk version
> DPDK Version: DPDK 19.05.0
> DPDK EAL init args:   -c 2 -n 4 --in-memory --vdev crypto_aesni_mb0 --
> file-prefix vpp --master-lcore 1
> 
> 
> [root@localhost ~]# uname -a
> Linux localhost.localdomain 4.4.211-1.el7.elrepo.x86_64 #1 SMP Thu Jan 23
> 08:11:08 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
> [root@localhost ~]#
> 
> [root@localhost device]# cat /etc/centos-release
> CentOS Linux release 7.7.1908 (Core)
> 
> 
> [root@localhost ~]#  sysctl -a | grep hugepages
> 
> vm.hugepages_treat_as_movable = 0
> vm.nr_hugepages = 1024
> vm.nr_hugepages_mempolicy = 1024
> vm.nr_overcommit_hugepages = 0
> 
> [root@localhost ~]# cat /etc/vpp/startup.conf
> 
> unix {
> nodaemon
> log /var/log/vpp/vpp.log
> full-coredump
> cli-listen /run/vpp/cli.sock
> gid vpp
> }
> api-trace {
> on
> }
> api-segment {
> gid vpp
> }
> socksvr {
> default
> }
> dpdk {
> uio-driver vfio-pci
> vdev crypto_aesni_mb0
> dev default {
> num-rx-desc 4096
> num-tx-desc 4096
> }
> #num-mbufs 128000
> socket-mem 0,1024
> no-multi-seg
> no-tx-checksum-offload
> }
> nat {
> translation hash buckets 10240
> translation hash memory 268435456
> user hash buckets 1280
> user hash memory 134217728
> max translations per user 1000
> }
> 
> 
> [root@localhost ~]# /usr/bin/vpp -c /etc/vpp/startup.conf
> vlib_plugin_early_init:361: plugin path /usr/lib/x86_64-linux-
> gnu/vpp_plugins:/usr/lib/vpp_plugins
> load_one_plugin:189: Loaded plugin: abf_plugin.so (Access Control List
> (ACL) Based Forwarding)
> load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists
> (ACL))
> load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual
> Function (AVF) Device Driver)
> load_one_plugin:189: Loaded plugin: builtinurl_plugin.so (vpp built-in URL
> support)
> load_one_plugin:189: Loaded plugin: cdp_plugin.so (Cisco Discovery
> Protocol (CDP))
> load_one_plugin:189: Loaded plugin: crypto_ia32_plugin.so (Intel IA32
> Software Crypto Engine)
> load_one_plugin:189: Loaded plugin: crypto_ipsecmb_plugin.so (Intel IPSEC
> Multi-buffer Crypto Engine)
> load_one_plugin:189: Loaded plugin: crypto_openssl_plugin.so (OpenSSL
> Crypto Engine)
> load_one_plugin:189: Loaded plugin: ct6_plugin.so (IPv6 Connection
> Tracker)
> load_one_plugin:189: Loaded plugin: dhcp_plugin.so (Dynamic Host
> Configuration Protocol (DHCP))
> load_one_plugin:189: Loaded plugin: dns_plugin.so (Simple DNS name
> resolver)
> load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development
> Kit (DPDK))
> load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
> load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy
> (GBP))
> load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GPRS Tunnelling
> Protocol, User Data (GTPv1-U))
> load_one_plugin:189: Loaded plugin: hs_apps_plugin.so (Host Stack
> Applications)
> load_one_plugin:189: Loaded plugin: http_static_plugin.so (HTTP Static
> Server)
> load_one_plugin:189: Loaded plugin: igmp_plugin.so (Internet Group
> Management Protocol (IGMP))
> load_one_plugin:189: Loaded plugin: ikev2_plugin.so (Internet Key Exchange
> (IKEv2) Protocol)
> load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier Locator
> Addressing (ILA) for IPv6)
> load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound Operations,
> Administration, and Maintenance (OAM))
> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:189: Loaded plugin: l2e_plugin.so (Layer 2 (L2) Emulation)
> load_one_plugin:189: Loaded plugin: l3xc_plugin.so (L3 Cross-Connect
> (L3XC))
> load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation
> Control Protocol (LACP))
> load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer (LB))
> load_one_plugin:189: Loaded plugin: mactime_plugin.so (Time-based MAC
> 

[vpp-dev] Jobs are failing due to inspect.py

2020-04-06 Thread steven luong via lists.fd.io
Folks,

It looks like jobs for all branches, 19.08, 20.01, and master, are failing due 
to this inspect.py error. Could somebody who is familiar with the issue please 
take a look at it?


18:59:12 Exception occurred:

18:59:12   File "/usr/lib/python3.6/inspect.py", line 516, in unwrap

18:59:12 raise ValueError('wrapper loop when unwrapping {!r}'.format(f))

18:59:12 ValueError: wrapper loop when unwrapping scapy.fields.BitEnumField

18:59:12 The full traceback has been saved in /tmp/sphinx-err-o2xo4j0j.log, if 
you want to report the issue to the developers.

18:59:12 Please also report this if it was a user error, so that a better error 
message can be provided next time.

18:59:12 A bug report can be filed in the tracker at 
. Thanks!

18:59:12 Makefile:71: recipe for target 'html' failed

18:59:12 make[2]: *** [html] Error 2

18:59:12 make[2]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test/doc'

18:59:12 Makefile:237: recipe for target 'doc' failed

18:59:12 make[1]: *** [doc] Error 2

18:59:12 make[1]: Leaving directory 
'/w/workspace/vpp-make-test-docs-verify-1908/test'

18:59:12 Makefile:449: recipe for target 'test-doc' failed

18:59:12 make: *** [test-doc] Error 2

18:59:12 Build step 'Execute shell' marked build as failure

18:59:12 $ ssh-agent -k


Steven
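For context, the "wrapper loop" ValueError is raised by Python's inspect.unwrap when the chain of `__wrapped__` attributes (set up by functools.wraps and similar decorators) loops back on itself, which is apparently what Sphinx autodoc hits when inspecting scapy.fields.BitEnumField. A minimal, scapy-free reproduction:

```python
import inspect

# inspect.unwrap follows the __wrapped__ attribute chain; if the chain
# revisits an object it has already seen, it raises
# "wrapper loop when unwrapping ...".
def f():
    pass

f.__wrapped__ = f  # create a self-referential wrapper chain

try:
    inspect.unwrap(f)
except ValueError as e:
    print(e)  # "wrapper loop when unwrapping <function f at 0x...>"
```

So the failure is in how the object being documented exposes `__wrapped__`, not in Sphinx itself; pinning or patching the offending scapy version is the usual workaround.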
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16004): https://lists.fd.io/g/vpp-dev/message/16004
Mute This Topic: https://lists.fd.io/mt/72813354/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP nat ipfix logging problem, need to use thread-specific vlib_main_t?

2020-04-06 Thread Paul Vinciguerra
Thanks Dave, Neale.

This is great information.

On Mon, Apr 6, 2020 at 6:14 AM Neale Ranns (nranns) 
wrote:

>
>
> In the test harness you can inject onto a given worker, e.g. see
> IpsecTun6HandoffTests.
>
>
>
> /neale
>
>
>
> *From: * on behalf of Paul Vinciguerra <
> pvi...@vinciconsulting.com>
> *Date: *Sunday 5 April 2020 at 17:24
> *To: *"Dave Barach (dbarach)" 
> *Cc: *Elias Rudberg , "vpp-dev@lists.fd.io" <
> vpp-dev@lists.fd.io>
> *Subject: *Re: [vpp-dev] VPP nat ipfix logging problem, need to use
> thread-specific vlib_main_t?
>
>
>
> How can we test scenarios like this?
>
> 'set interface rx-placement' doesn't support pg interfaces.
>
> DBGvpp# set interface rx-placement TenGigabitEthernet5/0/0 worker 2
> DBGvpp# set interface rx-placement pg0 worker 2
> set interface rx-placement: not found
>
> DBGvpp#
>
> Is there another command to bind a pg interface to a worker thread?
>
>
>
> On Sun, Apr 5, 2020 at 8:08 AM Dave Barach via lists.fd.io wrote:
>
> If you have the thread index handy, that's OK. Otherwise, use
> vlib_get_main() which grabs the thread index from thread local storage.
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Elias Rudberg
> Sent: Sunday, April 5, 2020 4:58 AM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP nat ipfix logging problem, need to use
> thread-specific vlib_main_t?
>
> Hello VPP experts,
>
> We have been using VPP for NAT44 for a while and it has been working fine,
> but a few days ago when we tried turning on nat ipfix logging, vpp crashed.
> It turned out that the problem went away if we used only a single thread,
> so it seemed related to how threading was handled in the ipfix logging
> code. The crash happened in different ways on different runs but often
> seemed related to the snat_ipfix_send() function in
> plugins/nat/nat_ipfix_logging.c.
>
> Having looked at the code in nat_ipfix_logging.c I have the following
> theory about what goes wrong (I might have misunderstood something, if so
> please correct me):
>
> In the the snat_ipfix_send() function, a vlib_main_t data structure is
> used, a pointer to it is fetched in the following way:
>
>vlib_main_t *vm = frm->vlib_main;
>
> So the frm->vlib_main pointer comes from "frm" which has been set to
> flow_report_main which is a global data structure from vnet/ipfix-
> export/flow_report.c that as far as I can tell only exists once in memory
> (not once per thread). This means that different threads calling the
> snat_ipfix_send() function are using the same vlib_main_t data structure.
> That is not how it should be, I think, instead each thread should be using
> its own thread-specific vlib_main_t data structure.
>
> A suggestion for how to fix this is to replace the line
>
>vlib_main_t *vm = frm->vlib_main;
>
> with the following line
>
>vlib_main_t *vm = vlib_mains[thread_index];
>
> in all places where worker threads are using such a vlib_main_t pointer.
> Using vlib_mains[thread_index] means that we are picking the
> thread-specific vlib_main_t data structure for the current thread, instead
> of all threads using the same vlib_main_t. I pushed such a change to
> gerrit, here: https://gerrit.fd.io/r/c/vpp/+/26359
>
> That fix seems to solve the issue in my tests, vpp does not crash anymore
> after the change. Please have a look at it and let me know if this seems
> reasonable or if I have misunderstood something.
>
> Best regards,
> Elias
>
>
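The fix Elias describes follows a general pattern: each worker selects its own state object by thread index instead of all workers sharing one global, which is exactly what `vlib_mains[thread_index]` does versus the single `frm->vlib_main`. Outside VPP the idea can be sketched in plain Python; the names below are illustrative only, not VPP APIs:

```python
import threading

NUM_WORKERS = 4

# One state object per worker, analogous to vlib_mains[thread_index].
# A single shared object (analogous to frm->vlib_main) would instead be
# mutated concurrently by every worker.
per_thread_state = [{"seq": 0} for _ in range(NUM_WORKERS)]

def worker(thread_index, iterations):
    # Thread-specific state: no locking needed, no cross-thread clobbering.
    state = per_thread_state[thread_index]
    for _ in range(iterations):
        state["seq"] += 1

threads = [threading.Thread(target=worker, args=(i, 10000))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print([s["seq"] for s in per_thread_state])  # [10000, 10000, 10000, 10000]
```

With a shared object, the per-worker sequence counters (and, in VPP's case, buffer and frame bookkeeping inside vlib_main_t) would race, which matches the crashes appearing only with multiple threads.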
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16003): https://lists.fd.io/g/vpp-dev/message/16003
Mute This Topic: https://lists.fd.io/mt/72786912/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2020-04-06 14:00:24 UTC

2020-04-06 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues are 5
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16002): https://lists.fd.io/g/vpp-dev/message/16002
Mute This Topic: https://lists.fd.io/mt/72811953/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] vpp 19.08 failed to load in CENTOS 7

2020-04-06 Thread amitmulayoff
[Edited Message Follows]

Hi all,
I'm using VPP version 19.08 on CentOS 7 with kernel 4.4.
When I try to use the vfio-pci driver I get:
error allocating rte services array
EAL: FATAL: rte_service_init() failed

and VPP fails to load. I want to use vfio-pci because I'm working on an i7 CPU
with IOMMU.
The vfio-pci driver is loaded and can be seen in lsmod.
My vm.nr_hugepages = 1024. Is there anything I'm doing wrong regarding DPDK or
something else?

Please advise if anyone can help.
Thanks!!

vpp# show version
vpp v19.08.1-release built by root on localhost.localdomain at Sun Jan 26 
10:08:45 EST 2020

vpp# show dpdk version
DPDK Version:             DPDK 19.05.0
DPDK EAL init args:       -c 2 -n 4 --in-memory --vdev crypto_aesni_mb0 
--file-prefix vpp --master-lcore 1

[root@localhost ~]# uname -a
Linux localhost.localdomain 4.4.211-1.el7.elrepo.x86_64 #1 SMP Thu Jan 23 
08:11:08 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#

[root@localhost device]# cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost ~]# sysctl -a | grep hugepages
vm.hugepages_treat_as_movable = 0
vm.nr_hugepages = 1024
vm.nr_hugepages_mempolicy = 1024
vm.nr_overcommit_hugepages = 0

[root@localhost ~]# cat /etc/vpp/startup.conf
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
}
api-trace {
on
}
api-segment {
gid vpp
}
socksvr {
default
}
dpdk {
uio-driver vfio-pci
vdev crypto_aesni_mb0
dev default {
num-rx-desc 4096
num-tx-desc 4096
}
#num-mbufs 128000
socket-mem 0,1024
no-multi-seg
no-tx-checksum-offload
}
nat {
translation hash buckets 10240
translation hash memory 268435456
user hash buckets 1280
user hash memory 134217728
max translations per user 1000
}

[root@localhost ~]# /usr/bin/vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:361: plugin path 
/usr/lib/x86_64-linux-gnu/vpp_plugins:/usr/lib/vpp_plugins
load_one_plugin:189: Loaded plugin: abf_plugin.so (Access Control List (ACL) 
Based Forwarding)
load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists (ACL))
load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual 
Function (AVF) Device Driver)
load_one_plugin:189: Loaded plugin: builtinurl_plugin.so (vpp built-in URL 
support)
load_one_plugin:189: Loaded plugin: cdp_plugin.so (Cisco Discovery Protocol 
(CDP))
load_one_plugin:189: Loaded plugin: crypto_ia32_plugin.so (Intel IA32 Software 
Crypto Engine)
load_one_plugin:189: Loaded plugin: crypto_ipsecmb_plugin.so (Intel IPSEC 
Multi-buffer Crypto Engine)
load_one_plugin:189: Loaded plugin: crypto_openssl_plugin.so (OpenSSL Crypto 
Engine)
load_one_plugin:189: Loaded plugin: ct6_plugin.so (IPv6 Connection Tracker)
load_one_plugin:189: Loaded plugin: dhcp_plugin.so (Dynamic Host Configuration 
Protocol (DHCP))
load_one_plugin:189: Loaded plugin: dns_plugin.so (Simple DNS name resolver)
load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy (GBP))
load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GPRS Tunnelling Protocol, 
User Data (GTPv1-U))
load_one_plugin:189: Loaded plugin: hs_apps_plugin.so (Host Stack Applications)
load_one_plugin:189: Loaded plugin: http_static_plugin.so (HTTP Static Server)
load_one_plugin:189: Loaded plugin: igmp_plugin.so (Internet Group Management 
Protocol (IGMP))
load_one_plugin:189: Loaded plugin: ikev2_plugin.so (Internet Key Exchange 
(IKEv2) Protocol)
load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier Locator 
Addressing (ILA) for IPv6)
load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound Operations, 
Administration, and Maintenance (OAM))
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:189: Loaded plugin: l2e_plugin.so (Layer 2 (L2) Emulation)
load_one_plugin:189: Loaded plugin: l3xc_plugin.so (L3 Cross-Connect (L3XC))
load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control 
Protocol (LACP))
load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer (LB))
load_one_plugin:189: Loaded plugin: mactime_plugin.so (Time-based MAC Source 
Address Filter)
load_one_plugin:189: Loaded plugin: map_plugin.so (Mapping of Address and Port 
(MAP))
load_one_plugin:189: Loaded plugin: mdata_plugin.so (Buffer metadata change 
tracker.)
load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(memif) -- Experimental)
load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation 
(NAT))
load_one_plugin:189: Loaded plugin: nsh_plugin.so (Network Service Header (NSH))
load_one_plugin:189: Loaded plugin: nsim_plugin.so (Network Delay Simulator)
load_one_plugin:117: Plugin disabled (default): oddbuf_plugin.so
load_one_plugin:189: Loaded plugin: perfmon_plugin.so (Performance Monitor)
load_one_plugin:189: Loaded plugin: ping_plugin.so (Ping (ping))
load_one_plugin:189: 

Re: [vpp-dev] [csit-report] Regressions as of 2020-04-03 14:00:14 UTC #email

2020-04-06 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via lists.fd.io
> 100ge2p1cx556a-rdma

No VPP code change.
Up to now, RDMA tests were suffering from "duration stretching".
That is a phenomenon where TRex takes longer to send
the required number of packets, so the device under test
experiences a smaller offered load and the MRR value is inflated.
We have suppressed the phenomenon
by tweaking [0] core usage and max rate (pps) limit.
The new numbers should be reliable now.

Vratko.

[0] https://gerrit.fd.io/r/c/csit/+/26262

-Original Message-
From: csit-rep...@lists.fd.io  On Behalf Of 
nore...@jenkins.fd.io
Sent: Friday, 2020-April-03 18:07
To: Fdio+Csit-Report via Email Integration 
Subject: [csit-report] Regressions as of 2020-04-03 14:00:14 UTC #email

The following regressions occurred in the last trending job runs, listed per
testbed type.



2n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-skx/881, 
VPP version: 20.05-rc0~433-g0c7aa7ab5~b930

No regressions

3n-skx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-3n-skx/862, 
VPP version: 20.05-rc0~433-g0c7aa7ab5~b930

tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6ip6-ip6base-srv6enc1sid-mrr.78b-2t1c-avf-ethip6ip6-ip6base-srv6enc1sid-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr.78b-2t1c-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6ip6-ip6base-srv6enc1sid-mrr.78b-4t2c-avf-ethip6ip6-ip6base-srv6enc1sid-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6proxy-dyn-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr.78b-4t2c-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6ip6-ip6base-srv6enc1sid-mrr.78b-8t4c-avf-ethip6ip6-ip6base-srv6enc1sid-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6enc2sids-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6enc2sids-nodecaps-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6proxy-masq-mrr
tests.vpp.perf.srv6.25ge2p1xxv710-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr.78b-8t4c-avf-ethip6srhip6-ip6base-srv6proxy-stat-mrr
tests.vpp.perf.vm 
vhost.25ge2p1xxv710-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr.64b-8t4c-avf-dot1q-l2xcbase-eth-2vhostvr1024-1vm-mrr


2n-clx, CSIT build: 
https://jenkins.fd.io/view/csit/job/csit-vpp-perf-mrr-daily-master-2n-clx/282, 
VPP version: 20.05-rc0~456-g57a5a2df5~b953

tests.vpp.perf.container 
memif.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-2t1c-avf-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-4t2c-avf-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-25ge2p1xxv710-avf-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-8t4c-avf-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-25ge2p1xxv710-avf-ethip4-ip4base-eth-2memif-1dcr-mrr.64b-8t4c-avf-ethip4-ip4base-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-25ge2p1xxv710-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-8t4c-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-25ge2p1xxv710-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-8t4c-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-100ge2p1cx556a-rdma-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-2t1c-rdma-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-100ge2p1cx556a-rdma-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-2t1c-rdma-eth-l2bdbasemaclrn-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-100ge2p1cx556a-rdma-eth-l2xcbase-eth-2memif-1dcr-mrr.64b-2t1c-rdma-eth-l2xcbase-eth-2memif-1dcr-mrr
tests.vpp.perf.container 
memif.2n1l-100ge2p1cx556a-rdma-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr.64b-4t2c-rdma-dot1q-l2bdbasemaclrn-eth-2memif-1dcr-mrr

[vpp-dev] vpp 19.08 failed to load in CENTOS 7

2020-04-06 Thread amitmulayoff
[Edited Message Follows]

Hi all,
I'm using VPP version 19.08 on CentOS 7 with kernel 4.4.
When I try to use the vfio-pci driver I get:
error allocating rte services array
EAL: FATAL: rte_service_init() failed

and VPP fails to load, but if I use uio_pci_generic then VPP is OK. I want to
use vfio-pci because I'm working on an i7 CPU with IOMMU.
The vfio-pci driver is loaded and can be seen in lsmod. My vm.nr_hugepages =
1024. Is there anything I'm doing wrong regarding DPDK or something else?

Please advise if anyone can help.
Thanks!!

vpp# show version
vpp v19.08.1-release built by root on localhost.localdomain at Sun Jan 26 
10:08:45 EST 2020

vpp# show dpdk version
DPDK Version:             DPDK 19.05.0
DPDK EAL init args:       -c 2 -n 4 --in-memory --vdev crypto_aesni_mb0 
--file-prefix vpp --master-lcore 1

[root@localhost ~]# uname -a
Linux localhost.localdomain 4.4.211-1.el7.elrepo.x86_64 #1 SMP Thu Jan 23 
08:11:08 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#

[root@localhost device]# cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)

[root@localhost ~]# sysctl -a | grep hugepages
vm.hugepages_treat_as_movable = 0
vm.nr_hugepages = 1024
vm.nr_hugepages_mempolicy = 1024
vm.nr_overcommit_hugepages = 0

[root@localhost ~]# cat /etc/vpp/startup.conf
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
}
api-trace {
on
}
api-segment {
gid vpp
}
socksvr {
default
}
dpdk {
uio-driver vfio-pci
vdev crypto_aesni_mb0
dev default {
num-rx-desc 4096
num-tx-desc 4096
}
#num-mbufs 128000
socket-mem 0,1024
no-multi-seg
no-tx-checksum-offload
}
nat {
translation hash buckets 10240
translation hash memory 268435456
user hash buckets 1280
user hash memory 134217728
max translations per user 1000
}

[root@localhost ~]# /usr/bin/vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:361: plugin path 
/usr/lib/x86_64-linux-gnu/vpp_plugins:/usr/lib/vpp_plugins
load_one_plugin:189: Loaded plugin: abf_plugin.so (Access Control List (ACL) 
Based Forwarding)
load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists (ACL))
load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual 
Function (AVF) Device Driver)
load_one_plugin:189: Loaded plugin: builtinurl_plugin.so (vpp built-in URL 
support)
load_one_plugin:189: Loaded plugin: cdp_plugin.so (Cisco Discovery Protocol 
(CDP))
load_one_plugin:189: Loaded plugin: crypto_ia32_plugin.so (Intel IA32 Software 
Crypto Engine)
load_one_plugin:189: Loaded plugin: crypto_ipsecmb_plugin.so (Intel IPSEC 
Multi-buffer Crypto Engine)
load_one_plugin:189: Loaded plugin: crypto_openssl_plugin.so (OpenSSL Crypto 
Engine)
load_one_plugin:189: Loaded plugin: ct6_plugin.so (IPv6 Connection Tracker)
load_one_plugin:189: Loaded plugin: dhcp_plugin.so (Dynamic Host Configuration 
Protocol (DHCP))
load_one_plugin:189: Loaded plugin: dns_plugin.so (Simple DNS name resolver)
load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy (GBP))
load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GPRS Tunnelling Protocol, 
User Data (GTPv1-U))
load_one_plugin:189: Loaded plugin: hs_apps_plugin.so (Host Stack Applications)
load_one_plugin:189: Loaded plugin: http_static_plugin.so (HTTP Static Server)
load_one_plugin:189: Loaded plugin: igmp_plugin.so (Internet Group Management 
Protocol (IGMP))
load_one_plugin:189: Loaded plugin: ikev2_plugin.so (Internet Key Exchange 
(IKEv2) Protocol)
load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier Locator 
Addressing (ILA) for IPv6)
load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound Operations, 
Administration, and Maintenance (OAM))
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:189: Loaded plugin: l2e_plugin.so (Layer 2 (L2) Emulation)
load_one_plugin:189: Loaded plugin: l3xc_plugin.so (L3 Cross-Connect (L3XC))
load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control 
Protocol (LACP))
load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer (LB))
load_one_plugin:189: Loaded plugin: mactime_plugin.so (Time-based MAC Source 
Address Filter)
load_one_plugin:189: Loaded plugin: map_plugin.so (Mapping of Address and Port 
(MAP))
load_one_plugin:189: Loaded plugin: mdata_plugin.so (Buffer metadata change 
tracker.)
load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(memif) -- Experimental)
load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation 
(NAT))
load_one_plugin:189: Loaded plugin: nsh_plugin.so (Network Service Header (NSH))
load_one_plugin:189: Loaded plugin: nsim_plugin.so (Network Delay Simulator)
load_one_plugin:117: Plugin disabled (default): oddbuf_plugin.so
load_one_plugin:189: Loaded plugin: perfmon_plugin.so (Performance Monitor)
load_one_plugin:189: Loaded plugin: 

Re: [vpp-dev] n_vectors...

2020-04-06 Thread Elias Rudberg
Hi Dave,

Thanks for your answer. I understand now that renaming things in
existing code involves many difficulties and problems which I did not
realize before.

> P.S. mapping "n_vectors" to whatever it means to you seems like a
> pretty minimal entry barrier. It's not like the code is inconsistent.

Here however I disagree: I think it can be a significant entry barrier.

Imagine yourself in the position of a newcomer starting to use and
learn about VPP, perhaps someone with an engineering background who
understands the "vector" concept from linear algebra courses and so
on. This person has read about the
ideas of how VPP works for example here 
https://wiki.fd.io/view/VPP/What_is_VPP%3F where it says "the VPP
platform grabs all available packets from RX rings to form a vector of
packets" which seems fine according to the usual meaning of the word
"vector". Up to that point everything is fine and someone familiar with
the vector concept will feel that their knowledge about vectors can be
useful when working with VPP. But at the moment when this person starts
looking at the code and sees "n_vectors" there, that will be confusing.
Making the assumption that the VPP source code uses its own definition
of what a "vector" is, is actually a pretty big step to make.

Of course it's not the first time a word has different meanings
depending on the context, but in this case the concept of a "vector" is
quite well established and also seems to be used according to its usual
meaning in VPP documentation. Then it becomes confusing when the word
apparently has a different meaning in the source code.

So, while you are probably right that it's not practical to rename
things like that in the existing code, I still think this issue can be
a significant obstacle for new people coming in.

Anyway, thanks again for explaining the situation, for me personally
this helped my understanding a lot.

Best regards,
Elias

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16000): https://lists.fd.io/g/vpp-dev/message/16000
Mute This Topic: https://lists.fd.io/mt/72667316/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] CSIT - AVF interface create crash [VPP-1858]

2020-04-06 Thread Peter Mikus via lists.fd.io
Hello vpp-dev,

We found issues running CSIT AVF tests. I opened VPP-1858 for tracking.

In case of any questions please contact @csit-dev.

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15999): https://lists.fd.io/g/vpp-dev/message/15999
Mute This Topic: https://lists.fd.io/mt/72809079/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] no TX on af-packet interface, was: ARP answers on wrong interface?

2020-04-06 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Andreas,

> The setup this happens on has not been changed, except for VPP and kernel
> upgrades. I can't say which upgrade did trigger this problem, but it feels
> like a kernel version thing at the moment (5.0, 5.3 and 5.6 tested, all
> exhibit this, 4.x might not - need to retest).

Note that it works perfectly fine for me in a VM with kernel 5.3.0-1018-azure 
#19~18.04.1-Ubuntu on Ubuntu 18.04 using veth.
Maybe try to bisect kernel commits to see what triggers this behavior on your 
system?

> There is nothing that messes with the interface and the interface is up
> all the time. The problem occurs randomly, but only when load is sent
> through VPP. Once the interface is messed up, it will stay that way and
> not TX any traffic any more.

Best
ben

> 
>   > It seems that somehow the interface is messed up. Anything I can
> do to
>   > debug this further?
>   >
>   > VPP is current from master branch, OS is Ubuntu 18.04.4 with the
> 5.3.0-42-
>   > generic kernel.
>   >
>   > Andreas
>   >
>   > On Fri., 3 Apr. 2020 at 11:03, Andreas Schultz via lists.fd.io wrote:
>   >
>   >
>   >   Hi again,
>   >
>   >   forget my first description of the problem. After using a
> VPP node
>   > handoff PCAP trace I've discovered that ARP answers are only send
> on the
>   > correct interface.
>   >
>   >   Or at least VPP tries. The node trace shows that VPP tries
> to send
>   > the ARP answer (it hits host-ens224-tx), but the packet is not
> seen by a
>   > tcpdump/dumpcap on the raw host interface. It seems that somehow
> the af-
>   > packet interface is screwed.
>   >
>   >   Andreas
>   >
>   >   On Fri., 3 Apr. 2020 at 10:46, Andreas Schultz via lists.fd.io wrote:
>   >
>   >
>   >   Hi,
>   >
>   >   I have two interfaces that are connected to the same
> layer L2
>   > network. Both interfaces have IPs from the same /24 IP range, but
> they are
>   > in different FIBs.
>   >
>   >   My problem is now that ARP are answered on the wrong
> interface
>   > (with the correct MAC). With the attached config a ARP request for
>   > 172.20.16.105 (interface ens224) is answered on interface ens161.
>   >
>   >   In itself the answer on the wrong interface would
> not be a big
>   > problem, but the underlying switch is confused by seeing the MAC
> address
>   > on the wrong interface.
>   >
>   >   I could understand  this behavior if both
> interfaces/IP where
>   > in the same FIB, but they are not!
>   >
>   >   It seems to me that the ARP responder node should
> filter by
>   > src/dst FIB index? Is there an option or setting to enable that?
>   >
>   >   Regards,
>   >   Andreas
>   >
>   >   Config:
>   >
>   >   ip table add 1
>   >   ip table add 2
>   >
>   >   create host-interface name ens224
>   >   set interface mac address host-ens224 00:0c:29:46:1f:53
>   >   set interface mtu 1500 host-ens224
>   >   set interface ip table host-ens224 1
>   >   set interface ip address host-ens224 172.20.16.105/24
>   >   set interface state host-ens224 up
>   >
>   >   create host-interface name ens161
>   >   set interface mac address host-ens161 00:50:56:86:ed:f9
>   >   set interface mtu 1500 host-ens161
>   >   set interface ip table host-ens161 2
>   >   set interface ip address host-ens161 172.20.16.106/24
>   >   set interface state host-ens161 up
>   >
>   >
>   >
>   >
>   >   --
>   >
>   >
>   >   Andreas Schultz
>   >
>   >
>   >
>   >
>   >
>   >   --
>   >
>   >
>   >   Andreas Schultz
>   >
>   >   --
>   >
>   >   Principal Engineer
>   >
>   >   t: +49 391 819099-224
>   >
>   >
>   >
>   >   --- enabling your networks -
> 
>   > 
>   >
>   > Travelping GmbH
>   > Roentgenstraße 13
>   > 39108 Magdeburg
>   > 

Re: [vpp-dev] VPP nat ipfix logging problem, need to use thread-specific vlib_main_t?

2020-04-06 Thread Neale Ranns via lists.fd.io

In the test harness you can inject onto a given worker, e.g. see 
IpsecTun6HandoffTests.

/neale

From:  on behalf of Paul Vinciguerra 

Date: Sunday 5 April 2020 at 17:24
To: "Dave Barach (dbarach)" 
Cc: Elias Rudberg , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] VPP nat ipfix logging problem, need to use 
thread-specific vlib_main_t?

How can we test scenarios like this?
'set interface rx-placement' doesn't support pg interfaces.
DBGvpp# set interface rx-placement TenGigabitEthernet5/0/0 worker 2
DBGvpp# set interface rx-placement pg0 worker 2
set interface rx-placement: not found
DBGvpp#
Is there another command to bind a pg interface to a worker thread?

On Sun, Apr 5, 2020 at 8:08 AM Dave Barach via lists.fd.io 
mailto:cisco@lists.fd.io>> wrote:
If you have the thread index handy, that's OK. Otherwise, use vlib_get_main() 
which grabs the thread index from thread local storage.

-Original Message-
From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of Elias Rudberg
Sent: Sunday, April 5, 2020 4:58 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP nat ipfix logging problem, need to use thread-specific 
vlib_main_t?

Hello VPP experts,

We have been using VPP for NAT44 for a while and it has been working fine, but 
a few days ago when we tried turning on nat ipfix logging, vpp crashed. It 
turned out that the problem went away if we used only a single thread, so it 
seemed related to how threading was handled in the ipfix logging code. The 
crash happened in different ways on different runs but often seemed related to 
the snat_ipfix_send() function in plugins/nat/nat_ipfix_logging.c.

Having looked at the code in nat_ipfix_logging.c I have the following theory 
about what goes wrong (I might have misunderstood something, if so please 
correct me):

In the snat_ipfix_send() function, a vlib_main_t data structure is used; a 
pointer to it is fetched in the following way:

   vlib_main_t *vm = frm->vlib_main;

So the frm->vlib_main pointer comes from "frm" which has been set to 
flow_report_main, which is a global data structure from
vnet/ipfix-export/flow_report.c that as far as I can tell only exists once in memory (not 
once per thread). This means that different threads calling the 
snat_ipfix_send() function are using the same vlib_main_t data structure. That 
is not how it should be, I think; instead, each thread should be using its own 
thread-specific vlib_main_t data structure.

A suggestion for how to fix this is to replace the line

   vlib_main_t *vm = frm->vlib_main;

with the following line

   vlib_main_t *vm = vlib_mains[thread_index];

in all places where worker threads are using such a vlib_main_t pointer. Using 
vlib_mains[thread_index] means that we are picking the thread-specific 
vlib_main_t data structure for the current thread, instead of all threads using 
the same vlib_main_t. I pushed such a change to gerrit, here: 
https://gerrit.fd.io/r/c/vpp/+/26359

That fix seems to solve the issue in my tests, vpp does not crash anymore after 
the change. Please have a look at it and let me know if this seems reasonable 
or if I have misunderstood something.

Best regards,
Elias

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15997): https://lists.fd.io/g/vpp-dev/message/15997
Mute This Topic: https://lists.fd.io/mt/72786912/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] vpp 19.08 failed to load in CENTOS 7

2020-04-06 Thread amitmulayoff
Hi all
I'm using VPP version 19.08 with CentOS 7, kernel 4.4.
When I want to use the vfio-pci driver I get:
error allocating rte services array
EAL: FATAL: rte_service_init() failed

and VPP fails to load, but if I use uio_pci_generic then VPP is OK. I want to
use vfio-pci because I'm working on an i7 CPU with IOMMU.
The vfio-pci driver is loaded and can be seen in lsmod; my vm.nr_hugepages =
1024. Is there anything I'm doing wrong regarding DPDK or something?

Please, if someone can advise.
Thanks!!

vpp# *show version*
vpp v19.08.1-release built by root on localhost.localdomain at Sun Jan 26 
10:08:45 EST 2020

vpp# show *dpdk version*
DPDK Version:             DPDK 19.05.0
DPDK EAL init args:       -c 2 -n 4 --in-memory --vdev crypto_aesni_mb0 
--file-prefix vpp --master-lcore 1

[root@localhost ~]# *uname -a*
Linux localhost.localdomain 4.4.211-1.el7.elrepo.x86_64 #1 SMP Thu Jan 23 
08:11:08 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#

[root@localhost device]# *cat /etc/centos-release*
CentOS Linux release 7.7.1908 (Core)

[root@localhost ~]# *sysctl -a | grep hugepages*
vm.hugepages_treat_as_movable = 0
vm.nr_hugepages = 1024
vm.nr_hugepages_mempolicy = 1024
vm.nr_overcommit_hugepages = 0
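
One thing worth double-checking for vfio-pci is that the IOMMU is actually
enabled on the kernel command line (intel_iommu=on on Intel, optionally
iommu=pt). A small sketch of such a check; the helper name has_iommu_flags is
hypothetical, and it takes the cmdline as an argument so it can be exercised
anywhere:

```shell
# Hypothetical helper: report whether a kernel command line contains the
# intel_iommu=on flag that vfio-pci with an IOMMU normally requires.
has_iommu_flags() {
  case " $1 " in
    *" intel_iommu=on "*) return 0 ;;  # flag present
    *) return 1 ;;                     # flag missing
  esac
}

# On the target box you would run it against the live cmdline:
#   has_iommu_flags "$(cat /proc/cmdline)" \
#     || echo "add intel_iommu=on to the kernel command line and reboot"
```

If the flag is missing, the IOMMU is off and vfio-pci cannot work; in that
case either enable it in the bootloader config or fall back to
uio_pci_generic.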

[root@localhost ~]# *cat /etc/vpp/startup.conf*
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
}
api-trace {
on
}
api-segment {
gid vpp
}
socksvr {
default
}
dpdk {
uio-driver vfio-pci
vdev crypto_aesni_mb0
dev default {
num-rx-desc 4096
num-tx-desc 4096
}
#num-mbufs 128000
socket-mem 0,1024
no-multi-seg
no-tx-checksum-offload
}
nat {
translation hash buckets 10240
translation hash memory 268435456
user hash buckets 1280
user hash memory 134217728
max translations per user 1000
}
vlib_plugin_early_init:361: plugin path 
/usr/lib/x86_64-linux-gnu/vpp_plugins:/usr/lib/vpp_plugins
load_one_plugin:189: Loaded plugin: abf_plugin.so (Access Control List (ACL) 
Based Forwarding)
load_one_plugin:189: Loaded plugin: acl_plugin.so (Access Control Lists (ACL))
load_one_plugin:189: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual 
Function (AVF) Device Driver)
load_one_plugin:189: Loaded plugin: cdp_plugin.so (Cisco Discovery Protocol 
(CDP))
load_one_plugin:189: Loaded plugin: crypto_ia32_plugin.so (Intel IA32 Software 
Crypto Engine)
load_one_plugin:189: Loaded plugin: crypto_ipsecmb_plugin.so (Intel IPSEC 
Multi-buffer Crypto Engine)
load_one_plugin:189: Loaded plugin: crypto_openssl_plugin.so (OpenSSL Crypto 
Engine)
load_one_plugin:189: Loaded plugin: ct6_plugin.so (IPv6 Connection Tracker)
load_one_plugin:189: Loaded plugin: dns_plugin.so (Simple DNS name resolver)
load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:189: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:189: Loaded plugin: gbp_plugin.so (Group Based Policy (GBP))
load_one_plugin:189: Loaded plugin: gtpu_plugin.so (GPRS Tunnelling Protocol, 
User Data (GTPv1-U))
load_one_plugin:189: Loaded plugin: hs_apps_plugin.so (Host Stack Applications)
load_one_plugin:189: Loaded plugin: http_static_plugin.so (HTTP Static Server)
load_one_plugin:189: Loaded plugin: igmp_plugin.so (Internet Group Management 
Protocol (IGMP))
load_one_plugin:189: Loaded plugin: ikev2_plugin.so (Internet Key Exchange 
(IKEv2) Protocol)
load_one_plugin:189: Loaded plugin: ila_plugin.so (Identifier Locator 
Addressing (ILA) for IPv6)
load_one_plugin:189: Loaded plugin: ioam_plugin.so (Inbound Operations, 
Administration, and Maintenance (OAM))
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:189: Loaded plugin: l2e_plugin.so (Layer 2 (L2) Emulation)
load_one_plugin:189: Loaded plugin: l3xc_plugin.so (L3 Cross-Connect (L3XC))
load_one_plugin:189: Loaded plugin: lacp_plugin.so (Link Aggregation Control 
Protocol (LACP))
load_one_plugin:189: Loaded plugin: lb_plugin.so (Load Balancer (LB))
load_one_plugin:189: Loaded plugin: mactime_plugin.so (Time-based MAC Source 
Address Filter)
load_one_plugin:189: Loaded plugin: map_plugin.so (Mapping of Address and Port 
(MAP))
load_one_plugin:189: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(memif) -- Experimental)
load_one_plugin:189: Loaded plugin: nat_plugin.so (Network Address Translation 
(NAT))
load_one_plugin:189: Loaded plugin: nsh_plugin.so (Network Service Header (NSH))
load_one_plugin:189: Loaded plugin: nsim_plugin.so (Network Delay Simulator)
load_one_plugin:117: Plugin disabled (default): oddbuf_plugin.so
load_one_plugin:189: Loaded plugin: perfmon_plugin.so (Performance Monitor)
load_one_plugin:189: Loaded plugin: quic_plugin.so (Quic transport protocol)
load_one_plugin:189: Loaded plugin: rdma_plugin.so (RDMA IBverbs Device Driver)
load_one_plugin:189: Loaded plugin: router.so (router)
load_one_plugin:117: Plugin disabled (default): sctp_plugin.so
load_one_plugin:189: Loaded plugin: srv6ad_plugin.so (Dynamic Segment Routing 
for IPv6 (SRv6) Proxy)
load_one_plugin:189: 

Re: [vpp-dev] status of AF_XDP VPP plugin?

2020-04-06 Thread Andreas Schultz
On Mon, 6 Apr 2020 at 09:11, Július Milan wrote:

> Hi Andreas
>
>
>
> I believe you mean this one:
>
> https://gerrit.fd.io/r/#/c/vpp/+/21606/
>

Yes, sorry, got the wrong link


> Unfortunately no, I was moved to another project at work, but I am
> considering finishing it on my own.
>
>
>
> The status is:
>
> It does not yet support multiple RX/TX queues and it’s not yet
> zerocopy (between the XDP plugin and the rest of VPP).
>
> XDP uses its own kernel mmap-ed ring buffers. As far as I found out, to
> achieve zerocopy it would be necessary either to mmap the whole vlib buffer
> pool (not sure if it is always a contiguous piece of memory, even when using
> multiple workers, and thus whether this is possible; is it?)
>
> or to support external buffers (by, for example, adding a pointer to
> external data to the vlib buffer; this is the way DPDK does it)
>

> I am new to VPP, but with a little guidance, especially regarding the
> overall architecture, I would gladly complete the XDP plugin.
>

I'm no expert on VPP internals either. I'm going to test it and maybe I can
spend some time on it.

Thanks & Regards
>
> Julius
>

Thanks,
Andreas

>
>
> *From:* Andreas Schultz [mailto:andreas.schu...@travelping.com]
> *Sent:* Monday, April 6, 2020 8:19 AM
> *To:* Július Milan 
> *Cc:* vpp-dev@lists.fd.io
> *Subject:* status of AF_XDP VPP plugin?
>
>
>
> Hi Julius,
>
>
>
> I just found your XDP device plugin for VPP [1] and I was wondering what
> state it is in?
>
> Do you still work on it and try to get it merged to VPP?
>
>
>
> Regards,
>
> Andreas
>
>
>
> 1: https://gerrit.fd.io/r/c/vpp/+/25785
>
> --
>
> Andreas Schultz
>


-- 

Andreas Schultz

-- 

Principal Engineer

t: +49 391 819099-224

--- enabling your networks ---

Travelping GmbH
Roentgenstraße 13
39108 Magdeburg
Germany

t: +49 391 819099-0
f: +49 391 819099-299

e: i...@travelping.com
w: https://www.travelping.com/
Company registration: Amtsgericht Stendal
Geschaeftsfuehrer: Holger Winkelmann
Reg. No.: HRB 10578
VAT ID: DE236673780
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15995): https://lists.fd.io/g/vpp-dev/message/15995
Mute This Topic: https://lists.fd.io/mt/72806402/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] status of AF_XDP VPP plugin?

2020-04-06 Thread Andreas Schultz
Hi Julius,

I just found your XDP device plugin for VPP [1] and I was wondering what
state it is in?
Do you still work on it and try to get it merged to VPP?

Regards,
Andreas

1: https://gerrit.fd.io/r/c/vpp/+/25785
-- 

Andreas Schultz
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15994): https://lists.fd.io/g/vpp-dev/message/15994
Mute This Topic: https://lists.fd.io/mt/72806402/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-