[vpp-dev] Congrats to Marco Varlese on his election as a vpp project committer

2018-02-08 Thread Dave Barach (dbarach)
It gives me great pleasure to announce Marco's election as a vpp project 
committer, confirmed a few minutes ago by the fd.io vpp TSC.

Vanessa V. will take care of [+2 button] mechanics shortly.

Thanks much to Marco for his interest in the vpp project!

Dave
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SCTP coverity-scan warnings addressed

2018-02-08 Thread Dave Barach (dbarach)
+1, thanks Marco...!

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Thursday, February 8, 2018 7:06 AM
To: Marco Varlese 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] SCTP coverity-scan warnings addressed

Great, thanks!

Chris

> -Original Message-
> From: Marco Varlese [mailto:mvarl...@suse.de]
> Sent: Thursday, February 8, 2018 5:16
> To: Luke, Chris 
> Cc: Florin Coras ; vpp-dev@lists.fd.io
> Subject: SCTP coverity-scan warnings addressed
> 
> Hi Chris,
> 
> Just a quick update: I took care of the action item that came up during the
> VPP project meeting on Tuesday.
> 
> The patch https://gerrit.fd.io/r/#/c/10433/, addressing the eight SCTP
> coverity warnings, has been merged.
> 
> 
> Cheers,
> --
> Marco V
> 
> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton 
> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: New Committer Nomination: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
Copying the list...

From: Luke, Chris [mailto:chris_l...@comcast.com]
Sent: Tuesday, February 6, 2018 11:40 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Keith Burns (krb) 
<k...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; John Lo (loj) 
<l...@cisco.com>; Damjan Marion (damarion) <damar...@cisco.com>; Neale Ranns 
(nranns) <nra...@cisco.com>; Ole Troan <o...@cisco.com>; Dave Wallace 
<dwallac...@gmail.com>; Ed Warnicke (eaw) <e...@cisco.com>
Subject: RE: New Committer Nomination: Marco Varlese

+1

From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
Sent: Tuesday, February 6, 2018 8:56
To: Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; John Lo (loj) <l...@cisco.com>; Luke, Chris <chris_l...@cable.comcast.com>; Damjan Marion (damarion) <damar...@cisco.com>; Neale Ranns (nranns) <nra...@cisco.com>; Ole Troan <o...@cisco.com>; Dave Wallace <dwallac...@gmail.com>; Ed Warnicke (eaw) <e...@cisco.com>
Subject: New Committer Nomination: Marco Varlese

Folks,

In view of significant code contributions to the vpp project - see below - I'm 
pleased to nominate Marco Varlese as a vpp project committer. I have high 
confidence that he'll be a major asset to the project in a committer role.

Marco has contributed 46 merged patches, including significant new feature 
work.  Example: host stack implementation of SCTP, 8 KLOC 
https://gerrit.fd.io/r/#/c/9150.


Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. We'll need a recorded vote so that the TSC can approve Marco's nomination.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: New Committer Nomination: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
To record Damjan’s vote…

From: Damjan Marion (damarion)
Sent: Tuesday, February 6, 2018 9:07 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) 
<fco...@cisco.com>; John Lo (loj) <l...@cisco.com>; Luke, Chris 
<chris_l...@comcast.com>; Neale Ranns (nranns) <nra...@cisco.com>; Ole Troan 
<o...@cisco.com>; Dave Wallace <dwallac...@gmail.com>; Ed Warnicke (eaw) 
<e...@cisco.com>
Subject: Re: New Committer Nomination: Marco Varlese

+1

On 6 Feb 2018, at 14:55, Dave Barach (dbarach) <dbar...@cisco.com> wrote:
Folks,

In view of significant code contributions to the vpp project – see below – I’m 
pleased to nominate Marco Varlese as a vpp project committer. I have high 
confidence that he’ll be a major asset to the project in a committer role.

Marco has contributed 46 merged patches, including significant new feature 
work.  Example: host stack implementation of SCTP, 8 KLOC 
https://gerrit.fd.io/r/#/c/9150.


Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. We’ll need a recorded vote so that the TSC can approve Marco’s nomination.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] New fd.io vpp project committer vote: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
Copying vpp-dev@lists.fd.io to formally open the vote described below. Voting is limited to current committers, and will remain open for one week or until all committers have voted.

 

Thanks. Dave

 

From: Dave Barach (dbarach) 
Sent: Tuesday, February 6, 2018 8:56 AM
To: Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras)
<fco...@cisco.com>; John Lo (loj) <l...@cisco.com>; Luke, Chris
<chris_l...@comcast.com>; Damjan Marion (damarion) <damar...@cisco.com>;
Neale Ranns (nranns) <nra...@cisco.com>; Ole Troan <o...@cisco.com>; Dave
Wallace <dwallac...@gmail.com>; Ed Warnicke (eaw) <e...@cisco.com>
Subject: New Committer Nomination: Marco Varlese

 

Folks,

 

In view of significant code contributions to the vpp project - see below -
I'm pleased to nominate Marco Varlese as a vpp project committer. I have
high confidence that he'll be a major asset to the project in a committer
role.  

 

Marco has contributed 46 merged patches, including significant new feature work.  Example: host stack implementation of SCTP, 8 KLOC https://gerrit.fd.io/r/#/c/9150. All merged patches:
https://gerrit.fd.io/r/#/q/status:merged+owner:%22Marco+Varlese+%253Cmarco.varlese%2540suse.de%253E%22

 

 

Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. We'll need a recorded vote so that the TSC can approve Marco's nomination.

 

Thanks... Dave

 



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Calling a C function in one plugin from another plugin?

2018-02-05 Thread Dave Barach (dbarach)
You can ask vlib_get_plugin_symbol ("plugin_name", "function_name") for the 
address of a function... 

Returns NULL if e.g. the plugin in question isn't loaded or the symbol is 
missing.
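
For illustration, a minimal sketch of the call pattern (the plugin file name "acl_plugin.so", the exported function acl_plugin_example_fn() and its signature are hypothetical placeholders, not something from this thread):

#include <vlib/vlib.h>
#include <vlib/unix/plugin.h>

typedef int (*example_fn_t) (u32 arg);

static int
call_into_other_plugin (u32 arg)
{
  /* Hypothetical plugin and symbol names, for illustration only */
  example_fn_t fn = (example_fn_t)
    vlib_get_plugin_symbol ("acl_plugin.so", "acl_plugin_example_fn");

  if (fn == 0)
    return -1;   /* plugin not loaded or symbol missing: degrade gracefully */

  return fn (arg);
}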

HTH... D.

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Michael Lilja
Sent: Monday, February 5, 2018 9:54 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Calling a C function in one plugin from another plugin?

Hi,

I'm looking at using DPDK rte_flow (the generic flow API) for ACL offloading. From what I can see, the only option I have is to implement a v1_msg_* receiver in the DPDK plugin to accept commands from the ACL plugin via the SHMEM rings. My concern is that this might conflict with the design of VPP; I'm not sure whether VPP is designed for inter-plugin communication.

Does anyone have another approach for calling DPDK functions from within another plugin, instead of the v1_msg_* layer?

Thanks,
Michael
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] unformat %s eats newlines

2018-02-02 Thread Dave Barach (dbarach)
Folks who need even slightly bulletproof configuration methods should use 
binary APIs: directly from C or through one of several language bindings.

Debug CLI is a developer’s tool, subject to change without notice, and 
supported at the implementer’s discretion.

Extra and/or unparsed input should not go unnoticed: the next function up the 
parse stack will complain.

IIWY I’d leave unformat(…) alone.

HTH… D.

From: Andreas Schultz [mailto:andreas.schu...@travelping.com]
Sent: Friday, February 2, 2018 3:26 PM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] unformat %s eats newlines

Dave Barach (dbarach) <dbar...@cisco.com> wrote on Fri, 2 Feb 2018 at 19:22:
Why not simply:

while (…)
  {
if (unformat(input, “name %s”, ))
  ;
else if (…)
  ;
else
 break;
  }

if (<didn’t parse required args>)
return clib_error_return (0, "parse error: '%U'",
  format_unformat_error, input);

That would mean that malformed optional arguments and random additional stuff would go unnoticed. CLI verification is already not that strong (the usual while-loop parsing permits random argument order even when the help strings suggest strongly ordered arguments).

Is there a reason that unformat eats the newline, or is it just too hard to change?

Andreas

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Andreas Schultz
Sent: Friday, February 2, 2018 12:47 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] unformat %s eats newlines

A typical construct to parse arguments is to use unformat in a while loop that checks for UNFORMAT_END_OF_INPUT. For multiline input, that relies on the detection of "\n" in the input stream.

The problem is that a construct like:

unformat (input, "name %_%v%_", )

eats the newline when it is the only character following the string to be parsed.

This even breaks reading a multi-line config with exec.

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] unformat %s eats newlines

2018-02-02 Thread Dave Barach (dbarach)
Why not simply:

while (…)
  {
if (unformat(input, “name %s”, ))
  ;
else if (…)
  ;
else
 break;
  }

if (<didn’t parse required args>)
return clib_error_return (0, "parse error: '%U'",
  format_unformat_error, input);

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Andreas Schultz
Sent: Friday, February 2, 2018 12:47 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] unformat %s eats newlines

A typical construct to parse arguments is to use unformat in a while loop that checks for UNFORMAT_END_OF_INPUT. For multiline input, that relies on the detection of "\n" in the input stream.
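
For reference, a self-contained sketch of that typical loop inside a CLI handler (the command logic and the "name" variable are made up; the unformat_check_input() / UNFORMAT_END_OF_INPUT idiom and the usual vlib/vlib.h and vlib/cli.h includes are assumed):

static clib_error_t *
my_command_fn (vlib_main_t * vm, unformat_input_t * input,
               vlib_cli_command_t * cmd)
{
  u8 *name = 0;

  while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
    {
      if (unformat (input, "name %s", &name))
        ;
      else
        return clib_error_return (0, "parse error: '%U'",
                                  format_unformat_error, input);
    }

  if (name == 0)
    return clib_error_return (0, "please specify a name");

  /* ... use name, then free the vector allocated by %s ... */
  vec_free (name);
  return 0;
}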

The problem is that a construct like:

unformat (input, "name %_%v%_", )

eats the newline when it is the only character following the string to be parsed.

This even breaks reading a multi-line config with exec.

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How to get dns server of dhcp client interface

2018-01-31 Thread Dave Barach (dbarach)
Option 6 (domain name server) parsing is not implemented. See
…/src/vnet/dhcp/client.c, switch statement near line 112…

Should be a simple coding task. Feel free to submit a patch.
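
As a purely illustrative aside (this is not the actual client.c code), option 6 is a standard RFC 2132 TLV carrying one or more 4-byte DNS server addresses, so a standalone parser boils down to something like:

#include <stdint.h>
#include <string.h>

#define DHCP_OPTION_DNS_SERVER 6

/* Copy up to 'max' IPv4 DNS server addresses (network byte order) out of a
   DHCP option-6 payload; returns the number of addresses found. */
static int
parse_dhcp_dns_option (const uint8_t * opt_data, uint8_t opt_len,
                       uint32_t * servers, int max)
{
  int i, n = opt_len / 4;

  if (opt_data == 0 || (opt_len & 3) != 0)
    return 0;
  if (n > max)
    n = max;
  for (i = 0; i < n; i++)
    memcpy (&servers[i], opt_data + 4 * i, 4);
  return n;
}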

Worst-case, file a Jira ticket so we don’t forget about it.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of ???
Sent: Tuesday, January 30, 2018 7:58 PM
To: vpp-dev 
Subject: [vpp-dev] How to get dns server of dhcp client interface

Hi vpp-dev team,
When I use the dhcp client to get a WAN IP address for a vpp interface, how can I get the DNS server address?
sudo vppctl set dhcp client intfc GigabitEthernet0/a/0
vagrant@localhost:~$ sudo vppctl show dhcp client
[0] GigabitEthernet0/a/0 state DHCP_BOUND addr 10.180.30.193/24 gw 10.180.30.1

Thanks.


Regards,
Jzhchen


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Graph Optimization

2018-01-26 Thread Dave Barach (dbarach)
Dear David,

 

A bit of history: we worked on vpp for a decade before making any serious effort to multi-thread it. The first scheme that I tried was to break the graph up into reconfigurable pipeline stages. Effective partitioning of the graph is highly workload-dependent, and it can change in a heartbeat; the resulting system runs at the speed of the slowest pipeline stage.

 

In terms of easily measured inter-thread handoff cost, it’s not awful: 2-3 clocks/pkt. But handing vectors of packets between threads can cause a festival of cache-coherence traffic, and it can easily undo the positive effects of DDIO (packet data DMA into the cache hierarchy).

 

We actually use the scheme you describe in a very fine-grained way: dual and 
quad loop graph dispatch functions process 2 or 4 packets at the same time. 
Until we run out of registers, a superscalar CPU can “do the same thing to 2 or 
4 packets at the same time” pretty effectively. Including memory hierarchy 
stalls, vpp averages more than two instructions retired per clock cycle.
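
A toy, self-contained illustration of the dual-loop idea (deliberately not real VPP node code; the prefetching and buffer-index plumbing a vlib node would use are omitted):

#include <stddef.h>

typedef struct { unsigned length; unsigned flags; } pkt_t;

static inline void
process_one (pkt_t * p)
{
  p->flags |= 1;   /* stand-in for the real per-packet work */
}

static void
process_vector (pkt_t ** pkts, size_t n)
{
  size_t i = 0;

  /* Dual loop: apply the same operation to two packets per iteration,
     giving a superscalar CPU independent work to overlap */
  while (i + 1 < n)
    {
      process_one (pkts[i]);
      process_one (pkts[i + 1]);
      i += 2;
    }

  /* Single loop: mop up the odd packet out, if any */
  while (i < n)
    process_one (pkts[i++]);
}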

 

At the graph node level, I can’t see how to leverage this technique. Presenting [identical] vectors to 2 (or more) nodes running on multiple threads would mean (1) the parallelized subgraph would run at the speed of the slowest node, (2) you’d pay the handoff costs already discussed above, (3) you’d need an expensive algorithm to make sure that all vector replicas were finished before reentering sequential processing, and (4) none of the graph nodes we’ve ever constructed are free of ordering constraints. Every node alters packet state in a meaningful way, or it wouldn’t be worth having.

 

We’ve had considerable success with flow-hashing across a set of identical 
graph replicas [worker threads], even when available hardware RSS hashing is 
not useful [think about NATted UDP traffic]. 

 

Hope this is of some interest.

 

Thanks… Dave

 

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of David Bainbridge
Sent: Friday, January 26, 2018 12:39 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP Graph Optimization

 

I have just started to read up on VPP/FD.io, and I have a question about graph 
optimization and was wondering if (as I suspect) this has already been thought 
about and either planned or decided against.

 

The documentation I found on VPP essentially says that VPP uses batch processing and processes all packets in a vector in one step before proceeding to the next step. The claim is that this provides better overall throughput because of instruction caching.

 

I was wondering whether optimization of the graph to understand where concurrency can be leveraged has been considered, as well as whether you could process the vector in two steps with an offset. If this is possible, then steps could be pinned to cores and perhaps both concurrency and instruction caching could be leveraged.

 

For example assume the following graph:

 



 

In this graph, steps B and C can be done concurrently as they don't "modify" the vector. Steps D and E can't be done concurrently, but as they don't require look-back/look-forward they can be done at an offset.

 

What I am suggesting is that, if there are enough cores, steps could be pinned to cores to achieve the benefits of instruction caching; after step A is complete, steps B and C could be done concurrently. After B and C are complete, D can be started, and as D completes processing a packet it can then be processed by E (i.e., the entire vector does not need to be processed by D before processing by E is started).

 

I make no argument that this doesn't increase complexity and also introduce coordination costs that don't exist today. To be fair, offset processing could be viewed as splitting the original large vector into smaller vectors and processing the smaller vectors from start to finish (almost dynamic optimization based on dynamic vector resizing).

Just curious to hear others' thoughts and whether some of this has been thought through or experimented with. As I said, just thinking off the cuff and wondering; not fully thought through.

 

avèk respè,

/david

 



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2018-01-26 Thread Dave Barach (dbarach)
As Marco wrote: we’ve experienced sporadic, inexplicable LF infra-related build 
failures since the project started more than two years ago. It’s unusual for an 
otherwise correct patch to require more than one “recheck” for validation, but 
it’s absolutely not unknown.

To mitigate these problems, Ed Kern has built a containerized Jenkins minion 
system which runs on physical hardware, instead of the current setup which 
relies on cloud-hosted Openstack VMs. As soon as practicable – post 18.01 CSIT 
report – we’ll switch to it.

Given a failure which isn’t obviously related to a specific patch, please press 
the “recheck” button. No need to ask, just do it. In case of persistent 
failure, please email vpp-dev.

Thanks… Dave

From: Ni, Hongjun [mailto:hongjun...@intel.com]
Sent: Friday, January 26, 2018 3:25 AM
To: Marco Varlese <mvarl...@suse.de>; Ole Troan <otr...@employees.org>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Gabriel Ganne 
<gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>; Damjan Marion 
(damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] openSUSE build fails

Hi Marco,

Thank you for your explanation. I will contact you if I run into a similar issue again.
again.

Thanks,
Hongjun

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, January 26, 2018 4:21 PM
To: Ni, Hongjun <hongjun...@intel.com>; Ole Troan <otr...@employees.org>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Gabriel Ganne <gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>; Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

On Fri, 2018-01-26 at 06:58 +, Ni, Hongjun wrote:
I rechecked this patch twice, and it built successfully now.

But why need to recheck twice?
If a "recheck" fixed that then it must be an infrastructure glitch; that's the 
only thing I can think of...

That would not be a surprise either since it does happen from time-to-time to 
see random build failures which get fixed by a "recheck".

Having said that, if you happen to run into this sort of problem again (and it does not go away with a recheck), feel free to drop me an email and I will look into it. Just take into account that I'm based at UTC+1.



-Hongjun
- Marco


From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:53 PM
To: Ni, Hongjun <hongjun...@intel.com>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Marco Varlese <mvarl...@suse.de>; Gabriel Ganne <gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>; Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

Hi Hongjun,

I have no OpenSUSE at hand, and could not give it a try.

Neither do I.

Ole



From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:08 PM
To: Ni, Hongjun <hongjun...@intel.com>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Marco Varlese <mvarl...@suse.de>; Gabriel Ganne <gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>; Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

Hongjun,

This looks suspect:

03:32:31 APIGEN vlibmemory/memclnt.api.h
03:32:31 JSON API vlibmemory/memclnt.api.json
03:32:31 SyntaxError: invalid syntax (vppapigentab.py, line 11)
03:32:31 WARNING:vppapigen:/w/workspace/vpp-verify-master-opensuse/build-root/rpmbuild/BUILD/vpp-18.04/build-data/../src/vlibmemory/memclnt.api:0:1: Old Style VLA: u8 data[0];
03:32:31 Makefile:8794: recipe for target 'vlibmemory/memclnt.api.h' failed
03:32:31 make[5]: *** [vlibmemory/memclnt.api.h] Error 1
03:32:31 make[5]: *** Waiting for unfinished jobs
03:32:31




Can you try running vppapigen manually on that platform?
vppapigen --debug --input memclnt.api ...

Cheers
Ole


On 26 Jan 2018, at 06:38, Ni, Hongjun <hongjun...@intel.com> wrote:
Hi all,

It seems that OpenSUSE build failed for this patch:
https://jenkins.fd.io/job/vpp-verify-master-opens

Re: [vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-25 Thread Dave Barach (dbarach)
Congrats to DaveW and the rest of the fd.io vpp team on the 18.01 release!

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: Thursday, January 25, 2018 12:23 AM
To: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: [vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the recipe on 
the wiki: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Thank you to all of the VPP community who have contributed to the 18.01 VPP 
Release.


Elvis has left the building!
-daw-
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Dave Barach (dbarach)
“$ make install-dep” fixed it for me… D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Wednesday, January 24, 2018 1:58 PM
To: vpp-dev 
Subject: [vpp-dev] Missing PLY ?

Hey Kids,

The new API Gen seems to want ply.lex, but I don't think
it is listed as a dependency or something somewhere.  Or
maybe I have a really crappy Python.  Dunno.

Net effect, shown below, isn't good.

Did I miss a step?

Thanks,
jdl


make[4]: Entering directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
  APIGEN   vlibmemory/memclnt.api.h
  JSON API vlibmemory/memclnt.api.json
Traceback (most recent call last):
  File "/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen", line 4, in <module>
    import ply.lex as lex
ImportError: No module named ply.lex
Traceback (most recent call last):
  File "/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen", line 4, in <module>
    import ply.lex as lex
ImportError: No module named ply.lex
make[4]: *** [vlibmemory/memclnt.api.h] Error 1
make[4]: *** Waiting for unfinished jobs
make[4]: *** [vlibmemory/memclnt.api.json] Error 1
make[4]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
make[3]: *** [vpp-build] Error 2
make[3]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
make[2]: *** [install-packages] Error 1
make[2]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
error: Bad exit status from /var/tmp/rpm-tmp.8lAVBj (%build)

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-24 Thread Dave Barach (dbarach)
We're not going to turn vnet_register_interface(...) into an epic catalog of 
special-purpose strcmp's. Any patch which looks the least bit like the diffs 
shown below is guaranteed to be scored -2, and never merged.

Please let John propose a mechanism to address this issue.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Lollita Liu
Sent: Wednesday, January 24, 2018 5:09 AM
To: John Lo (loj) ; vpp-dev@lists.fd.io
Cc: Kingwel Xie ; David Yu Z 
; Terry Zhang Z ; Jordy 
You 
Subject: Re: [vpp-dev] Question and bug found on GTP performance testing

Hi, John.

We tried bypassing the node creation during interface creation and ran the case again. The GTPU throughput is no longer affected by interface creation. The basic source code is as follows:

diff --git a/src/vnet/interface.c b/src/vnet/interface.c
index 82eccc1..451019e 100644
--- a/src/vnet/interface.c
+++ b/src/vnet/interface.c
@@ -745,6 +745,10 @@ vnet_register_interface (vnet_main_t * vnm,
   hw->max_l3_packet_bytes[VLIB_RX] = ~0;
   hw->max_l3_packet_bytes[VLIB_TX] = ~0;

+  if (0 == strcmp(dev_class->name, "GTPU")) {
+goto skip_add_node;
+  }
+
   tx_node_name = (char *) format (0, "%v-tx", hw->name);
   output_node_name = (char *) format (0, "%v-output", hw->name);

@@ -881,6 +885,8 @@ vnet_register_interface (vnet_main_t * vnm,
   setup_output_node (vm, hw->output_node_index, hw_class);
   setup_tx_node (vm, hw->tx_node_index, dev_class);

+skip_add_node:
+
   /* Call all up/down callbacks with zero flags when interface is created. */
   vnet_sw_interface_set_flags_helper (vnm, hw->sw_if_index, /* flags */ 0,
  
VNET_INTERFACE_SET_FLAGS_HELPER_IS_CREATE);

BR/Lollita Liu

From: Lollita Liu
Sent: Tuesday, January 23, 2018 11:28 AM
To: 'John Lo (loj)' >; 
vpp-dev@lists.fd.io
Cc: David Yu Z >; 
Kingwel Xie >; Terry 
Zhang Z >; Jordy 
You >
Subject: RE: Question and bug found on GTP performance testing

Hi, John,
The internal mechanism is very clear to me now.

And do you have any thoughts about the deadlock on the main thread?

BR/Lollita Liu

From: John Lo (loj) [mailto:l...@cisco.com]
Sent: Tuesday, January 23, 2018 11:18 AM
To: Lollita Liu >; 
vpp-dev@lists.fd.io
Cc: David Yu Z >; 
Kingwel Xie >; Terry 
Zhang Z >; Jordy 
You >
Subject: RE: Question and bug found on GTP performance testing

Hi Lolita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output 
and tx nodes for each interface. As you correctly observed, these dedicated tx 
and output nodes are not used for most tunnel interfaces such as GTPU and 
VXLAN. All these tunnel interfaces of the same tunnel type would use an 
existing tunnel type specific encap node as their output nodes.

I can see that for large-scale tunnel deployments, creating a large number of these unused output and tx nodes can be an issue, especially when multiple worker threads are used. The worker threads will be blocked from forwarding packets while the main thread is busy creating these nodes and doing the setup for multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.

Your observation that the forwarding PPS impact occurs only during initial tunnel creation and not on subsequent delete and create is as expected. It is because on tunnel deletion, the associated interfaces are not deleted but kept in a reuse pool for subsequent creation of the same tunnel type. It may not be the best approach for interface usage flexibility, but it certainly helps with the efficiency of the tunnel delete and create cases.

I will work on the interface creation improvement described above when I get a 
chance.  I can let you know when a patch is available on vpp master for you to 
try.  As for 18.01 release, it is probably too late to include this improvement.

Regards,
John

From: vpp-dev-boun...@lists.fd.io 

Re: [vpp-dev] heap per thread

2018-01-24 Thread Dave Barach (dbarach)
Yes, it’s possible. This is not the obvious way to do it.

Before I answer any questions: what are you trying to accomplish? Idiomatic vpp 
coding techniques typically don’t result in enough memory allocator traffic to 
make it worth using per-thread heaps.

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saeed P
Sent: Wednesday, January 24, 2018 12:53 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] heap per thread

Hi
I tried to change the memory allocation in VPP to give each worker its own mheap instead of a shared one, so in /vlib/threads.c the start_workers function changes as follows:

      if (!strcmp (tr->name, "workers"))
        {
          tr->mheap_size = new_mheap_size;
        }
      vec_add2 (vlib_worker_threads, w, 1);

      if (tr->mheap_size)
        w->thread_mheap = mheap_alloc (0, tr->mheap_size);
      else
        w->thread_mheap = main_heap;

By default, tr->mheap_size is zero, so the code takes the else branch and uses main_heap. Now that an mheap is allocated for the workers, it core dumps; GDB shows:

 Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
mheap_get_search_free_bin (align_offset=4, align=, 
n_user_data_bytes_arg=, bin=11, v=0x7fffb5bdd000) at 
/root/CGNAT/build-data/../src/vppinfra/mheap.c:401
401   uword this_object_n_user_data_bytes = mheap_elt_data_bytes 
(e);

Is it possible to set a different mheap per worker?


Thanks,
-Saeed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-23 Thread Dave Barach (dbarach)
Right. The error number base needs to be managed just like the message ID base… 
D.

From: Jon Loeliger [mailto:j...@netgate.com]
Sent: Tuesday, January 23, 2018 9:39 AM
To: Ole Troan <otr...@employees.org>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] RFC: Error Codes

On Tue, Jan 23, 2018 at 8:12 AM, Ole Troan 
<otr...@employees.org<mailto:otr...@employees.org>> wrote:
Dear Dave,

> I would be tempted to have the compiler emit 
> "foreach__api_error" macros [or similar]:
>
> #define foreach_foo_api_error \
> _(SUCCESS, "Success") \
> _(ERROR, "This didn't go well")
>
> To minimize pain in upgrading existing C-code...

Ah, yes of course.
Done.
https://gerrit.fd.io/r/#/c/10204/

Cheers,
Ole

Glad to see you guys like the notion!

With this, will plugins have to manage an error number base for each plugin now?  Or will that be some form of magic behind the scenes?

Thanks,
jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How to link zmq library to new plugin in vpp

2018-01-23 Thread Dave Barach (dbarach)
This report leaves a bit to be desired. As in: "configure file" means what? 
Where is the build output?

Laying that aside, if your plugin is called xxx, try adding:

 xxx_plugin_la_LIBADD += -lzmq

to src/plugins/xxx.am...

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Vadnere, Neha R
Sent: Tuesday, January 23, 2018 4:56 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] How to link zmq library to new plugin in vpp

Hi,

I want to use zmq APIs in my VPP plugin. I tried to add the following line to the configure file:
LDFLAGS+=-L/usr/local/lib -lzmq
autoreconf -fis
./configure
make
make install

and  I am building new plugin from main vpp directory
cd vpp/
make build
make run

But this is not working for me. Can anybody please let me know the correct way to link a library to a plugin?

Regards,
Neha

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-23 Thread Dave Barach (dbarach)
I would be tempted to have the compiler emit "foreach__api_error" 
macros [or similar]:

#define foreach_foo_api_error \
_(SUCCESS, "Success") \
_(ERROR, "This didn't go well") 

To minimize pain in upgrading existing C-code...

D.


-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: Tuesday, January 23, 2018 5:41 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Jon Loeliger <j...@netgate.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] RFC: Error Codes

Dave, Jon,

> On 22 Jan 2018, at 19:34, Dave Barach (dbarach) <dbar...@cisco.com> wrote:
> 
> Dear Jon,
> 
> That makes sense to me. Hopefully Ole will comment with respect to 
> adding statements of the form
> 
> error { FOO_NOT_AVAILABLE, “Resource ‘foo’ is not available” } ;
> 
> to the new Python PLY-based API generator.
> 
> The simple technique used to allocate plugin message-ID’s seems to work OK to 
> solve the analogous problem here.

That makes sense to me too (wonder why we haven't done that before. ;-))

Here is the patch to the compiler:

https://gerrit.fd.io/r/10204 VPPAPIGEN: Error definitions

VPPAPIGEN: Error definitions
This commit adds support for defining errors.

errors {
  SUCCESS, "No error";
  ERROR, "This didn't go well";
};

Which results in the following C:

vl_error(VL_API_ERROR_SUCCESS, "No error")
vl_error(VL_API_ERROR_ERROR, "This didn't go well")

And JSON:
 "errors": [ [ "SUCCESS", "No error" ], [ "ERROR", "This is wrong" ] ]


Does that seem sane?

Cheers,
Ole

> 
> Thanks… Dave
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> On Behalf Of Jon Loeliger
> Sent: Monday, January 22, 2018 12:13 PM
> To: vpp-dev <vpp-dev@lists.fd.io>
> Subject: [vpp-dev] RFC: Error Codes
> 
> Hey VPP Aficionados,
> 
> I would like to make a proposal for a new way to introduce error codes 
> into the VPP code base.  The two main motivations for the proposal are
> 
> 1) to improve the over-all error messages coupled to their API 
> calls, and
> 2) to clearly delineate the errors for VNET from those of various plugins.
> 
> Recently, it was pointed out to me that the errors for the various 
> plugins should not introduce new, plugin-specific errors into the main 
> VNET list of errors (src/vnet/api_errno.h) on the basis that plugins 
> shouldn't clutter VNET, should be more self-sustaining, and should stand 
> alone.
> 
> Without a set of generic error codes that can be used by the various 
> plugins, there would then be no error codes as viable return values 
> from the API calls defined by plugins.
> 
> So here is my proposal:
> 
> - Extend the API definition files to allow the definition of error 
> messages
>   and codes specific to VNET, or to a plugin.
> 
> - Each plugin registers its error codes with a main registry upon being 
> loaded.
> 
> - The global error table is maintained, perhaps much like API enums today.
> 
> - Each API call then has a guaranteed set of return values defined 
> directly
>   within its own API definition, thus coupling API calls and their 
> possible
>   returned error codes as well.
> 
> Other thoughts?
> 
> Thanks,
> jdl
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-22 Thread Dave Barach (dbarach)
Dear Jon,

That makes sense to me. Hopefully Ole will comment with respect to adding 
statements of the form

error { FOO_NOT_AVAILABLE, “Resource ‘foo’ is not available” } ;

to the new Python PLY-based API generator.

The simple technique used to allocate plugin message-ID’s seems to work OK to 
solve the analogous problem here.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Monday, January 22, 2018 12:13 PM
To: vpp-dev 
Subject: [vpp-dev] RFC: Error Codes

Hey VPP Aficionados,

I would like to make a proposal for a new way to introduce error codes
into the VPP code base.  The two main motivations for the proposal are

1) to improve the over-all error messages coupled to their API calls,
and
2) to clearly delineate the errors for VNET from those of various plugins.

Recently, it was pointed out to me that the various plugins should not introduce new, plugin-specific errors into the main VNET list of errors (src/vnet/api_errno.h), on the basis that plugins shouldn't clutter VNET, should be more self-sustaining, and should stand alone.

Without a set of generic error codes that can be used by the various plugins,
there would then be no error codes as viable return values from the API calls
defined by plugins.

So here is my proposal:

- Extend the API definition files to allow the definition of error messages
  and codes specific to VNET, or to a plugin.

- Each plugin registers its error codes with a main registry upon being 
loaded.

- The global error table is maintained, perhaps much like API enums today.

- Each API call then has a guaranteed set of return values defined directly
  within its own API definition, thus coupling API calls and their possible
  returned error codes as well.

Other thoughts?

Thanks,
jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Create an arc

2018-01-17 Thread Dave Barach (dbarach)
Dear Korian,

Steering traffic from ip4_lookup to your node is easily accomplished by setting the fib result [dpo->dpoi_next_node] to send matching traffic where you want it to go.

Add an arc from ip4/6_lookup to your node by calling vlib_node_add_next(...) to create the arc, then create fib entries with dpoi_next_node set to the returned next_index.
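
A rough sketch of that wiring (vlib_get_node_by_name() and vlib_node_add_next() are the real vlib calls, but the node name "my-node-1", the init function, and the FIB/dpo hookup hinted at in the comment are illustrative only):

#include <vlib/vlib.h>

static clib_error_t *
my_node_wire_up (vlib_main_t * vm)
{
  vlib_node_t *ip4_lookup = vlib_get_node_by_name (vm, (u8 *) "ip4-lookup");
  vlib_node_t *my_node = vlib_get_node_by_name (vm, (u8 *) "my-node-1");

  /* Create (or reuse) the arc ip4-lookup -> my-node-1 and remember the slot */
  u32 my_next_index =
    vlib_node_add_next (vm, ip4_lookup->index, my_node->index);

  /* When building the matching FIB entries, set
     dpo->dpoi_next_node = my_next_index so lookup hits land on my-node-1. */
  (void) my_next_index;
  return 0;
}

VLIB_INIT_FUNCTION (my_node_wire_up);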

This is not a feature arc problem. Attempting to solve it as such will cause no 
end of trouble. 

Neale, please jump in as needed...

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of korian edeline
Sent: Wednesday, January 17, 2018 9:30 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Create an arc

Hi all,

Here is the deal:

I have 2 nodes (my-node-1, my-node-2),  I would like my-node-1 to receive 
packets from ip4-lookup, forwarding to either ip4-rewrite, error-drop or 
my-node-2. my-node-2 should only receive from my-node-1 and forward to 
ip4-rewrite or error-drop.

If I put them BEFORE ip4-lookup, I can use the pre-built arc ip4-unicast and everything works perfectly. But I figured that if I want them after ip4-lookup, I have to create my own arc. So here is what I have, after replacing occurrences of "ip4-unicast" with "my-arc":

VNET_FEATURE_ARC_INIT (my_arc, static) = {
   .arc_name = "my-arc,
   .start_nodes = VNET_FEATURES ("ip4-lookup"),
   .arc_index_ptr = _main.feature_arc_index };

What am I missing?

Thanks

Korian

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Proposal to remove ssvm_eth

2018-01-13 Thread Dave Barach (dbarach)
Dear Florin,

Quite to the contrary: removing the ssvm_ethernet driver would be a Good Thing. 
I built it as a prototype a long time ago. It has not been widely adopted. 
Memif solves the same general problem [much better], so please go ahead...

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Friday, January 12, 2018 7:56 PM
To: vpp-dev 
Subject: [vpp-dev] Proposal to remove ssvm_eth

Hi everyone, 

I’m in the process of cleaning up the ssvm code and realized some of the data 
structures have fields that are only used within the ssvm_eth code. Since we 
now have memif, and nobody is really maintaining ssvm_eth, I’d like to remove 
the code. 

Therefore, does anybody have something against me doing that?

Thanks, 
Florin


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Memory Leakage Test By Valgrind

2018-01-08 Thread Dave Barach (dbarach)
Mheap.c has its own highly accurate memory leak checker. I haven’t tried the 
valgrind integration in many years. Valgrind makes vpp run slowly enough to 
make it unusable.

To use the built-in leakfinder: build TAG=vpp_debug, set #define 
MHEAP_HAVE_SMALL_OBJECT_CACHE to 0 in .../src/vppinfra/mheap_bootstrap.h.

Then:


  *   Start vpp, configure it, and so forth.
  *   “memory-trace on”
  *   Run the scenario you want to check for leaks.
  *   “show memory”

“show memory” prints a nice report of all memory allocated while the trace was enabled.

HTH… Dave

P.S. You probably won’t want to see all of the initial memory allocations, but if you do, supply the command-line stanza “ ... vlib { ... memory-trace ... }”


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saeed P
Sent: Monday, January 8, 2018 3:23 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Memory Leakage Test By Valgrind

Hi,
I want to check for memory leaks in VPP, so I use the Valgrind memcheck tool, but it reports a lot of errors, like "Invalid read of size ...", and at the end the memcheck report says:
"This is usually caused by using VALGRIND_MALLOCLIKE_BLOCK in an inappropriate way."
I compile VPP with CLIB_DEBUG=1 (the make build command), so the code includes the annotations that tell Valgrind that VPP has its own memory allocator.
I use command with options below:
valgrind --leak-check=full \
         --show-leak-kinds=all \
         --read-var-info=yes \
         --trace-children=yes \
         --fair-sched=yes \
         --log-file=memcheck-output.log \
         /root/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp \
         -c /etc/vpp/startup.conf

Have you tested Valgrind memcheck on VPP successfully? Which command and configuration did you use?
Is there any useful tool for this, other than the internal commands "show memory" and "memory-trace"?

Thanks,
-Saeed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP 18.01 RC1 milestone is complete!

2018-01-05 Thread Dave Barach (dbarach)
Hey Dave, thanks for all your work to make this happen!

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: Thursday, January 4, 2018 12:25 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP 18.01 RC1 milestone is complete!

Folks,

The VPP 18.01 RC1 milestone is complete. The VPP 18.01 release branch 
(stable/1801) has been created, along with the associated nexus and 
packagecloud repo's.

  *   vpp master branch is now open for all patches slated for VPP 18.04 (and 
beyond).
  *   vpp stable/1801 is open for bug fix patches only.
Per the standard process, all bug fixes to the stable branch should follow the 
best practices:

  *   All bug fixes must be double-committed to the release throttle as well as 
to the master branch
 *   Commit first to the release throttle, then "git cherry-pick" into 
master
 *   Manual merges may be required, depending on the degree of divergence 
between throttle and master
  *   All bug fixes need to have a Jira ticket
 *   Please put Jira IDs into the commit messages.
 *   Please use the same Jira ID for both the stable branch and master.
Note: I downloaded and installed the Ubuntu packages for stable/1801 & master 
following the directions on this wiki page:  
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Please let me know if there are any issues downloading/installing the centos7 
artifacts.
-daw-

ps. Thanks to Florin Coras and Ed Warnicke for their assistance.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 'pool_elt_at_index' Relative Addressing Cause a Mistake

2018-01-04 Thread Dave Barach (dbarach)
All of the pool_get(mypool, new_elt) variants are capable of expanding - and 
hence moving - mypool, leading to dangling references to free memory if you’re 
not careful. Here’s the usual coding pattern:

old_elt = pool_elt_at_index (mypool, index);

/* use old-elt */

pool_get (mypool, new_elt);

/* old-elt now INVALID, but index (or p[0]) is still fine */

old_elt = pool_elt_at_index (mypool, index);



Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of ???
Sent: Thursday, January 4, 2018 1:22 AM
To: vpp-dev 
Subject: [vpp-dev] 'pool_elt_at_index' Relative Addressing Cause a Mistake


Hi guys,

I'm testing ikev2. When I successfully initiate an SA (pr1) and then add another one (pr2), sa->pr1->name gets overwritten.

After investigating, I found that 'pool_elt_at_index' uses relative addressing, and the pool base may change by the time we use a pointer we saved earlier.

eg:'pool_elt_at_index (km->profiles, p[0]);'

How can we solve the problem?

Thanks,
Xyxue

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP does not detect NIC automatically

2018-01-03 Thread Dave Barach (dbarach)
Dear Charlie,

Vpp won't touch an interface if it has an associated Linux kernel interface 
which is up, and/or has an address configured on it. Manually unbinding the 
interface - as you did - makes the Linux kernel interface disappear.

Accidentally whitelisting a host's management ethernet would be a Bad Thing. 
"Hmmm... Why can't I ping the box anymore, let alone ssh to it..."

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Li, Charlie
Sent: Wednesday, January 3, 2018 2:29 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP does not detect NIC automatically

Hi VPP team,

I am new to VPP and am following the "FDIO Quick Start Guide" 
(https://docs.google.com/document/d/1zqYN7qMavgbdkPWIJIrsPXlxNOZ_GhEveHQxpYr3qrg/edit)
 to get started.

I am running Ubuntu 16.04 and using the pre-built packages.

According to the document, vpp should detect and take over the Ethernet ports 
that are not in use (link down). But on my system, vpp does not detect any 
interfaces except the "local0".

# ps -eaf | grep vpp
root991  1 99 Nov28 ?2-06:13:50 /usr/bin/vpp -c 
/etc/vpp/startup.conf
root  83025  83014  0 15:34 pts/800:00:00 grep --color=auto vpp

# lspci
...
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)
...

# vppctl show int
              Name               Idx       State          Counter          Count
local0                            0        down

Then I add the two interfaces to the whitelist in startup.conf

## Whitelist specific interface by specifying PCI address
dev :01:00.0
dev :01:00.1

And restart vpp, but it still does not detect the interfaces.

As a workaround, I manually bind the interfaces

~/dpdk/usertools/dpdk-devbind.py -b uio_pci_generic :01:00.0
~/dpdk/usertools/dpdk-devbind.py -b uio_pci_generic :01:00.1

And restart vpp; now everything starts to work.

Is this expected, or did I miss something?


Regards,
Charlie Li

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] No vpp project meeting 12/26/2017

2017-12-15 Thread Dave Barach (dbarach)
Have a great holiday season!

Thanks... Dave

P.S. Are folks interested in meeting on 1/2/2018, or should we reconvene on 
1/9/2018?

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Dave Barach (dbarach)
Dear Marco,

Thanks very much...

Dave

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, December 15, 2017 9:06 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Gabriel Ganne 
<gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>
Cc: Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

We (at SUSE) are currently pushing an update to 2.2.11 for openSUSE in our 
repositories.
Once that's confirmed to be upstream, I will push a new patch to the 
ci-management repo to have the indent package upgraded to the latest version 
and re-enable the "checkstyle".


Cheers,
Marco

On Fri, 2017-12-15 at 13:51 +, Dave Barach (dbarach) wrote:
With a bit of fiddling, I was able to fix gerrit 9440 so that indent 2.2.10 and 
2.2.11 appear to produce identical results...

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Gabriel Ganne
Sent: Friday, December 15, 2017 8:42 AM
To: Billy McFall <bmcf...@redhat.com>; Marco Varlese <mvarl...@suse.de>
Cc: Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails


Hi,



If you browse the source http://hg.savannah.gnu.org/hgweb/indent/

The tag 2.2.11  is there, the source seems updated regularly.



Best regards,



--

Gabriel Ganne


From: vpp-dev-boun...@lists.fd.io <vpp-dev-boun...@lists.fd.io> on behalf of Billy McFall <bmcf...@redhat.com>
Sent: Friday, December 15, 2017 2:26:42 PM
To: Marco Varlese
Cc: Damjan Marion (damarion); vpp-dev
Subject: Re: [vpp-dev] openSUSE build fails



On Fri, Dec 15, 2017 at 5:15 AM, Marco Varlese <mvarl...@suse.de> wrote:
Hi Damjan,

On Fri, 2017-12-15 at 09:06 +, Damjan Marion (damarion) wrote:


On 15 Dec 2017, at 08:52, Marco Varlese <mvarl...@suse.de> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +, Damjan Marion (damarion) wrote:
Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything over the mailing list otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/ - abandoned but it shows that something was wrong

Ok, so just summarizing our conversation on IRC for others too.

That issue is connected to the different versions of INDENT (C checkstyle) 
installed on the different distros.

openSUSE runs 2.2.10 whilst CentOS and Ubuntu run 2.2.11

What strikes me is that the upstream repo https://ftp.gnu.org/gnu/indent/ has 2.2.10 as last revision.
Our indent package maintainer is looking at possible other sources where Indent 
could "live" these days and will let me know as soon as she finds out.

@Thomas Herbert, would you know where the Indent package on CentOS comes from? Maybe that could help...

Marco, I can't find the source. I'll look around a little more. From CentOS 7.4:

$ sudo yum provides indent
:
indent-2.2.11-13.el7.x86_64 : A GNU program for formatting C code
Repo: base
:

$ sudo repoquery -i indent
Name: indent
Version : 2.2.11
Release : 13.el7
Architecture: x86_64
Size: 359131
Packager: CentOS BuildSystem <http://bugs.centos.org>

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Dave Barach (dbarach)
With a bit of fiddling, I was able to fix gerrit 9440 so that indent 2.2.10 and 
2.2.11 appear to produce identical results...

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Friday, December 15, 2017 8:42 AM
To: Billy McFall ; Marco Varlese 
Cc: Damjan Marion (damarion) ; vpp-dev 
Subject: Re: [vpp-dev] openSUSE build fails


Hi,



If you browse the source http://hg.savannah.gnu.org/hgweb/indent/

The tag 2.2.11  is there, the source seems updated regularly.



Best regards,



--

Gabriel Ganne


From: vpp-dev-boun...@lists.fd.io 
> on behalf of 
Billy McFall >
Sent: Friday, December 15, 2017 2:26:42 PM
To: Marco Varlese
Cc: Damjan Marion (damarion); vpp-dev
Subject: Re: [vpp-dev] openSUSE build fails



On Fri, Dec 15, 2017 at 5:15 AM, Marco Varlese 
> wrote:
Hi Damjan,

On Fri, 2017-12-15 at 09:06 +, Damjan Marion (damarion) wrote:



On 15 Dec 2017, at 08:52, Marco Varlese 
> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +, Damjan Marion (damarion) wrote:

Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything over the mailing list otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/
 - abandoned but it shows that something was wrong

Ok, so just summarizing our conversation on IRC for others too.

That issue is connected to the different versions of INDENT (C checkstyle) 
installed on the different distros.

openSUSE runs 2.2.10 whilst CentOS and Ubuntu run 2.2.11

What strikes me is that the upstream repo 
https://ftp.gnu.org/gnu/indent/
 has 2.2.10 as last revision.
Our indent package maintainer is looking at possible other sources where Indent 
could "live" these days and will let me know as soon as she finds out.

@Thomas Herbert, would you know where the Indent package on CentOS comes from? Maybe that could help...

Marco, I can't find the source. I'll look around a little more. From CentOS 7.4:

$ sudo yum provides indent
:
indent-2.2.11-13.el7.x86_64 : A GNU program for formatting C code
Repo: base
:

$ sudo repoquery -i indent
Name: indent
Version : 2.2.11
Release : 13.el7
Architecture: x86_64
Size: 359131
Packager: CentOS BuildSystem 
>
Group   : Applications/Text
URL : 
http://indent.isidore-it.eu/beautify.html
   <-- BUSTED LINK
Repository  : base
Summary : A GNU program for formatting C code
Source  : indent-2.2.11-13.el7.src.rpm
Description :
Indent is a GNU program for beautifying C code, so that it is easier to
read.  Indent can also convert from one C writing style to a different
one.  Indent understands correct C syntax and tries to handle incorrect
C syntax.

Install the indent package if you are developing applications in C and
you want a program to format your code.






So generally speaking i would like to question having verify jobs for 

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Dave Barach (dbarach)
Guys,

I’ll take a look at e.g. gerrit 9440, ip_frag.c and see if I can fix it.

Under the circumstances, it seems perfectly OK to s/ON/OFF/ as needed in the 
per-file patch verification on/off switch:

/*
* fd.io coding-style-patch-verification: ON
*
* Local Variables:
* eval: (c-set-style "gnu")
* End:
*/

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Marco Varlese
Sent: Friday, December 15, 2017 5:16 AM
To: Damjan Marion (damarion) 
Cc: vpp-dev 
Subject: Re: [vpp-dev] openSUSE build fails

Hi Damjan,

On Fri, 2017-12-15 at 09:06 +, Damjan Marion (damarion) wrote:



On 15 Dec 2017, at 08:52, Marco Varlese 
> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +, Damjan Marion (damarion) wrote:

Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything over the mailing list otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/ - abandoned but it shows that something was 
wrong

Ok, so just summarizing our conversation on IRC for others too.

That issue is connected to the different versions of INDENT (C checkstyle) 
installed on the different distros.

openSUSE runs 2.2.10 whilst CentOS and Ubuntu run 2.2.11

What strikes me is that the upstream repo https://ftp.gnu.org/gnu/indent/ has 
2.2.10 as last revision.
Our indent package maintainer is looking at possible other sources where Indent 
could "live" these days and will let me know as soon as she finds out.

@Thomas Herbert, would you know the source where the Indent package on CentOS 
come from? Maybe that could help...





So generally speaking i would like to question having verify jobs for multiple
distros.
Is there really a value in compiling same code on different distros. Yes I
know gcc version can be different,
but that can be addressed in simpler way, if it needs to be addressed at all.

More distros means more moving parts and bigger chance that something will
fail.
Well, I am not sure how to interpret this but (in theory) a build should be
reproducible in the first place and I should not worry about problems with build
outcomes. It doesn't only affect openSUSE and I raised it many times over the
mailing-list; when you need to run "recheck" multiple times to have a build
succeed. IMHO the issue should be addressed and not solved by putting it under
the carpet...

We all know that we have an extremely fragile system (obviously we have not been
able to fix that in almost 2 years), so as long as the system stays as it is,
increasing complexity doesn't help and just causes frustration.

Also it costs resources
That is a different matter and if that's the case then it should be discussed
seriously; raising this argument now, after having had people invest their
time in getting stuff up and running, isn't really a cool thing...

Marco, the decision to have verify jobs on 2 distros was made well before you 
joined the project,
and I don't remember a serious decision on that topic; it might be that at that 
time
we were simply inexperienced, or maybe we didn't expect the infra to be so fragile.

Fact is that now we have a ridiculous situation: 2 verify jobs say the patch is OK, 
the 3rd one says
it is not. Which one to trust?

So please don't take this personally, I know you invested time to get the suse build 
working, but still
I think it is a valid question to ask: do we really need 3 verify jobs? Should 
we have 4 tomorrow
if somebody invests his time to do a verify job on Archlinux, for example?

Thanks,

Damjan



--
Marco V

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [discuss] New Option for fd.io mailing lists: groups.io

2017-12-14 Thread Dave Barach (dbarach)
Minor quibble with the assertion that we don’t “moderate” our discussions. I 
spend a bit of time every day dealing with messages sent e.g. to vpp-dev from 
(a) folks who aren’t members of the list, and (b) spam / phish emails.

You’d be surprised how much category (b) email needs to be disposed of...

Thanks… Dave

From: discuss-boun...@lists.fd.io [mailto:discuss-boun...@lists.fd.io] On 
Behalf Of Joel Halpern
Sent: Thursday, December 14, 2017 10:51 AM
To: Ed Warnicke ; t...@lists.fd.io; disc...@lists.fd.io; 
vpp-dev ; csit-...@lists.fd.io; cicn-...@lists.fd.io; 
honeycomb-dev ; deb_dpdk-...@lists.fd.io; 
rpm_d...@lists.fd.io; nsh_sfc-...@lists.fd.io; odp4vpp-...@lists.fd.io; 
pma_tools-...@lists.fd.io; puppet-f...@lists.fd.io; tldk-...@lists.fd.io; 
trex-...@lists.fd.io
Subject: Re: [discuss] New Option for fd.io mailing lists: groups.io

I like having good searchable archives.

I have to say that I am completely turned off by the end of the FAQ.  We don’t 
“moderate” any of our discussions.  And unless something is very strange, the 
use of groups.io vs mailman better not have any visible effect on participation 
in the email discussions.

Listing features like wikis seems also counter-productive.  I do not want us to 
have two separate wiki spaces.

Polls would be nice once in a while (although doodle seems to work just fine 
for most folks.)

If we want calendaring, I would want it integrated in the wiki, not part of the 
mailing list.

Yours,
Joel

From: discuss-boun...@lists.fd.io 
[mailto:discuss-boun...@lists.fd.io] On Behalf Of Ed Warnicke
Sent: Thursday, December 14, 2017 10:45 AM
To: t...@lists.fd.io; 
disc...@lists.fd.io; vpp-dev 
>; 
csit-...@lists.fd.io; 
cicn-...@lists.fd.io; honeycomb-dev 
>; 
deb_dpdk-...@lists.fd.io; 
rpm_d...@lists.fd.io; 
nsh_sfc-...@lists.fd.io; 
odp4vpp-...@lists.fd.io; 
pma_tools-...@lists.fd.io; 
puppet-f...@lists.fd.io; 
tldk-...@lists.fd.io; 
trex-...@lists.fd.io
Subject: [discuss] New Option for fd.io mailing lists: groups.io

A new option has become available for handling mailing lists: 
groups.io

As a community, we need to look at this option, provide feedback, and come to a 
decision as to whether or not to migrate.  A critical part of that is having 
folks take a look, ask questions, and express opinions :)

We have a sandbox example at  https://groups.io/g/lfn  you can look at

And an example with active list and imported archive: 
https://lists.odpi.org/g/odpi-sig-bi

Major benefits include searchability, better web interface, etc.

The LF was kind enough to write a FAQ for us as we consider as a community 
whether to migrate or not:

FAQs
Q: What are the key differences between Mailman and Groups.io?
●Groups.io has a modern interface, robust user security model, and interactive, 
searchable archives
●Groups.io provides advanced features including muting threads and integrations 
with modern tools like GitHub, Slack, and Trello
● Groups.io also has optional extras like a shared calendar, 
polling, chat, a wiki, and more
● Groups.io uses a concept of subgroups, where members first join 
the project “group” (a master list), then they choose the specific “subgroup” 
lists they want to subscribe to

Q: How is the experience different for me as a list moderator or participant?
In many ways, it is very much the same. You will still find the main group at 
your existing URL and sub-groups equate to the more focused mailing lists based 
on the community’s needs. Here is an example of main group and sub-group URL 
patterns, and their respective emails:

https://lists.fd.io/g/tsc
https://lists.fd.io/g/discuss
https://lists.fd.io/g/vpp-dev
t...@lists.fd.io
disc...@lists.fd.io
vpp-...@lists.fd.io

What is different is Groups.io’s simple but highly functional UI that will make 
the experience of moderating or participating in the community discussions more 
enjoyable.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] undefined refrence to

2017-12-11 Thread Dave Barach (dbarach)
GNU ld doesn't know that e.g. vpp_api.o needs e.g. format() until after it has 
already processed -lvppinfra. Reorder the command line so the object files come 
before the -l options.
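
For example (a sketch that simply reorders the command line quoted below; the 
library list itself is unchanged):

gcc -Wall -I/usr/include -I/usr/include/vpp_plugins vpp_api.o main.o -o test \
    -lvlibmemoryclient -lsvm -lvppinfra -lvlib -lvatplugin \
    -lpthread -lm -lrt -ldl -lcrypto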

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Samuel S
Sent: Monday, December 11, 2017 2:05 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] undefined refrence to

Hi,
Finally I can write something that can be built (I think), but I have an 
issue.
I include api_common.h, api_helper_macros.h and vat_helper_macros.h but I can't 
make it, and the compiler says:
/***/
gcc -Wall -I/usr/include -I/usr/include/vpp_plugins -lvlibmemoryclient -lsvm 
-lvppinfra -lvlib -lvatplugin -lpthread -lm -lrt -ldl -lcrypto vpp_api.o main.o 
-o test
vpp_api.o: In function `vpp_nat_init':
vpp_api.c:(.text+0x3be): undefined reference to `format'
vpp_api.c:(.text+0x3ce): undefined reference to 
`vl_client_get_first_plugin_msg_id'
vpp_api.c:(.text+0x405): undefined reference to `vl_noop_handler'
vpp_api.c:(.text+0x420): undefined reference to `vl_msg_api_set_handlers'
vpp_api.o: In function `vpp_connect':
vpp_api.c:(.text+0x44f): undefined reference to `vl_client_connect_to_vlib'
vpp_api.c:(.text+0x458): undefined reference to `svm_region_exit'
vpp_api.c:(.text+0x466): undefined reference to `api_main'
main.o: In function `api_snat_interface_dump':
main.c:(.text+0x29f): undefined reference to `vl_msg_api_alloc_as_if_client'
main.c:(.text+0x2fb): undefined reference to `vl_msg_api_send_shmem'
main.c:(.text+0x307): undefined reference to `vat_time_now'
main.c:(.text+0x360): undefined reference to `vat_suspend'
main.c:(.text+0x36c): undefined reference to `vat_time_now'
collect2: error: ld returned 1 exit status
makefile:20: recipe for target 'main' failed
make: *** [main] Error 1
/**/
vpp_api.h
vpp_api.c
main.c
makefile
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-11 Thread Dave Barach (dbarach)
Look in config.log and work out the name of the compiler. Fix in 
.../build-data/platforms/x86_64.mk or override from the command line.

From: nikhil ap [mailto:niks3...@gmail.com]
Sent: Sunday, December 10, 2017 8:43 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave, it doesn't work. After make bootstrap, I did:

make PLATFORM=x86_64

  No cross-compiler found for platform x86_64 target x86_64-mu-linux; try 
make PLATFORM=x86_64 install-tools 
Makefile:635: recipe for target 'dpdk-configure' failed
make[1]: *** [dpdk-configure] Error 1
make[1]: Leaving directory '/home/nikhil/projects/vpp/build-root'
Makefile:322: recipe for target 'build' failed
make: *** [build] Error 2

I also tried

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd build
builds without cross-compilation (maybe because the make bootstrap configured 
the native compiler)

checking for gcc... gcc
checking whether we are cross compiling... no

I guess make PLATFORM= bootstrap  where it configures is generally 
the way of cross-compilation


On Fri, Dec 8, 2017 at 6:20 PM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
Please try this sequence from the top of your workspace:

$ make bootstrap
$ make PLATFORM= build

That’s the “supported, plan-A” scheme. If it doesn’t work, please let us know.

If you specify PLATFORM when building host tools (i.e. vppapigen), it won’t 
work.

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com<mailto:niks3...@gmail.com>]
Sent: Thursday, December 7, 2017 10:58 PM

To: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

It works if I run, "make is_build_tool=yes tools-install" in .../build-root but 
if I specify the platform, I still see the same issue if I try to cross-compile 
tools with  make PLATFORM=x86_64 TAG=x86_64_debug is_build_tool=yes 
tools-install

It is hitting this configuration in ../src/configure.ac

AM_COND_IF([CROSSCOMPILE],
[
  AC_PATH_PROG([VPPAPIGEN], [vppapigen], [no])
  if test "$VPPAPIGEN" = "no"; then
AC_MSG_ERROR([Externaly built vppapigen is needed when cross-compiling...])
  fi
],[


On Tue, Dec 5, 2017 at 8:53 PM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
See also “bootstrap.sh...”

$ make V=0 is_build_tool=yes tools-install

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com<mailto:niks3...@gmail.com>]
Sent: Tuesday, December 5, 2017 9:11 AM
To: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>

Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

I added a file x86_64.mk in .../build-data/platforms/ with the 
following content:

x86_64_arch = x86_64
x86_64_os = rumprun-netbsd
x86_64_target = x86_64-rumprun-netbsd
x86_64_native_tools = vppapigen
x86_64_uses_dpdk = yes

and in the TLD I did a "make PLATFORM=x86_64 TAG=x86_64_debug bootstrap" but I 
am still seeing that vppapigen is not getting built. Any clues?

Thanks,
Nikhil


On Tue, Dec 5, 2017 at 7:05 PM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/platforms/xxx.mk. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...

Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?

2017-12-11 Thread Dave Barach (dbarach)
Folks will miss a clib_warning, unless they check syslog. 

I'd consider returning VNET_API_ERROR_ENTRY_ALREADY_EXISTS and calling it a day.
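
A minimal sketch of that option, against the check quoted further down (whether 
existing callers tolerate the new return value is the open question):

  /* Refuse to silently ignore an ADD when an input ACL is already applied */
  if (is_add &&
      am->classify_table_index_by_sw_if_index[ti][sw_if_index] != ~0)
    return VNET_API_ERROR_ENTRY_ALREADY_EXISTS;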

Thanks… Dave

-Original Message-
From: Andrew  Yourtchenko [mailto:ayour...@gmail.com] 
Sent: Sunday, December 10, 2017 9:04 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: Jon Loeliger <j...@netgate.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?

Dear Dave,

On 12/9/17, Dave Barach (dbarach) <dbar...@cisco.com> wrote:
> This looks wrong... vnet_set_input_acl_intfc(...) at line 93:
>
>   /* Return ok on ADD operation if feature is already enabled */
>   if (is_add &&
>am->classify_table_index_by_sw_if_index[ti][sw_if_index] != ~0)
>  return 0;
>
> It’s been that way for a very long time.

Yeah so I am wondering what's the right approach on fixing it, I see
three alternatives:

1) "set" the new inacl even if there is an existing one applied..
upside: consistent with what "set" means in layman's terms; downside:
bigger change vs. the existing semantics which maybe is masking some
other issues.

2) return an error rather than zero, and let the callers deal with
this. upside: no big change of the semantics. downside: returning an
error might upset some callers that were "accidentally" relying on
this behaviour.

3) stick in a "clib_warning()" saying "This will soon return an error.
The calling code needs to ensure this is handled correctly", and wait
for one or two releases, and have a JIRA for the next release to
*then* do (2) in the next release.

If this behavior has been here sufficiently long, (3) seems like a
safest action..

What do you think ?

--a


>
> Thanks… Dave
>
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On
> Behalf Of Jon Loeliger
> Sent: Saturday, December 9, 2017 11:23 AM
> To: Andrew  Yourtchenko <ayour...@gmail.com>
> Cc: vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?
>
> On Sat, Dec 9, 2017 at 8:16 AM, Andrew  Yourtchenko
> <ayour...@gmail.com<mailto:ayour...@gmail.com>> wrote:
> Jon,
>
> Hi Andrew,
>
> Thanks for taking a look at this issue!
>
> on api trace: does the below work ? (even though the current scenario
> is trivially reproducible, the api traces are very useful for tougher
> cases, and save a lot of typing while storytelling).
>
> DBGvpp# api trace on
>
> . do the things 
>
> DBGvpp# api trace save macip-trace
> API trace saved to /tmp/macip-trace
> DBGvpp# api trace custom-dump /tmp/macip-trace
> SCRIPT: macip_acl_add_replace -1 count 1 count 1 \
>   ipv4 permit \
> src mac 00:00:00:00:00:00 mask 00:00:00:00:00:00 \
> src ip 0.0.0.0/0, \
>
> SCRIPT: macip_acl_interface_add_del sw_if_index 0 acl_index 0 add
> SCRIPT: macip_acl_add_replace 0 count 1 count 1 \
>   ipv4 permit \
> src mac 00:00:00:00:00:00 mask 00:00:00:00:00:00 \
> src ip 0.0.0.0/0, \
>
> I think that is the right sequence.
>
>
> Now, to the issue itself: it's exactly as I described, but with a twist:
> vnet_set_input_acl_intfc(), which is used under the hood to assign the
> inacl on the interfaces, is quite picky - if there is an existing
> inacl applied, it just quietly does nothing. (@DaveB - this kinda
> feels strange, I am not sure what the logic is behind doing that.)
>
> Anyway, rather than debating on why it behaves this way, and,
> especially since we actually are deleting the tables in question, it's
> better to unapply the inacls first, and then reapply them after the
> tables have been recreated.
>
> This solves half the problem for me.  It looks like I can properly
> turn around and remove this ACL from the interface now!
>
> But I still have doubts; or at least I don't understand why the
> three table indices are 3 after initial creation, and 0 after they
> are replaced.
>
> The result is in https://gerrit.fd.io/r/#/c/9772/ - you can verify
> that it addresses the issue.
>
> I've left a comment on the code there.
>
> Despite what Gerrit thinks, this code does compile and run for me!
> So maybe just a "rebuild" request there will allow it to verify?
>
>
> Now, going on to "how exactly did this slip through" - seems the macip
> tests are quite a bit too lenient than they should be. I'll need to
> address that as well, though probably I will split the dot1q/dot1ad
> test cases out, and in the process refactor things a bit... so in the
> interests of your time, maybe 9772 can go with just an actual code
> fix.
>
> I've not read 

Re: [vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

2017-12-07 Thread Dave Barach (dbarach)
Interpret b->current_data, b->current_length, the buffer freelist index, and 
the related vlib_buffer_free_list_t structure. 

In most cases, b->packet_data is actually VLIB_BUFFER_DATA_SIZE (2048) bytes 
long. Look at the related vlib_buffer_free_list_t to know for sure. 

Current_data is a SIGNED offset into b->packet_data[0]. It can be negative by 
as much as VLIB_BUFFER_PRE_DATA_SIZE. Typically, device drivers write the first 
octet of packet data into b->packet_data[0], but devices / device driver 
writers may place data at arbitrary [positive] offsets into b->packet_data.   
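
As a rough sketch, and only if the buffer really does come from a 
VLIB_BUFFER_DATA_SIZE free list rather than a smaller one, the space left for 
appending is:

  /* bytes still unused at the tail of b->packet_data[] */
  i32 tailroom = VLIB_BUFFER_DATA_SIZE - (b->current_data + b->current_length);

For the general case, read the data size out of the vlib_buffer_free_list_t the 
buffer belongs to instead of using the compile-time constant.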

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, December 7, 2017 8:06 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

Hi,

I discovered that the packet generator does not always respect the default 
vlib_buffer_t.data size as defined in buffer.h:

#define VLIB_BUFFER_DATA_SIZE   (2048)

It derives the required buffer size from the individual packet sizes from the 
pcap file - at least that's what happens in 'make test'. In my case it's 256 
bytes.

My question is - what is the easiest way to determine the actual allocated 
vlib_buffer_t.data space at runtime? I want to be able to append some data to a 
buffer but first I would like to make sure that it fits...

Thanks,
Klement


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] memory issues

2017-12-06 Thread Dave Barach (dbarach)
Before we crank up the vppinfra memory leakfinder, etc. etc.: cat /proc/`pidof 
vpp`/maps and have a hard stare at the output.

Configure one step at a time, looking for significant changes in the address 
space layout.
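
One way to put a number on each step (an addition of mine, not part of the 
original advice) is to sum the resident pages between configuration changes:

sudo awk '/^Rss:/ { sum += $2 } END { print sum " kB resident" }' /proc/$(pidof vpp)/smaps

Large jumps in either the maps layout or that total point at the configuration 
step responsible.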

HTH… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, December 5, 2017 9:58 PM
To: 薛欣颖 ; vpp-dev 
Subject: Re: [vpp-dev] memory issues

I agree 5g is large, but I do not think this is the FIB. The default heap maxes 
out much sooner than that. Something else is going on.

For DPDK, “show dpdk buffer” and otherwise “show physmem”.

Chris.

From: 薛欣颖 >
Date: Tuesday, December 5, 2017 at 20:06
To: Chris Luke 
>, vpp-dev 
>
Subject: Re: Re: [vpp-dev] memory issues


Hi Chris,

I see what you mean. I have two other questions:
1. 200k static routes using 5g of memory also seems large; how can I configure 
it to use less physical memory?
2. How can I check the packet buffer memory?

BTW, do you have a test similar to 'the memory size 200k static routes 
use'?

Thanks,
Xyxue


From: Luke, Chris
Date: 2017-12-05 21:43
To: 薛欣颖; vpp-dev
Subject: Re: [vpp-dev] memory issues
You’re misreading top. “Virt” only means the virtual memory footprint of the 
process. This includes unused heap, shared libraries, anonymous mmap() regions 
etc. “RSS” is the resident-in-memory size. It’s actually using 5G.

“show memory” also only shows the heap usage, it does not include packet buffer 
memory.

Chris.

From: > on 
behalf of 薛欣颖 >
Date: Tuesday, December 5, 2017 at 00:51
To: vpp-dev >
Subject: [vpp-dev] memory issues


Hi guys,

I am using vpp v18.01-rc0~241-g4c9f2a8.
I configured 200K static routes. When I 'show memory' in VPP, it reports 
'150+k used', but on my machine almost 15g is used. After deleting the static 
routes, almost 16g of memory is in use.
More info is shown below:

VPP# show memory
Thread 0 vpp_main
heap 0x7fffb58e9000, 1076983 objects, 110755k of 151671k used, 15386k free, 
13352k reclaimed, 16829k overhead, 1048572k capacity
User heap index=0:
heap 0x7fffb58e9000, 1076984 objects, 110755k of 151671k used, 15386k free, 
13352k reclaimed, 16829k overhead, 1048572k capacity
User heap index=1:
heap 0x77ed4000, 2 objects, 128k of 130k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=2:
heap 0x7fffb1e28000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 8188k capacity
User heap index=3:
heap 0x7fffb1628000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 8188k capacity
User heap index=4:
heap 0x7fffaf628000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 32764k capacity
User heap index=5:
heap 0x7fffaf528000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=6:
heap 0x7fffaf428000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=7:
heap 0x7fffaf328000, 2 objects, 120k of 122k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=8:
heap 0x7fffaf228000, 2 objects, 120k of 122k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=9:
heap 0x7fffa7228000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 131068k capacity
User heap index=10:
heap 0x7fff9f228000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 131068k capacity
User heap index=11:
heap 0x7fff9b228000, 2 objects, 16k of 18k used, 92 free, 0 reclaimed, 1k 
overhead, 65532k capacity
User heap index=12:
heap 0x7fff9b028000, 2 objects, 256k of 258k used, 92 free, 0 reclaimed, 1k 
overhead, 2044k capacity
User heap index=13:
heap 0x7fff9ae28000, 2 objects, 240k of 242k used, 92 free, 0 reclaimed, 1k 
overhead, 2044k capacity
User heap index=14:
heap 0x7fff9ad28000, 5 objects, 8k of 10k used, 168 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=15:
heap 0x7fff9ac28000, 5 objects, 8k of 10k used, 168 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=16:
heap 0x7fff9ab28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=17:
heap 0x7fff9a128000, 2 objects, 1k of 3k used, 88 free, 0 reclaimed, 1k 
overhead, 10236k capacity
User heap index=18:
heap 0x7fff9a028000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=19:
heap 0x7fff99f28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=20:
heap 0x7fff99e28000, 2 objects, 2k of 4k 

Re: [vpp-dev] some files are never compiled

2017-12-05 Thread Dave Barach (dbarach)
Merged... I’ll clean out some more junk and push another patch... Thanks… Dave

From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Tuesday, December 5, 2017 10:14 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; vpp-dev@lists.fd.io
Subject: Re: some files are never compiled


Thanks Dave,



I had submitted a pull-request for the smp files here : 
https://gerrit.fd.io/r/#/c/9730/

Please tell me if I should abandon it and let you do a more complete patch (I 
don't think I can judge for all the mentioned files by myself).



Best regards,



--

Gabriel Ganne


From: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>
Sent: Tuesday, December 5, 2017 4:06:09 PM
To: Gabriel Ganne; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: RE: some files are never compiled


Dear Gabriel,



The files mentioned below fall into several buckets:



  *   Code samples which might reasonably move to .../extras
  *   Things we’re not using at the moment, but which would take someone a good 
long time to build from scratch.

 *   The simulated annealing driver in vppinfra/anneal.c is a good example.

  *   Debris which should be removed



I’ll push a change-set to remove debris. Most of it is mine anyhow... ()...



Thanks… Dave



From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Gabriel Ganne
Sent: Tuesday, December 5, 2017 9:52 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] some files are never compiled



Hi,



Following a question by Kevin Wang in VPP-1066, I saw that some files are 
actually never compiled.

Could some external plugin be using them ?

Can (Should) they be removed ?



As an example, I followed the smp.c, and smp_fido.[ch] files.

They have been disabled by commit 01d86c7f6f05938c7d3fe181bd0aa2f75ccdd1df 
(reviewed here: https://gerrit.fd.io/r/#/c/2273/) almost 1.5 years ago.



Here is how I listed them :

for file in $(git find "\.c$"); do

f=`basename $file .c` ;

git grep -q "$f\.c";

if [ $? -eq 1 ] ;  then echo $file ; fi ;

done

src/examples/vlib/plex_test.c
src/tools/g2/mkversion.c
src/vlib/elog_samples.c
src/vlib/parse.c
src/vlib/parse_builtin.c
src/vnet/ethernet/mac_swap.c
src/vnet/fib/fib_entry_src_default.c
src/vnet/ip/ip4_test.c
src/vnet/map/examples/health_check.c
src/vpp/app/sticky_hash.c
src/vppinfra/anneal.c
src/vppinfra/mod_test_hash.c
src/vppinfra/pfhash.c
src/vppinfra/phash.c
src/vppinfra/qhash.c
src/vppinfra/smp.c
src/vppinfra/smp_fifo.c
src/vppinfra/test_pfhash.c
src/vppinfra/test_phash.c
src/vppinfra/test_pool.c
src/vppinfra/test_qhash.c
src/vppinfra/tw_timer_4t_3w_4sl_ov.c
src/vppinfra/unix-kelog.c







--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-05 Thread Dave Barach (dbarach)
See also “bootstrap.sh...”

$ make V=0 is_build_tool=yes tools-install

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com]
Sent: Tuesday, December 5, 2017 9:11 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

I added a file x86_64.mk in .../build-data/platforms/ with the 
following content:

x86_64_arch = x86_64
x86_64_os = rumprun-netbsd
x86_64_target = x86_64-rumprun-netbsd
x86_64_native_tools = vppapigen
x86_64_uses_dpdk = yes

and in the TLD I did a "make PLATFORM=x86_64 TAG=x86_64_debug bootstrap" but I 
am still seeing that vppapigen is not getting built. Any clues?

Thanks,
Nikhil


On Tue, Dec 5, 2017 at 7:05 PM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/platforms/xxx.mk. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...
Makefile:635: recipe for target 'tools-configure' failed
make[1]: *** [tools-configure] Error 1

What is the issue?

On Tue, Dec 5, 2017 at 3:55 PM, nikhil ap 
<niks3...@gmail.com<mailto:niks3...@gmail.com>> wrote:
Hi All,

I am trying to cross-compile vpp. The make doesn't expose a way to pass the 
--host parameter required to configure and build using cross compilation.

Initially, I did the following:

CC=x86_64-rumprun-netbsd-gcc make bootstrap, but I saw the following error

If you meant to cross compile, use `--host'.
See `config.log' for more details

As a work-around based on the config.log, I did this following

/src/configure (Stripped other output ) --build=x86_64-linux-gnu 
--host=x86_64-rumprun-netbsd --target=x86_64-linux-gnu

However,  I saw the following error:
checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...

Is there a way to cleanly cross-compile?


--
Regards,
Nikhil



--
Regards,
Nikhil



--
Regards,
Nikhil
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] some files are never compiled

2017-12-05 Thread Dave Barach (dbarach)
Dear Gabriel,

The files mentioned below fall into several buckets:


  *   Code samples which might reasonably move to .../extras
  *   Things we’re not using at the moment, but which would take someone a good 
long time to build from scratch.
 *   The simulated annealing driver in vppinfra/anneal.c is a good example.
  *   Debris which should be removed

I’ll push a change-set to remove debris. Most of it is mine anyhow... ()...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Tuesday, December 5, 2017 9:52 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] some files are never compiled


Hi,



Following a question by Kevin Wang in 
VPP-1066, I saw that some files are 
actually never compiled.

Could some external plugin be using them ?

Can (Should) they be removed ?



As an example, I followed the smp.c, and smp_fido.[ch] files.
They have been disabled by commit 01d86c7f6f05938c7d3fe181bd0aa2f75ccdd1df 
(reviewed here: https://gerrit.fd.io/r/#/c/2273/) almost 1.5 years ago.



Here is how I listed them :

for file in $(git find "\.c$"); do

f=`basename $file .c` ;

git grep -q "$f\.c";

if [ $? -eq 1 ] ;  then echo $file ; fi ;

done
src/examples/vlib/plex_test.c
src/tools/g2/mkversion.c
src/vlib/elog_samples.c
src/vlib/parse.c
src/vlib/parse_builtin.c
src/vnet/ethernet/mac_swap.c
src/vnet/fib/fib_entry_src_default.c
src/vnet/ip/ip4_test.c
src/vnet/map/examples/health_check.c
src/vpp/app/sticky_hash.c
src/vppinfra/anneal.c
src/vppinfra/mod_test_hash.c
src/vppinfra/pfhash.c
src/vppinfra/phash.c
src/vppinfra/qhash.c
src/vppinfra/smp.c
src/vppinfra/smp_fifo.c
src/vppinfra/test_pfhash.c
src/vppinfra/test_phash.c
src/vppinfra/test_pool.c
src/vppinfra/test_qhash.c
src/vppinfra/tw_timer_4t_3w_4sl_ov.c
src/vppinfra/unix-kelog.c






--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-05 Thread Dave Barach (dbarach)
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/platforms/xxx.mk. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...
Makefile:635: recipe for target 'tools-configure' failed
make[1]: *** [tools-configure] Error 1

What is the issue?

On Tue, Dec 5, 2017 at 3:55 PM, nikhil ap 
> wrote:
Hi All,

I am trying to cross-compile vpp. The make doesn't expose a way to pass the 
--host parameter required to configure and build using cross compilation.

Initially, I did the following:

CC=x86_64-rumprun-netbsd-gcc make bootstrap, but I saw the following error

If you meant to cross compile, use `--host'.
See `config.log' for more details

As a work-around based on the config.log, I did this following

/src/configure (Stripped other output ) --build=x86_64-linux-gnu 
--host=x86_64-rumprun-netbsd --target=x86_64-linux-gnu

However,  I saw the following error:
checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...

Is there a way to cleanly cross-compile?


--
Regards,
Nikhil



--
Regards,
Nikhil
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

2017-11-30 Thread Dave Barach (dbarach)
At least for now, process nodes run on the main thread. See line 1587 of 
.../src/vlib/main.c.

The lldp-process is not super-complicated. Set a gdb breakpoint on line 157 
[switch(event_type)], cause it to do something, and you can walk through it, 
etc.
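
For orientation, a stripped-down process node looks roughly like this (a sketch 
from memory, not a copy of lldp_process_node; adjust names as needed):

static uword
my_process (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
{
  uword *event_data = 0;
  uword event_type;

  while (1)
    {
      /* Suspend; the scheduler on the main thread resumes us on timeout or event */
      vlib_process_wait_for_event_or_clock (vm, 10.0 /* seconds */);
      event_type = vlib_process_get_events (vm, &event_data);
      switch (event_type)
        {
        case ~0:        /* timeout, no event posted */
          break;
        default:        /* handle the event(s) described by event_data */
          break;
        }
      vec_reset_length (event_data);
    }
  return 0;
}

VLIB_REGISTER_NODE (my_process_node) = {
  .function = my_process,
  .type = VLIB_NODE_TYPE_PROCESS,
  .name = "my-process",
};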

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Yeddula, Avinash
Sent: Thursday, November 30, 2017 5:49 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

Hello,

I have a setup with, 1 worker thread (Core 8) and 1 main thread (Core 0).

As I read about the node type VLIB_NODE_TYPE_PROCESS, it says
"The graph node scheduler invokes these processes in much the same way as 
traditional vector-processing run-to-completion graph  nodes".

For example, take a node like "lldp_process_node": as I see it, whenever a 
timeout occurs or an event has been generated, a frame is sent out of an 
interface. The questions I have are:


  1.  The part I'm not able to figure out yet is: where (on which thread/core) 
is this "lldp_process_node" running in the background? I'm assuming it cannot 
be a worker thread.


  2.  Would you please point me to the piece of code in vpp infra that 
schedules all nodes of type "VLIB_NODE_TYPE_PROCESS"?


  3.  I tried to turn on a few debugs like "VLIB_BUFFER_TRACE_TRAJECTORY" and a 
few other ones. None of them seems to generate any traces/logs (show trace 
doesn't give me any info). Any pointers on how to enable relevant logs for 
this activity?

Thanks
-Avinash

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] problem in elog format

2017-11-30 Thread Dave Barach (dbarach)
Hmmm. I’ve never seen that issue, although I haven’t run c2cpel in a while. 
I’ll take a look later today.

It looks like .../src/perftool.am builds it, so look under 
build-root/install-xxx and (possibly) install it manually...
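
Until then, a workaround that usually gets a freshly built tool running (a 
guess at the layout; locate the library first) is:

find build-root -name 'libcperf.so*'
export LD_LIBRARY_PATH=<directory found above>:$LD_LIBRARY_PATH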

Thanks… Dave

From: Juan Salmon [mailto:salmonju...@gmail.com]
Sent: Thursday, November 30, 2017 12:50 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: Florin Coras <fcoras.li...@gmail.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in elog format

Thanks a lot,
Now I want to convert the elog file to a text file.
I compiled perftools in the test directory, but when running the c2cpel tool, the 
following error occurred:

c2cpel: error while loading shared libraries: libcperf.so.0: cannot open shared 
object file: No such file or directory

Best Regards,
Juan Salmon.

On Wed, Nov 29, 2017 at 3:53 PM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:

PMFJI, but we have organized schemes for capturing, serializing, and eventually 
displaying string data.



Please note: a single "format" call will probably cost more than the entire 
clock-cycle budget available to process a packet. Really. Seriously. Printfs 
(aka format calls) in the packet-processing path are to be avoided at all 
costs. The basic event-logger modus operandi is to capture binary data and 
pretty-print it offline.



At times, one will need or want to log string data. Here's how to proceed:



The printf-like function elog_string(...) adds a string to the event log string 
heap, and returns a cookie which offline tools use to print that string. The 
"T" format specifier in an event definition means "go print the string at the 
indicated u32 string heap offset”. Here’s an example:



  /* *INDENT-OFF* */
  ELOG_TYPE_DECLARE (e) =
  {
    .format = "serialize-msg: %s index %d",
    .format_args = "T4i4",
  };
  struct
  {
    u32 c[2];
  } *ed;
  ed = ELOG_DATA (mc->elog_main, e);
  ed->c[0] = elog_id_for_msg_name (mc, msg->name);
  ed->c[1] = si;



So far so good, but let’s do a bit of work to keep from blowing up the string 
heap:



static u32
elog_id_for_msg_name (mc_main_t * m, char *msg_name)
{
  uword *p, r;
  uword *h = m->elog_id_by_msg_name;
  u8 *name_copy;

  if (!h)
    h = m->elog_id_by_msg_name = hash_create_string (0, sizeof (uword));

  p = hash_get_mem (h, msg_name);
  if (p)
    return p[0];
  r = elog_string (m->elog_main, "%s", msg_name);

  name_copy = format (0, "%s%c", msg_name, 0);

  hash_set_mem (h, name_copy, r);
  m->elog_id_by_msg_name = h;

  return r;
}



As in: each unique string appears exactly once in the event-log string heap. 
Hash_get_mem (x) is way cheaper than printf(x). Please remember that this hash 
flavor is not inherently thread-safe.



In the case of enumerated strings, use the “t” format specifier. It only costs 
1 octet to represent up to 256 constant strings:



  ELOG_TYPE_DECLARE (e) =
  {
    .format = "my enum: %s",
    .format_args = "t1",
    .n_enum_strings = 2,
    .enum_strings =
    {
      "string 1",
      "string 2",
    },
  };
  struct
  {
    u8 which;
  } *ed;
  ed = ELOG_DATA (&vlib_global_main.elog_main, e);
  ed->which = which;





HTH… Dave



-Original Message-
From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of Florin Coras
Sent: Wednesday, November 29, 2017 4:43 AM
To: Juan Salmon <salmonju...@gmail.com<mailto:salmonju...@gmail.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] problem in elog format



Hi Juan,



We don’t typically use elogs to store strings, still, you may be able to get it 
to run with:



struct

{

u8 err[20];

} * ed;



And then copy your data to err: clib_memcpy (ed->err, your_vec, vec_len 
(your_vec)). Make sure your vec is 0 terminated.



HTH,

Florin



> On Nov 28, 2017, at 9:12 PM, Juan Salmon 
> <salmonju...@gmail.com<mailto:salmonju...@gmail.com>> wrote:

>

>

> I want to use event-log and send string to one of elements of ed struct.

> but the result is not correct.

>

> the sample code:

>

> ELOG_TYPE_DECLARE (e) = {

> .format = "Test LOG: %s",

> .format_args = "s20",

> };

> struct

> {

> u8 * err;

> } * ed;

>

>

> vlib_worker_thread_t * w = vlib_worker_threads + cpu_index;

> ed = ELOG_TRACK_DATA (_global_main.elog_main, e, w->elog_track);

>

>

[vpp-dev] Frequently-asked questions wiki page

2017-11-29 Thread Dave Barach (dbarach)
Folks,

Please see https://wiki.fd.io/view/VPP/FAQ. Additions welcome. I decided to 
start with a personal favorite...

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] problem in elog format

2017-11-29 Thread Dave Barach (dbarach)
PMFJI, but we have organized schemes for capturing, serializing, and eventually 
displaying string data.



Please note: a single "format" call will probably cost more than the entire 
clock-cycle budget available to process a packet. Really. Seriously. Printfs 
(aka format calls) in the packet-processing path are to be avoided at all 
costs. The basic event-logger modus operandi is to capture binary data and 
pretty-print it offline.



At times, one will need or want to log string data. Here's how to proceed:



The printf-like function elog_string(...) adds a string to the event log string 
heap, and returns a cookie which offline tools use to print that string. The 
"T" format specifier in an event definition means "go print the string at the 
indicated u32 string heap offset”. Here’s an example:



  /* *INDENT-OFF* */
  ELOG_TYPE_DECLARE (e) =
  {
    .format = "serialize-msg: %s index %d",
    .format_args = "T4i4",
  };
  struct
  {
    u32 c[2];
  } *ed;
  ed = ELOG_DATA (mc->elog_main, e);
  ed->c[0] = elog_id_for_msg_name (mc, msg->name);
  ed->c[1] = si;



So far so good, but let’s do a bit of work to keep from blowing up the string 
heap:



static u32
elog_id_for_msg_name (mc_main_t * m, char *msg_name)
{
  uword *p, r;
  uword *h = m->elog_id_by_msg_name;
  u8 *name_copy;

  if (!h)
    h = m->elog_id_by_msg_name = hash_create_string (0, sizeof (uword));

  p = hash_get_mem (h, msg_name);
  if (p)
    return p[0];
  r = elog_string (m->elog_main, "%s", msg_name);

  name_copy = format (0, "%s%c", msg_name, 0);

  hash_set_mem (h, name_copy, r);
  m->elog_id_by_msg_name = h;

  return r;
}



As in: each unique string appears exactly once in the event-log string heap. 
Hash_get_mem (x) is way cheaper than printf(x). Please remember that this hash 
flavor is not inherently thread-safe.



In the case of enumerated strings, use the “t” format specifier. It only costs 
1 octet to represent up to 256 constant strings:



  ELOG_TYPE_DECLARE (e) =
  {
    .format = "my enum: %s",
    .format_args = "t1",
    .n_enum_strings = 2,
    .enum_strings =
    {
      "string 1",
      "string 2",
    },
  };
  struct
  {
    u8 which;
  } *ed;
  ed = ELOG_DATA (&vlib_global_main.elog_main, e);
  ed->which = which;





HTH… Dave



-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Wednesday, November 29, 2017 4:43 AM
To: Juan Salmon 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in elog format



Hi Juan,



We don’t typically use elogs to store strings, still, you may be able to get it 
to run with:



struct

{

u8 err[20];

} * ed;



And then copy your data to err: clib_memcpy (ed->err, your_vec, vec_len 
(your_vec)). Make sure your vec is 0 terminated.



HTH,

Florin



> On Nov 28, 2017, at 9:12 PM, Juan Salmon 
> > wrote:

>

>

> I want to use event-log and send string to one of elements of ed struct.

> but the result is not correct.

>

> the sample code:

>

> ELOG_TYPE_DECLARE (e) = {

> .format = "Test LOG: %s",

> .format_args = "s20",

> };

> struct

> {

> u8 * err;

> } * ed;

>

>

> vlib_worker_thread_t * w = vlib_worker_threads + cpu_index;

> ed = ELOG_TRACK_DATA (_global_main.elog_main, e, w->elog_track);

>

> ed->err = format (0,"%s", "This is a Test");

>

>

> Could you please help me?

>

>

> Best Regards,

> Juan Salmon.

> ___

> vpp-dev mailing list

> vpp-dev@lists.fd.io

> https://lists.fd.io/mailman/listinfo/vpp-dev



___

vpp-dev mailing list

vpp-dev@lists.fd.io

https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How to enable RSS in VPP

2017-11-28 Thread Dave Barach (dbarach)
You are sending traffic with more than one flow, correct?
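
If the test traffic really is a single 5-tuple, RSS will hash everything onto 
one queue by design. With multiple flows, the dpdk stanza in startup.conf 
controls the queue count; roughly (my assumption about the exact syntax, 
substitute your PCI address):

dpdk {
  dev 0000:32:00.0 {
    num-rx-queues 4
  }
}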

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saxena, Nitin
Sent: Tuesday, November 28, 2017 11:45 AM
To: vpp-dev@lists.fd.io
Cc: Athreya, Narayana Prasad 
Subject: [vpp-dev] How to enable RSS in VPP

Hi,

I am using a ConnectX-4 NIC which has Rx RSS support; however, I can see VPP is 
not using the RSS feature with this NIC.
The NIC is getting traffic on 1 queue only. Can this be fixed in VPP? If yes, 
how?

Output from show hardware detail

==
UnknownEthernet32/0/0  1 up   UnknownEthernet32/0/0
  Ethernet address 24:8a:07:a4:6b:78
  Mellanox ConnectX-4 Family
carrier up full duplex speed 4 mtu 9216  promisc
pci id:device 15b3:1013 subsystem 15b3:0008
pci address:   :32:00.00
max rx packet len: 65536
max num of queues: rx 65535 tx 65535
promiscuous:   unicast on all-multicast on
vlan offload:  strip off filter off qinq off
rx offload caps:   vlan-strip ipv4-cksum udp-cksum tcp-cksum
tx offload caps:   vlan-insert ipv4-cksum udp-cksum tcp-cksum 
outer-ipv4-cksum
rss active:ipv4-udp
rss supported: none
rx queues 4, rx desc 1024, tx queues 5, tx desc 1024
cpu socket 0

tx frames ok 31003987272
tx bytes ok1860239236320
rx frames ok 63884415232
rx bytes ok3833064913920
extended stats:
  rx good packets63884415232
  tx good packets31003987272
  rx good bytes3833064913920
  tx good bytes1860239236320
  rx errors0
  tx errors0
  rx mbuf allocation errors0
  rx q0packets 0
  rx q0bytes   0
  rx q0errors  0
  rx q1packets 0
  rx q1bytes   0
  rx q1errors  0
  rx q2packets 0
  rx q2bytes   0
  rx q2errors  0
  rx q3packets   63884415232
  rx q3bytes   3833064913920
  rx q3errors  0
  tx q0packets 0
  tx q0bytes   0
  tx q1packets   31003987272
  tx q1bytes   1860239236320
  tx q2packets 0
  tx q2bytes   0
  tx q3packets 0
  tx q3bytes   0
  tx q4packets 0
  tx q4bytes   0


Regards,
Nitin
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

2017-11-27 Thread Dave Barach (dbarach)
Laying aside the out-of-memory issue for a minute: can you explain the vpp 
deployment you have in mind?

Given where vpp would fit in a normal network design, I’m not seeing why you’d 
want to go with a full vlan / VRF’s mesh.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Balaji Kn
Sent: Monday, November 27, 2017 4:32 AM
To: vpp-dev 
Subject: [vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

Hello,

I am using VPP 17.07 and initialized heap memory as 3G in startup configuration.
My use case is to have 4k sub-interfaces differentiated by VLAN, and to 
associate each sub-interface with a unique VRF, eventually using 4k FIBs.

However, I am observing that VPP crashes with a memory crunch while adding an 
IP route.

backtrace
#0  0x7fae4c981cc9 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7fae4c9850d8 in __GI_abort () at abort.c:89
#2  0x004070b3 in os_panic ()
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7fae4d19007a in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1,
align_offset=, align=64, size=1454172096)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=v@entry=0x7fade2c44880, 
length_increment=length_increment@entry=1,
data_bytes=, header_bytes=, 
header_bytes@entry=24,
data_align=data_align@entry=64)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.c:84
#5  0x7fae4db9210c in _vec_resize (data_align=, 
header_bytes=,
data_bytes=, length_increment=, v=)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.h:142

I initially suspected the FIB was consuming most of the heap space, but I do 
not see much memory consumed by the FIB tables either, and felt 3GB of heap 
should be sufficient.

vpp# show fib memory
FIB memory
 Name   Size  in-use /allocated   totals
 Entry   7260010 /  60010 4320720/4320720
 Entry Source3268011 /  68011 2176352/2176352
 Entry Path-Extensions   60  0   /0   0/0
multicast-Entry 1924006  /   4006 769152/769152
   Path-list 4860016 /  60016 2880768/2880768
   uRPF-list 1676014 /  76015 1216224/1216240
 Path8060016 /  60016 4801280/4801280
  Node-list elements 2076017 /  76019 1520340/1520380
Node-list heads  8 68020 /  68020 544160/544160

Is there any way to identify usage of heap memory in other modules?
Any pointers would be helpful.

Regards,
Balaji
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Import/includes in .api files

2017-11-20 Thread Dave Barach (dbarach)
Would some variant of the usual C / C++ guitar lick work?

#ifndef __defined_my_types
#define __defined_my_types 
#include 
#endif /* __defined_my_types */



-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: Monday, November 20, 2017 10:32 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: Neale Ranns (nranns) <nra...@cisco.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Import/includes in .api files

Dave,

> Since the beginning of time, we've been running .api files through the C 
> preprocessor. Put all of your "typeonly..." definitions in a file, and 
> #include it. Should work immediately.
> 
> Thanks to Damjan, there's only one copy of the suffix rule, in 
> .../src/suffix-rules.mk. Here's the relevant rule:
> 
> %.api.h: %.api @VPPAPIGEN@
>   @echo "  APIGEN  " $@ ; \
>   mkdir -p `dirname $@` ; \
>   $(CC) $(CPPFLAGS) -E -P -C -x c $<  \
>   | @VPPAPIGEN@ --input - --output $@ --show-name $@ > /dev/null

Sorry, for misunderstanding, this seems to work perfectly fine with the 
language bindings.

Verified by moving fib_path to types.api and comparing the resulting 
ip.api.json.

Need to figure how to deal with duplicates, which will end up in the .JSON 
definitions when multiple .api include the same file.
That shouldn't be a big deal though.

Best regards,
Ole
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Import/includes in .api files

2017-11-20 Thread Dave Barach (dbarach)
Dear Neale,

Since the beginning of time, we've been running .api files through the C 
preprocessor. Put all of your "typeonly..." definitions in a file, and #include 
it. Should work immediately.

Thanks to Damjan, there's only one copy of the suffix rule, in 
.../src/suffix-rules.mk. Here's the relevant rule:

%.api.h: %.api @VPPAPIGEN@
@echo "  APIGEN  " $@ ; \
mkdir -p `dirname $@` ; \
$(CC) $(CPPFLAGS) -E -P -C -x c $<  \
| @VPPAPIGEN@ --input - --output $@ --show-name $@ > /dev/null
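
To make the idea concrete, a minimal sketch (the file names and field layout 
here are invented for illustration):

/* my_types.api -- shared, typeonly definitions */
typeonly define my_prefix {
  u8 address[16];
  u8 length;
};

/* consumer.api -- the preprocessor pulls the types in before vppapigen runs */
#include <vnet/my/my_types.api>

define my_route_add {
  u32 client_index;
  u32 context;
  vl_api_my_prefix_t prefix;
};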

HTH… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Neale Ranns (nranns)
Sent: Monday, November 20, 2017 3:28 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] RFC: Import/includes in .api files


Hi All,

I’d like to be able to re-use types defined in one .api file in many other .api 
files. My specific objective is to re-use a fib_path_t across the many APIs 
that describe a destination to which to send packets.

My first attempt at this is:
  https://gerrit.fd.io/r/#/c/9489/

I updated vppapigen to accept the keyword ‘import’, munch the subsequent 
string, and then generate the #include in the resulting .api.h. then the fun 
started… multiple type definitions, include guards, here be dragons, turn back 
now and seek assistance.
I later realised that an import statement is not required. If I create 
vnet/fib/fib.api and add it to vnet_all_api_h.h at the top, then that has some 
success. However, no import statement is not so friendly to other tools that 
parse the .api files.

So an RFC that is really an RFH; how is it best to approach this?

Regards,
Neale


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Please Call DigSafe...

2017-11-17 Thread Dave Barach (dbarach)
Dear Chris,

As you probably worked out, the forcing function for my email was a patch that 
both Florin and I -2'ed yesterday; a real stinker.

I want to facilitate discussions of the form: "I'd like to implement X or fix 
Y. What's the right way to do it? Who should I talk to about that?"

Guidelines seem like a good idea. I'll try to write something on the wiki.

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Friday, November 17, 2017 8:51 AM
To: Dave Barach ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Please Call DigSafe...

Hi Dave,

After spending a few minutes to work out that you were talking about a proposed 
patch and not something any of us had merged (and, especially not that I 
merged!), I see that what we need is a balance between not discouraging people 
to experiment, or submit their ideas, but to also steer people towards relevant 
leads before they get in too deep.

Problem is, if people make huge patches before ever talking to someone, our 
first contact is when they submit it. The teaching moment is when the reviewer 
notices it. That is obviously too late for the first patch, but should help 
with subsequent work.

This is why open source generally prefers people to keep their patches small 
and thematic; most reviewers tire of seeing many large patches when they are 
developed in isolation and are directionally unsound - to the point that they 
start to see the color bar in the review list and if it's yellow-or-worse, and 
not from someone they specifically associate with quality work, typically those 
submissions end up ignored.

I don't think we have contribution guidelines for VPP or fd.io in general 
(apart from the style and doc guides); at least a very quick scan of the wiki 
was not fruitful. We should have somewhere to send new people (can we nudge 
people who login to Gerrit for the first time?), and also people whose first 
submission is unacceptable (too big, too complex, directionally unsound). And 
we as reviewers should remain vigilant and, importantly, consistent.

Chris.


> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On
> Behalf Of Dave Barach
> Sent: Friday, November 17, 2017 7:45
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Please Call DigSafe...
> 
> Folks,
> 
> At our next project meeting, I'd like to spend a few minutes talking about a
> good-news / bad-news situation affecting the vpp project.
> 
> As the community has expanded, committers have begun noticing
> unacceptable and unfixable patches in mission-critical code. Yesterday's
> soap-opera episode involved the ip4/6 speed-paths.
> 
> I think we should allocate a bit of meeting time for folks to talk about what
> they're trying to develop, with an eye towards engaging with relevant area
> experts from the start.
> 
> In most places in the US, folks planning to dig holes on their property are
> required to call 811 (DigSafe): to avoid hitting buried gas lines and blowing 
> up
> the neighborhood. It seems like we need to create something
> similar for the vpp project.
> 
> Thoughts?
> 
> Thanks... Dave
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Discussion Topic: creating demo branches in git.fd.io/vpp

2017-11-15 Thread Dave Barach (dbarach)
+1... 

-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: Wednesday, November 15, 2017 5:49 PM
To: Dave Wallace <dwallac...@gmail.com>
Cc: Ed Warnicke <hagb...@gmail.com>; Dave Barach (dbarach) <dbar...@cisco.com>; 
Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; 
John Lo (loj) <l...@cisco.com>; Luke, Chris <chris_l...@comcast.com>; Damjan 
Marion <dmarion.li...@gmail.com>; Neale Ranns (nranns) <nra...@cisco.com>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Discussion Topic: creating demo branches in git.fd.io/vpp

Just as a data-point.
What we have done for IETF hackathons is to create a branch on github. E.g:

https://github.com/vpp-dev/vpp/tree/ietf100-nat

This allows us to do "high speed collaboration". Then cherry pick what has 
value after the event.
Perhaps something similar could be done for "demos"?

Cheers,
Ole

> On 16 Nov 2017, at 06:17, Dave Wallace <dwallac...@gmail.com> wrote:
> 
> Folks,
> 
> Per the action item from this yesterday's VPP weekly meeting, I'm asking for 
> opinions from the VPP community on allowing the creation of demo branches in 
> the VPP git repo.
> 
> The definition of a demo branch is defined as a branch pulled from master 
> that:
> 
> 1) Purpose is to demonstrate a VPP use case at a public conference/symposium 
> (kubecon 2017 as the 1st instance).
> 2) The branch will never be merged back into master.
> 3) Commits to the branch will be cherry-picked/double-committed to master.
> 
> Some comments I recall from memory (please forgive me if I have left any 
> comments out):
> 
> Pro: Will allow utilization of LF infra to utilize CI process
> Pro: Will allow publishing of demo artifacts for ease of reproduction of the 
> demo.
> Con: Will pollute repo with ephemeral code that will rapidly become out of 
> date / dead.
> Con: Sets precedent which may cause large numbers of non-production branches 
> over time.
> 
> Please feel free to add additional Pro/Con comments here.  Comments are welcome from 
> all members of the VPP community.
> 
> I will begin with my thoughts since yesterday's meeting:
> 
> Con: In order for the CI infra to be utilized, the addition of demo branch 
> specific jenkins jobs needs to be added to ci-management (polluting that repo 
> as well).
> Con: May add overhead to CSIT project in triaging any CSIT failures on the 
> demo branch.
> Con: Adds overhead to already over-subscribed committer task workload 
> (reviewing commits to demo branch & double commits to main)
> 
> 
> IMHO, this proposal has the potential to cause the VPP committer workload to 
> spiral out of control thus disrupting the regular release cadence.
> 
> 
> @Ed,
> 
> It might be a good idea to include the ci-management and CSIT projects in 
> this discussion since those projects may be affected by this proposal.  I'll 
> let you decide whether or not to add additional projects to the discussion?
> 
> As the TSC Chairperson, I will let you decide when to close the discussion 
> and call for a vote of the committers (whom I addressed directly on this 
> email).
> 
> 
> Thanks,
> -daw-
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Discussion Topic: creating demo branches in git.fd.io/vpp

2017-11-15 Thread Dave Barach (dbarach)
I have a mild preference that we avoid creating demo branches in the master vpp 
repo. When faced with similar requirements, I’ve added branch(es) to ephemeral 
downstream mirrors.

I suppose one could destroy demo branches in the master vpp repo, but 
“something could go wrong...”

Thoughts?

Thanks... Dave

From: Dave Wallace [mailto:dwallac...@gmail.com]
Sent: Wednesday, November 15, 2017 5:17 PM
To: Ed Warnicke <hagb...@gmail.com>; Dave Barach (dbarach) <dbar...@cisco.com>; 
Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; 
John Lo (loj) <l...@cisco.com>; Luke, Chris <chris_l...@comcast.com>; Damjan 
Marion <dmarion.li...@gmail.com>; Neale Ranns (nranns) <nra...@cisco.com>; Ole 
Troan (otroan) <otr...@cisco.com>; vpp-dev@lists.fd.io
Subject: Discussion Topic: creating demo branches in git.fd.io/vpp

Folks,

Per the action item from this yesterday's VPP weekly meeting, I'm asking for 
opinions from the VPP community on allowing the creation of demo branches in 
the VPP git repo.

The definition of a demo branch is defined as a branch pulled from master that:

1) Purpose is to demonstrate a VPP use case at a public conference/symposium 
(kubecon 2017 as the 1st instance).
2) The branch will never be merged back into master.
3) Commits to the branch will be cherry-picked/double-committed to master.

Some comments I recall from memory (please forgive me if I have left any 
comments out):

Pro: Will allow utilization of LF infra to utilize CI process
Pro: Will allow publishing of demo artifacts for ease of reproduction of the 
demo.
Con: Will pollute repo with ephemeral code that will rapidly become out of date 
/ dead.
Con: Sets precedent which may cause large numbers of non-production branches 
over time.

Please feel free to add additional Pro/Con comments here.  Comments are welcome from 
all members of the VPP community.

I will begin with my thoughts since yesterday's meeting:

Con: In order for the CI infra to be utilized, the addition of demo branch 
specific jenkins jobs needs to be added to ci-management (polluting that repo 
as well).
Con: May add overhead to CSIT project in triaging any CSIT failures on the demo 
branch.
Con: Adds overhead to already over-subscribed committer task workload 
(reviewing commits to demo branch & double commits to main)


IMHO, this proposal has the potential to cause the VPP committer workload to 
spiral out of control thus disrupting the regular release cadence.


@Ed,

It might be a good idea to include the ci-management and CSIT projects in this 
discussion since those projects may be affected by this proposal.  I'll let you 
decide whether or not to add additional projects to the discussion?

As the TSC Chairperson, I will let you decide when to close the discussion and 
call for a vote of the committers (whom I addressed directly on this email).


Thanks,
-daw-
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] TCP Options: tcp_header_t and tcp_options_t

2017-11-14 Thread Dave Barach (dbarach)
Dear Justin,

Brief commercial: hopefully you added your node to the ip4 unicast feature arc, 
configured to grab pkts, pre-ip4/6-lookup. 

In feature-arc land, the following one-liner sets next0 so pkts will visit the 
next enabled feature. The last node in the ip4-unicast feature arc is 
ip4-lookup...

  /* Next node in unicast feature arc */
  vnet_get_config_data (em->config_main[table_index],
                        &b0->current_config_index, &next0,
                        /* # bytes of config data */ 0);

Check the ip protocol and ignore any non-TCP pkts:

  ip40 = vlib_buffer_get_current (b0);
  if (ip40->protocol != IP_PROTOCOL_TCP)
goto trace0;

Then use ip4_next_header() to find the tcp layer, etc. etc.
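
Untested sketch, just to give the flavor of walking the options in place once
you have the tcp header - b0 is your buffer, and a real version needs proper
length validation plus a checksum fix-up if you rewrite anything:

  ip4_header_t *ip40 = vlib_buffer_get_current (b0);
  tcp_header_t *tcp0 = ip4_next_header (ip40);
  u8 *opts = (u8 *) (tcp0 + 1);
  int n_opt_bytes = tcp_header_bytes (tcp0) - sizeof (tcp_header_t);
  int i = 0;

  while (i < n_opt_bytes)
    {
      u8 kind = opts[i];
      if (kind == 0)            /* EOL: end of option list */
        break;
      if (kind == 1)            /* NOP: single-byte padding */
        {
          i++;
          continue;
        }
      u8 len = opts[i + 1];     /* all other options carry a length byte */
      if (len < 2 || i + len > n_opt_bytes)
        break;                  /* malformed option, bail out */
      /* modify or skip opts[i] ... opts[i + len - 1] here */
      i += len;
    }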

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Justin Iurman
Sent: Monday, November 13, 2017 5:30 PM
To: vpp-dev 
Subject: [vpp-dev] TCP Options: tcp_header_t and tcp_options_t

Guys,

My node is located right before ip4_lookup. What's the fastest/cleanest way to 
get options related to a TCP packet, having access to a tcp_header_t structure 
(which is not directly linked to its options) ? Actually, I'd like to modify or 
remove some options on the fly. 

Do I have to call tcp_options_parse function from src/vnet/tcp/tcp_input.c ? 
But I guess it would duplicate the job, since it is already called at one 
moment. 

Or should I get the TCP connection, which connects both tcp_header_t and 
tcp_options_t ? Or should I directly modify options "in" the packet, by moving 
the data pointer (a sort-of copy of what tcp_options_parse already does) ?

Thanks for your help !

Justin
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] vlib_validate_buffer_enqueue

2017-11-13 Thread Dave Barach (dbarach)
Dear Justin,

Quad-loops are generally not effective for table-lookup-intensive tasks. At a 
certain point, gcc runs out of registers and starts putting hot variables onto 
the stack. I've converted a number of dual loops into quad loops, only to 
discover that they're no faster than the dual loop version.

Rather than having the sample plugin propagate a bunch of "fetch me a rock" 
coding work, I went with a dual-single loop. When doing new development, I shut 
off the dual loop, make the single loop work, then build the dual (or quad) 
loop. 

With experience, building a dual (or quad) loop becomes a mechanical exercise 
easily done during a boring meeting. ()... 

In viable quad-loop use-cases, it's not worth any performance to also provide a 
dual loop. The dual-loop code will run at most one time; there's no chance of 
fixed overhead amortization. 
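
For reference, the single-loop skeleton - from which the dual (or quad) loop
is the mechanical copy mentioned above - looks roughly like this; names such
as MY_NEXT_DEFAULT are placeholders, not code from the tree:

  while (n_left_from > 0)
    {
      u32 n_left_to_next;

      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
        {
          u32 bi0 = from[0];
          vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
          u32 next0 = MY_NEXT_DEFAULT;

          /* speculatively enqueue b0 to the current next frame */
          to_next[0] = bi0;
          from += 1;
          to_next += 1;
          n_left_from -= 1;
          n_left_to_next -= 1;

          /* ... per-packet work goes here, possibly changing next0 ... */

          /* fix up the speculative enqueue if next0 != next_index */
          vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                           to_next, n_left_to_next,
                                           bi0, next0);
        }

      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }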

Thanks… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Justin Iurman
Sent: Monday, November 13, 2017 5:51 AM
To: vpp-dev 
Subject: [vpp-dev] vlib_validate_buffer_enqueue

Hey guys,

In buffer_node.h, there are the following macros:
- vlib_validate_buffer_enqueue_x1
- vlib_validate_buffer_enqueue_x2
- vlib_validate_buffer_enqueue_x4

In a node, I was just wondering what the idea behind these is? Is it for
speed? I mean, you're obviously faster if you process 4 packets horizontally
than one after the other. Why then, in the sample plugin, is the "x4" version
not used? A "perfect" plugin would use each of them to cover each case, right?
Also, why is there no "x8" (or more) version? I guess it's either for
performance reasons or to stop at a specific ceiling.

Thanks !

Justin
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] make test-all

2017-11-13 Thread Dave Barach (dbarach)
Try increasing the size of the shared-memory API segment. An allocation of 25 MB 
is failing. You might ask yourself how sane it is to generate that much output. 
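
If you want to try it, the relevant knobs live in the api-segment stanza of
startup.conf - something like the following, where the sizes are only an
example to illustrate the syntax:

  api-segment {
    global-size 64M
    api-size 16M
  }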

Thanks… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Sent: Monday, November 13, 2017 5:27 AM
To: John Lo (loj) ; Pavel Kotucek -X (pkotucek - PANTHEON 
TECHNOLOGIES at Cisco) ; vpp-dev@lists.fd.io; Brian Brooks 

Subject: Re: [vpp-dev] make test-all

So it seems that vpp coredumps while dumping the API trace after
creating all the interfaces...

(gdb) bt
#0  0x7f14f4b1e428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7f14f4b2002a in __GI_abort () at abort.c:89
#2  0x00405d83 in os_panic () at 
/home/ksekera/vpp/build-data/../src/vpp/vnet/main.c:268
#3  0x7f14f5fe5f86 in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=0, align=1, size=25282098)
at /home/ksekera/vpp/build-data/../src/vppinfra/mem.h:105
#4  clib_mem_alloc (size=25282098) at 
/home/ksekera/vpp/build-data/../src/vppinfra/mem.h:114
#5  vl_msg_api_alloc_internal (may_return_null=0, pool=, 
nbytes=25282098)
at /home/ksekera/vpp/build-data/../src/vlibmemory/memory_shared.c:176
#6  vl_msg_api_alloc (nbytes=nbytes@entry=25282082) at 
/home/ksekera/vpp/build-data/../src/vlibmemory/memory_shared.c:207
#7  0x00411392 in vl_api_cli_inband_t_handler (mp=0x300e2a0c) at 
/home/ksekera/vpp/build-data/../src/vpp/api/api.c:223
#8  0x7f14f5fdfa23 in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x7f14f620d460 , the_msg=the_msg@entry=0x300e2a0c,
vm=vm@entry=0x7f14f5fd6260 , 
node=node@entry=0x7f14b410e000) at 
/home/ksekera/vpp/build-data/../src/vlibapi/api_shared.c:508
#9  0x7f14f5fef35f in memclnt_process (vm=0x7f14f5fd6260 
, node=0x7f14b410e000, f=)
at /home/ksekera/vpp/build-data/../src/vlibmemory/memory_vlib.c:970

(gdb) p input
$5 = {buffer = 0x7f14b56f6558 "dump 
/tmp/vpp-unittest-P2PEthernetAPI-qRwMY6/vpp_api_trace.test_p2p_subif_creation_10k.log\n",
  index = 18446744073709551615, buffer_marks = 0x7f14b592a240, fill_buffer = 
0x0, fill_buffer_arg = 0x0}

I'm pretty sure that the history of this mess was:

1.) the test was added first as enhanced
2.) automatic dump of api trace was added, but only tested against 'make test', 
not 'make test-all'

Thanks,
Klement

Quoting Klement Sekera (2017-11-11 22:12:52)
> Hi Brian,
> 
> it should. Though I just tried running it on latest master and got a
> timeout in test_p2p_ethernet, which shouldn't happen. I see the test was
> trying to create tens of thousands of interfaces... maybe something is
> slower than usual?
> 
> Thanks,
> Klement
> 
> Quoting Brian Brooks (2017-11-11 01:11:47)
> >Should “make test-all” pass?
> > 
> > 
> > 
> >Thanks,
> > 
> >Brian
> > 
> > 
> > 
> >IMPORTANT NOTICE: The contents of this email and any attachments are
> >confidential and may also be privileged. If you are not the intended
> >recipient, please notify the sender immediately and do not disclose the
> >contents to any other person, use it for any purpose, or store or copy 
> > the
> >information in any medium. Thank you.
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] multi-core multi-threading performance

2017-11-08 Thread Dave Barach (dbarach)
Please write up what you’ve done, and provide a pointer to your code.

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Wednesday, November 8, 2017 1:19 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale Ranns 
(nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
Subject: Re: multi-core multi-threading performance

Hi all,

Any help/ideas on how we can have a better performance using multi-cores is 
appreciated.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 8:10 AM, Pragash Vijayaragavan 
<pxv3...@g.rit.edu<mailto:pxv3...@g.rit.edu>> wrote:
OK, now I provisioned 4 rx queues for 4 worker threads and yes, all workers
are processing traffic, but the lookup rate has dropped; I am getting fewer
packets than when it was 2 workers.

I tried configuring 4 tx queues as well, still same problem (low packets 
received compared to 2 workers).



Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662<tel:(585)%20764-4662>


On Mon, Nov 6, 2017 at 8:00 AM, Pragash Vijayaragavan 
<pxv3...@g.rit.edu<mailto:pxv3...@g.rit.edu>> wrote:
Just 1, let me change it to 2 may be 3 and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662<tel:(585)%20764-4662>


On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
How many RX queues did you provision? One per worker, or no supper...

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu<mailto:pxv3...@rit.edu>]
Sent: Monday, November 6, 2017 7:36 AM

To: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Marshall (jwm) 
<j...@cisco.com<mailto:j...@cisco.com>>; Neale Ranns (nranns) 
<nra...@cisco.com<mailto:nra...@cisco.com>>; Minseok Kwon 
<mxk...@rit.edu<mailto:mxk...@rit.edu>>
Subject: Re: multi-core multi-threading performance

Hi Dave,

As per your suggestion i tried sending different traffic and i could notice 
that, 1 worker acts per port (hardware NIC)

Is it true that multiple workers cannot work on same port at the same time?





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662<tel:(585)%20764-4662>


On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan 
<pxv3...@g.rit.edu<mailto:pxv3...@g.rit.edu>> wrote:
Thanks Dave,

let me try it out real quick and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662<tel:(585)%20764-4662>


On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
Incrementing / random src/dst addr/port

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu<mailto:pxv3...@rit.edu>]
Sent: Monday, November 6, 2017 7:06 AM
To: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Marshall (jwm) 
<j...@cisco.com<mailto:j...@cisco.com>>; Neale Ranns (nranns) 
<nra...@cisco.com<mailto:nra...@cisco.com>>; Minseok Kwon 
<mxk...@rit.edu<mailto:mxk...@rit.edu>>
Subject: Re: multi-core multi-threading performance

Hi Dave,

Thanks for the mail

a "show run" command shows dpdk-input process on 2 of the workers but the 
ip6-lookup process is running only on 1 worker.

What config should be done to make all threads process traffic.

This is for 4 workers and 1 main core.

Pasted output :


vpp# sh run
Thread 0 vpp_main (lcore 1)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
acl-plugin-fa-cleaner-process   any wait 0   0  
15  4.97e30.00
api-rx-from-ring active  0   0  
79  1.07e50.00
cdp-process any wait 0   0  
 3  2.65e30.00
dpdk-processany wait 0   0  
 2  6.77e70.00
fib-walk

Re: [vpp-dev] Simple setup, that does not work.

2017-11-07 Thread Dave Barach (dbarach)
Check host interface IP address, basic connectivity [cable on floor?], and so 
on.

Check “show hardware.” If the MIB stats indicate that packets are reaching the 
NIC MAC layer - but not VPP - see if /proc/cmdline contains “intel_iommu=on”. 
If it does, try removing that stanza and reboot. You can, in fact, run with the 
iommu enabled, but for a 101(a) simple test it’s not worth going there...

HTH… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of John Wei
Sent: Monday, November 6, 2017 11:00 PM
To: vpp-dev 
Subject: [vpp-dev] Simple setup, that does not work.



I followed one of the fd.io YouTube demos; it is very simple, but 
it does not work for me.



  *   Restart vpp
  *   vppctl set int state GigabitEthernet13/0/0 up
  *   vppctl set int ip address GigabitEthernet13/0/0 
192.168.50.166/24
  *   # vppctl show int addr

 *   GigabitEthernet13/0/0 (up):
 * 192.168.50.166/24
 *   GigabitEthernetb/0/0 (dn):
 *   local0 (dn):

  *   on host: ping 192.168.50.166 does not work (just hangs)
What is missing?
I am running v17.10-release bits.

John

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Dave Barach (dbarach)
How many RX queues did you provision? One per worker, or no supper...
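
In startup.conf terms, that means one rx queue per worker servicing the
device - something along these lines, where the PCI address and counts are
examples only:

  dpdk {
    dev 0000:04:00.0 {
      num-rx-queues 4
    }
  }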

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Monday, November 6, 2017 7:36 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale Ranns 
(nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
Subject: Re: multi-core multi-threading performance

Hi Dave,

As per your suggestion i tried sending different traffic and i could notice 
that, 1 worker acts per port (hardware NIC)

Is it true that multiple workers cannot work on same port at the same time?





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan 
<pxv3...@g.rit.edu<mailto:pxv3...@g.rit.edu>> wrote:
Thanks Dave,

let me try it out real quick and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662<tel:(585)%20764-4662>


On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
Incrementing / random src/dst addr/port

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu<mailto:pxv3...@rit.edu>]
Sent: Monday, November 6, 2017 7:06 AM
To: Dave Barach (dbarach) <dbar...@cisco.com<mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Marshall (jwm) 
<j...@cisco.com<mailto:j...@cisco.com>>; Neale Ranns (nranns) 
<nra...@cisco.com<mailto:nra...@cisco.com>>; Minseok Kwon 
<mxk...@rit.edu<mailto:mxk...@rit.edu>>
Subject: Re: multi-core multi-threading performance

Hi Dave,

Thanks for the mail

a "show run" command shows dpdk-input process on 2 of the workers but the 
ip6-lookup process is running only on 1 worker.

What config should be done to make all threads process traffic.

This is for 4 workers and 1 main core.

Pasted output :


vpp# sh run
Thread 0 vpp_main (lcore 1)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
acl-plugin-fa-cleaner-process   any wait 0   0  
15  4.97e30.00
api-rx-from-ring active  0   0  
79  1.07e50.00
cdp-process any wait 0   0  
 3  2.65e30.00
dpdk-processany wait 0   0  
 2  6.77e70.00
fib-walkany wait 0   0  
  7474  6.74e20.00
gmon-processtime wait0   0  
 1  4.24e30.00
ikev2-manager-process   any wait 0   0  
 7  7.04e30.00
ip6-icmp-neighbor-discovery-ev  any wait 0   0  
 7  4.67e30.00
lisp-retry-service  any wait 0   0  
 3  7.21e30.00
unix-epoll-input polling  21655148   0  
 0  5.43e20.00
vpe-oam-process any wait 0   0  
 4  5.28e30.00
---
Thread 1 vpp_wk_0 (lcore 2)
Time 7.5, average vectors/node 255.99, last 128 main loops 14.00 per node 256.00
  vector rates in 4.1903e6, out 4.1903e6, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
FortyGigabitEthernet4/0/0-outp   active 12333431572992  
 0  6.58e0  255.99
FortyGigabitEthernet4/0/0-tx active 12333431572992  
 0  7.20e1  255.99
dpdk-input   polling12434731572992  
 0  5.49e1  253.91
ip6-inputactive 12333431572992  
 0  2.28e1  255.99
ip6-load-balance active 12333431572992  
 0  1.61e1  255.99
ip6-lookup   active 12333431572992  
 0  3.77e2  255.99
ip6-rewrite  active 12333431572992  
   

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Dave Barach (dbarach)
Have you verified that all of the worker threads are processing traffic? 
Sufficiently poor RSS statistics could mean - in the limit - that only one 
worker thread is processing traffic.

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Sunday, November 5, 2017 10:03 PM
To: vpp-dev@lists.fd.io
Cc: John Marshall (jwm) <j...@cisco.com>; Neale Ranns (nranns) 
<nra...@cisco.com>; Dave Barach (dbarach) <dbar...@cisco.com>; Minseok Kwon 
<mxk...@rit.edu>
Subject: multi-core multi-threading performance

Hi ,

We are measuring performance of ip6 lookup in multi-core multi-worker 
environments and
we don't see good scaling of performance when we keep increasing the number of 
cores/workers.

We are just changing the startup.conf file to create more workers, rx-queues, 
sock-mem etc. Should we do anything else to see an increase in performance.

Is there a limitation on the performance even if we increase the number of 
workers.

Is it dependent on the number of hardware NICs we have, we only have 1 NIC to 
receive the traffic.


TIA,

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP default graph

2017-10-31 Thread Dave Barach (dbarach)
Dear Mostafa,

First, “show vlib graph” describes the entire graph in detail.

Vpp uses ingress flow-hashing (e.g. hardware RSS hashing) across a set of 
threads running identical graph replicas to achieve multi-core scaling.
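
The number of graph replicas simply follows the worker thread configuration in
startup.conf, e.g. (core numbers are just an example):

  cpu {
    main-core 1
    corelist-workers 2-5
  }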

Historical experiments with pipelining in vpp dissuaded me from pursuing that 
processing model: the entire pipeline runs at the speed of the slowest stage. 
More to the point: if the offered workload changes, one needs to reconfigure 
the pipeline to achieve decent performance.

In vpp, you can spin up arbitrary threads and process packets however you like, 
of course.

It would help if you’d describe your application in detail, otherwise we won’t 
be able to make detailed suggestions.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Mostafa Salari
Sent: Tuesday, October 31, 2017 8:06 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP default graph

Hi, I have 3 issues:
1. I want to know what the default structure of the graph nodes is when VPP is 
running.
2. In the dpdk ip_pipeline application, I was able to determine how many instances 
are created and which lcore each instance runs on. In this way, 
I was able to make custom optimizations and build a fast packet-processing 
pipeline for my specific goal. What is the way to do this in VPP?
3. In order to change the default arrangement, what should I do?

Best regards,
Mostafa
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] gerrit 8872 centos validation failure (stable/1710)

2017-10-18 Thread Dave Barach (dbarach)
There's a zero percent chance this failure has anything to do with the patch in 
question.

I'll press the "recheck" button once again.

14:09:27   CC   vnet/ip/ip6_neighbor.lo
14:09:28   CC   vnet/ip/ip6_pg.lo
14:09:28   CC   vnet/ip/ip_api.lo
14:09:28   CC   vnet/ip/ip_checksum.lo
14:09:30 ./libtool: fork: Cannot allocate memory
14:09:30 Makefile:6169: recipe for target 'vnet/ip/ip4_source_check.lo' failed
14:09:30 make[5]: *** [vnet/ip/ip4_source_check.lo] Error 254
14:09:30 make[5]: *** Waiting for unfinished jobs
14:09:30 ./libtool: fork: Cannot allocate memory
14:09:30 ./libtool: line 1: wait_for: No record of process 19127
14:09:30 bash: ../sysdeps/nptl/fork.c:156: __libc_fork: Assertion 
`THREAD_GETMEM (self, tid) != ppid' failed.
14:09:30 gcc: internal compiler error: Segmentation fault (program cc1)
14:09:30 Please submit a full bug report,
14:09:30 with preprocessed source if appropriate.
14:09:30 See  for instructions.
14:09:30 ./libtool: fork: Cannot allocate memory
14:09:30 bash: ../sysdeps/nptl/fork.c:156: __libc_fork: Assertion 
`THREAD_GETMEM (self, tid) != ppid' failed.
14:09:30 Makefile:6169: recipe for target 'vnet/policer/policer.lo' failed
14:09:30 make[5]: *** [vnet/policer/policer.lo] Error 1
14:09:30 Build step 'Execute shell' marked build as failure
14:09:30 $ ssh-agent -k
14:09:30 FATAL: Cannot run program "ssh-agent": error=12, Cannot allocate memory

Thanks... Dave

From: Dave Barach (dbarach)
Sent: Wednesday, October 18, 2017 9:57 AM
To: csit-...@lists.fd.io; Florin Coras (fcoras) <fco...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: gerrit 8872 centos validation failure (stable/1710)

Please see https://gerrit.fd.io/r/#/c/8872 and 
https://jenkins.fd.io/job/vpp-verify-1710-centos7/53. I've already pressed the 
"recheck" button. The validation failure appears unrelated to the patch.

Thanks... Dave

12:50:49 Wrote: 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/RPMS/x86_64/vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.cojge5
12:50:49 + umask 022
12:50:49 + cd /w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILD
12:50:49 + /usr/bin/rm -rf 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILDROOT/vpp-dpdk-17.08-vpp1.x86_64
12:50:49 + exit 0
12:50:49 mv rpm/RPMS/x86_64/*.rpm .
12:50:49 git clean -fdx rpm
12:50:49 Removing rpm/BUILD/
12:50:49 Removing rpm/BUILDROOT/
12:50:49 Removing rpm/RPMS/
12:50:49 Removing rpm/SOURCES/
12:50:49 Removing rpm/SPECS/
12:50:49 Removing rpm/SRPMS/
12:50:49 Removing rpm/tmp/
12:50:49 make[2]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 sudo rpm -Uih vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 
12:50:49   package vpp-dpdk-devel-17.08-vpp2.x86_64 (which is newer than 
vpp-dpdk-devel-17.08-vpp1.x86_64) is already installed
12:50:49 make[1]: *** [install-rpm] Error 2
12:50:49 make[1]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 make: *** [dpdk-install-dev] Error 2
12:50:49 Build step 'Execute shell' marked build as failure
12:50:49 $ ssh-agent -k
12:50:49 unset SSH_AUTH_SOCK;
12:50:49 unset SSH_AGENT_PID;
12:50:49 echo Agent pid 9677 killed;
12:50:50 [ssh-agent] Stopped.
12:50:50 Skipped archiving because build is not successful
12:50:50 [PostBuildScript] - Execution post build scripts.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP routing API dump

2017-10-18 Thread Dave Barach (dbarach)
Confirmed. Patch on the way... Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Barach (dbarach)
Sent: Wednesday, October 18, 2017 10:36 AM
To: Samuel Elias -X (samelias - PANTHEON TECHNOLOGIES at Cisco) 
<samel...@cisco.com>; Neale Ranns (nranns) <nra...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP routing API dump

Please don't assume that people scrub through jira tickets.

If you'd sent email to vpp-dev, this would have been fixed ages ago. Without 
even looking at the code, I'll bet it's a single missing ntohl(...):

(gdb) p/x 33554432
$1 = 0x2000000

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Samuel Elias -X (samelias - 
PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, October 18, 2017 10:24 AM
To: Neale Ranns (nranns) <nra...@cisco.com<mailto:nra...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] VPP routing API dump


Hello,



Could you please take a look at this bug with routing API:

https://jira.fd.io/browse/VPP-930



It is a minor issue, but it's been breaking Honeycomb's CRUD tests for a while 
now.



tia,

- Sam


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP routing API dump

2017-10-18 Thread Dave Barach (dbarach)
Please don't assume that people scrub through jira tickets.

If you'd sent email to vpp-dev, this would have been fixed ages ago. Without 
even looking at the code, I'll bet it's a single missing ntohl(...):

(gdb) p/x 33554432
$1 = 0x2000000
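
In other words, the handler is almost certainly reading a network-byte-order
field directly. The fix is of the form below - the field name is illustrative:

  /* 33554432 == 0x2000000, i.e. the value 2 seen without byte-swapping */
  u32 table_id = ntohl (mp->table_id);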

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Samuel Elias -X (samelias - PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, October 18, 2017 10:24 AM
To: Neale Ranns (nranns) 
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP routing API dump


Hello,



Could you please take a look at this bug with routing API:

https://jira.fd.io/browse/VPP-930



It is a minor issue, but it's been breaking Honeycomb's CRUD tests for a while 
now.



tia,

- Sam


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in ip[46]_lookup

2017-10-18 Thread Dave Barach (dbarach)
Dear John,

I must've done a seriously defective [read: typo-ridden] search.

Scratch that idea... Thanks for your help...

Dave

From: John Lo (loj)
Sent: Wednesday, October 18, 2017 10:15 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; vpp-dev@lists.fd.io
Subject: RE: vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in ip[46]_lookup

Hi Dave,

I found quite a few places using this mechanism:


  *   In the input ACL support for PBR, there is an action to switch FIB table 
and the code uses this mechanism to specify which ones to use:
*** src/vnet/ip/ip_input_acl.c:
ip_inacl_inline[290]   vnet_buffer (b0)->sw_if_index[VLIB_TX] = 
e0->metadata;
ip_inacl_inline[348]   vnet_buffer (b0)->sw_if_index[VLIB_TX] =


  *   Ping support:
*** src/vnet/ip/ping.c:
send_ip6_ping[283] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index;
send_ip6_ping[291] vnet_buffer (p0)->sw_if_index[VLIB_TX] =
send_ip4_ping[410] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index;
send_ip4_ping[418] vnet_buffer (p0)->sw_if_index[VLIB_TX] =


  *   DHCP proxy:
*** src/vnet/dhcp/dhcp4_proxy_node.c:
dhcp_proxy_to_server_input[209] vnet_buffer(b0)->sw_if_index[VLIB_TX] =
dhcp_proxy_to_client_input[644] vnet_buffer (b0)->sw_if_index[VLIB_TX] = 
sw_if_index;

*** src/vnet/dhcp/dhcp6_proxy_node.c:
dhcpv6_proxy_to_server_input[241] vnet_buffer(b0)->sw_if_index[VLIB_TX] = 
server_fib_idx;
dhcpv6_proxy_to_client_input[683] vnet_buffer (b0)->sw_if_index[VLIB_TX] = 
original_sw_if_index


  *   ICMP6
*** src/vnet/ip/icmp6.c:
ip6_icmp_echo_request[356] vnet_buffer (p0)->sw_if_index[VLIB_TX] =
ip6_icmp_echo_request[365] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index0;
ip6_icmp_echo_request[380] vnet_buffer (p1)->sw_if_index[VLIB_TX] =
ip6_icmp_echo_request[389] vnet_buffer (p1)->sw_if_index[VLIB_TX] = 
fib_index1;
ip6_icmp_echo_request[456] vnet_buffer (p0)->sw_if_index[VLIB_TX] =
ip6_icmp_echo_request[464] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index0;


  *   VXLAN-GPE
*** src/vnet/vxlan-gpe/decap.c:
vxlan_gpe_input[332]   vnet_buffer(b0)->sw_if_index[VLIB_TX] = 
t0->decap_fib_index;
vxlan_gpe_input[415]   vnet_buffer(b1)->sw_if_index[VLIB_TX] = 
t1->decap_fib_index;
vxlan_gpe_input[435]   vnet_buffer(b1)->sw_if_index[VLIB_TX] = 
t1->decap_fib_index;
vxlan_gpe_input[576]   vnet_buffer(b0)->sw_if_index[VLIB_TX] = 
t0->decap_fib_index;

Is there another way to specify FIB table index to use if this mechanism is 
removed?

Regards,
John

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Dave Barach (dbarach)
Sent: Wednesday, October 18, 2017 8:51 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in 
ip[46]_lookup

Folks,

Is anyone is actually using the "vnet_buffer(b)->sw_if_index[VLIB_TX] => 
[fib_index | ~0]" method to select the lookup fib index in ip[46]_lookup?

If not, I would like to remove the corresponding code...

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] gerrit 8872 centos validation failure (stable/1710)

2017-10-18 Thread Dave Barach (dbarach)
Please see https://gerrit.fd.io/r/#/c/8872 and 
https://jenkins.fd.io/job/vpp-verify-1710-centos7/53. I've already pressed the 
"recheck" button. The validation failure appears unrelated to the patch.

Thanks... Dave

12:50:49 Wrote: 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/RPMS/x86_64/vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.cojge5
12:50:49 + umask 022
12:50:49 + cd /w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILD
12:50:49 + /usr/bin/rm -rf 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILDROOT/vpp-dpdk-17.08-vpp1.x86_64
12:50:49 + exit 0
12:50:49 mv rpm/RPMS/x86_64/*.rpm .
12:50:49 git clean -fdx rpm
12:50:49 Removing rpm/BUILD/
12:50:49 Removing rpm/BUILDROOT/
12:50:49 Removing rpm/RPMS/
12:50:49 Removing rpm/SOURCES/
12:50:49 Removing rpm/SPECS/
12:50:49 Removing rpm/SRPMS/
12:50:49 Removing rpm/tmp/
12:50:49 make[2]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 sudo rpm -Uih vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 
12:50:49   package vpp-dpdk-devel-17.08-vpp2.x86_64 (which is newer than 
vpp-dpdk-devel-17.08-vpp1.x86_64) is already installed
12:50:49 make[1]: *** [install-rpm] Error 2
12:50:49 make[1]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 make: *** [dpdk-install-dev] Error 2
12:50:49 Build step 'Execute shell' marked build as failure
12:50:49 $ ssh-agent -k
12:50:49 unset SSH_AUTH_SOCK;
12:50:49 unset SSH_AGENT_PID;
12:50:49 echo Agent pid 9677 killed;
12:50:50 [ssh-agent] Stopped.
12:50:50 Skipped archiving because build is not successful
12:50:50 [PostBuildScript] - Execution post build scripts.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in ip[46]_lookup

2017-10-18 Thread Dave Barach (dbarach)
Folks,

Is anyone is actually using the "vnet_buffer(b)->sw_if_index[VLIB_TX] => 
[fib_index | ~0]" method to select the lookup fib index in ip[46]_lookup?

If not, I would like to remove the corresponding code...

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] not_last parameter of ip_add_del_route from ip.api

2017-10-18 Thread Dave Barach (dbarach)
Adding Neale for further comment, but I believe it's a FIB 1.0 historical 
artifact which has no obvious reason to exist at this point.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, October 18, 2017 5:59 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] not_last parameter of ip_add_del_route from ip.api

Hi,

while working on adding MPLS support to HC,
I noticed that 'not_last' param of ip_add_del_route
is ignored by the message handler in ip_api.c:
https://gerrit.fd.io/r/#/c/8826/

Could it be removed or I missed something?

Regards,
Marek

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Dave Barach (dbarach)
Ack... Thanks… Dave

-Original Message-
From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Tuesday, October 17, 2017 11:57 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

We're working with our cloud provider to fix the issue.


On Tue Oct 17 10:39:05 2017, abaranov wrote:
> Thishan:
> 
> I'm checking this right now
> 
> Regards,


-- 
Anton Baranov
Systems and Network Administrator
The Linux Foundation
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp deadlock - syslog in signal handler

2017-10-17 Thread Dave Barach (dbarach)
In almost all cases, the glibc malloc heap will not be pickled since it's not 
used on a regular basis.

For some effort, one could replace the syslog library code, I guess.
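
If someone does go that route, the replacement has to avoid malloc entirely -
e.g. a bare write(2) to stderr. A rough sketch, not code from the tree:

  #include <unistd.h>

  static void
  signal_safe_log (const char *msg, unsigned int len)
  {
    /* write(2) is async-signal-safe and allocates no memory, unlike
       syslog(3), which may open a scratch memory stream */
    ssize_t rv = write (STDERR_FILENO, msg, len);
    (void) rv;                  /* nothing sensible to do about a failure here */
  }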

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Tuesday, October 17, 2017 4:18 AM
To: vpp-dev 
Subject: [vpp-dev] vpp deadlock - syslog in signal handler


Hi,



I have encountered a deadlock in vpp on the raising of a memory alloc exception.

The signal is caught by unix_signal_handler(), which determines this is a fatal 
error and then syslogs the error message.

The problem is that syslog() then tries to allocate a scratchpad memory, and 
deadlocks since allocation is the reason why I'm here in the first place.



clib_warning() functions should be safe because all the memory needed is 
alloc'ed at init, but I don't see how this syslog() call can succeed.

Should I just remove it ?

Or is there a way I don't know about to still make this work ?



Below is a backtrace of the problem:
#0  0xa42e2c0c in __lll_lock_wait_private 
(futex=futex@entry=0xa43869a0 ) at ./lowlevellock.c:33
#1  0xa426b6e8 in __GI___libc_malloc (bytes=bytes@entry=584) at 
malloc.c:2888
#2  0xa425ace8 in __GI___open_memstream (bufloc=0x655b4670, 
bufloc@entry=0x655b46d0, sizeloc=0x655b4678, 
sizeloc@entry=0x655b46d8) at memstream.c:76
#3  0xa42cef18 in __GI___vsyslog_chk (ap=..., fmt=0xa4be2990 "%s", 
flag=-1, pri=27) at ../misc/syslog.c:167
#4  __syslog (pri=pri@entry=27, fmt=fmt@entry=0xa4be2990 "%s") at 
../misc/syslog.c:117
#5  0xa4bd7ab4 in unix_signal_handler (signum=, 
si=, uc=) at 
/home/gannega/vpp/build-data/../src/vlib/unix/main.c:119
#6  
#7  0xa42654e0 in malloc_consolidate (av=av@entry=0xa43869a0 
) at malloc.c:4182
#8  0xa4269354 in malloc_consolidate (av=0xa43869a0 ) 
at malloc.c:4151
#9  _int_malloc (av=av@entry=0xa43869a0 , 
bytes=bytes@entry=32816) at malloc.c:3450
#10 0xa426b5b4 in __GI___libc_malloc (bytes=bytes@entry=32816) at 
malloc.c:2890
#11 0xa4299000 in __alloc_dir (statp=0x655b5d48, flags=0, 
close_fd=true, fd=5) at ../sysdeps/posix/opendir.c:247
#12 opendir_tail (fd=) at ../sysdeps/posix/opendir.c:145
#13 __opendir (name=name@entry=0xa4bdf258 "/sys/bus/pci/devices") at 
../sysdeps/posix/opendir.c:200
#14 0xa4bde088 in foreach_directory_file 
(dir_name=dir_name@entry=0xa4bdf258 "/sys/bus/pci/devices", 
f=f@entry=0xa4baf4a8 , arg=arg@entry=0xa4c0af30 
,
scan_dirs=scan_dirs@entry=0) at 
/home/gannega/vpp/build-data/../src/vlib/unix/util.c:59
#15 0xa4baed64 in linux_pci_init (vm=0xa4c0af30 ) 
at /home/gannega/vpp/build-data/../src/vlib/linux/pci.c:648
#16 0xa4bae504 in vlib_call_init_exit_functions (vm=0xa4c0af30 
, head=, call_once=call_once@entry=1) at 
/home/gannega/vpp/build-data/../src/vlib/init.c:57
#17 0xa4bae548 in vlib_call_all_init_functions (vm=) at 
/home/gannega/vpp/build-data/../src/vlib/init.c:75
#18 0xa4bb3838 in vlib_main (vm=, 
vm@entry=0xa4c0af30 , input=input@entry=0x655b5fc8) 
at /home/gannega/vpp/build-data/../src/vlib/main.c:1748
#19 0xa4bd7c0c in thread0 (arg=281473445834544) at 
/home/gannega/vpp/build-data/../src/vlib/unix/main.c:567
#20 0xa44f3e38 in clib_calljmp () at 
/home/gannega/vpp/build-data/../src/vppinfra/longjmp.S:676


Best regards,

--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] No joy: ping6 gerrit.fd.io

2017-10-16 Thread Dave Barach (dbarach)
It looks like gerrit.fd.io has dropped off the ipv6 radar screen. Appears not 
to be a DNS problem or other problem on my end:

$ ping6 gerrit.fd.io
PING gerrit.fd.io(2604:e100:1:0:f816:3eff:fe7e:8731) 56 data bytes
^C
--- gerrit.fd.io ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3022ms

$ ping6 www.google.com
PING www.google.com(iad30s07-in-x04.1e100.net) 56 data bytes
64 bytes from iad30s07-in-x04.1e100.net: icmp_seq=1 ttl=49 time=33.4 ms
64 bytes from iad30s07-in-x04.1e100.net: icmp_seq=2 ttl=49 time=30.4 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 30.413/31.943/33.473/1.530 ms

Please investigate AYEC.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] jvpp core future test failure (gerrit 8743)

2017-10-13 Thread Dave Barach (dbarach)
I think this should about cover the situation... ()... HTH... Dave

void
vl_msg_api_config (vl_msg_api_msg_config_t * c)
{
  api_main_t *am = &api_main;

  /*
   * This happens during the java core tests if the message
   * dictionary is missing newly added xxx_reply_t messages.
   * Should never happen, but since I shot myself in the foot once
   * this way, I thought I'd make it easy to debug if I ever do
   * it again... (;-)...
   */
  if (c->id == 0)
{
  if (c->name)
 clib_warning ("Trying to register %s with a NULL msg id!", c->name);
  else
 clib_warning ("Trying to register a NULL msg with a NULL msg id!");
  clib_warning ("Did you forget to call setup_message_id_table?");
  return;
}


Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Barach (dbarach)
Sent: Friday, October 13, 2017 1:32 PM
To: Ole Troan (otroan) <otr...@cisco.com>; Ole Troan <otr...@employees.org>
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] jvpp core future test failure (gerrit 8743)

Dear Ole,

See https://gerrit.fd.io/r/#/c/8743. It turns out that the java core future 
“make test” test fails as shown below.

The patch adds three xxx_reply_t binary api messages. See 
.../src/vnet/dns/dns.api.

It sure looks like the Java code knows about them, but isn’t doing a very good 
job of registering them. Note that I had to modify the binary API client 
library to keep Java from ASSERTing due to the NULL msg id’s squawked below.

What’s going on here? These messages work like a champ in vpp_api_test...

INFO: Testing Java future API for core plugin
[New Thread 0x7fffd5f9c700 (LWP 4611)]
vl_msg_api_config:671: Trying to register dns_enable_disable_reply with a NULL 
msg id!
vl_msg_api_config:671: Trying to register dns_name_server_add_del_reply with a 
NULL msg id!
vl_msg_api_config:671: Trying to register dns_resolve_name_reply with a NULL 
msg id!
[Thread 0x7fffd5f9c700 (LWP 4611) exited]
Exception in thread "main" java.lang.IllegalStateException: API mismatch 
detected: dns_resolve_name_reply_451ab6c0 is missing
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init0(Native Method)
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init(JVppCoreImpl.java:75)
 at io.fd.vpp.jvpp.JVppRegistryImpl.register(JVppRegistryImpl.java:72)
 at 
io.fd.vpp.jvpp.core.future.FutureJVppCoreFacade.(FutureJVppCoreFacade.java:25)
 at 
io.fd.vpp.jvpp.core.test.FutureApiTest.testFutureApi(FutureApiTest.java:50)
 at io.fd.vpp.jvpp.core.test.FutureApiTest.main(FutureApiTest.java:44)
[New Thread 0x7fffd54af700 (LWP 4612)]

Thanks… Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] jvpp core future test failure (gerrit 8743)

2017-10-13 Thread Dave Barach (dbarach)
Yes, I did. I just worked it out myself... Thanks… Dave

From: Ole Troan (otroan)
Sent: Friday, October 13, 2017 1:48 PM
To: Ole Troan (otroan) <otr...@cisco.com>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Ole Troan 
<otr...@employees.org>; vpp-dev@lists.fd.io
Subject: Re: jvpp core future test failure (gerrit 8743)

s/map/dns


if you were to cut and paste.




#define vl_msg_name_crc_list
#include 
#undef vl_msg_name_crc_list

static void
setup_message_id_table (api_main_t * am)
{
#define _(id,n,crc) vl_msg_api_add_msg_name_crc (am, #n "_" #crc, id);
  foreach_vl_msg_name_crc_dns;
#undef _
}


Ole


On 13 Oct 2017, at 19:46, Ole Troan <otr...@cisco.com<mailto:otr...@cisco.com>> 
wrote:

Dear Dave,

I wonder if you forgot to hookup the messages in the CRC dictionary?

#define vl_msg_name_crc_list
#include 
#undef vl_msg_name_crc_list

static void
setup_message_id_table (api_main_t * am)
{
#define _(id,n,crc) vl_msg_api_add_msg_name_crc (am, #n "_" #crc, id);
  foreach_vl_msg_name_crc_map;
#undef _
}


If my guess is correct, I’’ have a chat with the Java guys if we can come up 
with a slightly more user-friendly error message. ;-)


Best regards,
Ole



On 13 Oct 2017, at 19:31, Dave Barach (dbarach) 
<dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:

Dear Ole,

See https://gerrit.fd.io/r/#/c/8743. It turns out that the java core future 
“make test” test fails as shown below.

The patch adds three xxx_reply_t binary api messages. See 
.../src/vnet/dns/dns.api.

It sure looks like the Java code knows about them, but isn’t doing a very good 
job of registering them. Note that I had to modify the binary API client 
library to keep Java from ASSERTing due to the NULL msg id’s squawked below.

What’s going on here? These messages work like a champ in vpp_api_test...

INFO: Testing Java future API for core plugin
[New Thread 0x7fffd5f9c700 (LWP 4611)]
vl_msg_api_config:671: Trying to register dns_enable_disable_reply with a NULL 
msg id!
vl_msg_api_config:671: Trying to register dns_name_server_add_del_reply with a 
NULL msg id!
vl_msg_api_config:671: Trying to register dns_resolve_name_reply with a NULL 
msg id!
[Thread 0x7fffd5f9c700 (LWP 4611) exited]
Exception in thread "main" java.lang.IllegalStateException: API mismatch 
detected: dns_resolve_name_reply_451ab6c0 is missing
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init0(Native Method)
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init(JVppCoreImpl.java:75)
 at io.fd.vpp.jvpp.JVppRegistryImpl.register(JVppRegistryImpl.java:72)
 at 
io.fd.vpp.jvpp.core.future.FutureJVppCoreFacade.(FutureJVppCoreFacade.java:25)
 at 
io.fd.vpp.jvpp.core.test.FutureApiTest.testFutureApi(FutureApiTest.java:50)
 at io.fd.vpp.jvpp.core.test.FutureApiTest.main(FutureApiTest.java:44)
[New Thread 0x7fffd54af700 (LWP 4612)]

Thanks… Dave


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] jvpp core future test failure (gerrit 8743)

2017-10-13 Thread Dave Barach (dbarach)
Dear Ole,

See https://gerrit.fd.io/r/#/c/8743. It turns out that the java core future 
"make test" test fails as shown below.

The patch adds three xxx_reply_t binary api messages. See 
.../src/vnet/dns/dns.api.

It sure looks like the Java code knows about them, but isn't doing a very good 
job of registering them. Note that I had to modify the binary API client 
library to keep Java from ASSERTing due to the NULL msg id's squawked below.

What's going on here? These messages work like a champ in vpp_api_test...

INFO: Testing Java future API for core plugin
[New Thread 0x7fffd5f9c700 (LWP 4611)]
vl_msg_api_config:671: Trying to register dns_enable_disable_reply with a NULL 
msg id!
vl_msg_api_config:671: Trying to register dns_name_server_add_del_reply with a 
NULL msg id!
vl_msg_api_config:671: Trying to register dns_resolve_name_reply with a NULL 
msg id!
[Thread 0x7fffd5f9c700 (LWP 4611) exited]
Exception in thread "main" java.lang.IllegalStateException: API mismatch 
detected: dns_resolve_name_reply_451ab6c0 is missing
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init0(Native Method)
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init(JVppCoreImpl.java:75)
 at io.fd.vpp.jvpp.JVppRegistryImpl.register(JVppRegistryImpl.java:72)
 at 
io.fd.vpp.jvpp.core.future.FutureJVppCoreFacade.(FutureJVppCoreFacade.java:25)
 at 
io.fd.vpp.jvpp.core.test.FutureApiTest.testFutureApi(FutureApiTest.java:50)
 at io.fd.vpp.jvpp.core.test.FutureApiTest.main(FutureApiTest.java:44)
[New Thread 0x7fffd54af700 (LWP 4612)]

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8

2017-09-29 Thread Dave Barach (dbarach)
As a quick hack: try moving "u32 interrupt_pending;" to the start of the 
structure...
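
I.e. something like the sketch below - apart from interrupt_pending, the
member names are just placeholders for whatever the structure actually
contains:

  typedef struct
  {
    u32 interrupt_pending;   /* first member: sits at the element's base alignment */
    u32 hw_if_index;         /* placeholder */
    u32 dev_instance;        /* placeholder */
    u16 queue_id;            /* placeholder */
  } vnet_device_and_queue_t;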

Thanks… Dave

-Original Message-
From: Brian Brooks [mailto:brian.bro...@arm.com] 
Sent: Friday, September 22, 2017 12:33 PM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: Saxena, Nitin <nitin.sax...@cavium.com>; vpp-dev@lists.fd.io; Damjan Marion 
(damarion) <damar...@cisco.com>; Narayana, Prasad Athreya 
<prasadathreya.naray...@cavium.com>
Subject: Re: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8

On 09/28 11:57:36, Dave Barach (dbarach) wrote:
> Dear Nitin,
> 
> First off: exactly which LDXR / STXR instruction variant pairs is generated? 
> I begin to wonder if __sync_lock_test_and_set(...) might not be doing you any 
> favors. Given that dq->interrupt_pending is a u32, I would have expected a 
> 4-byte instruction with (at worst) a 4-byte alignment requirement.

It's true that a LDXR of 4 bytes only requires 4 byte alignment (not 8).

For the TAS, objdump vhost-user.o shows

  ldxr   w0, [x1]
  stxr   w3, w2, [x1]
  cbnz   w3, ..

These instructions are operating on 4 byte data because of the use of a
'w' register instead of a 'x' register to hold the actual value.

Nitin, can you confirm you see the same generated code? If so, is
&dq->interrupt_pending 4 byte aligned?

> Are there any alignment restrictions on the 1-byte variants LDXRB / STXRB?
> 
> If not: since we use dq->interrupt_pending as a one-bit flag, declaring it as 
> a u8 - or casting &dq->interrupt_pending to (u8 *) in an arch-dependent 
> fashion - might make the pain go away.
> 
> Aligning every vector in the system will waste memory, and will not legislate 
> this class of problem out of existence. So, I wouldn't want to force 8-byte 
> alignment in the way you mention.
> 
> Anyhow, aligning the first vector element to an 8-byte boundary says little 
> about the layout of elements within each vector element, especially if the 
> structure is packed.
> 
> If dq->interrupt_pending needs to be aligned to a specific boundary without 
> fail, the only completely reliable method would be to pack and pad the 
> structure e.g. to a multiple of 8 octets and ensure that interrupt_pending 
> lands on the required boundary. Then use vec_add2_ha (...) to manipulate the 
> vector.
> 
> HTH... Dave
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of Saxena, Nitin
> Sent: Thursday, September 28, 2017 4:53 AM
> To: vpp-dev@lists.fd.io
> Cc: Narayana, Prasad Athreya <prasadathreya.naray...@cavium.com>
> Subject: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8
> 
> 
> Hi All,
> 
> 
> 
> I got a crash with vpp v17.07.01 on ARMv8 Soc 
> @src/vnet/devices/virtio/vhost-user.c: Line no: 1852
> 
> 
> if (clib_smp_swap (&dq->interrupt_pending, 0) ||
> (node->state == VLIB_NODE_STATE_POLLING)){
> }
> 
> While debugging it turns out that the value of (&dq->interrupt_pending) was
> not 8 byte aligned, hence causing a SIGBUS error on the ARMv8 SoC. Further
> debugging shows that dq was added to the vector using vec_add2
> (src/vnet/devices/devices.c, line no: 152)
> 
> vec_add2 (rt->devices_and_queues, dq, 1)
> 
> which uses 0 byte alignment. Changing vec_add2 to vec_add2_aligned() fixed 
> the problem. My question is can we completely define vec_add2() as
> 
> #define vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,8) instead of #define 
> vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,0)
> 
> This can be helpful for all architecture.
> 
> Thanks,
> Nitin
> 
> 
> 

> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Compile error with linux/memfd.h

2017-09-28 Thread Dave Barach (dbarach)
Dear Eddie,

As discussed in private email: I think that the version of CentOS on your build 
system is too old. If memory serves, CentOS 7.3 is required. Google tells me 
that the earliest Linux kernel with memfd support is 3.17; it looks like your 
system is running a 3.10 derivative: 
"/usr/src/kernels/3.10.0-693.2.2.el7.x86_64/include/uapi/linux"

Other folks, please jump in on that topic.

After you resolve the CentOS version issue, you'll certainly need to run "make 
install-dep" from the workspace root: "WARNING: Please install ccache AYEC and 
re-run this script"

Thanks... Dave

P.S. We verify every patch on CentOS before merge...

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Eddie Ruan (eruan)
Sent: Thursday, September 28, 2017 5:27 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Compile error with linux/memfd.h

Hi,

I am trying to get my hands dirty by pulling and compiling the VPP code. I am following 
the wiki.

https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Building_the_first_time

I have tried different options, but the compile always gets stuck at the following 
error: it is not able to find linux/memfd.h.

I found the following copy on my CentOS box. I am not sure whether that's the one it 
looks for, or whether there is some other package I need to install.

[root@spitfire-2 linux]# pwd
/usr/src/kernels/3.10.0-693.2.2.el7.x86_64/include/uapi/linux
[root@spitfire-2 linux]# ls memfd.h
memfd.h

Does anyone have some hints on how to solve it?


Thanks

Eddie



[root@spitfire-2 vpp]# make bootstrap
WARNING: Please install ccache AYEC and re-run this script
make[1]: Entering directory `/nobackup/vpp/build-root'
 Arch for platform 'native' is native 
 Finding source for tools 
 Makefile fragment found in /nobackup/vpp/build-root/packages/tools.mk 
 Source found in /nobackup/vpp/src 
 Configuring tools: nothing to do 
 Building tools in /nobackup/vpp/build-root/build-tool-native/tools 
make[2]: Entering directory `/nobackup/vpp/build-root/build-tool-native/tools'
make  all-recursive
make[3]: Entering directory `/nobackup/vpp/build-root/build-tool-native/tools'
Making all in .
make[4]: Entering directory `/nobackup/vpp/build-root/build-tool-native/tools'
  CC   vppinfra/linux/mem.lo
/nobackup/vpp/build-root/../src/vppinfra/linux/mem.c:25:25: fatal error: 
linux/memfd.h: No such file or directory
#include <linux/memfd.h>






Eddie Ruan
PRINCIPAL ENGINEER.ENGINEERING
er...@cisco.com
Tel: +1 408 853 0776

Cisco Systems, Inc.
821 Alder Drive
MILPITAS
95035
United States
cisco.com



This email may contain confidential and privileged material for the sole use of 
the intended recipient. Any review, use, distribution or disclosure by others 
is strictly prohibited. If you are not the intended recipient (or authorized to 
receive for the recipient), please contact the sender by reply email and delete 
all copies of this message.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vlan sub interfaces

2017-09-28 Thread Dave Barach (dbarach)
See https://gerrit.fd.io/r/#/c/8590. The patch cherry-picked easily to 
stable/1707.

Assuming that the cherry-pick patch validates - and that it solves your problem 
- it will be up to Neale [as the 17.07 release manager] whether to merge it or 
not.

Please let us know whether the cherry-pick patch works for you.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Prabhjot Singh Sethi
Sent: Thursday, September 28, 2017 3:27 PM
To: Akshaya Nadahalli ; Prabhjot Singh Sethi 
; vpp-dev@lists.fd.io; John Lo (loj) 
Subject: Re: [vpp-dev] vlan sub interfaces

Yes, it works perfectly fine with this patch.
I hope this will be pushed to the 17.07 branch as well.

Thanks for the help :)

Regards,
Prabhjot

- Original Message -
From: "Akshaya Nadahalli"
To: "Prabhjot Singh Sethi", vpp-dev@lists.fd.io, "John Lo"
Cc:
Sent: Thu, 28 Sep 2017 19:18:50 +0530
Subject: Re: [vpp-dev] vlan sub interfaces


Hi Prabhjot,



Can you pls try with below patch and see if it helps:

https://gerrit.fd.io/r/#/c/8435/



Regards,

Akshaya N

On Thursday 28 September 2017 03:45 PM, Prabhjot Singh Sethi wrote:
Trying again with a more appropriate subject.

Can someone please help if I am missing anything over here?

As mentioned earlier, I have interface host-eth10 and sub-interface 
host-eth10.10 (create sub host-eth10 10).
host-eth10 is associated to bridge domain 2 and the sub-interface is associated to 
bridge domain 3.
When VPP receives a tagged packet with vlan 10 it still associates it to bd 2 and 
not bd 3.

Note: if I don't associate any bd with the base interface it just drops the packet 
with some error.

Regards,
Prabhjot


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

--
Regards,
Akshaya N
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8

2017-09-28 Thread Dave Barach (dbarach)
Dear Nitin,

First off: exactly which LDXR / STXR instruction variant pair is generated? I 
begin to wonder if __sync_lock_test_and_set(...) might not be doing you any 
favors. Given that dq->interrupt_pending is a u32, I would have expected a 
4-byte instruction with (at worst) a 4-byte alignment requirement.

Are there any alignment restrictions on the 1-byte variants LDXRB / STXRB?

If not: since we use dq->interrupt_pending as a one-bit flag, declaring it as a 
u8 - or casting &dq->interrupt_pending to (u8 *) in an arch-dependent fashion - 
might make the pain go away.

Aligning every vector in the system will waste memory, and will not legislate 
this class of problem out of existence. So, I wouldn't want to force 8-byte 
alignment in the way you mention.

Anyhow, aligning the first vector element to an 8-byte boundary says little 
about the layout of elements within each vector element, especially if the 
structure is packed.

If dq->interrupt_pending needs to be aligned to a specific boundary without 
fail, the only completely reliable method would be to pack and pad the 
structure e.g. to a multiple of 8 octets and ensure that interrupt_pending 
lands on the required boundary. Then use vec_add2_ha (...) to manipulate the 
vector.
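
For illustration only, a minimal sketch of that pack-and-pad idea (the element
layout and names below are assumptions, not the real vnet_device_and_queue_t):

#include <vppinfra/vec.h>
#include <vppinfra/cache.h>

/* Sketch: pad each element to a whole cache line and grow the vector with
   an explicit alignment, so interrupt_pending always lands at the same,
   aligned offset within an aligned slot. */
typedef struct
{
  u32 interrupt_pending;
  u32 dev_instance;
  u32 queue_id;
  u8 pad[CLIB_CACHE_LINE_BYTES - 3 * sizeof (u32)];
} my_dq_t;

static my_dq_t *my_dqs;

static void
add_one_dq (void)
{
  my_dq_t *dq;

  /* header size 0, vector data aligned to a cache-line boundary */
  vec_add2_ha (my_dqs, dq, 1, 0, CLIB_CACHE_LINE_BYTES);
  dq->interrupt_pending = 0;
}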

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saxena, Nitin
Sent: Thursday, September 28, 2017 4:53 AM
To: vpp-dev@lists.fd.io
Cc: Narayana, Prasad Athreya 
Subject: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8


Hi All,



I got a crash with vpp v17.07.01 on ARMv8 Soc 
@src/vnet/devices/virtio/vhost-user.c: Line no: 1852


if (clib_smp_swap (&dq->interrupt_pending, 0) ||
(node->state == VLIB_NODE_STATE_POLLING)){
}

While debugging it turns out that the value of (&dq->interrupt_pending) was not 8 
byte aligned hence causing SIGBUS error on ARMv8 SoC. Further debugging tells 
that dq was added in vector using vec_add2 (src/vnet/devices/devices.c Line no: 
152)

vec_add2 (rt->devices_and_queues, dq, 1)

which uses 0 byte alignment. Changing vec_add2 to vec_add2_aligned() fixed the 
problem. My question is can we completely define vec_add2() as

#define vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,8) instead of #define 
vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,0)

This can be helpful for all architectures.

Thanks,
Nitin



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Stable branch for 17.10 pulled

2017-09-28 Thread Dave Barach (dbarach)
+1... Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Wednesday, September 27, 2017 11:15 PM
To: Florin Coras ; vpp-dev 
Subject: Re: [vpp-dev] Stable branch for 17.10 pulled

Great work, Florin!

Cheers,
Chris.

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Florin Coras
Sent: Wednesday, September 27, 2017 21:46
To: vpp-dev
Subject: [vpp-dev] Stable branch for 17.10 pulled

Folks,

The release branch, stable/1710, for VPP 17.10 has now been pulled and tags 
have been laid. As a result, master is yet again open for all changes.

From this point onward, up until the release date on October 25th [1], we need 
to be disciplined with respect to bugfixes. Here is the traditional list of 
common-sense suggestions:

  • All bug fixes must be double-committed to the release throttle as well as to the master branch
      • Commit first to the release throttle, then "git cherry-pick" into master
      • Manual merges may be required, depending on the degree of divergence between throttle and master
  • All bug fixes need to have a Jira ticket
      • Please put Jira IDs into the commit messages.
      • Please use the same Jira ID

Regards,
Florin

[1] https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_17.10
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] deadlock issue in VPP during DHCP packet processing

2017-09-26 Thread Dave Barach (dbarach)
Does this happen w/ master/latest? My guess: yes...

Florin and I are working on a patch to fix an obvious issue in this path right 
now, look for results shortly...

HTH... Dave


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Balaji Kn
Sent: Tuesday, September 26, 2017 8:37 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] deadlock issue in VPP during DHCP packet processing

Hello All,

I am working on VPP 17.07 and using DHCP proxy functionality. CPU configuration 
provided as one main thread and one worker thread.

cpu {
  main-core 0
  corelist-workers 1
}

A deadlock is observed while processing the DHCP offer packet in VPP. However, the issue 
is not observed if I comment out the CPU configuration in the startup.conf file (i.e. when 
running in a single thread); then everything works smoothly.

Following message is displayed on console.
vlib_worker_thread_barrier_sync: worker thread deadlock

Backtrace from core file generated.
[New LWP 12792]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7f721ab13028 in __GI_abort () at abort.c:89
#2  0x00407073 in os_panic () at 
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7f721c0b5d5d in vlib_worker_thread_barrier_sync (vm=0x7f721c2e12e0 
)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/threads.c:1192
#4  0x7f721c2e973a in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x7f721c5063a0 , the_msg=the_msg@entry=0x304bc6d4,
vm=vm@entry=0x7f721c2e12e0 , 
node=node@entry=0x7f71da6a8000)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibapi/api_shared.c:501
#5  0x7f721c2f34be in memclnt_process (vm=, 
node=0x7f71da6a8000, f=)
at 
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibmemory/memory_vlib.c:544
#6  0x7f721c08ec96 in vlib_process_bootstrap (_a=)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1259
#7  0x7f721b2ec858 in clib_calljmp () at 
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vppinfra/longjmp.S:110
#8  0x7f71da9efe20 in ?? ()
#9  0x7f721c090041 in vlib_process_startup (f=0x0, p=0x7f71da6a8000, 
vm=0x7f721c2e12e0 )
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1281
#10 dispatch_process (vm=0x7f721c2e12e0 , p=0x7f71da6a8000, 
last_time_stamp=58535483853222, f=0x0)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1324
#11 0x00d800d9 in ?? ()

Any pointers would be appreciated.

Regards,
Balaji

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Poor L3/L4 Performance

2017-09-25 Thread Dave Barach (dbarach)
As discussed off-list: please stick to best-practice coding patterns. 
Single-packet frames simply cannot perform, etc.
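
For reference, the best-practice dispatch loop looks roughly like the sketch
below (simplified from the sample-plugin pattern; my_node_fn and
MY_NEXT_L2_OUTPUT are placeholder names): buffers are pulled from the frame
and enqueued to the next node in batches, never one frame per packet.

#include <vlib/vlib.h>

#define MY_NEXT_L2_OUTPUT 0	/* placeholder next-node index */

static uword
my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node, vlib_frame_t * frame)
{
  u32 n_left_from, *from, *to_next, next_index;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;
  next_index = node->cached_next_index;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;

      /* grab space in the next node's frame for a batch of buffers */
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
	{
	  u32 bi0 = from[0];
	  u32 next0 = MY_NEXT_L2_OUTPUT;
	  vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);

	  /* per-packet work on b0 goes here */
	  (void) b0;

	  to_next[0] = bi0;
	  from += 1;
	  to_next += 1;
	  n_left_from -= 1;
	  n_left_to_next -= 1;

	  vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
					   n_left_to_next, bi0, next0);
	}
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }
  return frame->n_vectors;
}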

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Alessio Silvestro
Sent: Monday, September 25, 2017 10:13 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Poor L3/L4 Performance

Dear all,

I am performing some experiments on VPP in order to get some performance 
metrics for specific applications.

I am working on vpp v17.04.2-2.

In order to have a baseline of my system, I run L2 XConnect (XC) as in 
[https://perso.telecom-paristech.fr/~drossi/paper/vpp-bench-techrep.pdf].

In this case, I can achieve, similarly to the paper, ~13Mpps -- which somehow 
confirms that the current setup is correct.

I implemented 2 further experiments:

1) L3-Xconnect

I implemented a new node that listens for traffic with specific ether_type with 
the following api:

ethernet_register_input_type(vm, ETHERNET_TYPE_X, my_node.index)

Once the traffic is received, the node sends the traffic directly to l2_output 
without any further processing.

The achieved packet rate is less than 5 Mpps.

2) L4-Xconnect

I implemented another node that listens for UDP traffic on  a specific port 
with the following api:


udp_register_dst_port (vm, UDP_DST_PORT_vxlan, vxlan_input_node.index, 1 /* 
is_ip4 */);

Once the traffic is received, the node sends the traffic directly to l2_output 
without any further processing.

The achieved packet rate is less than 4 Mpps.


The testbed is composed of 2 servers. The first server is running VPP whereas 
the second server runs the traffic generator (packetgen). The servers are 
equipped with Intel NICs capable of dual-port 10 Gbps full-duplex link. 
Generated packets have a size of 64 bytes.

VPP is configured to run with one main thread and one worker thread. Therefore, 
the previous values are meant for a single CPU-core.

In my opinion those values are a bit too low compared to other state-of-the-art 
approaches.

Do you have any idea why this is happening and, if this is my fault, how I 
can fix it?

Thanks,
Alessio

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Practical use of the include-what-you-use tool for individual developers

2017-09-23 Thread Dave Barach (dbarach)
Dear Burt,

This is of interest, but I have concerns about boiling the ocean at this point 
in the release cycle. Please hold any patches on this topic until well after 
the 17.10 RC1 throttle branch pull.

Although we haven’t caused ourselves massive pain with similar work - coding 
standards cleanup, build-related directory refactoring - I’m not convinced that 
restructuring existing header files is worth the pain it may cause.

Direct inclusion creates ordering requirements which are at least as annoying 
as unnecessary build dependencies.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Burt Silverman
Sent: Friday, September 22, 2017 9:39 PM
To: vpp-dev 
Subject: [vpp-dev] Practical use of the include-what-you-use tool for 
individual developers

This is a follow up on my recent post about the include-what-you-use tool. I 
discovered a way that you can use this tool to include a more appropriate set 
of header files in the files that you develop than would otherwise be the case.

The stated philosophy behind the tool is that you should directly include all 
header files that are used by a file. If struct a is declared in a.h and struct 
b is declared in b.h, your .c file that references both struct a and struct b 
should directly include a.h and b.h. But this will lead to including many more 
files than we have typically done in vpp. My personal preference, although not 
sanctioned by the pros, is to allow indirect header file inclusion. It turns 
out that there is a simple way to do this using include-what-you-use, and it 
does not require rewriting the tool's code.

include-what-you-use will suggest which header files should be added and which 
header files should be removed from the file that you are analyzing.

Understand that the corresponding .h file to a .c file will be treated 
specially. It will be analyzed along with the .c file.

For an example, I will use vnet/tcp/builtin_client.c. First I show files that 
are suggested to be removed from builtin_client.h.

vnet/tcp/builtin_client.h should remove these lines:
- #include   // lines 28-28
- #include   // lines 22-22
- #include   // lines 30-30
- #include   // lines 29-29
- #include   // lines 26-26
- #include   // lines 25-25

Running include-what-you-use involves running the clang C compiler, so if a 
necessary header file is missing and a type cannot be resolved, you will see 
regular compiler error messages.

After removing the header file includes above from builtin_client.h, and 
re-running include-what-you-use, we find the error:

In file included from vnet/tcp/builtin_client.c:20:
./vnet/tcp/builtin_client.h:40:3: error: unknown type name 'svm_fifo_t'
  svm_fifo_t *server_rx_fifo;
  ^

We manually search for the svm_fifo_t declaration and we see that rather than 
including svm_fifo_segment.h in builtin_client.h, we should have included 
svm_fifo.h.

Fixing that and re-running IWYU, we find

vnet/tcp/builtin_client.c:55:3: error: use of undeclared identifier 'session_fifo_event_t'
  session_fifo_event_t evt;
  ^

so therefore, session.h should have been included in builtin_client.c rather 
than builtin_client.h.

We also find

vnet/tcp/builtin_client.c:258:8: error: use of undeclared identifier 
'vnet_disconnect_args_t'
  vnet_disconnect_args_t _a, *a = &_a;
  ^
so application_interface.h should have been included in builtin_client.c rather 
than builtin_client.h.

Re-running IWYU tells us that no lines need to be removed from 
builtin_client.h, however,

vnet/tcp/builtin_client.c should remove these lines:
- #include   // lines 24-24
- #include   // lines 25-25
- #include   // lines 26-26
- #include   // lines 19-19
- #include   // lines 18-18
- #include   // lines 27-27

Removing these includes, re-running IWYU indicates that no more includes need 
to be removed from either builtin_client.h or builtin_client.c, and the 
compilation is successful. We are done, and
we have

builtin_client.h includes:
#include 
#include 
#include 
#include 

builtin_client.c includes:
#include 
#include 
#include 

Now, if on the other hand, a developer prefers to include all the headers 
directly, like many experts like to see, the result would be:

The full include-list for vnet/tcp/builtin_client.c:
#include 
#include// for memset, NULL
#include// for vnet_app_attach_args_t
#include  // for stream_session_handle
#include "svm/svm_fifo.h" // for svm_fifo_t, svm_fif...
#include "vlib/cli.h" // for vlib_cli_output
#include "vlib/global_funcs.h"// for vlib_get_thread_main
#include "vlib/init.h"// for VLIB_INIT_FUNCTION
#include "vlib/node_funcs.h"  // for vlib_process_get_ev...
#include "vlib/threads.h" // for vlib_get_thread_index
#include "vlibapi/api_common.h"   

Re: [vpp-dev] some issue about using unformat %u 

2017-09-20 Thread Dave Barach (dbarach)
Varargs functions effectively bypass strong type-checking. It can’t be helped.
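
The usual defensive pattern (a sketch, not code from the tree) is to unformat
into a u32 temporary and narrow it explicitly, since "%u" always stores a full
u32:

#include <vnet/ip/ip.h>

/* Sketch: never pass the address of a u16/u8 to "%u"; parse into a u32
   temporary, range-check, then assign to the narrower variable. */
static clib_error_t *
parse_addr_and_port (unformat_input_t * line_input,
		     ip4_address_t * out_addr, u16 * out_port)
{
  u32 tmp = 0;

  if (!unformat (line_input, "%U %u", unformat_ip4_address, out_addr, &tmp))
    return clib_error_return (0, "expected <ip4-address> <port>");

  if (tmp > 65535)
    return clib_error_return (0, "port %u out of range", tmp);

  *out_port = (u16) tmp;
  return 0;
}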

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of wang.hu...@zte.com.cn
Sent: Tuesday, September 19, 2017 11:34 PM
To: vpp-dev@lists.fd.io
Cc: gu.ji...@zte.com.cn; wang.ju...@zte.com.cn
Subject: [vpp-dev] some issue about using unformat %u


hi all:

we found some common usage issues with the use of CLI unformat, as follows:



u16 out_port = 0;

u32 vrf_id = 0, protocol;

else if (unformat (line_input, "%U %u", unformat_ip4_address,
   &out_addr, &out_port))



when passing a u16 or u8 type param (not u32), the local variable which sits behind 
"out_port" on the stack will be overwritten. Is that right? Are there any notes 
about this?

I think the code below may be what causes that issue.

unformat->va_unformat->do_percent->unformat_integer-> *(u32 *) v = value;











王辉 wanghui



IT开发工程师 IT Development Engineer
虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D 
Institute/Wireless Product Operation Division


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Multiple vppctl Considered Harmful

2017-09-19 Thread Dave Barach (dbarach)
Give me a minute, I'll try it right away...

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Tuesday, September 19, 2017 2:01 PM
To: vpp-dev 
Subject: [vpp-dev] Multiple vppctl Considered Harmful

Folks,

While I appear to be able to run a single vppctl up against VPP,
if I then start a second one, to the same VPP process, VPP immediately
aborts.  It's pretty unfriendly.

EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
DPDK physical memory layout:
Segment 0: phys:0x3520, len:2097152, virt:0x7ff40d00,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x35c0, len:8388608, virt:0x7ff40c60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x36c0, len:2097152, virt:0x7ff40c20,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6d80, len:224395264, virt:0x7ff3fea0,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x3f940, len:2097152, virt:0x7ff3fe60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x3f980, len:29360128, virt:0x7ff38c80,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Aborted



vpp# show version
vpp v17.10-rc0~307-g6b3a8ef built by jdl on bcc-1.netgate.com at Mon
Sep 11 18:38:26 CDT 2017



Does anyone else see that?  Or am I special?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Coverity runs

2017-09-19 Thread Dave Barach (dbarach)
Very cool! Thanks for working on it... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, September 19, 2017 11:50 AM
To: vpp-dev 
Subject: [vpp-dev] Coverity runs

All,

Coverity have increased the limits for our project size again; effective 
yesterday I run the build twice daily. 0600 and 1500 Eastern is what I have in 
cron currently, which I hope will be useful times for the majority of the 
current contributors to get feedback on their patches once merged. Thoughts on 
the timing welcome.

Chris.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP API Message Multi-Registration Question

2017-09-15 Thread Dave Barach (dbarach)
How about: only complain if the new registration is actually different from the 
old one?
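
i.e., something along these lines in vl_msg_api_config() (a sketch of the
suggestion, not a merged change), reusing the fields shown in the code below:

/* Warn only when a different handler is being installed over an existing
   one; identical re-registrations stay silent. */
if (am->msg_names[c->id] && am->msg_handlers[c->id] != c->handler)
  clib_warning ("BUG: multiple registrations of 'vl_api_%s_t_handler'",
		c->name);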

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Friday, September 15, 2017 3:35 PM
To: vpp-dev 
Subject: [vpp-dev] VPP API Message Multi-Registration Question

Folks,

We have a need to re-register API message handlers.
(Think re-fork/exec scenarios for daemons.)

Today, we register handlers via a call to this function:

vl_msg_api_set_handlers (int id, char *name, void *handler, void *cleanup,
 void *endian, void *print, int size, int traced)

When we do that today, we see this warning on the second call.

vl_msg_api_config (vl_msg_api_msg_config_t * c)
{

  [...]

  if (am->msg_names[c->id])
clib_warning ("BUG: multiple registrations of 'vl_api_%s_t_handler'",
  c->name);

  am->msg_names[c->id] = c->name;
  am->msg_handlers[c->id] = c->handler;
  am->msg_cleanup_handlers[c->id] = c->cleanup;
  am->msg_endian_handlers[c->id] = c->endian;
  am->msg_print_handlers[c->id] = c->print;
  am->message_bounce[c->id] = c->message_bounce;
  am->is_mp_safe[c->id] = c->is_mp_safe;

Sure, the handler is re-registered, but it is really annoying,
and it is misleading in our case.  So we are looking for a
way to squelch it.

Is there a way to *un*-bind a handler during a "graceful shutdown"
procedure so that we can remove any binding here, and thus
later when we re-bind it is all happy again?

Or, can we call a (new?) API function that says "Yeah, we know,
but squash that message for me." just prior to registering the
handlers.

Or, is there a graceful shutdown of the API handling that
I have just missed or overlooked somewhere?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

2017-09-13 Thread Dave Barach (dbarach)
I typically use "git commit --amend" followed by "git review [--draft]".

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Shachar Beiser
Sent: Wednesday, September 13, 2017 11:29 AM
To: vpp-dev@lists.fd.io
Cc: Shahaf Shuler ; Damjan Marion (damarion) 

Subject: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

Hi,

  I would like to send a second patch fixing the comments I received.
  I understand that it may not be done by "git push", and "git review"/"git 
review -s" also seems to have no effect.

  What is the procedure for sending a second patch?
 -Shachar Beiser.
  https://gerrit.fd.io/r/#/c/8390/1


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Spurious patch verification failure (gerrit 8400)

2017-09-13 Thread Dave Barach (dbarach)
See gerrit https://gerrit.fd.io/r/#/c/8400, 
https://jenkins.fd.io/job/vpp-verify-master-centos7/7070/console


12:29:12 make[1]: Leaving directory 
`/w/workspace/vpp-verify-master-centos7/test'
12:29:12 [vpp-verify-master-centos7] $ /bin/bash 
/tmp/hudson3100921859131279854.sh
12:29:12 Loaded plugins: fastestmirror, langpacks
12:29:12 Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache 
fast
12:29:17 Determining fastest mirrors
12:29:18  * base: centos.mirror.ca.planethoster.net
12:29:18  * epel: ftp.cse.buffalo.edu
12:29:18  * extras: centos.mirror.iweb.ca
12:29:18  * updates: centos.mirror.netelligent.ca
12:29:21 Package redhat-lsb-4.1-27.el7.centos.1.x86_64 already installed and 
latest version
12:29:21 Nothing to do
12:29:21 DISTRIB_ID: CentOS
12:29:21 DISTRIB_RELEASE: 7.3.1611
12:29:21 DISTRIB_CODENAME: Core
12:29:21 DISTRIB_DESCRIPTION: "CentOS Linux release 7.3.1611 (Core) "
12:29:21 INSTALLING VPP-DPKG-DEV from apt/yum repo
12:29:21 REPO_URL: https://nexus.fd.io/content/repositories/fd.io.master.centos7
12:29:21 Loaded plugins: fastestmirror, langpacks
12:29:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:29:52 Trying other mirror.
12:30:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:30:22 Trying other mirror.
12:30:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:30:52 Trying other mirror.
12:31:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:31:22 Trying other mirror.
12:31:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:31:52 Trying other mirror.
12:32:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:32:22 Trying other mirror.
12:32:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:32:52 Trying other mirror.
12:33:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:33:22 Trying other mirror.
12:33:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:33:52 Trying other mirror.
12:34:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:34:22 Trying other mirror.
12:34:22
12:34:22
12:34:22  One of the configured repositories failed (fd.io master branch latest 
merge),
12:34:22  and yum doesn't have enough cached data to continue. At this point 
the only
12:34:22  safe thing yum can do is fail. There are a few ways to work "fix" 
this:
12:34:22
12:34:22  1. Contact the upstream for the repository and get them to fix 
the problem.
12:34:22
12:34:22  2. Reconfigure the baseurl/etc. for the repository, to point to a 
working
12:34:22 upstream. This is most often useful if you are using a newer
12:34:22 distribution release than is supported by the repository (and 
the
12:34:22 packages for the previous distribution release still work).
12:34:22
12:34:22  

Re: [vpp-dev] vpp performance numbers with 10Gbps interface.

2017-09-12 Thread Dave Barach (dbarach)
+1. If you want to rx-and-drop packets, install a drop adjacency... Sending to 
an unrouteable address results in 100% icmp error replies...

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Tuesday, September 12, 2017 1:05 PM
To: Rahul Negi 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vpp performance numbers with 10Gbps interface.

Hi Rahul,

It looks like all your packets are going to ip4-imcp-error, ip4-local and 
ip4-udp-lookup. What is your test setup?

Florin

On Sep 12, 2017, at 5:10 AM, Rahul Negi wrote:

Hi All,
I was trying to measure the maximum PPS handled by VPP. I have installed Ubuntu 
16.04 on my server. I have followed the VPP recommended BIOS settings.

Hardware specs:
root@kujo:~# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):8
On-line CPU(s) list:   0-7
Thread(s) per core:1
Core(s) per socket:8
Socket(s): 1
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 45
Model name:Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
Stepping:  7
CPU MHz:   1200.000
CPU max MHz:   2900.
CPU min MHz:   1200.
BogoMIPS:  5786.39
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  20480K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est 
tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt 
tsc_deadline_timer aes xsave avx lahf_lm epb tpr_shadow vnmi flexpriority ept 
vpid xsaveopt dtherm


Vpp version:

vpp# show version
vpp v17.10-rc0~301-gb2d2fc7 built by root on kujo at Mon Sep 11 16:39:34 IST 
2017

My VPP setup has 1 main thread and 1 worker thread. I was not able to get more 
than 6 Mpps. Above 6 Mpps I can see the rx-miss counters in the VPP stats.

vpp# show interface
  Name   Idx   State  Counter  Count
TenGigabitEtherneta/0/0   1down  rx-error2
TenGigabitEtherneta/0/1   2 up   rx packets   52647168
 rx bytes   3369416188
 tx packets   52638150
 tx bytes   4842700014
 drops9024
 ip4  52645519
 tx-error1
local0 0down
vpp# show interface
  Name   Idx   State  Counter  Count
TenGigabitEtherneta/0/0   1down  rx-error2
TenGigabitEtherneta/0/1   2 up   rx packets   54696192
 rx bytes   3500553704
 tx packets   54687170
 tx bytes   5031209822
 drops9028
 ip4  54694538
 tx-error1
local0 0down
vpp# show interface
  Name   Idx   State  Counter  Count
TenGigabitEtherneta/0/0   1down  rx-error2
TenGigabitEtherneta/0/1   2 up   rx packets   56743168
 rx bytes   3631560168
 tx packets   56734146
 tx bytes   5219531614
 drops9028
 ip4  56741514
 rx-miss  23152160
 tx-error1
local0 0down
vpp# show interface
  Name

Re: [vpp-dev] VPP 1704 and router plugin

2017-09-12 Thread Dave Barach (dbarach)
Set a breakpoint in format_fib_table_name, and see if e.g. fib_table->ft_desc 
is NULL.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Bhanu Chander Gaddoju
Sent: Tuesday, September 12, 2017 11:45 AM
To: vpp-dev@lists.fd.io; Ni, Hongjun 
Subject: [vpp-dev] VPP 1704 and router plugin

Hi All,

  We are building the router plugin with the VPP 1704 branch. We used the VPP stable/1704 
branch and vppsb source code after the (https://gerrit.fd.io/r/#/c/5881/) check-in.

  The router plugin is loaded properly. We are able to see the router plugin when 
the "vppctl show plugin" command is issued.
  But the VPP daemon crashes when we issue "vppctl show ip fib". The crash 
dump and the VPP configuration are given below.
  Please help me in resolving this issue.

Crash Dump:
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x in ?? ()
(gdb) bt
#0  0x in ?? ()
#1  0x96b00034 in do_percent (va=, fmt=, 
_s=) at 
/root/vpp-1704/build-data/../src/vppinfra/format.c:372
#2  va_format (s=0x57a9dfe8 "ipv4-VRF:0, fib_index 0, flow hash: ", 
s@entry=0x0, fmt=fmt@entry=0x96e72ca8 "%U, fib_index %d, flow hash: %U", 
va=0x568fd988, va@entry=0x568fd9a8)
at /root/vpp-1704/build-data/../src/vppinfra/format.c:403
#3  0x96ed3cb4 in vlib_cli_output (vm=vm@entry=0x96f28ed0 
, fmt=fmt@entry=0x96e72ca8 "%U, fib_index %d, flow hash: 
%U")
at /root/vpp-1704/build-data/../src/vlib/cli.c:584
#4  0x96dfa69c in ip4_show_fib (vm=0x96f28ed0 , 
input=, cmd=) at 
/root/vpp-1704/build-data/../src/vnet/fib/ip4_fib.c:497
#5  0x96ed3f58 in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x96f28ed0 , cm=cm@entry=0x96f291a8 
, input=input@entry=0x568fde00,
parent_command_index=) at 
/root/vpp-1704/build-data/../src/vlib/cli.c:485
#6  0x96ed43cc in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x96f28ed0 , cm=cm@entry=0x96f291a8 
, input=input@entry=0x568fde00,
parent_command_index=) at 
/root/vpp-1704/build-data/../src/vlib/cli.c:463
#7  0x96ed43cc in vlib_cli_dispatch_sub_commands (vm=0x96f28ed0 
, cm=0x96f291a8 , 
input=0x568fde00,
parent_command_index=) at 
/root/vpp-1704/build-data/../src/vlib/cli.c:463
#8  0x96ed4700 in vlib_cli_input (vm=0x96f28ed0 , 
input=0x568fde00, function=, function_arg=)
at /root/vpp-1704/build-data/../src/vlib/cli.c:559
#9  0x00414d04 in vl_api_cli_request_t_handler ()
#10 0x96f4b434 in vl_msg_api_handler_with_vm_node (am=0x568fde00, 
the_msg=0x49b000, vm=0x305e7dc0, node=0x96f28ed0 )
at /root/vpp-1704/build-data/../src/vlibapi/api_shared.c:502
#11 0x96f32a94 in memclnt_process (vm=, node=0x6d, 
f=) at 
/root/vpp-1704/build-data/../src/vlibmemory/memory_vlib.c:543
#12 0x96eda090 in vlib_process_bootstrap (_a=) at 
/root/vpp-1704/build-data/../src/vlib/main.c:1226
#13 0x96b07854 in clib_calljmp () at 
/root/vpp-1704/build-data/../src/vppinfra/longjmp.S:676
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)


VPP configuration:
# vpp -c /etc/vpp/startup.conf &
# vppctl create loopback interface
# vppctl set interface state loop0 up
# vppctl set interface state GigabitEthernet0/3/0 up
# vppctl set interface state GigabitEthernet0/4/0 up
# vppctl set interface ip address loop0 2.2.2.2/32
# vppctl set interface ip address GigabitEthernet0/3/0 10.0.10.2/24
# vppctl set interface ip address GigabitEthernet0/4/0 10.0.20.2/24
# vppctl enable tap-inject
# vppctl show tap-inject
# ip addr add 10.0.10.2/24 dev vpp0
# ip addr add 10.0.20.2/24 dev vpp1
# ip link set dev vpp0 up
# ip link set dev vpp1 up
# vppctl show ip fib


Regards,
Bhanu,
HSDC, NXP India.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] u32 vs uint32_t

2017-09-11 Thread Dave Barach (dbarach)
+1, let’s stick with u32... Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: Monday, September 11, 2017 12:36 PM
To: Florin Coras ; Luke, Chris 
Cc: vpp-dev 
Subject: Re: [vpp-dev] u32 vs uint32_t

+1
On 09/11/2017 11:27 AM, Florin Coras wrote:
Hi Chris,

Personally, I’d like to enforce the use of u32. Is it an option to just replace 
all occurrences of uint32_t in ip.h/mpls.h?

Thanks,
Florin

On Sep 11, 2017, at 7:55 AM, Luke, Chris wrote:

For discussion: VPP has traditionally used its own fixed-width types, such as 
u32 and u64 and only uses standard types when referring to the external world 
(eg, to talk to libc, etc). Recently I’ve noticed the C99 variant, uint32_t 
creeping in more and into VPP internal matters. As a matter of style and 
consistency, which should we as a project be using?

Reason I ask: The recent MPLS patch (https://gerrit.fd.io/r/#/c/8371) uses both 
styles in .h files but doesn’t have stdint.h included in any path leading to 
those .h’s; Coverity appears to be fussy about this – it checks that all types 
used in a .h are defined in the scope of that .h. Upshot is that Coverity is 
balking at this and only 54% of the project now compiles under Coverity

To resolve the issue with Coverity, I am torn with adding “#include ” 
to ip.h/mpls.h to fix it where it happens, or just accept that humans are 
inconsistent and add it to vppinfra/types.h. Thoughts?

Chris.

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev





___

vpp-dev mailing list

vpp-dev@lists.fd.io

https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP contribute

2017-09-11 Thread Dave Barach (dbarach)
That’s right, no need to send patches to a mailing list. In fact, please don’t 
send patches to this list. ()...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Shachar Beiser
Sent: Monday, September 11, 2017 7:48 AM
To: vpp-dev@lists.fd.io
Cc: Damjan Marion (damarion) 
Subject: [vpp-dev] VPP contribute

Hi ,

   I am contributing to VPP for the first time.
   I am following the instructions on your web site: 
https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Setting_up_Gerrit
   Is that it? Don't I need to send my patches to the mailing list?

   -Shachar Beiser.

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] mheap performance

2017-09-08 Thread Dave Barach (dbarach)
Dear Jacek,

 

Oh, heck, we don’t need to use a sledgehammer to kill a fly. It will take five 
minutes to fix this problem. Copying Ole Troan for his input, and / or to 
simply fix the problem as follows:

 

Make a set of pools whose elements are n * CLIB_CACHE_LINE_BYTES in size. It’s 
easy enough to dynamically create a fresh pool if [all of a sudden] you need k 
* CLIB_CACHE_LINE_BYTES.

 

Allocate d->rules from the appropriate pool by rounding 1<<d->psid_length to a 
multiple of a cache line.

 

At that point, the memory allocator will instantly behave itself. If necessary, 
you can preallocate the rule pools, see also pool.h.
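
A bare-bones sketch of that shape (the names and the one-cache-line element
size are assumptions):

#include <vppinfra/pool.h>
#include <vppinfra/cache.h>

/* One pool per element size; each element padded to a whole cache line. */
typedef struct
{
  u8 data[CLIB_CACHE_LINE_BYTES];
} rule_block_t;

static rule_block_t *rule_pool;

static rule_block_t *
rule_block_alloc (void)
{
  rule_block_t *rb;
  pool_get_aligned (rule_pool, rb, CLIB_CACHE_LINE_BYTES);
  return rb;
}

static void
rule_block_free (rule_block_t * rb)
{
  pool_put (rule_pool, rb);
}

If preallocation is wanted, pool_alloc() from pool.h can be used up front.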

 

Absent data to the contrary, it’s reasonably likely that cache-line alignment 
of d->rules is unnecessary in the first place. Have you tried dropping the 
alignment constraint? 

 

Thanks… Dave

 

From: Jacek Siuda [mailto:j...@semihalf.com] 
Sent: Friday, September 8, 2017 10:39 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: vpp-dev@lists.fd.io; Michał Dubiel <m...@semihalf.com>
Subject: Re: [vpp-dev] mheap performance

 

Hi Dave,

The perf backtrace (taken from "control-only" lcore 0) is as follows:
-  91.87% vpp_main  libvppinfra.so.0.0.0[.] mheap_get_aligned
   - mheap_get_aligned
  - 99.48% map_add_del_psid
   vl_api_map_add_del_rule_t_handler
   vl_msg_api_handler_with_vm_node
   memclnt_process
   vlib_process_bootstrap
   clib_calljmp

Using DPDK's rte_malloc_socket(), CPU consumption drops to around 0,5%.

From my (somewhat brief) mheap code analysis, it looks like mheap might not 
take into account alignment when looking for free space to allocate a structure. 
So, in my case, when I keep allocating 16B objects with 64B alignment, it 
starts to examine each hole left by a previous object's aligned allocation 
and only then realizes it cannot be used because of alignment. But of course I 
might be wrong and the root cause is entirely elsewhere...

In my test, I'm just adding 300,000 tunnels (one domain+one rule).

Unfortunately, rte_malloc() provides only aligned memory allocation, not 
aligned-at-offset. Theoretically we could provide wrapper around it, but that 
would need some careful coding and a lot of testing. I made an attempt to 
quickly replace mheap globally, but of course it ended up in utter failure.

 

Right now, I added a concept of external allocator to clib (via function 
pointers), I'm enabling it only upon DPDK plugin initialization. However, such 
approach requires using it directly instead of clib alloc, (e.g. I did it upon 
rule adding). While it does not add dependency on DPDK, I'm not fully 
satisfied, because it would need manual replacement of all allocation calls. If 
you want, I can share the patch.

Best Regards,

Jacek.

 

2017-09-05 15:30 GMT+02:00 Dave Barach (dbarach) <dbar...@cisco.com>:

Dear Jacek,

 

Use of the clib memory allocator is mainly historical. It’s elegant in a couple 
of ways - including built-in leak-finding - but it has been known to backfire 
in terms of performance. Individual mheaps are limited to 4gb in a [typical] 
32-bit vector length image. 

 

Note that the idiosyncratic mheap API functions “tell me how long this object 
really is” and “allocate N bytes aligned to a boundary at a certain offset” are 
used all over the place.

 

I wouldn’t mind replacing it - so long as we don’t create a hard dependency on 
the dpdk - but before we go there...: Tell me a bit about the scenario at hand. 
What are we repeatedly allocating / freeing? That’s almost never necessary...

 

Can you easily share the offending backtrace?  

 

Thanks… Dave

 

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jacek Siuda
Sent: Tuesday, September 5, 2017 9:08 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] mheap performance

 

Hi,

I'm conducting a tunnel test using VPP (vnet) map with the following parameters:

ea_bits_len=0, psid_offset=16, psid=length, single rule for each domain; total 
number of tunnels: 300k, total number of control messages: 600k.

My problem is with simple adding tunnels. After adding more than ~150k-200k, 
performance drops significantly: first 100k is added in ~3s (on asynchronous C 
client), next 100k in another ~5s, but the last 100k takes ~37s to add; in 
total: ~45s. Python clients are performing even worse: 32 minutes(!) for 300k 
tunnels with synchronous (blocking) version and ~95s with asynchronous. The 
python clients are expected to perform a bit worse according to vpp docs, but I 
was worried by non-linear time of single tunnel addition that is visible even 
on C client.

While investigating this using perf, I found the culprit: it is the memory 
allocation done for ip address b

Re: [vpp-dev] Rearrangement of graph nodes

2017-09-08 Thread Dave Barach (dbarach)
One could do that, but what problem are you trying to solve? The data 
structures involved are not super-complicated, but what you’ve described is 
neither a beginner project nor a worthwhile project IMO.

If you want to spoof MAC addresses in the L2 path, add an L2 feature node which 
does that. Generally speaking, two nodes A and B have a contract in terms of 
buffer metadata setup. Arbitrary graph rewiring would result in either gross or 
subtle malfunction.

In terms of how to build a plugin: look at .../src/examples/sample-plugin.
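
For a rough idea of what that involves: a node declares its own graph arcs at
registration time rather than reading them from a graph.txt. A bare-bones
sketch (the change-mac node and its function are hypothetical):

#include <vlib/vlib.h>

static uword
change_mac_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
		    vlib_frame_t * frame)
{
  /* per-packet MAC rewrite and enqueue to "interface-output" would go here */
  return frame->n_vectors;
}

/* The node names its next nodes at registration time; that is how graph
   arcs get built. */
VLIB_REGISTER_NODE (change_mac_node) = {
  .function = change_mac_node_fn,
  .name = "change-mac",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_next_nodes = 1,
  .next_nodes = {
    [0] = "interface-output",
  },
};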

I maintain (sporadically) a set of emacs skeletons in .../extras/emacs. If you 
M-x eval-buffer all-skel.el, a subsequent M-x make-plugin will create a 
boilerplate plugin for you.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Ngo Doan Lap
Sent: Thursday, September 7, 2017 10:18 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Rearrangement of graph nodes

Hi,
From this page https://wiki.fd.io/view/VPP/What_is_VPP%3F
There is an option to create a plugin to rearrange graph nodes.
I want to write a plugin that builds graph nodes from a file, for example
graph.txt:
dpdk-input-->ethernet-input->change-mac

I would like to know your opinion: is it possible with VPP?
And if yes, can you tell me how to write a plugin to rearrange graph nodes?
(I'm unable to find the example/doc to build a plugin to rearrange graph nodes)
--
Thanks and Best Regards,
Ngo Doan Lap

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] mheap performance

2017-09-05 Thread Dave Barach (dbarach)
Dear Jacek,

Use of the clib memory allocator is mainly historical. It’s elegant in a couple 
of ways - including built-in leak-finding - but it has been known to backfire 
in terms of performance. Individual mheaps are limited to 4gb in a [typical] 
32-bit vector length image.

Note that the idiosyncratic mheap API functions “tell me how long this object 
really is” and “allocate N bytes aligned to a boundary at a certain offset” are 
used all over the place.

I wouldn’t mind replacing it - so long as we don’t create a hard dependency on 
the dpdk - but before we go there...: Tell me a bit about the scenario at hand. 
What are we repeatedly allocating / freeing? That’s almost never necessary...

Can you easily share the offending backtrace?

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jacek Siuda
Sent: Tuesday, September 5, 2017 9:08 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] mheap performance

Hi,
I'm conducting a tunnel test using VPP (vnet) map with the following parameters:
ea_bits_len=0, psid_offset=16, psid=length, single rule for each domain; total 
number of tunnels: 300k, total number of control messages: 600k.
My problem is with simple adding tunnels. After adding more than ~150k-200k, 
performance drops significantly: first 100k is added in ~3s (on asynchronous C 
client), next 100k in another ~5s, but the last 100k takes ~37s to add; in 
total: ~45s. Python clients are performing even worse: 32 minutes(!) for 300k 
tunnels with synchronous (blocking) version and ~95s with asynchronous. The 
python clients are expected to perform a bit worse according to vpp docs, but I 
was worried by non-linear time of single tunnel addition that is visible even 
on C client.
While investigating this using perf, I found the culprit: it is the memory 
allocation done for ip address by rule addition request.
The memory is allocated by clib, which is using mheap library (~98% of cpu 
consumption). I looked into mheap and it looks a bit complicated for allocating 
a short object.
I've done a short experiment by replacing (in vnet/map/ only) clib allocation 
with DPDK rte_malloc() and achieved a way better performance: 300k tunnels in 
~5-6s with the same C-client, and respectively ~70s and ~30-40s with Python 
clients. Also, I haven't noticed any negative impact on packet throughput with 
my experimental allocator.
So, here are my questions:
1) Did someone other reported performance penalties for using mheap library? 
I've searched the list archive and could not find any related questions.
2) Why mheap library was chosen to be used in clib? Are there any performance 
benefits in some scenarios?
3) Are there any (long- or short-term) plans to replace memory management in 
clib with some other library?
4) I wonder, if I'd like to upstream my solution, how should I approach 
customization of memory allocation, so it would be accepted by community. 
Installable function pointers defaulting to clib?

Best Regards,
Jacek Siuda.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Packet loss on use of API & cmdline

2017-09-01 Thread Dave Barach (dbarach)
Dear Colin,

That makes total sense... Tunnels are modelled as "magic interfaces" 
[especially] in the encap direction. Each tunnel has an output node, which 
means that the [first] FIB entry will need to add a graph arc.

A bit of "show vlib graph" action will confirm that... 

Thanks… Dave

-Original Message-
From: Colin Tregenza Dancer [mailto:c...@metaswitch.com] 
Sent: Friday, September 1, 2017 11:01 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Ole Troan 
<otr...@employees.org>; Neale Ranns (nranns) <nra...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] Packet loss on use of API & cmdline

I think there is something special in this case related to the fact that we're 
adding a new tunnel / subnet before we issue our 63 ip_neighbor_add_del 
calls, because it is only the first ip_neighbor_add_del call which updates the 
nodes, with all of the others just doing a rewrite.

I'll mail you guys the full (long) trace offline so you can see the overall 
sequence.

Cheers,

Colin. 
-----Original Message-
From: Dave Barach (dbarach) [mailto:dbar...@cisco.com] 
Sent: 01 September 2017 15:15
To: Colin Tregenza Dancer <c...@metaswitch.com>; Ole Troan 
<otr...@employees.org>; Neale Ranns (nranns) <nra...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] Packet loss on use of API & cmdline

Dear Colin, 

Of all of these, ip_neighbor_add_del seems like the one to tune right away.

The API message handler itself can be marked mp-safe right away. Both the ip4 
and the ip6 underlying routines are thread-aware (mumble RPC mumble).

We should figure out why the FIB thinks it needs to pull the node runtime 
update lever. AFAICT, adding ip arp / ip6 neighbor adjacencies shouldn’t 
require a node runtime update, at least not in the typical case. 

Copying Neale Ranns. I don't expect to hear back immediately; he's on PTO until 
9/11. 

Thanks… Dave

-Original Message-
From: Colin Tregenza Dancer [mailto:c...@metaswitch.com]
Sent: Friday, September 1, 2017 8:51 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Ole Troan <otr...@employees.org>
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] Packet loss on use of API & cmdline

Hi Dave,

Thanks for looking at this.

I get repeated vlib_node_runtime_update() calls when I use the API functions:  
ip_neighbor_add_del, gre_add_del_tunnel, create_loopback, 
sw_interface_set_l2_bridge & sw_interface_add_del_address (though there may be 
others which I’m not currently calling).

To illustrate, I've included below a formatted version of my barrier trace from 
when I make an ip_neighbor_add_del API call (raw traces for the other commands 
are included at the end).  

At the point this call was made there were 3 worker threads, ~425nodes in the 
system, and a load of ~3Mpps saturating two 10G NICs.

It shows the API function name, followed by a tree of the recursive calls to 
barrier_sync/release.  On each line I show the calling function name, current 
recursion depth, and elapsed timing from the point the barrier was actually 
closed.  

[50]: ip_neighbor_add_del

<2(80us)adj_nbr_update_rewrite_internal
<3(82us)vlib_node_runtime_update{(86us)}
(86us)>
<3(87us)vlib_node_runtime_update{(90us)}
(90us)>
<3(91us)vlib_node_runtime_update{(94us)}
(95us)>
(95us)>
<2(119us)adj_nbr_update_rewrite_internal
(120us)>
(135us)>
(136us)>
{(137us)vlib_worker_thread_node_runtime_update
[179us]
[256us]
worker=1
worker=2
worker=3
(480us)}

This trace is taken on my dev branch, where I am delaying the worker thread 
updates till just before the barrier release.  In the vlib_node_runtime_update 
functions, the time stamp within the {} braces show the point as which the 
rework_required flag is set (instead of the mainline behaviour of repeatedly 
invoking vlib_worker_thread_node_runtime_update()) 

At the end you can also see the additional profiling stamps I've added at 
various points within vlib_worker_thread_node_runtime_update().  The first two 
stamps are after the two stats sync loops, then there are three lines of 
tracing for the invocations of the function I've added to contain the code for 
the per worker re-fork.  Those functions calls are further profiled at various 
points, where the gap between B & C is where the clone node alloc/copying is 
occurring, and between C & D is where the old clone nodes are being freed.  As 
you might guess from the short C-D gap, this branch also included my 
optimization to allocate/free all the clone nodes in a single block.

Having successfully tested the move of the per thread re-fork into a separate 
function, I'm about try the "

Re: [vpp-dev] Packet loss on use of API & cmdline

2017-09-01 Thread Dave Barach (dbarach)
Dear Colin, 

Of all of these, ip_neighbor_add_del seems like the one to tune right away.

The API message handler itself can be marked mp-safe right away. Both the ip4 
and the ip6 underlying routines are thread-aware (mumble RPC mumble).
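
Concretely, that is a one-line change next to where the handler is registered.
A sketch only; it assumes the usual api_main global and the generated
VL_API_IP_NEIGHBOR_ADD_DEL message id from the API headers:

#include <vlibapi/api.h>

/* Sketch: mark the ip_neighbor_add_del handler mp-safe so it no longer
   forces a worker-thread barrier sync.  This call belongs next to the
   existing handler registration. */
static void
mark_ip_neighbor_add_del_mp_safe (void)
{
  api_main_t *am = &api_main;

  am->is_mp_safe[VL_API_IP_NEIGHBOR_ADD_DEL] = 1;
}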

We should figure out why the FIB thinks it needs to pull the node runtime 
update lever. AFAICT, adding ip arp / ip6 neighbor adjacencies shouldn’t 
require a node runtime update, at least not in the typical case. 

Copying Neale Ranns. I don't expect to hear back immediately; he's on PTO until 
9/11. 

Thanks… Dave

-Original Message-
From: Colin Tregenza Dancer [mailto:c...@metaswitch.com] 
Sent: Friday, September 1, 2017 8:51 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Ole Troan <otr...@employees.org>
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] Packet loss on use of API & cmdline

Hi Dave,

Thanks for looking at this.

I get repeated vlib_node_runtime_update() calls when I use the API functions:  
ip_neighbor_add_del, gre_add_del_tunnel, create_loopback, 
sw_interface_set_l2_bridge & sw_interface_add_del_address (though there may be 
others which I’m not currently calling).

To illustrate, I've included below a formatted version of my barrier trace from 
when I make an ip_neighbor_add_del API call (raw traces for the other commands 
are included at the end).  

At the point this call was made there were 3 worker threads, ~425nodes in the 
system, and a load of ~3Mpps saturating two 10G NICs.

It shows the API function name, followed by a tree of the recursive calls to 
barrier_sync/release.  On each line I show the calling function name, current 
recursion depth, and elapsed timing from the point the barrier was actually 
closed.  

[50]: ip_neighbor_add_del

<2(80us)adj_nbr_update_rewrite_internal
<3(82us)vlib_node_runtime_update{(86us)}
(86us)>
<3(87us)vlib_node_runtime_update{(90us)}
(90us)>
<3(91us)vlib_node_runtime_update{(94us)}
(95us)>
(95us)>
<2(119us)adj_nbr_update_rewrite_internal
(120us)>
(135us)>
(136us)>
{(137us)vlib_worker_thread_node_runtime_update
[179us]
[256us]
worker=1
worker=2
worker=3
(480us)}

This trace is taken on my dev branch, where I am delaying the worker thread 
updates till just before the barrier release.  In the vlib_node_runtime_update 
functions, the time stamp within the {} braces show the point as which the 
rework_required flag is set (instead of the mainline behaviour of repeatedly 
invoking vlib_worker_thread_node_runtime_update()) 

At the end you can also see the additional profiling stamps I've added at 
various points within vlib_worker_thread_node_runtime_update().  The first two 
stamps are after the two stats sync loops, then there are three lines of 
tracing for the invocations of the function I've added to contain the code for 
the per worker re-fork.  Those functions calls are further profiled at various 
points, where the gap between B & C is where the clone node alloc/copying is 
occurring, and between C & D is where the old clone nodes are being freed.  As 
you might guess from the short C-D gap, this branch also included my 
optimization to allocate/free all the clone nodes in a single block.

Having successfully tested the move of the per thread re-fork into a separate 
function, I'm about to try the "collective brainsurgery" version, where I will get 
the workers to re-fork their own clones (with the barrier still held) rather 
than having it done sequentially by main.

I'll let you know how it goes...

Colin.

_Raw traces of other calls_

Sep  1 12:57:38 pocvmhost vpp[6315]: [155]: gre_add_del_tunnel
Sep  1 12:57:38 pocvmhost vpp[6315]: 
<vl_msg_api_barrier_sync<1(53us)vlib_node_runtime_update{(86us)}(87us)><1(96us)vlib_node_runtime_update{(99us)}(99us)><1(100us)vlib_node_runtime_update{(102us)}(103us)><1(227us)vlib_node_runtime_update{(232us)}(233us)><1(235us)vlib_node_runtime_update{(237us)}(238us)><1(308us)vlib_node_runtime_update{(313us)}(314us)><1(316us)adj_nbr_update_rewrite_internal(317us)><1(349us)adj_nbr_update_rewrite_internal(350us)>(353us)>{(354us)vlib_worker_thread_node_runtime_update[394us][462us]worker=1[423:425]worker=2[423:425]worker=3[423:425](708us)}
Sep  1 12:57:38 pocvmhost vpp[6315]: Barrier(us) # 42822  - O   300  D 5  C   708  U 0 - nested   8
Sep  1 12:57:38 pocvmhost vpp[6315]: [13]: sw_interface_set_flags
Sep  1 12:57:38 pocvmhost vpp[6315]: <vl_msg_api_barrier_sync(45us)>
Sep  1 12:57:38 pocvmhost vpp[6315]: Barrier(us) # 42823  - O  1143  D70  C    46  U 0 - nested   0
Sep  1 12:57:38 pocvmhost vpp[6315]: [85]: create_loopback
Sep  1 12:57:38 pocvmhost vpp

Re: [vpp-dev] Packet loss on use of API & cmdline

2017-09-01 Thread Dave Barach (dbarach)
Dear Colin,

Please describe the scenario which leads to vlib_node_runtime_update(). I 
wouldn't mind having a good long stare at the situation. 

I do like the parallel data structure update approach that you've described, 
tempered with the realization that it amounts to "collective brain surgery." I 
had more than enough trouble making the data structure fork-and-update code 
work reliably in the first place. 

Thanks… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Colin Tregenza Dancer via vpp-dev
Sent: Friday, September 1, 2017 6:12 AM
To: Ole Troan 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Packet loss on use of API & cmdline

Hi Ole,

Thanks for the quick reply.

I did think about making all the commands we use is_mp_safe, but was both 
concerned about the extent of the work, and the potential for introducing 
subtle bugs.  I also didn't think it would help my key problem, which is the 
multi-ms commands which make multiple calls to vlib_node_runtime_update(), not 
least because it seemed likely that I'd need to hold the barrier across the 
multiple node changes in a single API call (to avoid inconsistent intermediate 
states).

Do you have any thoughts on the change to call 
vlib_worker_thread_node_runtime_update() a single time just before releasing 
the barrier?  It seems to work fine, but I'm keen to get input from someone who 
has been working on the codebase for longer.


More generally, even with my changes, vlib_worker_thread_node_runtime_update() 
is the single function which holds the barrier for longer than all other 
elements, and is the one which therefore most runs the risk of causing Rx 
overflow.  

Detailed profiling showed that for my setup, ~40-50% of the time is taken in 
"/* re-fork nodes */" with the memory functions used to allocate the new clone 
nodes, and free the old clones.  Given that we know the number of nodes at the 
start of the loop, and given that (as far as I can tell) new clone nodes aren't 
altered between calls to the update function, I tried a change to allocate/free 
all the nodes as a single block (whilst still cloning and inserting them as 
before). I needed to make a matching change in the "/* fork nodes */" code in 
start_workers(), (and probably need to make a matching change in the 
termination code,) but in testing this almost halves the execution time of 
vlib_worker_thread_node_runtime_update() without any obvious problems. 
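
A toy version of the single-block idea, with simplified stand-in types rather 
than the real vlib node structures:

#include <stdlib.h>
#include <string.h>

typedef struct { int dummy[16]; } node_clone_t;   /* stand-in for a clone */

/* One malloc/free for all clones instead of one allocation per node. */
static node_clone_t *
clone_all_nodes_in_one_block (const node_clone_t * templates, int n_nodes)
{
  node_clone_t *block = malloc ((size_t) n_nodes * sizeof (node_clone_t));
  if (block)
    memcpy (block, templates, (size_t) n_nodes * sizeof (node_clone_t));
  return block;                 /* later released with a single free (block) */
}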

Having said that, the execution time of the node cloning remains O(M.N), where 
M is the number of threads and N the number of nodes.  This is reflected in the 
fact that when I try on larger system (i.e. more workers and more nodes) I 
again suffer packet loss because this one function is holding the barrier for 
multiple ms.

The change I'm currently working on is to try and reduce the delay to O(N) by 
getting the worker threads to clone their own data structures in parallel.  I'm 
doing this by extending their busy wait on the barrier, to also include looking 
for a flag telling them to rebuild their data structures.  When the main thread 
is about to release the barrier, and decides it needs a rebuild, I was going to 
get it to do the relatively quick stats scraping, then sets the flag telling 
the workers to rebuild their clones.  The rebuild will then happen on all the 
workers in parallel (which looking at the code seems to be safe), and only when 
all the cloning is done, will the main thread actually release the barrier.  
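
A rough sketch of that scheme, using invented names and a simple generation 
counter (the real barrier code is more involved):

#include <stdatomic.h>

typedef struct
{
  atomic_int barrier_hold;       /* 1 while the barrier is closed */
  atomic_int refork_generation;  /* bumped by main when a re-fork is needed */
  atomic_int reforks_done;       /* incremented by each worker when it finishes */
} barrier_state_t;

/* stand-in for the per-worker clone rebuild */
static void
worker_refork_own_clone (int worker_index)
{
  (void) worker_index;
}

/* Worker side: spin on the barrier, but also re-fork when asked. */
static void
worker_wait_on_barrier (barrier_state_t * bs, int worker_index,
                        int *seen_generation)
{
  while (atomic_load (&bs->barrier_hold))
    {
      int g = atomic_load (&bs->refork_generation);
      if (g != *seen_generation)
        {
          *seen_generation = g;
          worker_refork_own_clone (worker_index);   /* runs in parallel on all workers */
          atomic_fetch_add (&bs->reforks_done, 1);
        }
    }
}

/* Main side: request the re-fork, wait for every worker, then open the barrier. */
static void
main_release_barrier (barrier_state_t * bs, int n_workers, int rework_needed)
{
  if (rework_needed)
    {
      atomic_store (&bs->reforks_done, 0);
      atomic_fetch_add (&bs->refork_generation, 1);
      while (atomic_load (&bs->reforks_done) < n_workers)
        ;                                           /* all clones rebuilt in parallel */
    }
  atomic_store (&bs->barrier_hold, 0);              /* now actually lower the barrier */
}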

I hope to get results from this soon, and will let you know how it goes, but 
again I'm very keen to get other people's views.

Cheers,

Colin.

-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: 01 September 2017 09:37
To: Colin Tregenza Dancer 
Cc: Neale Ranns (nranns); Florin Coras; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Packet loss on use of API & cmdline

Colin,

Good investigation!

A good first step would be to make all APIs and CLIs thread safe.
When an API/CLI is thread safe, that must be flagged through the is_mp_safe 
flag.
It is quite likely that many already are, but haven't been flagged as such.
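
On the CLI side, the flag is simply a member of the command registration; a 
made-up example ("show foo" and its handler are invented names, not existing 
commands) would look roughly like:

static clib_error_t *
show_foo_command_fn (vlib_main_t * vm, unformat_input_t * input,
                     vlib_cli_command_t * cmd)
{
  /* reads only thread-safe state, so no barrier is required */
  vlib_cli_output (vm, "foo");
  return 0;
}

VLIB_CLI_COMMAND (show_foo_command, static) = {
  .path = "show foo",
  .short_help = "show foo",
  .function = show_foo_command_fn,
  .is_mp_safe = 1,              /* dispatch without closing the worker barrier */
};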

Best regards,
Ole


> On 31 Aug 2017, at 19:07, Colin Tregenza Dancer via vpp-dev 
>  wrote:
> 
> I’ve been doing quite a bit of investigation since my last email, in 
> particular adding instrumentation on barrier calls to report 
> open/lowering/closed/raising times, along with calling trees and nesting 
> levels.
> 
> As a result I believe I now have a clearer understanding of what’s leading to 
> the packet loss I’m observing when using the API, along with some code 
> changes which in my testing reliably eliminate the 500K packet loss I was 
> previously observing.
> 
> Would either of you (or anyone else on the list) be able to offer their 
> opinions on my understanding of 

Re: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called between different plugins

2017-08-31 Thread Dave Barach (dbarach)
Actually, that’s not quite right... Here’s a bit of code from 
.../vlib/unix/plugin.c:

  /*
   * Sort the plugins by name. This is important.
   * API traces contain absolute message numbers.
   * Loading plugins in directory (vs. alphabetical) order
   * makes trace replay incredibly fragile.
   */
  vec_sort_with_function (pm->plugin_info, plugin_name_sort_cmp);

You can assume that AAA will start before AAB, etc.

Thanks… Dave

From: wang.hu...@zte.com.cn [mailto:wang.hu...@zte.com.cn]
Sent: Wednesday, August 30, 2017 9:39 PM
To: Damjan Marion (damarion) <damar...@cisco.com>
Cc: vpp-dev@lists.fd.io; zhao.qingl...@zte.com.cn; wu.bi...@zte.com.cn; 
gu.ji...@zte.com.cn; dong.ju...@zte.com.cn; Dave Barach (dbarach) 
<dbar...@cisco.com>
Subject: 答复: Re: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called 
between different plugins


Thank you both for the help. Now we know that plugin startup may have no 
guaranteed order, since plugins are read via "readdir".

And a function defined in another plugin's .so cannot be called directly from 
our plugin, so vlib_call_init_function may not work.

We have reconsidered the dependencies between our plugins and adjusted them 
accordingly.











王辉 wanghui



IT开发工程师 IT Development Engineer
虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D 
Institute/Wireless Product Operation Division


Original Mail
From: <damar...@cisco.com<mailto:damar...@cisco.com>>;
To: 王辉 10067165;
Cc: <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>; 赵清凌 10066964; 吴兵 10040069; 顾剑 10036178; 董娟 00096251; <dbar...@cisco.com<mailto:dbar...@cisco.com>>;
Date: 2017-08-30 23:35
Subject: Re: [vpp-dev] About the order of VLIB_INIT_FUNCTION called between 
different plugins


Yes, also please note that you can at any time use vlib_get_plugin_symbol(..) 
function to get pointer to symbol in another plugin. If you get NULL then 
another plugin is not loaded.

So something like this should work, assuming that you want to go that way...

static clib_error_t *
bar_init (vlib_main_t * vm)
{
  clib_error_t *error = 0;

  if (vlib_get_plugin_symbol ("foo_plugin.so", "foo_init") == 0)
    {
      clib_warning ("foo plugin not loaded. bar disabled");
      bar_main.disabled = 1;
      return 0;
    }

  if ((error = vlib_call_init_function (vm, foo_init)))
    return error;

  /* continue with bar init... */
  return error;
}



> On 30 Aug 2017, at 12:37, Dave Barach (dbarach) 
> <dbar...@cisco.com<mailto:dbar...@cisco.com>> wrote:
>
> Explicit dependencies between plugins is probably not a good idea. There is 
> little to guarantee that both A and B will be loaded.
>
> Please describe the use-case in more detail.
>
> Thanks… Dave
>
> From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
> [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of 
> wang.hu...@zte.com.cn<mailto:wang.hu...@zte.com.cn>
> Sent: Wednesday, August 30, 2017 4:01 AM
> To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
> Cc: zhao.qingl...@zte.com.cn<mailto:zhao.qingl...@zte.com.cn>; 
> wu.bi...@zte.com.cn<mailto:wu.bi...@zte.com.cn>; 
> gu.ji...@zte.com.cn<mailto:gu.ji...@zte.com.cn>; 
> dong.ju...@zte.com.cn<mailto:dong.ju...@zte.com.cn>
> Subject: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called between 
> different plugins
>
> Hi all:
>
> How can we control the order in which VLIB_INIT_FUNCTION (user xxx_init
> functions) are called across different plugins?
>
> Does it depend on the plugin name, or on the sequence in which plugins are
> loaded?
>
> Or is there any other way to adjust the order?
>
>
>
> Thanks~
>
>
>
>
>
>
>
> 王辉 wanghui
>
>
>
> IT开发工程师 IT Development Engineer
> 虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D 
> Institute/Wireless Product Operation Division
>
>
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
> https://lists.fd.io/mailman/listinfo/vpp-dev




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Issue forwarding TCP packets

2017-08-30 Thread Dave Barach (dbarach)
This is a system problem; vpp can’t solve it all by itself.

When forwarding packets at L2, vpp doesn’t look past the ethernet header. It’s 
simply delivering packets generated by the Linux kernel on one interface to 
another Linux kernel interface.

The kernel cheats by not generating L4 checksums. When we deliver such packets 
to a Linux kernel interface which isn’t part of a Linux bridge, it throws them 
on the floor. Linux bridging winks and nods, knowing that the L4 checksum won’t 
be set.

Two options: compute the checksums in software, or ignore them across the 
board. Neither option is perfect. The checksum computation is slow. I’m not a 
big believer in checksums, to be honest, but ignoring the L4 checksum across 
the board is probably not acceptable.
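
For reference, the software option boils down to the standard Internet 
checksum (RFC 1071) over the TCP pseudo-header, header and payload; a 
simplified stand-alone version (not the vpp code path) looks like this:

#include <stdint.h>
#include <stddef.h>

/* RFC 1071 checksum over a buffer. The caller sums the pseudo-header,
 * TCP header and payload, then stores the folded result in the TCP
 * checksum field (which must be zero while summing). */
static uint16_t
ip_csum_fold_buffer (const void *data, size_t len)
{
  const uint8_t *p = data;
  uint32_t sum = 0;

  while (len > 1)
    {
      sum += (uint32_t) ((p[0] << 8) | p[1]);
      p += 2;
      len -= 2;
    }
  if (len)                      /* odd trailing byte */
    sum += (uint32_t) (p[0] << 8);

  while (sum >> 16)             /* fold carries into the low 16 bits */
    sum = (sum & 0xffff) + (sum >> 16);

  return (uint16_t) ~sum;
}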

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Prabhjot Singh Sethi
Sent: Wednesday, August 30, 2017 9:45 AM
To: Florin Coras ; Prabhjot Singh Sethi 

Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Issue forwarding TCP packets

Hi Florin,
For now we can proceed with a requirement to disable offload on VMs. but still 
keeping performance in view i think VPP should have option similar to linux 
bridge to be able to do forwarding without correct checksums, while offloading 
is enable the real checksum calculations should be done only when the packet is 
egressing out of the server to other compute nodes or gateway.

i think we can leave the topic open for now, and it can be considered later for 
performance improvements for TCP

Regards,
Prabhjot

- Original Message -
From: "Florin Coras"
To: "Prabhjot Singh Sethi"
Cc:
Sent: Tue, 29 Aug 2017 11:13:14 -0700
Subject: Re: [vpp-dev] Issue forwarding TCP packets


Hi Prabhjot,

In your setup, VPP just switches packets from one interface to the other, 
irrespective of their L4 checksum. That is, it does not read nor update them. 
Therefore, since the source Linux interface does not provide correct checksums, 
the destination rejects all packets.

As you’ve noted lower, although the packets are ultimately delivered to the 
app, even with the Linux bridge, the checksum is not correctly computed when 
offloading is on. I suspect the reason for this is an underlying optimization 
in the Linux kernel whereby local interfaces can exchange L4 packets without 
the use of checksums. In fact, the approach makes sense because packets never 
leave the kernel between the sndmsg() and recvmsg() calls so there’s no need to 
protect data integrity. However, this condition does not hold anymore when vpp 
does the switching, so the checksums must be computed before the packets leave 
the kernel (hence the need for checksum offload disabling).

Hope this helps,
Florin

> On Aug 28, 2017, at 11:46 PM, 
> prabh...@techtrueup.com wrote:
>
> Thanks Florin,
> It works with offload disabled; earlier, when I tried changing offload 
> settings, I missed doing it on one machine.
>
> However, my question is still why VPP is unable to handle it: is it a 
> missing configuration or missing functionality in VPP?
>
> I can ssh from one VM to another without turning off offload when using a 
> Linux bridge, and I don't see any change in the packet after it is 
> forwarded. tcpdump complains about the checksum, but everything works fine.
>
> packet ingressing into linux bridge from vm-1
> 06:41:58.626869 02:53:71:ef:2f:2a > 02:91:fb:46:9e:43, ethertype IPv4 
> (0x0800), length 74: (tos 0x0, ttl 64, id 6341, offset 0, flags [DF], proto 
> TCP (6), length 60)
> 1.1.1.3.51912 > 1.1.1.4.22: Flags [S], cksum 0x0437 (incorrect -> 0xdeae), 
> seq 2306393024, win 29200, options [mss 1460,sackOK,TS val 1825541 ecr 
> 0,nop,wscale 7], length 0
> 0x0000: 0291 fb46 9e43 0253 71ef 2f2a 0800 4500
> 0x0010: 003c 18c5 4000 4006 1def 0101 0103 0101
> 0x0020: 0104 cac8 0016 8978 c3c0   a002
> 0x0030: 7210 0437  0204 05b4 0402 080a 001b
> 0x0040: db05   0103 0307
>
> packet egressing from linux bridge to vm-2
> 06:41:58.627130 02:53:71:ef:2f:2a > 02:91:fb:46:9e:43, ethertype IPv4 
> (0x0800), length 74: (tos 0x0, ttl 64, id 6341, offset 0, flags [DF], proto 
> TCP (6), length 60)
> 1.1.1.3.51912 > 1.1.1.4.22: Flags [S], cksum 0x0437 (incorrect -> 0xdeae), 
> seq 2306393024, win 29200, options [mss 1460,sackOK,TS val 1825541 ecr 
> 0,nop,wscale 7], length 0
> 0x0000: 0291 fb46 9e43 0253 71ef 2f2a 0800 4500
> 0x0010: 003c 18c5 4000 4006 1def 0101 0103 0101
> 0x0020: 0104 cac8 0016 8978 c3c0   a002
> 0x0030: 7210 0437  0204 05b4 0402 080a 001b
> 0x0040: db05   0103 0307
>
>
> Regards,
> Prabhjot
>
> Quoting Florin Coras >:
>
>> Hi Prabhjot,
>>
>> From your 

Re: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called between different plugins

2017-08-30 Thread Dave Barach (dbarach)
Explicit dependencies between plugins is probably not a good idea. There is 
little to guarantee that both A and B will be loaded.

Please describe the use-case in more detail.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of wang.hu...@zte.com.cn
Sent: Wednesday, August 30, 2017 4:01 AM
To: vpp-dev@lists.fd.io
Cc: zhao.qingl...@zte.com.cn; wu.bi...@zte.com.cn; gu.ji...@zte.com.cn; 
dong.ju...@zte.com.cn
Subject: [vpp-dev] About the order of  VLIB_INIT_FUNCTION called between 
different plugins


Hi all:

How can we control the order in which VLIB_INIT_FUNCTION (user xxx_init 
functions) are called across different plugins?

Does it depend on the plugin name, or on the sequence in which plugins are 
loaded?

Or is there any other way to adjust the order?



Thanks~







王辉 wanghui



IT开发工程师 IT Development Engineer
虚拟化南京四部/无线研究院/无线产品经营部 NIV Nanjing Dept. IV/Wireless Product R&D 
Institute/Wireless Product Operation Division


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for fd.io_vpp

2017-08-28 Thread Dave Barach (dbarach)
+1

From: Damjan Marion [mailto:dmarion.li...@gmail.com]
Sent: Saturday, August 26, 2017 3:11 PM
To: Eric Chen <eri...@marvell.com>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Sergio Gonzalez Monroy 
<sergio.gonzalez.mon...@intel.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for 
fd.io_vpp

Hi Eric,

Same code compiles perfectly fine on ARM64 with newer gcc version.

If you are starting a new development cycle, it makes sense to me to pick up 
the latest Ubuntu release, especially when new hardware is involved, instead 
of trying to chase this kind of bug.

Do you have any strong reason to stay on Ubuntu 16.04? Both 17.04 and the 
upcoming 17.10 work fine on arm64, and VPP compiles without issues.

Thanks,

Damjan


On 26 Aug 2017, at 15:23, Eric Chen 
<eri...@marvell.com<mailto:eri...@marvell.com>> wrote:

Dave,

Thanks for your answer.
I tried the variation below; it doesn't help.

Btw, there is more than one place reporting "error: unable to generate reloads 
for:".

I will try checking out version 17.01.1, since with the same native compiler I 
succeeded in building fd.io_odp4vpp (which is based on fd.io 17.01.1).

I will keep you posted.

Thanks
Eric

From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
Sent: 26 August 2017 20:08
To: Eric Chen <eri...@marvell.com<mailto:eri...@marvell.com>>; Sergio Gonzalez 
Monroy 
<sergio.gonzalez.mon...@intel.com<mailto:sergio.gonzalez.mon...@intel.com>>; 
vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: RE: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for 
fd.io_vpp

Just so everyone knows, the function in question is almost too simple for its 
own good:

always_inline uword
vlib_process_suspend_time_is_zero (f64 dt)
{
  return dt < 10e-6;
}

What happens if you try this variation?

always_inline int
vlib_process_suspend_time_is_zero (f64 dt)
{
  if (dt < 10e-6)
    return 1;
  return 0;
}

This does look like a gcc bug, but it may not be hard to work around...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Eric Chen
Sent: Friday, August 25, 2017 11:02 PM
To: Eric Chen <eri...@marvell.com<mailto:eri...@marvell.com>>; Sergio Gonzalez 
Monroy 
<sergio.gonzalez.mon...@intel.com<mailto:sergio.gonzalez.mon...@intel.com>>; 
vpp-dev <vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>>
Subject: Re: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for 
fd.io_vpp

Hi Sergio,

I upgraded to Ubuntu 16.04.

I succeeded in natively building fd.io_odp4vpp (w/ odp-linux). However, when 
building fd.io_vpp (w/ dpdk), it reported the error below (almost the same; 
the only difference is dpdk vs. odp-linux).

Has anyone met this before? It seems to be a gcc bug.

In file included from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/error_funcs.h:43:0,
 from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/vlib.h:70,
 from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vnet/l2/l2_fib.c:19:
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h: In 
function ‘vlib_process_suspend_time_is_zero’:
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1: 
error: unable to generate reloads for:
}
^
(insn 11 37 12 2 (set (reg:CCFPE 66 cc)
(compare:CCFPE (reg:DF 79)
(reg:DF 80))) 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:441 
395 {*cmpedf}
 (expr_list:REG_DEAD (reg:DF 80)
(expr_list:REG_DEAD (reg:DF 79)
(nil
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1: 
internal compiler error: in curr_insn_transform, at lra-constraints.c:3509
Please submit a full bug report,
with preprocessed source if appropriate.
See  for instructions.
Makefile:6111: recipe for target 'vnet/l2/l2_fib.lo' failed
make[4]: *** [vnet/l2/l2_fib.lo] Error 1
make[4]: *** Waiting for unfinished jobs



ericxh@linaro-developer:~/work/git_work/fd.io_vpp$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
5.3.1-14ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libqu

Re: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for fd.io_vpp

2017-08-26 Thread Dave Barach (dbarach)
Just so everyone knows, the function in question is almost too simple for its 
own good:

always_inline uword
vlib_process_suspend_time_is_zero (f64 dt)
{
  return dt < 10e-6;
}

What happens if you try this variation?

always_inline int
vlib_process_suspend_time_is_zero (f64 dt)
{
  if (dt < 10e-6)
    return 1;
  return 0;
}

This does look like a gcc bug, but it may not be hard to work around...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Eric Chen
Sent: Friday, August 25, 2017 11:02 PM
To: Eric Chen; Sergio Gonzalez Monroy; vpp-dev
Subject: Re: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for 
fd.io_vpp

Hi Sergio,


I upgraded to Ubuntu 16.04.

I succeeded in natively building fd.io_odp4vpp (w/ odp-linux). However, when 
building fd.io_vpp (w/ dpdk), it reported the error below (almost the same; 
the only difference is dpdk vs. odp-linux).

Has anyone met this before? It seems to be a gcc bug.



In file included from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/error_funcs.h:43:0,

 from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/vlib.h:70,

 from 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vnet/l2/l2_fib.c:19:

/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h: In 
function ‘vlib_process_suspend_time_is_zero’:

/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1: 
error: unable to generate reloads for:

}

^

(insn 11 37 12 2 (set (reg:CCFPE 66 cc)

(compare:CCFPE (reg:DF 79)

(reg:DF 80))) 
/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:441 
395 {*cmpedf}

 (expr_list:REG_DEAD (reg:DF 80)

(expr_list:REG_DEAD (reg:DF 79)

(nil

/home/ericxh/work/git_work/fd.io_vpp/build-data/../src/vlib/node_funcs.h:442:1: 
internal compiler error: in curr_insn_transform, at lra-constraints.c:3509

Please submit a full bug report,

with preprocessed source if appropriate.

See  for instructions.

Makefile:6111: recipe for target 'vnet/l2/l2_fib.lo' failed

make[4]: *** [vnet/l2/l2_fib.lo] Error 1

make[4]: *** Waiting for unfinished jobs







ericxh@linaro-developer:~/work/git_work/fd.io_vpp$ gcc -v

Using built-in specs.

COLLECT_GCC=gcc

COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/5/lto-wrapper

Target: aarch64-linux-gnu

Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
5.3.1-14ubuntu2' --with-bugurl=file:///usr/share/doc/gcc-5/README.Bugs 
--enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-5 --enable-shared --enable-linker-build-id 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-libstdcxx-time=yes 
--with-default-libstdcxx-abi=new --enable-gnu-unique-object 
--disable-libquadmath --enable-plugin --with-system-zlib 
--disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.5.0-gcj-5-arm64/jre --enable-java-home 
--with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-5-arm64 
--with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-5-arm64 
--with-arch-directory=aarch64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar 
--enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror 
--enable-checking=release --build=aarch64-linux-gnu --host=aarch64-linux-gnu 
--target=aarch64-linux-gnu

Thread model: posix

gcc version 5.3.1 20160413 (Ubuntu/Linaro 5.3.1-14ubuntu2)






From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Eric Chen
Sent: 25 August 2017 21:20
To: Sergio Gonzalez Monroy; vpp-dev
Subject: Re: [vpp-dev] [EXT] Re: compiling error natively on an am64 box for 
fd.io_vpp

Hi Sergio,

Thanks a lot.

I looked at the log and searched for "APIGEN:"; indeed, "ipsec.api.h" was not 
generated.

So I changed the .mk to remove "--without-libssl".

Then "ipsec.api.h" was generated, but I do not understand why 
"--without-libssl" does not work; there must be some dependency between the 
different options.

Anyway, thank you for the help.


Eric
From: Sergio Gonzalez Monroy [mailto:sergio.gonzalez.mon...@intel.com]
Sent: 2017年8月25日 19:56
To: Eric Chen >; vpp-dev 
>
Subject: [EXT] Re: [vpp-dev] compiling error natively on an am64 box for 
fd.io_vpp

External Email

Hi Eric,

The ipsec.api.h file should be auto-generated; did you have any other errors 
before that one?

Thanks,
Sergio
