Re: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

2017-11-30 Thread Dave Barach (dbarach)
At least for now, process nodes run on the main thread. See line 1587 of 
.../src/vlib/main.c.

The lldp-process is not super-complicated. Set a gdb breakpoint on line 157 
[switch(event_type)], cause it to do something, and you can walk through it, 
etc.
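
For reference, here is a minimal sketch of what a process node looks like (purely
illustrative; the function and node names below are made up, not the actual lldp
code). The node registers with .type = VLIB_NODE_TYPE_PROCESS and its function is
a cooperative loop that suspends until an event or timeout, which is why it is
dispatched from the main thread rather than from a worker's polling loop:

static uword
example_process_fn (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
{
  uword event_type, *event_data = 0;

  while (1)
    {
      /* Suspend this process until an event arrives or 30 seconds elapse */
      vlib_process_wait_for_event_or_clock (vm, 30.0);
      event_type = vlib_process_get_events (vm, &event_data);

      switch (event_type)
        {
        case ~0:
          /* timeout expired, no event */
          break;
        default:
          /* handle the event(s) carried in event_data */
          break;
        }
      vec_reset_length (event_data);
    }
  return 0;
}

VLIB_REGISTER_NODE (example_process_node) = {
  .function = example_process_fn,
  .type = VLIB_NODE_TYPE_PROCESS,
  .name = "example-process",
};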

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Yeddula, Avinash
Sent: Thursday, November 30, 2017 5:49 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

Hello,

I have a setup with 1 worker thread (Core 8) and 1 main thread (Core 0).

As I read about the node type VLIB_NODE_TYPE_PROCESS, it says
"The graph node scheduler invokes these processes in much the same way as 
traditional vector-processing run-to-completion graph  nodes".

For example, take a node like "lldp_process_node": as I see, whenever a timeout 
occurs or an event is generated, a frame is sent out of an interface. The 
questions I have are:


  1.  The part I'm not able to figure out yet is: where (on which thread/core) 
is this "lldp_process_node" running in the background? I'm assuming it cannot 
be a worker thread.

  2.  Would you please point me to the piece of code in the vpp infra that 
schedules all nodes of type "VLIB_NODE_TYPE_PROCESS"?

  3.  I tried to turn on a few debugs such as "VLIB_BUFFER_TRACE_TRAJECTORY" 
and a few other ones. None of them seems to generate any traces/logs ("show 
trace" doesn't give me any info). Any pointers on how to enable relevant logs 
for this activity?

Thanks
-Avinash

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

2017-11-30 Thread Yeddula, Avinash
Hello,

I have a setup with 1 worker thread (Core 8) and 1 main thread (Core 0).

As I read about the node type VLIB_NODE_TYPE_PROCESS, it says
"The graph node scheduler invokes these processes in much the same way as 
traditional vector-processing run-to-completion graph  nodes".

For example, take a node like "lldp_process_node": as I see, whenever a timeout 
occurs or an event is generated, a frame is sent out of an interface. The 
questions I have are:


  1.  The part I'm not able to figure out yet is: where (on which thread/core) 
is this "lldp_process_node" running in the background? I'm assuming it cannot 
be a worker thread.

  2.  Would you please point me to the piece of code in the vpp infra that 
schedules all nodes of type "VLIB_NODE_TYPE_PROCESS"?

  3.  I tried to turn on a few debugs such as "VLIB_BUFFER_TRACE_TRAJECTORY" 
and a few other ones. None of them seems to generate any traces/logs ("show 
trace" doesn't give me any info). Any pointers on how to enable relevant logs 
for this activity?

Thanks
-Avinash

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] zero copy when delivering packet from vpp to a VM/container?

2017-11-30 Thread Yuliang Li
Hi all,

Is there a way to attach a VM/container to VPP so that packets between vpp
and the VM/container require zero copy?

Thanks,
-- 
Yuliang Li
PhD student
Department of Computer Science
Yale University
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Jenkins jobs not starting from a "clean" state?

2017-11-30 Thread Marco Varlese
Thomas,

On Thu, 2017-11-30 at 10:24 -0500, Thomas F Herbert wrote:

[SNIP]

> >   
> >   Maybe "unhappy" is a little too strong :) :) :)
> >   
> > 
> >   
> >   I feel that being DPDK such an important piece in the VPP
> > infrastructure I would think it should be built during the 
> >   "VPP build process" rather than having the DPDK devel
> > packages installed beforehand. At the end of the day, we
> >   even have the whole infrastructure in place to built it and
> > link the built DPDK in VPP so why don't we use it?
> > 
> 
> My view is that ultimately with respect to distros support of
> fd.io/VPP the object is to have a dependency on an external upstream
> DPDK RPM. Therefore to be able to run VPP with external DPDK RPM is
> critical. I don't see the disadvantage of having DPDK RPM installed
> beforehand. 

Then I'd have a question for you, Thomas: if that's the goal, shouldn't the jobs
you run on Jenkins (for CentOS) build VPP using the DPDK_SHARED mode?

Anyway, as previously mentioned, for openSUSE I followed the steps not to install
the packaged DPDK, so I think we can keep things as they are...

[SNIP]


Cheers,
Marco

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Jenkins jobs not starting from a "clean" state?

2017-11-30 Thread Thomas F Herbert



On 11/30/2017 02:52 AM, Marco Varlese wrote:

Dear Ed,

On Wed, 2017-11-29 at 18:57 +, Ed Kern (ejk) wrote:



On Nov 29, 2017, at 3:09 AM, Marco Varlese > wrote:

Hi Ed,

On Wed, 2017-11-29 at 03:24 +, Ed Kern (ejk) wrote:


All the vpp verify jobs that I've looked at are still set to 
‘single use slave’, so there is no re-use.


If the job is run by Jenkins and is either ubuntu or centos it will 
attempt to pull and install:

ubuntu:
vpp-dpdk-dev
vpp-dpdk-dkms


I suppose that's exactly my point of "not-clean-state”.


Just to be clear…

Are you unhappy that it is doing those package installs before 
running the make verify (or build.sh in the case of opensuse)?

Or just because you think those package installs are breaking the build?


Maybe "unhappy" is a little too strong :) :) :)

I feel that, DPDK being such an important piece of the VPP 
infrastructure, it should be built during the
"VPP build process" rather than having the DPDK devel packages 
installed beforehand. At the end of the day, we
even have the whole infrastructure in place to build it and link the 
built DPDK into VPP, so why don't we use it?
My view is that ultimately, with respect to distros' support of fd.io/VPP, 
the objective is to have a dependency on an external upstream DPDK RPM. 
Therefore being able to run VPP with an external DPDK RPM is critical. I 
don't see the disadvantage of having the DPDK RPM installed beforehand. 
However, we should maintain the capability of building DPDK within the 
VPP project because that is a convenience for developers and people 
deploying build environments from source code with one-stop shopping.


As I just said, this is my view and not trying to enforce it!




It will attempt to install but since it finds those packages already 
installed then obviously it doesn't keep going.


well no… well, it certainly should have kept going (you trimmed the log 
you sent to not include the failure or a link to a specific job).


Fair enough; my bad...



I was trying to ask and understand how it is possible that on a 
"fresh booted" VM there are already packages installed.


Well, when either the openstack or container image is ‘booted’, 
neither dpdk nor vpp is installed.
32k other prereqs and base packages, but (tmk, with the openstack clone) 
not dpdk.


Sure and that's understood; I mistakenly used "fresh booted" here...





I suppose that's what the message "Up-to-date DPDK package already 
installed" points out, am I correct?


Again, I want to be careful we are on the same page..

DPDK is not installed in the base template image.

DPDK IS now installed as a package (to ubuntu and centos):
AFTER check style
BEFORE make verify or build.sh
as part of the jenkins build scripts.


Yes, indeed. We're on the same page.

+1


What I was trying to propose is not to install DPDK as a package 
(ubuntu / centos) but leave everything to the Makefile/Build 
infrastructure in VPP to do those steps.
That will also help us to strengthen the VPP build-infrastructure and 
spot any regression when/if modifications are made to the various 
Makefiles...

I don't see the necessity of this. See my comments above.





Similarly, the DPDK tarball is already downloaded when the 'curl' 
command runs since the source tarball can already be found in /dpdk




So from the base template we go from no directory
template base image is spun up
git clone/fetch/etc is run
check style is run
there should be no dpdk.xxx.tar.xz anywhere at this point
make verify or build.sh (I'm only speaking about verify builds) is run
as part of that (I myself trip the download by doing make config in 
the dpdk directory; never bothered to track down how it is tripped 
‘normally’)

the tar.xz is pulled.

If you actually see an order different from this, I'd be curious to see it.

thanks,

Ed

Cheers,
Marco





centos:
vpp-dpdk-devel

If running the opensuse build, or any build outside of Jenkins, it 
will attempt to build it from scratch.

Right, I believe that should always be the approach tho...


(that's one of the issues you were seeing the other day)

https://gerrit.fd.io/r/#/c/9606/

both that dpdk-17.08 rev’d up and also the fact that they added 
stable to the directory name..
and to make matters worse (only because their mirrors are hosed) 
the makefile was pointed at fast.dpdk.org, 
which points to three servers that return at least two different 
cksums… (so a total of 3 different: a. pre 11/27, b. two post 11/27)
in my patch above I just changed it to static.dpdk.org, 
which is slower but consistent.
Great... we could have modified my patch since yours looks pretty 
similar... anyway, I abandoned mine in the hope yours gets merged 
sooner.




Note: ignore the cpoc failures… I'm still bumping into an OOM condition; 
waiting on Vanessa to come back and bump me up

https://rt.linuxfoundation.org/Ticket/Display.html?id=48884




On Nov 28, 2017, at 7:58 AM, Marco 

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
I'm afraid I haven't followed csit work for long enough to be sure what to add.

The current VppCounters class summarizes the stats through 
show_vpp_statistics(), which contains the results from "show run", "show hard", 
and "show error".


I'm adding the csit-dev ML in CC.


--

Gabriel Ganne


From: Luke, Chris 
Sent: Thursday, November 30, 2017 3:15:14 PM
To: Gabriel Ganne; Ole Troan
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] api functions using shared memory


Which “show run” info? The stats in the header are calculated and some of the 
base values needed for it are missing in the current API; I intend to fix 
precisely that with this work since they are ideal summary lines for ‘vpptop’.



Chris.



From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Thursday, November 30, 2017 9:02 AM
To: Luke, Chris ; Ole Troan 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory



Chris,

It seems your work in https://gerrit.fd.io/r/#/c/9483/ does all that Maciek and 
Dave discussed in VPP-55.

Thanks again !



--

Gabriel Ganne



From: Gabriel Ganne
Sent: Thursday, November 30, 2017 2:52:06 PM
To: Luke, Chris; Ole Troan
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory



Actually, during the CSIT weekly call yesterday a missing VPP api for 
"show run" was mentioned.

I think I even found a jira for it : 
https://jira.fd.io/browse/VPP-55



It seemed like no one was working on it, and so I had a look.

In the ticket, Dave Barach suggested adding this to the get_node_graph api 
function, which is why I went there.



If you could add the info from "show run" into your dev, this would be great.

Otherwise I can work on it after you've finished. There's no rush.



Regards,



--

Gabriel Ganne



From: Luke, Chris >
Sent: Thursday, November 30, 2017 2:39:02 PM
To: Gabriel Ganne; Ole Troan
Subject: RE: [vpp-dev] api functions using shared memory



What data for each node are you looking for? So I can make sure it ends up 
included. The existing get_node_graph is fairly limited in what it retrieves 
aside from the adjacencies and a handful of stats.



My approach right now, to keep it simple, is to use the _dump/_details 
mechanism to return a list of items that encode the thread index and node index 
with the other details in a pretty flat structure; this should then be easily 
consumed by any binding. For Python I’ll also provide a way to reanimate the 
data into a fairly simple object model.
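
As a rough illustration of that kind of flat per-node record (field names here 
are hypothetical, not the actual message definition in the patch):

typedef struct
{
  u32 thread_index;   /* thread the counters were sampled on */
  u32 node_index;     /* graph node index on that thread */
  u64 calls;          /* total dispatches of the node */
  u64 vectors;        /* total vectors (packets) processed */
  u64 clocks;         /* total clocks spent in the node */
  u64 suspends;       /* suspends (process nodes only) */
} node_runtime_record_t;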



You can see the first pass of my work at 
https://gerrit.fd.io/r/#/c/9222/
 and the two patches leading up to it which exposed a mechanism to interpret 
the result_in_shmem from Python and deserialize it into an object model. Ole 
rightly objects to using shmem, though I think there is still merit in merging 
the basic shmem reader since, while the mechanism exists in the API, we should 
support it where we can. Otherwise we should remove it from the API altogether.



My current work on this I expect to have cleaned up and usable in a few days, 
though I’m travelling next week (Kubecon) which may interrupt things if I don’t 
make enough progress this week.



Chris.



From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Thursday, November 30, 2017 7:47 AM
To: Ole Troan >; Luke, Chris 
>
Subject: Re: [vpp-dev] api functions using shared memory



Great !

Thanks.



--

Gabriel Ganne





From: Ole Troan >
Sent: Thursday, November 30, 2017 12:50
To: Gabriel Ganne
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory



Gabriel,

> I am looking at the 

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Luke, Chris
Which "show run" info? The stats in the header are calculated and some of the 
base values needed for it are missing in the current API; I intend to fix 
precisely that with this work since they are ideal summary lines for 'vpptop'.
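
(For context, the header lines of "show run" are derived from per-node counters 
roughly as in the sketch below; this is only the arithmetic, with placeholder 
names, not the actual VPP code:)

/* rough sketch of the derived summary stats; placeholder variable names */
double vectors_per_sec   = (double) total_vectors / interval_seconds;
double vectors_per_call  = (double) total_vectors / total_calls;
double clocks_per_vector = (double) total_clocks  / total_vectors;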

Chris.

From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Thursday, November 30, 2017 9:02 AM
To: Luke, Chris ; Ole Troan 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory


Chris,

It seems your work in https://gerrit.fd.io/r/#/c/9483/ does all that Maciek and 
Dave discussed in VPP-55.

Thanks again !



--

Gabriel Ganne


From: Gabriel Ganne
Sent: Thursday, November 30, 2017 2:52:06 PM
To: Luke, Chris; Ole Troan
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory


Actually, during the CSIT weekly call yesterday a missing VPP api for 
"show run" was mentioned.

I think I even found a jira for it : https://jira.fd.io/browse/VPP-55



It seemed like no one was working on it, and so I had a look.

In the ticket, Dave Barach suggested adding this to the get_node_graph api 
function, which is why I went there.



If you could add the info from "show run" into your dev, this would be great.

Otherwise I can work on it after you've finished. There's no rush.



Regards,



--

Gabriel Ganne


From: Luke, Chris >
Sent: Thursday, November 30, 2017 2:39:02 PM
To: Gabriel Ganne; Ole Troan
Subject: RE: [vpp-dev] api functions using shared memory


What data for each node are you looking for? So I can make sure it ends up 
included. The existing get_node_graph is fairly limited in what it retrieves 
aside from the adjacencies and a handful of stats.



My approach right now, to keep it simple, is to use the _dump/_details 
mechanism to return a list of items that encode the thread index and node index 
with the other details in a pretty flat structure; this should then be easily 
consumed by any binding. For Python I'll also provide a way to reanimate the 
data into a fairly simple object model.



You can see the first pass of my work at 
https://gerrit.fd.io/r/#/c/9222/
 and the two patches leading up to it which exposed a mechanism to interpret 
the result_in_shmem from Python and deserialize it into an object model. Ole 
rightly objects to using shmem, though I think there is still merit in merging 
the basic shmem reader since, while the mechanism exists in the API, we should 
support it where we can. Otherwise we should remove it from the API altogether.



My current work on this I expect to have cleaned up and usable in a few days, 
though I'm travelling next week (Kubecon) which may interrupt things if I don't 
make enough progress this week.



Chris.



From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Thursday, November 30, 2017 7:47 AM
To: Ole Troan >; Luke, Chris 
>
Subject: Re: [vpp-dev] api functions using shared memory



Great !

Thanks.



--

Gabriel Ganne





From: Ole Troan >
Sent: Thursday, November 30, 2017 12:50
To: Gabriel Ganne
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory



Gabriel,

> I am looking at the get_node_graph() api function, for use in python.
> It returns  a u64 reply_in_shmem value which points to the shared memory and 
> must then be processed by vlib_node_unserialize() (as is done in vat) but I 
> only saw such a function in C.
> Is there any way to do this in python ? (other languages should have the same 
> issue).
>
> Also, I had a look at the cli api function (which works the same way) and 
> saw that it had a cli_inband version which apparently was designed to 
> replace the cli api function because it was using shared memory 
> (https://gerrit.fd.io/r/#/c/2575/)
> Should the get_node_graph api function also get an *_inband version ?

Yes. That's also a prerequisite for alternate transports like the socket API.
I think Chris is working on it.

Cheers,
Ole
___
vpp-dev 

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
Chris,

It seems your work in https://gerrit.fd.io/r/#/c/9483/ does all that Maciek and 
Dave discussed in VPP-55.

Thanks again !


--

Gabriel Ganne


From: Gabriel Ganne
Sent: Thursday, November 30, 2017 2:52:06 PM
To: Luke, Chris; Ole Troan
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory


Actually, during the CSIT weekly call yesterday a missing VPP api for 
"show run" was mentioned.

I think I even found a jira for it : https://jira.fd.io/browse/VPP-55



It seemed like no one was working on it, and so I had a look.

In the ticket, Dave Barach suggested adding this to the get_node_graph api 
function, which is why I went there.


If you could add the info from "show run" into your dev, this would be great.

Otherwise I can work on it after you've finished. There's no rush.


Regards,


--

Gabriel Ganne


From: Luke, Chris 
Sent: Thursday, November 30, 2017 2:39:02 PM
To: Gabriel Ganne; Ole Troan
Subject: RE: [vpp-dev] api functions using shared memory


What data for each node are you looking for? So I can make sure it ends up 
included. The existing get_node_graph is fairly limited in what it retrieves 
aside from the adjacencies and a handful of stats.



My approach right now, to keep it simple, is to use the _dump/_details 
mechanism to return a list of items that encode the thread index and node index 
with the other details in a pretty flat structure; this should then be easily 
consumed by any binding. For Python I’ll also provide a way to reanimate the 
data into a fairly simple object model.



You can see the first pass of my work at 
https://gerrit.fd.io/r/#/c/9222/
 and the two patches leading up to it which exposed a mechanism to interpret 
the result_in_shmem from Python and deserialize it into an object model. Ole 
rightly objects to using shmem, though I think there is still merit in merging 
the basic shmem reader since, while the mechanism exists in the API, we should 
support it where we can. Otherwise we should remove it from the API altogether.



My current work on this I expect to have cleaned up and usable in a few days, 
though I’m travelling next week (Kubecon) which may interrupt things if I don’t 
make enough progress this week.



Chris.



From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Thursday, November 30, 2017 7:47 AM
To: Ole Troan ; Luke, Chris 
Subject: Re: [vpp-dev] api functions using shared memory



Great !

Thanks.



--

Gabriel Ganne





From: Ole Troan >
Sent: Thursday, November 30, 2017 12:50
To: Gabriel Ganne
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory



Gabriel,

> I am looking at the get_node_graph() api function, for use in python.
> It returns  a u64 reply_in_shmem value which points to the shared memory and 
> must then be processed by vlib_node_unserialize() (as is done in vat) but I 
> only saw such a function in C.
> Is there any way to do this in python ? (other languages should have the same 
> issue).
>
> Also, I had a look at the cli api function (which works the same way) and 
> saw that it had a cli_inband version which apparently was designed to 
> replace the cli api function because it was using shared memory 
> (https://gerrit.fd.io/r/#/c/2575/)
> Should the get_node_graph api function also get an *_inband version ?

Yes. That's also a prerequisite for alternate transports like the socket API.
I think Chris is working on it.

Cheers,
Ole

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
Actually, during the CSIT weekly call yesterday a missing VPP api for 
"show run" was mentioned.

I think I even found a jira for it : https://jira.fd.io/browse/VPP-55



It seemed like no one was working on it, and so I had a look.

In the ticket, Dave Barach suggested adding this to the get_node_graph api 
function, which is why I went there.


If you could add the info from "show run" into your dev, this would be great.

Otherwise I can work on it after you've finished. There's no rush.


Regards,


--

Gabriel Ganne


From: Luke, Chris 
Sent: Thursday, November 30, 2017 2:39:02 PM
To: Gabriel Ganne; Ole Troan
Subject: RE: [vpp-dev] api functions using shared memory


What data for each node are you looking for? So I can make sure it ends up 
included. The existing get_node_graph is fairly limited in what it retrieves 
aside from the adjacencies and a handful of stats.



My approach right now, to keep it simple, is to use the _dump/_details 
mechanism to return a list of items that encode the thread index and node index 
with the other details in a pretty flat structure; this should then be easily 
consumed by any binding. For Python I’ll also provide a way to reanimate the 
data into a fairly simple object model.



You can see the first pass of my work at 
https://gerrit.fd.io/r/#/c/9222/
 and the two patches leading up to it which exposed a mechanism to interpret 
the result_in_shmem from Python and deserialize it into an object model. Ole 
rightly objects to using shmem, though I think there is still merit in merging 
the basic shmem reader since, while the mechanism exists in the API, we should 
support it where we can. Otherwise we should remove it from the API altogether.



My current work on this I expect to have cleaned up and usable in a few days, 
though I’m travelling next week (Kubecon) which may interrupt things if I don’t 
make enough progress this week.



Chris.



From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Thursday, November 30, 2017 7:47 AM
To: Ole Troan ; Luke, Chris 
Subject: Re: [vpp-dev] api functions using shared memory



Great !

Thanks.



--

Gabriel Ganne





From: Ole Troan >
Sent: Thursday, November 30, 2017 12:50
To: Gabriel Ganne
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] api functions using shared memory



Gabriel,

> I am looking at the get_node_graph() api function, for use in python.
> It returns  a u64 reply_in_shmem value which points to the shared memory and 
> must then be processed by vlib_node_unserialize() (as is done in vat) but I 
> only saw such a function in C.
> Is there any way to do this in python ? (other languages should have the same 
> issue).
>
> Also, I had a look at the cli api function (which works the same way) and 
> saw that it had a cli_inband version which apparently was designed to 
> replace the cli api function because it was using shared memory 
> (https://gerrit.fd.io/r/#/c/2575/)
> Should the get_node_graph api function also get an *_inband version ?

Yes. That's also a prerequisite for alternate transports like the socket API.
I think Chris is working on it.

Cheers,
Ole

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] problem in elog format

2017-11-30 Thread Dave Barach (dbarach)
Hmmm. I’ve never seen that issue, although I haven’t run c2cpel in a while. 
I’ll take a look later today.

It looks like .../src/perftool.am builds it, so look under 
build-root/install-xxx and (possibly) install it manually...

Thanks… Dave

From: Juan Salmon [mailto:salmonju...@gmail.com]
Sent: Thursday, November 30, 2017 12:50 AM
To: Dave Barach (dbarach) 
Cc: Florin Coras ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in elog format

Thanks a lot,
Now I want to convert elog file to text file.
I compiled perftools in the test directory, but when running the c2cpel tool, the 
following error occurred:

c2cpel: error while loading shared libraries: libcperf.so.0: cannot open shared 
object file: No such file or directory

Best Regards,
Juan Salmon.

On Wed, Nov 29, 2017 at 3:53 PM, Dave Barach (dbarach) 
> wrote:

PMFJI, but we have organized schemes for capturing, serializing, and eventually 
displaying string data.



Please note: a single "format" call will probably cost more than the entire 
clock-cycle budget available to process a packet. Really. Seriously. Printfs 
(aka format calls) in the packet-processing path are to be avoided at all 
costs. The basic event-logger modus operandi is to capture binary data and 
pretty-print it offline.



At times, one will need or want to log string data. Here's how to proceed:



The printf-like function elog_string(...) adds a string to the event log string 
heap, and returns a cookie which offline tools use to print that string. The 
"T" format specifier in an event definition means "go print the string at the 
indicated u32 string heap offset”. Here’s an example:



  /* *INDENT-OFF* */
  ELOG_TYPE_DECLARE (e) =
  {
    .format = "serialize-msg: %s index %d",
    .format_args = "T4i4",
  };
  struct
  {
    u32 c[2];
  } *ed;
  ed = ELOG_DATA (mc->elog_main, e);
  ed->c[0] = elog_id_for_msg_name (mc, msg->name);
  ed->c[1] = si;



So far so good, but let’s do a bit of work to keep from blowing up the string 
heap:



static u32
elog_id_for_msg_name (mc_main_t * m, char *msg_name)
{
  uword *p, r;
  uword *h = m->elog_id_by_msg_name;
  u8 *name_copy;

  if (!h)
    h = m->elog_id_by_msg_name = hash_create_string (0, sizeof (uword));

  p = hash_get_mem (h, msg_name);
  if (p)
    return p[0];
  r = elog_string (m->elog_main, "%s", msg_name);

  name_copy = format (0, "%s%c", msg_name, 0);

  hash_set_mem (h, name_copy, r);
  m->elog_id_by_msg_name = h;

  return r;
}



As in: each unique string appears exactly once in the event-log string heap. 
Hash_get_mem (x) is way cheaper than printf(x). Please remember that this hash 
flavor is not inherently thread-safe.



In the case of enumerated strings, use the “t” format specifier. It only costs 
1 octet to represent up to 256 constant strings:



  ELOG_TYPE_DECLARE (e) =
  {
    .format = "my enum: %s",
    .format_args = "t1",
    .n_enum_strings = 2,
    .enum_strings =
    {
      "string 1",
      "string 2",
    },
  };
  struct
  {
    u8 which;
  } *ed;
  ed = ELOG_DATA (_global_main.elog_main, e);
  ed->which = which;





HTH… Dave



-Original Message-
From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Wednesday, November 29, 2017 4:43 AM
To: Juan Salmon >
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in elog format



Hi Juan,



We don’t typically use elogs to store strings, still, you may be able to get it 
to run with:



struct
{
  u8 err[20];
} * ed;

And then copy your data to err: clib_memcpy (ed->err, your_vec, vec_len 
(your_vec)). Make sure your vec is 0 terminated.
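
Putting that together with your snippet, a minimal sketch might be (illustrative 
only; it assumes w->elog_track exists and your_vec is a 0-terminated vector 
shorter than 20 bytes):

  ELOG_TYPE_DECLARE (e) = {
    .format = "Test LOG: %s",
    .format_args = "s20",
  };
  struct
  {
    u8 err[20];
  } *ed;

  ed = ELOG_TRACK_DATA (&vlib_global_main.elog_main, e, w->elog_track);
  /* copy the 0-terminated string into the fixed-size event data */
  clib_memcpy (ed->err, your_vec, vec_len (your_vec));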



HTH,

Florin



> On Nov 28, 2017, at 9:12 PM, Juan Salmon > wrote:
> 
> I want to use event-log and send string to one of elements of ed struct.
> but the result is not correct.
> 
> the sample code:
> 
> ELOG_TYPE_DECLARE (e) = {
> .format = "Test LOG: %s",
> .format_args = "s20",
> };
> struct
> {
> u8 * err;
> } * ed;
> 
> vlib_worker_thread_t * w = vlib_worker_threads + cpu_index;
> ed = ELOG_TRACK_DATA (_global_main.elog_main, e, w->elog_track);
> 
> ed->err = format (0,"%s", "This is a Test");
> 
> Could you please help me?
> 
> Best Regards,
> Juan Salmon.
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev




Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Ole Troan
Gabriel,

> I am looking at the get_node_graph() api function, for use in python.
> It returns  a u64 reply_in_shmem value which points to the shared memory and 
> must then be processed by vlib_node_unserialize() (as is done in vat) but I 
> only saw such a function in C.
> Is there any way to do this in python ? (other languages should have the same 
> issue).
> 
> Also, I had a look at the cli api function (which works the same way) and 
> saw that it had a cli_inband version which apparently was designed to 
> replace the cli api function because it was using shared memory 
> (https://gerrit.fd.io/r/#/c/2575/)
> Should the get_node_graph api function also get an *_inband version ?

Yes. That's also a prerequisite for alternate transports like the socket API.
I think Chris is working on it.
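
To illustrate the difference (a sketch only; the inband variant below is 
hypothetical and does not exist in the API today): the current reply hands back 
a pointer into shared memory, whereas an inband reply would carry the serialized 
data in the message itself, so any transport can consume it:

/* roughly what the generated reply looks like today */
typedef struct
{
  u16 _vl_msg_id;
  u32 context;
  i32 retval;
  u64 reply_in_shmem;   /* only usable by clients attached to the shmem segment */
} vl_api_get_node_graph_reply_t;

/* hypothetical *_inband-style reply: data travels inline */
typedef struct
{
  u16 _vl_msg_id;
  u32 context;
  i32 retval;
  u32 length;
  u8 data[0];           /* serialized node graph carried in the message */
} vl_api_get_node_graph_inband_reply_t;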

Cheers,
Ole




signature.asc
Description: Message signed with OpenPGP
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SR MPLS not effective

2017-11-30 Thread 薛欣颖

Hi Neale,

I can't configure the command like this:
'VPP# sr mpls policy add bsid 999 next 210  209  208  207  206  205  204  203  
202  201  
unknown input `209  208  207  206  205  204  ...''

I configured the command like before. And the all info is shown below:
packet info:
00:05:58:166326: af-packet-input 
af_packet: hw_if_index 1 next-index 4 
tpacket2_hdr: 
status 0x2001 len 78 snaplen 78 mac 66 net 80 
sec 0x5a1feacb nsec 0x372a239a vlan 0 
00:05:58:166355: ethernet-input 
IP4: 00:00:00:66:00:55 -> 00:0c:29:4d:af:b5 
00:05:58:166385: ip4-input 
ICMP: 21.1.1.5 -> 23.1.1.5 
tos 0x00, ttl 64, length 64, checksum 0x4cb1 
fragment id 0x0001 
ICMP echo_request checksum 0x5f5d 
00:05:58:166391: ip4-lookup 
fib 0 dpo-idx 33 flow hash: 0x 
ICMP: 21.1.1.5 -> 23.1.1.5 
tos 0x00, ttl 64, length 64, checksum 0x4cb1 
fragment id 0x0001 
ICMP echo_request checksum 0x5f5d 
00:05:58:166401: ip4-load-balance 
fib 0 dpo-idx 33 flow hash: 0x 
ICMP: 21.1.1.5 -> 23.1.1.5 
tos 0x00, ttl 64, length 64, checksum 0x4cb1 
fragment id 0x0001 
ICMP echo_request checksum 0x5f5d 
00:05:58:166405: ip4-mpls-label-imposition 
mpls-header:[16416:63:0:eos]   // when I 
configured a two-layer label, this info was 'mpls-header:[101:63:0:eos]'
00:05:58:166415: mpls-label-imposition 
mpls-header:[211:255:0:neos] 
00:05:58:166416: mpls-output 
adj-idx 3 : mpls via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa18847 flow 
hash: 0x 
:  
0020:  
00:05:58:166424: host-eth2-output 
host-eth2 
MPLS: 00:0c:29:4d:af:a1 -> 00:0c:29:0f:e2:a8 
label 211 exp 0, s 0, ttl 255 

VPP# show sr mpls policies 
  
VPP# show sr mpls policies 
SR MPLS policies: 
[0].- BSID: 999 
Type: Default 
Segment Lists: 
[0].- < 210, 209, 208, 207, 206, 205, 204, 203, 202, 201 > 
---


VPP# show mpls fib label 999 
MPLS-VRF:0, fib_index 0 
999:neos/21 fib:0 index:26 locks:2 
src:SR refs:1 
index:27 locks:4 flags:shared, uPRF-list:28 len:1 itfs:[2, ] 
index:27 pl-index:27 mpls weight=1 pref=0 recursive: oper-flags:resolved, 
via 210 neos in fib:0 via-fib:27 via-dpo:[dpo-load-balance:29] 
Extensions: 
path:27 labels:209 208 207 206 205 204 203 202 201 
forwarding: mpls-neos-chain 
[@0]: dpo-load-balance: [proto:mpls index:30 buckets:1 uRPF:28 to:[0:0]] 
[0] [@8]: 
mpls-label:[0]:[209:255:0:neos][208:255:0:neos][207:255:0:neos][206:255:0:neos][205:255:0:neos][204:255:0:neos][203:255:0:neos][202:255:0:neos][16416:0:1:neos]
 
[@2]: dpo-load-balance: [proto:mpls index:29 buckets:1 uRPF:29 to:[0:0] 
via:[586:58600]] 
[0] [@8]: mpls-label:[3]:[211:255:0:neos] 
[@3]: mpls via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa18847 
999:eos/21 fib:0 index:28 locks:4 
src:SR refs:1 
index:27 locks:4 flags:shared, uPRF-list:28 len:1 itfs:[2, ] 
index:27 pl-index:27 mpls weight=1 pref=0 recursive: oper-flags:resolved, 
via 210 neos in fib:0 via-fib:27 via-dpo:[dpo-load-balance:29] 
Extensions: 
path:27 labels:209 208 207 206 205 204 203 202 201 
src:recursive-resolution cover:-1 refs:1 

forwarding: mpls-eos-chain 
[@0]: dpo-load-balance: [proto:mpls index:31 buckets:1 uRPF:28 to:[0:0]] 
[0] [@8]: 
mpls-label:[2]:[209:255:0:neos][208:255:0:neos][207:255:0:neos][206:255:0:neos][205:255:0:neos][204:255:0:neos][203:255:0:neos][202:255:0:neos][16416:0:1:neos]
 
[@2]: dpo-load-balance: [proto:mpls index:29 buckets:1 uRPF:29 to:[0:0] 
via:[586:58600]] 
[0] [@8]: mpls-label:[3]:[211:255:0:neos] 
[@3]: mpls via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa18847


VPP# show mpls fib label 210 
MPLS-VRF:0, fib_index 0 
210:neos/21 fib:0 index:27 locks:4 
src:CLI refs:1 
index:29 locks:2 flags:shared, uPRF-list:29 len:1 itfs:[2, ] 
index:29 pl-index:29 ip4 weight=1 pref=0 attached-nexthop: oper-flags:resolved, 
14.1.1.2 host-eth2 
[@0]: ipv4 via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa10800 
Extensions: 
path:29 labels:211 
src:recursive-resolution cover:-1 refs:1 

forwarding: mpls-neos-chain 
[@0]: dpo-load-balance: [proto:mpls index:29 buckets:1 uRPF:29 to:[0:0] 
via:[686:68600]] 
[0] [@8]: mpls-label:[3]:[211:255:0:neos] 
[@3]: mpls via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa18847

Thanks,
Xyxue


 
From: Neale Ranns (nranns)
Date: 2017-11-30 18:40
To: 薛欣颖; Pablo Camarillo (pcamaril); vpp-dev
Subject: Re: [vpp-dev] SR MPLS not effective
 
Hi Xyxue,
 
To get a 10 label stack, you need to do;
sr mpls policy add bsid 999 next 210  209  208  207  206  205  204  203  202  
201   
i.e. only use the ‘next’ keyword once.
 
And then if you don’t get the desired result, could show me the following 
outputs;
  sh sr mpls polic
  sh mpls fib 999
  sh mpls fib 210
  sh trace  <<< for a ‘bad’ packet.
 
thanks,
neale
 
From: 薛欣颖 
Date: Thursday, 30 November 2017 at 10:23
To: "Neale Ranns (nranns)" , "Pablo Camarillo (pcamaril)" 
, vpp-dev 

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Luke, Chris
I’m already working on making this easier to consume. Stay tuned. 

Chris.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Thursday, November 30, 2017 4:44
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] api functions using shared memory


Hi,



I am looking at the get_node_graph() api function, for use in python.

It returns  a u64 reply_in_shmem value which points to the shared memory and 
must then be processed by vlib_node_unserialize() (as is done in vat) but I 
only saw such a function in C.

Is there any way to do this in python ? (other languages should have the same 
issue).



Also, I had a look at the cli api function (which works the same way) and 
saw that it had a cli_inband version which apparently was designed to replace 
the cli api function because it was using shared memory 
(https://gerrit.fd.io/r/#/c/2575/)

Should the get_node_graph api function also get an *_inband version ?



Best regards,



--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SR MPLS not effective

2017-11-30 Thread Neale Ranns (nranns)

Hi Xyxue,

To get a 10 label stack, you need to do;
sr mpls policy add bsid 999 next 210  209  208  207  206  205  204  203  202  
201
i.e. only use the ‘next’ keyword once.

And then if you don’t get the desired result, could show me the following 
outputs;
  sh sr mpls polic
  sh mpls fib 999
  sh mpls fib 210
  sh trace  <<< for a ‘bad’ packet.

thanks,
neale

From: 薛欣颖 
Date: Thursday, 30 November 2017 at 10:23
To: "Neale Ranns (nranns)" , "Pablo Camarillo (pcamaril)" 
, vpp-dev 
Subject: Re: Re: [vpp-dev] SR MPLS not effective


Hi Neale,

After referring to your example, I modified my configuration, and the two-layer 
label SR MPLS works well.
But when I configure a ten-layer label stack, the bottom label is different from 
the configuration.

vpp1 configuration:
create host-interface name eth4 mac 00:0c:29:4d:af:b5
create host-interface name eth2 mac 00:0c:29:4d:af:a1
set interface state host-eth2 up
set interface state host-eth4 up
set interface ip address host-eth2 14.1.1.1/24
set interface ip address host-eth4 21.1.1.1/24
mpls table add 0
set interface mpls host-eth2 enable
sr mpls policy add bsid 999 next 210 next 209 next 208 next 207 next 206 next 
205 next 204 next 203 next 202 next 201   // encap 10 layer label
sr mpls steer l3 23.1.1.0/24 via sr policy bsid 999
mpls local-label add non-eos 210 via 14.1.1.2 host-eth2 out-label 211

Actually, the bottom label is 16416 (the value configured is 201). Its TTL is 63 
and the others are 255.
The trace info and message info is shown below:

00:31:47:618725: af-packet-input
af_packet: hw_if_index 1 next-index 4
tpacket2_hdr:
status 0x2001 len 78 snaplen 78 mac 66 net 80
sec 0x5a1fb1aa nsec 0x22ae91a9 vlan 0
00:31:47:618752: ethernet-input
IP4: 00:00:00:66:00:55 -> 00:0c:29:4d:af:b5
00:31:47:618815: ip4-input
ICMP: 21.1.1.5 -> 23.1.1.5
tos 0x00, ttl 64, length 64, checksum 0x4cb1
fragment id 0x0001
ICMP echo_request checksum 0x5f5d
00:31:47:618823: ip4-lookup
fib 0 dpo-idx 34 flow hash: 0x
ICMP: 21.1.1.5 -> 23.1.1.5
tos 0x00, ttl 64, length 64, checksum 0x4cb1
fragment id 0x0001
ICMP echo_request checksum 0x5f5d
00:31:47:618833: ip4-load-balance
fib 0 dpo-idx 34 flow hash: 0x
ICMP: 21.1.1.5 -> 23.1.1.5
tos 0x00, ttl 64, length 64, checksum 0x4cb1
fragment id 0x0001
ICMP echo_request checksum 0x5f5d
00:31:47:618837: ip4-mpls-label-imposition
mpls-header:[16416:63:0:eos]
00:31:47:618844: mpls-label-imposition
mpls-header:[211:255:0:neos]
00:31:47:618846: mpls-output
adj-idx 3 : mpls via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa18847 flow 
hash: 0x
: 
0020: 
00:31:47:618853: host-eth2-output
host-eth2
MPLS: 00:0c:29:4d:af:a1 -> 00:0c:29:0f:e2:a8
label 211 exp 0, s 0, ttl 255

Thanks for your help.

Thanks,
Xyxue


From: Neale Ranns (nranns)
Date: 2017-11-29 16:37
To: 薛欣颖; Pablo Camarillo 
(pcamaril); vpp-dev
Subject: Re: [vpp-dev] SR MPLS not effective

Hi Xyxue,

Here’s a hastily assembled guide on how I would do it;
  https://wiki.fd.io/view/VPP/Segment_Routing_for_MPLS

I’ve not verified the configs myself. If you use it, please let me know any 
errors you find.

Regards,
neale



From: 薛欣颖 
Date: Wednesday, 29 November 2017 at 08:13
To: "Neale Ranns (nranns)" , "Pablo Camarillo (pcamaril)" 
, vpp-dev 
Subject: Re: Re: [vpp-dev] SR MPLS not effective

After adding 'mpls local-label add non-eos 33 mpls-lookup-in-table 0' on P, 
the mistake still exists.

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
Hi,


I am looking at the get_node_graph() api function, for use in python.

It returns  a u64 reply_in_shmem value which points to the shared memory and 
must then be processed by vlib_node_unserialize() (as is done in vat) but I 
only saw such a function in C.

Is there any way to do this in python ? (other languages should have the same 
issue).


Also, I had a look at the cli api function (which works the same way) and 
saw that it had a cli_inband version which apparently was designed to replace 
the cli api function because it was using shared memory 
(https://gerrit.fd.io/r/#/c/2575/)

Should the get_node_graph api function also get an *_inband version ?


Best regards,


--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SR MPLS not effective

2017-11-30 Thread 薛欣颖

Hi Neale,

After referring to your example, I modified my configuration, and the two-layer 
label SR MPLS works well.
But when I configure a ten-layer label stack, the bottom label is different from 
the configuration.

vpp1 configuration:
create host-interface name eth4 mac 00:0c:29:4d:af:b5 
create host-interface name eth2 mac 00:0c:29:4d:af:a1 
set interface state host-eth2 up 
set interface state host-eth4 up 
set interface ip address host-eth2 14.1.1.1/24 
set interface ip address host-eth4 21.1.1.1/24 
mpls table add 0 
set interface mpls host-eth2 enable 
sr mpls policy add bsid 999 next 210 next 209 next 208 next 207 next 206 next 
205 next 204 next 203 next 202 next 201   // encap 10 layer label
sr mpls steer l3 23.1.1.0/24 via sr policy bsid 999   
mpls local-label add non-eos 210 via 14.1.1.2 host-eth2 out-label 211

Actually, the bottom label is 16416 (the value configured is 201). Its TTL is 63 
and the others are 255.
The trace info and message info is shown below:

00:31:47:618725: af-packet-input 
af_packet: hw_if_index 1 next-index 4 
tpacket2_hdr: 
status 0x2001 len 78 snaplen 78 mac 66 net 80 
sec 0x5a1fb1aa nsec 0x22ae91a9 vlan 0 
00:31:47:618752: ethernet-input 
IP4: 00:00:00:66:00:55 -> 00:0c:29:4d:af:b5 
00:31:47:618815: ip4-input 
ICMP: 21.1.1.5 -> 23.1.1.5 
tos 0x00, ttl 64, length 64, checksum 0x4cb1 
fragment id 0x0001 
ICMP echo_request checksum 0x5f5d 
00:31:47:618823: ip4-lookup 
fib 0 dpo-idx 34 flow hash: 0x 
ICMP: 21.1.1.5 -> 23.1.1.5 
tos 0x00, ttl 64, length 64, checksum 0x4cb1 
fragment id 0x0001 
ICMP echo_request checksum 0x5f5d 
00:31:47:618833: ip4-load-balance 
fib 0 dpo-idx 34 flow hash: 0x 
ICMP: 21.1.1.5 -> 23.1.1.5 
tos 0x00, ttl 64, length 64, checksum 0x4cb1 
fragment id 0x0001 
ICMP echo_request checksum 0x5f5d 
00:31:47:618837: ip4-mpls-label-imposition 
mpls-header:[16416:63:0:eos] 
00:31:47:618844: mpls-label-imposition 
mpls-header:[211:255:0:neos] 
00:31:47:618846: mpls-output 
adj-idx 3 : mpls via 14.1.1.2 host-eth2: 000c290fe2a8000c294dafa18847 flow 
hash: 0x 
:  
0020:  
00:31:47:618853: host-eth2-output 
host-eth2 
MPLS: 00:0c:29:4d:af:a1 -> 00:0c:29:0f:e2:a8 
label 211 exp 0, s 0, ttl 255
 
Thanks for your help.
Thanks,
Xyxue


 
From: Neale Ranns (nranns)
Date: 2017-11-29 16:37
To: 薛欣颖; Pablo Camarillo (pcamaril); vpp-dev
Subject: Re: [vpp-dev] SR MPLS not effective
 
Hi Xyxue,
 
Here’s a hastily assembled guide on how I would do it;
  https://wiki.fd.io/view/VPP/Segment_Routing_for_MPLS
 
I’ve not verified the configs myself. If you use it, please let me know any 
errors you find.
 
Regards,
neale
 
 
 
From: 薛欣颖 
Date: Wednesday, 29 November 2017 at 08:13
To: "Neale Ranns (nranns)" , "Pablo Camarillo (pcamaril)" 
, vpp-dev 
Subject: Re: Re: [vpp-dev] SR MPLS not effective
 
After adding 'mpls local-label add non-eos 33 mpls-lookup-in-table 0' on P, 
the mistake still exists.
 
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev