Re: [vpp-dev] VPP release 22.06 is complete !

2022-07-01 Thread Jerome Tollet via lists.fd.io
Here are the release notes: 
https://s3-docs.fd.io/vpp/22.06/aboutvpp/releasenotes/v22.06.html
Jerome

From: vpp-dev@lists.fd.io on behalf of Dave Wallace 

Date: Wednesday, 29 June 2022 at 18:44
To: vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] VPP release 22.06 is complete !
Congratulations to the entire FD.io Community and all who contributed to yet 
another on time, quality VPP release!

Special thanks to Andrew for his work as Release Manager.

Thanks,
-daw-
On 6/29/22 12:13 PM, Andrew Yourtchenko wrote:
Hello all,

VPP 22.06 release is complete and the artifacts are available in packagecloud 
release repository at https://packagecloud.io/fdio/release

Thanks a lot to all of you for the hard work which made this release possible!

Thanks to Vanessa Valderrama for the help with publishing the release artifacts!

Cheers! And onwards to 22.10! :-)

--a /* your friendly 22.06 release manager */













Re: [vpp-dev] VPP release 22.06 is complete !

2022-06-29 Thread Jerome Tollet via lists.fd.io
Congratulations !

From: vpp-dev@lists.fd.io on behalf of Dave Wallace 

Date: Wednesday, 29 June 2022 at 18:44
To: vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] VPP release 22.06 is complete !
Congratulations to the entire FD.io Community and all who contributed to yet 
another on time, quality VPP release!

Special thanks to Andrew for his work as Release Manager.

Thanks,
-daw-
On 6/29/22 12:13 PM, Andrew Yourtchenko wrote:
Hello all,

VPP 22.06 release is complete and the artifacts are available in packagecloud 
release repository at https://packagecloud.io/fdio/release

Thanks a lot to all of you for the hard work which made this release possible!

Thanks to Vanessa Valderrama for the help with publishing the release artifacts!

Cheers! And onwards to 22.10! :-)

--a /* your friendly 22.06 release manager */













Re: [vpp-dev] [presentation] VPP as a OSPF/BGP router - talk at VirtualNOG

2022-06-27 Thread Jerome Tollet via lists.fd.io
Great presentation Pim. Thanks !

From: vpp-dev@lists.fd.io on behalf of Pim van Pelt 

Date: Sunday, 26 June 2022 at 11:43
To: vpp-dev 
Subject: [vpp-dev] [presentation] VPP as a OSPF/BGP router - talk at VirtualNOG
Hoi folks,

I don't usually share my presentations about VPP with the developer community, 
but I was encouraged to share this one as it's of general interest both for the 
dataplane performance (with trex loadtests) as well as for configuration of 
lower level (vppcfg for the dataplane) and higher level (kees, ansible).
I spent 45 minutes with the VirtualNOG folks this Friday, and I thought I'd 
pass the recording along:

- https://virtualnog.net/posts/2022-06-24-vpp/ with the video on Youtube
- https://media.ccc.de/v/vnog-4170-bgpospf-with-100mpps-on-amd64 with the same 
video on CCC

Thanks to Maximilian Wilhelm for hosting me, to Andre Toonk for his talk last 
year, which also brought visibility to VPP, and to the vibrant community who 
showed genuine interest in VPP.

groet,
Pim

--
Pim van Pelt <p...@ipng.nl>
PBVP1-RIPE - http://www.ipng.nl/




[vpp-dev] What's New in Calico v3.23

2022-05-24 Thread Jerome Tollet via lists.fd.io
People on this list will certainly be interested in this announcement: 
https://www.tigera.io/blog/whats-new-in-calico-v3-23/
Regards,
Jerome




Re: [vpp-dev] Feedback on a tool: vppcfg

2022-04-03 Thread Jerome Tollet via lists.fd.io
Hi Pim,
Over the past few years, we have had many discussions about how VPP can best be 
configured by end users.
What is really nice about your proposal is that it is pragmatic and simple, 
much simpler than Netconf/YANG (remember Honeycomb…), and it probably covers 
many use cases.
I've not yet tried it but will certainly do so soon.
Thanks !
Jerome

From: vpp-dev@lists.fd.io on behalf of Pim van Pelt 

Date: Saturday, 2 April 2022 at 17:18
To: vpp-dev 
Subject: [vpp-dev] Feedback on a tool: vppcfg
Hoi colleagues,

I know there exist several smaller and larger scale VPP configuration harnesses 
out there, some more complex and feature complete than others. I wanted to 
share my work on an approach based on a YAML configuration with strict syntax 
and semantic validation, and a path planner that brings the dataplane from any 
configuration state safely to any other configuration state, as defined by 
these YAML files.

A bit of a storyline on the validator: 
https://ipng.ch/s/articles/2022/03/27/vppcfg-1.html
A bit of background on the DAG path planner: 
https://ipng.ch/s/articles/2022/04/02/vppcfg-2.html
Code with tests on https://github.com/pimvanpelt/vppcfg

The config and planner support interfaces, bondethernets, vxlan tunnels, l2xc, 
bridgedomains and, quelle surprise, linux-cp configurations of all sorts. If 
anybody feels like giving it a spin, I'd certainly appreciate feedback and if 
you can manage to create two configuration states that the planner cannot 
reconcile, I'd love to hear about those too.

For now, the path planner works by reading the API configuration state exactly 
once (at startup), and then it figures out the CLI calls to print without 
needing to consult VPP again. This is super useful as it’s a non-intrusive way 
to inspect the changes before applying them, and it’s a property I’d like to 
carry forward. However, I don’t necessarily think that emitting the CLI 
statements is the best user experience, it’s more for the purposes of analysis 
that they can be useful. What I really want to do is emit API calls after the 
plan is created and reviewed/approved, directly reprogramming the VPP 
dataplane. However, the VPP API set needed to do this is not 100% baked yet. 
For example, I observed crashes when tinkering with BVIs and Loopbacks (see my 
thread from last week, thanks for the response Neale), and fixed a few obvious 
errors in the Linux CP API (gerrit), but there are still a few more issues to 
work through before I can take the next step with vppcfg.
If this tool proves to be useful to others, I'm happy to upstream it to extras/ 
somewhere.

--
Pim van Pelt <p...@ipng.nl>
PBVP1-RIPE - http://www.ipng.nl/
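
For anyone wanting to give it a spin, a minimal usage sketch follows; the YAML keys 
and the check/plan subcommands are assumptions based on the two articles linked 
above rather than verified against the repository, so check the repo README for 
the exact syntax.

  # example.yaml - hypothetical minimal config (key and attribute names assumed)
  interfaces:
    GigabitEthernet3/0/0:
      description: "uplink"
      mtu: 1500
      addresses: [ 192.0.2.1/30 ]

  # validate the YAML, then preview the planned changes without touching VPP
  vppcfg check -c example.yaml
  vppcfg plan -c example.yaml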




Re: [vpp-dev] Fastest way to connect application in user space to VPP #vpp

2022-03-31 Thread Jerome Tollet via lists.fd.io
Hello,
In your config you enabled GSO but didn't turn GRO on. In other words, in this 
mode VPP accepts GSO packets sent by Linux but won't coalesce packets to reduce 
the load on the Linux side.
You may be interested in this article 
https://medium.com/fd-io-vpp/getting-to-40g-encrypted-container-networking-with-calico-vpp-on-commodity-hardware-d7144e52659a
 even if it's not super recent. Anyway, if your application can support VCL, it 
will certainly be much faster.
Jerome
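
For reference, a minimal sketch of creating a tap with both GSO and GRO coalescing 
enabled; this assumes a recent VPP where the tapv2 "gso" and "gro-coalesce" 
keywords are available, so verify the exact keywords against "create tap ?" on 
your build.

  vppctl create tap id 0 host-if-name vpp-tap0 gso gro-coalesce
  vppctl set interface state tap0 up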




[vpp-dev] New release of Calico/VPP

2021-12-17 Thread Jerome Tollet via lists.fd.io
Hello,
People on this mailing list may be interested to know that a new version of 
Calico/VPP for Calico 3.21 was released yesterday.
Here is the changelog: 
https://github.com/projectcalico/vpp-dataplane/wiki/Release-Notes
It leverages many vpp features: memif, HostStack, maglev (cnat), policies, …
Jerome




Re: [vpp-dev] VPP release 21.10.1 artifacts are available on packagecloud.io

2021-11-17 Thread Jerome Tollet via lists.fd.io
Congratulations !

On 17/11/2021 21:09, "vpp-dev@lists.fd.io on behalf of Andrew Yourtchenko" 
 wrote:

Hi all,

VPP release 21.10.1 artifacts are available at packagecloud.io/fdio/release.

Thanks a lot to Dave Wallace and Vanessa Valderrama for the help! 

--a





[vpp-dev] Calico/VPP 0.15.0 for Calico 3.19.1 was released

2021-06-29 Thread Jerome Tollet via lists.fd.io
Hello,
This group may be interested in knowing that Calico/VPP 0.15.0 for Calico 
3.19.1 was released last Friday. Here is the changelog: 
https://github.com/projectcalico/vpp-dataplane/wiki/Release-Notes.
It leverages a significant number of VPP features including: CNAT, IPv4/IPv6 
routing, IPSec, wireguard, native drivers for numerous interfaces, 
adaptive/interrupt mode, ACLs, …
Regards,
Jerome





Re: [vpp-dev] VPP release 21.01 release is available on packagecloud.io/fdio/release !

2021-01-28 Thread Jerome Tollet via lists.fd.io
Congratulations!

On 27/01/2021 22:07, "vpp-dev@lists.fd.io on behalf of Andrew Yourtchenko" 
 wrote:

Hi all,

VPP release 21.01 is complete and is available from the usual
packagecloud.io/fdio/release location!

I have verified using the scripts [0] that the new release installs
and runs on CentOS 8, Debian 10 (Buster), as well as Ubuntu 18.04
and 20.04.

A small remark: if you are installing on Debian or Ubuntu 20.04 in a 
container,
you might need to "export VPP_INSTALL_SKIP_SYSCTL=1" before
installation so that it skips calling the sysctl command, which would
fail. (This was the case with the 20.09 release as well, but I did not
explicitly mention it and there were a few offline questions.)

Please let me know if you experience any issues.

Thanks a lot to Dave Wallace and Vanessa Valderrama for their help
during the release process !

[0] https://github.com/ayourtch/vpp-relops/tree/master/docker-tests

--a /* your friendly 21.01 release manager */
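
For anyone hitting the container case mentioned above, a minimal sketch of the 
workaround on a Debian/Ubuntu system; the package names are the usual fdio ones, 
so adjust to whatever you actually install.

  export VPP_INSTALL_SKIP_SYSCTL=1
  apt-get install -y vpp vpp-plugin-core vpp-plugin-dpdk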





Re: [vpp-dev] Bridge domains / mac learning limits evolutions

2020-12-22 Thread Jerome Tollet via lists.fd.io
Hello,
As a follow-up to this email, I pushed a patch introducing a per-bridge-domain 
limit (https://gerrit.fd.io/r/c/vpp/+/30472).
If it gets in, I was wondering whether we should now deprecate the global limit (per 
option #2 below) or whether we should keep both (option #1 below). I am still 
under the impression that #2 is better and I can work on a patch for that.
Thoughts?
Jerome

From: Jerome Tollet 
Date: Wednesday 16 December 2020 at 14:51
To: "vpp-dev@lists.fd.io" 
Subject: Bridge domains / mac learning limits evolutions

Hello,
With the current implementation of MAC learning in VPP, it is possible to configure 
the maximum number of learned entries. This limit is global and shared by all 
bridge domains in a given VPP instance.
I am considering implementing a per-bridge-domain limit to make sure that a 
single bridge domain can't exhaust all available entries.

I would like to gather the opinion from the community on that:

1)  Shall we keep the global limit and add a per-bridge-domain limit?

2)  Shall we only implement a per-bridge-domain limit and remove the global 
limit?

My personal opinion is that #2 may be enough. It guarantees that the overall 
system can't run low on memory because of too many learned addresses. However, 
it would change the existing behavior.
Do people have an opinion about that?

Jerome




Re: [vpp-dev] move to clang-format

2020-12-16 Thread Jerome Tollet via lists.fd.io
Yes please!

From:  on behalf of "Damjan Marion via lists.fd.io" 

Reply-To: "dmar...@me.com" 
Date: Wednesday 16 December 2020 at 15:12
To: vpp-dev 
Subject: Re: [vpp-dev] move to clang-format


Any feedback?

Any good reason not to do the switch now when we have stable/2101 created?

Thanks,

Damjan


> On 14.12.2020., at 09:32, Benoit Ganne (bganne)  wrote:
>
> Sounds good to me, clang-format should be more consistent than indent...
>
> ben
>
>> -Original Message-
>> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion
>> via lists.fd.io
>> Sent: dimanche 13 décembre 2020 13:16
>> To: vpp-dev 
>> Subject: [vpp-dev] move to clang-format
>>
>>
>> Hi,
>>
>> I was playing a bit with clang-format as replacement to gnu indent which
>> we use today[1].
>>
>> While it is impossible to render exact same result like gnu indent, good
>> thing is that clang-format can be used only on lines which are changed in
>> the diff so no major reformat is needed. My patch does exactly that.
>>
>> Another good thing is that clang-format can learn about custom foreach
>> macros so we can significantly reduce the amount of INDENT-OFF/INDENT-ON
>> sections in the code. It also properly formats registration macros like
>> VLIB_REGISTER_NODE() which again means less INDENT-OFF/INDENT-ON.
>>
>> What it cannot deal with is macros which include body of function as
>> argument. Three most popular ones are pool_foreach, pool_foreach_index and
>> clib_bitmap_foreach. To address this I created patch[2] which adds simpler
>> variant of the macros. Instead of writing
>>
>> pool_foreach (e, pool, ({
>>  /* some code */
>> }));
>>
>> New macro looks like:
>>
>> pool_foreach2 (e, pool)
>>  /* some code */
>>
>> Here we have option to either maintain both macros, or do one-shot
>> replacement.
>>
>> As we plan to move to ubuntu 20.04 post 21.01 release, and that comes with
>> a lot of gnu indent pain, it might also be a good time to move to clang-
>> format. It is obvious that gnu indent is at the sunset of its existence
>> with no new development happening for years.
>>
>> Thoughts?
>>
>> —
>> Damjan
>>
>> [1] https://gerrit.fd.io/r/c/vpp/+/30395
>> [2] https://gerrit.fd.io/r/c/vpp/+/30393
>>
>>
>>
>>
>>
>
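
As a practical note on the "only on lines which are changed in the diff" point 
above: stock clang-format already supports this through git-clang-format or an 
explicit line range, and custom loop macros can be declared via the ForEachMacros 
list in .clang-format. A minimal sketch (the file path is only an example):

  # reformat only the lines touched by the staged change
  git add -u && git clang-format
  # or limit clang-format to an explicit line range in one file
  clang-format -i --lines=100:140 src/vnet/l2/l2_input.c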






[vpp-dev] Bridge domains / mac learning limits evolutions

2020-12-16 Thread Jerome Tollet via lists.fd.io
Hello,
With the current implementation of MAC learning in VPP, it is possible to configure 
the maximum number of learned entries. This limit is global and shared by all 
bridge domains in a given VPP instance.
I am considering implementing a per-bridge-domain limit to make sure that a 
single bridge domain can't exhaust all available entries.

I would like to gather the opinion from the community on that:

  1.  Shall we keep the global limit and add a per-bridge-domain limit?
  2.  Shall we only implement a per-bridge-domain limit and remove the global 
limit?

My personal opinion is that #2 may be enough. It guarantees that the overall 
system can't run low on memory because of too many learned addresses. However, 
it would change the existing behavior.
Do people have an opinion about that?

Jerome
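
For context, a sketch of what the two options could look like operationally; the 
startup stanza below reflects the existing global limit as I understand it, and 
the per-bridge-domain command is purely hypothetical since the patch was still 
under review.

  # option #1: keep the existing global cap (startup.conf, l2learn section - verify on your build)
  l2learn { limit 4096 }
  # option #2: hypothetical per-bridge-domain cap (command name not final)
  set bridge-domain 10 learn-limit 1024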




Re: [vpp-dev] Vpp crashes with core dump vhost-user interface

2020-12-08 Thread Jerome Tollet via lists.fd.io
Hi Eyle,
BTW, networking-vpp 20.09rc0 was released last week. That may be another way to 
test it.
Jerome

On 08/12/2020 19:16, "vpp-dev@lists.fd.io on behalf of Benoit Ganne (bganne) via 
lists.fd.io"  wrote:

Hi Eyle,

Thanks for the core, I think I identified the issue.
Can you check if https://gerrit.fd.io/r/c/vpp/+/30346 fix the issue? It 
should apply to 20.05 without conflicts.

Best
ben

> -Original Message-
> From: Eyle Brinkhuis 
> Sent: Wednesday 2 December 2020 17:13
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: Vpp crashes with core dump vhost-user interface
> 
> Hi Ben, all,
> 
> I’m sorry, I forgot about adding a backtrace. I have now posted it here:
> https://surfdrive.surf.nl/files/index.php/s/0SUKUNivkpg9Dnb
> 
> 
>   I am not too familiar with the openstack integration, but now that
> 20.09 is out, can't you move to 20.09? At least in your lab to check
> whether you still see this issue.
> 
> The last “guaranteed to work” version is 20.05.1 against networking-vpp. I
> can still try though, in my testbed, but I’d like to keep to the known
> working combinations as much as possible. Ill let you know if anything
> comes up!
> 
> Thanks for the quick replies, both you and Steven.
> 
> Regards,
> 
> Eyle
> 
> 
>   On 2 Dec 2020, at 16:35, Benoit Ganne (bganne) wrote:
> 
>   Hi Eyle,
> 
>   I am not too familiar with the openstack integration, but now that
> 20.09 is out, can't you move to 20.09? At least in your lab to check
> whether you still see this issue.
>   Apart from that, we'd need to decipher the backtrace to be able to
> help. The best should be to share a coredump as explained here:
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html#core-files
> 
>   Best
>   ben
> 
> 
> 
>   -Original Message-
>   From: vpp-dev@lists.fd.io On Behalf Of Eyle
>   Brinkhuis
>   Sent: Wednesday 2 December 2020 14:59
>   To: vpp-dev@lists.fd.io 
>   Subject: [vpp-dev] Vpp crashes with core dump vhost-user
> interface
> 
>   Hi all,
> 
>   In our environment (vpp 20.05.1, ubuntu 18.04.5, networking-
> vpp 20.05.1,
>   Openstack train) we are running into an issue. When we spawn a
> VM (regular
>   ubuntu 1804.4) with 16 CPU cores and 8G memory and a VPP
> backed interface,
>   our VPP instance dies:
> 
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]:
>   linux_epoll_file_update:120: epoll_ctl: Operation not
> permitted (errno 1)
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]:
>   linux_epoll_file_update:120: epoll_ctl: Operation not
> permitted (errno 1)
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]:
>   received signal SIGSEGV, PC 0x7fdf80653188, faulting address
>   0x7ffe414b8680
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]:
> received signal
>   SIGSEGV, PC 0x7fdf80653188, faulting address 0x7ffe414b8680
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]: #0
>   0x7fdf806556d5 0x7fdf806556d5
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #0
>   0x7fdf806556d5 0x7fdf806556d5
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #1
>   0x7fdf7feab8a0 0x7fdf7feab8a0
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]: #1
>   0x7fdf7feab8a0 0x7fdf7feab8a0
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]: #2
>   0x7fdf80653188 0x7fdf80653188
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #2
>   0x7fdf80653188 0x7fdf80653188
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #3
>   0x7fdf81f29e52 0x7fdf81f29e52
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]: #3
>   0x7fdf81f29e52 0x7fdf81f29e52
>   Dec 02 13:39:39 compute03-asd002a vpp[1788161]:
> /usr/bin/vpp[1788161]: #4
>   0x7fdf80653b79 0x7fdf80653b79
>   Dec 02 13:39:39 compute03-asd002a /usr/bin/vpp[1788161]: #4
>   

[vpp-dev] Calico/vpp 0.10.0 was released today

2020-12-07 Thread Jerome Tollet via lists.fd.io
Hello,
Folks may be interested in this message just posted on Calico #vpp slack 
channel:

Just released Calico/VPP v0.10.0, included are
* Wireguard support
* MTU configuration (in VPP)
* Uplink driver autodetection
As usual, updated docs are available here 
https://github.com/projectcalico/vpp-dataplane/wiki/Getting-started
And we're working to have the policies support under the Christmas tree…





Re: [vpp-dev] Replace strongswan with VPP ?

2020-10-05 Thread Jerome Tollet via lists.fd.io
Hello,
- StrongSwan is purely control plane and does not implement the IPsec data plane itself.
- VPP supports IPsec and IKEv2 but has fewer features than StrongSwan when it 
comes to the control plane.
VPP's IPsec implementation is very fast and would lead to better performance 
than StrongSwan + Linux.
Jerome
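
For anyone exploring the VPP side of this, a rough sketch of an IKEv2 responder 
profile using the ikev2 plugin CLI; the exact argument spelling varies between 
releases, so treat the values as placeholders and check "ikev2 ?" on your build.

  vppctl ikev2 profile add pr1
  vppctl ikev2 profile set pr1 auth shared-key-mic string MySharedSecret
  vppctl ikev2 profile set pr1 id local fqdn vpp.example.com
  vppctl ikev2 profile set pr1 id remote fqdn peer.example.com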

On 05/10/2020 08:53, "vpp-dev@lists.fd.io on behalf of jarek" 
 wrote:

Hello!

I've just discovered VPP and I'm trying to understand it, so
don't blame me for stupid questions.
I have performance problem with IPSec tunnel terminated on
server with strongswan - it looks that ipsec is working only on one
core. Can I use VPP instead on strongswan, and expect that it will work
faster on all cores ?

best regards
Jarek





[vpp-dev] Calico-VPP 0.8.1

2020-09-21 Thread Jerome Tollet via lists.fd.io
Hello,
People may be interested in knowing that Calico/VPP 0.8.1 is available here: 
https://github.com/projectcalico/vpp-dataplane
It now supports VPP's native AF_XDP integration, for smoother interworking with Linux. 
It requires a recent kernel though.
Other than that, it also improves support for ICMP with NAT & k8s services.
Jerome




Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

2020-08-04 Thread Jerome Tollet via lists.fd.io
Hello Rajiv,
I don’t have any documentation about fine tuning performance on multicore in 
lxc.
VPP 20.05 significantly improved mq support for tapv2. It also improved support 
for GSO which may be useful depending on your use case.
Regards,
Jerome

From: Rajith PR 
Date: Friday 31 July 2020 at 10:07
To: Jerome Tollet 
Cc: vpp-dev 
Subject: Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

Thanks Jerome. I have pinned the VPP threads to different cores (no isolation 
though). And yes, after migrating to the tapv2 interface, performance improved 
significantly (approximately 75x).
Do you have any documentation on how to fine-tune performance on multi-core 
in an LXC container?

Thanks,
Rajith

On Thu, Jul 30, 2020 at 5:06 PM Jerome Tollet (jtollet) <jtol...@cisco.com> wrote:
Hello Rajith,

1.   Are you making sure your VPP workers are not sharing the same cores and are 
isolated?

2.   host-interfaces are slower than VPP tapv2 interfaces. Maybe you should try 
them.
Jerome



From: vpp-dev@lists.fd.io on behalf of "Rajith PR via lists.fd.io" 
<rtbrick@lists.fd.io>
Reply-To: "raj...@rtbrick.com" <raj...@rtbrick.com>
Date: Thursday 30 July 2020 at 08:44
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

Looks like the image is not visible. Resending the topology diagram for 
reference.


[topology diagram image omitted]


On Thu, Jul 30, 2020 at 11:44 AM Rajith PR via lists.fd.io <rtbrick@lists.fd.io> wrote:
Hello Experts,

I am trying to measure the performance of the memif interface and getting a very 
low bandwidth (652 KBytes/sec). I am new to performance tuning and any help on 
troubleshooting the issue would be very helpful.

The test topology I am using is as below:

[topology diagram image omitted]

Basically, I have two lxc containers, each hosting an instance of VPP. The VPP 
instances are connected using memif. On lxc-01 I run the iperf3 client that 
generates TCP traffic and on lxc-02 I run the iperf3 server. Linux veth pairs 
are used for interconnecting the iperf tool with VPP.

Test Environment:

CPU Details:

 *-cpu
  description: CPU
  product: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
  vendor: Intel Corp.
  physical id: c
  bus info: cpu@0
  version: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
  serial: None
  slot: U3E1
  size: 3100MHz
  capacity: 3100MHz
  width: 64 bits
  clock: 100MHz
  capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce 
cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss 
ht tm pbe syscall nx pdpe1gb rdtscp constant_tsc art arch_perfmon pebs bts 
rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 
monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 
x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 
3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp 
tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 
erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 
xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear 
flush_l1d cpufreq
  configuration: cores=2 enabledcores=2 threads=4

VPP Configuration:

No workers. VPP main thread, iperf client and server are pinned to separate 
cores.

Test Results:

[11:36][ubuntu:~]$ iperf3 -s -B 200.1.1.1 -f K -A 3
---
Server listening on 5201
---
Accepted connection from 100.1.1.1, port 45188
[  5] local 200.1.1.1 port 5201 connected to 100.1.1.1 port 45190
[ ID] Interval   Transfer Bandwidth
[  5]   0.00-1.00   sec   154 KBytes   154 KBytes/sec
[  5]   1.00-2.00   sec   783 KBytes   784 KBytes/sec
[  5]   2.00-3.00   sec   782 KBytes   782 KBytes/sec
[  5]   3.00-4.00   sec   663 KBytes   663 KBytes/sec
[  5]   4.00-5.00   sec   631 KBytes   631 KBytes/sec
[  5]   5.00-6.00   sec   677 KBytes   677 KBytes/sec
[  5]   6.00-7.00   sec   693 KBytes   693 KBytes/sec
[  5]   7.00-8.00   sec   706 KBytes   706 KBytes/sec
[  5]   8.00-9.00   sec   672 KBytes   672 KBytes/sec
[  5]   9.00-10.00  sec   764 KBytes   764 KBytes/sec
[  5]  10.00-10.04  sec  21.2 KBytes   504 KBytes/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 KBytes/sec  sender
[  5]   0.00-10.04  sec  6.39 MBytes   652 KBytes/sec  receiver
---
Server listening on 5201
---


[11:36][ubuntu:~]$ sudo iperf3 -c 

Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

2020-07-30 Thread Jerome Tollet via lists.fd.io
Hello Rajith,

  1.  Are you making sure your VPP workers are not sharing the same cores and are 
isolated?
  2.  host-interfaces are slower than VPP tapv2 interfaces. Maybe you should 
try them.
Jerome
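
On point 1, a minimal startup.conf sketch for pinning the main thread and two 
workers to dedicated cores; the core numbers are only an example, and on a 
shared box you would also isolate those cores from the kernel scheduler.

  cpu {
    main-core 1
    corelist-workers 2-3
  }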



From:  on behalf of "Rajith PR via lists.fd.io" 

Reply-To: "raj...@rtbrick.com" 
Date: Thursday 30 July 2020 at 08:44
To: vpp-dev 
Subject: Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

Looks like the image is not visible. Resending the topology diagram for 
reference.


[topology diagram image omitted]


On Thu, Jul 30, 2020 at 11:44 AM Rajith PR via lists.fd.io <rtbrick@lists.fd.io> wrote:
Hello Experts,

I am trying to measure the performance of the memif interface and getting a very 
low bandwidth (652 KBytes/sec). I am new to performance tuning and any help on 
troubleshooting the issue would be very helpful.

The test topology I am using is as below:

[topology diagram image omitted]

Basically, I have two lxc containers, each hosting an instance of VPP. The VPP 
instances are connected using memif. On lxc-01 I run the iperf3 client that 
generates TCP traffic and on lxc-02 I run the iperf3 server. Linux veth pairs 
are used for interconnecting the iperf tool with VPP.

Test Environment:

CPU Details:

 *-cpu
  description: CPU
  product: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
  vendor: Intel Corp.
  physical id: c
  bus info: cpu@0
  version: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
  serial: None
  slot: U3E1
  size: 3100MHz
  capacity: 3100MHz
  width: 64 bits
  clock: 100MHz
  capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce 
cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss 
ht tm pbe syscall nx pdpe1gb rdtscp constant_tsc art arch_perfmon pebs bts 
rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 
monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 
x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 
3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp 
tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 
erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 
xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear 
flush_l1d cpufreq
  configuration: cores=2 enabledcores=2 threads=4

VPP Configuration:

No workers. VPP main thread, iperf client and server are pinned to separate 
cores.

Test Results:

[11:36][ubuntu:~]$ iperf3 -s -B 200.1.1.1 -f K -A 3
---
Server listening on 5201
---
Accepted connection from 100.1.1.1, port 45188
[  5] local 200.1.1.1 port 5201 connected to 100.1.1.1 port 45190
[ ID] Interval   Transfer Bandwidth
[  5]   0.00-1.00   sec   154 KBytes   154 KBytes/sec
[  5]   1.00-2.00   sec   783 KBytes   784 KBytes/sec
[  5]   2.00-3.00   sec   782 KBytes   782 KBytes/sec
[  5]   3.00-4.00   sec   663 KBytes   663 KBytes/sec
[  5]   4.00-5.00   sec   631 KBytes   631 KBytes/sec
[  5]   5.00-6.00   sec   677 KBytes   677 KBytes/sec
[  5]   6.00-7.00   sec   693 KBytes   693 KBytes/sec
[  5]   7.00-8.00   sec   706 KBytes   706 KBytes/sec
[  5]   8.00-9.00   sec   672 KBytes   672 KBytes/sec
[  5]   9.00-10.00  sec   764 KBytes   764 KBytes/sec
[  5]  10.00-10.04  sec  21.2 KBytes   504 KBytes/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes  0.00 KBytes/sec  sender
[  5]   0.00-10.04  sec  6.39 MBytes   652 KBytes/sec  receiver
---
Server listening on 5201
---


[11:36][ubuntu:~]$ sudo iperf3 -c 200.1.1.1  -A 2
Connecting to host 200.1.1.1, port 5201
[  4] local 100.1.1.1 port 45190 connected to 200.1.1.1 port 5201
[ ID] Interval   Transfer Bandwidth   Retr  Cwnd
[  4]   0.00-1.00   sec   281 KBytes  2.30 Mbits/sec   44   2.83 KBytes
[  4]   1.00-2.00   sec   807 KBytes  6.62 Mbits/sec  124   5.66 KBytes
[  4]   2.00-3.00   sec   737 KBytes  6.04 Mbits/sec  136   5.66 KBytes
[  4]   3.00-4.00   sec   720 KBytes  5.90 Mbits/sec  130   5.66 KBytes
[  4]   4.00-5.00   sec   574 KBytes  4.70 Mbits/sec  134   5.66 KBytes
[  4]   5.00-6.00   sec   720 KBytes  5.90 Mbits/sec  120   7.07 KBytes
[  4]   6.00-7.00   sec   666 KBytes  5.46 Mbits/sec  134   5.66 KBytes
[  4]   7.00-8.00   sec   741 KBytes  6.07 Mbits/sec  124   5.66 KBytes
[  4]   8.00-9.00   sec   660 KBytes  5.41 Mbits/sec  128   4.24 KBytes
[  4]   9.00-10.00  sec   740 KBytes  6.05 Mbits/sec  130   4.24 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   

Re: [vpp-dev] Happening NOW: LFN webinar on CNFs + VPP + NSM

2020-07-16 Thread Jerome Tollet via lists.fd.io
Hello Jill,
Where can we get access to the recording?
Jerome

From:  on behalf of Jill Lovato 
Date: Thursday 16 July 2020 at 18:07
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Happening NOW: LFN webinar on CNFs + VPP + NSM

Hi Folks,

This morning, LFN is hosting a webinar with presenters from Pantheon Tech 
entitled: "Building CNFs with FD.io VPP and Network Service Mesh + VPP 
traceability in cloud-native deployments."


If you're able, please join us now: 
https://zoom.us/webinar/register/WN_dqSynTTPSbubHXUR3ddj0A. A recording will be 
available following the live broadcast.

Thanks,

--
Jill Lovato
Senior PR Manager
The Linux Foundation
jlov...@linuxfoundation.org
Phone: +1.503.703.8268


Re: [vpp-dev] Published: FD.io CSIT-2005 Release Report

2020-07-16 Thread Jerome Tollet via lists.fd.io
Hi Maciek,
Very impressive report with some interesting performance boosts. Thanks!
Jerome

On 14/07/2020 15:33, "vpp-dev@lists.fd.io on behalf of Maciek Konstantynowicz 
(mkonstan) via lists.fd.io"  wrote:

Hi All,

FD.io CSIT-2005 report has been published on FD.io docs site:

https://docs.fd.io/csit/rls2005/report/

Many thanks to All in CSIT, VPP and wider FD.io community who
contributed and worked hard to make CSIT-2005 happen!

See below for pointers to specific sections in the report.

Welcome all comments, best by email to csit-...@lists.fd.io.

Cheers,
-Maciek


Points of Note in the CSIT-2005 Report

Indexed specific links listed at the bottom.

1. VPP release notes
   a. Changes in CSIT-2005: [1]
   b. Known issues: [2]

2. VPP performance - 64B/IMIX throughput graphs (selected NIC models):
   a. Graphs explained: [3]
   b. L2 Ethernet Switching:[4]
   c. IPv4 Routing: [5]
   d. IPv6 Routing: [6]
   e. SRv6 Routing: [7]
   f. IPv4 Tunnels: [8]
   g. KVM VMs vhost-user:   [9]
   h. LXC/DRC Container Memif: [10]
   i. IPsec IPv4 Routing:  [11]
   j. Virtual Topology System: [12]

3. VPP performance - multi-core and latency graphs:
   a. Speedup Multi-Core:  [13]
   b. Latency: [14]

4. VPP performance comparisons
   a. VPP-20.05 vs. VPP-20.01:  [15]

5. VPP performance test details - all NICs:
   a. Detailed results 64B IMIX 1518B 9kB:  [16]
   b. Configuration:[17]

DPDK Testpmd and L3fwd performance sections follow similar structure.

6. DPDK applications:
  a. Release notes:   [18]
  b. DPDK performance - 64B throughput graphs:[19]
  c. DPDK performance - latency graphs:   [20]
  d. DPDK performance - DPDK-20.02 vs. DPDK-19.08: [21]

Functional device tests (VPP_Device) are also included in the report.

Specific links within the report:

 [1] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/csit_release_notes.html#changes-in-csit-release
 [2] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/csit_release_notes.html#known-issues
 [3] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/index.html
 [4] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/l2.html
 [5] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/ip4.html
 [6] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/ip6.html
 [7] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/srv6.html
 [8] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/ip4_tunnels.html
 [9] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/vm_vhost.html
[10] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/container_memif.html
[11] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/ipsec.html
[12] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_throughput_graphs/vts.html
[13] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/throughput_speedup_multi_core/index.html
[14] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/packet_latency/index.html
[15] 
https://docs.fd.io/csit/rls2005/report/vpp_performance_tests/comparisons/current_vs_previous_release.html
[16] 
https://docs.fd.io/csit/rls2005/report/detailed_test_results/vpp_performance_results/index.html
[17] 
https://docs.fd.io/csit/rls2005/report/test_configuration/vpp_performance_configuration/index.html
[18] 
https://docs.fd.io/csit/rls2005/report/dpdk_performance_tests/csit_release_notes.html
[19] 
https://docs.fd.io/csit/rls2005/report/dpdk_performance_tests/packet_throughput_graphs/index.html
[20] 
https://docs.fd.io/csit/rls2005/report/dpdk_performance_tests/packet_latency/index.html
[21] 
https://docs.fd.io/csit/rls2005/report/dpdk_performance_tests/comparisons/current_vs_previous_release.html





Re: [tsc] [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread Jerome Tollet via lists.fd.io
Hi Chris,
I suspect it would be good to align on the new bond nomenclature coming from 
other projects. DPDK and Linux are probably starting points we should consider 
IMO.
Jerome

On 14/07/2020 18:45, "t...@lists.fd.io on behalf of Chris Luke" 
 wrote:

It is subjective and contextualized. But in this case, if making the effort 
to correct a wrong, why stop half way?

Chris.

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet 
via lists.fd.io
Sent: Tuesday, July 14, 2020 12:37
To: Steven Luong (sluong) ; Dave Barach (dbarach) 
; Kinsella, Ray ; Stephen Hemminger 
; vpp-dev@lists.fd.io; t...@lists.fd.io; Ed 
Warnicke (eaw) 
Subject: [EXTERNAL] Re: [vpp-dev] Replacing master/slave nomenclature

Hi Steven,
Please note that per this proposal, https://lkml.org/lkml/2020/7/4/229,
slave must be avoided but master can be kept.
Maybe master/member or master/secondary could be options too.
Jerome

On 14/07/2020 18:32, "vpp-dev@lists.fd.io on behalf of steven luong via 
lists.fd.io"  wrote:

I am in the process of pushing a patch to replace master/slave with 
aggregator/member for the bonding.

Steven

On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via 
lists.fd.io"  
wrote:

+1, especially since our next release will be supported for a year, 
and API name changes are involved...

-Original Message-
From: Kinsella, Ray 
Sent: Monday, July 13, 2020 6:01 AM
To: Dave Barach (dbarach) ; Stephen Hemminger 
; vpp-dev@lists.fd.io; t...@lists.fd.io; Ed 
Warnicke (eaw) 
Subject: Re: [vpp-dev] Replacing master/slave nomenclature

Hi Stephen,

I agree, I don't think we should ignore this.
Ed - I suggest we table a discussion at the next FD.io TSC?

Ray K

On 09/07/2020 17:05, Dave Barach via lists.fd.io wrote:
> Looping in the technical steering committee...
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of 
Stephen Hemminger
> Sent: Thursday, July 2, 2020 7:02 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Replacing master/slave nomenclature
>
> Is the VPP project addressing the use of master/slave 
nomenclature in the code base, documentation and CLI?  We are doing this for 
DPDK and it would be good if the replacement wording used in DPDK matched the 
wording used in FD.io projects.
>
> Particularly problematic is the use of master/slave in bonding.
> This seems to be a leftover from Linux, since none of the 
commercial products use that terminology and it is not present in 802.1AX 
standard.
>
> The IEEE and IETF are doing an across the board look at these 
terms in standards.
>
>
>
>





Re: [vpp-dev] Replacing master/slave nomenclature

2020-07-14 Thread Jerome Tollet via lists.fd.io
Hi Steven,
Please note that per this proposal, https://lkml.org/lkml/2020/7/4/229, 
slave must be avoided but master can be kept. 
Maybe master/member or master/secondary could be options too.
Jerome

On 14/07/2020 18:32, "vpp-dev@lists.fd.io on behalf of steven luong via 
lists.fd.io"  wrote:

I am in the process of pushing a patch to replace master/slave with 
aggregator/member for the bonding.

Steven

On 7/13/20, 4:44 AM, "vpp-dev@lists.fd.io on behalf of Dave Barach via 
lists.fd.io"  
wrote:

+1, especially since our next release will be supported for a year, and 
API name changes are involved... 

-Original Message-
From: Kinsella, Ray  
Sent: Monday, July 13, 2020 6:01 AM
To: Dave Barach (dbarach) ; Stephen Hemminger 
; vpp-dev@lists.fd.io; t...@lists.fd.io; Ed 
Warnicke (eaw) 
Subject: Re: [vpp-dev] Replacing master/slave nomenclature

Hi Stephen,

I agree, I don't think we should ignore this.
Ed - I suggest we table a discussion at the next FD.io TSC?

Ray K

On 09/07/2020 17:05, Dave Barach via lists.fd.io wrote:
> Looping in the technical steering committee...
> 
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Stephen 
Hemminger
> Sent: Thursday, July 2, 2020 7:02 PM
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Replacing master/slave nomenclature
> 
> Is the VPP project addressing the use of master/slave nomenclature in 
the code base, documentation and CLI?  We are doing this for DPDK and it would 
be good if the replacement wording used in DPDK matched the wording used in 
FD.io projects.
> 
> Particularly problematic is the use of master/slave in bonding.
> This seems to be a leftover from Linux, since none of the commercial 
products use that terminology and it is not present in 802.1AX standard.
> 
> The IEEE and IETF are doing an across the board look at these terms 
in standards.
> 
> 
> 
> 




Re: [vpp-dev] l2: performance enhancement in l2output

2020-07-08 Thread Jerome Tollet via lists.fd.io
Hello Zhiyong,
Have you measured the performance impact of this patch?
Jerome


From:  on behalf of Zhiyong Yang 
Date: Wednesday 8 July 2020 at 09:37
To: "vpp-dev@lists.fd.io" , Damjan Marion 
, Paul Vinciguerra 
Cc: "Kinsella, Ray" 
Subject: [vpp-dev] l2: performance enhancement in l2output

Hi Damjan, Paul and other VPP maintainers,

I have submitted the patch to accelerate the perf on XEON using avx512.
Please help review it. Thank you.
https://gerrit.fd.io/r/c/vpp/+/27213

BR
Zhiyong


Re: [vpp-dev] jvpp compatiblity

2020-06-17 Thread Jerome Tollet via lists.fd.io
Hi Chang,
There's no active development on jvpp or honeycomb, but any contribution 
to update them is of course more than welcome.
If you need to program VPP, I'd recommend considering Go, C or Python.
Jerome

From:  on behalf of Hyunseok 
Date: Thursday 18 June 2020 at 03:21
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] jvpp compatiblity

Hi,
It seems the current jvpp is not compatible with the latest vpp 20.05. Building 
jvpp against the latest vpp fails.
Looks like jvpp development has stopped since vpp 19.04.
Is there any plan to update jvpp to make it compatible with the latest vpp?  In 
fact I do not understand why jvpp was dropped from vpp and made independent.
Since honeycomb relies on jvpp, having up-to-date jvpp seems important.

Thanks,
Chang


Re: [vpp-dev] Welcome to VSAP project

2020-06-02 Thread Jerome Tollet via lists.fd.io
Hello Ping & VSAP folks,
Quite impressive speed up for Nginx!
Jerome

From:  on behalf of "Yu, Ping" 
Date: Monday 1 June 2020 at 11:10
To: "vpp-dev@lists.fd.io" , "csit-...@lists.fd.io" 

Cc: "Yu, Ping" , "Yu, Ping" 
Subject: [vpp-dev] Welcome to VSAP project

Hello, all

Glad to announce that the project VSAP (VPP Stack Acceleration Project) is 
online. This project aims to establish an industry user-space application 
ecosystem based on the VPP host stack. VSAP has done much work to optimize the 
VPP host stack for Nginx, and we can see some exciting numbers compared with 
the kernel-space stack.

The detailed project is below:
https://wiki.fd.io/view/VSAP

Welcome to join our mailing-list from below page.
https://lists.fd.io/g/vsap-dev


The VSAP project meeting will be scheduled as a bi-weekly meeting on 
Friday at 9:00 am PRC time (PT: ), and the next meeting is Jun 12. Please join 
this meeting via Webex: https://intel.webex.com/meet/pyu4


Look forward to meeting with you.

Ping



Re: [vpp-dev] VPP release 20.05 is complete!

2020-05-27 Thread Jerome Tollet via lists.fd.io
Congratulations!

On 27/05/2020 23:28, "vpp-dev@lists.fd.io on behalf of Andrew Yourtchenko" 
 wrote:

Dear all,

I am happy to announce that the release 20.05 is available on
packagecloud.io in fdio/release repository.

I have verified that it is installable on Ubuntu 18.04 and Centos 7
distributions.

Special thanks to Vanessa Valderrama and Dave Wallace for the help
during the release.

--a (your friendly 20.05 release manager)

P.s. Branch stable/2005 is now open.



[vpp-dev] Getting to 40G encrypted container networking with Calico/VPP on commodity hardware

2020-05-26 Thread Jerome Tollet via lists.fd.io
Dear VPP community,
This article may be of interest to you:
https://medium.com/fd-io-vpp/getting-to-40g-encrypted-container-networking-with-calico-vpp-on-commodity-hardware-d7144e52659a
Regards,
Jerome


Re: [vpp-dev] blinkenlights .. vpptop?

2020-05-14 Thread Jerome Tollet via lists.fd.io
https://github.com/PantheonTechnologies/vpptop may be of interest to you


From:  on behalf of Christian Hopps 
Date: Thursday 14 May 2020 at 11:19
To: vpp-dev 
Cc: Christian Hopps 
Subject: [vpp-dev] blinkenlights .. vpptop?

Has anyone contemplated the possibility of a "vpptop" like utility for VPP? The 
thought crossed my mind while I was using htop to help debug some performance 
problems I've been having that aren't directly related to vpp processing.

I'm thinking that vpptop could present a dashboard like top does showing the 
(top?) node run times, etc -- with the ability to narrow to per-core. A good 
starting point for the column data would be show runtime output.

More importantly maybe, it would also present per-core usage bars giving a 
"graphical" representation of "idleness" per core so one could quickly know how 
efficiently things were running on each core, and thus if workload was 
distributed correctly.

For memory it could show a graphical bar depicting buffer usage as well.

Thanks,
Chris.
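
For anyone wanting the raw data such a tool would build on today, the per-node 
and per-thread counters are already exposed by the debug CLI; a minimal sketch:

  vppctl clear runtime
  # ... let traffic run for a while ...
  vppctl show runtime max
  vppctl show threads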



Re: [vpp-dev] Why performance of vpp vhost-user is poorer than vhost-net

2020-04-28 Thread Jerome Tollet via lists.fd.io
Hello,
Can you clarify your topology a bit?

  *   virtio-net is typically a driver within the guest
  *   vhost-user is deployed in the hypervisor.

Maybe a diagram would help us to understand.

Jerome
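
To illustrate the split: VPP exposes the vhost-user backend as a unix socket on 
the host, and the guest only sees a plain virtio-net device. A rough QEMU 
command-line sketch, reusing the socket path from the quoted output below; 
option spellings vary with the QEMU version, and vhost-user also needs the 
guest memory backed by shared hugepages.

  qemu-system-x86_64 ... \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=chr0,path=/home/pml/run/sock1.sock,server=on \
    -netdev type=vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0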

From:  on behalf of "chu.penghong" 
Date: Tuesday 28 April 2020 at 09:16
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Why performance of vpp vhost-user is poorer than vhost-net

Hello,
I use vpp as the bridge between two virtual machines on the same host. I 
use vpp vhost-user as the backend and use iperf3 on the VMs to test TCP 
performance.
I found that the performance of vpp vhost-user is poorer than vhost-net. 
For vhost-net, I can get 30Gbps. For vpp vhost-user, I only get 10Gbps.

  This is the vpp output:

vpp# show vhost-user
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  Number of rx virtqueues in interrupt mode: 0
  Number of GSO interfaces: 2
Interface: VirtualEthernet0/0/0 (ifindex 1)
  GSO enable
virtio_net_hdr_sz 12
 features mask (0xa27c):
 features (0x15020dd83):
   VIRTIO_NET_F_CSUM (0)
   VIRTIO_NET_F_GUEST_CSUM (1)
   VIRTIO_NET_F_GUEST_TSO4 (7)
   VIRTIO_NET_F_GUEST_TSO6 (8)
   VIRTIO_NET_F_GUEST_UFO (10)
   VIRTIO_NET_F_HOST_TSO4 (11)
   VIRTIO_NET_F_HOST_TSO6 (12)
   VIRTIO_NET_F_HOST_UFO (14)
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /home/pml/run/sock1.sock type client errno "Success"

 rx placement:
   thread 6 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0
   thread 3 on vring 0
   thread 4 on vring 0
   thread 5 on vring 0
   thread 6 on vring 0
   thread 7 on vring 0
   thread 8 on vring 0

 Memory regions (total 3)
 region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
 == = == == == 
== ==
  0 600x 0x000a 0x7f659e20 
0x 0x7faa41a0
  1 610x000c 0xbff4 0x7f659e2c 
0x000c 0x7f4fff0c
  2 620x0001 0x00014000 0x7f665e20 
0xc000 0x7f4ebf00

 Virtqueue 0 (TX)
  qsz 1024 last_avail_idx 39946 last_used_idx 39946
  avail.flags 0 avail.idx 40511 used.flags 1 used.idx 39946
  kickfd 63 callfd 64 errfd -1

 Virtqueue 1 (RX)
  qsz 1024 last_avail_idx 46846 last_used_idx 46846
  avail.flags 1 avail.idx 46846 used.flags 1 used.idx 46846
  kickfd 58 callfd 65 errfd -1

Interface: VirtualEthernet0/0/1 (ifindex 2)
  GSO enable
virtio_net_hdr_sz 12
 features mask (0xa27c):
 features (0x15020dd83):
   VIRTIO_NET_F_CSUM (0)
   VIRTIO_NET_F_GUEST_CSUM (1)
   VIRTIO_NET_F_GUEST_TSO4 (7)
   VIRTIO_NET_F_GUEST_TSO6 (8)
   VIRTIO_NET_F_GUEST_UFO (10)
   VIRTIO_NET_F_HOST_TSO4 (11)
   VIRTIO_NET_F_HOST_TSO6 (12)
   VIRTIO_NET_F_HOST_UFO (14)
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_NET_F_GUEST_ANNOUNCE (21)
   VIRTIO_F_INDIRECT_DESC (28)
   VHOST_USER_F_PROTOCOL_FEATURES (30)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x3)
   VHOST_USER_PROTOCOL_F_MQ (0)
   VHOST_USER_PROTOCOL_F_LOG_SHMFD (1)

 socket filename /home/pml/run/sock2.sock type client errno "Success"

 rx placement:
   thread 8 on vring 1, polling
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0
   thread 3 on vring 0
   thread 4 on vring 0
   thread 5 on vring 0
   thread 6 on vring 0
   thread 7 on vring 0
   thread 8 on vring 0

 Memory regions (total 3)
 region fdguest_phys_addrmemory_sizeuserspace_addr 
mmap_offsetmmap_addr
 == = == == == 
== ==
  0 680x 0x000a 0x7f314a60 
0x 0x7faa4180
  1 690x000c 0xbff4 0x7f314a6c 
0x000c 0x7f4d3f0c
  2 700x0001 0x00014000 0x7f320a60 
0xc000 0x7f4bff00

 Virtqueue 0 (TX)
  qsz 1024 last_avail_idx 50139 last_used_idx 50139
  avail.flags 0 avail.idx 50761 used.flags 1 used.idx 50139
  kickfd 71 callfd 72 errfd -1

 Virtqueue 1 (RX)
  qsz 1024 last_avail_idx 24888 last_used_idx 24888
  avail.flags 1 avail.idx 24888 used.flags 1 used.idx 24888
  kickfd 66 callfd 73 errfd -1

vpp# show node counters
   CountNode  Reason
382312l2-output   L2 output packets
382312l2-learnL2 learn packets
382312l2-inputL2 input packets
20 VirtualEthernet0/0/1-tx

[vpp-dev] New calico-vpp alpha version released

2020-04-24 Thread Jerome Tollet via lists.fd.io
Hello,
A new (alpha) version of calico-vpp was released today. The main addition is 
support for Kubernetes IPv6 services.
Feel free to give it a try: https://github.com/calico-vpp/calico-vpp
Congrats to @Aloys Augustin (aloaugus) & @Nathan 
Skrzypczak!
Regards,
Jerome
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16151): https://lists.fd.io/g/vpp-dev/message/16151
Mute This Topic: https://lists.fd.io/mt/73250268/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Interesting VPP/Quic article

2020-03-31 Thread Jerome Tollet via Lists.Fd.Io
https://blogs.cisco.com/cloud/building-fast-quic-sockets-in-vpp

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15962): https://lists.fd.io/g/vpp-dev/message/15962
Mute This Topic: https://lists.fd.io/mt/72685199/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Performance data

2020-03-30 Thread Jerome Tollet via Lists.Fd.Io
Hello,
You should have a look at https://docs.fd.io/csit/rls2001/report/
Jerome

From:  on behalf of "Majumdar, Kausik" 
Date: Monday, March 30, 2020 at 19:47
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP Performance data


Hi,

Can someone please share VPP performance measurements at different packet sizes 
for plain vanilla IPv4, IPv6, and tunnel encap/decap (IPsec, VXLAN)? Also, do we 
have VPP packet forwarding data with at least one service VNF in the same host 
or in a remote host, where traffic is service-chained through the services?

I am considering the below topologies to get some performance data -


1.   Tgen --> IPv4/IPv6 --> Host1 (VPP1)  
Host2 (VPP2) --> IPv4/IPv6 --> Tgen



2.   Tgen --> IPv4/IPv6 --> Host1 (VPP1)  
Host2 (VPP2) --> Host2 VNF (Service VM) --> Host2 (VPP2) --> IPv4/IPv6 --> Tgen



3.   Tgen --> IPv4/IPv6 --> Host1 (VPP1)  
Host2 (VPP2) --> Host3 VNF (Service VM) --> Host2 (VPP2) --> IPv4/IPv6 --> Tgen


I am assuming some analysis has already been performed for the above scenarios, 
covering the number of CPUs and cores, SR-IOV for VNF forwarding, or OVS in the 
kernel for bridging to the VNF. A pointer to such data for the VPP 20.01 release 
would be great.

Thanks,
Kausik
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15942): https://lists.fd.io/g/vpp-dev/message/15942
Mute This Topic: https://lists.fd.io/mt/72658703/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Tap connect CLI

2020-03-13 Thread Jerome Tollet via Lists.Fd.Io
Hello,
You can create tap interface using “create tap” CLI command.
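For example, something along these lines (names and addresses are placeholders; run "create tap ?" to see the options your build supports):

vpp# create tap id 0 host-if-name vpp-tap0 host-ip4-addr 10.10.1.1/24
vpp# set interface ip address tap0 10.10.1.2/24
vpp# set interface state tap0 up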
Jerome

From:  on behalf of "Gudimetla, Leela Sankar via Lists.Fd.Io" 
Reply-To: "lgudi...@ciena.com" 
Date: Friday, March 13, 2020 at 00:13
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Tap connect CLI

Hi,

I see that the tap connect CLI, which was used for creating a pair of interfaces 
between a container and the host, has been deprecated.
Is there any other CLI/mechanism available in VPP 19.08 to achieve the same?

Thanks,
Leela sankar Gudimetla
Embedded Software Engineer 3 |  Ciena
San Jose, CA, USA
M | +1.408.904.2160

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15770): https://lists.fd.io/g/vpp-dev/message/15770
Mute This Topic: https://lists.fd.io/mt/71914759/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] networking-vpp 20.01 for OpenStack

2020-02-27 Thread Jerome Tollet via Lists.Fd.Io
Hello,
This announcement may be of interest to you: 
http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012877.html
Jerome

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15593): https://lists.fd.io/g/vpp-dev/message/15593
Mute This Topic: https://lists.fd.io/mt/71590281/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP release 20.01 is complete!

2020-01-30 Thread Jerome Tollet via Lists.Fd.Io
+1 !

From:  on behalf of Dave Wallace 
Date: Thursday, January 30, 2020 at 15:57
To: Andrew Yourtchenko , vpp-dev 
Cc: csit-dev 
Subject: Re: [vpp-dev] VPP release 20.01 is complete!

Congratulations to the entire VPP/CSIT community for another on-time delivery 
of an ever-improving codebase!

Special thanks to Andrew for his efforts as the 20.01 Release Manager.
-daw-

On 1/29/2020 7:26 PM, Andrew Yourtchenko wrote:

Dear all,



I am happy to announce that the release 20.01 is available on packagecloud in 
fdio/release repository.



I have verified that it is installable on Ubuntu 16.04, 18.04, and Centos 7 
distributions.



Special thanks to Ed Kern and Dave Wallace for the help during the release 
ceremony.



--a (your friendly 20.01 release manager)



P.s. Branch stable/2001 is now open.




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15294): https://lists.fd.io/g/vpp-dev/message/15294
Mute This Topic: https://lists.fd.io/mt/70261452/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] crypto_ia32 -> crypto_native

2020-01-27 Thread Jerome Tollet via Lists.Fd.Io
Sounds good to me.
Jerome

From:  on behalf of "Damjan Marion via Lists.Fd.Io" 
Reply-To: "dmar...@me.com" 
Date: Monday, January 27, 2020 at 21:54
To: vpp-dev , "csit-...@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] crypto_ia32 -> crypto_native


Folks,

To avoid code duplication i would like to rename crypto_ia32 plugin into 
crypto_native. Reason is adding ARMv8 support which seems to be very similar to 
IA32 in terms of implementing CBC and GCM.

Any objections or caveats?

--
Damjan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15259): https://lists.fd.io/g/vpp-dev/message/15259
Mute This Topic: https://lists.fd.io/mt/70165923/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP support for VRRP

2020-01-13 Thread Jerome Tollet via Lists.Fd.Io
Hello Ahmed,
The presentation you are referring to is about networking-vpp (OpenStack 
driver). It’s not about VPP in itself.

  *   Networking-vpp supports HA mode with VRRP for VPP using keepalived
  *   We currently have no plan to add support for VRRP

Of course, contributions are more than welcome in case you’d like to work on 
VRRP for VPP.
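For reference, the keepalived side of that HA mode is a standard VRRP instance, something like the following (interface name, router id and address are placeholders):

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24
    }
}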
Jerome

From:  on behalf of Ahmed Bashandy 
Date: Monday, January 13, 2020 at 10:30
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP support for VRRP

Hi

Slide 34 in the presentation
https://www.cisco.com/c/dam/m/en_us/service-provider/ciscoknowledgenetwork/files/0531-techad-ckn.pptx
says “support for HA (VRRP based)“

But when I searched the mailing list I found
https://lists.fd.io/g/vpp-dev/message/12862?p=,,,20,0,0,0::relevance,,vrrp,20,2,0,31351846
where Ole says that VRRP is not supported

Are there any plans to support VRRP anytime soon?

Ahmed

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15138): https://lists.fd.io/g/vpp-dev/message/15138
Mute This Topic: https://lists.fd.io/mt/69665214/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] extreme low throughput when connecting two docker container on a vm.

2020-01-02 Thread Jerome Tollet via Lists.Fd.Io
Hi Jeffrey,
I would recommend using tap interfaces (create tap) with 1024 rx/tx descriptors 
and gso turned on.
That will be much faster than host interfaces.
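A rough sketch of that, assuming the container network namespaces are visible to VPP as ns1 and ns2 (namespace names, interface names and the bridge-domain id are placeholders, and the gso keyword depends on the VPP version):

vpp# create tap id 0 host-ns ns1 host-if-name vpp0 rx-ring-size 1024 tx-ring-size 1024 gso
vpp# create tap id 1 host-ns ns2 host-if-name vpp0 rx-ring-size 1024 tx-ring-size 1024 gso
vpp# set interface state tap0 up
vpp# set interface state tap1 up
vpp# set interface l2 bridge tap0 2
vpp# set interface l2 bridge tap1 2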
Jerome

From:  on behalf of xu cai 
Date: Thursday, January 2, 2020 at 22:14
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] extreme low throughput when connecting two docker container on a vm.

hi there,

I am quite new to VPP. I just followed the wiki guidance, set up a VPP instance, 
connected two Docker containers to it, and configured a bridge in the VPP 
instance for these two containers.

Then I run iperf in the Docker containers and see:

root@94bf13822d8e:/# iperf -c 10.10.10.3 -t 600 -i 10

Client connecting to 10.10.10.3, TCP port 5001
TCP window size: 85.0 KByte (default)

[  3] local 10.10.10.2 port 40628 connected with 10.10.10.3 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec  8.25 MBytes  6.92 Mbits/sec
[  3] 10.0-20.0 sec  9.00 MBytes  7.55 Mbits/sec
[  3] 20.0-30.0 sec  8.75 MBytes  7.34 Mbits/sec


By the way, I connect the containers and VPP through veth pairs.

vpp# show inte
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
host-veth1000                     4      up         9000/0/0/0      rx packets           517153
                                                                    rx bytes          900265110
                                                                    tx packets           427351
                                                                    tx bytes           30077622
                                                                    drops                   104
host-veth2000                     3      up         9000/0/0/0      rx packets           427351
                                                                    rx bytes           30077622
                                                                    tx packets           517049
                                                                    tx bytes          900260742
                                                                    tx-error                 62
local0                            0     down           0/0/0/0
vpp#

vpp# sh bridge-domain 2 detail
  BD-ID   Index   BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term  arp-ufwd  BVI-Intf
    2       2      0     off        on        on       flood      on       off       off       N/A

           Interface    If-idx  ISN  SHG  BVI  TxFlood  VLAN-Tag-Rewrite
        host-veth1000      4     5    0    -      *          none
        host-veth2000      3     5    0    -      *          none
vpp#


If I connect the containers through another method, I see much higher throughput. 
I don't know how to debug this; could anybody give some guidance? I think I 
missed something fundamental. More importantly, what's the right way to connect 
containers through VPP? Thanks.

--
- jeffrey
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15025): https://lists.fd.io/g/vpp-dev/message/15025
Mute This Topic: https://lists.fd.io/mt/69387845/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-05 Thread Jerome Tollet via Lists.Fd.Io
inlined

On 05/12/2019 09:03, « vpp-dev@lists.fd.io on behalf of Honnappa Nagarahalli »  wrote:



> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jerome 
Tollet via
> Lists.Fd.Io
> Sent: Wednesday, December 4, 2019 9:33 AM
> To: tho...@monjalon.net
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] efficient use of DPDK
>
> Actually native drivers (like Mellanox or AVF) can be faster w/o buffer
> conversion and tend to be faster than when used by DPDK. I suspect VPP is 
not
> the only project to report this extra cost.
It would be good to know other projects that report this extra cost. It 
will help support changes to DPDK.
[JT] I may be wrong but I think there was a presentation about that last week 
during DPDK user conf in the US.

> Jerome
>
On 04/12/2019 15:43, « Thomas Monjalon »  wrote:
>
> 03/12/2019 22:11, Jerome Tollet (jtollet):
> > Thomas,
> > I am afraid you may be missing the point. VPP is a framework where 
plugins
> are first class citizens. If a plugin requires leveraging offload (inline 
or
> lookaside), it is more than welcome to do it.
> > There are multiple examples including hw crypto accelerators
> 
(https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-
> in-the-fdio-vpp-project).
>
> OK I understand plugins are open.
> My point was about the efficiency of the plugins,
> given the need for buffer conversion.
> If some plugins are already efficient enough, great:
> it means there is no need for bringing effort in native VPP drivers.
>
>
> > Le 03/12/2019 17:07, « vpp-dev@lists.fd.io au nom de Thomas Monjalon
> »  a écrit :
> >
> > 03/12/2019 13:12, Damjan Marion:
> > > > On 3 Dec 2019, at 09:28, Thomas Monjalon 

> wrote:
> > > > 03/12/2019 00:26, Damjan Marion:
> > > >>
> > > >> Hi THomas!
> > > >>
> > > >> Inline...
> > > >>
> > > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon
>  wrote:
> > > >>>
> > > >>> Hi all,
> > > >>>
> > > >>> VPP has a buffer called vlib_buffer_t, while DPDK has 
rte_mbuf.
> > > >>> Are there some benchmarks about the cost of converting, 
from one
> format
> > > >>> to the other one, during Rx/Tx operations?
> > > >>
> > > >> We are benchmarking both dpdk i40e PMD performance and 
native
> VPP AVF driver performance and we are seeing significantly better
> performance with native AVF.
> > > >> If you taake a look at [1] you will see that DPDK i40e 
driver provides
> 18.62 Mpps and exactly the same test with native AVF driver is giving us 
arounf
> 24.86 Mpps.
> > [...]
> > > >>
> > > >>> So why not improving DPDK integration in VPP to make it 
faster?
> > > >>
> > > >> Yes, if we can get freedom to use parts of DPDK we want 
instead of
> being forced to adopt whole DPDK ecosystem.
> > > >> for example, you cannot use dpdk drivers without using EAL,
> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it 
will
> disappear for long time...
> > > >
> > > > You could help to improve these parts of DPDK,
> > > > instead of spending time to try implementing few drivers.
> > > > Then VPP would benefit from a rich driver ecosystem.
> > >
> > > Thank you for letting me know what could be better use of my 
time.
> >
> > "You" was referring to VPP developers.
> > I think some other Cisco developers are also contributing to 
VPP.
> >
> > > At the moment we have good coverage of native drivers, and 
still there
> is a option for people to use dpdk. It is now mainly up to driver vendors 
to
> decide if they are happy with performance they wil get from dpdk pmd or 
they
> want better...
> >
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If

Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Hi Nitin,
I am not necessarily speaking about Inline IPSec. I was just saying that VPP 
lets you the choice to do both inline and lookaside types of offload.
Here is a public example of inline acceleration: 
https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/wp/wp-01295-hcl-segment-routing-over-ipv6-acceleration-using-intel-fpga-programmable-acceleration-card-n3000.pdf
Jerome

On 04/12/2019 18:19, « Nitin Saxena »  wrote:

Hi Jerome,

I have query unrelated to the original thread. 

>> There are other examples (lookaside and inline)
By inline do you mean "Inline IPSEC"? Could you please elaborate what you 
meant by inline offload in VPP?

Thanks,
Nitin

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Jerome Tollet
> via Lists.Fd.Io
> Sent: Wednesday, December 4, 2019 9:00 PM
> To: Thomas Monjalon ; Damjan Marion
> 
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] efficient use of DPDK
> 
> External Email
> 
> --
> Hi Thomas,
> I strongly disagree with your conclusions from this discussion:
> 1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not
> at the cost of performance. (It's actually the opposite ie AVF driver)
> 2) VPP is NOT exclusively CPU centric. I gave you the example of crypto
> offload based on Intel QAT cards (lookaside). There are other examples
> (lookaside and inline)
> 3) Plugins are free to use any sort of offload (and they do).
> 
> Jerome
> 
> On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon »
>  wrote:
> 
> 03/12/2019 20:01, Damjan Marion:
> > On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > > 03/12/2019 13:12, Damjan Marion:
> > >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> > >>> 03/12/2019 00:26, Damjan Marion:
> > >>>> On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > >>>>> VPP has a buffer called vlib_buffer_t, while DPDK has 
rte_mbuf.
> > >>>>> Are there some benchmarks about the cost of converting, from
> one format
> > >>>>> to the other one, during Rx/Tx operations?
> > >>>>
> > >>>> We are benchmarking both dpdk i40e PMD performance and native
> VPP AVF driver performance and we are seeing significantly better
> performance with native AVF.
> > >>>> If you taake a look at [1] you will see that DPDK i40e driver 
provides
> 18.62 Mpps and exactly the same test with native AVF driver is giving us
> arounf 24.86 Mpps.
> > > [...]
> > >>>>
> > >>>>> So why not improving DPDK integration in VPP to make it 
faster?
> > >>>>
> > >>>> Yes, if we can get freedom to use parts of DPDK we want 
instead of
> being forced to adopt whole DPDK ecosystem.
> > >>>> for example, you cannot use dpdk drivers without using EAL,
> mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it 
will
> disappear for long time...
> 
> As stated below, I take this feedback, thanks.
> However it won't change VPP choice of not using rte_mbuf natively.
> 
> [...]
> > >> At the moment we have good coverage of native drivers, and still
> there is a option for people to use dpdk. It is now mainly up to driver 
vendors
> to decide if they are happy with performance they wil get from dpdk pmd or
> they want better...
> > >
> > > Yes it is possible to use DPDK in VPP with degraded performance.
> > > If an user wants best performance with VPP and a real NIC,
> > > a new driver must be implemented for VPP only.
> > >
> > > Anyway real performance benefits are in hardware device offloads
> > > which will be hard to implement in VPP native drivers.
> > > Support (investment) would be needed from vendors to make it
> happen.
> > > About offloads, VPP is not using crypto or compression drivers
> > > that DPDK provides (plus regex coming).
> >
> > Nice marketing pitch for your company :)
> 
> I guess you mean Mellanox has a good offloads offering.
> But my point is about the end of Moore's law

Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Are you using VPP native virtio or DPDK virtio ?
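(If you want to try the native driver, a rough sketch, assuming the PCI address below and a build that includes the native virtio driver; the interface name shown by "show int" may differ, and the device must not be claimed by the DPDK plugin:)

# startup.conf: make sure the dpdk plugin does not grab the device (blacklist it or use no-pci)
dpdk { no-pci }

vpp# create interface virtio 0000:00:03.0
vpp# set interface state virtio-0/0/3/0 up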
Jerome

From:  on behalf of "dch...@akouto.com" 
Date: Wednesday, December 4, 2019 at 16:29
To: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] VPP / tcp_echo performance

Hi,

Thank you Florin and Jerome for your time, very much appreciated.
· For VCL configuration, FIFO sizes are 16 MB
· "show session verbose 2" does not indicate any retransmissions. Here 
are the numbers during a test run where approx. 9 GB were transferred (the 
difference in values between client and server is just because it took me a few 
seconds to issue the command on the client side as you can see from the 
duration):
SERVER SIDE:
 stats: in segs 5989307 dsegs 5989306 bytes 8544661342 dupacks 0
out segs 3942513 dsegs 0 bytes 0 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 106.489
err wnd data below 0 above 0 ack below 0 above 0
CLIENT SIDE:
 stats: in segs 4207793 dsegs 0 bytes 0 dupacks 0
out segs 6407444 dsegs 6407443 bytes 9141373892 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 114.113
err wnd data below 0 above 0 ack below 0 above 0
· sh int does not seem to indicate any issue. There are occasional 
drops but I enabled tracing and checked those out, they are LLC BPDU's, I'm not 
sure where those are coming from but I suspect they are from linuxbridge in the 
compute host where the VMs are running.
· @Jerome: Before I use the dpdk-devbind command to make the interfaces 
available to VPP, they use virtio drivers. When assigned to VPP they use 
uio_pci_generic.

I'm not sure if any other stats might be useful so I'm just pasting a bunch of 
stats & information from the client & server instances below, I know it's a 
lot, just putting it here in case there is something useful in there. Thanks 
again for taking the time to follow-up with me and for the suggestions, I 
really do appreciate it very much!

Regards,
Dom
#
# Interface uses virtio-pci when the iperf3 test is run using regular Linux
# networking.
#
[root@vpp-test-1 centos]# dpdk-devbind --status

Network devices using kernel driver
===
:00:03.0 'Virtio network device 1000' if=eth0 drv=virtio-pci 
unused=virtio_pci *Active*
:00:04.0 'Virtio network device 1000' if=eth1 drv=virtio-pci 
unused=virtio_pci *Active*

#
# Interface uses uio_pci_generic when set up for VPP
#

[root@vpp-test-1 centos]# dpdk-devbind --status

Network devices using DPDK-compatible driver

:00:03.0 'Virtio network device 1000' drv=uio_pci_generic unused=virtio_pci

Network devices using kernel driver
===
:00:04.0 'Virtio network device 1000' if=eth1 drv=virtio-pci 
unused=virtio_pci,uio_pci_generic *Active*


vpp# sh hardware-interfaces
  NameIdx   Link  Hardware
GigabitEthernet0/3/0   1 up   GigabitEthernet0/3/0
  Link speed: 10 Gbps
  Ethernet address fa:16:3e:10:5e:4b
  Red Hat Virtio
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg
rx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
tx: queues 1 (max 1), desc 256 (min 0 max 65535 align 1)
pci: device 1af4:1000 subsystem 1af4:0001 address :00:03.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip udp-cksum tcp-cksum tcp-lro vlan-filter
   jumbo-frame
rx offload active: jumbo-frame
tx offload avail:  vlan-insert udp-cksum tcp-cksum tcp-tso multi-segs
tx offload active: multi-segs
rss avail: none
rss active:none
tx burst function: virtio_xmit_pkts
rx burst function: virtio_recv_mergeable_pkts

    rx frames ok                                       467
    rx bytes ok                                      27992
    extended stats:
      rx good packets                                  467
      rx good bytes                                  27992
      rx q0packets                                     467
      rx q0bytes                                     27992
      rx q0 good packets                               467
      rx q0 good bytes                               27992
      rx q0 multicast packets                          465
      rx q0 broadcast packets                            2
      rx q0 undersize packets                          467


#
# Dropped packets are LLC BPDUs, not sure but probably a linuxbridge thing

Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Actually native drivers (like Mellanox or AVF) can be faster w/o buffer 
conversion and tend to be faster than when used by DPDK. I suspect VPP is not 
the only project to report this extra cost.
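As an illustration, bringing up the native AVF driver on an i40e VF looks roughly like this (the PF netdev name, PCI address and resulting interface name are placeholders; additional VF setup such as MAC/trust settings may also be needed):

# on the host: create a VF on the Fortville PF and bind it to vfio-pci
echo 1 > /sys/class/net/enp59s0f0/device/sriov_numvfs
dpdk-devbind.py -b vfio-pci 0000:3b:02.0

vpp# create interface avf 0000:3b:02.0 num-rx-queues 2
vpp# set interface state avf-0/3b/2/0 up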
Jerome

On 04/12/2019 15:43, « Thomas Monjalon »  wrote:

03/12/2019 22:11, Jerome Tollet (jtollet):
> Thomas,
> I am afraid you may be missing the point. VPP is a framework where 
plugins are first class citizens. If a plugin requires leveraging offload 
(inline or lookaside), it is more than welcome to do it.
> There are multiple examples including hw crypto accelerators 
(https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).

OK I understand plugins are open.
My point was about the efficiency of the plugins,
given the need for buffer conversion.
If some plugins are already efficient enough, great:
it means there is no need for bringing effort in native VPP drivers.


> On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon »  wrote:
> 
> 03/12/2019 13:12, Damjan Marion:
> > > On 3 Dec 2019, at 09:28, Thomas Monjalon  
wrote:
> > > 03/12/2019 00:26, Damjan Marion:
> > >> 
> > >> Hi THomas!
> > >> 
> > >> Inline...
> > >> 
> >  On 2 Dec 2019, at 23:35, Thomas Monjalon  
wrote:
> > >>> 
> > >>> Hi all,
> > >>> 
> > >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > >>> Are there some benchmarks about the cost of converting, from 
one format
> > >>> to the other one, during Rx/Tx operations?
> > >> 
> > >> We are benchmarking both dpdk i40e PMD performance and native 
VPP AVF driver performance and we are seeing significantly better performance 
with native AVF.
> > >> If you taake a look at [1] you will see that DPDK i40e driver 
provides 18.62 Mpps and exactly the same test with native AVF driver is giving 
us arounf 24.86 Mpps.
> [...]
> > >> 
> > >>> So why not improving DPDK integration in VPP to make it faster?
> > >> 
> > >> Yes, if we can get freedom to use parts of DPDK we want instead 
of being forced to adopt whole DPDK ecosystem.
> > >> for example, you cannot use dpdk drivers without using EAL, 
mempool, rte_mbuf... rte_eal_init is monster which I was hoping that it will 
disappear for long time...
> > > 
> > > You could help to improve these parts of DPDK,
> > > instead of spending time to try implementing few drivers.
> > > Then VPP would benefit from a rich driver ecosystem.
> > 
> > Thank you for letting me know what could be better use of my time.
> 
> "You" was referring to VPP developers.
> I think some other Cisco developers are also contributing to VPP.
> 
> > At the moment we have good coverage of native drivers, and still 
there is a option for people to use dpdk. It is now mainly up to driver vendors 
to decide if they are happy with performance they wil get from dpdk pmd or they 
want better...
> 
> Yes it is possible to use DPDK in VPP with degraded performance.
> If an user wants best performance with VPP and a real NIC,
> a new driver must be implemented for VPP only.
> 
> Anyway real performance benefits are in hardware device offloads
> which will be hard to implement in VPP native drivers.
> Support (investment) would be needed from vendors to make it happen.
> About offloads, VPP is not using crypto or compression drivers
> that DPDK provides (plus regex coming).
> 
> VPP is a CPU-based packet processing software.
> If users want to leverage hardware device offloads,
> a truly DPDK-based software is required.
> If I understand well your replies, such software cannot be VPP.





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14796): https://lists.fd.io/g/vpp-dev/message/14796
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Hi Thomas,
I strongly disagree with your conclusions from this discussion:
1) Yes, VPP made the choice of not being DPDK dependent BUT certainly not at 
the cost of performance. (It's actually the opposite ie AVF driver)
2) VPP is NOT exclusively CPU centric. I gave you the example of crypto offload 
based on Intel QAT cards (lookaside). There are other examples (lookaside and 
inline)
3) Plugins are free to use any sort of offload (and they do). 

Jerome

On 04/12/2019 15:19, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon »  wrote:

03/12/2019 20:01, Damjan Marion:
> On 3 Dec 2019, at 17:06, Thomas Monjalon wrote:
> > 03/12/2019 13:12, Damjan Marion:
> >> On 3 Dec 2019, at 09:28, Thomas Monjalon wrote:
> >>> 03/12/2019 00:26, Damjan Marion:
>  On 2 Dec 2019, at 23:35, Thomas Monjalon wrote:
> > VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> > Are there some benchmarks about the cost of converting, from one 
format
> > to the other one, during Rx/Tx operations?
>  
>  We are benchmarking both dpdk i40e PMD performance and native VPP 
AVF driver performance and we are seeing significantly better performance with 
native AVF.
>  If you taake a look at [1] you will see that DPDK i40e driver 
provides 18.62 Mpps and exactly the same test with native AVF driver is giving 
us arounf 24.86 Mpps.
> > [...]
>  
> > So why not improving DPDK integration in VPP to make it faster?
>  
>  Yes, if we can get freedom to use parts of DPDK we want instead of 
being forced to adopt whole DPDK ecosystem.
>  for example, you cannot use dpdk drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is monster which I was hoping that it will disappear 
for long time...

As stated below, I take this feedback, thanks.
However it won't change VPP choice of not using rte_mbuf natively.

[...]
> >> At the moment we have good coverage of native drivers, and still there 
is a option for people to use dpdk. It is now mainly up to driver vendors to 
decide if they are happy with performance they wil get from dpdk pmd or they 
want better...
> > 
> > Yes it is possible to use DPDK in VPP with degraded performance.
> > If an user wants best performance with VPP and a real NIC,
> > a new driver must be implemented for VPP only.
> > 
> > Anyway real performance benefits are in hardware device offloads
> > which will be hard to implement in VPP native drivers.
> > Support (investment) would be needed from vendors to make it happen.
> > About offloads, VPP is not using crypto or compression drivers
> > that DPDK provides (plus regex coming).
> 
> Nice marketing pitch for your company :)

I guess you mean Mellanox has a good offloads offering.
But my point is about the end of Moore's law,
and the offload trending of most of device vendors.
However I truly respect the choice of avoiding device offloads.

> > VPP is a CPU-based packet processing software.
> > If users want to leverage hardware device offloads,
> > a truly DPDK-based software is required.
> > If I understand well your replies, such software cannot be VPP.
> 
> Yes, DPDK is centre of the universe/

DPDK is where most of networking devices are supported in userspace.
That's all.


> So Dear Thomas, I can continue this discussion forever, but that is not 
something I'm going to do as it started to be trolling contest.

I agree

> I can understand that you may be passionate about you project and that 
you maybe think that it is the greatest thing after sliced bread, but please 
allow that other people have different opinion. Instead of giving the lessons 
to other people what they should do, if you are interested for dpdk to be 
better consumed, please take a feedback provided to you. I assume that you are 
interested as you showed up on this mailing list, if not there was no reason 
for starting this thread in the first place.

Thank you for the feedbacks, this discussion was required:
1/ it gives more motivation to improve EAL API
2/ it confirms the VPP design choice of not being DPDK-dependent (at a 
performance cost)
3/ it confirms the VPP design choice of being focused on CPU-based 
processing




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14795): https://lists.fd.io/g/vpp-dev/message/14795
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP / tcp_echo performance

2019-12-04 Thread Jerome Tollet via Lists.Fd.Io
Hi Dom,
In addition to Florin’s questions, can you clarify what you mean by 
“…interfaces are assigned to DPDK/VPP” ? What driver are you using ?
Regards,
Jerome


From:  on behalf of Florin Coras 
Date: Wednesday, December 4, 2019 at 02:31
To: "dch...@akouto.com" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] VPP / tcp_echo performance

Hi Dom,

I’ve never tried to run the stack in a VM, so not sure about the expected 
performance, but here are a couple of comments:
- What fifo sizes are you using? Are they at least 4MB (see [1] for VCL 
configuration).
- I don’t think you need to configure more than 16k buffers/numa.

Additionally, to get more information on the issue:
- What does “show session verbose 2” report? Check the stats section for 
retransmit counts (tr - timer retransmit, fr - fast retansmit) which if 
non-zero indicate that packets are lost.
- Check interface rx/tx error counts with “show int”.
- Typically, for improved performance, you should write more than 1.4kB per 
call. But the fact that your average is less than 1.4kB suggests that you often 
find the fifo full or close to full. So probably the issue is not your sender 
app.

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf
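For reference, a minimal vcl.conf along the lines of [1] might look like this (the sizes are examples, and option names should be checked against your VPP release):

vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-scope-local
  app-scope-global
}

# run the unmodified app through the LD_PRELOAD shim, e.g. (paths are placeholders):
# VCL_CONFIG=/etc/vpp/vcl.conf LD_PRELOAD=/usr/lib/libvcl_ldpreload.so iperf3 -s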


On Dec 3, 2019, at 11:40 AM, dch...@akouto.com wrote:

Hi all,
I've been running some performance tests and not quite getting the results I 
was hoping for, and have a couple of related questions I was hoping someone 
could provide some tips with. For context, here's a summary of the results of 
TCP tests I've run on two VMs (CentOS 7 OpenStack instances, host-1 is the 
client and host-2 is the server):
· Running iperf3 natively before the interfaces are assigned to 
DPDK/VPP: 10 Gbps TCP throughput
· Running iperf3 with VCL/HostStack: 3.5 Gbps TCP throughput
· Running a modified version of the tcp_echo application (similar 
results with socket and svm api): 610 Mbps throughput
Things I've tried to improve performance:
· Anything I could apply from 
https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
· Added tcp { cc-algo cubic } to VPP startup config
· Using isolcpu and VPP startup config options, allocated first 2, then 
4 and finally 6 of the 8 available cores to VPP main & worker threads
· In VPP startup config set "buffers-per-numa 65536" and "default 
data-size 4096"
· Updated grub boot options to include hugepagesz=1GB hugepages=64 
default_hugepagesz=1GB
My goal is to achieve at least the same throughput using VPP as I get when I 
run iperf3 natively on the same network interfaces (in this case 10 Gbps).

A couple of related questions:
· Given the items above, do any VPP or kernel configuration items jump 
out that I may have missed that could justify the difference in native vs VPP 
performance or help get the two a bit closer?
· In the modified tcp_echo application, n_sent = app_send_stream(...) 
is called in a loop always using the same length (1400 bytes) in my test 
version. The return value n_sent indicates that the average bytes sent is only 
around 130 bytes per call after some run time. Are there any parameters or 
options that might improve this?
Any tips or pointers to documentation that might shed some light would be 
hugely appreciated!

Regards,
Dom

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14772): https://lists.fd.io/g/vpp-dev/message/14772
Mute This Topic: https://lists.fd.io/mt/65863639/675152
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[fcoras.li...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14779): https://lists.fd.io/g/vpp-dev/message/14779
Mute This Topic: https://lists.fd.io/mt/65863639/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] efficient use of DPDK

2019-12-03 Thread Jerome Tollet via Lists.Fd.Io
Thomas,
I am afraid you may be missing the point. VPP is a framework where plugins are 
first class citizens. If a plugin requires leveraging offload (inline or 
lookaside), it is more than welcome to do it.
There are multiple examples including hw crypto accelerators 
(https://software.intel.com/en-us/articles/get-started-with-ipsec-acceleration-in-the-fdio-vpp-project).
 
Jerome

On 03/12/2019 17:07, « vpp-dev@lists.fd.io on behalf of Thomas Monjalon »  wrote:

03/12/2019 13:12, Damjan Marion:
> > On 3 Dec 2019, at 09:28, Thomas Monjalon  wrote:
> > 03/12/2019 00:26, Damjan Marion:
> >> 
> >> Hi THomas!
> >> 
> >> Inline...
> >> 
>  On 2 Dec 2019, at 23:35, Thomas Monjalon  wrote:
> >>> 
> >>> Hi all,
> >>> 
> >>> VPP has a buffer called vlib_buffer_t, while DPDK has rte_mbuf.
> >>> Are there some benchmarks about the cost of converting, from one 
format
> >>> to the other one, during Rx/Tx operations?
> >> 
> >> We are benchmarking both dpdk i40e PMD performance and native VPP AVF 
driver performance and we are seeing significantly better performance with 
native AVF.
> >> If you taake a look at [1] you will see that DPDK i40e driver provides 
18.62 Mpps and exactly the same test with native AVF driver is giving us arounf 
24.86 Mpps.
[...]
> >> 
> >>> So why not improving DPDK integration in VPP to make it faster?
> >> 
> >> Yes, if we can get freedom to use parts of DPDK we want instead of 
being forced to adopt whole DPDK ecosystem.
> >> for example, you cannot use dpdk drivers without using EAL, mempool, 
rte_mbuf... rte_eal_init is monster which I was hoping that it will disappear 
for long time...
> > 
> > You could help to improve these parts of DPDK,
> > instead of spending time to try implementing few drivers.
> > Then VPP would benefit from a rich driver ecosystem.
> 
> Thank you for letting me know what could be better use of my time.

"You" was referring to VPP developers.
I think some other Cisco developers are also contributing to VPP.

> At the moment we have good coverage of native drivers, and still there is 
a option for people to use dpdk. It is now mainly up to driver vendors to 
decide if they are happy with performance they wil get from dpdk pmd or they 
want better...

Yes it is possible to use DPDK in VPP with degraded performance.
If an user wants best performance with VPP and a real NIC,
a new driver must be implemented for VPP only.

Anyway real performance benefits are in hardware device offloads
which will be hard to implement in VPP native drivers.
Support (investment) would be needed from vendors to make it happen.
About offloads, VPP is not using crypto or compression drivers
that DPDK provides (plus regex coming).

VPP is a CPU-based packet processing software.
If users want to leverage hardware device offloads,
a truly DPDK-based software is required.
If I understand well your replies, such software cannot be VPP.




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14774): https://lists.fd.io/g/vpp-dev/message/14774
Mute This Topic: https://lists.fd.io/mt/65218320/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

2019-12-02 Thread Jerome Tollet via Lists.Fd.Io
Hi Dave,
I just sent a private email to Dave, Andrew and Ed  (see enclosed).
Thanks for your help.
Jerome
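(A likely local workaround, assuming the executor image is simply missing the Python "ply" package:)

pip3 install ply
# or, on Debian/Ubuntu based images:
apt-get install -y python3-ply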

From:  on behalf of "Dave Barach via Lists.Fd.Io" 
Reply-To: "Dave Barach (dbarach)" 
Date: Monday, December 2, 2019 at 14:24
To: "Ed Kern (ejk)" , "Andrew Yourtchenko (ayourtch)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Patch validation seems to be broken: api-crc job, 'no module named ply...'

Please have a look... Thanks... Dave

+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit
+++ make json-api-files
/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py
Traceback (most recent call last):
  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", 
line 3, in 
import ply.lex as lex
ModuleNotFoundError: No module named 'ply'
Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.
Traceback (most recent call last):
  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py",
 line 97, in 
main()
  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py",
 line 92, in main
vppapigen(vppapigen_bin, output_dir, src_dir, f)
  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py",
 line 59, in vppapigen
src_file.name)])
  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
  File "/usr/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 
'['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py',
 '--includedir', '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', 
'/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 
'JSON', '--output', 
'/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']'
 returned non-zero exit status 1.
Makefile:610: recipe for target 'json-api-files' failed
make: *** [json-api-files] Error 1
+++ die 'Generation of .api.json files failed.'
+++ set -x
+++ set +eu
+++ warn 'Generation of .api.json files failed.'
+++ set -exuo pipefail
+++ echo 'Generation of .api.json files failed.'
Generation of .api.json files failed.
+++ exit 1
Build step 'Execute shell' marked build as failure
--- Begin Message ---
Hello,

I tried to push this patch (https://gerrit.fd.io/r/c/vpp/+/23700) which only 
adds a FEATURE.yaml file for DHCP.

 

This test: https://jenkins.fd.io/job/vpp-csit-verify-api-crc-master/2164/ 
returns an error and, looking at its logs 
(https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-api-crc-master/2164/console.log.gz),
 I found the following problem:

 

+++ export PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit

+++ PYTHONPATH=/w/workspace/vpp-csit-verify-api-crc-master/csit

+++ make json-api-files

/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py

Traceback (most recent call last):

  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py", 
line 3, in 

import ply.lex as lex

ModuleNotFoundError: No module named 'ply'

Searching '/w/workspace/vpp-csit-verify-api-crc-master/src' for .api files.

Traceback (most recent call last):

  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py",
 line 97, in 

main()

  File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py",
 line 92, in main

   vppapigen(vppapigen_bin, output_dir, src_dir, f)

 File 
"/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/generate_json.py",
 line 59, in vppapigen

src_file.name)])

  File "/usr/lib/python3.6/subprocess.py", line 356, in check_output

**kwargs).stdout

  File "/usr/lib/python3.6/subprocess.py", line 438, in run

output=stdout, stderr=stderr)

subprocess.CalledProcessError: Command 
'['/w/workspace/vpp-csit-verify-api-crc-master/src/tools/vppapigen/vppapigen.py',
 '--includedir',  '/w/workspace/vpp-csit-verify-api-crc-master/src', '--input', 
'/w/workspace/vpp-csit-verify-api-crc-master/src/plugins/marvell/pp2/pp2.api', 
'JSON', '--output', 
'/w/workspace/vpp-csit-verify-api-crc-master/build-root/install-vpp-native/vpp/share/vpp/api/plugins/pp2.api.json']'
  returned non-zero exit status 1.

Makefile:610: recipe for target 'json-api-files' failed

 

Can you help with that?

 

Jerome

--- End Message ---
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14750): https://lists.fd.io/g/vpp-dev/message/14750
Mute This Topic: https://lists.fd.io/mt/64831658/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Looking for memif performance numbers and configuration

2019-11-11 Thread Jerome Tollet via Lists.Fd.Io
Did you have a look at 
https://docs.fd.io/csit/rls1908/report/vpp_performance_tests/throughput_speedup_multi_core/container_memif.html#
 ?
Jerome
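If you just want a quick local setup, a minimal sketch of a memif link between two VPP instances could look like this (both sides must point at the same socket file, and the addresses are placeholders):

# VPP instance 1
vpp1# create memif socket id 1 filename /run/vpp/memif-demo.sock
vpp1# create interface memif id 0 socket-id 1 master
vpp1# set interface ip address memif1/0 10.0.0.1/24
vpp1# set interface state memif1/0 up

# VPP instance 2
vpp2# create memif socket id 1 filename /run/vpp/memif-demo.sock
vpp2# create interface memif id 0 socket-id 1 slave
vpp2# set interface ip address memif1/0 10.0.0.2/24
vpp2# set interface state memif1/0 up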

Le 11/11/2019 09:55, « vpp-dev@lists.fd.io au nom de Raj » 
 a écrit :

Hello all,

I am looking to see if any one has attempted to connect two VPP using
memif and what pps  numbers is seen over memif interface? It would be
great if the VPP config too can be shares so that I can recreate it
locally.

Thanks and Regards,

Raj


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14557): https://lists.fd.io/g/vpp-dev/message/14557
Mute This Topic: https://lists.fd.io/mt/52470809/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] OpenStack Open Source Cloud Computing Software » Message: Networking-vpp 19.08.1 for VPP 19.08.1 is now available

2019-10-11 Thread Jerome Tollet via Lists.Fd.Io
Hello,
This message may be of interest to you: 
http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010073.html
Jerome
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14161): https://lists.fd.io/g/vpp-dev/message/14161
Mute This Topic: https://lists.fd.io/mt/34485126/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP 19.08 release is available!

2019-08-22 Thread Jerome Tollet via Lists.Fd.Io
+1. List of new features is quite impressive: Quic, new crypto stuffs, native 
drivers improvement, …
Congrats to all contributors

From:  on behalf of Florin Coras 
Date: Thursday, August 22, 2019 at 18:07
To: Andrew Yourtchenko 
Cc: vpp-dev , csit-dev 
Subject: Re: [vpp-dev] VPP 19.08 release is available!

Congrats to the entire community and thanks Andrew!

Cheers,
Florin

> On Aug 21, 2019, at 1:57 PM, Andrew Yourtchenko  wrote:
>
> Hi all,
>
> the VPP release 19.08 artifacts are available on packagecloud release
> repositories.
>
> I have tested the installation on ubuntu and centos.
>
> Many thanks to everyone involved into making it happen!
>
> Special thanks to Vanessa Valderrama for the help today.
>
> --a
> 
>
> p.s. stable/1908 branch is re-opened for the fixes slated for .1
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
>
> View/Reply Online (#13804): https://lists.fd.io/g/vpp-dev/message/13804
> Mute This Topic: https://lists.fd.io/mt/32983052/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13816): https://lists.fd.io/g/vpp-dev/message/13816
Mute This Topic: https://lists.fd.io/mt/32983052/675291
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [jtol...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13818): https://lists.fd.io/g/vpp-dev/message/13818
Mute This Topic: https://lists.fd.io/mt/32983052/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP in interrupt mode

2019-05-25 Thread Jerome Tollet via Lists.Fd.Io
Hi Berna,
You can also try:
set interface rx-mode
Summary/usage
set interface rx-mode  [queue ] [polling | interrupt | adaptive].
Description
This command is used to assign the RX packet processing mode (polling, 
interrupt, adaptive) of a given interface, and optionally a given queue. If 
the 'queue' is not provided, the 'mode' is applied to all queues of the 
interface. Not all interfaces support all modes. To display the current rx-mode, 
use the command 'show interface rx-placement'.
Example usage
Example of how to assign rx-mode to all queues on an interface:
vpp# set interface rx-mode VirtualEthernet0/0/12 polling
Example of how to assign rx-mode to one queue of an interface:
vpp# set interface rx-mode VirtualEthernet0/0/12 queue 0 interrupt
Example of how to display the rx-mode of all interfaces:
vpp# show interface rx-placement
Thread 1 (vpp_wk_0):
  node dpdk-input:
GigabitEthernet7/0/0 queue 0 (polling)
  node vhost-user-input:
VirtualEthernet0/0/12 queue 0 (interrupt)
VirtualEthernet0/0/12 queue 2 (polling)
VirtualEthernet0/0/13 queue 0 (polling)
VirtualEthernet0/0/13 queue 2 (polling)
Thread 2 (vpp_wk_1):
  node dpdk-input:
GigabitEthernet7/0/1 queue 0 (polling)
  node vhost-user-input:
VirtualEthernet0/0/12 queue 1 (polling)
VirtualEthernet0/0/12 queue 3 (polling)
VirtualEthernet0/0/13 queue 1 (polling)
VirtualEthernet0/0/13 queue 3 (polling)


De :  au nom de Berna Demir 
Date : samedi 25 mai 2019 à 14:49
À : "vpp-dev@lists.fd.io" 
Objet : Re: [vpp-dev] VPP in interrupt mode

I found it.
There is a parameter to add sleep time in poll mode:
poll-sleep-usec 
https://fdio-vpp.readthedocs.io/en/latest/gettingstarted/users/configuring/startup.html#unix
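For example, a minimal startup.conf fragment (the value is only an illustration; larger values trade latency for lower CPU usage):

unix {
  poll-sleep-usec 100
}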


On Sat, May 25, 2019 at 3:56 PM Berna Demir 
mailto:berna.demirs...@gmail.com>> wrote:
Hi

Is there any way to configure VPP in interrupt mode for test purposes?
I know VPP as a DPDK application should read the NIC in polling mode,
but I have limited CPU resources on my development machine.

Thanks,
Berna
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13149): https://lists.fd.io/g/vpp-dev/message/13149
Mute This Topic: https://lists.fd.io/mt/31760235/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FW: networking-vpp 19.04 is now available

2019-05-14 Thread Jerome Tollet via Lists.Fd.Io
FYI.

From: "Naveen Joy (najoy)" 
Date: Tuesday, May 14, 2019 at 2:16 PM
To: "openstack-disc...@lists.openstack.org" 

Subject: networking-vpp 19.04 is now available

Hello All,

We'd like to invite you all to try out networking-vpp 19.04.
As many of you may already know, VPP is a fast user space forwarder based on 
the DPDK toolkit. VPP uses vector packet processing algorithms to
minimize the CPU time spent on each packet to maximize throughput.
Networking-vpp is a ML2 mechanism driver that controls VPP on your control and 
compute hosts to provide fast L2 forwarding under Neutron.
This latest version is updated to work with VPP 19.04.
In the 19.04 release, we've worked on making the below updates:
- We've built an automated test pipeline using Tempest. We've identified and 
fixed bugs discovered during our integration test runs.
   We are currently investigating a bug, which causes a race condition in the 
agent. We hope to have a fix for this issue soon.
- We've made it possible to overwrite the VPP repo path. VPP repo paths are 
constructed, pointing to upstream repos, based on OS and version
requested. Now you can allow this to be redirected elsewhere.
- We've updated the mac-ip permit list to allow the link-local IPv6 address 
prefix for neighbor discovery to enable seamless IPv6 networking.

- We've worked on additional fixes for Python3 compatibility and enabled py3 
tests in gerrit gating.

- We've updated the ACL calls in vpp.py to tidy-up the arguments. We've worked 
on reordering vpp.py to group related functions, which is going to be helpful
for further refactoring work in the future.
- We've been doing the usual round of bug fixes and updates - the code will 
work with both VPP 19.01 and 19.04 and has been updated to keep up with
Neutron Rocky and Stein.
The README [1] explains how you can try out VPP using devstack: the devstack 
plugin will deploy the mechanism driver and VPP 19.04 and should give
you a working system with a minimum of hassle.
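As a rough sketch, enabling the plugin from a devstack local.conf looks like this (the ML2 variables below are assumptions; the README is authoritative):

[[local|localrc]]
enable_plugin networking-vpp https://opendev.org/x/networking-vpp
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=vpp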
We will be continuing our development for VPP's 19.08 release. We welcome 
anyone who would like to come help us.
--
Naveen & Ian

[1] https://opendev.org/x/networking-vpp/src/branch/master/README.rst
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#13029): https://lists.fd.io/g/vpp-dev/message/13029
Mute This Topic: https://lists.fd.io/mt/31626875/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] RPM package question related to https://gerrit.fd.io/r/#/c/18953/

2019-05-10 Thread Jerome Tollet via Lists.Fd.Io
Hi Thomas,

In https://gerrit.fd.io/r/#/c/18953/ you are putting epel-release and Python3 
into the dependencies.

  1.  I can understand that those are needed as BuildRequires:, but are they 
really needed for Requires: ?
  2.  Why do we need epel-release as a runtime requirement?
  3.  Do we really need Python3 for VPP 19.04 itself?

Jerome



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12975): https://lists.fd.io/g/vpp-dev/message/12975
Mute This Topic: https://lists.fd.io/mt/31575010/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP Load Balancer with active-standby

2019-05-09 Thread Jerome Tollet via Lists.Fd.Io
Hi Yusuke Tatsumi,
What protocol are you considering for duplicating state?
Jerome
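(For context, the LB plugin is configured today roughly as below; addresses are placeholders, and the weight keyword in the last lines does not exist yet, it only illustrates the proposal discussed below:)

vpp# lb vip 10.0.0.1/32 encap l3dsr dscp 7
vpp# lb as 10.0.0.1/32 192.168.0.10 192.168.0.11
# hypothetical weighted form under the proposal:
# vpp# lb as 10.0.0.1/32 192.168.0.10 weight 10
# vpp# lb as 10.0.0.1/32 192.168.0.11 weight 0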

From:  on behalf of Yusuke Tatsumi 
Date: Wednesday, May 8, 2019 at 07:24
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP Load Balancer with active-standby

Hi all,

We've already used the VPP LB plugin as an active-active L3DSR load balancer, 
and it works well in our environment.
In addition to the above, I'm now interested in supporting an active-standby 
load balancer for some particular cases, for example MySQL, which supports only 
a single active node.
So I plan to enhance the LB feature to support active-standby.
For robustness of the feature, I think adding a member-selection-weight function 
is suitable.
With this method, VPP would behave as follows:
- no weight setting (default): active-active
- weighted setting (e.g. 6:4): active-active, weighted 6:4
- weighted setting (e.g. 10:0): active-standby
Do you have any opinion or a better idea?

Regards,

—
Yusuke Tatsumi
Yahoo Japan Corporation
Technology Group, System Management Division, Cloud Platform Division, Engineering Dept. 1, Compute
TEL: 03-6898-3081
mail: ytats...@yahoo-corp.jp

—
Yusuke Tatsumi
Compute team,
Cloud Platform Division,
System Management Group
Yahoo Japan Corporation
Direct: +81 (3) 6898 3081
mail: ytats...@yahoo-corp.jp

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12964): https://lists.fd.io/g/vpp-dev/message/12964
Mute This Topic: https://lists.fd.io/mt/31540449/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Introducing vpptop - real-time viewer for VPP stats using dynamic terminal user interface #govpp

2019-05-08 Thread Jerome Tollet via Lists.Fd.Io
Hi Andrej,
I  didn’t try it but video is really great!
Jerome

From:  on behalf of "Ondrej Fabry -X (ofabry - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io" 
Reply-To: "Ondrej Fabry -X (ofabry - PANTHEON TECHNOLOGIES at Cisco)" 
Date: Wednesday, May 8, 2019 at 19:05
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Introducing vpptop - real-time viewer for VPP stats using dynamic terminal user interface #govpp

Hey vpp-devs,

I'd like to introduce the vpptop project, which was just open-sourced by 
Pantheon.TECH.
https://github.com/PantheonTechnologies/vpptop

What is vpptop?
vpptop is a Go implementation of a real-time viewer for VPP metrics, presented 
in a dynamic terminal user interface.

Here's short demo preview of vpptop in action.
https://asciinema.org/a/NHODZM2ebcwWFPEEPcja8X19R

Find more information in the README file.

Regards,
Ondrej Fabry
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12960): https://lists.fd.io/g/vpp-dev/message/12960
Mute This Topic: https://lists.fd.io/mt/31545571/21656
Mute #govpp: https://lists.fd.io/mk?hashtag=govpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] To support QUIC protocol

2019-03-15 Thread Jerome Tollet via Lists.Fd.Io
Hi Davi,
Unfortunately, there’s no standard API for Quic. Every library comes with its 
own model. On our side, we are currently exploring Quicly.
Could you describe your setup a bit more?
Jerome

From: Davi Scofield 
Date: Friday, March 15, 2019 at 03:42
To: Jerome Tollet 
Cc: vpp-dev 
Subject: Re: Re: [vpp-dev] To support QUIC protocol

Hi, Jerome:
  I want to run my web server and video server (using the RTMP protocol) over QUIC 
in VPP. I do not want to modify my web/video server code; is there any method 
to achieve this?
  Thanks.
Davi

At 2019-03-14 16:01:39, "Jerome Tollet via Lists.Fd.Io" 
 wrote:

Hi Davi,
We are currently working on it. Do you have specific requirements or 
suggestions?
Jerome

From:  on behalf of Davi Scofield 
Date: Thursday, March 14, 2019 at 03:32
To: vpp-dev 
Subject: [vpp-dev] To support QUIC protocol

Hello,
   Is there any roadmap or suggestion to support QUIC protocol in VPP?
   Thanks!
Davi

















-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12554): https://lists.fd.io/g/vpp-dev/message/12554
Mute This Topic: https://lists.fd.io/mt/30423896/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] To support QUIC protocol

2019-03-14 Thread Jerome Tollet via Lists.Fd.Io
Hi Davi,
We are currently working on it. Do you have specific requirements or 
suggestions?
Jerome

From:  on behalf of Davi Scofield 
Date: Thursday, March 14, 2019 at 03:32
To: vpp-dev 
Subject: [vpp-dev] To support QUIC protocol

Hello,
   Is there any roadmap or suggestion to support QUIC protocol in VPP?
   Thanks!
Davi










-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12525): https://lists.fd.io/g/vpp-dev/message/12525
Mute This Topic: https://lists.fd.io/mt/30423896/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Openstack networking-vpp

2019-02-14 Thread Jerome Tollet via Lists.Fd.Io
Hi Eyle,
Most of the documentation is currently in GitHub and it’s probably not 
exhaustive. Feel free to ask questions on the mailing list if you have specific 
needs. You are also more than welcome to contribute to it.
The vpp-dev mailing list is really dedicated to VPP; I would recommend using 
openstack-...@lists.openstack.org for 
specific questions about networking-vpp.
Regards,
Jerome

De :  au nom de Eyle Brinkhuis 
Date : jeudi 14 février 2019 à 04:13
À : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev] Openstack networking-vpp

Hi all,

Sorry if this is not the right place to ask, but I’m interested in messing 
around with the Openstack networking VPP plugin. Is there any documentation 
other than the git-repo available?

Thanks!

Regards,

Eyle Brinkhuis



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12257): https://lists.fd.io/g/vpp-dev/message/12257
Mute This Topic: https://lists.fd.io/mt/29837743/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Sweetcomb 19.01 Released

2019-02-14 Thread Jerome Tollet via Lists.Fd.Io
Congratulations!
Jerome

De :  au nom de "Ni, Hongjun" 
Date : jeudi 14 février 2019 à 00:40
À : "sweetcomb-...@lists.fd.io" , 
"vpp-dev@lists.fd.io" 
Cc : Ed Warnicke , "Dave Barach (dbarach)" , 
"DiGiglio, John" , "Kinsella, Ray" 
, "Li, Jokul" , "Liu, Yu Y" 

Objet : [vpp-dev] Sweetcomb 19.01 Released

Hey all,

The Sweetcomb 19.01 Release is out.

Packages can be downloaded from Package Cloud:
https://packagecloud.io/fdio/1901

Many thanks to all contributors and members for your great contribution and 
help!
Special thanks to Ed Warnicke, Vanessa and Drenfong for setting up the infra 
so quickly.

The following are the Sweetcomb 19.01 release notes:

## Features
### Northbound Interface
- Netconf
- gRPC Network Management Interface

### IETF Yang Models
- ietf-interfa...@2014-05-08.yang
- ietf-interfaces.yang
- ietf...@2014-06-16.yang
- ietf-yang-ty...@2013-07-15.yang

### OpenConfig Yang Models
- openconfig-extensions.yang
- openconfig-if-aggregate.yang
- openconfig-if-ethernet.yang
- openconfig-if-ip.yang
- openconfig-if-types.yang
- openconfig-inet-types.yang
- openconfig-interfaces.yang
- openconfig-local-routing.yang
- openconfig-policy-types.yang
- openconfig-types.yang
- openconfig-vlan-types.yang
- openconfig-vlan.yang
- openconfig-yang-types.yang

### Data Store
- Sysrepo configuration
- Sysrepo operational

### Translation Layer: IETF
- interface

### Translation Layer: OpenConfig
- interface
- local routing

### Connection to VPP
- connect to VPP via binary APIs
- reconnect to VPP automatically

Thank you all,
Hongjun
On behalf of the FD.io/Sweetcomb team

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12256): https://lists.fd.io/g/vpp-dev/message/12256
Mute This Topic: https://lists.fd.io/mt/29836514/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [sweetcomb-dev] [vpp-dev] Streaming Telemetry with gNMI server

2019-02-03 Thread Jerome Tollet via Lists.Fd.Io
Hi Hongjun,
Integrating this work with Sweetcomb would be interesting because stats may be 
"enriched" with extra information which is not exposed in the stats shared memory 
segment.
Because of Chinese New Year, there won't be a weekly call on Thursday, but maybe 
Yohan & Stevan could attend the next call.
Regards,
Jerome
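
As a side note for readers who want to see which stat paths (the /if/rx style paths described below) exist before subscribing over gNMI, the stats segment can also be inspected directly with the vpp_get_stats utility that ships with recent VPP builds. A small sketch, assuming the default stats socket:

vpp_get_stats ls | grep '^/if/'   # list interface-related stat paths
vpp_get_stats dump /if/rx         # dump the per-interface rx counters once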

Le 03/02/2019 05:32, « sweetcomb-...@lists.fd.io au nom de Ni, Hongjun » 
 a écrit :

Hi Yohan and Stevan,

Thank you for your great work!

FD.io has a sub-project named Sweetcomb, which provides gNMI Northbound 
interface to upper application.
Sweetcomb project will push its first release on Feb 6, 2019.
Please take a look at below link for details from Pantheon Technologies:
https://www.youtube.com/watch?v=hTv6hFnyAhE 

Not sure if your work could be integrated with Sweetcomb project?

Thanks a lot,
Hongjun


-Original Message-
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Yohan 
Pipereau
Sent: Sunday, February 3, 2019 5:55 AM
To: vpp-dev@lists.fd.io
Cc: Stevan COROLLER 
Subject: [vpp-dev] Streaming Telemetry with gNMI server

Hi everyone,

Stevan and I have developed a small gRPC server to stream VPP metrics to an 
analytic stack.

That's right, there is already a program to do this in VPP, it is 
vpp_prometheus_export. Here are the main details/improvements regarding our 
implementation:

* Our implementation is based on the gNMI specification, a network standard 
co-written by several network actors to allow configuration and telemetry with 
RPCs.
* Thanks to the gNMI protobuf file, messages are easier to parse and use a 
binary format for better performance.
* We are using gRPC and Protobuf, so this is an HTTP/2 server.
* We are using a push model (or streaming) instead of a pull model. This 
means that clients subscribe to metric paths with a sample interval, and our 
server streams counters according to that sample interval.
* As we said just before, contrary to vpp_prometheus_export, our 
application lets clients decide which metrics will be streamed and how often.
* For interface-related counters, we also provide conversion of interface 
indexes into interface names.
Ex: /if/rx would be output as /if/rx/tap0/thread0. But at this stage, this 
conversion is expensive because it uses a loop to collect vapi interface 
events. It is planned to write paths with interface names in the STAT shared memory 
segment to avoid this loop.

Here is the link to our project:
https://github.com/vpp-telemetry-pfe/gnmi-grpc

We have provided a docker scenario to illustrate our work. It can be found 
in docker directory of the project. You can follow the guide named guide.md.

Do not hesitate to give us feedbacks regarding the scenario or the code.

Yohan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#170): https://lists.fd.io/g/sweetcomb-dev/message/170
Mute This Topic: https://lists.fd.io/mt/29637803/675291
Group Owner: sweetcomb-dev+ow...@lists.fd.io
Unsubscribe: 
https://lists.fd.io/g/sweetcomb-dev/leave/3383274/1904987652/xyzzy  
[jtol...@cisco.com]
-=-=-=-=-=-=-=-=-=-=-=-



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12152): https://lists.fd.io/g/vpp-dev/message/12152
Mute This Topic: https://lists.fd.io/mt/29649627/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Streaming Telemetry with gNMI server

2019-02-03 Thread Jerome Tollet via Lists.Fd.Io
Hi Yohan & Stevan,
Great work. Thanks!
Jerome

Le 02/02/2019 23:35, « vpp-dev@lists.fd.io au nom de Yohan Pipereau » 
 a écrit :

Hi everyone,

Stevan and I have developed a small gRPC server to stream VPP metrics to
an analytic stack.

That's right, there is already a program to do this in VPP, it is
vpp_prometheus_export. Here are the main details/improvements regarding
our implementation:

* Our implementation is based on the gNMI specification, a network standard
co-written by several network actors to allow configuration and
telemetry with RPCs.
* Thanks to the gNMI protobuf file, messages are easier to parse and use a
binary format for better performance.
* We are using gRPC and Protobuf, so this is an HTTP/2 server.
* We are using a push model (or streaming) instead of a pull model. This
means that clients subscribe to metric paths with a sample interval, and
our server streams counters according to that sample interval.
* As we said just before, contrary to vpp_prometheus_export, our
application lets clients decide which metrics will be streamed and how often.
* For interface-related counters, we also provide conversion of
interface indexes into interface names.
Ex: /if/rx would be output as /if/rx/tap0/thread0
But at this stage, this conversion is expensive because it uses a loop
to collect vapi interface events. It is planned to write paths with
interface names in the STAT shared memory segment to avoid this loop.

Here is the link to our project:
https://github.com/vpp-telemetry-pfe/gnmi-grpc

We have provided a docker scenario to illustrate our work. It can be
found in docker directory of the project. You can follow the guide named
guide.md.

Do not hesitate to give us feedbacks regarding the scenario or the code.

Yohan


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#12151): https://lists.fd.io/g/vpp-dev/message/12151
Mute This Topic: https://lists.fd.io/mt/29635594/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] wireshark vpp dispatch trace dissector merged

2019-01-17 Thread Jerome Tollet via Lists.Fd.Io
That’s really nice!

De :  au nom de Florin Coras 
Date : jeudi 17 janvier 2019 à 17:25
À : "Dave Barach (dbarach)" 
Cc : "vpp-dev@lists.fd.io" 
Objet : Re: [vpp-dev] wireshark vpp dispatch trace dissector merged

Awesome!

Florin


On Jan 17, 2019, at 5:11 AM, Dave Barach via Lists.Fd.Io 
mailto:dbarach=cisco@lists.fd.io>> wrote:

I’m pleased to announce that our dispatch trace wireshark dissector has been 
merged. At some point in the indefinite future, everyone’s favorite distro will 
include a copy of wireshark which knows how to dissect vpp dispatch trace pcap 
files.

I’ll update the docs accordingly.

Dave
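
For anyone who wants to produce a dispatch trace to feed into the dissector, the capture is driven from the VPP debug CLI. A minimal sketch — CLI syntax as of recent master, so check the "pcap dispatch trace" help on your build; the file name below is an example and the capture lands under /tmp:

vppctl pcap dispatch trace on max 1000 file dispatch.pcap
# ... let some traffic run through the graph, then:
vppctl pcap dispatch trace off
ls -l /tmp/dispatch.pcap    # open this file in the patched wireshark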

From: bugzilla-dae...@wireshark.org 
mailto:bugzilla-dae...@wireshark.org>>
Sent: Thursday, January 17, 2019 6:32 AM
To: wiresh...@barachs.net
Subject: [Bug 15411] [dissector] Add dissector for vector packet processing 
dispatch traces

Comment #2 on bug 15411 from Gerrit Code Review

Change 31466 merged by Anders Broman:

VPP: add vpp graph dispatch trace dissector



https://code.wireshark.org/review/31466


You are receiving this mail because:
· You reported the bug.
· You are the assignee for the bug.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11939): https://lists.fd.io/g/vpp-dev/message/11939
Mute This Topic: https://lists.fd.io/mt/29172119/675152
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[fcoras.li...@gmail.com]
-=-=-=-=-=-=-=-=-=-=-=-

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11942): https://lists.fd.io/g/vpp-dev/message/11942
Mute This Topic: https://lists.fd.io/mt/29172119/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re : [vpp-dev] Maintainer router plugin

2018-12-10 Thread Jerome Tollet via Lists.Fd.Io
Hello Justin,
Regarding multi-instance, have you considered running multiple instances of VPP 
in different containers?
Jerome

Le 08/12/2018 18:00, « vpp-dev@lists.fd.io au nom de Justin Iurman » 
 a écrit :

Hi Hongjun,

> There is no plan to use memif at present. Welcome your contribution if 
you will.

Of course, if I find some free time. Anyone interested in working on this ?

> In the router plugin, we inject links, routes, etc. from different namespaces in
> the kernel into different VRFs in VPP.
> Not support multi-instance mode.

How do you determine which namespace(s) to look at? Or do you take care of 
all namespaces by default?

Also, would multi-instance mode be feasible ?

Cheers,
Justin


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11535): https://lists.fd.io/g/vpp-dev/message/11535
Mute This Topic: https://lists.fd.io/mt/28707406/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Request: please add "real" pcap ability #vpp

2018-11-26 Thread Jerome Tollet via Lists.Fd.Io
Thanks for the update. Feel free to contribute to the documentation or wiki on 
that point.
Jerome

De :  au nom de Brian Dickson 

Date : dimanche 25 novembre 2018 à 19:36
À : Jerome Tollet 
Cc : "vpp-dev@lists.fd.io" 
Objet : Re: [vpp-dev] Request: please add "real" pcap ability #vpp

Hi, Jerome (and everyone),

Thanks for this!

Using packet-capture + span, did indeed accomplish what I was looking for.

One useful data point: I was able to capture about 10 seconds of line-rate 10G 
into a pcap file, and it looks like I didn't lose any packets, on a VPP host 
that was not forwarding packets.

Thanks again,
Brian

On Fri, Nov 23, 2018 at 9:06 AM Jerome Tollet (jtollet) 
mailto:jtol...@cisco.com>> wrote:
Hi Brian,
I tried what I told you and I confirm that it worked fine on my setup.

create packet-generator interface pg0
packet-generator capture pg0 pcap /tmp/mycap.pcap
set interface span SOURCE_INTF destination pg0
set interface state pg0 up

Jerome
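
For completeness, the same recipe can be driven from the shell and the resulting file read back with standard tools. A sketch only — the source interface name below (GigabitEthernet0/6/0) is a placeholder for whatever "show interface" reports on your system:

vppctl create packet-generator interface pg0
vppctl packet-generator capture pg0 pcap /tmp/mycap.pcap
vppctl set interface span GigabitEthernet0/6/0 destination pg0
vppctl set interface state pg0 up
# ... capture for a while, then inspect the capture:
tcpdump -nr /tmp/mycap.pcap | head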
De : mailto:vpp-dev@lists.fd.io>> au nom de Brian Dickson 
mailto:brian.peter.dick...@gmail.com>>
Date : vendredi 23 novembre 2018 à 08:03
À : "d...@barachs.net" 
mailto:d...@barachs.net>>
Cc : "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Objet : Re: [vpp-dev] Request: please add "real" pcap ability #vpp


On Thu, Nov 22, 2018 at 5:30 AM mailto:d...@barachs.net>> 
wrote:
Laying aside comments about folks who aren’t regular community contributors 
introducing themselves in random ways, here are a few thoughts:

We have a plan to unify pcap tracepoints when Damjan finishes reworking the 
ethernet input node.

That is very welcome news.

Is there a rough timeline for Damjan's reworking, and the unification? I just 
want to factor that into my own plans, if possible.


No matter what, pcap capture involves a bunch of data copying. The forwarding 
rate will clearly suffer. Full stop.

Yes, I fully understand that. There's no such thing as a free lunch.

In the environment in question, there's VPP hosts (doing BGP with the netlink 
and router sandbox plugins to get the routing table into VPP), and adjacent to 
them (physically upstream/downstream) we are using passive optical splitters.

Those optical splitters feed copies of traffic to capture hosts, specifically 
dedicated to packet capture and/or other integrated analysis code to be 
developed.

Our packet capture would only be using VPP without any packet forwarding, i.e. 
as a convenient way of integrating kernel offload with packet capture, and 
possibly chained with other added custom nodes.

(DPDK by itself is not really friendly for doing any kind of from-scratch 
integration, and I haven't found many/any other currently maintained open 
source packages/frameworks that offer pcap. E.g. netmap-libpcap seems 
abandoned.)

Having the ability to add other nodes in the graph, that do other stuff, 
possibly with zero copy, is another major reason we're looking at VPP.

So, pcap is the starting point, and future work might keep the pcap capability 
(assuming the ability to control whether capture is done, and the ability to 
specify pcap filter rules), and add other custom functionality.

To give you an idea, this is not consumer-grade stuff we are using; 12 or 24 
core Intel boxes (with HT, appears as 24 or 48 cores), and 128GB or 256GB of 
memory, just for packet capture, onto RAIDed SSDs.

Thanks for the info, and I'll definitely look at that extras/wireshark thing.

Brian


In master/latest, I’ve added pcap tracing – and a wireshark dissector – to the 
graph dispatch engine. See .../extras/wireshark/readme.md for more detail. The 
wireshark dissector isn’t finished by any means, nor do we have a blessed encap 
type number from tcpdump-workers, nor is the work upstreamed into wireshark.




From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of 
brian.peter.dick...@gmail.com
Sent: Wednesday, November 21, 2018 6:59 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Request: please add "real" pcap ability #vpp

Hi, dev folks,

Apologies for my first message being kind of demanding.

However, I think this is a reasonable request.

What I am interested in, and I think this is likely to be a fairly universal 
desire, is the ability to properly integrate some kind of pcap packet capture 
to the full VPP graph.

The current available mechanisms (pcap drop trace and pcap tx trace) do not 
apply to packets that are only "handled" by the host in question, i.e. neither 
originate or terminate on the local host.

In particular, I'm interested in something that can run on a bare metal host 
and, presuming sufficient resources can be given to it (cores, memory, etc), do 
packet capture at line rate.

Thus, any restriction ("run it on a VM") is not adequate.

Given that there is already stuff for handling the pcap file already 

Re: [vpp-dev] Request: please add "real" pcap ability #vpp

2018-11-23 Thread Jerome Tollet via Lists.Fd.Io
Hi Brian,
I tried what I told you and I confirm that it worked fine on my setup.

create packet-generator interface pg0
packet-generator capture pg0 pcap /tmp/mycap.pcap
set interface span SOURCE_INTF destination pg0
set interface state pg0 up

Jerome
De :  au nom de Brian Dickson 

Date : vendredi 23 novembre 2018 à 08:03
À : "d...@barachs.net" 
Cc : "vpp-dev@lists.fd.io" 
Objet : Re: [vpp-dev] Request: please add "real" pcap ability #vpp


On Thu, Nov 22, 2018 at 5:30 AM mailto:d...@barachs.net>> 
wrote:
Laying aside comments about folks who aren’t regular community contributors 
introducing themselves in random ways, here are a few thoughts:

We have a plan to unify pcap tracepoints when Damjan finishes reworking the 
ethernet input node.

That is very welcome news.

Is there a rough timeline for Damjan's reworking, and the unification? I just 
want to factor that into my own plans, if possible.


No matter what, pcap capture involves a bunch of data copying. The forwarding 
rate will clearly suffer. Full stop.

Yes, I fully understand that. There's no such thing as a free lunch.

In the environment in question, there's VPP hosts (doing BGP with the netlink 
and router sandbox plugins to get the routing table into VPP), and adjacent to 
them (physically upstream/downstream) we are using passive optical splitters.

Those optical splitters feed copies of traffic to capture hosts, specifically 
dedicated to packet capture and/or other integrated analysis code to be 
developed.

Our packet capture would only be using VPP without any packet forwarding, i.e. 
as a convenient way of integrating kernel offload with packet capture, and 
possibly chained with other added custom nodes.

(DPDK by itself is not really friendly for doing any kind of from-scratch 
integration, and I haven't found many/any other currently maintained open 
source packages/frameworks that offer pcap. E.g. netmap-libpcap seems 
abandoned.)

Having the ability to add other nodes in the graph, that do other stuff, 
possibly with zero copy, is another major reason we're looking at VPP.

So, pcap is the starting point, and future work might keep the pcap capability 
(assuming the ability to control whether capture is done, and the ability to 
specify pcap filter rules), and add other custom functionality.

To give you an idea, this is not consumer-grade stuff we are using; 12 or 24 
core Intel boxes (with HT, appears as 24 or 48 cores), and 128GB or 256GB of 
memory, just for packet capture, onto RAIDed SSDs.

Thanks for the info, and I'll definitely look at that extras/wireshark thing.

Brian


In master/latest, I’ve added pcap tracing – and a wireshark dissector – to the 
graph dispatch engine. See .../extras/wireshark/readme.md for more detail. The 
wireshark dissector isn’t finished by any means, nor do we have a blessed encap 
type number from tcpdump-workers, nor is the work upstreamed into wireshark.




From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of 
brian.peter.dick...@gmail.com
Sent: Wednesday, November 21, 2018 6:59 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Request: please add "real" pcap ability #vpp

Hi, dev folks,

Apologies for my first message being kind of demanding.

However, I think this is a reasonable request.

What I am interested in, and I think this is likely to be a fairly universal 
desire, is the ability to properly integrate some kind of pcap packet capture 
to the full VPP graph.

The current available mechanisms (pcap drop trace and pcap tx trace) do not 
apply to packets that are only "handled" by the host in question, i.e. neither 
originate or terminate on the local host.

In particular, I'm interested in something that can run on a bare metal host 
and, presuming sufficient resources can be given to it (cores, memory, etc), do 
packet capture at line rate.

Thus, any restriction ("run it on a VM") is not adequate.

Given that there is already stuff for handling the pcap file already (in 
vnet/unix IIRC), this should not be a lot of work.

There are two use cases I have:
· debugging data plane stuff on a vpp-based router (i.e. using the vppsb 
netlink and router projects)
· packet capture at line rate (a vpp host that only listens/drops traffic, 
incidental to the packet capture, i.e. a single-purpose host, bypassing 
kernel/driver limitations, to take all ethernet traffic on a port and stuff it 
into a pcap file.)
NB: for scaling purposes, it is reasonable to implement the pcap captures 
using RSS/RFS to multiple cores and having each core be a thread doing pcap 
file writing; how that would be put into the "vpp graph" might be a little less 
than trivial, but should be straightforward, IMHO)
Thanks in advance.

Brian Dickson

P.S. There is a SERIOUS lack of useful documentation on how to actually do 
this, as a potential ad-hoc contributor. Not sure if 

Re: [vpp-dev] Request: please add "real" pcap ability #vpp

2018-11-22 Thread Jerome Tollet via Lists.Fd.Io
I tried in the past and that was working fine.
Jerome

De : Brian Dickson 
Date : vendredi 23 novembre 2018 à 07:44
À : Jerome Tollet 
Cc : "vpp-dev@lists.fd.io" 
Objet : Re: [vpp-dev] Request: please add "real" pcap ability #vpp


On Thu, Nov 22, 2018 at 8:18 AM Jerome Tollet (jtollet) 
mailto:jtol...@cisco.com>> wrote:
Hi Peter,

(It's actually Brian, BTW.)

Did you try creating a pg interface and spanning packets from your port to this 
interface?

I didn't, there wasn't a lot of documentation that would have pointed in that 
direction.

But, I will, thanks for the suggestion.

Brian

Jerome

De : mailto:vpp-dev@lists.fd.io>> au nom de 
"brian.peter.dick...@gmail.com" 
mailto:brian.peter.dick...@gmail.com>>
Date : jeudi 22 novembre 2018 à 00:58
À : "vpp-dev@lists.fd.io" 
mailto:vpp-dev@lists.fd.io>>
Objet : [vpp-dev] Request: please add "real" pcap ability #vpp

Hi, dev folks,

Apologies for my first message being kind of demanding.

However, I think this is a reasonable request.

What I am interested in, and I think this is likely to be a fairly universal 
desire, is the ability to properly integrate some kind of pcap packet capture 
to the full VPP graph.

The current available mechanisms (pcap drop trace and pcap tx trace) do not 
apply to packets that are only "handled" by the host in question, i.e. neither 
originate or terminate on the local host.

In particular, I'm interested in something that can run on a bare metal host 
and, presuming sufficient resources can be given to it (cores, memory, etc), do 
packet capture at line rate.

Thus, any restriction ("run it on a VM") is not adequate.

Given that there is already stuff for handling the pcap file already (in 
vnet/unix IIRC), this should not be a lot of work.

There are two use cases I have:
• debugging data plane stuff on a vpp-based router (i.e. using the 
vppsb netlink and router projects)
• packet capture at line rate (a vpp host that only listens/drops 
traffic, incidental to the packet capture, i.e. a single-purpose host, 
bypassing kernel/driver limitations, to take all ethernet traffic on a port and 
stuff it into a pcap file.)
NB: for scaling purposes, it is reasonable to implement the pcap captures 
using RSS/RFS to multiple cores and having each core be a thread doing pcap 
file writing; how that would be put into the "vpp graph" might be a little less 
than trivial, but should be straightforward, IMHO)
Thanks in advance.

Brian Dickson

P.S. There is a SERIOUS lack of useful documentation on how to actually do 
this, as a potential ad-hoc contributor. Not sure if you guys have gotten this 
feedback from anyone else.
P.P.S. I'm using 18.07 because that is the last version that builds alongside 
the vppsb netlink and router plugins.
P.P.P.S. Even getting 18.07 and vppsb to build was a nightmare. You should try 
doing this from scratch, i.e. put yourselves in the shoes of someone who just 
discovered vpp...
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11376): https://lists.fd.io/g/vpp-dev/message/11376
Mute This Topic: https://lists.fd.io/mt/28282785/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Request: please add "real" pcap ability #vpp

2018-11-22 Thread Jerome Tollet via Lists.Fd.Io
Hi Peter,
Did you try creating a pg interface and spanning packets from your port to this 
interface?
Jerome

De :  au nom de "brian.peter.dick...@gmail.com" 

Date : jeudi 22 novembre 2018 à 00:58
À : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev] Request: please add "real" pcap ability #vpp

Hi, dev folks,

Apologies for my first message being kind of demanding.

However, I think this is a reasonable request.

What I am interested in, and I think this is likely to be a fairly universal 
desire, is the ability to properly integrate some kind of pcap packet capture 
to the full VPP graph.

The current available mechanisms (pcap drop trace and pcap tx trace) do not 
apply to packets that are only "handled" by the host in question, i.e. neither 
originate or terminate on the local host.

In particular, I'm interested in something that can run on a bare metal host 
and, presuming sufficient resources can be given to it (cores, memory, etc), do 
packet capture at line rate.

Thus, any restriction ("run it on a VM") is not adequate.

Given that there is already stuff for handling the pcap file already (in 
vnet/unix IIRC), this should not be a lot of work.

There are two use cases I have:
· debugging data plane stuff on a vpp-based router (i.e. using the 
vppsb netlink and router projects)
· packet capture at line rate (a vpp host that only listens/drops 
traffic, incidental to the packet capture, i.e. a single-purpose host, 
bypassing kernel/driver limitations, to take all ethernet traffic on a port and 
stuff it into a pcap file.)
NB: for scaling purposes, it is reasonable to implement the pcap captures 
using RSS/RFS to multiple cores and having each core be a thread doing pcap 
file writing; how that would be put into the "vpp graph" might be a little less 
than trivial, but should be straightforward, IMHO)
Thanks in advance.

Brian Dickson

P.S. There is a SERIOUS lack of useful documentation on how to actually do 
this, as a potential ad-hoc contributor. Not sure if you guys have gotten this 
feedback from anyone else.
P.P.S. I'm using 18.07 because that is the last version that builds alongside 
the vppsb netlink and router plugins.
P.P.P.S. Even getting 18.07 and vppsb to build was a nightmare. You should try 
doing this from scratch, i.e. put yourselves in the shoes of someone who just 
discovered vpp...
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11373): https://lists.fd.io/g/vpp-dev/message/11373
Mute This Topic: https://lists.fd.io/mt/28282785/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FW: [openstack-dev] networking-vpp 18.10 for VPP 18.10 is now available

2018-11-08 Thread Jerome Tollet via Lists.Fd.Io
Dear VPP’ers,
This message may be of interest to you.
Jerome

De : Naveen Joy 
Répondre à : "OpenStack Development Mailing List (not for usage questions)" 

Date : jeudi 8 novembre 2018 à 23:05
À : openstack-dev 
Objet : [openstack-dev] networking-vpp 18.10 for VPP 18.10 is now available

Hello All,

In conjunction with the release of VPP 18.10, we'd like to invite you all to 
try out networking-vpp 18.10 for VPP 18.10.
As many of you may already know, VPP is a fast user space forwarder based on 
the DPDK toolkit. VPP uses vector packet
processing algorithms to minimize the CPU time spent on each packet to maximize 
throughput.

Networking-vpp is a ML2 mechanism driver that controls VPP on your control and 
compute hosts to provide fast L2 forwarding
under Neutron.

In this release, we have made improvements to fully support the network trunk 
service plugin. Using this plugin, you can attach
multiple networks to an instance by binding it to a single vhostuser trunk 
port. The APIs are the same as the OpenStack Neutron trunk
service APIs. You can also now bind and unbind subports to a bound network 
trunk.
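
For illustration, creating and populating such a trunk through the standard OpenStack client looks roughly like this (a sketch only; the port IDs, names, flavor and image are placeholders, and the exact syntax should be checked against your OpenStack release):

openstack network trunk create --parent-port PARENT_PORT_ID trunk0
openstack network trunk set --subport port=SUBPORT_ID,segmentation-type=vlan,segmentation-id=100 trunk0
openstack server create --flavor m1.small --image cirros --nic port-id=PARENT_PORT_ID vm0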

Another feature we have improved in this release is Tap-as-a-Service (TaaS). 
The TaaS code has been updated to handle any out-of-order 
etcd messages received during agent restarts.
create remote port mirroring capability for tenant virtual networks.

Besides the above, this release also has several bug fixes, VPP 18.10 API 
compatibility and stability related improvements.

The README [1] explains how you can try out VPP using devstack: the devstack 
plugin will deploy the mechanism driver and VPP 18.10
and should give you a working system with a minimum of hassle.

We will be continuing our development between now and VPP's 19.01 release. 
There are several features we're planning to work on
and we will keep you updated through our bugs list [2]. We welcome anyone who 
would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, every other Monday (the 
next one is due this Monday at 0800 PT = 1600 GMT.
--
Ian & Naveen

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11179): https://lists.fd.io/g/vpp-dev/message/11179
Mute This Topic: https://lists.fd.io/mt/28042373/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] [tsc] VPP 18.10 is out!!!

2018-10-24 Thread Jerome Tollet via Lists.Fd.Io
Congratulations!

Le 24/10/2018 00:47, « t...@lists.fd.io au nom de Marco Varlese » 
 a écrit :

Dear all,

I am very happy to announce that release 18.10 is available.

Release artificats can be found on both Nexus and Packagecloud.

Thanks to all contributors for yet another great release!


Cheers,
-- 
Marco V

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10956): https://lists.fd.io/g/vpp-dev/message/10956
Mute This Topic: https://lists.fd.io/mt/27617131/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] FW: Change in vpp[master]: fix format error in show logging config output

2018-10-03 Thread Jerome Tollet via Lists.Fd.Io
“To be very honest, if i'm newbie here, i will probably run away after spending 
half hour staring into hundred thousand lines of jenkins build logs from distro 
i never used in my life...”

That's precisely my concern here.

Jerome

De : Damjan Marion 
Date : mercredi 3 octobre 2018 à 21:08
À : Jerome Tollet 
Cc : "vpp-dev@lists.fd.io" 
Objet : Re: [vpp-dev] FW: Change in vpp[master]: fix format error in show 
logging config output





On 3 Oct 2018, at 20:44, Jerome Tollet via Lists.Fd.Io 
mailto:jtollet=cisco@lists.fd.io>> wrote:

Hello,
I submitted a very simple patch early today to fix a minor problem with the 
output of “show error configuration”. (https://gerrit.fd.io/r/#/c/15110/).
Verify job didn’t go through and honestly it was not really convenient to 
understand what went wrong.
In order to provide the best contributor experience, would it make sense to 
restrict voting rights to the key testing jobs only?
Any thought from the community on that?
Jerome

Unfortunately this is our reality, and I don't expect things to be better 
unless we decide to decouple code verification from the distro packaging 
verifications.
To be very honest, if i'm newbie here, i will probably run away after spending 
half hour staring into hundred thousand lines of jenkins build logs from distro 
i never used in my life...

--
Damjan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10748): https://lists.fd.io/g/vpp-dev/message/10748
Mute This Topic: https://lists.fd.io/mt/26721213/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FW: Change in vpp[master]: fix format error in show logging config output

2018-10-03 Thread Jerome Tollet via Lists.Fd.Io
Hello,
I submitted a very simple patch early today to fix a minor problem with the 
output of “show error configuration”. (https://gerrit.fd.io/r/#/c/15110/).
Verify job didn’t go through and honestly it was not really convenient to 
understand what went wrong.
In order to provide the best contributor experience, would it make sense to 
restrict voting rights to the key testing jobs only?
Any thought from the community on that?
Jerome

De : "fd.io JJB (Code Review)" 
Répondre à : "jobbuil...@projectrotterdam.info" 
, Jerome Tollet 
Date : mercredi 3 octobre 2018 à 15:08
À : Jerome Tollet 
Objet : Change in vpp[master]: fix format error in show logging config output


fd.io JJB posted comments on this change.

View Change

Patch set 2:Verified -1

Build Failed

https://jenkins.fd.io/job/vpp-verify-master-osleap15/3619/ : FAILURE

No problems were identified. If you know why this problem occurred, please add 
a suitable Cause for it. ( 
https://jenkins.fd.io/job/vpp-verify-master-osleap15/3619/ )

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-osleap15/3619

https://jenkins.fd.io/job/vpp-csit-verify-virl-master/13815/ : FAILURE (skipped)

Job is failing due to JNLP4-connect error ( 
https://jenkins.fd.io/job/vpp-csit-verify-virl-master/13815/ )

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-csit-verify-virl-master/13815

https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/14858/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu1604/14858

https://jenkins.fd.io/job/vpp-make-test-docs-verify-master/9468/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-make-test-docs-verify-master/9468

https://jenkins.fd.io/job/vpp-verify-master-centos7/14258/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/14258

https://jenkins.fd.io/job/vpp-docs-verify-master/11769/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-docs-verify-master/11769

https://jenkins.fd.io/job/vpp-arm-verify-master-ubuntu1604/2656/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-arm-verify-master-ubuntu1604/2656

https://jenkins.fd.io/job/vpp-verify-master-clang/2897/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-clang/2897

https://jenkins.fd.io/job/vpp-beta-verify-master-ubuntu1804/2742/ : SUCCESS

Logs: 
https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-beta-verify-master-ubuntu1804/2742

To view, visit change 15110. To unsubscribe, 
visit settings.
Gerrit-Project: vpp
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: Idc41a219db185b524f497b096eb71892b5f9ebf8
Gerrit-Change-Number: 15110
Gerrit-PatchSet: 2
Gerrit-Owner: Jerome Tollet 
Gerrit-Reviewer: Damjan Marion 
Gerrit-Reviewer: Florin Coras 
Gerrit-Reviewer: Jerome Tollet 
Gerrit-Reviewer: Marco Varlese 
Gerrit-Reviewer: fd.io JJB 
Gerrit-Comment-Date: Wed, 03 Oct 2018 13:08:44 +
Gerrit-HasComments: No
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10746): https://lists.fd.io/g/vpp-dev/message/10746
Mute This Topic: https://lists.fd.io/mt/26721213/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP 18.07 RPM release packages available in Centos

2018-09-13 Thread Jerome Tollet via Lists.Fd.Io
Excellent news. Thanks!

De : Thomas F Herbert 
Date : jeudi 13 septembre 2018 à 21:29
À : "vpp-dev@lists.fd.io" 
Cc : Jerome Tollet 
Objet : Re: [vpp-dev] VPP 18.07 RPM release packages available in Centos


Jerome,

18.07 release packages are in Centos mirrors.

To install the Centos release packages of VPP on a Centos host, install the Centos NFV 
SIG yum repo and then install vpp as follows.

yum install centos-release-fdio
yum install vpp*

--Tom
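
Once the packages are installed, a quick way to confirm the dataplane comes up is the following (a minimal sketch; the service name and CLI assume the packaged defaults):

sudo systemctl start vpp
sudo systemctl status vpp
sudo vppctl show version
sudo vppctl show interface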

On 05/23/2018 10:32 AM, Jerome Tollet (jtollet) wrote:

Hey Thomas,

Can you let us know what you are planning to do for next releases?

Are you manually building those RPMs or will you automatically include bugfix 
versions as well as future versions (e.g. 18.07)?

Jerome

On 5/21/2018 12:24 PM, Thomas F Herbert wrote:

VPP 18.04 RPMs are available in the Centos mirrors by way of the Centos NFV SIG.

From an updated Centos:

To install VPP on a Centos host, install the Centos NFV SIG yum repo and then 
install vpp as follows.

yum install centos-release-fdio

yum install vpp*

--Tom
--
Thomas F Herbert
NFV and Fast Data Planes
Networking Group Office of the CTO
Red Hat


--
Thomas F Herbert
NFV and Fast Data Planes
Networking Group Office of the CTO
Red Hat
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10494): https://lists.fd.io/g/vpp-dev/message/10494
Mute This Topic: https://lists.fd.io/mt/25646023/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] control-plane restarts, etc.

2018-08-28 Thread Jerome Tollet via Lists.Fd.Io
Hi Mike,
In such a situation, the CP has to run a state reconciliation process. You can find 
examples in networking-vpp (Python), Ligato (Go) or Honeycomb (Java).
Jerome

De :  au nom de "Bly, Mike" 
Date : mardi 28 août 2018 à 21:51
À : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev] control-plane restarts, etc.

Hello,

Can someone tell me the current VPP stance on warm restarts, configuration 
replays, etc? If the control plane is restarted for whatever reason, what is 
the expectation regarding replaying of configuration down to VPP and/or 
auditing of the active VPP configuration on a live system?

Regards,
Mike
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10328): https://lists.fd.io/g/vpp-dev/message/10328
Mute This Topic: https://lists.fd.io/mt/25067741/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] FW: [openstack-dev] networking-vpp 18.07 for VPP 18.07 is now available

2018-08-20 Thread Jerome Tollet via Lists.Fd.Io
Hello,
People in this list may be interested in the message below.
Jerome

De : Naveen Joy 
Répondre à : "OpenStack Development Mailing List (not for usage questions)" 

Date : samedi 18 août 2018 à 03:29
À : "openstack-...@lists.openstack.org" 
Objet : [openstack-dev] networking-vpp 18.07 for VPP 18.07 is now available

Hello Everyone,

In conjunction with the release of VPP 18.07, we'd like to invite you all to 
try out networking-vpp 18.07 for VPP 18.07.
As many of you may already know, VPP is a fast userspace forwarder based on the 
DPDK toolkit, and uses vector packet processing algorithms to minimize the CPU 
time spent on each packet to maximize throughput.

Networking-vpp is a ML2 mechanism driver that controls VPP on your control and 
compute hosts to provide fast L2 forwarding under Neutron.

This version has the below additional enhancements, along with supporting the 
latest VPP 18.07 APIs:
- Network Trunking
- Tap-as-a-Service (TaaS)

Both the above features are experimental in this release.
Along with this, there has been the usual upkeep as Neutron versions and VPP 
APIs change, plus bug fixes and code and test improvements.

The README [1] explains more about the above features and how you can try out 
VPP using devstack:
the devstack plugin will deploy the mechanism driver and VPP itself and should 
give you a working system with a minimum of hassle.

We will be continuing our development between now and VPP's 18.10 release. 
There are several features we're planning to work on and we will keep you 
updated through our bugs list [2].
We welcome anyone who would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, every other Monday (the 
next one is due this Monday at 0900 PST = 1600 GMT.
--
Ian & Naveen

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10219): https://lists.fd.io/g/vpp-dev/message/10219
Mute This Topic: https://lists.fd.io/mt/24818986/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Fw: VPP on AWS

2018-08-06 Thread Jerome Tollet via Lists.Fd.Io
Hi Sandeep,
Yes, you’ll get optimal performance with it.
Jerome


Sent from my iPhone

Le 6 août 2018 à 10:40, Sandeep Bajaj 
mailto:sandeep_ba...@yahoo.com>> a écrit :

Thanks Jerome... I ended up doing that and got it to work with igb_uio. Will I 
get the ena performance with this driver? VPP does seem to have support for 
ena... How can I use ena directly?

I tried the following :

insmod lib/modules/4.4.0-1061-aws/kernel/drivers/net/ethernet/amazon/ena/ena.ko

insmod: ERROR: could not insert module 
lib/modules/4.4.0-1061-aws/kernel/drivers/net/ethernet/amazon/ena/ena.ko: File 
exists

./usertools/dpdk-devbind.py --status

Network devices using DPDK-compatible driver



:00:06.0 'Device ec20' drv=igb_uio unused=ena


Thanks for all ur help!
Cheers
Sandeep



On Monday, August 6, 2018, 4:32:51 AM PDT, Jerome Tollet via Lists.Fd.Io 
mailto:jtollet=cisco@lists.fd.io>> wrote:



Hi Sandeep,

You can download latest dpdk and compile the missing driver from there.

Jerome



De : mailto:vpp-dev@lists.fd.io>> au nom de "Sandeep Bajaj 
via Lists.Fd.Io" 
mailto:sandeep_bajaj=yahoo@lists.fd.io>>
Répondre à : "sandeep_ba...@yahoo.com<mailto:sandeep_ba...@yahoo.com>" 
mailto:sandeep_ba...@yahoo.com>>
Date : lundi 6 août 2018 à 03:34
À : "vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
mailto:vpp-dev@lists.fd.io>>
Cc : "vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>" 
mailto:vpp-dev@lists.fd.io>>
Objet : [vpp-dev] Fw: VPP on AWS







Hi



I am trying to spin up VPP on AWS (c5.large with an ENA adapter), but I am seeing 
this error (missing VFIO/UIO driver). I have installed VPP 18.04 stable on 
Ubuntu Xenial.



vpp.service - vector packet processing engine

   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)

   Active: active (running) since Sun 2018-08-05 06:39:34 UTC; 5s ago

  Process: 32690 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)

  Process: 32699 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=1/FAILURE)

  Process: 32696 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)

 Main PID: 32702 (vpp_main)

Tasks: 4

   Memory: 31.5M

  CPU: 2.775s

   CGroup: /system.slice/vpp.service

   └─32702 /usr/bin/vpp -c /etc/vpp/startup.conf



Aug 05 06:39:34  vpp[32702]: /usr/bin/vpp[32702]: dpdk_config:1275: EAL init 
args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp --master-lcore 
0 --socket-mem 2048

Aug 05 06:39:34  /usr/bin/vpp[32702]: vlib_pci_bind_to_uio: Skipping PCI device 
:00:06.0: missing kernel VFIO or UIO driver

Aug 05 06:39:34  /usr/bin/vpp[32702]: dpdk_config:1275: EAL init args: -c 1 -n 
4 --huge-dir /run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem 
2048

Aug 05 06:39:34 vpp[32702]: EAL: No free hugepages reported in 
hugepages-1048576kB

Aug 05 06:39:35  vpp[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vpp[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vnet[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vnet[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vnet[32702]: dpdk_ipsec_process:1018: not enough DPDK crypto 
resources, default to OpenSSL

Aug 05 06:39:35  vnet[32702]: dpdk_lib_init:230: DPDK drivers found no ports...





When I try to do modprobe for vfio, I get this FATAL error :

modprobe vfio-pci

modprobe: FATAL: Module vfio-pci not found in directory 
/lib/modules/4.4.0-1061-aws



Any ideas on how to get the missing driver, so that I can have interfaces 
controlled by VPP ?



thanks

Sandeep

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10047): https://lists.fd.io/g/vpp-dev/message/10047
Mute This Topic: https://lists.fd.io/mt/24205315/882711
Group Owner: vpp-dev+ow...@lists.fd.io<mailto:ow...@lists.fd.io>
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub 
[sandeep_ba...@yahoo.com<mailto:sandeep_ba...@yahoo.com>]
-=-=-=-=-=-=-=-=-=-=-=-
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10048): https://lists.fd.io/g/vpp-dev/message/10048
Mute This Topic: https://lists.fd.io/mt/24205315/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Fw: VPP on AWS

2018-08-06 Thread Jerome Tollet via Lists.Fd.Io
Hi Sandeep,
You can download latest dpdk and compile the missing driver from there.
Jerome
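
For reference, one way to do that on such an instance — a rough sketch only; the DPDK version, paths and PCI address are assumptions taken from the log above, and the kernel headers for the running AWS kernel must be installed first:

curl -O https://fast.dpdk.org/rel/dpdk-18.05.tar.xz
tar xf dpdk-18.05.tar.xz && cd dpdk-18.05
make config T=x86_64-native-linuxapp-gcc
make -j4                         # builds build/kmod/igb_uio.ko among others
sudo modprobe uio
sudo insmod build/kmod/igb_uio.ko
sudo ./usertools/dpdk-devbind.py --bind=igb_uio 0000:00:06.0
sudo systemctl restart vpp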

De :  au nom de "Sandeep Bajaj via Lists.Fd.Io" 

Répondre à : "sandeep_ba...@yahoo.com" 
Date : lundi 6 août 2018 à 03:34
À : "vpp-dev@lists.fd.io" 
Cc : "vpp-dev@lists.fd.io" 
Objet : [vpp-dev] Fw: VPP on AWS



Hi

I am trying to spin up VPP on AWS (c5.large with an ENA adapter), but I am seeing 
this error (missing VFIO/UIO driver). I have installed VPP 18.04 stable on 
Ubuntu Xenial.

vpp.service - vector packet processing engine

   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: 
enabled)

   Active: active (running) since Sun 2018-08-05 06:39:34 UTC; 5s ago

  Process: 32690 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)

  Process: 32699 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited, 
status=1/FAILURE)

  Process: 32696 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm 
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)

 Main PID: 32702 (vpp_main)

Tasks: 4

   Memory: 31.5M

  CPU: 2.775s

   CGroup: /system.slice/vpp.service

   └─32702 /usr/bin/vpp -c /etc/vpp/startup.conf



Aug 05 06:39:34  vpp[32702]: /usr/bin/vpp[32702]: dpdk_config:1275: EAL init 
args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp --master-lcore 
0 --socket-mem 2048

Aug 05 06:39:34  /usr/bin/vpp[32702]: vlib_pci_bind_to_uio: Skipping PCI device 
:00:06.0: missing kernel VFIO or UIO driver

Aug 05 06:39:34  /usr/bin/vpp[32702]: dpdk_config:1275: EAL init args: -c 1 -n 
4 --huge-dir /run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem 
2048

Aug 05 06:39:34 vpp[32702]: EAL: No free hugepages reported in 
hugepages-1048576kB

Aug 05 06:39:35  vpp[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vpp[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vnet[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vnet[32702]: EAL:   Invalid NUMA socket, default to 0

Aug 05 06:39:35  vnet[32702]: dpdk_ipsec_process:1018: not enough DPDK crypto 
resources, default to OpenSSL

Aug 05 06:39:35  vnet[32702]: dpdk_lib_init:230: DPDK drivers found no ports...


When I try to do modprobe for vfio, I get this FATAL error :

modprobe vfio-pci

modprobe: FATAL: Module vfio-pci not found in directory 
/lib/modules/4.4.0-1061-aws

Any ideas on how to get the missing driver, so that I can have interfaces 
controlled by VPP ?

thanks
Sandeep
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#10047): https://lists.fd.io/g/vpp-dev/message/10047
Mute This Topic: https://lists.fd.io/mt/24205315/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-