Re: [vpp-dev] Feedback on a tool: vppcfg

2022-04-04 Thread Luca Muscariello via lists.fd.io
Hi Pim

This is very helpful.
I'll give it a try ASAP.

Dave's proposal to incubate this project in FD.io as a subproject makes sense,
and I would be interested in following up on that.


On Sat, Apr 2, 2022 at 5:18 PM Pim van Pelt  wrote:

> Hoi colleagues,
>
> I know there exist several smaller and larger scale VPP configuration
> harnesses out there, some more complex and feature complete than others. I
> wanted to share my work on an approach based on a YAML configuration with
> strict syntax and semantic validation, and a path planner that brings the
> dataplane from any configuration state safely to any other
> configuration state, as defined by these YAML files.
>
> A bit of a storyline on the validator:
> https://ipng.ch/s/articles/2022/03/27/vppcfg-1.html
> A bit of background on the DAG path planner:
> https://ipng.ch/s/articles/2022/04/02/vppcfg-2.html
> Code with tests on https://github.com/pimvanpelt/vppcfg
>

The architecture looks similar to netplan + systemd-networkd.
I have not checked the code in detail yet, so I may be wrong.

Thanks for sharing
Luca




>
> The config and planner supports interfaces, bondethernets, vxlan tunnels,
> l2xc, bridgedomains and, quelle surprise, linux-cp configurations of all
> sorts. If anybody feels like giving it a spin, I'd certainly appreciate
> feedback and if you can manage to create two configuration states that the
> planner cannot reconcile, I'd love to hear about those too.
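As a purely illustrative sketch of the kind of YAML target state this consumes (the key names below are assumptions; the authoritative schema is described in the articles and repository linked above):

```
# Hypothetical vppcfg-style target state -- field names are assumptions, not the real schema.
interfaces:
  GigabitEthernet3/0/0:
    description: uplink
    mtu: 1500
    addresses: [ 192.0.2.1/24 ]
  GigabitEthernet3/0/1:
    mtu: 1500

bridgedomains:
  bd10:
    mtu: 1500
    interfaces: [ GigabitEthernet3/0/1 ]
```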
>
> For now, the path planner works by reading the API configuration state
> exactly once (at startup), and then it figures out the CLI calls to print
> without needing to consult VPP again. This is super useful as it’s a
> non-intrusive way to inspect the changes before applying them, and it’s a
> property I’d like to carry forward. However, I don’t necessarily think that
> emitting the CLI statements is the best user experience, it’s more for the
> purposes of analysis that they can be useful. What I really want to do is
> emit API calls after the plan is created and reviewed/approved, directly
> reprogramming the VPP dataplane. However, the VPP API set needed to do this
> is not 100% baked yet. For example, I observed crashes when tinkering with
> BVIs and Loopbacks (see my thread from last week, thanks for the response
> Neale), and fixed a few obvious errors in the Linux CP API (gerrit) but
> there are still a few more issues to work through before I can set the next
> step with vppcfg.
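To make the CLI-versus-API distinction concrete, a minimal sketch (not vppcfg's actual code; the interface index and flag value are illustrative) of reprogramming the dataplane through the Python binding could look like this:

```python
#!/usr/bin/env python3
# Minimal sketch: bring an interface up through the binary API instead of
# printing the equivalent CLI statement ("set interface state <ifname> up").
# Assumes VPP is installed with its .api.json files in the default location.
from vpp_papi import VPPApiClient

vpp = VPPApiClient()              # picks up the installed API definitions
vpp.connect("vppcfg-sketch")      # arbitrary client name

# flags=1 corresponds to IF_STATUS_API_FLAG_ADMIN_UP; sw_if_index is illustrative.
reply = vpp.api.sw_interface_set_flags(sw_if_index=1, flags=1)
print("retval:", reply.retval)    # 0 on success

vpp.disconnect()
```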
>




>
> If this tool proves to be useful to others, I'm happy to upstream it to
> extras/ somewhere.
>
> --
> Pim van Pelt 
> PBVP1-RIPE - http://www.ipng.nl/
>
> 
>
>




Re: [vpp-dev] Out of tree plugins ergonomics

2022-03-15 Thread Luca Muscariello via lists.fd.io
On Tue, Mar 15, 2022 at 1:36 PM Andrew Yourtchenko 
wrote:

> hi all,
>
> Is there anyone doing the truly “out of tree“ plugins ? I would like to
> take a look a bit at the ergonomics of this process, and having another
> brain(s) to discuss with would be useful.
>

This is the approach taken by the hicn plugin, which lives out of tree and
depends on a VPP stable release as distributed on Packagecloud:

https://gerrit.fd.io/r/gitweb?p=hicn.git;a=tree;f=hicn-plugin;hb=HEAD
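As a rough sketch of that workflow on Debian/Ubuntu (the repository script, package names, and build steps below are from memory and may differ per release, so treat them as assumptions):

```
# Add the fd.io release repository and install the VPP development packages
curl -s https://packagecloud.io/install/repositories/fdio/release/script.deb.sh | sudo bash
sudo apt-get install vpp vpp-plugin-core vpp-dev libvppinfra-dev

# The out-of-tree plugin then builds against the installed headers/CMake config
git clone https://github.com/FDio/hicn && cd hicn/hicn-plugin
mkdir -p build && cd build && cmake .. && make
```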



>
> --a
> 
>
>




Re: [tsc] [vpp-dev] Scapy license in VPP

2021-02-01 Thread Luca Muscariello
On Fri, Jan 29, 2021 at 6:09 PM Vratko Polak -X (vrpolak - PANTHEON
TECHNOLOGIES at Cisco) via lists.fd.io 
wrote:

> > Why do you say that vpp_papi need to be dual licensed?
>
>
>
> I think e-mail reply would be long,
>
> and I will need to address comments anyway,
>
> so I respond via a Gerrit change [1].
>

Hi Vratko

Combining Apache 2.0 and GPLv2 projects is a difficult task
that several large projects have faced in the past and solved
in different ways.

I did not follow how the discussion on this list unfolded regarding the fd.io
requirements that led to dual licensing, so I may have limited visibility into
the long-term goal.
You may have discussed this already, so apologies if I am late;
I only followed the beginning of the discussion some time ago.

Nevertheless, you are right that this is a topic for lawyers, but:
- some lawyers think that Apache 2.0 and GPLv2 are compatible
- some lawyers do not think that Apache 2.0 and GPLv2 are compatible
- the issue has not been tested in court

Moreover, the authors of the two licenses, the Apache Software Foundation
and the Free Software Foundation, do not agree with each other.

I'm not a lawyer, but I have found myself in an intricate situation before,
and I have had the FSF position stated to me very clearly by Eben Moglen in person.

According to the FSF's viewpoint, GPLv2 would contaminate the entire software,
including PAPI.
LLVM, for instance, has not taken that path and has instead opted for an
Apache 2.0 license across the code they develop, with the addition of
exceptions for the binary distribution.

You can find the text of the exception at the end of the file below right
after the end
of the Apache 2.0 license text.

https://releases.llvm.org/10.0.0/LICENSE.TXT

which I report below for people's convenience

 LLVM Exceptions to the Apache 2.0 License 

As an exception, if, as a result of your compiling your source code, portions
of this Software are embedded into an Object form of such source code, you
may redistribute such embedded portions in such Object form without complying
with the conditions of Sections 4(a), 4(b) and 4(d) of the License.

In addition, if you combine or link compiled forms of this Software with
software that is licensed under the GPLv2 ("Combined Software") and if a
court of competent jurisdiction determines that the patent provision (Section
3), the indemnity provision (Section 9) or other Section of the License
conflicts with the conditions of the GPLv2, you may retroactively and
prospectively choose to deem waived or otherwise exclude such Section(s) of
the License, but only in their entirety and only with respect to the Combined
Software.

---



Luca




>
>
> Vratko.
>
>
>
> [1] https://gerrit.fd.io/r/c/vpp/+/31025
>
>
>
> *From:* Paul Vinciguerra 
> *Sent:* Friday, 2021-January-29 15:29
> *To:* Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) <
> vrpo...@cisco.com>
> *Cc:* t...@lists.fd.io; Kinsella, Ray ;
> vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] Scapy license in VPP
>
>
>
> Why do you say that vpp_papi need to be dual licensed?
>
>
>
> On Thu, Jan 28, 2021 at 12:43 PM Vratko Polak -X (vrpolak - PANTHEON
> TECHNOLOGIES at Cisco) via lists.fd.io 
> wrote:
>
> First draft created [0] for the change that will switch
>
> licenses for Python files used together with Scapy.
>
>
>
> For some files, I was not sure whether they are used together with Scapy.
>
> One big detail is that vpp_papi needs to have dual license,
>
> as test framework integrates with it (and with scapy).
>
> If I understand the licensing logic correctly,
>
> CSIT tests can still choose to use vpp_papi under Apache license option.
>
> But we may need to discuss that with lawyers.
>
>
>
> Ray, you may need to upgrade your contributor-finding shell pipeline
>
> to cover all files I added the new license into.
>
>
>
> Vratko.
>
>
>
> [0] https://gerrit.fd.io/r/c/vpp/+/30998
>
>
>
> 
>
>




Re: [vpp-dev] Install libmemif in vpp 19.08.01

2020-03-14 Thread Luca Muscariello
Hi,

have you tried the following on Ubuntu 18?

sudo apt-get install libmemif-dev

Luca
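If the package is available for your release, a quick way to verify what it ships and to link against it (the file layout and library name below are assumptions; verify locally):

```
# Inspect what the package installs (headers and shared library)
dpkg -L libmemif-dev | grep -E '\.(h|so)'

# Link an application against it (header and library names assumed)
gcc -o memif_app memif_app.c -lmemif
```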



On Sat, Mar 14, 2020 at 5:45 AM Himanshu Rakshit  wrote:

> Hi All,
>
> As I understand libmemif is not a part of the vpp packages any more. What
> is the way to install libmemif compatible with vpp 19.08.1
>
> Thanks,
> Himanshu
> 
>


Re: [vpp-dev] Trying my luck with BGP peering

2020-03-09 Thread Luca Muscariello
I've never tested this on 18.10. We just made it work for 19.08 because we
needed routing.
The plugin predates our own work. You must have a good reason to stay on
18.10; your choice.


On Mon, Mar 9, 2020 at 4:47 PM Satya Murthy 
wrote:

> Hi Luca,
>
> Thanks a lot for this info.
> Really appreciate timely inputs on this.
>
> We are currently on fdio 1810 version. Will we be able check this plugins
> to this version?  (or) we have to move to 20.01 ?
> Please let us know.
>
> --
> Thanks & Regards,
> Murthy 
>


Re: [vpp-dev] Trying my luck with BGP peering

2020-03-09 Thread Luca Muscariello
Satya,

Some more info about the router plugin: I just realised that the current
extras/router-plugin does not build for 20.01 because of the ip-neighbor
updates in 20.01.
I'm going to push a short example (below) using BGP in FRR.

diff --git a/docs/source/control.md b/docs/source/control.md
index b7b5ebc..4464abd 100644
--- a/docs/source/control.md
+++ b/docs/source/control.md
@@ -370,3 +370,45 @@ add  "no ipv6 nd suppress-ra" to the first
configuration part of the /etc/frr/frr
 After the following configuration, the traffic over tap interface can be
observed
 via `tcpdump- i vpp1`. The neighborhood and route can be seen with the
 `show ipv6 ospf6 neighbor/route` command.
+
+## Configure VPP and FRRouting for BGP
+This document describes how to configure the VPP with hicn_router plugin
and FRR to enable the BGP protocol. The VPP and FRR
+are configured in a docker file.
+
+### DPDK configuration on host machine:
+```
+- Install and configure dpdk
+- make install T=x86_64-native-linux-gcc && cd x86_64-native-linux-gcc
&& sudo make install
+- modprobe uio
+- modprobe uio_pci_generic
+- dpdk-devbind --status
+- the PCIe number of the desired device can be observed ("xxx")
+- sudo dpdk-devbind -b uio_pci_generic "xxx"
+```
+### VPP configuration:
+
+```
+- Run and configure the VPP (hICN router plugin is required to be
installed in VPP)
+- set int state TenGigabitEtherneta/0/0 up
+- set int ip address TenGigabitEtherneta/0/0 10.0.10.1/24
+- enable tap-inject  # This creates the taps by router plugin
+- show tap-inject # This shows the created taps
+
+- Setup the tap interface
+- ip addr add 10.0.10.1/24 dev vpp0
+- ip link set dev vpp0 up
+```
+### FRR configuration:
+Assume there are two nodes with 1234,5678 AS numbers. This is the
configuration of node A with AS number 1234
+
+ (1234)A(2001::1) ==
(2001::2)B(5678)
+```
+   - /usr/lib/frr/frrinit.sh start &
+   - vtysh
+   - configure terminal
+   - router bgp 1234
+   - neighbor 2001::2 remote-as 5678
+   - address-family ipv6 unicast
+   - neighbor 2001::2 activate
+   - exit-address-family
+```
diff --git a/extras/router-plugin/rtinject/tap_inject_netlink.c
b/extras/router-plugin/rtinject/tap_inject_netlink.c
index a221e8e..f2b561e 100644
--- a/extras/router-plugin/rtinject/tap_inject_netlink.c
+++ b/extras/router-plugin/rtinject/tap_inject_netlink.c
@@ -16,13 +16,14 @@

 #include "../devices/rtnetlink/netns.h"
 #include 
-#include 
+#include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
+#include 

 #include "tap_inject.h"


This will break 19.08 but will support 20.01, as in our project we only
support the latest stable release (there are tags on older releases, though).
The BGP example may be useful for your use case. Consider it very
experimental, but if you make it work for your use case, please share your
findings and working configurations on hicn-...@lists.fd.io.

The OSPF6 example is here:
https://hicn.readthedocs.io/en/latest/control.html#routing-plugin-for-vpp-and-frrouting-for-ospf6
but it still has to be updated to also cover IPv4.
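For completeness, the matching configuration for node B (AS 5678), under the same assumptions as the node A example in the diff above, would look roughly like this:

```
   - /usr/lib/frr/frrinit.sh start &
   - vtysh
   - configure terminal
   - router bgp 5678
   - neighbor 2001::1 remote-as 1234
   - address-family ipv6 unicast
   - neighbor 2001::1 activate
   - exit-address-family
```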

On Mon, Mar 9, 2020 at 12:13 PM Luca Muscariello 
wrote:

> FWIW, we have cloned the router plugin in here in our own project
> https://github.com/FDio/hicn/tree/master/extras/router-plugin
> tested in ubuntu 18LTS and FRR with BGP and OSPF.
> BGP works for IPv4 and IPv6. OSPF works fine for IPv4, while IPv6 does not
> work because of a VPP issue with ND and multicast that we were unable to fix.
> The ND issue may just be related to the VMware ESXi environment we were using
> and may not show up on other platforms.
>
> I'll share some more info in the doc later on how we managed to configure
> and use the plugin with FRR.
>
> Luca
>
>
> On Mon, Mar 9, 2020 at 11:29 AM Satya Murthy 
> wrote:
>
>> Hi ,
>>
>> I think, this topic has been discussed in few of the earlier questions,
>> but still I could not find a one that gave a workable solution in totality.
>> We are trying to write a BGP application which hosts BGP peering
>> sessions, using VPP as a dataplane entity.
>>
>> We tried following few options with issues mentioned below. I am also
>> attaching an image that we are trying to achieve.
>>
>> 1) Tried to clone VPP sandbox code for vpp router plugin. But, vppsb repo
>> does not seem to be available in the following path anymore.
>> https://gerrit.fd.io/r/vppsb
>> Is there a place where I can get this code ?
>> Or Is this obsoleted one ?
>>
>> 2) If vppsb is obsoleted, is there an alternative that works well for
>> this.
>> We tried punting approach by doing
>> a. set punt tcp
>> b. ip punt redirect add rx Interface/7/0.100 via 1.1.1.1 tapcli-0
>

Re: [vpp-dev] Trying my luck with BGP peering

2020-03-09 Thread Luca Muscariello
FWIW, we have cloned the router plugin in here in our own project
https://github.com/FDio/hicn/tree/master/extras/router-plugin
tested in ubuntu 18LTS and FRR with BGP and OSPF.
BGP works for IPv4 and IPv6. OSPF works fine for IPv4, while IPv6 does not
work because of a VPP issue with ND and multicast that we were unable to fix.
The ND issue may just be related to the VMware ESXi environment we were using
and may not show up on other platforms.

I'll share some more info in the doc later on how we managed to configure
and use the plugin with FRR.

Luca


On Mon, Mar 9, 2020 at 11:29 AM Satya Murthy 
wrote:

> Hi ,
>
> I think, this topic has been discussed in few of the earlier questions,
> but still I could not find a one that gave a workable solution in totality.
> We are trying to write a BGP application which hosts BGP peering sessions,
> using VPP as a dataplane entity.
>
> We tried following few options with issues mentioned below. I am also
> attaching an image that we are trying to achieve.
>
> 1) Tried to clone VPP sandbox code for vpp router plugin. But, vppsb repo
> does not seem to be available in the following path anymore.
> https://gerrit.fd.io/r/vppsb
> Is there a place where I can get this code ?
> Or Is this obsoleted one ?
>
> 2) If vppsb is obsoleted, is there an alternative that works well for this.
> We tried punting approach by doing
> a. set punt tcp
> b. ip punt redirect add rx Interface/7/0.100 via 1.1.1.1 tapcli-0
> However, here, we are seeing an ARP being sent from vpp-veth-ifc to
> host-veth-interface and it is not able to resolve the ARP.
> The redirect is getting triggered from VPP, but it is not reaching the
> host due to the ARP issue.
> Is there anything that we need to do for this ?
>
> Also, in this approach, the reverse path from vpp-host to the external
> box, how will it work ?
> The punt redirect config that we added, will it work for reverse path as
> well ?
>
> Please help us here as we are kind of stuck with what approach we need to
> take ?
> ( we want to avoid the VPP-TCP-stack for the time being, as it will be
> more effort to integrate our BGP app with VPP-TCP-stack,i.e VCL framework ).
> --
> Thanks & Regards,
> Murthy 
>


Re: [vpp-dev] RFC: FD.io Summit (Userspace), September, Bordeaux France

2020-02-24 Thread Luca Muscariello
A few people from the hicn/cicn projects may be able to attend in Bordeaux.

Luca

On Mon, Feb 24, 2020 at 7:48 PM Honnappa Nagarahalli <
honnappa.nagaraha...@arm.com> wrote:

> 
>
> >
> > Hi folks,
> >
> > A 2020 FD.io event is something that has been discussed a number of times
> > recently at the FD.io TSC.
> > With the possibility of co-locating such an event with DPDK Userspace, in
> > Bordeaux, in September.
> >
> > Clearly, we are incredibly eager to make sure that such an event would
> be a
> > success.
> > That FD.io users and contributors would attend, and get value out of the
> > event.
> > (it is a ton of work for those involved - we want people to benefit)
> >
> > The likelihood is that this would be the only FD.io event of this kind
> in 2020.
> >
> > So instead of speculating, it is better to ask a direct question to the
> > community and ask for honest feedback.
> > How does the community feel about such an event at DPDK Userspace:-
> >
> > * Do they value co-locating with DPDK Userspace?
> > * Are they likely to attend?
> IMO, this is valuable and would definitely be helpful to solve problems
> across the aisle
> I would attend.
>
> >
> > Thanks,
> >
> > Ray K
> > FD.io TSC
>
> 
>


[hicn-dev] [vpp-dev] VPP Router Plugin or alternatives

2019-07-24 Thread Luca Muscariello
-- Forwarded message -
From: Luca Muscariello 
Date: Wed, Jul 24, 2019 at 12:31 PM
Subject: Re: [hicn-dev] [vpp-dev] VPP Router Plugin or alternatives
To: 


The patch has been merged and packages are available on packagecloud as
hicn-extra-plugin-19.04.
The code is available here:
https://github.com/FDio/hicn/tree/master/utils/extras

Luca


Re: [vpp-dev] libmemif non packaged in deb/rpm anymore from 19.04 [regression?]

2019-07-09 Thread Luca Muscariello via Lists.Fd.Io


“ libmemif doesn’t have anything with VPP, it is completely independent library 
and packaging it
with VPP is wrong.

So somebody will need to contribute standalone packaging for libmemif…..”


This is what you have just written.
The standalone packaging was done in the patch https://gerrit.fd.io/r/#/c/16436/
which you asked Mauro to write.

BTW, we’ll host the libmemif packaging in our project.
Thanks for the cooperation.


From: Damjan Marion 
Date: Tuesday, July 9, 2019 at 10:30 PM
To: Luca Muscariello 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] libmemif non packaged in deb/rpm anymore from 19.04 
[regression?]


On 9 Jul 2019, at 22:23, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Let me try again.

This patch was merged to get packaged libmemif in 19.01

https://gerrit.fd.io/r/#/c/16436/

It has disappeared with 19.04.
Is it still available somewhere? Has it been removed, or has it moved into
another package?
If it is gone: why, and where can I trace the decision?

Dear Luca,

Glad to see that you are back on track and we are not discussing contents of 
vpp-dev and vpp-lib anymore. No idea what happened, probably it is not 
intentional. I suggest that you contact Mauro who is author of the original 
patch and ask him to check if build mechanics is ok. If it is ,then it may be 
some issue with artefact publishing...

--
Damjan




Re: [vpp-dev] libmemif non packaged in deb/rpm anymore from 19.04 [regression?]

2019-07-09 Thread Luca Muscariello via Lists.Fd.Io
Let me try again.

This patch was merged to get packaged libmemif in 19.01

https://gerrit.fd.io/r/#/c/16436/

It has disappeared with 19.04.
Is it still available somewhere? Has it been removed, or has it moved into
another package?
If it is gone: why, and where can I trace the decision?




From:  on behalf of "Damjan Marion via Lists.Fd.Io" 

Reply-To: "dmar...@me.com" 
Date: Tuesday, July 9, 2019 at 10:13 PM
To: Luca Muscariello 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] libmemif non packaged in deb/rpm anymore from 19.04 
[regression?]



> On 9 Jul 2019, at 21:26, Luca Muscariello via Lists.Fd.Io 
>  wrote:
>
> Fine with me and thanks for letting me know, now.
>
> However, it is you who told me in December to make the patch to get
> the package done. Which we did, and so the patch was merged for the 19.01 release.
>
> So, now you wake up with this statement.
> Next time, be more respectful of everybody else's time and do not do that again.


Dear Luca,

Next time you are making statements like this, please make sure you are not 
confused. In your original email you are talking about libmemif missing in 
vpp-lib and vpp-dev and then you point to  the patch which doesn't have 
anything with vpp-lib and vpp-dev packaging. It actually adds new package 
called memif:

add_vpp_packaging(
  NAME "memif"
  VENDOR "fd.io"
  DESCRIPTION "Shared Memory Interface"
)


Re: [vpp-dev] libmemif non packaged in deb/rpm anymore from 19.04 [regression?]

2019-07-09 Thread Luca Muscariello via Lists.Fd.Io
Fine with me and thanks for letting me know, now.

However, it is you who told me in December to make the patch to get
the package done. Which we did, and so the patch was merged for the 19.01 release.

So, now you wake up with this statement.
Next time, be more respectful of everybody else's time and do not do that again.




On 7/9/19, 7:36 PM, "Damjan Marion"  wrote:


> On 9 Jul 2019, at 15:36, Luca Muscariello via Lists.Fd.Io 
 wrote:
> 
>  
> Hi,
>  
> libmemif was made available in the binary distribution as deb/rpm in
> vpp-lib and vpp-dev starting from 19.01 thanks to a patch that Mauro
> submitted in Dec.
>  
> https://gerrit.fd.io/r/#/c/16436/
>  
> From 19.04 libmemif is not available anymore.
>  
> In our project we rely quite a lot on libmemif and that removal
> is currently a pain for us.
>  
> I did not see any message in the list with a motivation for that.
> Is there any work needed to get that component back in packages?
>  
> I’m hoping this can go back into 19.08.
> Please do let us know how we can help to get that back into vpp-dev/lib.

Luca,

libmemif doesn’t have anything with VPP, it is completely independent 
library and packaging it 
with VPP is wrong.

So somebody will need to contribute standalone packaging for libmemif…..






Re: [vpp-dev] VPP Router Plugin or alternatives

2019-07-09 Thread Luca Muscariello via Lists.Fd.Io
(cross-posting to hicn-dev)

By chance we were working on resurrecting the routing 
plugin in the hicn project and Masoud has just submitted a patch

https://gerrit.fd.io/r/#/c/20535/

This builds for VPP 19.04. Tested with FRR, OSPF only.

If someone is interested in testing it further, please cherry-pick it,
share your results, and cross-post to hicn-dev.

We do not intend to design a new routing plugin as of now,
so this is it, but I do understand that a clean-slate 
design would be desirable.




On 7/9/19, 16:28, "vpp-dev@lists.fd.io on behalf of Ray Kinsella" 
 wrote:

It is accurate to say it is bit-rotting, it not accurate to say it is
orphaned.

> 
> I believe that, but obviously nobody is interested to do so. Neither 
original authors or users of that code.

Well that is not entirely accurate - I think that it is more accurate to
say the authors are tired of fixing it and are thinking about a better
way to fix this problem long term.

It's something we are looking at, at the moment and are open to
collaboration on it.

>>>
>>> So, I don't really have a problem with it not being in VPP proper, so 
long as it is maintained.
>>
>> So, you expect that somebody maintains it for you? why? what is his 
incentive to do so? People are typically maintaining some specific feature 
because they are interested in using it and they see a value in open source 
collaboration.
>>
>> I was under the impression that keeping unmodified code "in sync" would 
not be that difficult, and of great value to the larger community of folks 
using VPP.
> 
> If it is not so difficult, please contribute.

To be fair - we do see alot usage of the vRouter plugin - despite all it
shortcomings, so we do have a desire & motivation to fix it.

And it is also clear from the number of questions we get on it - that
there is a strong desire from the community for functionality of this
type - it just hasn't turned into action, so far.

The question is do we invest time in fixing a design we know is
inherently broken (again and again) or do we step back and take a breath.

> 
>>
>> The value of VPP is best measured by the number of folks using it. I 
believe a non-trivial number of folks have basically stumbled across VPP from 
the links shared from the FRR project (alternate forwarding planes).
>>
>> If you don't want those people using VPP, I don't understand your 
rationale.
> 
> I will be happy that more people is using VPP and contribute back good 
quality features for everyone benefit.
> I believe good integration between VPP and routing software like FRR is 
something we are really missing, but outdated sandbox experiment code
> is not solution for that.

Agreed, completely - trouble is I have never seen good integration
between FRR and anything but Linux.

>> It think best way to make feature maintained is to do the work right and 
convince others contributors that this is not throw-over-the-fence code.
>>
>> I was not the author of netlink or router plugins. I never threw that 
code anywhere. It is published in the vppsb, but has otherwise been left to rot.
> 
> Yes, looks like original authors are not interested to keep it in sync 
and improve and also there is nobody to step in.
> This is normal in open-source, Look at MAINTAINERS file in Linux kernel 
source tree and you will see how much things are declared as orphaned>
>>
>> The VPP coding practice requires a lot of investment up front to get to 
the meat of the matter. 
>> For what I am looking for in the immediate short term (getting the 
plugins to compile), that is a huge up-front cost for a very low pay-off.
>>  
>>
>> You will be happy as feature is maintained, we will be happy as we will 
have one valuable member of the community.
>>
>> IMHO, the VPP project's documentation (wiki and otherwise) is inadequate 
to allow new participants with lots of experience doing maintenance-level or 
porting-level modification of C code, to pick up the plug-ins and get the bugs 
fixed.
>>  
>>
>> Not being able to write code is not excuse, as you can always find 
somebody to do it for you.
>>
>> I can write code, that is not the issue at all.
>>  
>>
>>>
>>> It serves a very specific purpose, and does so adequately, if/when it 
compiles.
>>>
>>> It has not been maintained to keep parity with VPP, and I think 
everyone asking about it, is interested in the minimum amount of effort to keep 
it viable in the short term.
>>
>> Who is "everyone" and why do you think think that any vpp contributor or 
committer should work on feature you are interested in instead of working on 
the feature that person is interested in.
>>
>> I was under the impression that there was a core team coordinating the 
efforts and 

[vpp-dev] libmemif non packaged in deb/rpm anymore from 19.04 [regression?]

2019-07-09 Thread Luca Muscariello via Lists.Fd.Io

Hi,

libmemif was made available in the binary distribution as deb/rpm in
vpp-lib and vpp-dev starting from 19.01 thanks to a patch that Mauro
submitted in Dec.

https://gerrit.fd.io/r/#/c/16436/

From 19.04 libmemif is not available anymore.

In our project we rely quite a lot on libmemif and that removal
is currently a pain for us.

I did not see any message in the list with a motivation for that.
Is there any work needed to get that component back in packages?

I’m hoping this can go back into 19.08.
Please do let us know how we can help to get that back into vpp-dev/lib.




Thanks
Luca




[vpp-dev] Statistics on GitHub for home-brew binaries

2019-06-03 Thread Luca Muscariello via Lists.Fd.Io
Hi everyone,

 

FD.io does not use GitHub directly, but there are projects such as Homebrew

for macOS which grant binary distribution to a software project based on GitHub
statistics.

 

In our project we support macOS among many other platforms and we’d love to give

our user base the possibility to fetch binaries using brew.

 

Fortunately, fd.io is mirroring project repos on GitHub and we’d like to ask 
everyone’s help

to get stars and forks on GitHub. 

 

The two fd.io projects we’d need help for are mirrored at the following two 
links.

 

 

https://github.com/FDio/cicn/

 

https://github.com/FDio/hicn

 

Thanks in advance for your help on behalf of the hicn and cicn projects.

Best

Luca

 



Re: [vpp-dev] [tsc] FD.io minisummit at Kubecon EU (May 21-23, Barcelona)

2019-02-22 Thread Luca Muscariello via Lists.Fd.Io
Hi Ed

 

Is participation in the fd.io summit free of charge?

 

From the hicn project's point of view, Kubecon EU in Barcelona is

a convenient location.

 

Thanks

Luca

 

 

 

 

 

 

From:  on behalf of Edward Warnicke 
Date: Thursday 21 February 2019 at 21:15
To: "t...@lists.fd.io" , vpp-dev , 
"csit-...@lists.fd.io" , "dmm-...@lists.fd.io" 
, honeycomb-dev , 
"sweetcomb-...@lists.fd.io" , "hicn-...@lists.fd.io" 

Subject: Re: [tsc] FD.io minisummit at Kubecon EU (May 21-23, Barcelona)

 

+ hicn and sweetcomb, whose lists were misconfigured the first time I sent it :)

 

Ed

 

On Thu, Feb 21, 2019 at 1:04 PM Ed Warnicke  wrote:

Traditionally in the past FD.io has held collocated minisummits at Kubecon (NA 
and EU). 

 

We are fortunate to have budget to do Kubecon EU, Kubecon NA, and one other 
collocated event (probably best to be held in Asia and possibly at an embedded 
event).

 

In the course of discussing this, the FD.io Marketing group felt that doing a 
separate FD.io booth at Kubecon EU was a clear win, but raised the question:

 

- Is there a better choice for an EU event to collocate with for the FD.io 
minisummit?

 

This is a good question, and so wanted to open the discussion to the broader 
community, as it differs from what had previously been discussed in the 
community and decided on.

 

So, thoughts?  Opinions?  Ideas?

 

Ed



Re: [vpp-dev] vapi msg send

2019-02-12 Thread Luca Muscariello via Lists.Fd.Io
According to Ole

 

https://lists.fd.io/g/vpp-dev/message/10481?p=,,,20,0,0,0::relevance,,vapi,20,2,0,25510961

 

 

What deb package contains vapi_c_gen.py and vapi_cpp_gen.py?

 

For those who want to generate the C API for an external VPP plugin
without checking out the whole VPP tree.

 

Thanks

Luca
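For context, a rough sketch of the generation workflow when a VPP source tree is available (the flags and file names below are assumptions and may differ between releases; check the tools' --help output):

```
# Sketch only -- "my_plugin" and the flag names are assumptions.
# 1. Turn the plugin's .api definition into JSON.
vppapigen --includedir src --input my_plugin/my_plugin.api JSON \
          --output my_plugin/my_plugin.api.json

# 2. Generate the C / C++ VAPI headers from the JSON.
vapi_c_gen.py   my_plugin/my_plugin.api.json   # -> my_plugin.api.vapi.h
vapi_cpp_gen.py my_plugin/my_plugin.api.json   # -> my_plugin.api.vapi.hpp
```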

 

 

 

 

From:  on behalf of "mhemmatp via Lists.Fd.Io" 

Reply-To: "Masoud Hemmatpour (mhemmatp)" 
Date: Tuesday 12 February 2019 at 15:07
To: "vpp-dev@lists.fd.io" 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vapi msg send

 

 Hello all,

 I am going to use vapi to connect to a plugin in vpp. I am following this 
instruction:

 1- connect to vpp and create the context (ctx)
 1- allocating memory through the APIs (i.e., initializing the header of the 
message)
 2- initializing the payload of the message (msg)
 3- vapi_send(ctx,msg)

Actually, I dont receive any ERR from vapi_send() however the message is not 
received to the vpp (I check it by api trace save/dump). Did I miss something ?

Any help is very welcomed.

Kind Regards, 



[vpp-dev] Hybrid ICN hicn - 19.01 release is out

2019-02-08 Thread Luca Muscariello via Lists.Fd.Io
Hi everyone,

 

I’m very happy to announce that the first hICN release is out.

 

hICN 19.01 release makes use of VPP 19.01 and artifacts are 

available on package cloud.

 

 

hICN (hybrid ICN) is composed of a server stack (network + transport) 

based on VPP and a client portable stack (macOS, Windows, GNU/Linux, iOS, 
Android).

hICN makes use of VPP through a new plugin and also by using libmemif 

as network/transport connector.

 

 

More information about the project can be found 

in the wiki page https://wiki.fd.io/view/HICN

 

 

The team is already engaging in several fd.io projects such as

VPP and sweetcomb and willing to engage with other projects

such as CSIT, GoVPP in the first place, for both testing and

network management.

 

hICN makes also use of CICN contributions such as libparc 

to support crypto APIs and OS portability. 

 

 

Many thanks to Ed and Vanessa for setting everything up in a 

very short time. Frankly, we could not ask for better support.

 

Thanks, and we hope to work with you in the future.

 

Luca 

on behalf of the fd.io/hicn team.

 

 



Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-14 Thread Luca Muscariello (lumuscar)
Hi Florin,

Session enable does not help.
hping is using raw sockets so this must be the reason.

Luca



From: Florin Coras <fcoras.li...@gmail.com>
Date: Friday 11 May 2018 at 23:02
To: Luca Muscariello <lumuscar+f...@cisco.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Hi Luca,

Not really sure why the kernel is slow to reply to ping. Maybe it has to do 
with scheduling but it’s just guess work.

I’ve never tried hping. Let me see if I understand your scenario: while running 
iperf you tried to hping the stack and you got no rst back? Anything 
interesting in “sh error” counters? If iperf wasn’t running, did you first 
enable the stack with “session enable”?

Florin


On May 11, 2018, at 3:19 AM, Luca Muscariello <lumuscar+f...@cisco.com> wrote:

Florin,

A few more comments about latency.
Some numbers in ms in the table below:

This is ping and iperf3 concurrent. In case of VPP it is vppctl ping.

Kernel w/ load   Kernel w/o load  VPP w/ load  VPP w/o load
Min.   :0.1920   Min.   :0.0610   Min.   :0.0573   Min.   :0.03480
1st Qu.:0.2330   1st Qu.:0.1050   1st Qu.:0.2058   1st Qu.:0.04640
Median :0.2450   Median :0.1090   Median :0.2289   Median :0.04880
Mean   :0.2458   Mean   :0.1153   Mean   :0.2568   Mean   :0.05096
3rd Qu.:0.2720   3rd Qu.:0.1290   3rd Qu.:0.2601   3rd Qu.:0.05270
Max.   :0.2800   Max.   :0.1740   Max.   :0.6926   Max.   :0.09420

In short: ICMP packets have a lower latency under load.
I could interpret this as being due to vectorization, maybe. Also, the Linux kernel
is slower to reply to ping by a 2x factor (system call latency?): 115us vs
50us in VPP. With load there is no difference. In this test Linux TCP is using TSO.

While trying to use hping to get a latency sample with TCP instead of ICMP,
we noticed that the VPP TCP stack does not reply with an RST, so we don’t get
any sample. Is that expected behavior?

Thanks


Luca





From: Luca Muscariello <lumus...@cisco.com>
Date: Thursday 10 May 2018 at 13:52
To: Florin Coras <fcoras.li...@gmail.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

MTU had no effect, just statistical fluctuations in the test reports. Sorry for 
misreporting the info.

We are exploiting vectorization as we have a single memif channel
per transport socket so we can control the size of the batches dynamically.

In theory the size of outstanding data from the transport should be controlled 
in bytes for
batching to be useful and not harmful as frame sizes can vary a lot. But I’m 
not aware of a queue abstraction from DPDK
to control that from VPP.

From: Florin Coras <fcoras.li...@gmail.com>
Date: Wednesday 9 May 2018 at 18:23
To: Luca Muscariello <lumus...@cisco.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Hi Luca,

We don’t yet support pmtu in the stack so tcp uses a fixed 1460 mtu, unless you 
changed that, we shouldn’t generate jumbo packets. If we do, I’ll have to take 
a look at it :)

If you already had your transport protocol, using memif is the natural way to 
go. Using the session layer makes sense only if you can implement your 
transport within vpp in a way that leverages vectorization or if it can 
leverage the existing transports (see for instance the TLS implementation).

Until today [1] the stack did allow for excessive batching (generation of 
multiple frames in one dispatch loop) but we’re now restricting that to one. 
This is still far from proper pacing which is on our todo list.

Florin

[1] https://gerrit.fd.io/r/#/c/12439/





On May 9, 2018, at 4:21 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Florin,

Thanks for the slide deck, I’ll check it soon.

BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
little
advantage wrt the Linux TCP stack which was using 1500B by default.

By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
compares
to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the same.

BTW, for what we’re doing it is difficult to reuse the VPP session layer as it 
is.
Our transport stack uses a different kind of namespace and mux/demux is also 
different.

We are using memif as underlying driver which does 

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-11 Thread Luca Muscariello
Florin,

 

A few more comments about latency.

Some numbers in ms in the table below:

 

This is ping and iperf3 concurrent. In case of VPP it is vppctl ping.

 

Kernel w/ load   Kernel w/o load  VPP w/ load  VPP w/o load
Min.   :0.1920   Min.   :0.0610   Min.   :0.0573   Min.   :0.03480
1st Qu.:0.2330   1st Qu.:0.1050   1st Qu.:0.2058   1st Qu.:0.04640
Median :0.2450   Median :0.1090   Median :0.2289   Median :0.04880
Mean   :0.2458   Mean   :0.1153   Mean   :0.2568   Mean   :0.05096
3rd Qu.:0.2720   3rd Qu.:0.1290   3rd Qu.:0.2601   3rd Qu.:0.05270
Max.   :0.2800   Max.   :0.1740   Max.   :0.6926   Max.   :0.09420

 

In short: ICMP packets have a lower latency under load.

I could interpret this as being due to vectorization, maybe. Also, the Linux kernel

is slower to reply to ping by a 2x factor (system call latency?): 115us vs

50us in VPP. With load there is no difference. In this test Linux TCP is using TSO.

 

While trying to use hping to get a latency sample with TCP instead of ICMP,

we noticed that the VPP TCP stack does not reply with an RST, so we don’t get

any sample. Is that expected behavior?
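
For reference, the kind of TCP probe used here is roughly the following (target address and port are illustrative, not the actual test setup):

```
# Send 10 TCP SYNs and report the RTT of whatever comes back (RST or SYN/ACK)
hping3 -c 10 -S -p 5201 192.0.2.1
```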

 

Thanks

 

 

Luca

 

 

 

 

 

From: Luca Muscariello <lumus...@cisco.com>
Date: Thursday 10 May 2018 at 13:52
To: Florin Coras <fcoras.li...@gmail.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

 

MTU had no effect, just statistical fluctuations in the test reports. Sorry for 
misreporting the info.

 

We are exploiting vectorization as we have a single memif channel 

per transport socket so we can control the size of the batches dynamically. 

 

In theory the size of outstanding data from the transport should be controlled 
in bytes for 

batching to be useful and not harmful as frame sizes can vary a lot. But I’m 
not aware of a queue abstraction from DPDK 

to control that from VPP.

 

From: Florin Coras <fcoras.li...@gmail.com>
Date: Wednesday 9 May 2018 at 18:23
To: Luca Muscariello <lumus...@cisco.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

 

Hi Luca,

 

We don’t yet support pmtu in the stack so tcp uses a fixed 1460 mtu, unless you 
changed that, we shouldn’t generate jumbo packets. If we do, I’ll have to take 
a look at it :)

 

If you already had your transport protocol, using memif is the natural way to 
go. Using the session layer makes sense only if you can implement your 
transport within vpp in a way that leverages vectorization or if it can 
leverage the existing transports (see for instance the TLS implementation).

 

Until today [1] the stack did allow for excessive batching (generation of 
multiple frames in one dispatch loop) but we’re now restricting that to one. 
This is still far from proper pacing which is on our todo list. 

 

Florin

 

[1] https://gerrit.fd.io/r/#/c/12439/

 




On May 9, 2018, at 4:21 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> 
wrote:

 

Florin,

 

Thanks for the slide deck, I’ll check it soon.

 

BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
little

advantage wrt the Linux TCP stack which was using 1500B by default.

 

By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
compares

to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the same.

 

BTW, for what we’re doing it is difficult to reuse the VPP session layer as it 
is.

Our transport stack uses a different kind of namespace and mux/demux is also 
different.

 

We are using memif as underlying driver which does not seem to be a

bottleneck as we can also control batching there. Also, we have our own

shared memory downstream memif inside VPP through a plugin.

 

What we observed is that delay-based congestion control does not like

much VPP batching (batching in general) and we are using DBCG.

 

Linux TSO has the same problem but has TCP pacing to limit bad effects of bursts

on RTT/losses and flow control laws.

 

I guess you’re aware of these issues already.

 

Luca

 

 

From: Florin Coras <fcoras.li...@gmail.com>
Date: Monday 7 May 2018 at 22:23
To: Luca Muscariello <lumus...@cisco.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

 

Yes, the whole host stack uses shared memory segments and fifos that the 
session layer manages. For a brief description of the session layer see [1, 2]. 
Apart from that, unfortunately, we don’t have any other dev documentation. 
src/vnet/session/segment_manager.[ch] has some good examples of how to allocate 
segments and fifos. Under application_interface.h check 
app_[send|recv]_[stream|

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-10 Thread Luca Muscariello (lumuscar)
MTU had no effect, just statistical fluctuations in the test reports. Sorry for 
misreporting the info.

We are exploiting vectorization as we have a single memif channel
per transport socket so we can control the size of the batches dynamically.

In theory the size of outstanding data from the transport should be controlled 
in bytes for
batching to be useful and not harmful as frame sizes can vary a lot. But I’m 
not aware of a queue abstraction from DPDK
to control that from VPP.

From: Florin Coras <fcoras.li...@gmail.com>
Date: Wednesday 9 May 2018 at 18:23
To: Luca Muscariello <lumus...@cisco.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Hi Luca,

We don’t yet support pmtu in the stack so tcp uses a fixed 1460 mtu, unless you 
changed that, we shouldn’t generate jumbo packets. If we do, I’ll have to take 
a look at it :)

If you already had your transport protocol, using memif is the natural way to 
go. Using the session layer makes sense only if you can implement your 
transport within vpp in a way that leverages vectorization or if it can 
leverage the existing transports (see for instance the TLS implementation).

Until today [1] the stack did allow for excessive batching (generation of 
multiple frames in one dispatch loop) but we’re now restricting that to one. 
This is still far from proper pacing which is on our todo list.

Florin

[1] https://gerrit.fd.io/r/#/c/12439/



On May 9, 2018, at 4:21 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Florin,

Thanks for the slide deck, I’ll check it soon.

BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
little
advantage wrt the Linux TCP stack which was using 1500B by default.

By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
compares
to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the same.

BTW, for what we’re doing it is difficult to reuse the VPP session layer as it 
is.
Our transport stack uses a different kind of namespace and mux/demux is also 
different.

We are using memif as underlying driver which does not seem to be a
bottleneck as we can also control batching there. Also, we have our own
shared memory downstream memif inside VPP through a plugin.

What we observed is that delay-based congestion control does not like
much VPP batching (batching in general) and we are using DBCG.

Linux TSO has the same problem but has TCP pacing to limit bad effects of bursts
on RTT/losses and flow control laws.

I guess you’re aware of these issues already.

Luca


From: Florin Coras <fcoras.li...@gmail.com>
Date: Monday 7 May 2018 at 22:23
To: Luca Muscariello <lumus...@cisco.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Yes, the whole host stack uses shared memory segments and fifos that the 
session layer manages. For a brief description of the session layer see [1, 2]. 
Apart from that, unfortunately, we don’t have any other dev documentation. 
src/vnet/session/segment_manager.[ch] has some good examples of how to allocate 
segments and fifos. Under application_interface.h check 
app_[send|recv]_[stream|dgram]_raw for examples on how to read/write to the 
fifos.

Now, regarding the the writing to the fifos: they are lock free but size 
increments are atomic since the assumption is that we’ll always have one reader 
and one writer. Still, batching helps. VCL doesn’t do it but iperf probably 
does it.

Hope this helps,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/SessionLayerArchitecture
[2] https://wiki.fd.io/images/1/15/Vpp-hoststack-kc-eu-18.pdf



On May 7, 2018, at 11:35 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Florin,

So the TCP stack does not connect to VPP using memif.
I’ll check the shared memory you mentioned.

For our transport stack we’re using memif. Nothing to
do with TCP though.

Iperf3 to VPP there must be copies anyway.
There must be some batching with timing though
while doing these copies.

Is there any doc of svm_fifo usage?

Thanks
Luca

On 7 May 2018, at 20:00, Florin Coras <fcoras.li...@gmail.com> wrote:
Hi Luca,

I guess, as you did, that it’s vectorization. VPP is really good at pushing 
packets whereas Linux is good at using all hw optimizations.

The stack uses it’s own shared memory mechanisms (check svm_fifo_t) but given 
that you did the testing with iperf3, I suspect the edge is not there. T

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-09 Thread Luca Muscariello (lumuscar)
Florin,

Thanks for the slide deck, I’ll check it soon.

BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
little
advantage wrt the Linux TCP stack which was using 1500B by default.

By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
compares
to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the same.

BTW, for what we’re doing it is difficult to reuse the VPP session layer as it 
is.
Our transport stack uses a different kind of namespace and mux/demux is also 
different.

We are using memif as underlying driver which does not seem to be a
bottleneck as we can also control batching there. Also, we have our own
shared memory downstream memif inside VPP through a plugin.

What we observed is that delay-based congestion control does not like
much VPP batching (batching in general) and we are using DBCG.

Linux TSO has the same problem but has TCP pacing to limit bad effects of bursts
on RTT/losses and flow control laws.

I guess you’re aware of these issues already.

Luca


From: Florin Coras <fcoras.li...@gmail.com>
Date: Monday 7 May 2018 at 22:23
To: Luca Muscariello <lumus...@cisco.com>
Cc: Luca Muscariello <lumuscar+f...@cisco.com>, "vpp-dev@lists.fd.io" 
<vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Yes, the whole host stack uses shared memory segments and fifos that the 
session layer manages. For a brief description of the session layer see [1, 2]. 
Apart from that, unfortunately, we don’t have any other dev documentation. 
src/vnet/session/segment_manager.[ch] has some good examples of how to allocate 
segments and fifos. Under application_interface.h check 
app_[send|recv]_[stream|dgram]_raw for examples on how to read/write to the 
fifos.

Now, regarding the the writing to the fifos: they are lock free but size 
increments are atomic since the assumption is that we’ll always have one reader 
and one writer. Still, batching helps. VCL doesn’t do it but iperf probably 
does it.

Hope this helps,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/SessionLayerArchitecture
[2] https://wiki.fd.io/images/1/15/Vpp-hoststack-kc-eu-18.pdf


On May 7, 2018, at 11:35 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Florin,

So the TCP stack does not connect to VPP using memif.
I’ll check the shared memory you mentioned.

For our transport stack we’re using memif. Nothing to
do with TCP though.

Iperf3 to VPP there must be copies anyway.
There must be some batching with timing though
while doing these copies.

Is there any doc of svm_fifo usage?

Thanks
Luca

On 7 May 2018, at 20:00, Florin Coras <fcoras.li...@gmail.com> wrote:
Hi Luca,

I guess, as you did, that it’s vectorization. VPP is really good at pushing 
packets whereas Linux is good at using all hw optimizations.

The stack uses it’s own shared memory mechanisms (check svm_fifo_t) but given 
that you did the testing with iperf3, I suspect the edge is not there. That is, 
I guess they’re not abusing syscalls with lots of small writes. Moreover, the 
fifos are not zero-copy, apps do have to write to the fifo and vpp has to 
packetize that data.

Florin


On May 7, 2018, at 10:29 AM, Luca Muscariello (lumuscar) <lumus...@cisco.com> wrote:

Hi Florin

Thanks for the info.

So, how do you explain VPP TCP stack beats Linux
implementation by doubling the goodput?
Does it come from vectorization?
Any special memif optimization underneath?

Luca

On 7 May 2018, at 18:17, Florin Coras <fcoras.li...@gmail.com> wrote:
Hi Luca,

We don’t yet support TSO because it requires support within all of vpp (think 
tunnels). Still, it’s on our list.

As for crypto offload, we do have support for IPSec offload with QAT cards and 
we’re now working with Ping and Ray from Intel on accelerating the TLS OpenSSL 
engine also with QAT cards.

Regards,
Florin


On May 7, 2018, at 7:53 AM, Luca Muscariello <lumuscar+f...@cisco.com> wrote:

Hi,

A few questions about the TCP stack and HW offloading.
Below is the experiment under test.

  +----------------+                       +----------------+
  |  Iperf3        |       DPDK-10GE       |        Iperf3  |
  |  TCP / VPP     +----[ Nexus Switch ]---+     VPP / TCP  |
  |  LXC           |       DPDK-10GE       |           LXC  |
  +----------------+                       +----------------+


Using the Linux kernel w/ or w/o TSO I get an iperf3 goodput of 9.5Gbps or 
4.5Gbps.
Using VPP TCP stack I get 9.2Gbps, say max goodput as Linux w/ TSO.

Is there any TSO implementation already in VPP one can take advantage of?

Side question. Is there any crypto offloading service available in VPP?
Essentially for the computation of RSA-1024/2048 and ECDSA 192/256 signatures.

Thanks
Luca







Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Luca Muscariello (lumuscar)
Hi Florin

Thanks for the info.

So, how do you explain that the VPP TCP stack beats the Linux
implementation by doubling the goodput?
Does it come from vectorization?
Any special memif optimization underneath?

Luca

On 7 May 2018, at 18:17, Florin Coras <fcoras.li...@gmail.com> wrote:

Hi Luca,

We don’t yet support TSO because it requires support within all of vpp (think 
tunnels). Still, it’s on our list.

As for crypto offload, we do have support for IPSec offload with QAT cards and 
we’re now working with Ping and Ray from Intel on accelerating the TLS OpenSSL 
engine also with QAT cards.

Regards,
Florin

On May 7, 2018, at 7:53 AM, Luca Muscariello <lumuscar+f...@cisco.com> wrote:

Hi,

A few questions about the TCP stack and HW offloading.
Below is the experiment under test.

  +----------------+                       +----------------+
  |  Iperf3        |       DPDK-10GE       |        Iperf3  |
  |  TCP / VPP     +----[ Nexus Switch ]---+     VPP / TCP  |
  |  LXC           |       DPDK-10GE       |           LXC  |
  +----------------+                       +----------------+


Using the Linux kernel w/ or w/o TSO I get an iperf3 goodput of 9.5Gbps or 
4.5Gbps.
Using VPP TCP stack I get 9.2Gbps, say max goodput as Linux w/ TSO.

Is there any TSO implementation already in VPP one can take advantage of?

Side question. Is there any crypto offloading service available in VPP?
Essentially for the computation of RSA-1024/2048 and ECDSA 192/256 signatures.

Thanks
Luca





[vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Luca Muscariello
Hi,

 

A few questions about the TCP stack and HW offloading.

Below is the experiment under test.

 

 

  +----------------+                       +----------------+
  |  Iperf3        |       DPDK-10GE       |        Iperf3  |
  |  TCP / VPP     +----[ Nexus Switch ]---+     VPP / TCP  |
  |  LXC           |       DPDK-10GE       |           LXC  |
  +----------------+                       +----------------+

 

 

Using the Linux kernel w/ or w/o TSO I get an iperf3 goodput of 9.5Gbps or 
4.5Gbps.

Using VPP TCP stack I get 9.2Gbps, say max goodput as Linux w/ TSO.

 

Is there any TSO implementation already in VPP one can take advantage of?

 

Side question. Is there any crypto offloading service available in VPP?

Essentially for the computation of RSA-1024/2048 and ECDSA 192/256 signatures.


   

Thanks

Luca