[openstack-dev] networking-vpp 18.10 for VPP 18.10 is now available

2018-11-08 Thread Naveen Joy (najoy)
Hello All,

In conjunction with the release of VPP 18.10, we'd like to invite you all to 
try out networking-vpp 18.10 for VPP 18.10.
As many of you may already know, VPP is a fast userspace forwarder based on 
the DPDK toolkit. It uses vector packet processing algorithms to minimize the 
CPU time spent on each packet and maximize throughput.

Networking-vpp is an ML2 mechanism driver that controls VPP on your control 
and compute hosts to provide fast L2 forwarding under Neutron.

In this release, we have made improvements to fully support the network trunk 
service plugin. Using this plugin, you can attach multiple networks to an 
instance by binding it to a single vhostuser trunk port. The APIs are the same 
as the OpenStack Neutron trunk service APIs. You can also now bind subports 
to, and unbind them from, a bound network trunk.
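For anyone new to the trunk API, here is a hedged sketch of the workflow using 
openstacksdk; the cloud, network and port names and the VLAN ID are all 
invented for the example, and nothing here is networking-vpp-specific code.

    import openstack

    conn = openstack.connect(cloud='mycloud')  # assumes a configured clouds.yaml

    parent_net = conn.network.find_network('net-a')
    other_net = conn.network.find_network('net-b')

    # The parent port is the single vhostuser port the instance is bound to.
    parent = conn.network.create_port(network_id=parent_net.id, name='trunk-parent')
    trunk = conn.network.create_trunk(name='vm-trunk', port_id=parent.id)

    # Each subport attaches one more network, distinguished by a VLAN tag.
    sub = conn.network.create_port(network_id=other_net.id, name='trunk-sub')
    conn.network.add_trunk_subports(
        trunk, [{'port_id': sub.id,
                 'segmentation_type': 'vlan',
                 'segmentation_id': 101}])

    # Subports can later be unbound from the (bound) trunk in the same way:
    conn.network.delete_trunk_subports(trunk, [{'port_id': sub.id}])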

Another feature we have improved in this release is Tap-as-a-Service (TaaS). 
The TaaS code has been updated to handle any out-of-order etcd messages 
received during agent restarts. You can use this service to provide remote 
port mirroring for tenant virtual networks.
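As a rough illustration of what such an ordering guard involves (a simplified 
sketch, not the actual networking-vpp code): every etcd write bumps the key's 
mod_revision, so an agent can remember the highest revision it has processed 
per key and drop anything older, such as messages replayed while it 
resynchronizes after a restart.

    latest_seen = {}  # key -> highest etcd mod_revision processed so far

    def handle_event(key, mod_revision, value):
        if latest_seen.get(key, -1) >= mod_revision:
            return  # stale or out-of-order message: ignore it
        latest_seen[key] = mod_revision
        apply_port_state(key, value)

    def apply_port_state(key, value):
        # Hypothetical stand-in for reprogramming the mirror state in VPP.
        print('applying %s -> %r' % (key, value))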

Besides the above, this release also includes several bug fixes, VPP 18.10 API 
compatibility updates, and stability-related improvements.

The README [1] explains how you can try out VPP using devstack: the devstack 
plugin will deploy the mechanism driver and VPP 18.10
and should give you a working system with a minimum of hassle.

We will be continuing our development between now and VPP's 19.01 release. 
There are several features we're planning to work on
and we will keep you updated through our bugs list [2]. We welcome anyone who 
would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, held every other Monday 
(the next one is due this Monday at 0800 PT = 1600 GMT).
--
Ian & Naveen

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt


[openstack-dev] networking-vpp 18.07 for VPP 18.07 is now available

2018-08-17 Thread Naveen Joy (najoy)
Hello Everyone,

In conjunction with the release of VPP 18.07, we'd like to invite you all to 
try out networking-vpp 18.07 for VPP 18.07.
As many of you may already know, VPP is a fast userspace forwarder based on 
the DPDK toolkit, and uses vector packet processing algorithms to minimize the 
CPU time spent on each packet and maximize throughput.

Networking-vpp is an ML2 mechanism driver that controls VPP on your control 
and compute hosts to provide fast L2 forwarding under Neutron.

This version has the following additional enhancements, along with supporting 
the latest VPP 18.07 APIs:
- Network Trunking
- Tap-as-a-Service (TaaS)

Both of the above features are experimental in this release.
Along with this, there has been the usual upkeep as Neutron versions and VPP 
APIs change, plus bug fixes and code and test improvements.

The README [1] explains more about the above features and how you can try out 
VPP using devstack:
the devstack plugin will deploy the mechanism driver and VPP itself and should 
give you a working system with a minimum of hassle.

We will be continuing our development between now and VPP's 18.10 release. 
There are several features we're planning to work on and we will keep you 
updated through our bugs list [2].
We welcome anyone who would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, held every other Monday 
(the next one is due this Monday at 0900 PT = 1600 GMT).
--
Ian & Naveen

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt


[openstack-dev] networking-vpp 18.04 for VPP 18.04 is now available

2018-05-01 Thread Naveen Joy (najoy)
Hello Everyone,

In conjunction with the release of VPP 18.04, we'd like to invite you all to 
try out networking-vpp 18.04 for VPP 18.04.
VPP is a fast userspace forwarder based on the DPDK toolkit, and uses vector 
packet processing algorithms to minimize the CPU time spent on each packet and 
maximize throughput.
networking-vpp is an ML2 mechanism driver that controls VPP on your control 
and compute hosts to provide fast L2 forwarding under Neutron.

This version has a few additional enhancements and several bug fixes, along 
with supporting the VPP 18.04 APIs:
- L3 HA is fully supported for VLAN, VXLAN-GPE, and flat network types
- IPv6 VM addressing is supported for VXLAN-GPE
- Deadlock prevention in eventlet

Along with this, there has been the usual upkeep as Neutron versions change, 
plus bug fixes and code and test improvements.

The README [1] explains how you can try out VPP and networking-vpp using 
devstack: the devstack plugin will deploy the mechanism driver and VPP itself 
and should give you a working system with a minimum of hassle.
It will use the etcd version deployed by newer versions of devstack.

We will be continuing our development between now and VPP's 18.07 release.
There are several features we're planning to work on (you'll find a list in our 
RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, held every other Monday, 
0800 PST = 1600 GMT.
--
Naveen & Ian

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt



[openstack-dev] networking-vpp 18.01 for VPP 18.01 is now available

2018-02-12 Thread Naveen Joy (najoy)
Hello Everyone,

In conjunction with the release of VPP 18.01, we'd like to invite you all to 
try out networking-vpp 18.01 for VPP 18.01.
VPP is a fast userspace forwarder based on the DPDK toolkit, and uses vector 
packet processing algorithms to minimize the CPU time spent on each packet and 
maximize throughput.
networking-vpp is an ML2 mechanism driver that controls VPP on your control 
and compute hosts to provide fast L2 forwarding under Neutron.

This version has a few additional enhancements, along with supporting the VPP 
18.01 APIs:
- L3 HA
- VM Live Migration
- Neutron protocol names in a security group rule

Along with this, there has been the usual upkeep as Neutron versions change, 
plus bug fixes and code and test improvements.

The README [1] explains how you can try out VPP using devstack: the devstack 
plugin will deploy the mechanism driver and VPP itself and should give you a 
working system with a minimum of hassle.
It will use the etcd version deployed by newer versions of devstack.

We will be continuing our development between now and VPP's 18.04 release in 
April.
There are several features we're planning to work on (you'll find a list in our 
RFE bugs at [2]), and we welcome anyone who would like to come help us.

Everyone is welcome to join our biweekly IRC meetings, every other Monday (the 
next one is due in a week), 0800 PST = 1600 GMT.
--
Naveen & Ian

[1]https://github.com/openstack/networking-vpp/blob/master/README.rst
[2]http://goo.gl/i3TzAt


Re: [openstack-dev] [neutron][networking-vpp]Introducing networking-vpp

2016-10-06 Thread Naveen Joy (najoy)
I was not comparing them in the way you imply below. My point was that
even though they are fundamentally different systems, they play
a similar role in the way we use them: the primary purpose is
to facilitate communication between the components of a distributed
system.
We have found etcd to be simpler and a better fit for what we
are trying to accomplish.

Regards,
Naveen 


On 10/6/16, 10:43 AM, "Jay Pipes"  wrote:

>On 10/06/2016 11:58 AM, Naveen Joy (najoy) wrote:
>> It's primarily because we have seen better stability and scalability
>> with etcd over rabbitmq.
>
>Well, that's kind of comparing apples to oranges. :)
>
>One is a distributed k/v store. The other is a message queue broker.
>
>The way that we (IMHO) over-use the peer-to-peer RPC communication
>paradigm in Nova and Neutron has resulted in a number of design choices
>and awkward code in places like oslo.messaging because of the use of
>broker-based message queue systems as the underlying transport
>mechanism. It's not that RabbitMQ or AMQP isn't scalable or reliable.
>It's that we're using it in ways that don't necessarily fit well.
>
>One might argue that in using etcd and etcd watches in the way you are
>in networking-vpp, that you are essentially using those tools to create
>a simplified pub-sub messaging system and that isn't really what etcd
>was built for and you will end up running into similar fitness issues
>long-term. But, who knows? It might end up being a genius implementation.
>:)
>
>I'm happy to see innovation flourish here and encourage new designs and
>strategies. Let's just make sure we compare apples to apples when making
>statements about performance or reliability.
>
>All the best,
>-jay
>




Re: [openstack-dev] [neutron][networking-vpp]Introducing networking-vpp

2016-10-06 Thread Naveen Joy (najoy)
The etcd handling code is unique to networking-vpp. In our model, the server 
code uses the Neutron DB to store port states and has journaling mechanisms to 
map this to the etcd DB. So even in our design (as it should be), the Neutron 
DB is the primary source of truth.
When data is published to etcd, the corresponding compute agents receive a 
notification; they drop the vhostuser and tap interfaces into VPP and report 
the state back to the server. A return thread on the server watches for this 
state update from the agents and sends port-bound events to Nova.
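For a concrete picture of the agent side of that loop, here is a minimal 
sketch using the python-etcd3 client; the key layout, payloads and host names 
are invented for illustration, and networking-vpp defines its own.

    import json
    import etcd3
    from etcd3.events import PutEvent

    client = etcd3.client(host='etcd-host', port=2379)

    def bind_in_vpp(port):
        # Hypothetical stand-in for programming VPP through its Python API.
        print('dropping vhostuser/tap interface for %s into VPP' % port['id'])

    # Watch the desired state the server publishes for this compute host...
    events, cancel = client.watch_prefix('/example/ports/compute1/')
    for event in events:
        if not isinstance(event, PutEvent):
            continue
        port = json.loads(event.value)
        bind_in_vpp(port)
        # ...and report operational state back; the server's return thread
        # watches this prefix and sends the port-bound event to Nova.
        client.put('/example/state/compute1/%s' % port['id'],
                   json.dumps({'bound': True}))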

Regards,
Naveen

From: Neil Jerram <n...@tigera.io>
Reply-To: openstack-dev <openstack-dev@lists.openstack.org>
Date: Thursday, October 6, 2016 at 8:39 AM
To: openstack-dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][networking-vpp]Introducing networking-vpp

On Thu, Oct 6, 2016 at 3:44 PM Jay Pipes <jaypi...@gmail.com> wrote:
On 10/06/2016 03:46 AM, Jerome Tollet (jtollet) wrote:
> Hey Kevin,
>
> Thanks for your interest in this project.
>
> We found etcd very convenient to store desired states as well as
> operational states. It made the design easy to provide production grade
> features (e.g. agent restart, mechanism driver restart, ...) in a very
> concise code. In addition to that, debugging is simple to do using
> simple "etcdwatch" commands. Please note that etcd is not an alternative
> to rabbitmq even though communication protocol is HTTP/JSON.

It's probably worth mentioning that the etcd code used in networking-vpp
came from networking-calico?

When I chatted with Ian about this, I actually asked the same question
and Ian told me the code came from networking-calico pretty much as-is.

To clarify (or check my own understanding of!) that statement: it's certainly 
true that networking-calico also uses etcd as its mechanism for communicating 
between the ML2 driver and the agents.  However, from a quick look at the 
networking-vpp code, it appears to me that it doesn't use any detailed etcd 
handling code from networking-calico, or use the same etcd data model as Calico 
(i.e. the definition of how information is structured and named in the etcd 
tree).  So I think the statement above just means that networking-vpp uses a 
similar architecture to networking-calico's, in particular as regards using 
etcd for communication.

Happy to be corrected if that's not quite right, of course!

We can discuss at length about using etcd as the state data store
instead of the Neutron database and using etcd watches as the mechanism
for state change communication, but that will likely end up in lots of
bike-shedding. There's certainly advantages and disadvantages to each
approach.

Another clarification here, I think you should say 'as the transport instead of 
Neutron RPC', not 'as the state data store instead of the Neutron database'.  
With networking-calico the Neutron DB is still the primary source of truth, and 
the etcd DB is just a mapping of that; and I would guess (tentatively) that 
that is true for networking-vpp as well.


Best,
-jay

Regards,
 Neil



Re: [openstack-dev] [neutron][networking-vpp]Introducing networking-vpp

2016-10-06 Thread Naveen Joy (najoy)
It's primarily because we have seen better stability and scalability with 
etcd over rabbitmq.

Thanks,
Naveen

From: Kevin Benton <ke...@benton.pub>
Reply-To: openstack-dev <openstack-dev@lists.openstack.org>
Date: Wednesday, October 5, 2016 at 3:27 PM
To: openstack-dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][networking-vpp]Introducing networking-vpp

Cool. Always like to see more drivers.

I'm curious about the choice of etcd instead of rabbitmq as the communication 
mechanism between the mech driver and the agents since it introduces a new 
dependency into the deployment. Is this because the agent is written to work 
with other things like Kubernetes, Docker, etc as well?

On Wed, Oct 5, 2016 at 12:01 PM, Ian Wells <ijw.ubu...@cack.org.uk> wrote:
We'd like to introduce the VPP mechanism driver, networking-vpp[1], to the 
developer community.

networking-vpp is an ML2 mechanism driver to control DPDK-based VPP user-space 
forwarders on OpenStack compute nodes.  The code does what mechanism drivers do 
- it connects VMs to each other and to other Neutron-provided network services 
like routers.  It also does it with care - we aim to make sure this is a robust 
design that can withstand common cloud problems and failures - and with clarity 
- so that it's straightforward to see what it's chosen to do and what it's 
thinking.

To give some background:

VPP is an open source network forwarder originally developed by Cisco and now 
part of the Linux Foundation FD.io project for fast dataplanes.  It's very very 
good at moving packets around, and has demonstrated performance up to and well 
beyond 10Gbps - of tiny packets: ten times the number of packets iperf uses to 
fill a 10Gbps link.  This makes it especially useful for NFV use cases.  It's a 
user space forwarder, which has other benefits versus kernel packet forwarding: 
it can be stopped and upgraded without rebooting the host, and (in the worst 
case) it can crash without bringing down the whole system.

networking-vpp is its driver for OpenStack.  We've written about 3,000 lines 
of code, consisting of a mechanism driver and an agent to program VPP through 
its Python API, and we use etcd as a robust datastore and communication 
channel living between the two.
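To make the shape of that channel concrete, here is a hedged sketch of the 
server side, with an invented key layout and values (the real ones differ): 
the mechanism driver writes the desired state of a bound port under the chosen 
host's prefix, and an agent watching that prefix reacts by programming VPP.

    import json
    import etcd3

    client = etcd3.client(host='etcd-host', port=2379)

    # Desired state for a port bound to host 'compute1'; values invented.
    port = {'id': 'port-uuid', 'network_type': 'vlan', 'segmentation_id': 101}
    client.put('/example/ports/compute1/%s' % port['id'], json.dumps(port))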


The code, at the moment, is in a fairly early stage, so please play with it and 
fix or report any problems you find.  It will move packets between VLANs and 
flat networks and VMs, and will connect to DHCP servers, routers and the 
metadata server in your cloud, so for basic uses it will work just the way you 
expect.  However, we certainly don't support every feature of Neutron just yet. 
 In particular, we haven't tested some things like LBaaS and VPNaaS with it - 
they should work, we just haven't tried - and, most obviously, security groups 
are not yet implemented - that's on the way.  However, we'd like to get it into 
your hands so that you can have a go with it, see what you like and don't like 
about it, and help us file down the rough edges if you feel like joining us.  
Enjoy!

[1]
https://github.com/openstack/networking-vpp for all your code needs
https://review.openstack.org/#/q/status:open+project:openstack/networking-vpp 
to help
https://launchpad.net/networking-vpp for bugs

