Hi Ryan,

Mathieu Rohon wrote:
We have been working on similar use cases to announce /32s with the
Bagpipe BGP speaker, which supports EVPN.

Btw, the code for the BGP E-VPN implementation is at
https://github.com/Orange-OpenSource/bagpipe-bgp . It reuses parts of
ExaBGP (to which we contributed encodings for E-VPN and IP VPNs) and
relies on the native Linux kernel VXLAN implementation for the E-VPN
dataplane.

-Thomas

Please have a look at use case B in [1][2].
Note also that the l2-population mechanism driver for ML2 [3], which is
compatible with OVS, Linuxbridge, and the Ryu ofagent, is inspired by
EVPN, and I'm sure it could help in your use case.

[1] http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2] https://www.youtube.com/watch?v=q5z0aPrUZYc&sns
[3] https://blueprints.launchpad.net/neutron/+spec/l2-population

Mathieu

On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger
<ryan.cleven...@rackspace.com> wrote:
Hi,

At Rackspace, we have a need to create a higher-level networking service,
primarily for the purpose of creating a Floating IP solution in our
environment. The current solutions for Floating IPs, being tied to plugin
implementations, do not meet our needs at scale for the following reasons:

1. Limited endpoint H/A, mainly targeting failover only and not
multi-active endpoints;
2. Lack of noisy-neighbor and DDoS mitigation;
3. IP fragmentation (with cells, public connectivity is terminated inside
each cell, leading to fragmentation and IP stranding when cell CPU/memory
use doesn't line up with allocated IP blocks; abstracting public
connectivity away from Nova installations allows for much more efficient
use of those precious IPv4 blocks);
4. Diversity in transit (multiple encapsulation and transit types on a
per-floating-IP basis).

We realize that network infrastructures are often unique and such a solution
would likely diverge from provider to provider. However, we would love to
collaborate with the community to see if such a project could be built that
would meet the needs of providers at scale. We believe that, at its core,
this solution would boil down to terminating north<->south traffic
temporarily at a massively horizontally scalable centralized core and then
encapsulating traffic east<->west to a specific host based on the
association set up via the current L3 router extension's 'floatingips'
resource.
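
For reference, those associations are already queryable from Neutron; a
minimal sketch of building the floating-IP mapping with
python-neutronclient (credentials and the Keystone endpoint below are
placeholders):

# Sketch: build a floating IP -> fixed IP mapping from Neutron's
# 'floatingips' resource (L3 router extension). Credentials and the
# endpoint are placeholders; error handling omitted.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://keystone.example.com:5000/v2.0')

# Each entry maps a public floating IP to the fixed IP it should be
# translated to; unassociated floating IPs are skipped.
flip_map = {
    fip['floating_ip_address']: fip['fixed_ip_address']
    for fip in neutron.list_floatingips()['floatingips']
    if fip['fixed_ip_address']
}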

Our current idea involves using Open vSwitch for header rewriting and
tunnel encapsulation, combined with a set of Ryu applications for management:

https://i.imgur.com/bivSdcC.png

The Ryu application uses Ryu's BGP support to announce individual floating
IPs (/32s or /128s) up to the Public Routing layer, where they are then
summarized and announced to the rest of the datacenter. If a particular
floating IP is experiencing unusually large traffic (DDoS, slashdot effect,
etc.), the Ryu application could change the announcements up to the Public
layer to shift that traffic to dedicated hosts set up for that purpose. It
also announces a single /32 "Tunnel Endpoint" IP downstream to the TunnelNet
Routing system, which provides transit to and from the cells and their
hypervisors. Since traffic from either direction can then end up on any of
the FLIP hosts, a simple flow table that rewrites the MAC and IP in either
the SRC or DST fields (depending on traffic direction) allows the system to
be completely stateless. We have proven this out (with static routing and
flows) to work reliably in a small lab setup.
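
To make the announcement side concrete, here is a minimal sketch using
Ryu's existing BGP speaker API (ASNs, router IDs, and next hops are
illustrative, and the mitigation-host shift is our assumption about how
that re-announcement would be done):

# Sketch: announce per-floating-IP /32s upstream and shift a prefix
# toward dedicated mitigation hosts when needed. All addresses and AS
# numbers below are illustrative.
from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1')
speaker.neighbor_add(address='192.0.2.254', remote_as=64511)

# Normal case: draw traffic for a floating IP to the FLIP host pool.
speaker.prefix_add(prefix='203.0.113.10/32', next_hop='192.0.2.1')

# The single "Tunnel Endpoint" /32 announced downstream to TunnelNet.
speaker.prefix_add(prefix='198.51.100.1/32', next_hop='192.0.2.1')

def shift_to_mitigation(prefix, scrubber_next_hop):
    # Re-announce a floating IP so upstream routers steer its traffic
    # to dedicated hosts (e.g. during a DDoS).
    speaker.prefix_del(prefix)
    speaker.prefix_add(prefix=prefix, next_hop=scrubber_next_hop)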
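
The stateless rewrite itself maps directly onto OpenFlow set-field
actions. A sketch of the two per-association flows a FLIP host might
install, using Ryu's OpenFlow 1.3 bindings (the datapath handle, port
numbers, and addresses are placeholders):

# Sketch: two stateless rewrite flows per floating IP association.
# North->south rewrites the DST MAC/IP toward the VM; south->north
# rewrites the SRC MAC/IP back to the floating IP. No connection
# state is kept anywhere.
def install_flip_flows(dp, flip, fixed_ip, vm_mac, gw_mac,
                       public_port, tunnel_port):
    ofp, parser = dp.ofproto, dp.ofproto_parser

    # North -> south: public traffic addressed to the floating IP.
    match = parser.OFPMatch(in_port=public_port, eth_type=0x0800,
                            ipv4_dst=flip)
    actions = [parser.OFPActionSetField(eth_dst=vm_mac),
               parser.OFPActionSetField(ipv4_dst=fixed_ip),
               parser.OFPActionOutput(tunnel_port)]
    dp.send_msg(parser.OFPFlowMod(
        datapath=dp, priority=100, match=match,
        instructions=[parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, actions)]))

    # South -> north: VM traffic leaving via the floating IP.
    match = parser.OFPMatch(in_port=tunnel_port, eth_type=0x0800,
                            ipv4_src=fixed_ip)
    actions = [parser.OFPActionSetField(eth_src=gw_mac),
               parser.OFPActionSetField(ipv4_src=flip),
               parser.OFPActionOutput(public_port)]
    dp.send_msg(parser.OFPFlowMod(
        datapath=dp, priority=100, match=match,
        instructions=[parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, actions)]))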

On the hypervisor side, we currently plumb networks into separate OVS
bridges. Another Ryu application would control the bridge that handles
overlay networking, selectively diverting traffic destined for the default
gateway up to the FLIP NAT systems while honoring any configured logical
routing and letting local L2 traffic pass out into the existing overlay
fabric undisturbed.
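
A sketch of what that diversion rule might look like on the hypervisor
bridge, again with Ryu's OpenFlow 1.3 bindings (the gateway MAC and
tunnel port are placeholders):

# Sketch: steer frames addressed to the logical default gateway toward
# the FLIP NAT layer; lower-priority flows (not shown) keep local L2
# and configured logical routing on the existing overlay fabric.
def install_gateway_divert(dp, gw_mac, flip_tunnel_port):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch(eth_dst=gw_mac)
    actions = [parser.OFPActionOutput(flip_tunnel_port)]
    dp.send_msg(parser.OFPFlowMod(
        datapath=dp, priority=200, match=match,
        instructions=[parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, actions)]))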

Adding support for L2VPN EVPN
(https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the
Ryu BGP speaker would allow the hypervisor-side Ryu application to advertise
reachability information up to the FLIP system, taking into account VM
failover, live migration, and supported encapsulation types. We believe that
decoupling tunnel endpoint discovery from the control plane (Nova/Neutron)
will provide a more robust solution, as well as allow for use outside of
OpenStack if desired.
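
Once that support lands, the hypervisor-side advertisement could look
something like the sketch below; the evpn_prefix_add() call and its
parameters are our assumption about a future API, not something the Ryu
BGP speaker offers today:

# Sketch: advertise an EVPN MAC/IP Advertisement (type 2) route when a
# VM boots or live-migrates, so the FLIP layer learns which VTEP (and
# encapsulation) reaches it. The evpn_prefix_add() method and its
# arguments are assumed, not existing Ryu API.
from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

speaker = BGPSpeaker(as_number=64512, router_id='198.51.100.10')
speaker.neighbor_add(address='192.0.2.1', remote_as=64512)

speaker.evpn_prefix_add(
    route_type='mac_ip_adv',       # EVPN route type 2 (assumed constant)
    route_dist='65000:100',
    ethernet_tag_id=0,
    mac_addr='fa:16:3e:aa:bb:cc',  # VM port MAC
    ip_addr='10.0.0.5',            # VM fixed IP
    next_hop='198.51.100.10')      # this hypervisor's VXLAN VTEP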



