Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-18 Thread A, Keshava
Hi Thomas,

Basically, as per your thought, extend the 'vpn-label' to OVS itself, so that when an MPLS-over-GRE packet comes from OVS, that incoming label is used to index the respective VPN table at the DC-Edge side?

Questions:
1. Who tells OVS which label to use?
        Are you thinking of having a BGP-VPN session between the DC-Edge and the Compute Node (OVS), so that OVS itself looks at the BGP-VPN table and, based on the destination, pushes that VPN label as the MPLS label?
        OR
        Will the ODL or OpenStack controller dictate which VPN label to use to both the DC-Edge and the CN (OVS)?

2. How much gain/advantage is there in generating MPLS from OVS, compared to terminating VXLAN on the DC-edge and originating MPLS from there?


keshava

-Original Message-
From: Thomas Morin [mailto:thomas.mo...@orange.com] 
Sent: Tuesday, December 16, 2014 7:10 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

Hi Keshava,

2014-12-15 11:52, A, Keshava :
 I have been thinking of starting MPLS right from the CN for the L2VPN/EVPN scenario also.

 Below are my queries w.r.t. supporting MPLS from OVS:
 1. Will MPLS be used even for VM-VM traffic across CNs, generated by OVS?

If E-VPN is used only to interconnect outside of a Neutron domain, then MPLS does not have to be used for traffic between VMs.

If E-VPN is used inside one DC for VM-VM traffic, then MPLS is only *one* of the possible encapsulations: E-VPN specs have been defined to use VXLAN (handy because there is native kernel support); MPLS/GRE and MPLS/UDP are other possibilities.
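
To illustrate the native-kernel-support point: a VXLAN VTEP is just a Linux netdev that a dataplane agent can create programmatically. A minimal sketch using pyroute2 (device names, VNI and group address are invented for illustration; this is not bagpipe-bgp code):

from pyroute2 import IPRoute

# Sketch: create a VXLAN device backed by the native Linux kernel dataplane,
# the same mechanism an E-VPN implementation can reuse. All values illustrative.
ipr = IPRoute()
underlay = ipr.link_lookup(ifname='eth0')[0]   # underlay interface index
ipr.link('add', ifname='vxlan100', kind='vxlan',
         vxlan_id=100,               # VNI carried on the wire
         vxlan_group='239.1.1.1',    # multicast group for BUM traffic
         vxlan_port=4789,            # IANA-assigned VXLAN UDP port
         vxlan_link=underlay)
ipr.link('set', ifname='vxlan100', state='up')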

 2. Will MPLS be originated right from OVS and mapped at the Gateway (it may be a NN/hardware router) to the SP network?
 So MPLS will carry 2 labels? (one for hop-by-hop, and the other one, end to end, to identify the network?)

On "will carry 2 labels?": this would be one possibility, but not the one we target.
We would actually favor MPLS/GRE (GRE used instead of what you call the MPLS hop-by-hop label) inside the DC -- this requires only one label.
At the DC edge gateway, depending on the interconnection technique used to connect to the WAN, different options can be used (RFC 4364 section 10): option A with back-to-back VRFs (no MPLS label, but typically VLANs); option B (with one MPLS label); a mix of A/B, sometimes called option D (one label), is also possible; option C also exists, but is not a good fit here.

Inside one DC, if vswitches see each other across an Ethernet segment, we can 
also use MPLS with just one label (the VPN label) without a GRE encap.

In a way, you can say that in option B the labels are mapped at the DC/WAN gateway(s), but this is really just MPLS label swapping, not to be misunderstood as mapping a DC label space to a WAN label space (see below: the label space is local to each device).
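
To make that distinction concrete, a toy sketch of an option-B swap table (all labels and peer names invented; purely illustrative, not from any implementation):

# Toy option-B gateway behaviour: a per-route label swap, not a translation
# between a 'DC label space' and a 'WAN label space'. Values are made up.
swap_table = {
    # incoming label we advertised: (label learned from the BGP peer, next hop)
    42: (1017, 'wan-peer-1'),
    43: (2044, 'wan-peer-2'),
}

def swap(in_label):
    out_label, next_hop = swap_table[in_label]
    return out_label, next_hop

print(swap(42))   # -> (1017, 'wan-peer-1')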


 3. Will MPLS go over even the physical network infrastructure also?

The use of MPLS/GRE means we are doing an overlay, just like your typical VXLAN-based solution, and the physical network infrastructure does not need to be MPLS-aware (it just needs to be able to carry IP traffic).

 4. How will the labels be mapped across the virtual and physical worlds?

(I don't get the question, I'm not sure what you mean by mapping labels)

 5. Who manages the label space? The virtual world or the physical world, or both? (OpenStack + ODL?)

In MPLS*, the label space is local to each device: a label is downstream-assigned, i.e. allocated by the receiving device for a specific purpose (e.g. forwarding in a VRF). It is then (typically) advertised in a routing protocol; the sender device will use this label to send traffic to the receiving device for this specific purpose.  As a result, a sender device may use label 42 to forward traffic in the context of VPN X to a receiving device A, the same label 42 to forward traffic in the context of another VPN Y to another receiving device B, and locally use label 42 to receive traffic for VPN Z.  There is no global label space to manage.
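
A minimal Python sketch of this downstream-assignment behaviour (purely illustrative; device names and label values are invented):

# Each *receiving* device allocates labels from its own local space and
# advertises them (e.g. via BGP); senders just use whatever they were told.
class Device:
    def __init__(self, name):
        self.name = name
        self._next = 42        # local label space; the start value is arbitrary
        self.in_labels = {}    # label -> purpose, meaningful only on this device

    def allocate(self, purpose):
        label = self._next
        self._next += 1
        self.in_labels[label] = purpose
        return label           # the value a sender must use toward *this* device

a, b = Device('A'), Device('B')
# A and B can both hand out label 42, for different VPNs, without any conflict:
assert a.allocate('VPN-X') == 42
assert b.allocate('VPN-Y') == 42
# A sender uses (device A, label 42) for VPN X and (device B, label 42) for VPN Y.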

So, while you can design a solution where the label space is managed in a 
centralized fashion, this is not required.

You could design an SDN controller solution where the controller would manage one label space common to all nodes, or all the label spaces of all forwarding devices, but I think it's hard to derive any interesting property from such a design choice.

In our BaGPipe distributed design (and this is also true in OpenContrail, for instance), the label space is managed locally on each compute node (or network node, if the BGP speaker is on a network node), more precisely by the VPN implementation.

If you take a step back, the only naming space that has to be managed in BGP VPNs is the Route Target space, and this is only in the control plane.

Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-16 Thread Thomas Morin
- ingress DC overlay: one MPLS-over-GRE hop from vswitch to DC edge
- ingress DC edge to WAN: one MPLS label (VPN label advertised by eBGP)
- inside the WAN: (typically) two labels (e.g. LDP label to reach the remote edge, and VPN label advertised via iBGP)
- WAN to egress DC edge: one MPLS label (VPN label advertised by eBGP)
- egress DC overlay: one MPLS-over-GRE hop from DC edge to vswitch
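
Rendered as a toy data structure (leg names as above; header names are illustrative):

# Toy view of the encapsulation stack carried on each leg of the path.
legs = {
    'ingress DC overlay (vswitch -> DC edge)': ['outer IP', 'GRE', 'VPN label', 'tenant packet'],
    'inside the WAN':                          ['LDP label', 'VPN label', 'tenant packet'],
    'WAN -> egress DC edge':                   ['VPN label', 'tenant packet'],
    'egress DC overlay (DC edge -> vswitch)':  ['outer IP', 'GRE', 'VPN label', 'tenant packet'],
}
for leg, stack in legs.items():
    print('%-42s %s' % (leg, ' | '.join(stack)))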

Not sure how the above answers your questions; please keep asking if it does not! ;)


-Thomas







Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-16 Thread Thomas Morin

Hi Ryan,

Mathieu Rohon :

We have been working on similar use cases, announcing /32s with the Bagpipe BGPSpeaker, which supports EVPN.


Btw, the code for the BGP E-VPN implementation is at
https://github.com/Orange-OpenSource/bagpipe-bgp
It reuses parts of ExaBGP (to which we contributed encodings for E-VPN and IP VPNs) and relies on the native Linux kernel VXLAN implementation for the E-VPN dataplane.


-Thomas




Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-15 Thread Mathieu Rohon
Hi Ryan,

We have been working on similar use cases, announcing /32s with the Bagpipe BGPSpeaker, which supports EVPN.
Please have a look at use case B in [1][2].
Note also that the L2population mechanism driver for ML2, which is compatible with OVS, Linuxbridge and the ryu ofagent, is inspired by EVPN, and I'm sure it could help in your use case [3].

[1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2]https://www.youtube.com/watch?v=q5z0aPrUZYcsns
[3]https://blueprints.launchpad.net/neutron/+spec/l2-population

Mathieu



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-15 Thread A, Keshava
Mathieu,
I have been thinking of starting MPLS right from the CN for the L2VPN/EVPN scenario also.

Below are my queries w.r.t. supporting MPLS from OVS:
1. Will MPLS be used even for VM-VM traffic across CNs, generated by OVS?
2. Will MPLS be originated right from OVS and mapped at the Gateway (it may be a NN/hardware router) to the SP network?
		So MPLS will carry 2 labels? (one for hop-by-hop, and the other one, end to end, to identify the network?)
3. Will MPLS go over even the physical network infrastructure also?
4. How will the labels be mapped across the virtual and physical worlds?
5. Who manages the label space? The virtual world or the physical world, or both? (OpenStack + ODL?)
6. Will the labels be nested (i.e., will L3-VPN-like end-to-end MPLS connectivity be established)?
7. Or will it be label stitching between the virtual and physical networks?
		How will the end-to-end path be set up?

Let me know your opinion on the same.

regards,
keshava



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-10 Thread Stephen Balukoff
Hi Keshava,

For the purposes of Octavia, it's going to be service VMs (or containers or
what have you). However, service VM or tenant VM, the concept is roughly
similar: we need some kind of layer-3 routing capability which works
something like Neutron floating IPs (though not just a static NAT in this
case) but which can distribute traffic to a set of back-end VMs running on
a Neutron network according to some predictable algorithm (probably a
distributed hash).

The idea behind ACTIVE-ACTIVE is that you have many service VMs (we call
them amphorae) which service the same public IP in some way -- this allows
for horizontal scaling of services which need it (i.e. anything which does
TLS termination with a significant amount of load). A toy sketch of such a
mapping follows.
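
As a rough sketch of the kind of "predictable algorithm" this implies, here is a toy consistent-hash ring in Python (amphora names and the flow 5-tuple are invented; this is not Octavia code):

import hashlib
from bisect import bisect

class HashRing:
    # Maps a flow 5-tuple to one of many equally-active amphorae, so any
    # front-end node can compute the same mapping statelessly.
    def __init__(self, nodes, vnodes=64):
        self._ring = sorted((self._h('%s#%d' % (n, i)), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _h(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, flow):
        i = bisect(self._keys, self._h('|'.join(map(str, flow))))
        return self._ring[i % len(self._ring)][1]

ring = HashRing(['amphora-1', 'amphora-2', 'amphora-3'])
flow = ('203.0.113.7', 51512, '198.51.100.20', 443, 'tcp')  # src, sport, dst, dport, proto
print(ring.node_for(flow))  # the same flow always lands on the same amphora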

Does this make sense to you?

Thanks,
Stephen



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-10 Thread Jason Kölker
On Mon, Dec 8, 2014 at 7:57 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 I'll be traveling around the time of the L3 meeting this week.  My
 flight leaves 40 minutes after the meeting and I might have trouble
 attending.  It might be best to put it off a week or to plan another
 time -- maybe Friday -- when we could discuss it in IRC or in a
 Hangout.

Carl,

Very glad to see the work the L3 team has been doing towards this. I'm
still digesting the specs/blueprints, but as you stated they are very much
in the direction we'd like to head as well. I'll start lurking in the L3
meetings to get more familiar with the current state of things, as I've
been disconnected from upstream for a while. I'm `jkoelker` on freenode or
`jkoel...@gmail.com` for hangouts if you wanna chat.

Happy Hacking!

7-11



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread Stephen Balukoff
For what it's worth, I know that the Octavia project will need something
which can do more advanced layer-3 networking in order to deliver an
ACTIVE-ACTIVE topology of load balancing VMs / containers / machines.
That's still a down-the-road feature for us, but it would be great to be
able to do more advanced layer-3 networking in earlier releases of Octavia
as well. (Without this, we might have to go through back doors to get
Neutron to do what we need it to, and I'd rather avoid that.)

I'm definitely up for learning more about your proposal for this project,
though I've not had any practical experience with Ryu yet. I would also
like to see whether it's possible to do the sort of advanced layer-3
networking you've described without using OVS. (We have found that OVS
tends to be not quite mature / stable enough for our needs and have moved
most of our clouds to use ML2 / standard linux bridging.)

Carl:  I'll also take a look at the two gerrit reviews you've linked. Is
this week's L3 meeting not happening then? (And man-- I wish it were an
hour or two later in the day. Coming at y'all from PST timezone here.)

Stephen


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-08 Thread A, Keshava
Stephen,

Interesting to know what an “ACTIVE-ACTIVE topology of load balancing VMs” is.
What is the scenario: is it a Service-VM (of NFV) or a Tenant VM?
Curious to know the background of these thoughts.

keshava



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-07 Thread Carl Baldwin
Ryan,

I have been working with the L3 sub team in this direction.  Progress has
been slow because of other priorities but we have made some.  I have
written a blueprint detailing some changes needed to the code to enable the
flexibility to one day run floating IPs on an L3 routed network [1].  Jaime
has been working on one that integrates Ryu (or other speakers) with
Neutron [2].  DVR was also a step in this direction.

I'd like to invite you to the L3 weekly meeting [3] to discuss further.
I'm very happy to see interest in this area and to have someone new to
collaborate with.

Carl

[1] https://review.openstack.org/#/c/88619/
[2] https://review.openstack.org/#/c/125401/
[3] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
On Dec 3, 2014 4:04 PM, Ryan Clevenger ryan.cleven...@rackspace.com
wrote:

   Hi,

  At Rackspace, we have a need to create a higher-level networking service
 primarily for the purpose of creating a Floating IP solution in our
 environment. The current solutions for Floating IPs, being tied to plugin
 implementations, do not meet our needs at scale for the following reasons:

  1. Limited endpoint H/A mainly targeting failover only and not
 multi-active endpoints,
 2. Lack of noisy neighbor and DDOS mitigation,
 3. IP fragmentation (with cells, public connectivity is terminated inside
 each cell leading to fragmentation and IP stranding when cell CPU/Memory
 use doesn't line up with allocated IP blocks. Abstracting public
 connectivity away from nova installations allows for much more efficient
 use of those precious IPv4 blocks).
 4. Diversity in transit (multiple encapsulation and transit types on a per
 floating ip basis).

  We realize that network infrastructures are often unique and such a
 solution would likely diverge from provider to provider. However, we would
 love to collaborate with the community to see if such a project could be
 built that would meet the needs of providers at scale. We believe that, at
 its core, this solution would boil down to terminating north-south
 traffic temporarily at a massively horizontally scalable centralized core
 and then encapsulating traffic east-west to a specific host based on the
 association set up via the current L3 router extension's 'floatingips'
 resource.

  Our current idea involves using Open vSwitch for header rewriting and
 tunnel encapsulation combined with a set of Ryu applications for management:

  https://i.imgur.com/bivSdcC.png

  The Ryu application uses Ryu's BGP support to announce up to the Public
 Routing layer individual floating IPs (/32's or /128's) which are then
 summarized and announced to the rest of the datacenter. If a particular
 floating IP is experiencing unusually large traffic (DDOS, slashdot effect,
 etc.), the Ryu application could change the announcements up to the Public
 layer to shift that traffic to dedicated hosts set up for that purpose. It
 also announces a single /32 Tunnel Endpoint IP downstream to the
 TunnelNet Routing system which provides transit to and from the cells and
 their hypervisors. Since traffic from either direction can then end up on
 any of the FLIP hosts, a simple flow table to modify the MAC and IP in
 either the SRC or DST fields (depending on traffic direction) allows the
 system to be completely stateless. We have proven this out (with static
 routing and flows) to work reliably in a small lab setup.
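
For concreteness, a rough sketch of the two Ryu pieces described above -- announcing a floating IP as a /32 via Ryu's BGP speaker, and the stateless MAC/IP rewrite flow -- with every address, port number and ASN invented for illustration (this is not Rackspace's code):

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

FLIP = '198.51.100.10'         # public floating IP (illustrative)
VM_IP = '10.1.2.3'             # backing VM's fixed IP (illustrative)
VM_MAC = 'fa:16:3e:00:00:01'   # backing VM's MAC (illustrative)

# 1) Announce the floating IP as a /32 up to the public routing layer.
speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1')
speaker.neighbor_add(address='192.0.2.254', remote_as=64512)
speaker.prefix_add(prefix=FLIP + '/32', next_hop='192.0.2.1')

class FlipRewriter(app_manager.RyuApp):
    # 2) North->south rewrite on a FLIP host: swap the floating IP/MAC for
    # the VM's fixed IP/MAC and push the packet toward the tunnel port.
    # A mirror flow (not shown) rewrites the source fields south->north,
    # which is what keeps the FLIP hosts completely stateless.
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_flows(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=FLIP)
        actions = [parser.OFPActionSetField(ipv4_dst=VM_IP),
                   parser.OFPActionSetField(eth_dst=VM_MAC),
                   parser.OFPActionOutput(2)]           # tunnel port, illustrative
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))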

  On the hypervisor side, we currently plumb networks into separate OVS
 bridges. Another Ryu application would control the bridge that handles
 overlay networking to selectively divert traffic destined for the default
 gateway up to the FLIP NAT systems, taking into account any configured
 logical routing and local L2 traffic to pass out into the existing overlay
 fabric undisturbed.

  Adding in support for L2VPN EVPN (
 https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
 Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to
 the Ryu BGP speaker will allow the hypervisor-side Ryu application to
 advertise reachability information up to the FLIP system, taking into
 account VM failover, live-migration, and supported encapsulation types. We
 believe that decoupling the tunnel endpoint discovery from the control
 plane (Nova/Neutron) will provide for a more robust solution as well as
 allow for use outside of OpenStack if desired.

  

 Ryan Clevenger
 Manager, Cloud Engineering - US
 m: 678.548.7261
 e: ryan.cleven...@rackspace.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev