From: Pedro Roque Marques <pedro.r.marq...@gmail.com>
Colin,
"The nice thing about standards is that there are so many of them to choose 
from."

For instance, take this Internet Draft:
http://tools.ietf.org/html/draft-ietf-l3vpn-end-system-02, which is based on
RFC 4364.

It has already been implemented as a Neutron plugin via OpenContrail
(http://juniper.github.io/contrail-vnc/README.html). With this implementation,
each OpenStack cluster can be configured as its own Autonomous System.
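
To illustrate that the Neutron API surface is unchanged for the tenant, here is
a minimal sketch using python-neutronclient; the credentials, endpoint, and
addresses are placeholders, and the mapping of the network onto an L3VPN
instance is assumed to be done by the plugin, not by the caller:

    from neutronclient.v2_0 import client

    # Placeholder credentials and endpoint; adjust for your environment.
    neutron = client.Client(username='admin',
                            password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # A plain Neutron virtual network and subnet; the plugin, not the API
    # user, is responsible for mapping this onto a BGP/MPLS VPN instance.
    net = neutron.create_network({'network': {'name': 'blue'}})
    neutron.create_subnet({'subnet': {'network_id': net['network']['id'],
                                      'ip_version': 4,
                                      'cidr': '10.1.0.0/24'}})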

There is a blueprint,
https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-mpls-vpn,
that discusses adding the provisioning of the autonomous system and its peering
to Neutron.

Please note that the work above does interoperate with RFC 4364 using option B.
Option C is possible but not that practical (as an operator you probably don't
want to expose your internal topology between clusters).

If you want to give it a try, you can use this devstack fork:
https://github.com/dsetia/devstack.
You can use it to interoperate with a standard router that implements RFC 4364
and supports MPLS over GRE. Products from Cisco/Juniper/ALU/Huawei etc. do.

I believe that the work I'm referencing implements interoperability with
minimal changes to Neutron. It is based on the existing concept of a Neutron
virtual network, and it hides the BGP/MPLS functionality from the user by
translating policies that establish connectivity between virtual networks into
RFC 4364 concepts.
Please refer to: 
https://blueprints.launchpad.net/neutron/+spec/policy-extensions-for-neutron
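
As a rough illustration of that translation (the function and data shapes below
are invented for this example, not the plugin's actual code), a policy that
allows connectivity between two virtual networks can be expressed as matching
import/export route targets on each network's VRF, in RFC 4364 terms:

    # Hypothetical sketch only: a connectivity policy between networks becomes
    # per-network import/export route targets; the peer's export target is
    # imported by every other member of the policy.
    def policy_to_route_targets(policy, asn=64512):
        rts = {}
        for idx, net in enumerate(policy['networks']):
            rts[net] = {'export': ['target:%d:%d' % (asn, 1000 + idx)],
                        'import': []}
        # Each network imports what every other network in the policy exports.
        for net, targets in rts.items():
            for other, other_targets in rts.items():
                if other != net:
                    targets['import'].extend(other_targets['export'])
        return rts

    print(policy_to_route_targets({'networks': ['blue', 'red']}))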

Would it make sense to have an IRC/web meeting around interoperability between
RFC 4364 and OpenStack-managed clusters? I believe that a lot of work has
already been done there by multiple vendors as well as some carriers.

+1. And it should be scheduled and announced a reasonable time in advance so
that developers can plan to participate.

--Rocky

  Pedro.

On Nov 7, 2013, at 12:35 AM, Colin McNamara <co...@2cups.com> wrote:

I have a couple of concerns that I don't feel I communicated clearly during the
L3 advanced features session. I'd like to take this opportunity both to state
my thoughts clearly and to start a discussion around them.

Building to the edge of the "autonomous system"

In its current state, Neutron functionally implements the L2 domain and the
simple L3 services that are part of a larger autonomous system. The routers and
switches northbound of the OpenStack networking layer handle the abstraction
and integration of the components.

Note: I use the term "Autonomous System" to describe more than the notion of a
BGP AS; I use it more broadly for a system that is controlled within a common
framework and methodology and that integrates with a peer system that does not
share that same scope or method of control.

The components that compose the autonomous system boundary implement protocols
that map onto IETF and IEEE standards. The reasoning for this is
interoperability. Before vendors adopted IETF standards for interoperability at
this layer, the provider experience was horrible (this was my personal
experience in the late '90s).


Wednesday's discussions in the Neutron design sessions

A couple of the discussions, most notably the extension of L3 functionality,
fell within the scope of starting to extend Neutron with functionality that
will eventually allow an OpenStack installation to operate as its own
Autonomous System.

The discussions around advanced L3 functionality (the northbound boundary) and
the QoS extension functionality both fell into the scope of the northbound and
southbound boundaries of this system.

My comments in the session

My comments in the session, while clouded by jet lag, were specifically about
two concepts that are used when integrating these types of systems:

1. In a simple (1-8 tenant) environment, integration with a northbound AS is
normally done in a PE-CE model that generally centers on mapping dot1q tags
into the appropriate northbound L3 segments and then handling the availability
of the traversed L2 path with port channeling, MLAG, STP, etc. (a sketch of the
tag mapping follows this list).

2. In a complex environment (8+ tenants, for discussion), the Carrier
Supporting Carrier (CSC) / inter-AS methods defined in IETF RFC 4364 Section 10
(options A, B, or C) are used. These allow segregated tenant networks to be
mapped together and synchronized between distributed systems. This normally
extends the tagging or tunneling mechanism and then relies on BGP to
synchronize NLRI between ASes (a sketch of the route information exchanged
follows this list).
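
Sketch for item 1: in Neutron terms, the dot1q mapping is typically expressed
with the provider-network extension. This assumes admin credentials and a
plugin that supports provider VLAN networks; 'physnet1', VLAN 101, and the
endpoint are placeholders:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Map a tenant network onto dot1q tag 101 on the physical network that
    # connects to the northbound PE.
    neutron.create_network({'network': {
        'name': 'tenant-red',
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',
        'provider:segmentation_id': 101,
    }})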
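Sketch for item 2: conceptually (this is not a BGP implementation), the
VPN-IPv4 routes exchanged between ASes under option B each carry roughly the
following pieces of information per tenant prefix; the values are illustrative
only:

    vpn_route = {
        'route_distinguisher': '64512:10',       # makes the tenant prefix globally unique
        'prefix': '10.1.0.0/24',                 # tenant subnet inside the VRF
        'label': 299776,                         # MPLS label allocated by the advertising PE/ASBR
        'route_targets': ['target:64512:1000'],  # extended communities controlling VRF import
        'next_hop': '192.0.2.1',                 # rewritten to the ASBR at the AS boundary
    }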

These PE-CE and inter-AS approaches are the standard ways of integrating
between carriers, but components of these implementations are also used to
integrate and scale inside a single web-scale data center. Commonly, when you
scale beyond a certain physical port boundary (roughly 1000 edge ports in many
implementations, much larger in current ones), the same carrier-to-carrier
integration designs are used to create network availability zones inside a
web-scale data center.

Support for these IETF and IEEE standard integrations is necessary for
brownfield installations

In a greenfield installation, diverging from IETF and IEEE standards on the
northbound edge, while not a great idea, can still result in a functional
implementation. In a brownfield installation, however, OpenStack Neutron will
be integrated into an existing network core. This boundary layer is where we
move from a controlled system into a distributed system. To cleanly integrate
into that system, IETF and IEEE protocols and standards have to be followed.


When we diverge from this standards-based integration at the north edge of our
autonomous system, we lose the ability to integrate without introducing major
changes (and risk) into our core. In my experience this is sufficient to either
slow or stall adoption. This is a major risk, but one that I believe can be
mitigated.

My thoughts on mitigating this risk

We need to at least map and track the relevant IETF RFCs that define the
Internet standards for integration at the AS boundary. I know that many of the
network vendor developers who contribute to Neutron have access to people who
both have deep knowledge of these standards and participate in the IETF working
groups. I would hope that these resources could be leveraged to, at the least,
give a sanity check and, at best, ensure a compliant northbound interface to
other systems.

Side benefit of engaging IETF members in this discussion

The other side benefit is that inventions inside Neutron can also be
communicated as standards to the rest of the world in the form of net-new RFCs.
With OVS this has already happened: as OVS has emerged as a common component in
many network devices, the need to establish and reference a common standard has
reared its head. I would expect inventions within Neutron to follow the same
path.

Regards,
Colin
Colin McNamara
People | Process | Technology
--------------------------------------------
Mobile: 858-208-8105
Twitter: @colinmcnamara (http://www.twitter.com/colinmcnamara)
LinkedIn: http://www.linkedin.com/colinmcnamara
Blog: http://www.colinmcnamara.com
Email: co...@2cups.com





_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
