I thought Monday's network meeting agreed that “VLAN aware VMs” and trunk network 
+ L2GW were different use cases.

Still, I get the feeling that the proposals are being pitted against each other.

Here are some examples of why, in my opinion, bridging between Neutron internal 
networks using a trunk network and L2GW should be avoided. I am still fine with 
bridging to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN. The 
router then has to be created on a Neutron network behind the L2GW, since a 
Neutron router cannot handle VLANs. (Maybe not too common a use case, but it 
shows the kind of issues you can get into.)

neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID

The code that checks whether the port is valid has to be able to traverse the 
L2GW. Handling of the VM's IP addresses will most likely also be affected, since 
the VM port is connected to several broadcast domains. Alternatively, a new API 
could be created.
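
As a rough sketch of what this stitching would look like on the CLI (the 
l2-gateway command is hypothetical, since that API only exists as a spec; 
network and router names are placeholders):

  # Router has to live on a plain Neutron network behind the gateway
  neutron net-create rtr-net
  neutron subnet-create --name rtr-subnet rtr-net 10.0.0.0/24
  neutron router-create rtr1
  neutron router-interface-add rtr1 rtr-subnet
  # Hypothetical command: bridge rtr-net onto VID 100 of the trunk network
  neutron l2-gateway-create --network rtr-net --trunk-network trunk-net --vid 100

The floatingip-associate call above must then validate INTERNAL_VM_PORT_ID 
across that gateway.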

In “VLAN aware VMs” the trunk port MAC address has to be globally unique, since 
it can be connected to any network; other ports still only have to be unique per 
network. But with L2GW, all MAC addresses have to be globally unique, since the 
networks might be bridged together at a later stage. Also, some implementations 
might not be able to take the VID into account when doing MAC address learning, 
forcing at least unique MACs on a trunk network.
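
For comparison, Open vSwitch does key its learning table on (VLAN, MAC), so the 
same MAC on two VIDs can coexist there; the concern is implementations that 
learn on MAC alone. On an OVS node this can be inspected directly (bridge name 
br-int assumed):

  $ sudo ovs-appctl fdb/show br-int
   port  VLAN  MAC                Age
      1   100  fa:16:3e:01:02:03    5
      2   200  fa:16:3e:01:02:03    2

With MAC-only learning those two entries would collide, and traffic for that 
MAC would flap between ports 1 and 2.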

Benefits of “VLAN aware VMs”: integration with existing Neutron services.
Benefits of trunk networks: lower consumption of Neutron networks and less 
management per VLAN.
Benefits of L2GW: ease of network stitching.
There are other benefits to the different proposals; the point is that it 
might be beneficial to have all of these solutions.

Platforms that have issues forking off VLANs at the VM port level could get 
around that with trunk network + L2GW, at the cost of more hacks if integration 
with other parts of Neutron is needed. Platforms that have issues implementing 
trunk networks could get around that using “VLAN aware VMs”, but would be forced 
to separately manage every VLAN as a Neutron network (a sketch of what that 
means follows below). On platforms that support both, the user can select a 
method depending on what is needed.
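
To make the per-VLAN management overhead concrete, this is roughly what it 
means on the CLI (names and CIDRs are placeholders; under the “VLAN aware VMs” 
proposal each of these networks would then be mapped to a VID on the VM's 
trunk port):

  # One Neutron network + subnet for every VLAN the VM should see
  for VID in 100 101 102; do
    neutron net-create vm-net-$VID
    neutron subnet-create --name vm-subnet-$VID vm-net-$VID 10.$VID.0.0/24
  done

With a trunk network, all of those VIDs ride a single Neutron resource instead.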

Thanks,
Erik



From: Armando M. [mailto:arma...@gmail.com]
Sent: 28 October 2014 19:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Sorry for jumping into this thread late... there are lots of details to process, 
and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward, at 
the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I think 
it is sensible to adopt the latest spec system we have been using to 
understand where we are, and I mean Gerrit and the spec submissions.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging 
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor-specific 
blueprints for now.

When I look at these, I clearly see that we jump all the way to implementation 
details. From an architectural point of view, this does not make a lot of 
sense.

In order to ensure that everyone is on the same page, I would suggest having a 
discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible interactions 
that an actor (i.e. the tenant or the admin) can have with the system (an 
OpenStack deployment) when these NFV-enabling capabilities are available? What 
are the observed outcomes once these interactions have taken place?

- Management API: what abstractions do we expose to the tenant or admin (do we 
augment the existing resources, create new resources, or do both)? This should 
obviously be driven by a set of use cases, and we need to identify the minimum 
set of logical artifacts that would let us meet the needs of the widest set of 
use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if 
anything, so that we can implement these NFV-enabling constructs successfully? 
Are there any changes to the core L2 API? Are there any changes required to the 
core framework (scheduling, policy, notifications, data model, etc.)?

- Add support to the existing plugin backends: the openvswitch reference 
implementation is an obvious candidate, but other plugins may want to leverage 
the newly defined capabilities too. Once the above-mentioned points have been 
fleshed out, it should be fairly straightforward to have these efforts progress 
autonomously.

IMO, until we get a full understanding of the aspects above, I don't believe 
the core team is in the best position to determine the best approach forward. 
I think it's in everyone's interest to make sure that something cohesive comes 
out of this; the worst possible outcome is no progress at all, or even worse, 
some Frankenstein system that no one really knows what it does or how it can 
be used.

I will go over the specs one more time in order to identify some answers to my 
points above. I hope someone can help me through the process.


Many thanks,
Armando
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
