Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread A, Keshava

Hi Ian/Erik,

If the Service-VM contains multiple features through which packets need to be 
processed one after another.
Example: when a packet enters the Service-VM from the external network via OpenStack, 
it should first be processed by vNAT, and only after that should the packet be 
processed by the DPI function.

How is the chaining of execution controlled for each packet entering the NFV 
service VM?



1.   Should each feature's execution in the Service-VM be controlled by 
OpenStack, by having nested Q-in-Q (where each inner tag maps to the corresponding 
feature in that Service/NFV VM)?

Or

2.   Should it be signalled to the Service-VM by the service layer (outside 
OpenStack), with the execution chain then handled internally by that 
Service-VM itself, transparently to OpenStack?

Or is the thinking different here?
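To make option 1 concrete, here is a minimal sketch (the VID-to-feature mapping is invented purely for illustration) of how a nested Q-in-Q tag could select the feature chain inside the Service-VM:

```python
# Hypothetical sketch of option 1: the inner (Q-in-Q) VLAN tag selects
# which feature chain the Service-VM applies to the packet.
# The mappings below are invented for illustration only.

FEATURE_CHAINS = {
    100: ["vNAT", "DPI"],   # inner VID 100: vNAT first, then DPI
    200: ["DPI"],           # inner VID 200: DPI only
}

def process(inner_vid, packet):
    """Run the packet through the features mapped to its inner VLAN ID."""
    for feature in FEATURE_CHAINS.get(inner_vid, []):
        packet = f"{feature}({packet})"   # stand-in for real processing
    return packet
```

Untagged or unmapped VIDs simply pass through unmodified, which matches option 2's "transparent to OpenStack" behaviour.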

Thanks & regards,
Keshava

From: Erik Moe [mailto:erik@ericsson.com]
Sent: Monday, November 03, 2014 3:42 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: 31 October 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe erik@ericsson.com wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk networks 
and L2GW were different use cases.
Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples why bridging between Neutron internal networks using 
trunk network and L2GW IMO should be avoided. I am still fine with bridging to 
external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN. The 
router has to be created on a Neutron network behind the L2GW, since the Neutron 
router cannot handle VLANs. (Maybe not too common a use case, but it shows what 
kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the L2GW. 
Handling of the VM's IP addresses will most likely be affected, since the VM port 
is connected to several broadcast domains. Alternatively, a new API can be created.
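As a sketch of the problem (all names here are hypothetical, not real Neutron internals), the association check would have to follow the L2GW bridge to find out whether a router is actually reachable from the port's network:

```python
# Hypothetical sketch: validating a floating-IP association when the
# target port sits behind an L2 gateway bridging two Neutron networks.
# None of these names are real Neutron internals.

l2gw_bridges = [("net-a", "net-b")]   # L2GW bridges net-a and net-b
router_networks = {"net-b"}           # networks that have a Neutron router

def reachable_networks(network_id):
    """Networks reachable at L2 from network_id, following L2GW bridges."""
    seen = {network_id}
    frontier = [network_id]
    while frontier:
        net = frontier.pop()
        for a, b in l2gw_bridges:
            if net == a and b not in seen:
                seen.add(b)
                frontier.append(b)
            elif net == b and a not in seen:
                seen.add(a)
                frontier.append(a)
    return seen

def can_associate_floating_ip(port_network):
    # Plain Neutron only checks the port's own network; with an L2GW in
    # the path, the check must traverse the bridge as well.
    return bool(reachable_networks(port_network) & router_networks)
```

The point being made above is that this traversal does not exist today, so either the validation code or the API would have to change.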

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the 
same time, IMO they should be encouraged to become Neutronified.
In “VLAN aware VMs” the trunk port MAC address has to be globally unique, since it 
can be connected to any network; other ports still only have to be unique per 
network. But for L2GW all MAC addresses have to be globally unique, since they 
might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses 
among Neutron networks. But I guess this is configurable.
Also, some implementations might not be able to take the VID into account when 
doing MAC address learning, forcing at least unique MACs on a trunk network.
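For reference, the prefix Neutron uses when generating port MAC addresses is set in neutron.conf (a minimal fragment, showing the upstream default); how uniqueness plays out across bridged networks then depends on deployment configuration:

```ini
[DEFAULT]
# Prefix from which Neutron generates port MAC addresses.
# The value shown is the upstream default.
base_mac = fa:16:3e:00:00:00
```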

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread Erik Moe

Hi,

I have reserved the last slot on Friday.

https://etherpad.openstack.org/p/neutron-kilo-meetup-slots

/Erik


From: Richard Woo [mailto:richardwoo2...@gmail.com]
Sent: 3 November 2014 23:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hello, will this topic be discussed in the design session?

Richard

On Mon, Nov 3, 2014 at 10:36 PM, Erik Moe erik@ericsson.com wrote:

I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik



Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread Erik Moe

It sounds like this would match VLAN trunk.

Maybe it could be mapped to a trunk port also, but I have not really worked with 
flat networks, so I am not sure what DHCP etc. looks like.

Is it desired to be able to control port membership of VLANs or is it ok to 
connect all VLANs to all ports?

/Erik


From: Wuhongning [mailto:wuhongn...@huawei.com]
Sent: 4 November 2014 03:41
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Is the trunk port use case like the super vlan?

Also, there is another typical use case that is maybe not covered: the extended 
flat network. Traffic on the port carries multiple VLANs, but these VLANs are not 
necessarily managed by Neutron, so they cannot be classified to a trunk port. 
And they don't need a gateway to communicate with other nodes in the physical 
provider network; what they expect Neutron to do is much like what the flat 
network does (so I call it extended flat): just keep the packets as-is, 
bidirectionally, between wire and vNIC.
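A tiny sketch (all names invented) of the distinction being drawn: a trunk port demultiplexes Neutron-managed VIDs to subports, while the proposed "extended flat" behaviour would pass any tag through untouched:

```python
# Invented names; sketch of the trunk-port vs "extended flat" distinction.

NEUTRON_MANAGED_VIDS = {101, 102}   # VIDs Neutron knows how to classify

def handle(vid):
    """Decide how a tagged frame arriving on the port is treated."""
    if vid in NEUTRON_MANAGED_VIDS:
        return "demux-to-subport"   # trunk-port behavior
    return "pass-through"           # extended-flat: keep packet as-is
```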



Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread A, Keshava
Hi Ian,

I think we need to understand how VRRP and HSRP work and where they are used, 
and what the NFV problem domain is.

VRRP is used to provide redundancy for an L2 network, where the routers are 
connected to last-mile L2 devices/an L2 network.
   Since the L2 network is stateless, the active entity going down and 
the standby entity taking control is simple.
HSRP is a proprietary protocol anyway.

Here we are also talking about NFV, and its redundancy covers the L3 network as 
well (routing/signaling protocol redundancy).
For routing protocols, the “Active Routing Entity” always wants to have control 
over the “Standby Routing Entity”, so that the standby stays under the control of 
the active.
If a VRRP kind of approach is used for ‘L3 Routing Redundancy’, the active and 
standby each run independently, which is not a good model and will not provide 
five-nines redundancy.
In order to provide five-nines redundancy at the L3 network/routing NFV elements, 
it is required to run an Active and a Standby entity, where the Standby is under 
the control of the Active. VRRP will not be a good option for this.

Then it is required to run as I mentioned.
I hope you got it.
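For comparison, a minimal keepalived VRRP instance (interface name and addresses are illustrative only) shows what VRRP actually provides: on failover it only moves a virtual address, while the routing-protocol state on each peer stays independent, which is the limitation argued above:

```
vrrp_instance VI_1 {
    state MASTER              # the peer runs BACKUP; each keeps its own routing state
    interface eth0            # illustrative interface name
    virtual_router_id 51
    priority 100              # higher priority wins the master election
    advert_int 1              # advertisement interval in seconds
    virtual_ipaddress {
        192.0.2.1/24          # the only thing VRRP moves on failover
    }
}
```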

Thanks & regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Saturday, November 01, 2014 4:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Go read about HSRP and VRRP.  What you propose is akin to turning off one 
physical switch port and turning on another when you want to switch from an 
active physical server to a standby, and this is not how it's done in practice; 
instead, you connect the two VMs to the same network and let them decide which 
gets the primary address.

On 28 October 2014 10:27, A, Keshava keshav...@hp.com wrote:
Hi Alan and Salvatore,

Thanks for the response, and I also agree we need to take small steps.
However, I have the points below to make.

It is very important how the Service VM will be deployed w.r.t. HA.
As per the current discussion, you are proposing something like the below kind of 
deployment for Carrier Grade HA.
Since there is a separate port for the Standby-VM, the corresponding 
standby-VM interface address should also be globally routable.
That means it may require the standby routing protocol to advertise its interface 
as the next hop for the prefixes it routes.
However, the external world should not be aware of the standby routing running in 
the network.

[inline images: proposed HA deployment with a separate standby port]

Instead, if we can think of running the standby on the same stack with a passive 
port (as shown below), then the external world will be unaware of the standby 
service routing running.
This may be a very basic requirement of Service-VMs (from an NFV HA perspective) 
in the routing/MPLS/packet-processing domain.
I am bringing this issue up now because you are proposing to change the 
basic framework of packet delivery to VMs.
(Of course there may be other mechanisms for supporting redundancy, but they 
will not be as efficient as handling it at the packet level.)

[inline image: standby on the same stack with a passive port]


Thanks & regards,
Keshava

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 6:48 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi Salvatore

Inline below.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: October-28-14 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Keshava,

I think the thread is going a bit off its stated topic - which is to 
discuss the various proposed approaches to VLAN trunking.
Regarding your last post, I'm not sure I saw either spec implying that at the 
data plane level every instance attached to a trunk will be implemented as a 
different network stack.
AK-- Agree
Also, quoting the principle cited earlier in this thread - "make the easy 
stuff easy and the hard stuff possible" - I would say that unless five 9s is a 
minimum requirement for an NFV application, we might start worrying about it 
once we have the bare minimum set of tools for allowing an NFV application over 
a Neutron network.
AK-- Five 9's is a 100% must requirement for NFV, but let's ensure we don't mix 
up what the underlay service needs to guarantee and what OpenStack needs to do 
to enable this type of service. I would agree we should focus on having the 
right configuration sets for onboarding NFV, which is what OpenStack needs to 
ensure is exposed; what is used underneath to guarantee the five 9's is a 
separate matter.
I think Ian has done a good job in explaining that while both approaches 
considered here address trunking for NFV use cases, they propose alternative

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread A, Keshava
Hi,
What are the Service-VM (NFV element) HA architectures?
The HA architectures for L2 NFV and L3 NFV elements will be different.

Keeping that in mind, we need to expose the underlying port/interface architecture 
from the OpenStack side to the Service-VMs.

Thanks & Regards,
Keshava


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread Richard Woo
Hello, will this topic be discussed in the design session?

Richard

On Mon, Nov 3, 2014 at 10:36 PM, Erik Moe erik@ericsson.com wrote:



 I created an etherpad and added use cases (so far just the ones in your
 email).



 https://etherpad.openstack.org/p/tenant_vlans



 /Erik





 If an implementation struggles with VLANs then the logical thing to do
 would be not to implement them in that driver.  Which is fine: I would
 expect (for instance) LB-driver networking to work for this and leave
 OVS-driver networking to never work for this, because there's little point
 in fixing it.

 Same as above, this is related to reuse of MAC addresses.

  Benefits with “VLAN aware VMs” are integration with existing Neutron
 services.

 Benefits with Trunk networks are less consumption of Neutron networks,
 less management per VLAN.



 Actually, the benefit of trunk networks is:

 - if I use an infrastructure where all networks are trunks, I can find out
 that a network

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread Wuhongning
Is the trunk port use case like the super vlan?

Also, there is another typical use case that is maybe not covered: the extended 
flat network. Traffic on the port carries multiple VLANs, but these VLANs are not 
necessarily managed by Neutron, so they cannot be classified to a trunk port. 
And they don't need a gateway to communicate with other nodes in the physical 
provider network; what they expect Neutron to do is much like what the flat 
network does (so I call it extended flat): just keep the packets as-is, 
bidirectionally, between wire and vNIC.


From: Erik Moe [erik@ericsson.com]
Sent: Tuesday, November 04, 2014 5:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik



Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-02 Thread Erik Moe


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: den 31 oktober 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe erik@ericsson.com wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk
networks and L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assuming a VM with a trunk port wants to use a floating IP on a specific VLAN,
a router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but it
shows what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since
the VM port is connected to several broadcast domains. Alternatively, a new
API can be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the
same time, IMO they should be encouraged to become Neutronified.
In “VLAN aware VMs” the trunk port MAC address has to be globally unique,
since it can be connected to any network; other ports still only have to be
unique per network. But for L2GW all MAC addresses have to be globally unique,
since they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses
among Neutron networks. But I guess this is configurable.
Also, some implementations might not be able to take the VID into account
when doing MAC address learning, forcing at least unique MACs on a trunk
network.
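The learning problem can be shown with a toy forwarding table (illustrative Python only; the MAC and port names are made up): a switch that keys its table on (VID, MAC) keeps reused addresses apart, while one that keys on the MAC alone keeps overwriting the entry.

```python
# Toy forwarding database -- illustrates why MAC reuse across VLANs only
# works if the implementation keys its learning table on (VID, MAC)
# rather than on the MAC alone.

def learn(fdb, vid_aware, vid, mac, port):
    key = (vid, mac) if vid_aware else mac
    fdb[key] = port

fdb_vid_aware, fdb_mac_only = {}, {}

# Two VMs reuse the same MAC on different VLANs of one trunk network.
frames = [(100, "fa:16:3e:00:00:01", "portA"),
          (200, "fa:16:3e:00:00:01", "portB")]
for vid, mac, port in frames:
    learn(fdb_vid_aware, True, vid, mac, port)
    learn(fdb_mac_only, False, vid, mac, port)

print(len(fdb_vid_aware))  # 2: both VMs stay reachable
print(len(fdb_mac_only))   # 1: the second learn overwrote the first
```

An implementation with the second behaviour is exactly the one that forces unique MACs on a trunk network.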

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would expect (for 
instance) LB-driver networking to work for this and leave OVS-driver networking 
to never work for this, because there's little point in fixing it.

Same as above, this is related to reuse of MAC addresses.
Benefits of “VLAN aware VMs” are integration with existing Neutron services.
Benefits of trunk networks are less consumption of Neutron networks and less
management per VLAN.

Actually, the benefit of trunk networks is:
- if I use an infrastructure where all networks are trunks, I can find out that 
a network is a trunk
- if I use an infrastructure where no networks are trunks, I can find out that 
a network is not a trunk
- if I use an infrastructure where trunk networks are more expensive, my 
operator can price accordingly

And, again, this is all entirely independent of either VLAN-aware ports or L2GW 
blocks.
Both are true. I was referring to “true” trunk networks; you were referring to
your additions, right?
The benefit of L2GW is the ease of doing network stitching.
There are other benefits to the different proposals; the point is that it
might be beneficial to have all the solutions.

I totally agree

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-02 Thread Kevin Benton
When/where is the meeting on Monday?

--
Kevin Benton


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-01 Thread Alan Kavanagh
+1 yep, as an NFV vendor this is how it really works, and there is no need for
active/standby on ports. It does not make sense for this type of deployment
scenario. Typically the two or more sets of apps sync together and handle the
failover/HA themselves, so there is no need to duplicate this in the infra.
You gain nothing from doing active/standby port mgmt. in OVS.
/Alan


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Erik Moe


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk
networks and L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assuming a VM with a trunk port wants to use a floating IP on a specific VLAN,
a router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but it
shows what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since
the VM port is connected to several broadcast domains. Alternatively, a new
API can be created.

In “VLAN aware VMs” the trunk port MAC address has to be globally unique,
since it can be connected to any network; other ports still only have to be
unique per network. But for L2GW all MAC addresses have to be globally unique,
since they might be bridged together at a later stage. Also, some
implementations might not be able to take the VID into account when doing MAC
address learning, forcing at least unique MACs on a trunk network.

Benefits of “VLAN aware VMs” are integration with existing Neutron services.
Benefits of trunk networks are less consumption of Neutron networks and less
management per VLAN.
The benefit of L2GW is the ease of doing network stitching.
There are other benefits to the different proposals; the point is that it
might be beneficial to have all the solutions.

Platforms that have issues forking off VLANs at the VM port level could get
around this with a trunk network + L2GW, at the cost of more hacks if
integration with other parts of Neutron is needed. Platforms that have issues
implementing trunk networks could get around them using “VLAN aware VMs”, but
would be forced to separately manage every VLAN as a Neutron network. On
platforms that have both, the user can select the method depending on what is
needed.

Thanks,
Erik



From: Armando M. [mailto:arma...@gmail.com]
Sent: den 28 oktober 2014 19:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Sorry for jumping into this thread late... there are lots of details to
process, and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward, at 
the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I think
it is sensible to adopt the latest spec system we have been using to
understand where we are, and I mean Gerrit and the spec submissions.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging 
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor 
specific blueprint for now.

When I look at these I clearly see that we jump all the way to implementation
details. From an architectural point of view, this clearly does not make a
lot of sense.

In order to ensure that everyone is on the same page, I would suggest to have a 
discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible interactions 
that an actor (i.e. the tenant or the admin) can have with the system (an 
OpenStack deployment), when these NFV-enabling capabilities are available? What 
are the observed outcomes once these interactions have taken place?

- Management API: what abstractions do we expose to the tenant or admin (do we
augment the existing resources, or do we create new resources, or do we do
both)? This should obviously be driven by a set of use cases, and we need to
identify the minimum set of logical artifacts that would let us meet the needs
of the widest set of use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if
anything, so that we can implement these NFV-enabling constructs successfully?
Are there any changes to the core L2 API? Are there any changes required to the
core framework (scheduling, policy, notifications, data model etc)?

- Add support to the existing plugin backends: the openvswitch reference 
implementation is an obvious candidate, but other plugins may want to leverage 
the newly defined capabilities too. Once the above mentioned points have been 
fleshed out, it should be fairly straightforward to have these efforts progress 
in autonomy.

IMO, until we can get a full understanding of the aspects above, I don't
believe the core team is in the best position to determine the best
approach forward; I think it's

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Ian Wells
So, use cases that come to mind:

1. I want to pass VLAN-encapped traffic from VM A to VM B.  I do not know
at network setup time what VLANs I will use.
case A: I'm simulating a network with routers in.  The router config is not
under my control, so I don't know addresses or the number of VLANs in use.
(Yes, this use case exists, search for 'Cisco VIRL'.)
case B: NFV scenarios where the VNF orchestrator decides how few or many
VLANs are used, where the endpoints may or may not be addressed, and where
the addresses are selected by the VNF manager.  (For instance, every time I
add a customer to a VNF service I create another VLAN on an internal link.
The orchestrator is intelligent and selects the VLAN; telling Openstack the
details is 

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Ian Wells
Go read about HSRP and VRRP.  What you propose is akin to turning off one
physical switch port and turning on another when you want to switch from an
active physical server to a standby, and this is not how it's done in
practice; instead, you connect the two VMs to the same network and let them
decide which gets the primary address.
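As a sketch of what that looks like (a drastically simplified VRRP-style election in Python; see RFC 5798 for the real protocol): both routers sit on the same network, each advertises a priority, and the surviving router with the highest priority holds the virtual address. Failover needs no port to be flipped in the infrastructure.

```python
# Drastically simplified VRRP-style master election (illustrative only;
# the real protocol is RFC 5798). Both routers share one network and one
# virtual IP; the highest advertised priority becomes master, so failover
# never requires toggling ports in the infrastructure.

def elect_master(advertised):
    """advertised: dict of router name -> VRRP priority. Returns the master."""
    return max(advertised, key=lambda name: advertised[name])

routers = {"vm-a": 200, "vm-b": 100}
print(elect_master(routers))   # vm-a holds the virtual IP

del routers["vm-a"]            # vm-a dies; its adverts stop arriving
print(elect_master(routers))   # vm-b takes over the same address
```

Because the election happens between the VMs themselves, the infrastructure only has to provide one shared L2 segment, not any active/passive port machinery.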

On 28 October 2014 10:27, A, Keshava keshav...@hp.com wrote:

  Hi Alan and  Salvatore,



 Thanks for response and I also agree we need to take small steps.

 However I have below points to make.



 It is very important how the Service VM needs will be deployed w.r.t HA.

 As per current discussion, you are proposing something like below kind of
 deployment for Carrier Grade HA.

 Since there is a separate port for the Standby-VM also, the corresponding
 standby-VM interface address should be globally routable as well.

 This means it may require the standby routing protocol to advertise its
 interface as the next hop for the prefixes it routes.

 However, the external world should not be aware of the standby routing
 running in the network.









 Instead, if we can think of running the standby on the same stack with a
 passive port (as shown below), then the external world will be unaware of
 the standby Service Routing running.

 *This may be a very basic requirement from the Service-VM (NFV HA
 perspective) for the Routing/MPLS/Packet processing domain.*

 *I am bringing this issue up now because you are proposing to change
 the basic framework of packet delivery to VMs.*

 *(Of course there may be other mechanisms of supporting redundancy;
 however, they will not be as efficient as handling it at the packet level.)*







 Thanks & regards,

 Keshava



 *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
 *Sent:* Tuesday, October 28, 2014 6:48 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Hi Salvatore



 Inline below.



 *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
 *Sent:* October-28-14 12:37 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Keshava,



 I think the thread is now going a bit off its stated topic - which is to
 discuss the various proposed approaches to vlan trunking.

 Regarding your last post, I'm not sure I saw either spec implying that at
 the data plane level every instance attached to a trunk will be implemented
 as a different network stack.

 AK-- Agree

 Also, quoting the principle earlier cited in this thread - "make the easy
 stuff easy and the hard stuff possible" - I would say that unless five 9s
 is a minimum requirement for an NFV application, we might start worrying
 about it once we have the bare minimum set of tools for allowing an NFV
 application over a neutron network.

 AK-- five 9's is a 100% must requirement for NFV, but let's ensure we
 don't mix up what the underlay service needs to guarantee and what
 OpenStack needs to do to enable this type of service. Agreed, we should
 focus on having the right configuration sets for onboarding NFV, which is
 what OpenStack needs to ensure is exposed; what is used underneath to
 guarantee the five 9's is a separate matter.

 I think Ian has done a good job in explaining that while both approaches
 considered here address trunking for NFV use cases, they propose
 alternative implementations which can be leveraged in different ways by NFV
 applications. I do not now see a reason why we should not allow NFV apps to
 leverage a trunk network or create port-aware VLANs (or maybe you can even
 have VLAN-aware ports which tap into a trunk network?).

 AK→ Agree, I think we can hammer this out once and for all in Paris; this
 feature has been lingering too long.

 We may continue discussing the pros and cons of each approach - but to me
 it's now just a matter of choosing the best solution for exposing them at
 the API layer. At the control/data plane layer, it seems to me that trunk
 networks are pretty much straightforward. VLAN aware ports are instead a
 bit more convoluted, but not excessively complicated in my opinion.

 AK→ My thinking too, Salvatore; let's ensure the right elements are exposed
 at the API layer. I would also go a little further to ensure we get those
 feature sets supported in the core API (another can-of-worms discussion,
 but we need to have it).

 Salvatore





 On 28 October 2014 11:55, A, Keshava keshav...@hp.com wrote:

 Hi,

 Pl find my reply ..





 Regards,

 keshava



 *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
 *Sent:* Tuesday, October 28, 2014 3:35 PM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Hi

 Please find some additions to Ian and responses below.

 /Alan



 *From:* A, Keshava

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Ian Wells
To address a point or two that Armando has raised here that weren't covered
in my other mail:

On 28 October 2014 11:00, Armando M. arma...@gmail.com wrote:

 - Core Neutron changes: what needs to happen to the core of Neutron, if
 anything, so that we can implement these NFV-enabling constructs
 successfully? Are there any changes to the core L2 API? Are there any
 changes required to the core framework (scheduling, policy, notifications,
 data model etc)?


In the L2 API, I think this involves
- adding capability flag for trunking on networks and propagating that into
ML2's drivers (for what it's worth, this needs solving anyway; MTUs need
propagation as well)
- adding the trunk ports API and somehow implementing that in ML2

The L2GW block is in fact a new service and a reference implementation can
be made with a namespace, independently of the L2 plugin.

- Add support to the existing plugin backends: the openvswitch reference
 implementation is an obvious candidate,


Actually, it isn't.  The LB reference implementation is the obvious
candidate.  Because of the way it's implemented, it's easiest if the OVS
implementation refuses to make trunk networks (and therefore couldn't use
an L2GW block), but that's fine: we need one reference implementation and
it doesn't have to be OVS.

OVS may still be suitable to show off a trunk port reference
implementation; trunk ports would need addressing in the L2 plugin (in that
they're VM-to-network connectivity, which falls under its responsibility).
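To make the capability-flag idea concrete, here is a minimal sketch — all class and attribute names are hypothetical illustrations, not actual Neutron/ML2 code — of how ML2-style drivers could advertise trunk (VLAN-transparent) support and how network creation could refuse a trunk network that no driver can honour:

```python
# Hypothetical sketch only: these names illustrate the capability-flag idea
# and are not actual Neutron/ML2 code.

class MechanismDriver:
    """Base driver: assume by default a backend cannot pass tagged frames."""
    supports_vlan_transparency = False

class LinuxBridgeDriver(MechanismDriver):
    # A Linux bridge forwards tagged frames untouched, so it could offer trunks.
    supports_vlan_transparency = True

class OvsDriver(MechanismDriver):
    # A flow-based OVS setup may strip or rewrite tags, so it declines trunks.
    supports_vlan_transparency = False

def create_network(drivers, request):
    """Refuse a trunk (VLAN-transparent) network unless every driver supports it."""
    if request.get("vlan_transparent"):
        unsupported = [type(d).__name__ for d in drivers
                       if not d.supports_vlan_transparency]
        if unsupported:
            raise ValueError("vlan_transparent unsupported by: "
                             + ", ".join(unsupported))
    return {"id": "net-1",
            "vlan_transparent": bool(request.get("vlan_transparent"))}

net = create_network([LinuxBridgeDriver()], {"vlan_transparent": True})
```

With an OvsDriver in the driver list the same request would raise, which is exactly the "API can refuse to make a trunk network" behaviour argued for in this thread.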
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Ian Wells
This all appears to be referring to trunking ports, rather than anything
else, so I've addressed the points in that respect.

On 28 October 2014 00:03, A, Keshava keshav...@hp.com wrote:

  Hi,

 1.   How many Trunk ports can be created ?

Why would there be a limit?

 Will there be any Active-Standby concept?

I don't believe active-standby, or any HA concept, is directly relevant.
Did you have something in mind?

   2.   Is it possible to configure multiple IP addresses on these ports?

Yes, in the sense that you can have addresses per port.  The usual
restrictions to ports would apply, and they don't currently allow multiple
IP addresses (with the exception of the address-pair extension).

 In the case of IPv6 there can be multiple primary addresses configured;
 will this be supported?

No reason why not - we're expecting to re-use the usual port, so you'd
expect the features there to apply (in addition to having multiple sets of
subnet on a trunking port).

   3.   If required, can these ports be aggregated into a single one
 dynamically?

That's not really relevant to trunk ports or networks.

  4.   Will there be a requirement to handle nested tagged packets on
 such interfaces?

For trunking ports, I don't believe anyone was considering it.








  Thanks & Regards,

 Keshava



 *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
 *Sent:* Monday, October 27, 2014 9:45 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 On 25 October 2014 15:36, Erik Moe erik@ericsson.com wrote:

   Then I tried to just use the trunk network as a plain pipe to the
  L2-gateway and connect to normal Neutron networks. One issue is that the
  L2-gateway will bridge the networks, but the services in the network you
  bridge to are unaware of your existence. This IMO is OK when bridging a
  Neutron network to some remote network, but if you have a Neutron VM and
  want to utilize various resources in another Neutron network (since the one
  you sit on does not have any resources), things get, let's say,
  non-streamlined.



 Indeed.  However, non-streamlined is not the end of the world, and I
 wouldn't want to have to tag all VLANs a port is using on the port in
 advance of using it (this works for some use cases, and makes others
 difficult, particularly if you just want a native trunk and are happy for
 Openstack not to have insight into what's going on on the wire).



    Another issue with trunk networks is that they put new requirements on
  the infrastructure: it needs to be able to handle VLAN tagged frames. For a
  VLAN based network it would be QinQ.
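For readers less familiar with QinQ: on the wire it means the tenant's own 802.1Q tag is wrapped in an outer 802.1ad service tag by the infrastructure. A small self-contained sketch of the resulting header layout, using the standard TPID values:

```python
# Illustrative sketch: building a QinQ (802.1ad) Ethernet header by hand,
# to show what "VLAN tagged frames inside a VLAN-based network" means on
# the wire: an outer service tag (S-tag) wrapping the tenant's C-tag.
import struct

def vlan_tag(tpid, pcp, vid):
    """Return a 4-byte 802.1Q/802.1ad tag: TPID + TCI (PCP | DEI | VID)."""
    tci = (pcp << 13) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

def qinq_header(dst, src, s_vid, c_vid, ethertype=0x0800):
    """Ethernet header carrying an outer S-tag (0x88A8) and an inner C-tag (0x8100)."""
    return (dst + src
            + vlan_tag(0x88A8, 0, s_vid)   # outer tag added by the infrastructure
            + vlan_tag(0x8100, 0, c_vid)   # inner tag set by the tenant VM
            + struct.pack("!H", ethertype))

hdr = qinq_header(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", s_vid=1000, c_vid=42)
# 6 + 6 bytes dst/src, 4 + 4 bytes of tags, 2 bytes of ethertype = 22 bytes
assert len(hdr) == 22
assert hdr[12:14] == b"\x88\xa8"   # the outer S-tag's TPID
```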



 Yes, and that's the point of the VLAN trunk spec, where we flag a network
 as passing VLAN tagged packets; if the operator-chosen network
 implementation doesn't support trunks, the API can refuse to make a trunk
 network.  Without it we're still in the situation that on some clouds
 passing VLANs works and on others it doesn't, and that the tenant can't
 actually tell in advance which sort of cloud they're working on.

 Trunk networks are a requirement for some use cases independent of the
 port awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and
 the hard stuff possible' we can't just say 'no Neutron network passes VLAN
 tagged packets'.  And even if we did, we're evading a problem that exists
 with exactly one sort of network infrastructure - VLAN tagging for network
 separation - while making it hard to use for all of the many other cases in
 which it would work just fine.

 In summary, if we did port-based VLAN knowledge I would want to be able to
 use VLANs without having to use it (in much the same way that I would like,
 in certain circumstances, not to have to use Openstack's address allocation
 and DHCP - it's nice that I can, but I shouldn't be forced to).

  My requirements were to have low/no extra cost for VMs using VLAN trunks
 compared to normal ports, no new bottlenecks/single point of failure. Due
 to this and previous issues I implemented the L2 gateway in a distributed
  fashion, and since trunk networks could not be realized in reality I only had
 them in the model and optimized them away.



 Again, this is down to your choice of VLAN tagged networking and/or the
 OVS ML2 driver; it doesn't apply to all deployments.



   But the L2-gateway + trunk network has a flexible API; what if someone
  connects two VMs to one trunk network? Well, that is hard to optimize away.



 That's certainly true, but it wasn't really intended to be optimised away.

  Anyway, due to these and other issues, I limited my scope and switched
 to the current trunk port/subport model.



  The code that is up for review is functional: you can boot a VM with a trunk
  port + subports (each subport maps to a VLAN). The VM can send/receive VLAN
  traffic. You can add/remove subports on a running VM. You can specify an IP
  address per subport
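As a rough illustration of the model described above (a toy sketch, not the actual patch set's code), a trunk port is a parent port plus a VLAN-ID-to-subport mapping, mutable while the VM runs:

```python
# Toy model (not Neutron code) of the trunk-port/subport relationship the
# patch set describes: one parent port carries untagged traffic, and each
# subport maps a VLAN ID on the wire to its own Neutron port (with its own
# MAC/IP), addable and removable while the VM is running.

class TrunkPort:
    def __init__(self, parent_port_id):
        self.parent = parent_port_id
        self.subports = {}          # vlan_id -> port_id

    def add_subport(self, vlan_id, port_id):
        if vlan_id in self.subports:
            raise ValueError("VLAN %d already bound" % vlan_id)
        self.subports[vlan_id] = port_id

    def remove_subport(self, vlan_id):
        self.subports.pop(vlan_id)

    def classify(self, vlan_id=None):
        """Return the Neutron port a frame belongs to (the parent if untagged)."""
        return self.parent if vlan_id is None else self.subports[vlan_id]

trunk = TrunkPort("parent-port")
trunk.add_subport(100, "subport-a")
trunk.add_subport(200, "subport-b")
assert trunk.classify() == "parent-port"   # untagged traffic -> parent port
assert trunk.classify(100) == "subport-a"  # VLAN 100 -> its subport
trunk.remove_subport(100)                  # hot-unplug of one VLAN
```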

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,
Pl find the reply for the same.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
keshav...@hp.commailto:keshav...@hp.com wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concept?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For NFV kinds of scenarios, it is very much required to run the 'Service 
VM' in active and standby mode.
The standby is more of a passive entity and will not take any action towards 
the external network; it will be a passive consumer of the packets/information.
In that scenario it would be very meaningful to have
an "active port" connected to the active Service VM, and
a "standby port" connected to the standby Service VM, which turns active when 
the old active VM goes down.

Let us know others' opinion about this concept.

 2.   Is it possible to configure multiple IP addresses on these ports?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In the case of IPv6 there can be multiple primary addresses configured; will 
this be supported?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required, can these ports be aggregated into a single one 
dynamically?
That's not really relevant to trunk ports or networks.

 4.   Will there be a requirement to handle nested tagged packets on such 
interfaces?
For trunking ports, I don't believe anyone was considering it.




Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe 
erik@ericsson.commailto:erik@ericsson.com wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to are unaware 
of your existence. This IMO is OK when bridging a Neutron network to some remote 
network, but if you have a Neutron VM and want to utilize various resources in 
another Neutron network (since the one you sit on does not have any resources), 
things get, let's say, non-streamlined.

Indeed.  However, non-streamlined is not the end of the world, and I wouldn't 
want to have to tag all VLANs a port is using on the port in advance of using 
it (this works for some use cases, and makes others difficult, particularly if 
you just want a native trunk and are happy for Openstack not to have insight 
into what's going on on the wire).

 Another issue with trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN tagged frames. For a VLAN 
based network it would be QinQ.

Yes, and that's the point of the VLAN trunk spec, where we flag a network as 
passing VLAN tagged packets; if the operator-chosen network implementation 
doesn't support trunks, the API can refuse to make a trunk network.  Without it 
we're still in the situation that on some clouds passing VLANs works and on 
others it doesn't, and that the tenant can't actually tell in advance which 
sort of cloud they're working on.
Trunk networks are a requirement for some use cases independent of the port 
awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and the hard 
stuff possible' we can't just say 'no Neutron network passes VLAN tagged 
packets'.  And even if we did, we're evading a problem that exists with exactly 
one sort of network infrastructure - VLAN tagging for network separation - 
while making it hard to use for all of the many other cases in which it would 
work just fine.

In summary, if we did port-based VLAN knowledge I would want to be able to use 
VLANs without having to use it (in much the same way that I would like, in 
certain circumstances, not to have to use Openstack's address allocation and 
DHCP - it's nice that I can, but I shouldn't be forced to).
My requirements were to have low/no extra cost for VMs using VLAN trunks 
compared to normal ports, no new bottlenecks/single point of failure. Due to 
this and previous issues I implemented the L2 gateway in a distributed fashion 
and since

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Alan Kavanagh
Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Pl find the reply for the same.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
keshav...@hp.commailto:keshav...@hp.com wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concept?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For NFV kinds of scenarios, it is very much required to run the 'Service 
VM' in active and standby mode.
AK-- We have a different view on this: the application runs as a pair, either 
active-active or active-standby. This has nothing to do with HA; it's down to 
the application and how it's provisioned and configured via OpenStack. So I 
agree with Ian on this.
The standby is more of a passive entity and will not take any action towards 
the external network; it will be a passive consumer of the packets/information.
AK-- Why would we need to care?
In that scenario it would be very meaningful to have
an "active port" connected to the active Service VM, and
a "standby port" connected to the standby Service VM, which turns active when 
the old active VM goes down.
AK-- Can't you just have two VMs and then, via a controller, decide how to 
handle MAC+IP address control? FYI, most NFV apps have that built in today.
Let us know others' opinion about this concept.
AK-- Perhaps I am misreading this, but I don't understand what this would 
provide as opposed to having two VMs instantiated and running; why does 
Neutron need to care about the port state between these two VMs? It is better 
to just have two or more VMs up, and the application will be able to handle 
failover when it occurs. Let's keep it simple and not mix in what the apps do 
inside the containment.

 2.   Is it possible to configure multiple IP addresses on these ports?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In the case of IPv6 there can be multiple primary addresses configured; will 
this be supported?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required, can these ports be aggregated into a single one 
dynamically?
That's not really relevant to trunk ports or networks.

 4.   Will there be a requirement to handle nested tagged packets on such 
interfaces?
For trunking ports, I don't believe anyone was considering it.





Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe 
erik@ericsson.commailto:erik@ericsson.com wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to are unaware 
of your existence. This IMO is OK when bridging a Neutron network to some remote 
network, but if you have a Neutron VM and want to utilize various resources in 
another Neutron network (since the one you sit on does not have any resources), 
things get, let's say, non-streamlined.

Indeed.  However, non-streamlined is not the end of the world, and I wouldn't 
want to have to tag all VLANs a port is using on the port in advance of using 
it (this works for some use cases, and makes others difficult, particularly if 
you just want a native trunk and are happy for Openstack not to have insight 
into what's going on on the wire).

 Another issue with trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN tagged frames. For a VLAN 
based network it would be QinQ.

Yes, and that's the point of the VLAN trunk spec, where we flag a network as 
passing VLAN tagged packets; if the operator-chosen network implementation 
doesn't support trunks, the API can refuse to make a trunk network.  Without it 
we're still in the situation

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread A, Keshava
Hi,
Pl find my reply ..


Regards,
keshava

From: Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Pl find the reply for the same.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
keshav...@hp.commailto:keshav...@hp.com wrote:
Hi,

1.   How many Trunk ports can be created ?
Why would there be a limit?

Will there be any Active-Standby concept?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For NFV kinds of scenarios, it is very much required to run the 'Service 
VM' in active and standby mode.
AK-- We have a different view on this: the application runs as a pair, either 
active-active or active-standby. This has nothing to do with HA; it's down to 
the application and how it's provisioned and configured via OpenStack. So I 
agree with Ian on this.
The standby is more of a passive entity and will not take any action towards 
the external network; it will be a passive consumer of the packets/information.
AK-- Why would we need to care?
In that scenario it would be very meaningful to have
an "active port" connected to the active Service VM, and
a "standby port" connected to the standby Service VM, which turns active when 
the old active VM goes down.
AK-- Can't you just have two VMs and then, via a controller, decide how to 
handle MAC+IP address control? FYI, most NFV apps have that built in today.
Let us know others' opinion about this concept.
AK-- Perhaps I am misreading this, but I don't understand what this would 
provide as opposed to having two VMs instantiated and running; why does 
Neutron need to care about the port state between these two VMs? It is better 
to just have two or more VMs up, and the application will be able to handle 
failover when it occurs. Let's keep it simple and not mix in what the apps do 
inside the containment.

Keshava:
Since this solution is more for carrier-grade NFV Service VMs, I have the below 
points to make.
Let us say the Service VM is running BGP, BGP-VPN, or 'MPLS + LDP + BGP-VPN'.
When such kinds of carrier-grade services are running, how do we provide 
five-9s HA?
In my opinion, both (active/standby) Service VMs should hook into the same 
underlying OpenStack infrastructure stack (br-ext - br-int - qxx - VMa).
However, the 'active VM' hooks to the 'active port' and the 'standby VM' hooks 
to the 'passive port' within the same stack.

Instead, if the active and standby VMs hook into two different stacks 
(br-ext1 - br-int1 - qxx1 - VM-active) and (br-ext2 - br-int2 - qxx2 - 
VM-standby), can those Service VMs achieve five-9s reliability?

Yes, I may be thinking in a little complicated way from an OpenStack 
perspective...

 2.   Is it possible to configure multiple IP addresses on these ports?
Yes, in the sense that you can have addresses per port.  The usual restrictions 
to ports would apply, and they don't currently allow multiple IP addresses 
(with the exception of the address-pair extension).

In the case of IPv6 there can be multiple primary addresses configured; will 
this be supported?
No reason why not - we're expecting to re-use the usual port, so you'd expect 
the features there to apply (in addition to having multiple sets of subnet on a 
trunking port).

 3.   If required, can these ports be aggregated into a single one 
dynamically?
That's not really relevant to trunk ports or networks.

 4.   Will there be a requirement to handle nested tagged packets on such 
interfaces?
For trunking ports, I don't believe anyone was considering it.





Thanks & Regards,
Keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.ukmailto:ijw.ubu...@cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

On 25 October 2014 15:36, Erik Moe 
erik@ericsson.commailto:erik@ericsson.com wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Salvatore Orlando
Keshava,

I think the thread is now going a bit off its stated topic - which is to
discuss the various proposed approaches to VLAN trunking.
Regarding your last post, I'm not sure I saw either spec implying that at
the data plane level every instance attached to a trunk will be implemented
as a different network stack.

Also, quoting the principle earlier cited in this thread - 'make the easy
stuff easy and the hard stuff possible' - I would say that unless five 9s
is a minimum requirement for an NFV application, we might start worrying
about it once we have the bare minimum set of tools for allowing an NFV
application over a Neutron network.

I think Ian has done a good job in explaining that while both approaches
considered here address trunking for NFV use cases, they propose
alternative implementations which can be leveraged in different ways by NFV
applications. I do not now see a reason why we should not allow NFV apps to
leverage a trunk network or create port-aware VLANs (or maybe you can even
have VLAN-aware ports which tap into a trunk network?).

We may continue discussing the pros and cons of each approach - but to me
it's now just a matter of choosing the best solution for exposing them at
the API layer. At the control/data plane layer, it seems to me that trunk
networks are pretty much straightforward. VLAN aware ports are instead a
bit more convoluted, but not excessively complicated in my opinion.

Salvatore


On 28 October 2014 11:55, A, Keshava keshav...@hp.com wrote:

  Hi,

 Pl find my reply ..





 Regards,

 keshava



 *From:* Alan Kavanagh [mailto:alan.kavan...@ericsson.com]
 *Sent:* Tuesday, October 28, 2014 3:35 PM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 Hi

 Please find some additions to Ian and responses below.

 /Alan



 *From:* A, Keshava [mailto:keshav...@hp.com keshav...@hp.com]
 *Sent:* October-28-14 9:57 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 *Hi,*

 *Pl find the reply for the same.*



 *Regards,*

 *keshava*



 *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk ijw.ubu...@cack.org.uk]

 *Sent:* Tuesday, October 28, 2014 1:11 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints



 This all appears to be referring to trunking ports, rather than anything
 else, so I've addressed the points in that respect.

 On 28 October 2014 00:03, A, Keshava keshav...@hp.com wrote:

   Hi,

 1.   How many Trunk ports can be created ?

  Why would there be a limit?

  Will there be any Active-Standby concept?

  I don't believe active-standby, or any HA concept, is directly
 relevant.  Did you have something in mind?

 *For NFV kinds of scenarios, it is very much required to run the 'Service
 VM' in active and standby mode.*

 *AK→ We have a different view on this: the application runs as a pair,
 either active-active or active-standby. This has nothing to do with HA;
 it's down to the application and how it's provisioned and configured via
 OpenStack. So I agree with Ian on this.*

 *The standby is more of a passive entity and will not take any action
 towards the external network; it will be a passive consumer of the
 packets/information.*

 *AK→ Why would we need to care?*

 *In that scenario it would be very meaningful to have*

 *an "active port" connected to the active Service VM, and*

 *a "standby port" connected to the standby Service VM, which turns active
 when the old active VM goes down.*

 *AK→ Can't you just have two VMs and then, via a controller, decide how to
 handle MAC+IP address control? FYI, most NFV apps have that built in
 today.*

 *Let us know others' opinion about this concept.*

 *AK→ Perhaps I am misreading this, but I don't understand what this would
 provide as opposed to having two VMs instantiated and running; why does
 Neutron need to care about the port state between these two VMs? It is
 better to just have two or more VMs up, and the application will be able
 to handle failover when it occurs. Let's keep it simple and not mix in
 what the apps do inside the containment.*



 *Keshava: *

 *Since this solution is more for carrier-grade NFV Service VMs, I have the
 below points to make.*

 *Let us say the Service VM is running BGP, BGP-VPN, or 'MPLS + LDP +
 BGP-VPN'.*

 *When such kinds of carrier-grade services are running, how do we provide
 five-9s HA?*

 *In my opinion,*

 *both (active/standby) Service VMs should hook into the same underlying
 OpenStack infrastructure stack (br-ext - br-int - qxx - VMa).*

 *However, the 'active VM' hooks to the 'active port' and the 'standby VM'
 hooks to the 'passive port' within the same stack.*



 *Instead if Active and Standby VM

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Alan Kavanagh
Hi Salvatore

Inline below.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: October-28-14 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Keshava,

I think the thread is now going a bit off its stated topic - which is to 
discuss the various proposed approaches to VLAN trunking.
Regarding your last post, I'm not sure I saw either spec implying that at the 
data plane level every instance attached to a trunk will be implemented as a 
different network stack.
AK-- Agree
Also, quoting the principle earlier cited in this thread - 'make the easy 
stuff easy and the hard stuff possible' - I would say that unless five 9s is a 
minimum requirement for an NFV application, we might start worrying about it 
once we have the bare minimum set of tools for allowing an NFV application over 
a Neutron network.
AK-- Five 9s is a 100% must requirement for NFV, but let's ensure we don't mix 
up what the underlay service needs to guarantee and what OpenStack needs to do 
to enable this type of service. I would agree we should focus on having the 
right configuration sets for onboarding NFV, which is what OpenStack needs to 
ensure is exposed; what is used underneath to guarantee the five 9s is a 
separate matter.
I think Ian has done a good job in explaining that while both approaches 
considered here address trunking for NFV use cases, they propose alternative 
implementations which can be leveraged in different ways by NFV applications. 
I do not now see a reason why we should not allow NFV apps to leverage a trunk 
network or create port-aware VLANs (or maybe you can even have VLAN-aware 
ports which tap into a trunk network?).
AK-- Agree, I think we can hammer this out once and for all in Paris; this 
feature has been lingering too long.
We may continue discussing the pros and cons of each approach - but to me it's 
now just a matter of choosing the best solution for exposing them at the API 
layer. At the control/data plane layer, it seems to me that trunk networks are 
pretty much straightforward. VLAN aware ports are instead a bit more 
convoluted, but not excessively complicated in my opinion.
AK-- My thinking too, Salvatore; let's ensure the right elements are exposed at 
the API layer. I would also go a little further to ensure we get those feature 
sets supported in the core API (another can-of-worms discussion but we need 
to have it).
Salvatore


On 28 October 2014 11:55, A, Keshava 
keshav...@hp.commailto:keshav...@hp.com wrote:
Hi,
Pl find my reply ..


Regards,
keshava

From: Alan Kavanagh 
[mailto:alan.kavan...@ericsson.commailto:alan.kavan...@ericsson.com]
Sent: Tuesday, October 28, 2014 3:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi
Please find some additions to Ian and responses below.
/Alan

From: A, Keshava [mailto:keshav...@hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hi,
Please find my reply inline.

Regards,
keshava

From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

This all appears to be referring to trunking ports, rather than anything else, 
so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava 
keshav...@hp.com wrote:
Hi,

1. How many trunk ports can be created?
Why would there be a limit?

Will there be any active-standby concept?
I don't believe active-standby, or any HA concept, is directly relevant.  Did 
you have something in mind?
For the NFV kind of scenario, it is very much required to run the Service VM 
in active and standby mode.
AK-- We have a different view on this: the application runs as a pair, either 
active-active or active-standby. This has nothing to do with HA; it's down to 
the application and how it's provisioned and configured via OpenStack. So I 
agree with Ian on this.
The standby is a more passive entity and will not take any action towards the 
external network. It will be a passive consumer of packets/information.
AK-- Why would we need to care?
In that scenario it would be very meaningful to have an “active port” 
connected to the active Service VM and a “standby port” connected to the 
standby Service VM, which turns active when the old active VM goes down.
AK-- Can't you just have two VMs and then, via a controller, decide how to 
handle MAC + IP address control? FYI, most NFV apps have that built in today.
Let us know others' opinions about this concept.
AK-- Perhaps I am misreading this, but I don't understand what this would

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-28 Thread Armando M.
Sorry for jumping into this thread late... there are lots of details to
process, and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward,
at the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I
think it is sensible to adopt the latest spec system we have been using
to understand where we are, and I mean Gerrit and the spec submissions.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor
specific blueprint for now.

When I look at these I clearly see that we jump all the way to
implementation details. From an architectural point of view, this clearly
does not make a lot of sense.

In order to ensure that everyone is on the same page, I would suggest to
have a discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible
interactions that an actor (i.e. the tenant or the admin) can have with the
system (an OpenStack deployment), when these NFV-enabling capabilities are
available? What are the observed outcomes once these interactions have
taken place?

- Management API: what abstractions do we expose to the tenant or admin
(do we augment the existing resources, create new resources, or both)?
This should obviously be driven by a set of use cases, and we need to
identify the minimum set of logical artifacts that would let us meet the
needs of the widest set of use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if
anything, so that we can implement these NFV-enabling constructs
successfully? Are there any changes to the core L2 API? Are there any
changes required to the core framework (scheduling, policy, notifications,
data model etc)?

- Add support to the existing plugin backends: the openvswitch reference
implementation is an obvious candidate, but other plugins may want to
leverage the newly defined capabilities too. Once the above mentioned
points have been fleshed out, it should be fairly straightforward to have
these efforts progress in autonomy.

IMO, until we can get a full understanding of the aspects above, I don't
believe the core team is in the best position to determine the best
approach forward; I think it's in everyone's interest to make sure that
something cohesive comes out of this; the worst possible outcome is no
progress at all or, even worse, some Frankenstein system where no one
really knows what it does or how it can be used.

I will go over the specs one more time in order to identify some answers to
my points above. I hope someone can help me through the process.


Many thanks,
Armando
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-27 Thread Ian Wells
On 25 October 2014 15:36, Erik Moe erik@ericsson.com wrote:

  Then I tried to just use the trunk network as a plain pipe to the
 L2-gateway and connect to normal Neutron networks. One issue is that the
 L2-gateway will bridge the networks, but the services in the network you
 bridge to are unaware of your existence. This IMO is OK when bridging a
 Neutron network to some remote network, but if you have a Neutron VM and
 want to utilize various resources in another Neutron network (since the
 one you sit on does not have any resources), things get, let’s say,
 non-streamlined.


Indeed.  However, non-streamlined is not the end of the world, and I
wouldn't want to have to tag all VLANs a port is using on the port in
advance of using it (this works for some use cases, and makes others
difficult, particularly if you just want a native trunk and are happy for
Openstack not to have insight into what's going on on the wire).


  Another issue with trunk network is that it puts new requirements on the
 infrastructure. It needs to be able to handle VLAN tagged frames. For a
 VLAN based network it would be QinQ.


Yes, and that's the point of the VLAN trunk spec, where we flag a network
as passing VLAN tagged packets; if the operator-chosen network
implementation doesn't support trunks, the API can refuse to make a trunk
network.  Without it we're still in the situation that on some clouds
passing VLANs works and on others it doesn't, and that the tenant can't
actually tell in advance which sort of cloud they're working on.

Trunk networks are a requirement for some use cases independent of the port
awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and the
hard stuff possible' we can't just say 'no Neutron network passes VLAN
tagged packets'.  And even if we did, we're evading a problem that exists
with exactly one sort of network infrastructure - VLAN tagging for network
separation - while making it hard to use for all of the many other cases in
which it would work just fine.

In summary, if we did port-based VLAN knowledge I would want to be able to
use VLANs without having to use it (in much the same way that I would like,
in certain circumstances, not to have to use Openstack's address allocation
and DHCP - it's nice that I can, but I shouldn't be forced to).

My requirements were to have low/no extra cost for VMs using VLAN trunks
 compared to normal ports, and no new bottlenecks/single points of failure.
 Due to this and the previous issues I implemented the L2 gateway in a
 distributed fashion, and since trunk networks could not be realized in
 practice I only had them in the model and optimized them away.


Again, this is down to your choice of VLAN tagged networking and/or the OVS
ML2 driver; it doesn't apply to all deployments.


 But the L2-gateway + trunk network has a flexible API; what if someone
 connects two VMs to one trunk network? Well, that's hard to optimize away.


That's certainly true, but it wasn't really intended to be optimised away.

 Anyway, due to these and other issues, I limited my scope and switched to
 the current trunk port/subport model.



 The code that is for review is functional, you can boot a VM with a trunk
 port + subports (each subport maps to a VLAN). The VM can send/receive VLAN
 traffic. You can add/remove subports on a running VM. You can specify IP
 address per subport and use DHCP to retrieve them etc.


I'm coming to realise that the two solutions address different needs - the
VLAN port one is much more useful for cases where you know what's going on
in the network and you want Openstack to help, but it's just not broad
enough to solve every problem.  It may well be that we want both solutions,
in which case we just need to agree that 'we shouldn't do trunk networking
because VLAN aware ports solve this problem' is not a valid argument during
spec review.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-25 Thread Erik Moe

Proposal C, VLAN-aware-VMs, is about integrating VLAN traffic from VMs with 
Neutron in a tighter fashion.

It terminates the VLAN at the port connected to the VM. It does not bring the 
VLAN concept further into Neutron. This is done by mapping each VLAN from the 
VM to a Neutron network. After all, VLANs and Neutron networks are very much 
alike.

The modelling reuses the current port structure: there is one port on each 
network, and each port still contains information relevant to its network.

By doing these things it's possible to utilize the rest of the features in 
Neutron; only features whose implementation is close to the VM have to be 
overlooked when implementing this. Other features that have attributes on a 
VM port but are realized remotely work fine, for example DHCP (including 
extra_dhcp_opts) and mechanism drivers that use port bindings to do network 
plumbing on a switch.

After the Icehouse summit where we discussed the L2-gateway solution, I 
started to implement an L2-gateway. The idea was to have a VM with a trunk 
port connected to a trunk network carrying tagged traffic. The network would 
then be connected to an L2-gateway for breaking out a single VLAN and 
connecting it with a normal Neutron network. Following are some of the issues 
I encountered.

Currently a Neutron port/network contains attributes related to one broadcast 
domain. A trunk network requires that many attributes be per broadcast 
domain. This would require a bigger refactoring of Neutron ports/networks and 
would affect all services using them. Due to this I dropped the track of 
tight integration with trunk networks.

Then I tried to just use the trunk network as a plain pipe to the L2-gateway 
and connect to normal Neutron networks. One issue is that the L2-gateway will 
bridge the networks, but the services in the network you bridge to are 
unaware of your existence. This IMO is OK when bridging a Neutron network to 
some remote network, but if you have a Neutron VM and want to utilize various 
resources in another Neutron network (since the one you sit on does not have 
any resources), things get, let's say, non-streamlined.

Another issue with trunk network is that it puts new requirements on the 
infrastructure. It needs to be able to handle VLAN tagged frames. For a VLAN 
based network it would be QinQ.

My requirements were to have low/no extra cost for VMs using VLAN trunks 
compared to normal ports, and no new bottlenecks/single points of failure. 
Due to this and the previous issues I implemented the L2 gateway in a 
distributed fashion, and since trunk networks could not be realized in 
practice I only had them in the model and optimized them away. But the 
L2-gateway + trunk network has a flexible API; what if someone connects two 
VMs to one trunk network? Well, that's hard to optimize away.

Anyway, due to these and other issues, I limited my scope and switched to the 
current trunk port/subport model.

The code that is for review is functional, you can boot a VM with a trunk port 
+ subports (each subport maps to a VLAN). The VM can send/receive VLAN traffic. 
You can add/remove subports on a running VM. You can specify IP address per 
subport and use DHCP to retrieve them etc.
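On the guest side, each subport's VLAN is normally consumed through a tagged subinterface on the trunk NIC. The helper below only renders the standard iproute2 commands as strings rather than running them; the interface name and addresses are illustrative assumptions, not output of the reviewed code:

```python
# Inside the guest, each subport VLAN is typically consumed via a Linux
# VLAN subinterface on the trunk NIC. This helper just renders the usual
# iproute2 commands for a given subport list; it does not execute them.
def vlan_setup_commands(trunk_if: str, subports: dict) -> list:
    """subports maps VLAN tag -> CIDR address assigned to that subport."""
    cmds = []
    for tag, cidr in sorted(subports.items()):
        sub_if = f"{trunk_if}.{tag}"
        cmds.append(f"ip link add link {trunk_if} name {sub_if} type vlan id {tag}")
        cmds.append(f"ip addr add {cidr} dev {sub_if}")
        cmds.append(f"ip link set {sub_if} up")
    return cmds

cmds = vlan_setup_commands("eth0", {100: "10.1.0.5/24", 200: "10.2.0.5/24"})
```

(If DHCP is used per subport, as the paragraph above describes, the guest would run a DHCP client on each subinterface instead of adding static addresses.)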

Thanks,
Erik



From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
Sent: 24 October 2014 20:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

What scares me a bit about the “let’s find a common solution for both 
external devices and VMs” approach is the challenge of reaching an agreement. 
I remember a rather long discussion in the dev lounge in Hong Kong about 
trunking support that ended up going in all kinds of directions.

I work on implementing services in VMs so my opinion is definitely colored by 
that. Personally, proposal C is the most appealing to me for the following 
reasons: it is “good enough”; a trunk port notion is semantically easy to 
take in (at least to me); by doing it all within the port resource, Nova 
implications are minimal; it seemingly can handle multiple network types 
(VLAN, GRE, VXLAN, ... they are all mapped to different trunk-port-local VLAN 
tags); DHCP should work to the trunk ports and their subports (unless I 
overlook something); the spec already elaborates a lot on details; and there 
is also already code available that can be inspected.

Thanks,
Bob

From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday 23 October 2014 23:58
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

There are two categories of problems:
1. some networks don't pass VLAN tagged traffic

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-24 Thread racha
 networks.  Probably
 doesn't work with appliances.

 I would recommend we try and find a solution that works with both external
 hardware and internal networks.  (B) is only a partial solution.

 Considering the others, note that (C) and (D) add significant complexity
 to the data model, independently of the benefits they bring.  (A) adds one
 new functional block to networking (similar to today's routers, or even
 today's Nova instances).

 Finally, I suggest we consider the most prominent use case for
 multiplexing networks.  This seems to be condensing traffic from many
 networks to either a service VM or a service appliance.  It's useful, but
 not essential, to have Neutron control the addresses on the trunk port
 subinterfaces.

 So, that said, I personally favour (A) as the simplest way to solve our
 current needs, and I recommend paring (A) right down to its basics: a block
 that has access ports that we tag with a VLAN ID, and one trunk port that
 has all of the access networks multiplexed onto it.  This is a slightly
 dangerous block, in that you can actually set up forwarding blocks with it,
 and that's a concern; but it's a simple service block like a router, it's
 very, very simple to implement, and it solves our immediate problems so
 that we can make forward progress.  It also doesn't affect the other
 solutions significantly, so someone could implement (C) or (D) or (E) in
 the future.
 --
 Ian.


 On 23 October 2014 02:13, Alan Kavanagh alan.kavan...@ericsson.com
 wrote:

 +1 many thanks to Kyle for putting this as a priority, its most welcome.
 /Alan

 -Original Message-
 From: Erik Moe [mailto:erik@ericsson.com]
 Sent: October-22-14 5:01 PM
 To: Steve Gordon; OpenStack Development Mailing List (not for usage
 questions)
 Cc: iawe...@cisco.com
 Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints


 Hi,

 Great that we can have more focus on this. I'll attend the meeting on
 Monday and also attend the summit, looking forward to these discussions.

 Thanks,
 Erik


 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: 22 October 2014 16:29
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
 Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints

 - Original Message -
  From: Kyle Mestery mest...@mestery.com
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
 
  There are currently at least two BPs registered for VLAN trunk support
  to VMs in neutron-specs [1] [2]. This is clearly something that I'd
  like to see us land in Kilo, as it enables a bunch of things for the
  NFV use cases. I'm going to propose that we talk about this at an
  upcoming Neutron meeting [3]. Given the rotating schedule of this
  meeting, and the fact the Summit is fast approaching, I'm going to
  propose we allocate a bit of time in next Monday's meeting to discuss
  this. It's likely we can continue this discussion F2F in Paris as
  well, but getting a head start would be good.
 
  Thanks,
  Kyle
 
  [1] https://review.openstack.org/#/c/94612/
  [2] https://review.openstack.org/#/c/97714
  [3] https://wiki.openstack.org/wiki/Network/Meetings

 Hi Kyle,

 Thanks for raising this, it would be great to have a converged plan for
 addressing this use case [1] for Kilo. I plan to attend the Neutron meeting
 and I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

 Thanks,

 Steve

 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-24 Thread Bob Melander (bmelande)
What scares me a bit about the “let’s find a common solution for both 
external devices and VMs” approach is the challenge of reaching an agreement. 
I remember a rather long discussion in the dev lounge in Hong Kong about 
trunking support that ended up going in all kinds of directions.

I work on implementing services in VMs so my opinion is definitely colored by 
that. Personally, proposal C is the most appealing to me for the following 
reasons: it is “good enough”; a trunk port notion is semantically easy to 
take in (at least to me); by doing it all within the port resource, Nova 
implications are minimal; it seemingly can handle multiple network types 
(VLAN, GRE, VXLAN, … they are all mapped to different trunk-port-local VLAN 
tags); DHCP should work to the trunk ports and their subports (unless I 
overlook something); the spec already elaborates a lot on details; and there 
is also already code available that can be inspected.

Thanks,
Bob

From: Ian Wells ijw.ubu...@cack.org.uk
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday 23 October 2014 23:58
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

There are two categories of problems:

1. some networks don't pass VLAN tagged traffic, and it's impossible to detect 
this from the API
2. it's not possible to pass traffic from multiple networks to one port on one 
machine as (e.g.) VLAN tagged traffic

(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else 
addresses this, particularly in the case that one VM is emitting tagged packets 
that another one should receive and Openstack knows nothing about what's going 
on.

We should get this in, and ideally in quickly and in a simple form where it 
simply tells you if a network is capable of passing tagged traffic.  In 
general, this is possible to calculate but a bit tricky in ML2 - anything using 
the OVS mechanism driver won't pass VLAN traffic, anything using VLANs should 
probably also claim it doesn't pass VLAN traffic (though actually it depends a 
little on the switch), and combinations of L3 tunnels plus Linuxbridge seem to 
pass VLAN traffic just fine.  Beyond that, it's got a backward compatibility 
mode, so it's possible to ensure that any plugin that doesn't implement VLAN 
reporting is still behaving correctly per the specification.
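The per-backend reasoning above could be sketched roughly as follows. The driver names and the conservative defaults are assumptions chosen to mirror the paragraph, not actual ML2 code:

```python
# Rough sketch of the capability question described above: can a given
# network realization pass VLAN-tagged guest traffic? Driver names and
# the conservative defaults are illustrative, not real ML2 logic.
def passes_vlan_tagged_traffic(mechanism: str, network_type: str) -> bool:
    if mechanism == "openvswitch":
        return False      # the OVS mechanism driver won't pass guest tags
    if network_type == "vlan":
        return False      # would need QinQ; depends on the switch, so
                          # claim False to be safe
    if network_type in ("gre", "vxlan") and mechanism == "linuxbridge":
        return True       # L3 tunnels + Linuxbridge pass tags just fine
    return False          # unknown combination: the backward-compatible
                          # default is to report no VLAN transparency
```

With a flag like this exposed on the network, the API can refuse to create a trunk network on a backend that would silently drop the tags.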

(2) is addressed by several blueprints, and these have overlapping ideas that 
all solve the problem.  I would summarise the possibilities as follows:

A. Racha's L2 gateway blueprint, 
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which (at 
its simplest, though it's had features added on and is somewhat OVS-specific in 
its detail) acts as a concentrator to multiplex multiple networks onto one as a 
trunk.  This is a very simple approach and doesn't attempt to resolve any of 
the hairier questions like making DHCP work as you might want it to on the 
ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/, 
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint, 
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries to 
solve the addressing problem mentioned above by having ports within ports (much 
as, on the VM side, interfaces passing trunk traffic tend to have subinterfaces 
that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is a 
collection of other networks, each 'subnetwork' being a VLAN in the network 
trunk.
E. Kyle's very old blueprint, 
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api - 
where we attach a port, not a network, to multiple networks.  Probably doesn't 
work with appliances.

I would recommend we try and find a solution that works with both external 
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to the 
data model, independently of the benefits they bring.  (A) adds one new 
functional block to networking (similar to today's routers, or even today's 
Nova instances).

Finally, I suggest we consider the most prominent use case for multiplexing 
networks.  This seems to be condensing traffic from many networks to either a 
service VM or a service appliance.  It's useful, but not essential, to have 
Neutron control the addresses on the trunk port subinterfaces.

So, that said, I personally favour (A) as the simplest way to solve our current 
needs, and I recommend paring (A) right down to its basics: a block that has 
access ports that we tag with a VLAN ID, and one trunk port that has all of the 
access

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-23 Thread Alan Kavanagh
+1 many thanks to Kyle for putting this as a priority, its most welcome.
/Alan

-Original Message-
From: Erik Moe [mailto:erik@ericsson.com] 
Sent: October-22-14 5:01 PM
To: Steve Gordon; OpenStack Development Mailing List (not for usage questions)
Cc: iawe...@cisco.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: 22 October 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

- Original Message -
 From: Kyle Mestery mest...@mestery.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 There are currently at least two BPs registered for VLAN trunk support 
 to VMs in neutron-specs [1] [2]. This is clearly something that I'd 
 like to see us land in Kilo, as it enables a bunch of things for the 
 NFV use cases. I'm going to propose that we talk about this at an 
 upcoming Neutron meeting [3]. Given the rotating schedule of this 
 meeting, and the fact the Summit is fast approaching, I'm going to 
 propose we allocate a bit of time in next Monday's meeting to discuss 
 this. It's likely we can continue this discussion F2F in Paris as 
 well, but getting a head start would be good.
 
 Thanks,
 Kyle
 
 [1] https://review.openstack.org/#/c/94612/
 [2] https://review.openstack.org/#/c/97714
 [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-23 Thread Ian Wells
There are two categories of problems:

1. some networks don't pass VLAN tagged traffic, and it's impossible to
detect this from the API
2. it's not possible to pass traffic from multiple networks to one port on
one machine as (e.g.) VLAN tagged traffic

(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
addresses this, particularly in the case that one VM is emitting tagged
packets that another one should receive and Openstack knows nothing about
what's going on.

We should get this in, and ideally in quickly and in a simple form where it
simply tells you if a network is capable of passing tagged traffic.  In
general, this is possible to calculate but a bit tricky in ML2 - anything
using the OVS mechanism driver won't pass VLAN traffic, anything using
VLANs should probably also claim it doesn't pass VLAN traffic (though
actually it depends a little on the switch), and combinations of L3 tunnels
plus Linuxbridge seem to pass VLAN traffic just fine.  Beyond that, it's
got a backward compatibility mode, so it's possible to ensure that any
plugin that doesn't implement VLAN reporting is still behaving correctly
per the specification.

(2) is addressed by several blueprints, and these have overlapping ideas
that all solve the problem.  I would summarise the possibilities as follows:

A. Racha's L2 gateway blueprint,
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which
(at its simplest, though it's had features added on and is somewhat
OVS-specific in its detail) acts as a concentrator to multiplex multiple
networks onto one as a trunk.  This is a very simple approach and doesn't
attempt to resolve any of the hairier questions like making DHCP work as
you might want it to on the ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint,
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries
to solve the addressing problem mentioned above by having ports within
ports (much as, on the VM side, interfaces passing trunk traffic tend to
have subinterfaces that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is
a collection of other networks, each 'subnetwork' being a VLAN in the
network trunk.
E. Kyle's very old blueprint,
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api -
where we attach a port, not a network, to multiple networks.  Probably
doesn't work with appliances.

I would recommend we try and find a solution that works with both external
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to
the data model, independently of the benefits they bring.  (A) adds one new
functional block to networking (similar to today's routers, or even today's
Nova instances).

Finally, I suggest we consider the most prominent use case for multiplexing
networks.  This seems to be condensing traffic from many networks to either
a service VM or a service appliance.  It's useful, but not essential, to
have Neutron control the addresses on the trunk port subinterfaces.

So, that said, I personally favour (A) as the simplest way to solve our
current needs, and I recommend paring (A) right down to its basics: a block
that has access ports that we tag with a VLAN ID, and one trunk port that
has all of the access networks multiplexed onto it.  This is a slightly
dangerous block, in that you can actually set up forwarding blocks with it,
and that's a concern; but it's a simple service block like a router, it's
very, very simple to implement, and it solves our immediate problems so
that we can make forward progress.  It also doesn't affect the other
solutions significantly, so someone could implement (C) or (D) or (E) in
the future.
-- 
Ian.


On 23 October 2014 02:13, Alan Kavanagh alan.kavan...@ericsson.com wrote:

 +1 many thanks to Kyle for putting this as a priority, its most welcome.
 /Alan

 -Original Message-
 From: Erik Moe [mailto:erik@ericsson.com]
 Sent: October-22-14 5:01 PM
 To: Steve Gordon; OpenStack Development Mailing List (not for usage
 questions)
 Cc: iawe...@cisco.com
 Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints


 Hi,

 Great that we can have more focus on this. I'll attend the meeting on
 Monday and also attend the summit, looking forward to these discussions.

 Thanks,
 Erik


 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: 22 October 2014 16:29
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
 Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
 blueprints

 - Original Message -
  From: Kyle Mestery mest...@mestery.com
  To: OpenStack

[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Kyle Mestery
There are currently at least two BPs registered for VLAN trunk support
to VMs in neutron-specs [1] [2]. This is clearly something that I'd
like to see us land in Kilo, as it enables a bunch of things for the
NFV use cases. I'm going to propose that we talk about this at an
upcoming Neutron meeting [3]. Given the rotating schedule of this
meeting, and the fact the Summit is fast approaching, I'm going to
propose we allocate a bit of time in next Monday's meeting to discuss
this. It's likely we can continue this discussion F2F in Paris as
well, but getting a head start would be good.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/94612/
[2] https://review.openstack.org/#/c/97714
[3] https://wiki.openstack.org/wiki/Network/Meetings

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Salvatore Orlando
Kyle,

I pointed out the similarity of the two specifications while reviewing them
a few months ago (see patch set #4).
Ian then approached me on IRC (I'm afraid it's going to be a bit difficult
to retrieve those logs), and pointed out that actually the two
specifications, in his opinion, try to address different problems.

While the proposed approaches appear different, their ultimate goal is
apparently the same: enabling instances to see multiple networks on the same
data-plane port (as opposed to the mgmt-level logical port). While it might
be ok to have a variety of choices at the data-plane level, my suggestion is
that we should have only a single way of specifying this at the mgmt level,
with the least possible changes to the simple logical model we have - and
here I'm referring to the proposed trunk-port/subport approach [1].

Salvatore

[1] https://review.openstack.org/#/c/94612/
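
For concreteness, the trunk-port/subport idea at the mgmt level could be
sketched roughly as below. This is only an illustration of the model under
discussion at the time; the resource name, field names, and request shape
are hypothetical, not an agreed Neutron API:

```python
# Hypothetical sketch of a trunk-port/subport request body at the
# management (API) level: one parent port carries untagged traffic to the
# VM, and each subport maps a VLAN tag to another logical port/network.
# All resource and field names here are illustrative, not an agreed API.

def build_trunk_request(parent_port_id, subports):
    """Build a dict for a hypothetical POST /v2.0/trunks call.

    subports: list of (port_id, vlan_id) tuples, one per tagged network
    the instance should see on the same data-plane port.
    """
    return {
        "trunk": {
            "port_id": parent_port_id,  # the VM-facing parent port
            "sub_ports": [
                {
                    "port_id": port_id,
                    "segmentation_type": "vlan",
                    "segmentation_id": vlan_id,  # tag seen by the guest
                }
                for port_id, vlan_id in subports
            ],
        }
    }

req = build_trunk_request(
    "parent-port-uuid",
    [("subport-a-uuid", 100), ("subport-b-uuid", 200)],
)
print(len(req["trunk"]["sub_ports"]))  # prints 2
```

The point of the sketch is that the logical model stays simple: subports are
ordinary Neutron ports, and the trunk resource only records which VLAN tag
multiplexes each of them onto the parent port.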





Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Steve Gordon

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html



Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Erik Moe

Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik




Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Bob Melander (bmelande)
I suppose this BP also has some relevance to such a discussion.

https://review.openstack.org/#/c/100278/

/ Bob




