Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-12 Thread Erik Moe

Hi All,

Great! It looks like there is a lot of interest in this.

In other news: after more internal discussions, it now looks like I will not be
working on this. Petr Savelyev will take over on the Ericsson side.

I wish him the best of luck!

Thanks,
Erik


-Original Message-
From: Moshe Levi [mailto:mosh...@mellanox.com] 
Sent: 12 May 2015 20:21
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

Hi Erik,

Mellanox is also interested in this, but for the ML2 plugin with the
sriov-nic-switch mechanism driver.
I am planning to add agent support for the sriov-nic-switch (once the driver
supports it), but if you need help in other places I can also pitch in.


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Friday, May 08, 2015 8:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?

On 05/08/2015 09:29 AM, Erik Moe wrote:
> Hi,
>
> I have not been able to work on upstreaming this for some time now.
> But now it looks like I may make another attempt. Who else is
> interested in this, as a user or as a contributor? If we get some
> traction we can have an IRC meeting sometime next week.

Hi Erik,

Mirantis has interest in this functionality, and depending on the amount of 
work involved, we could pitch in...

Please cc me or add me to relevant reviews and I'll make sure the right folks 
are paying attention.

All the best,
-jay

> *From:*Scott Drennan [mailto:sco...@nuagenetworks.net]
> *Sent:* 4 May 2015 18:42
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [neutron]Anyone looking at support for 
> VLAN-aware VMs in Liberty?
>
> VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I
> don't see any work on VLAN-aware VMs for Liberty.  There are a
> blueprint[1] and a spec[2], which were deferred from Kilo - is this
> something anyone is looking at as a Liberty candidate?  I looked but
> didn't find any recent work - is work on this happening somewhere
> else?  No one has listed it on the liberty summit topics[3]
> etherpad, which could mean it's uncontroversial, but given history on
> this, I think that's unlikely.
>
> cheers,
>
> Scott
>
> [1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
>
> [2]: https://review.openstack.org/#/c/94612
>
> [3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-08 Thread Erik Moe

Hi,

I have not been able to work on upstreaming this for some time now. But now it
looks like I may make another attempt. Who else is interested in this, as a
user or as a contributor? If we get some traction we can have an IRC meeting
sometime next week.

Thanks,
Erik


From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: 4 May 2015 18:42
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see
any work on VLAN-aware VMs for Liberty.  There are a blueprint[1] and a
spec[2], which were deferred from Kilo - is this something anyone is looking at
as a Liberty candidate?  I looked but didn't find any recent work - is work on
this happening somewhere else?  No one has listed it on the liberty summit
topics[3] etherpad, which could mean it's uncontroversial, but given history on
this, I think that's unlikely.

cheers,
Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://review.openstack.org/#/c/94612
[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-05 Thread Erik Moe

One reason for trying to get a more complete API into Neutron is
standardization, so users know what to expect and providers have something to
comply with. Do you suggest we bring this standardization work to some other
forum, OPNFV for example, so that Neutron provides low-level hooks and the rest
is defined elsewhere? Maybe this could work, but there would probably be other
issues if the actual implementation is not on the edge or is outside Neutron.

/Erik


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: 4 December 2014 20:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

On 1 December 2014 at 21:26, Mohammad Hanif
<mha...@brocade.com> wrote:
I hope we all understand how edge VPN works and what interactions are
introduced as part of this spec.  I see references to a neutron-network mapping
to the tunnel, which is not at all the case and which the edge-VPN spec doesn't
propose.  At a very high level, there are two main concepts:

  1.  Creation of a per-tenant VPN “service” on a PE (physical router) which
has connectivity to other PEs using some tunnel (not known to the tenant or
tenant-facing).  An attachment circuit for this VPN service is also created
which carries a “list” of tenant networks (the list is initially empty).
  2.  The tenant “updates” the list of tenant networks in the attachment
circuit, which essentially allows the VPN “service” to add or remove a network
from that VPN.
A service plugin implements what is described in (1) and provides an API which
is called by what is described in (2).  The Neutron driver only “updates” the
attachment circuit using an API (the attachment circuit is also part of the
service plugin's data model).  I don't see where we are introducing large data
model changes to Neutron.
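
As a concrete sketch of step (2), the tenant-facing update might be a single
call that replaces the network list on the attachment circuit. The resource and
field names below are purely illustrative (the real ones would come from the
edge-VPN spec under review), e.g.:

  PUT /v2.0/attachment_circuits/{circuit_id}
  {
    "attachment_circuit": {
      "networks": ["net-uuid-1", "net-uuid-2"]
    }
  }

The service plugin from (1) would then add or remove the listed networks from
the per-tenant VPN service on the PE.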

Well, you have attachment types, tunnels, and so on - these are all objects 
with data models, and your spec is on Neutron so I'm assuming you plan on 
putting them into the Neutron database - where they are, for ever more, a 
Neutron maintenance overhead both on the dev side and also on the ops side, 
specifically at upgrade.

How else does one introduce a network service in OpenStack if not through a
service plugin?

Again, I've missed something here, so can you define 'service plugin' for me?  
How similar is it to a Neutron extension - which we agreed at the summit we 
should take pains to avoid, per Salvatore's session?
And the answer to that is to stop talking about plugins or trying to integrate 
this into the Neutron API or the Neutron DB, and make it an independent service 
with a small and well defined interaction with Neutron, which is what the 
edge-id proposal suggests.  If we do incorporate it into Neutron then there are 
probably 90% of Openstack users and developers who don't want or need it but 
care a great deal if it breaks the tests.  If it isn't in Neutron they simply 
don't install it.

As we can see, the tenant needs to communicate (explicitly or otherwise) to
add/remove its networks to/from the VPN.  There has to be a channel and the
APIs to achieve this.

Agreed.  I'm suggesting it should be a separate service endpoint.
--
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-28 Thread Erik Moe
[truncated ASCII diagram: eth0 (ethernet card) attached to ovs-br]
Since ovs-br's VLAN is assigned as x, this will be modified to y towards
br-int; but y is assigned by OVS, not by our config, so there may be more than
one VLAN-mod flow for a packet in ovs-br. This can falsify the vlan_id and may
cause a network loop!
The above accidents are what actually happened in our experiments, not just my
imagination.
Please take extra caution in the design!
Please feel free to contact me at this email address; comments are welcome.
Damon Wang

2014-11-06 2:59 GMT+08:00 Armando M.
<arma...@gmail.com>:
I would be open to making this toggle switch available; however, I feel that
doing it via static configuration can introduce unnecessary burden on the
operator. Perhaps we could explore a way for the agent to figure out which
state it's supposed to be in based on its reported status?

Armando

On 5 November 2014 12:09, Salvatore Orlando
<sorla...@nicira.com> wrote:
I have no opposition to that, and I will be happy to assist reviewing the code 
that will enable flow synchronisation  (or to say it in an easier way, punctual 
removal of flows unknown to the l2 agent).

In the meanwhile, I hope you won't mind if we go ahead and start making flow 
reset optional - so that we stop causing downtime upon agent restart.

Salvatore

On 5 November 2014 11:57, Erik Moe
<erik@ericsson.com> wrote:

Hi,

I also agree; IMHO we need a flow synchronization method so we can avoid
network downtime and stray flows.

Regards,
Erik


From: Germy Lure [mailto:germy.l...@gmail.com]
Sent: 5 November 2014 10:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi Salvatore,
A startup flag is really a simpler approach. But in what situations should we
set this flag to remove all flows? Upgrade? Manual restart? Internal fault?

Indeed, we only need to refresh flows when there are inconsistent (incorrect,
unwanted, stale and so on) flows between the agent and OVS. But the problem is
how we know this. I think a startup flag is too coarse, unless we can tolerate
the inconsistent situation.

Of course, I believe that turning off the reset-flows-on-startup action can
resolve most problems; the flows are correct most of the time, after all. But
considering NFV five-nines availability, I still recommend the flow
synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando
<sorla...@nicira.com> wrote:
From what I gather from this thread and related bug report, the change 
introduced in the OVS agent is causing a data plane outage upon agent restart, 
which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe, cleaning 
up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually quite 
straightforward; we might add a startup flag to reset all flows and not reset 
them by default.
While I agree the "flow synchronisation" process proposed in the previous post 
is valuable too, I hope we might be able to fix this with a simpler approach.

Salvatore
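
As a side note, a minimal sketch of such a toggle, assuming oslo.config and an
illustrative option name (the real name and default would be settled in
review), could look like:

  from oslo_config import cfg

  opts = [
      cfg.BoolOpt('reset_flows_on_start', default=False,
                  help='If True, wipe all OpenFlow flows on agent startup '
                       'instead of preserving the installed ones.'),
  ]
  cfg.CONF.register_opts(opts, 'AGENT')

  def setup_bridge(bridge):
      # Only wipe flows when the operator explicitly asks for it; the
      # default keeps the data plane intact across agent restarts.
      if cfg.CONF.AGENT.reset_flows_on_start:
          bridge.remove_all_flows()
      # ...continue with normal flow (re)programming...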

On 5 November 2014 04:43, Germy Lure
<germy.l...@gmail.com> wrote:
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) restarting only the agent
2) rebooting the host the agent is deployed on

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows, while the others need to be reprogrammed, deleted
or added

In any case, I think both users and developers would be happy to see the system
recover ASAP after an agent restart. The best outcome is that the agent pushes
only the incorrect flows and keeps the correct ones; this ensures traffic using
the correct flows keeps working while the agent is starting.

So, I suggest two solutions:
1. After restarting, the agent gets all flows from OVS, compares them with its
local flows, and corrects only the ones that differ.
2. Adapt OVS and the agent: the agent just pushes all flows (without removing
any) every time, and OVS keeps two flow tables and switches between them (like
an RCU lock).

Option 1 is recommended because of third-party vendors.

BR,
Germy
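
A rough Python sketch of option 1, assuming flows are compared as normalised
ovs-ofctl strings (a real implementation would need more careful normalisation,
since OVS rewrites parts of a flow when it is installed):

  import subprocess

  def dump_flows(bridge):
      # Strip statistics and cookies so the remaining
      # "table=..., match actions=..." strings compare as a set.
      out = subprocess.check_output(
          ['ovs-ofctl', 'dump-flows', bridge], universal_newlines=True)
      flows = set()
      for line in out.splitlines():
          fields = [f for f in line.strip().split(', ')
                    if f and not f.startswith((
                        'cookie=', 'duration=', 'n_packets=',
                        'n_bytes=', 'idle_age=', 'hard_age=', 'NXST_FLOW'))]
          if fields:
              flows.add(', '.join(fields))
      return flows

  def sync_flows(bridge, desired):
      # Correct only the flows that differ: add the missing ones and
      # delete the stray ones, leaving correct flows (and the traffic
      # using them) untouched.
      actual = dump_flows(bridge)
      for flow in desired - actual:
          subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])
      for flow in actual - desired:
          # --strict permits a priority field in the match and deletes
          # only the exact flow, not everything that matches.
          match = flow.split(' actions=')[0]
          subprocess.check_call(
              ['ovs-ofctl', '--strict', 'del-flows', bridge, match])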


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec
<openst...@nemebean.com> wrote:
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly
> <henry4...@gmail.com> wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 8:01 PM, Robert van Leeuwen
>> <robert.vanleeu...@spilgames.com>
>> wrote:
>>
>>>>> I find our current design removes all flows and then adds flows entry by
>>>>> entry; this will cause every network node to break off all tunnels to other

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Erik Moe

Ok, I don’t mind starting with the simplistic approach.

Regards,
Erik


From: Gariganti, Sudhakar Babu [mailto:sudhakar-babu.gariga...@hp.com]
Sent: 5 November 2014 12:14
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

I guess this blueprint[1] attempted to address the flow synchronization issue
during agent restart, but I see no progress/updates. It would be helpful to
know about the status there.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-agent-soft-restart

On a different note, I agree with Salvatore on getting started with the 
simplistic approach and improve it further.

Regards,
Sudhakar.

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Wednesday, November 05, 2014 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

I have no opposition to that, and I will be happy to assist reviewing the code 
that will enable flow synchronisation  (or to say it in an easier way, punctual 
removal of flows unknown to the l2 agent).

In the meanwhile, I hope you won't mind if we go ahead and start making flow 
reset optional - so that we stop causing downtime upon agent restart.

Salvatore

On 5 November 2014 11:57, Erik Moe
<erik@ericsson.com> wrote:

Hi,

I also agree; IMHO we need a flow synchronization method so we can avoid
network downtime and stray flows.

Regards,
Erik


From: Germy Lure [mailto:germy.l...@gmail.com]
Sent: 5 November 2014 10:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi Salvatore,
A startup flag is really a simpler approach. But in what situations should we
set this flag to remove all flows? Upgrade? Manual restart? Internal fault?

Indeed, we only need to refresh flows when there are inconsistent (incorrect,
unwanted, stale and so on) flows between the agent and OVS. But the problem is
how we know this. I think a startup flag is too coarse, unless we can tolerate
the inconsistent situation.

Of course, I believe that turning off the reset-flows-on-startup action can
resolve most problems; the flows are correct most of the time, after all. But
considering NFV five-nines availability, I still recommend the flow
synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando
<sorla...@nicira.com> wrote:
From what I gather from this thread and related bug report, the change 
introduced in the OVS agent is causing a data plane outage upon agent restart, 
which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe, cleaning 
up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually quite 
straightforward; we might add a startup flag to reset all flows and not reset 
them by default.
While I agree the "flow synchronisation" process proposed in the previous post 
is valuable too, I hope we might be able to fix this with a simpler approach.

Salvatore

On 5 November 2014 04:43, Germy Lure
<germy.l...@gmail.com> wrote:
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) restarting only the agent
2) rebooting the host the agent is deployed on

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows, while the others need to be reprogrammed, deleted
or added

In any case, I think both users and developers would be happy to see the system
recover ASAP after an agent restart. The best outcome is that the agent pushes
only the incorrect flows and keeps the correct ones; this ensures traffic using
the correct flows keeps working while the agent is starting.

So, I suggest two solutions:
1. After restarting, the agent gets all flows from OVS, compares them with its
local flows, and corrects only the ones that differ.
2. Adapt OVS and the agent: the agent just pushes all flows (without removing
any) every time, and OVS keeps two flow tables and switches between them (like
an RCU lock).

Option 1 is recommended because of third-party vendors.

BR,
Germy


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec
<openst...@nemebean.com> wrote:
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly
> <henry4...@gmail.com> wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 8:01 PM, Robert van Leeuwen
>> <robert.vanleeu...@spilgames.com>
>> wrote:
>>
>>>>> I find our current design removes all flows and then adds flows entry by
>>>>> entry; this will cause every network node to break off all tunnels to other

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Erik Moe

Hi,

I also agree; IMHO we need a flow synchronization method so we can avoid
network downtime and stray flows.

Regards,
Erik


From: Germy Lure [mailto:germy.l...@gmail.com]
Sent: 5 November 2014 10:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent 
start? why and how avoid?

Hi Salvatore,
A startup flag is really a simpler approach. But in what situations should we
set this flag to remove all flows? Upgrade? Manual restart? Internal fault?

Indeed, we only need to refresh flows when there are inconsistent (incorrect,
unwanted, stale and so on) flows between the agent and OVS. But the problem is
how we know this. I think a startup flag is too coarse, unless we can tolerate
the inconsistent situation.

Of course, I believe that turning off the reset-flows-on-startup action can
resolve most problems; the flows are correct most of the time, after all. But
considering NFV five-nines availability, I still recommend the flow
synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando
<sorla...@nicira.com> wrote:
From what I gather from this thread and related bug report, the change 
introduced in the OVS agent is causing a data plane outage upon agent restart, 
which is not desirable in most cases.

The rationale for the change that introduced this bug was, I believe, cleaning 
up stale flows on the OVS agent, which also makes some sense.

Unless I'm missing something, I reckon the best way forward is actually quite 
straightforward; we might add a startup flag to reset all flows and not reset 
them by default.
While I agree the "flow synchronisation" process proposed in the previous post 
is valuable too, I hope we might be able to fix this with a simpler approach.

Salvatore

On 5 November 2014 04:43, Germy Lure
<germy.l...@gmail.com> wrote:
Hi,

Considering what triggers an agent restart, I think it's nothing but:
1) restarting only the agent
2) rebooting the host the agent is deployed on

When the agent starts, OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows, while the others need to be reprogrammed, deleted
or added

In any case, I think both users and developers would be happy to see the system
recover ASAP after an agent restart. The best outcome is that the agent pushes
only the incorrect flows and keeps the correct ones; this ensures traffic using
the correct flows keeps working while the agent is starting.

So, I suggest two solutions:
1. After restarting, the agent gets all flows from OVS, compares them with its
local flows, and corrects only the ones that differ.
2. Adapt OVS and the agent: the agent just pushes all flows (without removing
any) every time, and OVS keeps two flow tables and switches between them (like
an RCU lock).

Option 1 is recommended because of third-party vendors.

BR,
Germy


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec
<openst...@nemebean.com> wrote:
On 10/29/2014 10:17 AM, Kyle Mestery wrote:
> On Wed, Oct 29, 2014 at 7:25 AM, Hly
> <henry4...@gmail.com> wrote:
>>
>>
>> Sent from my iPad
>>
>> On 2014-10-29, at 8:01 PM, Robert van Leeuwen
>> <robert.vanleeu...@spilgames.com>
>> wrote:
>>
> I find our current design removes all flows and then adds flows entry by
> entry; this will cause every network node to break off all tunnels to other
> network nodes and all compute nodes.
 Perhaps a way around this would be to add a flag on agent startup
 which would have it skip reprogramming flows. This could be used for
 the upgrade case.
>>>
>>> I hit the same issue last week and filed a bug here:
>>> https://bugs.launchpad.net/neutron/+bug/1383674
>>>
>>> From an operator's perspective this is VERY annoying, since you also cannot
>>> push any config change that requires/triggers a restart of the agent,
>>> e.g. something simple like changing a log setting becomes a hassle.
>>> I would prefer the default behaviour to be to not clear the flows, or at
>>> the least a config option to disable it.
>>>
>>
>> +1, we have also suffered from this even when only a very small patch is applied
>>
> I'd really like to get some input from the tripleo folks, because they
> were the ones who filed the original bug here and were hit by the
> agent NOT reprogramming flows on agent restart. It does seem fairly
> obvious that adding an option around this would be a good way forward,
> however.

Since nobody else has commented, I'll put in my two cents (though I
might be overcharging you ;-).  I've also added the TripleO tag to the
subject, although with Summit coming up I don't know if that will help.

Anyway, if the bug you're referring to is the one I think, then our
issue was just with the flows not existing.  I don't think we care
whether they get reprogrammed on agent restart or not as long as they
somehow come into existence at some point.

It's possible I'm wrong about that, and probably the best person to talk
to would be Robert Collins since I think he's the one who actually
trac

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread Erik Moe

It sounds like this would match a VLAN trunk network.

Maybe it could be mapped to a trunk port also, but I have not really worked
with flat networks, so I am not sure what DHCP etc. would look like.

Is it desirable to be able to control port membership of VLANs, or is it OK to
connect all VLANs to all ports?

/Erik


From: Wuhongning [mailto:wuhongn...@huawei.com]
Sent: 4 November 2014 03:41
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Is the trunk port use case like a super-VLAN?

Also, there is another typical use case that may not be covered: the extended
flat network. Traffic on the port carries multiple VLANs, but these VLANs are
not necessarily managed as Neutron networks, so they cannot be classified as
trunk ports. And they don't need a gateway to communicate with other nodes in
the physical provider network; what they expect Neutron to do is much like what
the flat network does (so I call it extended flat): just keep the packets as-is,
bidirectionally, between the wire and the vNIC.


From: Erik Moe [erik@ericsson.com]
Sent: Tuesday, November 04, 2014 5:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik


From: Erik Moe [mailto:erik@ericsson.com]
Sent: 2 November 2014 23:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: 31 October 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe
<erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that "VLAN aware VMs", trunk network
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let's try. I hope your theory turns out to be realistic. :)
Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN.
The router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but just
to show what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since the
VM port is connected to several broadcast domains. Alternatively, a new API can
be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the
same time, IMO they should be encouraged to become Neutronified.
In "VLAN aware VMs" trunk port mac address has to be globally unique since it 
can be connected to any network, other ports still only has to be unique per 
network. But for L2GW all mac addresses has to be globally unique since they 
might

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-04 Thread Erik Moe

Hi,

I have reserved the last slot on Friday.

https://etherpad.openstack.org/p/neutron-kilo-meetup-slots

/Erik


From: Richard Woo [mailto:richardwoo2...@gmail.com]
Sent: 3 November 2014 23:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Hello, will this topic be discussed in the design session?

Richard

On Mon, Nov 3, 2014 at 10:36 PM, Erik Moe
<erik@ericsson.com> wrote:

I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik


From: Erik Moe [mailto:erik@ericsson.com]
Sent: 2 November 2014 23:12

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: 31 October 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe
<erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk network
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN.
The router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but just
to show what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since the
VM port is connected to several broadcast domains. Alternatively, a new API can
be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the
same time, IMO they should be encouraged to become Neutronified.
In “VLAN aware VMs”, the trunk port MAC address has to be globally unique since
it can be connected to any network; other ports still only have to be unique
per network. But for L2GW, all MAC addresses have to be globally unique since
they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses
among Neutron networks, but I guess this is configurable.
Also, some implementations might not be able to take the VID into account when
doing MAC address learning, forcing at least unique MACs on a trunk network.

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would expect (for 
instance) LB-driver networking to work for this and leave OVS-driver networking 
to never work for this, because there's little point in fixing it.
Same as abo

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-03 Thread Erik Moe

I created an etherpad and added use cases (so far just the ones in your email).

https://etherpad.openstack.org/p/tenant_vlans

/Erik


From: Erik Moe [mailto:erik@ericsson.com]
Sent: 2 November 2014 23:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints



From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: 31 October 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe
<erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk network
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN.
The router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but just
to show what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since the
VM port is connected to several broadcast domains. Alternatively, a new API can
be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the
same time, IMO they should be encouraged to become Neutronified.
In “VLAN aware VMs”, the trunk port MAC address has to be globally unique since
it can be connected to any network; other ports still only have to be unique
per network. But for L2GW, all MAC addresses have to be globally unique since
they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses
among Neutron networks, but I guess this is configurable.
Also, some implementations might not be able to take the VID into account when
doing MAC address learning, forcing at least unique MACs on a trunk network.

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would expect (for 
instance) LB-driver networking to work for this and leave OVS-driver networking 
to never work for this, because there's little point in fixing it.
Same as above; this is related to reuse of MAC addresses.
The benefit of “VLAN aware VMs” is integration with existing Neutron services.
The benefits of trunk networks are lower consumption of Neutron networks and
less management per VLAN.

Actually, the benefit of trunk networks is:
- if I use an infrastructure where all networks are trunks, I can find out that 
a network is a trunk
- if I use an infrastructure where no networks are trunks, I can find out that 
a network is not a trunk
- if I use an infrastructure where trunk networks are more expen

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-11-02 Thread Erik Moe


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: 31 October 2014 23:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


On 31 October 2014 06:29, Erik Moe
<erik@ericsson.com> wrote:


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk network
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

I think we agreed they were different, or at least the light was beginning to 
dawn on the differences, but Maru's point was that if we really want to decide 
what specs we have we need to show use cases not just for each spec 
independently, but also include use cases where e.g. two specs are required and 
the third doesn't help, so as to show that *all* of them are needed.  In fact, 
I suggest that first we do that - here - and then we meet up one lunchtime and 
attack the specs in etherpad before submitting them.  In theory we could have 
them reviewed and approved by the end of the week.  (This theory may not be 
very realistic, but it's good to set lofty goals, my manager tells me.)
Ok, let’s try. I hope your theory turns out to be realistic. ☺
Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN.
The router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but just
to show what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since the
VM port is connected to several broadcast domains. Alternatively, a new API can
be created.

Now, this is a very good argument for 'trunk ports', yes.  It's not actually an 
argument against bridging between networks.  I think the bridging case 
addresses use cases (generally NFV use cases) where you're not interested in 
Openstack managing addresses - often because you're forwarding traffic rather 
than being an endpoint, and/or you plan on disabling all firewalling for speed 
reasons, but perhaps because you wish to statically configure an address rather 
than use DHCP.  The point is that, in the absence of a need for address-aware 
functions, you don't really care much about ports, and in fact configuring 
ports with many addresses may simply be overhead.  Also, as you say, this 
doesn't address the external bridging use case where what you're bridging to is 
not necessarily in Openstack's domain of control.
I know that many NFVs currently prefer to manage everything themselves. At the
same time, IMO they should be encouraged to become Neutronified.
In “VLAN aware VMs”, the trunk port MAC address has to be globally unique since
it can be connected to any network; other ports still only have to be unique
per network. But for L2GW, all MAC addresses have to be globally unique since
they might be bridged together at a later stage.

I'm not sure that that's particularly a problem - any VM with a port will have 
one globally unique MAC address.  I wonder if I'm missing the point here, 
though.
Ok, this was probably too specific, sorry. Neutron can reuse MAC addresses
among Neutron networks, but I guess this is configurable.
Also, some implementations might not be able to take the VID into account when
doing MAC address learning, forcing at least unique MACs on a trunk network.

If an implementation struggles with VLANs then the logical thing to do would be 
not to implement them in that driver.  Which is fine: I would expect (for 
instance) LB-driver networking to work for this and leave OVS-driver networking 
to never work for this, because there's little point in fixing it.

Same as above; this is related to reuse of MAC addresses.
The benefit of “VLAN aware VMs” is integration with existing Neutron services.
The benefits of trunk networks are lower consumption of Neutron networks and
less management per VLAN.

Actually, the benefit of trunk networks is:
- if I use an infrastructure where all networks are trunks, I can find out that 
a network is a trunk
- if I use an infrastructure where no networks are trunks, I can find out that 
a network is not a trunk
- if I use an infrastructure where trunk networks are more expensive, my 
operator can price accordingly

And, again, this is all entirely independent of either VLAN-aware ports or L2GW 
blocks.
Both are true. I was referring to “true” trunk networks; you were referring to
your additions, right?
The benefit of L2GW is the ease of doing network stitching.
There are other benefits with the different proposals, the point is t

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-31 Thread Erik Moe


I thought Monday's network meeting agreed that “VLAN aware VMs”, trunk network
+ L2GW were different use cases.

Still I get the feeling that the proposals are put up against each other.

Here are some examples of why bridging between Neutron internal networks using
a trunk network and L2GW should IMO be avoided. I am still fine with bridging
to external networks.

Assume a VM with a trunk port wants to use a floating IP on a specific VLAN.
The router has to be created on a Neutron network behind the L2GW, since the
Neutron router cannot handle VLANs. (Maybe not too common a use case, but just
to show what kind of issues you can get into.)
neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
The code that checks whether the port is valid has to be able to traverse the
L2GW. Handling of the VM's IP addresses will most likely be affected, since the
VM port is connected to several broadcast domains. Alternatively, a new API can
be created.

In “VLAN aware VMs”, the trunk port MAC address has to be globally unique since
it can be connected to any network; other ports still only have to be unique
per network. But for L2GW, all MAC addresses have to be globally unique since
they might be bridged together at a later stage. Also, some implementations
might not be able to take the VID into account when doing MAC address learning,
forcing at least unique MACs on a trunk network.

The benefit of “VLAN aware VMs” is integration with existing Neutron services.
The benefits of trunk networks are lower consumption of Neutron networks and
less management per VLAN.
The benefit of L2GW is the ease of doing network stitching.
There are other benefits to the different proposals; the point is that it might
be beneficial to have all of these solutions.

Platforms that have issues forking off VLANs at the VM port level could get by
with trunk network + L2GW, but with more hacks if integration with other parts
of Neutron is needed. Platforms that have issues implementing trunk networks
could get by using “VLAN aware VMs”, but would be forced to manage every VLAN
separately as a Neutron network. On platforms that have both, the user can
select the method depending on what is needed.

Thanks,
Erik



From: Armando M. [mailto:arma...@gmail.com]
Sent: 28 October 2014 19:01
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Sorry for jumping into this thread late...there's lots of details to process, 
and I needed time to digest!

Having said that, I'd like to recap before moving the discussion forward, at 
the Summit and beyond.

As has been pointed out, there are a few efforts targeting this area; I think
it is sensible to adopt the latest spec system we have been using to
understand where we are, and I mean Gerrit and the spec submissions.

To this aim I see the following specs:

https://review.openstack.org/93613 - Service API for L2 bridging 
tenants/provider networks
https://review.openstack.org/100278 - API Extension for l2-gateway
https://review.openstack.org/94612 - VLAN aware VMs
https://review.openstack.org/97714 - VLAN trunking networks for NFV

First of all: did I miss any? I am intentionally leaving out any vendor 
specific blueprint for now.

When I look at these I clearly see that we jump all the way to implementation
details. From an architectural point of view, this clearly does not make a lot
of sense.

In order to ensure that everyone is on the same page, I would suggest to have a 
discussion where we focus on the following aspects:

- Identify the use cases: what are, in simple terms, the possible interactions 
that an actor (i.e. the tenant or the admin) can have with the system (an 
OpenStack deployment), when these NFV-enabling capabilities are available? What 
are the observed outcomes once these interactions have taken place?

- Management API: what abstractions do we expose to the tenant or admin (do we
augment the existing resources, do we create new resources, or do we do both)?
This should obviously be driven by a set of use cases, and we need to identify
the minimum set of logical artifacts that would let us meet the needs of the
widest set of use cases.

- Core Neutron changes: what needs to happen to the core of Neutron, if 
anything, so that we can implement these NFV-enabling constructs successfully?
Are there any changes to the core L2 API? Are there any changes required to the 
core framework (scheduling, policy, notifications, data model etc)?

- Add support to the existing plugin backends: the openvswitch reference 
implementation is an obvious candidate, but other plugins may want to leverage 
the newly defined capabilities too. Once the above mentioned points have been 
fleshed out, it should be fairly straightforward to have these efforts progress 
in autonomy.

IMO, until we can get a full understanding of the aspects above, I don't 
believe the core team is in the best position to determine the best
approach forward; I think it's in eve

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-25 Thread Erik Moe
ffic, and it's impossible to detect 
this from the API
2. it's not possible to pass traffic from multiple networks to one port on one 
machine as (e.g.) VLAN tagged traffic
(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else 
addresses this, particularly in the case that one VM is emitting tagged packets 
that another one should receive and Openstack knows nothing about what's going 
on.

We should get this in, and ideally in quickly and in a simple form where it 
simply tells you if a network is capable of passing tagged traffic.  In 
general, this is possible to calculate but a bit tricky in ML2 - anything using 
the OVS mechanism driver won't pass VLAN traffic, anything using VLANs should 
probably also claim it doesn't pass VLAN traffic (though actually it depends a 
little on the switch), and combinations of L3 tunnels plus Linuxbridge seem to 
pass VLAN traffic just fine.  Beyond that, it's got a backward compatibility 
mode, so it's possible to ensure that any plugin that doesn't implement VLAN 
reporting is still behaving correctly per the specification.
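
For reference, the Kilo feature mentioned at the top of this digest surfaced
exactly this as a boolean vlan_transparent attribute on networks, so (assuming
the vlan-transparent extension is enabled and a client recent enough to expose
the flag) a tenant can request and inspect trunk capability along these lines:

  neutron net-create trunk-net --vlan-transparent True
  neutron net-show -F vlan_transparent trunk-net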

(2) is addressed by several blueprints, and these have overlapping ideas that 
all solve the problem.  I would summarise the possibilities as follows:
A. Racha's L2 gateway blueprint, 
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which (at 
its simplest, though it's had features added on and is somewhat OVS-specific in 
its detail) acts as a concentrator to multiplex multiple networks onto one as a 
trunk.  This is a very simple approach and doesn't attempt to resolve any of 
the hairier questions like making DHCP work as you might want it to on the 
ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/, 
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint, 
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries to 
solve the addressing problem mentioned above by having ports within ports (much 
as, on the VM side, interfaces passing trunk traffic tend to have subinterfaces 
that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is a 
collection of other networks, each 'subnetwork' being a VLAN in the network 
trunk.
E. Kyle's very old blueprint, 
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api - 
where we attach a port, not a network, to multiple networks.  Probably doesn't 
work with appliances.

I would recommend we try and find a solution that works with both external 
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to the 
data model, independently of the benefits they bring.  (A) adds one new 
functional block to networking (similar to today's routers, or even today's 
Nova instances).
Finally, I suggest we consider the most prominent use case for multiplexing 
networks.  This seems to be condensing traffic from many networks to either a 
service VM or a service appliance.  It's useful, but not essential, to have 
Neutron control the addresses on the trunk port subinterfaces.
So, that said, I personally favour (A) as the simplest way to solve our current
needs, and I recommend paring (A) right down to its basics: a block that has 
access ports that we tag with a VLAN ID, and one trunk port that has all of the 
access networks multiplexed onto it.  This is a slightly dangerous block, in 
that you can actually set up forwarding blocks with it, and that's a concern; 
but it's a simple service block like a router, it's very, very simple to 
implement, and it solves our immediate problems so that we can make forward 
progress.  It also doesn't affect the other solutions significantly, so someone 
could implement (C) or (D) or (E) in the future.
--
Ian.
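
To make (A) concrete, here is a hypothetical CLI flow in the style later
adopted by the networking-l2gw project (names illustrative, not from the
blueprints above):

  # Define the gateway block: one access interface carrying VLAN 100.
  neutron l2-gateway-create --device name=gw1,interface_names="eth1|100" my-l2gw

  # Multiplex a tenant network onto the trunk side as VLAN 100.
  neutron l2-gateway-connection-create my-l2gw tenant-net \
      --default-segmentation-id 100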


On 23 October 2014 02:13, Alan Kavanagh
<alan.kavan...@ericsson.com> wrote:
+1 many thanks to Kyle for putting this as a priority, its most welcome.
/Alan

-Original Message-
From: Erik Moe [mailto:erik@ericsson.com]
Sent: October-22-14 5:01 PM
To: Steve Gordon; OpenStack Development Mailing List (not for usage questions)
Cc: iawe...@cisco.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: 22 October 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com

Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-22 Thread Erik Moe

Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com] 
Sent: 22 October 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

- Original Message -
> From: "Kyle Mestery" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> There are currently at least two BPs registered for VLAN trunk support 
> to VMs in neutron-specs [1] [2]. This is clearly something that I'd 
> like to see us land in Kilo, as it enables a bunch of things for the 
> NFV use cases. I'm going to propose that we talk about this at an 
> upcoming Neutron meeting [3]. Given the rotating schedule of this 
> meeting, and the fact the Summit is fast approaching, I'm going to 
> propose we allocate a bit of time in next Monday's meeting to discuss 
> this. It's likely we can continue this discussion F2F in Paris as 
> well, but getting a head start would be good.
> 
> Thanks,
> Kyle
> 
> [1] https://review.openstack.org/#/c/94612/
> [2] https://review.openstack.org/#/c/97714
> [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Interest in a meeting at the Networking pod at the design summit?

2014-05-11 Thread Erik Moe
Hi,

I am also interested.

/Erik


On Fri, May 9, 2014 at 3:43 AM, Mohammad Banikazemi  wrote:

> Sounds good.  Thanks.
>
> Mohammad
>
>
> On May 8, 2014, at 3:02 PM, "Stephen Wong" 
> <s3w...@midokura.com>
> wrote:
>
> Hi Sean,
>
> Perfect (I assume it is local time, i.e. 2:30pm EDT).
>
> And I also assume this will be at the Neutron pod?
>
> - Stephen
>
>
> On Thu, May 8, 2014 at 9:22 AM, Collins, Sean
> <sean_colli...@cable.comcast.com>
> wrote:
> On Tue, May 06, 2014 at 07:17:26PM EDT, Mohammad Banikazemi wrote:
> >
> > There are networking talks in the general session in the afternoon on
> > Thursday including the talk on Network Policies from 1:30 to 2:10pm.
> > Anything after that is ok with me.
>
> How does 2:30PM on Thursday sound to everyone?
>
> --
> Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QoS] API Design Document v2

2014-04-09 Thread Erik Moe
Hi,

The API Design Document v2 includes the following example:

Response:

{
  "qos": [
    {
      "id": "1234-5678-1234-5678",
      "description": "Gold level service",
      "type": "ratelimit",
      "policy": {"kbps": "10240"}
    },
    {
      "id": "1235-5678-1234-5678",
      "description": "Silver level service",
      "type": "dscp",
      "policy": "af32"
    }
  ]
}

It looks like a gold tenant would get the ratelimit policy and a silver
tenant would get the dscp policy.

Is there a proposal for how to set both ratelimit and dscp for a tenant?
Would that tenant be both gold and silver (associated with both)?
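
For what it's worth, a minimal sketch of the kind of association I have in
mind, assuming a purely hypothetical endpoint that accepts a list of QoS ids
on a port (nothing like this is in the draft; all names are illustrative):

import requests

NEUTRON = "http://controller:9696/v2.0"  # assumed endpoint
HEADERS = {"X-Auth-Token": "TOKEN"}      # Keystone token assumed

# Hypothetical: attach both policies to one port, so the same tenant
# gets rate limiting and DSCP marking together.
payload = {
    "port": {
        "qos_ids": [
            "1234-5678-1234-5678",  # ratelimit, kbps 10240
            "1235-5678-1234-5678",  # dscp, af32
        ]
    }
}

resp = requests.put(NEUTRON + "/ports/PORT_ID", json=payload, headers=HEADERS)
resp.raise_for_status()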

Regards,
Erik
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2014-01-08 Thread Erik Moe
I feel that we are getting quite far away from supporting my use case. The
use case: a VM wants to connect to different 'normal' Neutron networks from
one VNIC. VLANs are proposed in the blueprint since they are a common way to
separate 'networks'. They are just a way to connect to different Neutron
networks; this puts no requirements on the method used for tenant separation
inside Neutron. The ability for the user to specify the VID is there because,
for this use case, the service would be used by normal tenants, and it is
preferable not to expose Neutron internals (which might not use VLANs at all
for tenant separation). Also, several VMs could specify the same VID to
connect to different Neutron networks; this avoids dependencies between
tenants.

We would like to have this functionality close to the VNIC, without requiring
an extra 'hop' in the network, for latency, throughput, and fault-management
reasons. The strange optimizations are there because of this.

Also, for this use case, the APIs could be cleaner from a user's perspective.

Maybe we should break out this use case from the L2-gateway?
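
To make the intended semantics concrete, a small sketch of the per-port
VID-to-network mapping this use case implies (all names hypothetical; this
is an illustration, not a proposed API):

# One VNIC (trunk port) whose tagged frames are demultiplexed onto
# ordinary Neutron networks by VID. The mapping is scoped to the port,
# so two VMs (or tenants) can reuse the same VID independently.
trunk_port = {
    "port_id": "vm1-vnic0",
    "subports": [
        {"vid": 100, "network_id": "net-a"},
        {"vid": 200, "network_id": "net-b"},
    ],
}

def network_for_vid(trunk, vid):
    """Return the Neutron network a tagged frame belongs to, or None."""
    for sub in trunk["subports"]:
        if sub["vid"] == vid:
            return sub["network_id"]
    return None  # unknown VID: drop the frame

assert network_for_vid(trunk_port, 100) == "net-a"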

/Erik



On Mon, Dec 23, 2013 at 10:09 PM, Ian Wells  wrote:

> I think we have two different cases here - one where a 'trunk' network
> passes all VLANs, which is potentially supportable by anything that's not
> based on VLANs for separation, and one where a trunk can't feasibly do that
> but where we could make it pass a restricted set of VLANs by mapping.
>
> In the former case, obviously we need no special awareness of the nature
> of the network to implement an L2 gateway.
>
> In the latter case, we're looking at a specialisation of networks, one
> where you would first create them with a set of VLANs you wanted to pass
> (and - presumably - the driver would say 'ah, I must allocate multiple
> VLANs to this network rather than just one').  You've jumped in with two
> optimisations on top of that:
>
> - we can precalculate the VLANs the network needs to pass in some cases,
> because it's the sum of VLANs that L2 gateways on that network know about
> - we can use L2 gateways to make the mapping from 'tenant' VLANs to
> 'overlay' VLANs
>
> They're good ideas but they add some limitations to what you can do with
> trunk networks that aren't actually necessary in a number of solutions.
>
> I wonder if we should try the general case first with e.g. a
> Linuxbridge/GRE based infrastructure, and then add the optimisations
> afterwards.  If I were going to do that optimisation I'd start with the
> capability mechanism and add the ability to let the tenant specify the
> specific VLAN tags which must be passed (as you normally would on a
> physical switch). I'd then have two port types - a user-facing one that
> ensures the entry and exit mapping is made on the port, and an
> administrative one which exposes that mapping internally and lets the
> client code (e.g. the L2 gateway) do the mapping itself.  But I think it
> would be complicated, and maybe even has more complexity than is
> immediately apparent (e.g. we're effectively allocating a cluster-wide
> network to get backbone segmentation IDs for each VLAN we pass, which is
> new and different), hence my thought that we should start with the easy case
> first just to have something working, and see how the tenant API feels.  We
> could do this with a basic bit of gateway code running on a system using
> Linuxbridge + GRE, I think - the key seems to be avoiding VLANs in the
> overlay and then the problem is drastically simplified.
> --
> Ian.
>
>
> On 21 December 2013 23:00, Erik Moe  wrote:
>
>> Hi Ian,
>>
>> I think your VLAN trunking capability proposal can be a good thing, so the
>> user can request a Neutron network that can trunk VLANs without caring
>> about detailed information regarding which VLANs to pass. This could be
>> used for use cases where the user wants to pass VLANs between endpoints on
>> an L2 network, etc.
>>
>> For the use case where a VM wants to connect to several "normal" Neutron
>> networks using VLANs, I would prefer a solution that does not require a
>> Neutron trunk network. Possibly by connecting an L2 gateway directly to the
>> Neutron 'vNic' port, or some other solution. IMHO it would be good to map a
>> VLAN to a Neutron network as soon as possible.
>>
>> Thanks,
>> Erik
>>
>>
>>
>> On Thu, Dec 19, 2013 at 2:15 PM, Ian Wells wrote:
>>
>>> On 19 December 2013 06:35, Isaku Yamahata wrote:
>>>
>>>>
>>>> Hi Ian.
>>>>
>>>> I can't see your proposal. Can you 
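
As an aside, Ian's first optimisation above (precompute the VIDs a trunk
network must pass as the union of what its L2 gateways map) can be sketched
as follows, with purely hypothetical structures:

# Each L2 gateway on the trunk network declares the VIDs it maps to
# ordinary Neutron networks; the trunk only needs to carry their union.
gateways = [
    {"id": "gw-1", "vid_map": {100: "net-a", 200: "net-b"}},
    {"id": "gw-2", "vid_map": {200: "net-b", 300: "net-c"}},
]

def vids_to_pass(gws):
    """VIDs the trunk network must carry for its attached gateways."""
    required = set()
    for gw in gws:
        required |= set(gw["vid_map"])
    return required

assert vids_to_pass(gateways) == {100, 200, 300}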

Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2013-12-21 Thread Erik Moe
Hi Ian,

I think your VLAN trunking capability proposal can be a good thing, so the
user can request a Neutron network that can trunk VLANs without caring
about detailed information regarding which VLANs to pass. This could be
used for use cases where the user wants to pass VLANs between endpoints on
an L2 network, etc.
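
As a rough sketch, such a capability request might look like the following,
assuming a hypothetical boolean flag on network create (endpoint, token, and
flag name are all assumptions for illustration):

import requests

payload = {"network": {"name": "trunk-net", "vlan_transparent": True}}
resp = requests.post(
    "http://controller:9696/v2.0/networks",  # assumed endpoint
    json=payload,
    headers={"X-Auth-Token": "TOKEN"},       # Keystone token assumed
)
resp.raise_for_status()
print(resp.json()["network"]["id"])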

For the use case where a VM wants to connect to several "normal" Neutron
networks using VLANs, I would prefer a solution that does not require a
Neutron trunk network. Possibly by connecting an L2 gateway directly to the
Neutron 'vNic' port, or some other solution. IMHO it would be good to map a
VLAN to a Neutron network as soon as possible.

Thanks,
Erik



On Thu, Dec 19, 2013 at 2:15 PM, Ian Wells  wrote:

> On 19 December 2013 06:35, Isaku Yamahata wrote:
>
>>
>> Hi Ian.
>>
>> I can't see your proposal. Can you please make it publicly viewable?
>>
>
> Crap, sorry - fixed.
>
>
>> > Even before I read the document I could list three use cases. Erik's
>> > covered some of them himself.
>>
>> I'm not against trunking.
>> I'm trying to understand what requirements need a "trunk network", as in
>> figure 1, in addition to an "L2 gateway" directly connected to the VM via
>> a "trunk port".
>>
>
> No problem, just putting the information there for you.
>
> --
> Ian.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Re: [Blueprint vlan-aware-vms] VLAN aware VMs

2013-12-17 Thread Erik Moe
Hi,

Thanks for your comments.

See my answers below.

Thanks,
Erik


On Tue, Dec 17, 2013 at 6:17 AM, Isaku Yamahata wrote:

> Added openstack-dev
>
> The document is view-only. So I commented below.
>
> - 2 Modeling proposal
>   What's the purpose of the trunk network?
>   Can you please add a use case where the trunk network can't be optimized
>   away?
>

In some use cases the trunk network will trunk all VLANs from a VM, so that
they can, for example, be 'tunneled' to another VM or externally. In the use
case where a VM wants to connect to multiple Neutron networks, the trunk
network is a logical connection between the VM trunk port and the L2
gateways. From my point of view it looks a little strange for this use case,
but I think this is what we said during our meeting in Hong Kong (unless I
misunderstood something...).

I added a use case where two VMs are connected through a trunk network. This
cannot be optimized away: the network would have to be able to trunk all
VLANs between the VMs.
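
To illustrate why it cannot be optimized away, a tiny sketch of the
difference (hypothetical helpers, not agent code):

# A VM-to-VM trunk network has no finite VID set to precompute:
# it must carry every 802.1Q VID unchanged.
def trunk_passes(vid):
    return 1 <= vid <= 4094

# An L2-gateway attachment, by contrast, only needs the VIDs it maps.
gateway_vid_map = {100: "net-a", 200: "net-b"}

def gateway_passes(vid):
    return vid in gateway_vid_map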


>
> - 4 IP address management
>   Nitpick: can you please clarify what "the L2 gateway ports" are in the
>   section 2 modeling proposal, figure 1?
>
>
I have now tried to clarify this more.


> - Table 3
>   Will this be the same as the l2-gateway one?
>   https://blueprints.launchpad.net/neutron/+spec/l2-gateway
>
>
I will try to align with this and maybe other proposals as much as possible.
I just wanted some feedback before I make too many assumptions.


> - Figure 5
>   What's the purpose of the br-int local VID?
>   The VID could be converted directly from the br-eth1 VID to the VM VID or
>   untagged.
>
>
Unless something has changed, all vNICs handled by the OVS agent are
connected to br-int. br-int uses a local VID to separate traffic. br-int is
connected to one or more other bridges, each representing a physical
network. The br-int local VID is mapped to a per-bridge VID, so two separate
Neutron networks could have the same VID on two different physical networks.
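
A simplified sketch of that two-stage mapping (names illustrative, not the
actual agent code):

# Stage 1: each Neutron network gets a node-local VID on br-int.
local_vlan_map = {"net-a": 1, "net-b": 2}

# Stage 2: per physical bridge, the local VID is rewritten to that
# physical network's segmentation VID, so two Neutron networks can
# use the same wire VID on different physical networks.
phys_vlan_map = {
    "br-eth1": {1: 100},  # net-a -> VLAN 100 on physnet1
    "br-eth2": {2: 100},  # net-b -> VLAN 100 on physnet2
}

def wire_vid(network, bridge):
    """Translate a network's br-int local VID to the outbound VID."""
    return phys_vlan_map[bridge][local_vlan_map[network]]

assert wire_vid("net-a", "br-eth1") == wire_vid("net-b", "br-eth2") == 100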




> --
> Isaku Yamahata 
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VLAN aware VMs

2013-11-04 Thread Erik Moe
Ok, 2PM Thursday is fine with me.



On Tue, Nov 5, 2013 at 6:49 AM, Kyle Mestery (kmestery)
wrote:

> How about if we do a developers lounge chat at 2PM Thursday?
>
> On Nov 4, 2013, at 10:20 PM, Yi Sun 
>  wrote:
>
> > Guys,
> > I just checked the schedule of unconference sessions. There are no free
> slots anymore.
> >
> > Yi
> >
> > On Tuesday, October 29, 2013, Isaku Yamahata wrote:
> > Hi Erik and Li.
> > Unconference at the next summit?
> >
> > On Mon, Oct 28, 2013 at 02:34:28PM -0700,
> > beyounn  wrote:
> >
> > > Hi Erik,
> > >
> > > While we were discussing the service VM framework, the trunk port
> > > support was also mentioned. I think people do see the need for it.
> > >
> > > I have seen someone mention another BP,
> > > https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api,
> > > in your BP already. Maybe it is the same as what you are doing.
> > >
> > > And the trunk port use case can also impact how the zone is constructed
> > > in the FWaaS context (when a firewall VM uses a trunk port to connect
> > > multiple networks). The basic question is how we should present a trunk
> > > port, and the VLANs on a trunk port, to Neutron.
> > >
> > >
> > >
> > > Yi
> > >
> > >
> > >
> > > From: Erik Moe [mailto:emoe...@gmail.com]
> > > Sent: Monday, October 28, 2013 1:56 PM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: [openstack-dev] [Neutron] VLAN aware VMs
> > >
> > >
> > >
> > > Hi!
> > >
> > > We are looking into how to make it possible for tenant VMs to use VLAN
> > > tagged traffic to connect to different Neutron networks.
> > >
> > >
> > >
> > > The VID on frames sent/received will determine which Neutron network the
> > > frames are connected to.
> > >
> > >
> > > https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
> > >
> > > I would like to find others that also see the need for this kind of
> > > functionality and would like to discuss this.
> > >
> > > Regards,
> > > Erik
> > >
> >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > --
> > Isaku Yamahata 
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > --
> > Android-x86
> > http://www.android-x86.org
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VLAN aware VMs

2013-10-28 Thread Erik Moe
Hi Yamahata,

Yes, an unconference session sounds good.

I agree that quantum-network-bundle-api is in the same area. I missed this
blueprint.

/Erik


On Mon, Oct 28, 2013 at 11:10 PM, Isaku Yamahata
wrote:

> Hi Erik and Li.
> Unconference at the next summit?
>
> On Mon, Oct 28, 2013 at 02:34:28PM -0700,
> beyounn  wrote:
>
> > Hi Erik,
> >
> > While we were discussing the service VM framework, the trunk port
> > support was also mentioned. I think people do see the need for it.
> >
> > I have seen someone mention another BP,
> > https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api,
> > in your BP already. Maybe it is the same as what you are doing.
> >
> > And the trunk port use case can also impact how the zone is constructed
> > in the FWaaS context (when a firewall VM uses a trunk port to connect
> > multiple networks). The basic question is how we should present a trunk
> > port, and the VLANs on a trunk port, to Neutron.
> >
> >
> >
> > Yi
> >
> >
> >
> > From: Erik Moe [mailto:emoe...@gmail.com]
> > Sent: Monday, October 28, 2013 1:56 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] [Neutron] VLAN aware VMs
> >
> >
> >
> > Hi!
> >
> > We are looking into how to make it possible for tenant VMs to use VLAN
> > tagged traffic to connect to different Neutron networks.
> >
> >
> >
> > The VID on frames sent/received will determine which Neutron network the
> > frames are connected to.
> >
> >
> > https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
> >
> > I would like to find others that also see the need for this kind of
> > functionality and would like to discuss this.
> >
> > Regards,
> > Erik
> >
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Isaku Yamahata 
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] VLAN aware VMs

2013-10-28 Thread Erik Moe
Hi!

We are looking into how to make it possible for tenant VMs to use VLAN
tagged traffic to connect to different Neutron networks.

The VID on frames sent/received will determine which Neutron network the
frames are connected to.

https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

I would like to find others that also see the need for this kind of
functionality and would like to discuss this.

Regards,
Erik
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev