[openstack-dev] [neutron] Cancelling net-bgpvpn today's meeting

2016-11-22 Thread Mathieu Rohon
Hi,

Neither Thomas nor I can chair today's bgpvpn meeting.
We don't have many topics on the agenda, so let's cancel today's meeting.

See you next week.

Mathieu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN

2016-06-06 Thread Mathieu Rohon
Hi,

Sorry for the late reply. If you want to attach a Neutron network or a
Neutron router to an existing MPLS-based BGP L3 VPN, you can use the BGPVPN
project [1], with its API and one of its backends; bagpipe [2] is its
open-source reference implementation.

Those projects have dedicated DevStack plugins, so it's quite easy to
experiment.

[1]http://git.openstack.org/cgit/openstack/networking-bgpvpn
[2]http://git.openstack.org/cgit/openstack/networking-bagpipe
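
For anyone who wants to try this outside of DevStack, here is a minimal
sketch of the two API calls involved. It is only an illustration: the
endpoint, URL paths and payload fields are assumptions based on the BGPVPN
API documentation and may need adjusting for a real deployment.

import requests

NEUTRON_URL = "http://controller:9696/v2.0"    # assumed Neutron endpoint
HEADERS = {
    "X-Auth-Token": "KEYSTONE_TOKEN",          # a valid Keystone token
    "Content-Type": "application/json",
}

# 1. Create a BGPVPN resource carrying the route target of the existing L3 VPN.
resp = requests.post(
    NEUTRON_URL + "/bgpvpn/bgpvpns",
    json={"bgpvpn": {"name": "corporate-vpn", "route_targets": ["64512:1"]}},
    headers=HEADERS,
)
bgpvpn_id = resp.json()["bgpvpn"]["id"]

# 2. Attach an existing Neutron network to it (router_associations work the
#    same way for router attachments).
requests.post(
    NEUTRON_URL + "/bgpvpn/bgpvpns/" + bgpvpn_id + "/network_associations",
    json={"network_association": {"network_id": "NETWORK_UUID"}},
    headers=HEADERS,
)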

Mathieu

On Thu, May 26, 2016 at 5:29 PM, Kosnik, Lubosz 
wrote:

> I had a discussion with a few operators, and after what I heard about VPNaaS
> I can tell that we are not supposed to help with that implementation.
> Maybe we should work on service VMs and prepare an implementation of VPNaaS
> using them, based on some prebuilt images like VyOS or others.
>
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
> > On May 26, 2016, at 9:39 AM, Ihar Hrachyshka 
> wrote:
> >
> >
> >> On 26 May 2016, at 16:23, Kosnik, Lubosz 
> wrote:
> >>
> >> You should read the e-mails on the ML. VPNaaS will be removed from the repo
> in the next 6 months. You need to look into something else, like running a
> VyOS image, pfSense or another solution.
> >
> > Strictly speaking, vpnaas is on probation right now, and if interested
> parties actually revive the project, it may stay past those 6 months. That
> said, I haven’t heard about anyone stepping in since the summit.
> >
> > Ihar
> >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-14 Thread Mathieu Rohon
Hi cathy,

at net-bgpvpn, we're very interested in this effort. Please, keep us in the
loop.

Mathieu

On Thu, Apr 14, 2016 at 8:59 AM, Haim Daniel  wrote:

> Hi,
>
> I'd +1 Vikram's comment on neutron-classifier; RFE [1] contains the
> original thread about that topic.
>
>
> [1] https://bugs.launchpad.net/neutron/+bug/1527671
>
> On Thu, Apr 14, 2016 at 5:33 AM, Vikram Choudhary 
> wrote:
>
>> Hi Cathy,
>>
>> A project called "neutron-classifier" [1] also addresses the same use
>> case. Let's sync up and avoid duplicating work.
>>
>> [1] https://github.com/openstack/neutron-classifier
>>
>> Thanks
>> Vikram
>> On Apr 14, 2016 6:40 AM, "Cathy Zhang"  wrote:
>>
>> Hi everyone,
>>
>> Per Armando's request, Louis and I are looking into the following
>> features for the Newton cycle:
>>
>> · a common Neutron FC, usable by SFC, QoS, Tap-as-a-Service, etc.
>>
>> · an OVS agent extension
>>
>> Some of you might know that we have already developed an FC in the
>> networking-sfc project, and QoS also has one. It makes sense to have a single
>> common FC in Neutron that can be shared by SFC, QoS, Tap-as-a-Service and
>> other Neutron features.
>>
>> Different features may extend the OVS agent and add new OVS flow tables to
>> support their functionality. A mechanism is needed to ensure consistent OVS
>> flow table modification when multiple features coexist. AFAIK, there is some
>> preliminary work on this, but it is not a complete solution yet.
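
As a purely illustrative aside, here is one possible shape for such a shared
classifier resource; the field names below are modelled on the networking-sfc
flow classifier and are assumptions, not an agreed Neutron API.

# Hedged sketch: a flow classifier that SFC, QoS and Tap-as-a-Service could
# all reference by UUID instead of each defining its own matching fields.
COMMON_FLOW_CLASSIFIER = {
    "name": "web-traffic",
    "ethertype": "IPv4",
    "protocol": "tcp",
    "source_ip_prefix": "10.0.0.0/24",
    "destination_ip_prefix": None,          # wildcard
    "source_port_range_min": None,
    "source_port_range_max": None,
    "destination_port_range_min": 80,
    "destination_port_range_max": 80,
    "logical_source_port": "PORT_UUID",     # Neutron port the traffic enters from
}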
>>
>> We would like to start this effort by collecting requirements and then
>> posting specifications for review. If any of you would like to join, please
>> chime in. We can set up a meet-up session at the Summit to discuss this
>> face-to-face.
>>
>> Thanks,
>>
>> Cathy
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][bgpvpn] Meetings canceled

2015-12-21 Thread Mathieu Rohon
Hi,

tmorin and I won't be available for the next two meetings.
The next weekly meeting of the networking-bgpvpn project will be on
Tuesday, January 5th, at 15:00 UTC on #openstack-meeting-alt.

Thanks

Mathieu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with L3/L2

2015-12-18 Thread Mathieu Rohon
Hi,

I'm also interested in live migration in the context of two bugs in l2pop:
https://bugs.launchpad.net/neutron/+bug/1483601
https://bugs.launchpad.net/neutron/+bug/1443421

Mathieu

On Fri, Dec 18, 2015 at 11:16 AM, Oleg Bondarev 
wrote:

> I think it might be a bit early to start a cross-project discussion on
> this.
> I'd suggest first figuring out what questions we have and what we would
> like to get from Nova.
> So I think it'd be more constructive if we first think it over within the
> Neutron team.
> I left some questions on the bug [1]; please see comment #8.
>
> [1] https://bugs.launchpad.net/neutron/+bug/1456073
>
> Thanks,
> Oleg
>
> On Fri, Dec 18, 2015 at 12:14 AM, Vasudevan, Swaminathan (PNB Roseville) <
> swaminathan.vasude...@hpe.com> wrote:
>
>> Hi Sean M. Collins,
>> Thanks for the information.
>> It would be great if we could bring in the right people from both sides to
>> discuss and solve this problem.
>> Please let me know if you can pull in the right people from the Nova side,
>> and I will get the people from the Neutron side.
>>
>> Thanks
>> Swami
>>
>> -Original Message-
>> From: Sean M. Collins [mailto:s...@coreitpro.com]
>> Sent: Thursday, December 17, 2015 1:04 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Neutron] Live Migration Issues with
>> L3/L2
>>
>> On Thu, Dec 17, 2015 at 02:08:42PM EST, Vasudevan, Swaminathan (PNB
>> Roseville) wrote:
>> > Hi Folks,
>> > I would like organize a meeting between the Nova and Neutron team to
>> work refining the Nova/Neutron notificiations for the Live Migration.
>> >
>> > Today we only have Notification from Neutron to Nova on any port status
>> update.
>> >
>> > But we don't have any similar notification from Nova on any Migration
>> state change.
>> > Neutron L3 will be interested in knowing the state change for vm
>> migration and can take necessary action pro-actively to create the
>> necessary L3 related plumbing that is required.
>> >
>> > Here are some of the bugs that are currently filed with respect to nova
>> live migration and neutron.
>> > https://bugs.launchpad.net/neutron/+bug/1456073
>> > https://bugs.launchpad.net/neutron/+bug/1414559
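
Purely as an illustration of the missing notification, here is a hypothetical
payload that Neutron L3 could consume to prepare the destination host; no such
event exists in Nova today, and every field name below is an assumption.

# Hypothetical migration-state notification from Nova to Neutron (illustrative
# only): enough context for L3 to pre-plumb the destination before cut-over.
LIVE_MIGRATION_EVENT = {
    "event_type": "instance.live_migration.pre",   # assumed event name
    "instance_id": "INSTANCE_UUID",
    "source_host": "compute-1",
    "destination_host": "compute-2",
    "port_ids": ["PORT_UUID"],    # ports whose L3 plumbing must be prepared
}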
>> >
>> > Please let me know who would be interested in participating in the
>> discussion.
>> > It would be great if we could get some core reviewers' attention from Nova
>> and Neutron.
>> >
>> > Thanks.
>> > Swaminathan Vasudevan
>> > Systems Software Engineer (TC)
>>
>>
>> Cool. Brent and I are inter-project liaisons between Neutron and Nova, so
>> let us know what we can do to help raise awareness on both sides.
>>
>>
>> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Inter-project_Liaisons
>>
>> --
>> Sean M. Collins
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Multiple locations for documentation of features

2015-12-08 Thread Mathieu Rohon
Hi all,

Thanks for the explanation.
Can you also explain how the Neutron team uses blueprints, and how they
overlap with RFE bugs?

On Mon, Dec 7, 2015 at 6:40 PM, Sean M. Collins  wrote:

> On Mon, Dec 07, 2015 at 12:18:29PM EST, Carl Baldwin wrote:
> > On Fri, Dec 4, 2015 at 12:22 PM, Henry Gessau  wrote:
> > > 1. RFE: "I want X"
> > > 2. Spec: "I plan to implement X like this"
> > > 3. devref: "How X is implemented and how to extend it"
> > > 4. OS docs: "API and guide for using X"
> > >
> > > Once X is implemented I don't want to have to go to 1 or 2 to find
> information
> > > on it. The devref may have a lot of content from the spec, but the
> spec is not
> > > maintained and the implementation may differ in some ways. The devref
> should
> > > be kept current with refactorings, etc. of the implementation.
> >
> > Henry, I was also impressed by how clearly you communicated this.
> > This ought to be included somewhere prominently in our
> > doc/source/policies/ or somewhere like that.
> >
>
> +1 for Henry's great answer, and +1 for Carl's suggestion!
>
> --
> Sean M. Collins
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-11-23 Thread Mathieu Rohon
Thanks ihar!

On Thu, Nov 19, 2015 at 2:32 PM, Ihar Hrachyshka 
wrote:

> UPD: now that we have some understanding of what's needed from the l2 agent
> extension mechanism to cater for interested subprojects (and now that we see
> that the agent in focus right now is probably OVS only), we may move to the
> RFE step. I reported the following RFE for the feature:
>
> https://bugs.launchpad.net/neutron/+bug/1517903
>
> It may require a BP if the drivers team requests one.
>
> Cheers,
>
> Ihar
>
> Ihar Hrachyshka  wrote:
>
> Reviving the thread.
>>
>> At the design summit session dedicated to agent and plugin extensions [1],
>> the following was stated for l2 agent extensions (I'd appreciate it if
>> someone checks me on this, though):
>>
>> - current l2 agent extensions mechanism lacks insight into agent details
>> like bridges or vlan maps;
>>
>> - in some cases, we don’t care about extension portability across
>> multiple agents, so it’s not of concern if some of them use implementation
>> details like bridges to set specific flows, or to wire up some additional
>> ports to them;
>>
>> - that said, we still don't want extensions to have unlimited access to
>> agent details; the rationale for hard constraints on what is seen inside
>> extensions is that we cannot support backwards compatibility for *all*
>> possible internal attributes of an agent; instead, we should explicitly
>> define where we can make an effort to provide a stable API into agent
>> details, and what is, on the contrary, beyond real-life use cases and hence
>> can be left to be broken/refactored as neutron developers see fit; this API
>> can be agent specific though;
>>
>> - agent details that are to be passed into extensions should be driven by
>> actual use cases. There were several subprojects mentioned in the session
>> that are assumed to lack enough access to agent attributes to do their job
>> without patching core ovs agent files. Those are: BGP-VPN, SFC, (anything
>> else?) Those subprojects that are interested in extending l2 agent
>> extension framework are expected to come up with a list of things missing
>> in current implementation, so that neutron developers can agree on proper
>> abstractions to provide missing details to extensions. For that goal, I set
>> up a new etherpad to collect feedback from subprojects [2].
>>
>> Once we collect use cases there and agree on an agent API for extensions
>> (even if per agent type), we will implement it, define it as a stable API,
>> and then pass objects that implement the API into extensions through the
>> extension manager. If extensions support multiple agent types, they can
>> still decide which API to use based on the agent type string passed into the
>> extension manager.
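
A minimal sketch of the kind of interface described above, with purely
illustrative class and method names (not the actual Neutron code):

import abc


class L2AgentExtension(abc.ABC):
    """Base class an extension manager could load and drive."""

    @abc.abstractmethod
    def initialize(self, agent_api, agent_type):
        """Receive the agent-provided API object and the agent type string."""

    @abc.abstractmethod
    def handle_port(self, context, port_details):
        """React to a port being wired by the agent."""


class BgpvpnExtension(L2AgentExtension):
    """Example consumer that only supports the OVS agent."""

    def initialize(self, agent_api, agent_type):
        if agent_type != "Open vSwitch agent":      # assumed type string
            raise RuntimeError("this extension only supports the OVS agent")
        # agent_api is the stable facade exposing selected agent details
        # (e.g. the tunnel bridge) instead of the agent's raw internals.
        self.tun_br = agent_api.request_tun_br()

    def handle_port(self, context, port_details):
        # program MPLS/VXLAN flows for the port here (omitted)
        pass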
>>
>> I really hope we start to collect use cases early so that we have time to
>> polish the agent API and make it part of l2 extensions early in the Mitaka
>> cycle.
>>
>> [1]: https://etherpad.openstack.org/p/mitaka-neutron-core-extensibility
>> [2]: https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
>>
>> Ihar
>>
>> Ihar Hrachyshka  wrote:
>>
>> On 30 Sep 2015, at 12:53, Miguel Angel Ajo  wrote:



 Ihar Hrachyshka wrote:

> On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:
>>
>> Hi Ihar,
>>
>> Ihar Hrachyshka :
>>
>>> Miguel Angel Ajo :

> Do you have a rough idea of what operations you may need to do?
>
 Right now, what bagpipe driver for networking-bgpvpn needs to
 interact with is:
 - int_br OVSBridge (read-only)
 - tun_br OVSBridge (add patch port, add flows)
 - patch_int_ofport port number (read-only)
 - local_vlan_map dict (read-only)
 - setup_entry_for_arp_reply method (called to add static ARP
 entries)

>>> Sounds very tightly coupled to OVS agent.
>>>
 Please bear in mind, the extension interface will be available from
> different agent types
> (OVS, SR-IOV, [eventually LB]), so this interface you're talking
> about could also serve as
> a translation driver for the agents (where the translation is
> possible), I totally understand
> that most extensions are specific agent bound, and we must be able
> to identify
> the agent we're serving back exactly.
>
 Yes, I do have this in mind, but what we've identified for now
 seems to be OVS specific.

>>> Indeed it does. Maybe you can try to define the needed pieces in
>>> high level actions, not internal objects you need to access to. Like ‘-
>>> connect endpoint X to Y’, ‘determine segmentation id for a network’ etc.
>>>
>> I've been thinking about this, but would tend to reach the conclusion
>> that the things we need to interact with are pretty hard to abstract out
>> into something that would be generic across different agents.  Everything
>> we need to do in our case relates t

Re: [openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service driver

2015-08-19 Thread Mathieu Rohon
Hi,

thanks for your reply irena and salvatore.

Currently, we're targeting 4 backends: bagpipe (the reference implementation,
compatible with the other reference implementations in Neutron), ODL, Contrail
and Nuage.
Contrail and bagpipe work with network attachments to a bgpvpn connection,
while ODL and Nuage work with router attachments. We have even started
thinking about port attachments [1].
Moreover, ODL needs an RD attribute that won't be supported by other
backends.

I think that each backend should be able to manage each kind of attachment
in the future, depending on the will of the backend dev team. But as a first
step, we have to deal with the capabilities of each backend.

So, indeed, managing attachments to a bgpvpn connection through the use of
extensions will expose backend capabilities. And I agree that this is not the
right way, since when moving from one cloud to another, the API will change
depending on the backend.

So I see two ways to solve this issue:
1 - In the first releases, backends that don't support a feature throw a
"NotImplemented" exception when the feature is called through the API. We
still have an inconsistent API, but hopefully this will only be temporary.
2 - Reduce the scope of the spec [2], at the price of fewer compatible
backends and a smaller community for the bgpvpn project.

[1]https://blueprints.launchpad.net/bgpvpn/+spec/port-association
[2]https://review.openstack.org/#/c/177740/
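
As a minimal sketch of option 1, assuming an illustrative driver interface, a
backend that only supports network attachments would simply reject router
attachments:

class ExampleBgpvpnDriver(object):
    """Hedged sketch: the class and method names are assumptions."""

    supported_attachments = ("network",)

    def create_network_association(self, context, bgpvpn, network_id):
        # program the backend accordingly (omitted)
        pass

    def create_router_association(self, context, bgpvpn, router_id):
        # option 1 from above: surface the missing capability explicitly
        raise NotImplementedError(
            "router associations are not supported by this backend")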

regards,

Mathieu

On Wed, Aug 19, 2015 at 1:55 PM, Irena Berezovsky 
wrote:

> The current VPNaaS service plugin inherits from VpnPluginRpcDbMixin, which is
> not required for some vendor solutions, since L3 is implemented without
> leveraging L3 agents to manage router namespaces (ODL, MidoNet, etc.).
> I guess if the mixin usage is changed to conditional RPC support based on
> driver requirements, following what Salvatore suggested makes perfect sense.
>
>
> On Wed, Aug 19, 2015 at 11:06 AM, Salvatore Orlando <
> salv.orla...@gmail.com> wrote:
>
>> my 0.02€ on the matter inline.
>>
>> Regards,
>> Salvatore
>>
>> On 18 August 2015 at 23:45, Mathieu Rohon 
>> wrote:
>>
>>> hi brandon,
>>>
>>> thanks for your answer.
>>>
>>> my answers inline,
>>>
>>>
>>>
>>> On Tue, Aug 18, 2015 at 8:53 PM, Brandon Logan <
>>> brandon.lo...@rackspace.com> wrote:
>>>
>>>> So let me make sure I understand this. You want to do a separate
>>>> service plugin for what would normally be separate drivers under one
>>>> service plugin.  The reasons for this are:
>>>>
>>>>
>>>> 1. You don't want to give users the ability to choose the type; you want
>>>> it always to be the same one
>>>>
>>> While in theory it is possible to have multiple BGPVPN providers in
>> the same deployment, there are control and data plane aspects that the
>> service type framework cannot currently deal with. Mathieu brought
>> some examples in the bug report. The bottom line appears to be that the
>> choice of the l3 service plugin (or whatever serves l3 in your deployment)
>> also dictates the choice of the BGPVPN service provider to employ.
>>
>>> 2. Some types do want to be the source of truth of the data stored,
>>>> instead of it being the service plugin database.
>>>>
>>> This point has little to do with service types. It's about the fact that
>> plugins are not required to implement the various db mixins in neutron.db
>> and therefore not required to use the neutron DB.
>>
>>>
>>>> First, let me address the possibility of a solution using one service
>>>> plugin and multiple drivers per type:
>>>>
>>>>
>>>> I think that you can overcome #1 by checking, when the service plugin is
>>>> instantiated, whether more than one provider is active; if so you can just
>>>> throw an exception saying you can only have one.  I'd have to look at it
>>>> more to see if there are any caveats to this, but I think that would work.
>>>>
>>>>
>>>> For #2, assuming #1 works, then the drivers that are defined can have
>>>> some boolean that they set that will tell the plugin whether they are the
>>>> source of truth or not, and depending on that you can store the data in the
>>>> service plugin's db or just pass the data along, also pass GET requests to
>>>> the drivers as well.
>>>>
>>>>
>>> I agree that those workarounds will surely work, but I wonder what is
>>> the meaning of a service plugin/type that can only su

Re: [openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service driver

2015-08-18 Thread Mathieu Rohon
Hi Brandon,

thanks for your answer.

my answers inline,



On Tue, Aug 18, 2015 at 8:53 PM, Brandon Logan 
wrote:

> So let me make sure I understand this. You want to do a separate service
> plugin for what would normally be separate drivers under one service
> plugin.  The reasons for this are:
>
>
> 1. You don't want to give users the ability to choose the type; you want it
> always to be the same one
>
> 2. Some types do want to be the source of truth of the data stored,
> instead of it being the service plugin database.
>
>
> First, let me address the possibility of a solution using one service
> plugin and multiple drivers per type:
>
>
> I think that you can overcome #1 by checking, when the service plugin is
> instantiated, whether more than one provider is active; if so you can just
> throw an exception saying you can only have one.  I'd have to look at it
> more to see if there are any caveats to this, but I think that would work.
>
>
> For #2, assuming #1 works, the drivers that are defined can set a boolean
> that tells the plugin whether they are the source of truth or not; depending
> on that, you can store the data in the service plugin's db or just pass the
> data along, and also pass GET requests to the drivers.
>
>
I agree that those workarounds will surely work, but I wonder: what is the
meaning of a service plugin/type that can only support one service provider?
Can't the service plugin be the service provider directly?

The reasons why I'm considering this change are :

1. I'm not sure we would have use cases where we would be able to choose a
bgpvpn backend independently from the provider of the core plugin (or a mech
driver in the ML2 case) and/or the router plugin.
If one uses ODL to manage the core resources, one won't be able to use Nuage
or Contrail to manage bgpvpn connections.
The bgpvpn project is more about having a common API than about the ability
to mix backends, at least for the moment.

2. I'm also considering that each plugin, which would be backend dependent,
could declare what features it supports through the use of extensions. Each
plugin would be a "bgpvpn" service type, and would implement the bgpvpn
extension, but some of them could extend the bgpvpn_connection resource
with other extensions also hosted in the bgpvpn project. Since some
backends only support attachment of networks to a bgpvpn_connection, others
support attachment of routers, and others both attachments, I'm considering
having an extension for each type of attachment. Then the bgpvpn plugin
declares what extensions it supports and the end user can act accordingly
depending on the scan of neutron extensions.
By moving to one plugin per backend, the loading of extensions would be done
natively by the neutron framework. We won't have to scan each service
provider to see what extensions it supports, as is done by the ML2
extension manager.
But I agree that with your workaround, of allowing only one service
provider, we can easily scan this driver for its extensions.
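
A minimal sketch of point 2, with illustrative aliases and class names: a
backend-specific service plugin advertises its capabilities through the
standard supported_extension_aliases attribute, so clients can discover them
from the usual extension listing.

class BagpipeBgpvpnPlugin(object):
    """Hedged sketch of a per-backend bgpvpn service plugin."""

    # the core 'bgpvpn' extension plus the attachment types this backend
    # supports; the alias names are assumptions, not defined extensions
    supported_extension_aliases = ["bgpvpn", "bgpvpn-network-association"]

    def get_plugin_type(self):
        return "BGPVPN"

    def get_plugin_description(self):
        return "BGPVPN service plugin backed by bagpipe"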


> As for making a service plugin for each type, I don't see why that
> wouldn't work.  It seems a bit overkill to me though, because you'd probably
> end up having 2 base classes for every service plugin type: one for using
> the service plugin database and another for when the source of truth is
> external.  There's probably a better way to do this; I'm sure I'm
> oversimplifying.
>
You're right, and you're not oversimplifying. With this change, the bgpvpn
framework will only manage API extensions and, if needed, the DB layer. And
we don't want this framework to be complicated; as a first step, we just
want to have a consistent API for every backend.

> I don't see many technical reasons why you couldn't do this, though.  It's
> just inconsistent and might cause some confusion.  I'd need to spend some
> time on it to really have an educated opinion.
>
The fact that this change would lead to inconsistency in the usage of the
different service plugins is a valid point, and might be enough not to do it
and instead limit the bgpvpn service plugin to loading only one service
driver for the moment. That is also inconsistent with some other service
plugins, but probably less so.

Thanks Brandon.

Mathieu

Thanks,
> Brandon
> --
> *From:* Mathieu Rohon 
> *Sent:* Tuesday, August 18, 2015 7:13 AM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service
> driver
>
> Adding the related subject :)
>
> On Tue, Aug 18, 2015 at 10:35 AM, Mathieu Rohon 
> wrote:
>
>> Hi all,
>>
>> The current bgpvpn implementation is using the service type framework,
>

[openstack-dev] [Neutron][bgpvpn] Service Plugin vs Service driver

2015-08-18 Thread Mathieu Rohon
Adding the related subject :)

On Tue, Aug 18, 2015 at 10:35 AM, Mathieu Rohon 
wrote:

> Hi all,
>
> The current bgpvpn implementation is using the service type framework,
> with a service plugin and one or more service providers.
>
> After registering the bug [1], I wonder if we would rather use a service
> plugin per implementation type (bagpipe, ODL, OpenContrail, Nuage...) which
> handles API calls, instead of having one service plugin which forwards API
> calls to a service driver depending on the provider chosen by the end
> user.
>
> I would like to better understand what would be the main drawbacks of such
> a move apart from the fact that a deployment would be tightly coupled to a
> bgpvpn plugin, and multiple implementations of the plugin couldn't coexist.
>
> Thanks,
>
> Mathieu
>
> [1]https://bugs.launchpad.net/bgpvpn/+bug/1485515
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][bgpvpn]

2015-08-18 Thread Mathieu Rohon
Hi all,

The current bgpvpn implementation is using the service type framework, with
a service plugin and one or more service providers.

After registering the bug [1], I wonder if we would rather use a service
plugin per implementation type (bagpipe, ODL, OpenContrail, Nuage...) which
handles API calls, instead of having one service plugin which forwards API
calls to a service driver depending on the provider chosen by the end user.

I would like to better understand what would be the main drawbacks of such
a move apart from the fact that a deployment would be tightly coupled to a
bgpvpn plugin, and multiple implementations of the plugin couldn't coexist.

Thanks,

Mathieu

[1]https://bugs.launchpad.net/bgpvpn/+bug/1485515
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Modular L2 Agent

2015-06-23 Thread Mathieu Rohon
Hi,

there are still some differences in terms of features supported by the two
implementations. Those I am aware of are:
- LB can support VLAN-transparent networks, as mentioned in [2];
- OVS supports MPLS tagging, needed by the bagpipe driver of the bgpvpn
project;
- when the ARP responder is activated (with l2pop), OVS supports falling back
to learning mode if the ARP responder is not populated. The VXLAN module used
with LB does not support this, which leads to bugs like [3].

The framework mentioned by Irena in [1] is a good approach to reporting back
to the user which features are supported by the cloud and the underlying
technology in use.

[2]https://review.openstack.org/#/c/136554/3/specs/kilo/nfv-vlan-trunks.rst
[3]https://bugs.launchpad.net/neutron/+bug/1445089

On Tue, Jun 23, 2015 at 9:03 AM, Irena Berezovsky 
wrote:

>
>
> On Mon, Jun 22, 2015 at 7:48 PM, Sean M. Collins 
> wrote:
>
>> On Mon, Jun 22, 2015 at 10:47:39AM EDT, Salvatore Orlando wrote:
>> > I would probably start with something for enabling the L2 agent to
>> process
>> > "features" such as QoS and security groups, working on the OVS agent,
>> and
>> > then in a second step abstract a driver interface for communicating with
>> > the data plane. But I honestly do not know if this will keep the work
>> too
>> > "OVS-centric" and therefore won't play well with the current efforts to
>> put
>> > linux bridge on par with OVS in Neutron. For those question we should
>> seek
>> > an answer from our glorious reference control plane lieutenant, and
>> perhaps
>> > also from Sean Collins, who's coordinating efforts around linux bridge
>> > parity.
>>
>> I think that what Salvatore has suggested is good. If we start creating
>> good API contracts, and well defined interfaces in the reference control
>> plane agents - this is a good first step. Even if we start off by doing
>> this just for the OVS agent, that'll be a good template for what we
>> would need to do for any agent-driven L2 implementation - and it could
>> easily be re-used by others.
>>
>> To be honest, if you squint hard enough there really are very few
>> differences between what the OVS agent and the Linux Bridge agent do -
>> the parts that handle control plane communication, processing
>> data updates, and so forth should all be very similar.
>>
>> They only become different at the lower
>> levels where it's brctl/ip vs. ovs-vsctl/ovs-ofctl CLI calls - so why
>> maintain two separate agent implementations when quite a bit of what
>> they do is functionally identical?
>>
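
A hedged sketch of that idea: a shared agent core with the data-plane
specifics hidden behind a small driver interface. All names below are
illustrative, not existing Neutron code.

import abc


class DataPlaneDriver(abc.ABC):
    """What actually differs between OVS and Linux Bridge."""

    @abc.abstractmethod
    def plug_port(self, port_id, device, segmentation_id):
        """Wire a port into the right network segment."""


class OvsDriver(DataPlaneDriver):
    def plug_port(self, port_id, device, segmentation_id):
        # ovs-vsctl add-port / ovs-ofctl add-flow calls would go here
        pass


class LinuxBridgeDriver(DataPlaneDriver):
    def plug_port(self, port_id, device, segmentation_id):
        # brctl addif / ip link calls would go here
        pass


class L2Agent(object):
    """Shared control-plane logic: RPC handling, port processing loop, etc."""

    def __init__(self, driver):
        self.driver = driver   # an instance of one of the drivers above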
>
> As Miguel mentioned, the patch [1] adds support for a QoS driver in L2
> agents. Since QoS support is planned to be leveraged by OVS and SR-IOV, and
> maybe later by Linux Bridge, the idea is to make a common L2 agent layer that
> enables generic support for features (extensions), with QoS as the first
> feature to support. This is not the Modular L2 Agent, but it is definitely a
> step in the right direction.
> This work should have minimal impact on the server side; it is mostly about
> code reuse by L2 agents.
>
> [1] https://review.openstack.org/#/c/189723/
>
> BR,
> Irena
>
>> --
>> Sean M. Collins
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Supporting multiple local subnets for VPN connections

2015-05-20 Thread Mathieu Rohon
However, after thinking about it more deeply, option A might be suitable if
the vpn-service becomes more generic and usable by other VPN objects
(ipsec-site-connection, bgpvpn-connection).
It would become an object that the tenant can update to attach the CIDRs it
wants to export in its VPN.
To turn the vpn-service object into a generic VPN description, we need to
remove the mandatory ROUTER attribute, so that we can also use it for L2
VPNs.
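
A hedged sketch of what such a generic vpn-service resource could look like;
the field names are illustrative, not the actual VPNaaS API.

# Illustrative only: router becomes optional and the tenant updates the list
# of local CIDRs it wants to export into the VPN.
GENERIC_VPN_SERVICE = {
    "name": "site-a",
    "router_id": None,                                # no longer mandatory
    "local_cidrs": ["10.1.0.0/24", "10.2.0.0/24"],    # updatable by the tenant
    "admin_state_up": True,
}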

I hope we can discuss that on Friday morning.

On Wed, May 20, 2015 at 5:12 PM, Paul Michali  wrote:

> Hi Mathieu,
>
> In Kilo, VPNaaS APIs were no longer marked as experimental. We need to
> understand the implication of changing the API. I'm not sure how much
> VPNaaS is being used by OpenStack users at this time, either.  I'm hoping
> to seek out answers to this at the summit.
>
> If anyone has suggestions, comments, information on this, please chime in. 
> I'll
> likely make a BP for multiple subnets, when I get back from the summit.
>
> Lastly, I'm planning on trying to get people interested in VPN to meet on
> Friday morning at the summit to discuss all the VPN topics that have been
> coming up.
>
> Regards,
>
> Paul Michali (pc_m)
>
> On Wed, May 20, 2015 at 7:54 AM Mathieu Rohon 
> wrote:
>
>> Hi paul,
>>
>> this is also something that we would like to introduce for BGP/MPLS VPNs
>> [2].
>>
>> We chose to allow tenants to attach existing networks (it might evolve
>> to subnets) to bgpvpn-connection objects, by updating the bgpvpn-connection
>> object, which is the equivalent of the ipsec-site-connection object.
>>
>> So I think that Option B is suitable here.
>>
>> Concerning backward compatibility, I think VPNaaS is still considered
>> experimental, am I wrong? Do you have to provide backward compatibility in
>> this case?
>>
>> Mathieu
>>
>> [2] https://review.openstack.org/#/c/177740/
>>
>> On Wed, May 13, 2015 at 8:59 PM, Paul Michali  wrote:
>>
>>> Hi,
>>>
>>> There has been, over the years, some mention about having VPN IPSec
>>> connections supporting multiple CIDRs for local (left side) private
>>> networks, in addition to the current support for multiple peer CIDRs (right
>>> side).
>>>
>>> I'm raising the question again with these goals:
>>>
>>> 1) Determine if the reference IPSec implementations support this
>>> capability
>>> 2) Determine if there is a community desire to enhance VPN to support
>>> this capability (for all VPN types)
>>> 3) See what would be the best way to handle this (changes to API and
>>> model)
>>> 4) Identify any consequences of making this change.
>>>
>>> Note: My assumption here is something that could be used for any type of
>>> VPN connection - current IPSec, future BGP/MPLS VPN, DM VPN, etc.
>>>
>>> Here is some information that was gathered from people on the VPN team
>>> so far. Please correct any inaccuracies and comment on the items...
>>>
>>> (1) It looks like OpenSwan and Libreswan will support this capability.
>>> StrongSwan will support this with IKEv2. For IKEv1, a Cisco Unity plugin
>>> extension is needed. I'm not sure what that implies [1].
>>>
>>> (2) Do we, as a community, want to enhance VPNaaS to provide this
>>> capability of N:M subnets for VPN implementations? Putting on my vendor
>>> hat, I can see cases where customers want to be able to only create one
>>> connection and reference multiple subnets on each end. Is there a desire to
>>> do this and bake it into the reference implementation (thus making it
>>> available for other implementations)?
>>>
>>> (3) Currently, the vpn service API includes the router and subnet ID.
>>> The IPSec connection command includes the peer CIDR(s). For reference, here
>>> are two of the APIs:
>>>
>>> usage: neutron vpn-service-create [-h] [-f
>>> {html,json,shell,table,value,yaml}]
>>>   [-c COLUMN] [--max-width ]
>>>   [--prefix PREFIX]
>>>   [--request-format {json,xml}]
>>>   [--tenant-id TENANT_ID]
>>> [--admin-state-down]
>>>   [--name NAME] [--description
>>> DESCRIPTION]
>>>   ROUTER SUBNET
>>>
>>> usage: neutron ipsec-site-connection-create [-h]
>>> [-f
&

Re: [openstack-dev] [neutron][vpnaas] Supporting multiple local subnets for VPN connections

2015-05-20 Thread Mathieu Rohon
Hi paul,

this is also something that we would like to introduce for BGP/MPLS VPNs
[2].

We chose to allow tenants to attach existing networks (it might evolve to
subnets) to bgpvpn-connection objects, by updating the bgpvpn-connection
object, which is the equivalent of the ipsec-site-connection object.

So I think that Option B is suitable here.

Concerning backward compatibility, I think VPNaaS is still considered
experimental, am I wrong? Do you have to provide backward compatibility in
this case?

Mathieu

[2] https://review.openstack.org/#/c/177740/

On Wed, May 13, 2015 at 8:59 PM, Paul Michali  wrote:

> Hi,
>
> There has been, over the years, some mention about having VPN IPSec
> connections supporting multiple CIDRs for local (left side) private
> networks, in addition to the current support for multiple peer CIDRs (right
> side).
>
> I'm raising the question again with these goals:
>
> 1) Determine if the reference IPSec implementations support this capability
> 2) Determine if there is a community desire to enhance VPN to support this
> capability (for all VPN types)
> 3) See what would be the best way to handle this (changes to API and model)
> 4) Identify any consequences of making this change.
>
> Note: My assumption here is something that could be used for any type of
> VPN connection - current IPSec, future BGP/MPLS VPN, DM VPN, etc.
>
> Here is some information that was gathered from people on the VPN team so
> far. Please correct any inaccuracies and comment on the items...
>
> (1) It looks like OpenSwan and Libreswan will support this capability.
> StrongSwan will support this with IKEv2. For IKEv1, a Cisco Unity plugin
> extension is needed. I'm not sure what that implies [1].
>
> (2) Do we, as a community, want to enhance VPNaaS to provide this
> capability of N:M subnets for VPN implementations? Putting on my vendor
> hat, I can see cases where customers want to be able to only create one
> connection and reference multiple subnets on each end. Is there a desire to
> do this and bake it into the reference implementation (thus making it
> available for other implementations)?
>
> (3) Currently, the vpn service API includes the router and subnet ID. The
> IPSec connection command includes the peer CIDR(s). For reference, here are
> two of the APIs:
>
> usage: neutron vpn-service-create [-h] [-f
> {html,json,shell,table,value,yaml}]
>   [-c COLUMN] [--max-width ]
>   [--prefix PREFIX]
>   [--request-format {json,xml}]
>   [--tenant-id TENANT_ID]
> [--admin-state-down]
>   [--name NAME] [--description DESCRIPTION]
>   ROUTER SUBNET
>
> usage: neutron ipsec-site-connection-create [-h]
> [-f
> {html,json,shell,table,value,yaml}]
> [-c COLUMN]
> [--max-width ]
> [--prefix PREFIX]
> [--request-format {json,xml}]
> [--tenant-id TENANT_ID]
> [--admin-state-down] [--name
> NAME]
> [--description DESCRIPTION]
> [--mtu MTU]
> [--initiator
> {bi-directional,response-only}]
> [--dpd
> action=ACTION,interval=INTERVAL,timeout=TIMEOUT]
> --vpnservice-id VPNSERVICE
> --ikepolicy-id IKEPOLICY
> --ipsecpolicy-id IPSECPOLICY
> --peer-address PEER_ADDRESS
> --peer-id PEER_ID --peer-cidr
> PEER_CIDRS --psk PSK
>
> I could envision several ways to handle this (feel free to add more). Here
> are some thoughts on this...
>
> A) Allow multiple subnets for the vpn service API. The implication here is
> that all types of VPN connections would be able to support multiple subnets.
>
> vpn-service-create ... ROUTER LOCAL_CIDRS
>
> The issue here is that, if a change is desired on a subnet for a specific
> connection, the service must be updated.
>
> Today, I think one has to delete the connections from the service before one
> can update the service. We would need the ability to update a service, but
> there is still concern about the effect on other connections.
> It just doesn't seem like the right thing to me.
>
>
> B) Remove the subnet from the vpn service API and add it to the IPSec
> connection API, like is done with peer CIDR selection, allowing multiples.
> Different VPN types could do 

Re: [openstack-dev] [neutron] How should edge services APIs integrate into Neutron?

2015-05-07 Thread Mathieu Rohon
Hi,

On Wed, May 6, 2015 at 8:42 AM, Salvatore Orlando 
wrote:

> I think Paul is correctly scoping this discussion in terms of APIs and
> management layer.
> For instance, it is true that dynamic routing support, and BGP support
> might be a prerequisite for BGP VPNs, but it should be possible to have at
> least an idea of how user and admin APIs for this VPN use case should look
> like.
>

The spec [4] mainly focuses on the API and data model. Of course there might
be some overlap with BGP support and/or dynamic routing support, but from my
point of view this is more about implementation details.
We hope we'll see some good progress on the API during reviews and the design
summit, since it seems to suit several players.


> In particular the discussion on service chaining is a bit out of scope
> here. I'd just note that [1] seems to have a lot of overlap with
> group-based-policies [2], and that it appears to be a service that consumes
> Neutron rather than an extension to it.
>
> The current VPN service was conceived to be fairly generic. IPSEC VPN is
> the only implemented one, but SSL VPN and BGP VPN were on the map as far as
> I recall.
> Personally having a lot of different VPN APIs is not ideal for users. As a
> user, I probably don't even care about configuring a VPN. What is important
> for me is to get L2 or L3 access to a network in the cloud; therefore I
> would seek for common abstractions that might allow a user for configuring
> a VPN service using the same APIs. Obviously then there will be parameters
> which will be specific for the particular class of VPN being created.
>

> I listened to several contributors in the area in the past, and there are
> plenty of opinions across a spectrum which goes from total abstraction
> (just expose "edges" at the API layer) to what could be tantamount to a
> RESTful configuration of a VPN appliance. I am not in a position such to
> prescribe what direction the community should take; so, for instance, if
> the people working on XXX VPN believe the best way forward for them is to
> start a new project, so be it.
>

That's what BGP VPN and Edge VPN did by creating their own StackForge
projects. But I think the idea was more about sharing the framework upstream
after failing to find a consensus during design summits, rather than
suggesting that this has nothing to do with the other VPN work in
Neutron.


>
> The other approach would obviously to build onto the current APIs. The
> only way the Neutron API layer provides to do that is to extend and
> extension. This sounds terrible, and it is indeed terrible. There is a
> proposal for moving toward versioned APIs [3], but until that proposal is
> approved and implemented extensions are the only thing we have.
>

Advanced services, such as VPNaaS, are out of the scope of the current
proposal [3]. It might take a while before the VPNaaS team moves to the
micro-versioning framework.


> From an API perspective the mechanism would be simpler:
> 1 - declare the extension, and implement get_required_extension to put
> 'vpnaas' as a requirement
> 2 - implement a DB mixin for it providing basic CRUD operations
> 3 - add it to the VPN service plugin and add its alias to
> 'supported_extensions_aliases' (step 2 and 3 can be merged if you wish not
> to have a mixin)
>
> What might be a bit more challenging is defining how this reflects onto
> VPN. Ideally you would have a driver for every VPN type you support, and
> then have a little dispatcher to route the API call to the appropriate
> driver according to the VPN type.
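
A minimal sketch of those steps plus the per-type dispatcher, with purely
illustrative class names and a hypothetical vpn_type attribute on the
connection resource:

class IpsecDriver(object):
    def create_connection(self, context, connection):
        pass   # talk to the IPsec backend here


class BgpVpnDriver(object):
    def create_connection(self, context, connection):
        pass   # talk to the BGP VPN backend here


class EdgeVpnPlugin(object):
    """Hedged sketch: declares its aliases and dispatches per VPN type."""

    supported_extension_aliases = ["vpnaas", "edge-vpn"]   # illustrative

    def __init__(self):
        self.drivers = {"ipsec": IpsecDriver(), "bgpvpn": BgpVpnDriver()}

    def create_vpn_connection(self, context, connection):
        vpn_type = connection.get("vpn_type", "ipsec")     # assumed attribute
        return self.drivers[vpn_type].create_connection(context, connection)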
>
> Salvatore
>
> [1]
> https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining
> [2] https://wiki.openstack.org/wiki/GroupBasedPolicy
> [3] https://review.openstack.org/#/c/136760
>

[4]  https://review.openstack.org/#/c/177740/


> On 6 May 2015 at 07:14, Vikram Choudhary 
> wrote:
>
>>  Hi Paul,
>>
>>
>>
>> Thanks for starting this mail thread.  We are also eyeing support for
>> MP-BGP in Neutron and would like to actively participate in this discussion.
>>
>> Please let me know about the IRC channels which we will be following for
>> this discussion.
>>
>>
>>
>> Currently, I am following below BP’s for this work.
>>
>> https://blueprints.launchpad.net/neutron/+spec/edge-vpn
>>
>> https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
>>
>> https://blueprints.launchpad.net/neutron/+spec/dynamic-routing-framework
>>
>>
>> https://blueprints.launchpad.net/neutron/+spec/prefix-clashing-issue-with-dynamic-routing-protocol
>>
>>
>>
>> Moreover, a similar kind of work is being headed by Cathy for defining an
>> intent framework which can be extended for various use cases. Currently it
>> will be leveraged for SFC, but I feel the same can be used to provide the
>> intent VPN use case.
>>
>>
>> https://blueprints.launchpad.net/neutron/+spec/intent-based-service-chaining
>>
>>
>>
>> Thanks
>>
>> Vikram
>>
>>
>>
>> *From:* Paul Michali [mailto:p...@michali.net]
>> *Sent:* 06 May 2015 

Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-24 Thread Mathieu Rohon
Hi loy, thanks for this dedicated thread.

On Fri, Apr 24, 2015 at 3:13 PM, Kyle Mestery  wrote:

> On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe  wrote:
>
>> It's already away from the original thread, so I start this new one,
>> also with some extra tag because I think it touch some corss-project
>> area.
>>
>> Original discuss and reference:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html
>>
>> https://review.openstack.org/#/c/176501/1/specs/liberty/reference-split.rst
>>
>> Background summary:
>> All in-tree implementations would be split from OpenStack
>> networking, leaving Neutron as a naked "API/DB" platform, with a list
>> of out-of-tree implementation git repos, which are no longer maintained by
>> the core team, but may be given a nominal "big tent" home under the
>> OpenStack umbrella.
>>
>>
> I'm not sure what led you to this discussion, but it's patently incorrect.
> We're going to split the in-tree reference implementation into a separate
> git repository. I have not said anything about the current core reviewer
> team not being responsible for that. It's natural to evolve to a core
> reviewer team which cares deeply about that, vs. those who care deeply
> about the DB/API layer. This is exactly what happened when we split out the
> advanced services.
>
>
>> Motivation: a) a smaller core team only focuses on the in-tree API/DB
>> definition, relieved from the concrete controlling-function
>> implementation; b) if there is an official implementation inside Neutron,
>> 3rd-party external SDN controllers would face competition.
>>
>> I'm not sure whether that's exactly what cloud operators want
>> OpenStack to deliver. Do they want an off-the-shelf package, or just a
>> framework, having to take on the responsibility of integrating it with
>> other external controlling projects? An analogy with Linux: a kernel
>> without any device drivers has no use at all.
>>
>>
> We're still going to deliver ML2+OVS/LB+[DHCP, L3, metadata] agents for
> Liberty. I'm not sure where your incorrect assumption on what we're going
> to deliver is coming from.
>
>
>> There are already many debates about nova-network to Neutron parity.
>> If the widely used OVS and LB drivers are out of tree and have to be
>> integrated separately by customers, how do those users migrate from
>> nova-network? A standalone SDN controller has a steep learning curve, and
>> a lot of users don't care whether ODL or OpenContrail is the better one
>> to integrate; they just want an OpenStack package that works out of the box
>> with a default in-tree implementation, ready to drive all kinds of
>> open-source or commercial backends.
>>
>>
> Do you realize that ML2 is plus the L2 agent is an SDN controller already?
>

I totally agree that this part of Neutron should be considered as an SDN
controller. Actually, we can even say that the Neutron SDN controller is
composed of ML2 + the reference service plugins + the agents.
I think this thread is also motivated by the fact that, during the design
summit, we kept hearing that Neutron should NOT deliver and maintain an
SDN controller, and that it should rely on 3rd-party SDN controllers.


>
>> BTW: +1 to Henry and Mathieu; indeed, OpenStack is not responsible for
>> switch/router/fw projects, but it should be responsible for
>> scheduling, pooling, and driving those backends, which is the same
>> situation as the Nova/Cinder schedulers and compute/volume managers. These
>> controlling functions shouldn't be classified as backends in Neutron
>> and split out of tree.
>>
>
>
>> Regards
>>
>>
>> On Fri, Apr 24, 2015 at 2:37 AM, Kyle Mestery 
>> wrote:
>> >
>> >
>> > On Thu, Apr 23, 2015 at 1:31 PM, Fox, Kevin M 
>> wrote:
>> >>
>> >> Yeah. In the end, its what git repo the source for a given rpm you
>> install
>> >> comes from. Ops will not care that neutron-openvswitch-agent comes
>> from repo
>> >> foo.git instead of bar.git.
>> >>
>> >
>> >
>> > That's really the tl;dr of the proposed split.
>> >
>> > Thanks,
>> > Kyle
>> >
>> >>
>> >> Thanks,
>> >> Kevin
>> >> 
>> >> From: Armando M. [arma...@gmail.com]
>> >> Sent: Thursday, April 23, 2015 9:10 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [Neutron] A big tent home for Neutron
>> backend
>> >> code
>> >>
>> 
>> >>> I agree with henry here.
>> >>> Armando, If we use your analogy with nova that doesn't build and
>> deliver
>> >>> KVM, we can say that Neutron doesn't build or deliver OVS. It builds a
>> >>> driver and an agent which manage OVS, just like nova which provides a
>> driver
>> >>> to manage libvirt/KVM.
>> >>> Moreover, external SDN controllers are much more complex than Neutron
>> >>> with its reference drivers. I feel like forcing the cloud admin to
>> deploy
>> >>> and maintain an external SDN controller would be a terrible
>> experience for
>> >>> him if he just needs a simple way manage connectivity between VMs.
>> >>> At the end of the day, it might be detrimental for th

Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-23 Thread Mathieu Rohon
On Thu, Apr 23, 2015 at 10:28 AM, henry hly  wrote:

> On Thu, Apr 23, 2015 at 10:44 AM, Armando M.  wrote:
> >>
> >> Could you please also pay some attention to the cons of this ultimate
> >> split, Kyle? I'm afraid it would hurt the user experience.
> >>
> >> From the dev position, a naked Neutron without an "official" built-in
> >> reference implementation probably has a clearer architecture. On
> >> the other side, users would be forced to make a choice among a long
> >> list of backend implementations, which is very difficult for
> >> non-professionals.
> >>
> >> Most of the time, users need an off-the-shelf solution without
> >> paying much extra integration effort, and they have little interest in
> >> studying which SDN controller is powerful and better than the others. Can
> >> we imagine Nova without the KVM/QEMU virt driver, or Cinder without the
> >> Ceph/LVM volume driver [see the Deployment Profiles section in 1a]? Shall
> >> we really decide to make Neutron the only OpenStack project that has no
> >> official in-tree implementation?
> >
> >
> > I think the analogy here is between the agent reference implementation vs
> > KVM or Ceph, rather than the plumbing that taps into the underlying
> > technology. Nova doesn't build/package KVM as Cinder doesn't
> build/package
> > Ceph. Neutron could rely on other open source solutions (ODL,
> OpenContrail,
> > OVN, etc), and still be similar to the other projects.
> >
> > I think there's still room for clarifying what the split needs to be,
> but I
> > have always seen Neutron as the exception rather than the norm, where, for
> > historic reasons, we had to build everything from the ground up for lack
> of
> > viable open source solutions at the time the project was conceived.
> >
>
> Thanks for bringing up this interesting topic. Maybe it should not be
> scoped only inside Neutron; I also found a similar discussion from John
> Griffith on Cinder vs. SDS controllers :-)
>
>
> https://griffithscorner.wordpress.com/2014/05/16/the-problem-with-sds-under-cinder/
>
> It's clear that a typical cloud deployment is composed of two
> distinct parts: the workload engine vs. the supervisor. The engine part
> obviously does not belong to the OpenStack project; it includes open-source
> pieces like KVM, Ceph, OVS/Linux stack/haproxy/openswan, and vendor ones
> like vCenter/ESXi, SAN disk arrays, and all kinds of networking
> hardware gear or virtualized service VMs.
>
> However, for the supervisor part there is some blurring in the debates:
> should OpenStack provide a complete in-house implementation of the
> controlling functions which can directly drive the backend workload
> engines (via backend drivers), or just a thin API/DB layer which needs to
> integrate some external 3rd-party controller projects to finish that work
> (scheduling, pooling and service-logic abstraction)? For networking, how
> should we regard the functions of the plugins/agents and SDN controllers:
> are they classified in the same layer as the "real" backend working engines
> like switches/routers/firewalls?
>
> For Nova & Cinder, it seems the former was adopted: a single unified
> central framework including the API, scheduling, abstracted service
> logic, RPC & message queue, and a common agent-side framework of
> compute/volume managers, with a bunch of virt/volume drivers
> plugged in to abstract all kinds of backends. There are standalone
> backends like KVM and LVM, and aggregated clustering backends like
> vCenter and Ceph.
>
> Neutron has been more like an ongoing game of continuous
> refactoring: plugin, meta-plugin, ML2, and now the "platform". Next, the
> ML2 plugin suddenly became just a "reference" for proving the concept, and
> no plugin/agent would officially be maintained in-tree anymore, while
> the reason given is, confusingly, "not to compete with other 3rd-party SDN
> controllers" :-P
>

I agree with Henry here.
Armando, if we use your analogy with Nova, which doesn't build and deliver
KVM, we can say that Neutron doesn't build or deliver OVS. It builds a
driver and an agent which manage OVS, just like Nova provides a driver to
manage libvirt/KVM.
Moreover, external SDN controllers are much more complex than Neutron with
its reference drivers. I feel that forcing cloud admins to deploy and
maintain an external SDN controller would be a terrible experience for them
if they just need a simple way to manage connectivity between VMs.
At the end of the day, it might be detrimental to the Neutron project.


> >
> >>
> >>
> >> [1a]
> >>
> http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
> >>
> >> Here is my personal suggestion: the decomposition decision needs some
> >> trade-offs, keeping 2-3 mainstream open-source backends in tree [ML2
> >> with OVS & LB, based on the survey results of 1a above]. While we are
> >> progressing radically with architecture refactoring, a smooth
> >> experience and ease of adoption should also be taken care of.
> >>
> >> >
> >> > One thing which is worth bringing up in this context is the potential
> >> > overlap be

Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting

2015-04-15 Thread Mathieu Rohon
Hi,

It will overlap with the Telco Working Group weekly meeting [1]. That's too
bad, since QoS is of big interest to telco cloud operators!

Mathieu

[1]https://wiki.openstack.org/wiki/TelcoWorkingGroup#Meetings

On Tue, Apr 14, 2015 at 10:43 AM, Miguel Angel Ajo Pelayo <
mangel...@redhat.com> wrote:

> Ok, after one week, it looks like the most popular time slot is B,
> that is 14:00 UTC / Wednesdays.
>
> I'm proposing the first meeting for Wednesday, Apr 22nd, 14:00 UTC /
> #openstack-meeting-2.
>
> Tomorrow (Apr 15th / 14:00 UTC) is a bit early since the announcement,
> so
> I will join #openstack-meeting-2 while working on the agenda for next
> week; feel free to join
> if you want/have time.
>
>
>
>
> On 9/4/2015, at 22:43, Howard, Victor 
> wrote:
>
>  I prefer time slot B, thanks for coordinating.  I would be interested in
> helping out in any way with the design session; let me know!
>
>   From: "Sandhya Dasu (sadasu)" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, April 7, 2015 12:19 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
>
>   Hi Miguel,
> Both time slots work for me. Thanks for rekindling this effort.
>
>  Thanks,
> Sandhya
>
>   From: Miguel Ángel Ajo 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, April 7, 2015 1:45 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [neutron] [QoS] QoS weekly meeting
>
>   On Tuesday, 7 de April de 2015 at 3:14, Kyle Mestery wrote:
>
>On Mon, Apr 6, 2015 at 6:04 PM, Salvatore Orlando 
> wrote:
>
>
>
> On 7 April 2015 at 00:33, Armando M.  wrote:
>
>
> On 6 April 2015 at 08:56, Miguel Ángel Ajo  wrote:
>
>  I’d like to co-organized a QoS weekly meeting with Sean M. Collins,
>
>  In the last few years, the interest for QoS support has increased,
> Sean has been leading
> this effort [1] and we believe we should get into a consensus about how to
> model an extension
> to let vendor plugins implement QoS capabilities on network ports and
> tenant networks, and
> how to extend agents, and the reference implementation & others [2]
>
>
>  As you surely know, so far every attempt to achieve a consensus has
> failed in a pretty miserable way.
> This mostly because "QoS" can be interpreted in a lot of different ways,
> both from the conceptual and practical perspective.
>
>Yes, I’m fully aware of it, it was also a new feature, so it was out
> of scope for Kilo.
>
>  It is important in my opinion to clearly define the goals first. For
> instance a simple extensions for bandwidth limiting could be a reasonable
> target for the Liberty release.
>
>I quite agree here, but IMHO, as you said it’s a quite open field
> (limiting, guaranteeing,
> marking, traffic shaping..), we should do our best in trying to define a
> model allowing us
> to build that up in the future without huge changes, on the API side I
> guess micro versioning
> is going to help in the API evolution.
>
>  Also, at some point, we should/could need to involve the nova folks, for
> example, to define
> port flavors that can be associated to nova
> instance flavors, providing them
> 1) different types of network port speeds/guarantees/priorities,
> 2) being able to schedule instance/ports in coordination to be able to met
> specified guarantees.
>
>  yes, complexity can sky rocket fast,
>
>  Moving things such as ECN into "future works" is the right thing to
> do in my opinion. Attempting to define a flexible framework that can deal
> with advanced QoS policies specification is a laudable effort, but I am a
> bit skeptical about its feasibility.
>
>++, I think focusing on perhaps bandwidth limiting may make a lot of
> sense
>
> Yes, I believe we should look into the future , but at the same pick our
> very first feature (or a
> very simple set of them) for L, stick to it, and try to make a design
> that can be extended.
>
>
>
>
>
>
>  As per discussion we’ve had during the last few months [3], I
> believe we should start simple, but
> prepare a model allowing future extendibility, to allow for example
> specific traffic rules (per port,
> per IP, etc..), congestion notification support [4], …
>
>
>  "Simple" in my mind is even more extreme then what you're proposing
> here... I'd start with bare APIs for specifying bandwidth limiting, and
> then phase them out once this "framework" is in place.
> Also note that this kind of design bears some overlap with the flavor
> framework which is probably going to be another goal for Liberty.
>
>Indeed, and the flavor framework is something I'm hoping we can land
> by Liberty-1 (yes, I just said Liberty-1).
>
> Yes it’s something I looked at, I mus

Re: [openstack-dev] [Neutron] - Joining the team - interested in a Debian Developer and experienced Python and Network programmer?

2015-04-09 Thread Mathieu Rohon
Hi Matt,

Jaume did awesome work proposing and implementing a framework for
announcing public IPs with a BGP speaker [1].
Unfortunately, the spec hasn't been merged in Kilo. Hopefully it will be
resubmitted in L.
Your proposal seems to be a mix of Jaume's proposal and the HA router design?

We also play with a BGP speaker (BagPipe [3], derived from ExaBGP, written
in Python) for IP VPN attachment [2].

[1]https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
[2]https://launchpad.net/bgpvpn
[3]https://github.com/Orange-OpenSource/bagpipe-bgp

On Thu, Apr 9, 2015 at 3:54 PM, Kyle Mestery  wrote:

> On Thu, Apr 9, 2015 at 2:13 AM, Matt Grant  wrote:
>
>> Hi!
>>
>> I am just wondering what the story is about joining the neutron team.
>> Could you tell me if you are looking for new contributors?
>>
>> We're always looking for someone new to participate! Thanks for reaching
> out!
>
>
>> Previously I have programmed OSPFv2 in Zebra/Quagga, and worked as a
>> router developer for Allied Telesyn.  I also have extensive Python
>> programming experience, having worked on the DNS Management System.
>>
>> Sounds like you have extensive experience programming network elements. :)
>
>
>> I have been experimenting with IPv6 since 2008 on my own home network,
>> and I am currently installing a Juno OpenStack cluster to learn how
>> things tick.
>>
>> Great, this will give you an overview of things.
>
>
>> Have you guys ever figured out how to do a hybrid L3 North/South Neutron
>> router that propagates tenant routes and networks into OSPF/BGP via a
>> routing daemon, and uses floating MAC addresses/costed flow rules via
>> OVS to fail over to a hot standby router? There are practical use cases
>> for such a thing in smaller deployments.
>>
>> BGP integration with L3 is something we'll look at again for Liberty.
> Carl Baldwin leads the L3 work in Neutron, and would be a good person to
> sync with on this work item. I suspect he may be looking for people to help
> integrate the BGP work in Liberty, this may be a good place for you to jump
> in.
>
> I have a single stand alone example working by turning off
>> neutron-l3-agent network name space support, and importing the connected
>> interface and static routes into Bird and Birdv6. The AMPQ connection
>> back to the neutron-server is via the upstream interface and is secured
>> via transport mode IPSEC (just easier than bothering with https/SSL).
>> Bird looks easier to run from neutron as they are single process than a
>> multi process Quagga implementation.  Incidentally, I am running this in
>> an LXC container.
>>
>> Nice!
>
>
>> Could some one please point me in the right direction.  I would love to
>> be in Vancouver :-)
>>
>> If you're not already on #openstack-neutron on Freenode, jump in there.
> Plenty of helpful people abound. Since you're in New Zealand, I would
> suggest reaching out to Akihiro Motoki (amotoki) on IRC, as he's in Japan
> and closer to your timezone.
>
> Thanks!
> Kyle
>
> Best Regards,
>>
>> --
>> Matt Grant,  Debian and Linux Systems Administration and Consulting
>> Mobile: 021 0267 0578
>> Email: m...@mattgrant.net.nz
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-03-30 Thread Mathieu Rohon
Hi Henry,

Thanks for this interesting idea. It would be interesting to think about
how external gateways could leverage the l2pop framework.

Currently l2pop sends its fdb messages once the status of the port is
modified. AFAIK, this status is only modified by agents, which send
update_device_up/down().
This issue also has to be addressed if we want agentless equipment to be
announced through l2pop.

Another way to do it is to introduce BGP speakers with E-VPN capabilities
at the control plane of ML2 (as a mechanism driver, for instance). BagPipe
[1] is an opensource BGP speaker which is able to do that.
BGP is standardized, so equipment might already have it embedded.
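
To make that a bit more concrete, here is a very rough sketch of what such
an agentless mechanism driver could look like. Only the ML2 entry points
(MechanismDriver, initialize(), update_port_postcommit()) are real; the
class name and the BGP-speaker call are made-up placeholders:

    from neutron.plugins.ml2 import driver_api as api


    class EvpnSpeakerMechanismDriver(api.MechanismDriver):
        """Sketch: advertise port reachability over BGP E-VPN, no agent."""

        def initialize(self):
            # hypothetical: connect to the E-VPN capable BGP speaker here
            self.bgp_speaker = None

        def update_port_postcommit(self, context):
            port = context.current
            mac = port['mac_address']
            ips = [ip['ip_address'] for ip in port['fixed_ips']]
            if self.bgp_speaker:
                # hypothetical speaker API, used instead of relying on an
                # agent sending update_device_up()
                self.bgp_speaker.advertise_mac_ip_route(mac, ips)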

At the last summit, we talked about this kind of idea [2]. We went further
by introducing the BGP speaker on each compute node, in use case B of [2].

[1]https://github.com/Orange-OpenSource/bagpipe-bgp
[2]http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe

On Thu, Mar 26, 2015 at 7:21 AM, henry hly  wrote:

> Hi ML2er,
>
> Today we use agent_ip in L2pop to store endpoints for ports on a
> tunnel type network, such as vxlan or gre. However this has some
> drawbacks:
>
> 1) It can only work with backends with agents;
> 2) Only one fixed ip is supported per-each agent;
> 3) Difficult to interact with other backend and world outside of Openstack.
>
> L2pop is already widely accepted and deployed in host based overlay,
> however because it use agent_ip to populate tunnel endpoint, it's very
> hard to co-exist and inter-operating with other vxlan backend,
> especially agentless MD.
>
> A small change is suggested that the tunnel endpoint should not be the
> attribute of *agent*, but be the attribute of *port*, so if we store
> it in something like *binding:tun_ip*, it is much easier for different
> backend to co-exists. Existing ovs agent and bridge need a small
> patch, to put the local agent_ip into the port context binding fields
> when doing port_up rpc.
>
> Several extra benefits may also be obtained by this way:
>
> 1) we can easily and naturally create *external vxlan/gre port* which
> is not attached by an Nova booted VM, with the binding:tun_ip set when
> creating;
> 2) we can develop some *proxy agent* which manage a bunch of remote
> external backend, without restriction of its agent_ip.
>
> Best Regards,
> Henry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VLAN trunking network for NFV

2015-03-25 Thread Mathieu Rohon
Guo,

AFAIU, the guest will tag frames with a VLAN, and the host won't remove this
tag as long as the underlying host uses an overlay encapsulation (VXLAN or
GRE) to encapsulate the entire frame, including the VLAN tag submitted by
the guest.
This will only be compatible with LinuxBridge running on the host, since
OVS overwrites VLAN tags with its own VLAN tags to isolate the traffic of
one network on a host. LinuxBridge isolates the traffic by dedicating a
bridge per network.

However, I'm not sure that the compatibility matrix proposed in the spec is
accurate, since LB doesn't seem to support GRE encapsulation.

The question raised in this thread is more about how the LinuxBridge
implementation in Neutron can evolve. It is currently not tested by the CI,
is it?
Does that mean that evolution of this kind of implementation should be
blocked?

The next step of the spin-out of drivers might move the LB and OVS
mechanism drivers out of the Neutron tree. Will there be any volunteer to
support the LinuxBridge implementation? If not, does it mean that the LB
implementation will be deprecated?



On Wed, Mar 25, 2015 at 1:48 AM, Guo, Ruijing  wrote:

>  I am trying to understand how guest os use trunking network.
>
>
>
> If guest os use bridge like Linuxbride and OVS, how we launch it and how
> libvirt to support it?
>
>
>
> Thanks,
>
> -Ruijing
>
>
>
>
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> *Sent:* Wednesday, March 25, 2015 2:18 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron] VLAN trunking network for NFV
>
>
>
> That spec ensures that you can tell what the plugin is doing.  You can ask
> for a VLAN transparent network, but the cloud may tell you it can't make
> one.
>
> The OVS driver in Openstack drops VLAN tagged packets, I'm afraid, and the
> spec you're referring to doesn't change that.  The spec does ensure that if
> you try and create a VLAN trunk on a cloud that uses the OVS driver, you'll
> be told you can't.  in the future, the OVS driver can be fixed, but that's
> how things stand at present.  Fixing the OVS driver really involves getting
> in at the OVS flow level - can be done, but we started with the basics.
>
> If you want to use a VLAN trunk using the current code, I recommend VXLAN
> or GRE along with the Linuxbridge driver, both of which support VLAN
> transparent networking.  If they're configured and you ask for a VLAN trunk
> you'll be told you got one.
> --
>
> Ian.
>
>
>
>
>
> On 24 March 2015 at 09:43, Daniele Casini 
> wrote:
>
> Hi all:
>
> in reference to the following specification about the creation of VLAN
> trunking network for NFV
>
> https://review.openstack.org/#/c/136554/3/specs/kilo/nfv-vlan-trunks.rst
>
> I would like to better understand how the tagged traffic will be realized.
> In order to explain myself, I report the following use case:
>
> A VNF is deployed in one VM, which has a trunk port carrying traffic for
> two VLANs over a single link able to transport more than one VLAN through a
> single integration-bridge (br-int) port. So, How does br-int manage the
> VLAN-ID? In other words, what are the action performed by the br-int when a
> VM forwards traffic to another host?
> Does it put an additional tag or replace the existing one keeping the
> match with a table or something like that?
>
> Thank you very much.
>
> Daniele
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Removing udp_port field from 'ml2_vxlan_endpoint' table

2015-03-24 Thread Mathieu Rohon
Hi Andreas,

Linuxbridge is also able to use unicast, but currently it is only
available when l2pop is activated.
AFAIR, I have seen a mix of LB agents and OVS agents working, with VXLAN,
l2pop and ARP responders turned on everywhere. You also have to tune your
vxlan module, or OVS, to make sure that every agent (LB and OVS) uses the
same UDP port for VXLAN.
Romil's patch might be a first step to get rid of this module tuning.

On Tue, Mar 24, 2015 at 9:40 AM, Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:

> Mathieu,
> now I'm getting curious, is it possible to combine Linuxbridge and OVS
> VXLAN Nodes in the same cloud?
>
> I thought this does not work as Linuxbridge-vxlan uses multicast for
> instances broad- and multicasts (e.g. an arp request), while ovs-vxlan
> only does unicast? At least one can specify a vxlan_group, which is a
> mulitcast address, for linuxbridge vxlan.
>
>
> Or is multicasting prohibited by l2_pop driver and the vxlan_group
> attribute is not applicable in this case?
>
>
>
> --
> Andreas
> (irc: scheuran)
>
>
> On Mon, 2015-03-23 at 14:49 +0100, Mathieu Rohon wrote:
> > Hi romil,
> >
> >
> > I think the main purpose of this DB field is to maintain the
> > compatibility in dataplane between OVS and LinuxBridge which, by
> > default, don't use the same UDP port for VXLAN.
> >
> > It might be useful for a cloud admin which wants to run some nodes
> > with LB and some others with OVS.
> >
> >
> > I feel like your patch proposal will enable this scenario if the
> > tunnel_update() RPC message gets updated with the UDP port too.
> >
> >
> >
> > Mathieu
> >
> >
> > On Mon, Mar 23, 2015 at 11:40 AM, Romil Gupta 
> > wrote:
> > Hello everyone,
> >
> >
> > There is regarding the following bug:
> > https://bugs.launchpad.net/neutron/+bug/1373359
> >
> >
> > May I know what is the significance of having the 'udp_port'
> > field in the  'ml2_vxlan_endpoints' table in Neutron DB, Do we
> > have any plans in future that we could use this field for
> > synchronization or any other purpose instead of simply keeping
> > it in the DB.
> >
> >
> > The following patchset will fix the bug mentioned above,
> > https://review.openstack.org/#/c/153891/
> >
> >
> > But the question still remains the same. Do we need to keep
> > this field or we need to remove it?
> >
> >
> > --
> >
> > Regards,
> > Romil
> >
> >
> >
> >
>  __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Removing udp_port field from 'ml2_vxlan_endpoint' table

2015-03-24 Thread Mathieu Rohon
On Tue, Mar 24, 2015 at 12:15 PM, Salvatore Orlando 
wrote:

>
>
> On 23 March 2015 at 14:49, Mathieu Rohon  wrote:
>
>> Hi romil,
>>
>> I think the main purpose of this DB field is to maintain the
>> compatibility in dataplane between OVS and LinuxBridge which, by default,
>> don't use the same UDP port for VXLAN.
>>
> It might be useful for a cloud admin which wants to run some nodes with LB
>> and some others with OVS.
>>
>
>> I feel like your patch proposal will enable this scenario if the
>> tunnel_update() RPC message gets updated with the UDP port too.
>>
>
> I have scanned a bit the ML2 code - to find out why we're storing
> configuration info into the server side database.
> It seems the tunnel_sync RPC callback it actually acts as a relay for
> tunnel synchronisation messages from agents.
> An agent notifies its tunnel information, these are stored into the
> server, and then the server propagates updated information about tunnels to
> all agents.
> By storing the information in the DB we have a sort of guarantee against
> lost messages, as the whole tunnel info would be relayed again the next
> time an update comes up. So every agent will eventually receive the lost
> message (where eventually means "at some point before the end of the
> universe" and has nothing to do with eventual consistency).
>
> While there might be questions about this approach, I don't think we have
> time and energy to look at it before the end of the release cycle. In my
> opinion if Romil's patch actually enable the scenario described by Mathieu
> then it might make sense to change the RPC interface to allow this.
> Otherwise, I don't think there's any urgency for squashing this change in
> Kilo.
>
> Salvatore
>

Hi, it's fine with me, Romil's patch is a good step forward.


>
>
>> Mathieu
>>
>> On Mon, Mar 23, 2015 at 11:40 AM, Romil Gupta 
>> wrote:
>>
>>> Hello everyone,
>>>
>>> There is regarding the following bug:
>>> https://bugs.launchpad.net/neutron/+bug/1373359
>>>
>>> May I know what is the significance of having the '*udp_port'* field in
>>> the  *'ml2_vxlan_endpoints*' table in Neutron DB, Do we have any plans
>>> in future that we could use this field for synchronization or any other
>>> purpose instead of simply keeping it in the DB.
>>>
>>> The following patchset will fix the bug mentioned above,
>>> https://review.openstack.org/#/c/153891/
>>>
>>> But the question still remains the same. Do we need to keep this field
>>> or we need to remove it?
>>>
>>> --
>>> *Regards,*
>>>
>>> *Romil *
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Removing udp_port field from 'ml2_vxlan_endpoint' table

2015-03-23 Thread Mathieu Rohon
Hi Romil,

I think the main purpose of this DB field is to maintain dataplane
compatibility between OVS and LinuxBridge, which, by default, don't use the
same UDP port for VXLAN.
It might be useful for a cloud admin who wants to run some nodes with LB
and some others with OVS.

I feel like your patch proposal will enable this scenario if the
tunnel_update() RPC message gets updated with the UDP port too.

Mathieu

On Mon, Mar 23, 2015 at 11:40 AM, Romil Gupta 
wrote:

> Hello everyone,
>
> There is regarding the following bug:
> https://bugs.launchpad.net/neutron/+bug/1373359
>
> May I know what is the significance of having the '*udp_port'* field in
> the  *'ml2_vxlan_endpoints*' table in Neutron DB, Do we have any plans in
> future that we could use this field for synchronization or any other
> purpose instead of simply keeping it in the DB.
>
> The following patchset will fix the bug mentioned above,
> https://review.openstack.org/#/c/153891/
>
> But the question still remains the same. Do we need to keep this field or
> we need to remove it?
>
> --
> *Regards,*
>
> *Romil *
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generic question about synchronizing neutron agent on compute node with DB

2015-03-15 Thread Mathieu Rohon
Hi Slawek,

Maybe you're hitting this l2pop bug:
https://bugs.launchpad.net/neutron/+bug/1372438

On Sun, Mar 15, 2015 at 11:37 PM, Sławek Kapłoński 
wrote:

> Hello,
>
> Dnia niedziela, 15 marca 2015 17:45:05 Salvatore Orlando pisze:
> > On 14 March 2015 at 11:19, Sławek Kapłoński  wrote:
> > > Hello,
> > >
> > > I'm using ovs agents with L2 population mechanism in ML2 plugin. I
> noticed
> > > that sometimes agents don't receive proper RPC to add new vxlan tunnel
> > > openflow rules and then vxlan network between some compute nodes not
> > > working.
> > > I'm now using still havana release but want to upgrade to Juno. I was
> > > checking
> > > Juno code in l2 population mech driver and ovs plugin and I didn't find
> > > anything like periodic check if openflow rules are proper set or maybe
> > > resynced.
> > > Maybe it would be also good idea to add something like that to ovs
> agent?
> >
> > It would surely be a good idea to add some form of reliability into
> > communications between server and agents.
> > So far there are still several instances where the server sends a "fire
> and
> > forget" notification to the agent, and does not take any step to ensure
> the
> > state change associated with that notification has been actually applied
> to
> > the agent. This applies also to some messages from the agent side, such
> as
> > status change notifications.
>
> Maybe good idea for the beginning could be to implement some periodic task
> called from agent to check db config and compare it with real state on
> host?
> What do You think? Or maybe I'm competly wrong with such idea and it
> should be
> done in different way?
>
> >
> > This is something that can be beneficial any neutron implementation which
> > depends on one or more agents, not just for those using the ovs/linux
> > bridge agents with the l2-population driver.
>
> Probably yes, but I had this problem only with this l2-population driver so
> far and that's why I wrote about it :)
>
> --
> Pozdrawiam / Best regards
> Sławek Kapłoński
> sla...@kaplonski.pl
>
> >
> > Salvatore
> >
> > > --
> > > Pozdrawiam / Best regards
> > > Sławek Kapłoński
> > > sla...@kaplonski.pl
> > >
> > > Dnia piątek, 13 marca 2015 11:18:28 YAMAMOTO Takashi pisze:
> > > > > However, I briefly looked through the L2 agent code and didn't see
> a
> > > > > periodic task to resync the port information to protect from a
> neutron
> > > > > server that failed to send a notification because it crashed or
> lost
> > >
> > > its
> > >
> > > > > amqp connection. The L3 agent has a period sync routers task that
> > >
> > > helps in
> > >
> > > > > this regard. Maybe another neutron developer more familiar with
> the L2
> > > > > agent can chime in here if I'm missing anything.
> > > >
> > > > i don't think you are missing anything.
> > > > periodic sync would be a good improvement.
> > > >
> > > > YAMAMAOTO Takashi
> > >
> > >
> __
> > >
> > > > OpenStack Development Mailing List (not for usage questions)
> > >
> > > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meetings

2015-03-05 Thread Mathieu Rohon
Hi,

I'm fine with C), and 1600 UTC would be better suited to the EU time zone :)

However, I agree that the neutron-vpnaas meetings were mainly focused on
maintaining the current IPSec implementation: managing the split out,
adding StrongSwan support and adding functional tests.
Maybe we will get a broader audience once we speak about adding new
use cases such as edge-vpn.
Edge-vpn use cases overlap with the Telco WG VPN use case [1]. Maybe those
edge-vpn discussions should occur during the Telco WG meeting?

[1]
https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#VPN_Instantiation

On Thu, Mar 5, 2015 at 3:02 AM, Sridhar Ramaswamy  wrote:

> Hi Paul.
>
> I'd vote for (C) and a slightly later time-slot on Tuesdays - 1630 UTC (or
> later).
>
> The meetings so far was indeed quite useful. I guess the current busy Kilo
> cycle is also contributing to the low turnout. As we pick up things going
> forward this forum will be quite useful to discuss edge-vpn and, perhaps,
> other vpn variants.
>
> - Sridhar
>
> On Tue, Mar 3, 2015 at 3:38 AM, Paul Michali  wrote:
>
>> Hi all! The email, that I sent on 2/24 didn't make it to the mailing list
>> (no wonder I didn't get responses!). I think I had an issue with my email
>> address used - sorry for the confusion!
>>
>> So, I'll hold the meeting today (1500 UTC meeting-4, if it is still
>> available), and we can discuss this...
>>
>>
>> We've been having very low turnout for meetings for the past several
>> weeks, so I'd like to ask those in the community interested in VPNaaS, what
>> the preference would be regarding meetings...
>>
>> A) hold at the same day/time, but only on-demand.
>> B) hold at a different day/time.
>> C) hold at a different day/time, but only on-demand.
>> D) hold as a on-demand topic in main Neutron meeting.
>>
>> Please vote your interest, and provide desired day/time, if you pick B or
>> C. The fallback will be (D), if there's not much interest anymore for
>> meeting, or we can't seem to come to a consensus (or super-majority :)
>>
>> Regards,
>>
>> PCM
>>
>> Twitter: @pmichali
>> TEXT: 6032894458
>> PCM (Paul Michali)
>>
>> IRC pc_m (irc.freenode.com)
>> Twitter... @pmichali
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] [arp] [l2pop] arp responding for vlan network

2015-02-04 Thread Mathieu Rohon
Hi Henry,

It looks great and quite simple, thanks to the work done by the ofagent team.

This kind of work might also be used for DVR, which now supports VLAN
networks [3].

I have some concerns about the patch submitted in [1], so let's review!

[3]https://review.openstack.org/#/c/129884/

On Wed, Feb 4, 2015 at 8:06 AM, henry hly  wrote:

> Hi ML2'ers,
>
> We encounter use case of large amount of vlan network deployment, and
> want to reduce ARP storm by local responding.
>
> Luckily from Icehouse arp local response is implemented, however vlan
> is missed for l2pop. Then came this BP[1], which implement the plugin
> support of l2pop for configurable network types, and the ofagent vlan
> l2pop.
>
> Now I find proposal for ovs vlan support for l2pop [2], it's very
> small and was submitted as a bugfix, so I want to know is it possible
> to be merged in the K cycle?
>
> Best regards
> Henry
>
> [1] https://review.openstack.org/#/c/112947/
> [2] https://bugs.launchpad.net/neutron/+bug/1413056
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] CLI support in ML2 driver

2015-01-29 Thread Mathieu Rohon
Hi,

You can develop your own service plugin which extends the current Neutron
API and transforms Neutron API calls into ODL NB API calls.

You can take the GBP service plugin as an example to understand how to
route Neutron API calls to an independent service plugin.
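
For what it's worth, a very rough sketch of such a plugin is below. The
resource name, extension alias, RESTCONF path and credentials are made-up
placeholders (not your ODL application's actual NB API); only the general
pattern of relaying a Neutron API call to the controller is the point:

    import requests

    from neutron.services import service_base


    class CarrierEthernetPlugin(service_base.ServicePluginBase):
        """Sketch of a service plugin relaying a custom API call to ODL."""

        supported_extension_aliases = ["carrier-ethernet"]

        def get_plugin_type(self):
            return "CARRIER_ETHERNET"

        def get_plugin_name(self):
            return "carrier_ethernet"

        def get_plugin_description(self):
            return "Relays carrier ethernet service calls to an ODL app"

        def create_ethernet_service(self, context, ethernet_service):
            # hypothetical RESTCONF RPC exposed by the ODL application
            url = ("http://odl-controller:8181/restconf/operations/"
                   "my-app:create-service")
            resp = requests.post(url, json=ethernet_service,
                                 auth=("admin", "admin"), timeout=10)
            resp.raise_for_status()
            return resp.json()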

regards,

Mathieu


On Thu, Jan 29, 2015 at 9:42 AM, Naresh Kumar <
naresh.saa...@lnttechservices.com> wrote:

>  Hi,
>
> I have more expertise in Opendaylight than Openstack.I have created a CLI 
> application in Opendaylight which uses AdventNetCLI library in SB that will 
> create/delete services in my non-openflow carrier ethernet switch through 
> RESTCONF(It's working!). I want this app to be called from the Neutron server 
> of Openstack and that REST call should be routed to my ODL NB and my 
> controller should take care of the operation. Anyone has any ideas how this 
> can be implemented ?
>
> Thanks,
> Naresh.
>
>  *L&T Technology Services Ltd*
>
> www.LntTechservices.com 
>
> This Email may contain confidential or privileged information for the
> intended recipient (s). If you are not the intended recipient, please do
> not use or disseminate the information, notify the sender and delete it
> from your system.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [adv-services] Question on functional tests

2015-01-20 Thread Mathieu Rohon
FYI,

numan created a bug [1] about being able to run functional test job in
neutron-vpnaas CI
I've proposed a patch [2] which simply add neutron-dsvm-functional job in
check and gate queue of neutron-vpnaas.

Unfortunately, as discussed with marun on IRC, this won't be enough, since
this job depends on hook scripts hosted in the neutron repository [3].

This issue will impact all advanced services which want to run functional
test.

I will try to investigate this deeper, but any though on this issue would
be appreciated

[1]https://bugs.launchpad.net/openstack-ci/+bug/1412770
[2]https://review.openstack.org/#/c/148616/
[3]
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/neutron-functional.yaml

On Tue, Jan 20, 2015 at 9:02 PM, Kevin Benton  wrote:

> I don't believe we have any unit tests that create namespaces or veth
> pairs. This sounds like it belongs with functional tests.
>
> On Tue, Jan 20, 2015 at 10:20 AM, Numan Siddique <
> numan.siddi...@enovance.com> wrote:
>
>>  Hello,
>>
>> I am working on a bug [1] on neutron vpnaas and submitted the patch here
>> [2].
>>
>> The test code to test the fix does the following
>> - creates a namespace
>> - creates a veth pair and add one interface into the namespace
>> - configures the interface with an ip address and
>> - adds a default gateway
>> - and of course tests the code.
>>
>> This test code only tests a specific function ( OpenSwanProcess.
>> _get_nexthop())
>>
>> Reviewers of this patch are not clear if this should be part of
>> functional tests or unit tests.
>> Can unit tests create linux namespaces, interfaces etc or it falls under
>> functional tests?
>>
>> Please let me know your thoughts on this.
>>
>> [1] - https://bugs.launchpad.net/neutron/+bug/1405413
>> [2] - https://review.openstack.org/#/c/145005/5
>>
>>
>> Regards
>> Numan
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Using DevStack for multi-node setup

2015-01-09 Thread Mathieu Rohon
Hi Danny,

If you're using neutron, you can use the option:

NEUTRON_CREATE_INITIAL_NETWORKS=False

in your local.conf.
This way no router or network is created. You have to create them manually,
and of course you can do it once every agent is up in Neutron.
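
For example, your local.conf could contain (a minimal sketch):

    [[local|localrc]]
    NEUTRON_CREATE_INITIAL_NETWORKS=False

and once "neutron agent-list" shows all agents up on every node, something
like the following (names and CIDRs are just examples) recreates what
devstack would normally have built:

    neutron net-create private
    neutron subnet-create --name private-subnet private 10.0.0.0/24
    neutron net-create public --router:external True
    neutron subnet-create --name public-subnet --disable-dhcp public 172.24.4.0/24
    neutron router-create router1
    neutron router-interface-add router1 private-subnet
    neutron router-gateway-set router1 public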

Mathieu

On Thu, Jan 8, 2015 at 3:46 PM, Kashyap Chamarthy 
wrote:

> On Mon, Jan 05, 2015 at 08:20:48AM -0500, Sean Dague wrote:
> > On 01/03/2015 04:41 PM, Danny Choi (dannchoi) wrote:
> > > Hi,
> > >
> > > I’m using DevStack to deploy OpenStack on a multi-node setup:
> > > Controller, Network, Compute as 3 separate nodes
> > >
> > > Since the Controller node is stacked first, during which the Network
> > > node is not yet ready, it fails to create the router instance and the
> > > public network.
> > > Both have to be created manually.
> > >
> > > Is this the expected behavior?  Is there a workaround to have DevStack
> > > create them?
> >
> > The only way folks tend to run multinode devstack is Controller +
> > Compute nodes. And that sequence of creating an all in one controller,
> > plus additional compute nodes later, works.
>
> Sean, I wonder if you have a pointer to an example CI gate job (assuming
> there's one) for the above with Neutron networking?
>
>
> --
> /kashyap
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Request for comments for a possible solution

2015-01-09 Thread Mathieu Rohon
Hi Mike,

After reviewing your latest patch [1], I think that a possible solution
could be to add a new entry to the fdb RPC message.
This entry would specify whether the port is multi-bound or not.
The new fdb message would look like this:
{net_id:
    {port:
        {agent_ip:
            {mac, ip, *multi-bound*}
        }
    },
 network_type:
    vxlan,
 segment_id:
    id
}
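
For illustration, with made-up values (and a layout that just follows the
sketch above), a message for an HA router port plugged on two nodes could
then look like:

    {'net-uuid-1': {
         'ha-router-port-uuid': {
             '192.0.2.10': ('fa:16:3e:aa:bb:cc', '10.0.0.5', True),
             '192.0.2.11': ('fa:16:3e:aa:bb:cc', '10.0.0.5', True),
         },
     },
     'network_type': 'vxlan',
     'segment_id': 100}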

When the multi-bound option is set, the ARP responder would still be
provisioned, but the underlying module (OVS or kernel vxlan) would be
provisioned to flood the packet to every tunnel concerned by this overlay
segment, and not only to the tunnel to the agent that is supposed to host
the port.
In the LB world, this means not adding an fdb entry for the MAC of the
multi-bound port, whereas in the OVS world, it means not adding a flow that
sends the traffic matching the MAC of the multi-bound port to only one
tunnel port, but flooding it to every tunnel port of this overlay segment.

This way, traffic to a multi-bound port will behave like unknown unicast
traffic. The first packet will be flooded to every tunnel, and the local
bridge will learn the correct tunnel for the following packets based on
which tunnel received the answer.
Once learning occurs with the first ingress packet, the following packets
are sent to the correct tunnel and are not flooded anymore.
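
To make the LB case concrete: today l2pop programs a static entry for a
remote port roughly like the following (the interface name and addresses
are made up):

    bridge fdb add fa:16:3e:aa:bb:cc dev vxlan-100 dst 192.0.2.10

and for a multi-bound port that entry would simply not be installed, so the
kernel falls back to flood-and-learn for that MAC.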

I've tested this with LinuxBridge and it works fine. Based on a code
overview, this should work correctly with OVS too. I'll test it ASAP.

I know that the DVR team already added such a flag in RPC messages, but
they reverted it in later patches. I would be very interested in having
their opinion on this proposal.
It seems that DVR ports could also use this flag. This would result in
having the ARP responder activated for DVR ports too.

This shouldn't need a bump in RPC versioning since this flag would be
optional, so there shouldn't be any issue with backward compatibility.

Regards,

Mathieu

[1]https://review.openstack.org/#/c/141114/2

On Sun, Dec 21, 2014 at 12:14 PM, Narasimhan, Vivekanandan <
vivekanandan.narasim...@hp.com> wrote:

> Hi Mike,
>
> Just one comment [Vivek]
>
> -Original Message-
> From: Mike Kolesnik [mailto:mkole...@redhat.com]
> Sent: Sunday, December 21, 2014 11:17 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Robert Kukura
> Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for
> comments for a possible solution
>
> Hi Mathieu,
>
> Comments inline
>
> Regards,
> Mike
>
> - Original Message -
> > Mike,
> >
> > I'm not even sure that your solution works without being able to bind
> > a router HA port to several hosts.
> > What's happening currently is that you :
> >
> > 1.create the router on two l3agent.
> > 2. those l3agent trigger the sync_router() on the l3plugin.
> > 3. l3plugin.sync_routers() will trigger
> l2plugin.update_port(host=l3agent).
> > 4. ML2 will bind the port to the host mentioned in the last
> update_port().
> >
> > From a l2pop perspective, this will result in creating only one tunnel
> > to the host lastly specified.
> > I can't find any code that forces that only the master router binds
> > its router port. So we don't even know if the host which binds the
> > router port is hosting the master router or the slave one, and so if
> > l2pop is creating the tunnel to the master or to the slave.
> >
> > Can you confirm that the above sequence is correct? or am I missing
> > something?
>
> Are you referring to the alternative solution?
>
> In that case it seems that you're correct so that there would need to be
> awareness of the master router at some level there as well.
> I can't say for sure as I've been thinking on the proposed solution with
> no FDBs so there would be some issues with the alternative that need to be
> ironed out.
>
> >
> > Without the capacity to bind a port to several hosts, l2pop won't be
> > able to create tunnel correctly, that's the reason why I was saying
> > that a prerequisite for a smart solution would be to first fix the bug
> > :
> > https://bugs.launchpad.net/neutron/+bug/1367391
> >
> > DVR Had the same issue. Their workaround was to create a new
> > port_binding tables, that manages the capacity for one DVR port to be
> > bound to several host.
> > As mentioned in the bug 1367391, this adding a technical debt in ML2,
> > which has to be tackle down in priority from my POV.
>
> I agree that this would simplify work but even without this bug fixed we
> can achieve either solution.
>
> We have already knowledge of the agents hosting a router so this is
> completely doable without waiting for fix for bug 1367391.
>
> Also from my understanding the bug 1367391 is targeted at DVR only, not at
> HA router ports.
>
> [Vivek]  Currently yes, but Bob's concept embraces all replicated ports
> and so HA router ports will play into it :)
>
> --
> Thanks,
>
> Vivek
>
>
> >
> >
> > On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik 
> wrote:
> > > Hi Mathieu,
> > >
> > > Thanks for the quick reply, some comments inline..

Re: [openstack-dev] [neutron][vpnaas] VPNaaS Subteam meeting Tuesday 1500 UTC meeting-4

2015-01-06 Thread Mathieu Rohon
Thanks Paul,

I'll be there!

Mathieu

On Mon, Jan 5, 2015 at 8:43 PM, Paul Michali (pcm)  wrote:
> Since we took a break for two weeks and the meeting channel has changed, I
> figured I send a reminder.
>
> Please update the page for any agenda items you may have,
>
> Regards,
>
>
> PCM (Paul Michali)
>
> MAIL …..…. p...@cisco.com
> IRC ……..… pc_m (irc.freenode.com)
> TW ………... @pmichali
> GPG Key … 4525ECC253E31A83
> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-19 Thread Mathieu Rohon
Mike,

I'm not even sure that your solution works without being able to bind
a router HA port to several hosts.
What's happening currently is this:

1. Create the router on two l3 agents.
2. Those l3 agents trigger sync_routers() on the l3 plugin.
3. l3plugin.sync_routers() will trigger l2plugin.update_port(host=l3agent).
4. ML2 will bind the port to the host mentioned in the last update_port().

From an l2pop perspective, this will result in creating only one tunnel,
to the host lastly specified.
I can't find any code that forces only the master router to bind
its router port. So we don't even know whether the host which binds the
router port is hosting the master router or the slave one, and so whether
l2pop is creating the tunnel to the master or to the slave.

Can you confirm that the above sequence is correct, or am I missing something?

Without the capacity to bind a port to several hosts, l2pop won't be
able to create tunnels correctly; that's the reason why I was saying
that a prerequisite for a smart solution would be to first fix this
bug:
https://bugs.launchpad.net/neutron/+bug/1367391

DVR had the same issue. Their workaround was to create a new
port_binding table that manages the capacity for one DVR port to be
bound to several hosts.
As mentioned in bug 1367391, this adds technical debt to ML2,
which has to be tackled as a priority from my POV.


On Thu, Dec 18, 2014 at 6:28 PM, Mike Kolesnik  wrote:
> Hi Mathieu,
>
> Thanks for the quick reply, some comments inline..
>
> Regards,
> Mike
>
> - Original Message -
>> Hi mike,
>>
>> thanks for working on this bug :
>>
>> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
>> >
>> >
>> > On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
>> >
>> >>Hi Neutron community members.
>> >>
>> >>I wanted to query the community about a proposal of how to fix HA routers
>> >>not
>> >>working with L2Population (bug 1365476[1]).
>> >>This bug is important to fix especially if we want to have HA routers and
>> >>DVR
>> >>routers working together.
>> >>
>> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476
>> >>
>> >>What's happening now?
>> >>* HA routers use distributed ports, i.e. the port with the same IP & MAC
>> >>  details is applied on all nodes where an L3 agent is hosting this
>> >>router.
>> >>* Currently, the port details have a binding pointing to an arbitrary node
>> >>  and this is not updated.
>> >>* L2pop takes this "potentially stale" information and uses it to create:
>> >>  1. A tunnel to the node.
>> >>  2. An FDB entry that directs traffic for that port to that node.
>> >>  3. If ARP responder is on, ARP requests will not traverse the network.
>> >>* Problem is, the master router wouldn't necessarily be running on the
>> >>  reported agent.
>> >>  This means that traffic would not reach the master node but some
>> >>arbitrary
>> >>  node where the router master might be running, but might be in another
>> >>  state (standby, fail).
>> >>
>> >>What is proposed?
>> >>Basically the idea is not to do L2Pop for HA router ports that reside on
>> >>the
>> >>tenant network.
>> >>Instead, we would create a tunnel to each node hosting the HA router so
>> >>that
>> >>the normal learning switch functionality would take care of switching the
>> >>traffic to the master router.
>> >
>> > In Neutron we just ensure that the MAC address is unique per network.
>> > Could a duplicate MAC address cause problems here?
>>
>> gary, AFAIU, from a Neutron POV, there is only one port, which is the
>> router Port, which is plugged twice. One time per port.
>> I think that the capacity to bind a port to several host is also a
>> prerequisite for a clean solution here. This will be provided by
>> patches to this bug :
>> https://bugs.launchpad.net/neutron/+bug/1367391
>>
>>
>> >>This way no matter where the master router is currently running, the data
>> >>plane would know how to forward traffic to it.
>> >>This solution requires changes on the controller only.
>> >>
>> >>What's to gain?
>> >>* Data plane only solution, independent of the control plane.
>> >>* Lowest failover time (same as HA routers today).
>> >>* High backport potential:
>> >>  * No APIs changed/added.
>> >>  * No configuration changes.
>> >>  * No DB changes.
>> >>  * Changes localized to a single file and limited in scope.
>> >>
>> >>What's the alternative?
>> >>An alternative solution would be to have the controller update the port
>> >>binding
>> >>on the single port so that the plain old L2Pop happens and notifies about
>> >>the
>> >>location of the master router.
>> >>This basically negates all the benefits of the proposed solution, but is
>> >>wider.
>> >>This solution depends on the report-ha-router-master spec which is
>> >>currently in
>> >>the implementation phase.
>> >>
>> >>It's important to note that these two solutions don't collide and could
>> >>be done
>> >>independently. The one I'm proposing just makes more sense from an HA
>> >>viewpoint
>> >>because of it's benefits 

Re: [openstack-dev] Request for comments for a possible solution

2014-12-19 Thread Mathieu Rohon
Hi Vivek,

On Fri, Dec 19, 2014 at 10:44 AM, Narasimhan, Vivekanandan
 wrote:
> Hi Mike,
>
> Few clarifications inline [Vivek]
>
> -Original Message-
> From: Mike Kolesnik [mailto:mkole...@redhat.com]
> Sent: Thursday, December 18, 2014 10:58 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for 
> comments for a possible solution
>
> Hi Mathieu,
>
> Thanks for the quick reply, some comments inline..
>
> Regards,
> Mike
>
> - Original Message -
>> Hi mike,
>>
>> thanks for working on this bug :
>>
>> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
>> >
>> >
>> > On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
>> >
>> >>Hi Neutron community members.
>> >>
>> >>I wanted to query the community about a proposal of how to fix HA
>> >>routers not working with L2Population (bug 1365476[1]).
>> >>This bug is important to fix especially if we want to have HA
>> >>routers and DVR routers working together.
>> >>
>> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476
>> >>
>> >>What's happening now?
>> >>* HA routers use distributed ports, i.e. the port with the same IP &
>> >>MAC
>> >>  details is applied on all nodes where an L3 agent is hosting this
>> >>router.
>> >>* Currently, the port details have a binding pointing to an
>> >>arbitrary node
>> >>  and this is not updated.
>> >>* L2pop takes this "potentially stale" information and uses it to create:
>> >>  1. A tunnel to the node.
>> >>  2. An FDB entry that directs traffic for that port to that node.
>> >>  3. If ARP responder is on, ARP requests will not traverse the network.
>> >>* Problem is, the master router wouldn't necessarily be running on
>> >>the
>> >>  reported agent.
>> >>  This means that traffic would not reach the master node but some
>> >>arbitrary
>> >>  node where the router master might be running, but might be in
>> >>another
>> >>  state (standby, fail).
>> >>
>> >>What is proposed?
>> >>Basically the idea is not to do L2Pop for HA router ports that
>> >>reside on the tenant network.
>> >>Instead, we would create a tunnel to each node hosting the HA router
>> >>so that the normal learning switch functionality would take care of
>> >>switching the traffic to the master router.
>> >
>> > In Neutron we just ensure that the MAC address is unique per network.
>> > Could a duplicate MAC address cause problems here?
>>
>> gary, AFAIU, from a Neutron POV, there is only one port, which is the
>> router Port, which is plugged twice. One time per port.
>> I think that the capacity to bind a port to several host is also a
>> prerequisite for a clean solution here. This will be provided by
>> patches to this bug :
>> https://bugs.launchpad.net/neutron/+bug/1367391
>>
>>
>> >>This way no matter where the master router is currently running, the
>> >>data plane would know how to forward traffic to it.
>> >>This solution requires changes on the controller only.
>> >>
>> >>What's to gain?
>> >>* Data plane only solution, independent of the control plane.
>> >>* Lowest failover time (same as HA routers today).
>> >>* High backport potential:
>> >>  * No APIs changed/added.
>> >>  * No configuration changes.
>> >>  * No DB changes.
>> >>  * Changes localized to a single file and limited in scope.
>> >>
>> >>What's the alternative?
>> >>An alternative solution would be to have the controller update the
>> >>port binding on the single port so that the plain old L2Pop happens
>> >>and notifies about the location of the master router.
>> >>This basically negates all the benefits of the proposed solution,
>> >>but is wider.
>> >>This solution depends on the report-ha-router-master spec which is
>> >>currently in the implementation phase.
>> >>
>> >>It's important to note that these two solutions don't collide and
>> >>could be done independently. The one I'm proposing just makes more
>> >>sense from an HA viewpoint because of it's benefits which fit the HA
>> >>methodology of being fast & having as little outside dependency as
>> >>possible.
>> >>It could be done as an initial solution which solves the bug for
>> >>mechanism drivers that support normal learning switch (OVS), and
>> >>later kept as an optimization to the more general, controller based,
>> >>solution which will solve the issue for any mechanism driver working
>> >>with L2Pop (Linux Bridge, possibly others).
>> >>
>> >>Would love to hear your thoughts on the subject.
>>
>> You will have to clearly update the doc to mention that deployment
>> with Linuxbridge+l2pop are not compatible with HA.
>
> Yes this should be added and this is already the situation right now.
> However if anyone would like to work on a LB fix (the general one or some 
> specific one) I would gladly help with reviewing it.
>
>>
>> Moreover, this solution is downgrading the l2pop solution, by
>> disabling the ARP-responder when VMs want to talk to a HA router.
>> This means that ARP requests will be duplicated to every ove

Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-18 Thread Mathieu Rohon
Hi Mike,

Thanks for working on this bug:

On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
>
>
> On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
>
>>Hi Neutron community members.
>>
>>I wanted to query the community about a proposal of how to fix HA routers
>>not
>>working with L2Population (bug 1365476[1]).
>>This bug is important to fix especially if we want to have HA routers and
>>DVR
>>routers working together.
>>
>>[1] https://bugs.launchpad.net/neutron/+bug/1365476
>>
>>What's happening now?
>>* HA routers use distributed ports, i.e. the port with the same IP & MAC
>>  details is applied on all nodes where an L3 agent is hosting this
>>router.
>>* Currently, the port details have a binding pointing to an arbitrary node
>>  and this is not updated.
>>* L2pop takes this "potentially stale" information and uses it to create:
>>  1. A tunnel to the node.
>>  2. An FDB entry that directs traffic for that port to that node.
>>  3. If ARP responder is on, ARP requests will not traverse the network.
>>* Problem is, the master router wouldn't necessarily be running on the
>>  reported agent.
>>  This means that traffic would not reach the master node but some
>>arbitrary
>>  node where the router master might be running, but might be in another
>>  state (standby, fail).
>>
>>What is proposed?
>>Basically the idea is not to do L2Pop for HA router ports that reside on
>>the
>>tenant network.
>>Instead, we would create a tunnel to each node hosting the HA router so
>>that
>>the normal learning switch functionality would take care of switching the
>>traffic to the master router.
>
> In Neutron we just ensure that the MAC address is unique per network.
> Could a duplicate MAC address cause problems here?

Gary, AFAIU, from a Neutron POV there is only one port, the router port,
which is plugged twice, one time per host.
I think that the capacity to bind a port to several hosts is also a
prerequisite for a clean solution here. This will be provided by
patches to this bug:
https://bugs.launchpad.net/neutron/+bug/1367391


>>This way no matter where the master router is currently running, the data
>>plane would know how to forward traffic to it.
>>This solution requires changes on the controller only.
>>
>>What's to gain?
>>* Data plane only solution, independent of the control plane.
>>* Lowest failover time (same as HA routers today).
>>* High backport potential:
>>  * No APIs changed/added.
>>  * No configuration changes.
>>  * No DB changes.
>>  * Changes localized to a single file and limited in scope.
>>
>>What's the alternative?
>>An alternative solution would be to have the controller update the port
>>binding
>>on the single port so that the plain old L2Pop happens and notifies about
>>the
>>location of the master router.
>>This basically negates all the benefits of the proposed solution, but is
>>wider.
>>This solution depends on the report-ha-router-master spec which is
>>currently in
>>the implementation phase.
>>
>>It's important to note that these two solutions don't collide and could
>>be done
>>independently. The one I'm proposing just makes more sense from an HA
>>viewpoint
>>because of it's benefits which fit the HA methodology of being fast &
>>having as
>>little outside dependency as possible.
>>It could be done as an initial solution which solves the bug for mechanism
>>drivers that support normal learning switch (OVS), and later kept as an
>>optimization to the more general, controller based, solution which will
>>solve
>>the issue for any mechanism driver working with L2Pop (Linux Bridge,
>>possibly
>>others).
>>
>>Would love to hear your thoughts on the subject.

You will have to clearly update the docs to mention that deployments
with Linuxbridge+l2pop are not compatible with HA routers.

Moreover, this solution downgrades the l2pop mechanism by disabling the
ARP responder when VMs want to talk to an HA router.
This means that ARP requests will be duplicated to every overlay
tunnel to feed the OVS MAC learning table.
This is something that we were trying to avoid with l2pop, but maybe
this is acceptable.
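To give a rough idea of what gets lost, here is approximately the kind of
FDB information that l2pop pushes to the agents today (this is only an
illustrative sketch: all values are made up and the exact structure may
differ per release):

# Rough shape of the l2pop FDB entries sent to L2 agents (illustrative).
# The per-port (mac, ip) tuples feed the ARP responder; the all-zero
# "flooding entry" tells the agent where to flood broadcast/unknown traffic.
FLOODING_ENTRY = ('00:00:00:00:00:00', '0.0.0.0')

fdb_entries = {
    'net-uuid': {
        'network_type': 'vxlan',
        'segment_id': 1001,
        'ports': {
            '192.0.2.10': [                          # tunnel IP of the hosting agent
                FLOODING_ENTRY,
                ('fa:16:3e:aa:bb:cc', '10.0.0.5'),   # VM port: (mac, ip)
            ],
        },
    },
}

With the proposed workaround, the HA router port would only get flooding
entries towards every node hosting the router, and no (mac, ip) entry, so
ARP requests for the router IP have to be flooded.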

I know that ofagent also uses l2pop; I would like to know whether
ofagent deployments will be compatible with the workaround that you are
proposing.

My concern is that, with DVR, there are at least two major features
that are not compatible with Linuxbridge.
Linuxbridge is not running in the gate, and I don't know if anybody is
running third-party testing with Linuxbridge deployments. If anybody
does, it would be great to have it voting on Gerrit!

But I really wonder about the future of Linuxbridge compatibility:
should we keep improving the OVS solution without taking the
Linuxbridge implementation into account?

Regards,

Mathieu

>>
>>Regards,
>>Mike
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> 

Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-15 Thread Mathieu Rohon
Hi Ryan,

We have been working on similar use cases, announcing /32 routes with the
BaGPipe BGP speaker, which supports EVPN.
Please have a look at use case B in [1][2].
Note also that the L2population mechanism driver for ML2 [3], which is
compatible with OVS, Linuxbridge and the Ryu ofagent, is inspired by EVPN,
and I'm sure it could help in your use case.

[1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2]https://www.youtube.com/watch?v=q5z0aPrUZYc&sns
[3]https://blueprints.launchpad.net/neutron/+spec/l2-population
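If it helps, announcing a /32 per floating IP with Ryu's BGP speaker, as
described above, can be sketched like this (purely illustrative: AS
numbers, router-id, neighbor and prefix are made up, and the exact API may
vary with the Ryu version you use):

from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

def best_path_change(event):
    # called whenever the best path for a prefix changes
    print(event)

speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1',
                     best_path_change_handler=best_path_change)
speaker.neighbor_add(address='192.0.2.254', remote_as=64512)

# advertise a single floating IP as a host route
speaker.prefix_add(prefix='203.0.113.10/32', next_hop='192.0.2.1')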

Mathieu

On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger
 wrote:
> Hi,
>
> At Rackspace, we have a need to create a higher level networking service
> primarily for the purpose of creating a Floating IP solution in our
> environment. The current solutions for Floating IPs, being tied to plugin
> implementations, does not meet our needs at scale for the following reasons:
>
> 1. Limited endpoint H/A mainly targeting failover only and not multi-active
> endpoints,
> 2. Lack of noisy neighbor and DDOS mitigation,
> 3. IP fragmentation (with cells, public connectivity is terminated inside
> each cell leading to fragmentation and IP stranding when cell CPU/Memory use
> doesn't line up with allocated IP blocks. Abstracting public connectivity
> away from nova installations allows for much more efficient use of those
> precious IPv4 blocks).
> 4. Diversity in transit (multiple encapsulation and transit types on a per
> floating ip basis).
>
> We realize that network infrastructures are often unique and such a solution
> would likely diverge from provider to provider. However, we would love to
> collaborate with the community to see if such a project could be built that
> would meet the needs of providers at scale. We believe that, at its core,
> this solution would boil down to terminating north<->south traffic
> temporarily at a massively horizontally scalable centralized core and then
> encapsulating traffic east<->west to a specific host based on the
> association setup via the current L3 router's extension's 'floatingips'
> resource.
>
> Our current idea, involves using Open vSwitch for header rewriting and
> tunnel encapsulation combined with a set of Ryu applications for management:
>
> https://i.imgur.com/bivSdcC.png
>
> The Ryu application uses Ryu's BGP support to announce up to the Public
> Routing layer individual floating ips (/32's or /128's) which are then
> summarized and announced to the rest of the datacenter. If a particular
> floating ip is experiencing unusually large traffic (DDOS, slashdot effect,
> etc.), the Ryu application could change the announcements up to the Public
> layer to shift that traffic to dedicated hosts setup for that purpose. It
> also announces a single /32 "Tunnel Endpoint" ip downstream to the TunnelNet
> Routing system which provides transit to and from the cells and their
> hypervisors. Since traffic from either direction can then end up on any of
> the FLIP hosts, a simple flow table to modify the MAC and IP in either the
> SRC or DST fields (depending on traffic direction) allows the system to be
> completely stateless. We have proven this out (with static routing and
> flows) to work reliably in a small lab setup.
>
> On the hypervisor side, we currently plumb networks into separate OVS
> bridges. Another Ryu application would control the bridge that handles
> overlay networking to selectively divert traffic destined for the default
> gateway up to the FLIP NAT systems, taking into account any configured
> logical routing and local L2 traffic to pass out into the existing overlay
> fabric undisturbed.
>
> Adding in support for L2VPN EVPN
> (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
> Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the
> Ryu BGP speaker will allow the hypervisor side Ryu application to advertise
> up to the FLIP system reachability information to take into account VM
> failover, live-migrate, and supported encapsulation types. We believe that
> decoupling the tunnel endpoint discovery from the control plane
> (Nova/Neutron) will provide for a more robust solution as well as allow for
> use outside of openstack if desired.
>
> 
>
> Ryan Clevenger
> Manager, Cloud Engineering - US
> m: 678.548.7261
> e: ryan.cleven...@rackspace.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Changes to the core team

2014-12-04 Thread Mathieu Rohon
On Thu, Dec 4, 2014 at 8:38 AM, Sumit Naiksatam
 wrote:
> On Wed, Dec 3, 2014 at 9:07 PM, Adam Young  wrote:
>> On 12/03/2014 06:24 PM, Sukhdev Kapur wrote:
>>
>> Congratulations Henry and Kevin. It has always been pleasure working with
>> you guys.
>>
>>
>> If I may express my opinion, Bob's contribution to ML2 has been quite
>> substantial. The kind of stability ML2 has achieved makes a statement of his
>> dedication to this work. I have worked very closely with Bob on several
>> issues and co-chaired ML2-Subteam with him and have developed tremendous
>> respect for his dedication.
>> Reading his email reply makes me believe he wants to continue to contribute
>> as core developer. Therefore, I would like to take an opportunity to appeal
>> to the core team to consider granting him his wish - i.e. vote -1 on his
>> removal.
>>
>> If I might venture an outside voice in support of Bob:  you don't want to
>> chase away the continuity.  Yes, sometimes the day job makes us focus on
>> things other than upstream work for a while, but I would say that you should
>> err on the side of keeping someone that is otherwise still engaged.
>> Especially when that core has been as fundamental on a project as I know Bob
>> to have been on Quantumer Neutron.
>
> I would definitely echo the above sentiments; Bob has continually made
> valuable design contributions to ML2 and Neutron that go beyond the
> review count metric. Kindly consider keeping him as a part of the core
> team.

Working with Bob in the ML2 subteam was a real pleasure. He provides
good technical and community leadership.
His reviews are really valuable, since he always reviews a patch in
the context of the overall project and other work in progress.
This takes more time.

> That said, a big +1 to both, Henry and Kevin, as additions to the core
> team! Welcome!!
>
> Thanks,
> ~Sumit.
>
>>
>>
>>
>>
>>
>>
>> regards..
>> -Sukhdev
>>
>>
>>
>>
>>
>>
>> On Wed, Dec 3, 2014 at 11:48 AM, Edgar Magana 
>> wrote:
>>>
>>> I give +2 to Henry and Kevin. So, Congratulations Folks!
>>> I have been working with both of them and great quality reviews are always
>>> coming out from them.
>>>
>>> Many thanks to Nachi and Bob for their hard work!
>>>
>>> Edgar
>>>
>>> On 12/2/14, 7:59 AM, "Kyle Mestery"  wrote:
>>>
>>> >Now that we're in the thick of working hard on Kilo deliverables, I'd
>>> >like to make some changes to the neutron core team. Reviews are the
>>> >most important part of being a core reviewer, so we need to ensure
>>> >cores are doing reviews. The stats for the 180 day period [1] indicate
>>> >some changes are needed for cores who are no longer reviewing.
>>> >
>>> >First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
>>> >neutron-core. Bob and Nachi have been core members for a while now.
>>> >They have contributed to Neutron over the years in reviews, code and
>>> >leading sub-teams. I'd like to thank them for all that they have done
>>> >over the years. I'd also like to propose that should they start
>>> >reviewing more going forward the core team looks to fast track them
>>> >back into neutron-core. But for now, their review stats place them
>>> >below the rest of the team for 180 days.
>>> >
>>> >As part of the changes, I'd also like to propose two new members to
>>> >neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
>>> >been very active in reviews, meetings, and code for a while now. Henry
>>> >lead the DB team which fixed Neutron DB migrations during Juno. Kevin
>>> >has been actively working across all of Neutron, he's done some great
>>> >work on security fixes and stability fixes in particular. Their
>>> >comments in reviews are insightful and they have helped to onboard new
>>> >reviewers and taken the time to work with people on their patches.
>>> >
>>> >Existing neutron cores, please vote +1/-1 for the addition of Henry
>>> >and Kevin to the core team.
>>> >
>>> >Thanks!
>>> >Kyle
>>> >
>>> >[1] http://stackalytics.com/report/contribution/neutron-group/180
>>> >
>>> >___
>>> >OpenStack-dev mailing list
>>> >OpenStack-dev@lists.openstack.org
>>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mai

Re: [openstack-dev] [neutron] Changes to the core team

2014-12-03 Thread Mathieu Rohon
Hi all,

It seems that a process with a survey for neutron core
election/removal was about to take place [1]. Has it been applied to
this proposal?
This has been discussed at length during neutron meetings
[2][3]. Many cores agree that the number of reviews shouldn't be the
only metric, and this statement is reflected in the survey questions.
So I'm surprised to see such a proposal based on Stackalytics figures.

[1]https://etherpad.openstack.org/p/neutron-peer-review
[2]http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-10-13-21.02.log.html
[3]http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-10-21-14.00.log.html

On Wed, Dec 3, 2014 at 9:44 AM, Oleg Bondarev  wrote:
> +1! Congrats, Henry and Kevin!
>
> On Tue, Dec 2, 2014 at 6:59 PM, Kyle Mestery  wrote:
>>
>> Now that we're in the thick of working hard on Kilo deliverables, I'd
>> like to make some changes to the neutron core team. Reviews are the
>> most important part of being a core reviewer, so we need to ensure
>> cores are doing reviews. The stats for the 180 day period [1] indicate
>> some changes are needed for cores who are no longer reviewing.
>>
>> First of all, I'm proposing we remove Bob Kukura and Nachi Ueno from
>> neutron-core. Bob and Nachi have been core members for a while now.
>> They have contributed to Neutron over the years in reviews, code and
>> leading sub-teams. I'd like to thank them for all that they have done
>> over the years. I'd also like to propose that should they start
>> reviewing more going forward the core team looks to fast track them
>> back into neutron-core. But for now, their review stats place them
>> below the rest of the team for 180 days.
>>
>> As part of the changes, I'd also like to propose two new members to
>> neutron-core: Henry Gessau and Kevin Benton. Both Henry and Kevin have
>> been very active in reviews, meetings, and code for a while now. Henry
>> lead the DB team which fixed Neutron DB migrations during Juno. Kevin
>> has been actively working across all of Neutron, he's done some great
>> work on security fixes and stability fixes in particular. Their
>> comments in reviews are insightful and they have helped to onboard new
>> reviewers and taken the time to work with people on their patches.
>>
>> Existing neutron cores, please vote +1/-1 for the addition of Henry
>> and Kevin to the core team.
>>
>> Thanks!
>> Kyle
>>
>> [1] http://stackalytics.com/report/contribution/neutron-group/180
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-01 Thread Mathieu Rohon
On Mon, Dec 1, 2014 at 4:46 PM, Ian Wells  wrote:
> On 1 December 2014 at 04:43, Mathieu Rohon  wrote:
>>
>> This is not entirely true, as soon as a reference implementation,
>> based on existing Neutron components (L2agent/L3agent...) can exist.
>
>
> The specific thing I was saying is that that's harder with an edge-id
> mechanism than one incorporated into Neutron, because the point of the
> edge-id proposal is to make tunnelling explicitly *not* a responsibility of
> Neutron.  So how do you get the agents to terminate tunnels when Neutron
> doesn't know anything about tunnels and the agents are a part of Neutron?

by having modular agents that can drive the dataplane with pluggable
components that would be part of any advanced service. This is a way
to move forward on splitting out advanced services.

> Conversely, you can add a mechanism to the OVS subsystem so that you can tap
> an L2 bridge into a network, which would probably be more straightforward.

This is an alternative that would say: if you want an advanced service
for your VM, please stretch your L2 network to this external
component, which is driven by an external controller, and make your
traffic go through this component to benefit from this advanced
service. This is a valid alternative of course, but distributing the
service directly to each compute node is much more valuable, as soon as
it is doable.

>> But even if it were true, this could at least give a standardized API
>> to Operators that want to connect their Neutron networks to external
>> VPNs, without coupling their cloud solution with whatever SDN
>> controller. And to me, this is the main issue that we want to solve by
>> proposing some neutron specs.
>
>
> So the issue I worry about here is that if we start down the path of adding
> the MPLS datamodels to Neutron we have to add Kevin's switch control work.
> And the L2VPN descriptions for GRE, L2TPv3, VxLAN, and EVPN.  And whatever
> else comes along.  And we get back to 'that's a lot of big changes that
> aren't interesting to 90% of Neutron users' - difficult to get in and a lot
> of overhead to maintain for the majority of Neutron developers who don't
> want or need it.

This shouldn't be a lot of big changes once the interfaces between
advanced services and Neutron core services are cleaner. The
description of the interconnection has to be done somewhere, and
Neutron and its advanced services are a good candidate for that.

> --
> Ian.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-01 Thread Mathieu Rohon
Hi,


On Sun, Nov 30, 2014 at 8:35 AM, Ian Wells  wrote:
> On 27 November 2014 at 12:11, Mohammad Hanif  wrote:
>>
>> Folks,
>>
>> Recently, as part of the L2 gateway thread, there was some discussion on
>> BGP/MPLS/Edge VPN and how to bridge any overlay networks to the neutron
>> network.  Just to update everyone in the community, Ian and I have
>> separately submitted specs which make an attempt to address the cloud edge
>> connectivity.  Below are the links describing it:
>>
>> Edge-Id: https://review.openstack.org/#/c/136555/
>> Edge-VPN: https://review.openstack.org/#/c/136929 .  This is a resubmit of
>> https://review.openstack.org/#/c/101043/ for the kilo release under the
>> “Edge VPN” title.  “Inter-datacenter connectivity orchestration” was just
>> too long and just too generic of a title to continue discussing about :-(
>
>
> Per the summit discussions, the difference is one of approach.
>
> The Edge-VPN case addresses MPLS attachments via a set of APIs to be added
> to the core of Neutron.  Those APIs are all new objects and don't really
> change the existing API so much as extend it.  There's talk of making it a
> 'service plugin' but if it were me I would simply argue for a new service
> endpoint.  Keystone's good at service discovery, endpoints are pretty easy
> to create and I don't see why you need to fold it in.
>
> The edge-id case says 'Neutron doesn't really care about what happens
> outside of the cloud at this point in time, there are loads of different
> edge termination types, and so the best solution would be one where the
> description of the actual edge datamodel does not make its way into core
> Neutron'.  This avoids us folding in the information about edges in the same
> way that we folded in the information about services and later regretted it.
> The notable downside is that this method would work with an external network
> controller such as ODL, but probably will never make its way into the
> inbuilt OVS/ML2 network controller if it's implemented as described
> (explicitly *because* it's designed in such a way as to keep the
> functionality out of core Neutron).  Basically, it's not completely
> incompatible with the datamodel that the Edge-VPN change describes, but
> pushes that datamodel out to an independent service which would have its own
> service endpoint to avoid complicating the Neutron API with information
> that, likely, Neutron itself could probably only ever validate, store and
> pass on to an external controller.

This is not entirely true, as long as a reference implementation
based on existing Neutron components (L2 agent/L3 agent...) can exist.
But even if it were true, this could at least give a standardized API
to operators who want to connect their Neutron networks to external
VPNs without coupling their cloud solution to any particular SDN
controller. And to me, this is the main issue that we want to solve by
proposing these Neutron specs.

> Also, the Edge-VPN case is specified for only MPLS VPNs, and doesn't
> consider other edge cases such as Kevin's switch-based edges in
> https://review.openstack.org/#/c/87825/ .  The edge-ID one is agnostic of
> termination types (since it absolves Neutron of all of that responsibility)
> and would leave the edge type description to the determination of an
> external service.
>
> Obviously, I'm biased, having written the competing spec; but I prefer the
> simple change that pushes complexity out of the core to the larger but
> comprehensive change that keeps it as a part of Neutron.  And in fact if you
> look at the two specs with that in mind, they do go together; the Edge-VPN
> model is almost precisely what you need to describe an endpoint that you
> could then associate with an Edge-ID to attach it to Neutron.
> --
> Ian.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Query regarding vRouter BGP/MPLS VPN support

2014-11-26 Thread Mathieu Rohon
Hi Vikram,

Thanks to the latest OVS capabilities, and to an upstreamed compatible
BGP speaker, we proposed to implement MPLS VPN interconnection directly
in Neutron [1].
The Tacker project aims at managing VNFs, and its first use case is
about implementing a router-as-a-service. Maybe you can have a look at that [2].

[1]http://lists.openstack.org/pipermail/openstack-dev/2014-October/048626.html
[2]https://wiki.openstack.org/wiki/ServiceVM

Mathieu

On Wed, Nov 26, 2014 at 12:17 PM, Vikram Choudhary
 wrote:
> Hi All,
>
>
>
> Can anyone please help us with below queries:
>
>
>
> 1.   Did any of the open source vRouter/s (say vyaTTa, Qagga, BIRD or
> etc) support BGP/MPLS VPN functionality?
>
> 2.   How neutron can communicate with vRouter (say vyaTTa, Qagga, BIRD
> or etc)? Did someone has done this?
>
>
>
> Thanks
>
> Vikram
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] #Personal# Ref: L3 service integration with service framework

2014-11-26 Thread Mathieu Rohon
Hi,

You can still add your own service plugin as a mixin of
L3RouterPlugin (have a look at Brocade's code).
AFAIU, the service framework would manage the coexistence of several
implementations of a single service plugin.

This is currently not prioritized by neutron. This kind of work might
restart in the advanced_services project.
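To illustrate the mixin approach mentioned above, a minimal sketch could
look like this (class names and the backend call are made up, and the
exact import path may vary with the Neutron release you target):

from neutron.services.l3_router import l3_router_plugin

class MyL3ServicePlugin(l3_router_plugin.L3RouterPlugin):
    """Reference L3 behaviour plus notifications to an external backend."""

    def create_router(self, context, router):
        result = super(MyL3ServicePlugin, self).create_router(context, router)
        self._notify_backend('router_create', result)  # hypothetical hook
        return result

    def _notify_backend(self, event, data):
        # placeholder: forward the event to ODL or any other controller
        pass

You would then point service_plugins in neutron.conf to this class instead
of the default router plugin.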

On Wed, Nov 26, 2014 at 2:28 PM, Priyanka Chopra
 wrote:
> Hi Gary, All,
>
>
> This is with reference to blueprint - L3 router Service Type Framework and
> corresponding development at github repo.
>
> I noticed that the patch was abandoned due to inactivity. Wanted to know if
> there is a specific reason for which the development was put on hold?
>
> I am working on a Use-case to enable neutron calls (L2 and L3) from
> OpenStack to OpenDaylight neutron. However presently ML2 forwards the L2
> calls to ODL neutron, but not the L3 calls (router and FIP).
> With this blueprint submission the L3 Service framework (that includes L3
> driver, agent and plugin) will be completed and hence L3 calls from
> OpenStack can be redirected to any controller platform. Please suggest in
> case anyone else is working on the same or if we can do the enhancements
> required and submit the code to enable such a usecase.
>
>
> Best Regards
> Priyanka
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco] [NFV] [Heat] Telco Orchestration

2014-11-26 Thread Mathieu Rohon
Hi,

On Wed, Nov 26, 2014 at 12:48 AM, Georgy Okrokvertskhov
 wrote:
> Hi,
>
> In Murano we did couple projects related to networking orchestration. As NFV

Can you tell us more about those projects? Do they include
multi-datacenter use cases?

> is a quite broad term I can say that Murano approach fits into it too. In
> our case we had bunch of virtual appliances with specific networking
> capabilities and requirements. Some of these appliances had to work together
> to provide a required functionality. These virtual appliances were exposed
> as Murano applications with defined dependencies between apps and operators
> were able to create different networking configuration with these apps
> combining them according their requirements\capabilities. Underlying
> workflows were responsible to bind these virtual appliances together.

Can you provide us a link to such a Murano application, and explain how
you define dependencies between apps and how you translate those
dependencies into networking configuration?

> I will be glad to participate in tomorrow meeting and answer any questions
> you have.
>
> Thanks
> Georgy
>
> On Tue, Nov 25, 2014 at 6:14 AM, Marc Koderer  wrote:
>>
>> Hi Angus,
>>
>> Am 25.11.2014 um 12:48 schrieb Angus Salkeld :
>>
>> On Tue, Nov 25, 2014 at 7:27 PM, Marc Koderer wrote:
>>>
>>> Hi all,
>>>
>>> as discussed during our summit sessions we would like to expand the scope
>>> of the Telco WG (aka OpenStack NFV group) and start working
>>> on the orchestration topic (ETSI MANO).
>>>
>>> Therefore we started with an etherpad [1] to collect ideas, use-cases and
>>> requirements.
>>
>>
>> Hi Marc,
>>
>> You have quite a high acronym per sentence ratio going on that etherpad;)
>>
>>
>> Haha, welcome to the telco world :)
>>
>>
>> From Heat's perspective, we have a lot going on already, but we would love
>> to support
>> what you are doing.
>>
>>
>> That’s exactly what we are planning. What we have is a long list of
>> use-cases and
>> requirements. We need to transform them into specs for the OpenStack
>> projects.
>> Many of those specs won’t be NFV specify, for instance a Telco cloud will
>> be highly
>> distributed. So what we need is a multi-region heat support (which is
>> already a planned
>> feature for Heat as I learned today).
>>
>>
>> You need to start getting specific about what you need and what the
>> missing gaps are.
>> I see you are already looking at higher layers (TOSCA) also check out
>> Murano as well.
>>
>>
>> Yep, I will check Murano.. I never had a closer look to it.
>>
>> Regards
>> Marc
>>
>>
>> Regards
>> -Angus
>>
>>>
>>> Goal is to discuss this document and move it onto the Telco WG wiki [2]
>>> when
>>> it becomes stable.
>>>
>>> Feedback welcome ;)
>>>
>>> Regards
>>> Marc
>>> Deutsche Telekom
>>>
>>> [1] https://etherpad.openstack.org/p/telco_orchestration
>>> [2] https://wiki.openstack.org/wiki/TelcoWorkingGroup
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2 Agent][Debt] Bootstrapping an L2 agent debt repayment task force

2014-11-25 Thread Mathieu Rohon
On Tue, Nov 25, 2014 at 9:59 AM, henry hly  wrote:
> Hi Armando,
>
> Indeed agent-less solution like external controller is very
> interesting, and in some certain cases it has advantage over agent
> solution, e.g. software installation is prohibited on Compute Node.
>
> However, Neutron Agent has its irreplaceable benefits: multiple
> backends support like SRIOV, macvtap, vhost-user snabbswitch, hybrid
> vswitch solution like NIC offloading or VDP based TOR offloading...All
> these backend can not be easily controlled by an remote OF controller.

Moreover, this solution is tested by the gate (at least OVS), and it is
simpler for small deployments.

> Also considering DVR (maybe with upcoming FW for W-E), and Security
> Group, W-E traffic control capability gap still exists between linux
> stack and OF flowtable, whether features like advanced netfilter, or
> performance for webserver app which incur huge concurrent sessions
> (because of basic OF upcall model, the more complex flow rule, the
> less megaflow aggregation might take effect)
>
> Thanks to L2pop and DVR, now many customer give the feedback that
> Neutron has made great progressing, and already meet nearly all their
> L2/L3 connectivity W-E control directing (The next big expectation is
> N-S traffic directing like dynamic routing agent), without forcing
> them to learn and integrate another huge platform like external SDN
> controller.

+100. Note that Dynamic routing is in progress.

> No attention to argue on agent vs. agentless, built-in reference vs.
> external controller, Openstack is an open community. But, I just want
> to say that modularized agent re-factoring does make a lot of sense,
> while forcing customer to piggyback an extra SDN controller on their
> Cloud solution is not the only future direction of Neutron.
>
> Best Regard
> Henry
>
> On Wed, Nov 19, 2014 at 5:45 AM, Armando M.  wrote:
>> Hi Carl,
>>
>> Thanks for kicking this off. I am also willing to help as a core reviewer of
>> blueprints and code
>> submissions only.
>>
>> As for the ML2 agent, we all know that for historic reasons Neutron has
>> grown to be not only a networking orchestration project but also a reference
>> implementation that is resembling what some might call an SDN controller.
>>
>> I think that most of the Neutron folks realize that we need to move away
>> from this model and rely on a strong open source SDN alternative; for these
>> reasons, I don't think that pursuing an ML2 agent would be a path we should
>> go down to anymore. It's time and energy that could be more effectively
>> spent elsewhere, especially on the refactoring. Now if the refactoring
>> effort ends up being labelled ML2 Agent, I would be okay with it, but my gut
>> feeling tells me that any attempt at consolidating code to embrace more than
>> one agent logic at once is gonna derail the major goal of paying down the so
>> called agent debt.
>>
>> My 2c
>> Armando
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-17 Thread Mathieu Rohon
Hi

On Fri, Nov 14, 2014 at 6:26 PM, Armando M.  wrote:
> Last Friday I recall we had two discussions around this topic. One in the
> morning, which I think led to Maruti to push [1]. The way I understood [1]
> was that it is an attempt at unifying [2] and [3], by choosing the API
> approach of one and the architectural approach of the other.
>
> [1] https://review.openstack.org/#/c/134179/
> [2] https://review.openstack.org/#/c/100278/
> [3] https://review.openstack.org/#/c/93613/
>
> Then there was another discussion in the afternoon, but I am not 100% of the
> outcome.

Me neither; that's why I'd like Ian, who led this discussion, to sum
up the outcome from his point of view.

> All this churn makes me believe that we probably just need to stop
> pretending we can achieve any sort of consensus on the approach and let the
> different alternatives develop independently, assumed they can all develop
> independently, and then let natural evolution take its course :)

I tend to agree, but I think that one of the reasons why we are looking
for a consensus is that API evolutions proposed through
neutron-specs are rejected by core devs, because they rely on external
components (SDN controllers, proprietary hardware...) or they are not a
high priority for Neutron core devs.
By finding a consensus, we show that several players are interested in
such an API, and it helps to convince core devs that this use case, and
its API, is missing in Neutron.

Now, if there is room to easily propose new APIs in Neutron, it makes
sense to let new APIs appear and evolve, and then "let natural
evolution take its course", as you said.
To me, this is in the scope of the "advanced services" project.

> Ultimately the biggest debate is on what the API model needs to be for these
> abstractions. We can judge on which one is the best API of all, but
> sometimes this ends up being a religious fight. A good API for me might not
> be a good API for you, even though I strongly believe that a good API is one
> that can:
>
> - be hard to use incorrectly
> - clear to understand
> - does one thing, and one thing well
>
> So far I have been unable to be convinced why we'd need to cram more than
> one abstraction in one single API, as it does violate the above mentioned
> principles. Ultimately I like the L2 GW API proposed by 1 and 2 because it's
> in line with those principles. I'd rather start from there and iterate.
>
> My 2c,
> Armando
>
> On 14 November 2014 08:47, Salvatore Orlando  wrote:
>>
>> Thanks guys.
>>
>> I think you've answered my initial question. Probably not in the way I was
>> hoping it to be answered, but it's ok.
>>
>> So now we have potentially 4 different blueprint describing more or less
>> overlapping use cases that we need to reconcile into one?
>> If the above is correct, then I suggest we go back to the use case and
>> make an effort to abstract a bit from thinking about how those use cases
>> should be implemented.
>>
>> Salvatore
>>
>> On 14 November 2014 15:42, Igor Cardoso  wrote:
>>>
>>> Hello all,
>>> Also, what about Kevin's https://review.openstack.org/#/c/87825/? One of
>>> its use cases is exactly the L2 gateway. These proposals could probably be
>>> inserted in a more generic work for moving existing datacenter L2 resources
>>> to Neutron.
>>> Cheers,
>>>
>>> On 14 November 2014 15:28, Mathieu Rohon  wrote:
>>>>
>>>> Hi,
>>>>
>>>> As far as I understood last friday afternoon dicussions during the
>>>> design summit, this use case is in the scope of another umbrella spec
>>>> which would define external connectivity for neutron networks. Details
>>>> of those connectivity would be defined through service plugin API.
>>>>
>>>> Ian do you plan to define such an umbrella spec? or at least, could
>>>> you sum up the agreement of the design summit discussion in the ML?
>>>>
>>>> I see at least 3 specs which would be under such an umbrella spec :
>>>> https://review.openstack.org/#/c/93329/ (BGPVPN)
>>>> https://review.openstack.org/#/c/101043/ (Inter DC connectivity with
>>>> VPN)
>>>> https://review.openstack.org/#/c/134179/ (l2 gw aas)
>>>>
>>>>
>>>> On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando 
>>>> wrote:
>>>> > Thanks Maruti,
>>>> >
>>>> > I have some comments and questions which I've posted on gerrit.
>>>> > There are two things I would like to discuss on the mailing l

Re: [openstack-dev] [neutron] L2 gateway as a service

2014-11-14 Thread Mathieu Rohon
Hi,

As far as I understood last Friday afternoon's discussions during the
design summit, this use case is in the scope of another umbrella spec
which would define external connectivity for Neutron networks. The details
of that connectivity would be defined through a service plugin API.

Ian, do you plan to define such an umbrella spec? Or at least, could
you sum up the agreement of the design summit discussion on the ML?

I see at least 3 specs which would fall under such an umbrella spec:
https://review.openstack.org/#/c/93329/ (BGPVPN)
https://review.openstack.org/#/c/101043/ (Inter DC connectivity with VPN)
https://review.openstack.org/#/c/134179/ (l2 gw aas)


On Fri, Nov 14, 2014 at 1:13 PM, Salvatore Orlando  wrote:
> Thanks Maruti,
>
> I have some comments and questions which I've posted on gerrit.
> There are two things I would like to discuss on the mailing list concerning
> this effort.
>
> 1) Is this spec replacing  https://review.openstack.org/#/c/100278 and
> https://review.openstack.org/#/c/93613 - I hope so, otherwise this just adds
> even more complexity.
>
> 2) It sounds like you should be able to implement this service plugin in
> either a feature branch or a repository distinct from neutron. Can you
> confirm that?
>
> Salvatore
>
> On 13 November 2014 13:26, Kamat, Maruti Haridas 
> wrote:
>>
>> Hi Friends,
>>
>>  As discussed during the summit, I have uploaded the spec for review
>> at https://review.openstack.org/#/c/134179/
>>
>> Thanks,
>> Maruti
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGP - VPN BoF session in Kilo design summit

2014-11-04 Thread Mathieu Rohon
Hi,

Thanks Jaume, it makes sense since those use cases need l3-agent refactoring.
I've updated the BGPVPN etherpad [1] with the materials used during
today's tech talk.

I hope this will help everyone better understand our use cases.

[1]https://etherpad.openstack.org/p/bgpvpn

On Tue, Nov 4, 2014 at 2:43 PM, Jaume Devesa  wrote:
> Hello,
>
> BoF will be Wednesday 5 at 15:00 pm at Design Summit building. We will have
> the chance to talk about it into the Kilo L3 refactoring BoF
>
> https://etherpad.openstack.org/p/kilo-l3-refactoring
>
> Cheers,
>
> On 30 October 2014 07:28, Carl Baldwin  wrote:
>>
>> Yes, let's discuss this in the meeting on Thursday.
>>
>> Carl
>>
>> On Oct 29, 2014 5:27 AM, "Jaume Devesa"  wrote:
>>>
>>> Hello,
>>>
>>> it seems like the BGP dynamic routing it is in a good shape to be
>>> included in Neutron during Kilo[1]. There is quite interest in offer BGP-VPN
>>> too. Mathieu Rohon's spec[2] goes in this direction. Of course it makes
>>> sense that his development leverages the BGP one.
>>>
>>> I would like to have a BoF session and invite anyone interested on these
>>> blueprints to join us or even add a new related one. I've created an
>>> etherpad[3] to share ideas and agree with session schedule. I propose
>>> Wednesday afternoon.
>>>
>>> If Carl Baldwin is agree, we can talk about it also during the open
>>> discussion of today's L3 subteam meeting.
>>>
>>> [1]: https://review.openstack.org/#/c/125401/
>>> [
>>> 2]: https://review.openstack.org/#/c/125401/
>>> [3]: https://etherpad.openstack.org/p/bgp-vpn-dynamic-routing
>>>
>>> Cheers,
>>> --
>>> Jaume Devesa
>>> Software Engineer at Midokura
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGP - VPN BoF session in Kilo design summit

2014-10-29 Thread Mathieu Rohon
Hi,

Thanks Jaume for planning this BoF.
The BGPVPN spec [2] was initiated by Nati Ueno. I hope Nati will
attend this BoF too.
Our proposal is to try to implement this spec with existing Neutron
components [4].

Thomas and I will also give a tech talk about BGPVPN on Tuesday at 1:15 PM [5].

[4]https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit
[5]http://openstack.prov12n.com/vbrownbag-techtalk-schedule-at-openstack-summit-paris-2/

On Wed, Oct 29, 2014 at 12:26 PM, Jaume Devesa  wrote:
> Hello,
>
> it seems like the BGP dynamic routing it is in a good shape to be included
> in Neutron during Kilo[1]. There is quite interest in offer BGP-VPN too.
> Mathieu Rohon's spec[2] goes in this direction. Of course it makes sense
> that his development leverages the BGP one.
>
> I would like to have a BoF session and invite anyone interested on these
> blueprints to join us or even add a new related one. I've created an
> etherpad[3] to share ideas and agree with session schedule. I propose
> Wednesday afternoon.
>
> If Carl Baldwin is agree, we can talk about it also during the open
> discussion of today's L3 subteam meeting.
>
> [1]: https://review.openstack.org/#/c/125401/
> [
> 2]: https://review.openstack.org/#/c/125401/
> [3]: https://etherpad.openstack.org/p/bgp-vpn-dynamic-routing
>
> Cheers,
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Pod time at Paris Summit

2014-10-29 Thread Mathieu Rohon
Hi,

+1

thanks

On Wed, Oct 29, 2014 at 7:19 AM, Kevin Benton  wrote:
> I believe "Déjà vu" is an appropriate term here. :-)
>
> Count me in.
>
> On Tue, Oct 28, 2014 at 1:16 PM, Collins, Sean
>  wrote:
>>
>> Hi,
>>
>> Like Atlanta, I will be at the summit. If there is interest, I can
>> schedule a time to talk about the QoS API extension.
>>
>> --
>> Sean M. Collins
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Kevin Benton
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova] New specs on routed networking

2014-10-28 Thread Mathieu Rohon
Hi,

Really interesting, thanks Cory.
During the L3 meeting we spoke about planning a pod session around BGP use cases.
At least 2 specs have BGP use cases:

https://review.openstack.org/#/c/125401/
https://review.openstack.org/#/c/93329/

It would be interesting for you to join this pod, to share your views and
leverage the BGP capabilities that will be introduced in Kilo for the
Calico project.

Mathieu


On Tue, Oct 28, 2014 at 8:44 AM, A, Keshava  wrote:
> Hi,
>
> Current Open-stack was built as flat network.
>
> With the introduction of the L3 lookup (by inserting the routing table in
> forwarding path) and separate ‘VIF Route Type’ interface:
>
>
>
> At what point of time in the packet processing  decision will be made to
> lookup FIB  during ? For each packet there will additional  FIB lookup ?
>
> How about the  impact on  ‘inter compute traffic’, processed by  DVR  ?
>
>
>
> Here thinking  OpenStack cloud as hierarchical network instead of Flat
> network ?
>
>
>
> Thanks & regards,
>
> Keshava
>
>
>
> From: Rohit Agarwalla (roagarwa) [mailto:roaga...@cisco.com]
> Sent: Monday, October 27, 2014 12:36 AM
> To: OpenStack Development Mailing List (not for usage questions)
>
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>
>
>
> Hi
>
>
>
> I'm interested as well in this model. Curious to understand the routing
> filters and their implementation that will enable isolation between tenant
> networks.
>
> Also, having a BoF session on "Virtual Networking using L3" may be useful to
> get all interested folks together at the Summit.
>
>
>
>
>
> Thanks
>
> Rohit
>
>
>
> From: Kevin Benton 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Friday, October 24, 2014 12:51 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [neutron][nova] New specs on routed networking
>
>
>
> Hi,
>
>
>
> Thanks for posting this. I am interested in this use case as well.
>
>
>
> I didn't find a link to a review for the ML2 driver. Do you have any more
> details for that available?
>
> It seems like not providing L2 connectivity between members of the same
> Neutron network conflicts with assumptions ML2 will make about segmentation
> IDs, etc. So I am interested in seeing how exactly the ML2 driver will bind
> ports, segments, etc.
>
>
>
>
>
> Cheers,
>
> Kevin Benton
>
>
>
> On Fri, Oct 24, 2014 at 6:38 AM, Cory Benfield
>  wrote:
>
> All,
>
> Project Calico [1] is an open source approach to virtual networking based on
> L3 routing as opposed to L2 bridging.  In order to accommodate this approach
> within OpenStack, we've just submitted 3 blueprints that cover
>
> -  minor changes to nova to add a new VIF type [2]
> -  some changes to neutron to add DHCP support for routed interfaces [3]
> -  an ML2 mechanism driver that adds support for Project Calico [4].
>
> We feel that allowing for routed network interfaces is of general use within
> OpenStack, which was our motivation for submitting [2] and [3].  We also
> recognise that there is an open question over the future of 3rd party ML2
> drivers in OpenStack, but until that is finally resolved in Paris, we felt
> submitting our driver spec [4] was appropriate (not least to provide more
> context on the changes proposed in [2] and [3]).
>
> We're extremely keen to hear any and all feedback on these proposals from
> the community.  We'll be around at the Paris summit in a couple of weeks and
> would love to discuss with anyone else who is interested in this direction.
>
> Regards,
>
> Cory Benfield (on behalf of the entire Project Calico team)
>
> [1] http://www.projectcalico.org
> [2] https://blueprints.launchpad.net/nova/+spec/vif-type-routed
> [3] https://blueprints.launchpad.net/neutron/+spec/dhcp-for-routed-ifs
> [4] https://blueprints.launchpad.net/neutron/+spec/calico-mechanism-driver
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Clear all flows when ovs agent start? why and how avoid?

2014-10-28 Thread Mathieu Rohon
Hi Wei,

The agent will be reworked with the modular L2 agent [1];
your proposal could be handled during this work.

[1]https://review.openstack.org/#/c/106189/

Mathieu

On Tue, Oct 28, 2014 at 4:01 AM, Damon Wang  wrote:
> Hi all,
>
> We have suffered a long down time when we upgrade our public cloud's neutron
> into the latest version (close to Juno RC2), for ovs-agent cleaned all flows
> in br-tun when it start.
>
> I find our current design is remove all flows then add flow by entry, this
> will cause every network node will break off all tunnels between other
> network node and all compute node.
>
> ( plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent.__init__ ->
> plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent#setup_tunnel_br
> :
> self.tun_br.remove_all_flows() )
>
> Do we have any mechanism or ideas to avoid this, or should we rethink
> current design? Welcome comments
>
> Wei Wang
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] BGPVPN implementation discussions

2014-10-22 Thread Mathieu Rohon
Hi Keshava,

> Hi,
>
>
>
> 1.   From where the MPLS traffic will be initiated ?

In this design, MPLS traffic will be initiated from a network node,
where the qrouter is located. However, we thought of an alternative design
where MPLS traffic is initiated on the compute node, directly from a
VM plugged into an exported IPVPN network. In this case, /32 routes would be
advertised, and the BGP speaker would be hosted on each compute node.
See here:

https://docs.google.com/drawings/d/1bMXiOwHsbKS89xfE0vQMtu7D9H3XV8Cvkmcoz6rzDOE/edit?usp=sharing

>
> 2.   How it will be mapped ?

In the proposed design, the mapping in br-mpls will be done on the
destination network received via BGP, and on the "in_port", to
distinguish traffic from each qrouter. In br-mpls, there would be one
internal port per qrouter.
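To make this a bit more concrete, the kind of flow we have in mind could be
installed like this (a sketch only: port numbers, label, prefix and the
br-mpls wiring are made up for illustration, the real rules being derived
from the routes learned over BGP):

import subprocess

# Traffic entering br-mpls from a given qrouter internal port (in_port=5)
# towards an exported prefix gets an MPLS label pushed, then goes out of
# the MPLS-facing port (output:1).
subprocess.check_call([
    'ovs-ofctl', 'add-flow', 'br-mpls',
    'in_port=5,ip,nw_dst=10.1.0.0/16,'
    'actions=push_mpls:0x8847,set_field:16->mpls_label,output:1',
])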

Regards

Mathieu

>
>
>
> Regards,
>
> Keshava
>
> From: Damon Wang [mailto:damon.dev...@gmail.com]
> Sent: Friday, October 17, 2014 12:42 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] BGPVPN implementation discussions
>
>
>
> Good news, +1
>
>
>
> 2014-10-17 0:48 GMT+08:00 Mathieu Rohon :
>
> Hi all,
>
> as discussed during today's l3-meeting, we keep on working on BGPVPN
> service plugin implementation [1].
> MPLS encapsulation is now supported in OVS [2], so we would like to
> summit a design to leverage OVS capabilities. A first design proposal,
> based on l3agent, can be found here :
>
> https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit
>
> this solution is based on bagpipe [3], and its capacity to manipulate
> OVS, based on advertised and learned routes.
>
> [1]https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-vpn
> [2]https://raw.githubusercontent.com/openvswitch/ovs/master/FAQ
> [3]https://github.com/Orange-OpenSource/bagpipe-bgp
>
>
> Thanks
>
> Mathieu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] BGPVPN implementation discussions

2014-10-16 Thread Mathieu Rohon
Hi all,

As discussed during today's l3 meeting, we keep on working on the BGPVPN
service plugin implementation [1].
MPLS encapsulation is now supported in OVS [2], so we would like to
submit a design that leverages OVS capabilities. A first design proposal,
based on the l3 agent, can be found here:

https://docs.google.com/drawings/d/1NN4tDgnZlBRr8ZUf5-6zzUcnDOUkWSnSiPm8LuuAkoQ/edit

This solution is based on BaGPipe [3] and its capacity to manipulate
OVS based on advertised and learned routes.

[1]https://blueprints.launchpad.net/neutron/+spec/neutron-bgp-vpn
[2]https://raw.githubusercontent.com/openvswitch/ovs/master/FAQ
[3]https://github.com/Orange-OpenSource/bagpipe-bgp


Thanks

Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ml2] How ML2 reflects on the topology?

2014-10-16 Thread Mathieu Rohon
Hi,

If you use the VLAN type driver, TOR switches should be configured in
trunk mode to allow the VLANs specified in the [ml2_type_vlan] section of
ml2_conf.ini.
The VLAN ID range is defined in this section. Any tenant network will use
an ID from this range, and it is totally independent from the tenant ID.
Some mechanism drivers should allow you to automatically configure the
TOR switch with the correct VLAN ID on the trunk port connected to the
compute nodes.
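For example, a minimal ml2_conf.ini for that setup could look roughly like
this (the physical network name, VLAN range and bridge mapping are just
examples to adapt to your deployment):

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
# <physical_network>:<vlan_min>:<vlan_max>
network_vlan_ranges = physnet1:1000:1999

[ovs]
bridge_mappings = physnet1:br-eth1

The TOR trunk ports would then have to allow VLANs 1000-1999.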

When you create a port, traffic from this port will use the VLAN tag
of the network which owns the port.

Hope this helps

On Wed, Oct 15, 2014 at 7:18 PM, Ettore zugliani
 wrote:
> Hi, I've got a few questions that have been left unanswered on Ask.Openstack
> and on the IRC channel.
>
> How the topology may be affected by the ML2 API calls? In other words, how
> would a "Create Network" call affect the actual topology? How is it
> controlled?
>
> An example: Once we receive a "Create Network" ML2 API call we don't know
> how exactly it reflects on ANY switch configuration. Supposing that we
> received a create_network with the tenant_id = tid and we are using the
> TypeDriver VLAN, should we create a VLAN on the swtich with vid = tid?
>
> On a create_port API call should we add a specifc port -manually- to this
> vlan? Another thing that comes to mind is that if is there a default port or
> do we get the correct port from Neutron context?
>
> Thank you
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] How to write extension

2014-10-08 Thread Mathieu Rohon
Hi,

Please tell us about the extension you want to write; it would really help.
In ML2, you now have the ability to easily add extensions thanks to this work:

https://blueprints.launchpad.net/neutron/+spec/extensions-in-ml2
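As a rough sketch (method names and signatures vary a bit between
releases, so check driver_api.py in your tree; the alias and attribute
below are made up), an ML2 extension driver looks like this:

from neutron.plugins.ml2 import driver_api

class MyExtensionDriver(driver_api.ExtensionDriver):

    _alias = 'my-extension'

    def initialize(self):
        pass

    @property
    def extension_alias(self):
        return self._alias

    def process_create_port(self, session, data, result):
        # validate/persist the extra attribute carried in the API request
        result['my_attr'] = data.get('my_attr')

    def extend_port_dict(self, session, result):
        # expose the extra attribute when ports are shown/listed
        result.setdefault('my_attr', None)

It is then enabled with the extension_drivers option in the [ml2] section
of ml2_conf.ini.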

Mathieu

On Wed, Oct 8, 2014 at 7:52 AM, Damon Wang  wrote:
> Hi Kapłoński,
>
> I recommend you to read these materials:
>
> Keynotes:
> https://www.openstack.org/assets/presentation-media/insideneutron2.pdf
> Video:
> https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/deep-dive-into-neutron
> Post:
> http://control-that-vm.blogspot.com/2014/05/writing-api-extensions-in-neutron.html
>
> Regards,
> Wang Wei
>
> 2014-10-08 3:29 GMT+08:00 Sławek Kapłoński :
>>
>> Hello,
>>
>> Can You send me some examples or link to tutorial how to write extension
>> to
>> neutron. I'm using ML2 plugin in neutron and I want to add some new
>> functionality and API methods to this neutron. So imho I should do it via
>> extension but how exactly write it that it will work in existing ML2
>> plugin?
>>
>> ---
>> Best regards
>> Sławek Kapłoński
>> sla...@kaplonski.pl
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About the question "Switch firewall_driver control from Neutron to Nova"

2014-09-22 Thread Mathieu Rohon
Hi,

To switch to Nova firewall management, you also have to set
enable_security_group = False in ml2_conf.ini.
Nova will have to manage security groups itself, by setting
security_group_api = nova in nova.conf.
By doing this you will lose some capabilities of Neutron
(allowed_address_pairs, for example).
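Concretely, the switch looks roughly like this (double-check the exact
firewall driver paths for your release):

# ml2_conf.ini (Neutron side)
[securitygroup]
enable_security_group = False
firewall_driver = neutron.agent.firewall.NoopFirewallDriver

# nova.conf (Nova side)
[DEFAULT]
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver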

However, you should be able to implement your use case with a correct
Neutron setup; otherwise Neutron needs some new code. I think we
should keep talking on the "ask" page first, to try to solve your
issue.


On Mon, Sep 22, 2014 at 5:12 AM, Tien-Trung Trinh
 wrote:
> Dear OpenStack-dev,
>
>
>
> I’ve posted a question on
> https://ask.openstack.org/en/question/47819/switch-firewall_driver-control-from-neutron-to-nova/
>
> It’s suggested that I should forward this question to OpenStack-dev mailing
> list for discussion.
>
>
>
> Any feedback/answer would be very appreciated.
>
>
>
> Thanks and regards
>
> Trung
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters

2014-08-27 Thread Mathieu Rohon
Hi Irena,

In Andreas's proposal, do you want to enforce non-promiscuous mode
per L2 agent, so that every port managed by this agent will have to be in a
non-promiscuous state?
At a first read of the mail, I understood that you wanted to manage that
per port, with an extension.
By using an extension, an agent could host promiscuous and non-promiscuous
net adapters, and other MDs could potentially leverage this info (at
least the LB MD).

On Wed, Aug 27, 2014 at 3:45 PM, Irena Berezovsky  wrote:
> Hi Mathieu,
> We had a short discussion with Andreas about the use case stated below and 
> also considered the SR-IOV related use case.
> It seems that all required changes can be encapsulated in the L2 OVS agent, 
> since it requires to add fdb mac registration on adapted interface.
> What was your idea related to extension manager in ML2?
>
> Thanks,
> Irena
>
> -----Original Message-
> From: Mathieu Rohon [mailto:]
> Sent: Wednesday, August 27, 2014 3:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
> promic mode adapters
>
> you probably should consider using the future extension manager in ML2 :
>
> https://review.openstack.org/#/c/89211/
>
> On Mon, Aug 25, 2014 at 12:54 PM, Irena Berezovsky  
> wrote:
>> Hi Andreas,
>> We can definitely set some time to discuss this.
>> I am usually available from 5 to 14:00 UTC.
>> Let's follow up on IRC (irenab).
>>
>> BR,
>> Irena
>>
>> -Original Message-
>> From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
>> Sent: Monday, August 25, 2014 11:00 AM
>> To: Irena Berezovsky
>> Cc: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support
>> for non promic mode adapters
>>
>> Hi Irena,
>> thanks for your reply. Yes sure, collaboration would be great.
>> Do you already have a blueprint out there? Maybe wen can synchup this week 
>> to discuss more details? Cause I would like to understand what exactly 
>> you're looking for. Normally I'm available form 7 UTC to 16 utc (today only 
>> until 13 utc). My irc name is scheuran. Maybe we can get in contact this 
>> week!
>>
>> You also where talking about sriov. I saw some blueprint mentioning sriov & 
>> macvtap. Do you have any insights into this one, too? What we also would 
>> like to do is to introduce macvtap as network virtualization option. Macvtap 
>> also registers mac addresses to network adapters...
>>
>>
>> Thanks,
>> Andreas
>>
>>
>> On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
>>> Hi Andreas,
>>> Thank you for this initiative.
>>> We were looking on similar problem for mixing OVS and SR-IOV on same 
>>> network adapter, which also requires mac addresses registration of OVS 
>>> ports.
>>> Please let me know if you would like to collaborate on this effort.
>>>
>>> BR,
>>> Irena
>>>
>>> -Original Message-
>>> From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
>>> Sent: Friday, August 22, 2014 11:16 AM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support
>>> for non promic mode adapters
>>>
>>> Thanks for your feedback.
>>>
>>> No, I do not yet have code for it. Just wanted to get a feeling if such a 
>>> feature would get acceptance in the community.
>>> But if that helps I can sit down and start some prototyping while I'm 
>>> preparing a blueprint spec in parallel.
>>>
>>> The main part of the implementation I wanted to do on my own to get more 
>>> familiar with the code base and to get more in touch with the community.
>>> But of course advice and feedback of experienced neutron developers is 
>>> essential!
>>>
>>> So I will proceed like this
>>> - Create a blueprint
>>> - Commit first pieces of code to get early feedback (e.g. ask via the
>>> mailing list or irc)
>>> - Upload a spec (as soon as the repo is available for K)
>>>
>>> Does that make sense for you?
>>>
>>> Thanks,
>>> Andreas
>>>
>>>
>>>
>>> On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
>>> > I think this sounds reasonable. Do you have code for this already,
>>> > or are you looking for a developer to help implement it?
>>> >
>>> >
>>>

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Mathieu Rohon
l2pop is about L2 network optimization through tunnel creation and ARP
responder population (so it is not only an overlay-network optimization:
for example, ofagent now uses l2pop info for flat and VLAN optimization [1]).
This optimization is orthogonal to the several agent-based mechanism
drivers (LB, OVS, ofagent).
I agree that this optimization should be accessible to every MD, by
providing access to the fdb dict directly from the ML2 DB.
A controller-based MD like ODL could use those fdb entries the same way
agents use them, by optimizing the datapath under its control.
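
For reference, the fdb entries that l2pop distributes are roughly shaped like
this (simplified; treat it as an illustration rather than the exact format):

    # one entry per network, keyed by the agent/VTEP IP
    fdb_entries = {
        'net-uuid': {
            'network_type': 'vxlan',
            'segment_id': 1001,
            'ports': {
                '192.0.2.10': [
                    ['00:00:00:00:00:00', '0.0.0.0'],   # flooding entry
                    ['fa:16:3e:aa:bb:cc', '10.0.0.5'],  # [mac, ip] of a port
                ],
            },
        },
    }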

[1]https://review.openstack.org/#/c/114119/

On Wed, Aug 27, 2014 at 10:30 AM, Kevin Benton  wrote:
>>So why not agent based?
>
> Maybe I have an experimental operating system that can't run python. Maybe
> the RPC channel between compute nodes and Neutron doesn't satisfy certain
> security criteria. Regardless of the reason, it doesn't matter because that
> is an implementation detail that should be irrelevant to separate ML2
> drivers.
>
> l2pop should be concerned with tunnel endpoints and tunnel endpoints only.
> Whether or not you're running a chunk of code responding to messages on an
> RPC bus and sending heartbeats should not be Neutron's concern. It defeats
> the purpose of ML2 if everything that can bind a port has to be running a
> neutron RPC-compatible agent.
>
> The l2pop functionality should become part of the tunnel type drivers and
> the mechanism drivers should be able to provide the termination endpoints
> for the tunnels using whatever mechanism it chooses. Agent-based drivers can
> use the agent DB to do this and then the REST drivers can provide whatever
> termination point they want. This solves the interoperability problem and
> relaxes this tight coupling between vxlan and agents.
>
>
> On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe  wrote:
>>
>>
>>
>>
>> On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton  wrote:
>>>
>>> Ports are bound in order of configured drivers so as long as the
>>> OpenVswitch driver is put first in the list, it will bind the ports it can
>>> and then ODL would bind the leftovers. [1][2] The only missing component is
>>> that ODL doesn't look like it uses l2pop so establishing tunnels between the
>>> OVS agents and the ODL-managed vswitches would be an issue that would have
>>> to be handled via another process.
>>>
>>> Regardless, my original point is that the driver keeps the neutron
>>> semantics and DB in tact. In my opinion, the lack of compatibility with
>>> l2pop isn't an issue with the driver, but more of an issue with how l2pop
>>> was designed. It's very tightly coupled to having agents managed by Neutron
>>> via RPC, which shouldn't be necessary when it's primary purpose is to
>>> establish endpoints for overlay tunnels.
>>
>>
>> So why not agent based? Neutron shouldn't be treated as just an resource
>> storage, built-in backends naturally need things like l2pop and dvr for
>> distributed dynamic topology control,  we couldn't say that something as a
>> part was "tightly coupled".
>>
>> On the contrary, 3rd backends should adapt themselves to be integrated
>> into Neutron as thin as they can, focusing on the backend device control but
>> not re-implement core service logic duplicated with Neutron . BTW, Ofagent
>> is a good example for this style.
>>
>>>
>>>
>>>
>>> 1.
>>> https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
>>> 2.
>>> https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326
>>>
>>>
>>> On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe  wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton 
 wrote:
>
> >I think that "opensource" is not the only factor, it's about built-in
> > vs. 3rd backend. Built-in must be opensource, but opensource is not
> > necessarily built-in. By my thought, current OVS and linuxbridge are
> > built-in, but shim RESTful proxy for all kinds of sdn controller should 
> > be
> > 3rd, for they keep all virtual networking data model and service logic 
> > in
> > their own places, using Neutron API just as the NB shell (they can't 
> > even
> > co-work with built-in l2pop driver for vxlan/gre network type today).
>
>
> I understand the point you are trying to make, but this blanket
> statement about the data model of drivers/plugins with REST backends is
> wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
> is still stored in Neutron and all of the semantics of the API are
> maintained. The l2pop driver is to deal with decentralized overlays, so 
> I'm
> not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network,  then can we bind some ports to built-in
 ovs driver, and other ports to ODL driver? linux bridge agnet, ovs agent,
 of

Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters

2014-08-27 Thread Mathieu Rohon
You should probably consider using the future extension manager in ML2:

https://review.openstack.org/#/c/89211/

On Mon, Aug 25, 2014 at 12:54 PM, Irena Berezovsky  wrote:
> Hi Andreas,
> We can definitely set some time to discuss this.
> I am usually available from 5 to 14:00 UTC.
> Let's follow up on IRC (irenab).
>
> BR,
> Irena
>
> -Original Message-
> From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
> Sent: Monday, August 25, 2014 11:00 AM
> To: Irena Berezovsky
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
> promic mode adapters
>
> Hi Irena,
> thanks for your reply. Yes sure, collaboration would be great.
> Do you already have a blueprint out there? Maybe wen can synchup this week to 
> discuss more details? Cause I would like to understand what exactly you're 
> looking for. Normally I'm available form 7 UTC to 16 utc (today only until 13 
> utc). My irc name is scheuran. Maybe we can get in contact this week!
>
> You also where talking about sriov. I saw some blueprint mentioning sriov & 
> macvtap. Do you have any insights into this one, too? What we also would like 
> to do is to introduce macvtap as network virtualization option. Macvtap also 
> registers mac addresses to network adapters...
>
>
> Thanks,
> Andreas
>
>
> On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
>> Hi Andreas,
>> Thank you for this initiative.
>> We were looking on similar problem for mixing OVS and SR-IOV on same network 
>> adapter, which also requires mac addresses registration of OVS ports.
>> Please let me know if you would like to collaborate on this effort.
>>
>> BR,
>> Irena
>>
>> -Original Message-
>> From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
>> Sent: Friday, August 22, 2014 11:16 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support
>> for non promic mode adapters
>>
>> Thanks for your feedback.
>>
>> No, I do not yet have code for it. Just wanted to get a feeling if such a 
>> feature would get acceptance in the community.
>> But if that helps I can sit down and start some prototyping while I'm 
>> preparing a blueprint spec in parallel.
>>
>> The main part of the implementation I wanted to do on my own to get more 
>> familiar with the code base and to get more in touch with the community.
>> But of course advice and feedback of experienced neutron developers is 
>> essential!
>>
>> So I will proceed like this
>> - Create a blueprint
>> - Commit first pieces of code to get early feedback (e.g. ask via the
>> mailing list or irc)
>> - Upload a spec (as soon as the repo is available for K)
>>
>> Does that make sense for you?
>>
>> Thanks,
>> Andreas
>>
>>
>>
>> On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
>> > I think this sounds reasonable. Do you have code for this already,
>> > or are you looking for a developer to help implement it?
>> >
>> >
>> > On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring
>> >  wrote:
>> > Hi,
>> > last week I started discussing an extension to the existing
>> > neutron
>> > openvswitch agent to support network adapters that are not in
>> > promiscuous mode. Now I would like to enhance the round to get
>> > feedback
>> > from a broader audience via the mailing list.
>> >
>> >
>> > The Problem
>> > When driving vlan or flat networking, openvswitch requires an
>> > network
>> > adapter in promiscuous mode.
>> >
>> >
>> > Why not having promiscuous mode in your adapter?
>> > - Admins like to have full control over their environment and
>> > which
>> > network packets enter the system.
>> > - The network adapter just does not have support for it.
>> >
>> >
>> > What to do?
>> > Linux net-dev driver offer an interface to manually register
>> > additional
>> > mac addresses (also called secondary unicast addresses).
>> > Exploiting this
>> > one can register additional mac addresses to the network
>> > adapter. This
>> > also works via a well known ip user space tool.
>> >
>> > `bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`
>> >
>> >
>> > What to do in openstack?
>> > As neutron is aware of all the mac addresses that are in use
>> > it's the
>> > perfect candidate for doing the mac registrations. The idea is
>> > to modify
>> > the neutron openvswitch agent that it does the registration on
>> > "port
>> > add" and "port remove" via the bridge command.
>> > There would be a new optional configuration parameter,
>> > something like
>> > 'non-promisc-mode' that is by default set to false. Only when
>> > set to
>> > true, macs get manually registered. Otherwise the a

Re: [openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-22 Thread Mathieu Rohon
hi,

On Wed, Aug 20, 2014 at 1:03 PM, Salvatore Orlando  wrote:
> As the original thread had a completely different subject, I'm starting a
> new one here.
>
> More specifically the aim of this thread is about:
> 1) Define when a service is best implemented with a service plugin or with a
> ML2 driver
> 2) Discuss how bindings between a "core" resource and the one provided by
> the service plugin should be exposed at the management plane, implemented at
> the control plane, and if necessary also at the data plane.
>
> Some more comments inline.
>
> Salvatore
>
> On 20 August 2014 11:31, Mathieu Rohon  wrote:
>>
>> Hi
>>
>> On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando 
>> wrote:
>> > In the current approach QoS support is being "hardwired" into ML2.
>> >
>> > Maybe this is not the best way of doing that, as perhaps it will end up
>> > requiring every mech driver which enforces VIF configuration should
>> > support
>> > it.
>> > I see two routes. One is a mechanism driver similar to l2-pop, and then
>> > you
>> > might have a look at the proposed extension framework (and partecipate
>> > into
>> > the discussion).
>> > The other is doing a service plugin. Still, we'll have to solve how to
>> > implement the "binding" between a port/network and the QoS entity.
>>
>> We have exactly the same issue while implementing the BGPVPN service
>> plugin [1].
>> As for the Qos extension, the BGPVPN extension can extend network by
>> adding route target infos.
>> the BGPVPN data model has a foreign key to the extended network.
>>
>> If Qos is implemented as a service plugin, I assume that the
>> architecture would be similar, with Qos datamodel
>> having  foreign keys to ports and/or Networks.
>
>
> From a data model perspective, I believe so if we follow the pattern we've
> followed so far. However, I think this would be correct also if QoS is not
> implemented as a service plugin!
>
>>
>> When a port is created, and it has Qos enforcement thanks to the service
>> plugin,
>> let's assume that a ML2 Qos Mech Driver can fetch Qos info and send
>> them back to the L2 agent.
>> We would probably need a Qos Agent which communicates with the plugin
>> through a dedicated topic.
>
>
> A distinct agent has pro and cons. I think however that we should try and
> limit the number of agents on the hosts to a minimum. And this minimum in my
> opinion should be 1! There is already a proposal around a modular agent
> which should be able of loading modules for handling distinct services. I
> think that's the best way forward.

I totally agree, and when I was referring to an agent, I was speaking
of something like the current sec group agent,
or an extension driver in the proposed modular L2 agent semantics [2]

>
>>
>>
>> But when a Qos info is updated through the Qos extension, backed with
>> the service plugin,
>> the driver that implements the Qos plugin should send the new Qos
>> enforcment to the Qos agent through the Qos topic.
>
>
> I reckon that is pretty much correct. At the end of the day, the agent which
> enforces QoS at the data plane just needs to ensure the appropriate
> configuration is in place on all ports. Whether this information is coming
> from a driver or a serivice plugin, it does not matter a lot (as long as
> it's not coming from an untrusted source, obviously). If you look at sec
> group agent module, the concept is pretty much the same.
>
>>
>> So I feel like implementing a core resource extension with a service
>> plugin needs :
>> 1 : a MD to interact with the service plugin
>> 2 : an agent and a mixin used by the the L2 agent.
>> 3 : a dedicated topic used by the MD and the driver of the service
>> plugin to communicate with the new agent
>>
>> Am I wrong?
>
>
> There is nothing wrong with that. Nevertheless, the fact that we need a Mech
> driver _and_ a service plugin probably also implies that the service plugin
> at the end of the day has not succeeded in its goal of being orthogonal.
> I think it's worth try and exploring solutions which will allow us to
> completely decouple the service plugin for the core functionality, and
> therefore completely contain QoS management within its service plugin. If
> you too think this is not risible, I can perhaps put together something to
> validate this idea.

It doesn't seem risible to me at all. I feel quite uncomfortable having
to create an MD
to deal with core resource modifications, when those core r

Re: [openstack-dev] [Neutron][QoS] Request to be considered for neutron-incubator

2014-08-20 Thread Mathieu Rohon
Hi

On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando  wrote:
> In the current approach QoS support is being "hardwired" into ML2.
>
> Maybe this is not the best way of doing that, as perhaps it will end up
> requiring every mech driver which enforces VIF configuration should support
> it.
> I see two routes. One is a mechanism driver similar to l2-pop, and then you
> might have a look at the proposed extension framework (and partecipate into
> the discussion).
> The other is doing a service plugin. Still, we'll have to solve how to
> implement the "binding" between a port/network and the QoS entity.

We have exactly the same issue while implementing the BGPVPN service plugin [1].
As for the QoS extension, the BGPVPN extension can extend a network by
adding route-target info.
The BGPVPN data model has a foreign key to the extended network.

If QoS is implemented as a service plugin, I assume that the
architecture would be similar, with the QoS data model
having foreign keys to ports and/or networks.

When a port is created, and it has QoS enforcement thanks to the service plugin,
let's assume that an ML2 QoS mechanism driver can fetch the QoS info and send
it back to the L2 agent.
We would probably need a QoS agent which communicates with the plugin
through a dedicated topic.

But when QoS info is updated through the QoS extension, backed by
the service plugin,
the driver that implements the QoS plugin should send the new QoS
enforcement to the QoS agent through the QoS topic.

So I feel like implementing a core resource extension with a service
plugin needs:
1: an MD to interact with the service plugin
2: an agent and a mixin used by the L2 agent
3: a dedicated topic used by the MD and the driver of the service
plugin to communicate with the new agent

Am I wrong?


[1]https://review.openstack.org/#/c/93329/
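
To make the three pieces above more concrete, the rough shape I have in mind
is something like this (every name here is illustrative, none of it exists):

    # (1)+(3) plugin side: the QoS driver notifies agents on a dedicated topic
    class QosAgentNotifyApi(object):
        topic = 'q-qos'                      # hypothetical dedicated topic

        def port_qos_updated(self, context, port_id, qos_policy):
            # fanout cast to the L2 agents (RPC plumbing elided)
            pass

    # (2) agent side: a mixin consumed by the L2 agent
    class QosAgentRpcCallbackMixin(object):
        def port_qos_updated(self, context, port_id, qos_policy):
            # find the local device bound to port_id and enforce the policy,
            # e.g. program a rate limit on the OVS port
            self.qos_driver.apply(port_id, qos_policy)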


> If we go
> for the approach we've chosen so far the resource extension model you still
> have to deal with ML2 extensions. But I like orthogonality in services, and
> QoS is a service to me.
> Another arguable point is that we might want to reconsider our
> abuse^H^H^H^H^H use of resource attribute extension, but this is a story for
> a different thread.
>
> Regarding the incubator request, I think we need to wait for the process to
> be "blessed". But you have my support and I would happy to help to assist
> with this work item through its process towards graduation.
>
> This obviously provided the QoS team wants me to do that!
>
> Salvatore
>
>
> On 19 August 2014 23:15, Alan Kavanagh  wrote:
>>
>> +1, I am hoping this is just a short term holding point and this will
>> eventually be merged into main branch as this is a feature a lot of
>> companies, us included would definitely benefit from having supported and
>> many thanks to Sean for sticking with this and continue to push this.
>> /Alan
>>
>> -Original Message-
>> From: Collins, Sean [mailto:sean_colli...@cable.comcast.com]
>> Sent: August-19-14 8:33 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][QoS] Request to be considered for
>> neutron-incubator
>>
>> Hi,
>>
>> The QoS API extension has lived in Gerrit/been in review for about a year.
>> It's gone through revisions, summit design sessions, and for a little while,
>> a subteam.
>>
>> I would like to request incubation in the upcoming incubator, so that the
>> code will have a more permanent "home" where we can collaborate and improve.
>> --
>> Sean M. Collins
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-14 Thread Mathieu Rohon
Hi,

I would like to add that it would be harder for the community to help
maintain drivers.
Work such as [1] wouldn't have occurred with an out-of-tree ODL driver.

[1] https://review.openstack.org/#/c/96459/
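
(For reference, the integration point shown in the setup.cfg link below is a
stevedore entry point; an out-of-tree driver declares something like this,
where the package and class names are made up:)

    [entry_points]
    neutron.ml2.mechanism_drivers =
        my_mech = my_package.driver:MyMechanismDriver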

On Wed, Aug 13, 2014 at 1:09 PM, Robert Kukura  wrote:
> One thing to keep in mind is that the ML2 driver API does sometimes change,
> requiring updates to drivers. Drivers that are in-tree get updated along
> with the driver API change. Drivers that are out-of-tree must be updated by
> the owner.
>
> -Bob
>
>
> On 8/13/14, 6:59 AM, ZZelle wrote:
>
> Hi,
>
>
> The important thing to understand is how to integrate with neutron through
> stevedore/entrypoints:
>
> https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
>
>
> Cedric
>
>
> On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker  wrote:
>>
>> I've been working on this for OpenDaylight
>> https://github.com/dave-tucker/odl-neutron-drivers
>>
>> This seems to work for me (tested Devstack w/ML2) but YMMV.
>>
>> -- Dave
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] l2pop problems

2014-08-06 Thread Mathieu Rohon
Hi Zang,

On Tue, Aug 5, 2014 at 1:18 PM, Zang MingJie  wrote:
> Hi Mathieu:
>
> We have deployed the new l2pop described in the previous mail in our
> environment, and works pretty well. It solved the timing problem, and
> also reduces lots of l2pop rpc calls. I'm going to file a blueprint to
> propose the changes.

Great, I would be pleased to review this BP.

> On Fri, Jul 18, 2014 at 10:26 PM, Mathieu Rohon  
> wrote:
>> Hi Zang,
>>
>> On Wed, Jul 16, 2014 at 4:43 PM, Zang MingJie  wrote:
>>> Hi, all:
>>>
>>> While resolving ovs restart rebuild br-tun flows[1], we have found
>>> several l2pop problems:
>>>
>>> 1. L2pop is depending on agent_boot_time to decide whether send all
>>> port information or not, but the agent_boot_time is unreliable, for
>>> example if the service receives port up message before agent status
>>> report, the agent won't receive any port on other agents forever.
>>
>> you're right, there a race condition here, if the agent has more than
>> 1 port on the same network and if the agent sends its
>> update_device_up() on every port before it sends its report_state(),
>> it won't receive fdb concerning these network. Is it the race you are
>> mentionning above?
>> Since the report_state is done in a dedicated greenthread, and is
>> launched before the greenthread that manages ovsdb_monitor, the state
>> of the agent should be updated before the agent gets aware of its
>> ports and sends get_device_details()/update_device_up(), am I wrong?
>> So, after a restart of an agent, the agent_uptime() should be less
>> than the agent_boot_time configured by default in the conf when the
>> agent sent its first update_device_up(), the l2pop MD will be aware of
>> this restart and trigger the cast of all fdb entries to the restarted
>> agent.
>>
>> But I agree that it might relies on enventlet thread managment and on
>> agent_boot_time that can be misconfigured by the provider.
>>
>>> 2. If the openvswitch restarted, all flows will be lost, including all
>>> l2pop flows, the agent is unable to fetch or recreate the l2pop flows.
>>>
>>> To resolve the problems, I'm suggesting some changes:
>>>
>>> 1. Because the agent_boot_time is unreliable, the service can't decide
>>> whether to send flooding entry or not. But the agent can build up the
>>> flooding entries from unicast entries, it has already been
>>> implemented[2]
>>>
>>> 2. Create a rpc from agent to service which fetch all fdb entries, the
>>> agent calls the rpc in `provision_local_vlan`, before setting up any
>>> port.[3]
>>>
>>> After these changes, the l2pop service part becomes simpler and more
>>> robust, mainly 2 function: first, returns all fdb entries at once when
>>> requested; second, broadcast fdb single entry when a port is up/down.
>>
>> That's an implementation that we have been thinking about during the
>> l2pop implementation.
>> Our purpose was to minimize RPC calls. But if this implementation is
>> buggy due to uncontrolled thread order and/or bad usage of the
>> agent_boot_time parameter, it's worth investigating your proposal [3].
>> However, I don't get why [3] depends on [2]. couldn't we have a
>> network_sync() sent by the agent during provision_local_vlan() which
>> will reconfigure ovs when the agent and/or the ovs restart?
>
> actual, [3] doesn't strictly depend [2], we have encountered l2pop
> problems several times where the unicast is correct, but the broadcast
> fails, so we decide completely ignore the broadcast entries in rpc,
> only deal unicast entries, and use unicast entries to build broadcast
> rules.

Understood, but it could be interesting to understand why the MD sends
wrong broadcast entries. Do you have any clue?

>
>>
>>
>>> [1] https://bugs.launchpad.net/neutron/+bug/1332450
>>> [2] https://review.openstack.org/#/c/101581/
>>> [3] https://review.openstack.org/#/c/107409/
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] l2pop problems

2014-07-18 Thread Mathieu Rohon
Hi Zang,

On Wed, Jul 16, 2014 at 4:43 PM, Zang MingJie  wrote:
> Hi, all:
>
> While resolving ovs restart rebuild br-tun flows[1], we have found
> several l2pop problems:
>
> 1. L2pop is depending on agent_boot_time to decide whether send all
> port information or not, but the agent_boot_time is unreliable, for
> example if the service receives port up message before agent status
> report, the agent won't receive any port on other agents forever.

You're right, there is a race condition here: if the agent has more than
one port on the same network and the agent sends its
update_device_up() on every port before it sends its report_state(),
it won't receive the fdb entries concerning these networks. Is that the race
you are mentioning above?
Since the report_state is done in a dedicated greenthread, and is
launched before the greenthread that manages ovsdb_monitor, the state
of the agent should be updated before the agent becomes aware of its
ports and sends get_device_details()/update_device_up(), am I wrong?
So, after a restart of an agent, the agent_uptime() should be less
than the agent_boot_time configured by default in the conf when the
agent sends its first update_device_up(); the l2pop MD will then be aware of
this restart and trigger the cast of all fdb entries to the restarted
agent.

But I agree that it might rely on eventlet thread management and on an
agent_boot_time that can be misconfigured by the provider.
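
For reference, the check in the l2pop MD looks roughly like this (paraphrased,
with approximate helper names; not a verbatim copy of the driver):

    # when a port goes up on an agent
    agent_active_ports = self.get_agent_network_active_port_count(
        session, agent_host, network_id)
    if agent_active_ports == 1 or (
            self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):
        # first active port of this network on the agent, or the agent
        # restarted recently: cast the full fdb table of the network to
        # this agent so it can rebuild its tunnels/flows
        cast_all_fdb_entries_to(agent)       # hypothetical helper
    # and in any case, fan out this single port's entry to the other agents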

> 2. If the openvswitch restarted, all flows will be lost, including all
> l2pop flows, the agent is unable to fetch or recreate the l2pop flows.
>
> To resolve the problems, I'm suggesting some changes:
>
> 1. Because the agent_boot_time is unreliable, the service can't decide
> whether to send flooding entry or not. But the agent can build up the
> flooding entries from unicast entries, it has already been
> implemented[2]
>
> 2. Create a rpc from agent to service which fetch all fdb entries, the
> agent calls the rpc in `provision_local_vlan`, before setting up any
> port.[3]
>
> After these changes, the l2pop service part becomes simpler and more
> robust, mainly 2 function: first, returns all fdb entries at once when
> requested; second, broadcast fdb single entry when a port is up/down.

That's an implementation that we had been thinking about during the
l2pop development.
Our purpose was to minimize RPC calls. But if this implementation is
buggy due to uncontrolled thread ordering and/or bad usage of the
agent_boot_time parameter, it's worth investigating your proposal [3].
However, I don't get why [3] depends on [2]. Couldn't we have a
network_sync() sent by the agent during provision_local_vlan(), which
would reconfigure OVS when the agent and/or OVS restarts?


> [1] https://bugs.launchpad.net/neutron/+bug/1332450
> [2] https://review.openstack.org/#/c/101581/
> [3] https://review.openstack.org/#/c/107409/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin

2014-07-11 Thread Mathieu Rohon
A simple use case could be to have a compute node able to start VMs with
optimized net I/O or standard net I/O, depending on the network flavor
ordered for the VM.

On Fri, Jul 11, 2014 at 11:16 AM, Czesnowicz, Przemyslaw
 wrote:
>
>
> Can you explain whats the use case for  running both ovs and userspace ovs
> on the same host?
>
>
>
> Thanks
>
> Przemek
>
> From: loy wolfe [mailto:loywo...@gmail.com]
> Sent: Friday, July 11, 2014 3:17 AM
>
>
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][ML2] Support dpdk ovs with ml2 plugin
>
>
>
> +1
>
>
>
> It's totally different between ovs and userspace ovs.
>
> also, there is strong need to keep ovs even we have a userspace ovs in the
> same host
>
>
>
>
>
> --
>
>
> Intel Shannon Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> Business address: Dromore House, East Park, Shannon, Co. Clare
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture

2014-07-02 Thread Mathieu Rohon
Hi, sorry for the late reply, I was out of office for 3 weeks.

I also love the idea of having a single thread in charge of writing
dataplane actions.
As Zang described, this thread would read events from a queue, which could be
populated by agent drivers.
The main goal would be to avoid desynchronization: while a first
greenthread yields while performing a dataplane action (for instance,
running an ofctl command), another greenthread could potentially process
another action that should only be done after the first one has terminated.
Enqueuing those actions and processing them in a single thread would avoid
such behavior.

I also think this is orthogonal to the agent/resource architecture. I
think that agent drivers could populate the queue, and the singleton thread
would call the correct resource driver, depending on the impacted port, to
interpret the order placed in the queue.

regards
Mathieu


On Fri, Jun 20, 2014 at 10:38 PM, Mohammad Banikazemi  wrote:

> Zang, thanks for your comments.
>
> I think what you are suggesting is perhaps orthogonal to having Resource
> and Agent drivers. By that I mean we can have what you are suggesting and
> keep the Resource and Agent drivers. The reason for having Resource drivers
> is to provide the means for possibly extending what an agent does in
> response to say changes to a port in a modular way. We can restrict the
> access to Resource drivers from the events loop only. That restriction is
> not there in the current model but would adding that address your concerns?
> What are your thoughts? As Salvatore has mentioned in his email in this
> thread, that is what the current OVS agent does wrt port updates. That is,
> the update to ports get processed from the events loop.
>
> As a separate but relevant issue, we can and should discuss whether having
> the Resource and Agent drivers is useful in making the agent more modular.
> The idea behind using these drivers is to have the agent use a collection
> of drivers rather than mixin classes so we can more easily select what
>  (and how) functionalities an agent support and reuse as much as we can
> across L2 agents. Are there better ways of achieving this? Any thoughts?
>
> Best,
>
> Mohammad
>
>
>
>
> From: Zang MingJie 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>,
> Date: 06/19/2014 06:27 AM
>
> Subject: Re: [openstack-dev] [Neutron][ML2] Modular L2 agent architecture
> --
>
>
>
> Hi:
>
> I don't like the idea of ResourceDriver and AgentDriver. I suggested
> use a singleton worker thread to manager all underlying setup, so the
> driver should do nothing other than fire a update event to the worker.
>
> The worker thread may looks like this one:
>
> # the only variable store all local state which survives between
> # different events, including lvm, fdb or whatever
> state = {}
>
> # loop forever
> while True:
>     event = ev_queue.pop()
>     if not event:
>         sleep() # may be interrupted when new event comes
>         continue
>
>     origin_state = state
>     new_state = event.merge_state(state)
>
>     if event.is_ovsdb_changed():
>         if event.is_tunnel_changed():
>             setup_tunnel(new_state, old_state, event)
>         if event.is_port_tags_changed():
>             setup_port_tags(new_state, old_state, event)
>
>     if event.is_flow_changed():
>         if event.is_flow_table_1_changed():
>             setup_flow_table_1(new_state, old_state, event)
>         if event.is_flow_table_2_changed():
>             setup_flow_table_2(new_state, old_state, event)
>         if event.is_flow_table_3_changed():
>             setup_flow_table_3(new_state, old_state, event)
>         if event.is_flow_table_4_changed():
>             setup_flow_table_4(new_state, old_state, event)
>
>     if event.is_iptable_changed():
>         if event.is_iptable_nat_changed():
>             setup_iptable_nat(new_state, old_state, event)
>         if event.is_iptable_filter_changed():
>             setup_iptable_filter(new_state, old_state, event)
>
>     state = new_state
>
> when any part has been changed by a event, the corresponding setup_xxx
> function rebuild the whole part, then use the restore like
> `iptables-restore` or `ovs-ofctl replace-flows` to reset the whole
> part.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron] Too much "shim rest proxy" mechanism drivers in ML2

2014-06-06 Thread Mathieu Rohon
hi henry,

On Fri, Jun 6, 2014 at 10:17 AM, henry hly  wrote:
> ML2 mechanism drivers are becoming another kind of "plugins". Although they
> can be loaded together, but can not work with each other.
>
> Today, there are more and more drivers, supporting all kinds of networking
> hardware and middleware (sdn controller). Unfortunately, they are designed
> exclusively as chimney REST proxies.
>
> A very general use case of heterogeneous networking: we have OVS controlled
> by ovs agent, and  switchs from different vendor, some of them are
> controller by their own driver/agent directly, others are controlled by a
> sdn controller middleware. Can we create a vxlan network, across all these
> sw/hw switchs?
>
> It's not so easy: neutron ovs use l2 population mech driver, sdn controllers
> have their own population way, today most dedicated switch driver can only
> support vlan. sdn controller people may say: it's ok, just put everything
> under the control of my controller, leaving ml2 plugin as a shim rest proxy
> layer. But, shouldn't Openstack Neutron itself be the first class citizen
> even if there is not controller involved?

I totally agree. By using l2population with tunnel networks (vxlan,
gre), you will not be able to plug in an external device which could
possibly terminate your tunnel. The ML2 plugin has to be made aware of a new
port in the vxlan segment. I think this is the scope of this bp:
https://blueprints.launchpad.net/neutron/+spec/neutron-switch-port-extension

Mixing several SDN controllers (when used with the ovs/of/lb agents, Neutron
could itself be considered an SDN controller) could be achieved the same
way, with the SDN controller sending notifications to Neutron for the
ports that it manages.

>
> Could we remove all device related adaption(rest/ssh/netconf/of... proxy)
> from these mechanism driver to the agent side, leaving only necessary code
> in the plugin? Heterogeneous networking may become easier, while ofagent
> give a good example, it can co-exist with native neutron OVS agent in vxlan
> l2 population.

The linuxbridge agent can coexist too.

> And with the help of coming ML2 agent framework, hardware
> device or middleware controller adaption agent could be more simplified.

I don't understand the reason why you want to move middleware
controller to the agent.

>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular agent architecture

2014-06-03 Thread Mathieu Rohon
Hi Mohammad,

What I meant in my email is totally in line with your proposal. My dataplane
driver is your resource driver, whereas my control-plane driver is your
agent driver!
I totally agree that the real challenge is defining a common abstract class
for every resource driver.
My proposal was to bind a port to a resource driver, so that we can have
several resource drivers on the same agent. This seems to be the goal of the
ResourceDriver.port_bound() method in [3], am I wrong?

Thanks for the etherpad, I will try to participate through it.

Mathieu


On Sat, May 31, 2014 at 5:10 AM, Mohammad Banikazemi  wrote:

> Hi Mathieu,
>
> Thanks for the email. As discussed during the ML2 IRC meeting [2], we have
> not decided on a design. That is why we do not have a spec for review yet.
> The idea is that we spend a bit more time and figure out the details and
> try out some possible options before we go ahead with the spec. So new
> comments/suggestions are much appreciated.
>
> In addition to having different drivers we want to reduce the code
> replication across current agents. I am wondering if with what you are
> proposing as dataplane drivers, we will end up with having different
> drivers which look like the current agents and we do not deal with reducing
> code replication across agents. If this is not a correct assessment could
> you describe how we can avoid code replication across agents/drivers.
>
> Let me briefly explain what I have outlined in [3] (also mentioned in
> [2]). We are thinking of having drivers for each extension or probably
> better said each functionality. So we can have a base l2 connectivity
> driver, an l2pop driver, a sg driver (not to be confused with sq drivers),
> so on so forth. I think in your email you are referring to these drivers
> (or something close to them) as Extension drivers. In [3] they are called
> Agent Drivers.
>
> Then we have the Resource Drivers which will be essentially used for
> realizing these features depending on the technology/resource being used
> (e.g., using  OVS switches, or Linux Bridges, or some other technology).
> The main reason for using such a organization is to be able to have
> different agent drivers utilize the same resource and reuse code. The
> challenge is figuring out the api for such a driver. Any thoughts on this?
>
> Mohammad
>
> [3] https://etherpad.openstack.org/p/modular-l2-agent-outline
>
>
>
> From: Mathieu Rohon 
> To: OpenStack Development Mailing List ,
> Mohammad Banikazemi/Watson/IBM@IBMUS,
> Date: 05/30/2014 06:25 AM
> Subject: [openstack-dev][Neutron][ML2] Modular agent architecture
> --
>
>
>
> Hi all,
>
> Modular agent seems to have to choose between two type of architecture [1].
>
> As I understood during the last ML2 meeting [2], Extension driver
> seems to be the most reasonnable choice.
> But I think that those two approaches are complementory : Extension
> drivers will deal with RPC callbacks form the plugin, wheras Agent
> drivers will deal with controlling the underlying technology to
> interpret those callbacks.
>
> It looks like a controlPlane/Dataplane architecture. Could we have a
> control plane manager on which each Extension driver should register
> (and register callbacks it is listening at), and a data plane manager,
> on which each dataplane controller will register (ofagent, ovs, LB..),
> and which implement a common abastract class.
> A port will be managed by only one dataplane controller, and when a
> control plane driver wants to apply a modification on a port, it will
> retrieve the correct dataplane controller for this port in order to
> call one of the abstracted method to modify the dataplane.
>
>
> [1]
> https://wiki.openstack.org/wiki/Neutron/ModularL2Agent#Possible_Directions
> [2]
> http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-05-28-16.02.log.html
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] Modular agent architecture

2014-05-30 Thread Mathieu Rohon
Hi all,

The modular agent seems to have to choose between two types of architecture [1].

As I understood during the last ML2 meeting [2], the extension driver approach
seems to be the most reasonable choice.
But I think that those two approaches are complementary: extension
drivers will deal with RPC callbacks from the plugin, whereas agent
drivers will deal with controlling the underlying technology to
interpret those callbacks.

It looks like a control-plane/data-plane architecture. Could we have a
control plane manager on which each extension driver registers
(along with the callbacks it is listening to), and a data plane manager
on which each dataplane controller registers (ofagent, ovs, LB...),
each implementing a common abstract class?
A port will be managed by only one dataplane controller, and when a
control plane driver wants to apply a modification on a port, it will
retrieve the correct dataplane controller for this port in order to
call one of the abstracted methods to modify the dataplane.
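
A rough sketch of what I mean (all names here are illustrative, nothing of
this exists today):

    class DataplaneDriver(object):
        """Abstract base class every dataplane controller implements."""

        def plug_port(self, port, segment):
            raise NotImplementedError

        def unplug_port(self, port):
            raise NotImplementedError

        def apply_port_filter(self, port, rules):
            raise NotImplementedError

    class DataplaneManager(object):
        """Keeps the port -> dataplane controller association."""

        def __init__(self, drivers):
            self.drivers = drivers        # e.g. {'ovs': ..., 'lb': ...}
            self.port_bindings = {}       # port_id -> driver name

        def driver_for_port(self, port_id):
            return self.drivers[self.port_bindings[port_id]]

    # a control-plane (extension) driver handling an RPC callback would do:
    #   manager.driver_for_port(port_id).apply_port_filter(port, rules)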


[1]https://wiki.openstack.org/wiki/Neutron/ModularL2Agent#Possible_Directions
[2]http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-05-28-16.02.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] BGP Dynamic Routing Proposal

2014-05-30 Thread Mathieu Rohon
Hi,

I was about to mention ExaBGP too! Can we also consider using those
BGP speakers for the BGPVPN implementation [1]?
It would be consistent to have the same BGP speaker used for every
BGP need inside Neutron.
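
For instance, with Ryu's BGP speaker the BGPVPN case would look roughly like
this (based on my reading of the Ryu docs, so take the exact signatures with
a grain of salt; addresses and AS numbers are made up):

    from ryu.services.protocols.bgp.bgpspeaker import BGPSpeaker

    def best_path_change(event):
        # called by the speaker on best-path changes; this is where the
        # Neutron-side processing would plug in
        print(event.prefix, event.nexthop, event.is_withdraw)

    speaker = BGPSpeaker(as_number=64512, router_id='192.0.2.1',
                         best_path_change_handler=best_path_change)
    speaker.neighbor_add(address='192.0.2.2', remote_as=64512)
    # a VRF with its route targets, as needed for BGPVPN
    speaker.vrf_add(route_dist='64512:100',
                    import_rts=['64512:100'], export_rts=['64512:100'])
    speaker.prefix_add(prefix='10.0.0.0/24', next_hop='192.0.2.1',
                       route_dist='64512:100')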

[1]https://review.openstack.org/#/c/93329/


On Fri, May 30, 2014 at 10:54 AM, Jaume Devesa  wrote:
> Hello Takashi,
>
> thanks for doing this! As we have proposed ExaBgp[1] in the Dynamic Routing
> blueprint[2], I've added a new column for this speaker in the wiki page. I
> plan to fill it soon.
>
> ExaBgp was our first choice because we thought that run something in library
> mode would be much more easy to deal with (especially the exceptions and
> corner cases) and the code would be much cleaner. But seems that Ryu BGP
> also can fit in this requirement. And having the help from a Ryu developer
> like you turns it into a promising candidate!
>
> I'll start working now in a proof of concept to run the agent with these
> implementations and see if we need more requirements to compare between the
> speakers.
>
> [1]: https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison
> [2]: https://review.openstack.org/#/c/90833/
>
> Regards,
>
>
> On 29 May 2014 18:42, YAMAMOTO Takashi  wrote:
>>
>> as per discussions on l3 subteem meeting today, i started
>> a bgp speakers comparison wiki page for this bp.
>>
>> https://wiki.openstack.org/wiki/Neutron/BGPSpeakersComparison
>>
>> Artem, can you add other requirements as columns?
>>
>> as one of ryu developers, i'm naturally biased to ryu bgp.
>> i appreciate if someone provides more info for other bgp speakers.
>>
>> YAMAMOTO Takashi
>>
>> > Good afternoon Neutron developers!
>> >
>> > There has been a discussion about dynamic routing in Neutron for the
>> > past few weeks in the L3 subteam weekly meetings. I've submitted a review
>> > request of the blueprint documenting the proposal of this feature:
>> > https://review.openstack.org/#/c/90833/. If you have any feedback or
>> > suggestions for improvement, I would love to hear your comments and include
>> > your thoughts in the document.
>> >
>> > Thank you.
>> >
>> > Sincerely,
>> > Artem Dmytrenko
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Jaume Devesa
> Software Engineer at Midokura
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] ML2 extensions info propagation

2014-05-09 Thread Mathieu Rohon
Hi mohammad,


On Thu, May 8, 2014 at 5:11 PM, Mohammad Banikazemi  wrote:

> Hi Mathieu,
>
> Yes, the enhancement of the get_device_details method sounds like an
> interesting and useful option.
> The option of using drivers in the agent for supporting extensions is to
> make the agent more modular and allow for selectively supporting extensions
> as needed by a given agent.If we take the approach you are suggesting and
> eliminate or reduce the use of extension specific RPCs how can we achieve
> the modularity goal above? Is there a way to make these options useful
> together? More broadly, what would be the impact of your proposal on the
> modularity of the agent (if any)?
>

I don't think this approach breaks the modularity architecture you
proposed. It is more about building a driver context as is done in the
ML2 plugin, based on information received through RPC (while ML2 builds its
driver context based on information received through API calls). This driver
context, populated with RPC information, would then be spread among drivers.
I think that for readability and code understanding, it would be great to
share this architecture with the ML2 plugin. Moreover, this might be needed if
you want to support several implementations of the same extension in one
agent.



>
> Please note that as per discussion during the ML2 meeting yesterday we are
> going to have a single etherpad for each of ML2 sessions. The etherpad for
> the Modular Layer 2 Agent session can be found at [2] from your original
> email below. We may reorganize the information that is already there but
> please do add your comments there.
>
done! I will also update the etherpad [1]

Thanks Kyle and Mohammad for your replies.

Mathieu

> Thanks,
>
> Mohammad
>
>
>
> From: Mathieu Rohon 
> To: OpenStack Development Mailing List ,
>
> Date: 05/07/2014 10:25 AM
> Subject: [openstack-dev] [Neutron] ML2 extensions info propagation
>
--
>
>
>
> Hi ML2er and others,
>
> I'm considering discussions around ML2 for the summit. Unfortunatly I
> won't attend the summit, so I'll try to participate through the
> mailing list and etherpads.
>
> I'm especially interested in extension support by Mechanism Driver[1]
> and Modular agent[2]. During the Juno cycle I'll work on the capacity
> to propagate IPVPN informations (route-target) down to the agent, so
> that the agent can manage MPLS encapsulation.
> I think that the easiest way to do that is to enhance
> get_device_details() RPC message to add network extension informations
> of the concerned port in the dict sent.
>
> Moreover I think this approach could be generalized, and
> get_device_details() in the agent should return serialized information
> of a port with every extension informations (security_group,
> port_binding...). When the core datamodel or the extension datamodel
> would be modified, this would result in a port_update() with the
> updated serialization of the datamodel. This way, we could get rid of
> security-group and l2pop RPC. Modular agent wouldn't need to deal with
> one driver by extension which need to register its RPC callbacks.
>
> Those informations should also be stored in ML2 driver context. When a
> port is created by ML2 plugin, it calls super() for creating core
> datamodel, which will return a dict without extension informations,
> because extension informations in the Rest call has not been processed
> yet. But once the plugin call its core extension, it should call MD
> registered extensions as proposed by nader here [4] and then call
> make_port_dict(with extension), or an equivalent serialization
> function, to create the driver context. this seralization function
> would be used by get_device_details() RPC callbacks too.
>
> Regards,
>
> Mathieu
>
> [1]https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support
> [2]https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent
> [3]http://summit.openstack.org/cfp/details/240
> [4]https://review.openstack.org/#/c/89211/
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [ML2] extensions info propagation

2014-05-07 Thread Mathieu Rohon
Hi ML2er and others,

I'm considering the discussions around ML2 for the summit. Unfortunately I
won't attend the summit, so I'll try to participate through the
mailing list and etherpads.

I'm especially interested in extension support by mechanism drivers [1]
and the modular agent [2]. During the Juno cycle I'll work on the capacity
to propagate IPVPN information (route targets) down to the agent, so
that the agent can manage MPLS encapsulation.
I think that the easiest way to do that is to enhance the
get_device_details() RPC message to add the network extension information
of the concerned port to the dict sent.

Moreover, I think this approach could be generalized, and
get_device_details() in the agent should return serialized information
of a port with all its extension information (security_group,
port_binding...). When the core data model or the extension data model
is modified, this would result in a port_update() with the
updated serialization of the data model. This way, we could get rid of
the security-group and l2pop RPCs. The modular agent wouldn't need to deal with
one driver per extension that needs to register its RPC callbacks.

That information should also be stored in the ML2 driver context. When a
port is created by the ML2 plugin, it calls super() to create the core
data model, which will return a dict without extension information,
because the extension information in the REST call has not been processed
yet. But once the plugin calls its core extensions, it should call the MD
registered extensions as proposed by Nader here [4] and then call
make_port_dict(with extensions), or an equivalent serialization
function, to create the driver context. This serialization function
would be used by the get_device_details() RPC callbacks too.
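
To give an idea, the dict returned by get_device_details() could grow from
the core fields to something like this (purely illustrative, not the current
format; field names and values are made up for the example):

    {
        'device': 'tapxxxxxxxx-xx',
        'port_id': '...',
        'network_id': '...',
        'network_type': 'vxlan',
        'segmentation_id': 1001,
        'admin_state_up': True,
        # serialized extension info, added by the registered extensions:
        'security_groups': ['...'],
        'allowed_address_pairs': [],
        'bgpvpn': {'route_targets': ['64512:100']},
    }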

Regards,

Mathieu

[1]https://etherpad.openstack.org/p/ML2_mechanismdriver_extensions_support
[2]https://etherpad.openstack.org/p/juno-neutron-modular-l2-agent
[3]http://summit.openstack.org/cfp/details/240
[4]https://review.openstack.org/#/c/89211/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ML2] L2 population mechanism driver

2014-05-07 Thread Mathieu Rohon
hi,

please consider that maintaining drivers from one OpenStack version to
the next is really hard when those drivers are outside the scope of the
Neutron developers/continuous integration process. For instance, ML2 is
currently being refactored by this patch:

https://review.openstack.org/#/c/82945/

You will have to adapt your code continually!

regards

On Tue, May 6, 2014 at 8:13 PM, Sławek Kapłoński  wrote:
> Hello,
>
> Thanks for explanation. Now it is clear for me :) I made my own driver because
> I made config on hosts in "special way" but I can't describe details :/
>
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
> Dnia wtorek, 6 maja 2014 10:35:05 Mathieu Rohon pisze:
>> Hi slawek,
>>
>> As soon as you declare "l2population" in the  MD section of the config
>> file, it will be loaded by ML2 plugin. l2population MD will listen to
>> ML2 DB events and send RPC messages to ovs agent when needed.
>> l2population MD will not bind the port, ovs MD will.
>> By the way, you need to ad add the flag l2population in config file of
>> the agent.
>>
>> just to be curious, what your MD which inherit from l2_pop MD is made for?
>>
>> regards
>>
>> Mathieu
>>
>> On Mon, May 5, 2014 at 11:06 PM, Sławek Kapłoński 
> wrote:
>> > Hello,
>> >
>> > Thanks for answear. Now I made my own mech_driver which inherits from
>> > l2_pop driver and it is working ok. But If I will set it as:
>> > "mechanism_drivers=openvswitch,l2population" than ports will be binded
>> > with
>> > ovs driver so population mechanism will be working in such network?
>> >
>> > Best regards
>> > Slawek Kaplonski
>> > sla...@kaplonski.pl
>> >
>> > Dnia poniedziałek, 5 maja 2014 00:29:56 Narasimhan, Vivekanandan pisze:
>> >> Hi Slawek,
>> >>
>> >> I think L2 pop driver needs to be used in conjunction with other
>> >> mechanism
>> >> drivers.
>> >>
>> >> It only deals with pro-actively informing agents on which MAC Addresses
>> >> became available/unavailable on cloud nodes and is not meant for
>> >> binding/unbinding ports on segments.
>> >>
>> >> If you configure mechanism_drivers=openvswitch,l2population in your
>> >> ml2_conf.ini and restart your neutron-server, you'll notice that
>> >> bind_port
>> >> is handled by OVS mechanism driver (via AgentMechanismDriverBase inside
>> >> ml2/drivers/mech_agent.py).
>> >>
>> >> --
>> >> Thanks,
>> >>
>> >> Vivek
>> >>
>> >>
>> >> -Original Message-
>> >> From: Sławek Kapłoński [mailto:sla...@kaplonski.pl]
>> >> Sent: Sunday, May 04, 2014 12:32 PM
>> >> To: openstack-dev@lists.openstack.org
>> >> Subject: [openstack-dev] [ML2] L2 population mechanism driver
>> >>
>> >> Hello,
>> >>
>> >> Last time I want to try using L2pop mechanism driver in ML2 (and
>> >> openvswitch agents on compute nodes). But every time when I try to spawn
>> >> instance I have got error "binding failed". After some searching in code
>> >> I found that l2pop driver have not implemented method "bind_port" and as
>> >> it inherit directly from "MechanismDriver" this method is in fact not
>> >> implemented. Is is ok and this mechanism driver should be used in other
>> >> way or maybe there is some bug in this driver and it miss this method?
>> >>
>> >> Best regards
>> >> Slawek Kaplonski
>> >> sla...@kaplonski.pl
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add VMware dvSwitch/vSphere API support for Neutron ML2

2014-05-07 Thread Mathieu Rohon
Hi,

one other part of your future work, if the BP is accepted, is to deploy
a third-party testing environment. Since your work relies on a
proprietary solution (VMware), it won't be tested by the OpenStack CI.
You will have to deploy an infrastructure to test and vote on each patch
submitted to Neutron that might break your driver.

See: http://ci.openstack.org/third_party.html

regards

On Wed, May 7, 2014 at 8:39 AM, Jussi Sorjonen
 wrote:
> Hi,
>
> Just to give this some context, the original idea for the driver came from
> the fact that there was no readily available method for using VLAN-backed
> port groups in dvSwitches with neutron. We tried using nova-network with
> regular vSwitches and VlanManager to evaluate the VMware driver in
> nova-compute but it was decided that neutron is a an absolute requirement
> for production use.
>
> The current code of the driver is tested with nova-compute controlling a
> vSphere 5.1 cluster. Multiple instances were successfully created,
> attached to the correct port group and network connectivity was achieved.
>
> We are not using VXLANs at the moment and are not actively looking into
> deploying them, so implementing a VXLAN support in the driver is not
> currently in our interest. Considering that VXLANs are configured as port
> groups in dvSwitches on the VMware side there isn¹t much difference in
> that part. However, configuring the VXLANs in the vShield app is something
> I think is out of the scope of this driver.
>
> We¹re interested in going through the blueprint process. Due to a rather
> tight schedule on our end we had to get a limited-functionality version of
> the driver ready before we had time to look into the process of submitting
> a blueprint and the required specs. The current version of the driver
> implements the only required feature in our environment - attaching
> virtual machines on the VMware side to correct dvSwitch port groups.
> Adding features like creating the port groups based on networks defined in
> neutron etc. are in consideration.
>
> I hope this answers some of the questions and I¹m happy to provide more
> details, if needed.
>
> Regards
>
> --
> Jussi Sorjonen, Systems Specialist, Data Center
> +358 (0)50 594 7848, jussi.sorjo...@cybercom.com
> Cybercom Finland Oy, Urho Kekkosen katu 3 B, 00100 Helsinki, FINLAND
> www.cybercom.fi | www.cybercom.com
>
>
>
>
> On 06/05/14 11:17, "Mathieu Rohon"  wrote:
>
>>Hi IIkka,
>>
>>this is a very interesting MD for ML2. Have you ever tried to use your
>>ML2 driver with VMWare drivers on the nova side, so that you could
>>manage your VM with nova, and its network with neutron.
>>Do you think it would be difficult to extend your driver to support
>>vxlan encapsulation?
>>
>>Neutron has a new process to validate BP. Please follow those
>>instructions to submit your spec for review :
>>https://wiki.openstack.org/wiki/Blueprints#Neutron
>>
>>regards
>>
>>On Mon, May 5, 2014 at 2:22 PM, Ilkka Tengvall
>> wrote:
>>> Hi,
>>>
>>> I would like to start a discussion about a ML2 driver for VMware
>>>distributed
>>> virtual switch (dvSwitch) for Neutron. There is a new blueprint made by
>>>Sami
>>> Mäkinen (sjm) in
>>>
>>>https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dv
>>>switch.
>>>
>>> The driver is described and code is publicly available and hosted in
>>>github:
>>>
>>>https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dv
>>>switch
>>>
>>> We would like to get the driver through contribution process, what ever
>>>that
>>> exactly means :)
>>>
>>> The original problem this driver solves for is is the following:
>>>
>>> We've been running VMware virtualization platform in our data center
>>>before
>>> OpenStack, and we will keep doing it due existing services. We also have
>>> been running OpenStack for a while also. Now we wanted to get the most
>>>out
>>> of both by combining the customers networks on the both plafroms by
>>>using
>>> provider networks. The problem is that the networks need two separate
>>> managers, neutron and vmware. There was no OpenStack tools to attach the
>>> guests on VMware side to OpenStack provider networks during instance
>>> creation.
>>>
>>> Now we are putting our VMware under control of OpenStack. We want to
>>>have
>>> one master to control the networks, Neutron. We implemented

Re: [openstack-dev] [ML2] L2 population mechanism driver

2014-05-06 Thread Mathieu Rohon
Hi slawek,

As soon as you declare "l2population" in the mechanism_drivers section
of the config file, it will be loaded by the ML2 plugin. The
l2population MD will listen to ML2 DB events and send RPC messages to
the OVS agent when needed. The l2population MD will not bind the port;
the OVS MD will.
By the way, you also need to set the l2_population flag in the config
file of the agent.
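
For example, something along these lines (a sketch; file and section
names follow the usual Havana-era layout, so double-check them against
your deployment):

# ml2_conf.ini, on the Neutron server
[ml2]
mechanism_drivers = openvswitch,l2population

# OVS agent configuration, on each compute/network node
[agent]
l2_population = True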

Just out of curiosity, what is your MD, which inherits from the l2_pop
MD, made for?

regards

Mathieu

On Mon, May 5, 2014 at 11:06 PM, Sławek Kapłoński  wrote:
> Hello,
>
> Thanks for answear. Now I made my own mech_driver which inherits from l2_pop
> driver and it is working ok. But If I will set it as:
> "mechanism_drivers=openvswitch,l2population" than ports will be binded with
> ovs driver so population mechanism will be working in such network?
>
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
> Dnia poniedziałek, 5 maja 2014 00:29:56 Narasimhan, Vivekanandan pisze:
>> Hi Slawek,
>>
>> I think L2 pop driver needs to be used in conjunction with other mechanism
>> drivers.
>>
>> It only deals with pro-actively informing agents on which MAC Addresses
>> became available/unavailable on cloud nodes and is not meant for
>> binding/unbinding ports on segments.
>>
>> If you configure mechanism_drivers=openvswitch,l2population in your
>> ml2_conf.ini and restart your neutron-server, you'll notice that bind_port
>> is handled by OVS mechanism driver (via AgentMechanismDriverBase inside
>> ml2/drivers/mech_agent.py).
>>
>> --
>> Thanks,
>>
>> Vivek
>>
>>
>> -Original Message-
>> From: Sławek Kapłoński [mailto:sla...@kaplonski.pl]
>> Sent: Sunday, May 04, 2014 12:32 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [ML2] L2 population mechanism driver
>>
>> Hello,
>>
>> Last time I want to try using L2pop mechanism driver in ML2 (and openvswitch
>> agents on compute nodes). But every time when I try to spawn instance I
>> have got error "binding failed". After some searching in code I found that
>> l2pop driver have not implemented method "bind_port" and as it inherit
>> directly from "MechanismDriver" this method is in fact not implemented.
>> Is is ok and this mechanism driver should be used in other way or maybe
>> there is some bug in this driver and it miss this method?
>>
>> Best regards
>> Slawek Kaplonski
>> sla...@kaplonski.pl
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Add VMware dvSwitch/vSphere API support for Neutron ML2

2014-05-06 Thread Mathieu Rohon
Hi IIkka,

this is a very interesting MD for ML2. Have you ever tried to use your
ML2 driver with the VMware drivers on the Nova side, so that you could
manage your VMs with Nova and their networks with Neutron?
Do you think it would be difficult to extend your driver to support
VXLAN encapsulation?

Neutron has a new process to validate BP. Please follow those
instructions to submit your spec for review :
https://wiki.openstack.org/wiki/Blueprints#Neutron

regards

On Mon, May 5, 2014 at 2:22 PM, Ilkka Tengvall
 wrote:
> Hi,
>
> I would like to start a discussion about a ML2 driver for VMware distributed
> virtual switch (dvSwitch) for Neutron. There is a new blueprint made by Sami
> Mäkinen (sjm) in
> https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch.
>
> The driver is described and code is publicly available and hosted in github:
> https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mech-vmware-dvswitch
>
> We would like to get the driver through contribution process, what ever that
> exactly means :)
>
> The original problem this driver solves for is is the following:
>
> We've been running VMware virtualization platform in our data center before
> OpenStack, and we will keep doing it due existing services. We also have
> been running OpenStack for a while also. Now we wanted to get the most out
> of both by combining the customers networks on the both plafroms by using
> provider networks. The problem is that the networks need two separate
> managers, neutron and vmware. There was no OpenStack tools to attach the
> guests on VMware side to OpenStack provider networks during instance
> creation.
>
> Now we are putting our VMware under control of OpenStack. We want to have
> one master to control the networks, Neutron. We implemented the new ML2
> driver to do just that. It is capable of joining the machines created in
> vSphere to the same provider networks the OpenStack uses, using dvSwitch
> port groups.
>
>
> I just wanted to open the discussion, for the technical details please
> contact our experts on the CC list:
>
> Sami J. Mäkinen
> Jussi Sorjonen (freenode: mieleton)
>
>
> BR,
>
> Ilkka Tengvall
>  Advisory Consultant, Cloud Architecture
>  email:  ilkka.tengv...@cybercom.com
>  mobile: +358408443462
>  freenode: ikke-t
>  web:http://cybercom.com - http://cybercom.fi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-28 Thread Mathieu Rohon
Hi,


the more I think about your use case, the more I think you should
create a BP to have tenant networks based on interfaces created with
the VDP protocol.
I'm not a VDP specialist, but if it creates VLAN-backed interfaces, you
might map those physical interfaces with the physical_interface_mappings
parameter in your agent configuration; see the sketch below. Then you
could create flat networks backed by those interfaces.
SR-IOV use cases also talk about using vif_type 802.1qbg :
https://wiki.openstack.org/wiki/Nova-neutron-sriov
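
As a rough illustration (the interface and network names are
assumptions, and the section names may differ in your deployment):

# ML2 server side
[ml2]
type_drivers = flat,vlan

[ml2_type_flat]
flat_networks = physnet1

# L2 agent side, mapping the provider network to the VDP-created interface
[linux_bridge]
physical_interface_mappings = physnet1:eth1.100

A flat provider network on physnet1 would then use the VLAN-tagged
interface that VDP set up.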


Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2][Ml2Plugin] Setting _original_network in NetworkContext:

2014-03-28 Thread Mathieu Rohon
hi nader,

I don't think this parameter could be used in this case. As Andre said,
the original network is useful for update and delete commands. It would
lead to misunderstandings if we used this param in other cases, and
particularly in create commands.
I'm still thinking that the result of super(Ml2Plugin,
self).create_network(context, network) should include the network
extension information [1]. Did you talk with Salvatore about reverting
his change and using another workaround?

[1]https://answers.launchpad.net/neutron/+question/245773

Mathieu

On Thu, Mar 27, 2014 at 5:24 PM, Nader Lahouti  wrote:
> Hi Andre,
>
> Thans for your reply.
>
> There is no existing network. The scenario is for the first time that we
> create a network with an extension. Consider, a mechanism driver adds an
> attribute (through extensions) to the network resource. When user creates a
> network, the attribute is set and it is present in the 'network' parameter,
> when calling create_network() in Ml2Plugin.
> But when create_network_pre/post_commit is called, the attribute won't be
> available to the mechanism driver. Because the attribute is not included in
> network object passed to MD - as I mentioned in previous email, the 'result'
> does not have the new attribute.
>
>
> Thanks,
> Nader.
>
>
>
>
>
>
>
>
> On Wed, Mar 26, 2014 at 3:52 PM, Andre Pech 
> wrote:
>>
>> Hi Nader,
>>
>> When I wrote this, the intention was that original_network only really
>> makes sense during an update_network call (ie when there's an existing
>> network that you are modifying). In a create_network call, the assumption is
>> that no network exists yet, so there is no "original network" to set.
>>
>> Can you provide a bit more detail on the case where there's an existing
>> network when create_network is called? Sorry, I didn't totally follow when
>> this would happen.
>>
>> Thanks
>> Andre
>>
>>
>> On Tue, Mar 25, 2014 at 8:45 AM, Nader Lahouti 
>> wrote:
>>>
>>> Hi All,
>>>
>>> In the current Ml2Plugin code when 'create_network' is called, as shown
>>> below:
>>>
>>>
>>>
>>> def create_network(self, context, network)
>>>
>>> net_data = network['network']
>>>
>>> ...
>>>
>>> session = context.session
>>>
>>> with session.begin(subtransactions=True):
>>>
>>> self._ensure_default_security_group(context, tenant_id)
>>>
>>> result = super(Ml2Plugin, self).create_network(context,
>>> network)
>>>
>>> ...
>>>
>>> mech_context = driver_context.NetworkContext(self, context,
>>> result)
>>>
>>> self.mechanism_manager.create_network_precommit(mech_context)
>>>
>>> ...
>>>
>>>
>>>
>>> the original_network parameter is not set (the default is None) when
>>> instantiating NetworkContext, and as a result the mech_context has only the
>>> value of network object returned from super(Ml2Plugin,
>>> self).create_network().
>>>
>>> This causes issue when a mechanism driver needs to use the original
>>> network parameters (given to the create_network), specially when extension
>>> is used for the network resources.
>>>
>>> (The 'result' only has the network attributes without extension which is
>>> used to set the '_network' in the NetwrokContext object).
>>>
>>> Even using  extension function registration using
>>>
>>> db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(...) won't
>>> help as the network object that is passed to the registered function does
>>> not include the extension parameters.
>>>
>>>
>>> Is there any reason that the original_network is not set when
>>> initializing the NetworkContext? Would that cause any issue to set it to
>>> 'net_data' so that any mechanism driver can use original network parameters
>>> as they are available when create_network is called?
>>>
>>>
>>> Appreciate your comments.
>>>
>>>
>>> Thanks,
>>>
>>> Nader.
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-03-26 Thread Mathieu Rohon
Hi,

thanks for this very interesting use case!
Maybe you can still use VXLAN or GRE for tenant networks, to bypass the
4K limit of VLANs. You would then have to send packets to the
VLAN-tagged interface, with the tag assigned by the VDP protocol, and
this traffic would be encapsulated inside the segment to be carried
across the network fabric. Of course, you will have to take care of the
MTU. The only thing you have to consider is to make sure that the
default route between VXLAN endpoints goes through your VLAN-tagged
interface.
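
In practice, this usually means making sure the tunnel endpoint address
of the OVS agent is an IP configured on that VLAN-tagged interface, for
example (a sketch; the address and section name are assumptions, check
them against your agent config file):

[ovs]
local_ip = 10.0.100.11   # an address configured on the VLAN-tagged interface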



Best,
Mathieu

On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan  wrote:
> Hello,
> I have a topology where my Openstack compute nodes are connected to the
> external switches. The fabric comprising of the switches support more than
> 4K segments. So, i should be able to create more than 4K networks in
> Openstack. But, the VLAN to be used for communication with the switches is
> assigned by the switches using 802.1QBG (VDP) protocol. This can be thought
> of as a network overlay. The VM's sends .1q frames to the switches and the
> switches associate it to the segment (VNI in case of VXLAN).
> My question is:
> 1. I cannot use a type driver of VLAN because of the 4K limitation. I cannot
> use a type driver of VXLAN or GRE because that may mean host based overlay.
> Is there an integrated type driver i can use like an "external network" for
> achieving the above?
> 2. The Openstack module running in the compute should communicate with VDP
> module (lldpad) running there.
> In the computes, i see that ovs_neutron_agent.py is the one programming the
> flows. Here, for the new type driver, should i add a special case to
> provision_local_vlan() for communicating with lldpad for retrieving the
> provider VLAN? If there was a type driver component running in each
> computes, i would have added another one for my purpose. Since, the ML2
> architecture has its mechanism/type driver modules in the controller only, i
> can only make changes here.
>
> Please let me know if there's already an implementation for my above
> requirements. If not, should i create a blue-print?
>
> Thanks,
> Paddu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Mathieu Rohon
Hi Édouard,

thanks for the information. I would love to see your patch get merged so
that the l2-population MD is fully functional with an OVS-based
deployment. Moreover, this patch has a minimal impact on Neutron, since
the code is used only if the l2-population MD is used in the ML2 plugin.

markmcclain was concerned that no functional testing is done, but the
l2-population MD needs a multi-node deployment to be tested. A
deployment based on a single VM won't create overlay tunnels, which are
a mandatory technology for l2-population to be activated.
The OpenStack CI is not able, for the moment, to run jobs based on
multi-node deployments. We proposed an evolution of devstack to have a
multi-node deployment based on a single VM which launches compute nodes
in LXC containers [1], but this evolution was refused by the OpenStack
CI team since there are other ways to run a multi-node setup with
devstack, and LXC containers are not compatible with iSCSI and probably
not with OVS [2][3].

One way to have functional tests for this feature would be to deploy a
third-party testing environment, but it would be a pity to have to
maintain a third-party CI to test functionality that is not based on
third-party equipment. So we are currently learning about the OpenStack
CI tools to propose some evolutions to get a multi-node setup inside the
gate [4]. There are a lot of ways to implement it (a nodepool evolution,
usage of TripleO, or of Heat [5]), and we don't know which one would be
the easiest, and so which one we have to work on to have the multi-node
feature available ASAP.

This feature looks very important for Neutron, at least to test overlay
tunneling. I think it's very important for Nova too, to test
live-migration.


[1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
[2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
[3]http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
[4]https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
[5]http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html

On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau  wrote:
> Hi,
>
> Just to inform you that the new OVS release 2.1.0 was done yesterday [1].
> This release contains new features and significant performance improvements
> [2].
>
> And in that new features, one [3] was use to add local ARP responder with
> OVS agent and the plugin ML2 with the MD l2-pop [4]. Perhaps, it's time to
> reconsider that review?
>
> [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
> [2] http://openvswitch.org/releases/NEWS-2.1.0
> [3]
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
> [4] https://review.openstack.org/#/c/49227/
>
> Regards,
> Édouard.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][OVS Agent]

2014-03-18 Thread Mathieu Rohon
Hi nader,

The easiest way would be to register a new RPC callback in the current
OVS agent. This is what we have done for the l2-pop MD, with the fdb_add
and fdb_remove callbacks.
However, it could become a mess if every MD adds its own callback
directly into the code of the agent. The L2 agent should be able to load
drivers, which might register new callbacks.
This could potentially be something to do while refactoring the agent:
https://review.openstack.org/#/c/57627/
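
As a purely illustrative sketch of the agent-side part (class, method
and helper names are made up for the example, not the actual agent
code):

class MyFeatureRpcCallbackMixin(object):
    """Extra RPC endpoints mixed into the OVS agent for a custom MD."""

    def myfeature_update(self, context, entries=None):
        # Invoked over AMQP by a server-side notifier, in the same
        # spirit as the l2-pop fdb_add/fdb_remove callbacks.
        for port_id, data in (entries or {}).items():
            self._apply_myfeature(port_id, data)

    def _apply_myfeature(self, port_id, data):
        # Agent-specific wiring (e.g. programming OVS flows) goes here.
        pass

The agent would then subscribe to the corresponding topic when it
creates its RPC consumers so that these messages actually reach the
mixin.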

On Tue, Mar 18, 2014 at 7:42 AM, Nader Lahouti  wrote:
> Hi All,
>
> In a multi-node setup, I'm using Ml2Plugin (as core plugin) and OVS
> (OVSNeutronAgent) as an agent on compute nodes. From controller I need to
> call a *new method* on agent ( on all compute nodes - using  RPC), to
> perform a task (i.e. to communicate with an external process). As I need to
> use OVSNeutronAgent, I am thinking the following as potential solution for
> adding the new method to the agent:
> 1. Create new plugin based on existing OVS agent - That means cloning
> OVSNeutronAgent and add the new method to that.
> 2. Create new plugin, which inherits OVSNeutronPlugin - the new plugin
> defines the new method, setup_rpc,...
> 3. Add the new method to the existing OVSNeutronAgent
>
> Please let me know your thought and comments.
>
> Regards,
> Nader.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2]

2014-03-17 Thread Mathieu Rohon
Hi racha,

I don't think your topic has anything to do with Nader's topics.
Please create another thread; it would be easier to follow.
FYI, Robert Kukura is currently refactoring the MD binding, please
have a look here : https://bugs.launchpad.net/neutron/+bug/1276391. As
I understand it, there won't be a priority between MDs that can bind the
same port. The first one that responds to the binding request will
provide its vif_type.
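
For illustration, the binding answer of a mechanism driver looks roughly
like the sketch below; the segment filtering and the vif_type value are
assumptions for the example, not the authoritative ML2 driver API:

from neutron.plugins.ml2 import driver_api as api

class ExampleMechanismDriver(api.MechanismDriver):

    def initialize(self):
        # Values chosen for the example only.
        self.vif_type = 'ovs'
        self.vif_details = {'port_filter': True}

    def bind_port(self, context):
        # The first driver that binds provides the vif_type consumed by
        # Nova; a driver that cannot bind simply returns without binding.
        for segment in context.network.network_segments:
            if segment[api.NETWORK_TYPE] in ('vlan', 'vxlan'):
                context.set_binding(segment[api.ID],
                                    self.vif_type,
                                    self.vif_details)
                return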

Best,

Mathieu

On Fri, Mar 14, 2014 at 8:14 PM, racha  wrote:
> Hi,
>   Is it possible (in the latest upstream) to partition the same
> integration bridge "br-int" into multiple isolated partitions (in terms of
> lvids ranges, patch ports, etc.) between OVS mechanism driver and ODL
> mechanism driver? And then how can we pass some details to Neutron API (as
> in the provider segmentation type/id/etc) so that ML2 assigns a mechanism
> driver to the virtual network? The other alternative I guess is to create
> another integration bridge managed by a different Neutron instance? Probably
> I am missing something.
>
> Best Regards,
> Racha
>
>
> On Fri, Mar 7, 2014 at 10:33 AM, Nader Lahouti 
> wrote:
>>
>> 1) Does it mean an interim solution is to have our own plugin (and have
>> all the changes in it) and declare it as core_plugin instead of Ml2Plugin?
>>
>> 2) The other issue as I mentioned before, is that the extension(s) is not
>> showing up in the result, for instance when create_network is called
>> [result = super(Ml2Plugin, self).create_network(context, network)], and as
>> a result they cannot be used in the mechanism drivers when needed.
>>
>> Looks like the process_extensions is disabled when fix for Bug 1201957
>> committed and here is the change:
>> Any idea why it is disabled?
>>
>> --
>> Avoid performing extra query for fetching port security binding
>>
>> Bug 1201957
>>
>>
>> Add a relationship performing eager load in Port and Network
>>
>> models, thus preventing the 'extend' function from performing
>>
>> an extra database query.
>>
>> Also fixes a comment in securitygroups_db.py
>>
>>
>> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
>>
>>  master   h.1
>>
>> ...
>>
>>  2013.2
>>
>> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf 1 parent ca421e7
>>
>> Salvatore Orlando salv-orlando authored 8 months ago
>>
>>
>> 2  neutron/db/db_base_plugin_v2.py View
>>
>>  @@ -995,7 +995,7 @@ def create_network(self, context, network):
>>
>> 995   'status': constants.NET_STATUS_ACTIVE}
>>
>> 996   network = models_v2.Network(**args)
>>
>> 997   context.session.add(network)
>>
>> 998 -return self._make_network_dict(network)
>>
>> 998 +return self._make_network_dict(network,
>> process_extensions=False)
>>
>> 999
>>
>> 1000  def update_network(self, context, id, network):
>>
>> 1001
>>
>>  n = network['network']
>>
>>
>> ---
>>
>>
>> Regards,
>> Nader.
>>
>>
>>
>>
>>
>> On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura 
>> wrote:
>>>
>>>
>>> On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
>>>
>>> Yes, that sounds good to be able to load extensions from a mechanism
>>> driver.
>>>
>>> But another problem I think we have with ML2 plugin is the list
>>> extensions supported by default [1].
>>> The extensions should only load by MD and the ML2 plugin should only
>>> implement the Neutron core API.
>>>
>>>
>>> Keep in mind that ML2 supports multiple MDs simultaneously, so no single
>>> MD can really control what set of extensions are active. Drivers need to be
>>> able to load private extensions that only pertain to that driver, but we
>>> also need to be able to share common extensions across subsets of drivers.
>>> Furthermore, the semantics of the extensions need to be correct in the face
>>> of multiple co-existing drivers, some of which know about the extension, and
>>> some of which don't. Getting this properly defined and implemented seems
>>> like a good goal for juno.
>>>
>>> -Bob
>>>
>>>
>>>
>>> Any though ?
>>> Édouard.
>>>
>>> [1]
>>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87
>>>
>>>
>>>
>>> On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:

 Hi,

 I think it is better to continue the discussion here. It is a good log
 :-)

 Eugine and I talked the related topic to allow drivers to load
 extensions)  in Icehouse Summit
 but I could not have enough time to work on it during Icehouse.
 I am still interested in implementing it and will register a blueprint
 on it.

 etherpad in icehouse summit has baseline thought on how to achieve it.
 https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
 I hope it is a good start point of the discussion.

 Thanks,
 Akihiro

 On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
 wrote:
 > Hi Kyle,
 >
 > Just wanted to clarify: Should I continue using this mailing list to
 > post my
 > question/concerns about ML2? Please advise.
>>>

Re: [openstack-dev] [Neutron][ML2]

2014-03-17 Thread Mathieu Rohon
Hi

On Fri, Mar 7, 2014 at 7:33 PM, Nader Lahouti  wrote:
> 1) Does it mean an interim solution is to have our own plugin (and have all
> the changes in it) and declare it as core_plugin instead of Ml2Plugin?

I don't think you should create your own plugin; an MD is simpler to
develop and to maintain. You should just help us make ML2 evolve along a
path that fits your needs. Moreover, enabling MDs to load extensions is
an already identified need, as Akihiro said. Developing this part would
be more useful for you and for all ML2 users.

> 2) The other issue as I mentioned before, is that the extension(s) is not
> showing up in the result, for instance when create_network is called
> [result = super(Ml2Plugin, self).create_network(context, network)], and as a
> result they cannot be used in the mechanism drivers when needed.
>
> Looks like the process_extensions is disabled when fix for Bug 1201957
> committed and here is the change:
> Any idea why it is disabled?

As you pointed out, it shouldn't be disabled, since an MD might need the
entire network dict, including extension data. You might contact
Salvatore to discuss another workaround for his bug.

> --
> Avoid performing extra query for fetching port security binding
>
> Bug 1201957
>
>
> Add a relationship performing eager load in Port and Network
>
> models, thus preventing the 'extend' function from performing
>
> an extra database query.
>
> Also fixes a comment in securitygroups_db.py
>
>
> Change-Id: If0f0277191884aab4dcb1ee36826df7f7d66a8fa
>
>  master   h.1
>
> ...
>
>  2013.2
>
> commit f581b2faf11b49852b0e1d6f2ddd8d19b8b69cdf 1 parent ca421e7
>
> Salvatore Orlando salv-orlando authored 8 months ago
>
>
> 2  neutron/db/db_base_plugin_v2.py View
>
>  @@ -995,7 +995,7 @@ def create_network(self, context, network):
>
> 995   'status': constants.NET_STATUS_ACTIVE}
>
> 996   network = models_v2.Network(**args)
>
> 997   context.session.add(network)
>
> 998 -return self._make_network_dict(network)
>
> 998 +return self._make_network_dict(network,
> process_extensions=False)
>
> 999
>
> 1000  def update_network(self, context, id, network):
>
> 1001
>
>  n = network['network']
>
>
> ---
>
>
> Regards,
> Nader.
>
>
>
>
>
> On Fri, Mar 7, 2014 at 6:26 AM, Robert Kukura 
> wrote:
>>
>>
>> On 3/7/14, 3:53 AM, Édouard Thuleau wrote:
>>
>> Yes, that sounds good to be able to load extensions from a mechanism
>> driver.
>>
>> But another problem I think we have with ML2 plugin is the list extensions
>> supported by default [1].
>> The extensions should only load by MD and the ML2 plugin should only
>> implement the Neutron core API.
>>
>>
>> Keep in mind that ML2 supports multiple MDs simultaneously, so no single
>> MD can really control what set of extensions are active. Drivers need to be
>> able to load private extensions that only pertain to that driver, but we
>> also need to be able to share common extensions across subsets of drivers.
>> Furthermore, the semantics of the extensions need to be correct in the face
>> of multiple co-existing drivers, some of which know about the extension, and
>> some of which don't. Getting this properly defined and implemented seems
>> like a good goal for juno.
>>
>> -Bob
>>
>>
>>
>> Any though ?
>> Édouard.
>>
>> [1]
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L87
>>
>>
>>
>> On Fri, Mar 7, 2014 at 8:32 AM, Akihiro Motoki  wrote:
>>>
>>> Hi,
>>>
>>> I think it is better to continue the discussion here. It is a good log
>>> :-)
>>>
>>> Eugine and I talked the related topic to allow drivers to load
>>> extensions)  in Icehouse Summit
>>> but I could not have enough time to work on it during Icehouse.
>>> I am still interested in implementing it and will register a blueprint on
>>> it.
>>>
>>> etherpad in icehouse summit has baseline thought on how to achieve it.
>>> https://etherpad.openstack.org/p/icehouse-neutron-vendor-extension
>>> I hope it is a good start point of the discussion.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>> On Fri, Mar 7, 2014 at 4:07 PM, Nader Lahouti 
>>> wrote:
>>> > Hi Kyle,
>>> >
>>> > Just wanted to clarify: Should I continue using this mailing list to
>>> > post my
>>> > question/concerns about ML2? Please advise.
>>> >
>>> > Thanks,
>>> > Nader.
>>> >
>>> >
>>> >
>>> > On Thu, Mar 6, 2014 at 1:50 PM, Kyle Mestery
>>> > 
>>> > wrote:
>>> >>
>>> >> Thanks Edgar, I think this is the appropriate place to continue this
>>> >> discussion.
>>> >>
>>> >>
>>> >> On Thu, Mar 6, 2014 at 2:52 PM, Edgar Magana 
>>> >> wrote:
>>> >>>
>>> >>> Nader,
>>> >>>
>>> >>> I would encourage you to first discuss the possible extension with
>>> >>> the
>>> >>> ML2 team. Rober and Kyle are leading this effort and they have a IRC
>>> >>> meeting
>>> >>> every week:
>>> >>> https://wiki.openstack.org/wiki/Meetings#ML2_Network_sub-team_meeting
>>> >>>
>>> >>> Bring your concerns on this meeting and get the r

Re: [openstack-dev] [Neutron][L3] FFE request: L3 HA VRRP

2014-03-11 Thread Mathieu Rohon
+1

On Mon, Mar 10, 2014 at 10:51 AM, Miguel Angel Ajo  wrote:
> +1  (Voting here to workaround my previous top-posting).
>
>
> On 03/09/2014 01:22 PM, Nir Yechiel wrote:
>>
>> +1
>>
>> I see it as one of the main current gaps and I believe that this is
>> something that can promote Neutron as stable and production ready.
>> Based on Édouard's comment below, having this enabled in Icehouse as
>> experimental makes a lot of sense to me.
>>
>> - Original Message -
>>>
>>> +1
>>>
>>> - Original Message -

 +1

 On Fri, Mar 7, 2014 at 2:42 AM, Édouard Thuleau 
 wrote:
>
> +1
> I though it must merge as experimental for IceHouse, to let the
> community
> tries it and stabilizes it during the Juno release. And for the Juno
> release, we will be able to announce it as stable.
>
> Furthermore, the next work, will be to distribute the l3 stuff at the
> edge
> (compute) (called DVR) but this VRRP work will still needed for that
> [1].
> So if we merge L3 HA VRRP as experimental in I to be stable in J, will
> could
> also propose an experimental DVR solution for J and a stable for K.
>
> [1]
>
> https://docs.google.com/drawings/d/1GGwbLa72n8c2T3SBApKK7uJ6WLTSRa7erTI_3QNj5Bg/edit
>
> Regards,
> Édouard.
>
>
> On Thu, Mar 6, 2014 at 4:27 PM, Sylvain Afchain
>  wrote:
>>
>>
>> Hi all,
>>
>> I would like to request a FFE for the following patches of the L3 HA
>> VRRP
>> BP :
>>
>> https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
>>
>> https://review.openstack.org/#/c/64553/
>> https://review.openstack.org/#/c/66347/
>> https://review.openstack.org/#/c/68142/
>> https://review.openstack.org/#/c/70700/
>>
>> These should be low risk since HA is not enabled by default.
>> The server side code has been developed as an extension which
>> minimizes
>> risk.
>> The agent side code introduces a bit more changes but only to filter
>> whether to apply the
>> new HA behavior.
>>
>> I think it's a good idea to have this feature in Icehouse, perhaps
>> even
>> marked as experimental,
>> especially considering the demand for HA in real world deployments.
>>
>> Here is a doc to test it :
>>
>>
>>
>> https://docs.google.com/document/d/1P2OnlKAGMeSZTbGENNAKOse6B2TRXJ8keUMVvtUCUSM/edit#heading=h.xjip6aepu7ug
>>
>> -Sylvain
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-26 Thread Mathieu Rohon
Please look at my last reply. Do you have several compute nodes? If you
have only one node, you won't have any VXLAN port on br-tun.

On Wed, Feb 26, 2014 at 6:44 PM, Varadhan, Sowmini
 wrote:
> On 2/26/14 5:47 AM, Mathieu Rohon wrote:
>> Hi,
>>
>> FYI setting the vxlan UDP doesn't work properly for the moment :
>> https://bugs.launchpad.net/neutron/+bug/1241561
>
> So I checked this again by going back to 13.10, still no luck.
>
>>
>> May be your kernel has the vxlan module already loaded, which bind the
>> udp port 8472. that a reason why the vxlan port can't be created by
>> ovs. Check your ovs-vswitchd.log
>
> Yes vxlan is loaded (as indicated by lsmod) but I didnt
> see any messages around 8472 in the ovs-vswitchd.log, so it must
> be something else in my config. To double check, I even tried some
> other port (8474) for vxlan_udp_port, still no luck.
>
> So is there a template stack.sh around for this? That would
> help me elminate the obvious config errors I may have made?
>
> --Sowmini
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-26 Thread Mathieu Rohon
To be more precise, the kernel will listen on UDP port 8472 as soon as
you create a VXLAN port with iproute2. I doubt that's your case, but I
can't figure out why you want to change the VXLAN port. Please tell us
about that; it could be interesting.

In your case, your config doesn't seem to be multi-node, so you will
have only one VXLAN endpoint, which hosts both the network node and the
compute node. If that is right, it's expected that no VXLAN tunnel is
created, since you don't have any other VXLAN endpoint.


On Wed, Feb 26, 2014 at 11:38 AM, Mathieu Rohon  wrote:
> Hi,
>
> FYI setting the vxlan UDP doesn't work properly for the moment :
> https://bugs.launchpad.net/neutron/+bug/1241561
>
> May be your kernel has the vxlan module already loaded, which bind the
> udp port 8472. that a reason why the vxlan port can't be created by
> ovs. Check your ovs-vswitchd.log
>
> On Tue, Feb 25, 2014 at 10:08 PM, Varadhan, Sowmini
>  wrote:
>> Folks,
>>
>> I'm trying to set up a simple single-node devstack + ml2 + vxlan
>> combination, and though this ought to be a simple RTFM exercise,
>> I'm having some trouble setting this up. Perhaps I'm doing something
>> wrong- clues would be welcome.
>>
>> I made sure to use ovs_version 1.10.2, and followed
>> the instructions in https://wiki.openstack.org/wiki/Neutron/ML2
>> (and then some, based on various and sundry blogs that google found)
>>
>> Can someone share (all) the contents of their localrc,
>> and if possible, a description of their VM (virtualbox?  qemu-kvm?)
>> setup so that I can compare against my env?
>>
>> FWIW, I tried the attached configs.
>> localrc.all - sets up
>> Q_PLUGIN=ml2
>> Q_ML2_TENANT_NETWORK_TYPE=vxlan
>> Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
>> Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
>> Resulting VM boots, but no vxlan interfaces show up (see ovs-ctl.out.all)
>>
>> localrc.vxlan.only - disallow anything other than vxlan and gre.
>> VM does not boot- I get a "binding_failed" error. See ovs-ctl.out.vxlan.only
>>
>> Thanks in advance,
>> Sowmini
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] setting up 1-node devstack + ml2 + vxlan

2014-02-26 Thread Mathieu Rohon
Hi,

FYI, setting the VXLAN UDP port doesn't work properly at the moment:
https://bugs.launchpad.net/neutron/+bug/1241561

Maybe your kernel has the vxlan module already loaded, which binds UDP
port 8472. That is one reason why the VXLAN port can't be created by
OVS. Check your ovs-vswitchd.log.

On Tue, Feb 25, 2014 at 10:08 PM, Varadhan, Sowmini
 wrote:
> Folks,
>
> I'm trying to set up a simple single-node devstack + ml2 + vxlan
> combination, and though this ought to be a simple RTFM exercise,
> I'm having some trouble setting this up. Perhaps I'm doing something
> wrong- clues would be welcome.
>
> I made sure to use ovs_version 1.10.2, and followed
> the instructions in https://wiki.openstack.org/wiki/Neutron/ML2
> (and then some, based on various and sundry blogs that google found)
>
> Can someone share (all) the contents of their localrc,
> and if possible, a description of their VM (virtualbox?  qemu-kvm?)
> setup so that I can compare against my env?
>
> FWIW, I tried the attached configs.
> localrc.all - sets up
> Q_PLUGIN=ml2
> Q_ML2_TENANT_NETWORK_TYPE=vxlan
> Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_type=vxlan vxlan_udp_port=8472)
> Q_SRV_EXTRA_OPTS=(tenant_network_type=vxlan)
> Resulting VM boots, but no vxlan interfaces show up (see ovs-ctl.out.all)
>
> localrc.vxlan.only - disallow anything other than vxlan and gre.
> VM does not boot- I get a "binding_failed" error. See ovs-ctl.out.vxlan.only
>
> Thanks in advance,
> Sowmini
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-26 Thread Mathieu Rohon
Hi,

you can get inspiration from the l2-population MD, which calls new
functions in the agents (like add_fdb_entries) through AMQP.
Does your work relate to an existing blueprint?
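
Schematically, the server-side notifier fans a message out to every
agent, roughly like this (a simplified sketch assuming an oslo.messaging
RPCClient, not the exact in-tree l2-pop code):

class MyFeatureAgentNotifyAPI(object):
    """Server-side helper that fans out RPC casts to all L2 agents."""

    def __init__(self, client, topic):
        self.client = client   # assumed to be an oslo.messaging RPCClient
        self.topic = topic     # e.g. a 'q-agent-notifier-...' topic

    def myfeature_update(self, context, entries):
        # Fan out so every agent can react, as l2-pop does with
        # add_fdb_entries / remove_fdb_entries.
        cctxt = self.client.prepare(topic=self.topic, fanout=True)
        cctxt.cast(context, 'myfeature_update', entries=entries)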



On Tue, Feb 25, 2014 at 9:23 PM, Sławek Kapłoński  wrote:
> Hello,
>
> Trinath, this presentation I saw before You send me it. There is nice
> explanation what methods are (and should be) in type driver and mech driver
> but I need exactly that information what sent me Assaf. Thanks both of You for
> Your help :)
>
> --
> Best regards
> Sławek Kapłoński
> Dnia wtorek, 25 lutego 2014 12:18:50 Assaf Muller pisze:
>
>> - Original Message -
>>
>> > Hi
>> >
>> > Hope this helps
>> >
>> > http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
>> >
>> > ___
>> >
>> > Trinath Somanchi
>> >
>> > _
>> > From: Sławek Kapłoński [sla...@kaplonski.pl]
>> > Sent: Tuesday, February 25, 2014 9:24 PM
>> > To: openstack-dev@lists.openstack.org
>> > Subject: [openstack-dev] Neutron ML2 and openvswitch agent
>> >
>> > Hello,
>> >
>> > I have question to You guys. Can someone explain me (or send to link
>> > with such explanation) how exactly ML2 plugin which is working on
>> > neutron server is communicating with compute hosts with openvswitch
>> > agents?
>>
>> Maybe this will set you on your way:
>> ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then
>> uses ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with
>> the topic stated in that file.
>>
>> When the message is received by the OVS agent, it calls:
>> neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_
>> update.
>> > I suppose that this is working with rabbitmq queues but I need
>> > to add own function which will be called in this agent and I don't know
>> > how to do that. It would be perfect if such think will be possible with
>> > writing for example new mechanical driver in ML2 plugin (but how?).
>> > Thanks in advance for any help from You :)
>> >
>> > --
>> > Best regards
>> > Slawek Kaplonski
>> > sla...@kaplonski.pl
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][nova] Neutron plugin authors: Does port status indicate liveness?

2014-02-19 Thread Mathieu Rohon
Hi Aaron,

You seem to have abandoned this patch:
https://review.openstack.org/#/c/74218/

You want Neutron to update the port status in Nova; can you please tell
us how you want to do that?

I think that we should use such a mechanism for live migration.
Live migration should occur once the port is set up on the destination
host. This could potentially resolve this bug:

https://bugs.launchpad.net/neutron/+bug/1274160

Best,

Mathieu

On Tue, Feb 18, 2014 at 2:55 AM, Aaron Rosen  wrote:
> Hi Maru,
>
> Thanks for getting this thread started. I've filed the following blueprint
> for this:
>
> https://blueprints.launchpad.net/nova/+spec/check-neutron-port-status
>
> and have a have a prototype of it working here:
>
> https://review.openstack.org/#/c/74197/
> https://review.openstack.org/#/c/74218/
>
> One part that threw me a little while getting this working is that if using
> ovs and the new libvirt_vif_driver LibvirtGenericVifDriver, nova no longer
> calls ovs-vsctl to set external_ids:iface-id and that libvirt automatically
> does that for you. Unfortunately, this data seems to only make it to ovsdb
> when the instance is powered on. Because of this I needed to add back those
> calls as neutron needs this data to be set in ovsdb before it can start
> wiring the ports.
>
> I'm hoping this change should help out with
> https://bugs.launchpad.net/neutron/+bug/1253896 but we'll see. I'm not sure
> if it's to late to merge this in icehouse but it might be worth considering
> if we find that it helps reduce gate failures.
>
> Best,
>
> Aaron
>
>
> On Thu, Feb 13, 2014 at 3:31 AM, Mathieu Rohon 
> wrote:
>>
>> +1 for this feature which could potentially resolve a race condition
>> that could occur after port-binding refactoring in ML2 [1].
>> in ML2, the port could be ACTIVE once a MD has bound the port. the
>> vif_type could then be known by nova, and nova could create the
>> network correctly thanks to vif_type and vif_details ( with
>> vif_security embedded [2])
>>
>>
>> [1]http://lists.openstack.org/pipermail/openstack-dev/2014-February/026750.html
>> [2]https://review.openstack.org/#/c/72452/
>>
>> On Thu, Feb 13, 2014 at 3:13 AM, Maru Newby  wrote:
>> > Booting a Nova instance when Neutron is enabled is often unreliable due
>> > to the lack of coordination between Nova and Neutron apart from port
>> > allocation.  Aaron Rosen and I have been talking about fixing this by 
>> > having
>> > Nova perform a check for port 'liveness' after vif plug and before vm boot.
>> > The idea is to have Nova fail the instance if its ports are not seen to be
>> > 'live' within a reasonable timeframe after plug.  Our initial thought is
>> > that the compute node would call Nova's networking subsystem which could
>> > query Neutron for the status of the instance's ports.
>> >
>> > The open question is whether the port 'status' field can be relied upon
>> > to become ACTIVE for all the plugins currently in the tree.  If this is not
>> > the case, please reply to this thread with an indication of how one would 
>> > be
>> > able to tell the 'liveness' of a port managed by the plugin you maintain.
>> >
>> > In the event that one or more plugins cannot reliably indicate port
>> > liveness, we'll need to ensure that the port liveness check can be
>> > optionally disabled so that the existing behavior of racing vm boot is
>> > maintained for plugins that need it.
>> >
>> > Thanks in advance,
>> >
>> >
>> > Maru
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][nova] Neutron plugin authors: Does port status indicate liveness?

2014-02-13 Thread Mathieu Rohon
+1 for this feature, which could potentially resolve a race condition
that could occur after the port-binding refactoring in ML2 [1].
In ML2, the port could become ACTIVE once an MD has bound the port. The
vif_type could then be known by Nova, and Nova could create the network
correctly thanks to vif_type and vif_details (with vif_security
embedded [2]).
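
For illustration, the liveness check discussed in this thread could look
roughly like the sketch below on the compute side (names, filters and
timings are assumptions, not the actual Nova patch):

import time

def wait_for_ports_active(neutron, port_ids, timeout=300, interval=2):
    """Return True once every port reports status ACTIVE, else False."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        ports = neutron.list_ports(id=port_ids)['ports']
        if ports and all(p['status'] == 'ACTIVE' for p in ports):
            return True
        time.sleep(interval)
    return False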

[1]http://lists.openstack.org/pipermail/openstack-dev/2014-February/026750.html
[2]https://review.openstack.org/#/c/72452/

On Thu, Feb 13, 2014 at 3:13 AM, Maru Newby  wrote:
> Booting a Nova instance when Neutron is enabled is often unreliable due to 
> the lack of coordination between Nova and Neutron apart from port allocation. 
>  Aaron Rosen and I have been talking about fixing this by having Nova perform 
> a check for port 'liveness' after vif plug and before vm boot.  The idea is 
> to have Nova fail the instance if its ports are not seen to be 'live' within 
> a reasonable timeframe after plug.  Our initial thought is that the compute 
> node would call Nova's networking subsystem which could query Neutron for the 
> status of the instance's ports.
>
> The open question is whether the port 'status' field can be relied upon to 
> become ACTIVE for all the plugins currently in the tree.  If this is not the 
> case, please reply to this thread with an indication of how one would be able 
> to tell the 'liveness' of a port managed by the plugin you maintain.
>
> In the event that one or more plugins cannot reliably indicate port liveness, 
> we'll need to ensure that the port liveness check can be optionally disabled 
> so that the existing behavior of racing vm boot is maintained for plugins 
> that need it.
>
> Thanks in advance,
>
>
> Maru
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] ARP Proxy in l2-population Mechanism Driver for OVS

2014-02-13 Thread Mathieu Rohon
Hi,

You can see in the review [1] that doude first proposed an ebtables
manager to implement the ARP responder for OVS. OVS 2.1 is now able to
manage a flow-based ARP responder [2], so he switched his
implementation to a flow-based ARP responder (please have a look at
the patch history).
The ebtables driver seemed more interesting since that implementation
would be compatible with any OVS version, but the VM needs to be plugged
into a linux bridge, and with the OVSFirewallDriver [3] nova won't need
to plug VMs into a linux bridge anymore, so an ARP responder based on
ebtables won't work.

[1]https://review.openstack.org/#/c/49227/
[2]https://review.openstack.org/#/c/49227/27/neutron/plugins/ml2/drivers/l2pop/README
[3]https://blueprints.launchpad.net/neutron/+spec/ovs-firewall-driver
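
For reference, here is a sketch of the kind of flow such a flow-based ARP
responder installs (requires OVS >= 2.1). The table number and the
add_flow() helper wrapping ovs-ofctl are assumptions; only the match and
the actions string are the point:

    import netaddr

    ARP_RESPONDER_TABLE = 21  # illustrative table number


    def install_arp_responder(bridge, local_vlan, ip, mac):
        """Answer ARP requests for `ip` locally instead of flooding them."""
        actions = (
            'move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],'  # reply goes back to asker
            'mod_dl_src:%(mac)s,'                       # source MAC = answered MAC
            'load:0x2->NXM_OF_ARP_OP[],'                # ARP opcode 2 = reply
            'move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],'
            'move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],'
            'load:0x%(mac_hex)x->NXM_NX_ARP_SHA[],'
            'load:0x%(ip_hex)x->NXM_OF_ARP_SPA[],'
            'in_port' % {'mac': mac,
                         'mac_hex': int(netaddr.EUI(mac)),
                         'ip_hex': int(netaddr.IPAddress(ip))})
        bridge.add_flow(table=ARP_RESPONDER_TABLE, priority=1, proto='arp',
                        dl_vlan=local_vlan, nw_dst=ip, actions=actions)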

On Thu, Feb 13, 2014 at 9:51 AM, Édouard Thuleau  wrote:
> Hi,
>
> On Havana, a local ARP responder is available if you use the ML2 with the
> l2-pop MD and the Linux Bridge (natively implemented by the Linux kernel
> VXLAN module).
> It's not (yet [1]) available with the OVS agent. The proposed OVS
> implementation use new OVS flows integrated on branch 2.1.
>
> Just few remarks about the ML2 MD l2-pop. Two important bugs persists:
> - One [2] impacts all the MD l2-pop (Linux Bridge and OVS agents). Merged on
> trunk and waiting to be backported [3]
> - Another one [4] impacts only the OVS agent stills waiting review.
>
> [1] https://review.openstack.org/#/c/49227/
> [2] https://review.openstack.org/#/c/63913/
> [3] https://review.openstack.org/#/c/71821/
> [4] https://review.openstack.org/#/c/63917/
>
> Édouard.
>
>
> On Thu, Feb 13, 2014 at 4:57 AM, Nick Ma  wrote:
>>
>> Hi all,
>>
>> I'm running a OpenStack Havana cloud on pre-production stage using
>> Neutron ML2 VxLAN. I'd like to incorporate l2-population to get rid of
>> tunnel broadcast.
>>
>> However, it seems that ARP Proxy has NOT been implemented yet for Open
>> vSwitch for Havana and also the latest master branch.
>>
>> I find that ebtables arpreply can do it and then put some corresponding
>> flow rules into OVS.
>>
>> Could anyone provide more hints on how to implement it in l2-pop?
>>
>> thanks,
>>
>> --
>>
>> Nick Ma
>> skywalker.n...@gmail.com
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?

2014-02-11 Thread Mathieu Rohon
Hi,

Mellanox is also developing an ML2 driver:
https://blueprints.launchpad.net/neutron/+spec/mlnx-ml2-support

The Havana release is already out, and we are currently working on
Icehouse. The code for Icehouse should be under review before Feb.
18th, so it would be difficult to have your code included in Icehouse.
I think you'd better target Juno, the next release.

On Tue, Feb 11, 2014 at 7:40 AM, Yang, Yi Y  wrote:
> Thank you for your detailed info, but I want to implement this in Havana 
> release, mlnx is a good reference, what I want to implement on Intel NIC is 
> similar to mlnx, but it is a standalone plugin and didn't use ML2 framework, 
> I want to use ML2 framework, I think nova has supported SR-IOV in Havana, so 
> I just need to implement Neutron part, I hope you can provide some guide 
> about this. BTW, We can't afford to wait Icehouse release.
>
> -Original Message-
> From: Irena Berezovsky [mailto:ire...@mellanox.com]
> Sent: Monday, February 10, 2014 8:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Yang, Yi Y
> Subject: RE: [openstack-dev] How to write a new neutron L2 plugin using ML2 
> framework?
>
> Hi,
> As stated below, we are already having this work both in nova and neuron.
> Please take a look at the following discussions:
> https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting
>
> For neutron part there are two different flavors that are coming as part of 
> this effort:
> 1. Cisco SRIOV supporting 802.1QBH - no L2 agent 2. Mellanox Flavor - SRIOV 
> embedded switch ("HW_VEB") - with L2 agent.
> My guess is that second flavor of SRIOV embedded switch should work for Intel 
> NICs as well.
>
> Please join the PCI pass-through meeting discussions to see that you do not 
> do any redundant work or just follow-up on mailing list.
>
> BR,
> Irena
>
>
> -Original Message-
> From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
> Sent: Monday, February 10, 2014 1:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 
> framework?
>
> Hi,
>
> SRIOV is under implementation in nova and neutron. Did you have a look to :
> https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
> https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
> https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov
>
>
> On Mon, Feb 10, 2014 at 7:27 AM, Isaku Yamahata  
> wrote:
>> On Sat, Feb 08, 2014 at 03:49:46AM +, "Yang, Yi Y"
>>  wrote:
>>
>>> Hi, All
>>
>> Hi.
>>
>>
>>> I want to write a new neutron L2 plugin using ML2 framework, I noticed 
>>> openvswitch and linxubridge have been ported into ML2 framework, but it 
>>> seems many code is removed compared to standalone L2 plugin, I guess some 
>>> code has been written into a common library. Now I want to write a L2 
>>> plugin to enable switch for a SR-IOV 10g NIC, I think I need to write as 
>>> follows:
>>
>
> having such a feature would be awesome : did you fill a BP for that?
>
>>
>>> 1. a new mechanism driver neutron/plugins/ml2/drivers/mech_XXX.py, but from 
>>> source code, it seems nothing to do.
>
> You mean, you want to use AgentMechanismDriverBase directly? this is an 
> abstract class du to check_segment_for_agent method.
>
>>
>> This requires to define how your plugin utilize network.
>> If multi tenant network is wanted, what/how technology will be used.
>> The common one is VLAN or tunneling(GRE, VXLAN).
>> This depends on what feature your NIC supports.
>>
>
>>> 2. a new agent neutron/plugins/XXX/ XXX_neutron_plugin.py
>
> I don't know if this would be mandatory. May be you can just add necessary 
> informations with extend_port_dict while your MD bind the port, as proposed 
> by this patch :
> https://review.openstack.org/#/c/69783/
>
> Nova will then configure the port correctly. The only need for an agent would 
> be to populate the agent DB with supported segment types, so that during 
> bind_port, the MD find an appropriate segment (with check_segment_for_agent).
>
>>>
>>> After this, an issue it how to let neutron know it and load it by default 
>>> or by configuration. Debugging is also an issue, nobody can write code 
>>> correctly once :-),  does neutron have any good debugging way for a newbie?
>>
>> LOG.debug and debug middle ware.
>> If there are any other better way, I'd also like to

Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?

2014-02-10 Thread Mathieu Rohon
Hi,

SRIOV is under implementation in nova and neutron. Did you have a look at:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov


On Mon, Feb 10, 2014 at 7:27 AM, Isaku Yamahata
 wrote:
> On Sat, Feb 08, 2014 at 03:49:46AM +,
> "Yang, Yi Y"  wrote:
>
>> Hi, All
>
> Hi.
>
>
>> I want to write a new neutron L2 plugin using ML2 framework, I noticed 
>> openvswitch and linxubridge have been ported into ML2 framework, but it 
>> seems many code is removed compared to standalone L2 plugin, I guess some 
>> code has been written into a common library. Now I want to write a L2 plugin 
>> to enable switch for a SR-IOV 10g NIC, I think I need to write as follows:
>

Having such a feature would be awesome: did you file a BP for that?

>
>> 1. a new mechanism driver neutron/plugins/ml2/drivers/mech_XXX.py, but from 
>> source code, it seems nothing to do.

You mean you want to use AgentMechanismDriverBase directly? This is
an abstract class due to the check_segment_for_agent method.

>
> This requires to define how your plugin utilize network.
> If multi tenant network is wanted, what/how technology will be used.
> The common one is VLAN or tunneling(GRE, VXLAN).
> This depends on what feature your NIC supports.
>

>> 2. a new agent neutron/plugins/XXX/ XXX_neutron_plugin.py

I don't know if this would be mandatory. Maybe you can just add the
necessary information with extend_port_dict while your MD binds the
port, as proposed by this patch:
https://review.openstack.org/#/c/69783/

Nova will then configure the port correctly. The only need for an
agent would be to populate the agent DB with supported segment types,
so that during bind_port the MD finds an appropriate segment (with
check_segment_for_agent).
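
To make that concrete, here is a minimal sketch of such an agent-based
mechanism driver. The base-class constructor arguments changed between
releases, and the agent type and capability values below are assumptions:

    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api
    from neutron.plugins.ml2.drivers import mech_agent

    AGENT_TYPE_FOO = 'Foo NIC agent'  # must match what the L2 agent reports


    class FooMechanismDriver(mech_agent.AgentMechanismDriverBase):

        def __init__(self):
            # Havana-style signature (agent_type, vif_type, cap_port_filter);
            # later releases replace the boolean with a vif_details dict.
            super(FooMechanismDriver, self).__init__(
                AGENT_TYPE_FOO, portbindings.VIF_TYPE_OTHER, False)

        def check_segment_for_agent(self, segment, agent):
            # Only bind segment types the NIC/agent can actually handle.
            configs = agent.get('configurations', {})
            supported = configs.get('supported_network_types', ['flat', 'vlan'])
            return segment[api.NETWORK_TYPE] in supported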

>>
>> After this, an issue it how to let neutron know it and load it by default or 
>> by configuration. Debugging is also an issue, nobody can write code 
>> correctly once :-),  does neutron have any good debugging way for a newbie?
>
> LOG.debug and debug middle ware.
> If there are any other better way, I'd also like to know.
>
> thanks,
>
>> I'm very eager to be able to get your help and sincerely thank you in 
>> advance.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Isaku Yamahata 
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-10 Thread Mathieu Rohon
Hi,

One other comment inline:

On Wed, Feb 5, 2014 at 5:01 PM, Robert Kukura  wrote:
> On 02/05/2014 09:10 AM, Henry Gessau wrote:
>> Bob, this is fantastic, I really appreciate all the detail. A couple of
>> questions ...
>>
>> On Wed, Feb 05, at 2:16 am, Robert Kukura  wrote:
>>
>>> A couple of interrelated issues with the ML2 plugin's port binding have
>>> been discussed over the past several months in the weekly ML2 meetings.
>>> These effect drivers being implemented for icehouse, and therefore need
>>> to be addressed in icehouse:
>>>
>>> * MechanismDrivers need detailed information about all binding changes,
>>> including unbinding on port deletion
>>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>>> * MechanismDrivers' bind_port() methods are currently called inside
>>> transactions, but in some cases need to make remote calls to controllers
>>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>>> * Semantics of concurrent port binding need to be defined if binding is
>>> moved outside the triggering transaction.
>>>
>>> I've taken the action of writing up a unified proposal for resolving
>>> these issues, which follows...
>>>
>>> 1) An original_bound_segment property will be added to PortContext. When
>>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>>> methods are called and a binding previously existed (whether its being
>>> torn down or not), this property will provide access to the network
>>> segment used by the old binding. In these same cases, the portbinding
>>> extension attributes (such as binding:vif_type) for the old binding will
>>> be available via the PortContext.original property. It may be helpful to
>>> also add bound_driver and original_bound_driver properties to
>>> PortContext that behave similarly to bound_segment and
>>> original_bound_segment.
>>>
>>> 2) The MechanismDriver.bind_port() method will no longer be called from
>>> within a transaction. This will allow drivers to make remote calls on
>>> controllers or devices from within this method without holding a DB
>>> transaction open during those calls. Drivers can manage their own
>>> transactions within bind_port() if needed, but need to be aware that
>>> these are independent from the transaction that triggered binding, and
>>> concurrent changes to the port could be occurring.
>>>
>>> 3) Binding will only occur after the transaction that triggers it has
>>> been completely processed and committed. That initial transaction will
>>> unbind the port if necessary. Four cases for the initial transaction are
>>> possible:
>>>
>>> 3a) In a port create operation, whether the binding:host_id is supplied
>>> or not, all drivers' port_create_precommit() methods will be called, the
>>> initial transaction will be committed, and all drivers'
>>> port_create_postcommit() methods will be called. The drivers will see
>>> this as creation of a new unbound port, with PortContext properties as
>>> shown. If a value for binding:host_id was supplied, binding will occur
>>> afterwards as described in 4 below.
>>>
>>> PortContext.original: None
>>> PortContext.original_bound_segment: None
>>> PortContext.original_bound_driver: None
>>> PortContext.current['binding:host_id']: supplied value or None
>>> PortContext.current['binding:vif_type']: 'unbound'
>>> PortContext.bound_segment: None
>>> PortContext.bound_driver: None
>>>
>>> 3b) Similarly, in a port update operation on a previously unbound port,
>>> all drivers' port_update_precommit() and port_update_postcommit()
>>> methods will be called, with PortContext properies as shown. If a value
>>> for binding:host_id was supplied, binding will occur afterwards as
>>> described in 4 below.
>>>
>>> PortContext.original['binding:host_id']: previous value or None
>>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>>> PortContext.original_bound_segment: None
>>> PortContext.original_bound_driver: None
>>> PortContext.current['binding:host_id']: current value or None
>>> PortContext.current['binding:vif_type']: 'unbound'
>>> PortContext.bound_segment: None
>>> PortContext.bound_driver: None
>>>
>>> 3c) In a port update operation on a previously bound port that does not
>>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>>> update_port_postcommit() methods will be called with PortContext
>>> properties reflecting unchanged binding states as shown.
>>>
>>> PortContext.original['binding:host_id']: previous value
>>> PortContext.original['binding:vif_type']: previous value
>>> PortContext.original_bound_segment: previous value
>>> PortContext.original_bound_driver: previous value
>>> PortContext.current['binding:host_id']: previous value
>>> PortContext.current['binding:vif_type']: previous value
>>> PortContext.bound_segment: previous value
>>> PortContext.bound_driver: previous value
>>>
>>> 3d) In a the port update operation on a previously bound port that does
>>> trigger unbinding or rebinding, all d

Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-05 Thread Mathieu Rohon
Hi,

Thanks for this great proposal.


On Wed, Feb 5, 2014 at 3:10 PM, Henry Gessau  wrote:
> Bob, this is fantastic, I really appreciate all the detail. A couple of
> questions ...
>
> On Wed, Feb 05, at 2:16 am, Robert Kukura  wrote:
>
>> A couple of interrelated issues with the ML2 plugin's port binding have
>> been discussed over the past several months in the weekly ML2 meetings.
>> These effect drivers being implemented for icehouse, and therefore need
>> to be addressed in icehouse:
>>
>> * MechanismDrivers need detailed information about all binding changes,
>> including unbinding on port deletion
>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>> * MechanismDrivers' bind_port() methods are currently called inside
>> transactions, but in some cases need to make remote calls to controllers
>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>> * Semantics of concurrent port binding need to be defined if binding is
>> moved outside the triggering transaction.
>>
>> I've taken the action of writing up a unified proposal for resolving
>> these issues, which follows...
>>
>> 1) An original_bound_segment property will be added to PortContext. When
>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>> methods are called and a binding previously existed (whether its being
>> torn down or not), this property will provide access to the network
>> segment used by the old binding. In these same cases, the portbinding
>> extension attributes (such as binding:vif_type) for the old binding will
>> be available via the PortContext.original property. It may be helpful to
>> also add bound_driver and original_bound_driver properties to
>> PortContext that behave similarly to bound_segment and
>> original_bound_segment.
>>
>> 2) The MechanismDriver.bind_port() method will no longer be called from
>> within a transaction. This will allow drivers to make remote calls on
>> controllers or devices from within this method without holding a DB
>> transaction open during those calls. Drivers can manage their own
>> transactions within bind_port() if needed, but need to be aware that
>> these are independent from the transaction that triggered binding, and
>> concurrent changes to the port could be occurring.
>>
>> 3) Binding will only occur after the transaction that triggers it has
>> been completely processed and committed. That initial transaction will
>> unbind the port if necessary. Four cases for the initial transaction are
>> possible:
>>
>> 3a) In a port create operation, whether the binding:host_id is supplied
>> or not, all drivers' port_create_precommit() methods will be called, the
>> initial transaction will be committed, and all drivers'
>> port_create_postcommit() methods will be called. The drivers will see
>> this as creation of a new unbound port, with PortContext properties as
>> shown. If a value for binding:host_id was supplied, binding will occur
>> afterwards as described in 4 below.
>>
>> PortContext.original: None
>> PortContext.original_bound_segment: None
>> PortContext.original_bound_driver: None
>> PortContext.current['binding:host_id']: supplied value or None
>> PortContext.current['binding:vif_type']: 'unbound'
>> PortContext.bound_segment: None
>> PortContext.bound_driver: None
>>
>> 3b) Similarly, in a port update operation on a previously unbound port,
>> all drivers' port_update_precommit() and port_update_postcommit()
>> methods will be called, with PortContext properies as shown. If a value
>> for binding:host_id was supplied, binding will occur afterwards as
>> described in 4 below.
>>
>> PortContext.original['binding:host_id']: previous value or None
>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>> PortContext.original_bound_segment: None
>> PortContext.original_bound_driver: None
>> PortContext.current['binding:host_id']: current value or None
>> PortContext.current['binding:vif_type']: 'unbound'
>> PortContext.bound_segment: None
>> PortContext.bound_driver: None
>>
>> 3c) In a port update operation on a previously bound port that does not
>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>> update_port_postcommit() methods will be called with PortContext
>> properties reflecting unchanged binding states as shown.
>>
>> PortContext.original['binding:host_id']: previous value
>> PortContext.original['binding:vif_type']: previous value
>> PortContext.original_bound_segment: previous value
>> PortContext.original_bound_driver: previous value
>> PortContext.current['binding:host_id']: previous value
>> PortContext.current['binding:vif_type']: previous value
>> PortContext.bound_segment: previous value
>> PortContext.bound_driver: previous value
>>
>> 3d) In a the port update operation on a previously bound port that does
>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>> update_port_postcommit() methods will be called with PortContext
>> properties reflecting the pr

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-20 Thread Mathieu Rohon
>> I don't think the MD and server have any business worrying
>> about which firewall driver class runs in the L2 agent. Theoretically,
>> the agent could be written in java;-). And don't forget that users may
>> want to plug in a custom firewall driver class instead.
>>
>> I think these are the options, in my descending or of current preference:
>>
>> 1) Configure firewall_driver only in the agent and pass vif_security
>> from the agent to the server. Each L2 agent gets the vif_security value
>> from its configured driver and includes it in the agents_db RPC data.
>> The MD copies the vif_security value from the agents_db to the port
>> dictionary.
>>
>> 2) Configure firewall_driver only in the agent but the hardwire
>> vif_security value for each MD. This is a reasonable short term solution
>> until we actually have multiple firewall drivers that can work with
>> single MD+agent.
>>
>> 3) Configure firewall_driver only in the agent and configure the
>> vif_security value for each MD in the server. This is a slight
>> improvement on #2 but doesn't handle the use case above. It seems more
>> complicated and error prone for the user than #1.
>>
>> 4) Configure the firewall_driver or security_group_mode for each MD in
>> the server. This would mean some new RPC is needed to for the agent to
>> fetch the fthis from the server at startup. This could be problematic if
>> the server isn't running when the L2 agent starts.
>
> Let's discuss more when you could have openflow based security group
> implementation.
>
> This is my thought for general architecture.
> - We should be able to manage such agent network behavior via Agent
> Resource REST API in the server.
> - The server should control agents,
> - Agents should have only rpc connection information.
>
> so I'm +1 for option4. Agent can't work without server anyway, and he
> can wait until it will be connected with servers.

I agree with this general architecture, but we still need some local
settings for the agent, such as the local bridge_mappings and local_ip. I
don't know how this information could be stored on the server side.

IMHO the main issue here, and for a lot of bugs in neutron (I'm
thinking about polling issues in the agent), comes from the fact that
port creation is delegated to nova. I think that neutron should be
able to create tap interfaces itself thanks to a mandatory l2-agent. This
agent would be responsible for creating the nwfilters used by libvirt for
MAC/IP anti-spoofing, for creating bridges/flows/iptables chains and
whatever else the tap needs to get connected correctly.

So with option 4 described above, the new RPC message to the selected
agent would be create_port([port_dict_extended_by_MD]), and the
firewall driver would get instantiated depending on the arguments of
create_port. get_device_details wouldn't be necessary anymore.
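
Purely as an illustration of that proposal (none of these RPC methods or
helpers exist in neutron; every name below is made up), the agent side
could look roughly like:

    class L2AgentRpcCallback(object):
        """Hypothetical agent-side endpoint for the proposed create_port RPC."""

        def create_port(self, context, port):
            tap_name = 'tap%s' % port['id'][:11]
            self._create_tap_device(tap_name)         # e.g. 'ip tuntap add ...'
            self._plug(tap_name, port['network_id'])  # bridges/flows/chains

            # The bound MD put its firewall choice into the port dict, so the
            # agent no longer needs its own firewall_driver config option.
            firewall = self._load_firewall_driver(port.get('firewall_mode'))
            firewall.prepare_port_filter(port)        # MAC/IP anti-spoofing, SGs

            # Nova then only needs the device name to hand over to libvirt.
            return {'device': tap_name}

        def _create_tap_device(self, name):
            raise NotImplementedError  # would shell out to 'ip'/'ovs-vsctl'

        def _plug(self, name, network_id):
            raise NotImplementedError

        def _load_firewall_driver(self, mode):
            raise NotImplementedError  # map mode -> a FirewallDriver class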

With such an architecture, we wouldn't have to tell neutron about
vif_security or vif_type when it creates a port. When Neutron gets
called with port_create, it should only return the created tap.

I don't know if such a proposal is reasonable since I can't find good
information about the ability of libvirt to use an already created
tap when it creates a VM. It seems to be usable with KVM.
But I would love to have feedback from the community on this
architecture. Maybe it has already been discussed on the ML, so
please give me the pointer.

Bests,
Mathieu

>
>>>>
>>> I agree with your thinking here Nachi. Leaving this as a global
>>> configuration makes the most sense.
>>>
>>>>
>>>>> Thanks,
>>>>>
>>>>> Amir
>>>>>
>>>>>
>>>>> On Jan 16, 2014, at 11:42 AM, Nachi Ueno  wrote:
>>>>>
>>>>>> Hi Mathieu, Bob
>>>>>>
>>>>>> Thank you for your reply
>>>>>> OK let's do (A) - (C) for now.
>>>>>>
>>>>>> (A) Remove firewall_driver from server side
>>>>>>Remove Noop <-- I'll write patch for this
>>
>> This gets replaced with the enable_security_groups server config, right?
>>
>>>>>>
>>>>>> (B) update ML2 with extend_port_dict <-- Bob will push new review for 
>>>>>> this
>>>>>>
>>>>>> (C) Fix vif_security patch using (1) and (2). <-- I'll update the
>>>>>> patch after (A) and (B) merged
>>>>>># config is hardwired for each mech drivers for now
>>
>> I completely agree with doing A, B, and C now. My understanding is that
>> this is equ

Re: [openstack-dev] [Neturon] firewall_driver and ML2 and vif_security discussion

2014-01-16 Thread Mathieu Rohon
Hi,

Your proposals make sense. Having the firewall driver configure so
many things looks pretty strange.
Enabling security groups should be a plugin/MD decision, not a driver decision.

For ML2, in a first implementation, having vif_security based on
vif_type looks good too.
Once the OVSFirewallDriver is available, the firewall drivers that
the operator wants to use should be in an MD config file/section, and
the OVS MD could bind one of the firewall drivers during
port_create/update/get.
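
For example, the OVS MD could then hardwire its VIF details at binding
time, along these lines (a sketch only; the set_binding()/vif_details
interface was still under review at the time, so the exact names are
assumptions):

    from neutron.extensions import portbindings
    from neutron.plugins.ml2 import driver_api as api

    # Could come from a per-MD config section once several firewall drivers
    # (iptables hybrid, a future OVS-flow-based one, ...) work with one agent.
    OVS_VIF_DETAILS = {portbindings.CAP_PORT_FILTER: True}


    class OvsLikeMechanismDriver(object):
        """Sketch only; a real driver implements the full ML2 driver API."""

        def bind_port(self, context):
            for segment in context.network.network_segments:
                if segment[api.NETWORK_TYPE] in ('local', 'flat', 'vlan',
                                                 'gre', 'vxlan'):
                    context.set_binding(segment[api.ID],
                                        portbindings.VIF_TYPE_OVS,
                                        OVS_VIF_DETAILS)
                    return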

Best,
Mathieu

On Wed, Jan 15, 2014 at 6:29 PM, Nachi Ueno  wrote:
> Hi folks
>
> Security group for OVS agent (ovs plugin or ML2) is being broken.
> so we need vif_security port binding to fix this
> (https://review.openstack.org/#/c/21946/)
>
> We got discussed about the architecture for ML2 on ML2 weekly meetings, and
> I wanna continue discussion in here.
>
> Here is my proposal for how to fix it.
>
> https://docs.google.com/presentation/d/1ktF7NOFY_0cBAhfqE4XjxVG9yyl88RU_w9JcNiOukzI/edit#slide=id.p
>
> Best
> Nachi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-15 Thread Mathieu Rohon
Hi,

Does anyone know if a multi-node environment is targeted by the
openstack-infra team?


On Tue, Jan 14, 2014 at 6:22 PM, Anita Kuno  wrote:
> On 01/14/2014 11:34 AM, Kyle Mestery wrote:
>> Given the Tempest Sprint in Montreal, I still think we should have this 
>> meeting
>> on IRC.
> Shouldn't this be, "given OpenStack is an open project [0], this meeting
> will take place on irc, just like all our other meetings."?
>
> Thanks,
> Anita.
>
> [0] https://wiki.openstack.org/wiki/Governance/Foundation/Mission
>
>  So, lets nail down the time as 2200 UTC on #openstack-meeting-alt
>> for tomorrow. If you can’t make it, I’ll send the meeting logs out.
>>
>> Thanks, look forward to seeing people there tomorrow!
>>
>> Kyle
>>
>> On Jan 14, 2014, at 9:49 AM, Lucas Eznarriaga  wrote:
>>
>>> Hi,
>>> I will also be available for a meeting tomorrow.
>>> @Mohammad, we are still working on our 3rd party testing setup so do not 
>>> take Midokura CI Bot votes too seriously yet.
>>> So far I have followed the links on the etherpad to have the jenkins+gerrit 
>>> trigger plugin working with the current setup that's what I haven't added 
>>> anything else yet.
>>>
>>> Cheers,
>>> Lucas
>>>
>>>
>>>
>>> On Tue, Jan 14, 2014 at 3:55 PM, Edgar Magana  wrote:
>>> I like it and I am in favor.
>>> Some of us, will be in Montreal attending the sprint tempest session. 
>>> Hopefully we can all take it from there.
>>>
>>> Edgar
>>>
 On Jan 14, 2014, at 6:31 AM, Kyle Mestery  wrote:

 Thanks for sending this note Mohammad. I am all in favor of another
 3rd party testing meeting on IRC. How about if we shoot for tomorrow,
 Wednesday the 15, at 2200 UTC? Please ack if that works for everyone.

 Thanks,
 Kyle

> On Jan 13, 2014, at 5:08 PM, Mohammad Banikazemi  wrote:
>
> Hi everybody,
>
> I see that we already have at least two 3rd party testing setups (from 
> Arista and Midokura) up and running. Noticed their votes on our newly 
> submitted plugin.
> The etherpad which is to be used for sharing information about setting up 
> 3rd party testing (as well as multi-node testing) [1] seems to have not 
> been updated recently. Would those who have setup their 3rd party testing 
> successfully be willing to share more information as to what they have 
> done and possibly update the etherpad?
>
> Would it be of value to others if we have another IRC meeting to discuss 
> this matter?
> (Kyle, I am sure you are busy so I took the liberty to send this note. 
> Please let us know what you think.)
>
> Thanks,
>
> Mohammad
>
>
> [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
>
> Kyle Mestery ---12/19/2013 09:17:44 AM---Apologies folks, I 
> meant 2200 UTC Thursday. We'll still do the meeting today.
>
> From:Kyle Mestery 
> To:"OpenStack Development Mailing List \(not for usage questions\)" 
> ,
> Date:12/19/2013 09:17 AM
> Subject:Re: [openstack-dev] [neutron] [third-party-testing] Reminder: 
>Meeting tomorrow
>
>
>
> Apologies folks, I meant 2200 UTC Thursday. We'll still do the
> meeting today.
>
>> On Dec 18, 2013, at 4:40 PM, Don Kehn  wrote:
>>
>> Wouldn't 2200 UTC be in about 20 mins?
>>
>>
>> On Wed, Dec 18, 2013 at 3:32 PM, Itsuro ODA  wrote:
>> Hi,
>>
>> It seems the meeting was not held on 2200 UTC on Wednesday (today).
>>
>> Do you mean 2200 UTC on Thursday ?
>>
>> Thanks.
>>
>> On Thu, 12 Dec 2013 11:43:03 -0600
>> Kyle Mestery  wrote:
>>
>>> Hi everyone:
>>>
>>> We had a meeting around Neutron Third-Party testing today on IRC.
>>> The logs are available here [1]. We plan to host another meeting
>>> next week, and it will be at 2200 UTC on Wednesday in the
>>> #openstack-meeting-alt channel on IRC. Please attend and update
>>> the etherpad [2] with any items relevant to you before then.
>>>
>>> Thanks again!
>>> Kyle
>>>
>>> [1] 
>>> http://eavesdrop.openstack.org/meetings/networking_third_party_testing/2013/
>>> [2] https://etherpad.openstack.org/p/multi-node-neutron-tempest
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> On Wed, 18 Dec 2013 15:10:46 -0600
>> Kyle Mestery  wrote:
>>
>>> Just a reminder, we'll be meeting at 2200 UTC on #openstack-meeting-alt.
>>> We'll be looking at this etherpad [1] again, and continuing discussions 
>>> from
>>> last week.
>>>
>>> Thanks!
>>> Kyle
>>>
>>> [1] https://etherpad.openstack.org/p/multi-node-neutron-tempest
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> Open

Re: [openstack-dev] [Neutron] Partially Shared Networks

2014-01-13 Thread Mathieu Rohon
Hi,

This is something that we could potentially implement during the
implementation of the isolated-network bp [1].
Basically, on an isolated network, an ARP responder will respond to
ARP requests. For an L2 network which is totally isolated, the ARP
responder will only respond to ARP requests for the gateway; other
broadcast requests will be dropped (except for DHCP requests).

We could enhance this feature to populate the ARP responder so that if
tenant A and tenant B want to be able to communicate on this shared
and isolated network, the ARP responder for tenant A's VM will be
populated with the MAC address of tenant B's VM, and vice versa.

[1] https://blueprints.launchpad.net/neutron/+spec/isolated-network
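
A speculative sketch of that enhancement (nothing like this exists today;
install_arp_responder() and the allowed_pairs structure are hypothetical):

    def populate_arp_responders(ports, allowed_pairs, install_arp_responder):
        """Install ARP responder entries only between tenants allowed to talk.

        ports: dicts with 'tenant_id', 'host', 'ip' and 'mac'.
        allowed_pairs: set of frozenset((tenant_a, tenant_b)) that may talk.
        """
        for requester in ports:
            for target in ports:
                if requester is target:
                    continue
                pair = frozenset((requester['tenant_id'], target['tenant_id']))
                if (requester['tenant_id'] == target['tenant_id']
                        or pair in allowed_pairs):
                    # The requester's hypervisor answers ARP for the target
                    # locally; unrelated tenants never learn each other's MACs.
                    install_arp_responder(requester['host'], target['ip'],
                                          target['mac'])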

On Fri, Jan 10, 2014 at 10:00 PM, Jay Pipes  wrote:
> On Fri, 2014-01-10 at 17:06 +, CARVER, PAUL wrote:
>> If anyone is giving any thought to networks that are available to
>> multiple tenants (controlled by a configurable list of tenants) but
>> not visible to all tenants I’d like to hear about it.
>>
>> I’m especially thinking of scenarios where specific networks exist
>> outside of OpenStack and have specific purposes and rules for who can
>> deploy servers on them. We’d like to enable the use of OpenStack to
>> deploy to these sorts of networks but we can’t do that with the
>> current “shared or not shared” binary choice.
>
> Hi Paul :) Please see here:
>
> https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg07268.html
>
> for a similar discussion.
>
> best,
> -jay
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

