Re: [openstack-dev] [neutron] - L3 flavors and issues with use cases for multiple L3 backends

2016-02-03 Thread Germy Lure
People need high performance but also xaaS integration, or slow-and-free but
with packet logging. And many back-ends have several of these
characteristics. Going by the example described in this thread, those
characteristics really should be modeled as different flavors.
Indeed, I think people just want to know what features each backend provides
and to choose one of them on which to deploy their business. The flavor
sub-system can make that choice easier.
So a flavor should be understandable to the user, and any change that is
visible to the user should introduce a NEW flavor: one flavor per vendor, or
even one flavor per version of a vendor's backend.

IMHO, there should be no interruption and no rescheduling. Everything should
be ready when the user creates a router, according to the flavor obtained
from Neutron.
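
To make the idea concrete, here is a minimal Python sketch of how a flavored
router request might be dispatched to one of the drivers registered for that
flavor. It is illustrative only; the names (FLAVOR_DRIVERS, pick_driver,
create_router) are hypothetical and are not Neutron's actual flavor-framework
code.

# Hypothetical sketch of flavor-to-driver dispatch; names are illustrative only.
FLAVOR_DRIVERS = {
    'gold':   ['vendor_a_hw_router', 'vendor_b_hw_router'],  # high performance
    'silver': ['reference_l3_agent'],                         # slow but free
}

def pick_driver(flavor_name):
    """Choose one of the drivers registered for a flavor (the 'scheduling' step)."""
    drivers = FLAVOR_DRIVERS.get(flavor_name)
    if not drivers:
        raise ValueError('unknown flavor: %s' % flavor_name)
    # With one driver per flavor this step is trivial; with several drivers the
    # choice may depend on capabilities or interface placement, which is exactly
    # the coordination problem discussed in this thread.
    return drivers[0]

def create_router(flavor_name, router_body):
    driver = pick_driver(flavor_name)
    return driver, router_body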

Thanks.
Germy


On Wed, Feb 3, 2016 at 12:01 PM, rzang  wrote:

> Is it possible that the third router interface that the user wants to add
> will bind to a provider network that the chosen driver (for bare metal
> routers) can not access physically? Even though the chosen driver has the
> capability for that type of network? Is it a third dimension that needs to
> take into consideration besides flavors and capabilities? If this case is
> possible, it is a problem even we restrict all the drivers in the same
> flavor should have the same capability set.
>
>
> -- Original --
> *From: * "Kevin Benton";;
> *Send time:* Wednesday, Feb 3, 2016 9:43 AM
> *To:* "OpenStack Development Mailing List (not for usage questions)"<
> openstack-dev@lists.openstack.org>;
> *Subject: * Re: [openstack-dev] [neutron] - L3 flavors and issues with
> use cases for multiple L3 backends
>
> So flavors are for routers with different behaviors that you want the user
> to be able to choose from (e.g. High performance, slow but free, packet
> logged, etc). Multiple drivers are for when you have multiple backends
> providing the same flavor (e.g. The high performance flavor has several
> drivers for various bare metal routers).
> On Feb 2, 2016 18:22, "rzang"  wrote:
>
>> What advantage can we get from putting multiple drivers into one flavor
>> over strictly limit one flavor one driver (or whatever it is called).
>>
>> Thanks,
>> Rui
>>
>> -- Original --
>> *From: * "Kevin Benton";;
>> *Send time:* Wednesday, Feb 3, 2016 8:55 AM
>> *To:* "OpenStack Development Mailing List (not for usage questions)"<
>> openstack-dev@lists.openstack.org>;
>> *Subject: * Re: [openstack-dev] [neutron] - L3 flavors and issues with
>> use cases for multiple L3 backends
>>
>> Choosing from multiple drivers for the same flavor is scheduling. I
>> didn't mean automatically selecting other flavors.
>> On Feb 2, 2016 17:53, "Eichberger, German" 
>> wrote:
>>
>>> Not that you could call it scheduling. The intent was that the user
>>> could pick the best flavor for his task (e.g. a gold router as opposed to a
>>> silver one). The system then would “schedule” the driver configured for
>>> gold or silver. Rescheduling wasn’t really a consideration…
>>>
>>> German
>>>
>>> From: Doug Wiegley <doug...@parksidesoftware.com>
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev@lists.openstack.org>
>>> Date: Monday, February 1, 2016 at 8:17 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev@lists.openstack.org>
>>> Subject: Re: [openstack-dev] [neutron] - L3 flavors and issues with use
>>> cases for multiple L3 backends
>>>
>>> Yes, scheduling was a big gnarly wart that was punted for the first
>>> pass. The intention was that any driver you put in a single flavor had
>>> equivalent capabilities/plumbed to the same networks/etc.
>>>
>>> doug
>>>
>>>
>>> On Feb 1, 2016, at 7:08 AM, Kevin Benton <blak...@gmail.com> wrote:
>>>
>>>
>>> Hi all,
>>>
>>> I've been working on an implementation of the multiple L3 backends
>>> RFE[1] using the flavor framework and I've run into some snags with the
>>> use-cases.[2]
>>>
>>> The first use cases are relatively straightforward where the user
>>> requests a specific flavor and that request gets dispatched to a driver
>>> associated with that flavor via a service profile. However, several of the
>>> use-cases are based around the idea that there is a single flavor with
>>> multiple drivers and a specific driver will need to be used depending on
>>> the placement of the router interfaces. i.e. a router cannot be bound to a
>>> driver until an interface is attached.
>>>
>>> This creates some painful coordination problems amongst drivers. For
>>> example, say the first two networks that a user attaches a router to can be
>>> reached by all drivers because they use overlays so the first driver chosen
>>> by the 

Re: [openstack-dev] [neutron][fwaas]some architectural advice on fwaas driver writing

2015-11-22 Thread Germy Lure
Hi,
Under the current FWaaS architecture or framework, integrating only a
hardware firewall is not easy. It requires Neutron to support multiple
vendors at the service level; in other words, vendors must fit their services
to one another, while today each vendor simply provides all services through
its own controller.

I think the root cause is that Neutron just doesn't know how the network
devices are connected to each other. Neutron provides FW, LB, VPN and other
advanced network functions as services, but as the implementation layer it
needs topology information to make the right decision and route traffic to
the right device. For example, to steer traffic from a namespace router to a
hardware firewall, Neutron should add some internal routes, or even extra L3
interfaces, according to the connection relationship between them. If the
firewall service is integrated with the router, as with Vyatta, it's simple:
the only thing you need to do is enable the firewall itself.

All in all, this requires linkage between services, especially between the
advanced services and the L3 router.

Germy
.

On Fri, Nov 20, 2015 at 9:19 PM, Somanchi Trinath <
trinath.soman...@freescale.com> wrote:

> Hi-
>
>
>
> As I understand you are not sure on “How to locate the Hardware Appliance”
> which you have as your FW?
>
>
>
> Am I right?  If so you can look into,
> https://github.com/jumpojoy/generic_switch kind of approach.
>
>
>
> -
>
> Trinath
>
>
>
>
>
>
>
> *From:* Oguz Yarimtepe [mailto:oguzyarimt...@gmail.com]
> *Sent:* Friday, November 20, 2015 5:52 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron][fwaas]some architectural advice
> on fwaas driver writing
>
>
>
> I created a sample driver by looking at vArmour driver that is at the
> Github FWaaS repo. I am planning to call the FW's REST API from the
> suitable functions.
>
> The problem is, i am still not sure how to locate the hardware appliance.
> One of the FWaaS guy says that Service Chaining can help, any body has an
> idea or how to insert the fw to OpenStack?
>
> On 11/02/2015 02:36 PM, Somanchi Trinath wrote:
>
> Hi-
>
>
>
> I’m confused. Do you really have an PoC implementation of what is to be
> achieved?
>
>
>
> As I look into these type of Implementations, I would prefer to have proxy
> driver/plugin to get the configuration from Openstack to external
> controller/device and do the rest of the magic.
>
>
>
> -
>
> Trinath
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Germy Lure
I don't know if this would make more sense. Let's assume that
we add arbitrary blobs (ABs) to IPAM, or even to every Neutron object. What
would happen? People could do anything via those APIs. Any new
attribute, or even a whole model, could be passed through those
so-called ABs. Architecture issues aside, I think people like
Shraddha would never report any case to the community. People would not
even need the community, because they could define an object that
contains only an id and an AB, e.g. a Port like this:
{
    "id": "",      # uuid format
    "params": {}   # an arbitrary JSON dictionary
}
Everything can be stuffed into this *Big Box*. Is that an API?

But on the other hand, if we don't have such a blob, people must
extend the API and add extra tables themselves, or push the community to
approve and merge the feature, which is a long cycle. In the end, people
will think that Neutron is hard to use, with so many limitations,
and that it updates at a snail's pace.

It's a difficult call, but it's time to make a decision. OK, I prefer adding it.

Thanks.
Germy
.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-16 Thread Germy Lure
Hi Salvatore,

Thanks so much for your continuing answers. What you mentioned is partially
where I was going. Indeed, I want to solve the whole consistency issue, not
just at startup. I just thought that ensuring consistency at startup and on
each operation would be enough, and that a periodic task needs more resources.

OK, anyway, I will think about filing a bug or a BP to push this forward.
Thanks for the help. I will get back to you if required.

Germy
.



On Fri, Oct 16, 2015 at 1:46 AM, Salvatore Orlando <salv.orla...@gmail.com>
wrote:

> Hi Germy,
>
> It seems that you're looking at solutions for ensuring consistency between
> the "desired" configuration (Neutron), and the actual one (whatever is in
> your backend) at startup.
> This has been discussed several times in the past - not just for
> synchronization at startup, but also for ensuring neutron and the backend
> are in sync at each operation.
>
> At a very high level I think a "general" solution is only partially
> possible. At some point there must be a plugin interface that verifies
> whether, for a given resource, data on the backend differ from those in
> neutron.
> The component which evaluates the result of such operation and updates the
> status of the resources being synchronised could instead be shared across
> plugins.
> For the ML2 plugin I don't see any architectural difference, beyond the
> fact that the plugin level operation should probably query all the
> mechanism drivers.
>
> Anyway, If this is something you'd like to see implemented (regardless of
> whether my analysis matches your use case) you should considering filing a
> RFE bug so that it will be considered during the drivers meetings.
>
> Salvatore
>
> On 14 October 2015 at 11:43, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Hi Salvatore and Kevin,
>>
>> I'm sorry for replying so late.
>> I wanted to see whether the community had considered data sync for these
>> two style(agent and controller) integration. To solve integrating multiple
>> vendor's controllers, I need some help from community. That's the original
>> purpose of this thread. In another word, I had no idea when I sent this
>> message and I just asked some help.
>>
>> Anyway, the issues I mentioned last mail are exists. We still need face
>> them. I have some rough ideas for your reference.
>>
>> 1.try best to keep the source is correct.
>> Think about CREATE operation, if the backend was be in exception and
>> Neutron is timeout, then the record should be destroyed or marked ERROR to
>> warn the operator. If Neutron was be in exception, the backend will has an
>> extra record. To avoid this, Neutron could store and mark a record
>> CREATE_PENDING before push it to backend, then scan data and check with the
>> backend after restarting when exception occurs. If the record in Neutron is
>> extra, destroy or mark ERROR to warn the operator. UPDATE and DELETE need
>> similar logic.
>> Currently in Neutron, some objects have defined XX_PENDING and some not.
>> 2.check each other when they restart.
>> After restarting, the backend should report the states of all objects and
>> may re-load data from Neutron to rebuild or check local data. When Neutron
>> restarting, it should get data from backend and check it. Maybe, it can
>> notify backend, and backend act as it just restarted.
>> All in all, I think it's enough that keeping the data be correct when you
>> write(CUD) it and check it when restarting.
>>
>> About implementation, I think a common frame is best. Plugins or even
>> drivers just provide methods for backend to load data, update state and
>> etc.
>>
>> As I mentioned earlier, this is just a rough and superficial idea. Any
>> comment please.
>>
>> Thanks,
>> Germy
>> .
>>
>>
>>
>> On Tue, Oct 13, 2015 at 3:28 AM, Kevin Benton <blak...@gmail.com> wrote:
>>
>>> >*But there is no such a feature in Neutron. Right? Will the community
>>> merge it soon? And can we consider it with agent-style mechanism together?*
>>>
>>> The agents have their own mechanisms for getting information from the
>>> server. The community has no plans to merge a feature that is going to be
>>> different for almost every vendor.
>>>
>>> We tried to come up with some common syncing stuff in the recent ML2
>>> meeting, the various backends had different methods of detecting when they
>>> were out of sync with Neutron (e.g. headers in hashes, recording errors,
>>> etc), all of which depended on the capabilities of the backend. Then the
>>> sync 

[openstack-dev] [neutron]How to install lbaas integrating with barbican?

2015-10-16 Thread Germy Lure
Hi stackers,

I plan to test the HTTPS functionality of LBaaS with Barbican. Can anyone
share some links to guides covering installation, deployment and operation?

Thank you.
Germy
.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-14 Thread Germy Lure
Hi Salvatore,
Thank you so much.
I think I see your points now. As a next step, I will give it a try and verify.

Many thanks.
Germy
.


On Mon, Oct 12, 2015 at 11:11 PM, Salvatore Orlando <salv.orla...@gmail.com>
wrote:

> Inline,
> Salvatore
>
> On 12 October 2015 at 10:23, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Thank you, Kevin.
>> So the community just divided the whole openstack into separate
>> sub-projects(Nova,Neutron and etc.) but it's not taken into account that if
>> those modules can work together with different versions. Yes?
>>
>
> The developer community has been addressing this by ensuring, to some
> extent, backward compatibility between the APIs used for communicating
> across services. This is what allows a component at version X to operate
> with another component at version Y.
>
> In the case of Neutron and Nova, this is only done with REST over HTTP.
> Other projects also use RPC over AMQP.
> Neutron strived to be backward compatible since the v2 API was introduced
> in Folsom. Therefore you should be able to run Neutron Kilo with Nova
> Havana; as Kevin noted, you might want to disable notifications on the
> Neutron side as the nova extension that processes them does not exist in
> Havana.
>
>
>
>>
>> If so, is it possible to keep being compatible with each other in
>> technology? How about just N+1? And how about just in Neutron?
>>
>
> While it is surely possible, enforcing this, as far as I can tell, is not
> a requirement for Openstack projects. Indeed, it is not something which is
> tested in the gate. It would be interesting to have it as a part of a
> rolling upgrade test for an OpenStack cloud, where, for instance, you first
> upgrade the networking service and then the compute service. But beyond
> that I do not think the upstream developer community should provide any
> additional guarantee, notwithstanding guarantees on API backward
> compatibility.
>
>
>> Germy
>> .
>>
>> On Sun, Oct 11, 2015 at 4:33 PM, Kevin Benton <blak...@gmail.com> wrote:
>>
>>> For the particular Nova Neutron example, the Neutron Kilo API should
>>> still be compatible with the calls Havana Nova makes. I think you will need
>>> to disable the Nova callbacks on the Neutron side because the Havana
>>> version wasn't expecting them.
>>>
>>> I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
>>> but I haven't tried a gap that big.
>>>
>>> Cheers,
>>> Kevin Benton
>>>
>>> On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure <germy.l...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> As you know, openstack projects are developed separately. And
>>>> theoretically, people can create networks with Neutron in Kilo version for
>>>> Nova in Havana version.
>>>>
>>>> Did Anyone tried it?
>>>> Do we have some pages to show what combination can work together?
>>>>
>>>> Thanks.
>>>> Germy
>>>> .
>>>>
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe:
>>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-14 Thread Germy Lure
Hi Salvatore and Kevin,

I'm sorry for replying so late.
I wanted to see whether the community had considered data sync for these two
styles of integration (agent and controller). To integrate multiple vendors'
controllers, I need some help from the community; that was the original
purpose of this thread. In other words, I had no concrete proposal when I
sent this message, I was just asking for help.

Anyway, the issues I mentioned in my last mail still exist, and we still need
to face them. I have some rough ideas for your reference.

1. Try our best to keep the source of truth correct.
Think about a CREATE operation: if the backend hits an exception and Neutron
times out, the record should be destroyed or marked ERROR to warn the
operator. If Neutron hits an exception, the backend will hold an extra
record. To avoid this, Neutron could store and mark a record CREATE_PENDING
before pushing it to the backend, then scan the data and check it against the
backend after restarting from the exception. If the record in Neutron turns
out to be the extra one, destroy it or mark it ERROR to warn the operator.
UPDATE and DELETE need similar logic.
Currently in Neutron, some objects define XX_PENDING states and some do not.
2. Check each other when they restart.
After restarting, the backend should report the states of all objects and may
reload data from Neutron to rebuild or check its local data. When Neutron
restarts, it should get the data from the backend and check it; alternatively,
it can notify the backend, and the backend can act as if it had just
restarted.
All in all, I think it is enough to keep the data correct when you write it
(CUD) and to check it when restarting.

About implementation, I think a common framework is best: plugins or even
drivers would just provide methods for the backend to load data, update
state, etc.
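
To make the first point concrete, here is a rough pseudocode sketch of the
CREATE flow with a pending state plus a reconciliation pass at startup. It is
only an illustration of the idea: the state names and the db/backend helpers
(save, update, list, create, has) are hypothetical, not existing Neutron code.

# Rough sketch only; state names and helper methods are hypothetical.
CREATE_PENDING, ACTIVE, ERROR = 'CREATE_PENDING', 'ACTIVE', 'ERROR'

def create_resource(db, backend, resource):
    db.save(resource, status=CREATE_PENDING)    # 1. record the intent first
    try:
        backend.create(resource)                # 2. push it to the backend
    except Exception:
        db.update(resource, status=ERROR)       # warn the operator
        raise
    db.update(resource, status=ACTIVE)          # 3. both sides now agree

def reconcile_on_restart(db, backend):
    # Any record still pending after a crash is suspect: either confirm it
    # against the backend or mark it ERROR so the operator can decide.
    for resource in db.list(status=CREATE_PENDING):
        if backend.has(resource):
            db.update(resource, status=ACTIVE)
        else:
            db.update(resource, status=ERROR)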

As I mentioned earlier, this is just a rough and superficial idea. Any
comments are welcome.

Thanks,
Germy
.



On Tue, Oct 13, 2015 at 3:28 AM, Kevin Benton <blak...@gmail.com> wrote:

> >*But there is no such a feature in Neutron. Right? Will the community
> merge it soon? And can we consider it with agent-style mechanism together?*
>
> The agents have their own mechanisms for getting information from the
> server. The community has no plans to merge a feature that is going to be
> different for almost every vendor.
>
> We tried to come up with some common syncing stuff in the recent ML2
> meeting, the various backends had different methods of detecting when they
> were out of sync with Neutron (e.g. headers in hashes, recording errors,
> etc), all of which depended on the capabilities of the backend. Then the
> sync method itself was different between backends (sending deltas, sending
> entire state, sending a replay log, etc).
>
> About the only thing they have in common is that they need a way detect if
> they are out of sync and they need a method to sync. So that's two abstract
> methods, and we likely can't even agree on when they should be called.
>
> Echoing Salvatore's comments, what is it that you want to see?
>
> On Mon, Oct 12, 2015 at 12:29 AM, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Hi Kevin,
>>
>> *Thank you for your response. Periodic data checking is a popular and
>> effective method to sync info. But there is no such a feature in Neutron.
>> Right? Will the community merge it soon? And can we consider it with
>> agent-style mechanism together?*
>>
>> Vendor-specific extension or coding a periodic task private by vendor is
>> not a good solution, I think. Because it means that Neutron-Sever could not
>> integrate with multiple vendors' controller and even the controller of
>> those vendors that introduced this extension or task could not integrate
>> with a standard community Neutron-Server.
>> That is just the tip of the iceberg. Many of the other problems
>> resulting, such as fixing bugs,upgrade,patch and etc.
>> But wait, is it a vendor-specific feature? Of course not. All software
>> systems need data checking.
>>
>> Many thanks.
>> Germy
>>
>>
>> On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton <blak...@gmail.com> wrote:
>>
>>> You can have a periodic task that asks your backend if it needs sync
>>> info.
>>> Another option is to define a vendor-specific extension that makes it
>>> easy to retrieve all info in one call via the HTTP API.
>>>
>>> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure <germy.l...@gmail.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> After restarting, Agents load data from Neutron via RPC. What about
>>>> 3-rd controller? They only can re-gather data via NBI. Right?
>>>>
>>>> Is it possible to provide some mechanism for those controllers and
>>>> agents to sync data? or something else I 

Re: [openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-12 Thread Germy Lure
Hi Kevin,

*Thank you for your response. Periodic data checking is a popular and
effective way to sync information. But there is no such feature in Neutron,
right? Will the community merge one soon? And can we consider it together
with the agent-style mechanism?*

A vendor-specific extension, or a periodic task coded privately by each
vendor, is not a good solution, I think, because it means that Neutron-Server
could not integrate with multiple vendors' controllers, and the controllers
of the vendors that introduced such an extension or task could not integrate
with a standard community Neutron-Server.
That is just the tip of the iceberg; many other problems follow from it, such
as bug fixing, upgrades, patching, etc.
But wait, is this a vendor-specific feature? Of course not. All software
systems need data checking.
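
As an illustration only, a generic periodic check could look roughly like the
sketch below. The two driver hooks (is_out_of_sync, resync) are hypothetical;
Neutron has no such common framework today, which is exactly the gap being
discussed.

# Sketch of a generic periodic consistency check; the driver hooks are
# hypothetical, not an existing Neutron API.
import time

def periodic_consistency_check(drivers, interval=60):
    while True:
        for driver in drivers:
            if driver.is_out_of_sync():   # backend-specific detection
                driver.resync()           # backend-specific sync method
        time.sleep(interval)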

Many thanks.
Germy


On Sun, Oct 11, 2015 at 4:28 PM, Kevin Benton <blak...@gmail.com> wrote:

> You can have a periodic task that asks your backend if it needs sync info.
> Another option is to define a vendor-specific extension that makes it easy
> to retrieve all info in one call via the HTTP API.
>
> On Sat, Oct 10, 2015 at 2:24 AM, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Hi all,
>>
>> After restarting, Agents load data from Neutron via RPC. What about 3-rd
>> controller? They only can re-gather data via NBI. Right?
>>
>> Is it possible to provide some mechanism for those controllers and agents
>> to sync data? or something else I missed?
>>
>> Thanks
>> Germy
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-12 Thread Germy Lure
Thank you, Kevin.
So the community just divided the whole of OpenStack into separate
sub-projects (Nova, Neutron, etc.), but it has not been considered whether
those modules can work together at different versions. Yes?

If so, is it technically possible to keep them compatible with each other?
How about just N+1? And how about just within Neutron?

Germy
.

On Sun, Oct 11, 2015 at 4:33 PM, Kevin Benton <blak...@gmail.com> wrote:

> For the particular Nova Neutron example, the Neutron Kilo API should still
> be compatible with the calls Havana Nova makes. I think you will need to
> disable the Nova callbacks on the Neutron side because the Havana version
> wasn't expecting them.
>
> I've tried out many N+1 combinations (e.g. Icehouse + Juno, Juno + Kilo)
> but I haven't tried a gap that big.
>
> Cheers,
> Kevin Benton
>
> On Sat, Oct 10, 2015 at 1:50 AM, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Hi all,
>>
>> As you know, openstack projects are developed separately. And
>> theoretically, people can create networks with Neutron in Kilo version for
>> Nova in Havana version.
>>
>> Did Anyone tried it?
>> Do we have some pages to show what combination can work together?
>>
>> Thanks.
>> Germy
>> .
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]Anyone tried to mix-use openstack components or projects?

2015-10-10 Thread Germy Lure
Hi all,

As you know, OpenStack projects are developed separately, and theoretically
people can create networks with a Kilo-version Neutron for a Havana-version
Nova.

Has anyone tried it?
Do we have any pages showing which combinations work together?

Thanks.
Germy
.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]What happened when the 3-rd controller restarted?

2015-10-10 Thread Germy Lure
Hi all,

After restarting, agents load data from Neutron via RPC. What about a
third-party controller? It can only re-gather data via the NBI, right?

Is it possible to provide some mechanism for those controllers and agents to
sync data? Or is there something else I missed?

Thanks
Germy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Port forwarding

2015-09-09 Thread Germy Lure
Hi Gal,

Congratulations, you finally understand what I mean.

Yes, in bulk. But I don't think of that as just an enhancement to the API;
the bulk operation is the more common scenario. It is more useful and also
covers the single port-mapping scenario.

By the way, a bulk operation may apply to a subnet, to a range (IP1 to
IP100), or even to all the VMs behind a router. Perhaps we need to make a
choice between them, and I prefer "range", because it is more flexible and
easier to use.
Many thanks.
Germy

On Wed, Sep 9, 2015 at 3:30 AM, Carl Baldwin  wrote:

> On Tue, Sep 1, 2015 at 11:59 PM, Gal Sagie  wrote:
> > Hello All,
> >
> > I have searched and found many past efforts to implement port forwarding
> in
> > Neutron.
>
> I have heard a few express a desire for this use case a few times in
> the past without gaining much traction.  Your summary here seems to
> show that this continues to come up.  I would be interested in seeing
> this move forward.
>
> > I have found two incomplete blueprints [1], [2] and an abandoned patch
> [3].
> >
> > There is even a project in Stackforge [4], [5] that claims
> > to implement this, but the L3 parts in it seems older then current
> master.
>
> I looked at this stack forge project.  It looks like files copied out
> of neutron and modified as an alternative to proposing a patch set to
> neutron.
>
> > I have recently came across this requirement for various use cases, one
> of
> > them is
> > providing feature compliance with Docker port-mapping feature (for
> Kuryr),
> > and saving floating
> > IP's space.
>
> I think both of these could be compelling use cases.
>
> > There has been many discussions in the past that require this feature,
> so i
> > assume
> > there is a demand to make this formal, just a small examples [6], [7],
> [8],
> > [9]
> >
> > The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
> the
> > external router
> > leg from the public network to internal ports, so user can use one
> Floating
> > IP (the external
> > gateway router interface IP) and reach different internal ports
> depending on
> > the port numbers.
> > This should happen on the network node (and can also be leveraged for
> > security reasons).
>
> I'm sure someone will ask how this works with DVR.  It should be
> implemented so that it works with a DVR router but it will be
> implemented in the central part of the router.  Ideally, DVR and
> legacy routers work the same in this regard and a single bit of code
> will implement it for both.  If this isn't the case, I think that is a
> problem with our current code structure.
>
> > I think that the POC implementation in the Stackforge project shows that
> > this needs to be
> > implemented inside the L3 parts of the current reference implementation,
> it
> > will be hard
> > to maintain something like that in an external repository.
> > (I also think that the API/DB extensions should be close to the current
> L3
> > reference
> > implementation)
>
> Agreed.
>
> > I would like to renew the efforts on this feature and propose a RFE and a
> > spec for this to the
> > next release, any comments/ideas/thoughts are welcome.
> > And of course if any of the people interested or any of the people that
> > worked on this before
> > want to join the effort, you are more then welcome to join and comment.
>
> I have added this to the agenda for the Neutron drivers meeting.  When
> the team starts to turn its eye toward Mitaka, we'll discuss it.
> Hopefully that will be soon as I'm started to think about it already.
>
> I'd like to see how the API for this will look.  I don't think we'll
> need more detail that that for now.
>
> Carl
>
> > [1]
> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> > [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> > [3] https://review.openstack.org/#/c/60512/
> > [4] https://github.com/stackforge/networking-portforwarding
> > [5] https://review.openstack.org/#/q/port+forwarding,n,z
> >
> > [6]
> >
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> > [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> > [8]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> > [9]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
> >
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Port forwarding

2015-09-08 Thread Germy Lure
Hi Gal,

Thank you for your explanation.
As you mentioned, PF is a way of reusing a floating IP to reach several
Neutron ports, and I agree with your point of view completely.
Let me extend your example to explain where I was going.
T1 has 20 subnets behind a router, and one of them is 10.0.0.0/24, named s1.
There are 100 VMs, named VM1~VM100, in subnet s1, and T1 wants to update the
same file (or something else) in all of those VMs. Let's look at how T1 would
do it.

T1 invokes the Neutron API to create a port mapping for VM1 (maybe this will
be done by the operator), for example:  172.20.20.10:4001  =>  10.0.0.1:80
Then T1 performs the update task via 172.20.20.10:4001.

Now, for VM2, VM3, ... VM100, T1 must repeat the steps above with different
ports, and T1 must clean up those records (100 records in the DB) afterwards.
That is bad, I think.
Note that T1 still has 19 subnets to deal with. That is a nightmare for T1.
For PaaS and SaaS it is also a big problem.

So, can we do it like this?
T1 invokes the Neutron API once for s1 (not for VM1), and Neutron sets up a
group of port mappings, for example:
172.20.20.10:4001  =>  10.0.0.1:80
172.20.20.10:4002  =>  10.0.0.2:80
172.20.20.10:4003  =>  10.0.0.3:80
..   ..
172.20.20.10:4100  =>  10.0.0.100:80
Now T1 just needs to focus on the business work, not on PF.

We would store just one record in the Neutron DB for such a one-time API
invocation. For the single-VM case, we can specify a private IP range instead
of a subnet, for example 10.0.0.1 to 10.0.0.3. The mapped ports (4001, 4002,
...) can be returned in the response body, for example 4001 to 4003, or we
can just return a base number (4000) and let the upper layer derive the rest,
for example 4000+1, where 1 is the last octet of VM1's private IP address.
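
For illustration, such a one-shot bulk request and the derived mapping might
look roughly like the sketch below. This API does not exist in Neutron today;
every field and function name here is hypothetical.

# Hypothetical bulk port-forwarding request; not an existing Neutron API.
# One call covers a whole subnet (or an IP range) instead of a single VM.
bulk_pf_request = {
    "port_forwarding": {
        "floating_ip": "172.20.20.10",
        "inside_subnet": "10.0.0.0/24",   # or a range such as 10.0.0.1-10.0.0.3
        "inside_port": 80,
        "outside_port_base": 4000,        # VM with last octet N is reachable on 4000 + N
    }
}

def outside_port(base, private_ip):
    """Derive the external port from the base number and the VM's last octet."""
    return base + int(private_ip.split('.')[-1])

# outside_port(4000, "10.0.0.1") -> 4001, i.e. 172.20.20.10:4001 => 10.0.0.1:80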

Forgive my poor English.
I hope that's clear enough, and I am happy to discuss it further if necessary.

Germy


On Tue, Sep 8, 2015 at 1:58 PM, Gal Sagie <gal.sa...@gmail.com> wrote:

> Hi Germy,
>
> Port forwarding the way i see it, is a way of reusing the same floating ip
> to access several different Neutron ports (VM's , Containers)
> So for example if we have floating IP 172.20.20.10 , we can assign
> 172.20.20.10:4001 to VM1 and 172.20.20.10:4002 to VM2 (which are behind
> that same router
> which has an external gw).
> The user use the same IP but according to the tcp/udp port Neutron
> performs mapping in the virtual router namespace to the private IP and
> possibly to a different port
> that is running on that instance for example port 80
>
> So for example if we have two VM's with private IP's 10.0.0.1 and 10.0.0.2
> and we have a floating ip assigned to the router of 172.20.20.10
> with port forwarding we can build the following mapping:
>
> 172.20.20.10:4001  =>  10.0.0.1:80
> 172.20.20.10:4002  =>  10.0.0.2:80
>
> And this is only from the Neutron API, this feature is usefull when you
> offer PaaS, SaaS and have an automated framework that calls the API
> to allocate these "client ports"
>
> I am not sure why you think the operator will need to ssh the instances,
> the operator just needs to build the mapping of <floating_ip, port>  to the
> instance private IP.
> Of course keep in mind that we didnt yet discuss full API details but its
> going to be something like that (at least the way i see it)
>
> Hope thats explains it.
>
> Gal.
>
> On Mon, Sep 7, 2015 at 5:21 AM, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Hi Gal,
>>
>> I'm sorry for my poor English. Let me try again.
>>
>> What operator wants to access is several related instances, instead of
>> only one or one by one. The use case is periodical check and maintain.
>> RELATED means instance maybe in one subnet, or one network, or one host.
>> The host's scene is similar to access the docker on the host as you
>> mentioned before.
>>
>> Via what you mentioned of API, user must ssh an instance and then invoke
>> API to update the IP address and port, or even create a new PF to access
>> another one. It will be a nightmare to a VPC operator who owns so many
>> instances.
>>
>> In a word, I think the "inside_addr" should be "subnet" or "host".
>>
>> Hope this is clear enough.
>>
>> Germy
>>
>> On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie <gal.sa...@gmail.com> wrote:
>>
>>> Hi Germy,
>>>
>>> I am not sure i understand what you mean, can you please explain it
>>> further?
>>>
>>> Thanks
>>> Gal.
>>>
>>> On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.l...@gmail.com> wrote:
>>>
>>>> Hi, Gal
>>>>
>>>> Thank you for bringing this up. Bu

Re: [openstack-dev] [Neutron] Port forwarding

2015-09-06 Thread Germy Lure
Hi Gal,

I'm sorry for my poor English. Let me try again.

What the operator wants to access is several related instances, not just one,
or one at a time. The use case is periodic checking and maintenance. RELATED
means the instances may be in one subnet, in one network, or on one host. The
host case is similar to reaching the Docker containers on a host, as you
mentioned before.

With the API you mentioned, the user must ssh into an instance and then invoke
the API to update the IP address and port, or even create a new PF, just to
reach another one. That will be a nightmare for a VPC operator who owns so
many instances.

In a word, I think the "inside_addr" should be a "subnet" or a "host".

Hope this is clear enough.

Germy

On Sun, Sep 6, 2015 at 1:05 PM, Gal Sagie <gal.sa...@gmail.com> wrote:

> Hi Germy,
>
> I am not sure i understand what you mean, can you please explain it
> further?
>
> Thanks
> Gal.
>
> On Sun, Sep 6, 2015 at 5:39 AM, Germy Lure <germy.l...@gmail.com> wrote:
>
>> Hi, Gal
>>
>> Thank you for bringing this up. But I have some suggestions for the API.
>>
>> An operator or some other component wants to reach several VMs related
>> NOT only one or one by one. Here, RELATED means that the VMs are in one
>> subnet or network or a host(similar to reaching dockers on a host).
>>
>> Via the API you mentioned, user must ssh one VM and update even delete
>> and add PF to ssh another. To a VPC(with 20 subnets?) admin, it's totally a
>> nightmare.
>>
>> Germy
>>
>>
>> On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie <gal.sa...@gmail.com> wrote:
>>
>>> Hello All,
>>>
>>> I have searched and found many past efforts to implement port forwarding
>>> in Neutron.
>>> I have found two incomplete blueprints [1], [2] and an abandoned patch
>>> [3].
>>>
>>> There is even a project in Stackforge [4], [5] that claims
>>> to implement this, but the L3 parts in it seems older then current
>>> master.
>>>
>>> I have recently came across this requirement for various use cases, one
>>> of them is
>>> providing feature compliance with Docker port-mapping feature (for
>>> Kuryr), and saving floating
>>> IP's space.
>>> There has been many discussions in the past that require this feature,
>>> so i assume
>>> there is a demand to make this formal, just a small examples [6], [7],
>>> [8], [9]
>>>
>>> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
>>> the external router
>>> leg from the public network to internal ports, so user can use one
>>> Floating IP (the external
>>> gateway router interface IP) and reach different internal ports
>>> depending on the port numbers.
>>> This should happen on the network node (and can also be leveraged for
>>> security reasons).
>>>
>>> I think that the POC implementation in the Stackforge project shows that
>>> this needs to be
>>> implemented inside the L3 parts of the current reference implementation,
>>> it will be hard
>>> to maintain something like that in an external repository.
>>> (I also think that the API/DB extensions should be close to the current
>>> L3 reference
>>> implementation)
>>>
>>> I would like to renew the efforts on this feature and propose a RFE and
>>> a spec for this to the
>>> next release, any comments/ideas/thoughts are welcome.
>>> And of course if any of the people interested or any of the people that
>>> worked on this before
>>> want to join the effort, you are more then welcome to join and comment.
>>>
>>> Thanks
>>> Gal.
>>>
>>> [1]
>>> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
>>> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
>>> [3] https://review.openstack.org/#/c/60512/
>>> [4] https://github.com/stackforge/networking-portforwarding
>>> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>>>
>>> [6]
>>> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
>>> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
>>> [8]
>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
>>> [9]
>>> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>>>
>>>
>>>
>>>
>>> _

Re: [openstack-dev] [Neutron] Port forwarding

2015-09-05 Thread Germy Lure
Hi, Gal

Thank you for bringing this up. But I have some suggestions for the API.

An operator or some other component wants to reach several RELATED VMs, not
only one, or one at a time. Here, RELATED means that the VMs are in one
subnet, in one network, or on one host (similar to reaching the Docker
containers on a host).

With the API you mentioned, the user must ssh into one VM, then update or even
delete and re-add a PF in order to ssh into another. For a VPC admin (with 20
subnets?), that is a total nightmare.

Germy


On Wed, Sep 2, 2015 at 1:59 PM, Gal Sagie  wrote:

> Hello All,
>
> I have searched and found many past efforts to implement port forwarding
> in Neutron.
> I have found two incomplete blueprints [1], [2] and an abandoned patch [3].
>
> There is even a project in Stackforge [4], [5] that claims
> to implement this, but the L3 parts in it seems older then current master.
>
> I have recently came across this requirement for various use cases, one of
> them is
> providing feature compliance with Docker port-mapping feature (for Kuryr),
> and saving floating
> IP's space.
> There has been many discussions in the past that require this feature, so
> i assume
> there is a demand to make this formal, just a small examples [6], [7],
> [8], [9]
>
> The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
> the external router
> leg from the public network to internal ports, so user can use one
> Floating IP (the external
> gateway router interface IP) and reach different internal ports depending
> on the port numbers.
> This should happen on the network node (and can also be leveraged for
> security reasons).
>
> I think that the POC implementation in the Stackforge project shows that
> this needs to be
> implemented inside the L3 parts of the current reference implementation,
> it will be hard
> to maintain something like that in an external repository.
> (I also think that the API/DB extensions should be close to the current L3
> reference
> implementation)
>
> I would like to renew the efforts on this feature and propose a RFE and a
> spec for this to the
> next release, any comments/ideas/thoughts are welcome.
> And of course if any of the people interested or any of the people that
> worked on this before
> want to join the effort, you are more then welcome to join and comment.
>
> Thanks
> Gal.
>
> [1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> [3] https://review.openstack.org/#/c/60512/
> [4] https://github.com/stackforge/networking-portforwarding
> [5] https://review.openstack.org/#/q/port+forwarding,n,z
>
> [6]
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> [8]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> [9]
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [DVR] easyOVS -- Smart tool to use/debug Neutron/DVR

2015-08-31 Thread Germy Lure
Hi,

It's interesting! I have three points for you here.
a. Support packet tracking, showing the path a packet travels on the host,
ideally on both the source and destination hosts.
b. Given a communication type and packet characteristics, find the fault
point. For example, suppose you want VM1 to talk to VM2 via DVR but it fails;
the tool should tell you that the packet is sent to the SNAT router, and that
the DVR router on the host where VM1 resides was created with a wrong route
[dest=xx, nexthop=yy] while the right route should be dest=xx, nexthop=zz.
c. As a tool, I think it should be simple. The best would be no installation
at all: copy and use it. Can you simplify it? One possible approach is to
implement it in C/C++ and publish an executable file.
BR,
Germy

On Fri, Aug 28, 2015 at 6:05 PM, Baohua Yang  wrote:

> Hi , all
>
> When using neutron (especially with DVR), I find it difficult to debug
> problems with lots of ovs rules, complicated iptables rules, network
> namespaces, routing tables, ...
>
> So I create 
> easyOVS
> , in summary, it can
>
>
>- Format the output and use color to make it clear and easy to compare.
>- Associate the OpenStack information (e.g., vm ip) on the virtual
>port or rule
>- Query openvswitch,iptables,namespace information in smart way.
>- Check if the DVR configuration is correct.
>- Smart command completion, try tab everywhere.
>- Support runing local system commands.
>
> In latest 0.5 version, it supports checking your dvr configuration and
> running states, e.g., on a compute node, I run 'dvr check' command, then it
> will automatically check the configuration files, bridges, ports, network
> spaces, iptables rules,... like
>
>  No type given, guessing...compute node
> === Checking DVR on compute node ===
> >>> Checking config files...
> # Checking file = /etc/sysctl.conf...
> # Checking file = /etc/neutron/neutron.conf...
> # Checking file = /etc/neutron/plugins/ml2/ml2_conf.ini...
> file /etc/neutron/plugins/ml2/ml2_conf.ini Not has [agent]
> file /etc/neutron/plugins/ml2/ml2_conf.ini Not has l2_population = True
> file /etc/neutron/plugins/ml2/ml2_conf.ini Not has
> enable_distributed_routing = True
> file /etc/neutron/plugins/ml2/ml2_conf.ini Not has arp_responder = True
> # Checking file = /etc/neutron/l3_agent.ini...
> <<< Checking config files has warnings
>
> >>> Checking bridges...
> # Existing bridges are br-tun, br-int, br-eno1, br-ex
> # Vlan bridge is at br-tun, br-int, br-eno1, br-ex
> <<< Checking bridges passed
>
> >>> Checking vports ...
> ## Checking router port = qr-b0142af2-12
> ### Checking rfp port rfp-f046c591-7
> Found associated floating ips : 172.29.161.127/32, 172.29.161.126/32
> ### Checking associated fpr port fpr-f046c591-7
> ### Check related fip_ns=fip-9e1c850d-e424-4379-8ebd-278ae995d5c3
> Bridging in the same subnet
> fg port is attached to br-ex
> floating ip 172.29.161.127 match fg subnet
> floating ip 172.29.161.126 match fg subnet
> Checking chain rule number: neutron-postrouting-bottom...Passed
> Checking chain rule number: OUTPUT...Passed
> Checking chain rule number: neutron-l3-agent-snat...Passed
> Checking chain rules: neutron-postrouting-bottom...Passed
> Checking chain rules: PREROUTING...Passed
> Checking chain rules: OUTPUT...Passed
> Checking chain rules: POSTROUTING...Passed
> Checking chain rules: POSTROUTING...Passed
> Checking chain rules: neutron-l3-agent-POSTROUTING...Passed
> Checking chain rules: neutron-l3-agent-PREROUTING...Passed
> Checking chain rules: neutron-l3-agent-OUTPUT...Passed
> DNAT for incoming: 172.29.161.127 --> 10.0.0.3 passed
> Checking chain rules: neutron-l3-agent-float-snat...Passed
> SNAT for outgoing: 10.0.0.3 --> 172.29.161.127 passed
> Checking chain rules: neutron-l3-agent-OUTPUT...Passed
> DNAT for incoming: 172.29.161.126 --> 10.0.0.216 passed
> Checking chain rules: neutron-l3-agent-float-snat...Passed
> SNAT for outgoing: 10.0.0.216 --> 172.29.161.126 passed
> ## Checking router port = qr-8c41bfc7-56
> Checking passed already
> <<< Checking vports passed
>
>
> Welcome for any feedback, and welcome for any contribution!
>
> I am trying to put this project into stackforge to let more people can use
> and improve it, any thoughts if it is suitable?
>
> https://review.openstack.org/#/c/212396/
>
> Thanks for any help or suggestion!
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Neutron] DHCP configuration

2015-08-31 Thread Germy Lure
+1
common.config should hold the global and general options, while agent.config
should hold the local options related to the specific back-end.
Maybe we can add different prefixes to the same option.
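
As an illustration of the split being discussed, a sketch using oslo.config is
below. The grouping and the 'dhcp_agent' group name are illustrative only, not
the actual Neutron configuration layout.

# Sketch of the proposed split using oslo.config; the grouping shown here is
# illustrative only, not Neutron's actual layout.
from oslo_config import cfg

# General DHCP options, independent of any particular agent
# (candidates for neutron.common.config).
core_dhcp_opts = [
    cfg.IntOpt('dhcp_lease_duration', default=86400,
               help='DHCP lease duration (in seconds).'),
    cfg.BoolOpt('dhcp_agent_notification', default=True,
                help='Allow sending resource operation notifications to DHCP agents.'),
]

# Options specific to the reference DHCP agent / dnsmasq
# (candidates for neutron.agent.dhcp.config).
agent_dhcp_opts = [
    cfg.StrOpt('dhcp_driver', default='neutron.agent.linux.dhcp.Dnsmasq',
               help='The driver used to manage the DHCP server.'),
]

cfg.CONF.register_opts(core_dhcp_opts)
cfg.CONF.register_opts(agent_dhcp_opts, group='dhcp_agent')  # hypothetical group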

Germy

On Mon, Aug 31, 2015 at 11:13 PM, Kevin Benton  wrote:

> neutron.common.config should have general DHCP options that aren't
> specific to the reference DHCP agent. neutron.agent.dhcp.config should have
> all of the stuff specific to our agent and dnsmasq.
>
> On Mon, Aug 31, 2015 at 7:54 AM, Gal Sagie  wrote:
>
>> Hello all,
>>
>> I went over the code and noticed that we have default DHCP configuration
>> both in neutron/common/config.py  (dhcp_lease_duration , dns_domain and
>> dhcp_agent_notification)
>>
>> But also we have it in neutron/agent/dhcp/config.py (DHCP_AGENT_OPTS,
>> DHCP_OPTS)
>>
>> I think we should consider merging them (especially the agent
>> configuration)
>> into one place so it will be easier to find them.
>>
>> I will add a bug on myself to address that, anyone know if this was done
>> in purpose
>> for some reason, or anyone have other thoughts regarding this?
>>
>> Thanks
>> Gal.
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][dvr][fwaas] FWaaS with DVR

2015-08-28 Thread Germy Lure
Hi all,

I have two points.
a. For the problem in this thread, my suggestion is to introduce new
concepts to replace the existing firewall and SG.
Perhaps you have noticed the overlap between the firewall and SG; it is
troublesome for the user to choose between them.
So the new concepts would be an edge firewall for N/S traffic and a
distributed firewall for E/W traffic. The former is similar to the existing
firewall, but without E/W control, and is deployed on the nodes that connect
to the external world. The latter controls E/W traffic such as
subnet-to-subnet, VM-to-VM and subnet-to-VM, and would be deployed on the
compute nodes.

We can attach firewall rules to a VM port implicitly, especially when DVR is
disabled. I think it is difficult for a user to do that explicitly when there
are hundreds of VMs.

b. On problems like this in general.
From recent mailing list threads we can see so many problems introduced by
DVR, such as VPNaaS, floating IPs and FWaaS co-existing with DVR, etc.
So, stackers, I don't know what the standard or exit check is for releasing a
feature in the community, but can we add some provisions, or something else,
to avoid conflicts between features?

Forgive my poor English.
BR,
Germy

On Thu, Aug 27, 2015 at 11:44 PM, Mickey Spiegel emspi...@us.ibm.com
wrote:

 Bump

 The FWaaS team would really like some feedback from the DVR side.

 Mickey

 -Mickey Spiegel/San Jose/IBM wrote: -
 To: openstack-dev@lists.openstack.org
 From: Mickey Spiegel/San Jose/IBM
 Date: 08/19/2015 09:45AM
 Subject: [fwaas][dvr] FWaaS with DVR

 Currently, FWaaS behaves differently with DVR, applying to only
 north/south traffic, whereas FWaaS on routers in network nodes applies to
 both north/south and east/west traffic. There is a compatibility issue due
 to the asymmetric design of L3 forwarding in DVR, which breaks the
 connection tracking that FWaaS currently relies on.

 I started an etherpad where I hope the community can discuss the problem,
 collect multiple possible solutions, and eventually try to reach consensus
 about how to move forward:
 https://etherpad.openstack.org/p/FWaaS_with_DVR

 I listed every possible solution that I can think of as a starting point.
 I am somewhat new to OpenStack and FWaaS, so please correct anything that I
 might have misrepresented.

 Please add more possible solutions and comment on the possible solutions
 already listed.

 Mickey




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Targeting Logging API for SG and FW rules feature to L-3 milestone

2015-08-28 Thread Germy Lure
Hi Cao,

I have reviewed the specification linked above. Thank you for introducing
such an interesting and important feature. But as I commented inline, I think
it still needs some further work, such as how those logs get stored; for the
admin and for a tenant, I think the answer is different.
And what about the performance impact: if tenant A turns on logging, will
tenant B on the same host be affected?

Many thanks,
Germy

On Fri, Aug 21, 2015 at 6:04 PM, hoan...@vn.fujitsu.com 
hoan...@vn.fujitsu.com wrote:

 Good day,

  The specification and source codes will definitely reviewing/filing in
 next week.
  #link
  http://eavesdrop.openstack.org/meetings/networking_fwaas/2015/network
  ing_fwaas.2015-08-19-23.59.log.html
 
  No - I did not say definitely - nowhere in that IRC log was that word
 used.

 I'm sorry.  Yes, that should be probably.

 --
 Best regards,

 Cao Xuan Hoang
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Need community guidance please...

2015-08-26 Thread Germy Lure
Hi,

Maybe I missed some key points, but why did we introduce vpn-endpoint-groups
here?

ipsec-site-connection is for IPSec VPN only, gre-connection would be for GRE
VPN only, and mpls-connection for MPLS VPN only. You see, different
connections for different VPN types. Indeed, we can't reuse the connection API.

A piece of the ref document (https://review.openstack.org/#/c/191944/) reads
like this:
allowing subnets (local) and CIDRs (peer) to be used for IPSec, but
routers, networks, and VLANs to be used for other VPN types (BGP, L2,
direct connection)

You see, different endpoint-group types for different VPN types. We can't
reuse the endpoint group.

So, how do we meet the third goal, "to do this in a manner that the code
can be reused for other flavors of VPN"?
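For reference, the dev-ref models an endpoint group as a single resource with a
type discriminator, so the intended reuse is that only the type (and the kind of
endpoints listed) changes per VPN flavor. A rough sketch of what the request
bodies might look like (field names are assumed from the draft spec under
review, so treat them as assumptions):

    # Rough sketch only: field names follow the endpoint-group proposal in
    # https://review.openstack.org/#/c/191944/ and may differ from what finally merges.
    ipsec_local = {"endpoint_group": {"name": "local-subnets",
                                      "type": "subnet",
                                      "endpoints": ["<subnet-id-1>", "<subnet-id-2>"]}}

    ipsec_peer = {"endpoint_group": {"name": "peer-cidrs",
                                     "type": "cidr",
                                     "endpoints": ["10.2.0.0/24"]}}

    # A BGP/MPLS-style flavor could reuse the same resource with another type,
    # e.g. "router" or "network", instead of adding a brand new per-flavor API.
    l3vpn_local = {"endpoint_group": {"name": "l3vpn-attachment",
                                      "type": "router",
                                      "endpoints": ["<router-id>"]}}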

Thanks.


On Tue, Aug 25, 2015 at 1:54 AM, Madhusudhan Kandadai 
madhusudhan.openst...@gmail.com wrote:

 My two cents..

 On Mon, Aug 24, 2015 at 8:48 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Paul, comments inline...

 On 08/24/2015 07:02 AM, Paul Michali wrote:

 Hi,

 I'm working on the multiple local subnet feature for VPN (RFE
 https://bugs.launchpad.net/neutron/+bug/1459423), with a developer
 reference document detailing the proposed process
 (https://review.openstack.org/#/c/191944/). The plan is to do this in
 two steps. The first is to add new APIs and database support for
 endpoint groups (see dev ref for details). The second is to modify the
 IPSec/VPN APIs to make use of the new information (and no longer use
 some older, but equivalent info that is being extended).

 I have a few process/procedural questions for the community...

 Q1) Should I do this all as one huge commit, as two commits (one for
 endpoint groups and one for modification to support multiple local
 subnets), or multiple (chained) commits (e.g. commit for each API for
 the endpoint groups and for each part of the multiple subnet change)?

 My thought (now) is to do this as two commits, with the endpoint groups
 as one, and multiple subnet groups as a second. I started with a commit
 for create API of endpoint (212692), and then did a chained commit for
 delete/show/list (215717), thinking they could be reviewed in pieces,
 but they are not that large and could be easily merged.


 My advice would be 2 commits, as you have split them out.


 I would prefer to have two commits, with endpoint groups as one and the
 modification to support multiple local subnets as another. This will be
 easier to troubleshoot when needed.


 Q2) If the two parts are done separately, should the endpoint group
 portion, which adds a table and API calls, be done as part of the
 existing version (v2) of VPN, instead of introducing a new version at
 that step?


 Is the Neutron VPN API microversioned? If not, then I suppose your only
 option is to modify the existing v2 API. These seem to be additive changes,
 not modifications to existing API calls, in which case they are
 backwards-compatible (just not discoverable via an API microversion).

 I suggest this be done as part of the existing v2 API. As the API
 tests are in transition from the neutron to the neutron-vpnaas repo, we can
 modify the tests and submit them as one patch.


 Q3) For the new API additions, do I create a new subclass for the
 interface that includes all the existing APIs, introduce a new class
 that is used together with the existing class, or do I add this to the
 existing API?


 Until microversioning is introduced to the Neutron VPN API, it should
 probably be a change to the existing v2 API.

 +1


 Q4) With the final multiple local subnet changes, there will be changes
 to the VPN service API (delete subnet_id arg) and IPSec connection API
 (delete peer_cidrs arg, and add local_endpoints and peer_endpoints
 args). Do we modify the URI so that it calls out v3 (versus v2)? Where
 do we do that?


 Hmm, with the backwards-incompatible API changes like the above, your
 only option is to increment the major version number. The alternative would
 be to add support for microversioning as a prerequisite to the patch that
 adds backwards-incompatible changes, and then use a microversion to
 introduce those changes.

 Right now we are beefing up scenario tests for VPN; adding a
 microversioning feature seems the better option to me, but I am open to
 reviews from the community.


 Best,
 -jay

 I'm unsure of the mechanism of increasing the version.

 Thanks in advance for any guidance here on how this should be rolled
 out...

 Regards,

 Paul Michali (pc_m)



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [api] Re: [Neutron][L3] Stop agent scheduling without stopping services

2015-01-13 Thread Germy Lure
Hi all,
I think simply empowering the scheduler API to add and remove
candidates is enough.

As mentioned in this thread, the agent just doesn't receive new requests but
still keeps existing services alive.
So, just stop scheduling new requests to it. Direct and simple.
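As a purely illustrative sketch (not actual Neutron scheduler code, and the
accepts_new_routers flag is hypothetical), removing a host from the candidate
set while leaving its existing routers untouched might look like:

    # Illustrative only: 'accepts_new_routers' is a hypothetical attribute,
    # not an existing Neutron field.
    def schedulable_agents(agents):
        # Existing routers on excluded agents keep running; we only stop
        # placing *new* routers there.
        return [a for a in agents if a.alive and a.accepts_new_routers]

    def schedule_router(router, agents):
        candidates = schedulable_agents(agents)
        if not candidates:
            raise RuntimeError("no candidate L3 agents accepting new routers")
        # pick the least loaded remaining candidate
        return min(candidates, key=lambda a: a.router_count)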

Hope my expression is clear enough.
Germy

On Fri, Jan 9, 2015 at 10:15 PM, Jay Pipes jaypi...@gmail.com wrote:

 Adding [api] topic.

 On 01/08/2015 07:47 PM, Kevin Benton wrote:

 Is there another openstack service that allows this so we can make the
 API consistent between the two when this change is made?


 Kevin, thank you VERY much for asking the above question and caring about
 consistency in the APIs!

 There was a discussion on the ML about this very area of the APIs, and how
 there is current inconsistency to resolve:

 http://openstack-dev.openstack.narkive.com/UbM1J7dH/horizon-all-status-
 vs-state

 You were involved in that thread, so I know you're very familiar with the
 problem domain :)

 In the above thread, I mentioned that this really was something that the
 API WG should tackle, and this here ML thread should be a catalyst for
 getting that done.

 What we need is a patch proposed to the openstack/api-wg that proposes
 some guidelines around the REST API structure for disabling some resource
 for administrative purposes, with some content that discusses the semantic
 differences between state and status, and makes recommendations on the
 naming of resource attributes that indicate an administrative state.

 Of course, this doesn't really address Jack M's question about whether
 there should be a separate mode (in Jack's terms) to indicate that some
 resource can be only manually assigned and not automatically assigned.
 Personally, I don't feel there is a need for another mode. I think if
 something has been administratively disabled, that an administrator should
 still be able to manually alter that thing.

 All the best,
 -jay

  On Thu, Jan 8, 2015 at 3:09 PM, Carl Baldwin c...@ecbaldwin.net
 mailto:c...@ecbaldwin.net wrote:

 I added a link to @Jack's post to the ML to the bug report [1].  I am
 willing to support @Itsuro with reviews of the implementation and am
 willing to consult if you need and would like to ping me.

 Carl

 [1] https://bugs.launchpad.net/neutron/+bug/1408488

 On Thu, Jan 8, 2015 at 7:49 AM, McCann, Jack jack.mcc...@hp.com
 mailto:jack.mcc...@hp.com wrote:
   +1 on need for this feature
  
   The way I've thought about this is we need a mode that stops the
 *automatic*
   scheduling of routers/dhcp-servers to specific hosts/agents,
 while allowing
   manual assignment of routers/dhcp-servers to those hosts/agents,
 and where
   any existing routers/dhcp-servers on those hosts continue to
 operate as normal.
  
   The maintenance use case was mentioned: I want to evacuate
 routers/dhcp-servers
   from a host before taking it down, and having the scheduler add
 new routers/dhcp
   while I'm evacuating the node is a) an annoyance, and b) causes a
 service blip
   when I have to right away move that new router/dhcp to another
 host.
  
   The other use case is adding a new host/agent into an existing
 environment.
   I want to be able to bring the new host/agent up and into the
 neutron config, but
   I don't want any of my customers' routers/dhcp-servers scheduled
 there until I've
   had a chance to assign some test routers/dhcp-servers and make
 sure the new server
   is properly configured and fully operational.
  
   - Jack
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 mailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-06 Thread Germy Lure
Hi Carl and Akilesh,

Thank you for your response and explanation.
My manager tells me that enterprises usually use several IP addresses and
ports for AT, while Neutron just uses the external gateway port's fixed IP for
SNAT. I found that if I extended the SNAT attributes, the L3 plugin would
become very complex. So I must tolerate this in order to provide a more useful
SNAT feature, which is really what the customer needs.
I think that, as a separate service, SNAT would be able to do this more
easily, and it could even support those scenarios.
We know that VPNaaS and FWaaS depend on the L3 routing service, but not on AT,
which also depends on L3. From this point of view, L2 is the core of the
network service and L3 is the core of the other advanced services. ML3 is
coming.
Besides, it's strange that the L3 API contains a field called enable_snat,
isn't it?
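(Roughly, that field sits on the router's external_gateway_info:)

    # SNAT is toggled on the router itself (PUT /v2.0/routers/<router-id>),
    # not on a separate address-translation service:
    body = {"router": {"external_gateway_info": {"network_id": "<ext-net-id>",
                                                 "enable_snat": False}}}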

BR,
Germy

On Wed, Nov 5, 2014 at 5:37 PM, Akilesh K akilesh1...@gmail.com wrote:

 @Germy Lure,
 I cannot give you a direct answer as I am not a developer.

 But let me point out that openstack can make use of many agents for l3 and
 above and not just neutron-l3-agent. You may even create your own agent.

 The 'neutron-l3-agent' works that way just to keep things simple. One
 point to consider is that Tenants may share same network space. So it
 becomes necessary to tie a router which belongs to a tenant to the tenant's
 security groups. If you try to distribute routing and firewall service you
 might end up making it too complicated.


 On Wed, Nov 5, 2014 at 2:40 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 I don't think I know the precise answer to your question.  My best guess
 is that floating ips were one of the initial core L3 features implemented
 before other advanced services existed.  Implementing them in this way may
 have been the path of least resistance at the time.

 Are you suggesting a change?  What change?  What advantages would your
 change bring?  Do you see something fundamentally wrong with the current
 approach?  Does it have some deficiency that you can point out?  Basically,
 we need a suggested modification with some good justification to spend time
 making that modification.

 Carl
 Hi,

 Address Translation(FIP, snat and dnat) looks like an advanced service.
 Why it is integrated into L3 router? Actually, this is not how it's done in
 practice. They are usually provided by Firewall device but not router.

 What's the design concept?

 Thanks & Regards,
 Germy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-06 Thread Germy Lure
Hi Akilesh,
Thanks for your response. I have some comments inline.

BR,
Germy

On Thu, Nov 6, 2014 at 10:56 PM, Akilesh K akilesh1...@gmail.com wrote:

 Hi Geremy,

 It is necessary to not think of openstack as a way to replace all
 functionality of your enterprise data center, but rather to better utilize
 your resources. So I believe you should still continue to use your
 enterprise devices to do Address Translation outside of OpenStack. Why I
 say so is Address Translation is not necessarily a 'cloud' service. All you
 want in your cloud is servers, private and public networks, and firewall to
 secure them.

As you said, we really need private and public networks. And we also need
communication between them, from private to public and in the opposite
direction. So how do we do this without AT? I think this is exactly the reason
the community introduced AT into Neutron so early, although it is a
little simplistic IMO.


 Anything more than that should be kept external and decoupled to
 OpenStack. But as I said before OpenStack is to an extent modular and I
 believe its getting better. As of now if you are using just
 'neutron-l3-agent' it will do 'snat' to the ip address of your router
 attaching to 'external network' , but you can always add an extra service
 on top of 'neutron-l3-agent' to do address translation alone as per your
 needs.

Good idea. But I think that, for a cloud platform, a flexible and extensible
architecture is more important. Agent-style or controller-style is just an
implementation of that architecture. People can always deal with such a
problem. My ugly extension and your "add an extra service" are both such
solutions, but they should not be Neutron's solution. I don't
think Neutron's goal is to keep AT external.


 On Thu, Nov 6, 2014 at 6:28 PM, Henry henry4...@gmail.com wrote:

 So, do you mean that we need a better way to control the SNAT IP address? I
 think it makes sense, but maybe a simple attribute extension can solve part of
 the problem; there is no need to separate it at this time. For example, add a
 snat-ip field on the router, like FIP.

 However, if multiple SNAT IPs are needed, and we need to control which tenant
 IP is served by each SNAT IP, a separate plugin may be needed.


 Sent from my iPad

 On 2014-11-6, at 下午6:21, Germy Lure germy.l...@gmail.com wrote:

 Hi Carl and Akilesh,

 Thank you for your response and explanation.
 My manager tells me that enterprises usually use several IP addresses and
 ports for AT while Neutron just use external gateway port fixed IP for
 SNAT. I found that if I extended the SNAT attributes, the L3 plugin will be
 very complex. So I must tolerate this to provider more useful SNAT feature
 which is really what customer needs.
 I think as a separated service, SNAT will be easier to do this or even it
 can support those scenarios.
 We known that VPNaaS and FwaaS dependent on L3 route service but not AT
 which also dependents on L3. From this point, L2 is the core of network
 service and L3 is the core of other advanced services. ML3 is coming.
 Besides, It's strange that L3's API contains a field called
 snat_enable. Isn't  it?

 BR,
 Germy

 On Wed, Nov 5, 2014 at 5:37 PM, Akilesh K akilesh1...@gmail.com wrote:

 @Germy Lure,
 I cannot give you a direct answer as I am not a developer.

 But let me point out that openstack can make use of many agents for l3
 and above and not just neutron-l3-agent. You may even create your own agent.

 The 'neutron-l3-agent' works that way just to keep things simple. One
 point to consider is that Tenants may share same network space. So it
 becomes necessary to tie a router which belongs to a tenant to the tenant's
 security groups. If you try to distribute routing and firewall service you
 might end up making it too complicated.


 On Wed, Nov 5, 2014 at 2:40 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 I don't think I know the precise answer to your question.  My best
 guess is that floating ips were one of the initial core L3 features
 implemented before other advanced services existed.  Implementing them in
 this way may have been the path of least resistance at the time.

 Are you suggesting a change?  What change?  What advantages would your
 change bring?  Do you see something fundamentally wrong with the current
 approach?  Does it have some deficiency that you can point out?  Basically,
 we need a suggested modification with some good justification to spend time
 making that modification.

 Carl
 Hi,

 Address Translation(FIP, snat and dnat) looks like an advanced service.
 Why it is integrated into L3 router? Actually, this is not how it's done in
 practice. They are usually provided by Firewall device but not router.

 What's the design concept?

 Thanks & Regards,
 Germy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list

Re: [openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-06 Thread Germy Lure
Hi Henry,

Thanks for your suggestion. As you wrote, your approach can solve part of the
problem.
I believe there's a good reason (maybe Carl's guess is right; it's a
programmer's good habit to leave something for latecomers :)) for AT being
coupled with the router, but on the face of it, AT should be separated from
the router, at least SNAT. IMHO it's better to provide a unified service
including all kinds of AT, such as FIP, SNAT and DNAT.

BR,
Germy

On Fri, Nov 7, 2014 at 2:42 PM, Germy Lure germy.l...@gmail.com wrote:

 Hi Akilesh,
 Thanks for your response. I have some comments inline.

 BR,
 Germy

 On Thu, Nov 6, 2014 at 10:56 PM, Akilesh K akilesh1...@gmail.com wrote:

 Hi Geremy,

 It is necessary to not think of openstack as a way to replace all
 functionality of your enterprise data center, but rather to better utilize
 your resources. So I believe you should still continue to use your
 enterprise devices to do Address Translation outside of OpenStack. Why I
 say so is Address Translation is not necessarily a 'cloud' service. All you
 want in your cloud is servers, private and public networks, and firewall to
 secure them.

 As you said,  we really need private and public networks. And we also need
 communication between them, from private to public and the opposite
 direction. So how to do this without AT? I think this is just the reason
 that the community introduces AT into Neutron so early, although, it is a
 little simple IMO.


 Anything more than that should be kept external and decoupled to
 OpenStack. But as I said before OpenStack is to an extent modular and I
 believe its getting better. As of now if you are using just
 'neutron-l3-agent' it will do 'snat' to the ip address of your router
 attaching to 'external network' , but you can always add an extra service
 on top of 'neutron-l3-agent' to do address translation alone as per your
 needs.

 Good idea. But I think as a cloud platform, a flexible and extendable
 architecture is more important. Agent-style or Controller-style is just an
 implementation for the architecture. People can always deal with such a
 problem. My ugly extension and your add an extra service are both one of
 those solution. But they should not be the Neutron's solution. I don't
 think Neutron's goal is keeping AT external.


 On Thu, Nov 6, 2014 at 6:28 PM, Henry henry4...@gmail.com wrote:

 So, do you mean that we need a better way to control snat ip address? I
 think it make sense, but maybe simple attribute extension can solve part
 problem, no need to separate it at this time. For example, add a snat-ip
 field in the route, like fip.

 However if multiple snat ip is needed, and control which tenant ip is
 served by each snat ip, separate plugin may be needed.


 Sent from my iPad

 On 2014-11-6, at 下午6:21, Germy Lure germy.l...@gmail.com wrote:

 Hi Carl and Akilesh,

 Thank you for your response and explanation.
 My manager tells me that enterprises usually use several IP addresses
 and ports for AT while Neutron just use external gateway port fixed IP for
 SNAT. I found that if I extended the SNAT attributes, the L3 plugin will be
 very complex. So I must tolerate this to provider more useful SNAT feature
 which is really what customer needs.
 I think as a separated service, SNAT will be easier to do this or even
 it can support those scenarios.
 We known that VPNaaS and FwaaS dependent on L3 route service but not AT
 which also dependents on L3. From this point, L2 is the core of network
 service and L3 is the core of other advanced services. ML3 is coming.
 Besides, It's strange that L3's API contains a field called
 snat_enable. Isn't  it?

 BR,
 Germy

 On Wed, Nov 5, 2014 at 5:37 PM, Akilesh K akilesh1...@gmail.com wrote:

 @Germy Lure,
 I cannot give you a direct answer as I am not a developer.

 But let me point out that openstack can make use of many agents for l3
 and above and not just neutron-l3-agent. You may even create your own 
 agent.

 The 'neutron-l3-agent' works that way just to keep things simple. One
 point to consider is that Tenants may share same network space. So it
 becomes necessary to tie a router which belongs to a tenant to the tenant's
 security groups. If you try to distribute routing and firewall service you
 might end up making it too complicated.


 On Wed, Nov 5, 2014 at 2:40 PM, Carl Baldwin c...@ecbaldwin.net
 wrote:

 I don't think I know the precise answer to your question.  My best
 guess is that floating ips were one of the initial core L3 features
 implemented before other advanced services existed.  Implementing them in
 this way may have been the path of least resistance at the time.

 Are you suggesting a change?  What change?  What advantages would your
 change bring?  Do you see something fundamentally wrong with the current
 approach?  Does it have some deficiency that you can point out?  
 Basically,
 we need a suggested modification with some good justification to spend 
 time
 making that modification.

 Carl
 Hi

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-06 Thread Germy Lure
Hi Armando,
Static configuration really introduces an unnecessary burden on the operator.
But I can't quite understand your "explore a way", although it sounds
interesting. Can you explain it in detail? Thank you.

BTW, as Sudhakar wrote, [1] attempted to implement flow
synchronization, but without any progress/updates. So how do we remind the
registrant? Or, if I want to participate in it or even work on it alone, what
do I need to do? Register another BP?

[1]
https://blueprints.launchpad.net/neutron/+spec/neutron-agent-soft-restart

BR,
Germy


On Thu, Nov 6, 2014 at 2:59 AM, Armando M. arma...@gmail.com wrote:

 I would be open to making this toggle switch available, however I feel
 that doing it via static configuration can introduce unnecessary burden to
 the operator. Perhaps we could explore a way where the agent can figure
 which state it's supposed to be in based on its reported status?

 Armando

 On 5 November 2014 12:09, Salvatore Orlando sorla...@nicira.com wrote:

 I have no opposition to that, and I will be happy to assist reviewing the
 code that will enable flow synchronisation  (or to say it in an easier way,
 punctual removal of flows unknown to the l2 agent).

 In the meanwhile, I hope you won't mind if we go ahead and start making
 flow reset optional - so that we stop causing downtime upon agent restart.

 Salvatore

 On 5 November 2014 11:57, Erik Moe erik@ericsson.com wrote:



 Hi,



 I also agree, IMHO we need flow synchronization method so we can avoid
 network downtime and stray flows.



 Regards,

 Erik





 *From:* Germy Lure [mailto:germy.l...@gmail.com]
 *Sent:* den 5 november 2014 10:46
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [neutron][TripleO] Clear all flows when
 ovs agent start? why and how avoid?



 Hi Salvatore,

 A startup flag is really a simpler approach. But in what situation we
 should set this flag to remove all flows? upgrade? restart manually?
 internal fault?



 Indeed, only at the time that there are inconsistent(incorrect,
 unwanted, stable and so on) flows between agent and the ovs related, we
 need refresh flows. But the problem is how we know this? I think a startup
 flag is too rough, unless we can tolerate the inconsistent situation.



 Of course, I believe that turn off startup reset flows action can
 resolve most problem. The flows are correct most time after all. But
 considering NFV 5 9s, I still recommend flow synchronization approach.



 BR,

 Germy



 On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 From what I gather from this thread and related bug report, the change
 introduced in the OVS agent is causing a data plane outage upon agent
 restart, which is not desirable in most cases.



 The rationale for the change that introduced this bug was, I believe,
 cleaning up stale flows on the OVS agent, which also makes some sense.



 Unless I'm missing something, I reckon the best way forward is actually
 quite straightforward; we might add a startup flag to reset all flows and
 not reset them by default.

 While I agree the flow synchronisation process proposed in the
 previous post is valuable too, I hope we might be able to fix this with a
 simpler approach.



 Salvatore



 On 5 November 2014 04:43, Germy Lure germy.l...@gmail.com wrote:

 Hi,



 Consider the triggering of restart agent, I think it's nothing but:

 1). only restart agent

 2). reboot the host that agent deployed on



 When the agent started, the ovs may:

 a.have all correct flows

 b.have nothing at all

 c.have partly correct flows, the others may need to be reprogrammed,
 deleted or added



 In any case, I think both user and developer would happy to see that the
 system recovery ASAP after agent restarting. The best is agent only push
 those incorrect flows, but keep the correct ones. This can ensure those
 business with correct flows working during agent starting.



 So, I suggest two solutions:

 1.Agent gets all flows from ovs and compare with its local flows after
 restarting. And agent only corrects the different ones.

 2.Adapt ovs and agent. Agent just push all(not remove) flows every time
 and ovs prepares two tables for flows switch(like RCU lock).



 1 is recommended because of the 3rd vendors.



 BR,

 Germy





 On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec openst...@nemebean.com
 wrote:

 On 10/29/2014 10:17 AM, Kyle Mestery wrote:
  On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:
 
 
  Sent from my iPad
 
  On 2014-10-29, at 下午8:01, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:
 
  I find our current design is remove all flows then add flow by
 entry, this
  will cause every network node will break off all tunnels between
 other
  network node and all compute node.
  Perhaps a way around this would be to add a flag on agent startup
  which would have it skip reprogramming flows. This could be used for
  the upgrade case.
 
  I hit the same

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-05 Thread Germy Lure
Hi Salvatore,
A startup flag is really a simpler approach. But in which situations should we
set this flag to remove all flows? Upgrade? Manual restart?
Internal fault?

Indeed, it is only when there are inconsistent (incorrect, unwanted,
stale and so on) flows between the agent and the related OVS that we need to
refresh flows. But the problem is: how do we know this? I think a startup flag
is too rough, unless we can tolerate the inconsistent situation.

Of course, I believe that turning off the startup reset-flows action can
resolve most problems. The flows are correct most of the time, after all. But
considering NFV five-nines availability, I still recommend the flow
synchronization approach.

BR,
Germy

On Wed, Nov 5, 2014 at 3:36 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 From what I gather from this thread and related bug report, the change
 introduced in the OVS agent is causing a data plane outage upon agent
 restart, which is not desirable in most cases.

 The rationale for the change that introduced this bug was, I believe,
 cleaning up stale flows on the OVS agent, which also makes some sense.

 Unless I'm missing something, I reckon the best way forward is actually
 quite straightforward; we might add a startup flag to reset all flows and
 not reset them by default.
 While I agree the flow synchronisation process proposed in the previous
 post is valuable too, I hope we might be able to fix this with a simpler
 approach.

 Salvatore

 On 5 November 2014 04:43, Germy Lure germy.l...@gmail.com wrote:

 Hi,

 Consider the triggering of restart agent, I think it's nothing but:
 1). only restart agent
 2). reboot the host that agent deployed on

 When the agent started, the ovs may:
 a.have all correct flows
 b.have nothing at all
 c.have partly correct flows, the others may need to be reprogrammed,
 deleted or added

 In any case, I think both user and developer would happy to see that the
 system recovery ASAP after agent restarting. The best is agent only push
 those incorrect flows, but keep the correct ones. This can ensure those
 business with correct flows working during agent starting.

 So, I suggest two solutions:
 1.Agent gets all flows from ovs and compare with its local flows after
 restarting. And agent only corrects the different ones.
 2.Adapt ovs and agent. Agent just push all(not remove) flows every time
 and ovs prepares two tables for flows switch(like RCU lock).

 1 is recommended because of the 3rd vendors.

 BR,
 Germy


 On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec openst...@nemebean.com
 wrote:

 On 10/29/2014 10:17 AM, Kyle Mestery wrote:
  On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:
 
 
  Sent from my iPad
 
  On 2014-10-29, at 下午8:01, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:
 
  I find our current design is remove all flows then add flow by
 entry, this
  will cause every network node will break off all tunnels between
 other
  network node and all compute node.
  Perhaps a way around this would be to add a flag on agent startup
  which would have it skip reprogramming flows. This could be used for
  the upgrade case.
 
  I hit the same issue last week and filed a bug here:
  https://bugs.launchpad.net/neutron/+bug/1383674
 
  From an operators perspective this is VERY annoying since you also
 cannot push any config changes that requires/triggers a restart of the
 agent.
  e.g. something simple like changing a log setting becomes a hassle.
  I would prefer the default behaviour to be to not clear the flows or
 at the least an config option to disable it.
 
 
  +1, we also suffered from this even when a very little patch is done
 
  I'd really like to get some input from the tripleo folks, because they
  were the ones who filed the original bug here and were hit by the
  agent NOT reprogramming flows on agent restart. It does seem fairly
  obvious that adding an option around this would be a good way forward,
  however.

 Since nobody else has commented, I'll put in my two cents (though I
 might be overcharging you ;-).  I've also added the TripleO tag to the
 subject, although with Summit coming up I don't know if that will help.

 Anyway, if the bug you're referring to is the one I think, then our
 issue was just with the flows not existing.  I don't think we care
 whether they get reprogrammed on agent restart or not as long as they
 somehow come into existence at some point.

 It's possible I'm wrong about that, and probably the best person to talk
 to would be Robert Collins since I think he's the one who actually
 tracked down the problem in the first place.

 -Ben


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron][TripleO] Clear all flows when ovs agent start? why and how avoid?

2014-11-04 Thread Germy Lure
Hi,

Considering what can trigger an agent restart, I think it's nothing but:
1) only the agent is restarted
2) the host the agent is deployed on is rebooted

When the agent starts, the OVS may:
a. have all correct flows
b. have nothing at all
c. have partly correct flows; the others may need to be reprogrammed,
deleted or added

In any case, I think both users and developers would be happy to see the
system recover ASAP after an agent restart. The best approach is for the agent
to push only the incorrect flows and keep the correct ones. This ensures that
traffic using correct flows keeps working while the agent is starting.

So, I suggest two solutions:
1. The agent gets all flows from OVS and compares them with its local flows
after restarting, and then corrects only the ones that differ.
2. Adapt OVS and the agent: the agent just pushes all flows (without removing)
every time, and OVS keeps two flow tables to switch between (like an RCU lock).

Option 1 is recommended because of third-party vendors.
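A minimal sketch of solution 1's comparison step (purely illustrative; the
dump/add/delete helpers stand in for whatever the agent's OVS interface
actually provides):

    # Compare the flows currently programmed in OVS with the flows the agent
    # believes should exist, and only touch the entries that differ.
    def sync_flows(bridge, desired_flows, dump_flows, add_flow, delete_flow):
        actual = set(dump_flows(bridge))    # flows currently in OVS
        desired = set(desired_flows)        # flows rebuilt from the agent's state

        for flow in desired - actual:       # missing flows: add them
            add_flow(bridge, flow)
        for flow in actual - desired:       # stale/unknown flows: remove them
            delete_flow(bridge, flow)
        # Flows present in both sets are left untouched, so traffic that uses
        # them is not disturbed while the agent restarts.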

BR,
Germy


On Fri, Oct 31, 2014 at 10:28 PM, Ben Nemec openst...@nemebean.com wrote:

 On 10/29/2014 10:17 AM, Kyle Mestery wrote:
  On Wed, Oct 29, 2014 at 7:25 AM, Hly henry4...@gmail.com wrote:
 
 
  Sent from my iPad
 
  On 2014-10-29, at 下午8:01, Robert van Leeuwen 
 robert.vanleeu...@spilgames.com wrote:
 
  I find our current design is remove all flows then add flow by
 entry, this
  will cause every network node will break off all tunnels between
 other
  network node and all compute node.
  Perhaps a way around this would be to add a flag on agent startup
  which would have it skip reprogramming flows. This could be used for
  the upgrade case.
 
  I hit the same issue last week and filed a bug here:
  https://bugs.launchpad.net/neutron/+bug/1383674
 
  From an operators perspective this is VERY annoying since you also
 cannot push any config changes that requires/triggers a restart of the
 agent.
  e.g. something simple like changing a log setting becomes a hassle.
  I would prefer the default behaviour to be to not clear the flows or
 at the least an config option to disable it.
 
 
  +1, we also suffered from this even when a very little patch is done
 
  I'd really like to get some input from the tripleo folks, because they
  were the ones who filed the original bug here and were hit by the
  agent NOT reprogramming flows on agent restart. It does seem fairly
  obvious that adding an option around this would be a good way forward,
  however.

 Since nobody else has commented, I'll put in my two cents (though I
 might be overcharging you ;-).  I've also added the TripleO tag to the
 subject, although with Summit coming up I don't know if that will help.

 Anyway, if the bug you're referring to is the one I think, then our
 issue was just with the flows not existing.  I don't think we care
 whether they get reprogrammed on agent restart or not as long as they
 somehow come into existence at some point.

 It's possible I'm wrong about that, and probably the best person to talk
 to would be Robert Collins since I think he's the one who actually
 tracked down the problem in the first place.

 -Ben


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]why FIP is integrated into router not as a separated service like XxxaaS?

2014-11-04 Thread Germy Lure
Hi,

Address Translation (FIP, SNAT and DNAT) looks like an advanced service. Why
is it integrated into the L3 router? Actually, this is not how it's done in
practice: it is usually provided by a firewall device, not a router.

What's the design concept?

Thanks & Regards,
Germy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPNaaS site to site connection down.

2014-09-27 Thread Germy Lure
Hi,

masoom:
I think, first, you can just check whether you can ping from left to
right without the VPN connection installed.
If that works, then you should check the system logs to confirm the
configuration is OK.
You can use ping and tcpdump to diagnose where packets are blocked.

stackers:
I think we should provide a mechanism to show the cause when a VPN connection
is down. At least, we could add an attribute to explain it. Maybe the
VPN-incubator project is an opportunity for this?
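Hypothetically, something as small as one extra attribute would already help:

    # Hypothetical shape only -- today the connection exposes just a bare
    # "status"; a human-readable reason field does not exist yet.
    conn = {"ipsec_site_connection": {
        "status": "DOWN",
        "status_description": "IKE phase 1 negotiation timed out",  # invented field
    }}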

BR,
Germy


On Sat, Sep 27, 2014 at 7:04 PM, masoom alam masoom.a...@gmail.com wrote:

 Hi Every one,

 I am trying to establish the VPN connection by giving the neutron
 ipsec-site-connection-create.

 neutron ipsec-site-connection-create --name vpnconnection1 \
   --vpnservice-id myvpn --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 \
   --peer-address 172.24.4.233 --peer-id 172.24.4.233 \
   --peer-cidr 10.2.0.0/24 --psk secret


 For the --peer-address I am giving the public interface of the other
 devstack node. Please note that my two devstack nodes are on different
 public addresses, so scenario is a little different than the one described
 here: https://wiki.openstack.org/wiki/Neutron/VPNaaS/HowToInstall

 The --peer-id is the IP address of the qrouter connected to the public
 interface. With this configuration, I am not able to bring up the VPN
 site-to-site connection. Do you think it's a firewall issue? I have disabled
 both firewalls with "sudo ufw disable". Any help in this regard would be
 appreciated. Am I giving the correct parameters?

 Thanks





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] New API format for extra_dhcp_opts

2014-09-26 Thread Germy Lure
Hi Xu Han,
Can we distinguish the IP version by parsing the opt_value? Is there any
service that binds a v4 address but provides service for v6, or v6 for v4?

BTW, why isn't the format simply "opt_name_value:opt_value_value", like
"server-ip-address:1.1.1.1"?

BR,
Germy


On Fri, Sep 26, 2014 at 2:39 PM, Xu Han Peng pengxu...@gmail.com wrote:

  Currently the extra_dhcp_opts has the following API interface on a port:

 {
     "port": {
         "extra_dhcp_opts": [
             {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
             {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
             {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
         ],
         ...
     }
 }

 During the development of the DHCPv6 function for IPv6 subnets, we found this
 format doesn't work anymore because a port can have both IPv4 and IPv6
 addresses. So we need to find a new way to specify extra_dhcp_opts for DHCPv4
 and DHCPv6, respectively. (
 https://bugs.launchpad.net/neutron/+bug/1356383)

 Here are some thoughts about the new format:

 Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6)
 so we can distinguish opts for v4 or v6 by parsing the opt_name. For
 backward compatibility, no prefix means IPv4 dhcp opt.

  "extra_dhcp_opts": [
      {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
      {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
      {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
  ]

 Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward
 compatibility, both old format and new format are acceptable, but old
 format means IPv4 dhcp opts.

  "extra_dhcp_opts": {
      "ipv4": [
          {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
          {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
      ],
      "ipv6": [
          {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
      ]
  }

 The pro of Option 1 is that there is no need to change the API structure; we
 only need to add validation and parsing of opt_name. The con of Option 1 is
 that the user needs to input a prefix for every opt_name, which can be error
 prone. The pro of Option 2 is that it's clearer than Option 1. The con is that
 we need to check two formats for backward compatibility.
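 A small sketch of what Option 1's parsing could look like on the server side
 (illustrative only, not the actual proposed patch):

     # Split an optional "v4:"/"v6:" prefix off opt_name; no prefix keeps
     # today's meaning (IPv4) for backward compatibility.
     def parse_opt_name(opt_name):
         if ":" in opt_name:
             prefix, name = opt_name.split(":", 1)
             if prefix not in ("v4", "v6"):
                 raise ValueError("unknown IP version prefix: %s" % prefix)
             return (4 if prefix == "v4" else 6), name
         return 4, opt_name

     print(parse_opt_name("v6:dns-server"))   # (6, 'dns-server')
     print(parse_opt_name("bootfile-name"))   # (4, 'bootfile-name')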

 We discussed this in IPv6 sub-team meeting and we think Option2 is
 preferred. Can I also get community's feedback on which one is preferred or
 any other comments?

 Thanks,
 Xu Han

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]Dynamically load service provider

2014-09-23 Thread Germy Lure
Hi stackers,

I have an idea about the service provider framework. Anyone interested in this
topic, please give me some suggestions.

My idea is that providers report their service capabilities dynamically
instead of being statically configured in neutron.conf. See details at the
link below.
https://docs.google.com/presentation/d/1_uNF0JEDyoFor8xj-MaaacPL334hiWJWB7NzfRrcVJg/edit?usp=sharing

Everyone can comment on this doc.
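For contrast, today a provider is declared statically in neutron.conf roughly
like the commented line below, while the idea here is that a driver registers
and describes itself at runtime; the register_provider() call is purely
hypothetical:

    # Today (static, [service_providers] section of neutron.conf), roughly:
    #   service_provider = VPN:vendorX:neutron.services.vpn.drivers.vendorx.Driver:default
    #
    # Sketched dynamic alternative (hypothetical API, not existing Neutron code):
    PROVIDERS = {}

    def register_provider(service_type, name, driver, capabilities):
        """Record a provider and the capabilities it reports at startup."""
        PROVIDERS.setdefault(service_type, {})[name] = {
            "driver": driver,
            "capabilities": capabilities,
        }

    register_provider("VPN", "vendorX", "vendorx.Driver",
                      capabilities={"ipsec": True, "gre": False, "max_connections": 1000})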

BR,
Germy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-18 Thread Germy Lure
Hi Trinath,

I think the vendor company has many experts to review their codes. They can
do it well.

But I still have some comments inline.

Germy

On Thu, Sep 18, 2014 at 1:42 PM, trinath.soman...@freescale.com 
trinath.soman...@freescale.com wrote:

  Though Code reviews for vendor code takes more time, I feel it must go
 through Core reviews.



 Since, Vendors might submit the code that is working fine within their
 third party CI environment but the Code review make it more efficient with
 respect to the coding standards followed in the community.



 Also, for all the vendor plugins/drivers the code reviews (+1s and +2s)
 give a feedback on the quality they must be in to be with Neutron.

I think the quality of software mainly lies with the developers; otherwise
reviewers will be very, very busy.
Suppose all core members reviewed your plugin and gave it many +1s; can you
then guarantee the plugin is of high quality, even bug-free?
I think only the vendor, cooperating with the customer and providing the
plugin and driver, can and must guarantee the quality. But those *private*
releases only exist on the vendor's disk and run on the customer's machines.
They cannot be contributed back to the community because of waiting for
approval, because of not being efficient enough, because of the coding
standards, ...



 But one suggestion I want to put forward: when a -1 or -2 is given to the
 code, reviewers might give a brief comment on why it was given, what the
 preferred solution might be, and whether there is any reference implementation
 the code under review could follow to move away from these errors. This can
 help the developers.

If core members prefer Cisco's implementation, should all the other vendors
follow it? Why have different plugins? Only one would be enough.
Of course, this is a very extreme assumption; we are just discussing the problem.









 --

 Trinath Somanchi - B39208

 trinath.soman...@freescale.com | extn: 4048
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-17 Thread Germy Lure
Hi Salvatore,
Thanks for the hyperlink. It's really a monster thread that contains
everyone's opinion, but it's useful to me.
So, before we focus on the Neutron core itself, we should first release a
suite of standardized APIs and a framework for vendors' code.
About this job, I think most of it is already OK: we have 20+ monolithic
plugins following the NB API and the plugin framework.
We need to publish an API doc for the internal interface (I prefer to call it
the SB API; from the Neutron core's point of view, vendors' code
does not belong to the core) and for other things that are unsuitable now.

In my opinion, the Neutron core's main responsibilities are the data model and
DB, scheduling and dispatch, API and validation, framework and workflow.

Some more comments inline.


This is a very important discussion - very closely related to the one going
on in this other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045768.html
.
Unfortunately it is also a discussion that tends to easily fragment and
move in a thousand different directions.
A few months ago I too was of the opinion that vendor plugins and drivers
were the main source of unnecessary load for the core team. I still think
that they're an unnecessarily heavy load, but I reckon the problem does not
really lie with open source versus vendor code. It lies in matching
people's competencies with subsystems and proper interfaces across them - as
already pointed out in this thread.
Yes, it's really important.

I have some more comments inline, but rather than growing another monster
thread I'd prefer to start a different, cross-project discussion (which will
hopefully not become just a cross-project monster thread!)

Salvatore

On 15 September 2014 08:29, Germy Lure germy.l...@gmail.com wrote:

 Obviously, to a vendor's plugin/driver, the most important thing is
 API.Yes?
 NB API for a monolithic plugin or a service plugin and SB API for a
 service driver or agent, even MD. That's the basic.
 Now we have released a set of NB APIs with relative stability. The SB
 APIs' standardization are needed.


The internal interface between the API and the plugins is standardized at
the moment through use of classes like [1]. A similar interface exists for
ML2 drivers [2].
For the monolithic plugins, [1] is useful. Vendors can implement those APIs
and keep their code locally.

At the moment the dispatch of an API call to the plugin or from a plugin to
a ML2 driver is purely a local call so these interfaces are working fairly
well at the moment. I don't know yet however whether they will be
sufficient in case plugins are split into different repos. ML2 Driver
maintainers have however been warned in the past that the driver interface
is to be considered internal and can be changed at any time. This does not
apply to the plugin interface which has been conceived in this way to
facilitate the development of out of tree plugins.
Indeed, it's difficult to split the MDs from the ML2 plugin framework. I think
it needs some adaptation.

On the other hand, if by SB interfaces you are referring to the RPC
interfaces for communicating between the servers and the various plugin, I
would say that they should be considered internal at the moment.

[1]
https://github.com/openstack/neutron/blob/master/neutron/neutron_plugin_base_v2.py#L28
[2]
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py
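To make [1] concrete: an out-of-tree monolithic plugin basically subclasses
that base class and fills in the CRUD hooks. A bare-bones sketch (method names
from neutron_plugin_base_v2; everything prefixed with _backend_ is invented
for illustration):

    # Bare-bones illustration of the northbound plugin contract in [1].
    # Only two of the abstract methods are shown; a real plugin must implement
    # the full network/subnet/port CRUD set.
    from neutron import neutron_plugin_base_v2

    class MyVendorPlugin(neutron_plugin_base_v2.NeutronPluginBaseV2):

        def create_network(self, context, network):
            # push the request to the vendor backend, then return the resource dict
            return self._backend_create_network(context, network["network"])

        def get_network(self, context, id, fields=None):
            return self._backend_get_network(context, id, fields)

        # ... update_network, delete_network, get_networks, and the
        #     subnet/port equivalents go here ...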


 Some comments inline.



 On Fri, Sep 12, 2014 at 5:18 PM, Kevin Benton blak...@gmail.com wrote:

  So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.

 Yes, I think this is currently the view held by the PTL (Kyle) and some
 of the other cores so what you're suggesting will definitely come up at the
 summit.

 Good!


The discussion however will not be that different from the one we're seeing
on that huge thread on splitting out drivers, which has become in my
opinion a frankenthread.
Nevertheless, that thread points out that this is far from being merely a
neutron topic (despite neutron being the project with the highest number of
drivers and plugins).




  Why do we need a different repo to store vendors' codes? That's not the
 community business.
 I think only a proper architecture and normal NB & SB APIs can bring a
 clear separation between plugins(or drivers) and core code, not a
 different repo.

 The problem is that that architecture won't stay stable if there is no
 shared community plugin depending on its stability. Let me ask you the
 inverse question. Why do you think the reference driver should stay in the
 core repo?

 A separate repo won't have an impact on what is packaged and released so
 it should have no impact on user experience, complete versions,
 providing code examples,  or developing new features. In fact, it will
 likely help with the last two because it will provide a clear delineation
 between what a plugin is responsible for vs. what the core API is
 responsible for. And, because new cores can be added faster

Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-15 Thread Germy Lure
Obviously, to a vendor's plugin/driver, the most important thing is the API.
Yes?
The NB API for a monolithic plugin or a service plugin, and the SB API for a
service driver or agent, even an MD. That's the basis.
Now we have released a set of NB APIs with relative stability. The SB APIs'
standardization is still needed.

Some comments inline.



On Fri, Sep 12, 2014 at 5:18 PM, Kevin Benton blak...@gmail.com wrote:

  So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.

 Yes, I think this is currently the view held by the PTL (Kyle) and some of
 the other cores so what you're suggesting will definitely come up at the
 summit.

Good!



  Why do we need a different repo to store vendors' codes? That's not the
 community business.
  I think only a proper architecture and normal NB & SB APIs can bring a
 clear separation between plugins(or drivers) and core code, not a
 different repo.

 The problem is that that architecture won't stay stable if there is no
 shared community plugin depending on its stability. Let me ask you the
 inverse question. Why do you think the reference driver should stay in the
 core repo?

 A separate repo won't have an impact on what is packaged and released so
 it should have no impact on user experience, complete versions,
 providing code examples,  or developing new features. In fact, it will
 likely help with the last two because it will provide a clear delineation
 between what a plugin is responsible for vs. what the core API is
 responsible for. And, because new cores can be added faster to the open
 source plugins repo due to a smaller code base to learn, it will help with
 developing new features by reducing reviewer load.

OK, the key point is that vendors' code should be kept by themselves, NOT by
the community. But at the same time, the community should provide
some open source reference implementations as standard examples for new cores
and vendors.
You are right that a separate repo won't have an impact on what is packaged
and released. The open source code can stay in the core repo or a different
one; in any case, we need it there for reference and for releasing versions.
No vendor would maintain the open source code, only the community would.



 On Fri, Sep 12, 2014 at 1:50 AM, Germy Lure germy.l...@gmail.com wrote:



 On Fri, Sep 12, 2014 at 11:11 AM, Kevin Benton blak...@gmail.com wrote:


  Maybe I missed something, but what's the solution?

 There isn't one yet. That's why it's going to be discussed at the summit.

 So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.
 By leaving open source plugins and drivers in the tree , we can resolve
 such problems:
   1)release a workable and COMPLETE version
   2)user experience(especially for beginners)
   3)provide code example to learn for new contributors and vendors
   4)develop and verify new features



  I think we should release a workable version.

 Definitely. But that doesn't have anything to do with it living in the
 same repository. By putting it in a different repo, it provides smaller
 code bases to learn for new contributors wanting to become a core developer
 in addition to a clear separation between plugins and core code.

 Why do we need a different repo to store vendors' codes? That's not the
 community business.
  I think only a proper architecture and normal NB & SB APIs can bring a
 clear separation between plugins(or drivers) and core code, not a
 different repo.
 Of course, if the community provides a wiki page for vendors to add
 hyperlink of their codes, I think it's perfect.


  Besides of user experience, the open source drivers are also used for
 developing and verifying new features, even small-scale case.

 Sure, but this also isn't affected by the code being in a separate repo.

 See comments above.


  The community should and just need focus on the Neutron core and
 provide framework for vendors' devices.

 I agree, but without the open source drivers being separated as well,
 it's very difficult for the framework for external drivers to be stable
 enough to be useful.

 Architecture and API. The community should ensure core and API stable
 enough and high quality. Vendors for external drivers.
 Who provides, who maintains(including development, storage, distribution,
 quality, etc).


 On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure germy.l...@gmail.com
 wrote:

 Some comments inline.

 BR,
 Germy

 On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton blak...@gmail.com
 wrote:

 This has been brought up several times already and I believe is going
 to be discussed at the Kilo summit.

 Maybe I missed something, but what's the solution?


 I agree that reviewing third party patches eats community time.
 However, claiming that the community pays 46% of it's energy to maintain
 vendor-specific code doesn't make any sense. LOC in the repo has very
 little to do with ongoing required maintenance. Assuming the APIs for the
 plugins stay consistent, there should be few

Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-12 Thread Germy Lure
On Fri, Sep 12, 2014 at 11:11 AM, Kevin Benton blak...@gmail.com wrote:


  Maybe I missed something, but what's the solution?

 There isn't one yet. That's why it's going to be discussed at the summit.

So my suggestion is to remove all vendors' plugins and drivers except the
open source ones, kept as built-ins.
By leaving the open source plugins and drivers in the tree, we can address
such problems as:
  1) releasing a workable and COMPLETE version
  2) user experience (especially for beginners)
  3) providing code examples for new contributors and vendors to learn from
  4) developing and verifying new features



  I think we should release a workable version.

 Definitely. But that doesn't have anything to do with it living in the
 same repository. By putting it in a different repo, it provides smaller
 code bases to learn for new contributors wanting to become a core developer
 in addition to a clear separation between plugins and core code.

Why do we need a different repo to store vendors' code? That's not the
community's business.
I think only a proper architecture and normal NB & SB APIs can bring a clear
separation between plugins (or drivers) and core code, not a different repo.
Of course, if the community provides a wiki page where vendors can add
hyperlinks to their code, I think that's perfect.


  Besides of user experience, the open source drivers are also used for
 developing and verifying new features, even small-scale case.

 Sure, but this also isn't affected by the code being in a separate repo.

See comments above.


  The community should and just need focus on the Neutron core and provide
 framework for vendors' devices.

 I agree, but without the open source drivers being separated as well, it's
 very difficult for the framework for external drivers to be stable enough
 to be useful.

Architecture and API. The community should ensure the core and the API are
stable enough and of high quality; vendors are responsible for the external
drivers.
Whoever provides something maintains it (including development, storage,
distribution, quality, etc.).


 On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure germy.l...@gmail.com wrote:

 Some comments inline.

 BR,
 Germy

 On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton blak...@gmail.com wrote:

 This has been brought up several times already and I believe is going to
 be discussed at the Kilo summit.

 Maybe I missed something, but what's the solution?


 I agree that reviewing third party patches eats community time. However,
 claiming that the community pays 46% of it's energy to maintain
 vendor-specific code doesn't make any sense. LOC in the repo has very
 little to do with ongoing required maintenance. Assuming the APIs for the
 plugins stay consistent, there should be few 'maintenance' changes required
 to a plugin once it's in the tree. If there are that many changes to
 plugins just to keep them operational, that means Neutron is far too
 unstable to support drivers living outside of the tree anyway.

 Yes, you are right. Neutron is far too unstable to support drivers
 living outside of the tree anyway. So I think this is really the important
 point.
 The community should focus on standardizing the NB/SB APIs and on introducing
 and improving new features, NOT on wasting energy introducing and maintaining
 vendor-specific code.


 On a related note, if we are going to pull plugins/drivers out of
 Neutron, I think all of them should be removed, including the OVS and
 LinuxBridge ones. There is no reason for them to be there if Neutron has
 stable enough internal APIs to eject the 3rd party plugins from the repo.
 They should be able to live in a separate neutron-opensource-drivers repo
 or something along those lines. This will free up significant amounts of
 developer/reviewer cycles for neutron to work on the API refactor, task
 based workflows, performance improvements for the DB operations, etc.

 I think we should release a workable version. Users can experience the
 functions powered by the built-in components, and they can replace them with
 releases from the vendors they work with. The community
 should not work on vendors' code.


 If the open source drivers stay in the tree and the others are removed,
 there is little incentive to keep the internal APIs stable and 3rd party
 drivers sitting outside of the tree will break on every refactor or data
 structure change. If that's the way we want to treat external driver
 developers, let's be explicit about it and just post warnings that 3rd
 party drivers can break at any point and that the onus is on the external
 developers to learn what changed and react to it. At some point they will
 stop bothering with Neutron completely in their deployments and mimic its
 public API.

 Besides user experience, the open source drivers are also used for
 developing and verifying new features, even for small-scale cases.


 A clear separation of the open source drivers/plugins and core Neutron
 would give a much better model for 3rd party driver developers to follow
 and would enforce a stable internal API
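
For illustration, a minimal sketch of what such an out-of-tree driver could look
like, assuming the ML2 mechanism driver interface of that era
(neutron.plugins.ml2.driver_api; later releases moved it to neutron_lib) and a
vendor package registered under the neutron.ml2.mechanism_drivers entry-point
namespace. The package and class names are hypothetical:

# vendor_mech/driver.py -- hypothetical out-of-tree ML2 mechanism driver.
# The import path below is the Juno-era location of the ML2 driver API;
# newer releases expose it as neutron_lib.plugins.ml2.api.
from neutron.plugins.ml2 import driver_api as api


class VendorMechanismDriver(api.MechanismDriver):
    """Sketch of a driver maintained outside the Neutron tree."""

    def initialize(self):
        # Set up the connection to the vendor backend (address, credentials, ...).
        pass

    def create_port_postcommit(self, context):
        # Push the new port to the backend after the DB transaction commits.
        pass

With such a package installed, ml2_conf.ini can refer to the driver by its
entry-point alias, and the only contract the community has to keep stable is the
MechanismDriver interface itself.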

[openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-11 Thread Germy Lure
Hi stackers,

According to my statistics (J2), the LOC of vendors' plugins and drivers is
about 102K, while the whole Neutron tree is about 220K.
That is to say, the community has paid and is paying over 46% of its energy to
maintain vendors' code. If we take mails, bugs,
BPs and so on into consideration, this percentage will be even higher.

Most of this code is just plugins and drivers implementing almost the
same functions. Every vendor submits a plugin,
and the community just repeats the same work, again and again. Meaningless. I
think it's time to move them out.
Let's focus on improving the existing but still weak features, and on
introducing important and interesting new features.

My suggestions now:
1. Monolithic plugins
  1) The community only standardizes the NB API and keeps the built-ins, such as
the ML2, OVS and Linux bridge plugins.
  2) Vendors maintain their plugins locally.
  3) Users get Neutron from the community and a plugin from a vendor on demand
(see the loading sketch below).
2. Service plugins
  1) The community standardizes the SB API and keeps the open source drivers
(iptables, openSwan, etc.) as built-in.
  2) Vendors only provide drivers, not plugins. And those drivers need not be
delivered to the community.
  3) As above, users can get code on demand from vendors or just use the open
source drivers.
3. ML2 plugin
  1) As with the service and monolithic plugins, the community just keeps the
open source implementations as built-in.
  2) L2 population should be kept.
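
As a rough sketch of how "get the plugin from a vendor on demand" could work
mechanically, assuming the existing pattern of naming the plugin class by a
dotted path in the configuration and importing it at startup (the demo path
below uses only the standard library so the snippet runs anywhere; in a real
deployment the value would be something like
neutron.plugins.ml2.plugin.Ml2Plugin or a vendor's equivalent):

# Sketch: load a plugin class from a dotted path, the way a core_plugin-style
# configuration option can point at an in-tree or out-of-tree class.
import importlib


def load_class(dotted_path):
    module_name, _, class_name = dotted_path.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


if __name__ == '__main__':
    # Stand-in value; a real deployment would read this from neutron.conf.
    plugin_cls = load_class('collections.OrderedDict')
    print(plugin_cls)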

I am very happy to discuss this further.

Vendors' code stat. table (excluding built-in plugins and drivers):

Path                                Size (LOC)
neutron-master\neutron\plugins\          63170
neutron-master\neutron\services\          4052
neutron-master\neutron\tests\            35756
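
For reference, a minimal sketch of how such a count could be reproduced against
a Neutron checkout; the restriction to .py files and the example directories are
assumptions, not necessarily the method behind the figures above:

# Rough line counter over directories of a neutron checkout.
import os
import sys


def count_lines(root):
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith('.py'):
                path = os.path.join(dirpath, name)
                with open(path, errors='ignore') as f:
                    total += sum(1 for _ in f)
    return total


if __name__ == '__main__':
    # Example paths relative to the repository root.
    for root in sys.argv[1:] or ['neutron/plugins', 'neutron/services', 'neutron/tests']:
        print(root, count_lines(root))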

BR,
Germy


[openstack-dev] [H][Neutron][IPSecVPN]Cannot tunnel two namespace Routers

2014-09-02 Thread Germy Lure
Hi Stackers,

Network TOPO like this: VM1(net1)--Router1---IPSec VPN
tunnel---Router2--VM2(net2)
If the left and right sides are deployed in different OpenStack environments, it
works well. But in the same environment, where Router1 and Router2 are namespace
implementations on the same network node, I cannot ping from VM1 to VM2.

On R2 (Router2), tcpdump tells us that R2 receives the ICMP echo request
packets but doesn't send them out.

7837C113-D21D-B211-9630-00821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef tcpdump -i any
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
11:50:14.853470 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e6), length 132
11:50:14.853470 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 486, length 64
11:50:15.853475 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e7), length 132
11:50:15.853475 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 487, length 64
11:50:16.853461 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e8), length 132
11:50:16.853461 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 488, length 64
11:50:17.853447 IP 10.10.5.2 > 10.10.5.3: ESP(spi=0xc6d65c02,seq=0x1e9), length 132
11:50:17.853447 IP 128.6.25.2 > 128.6.26.2: ICMP echo request, id 44567, seq 489, length 64
^C
8 packets captured
8 packets received by filter
0 packets dropped by kernel

ip addr in R2:

7837C113-D21D-B211-9630-00821800:~ # ip netns exec
qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef ip addr
187: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
206: qr-4bacb61c-72: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:23:10:97 brd ff:ff:ff:ff:ff:ff
inet 128.6.26.1/24 brd 128.6.26.255 scope global qr-4bacb61c-72
inet6 fe80::f816:3eff:fe23:1097/64 scope link
   valid_lft forever preferred_lft forever
208: qg-4abd4bb0-21: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether fa:16:3e:e6:cd:1a brd ff:ff:ff:ff:ff:ff
inet 10.10.5.3/24 brd 10.10.5.255 scope global qg-4abd4bb0-21
inet6 fe80::f816:3eff:fee6:cd1a/64 scope link
   valid_lft forever preferred_lft forever


In addition, the kernel counters in /proc/net/snmp inside the namespace are
unchanged. Do these counters not work properly with namespaces?
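
One way to narrow this down is to look at the IPsec state and the counters from
inside the router namespace; a small sketch follows (the namespace name is the
one shown above, and whether /proc/net/snmp fully reflects the namespace depends
on the kernel version):

# Sketch: run diagnostics inside the router namespace via "ip netns exec".
import subprocess

NS = 'qrouter-4fd2e76e-37d0-4d05-b5a1-dd987c0231ef'  # namespace from the output above

for cmd in (['ip', 'xfrm', 'state'],      # IPsec SAs visible in the namespace
            ['ip', 'xfrm', 'policy'],     # IPsec policies applied to forwarded traffic
            ['cat', '/proc/net/snmp']):   # SNMP counters as seen from the namespace
    full = ['ip', 'netns', 'exec', NS] + cmd
    print('$ ' + ' '.join(full))
    print(subprocess.check_output(full).decode())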


BR,
Germy