[openstack-dev] [neutron] Is anybody using the "next_hop" field in FIP association update notifications?

2017-01-12 Thread Tidwell, Ryan
Hi all,

In reviewing the change [1], the question arose as to whether any project other 
than neutron-dynamic-routing has been using the "next_hop" field included in the 
callback notification emitted when a FIP association is updated [2]. Whether to 
deprecate or simply remove this field is at issue because of its history: it was 
initially included to support BGP dynamic routing in Mitaka, when that code was 
in-tree. Since the fix for bug [3], neutron-dynamic-routing no longer requires 
this field, its inclusion now has little value, and this value shouldn't really 
be computed by code inside the neutron repo anyway. I wanted to poke the mailing 
list to give folks a chance to take a look and make sure they aren't relying on 
it. From what I can tell there are no uses of this inside the neutron repo, nor 
are there any other consumers inside the stadium. For maintainers of stadium 
projects, could you please confirm or debunk my analysis of the situation?

-Ryan Tidwell


[1] https://review.openstack.org/#/c/357722
[2] https://review.openstack.org/#/c/357722/5/neutron/db/l3_db.py@a1182
[3] https://launchpad.net/bugs/1615919
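To make the consumer side of this concrete, here is a minimal, self-contained 
sketch of the callback pattern in question. The registry functions, callback 
name, and payload keys below are illustrative stand-ins for neutron's callback 
machinery, not its actual API; the point is that a defensive consumer treats 
next_hop as optional, so dropping it from the payload is safe:

```python
# Plain-Python stand-in for a callback registry (illustrative only).
_subscribers = {}

def subscribe(callback, resource, event):
    _subscribers.setdefault((resource, event), []).append(callback)

def notify(resource, event, trigger, **kwargs):
    for cb in _subscribers.get((resource, event), []):
        cb(resource, event, trigger, **kwargs)

received = []

def fip_update_callback(resource, event, trigger, **kwargs):
    # Defensive consumer: treat next_hop as optional, since it may be
    # removed from the notification payload.
    received.append((kwargs.get('floating_ip_address'),
                     kwargs.get('next_hop')))

subscribe(fip_update_callback, 'floating_ip', 'after_update')

# A notification without next_hop must not break the consumer.
notify('floating_ip', 'after_update', None,
       floating_ip_address='203.0.113.10', fixed_ip_address='10.0.0.5')
```

Any consumer written this way keeps working whether or not the field is removed.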

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario testing?

2016-12-06 Thread Tidwell, Ryan
I just saw review [1] come in this afternoon. I'm still reviewing and digesting 
it, but it looks pretty slick. Ryu appears to have test helpers that stand up 
Quagga in a container and let you interact with it natively in Python. This 
gives us a clean way to set up a BGP peer that we can run assertions against to 
ensure neutron announces the correct routes. I see what look like some decent 
scenario tests here, so we do have work in flight to address some test gaps in 
neutron-dynamic-routing.


-Ryan


[1] https://review.openstack.org/#/c/407779
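The assertion pattern such a scenario test could use (poll the peer until the 
expected prefix is learned, or fail on a deadline) can be sketched in a 
self-contained way. FakeBgpPeer below stands in for the Quagga/Ryu test helper; 
none of these names come from the review itself:

```python
import time

class FakeBgpPeer:
    """Stand-in for a real BGP peer test helper (names are invented)."""
    def __init__(self):
        self.routes = set()

    def learned_routes(self):
        return set(self.routes)

def wait_for_route(peer, prefix, timeout=10.0, interval=0.1):
    # Poll the peer's learned routes until the prefix appears or we time out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if prefix in peer.learned_routes():
            return True
        time.sleep(interval)
    return False

peer = FakeBgpPeer()
peer.routes.add('10.1.0.0/28')   # simulate neutron announcing a prefix

assert wait_for_route(peer, '10.1.0.0/28')
assert not wait_for_route(peer, '192.0.2.0/24', timeout=0.3)
```

With a real peer behind `learned_routes()`, the same loop verifies that the BGP 
agent actually announced what neutron was configured to advertise.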


From: Tidwell, Ryan
Sent: Tuesday, December 6, 2016 3:13:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario 
testing?

Thanks for the pointer. I'll take a look and see what can be leveraged.

-Ryan


_
From: Armando M. <arma...@gmail.com<mailto:arma...@gmail.com>>
Sent: Tuesday, December 6, 2016 2:57 PM
Subject: Re: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario 
testing?
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>




On 6 December 2016 at 14:44, Tidwell, Ryan 
<ryan.tidw...@hpe.com<mailto:ryan.tidw...@hpe.com>> wrote:
This is at the top of my list to look at. I've been thinking a lot about how to 
implement some tests. For instance, do we need to actually stand up a BGP peer 
of some sort to peer neutron with and assert the announcements somehow? Or 
should we assume that Ryu works properly and make sure we have solid coverage 
of the driver interface somehow? I'm open to suggestions on how to approach 
this.

Thomas Morin et al. have had a few ideas and put together [1]. There are some 
similarities between the efforts. Something worth mulling over.

Cheers,
Armando

[1] https://review.openstack.org/#/c/396967/


-Ryan

-Original Message-
From: Assaf Muller [mailto:as...@redhat.com<mailto:as...@redhat.com>]
Sent: Tuesday, December 06, 2016 2:36 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario testing?

Hi all,

General query - Is there anyone in the Dynamic Routing community that is 
planning on contributing a scenario test? As far as I could tell, none of the 
current API tests would fail if, for example, the BGP agent was not running. 
Please correct me if I'm wrong.

Thank you.


Re: [openstack-dev] [neutron] trunk api performance and scale measurments

2016-12-06 Thread Tidwell, Ryan
Bence,

I had been meaning to go a little deeper with performance benchmarking, but 
I've been crunched for time. Thanks for doing this; it's some great analysis.

As Armando mentioned, L2pop seemed to be the biggest impediment to control-plane 
performance. If I were to use trunks heavily in production, I would consider 
either A) not using overlays that leverage L2pop (i.e. using VLAN or flat 
segmentation instead) or B) disabling L2pop and accepting the MAC-learning 
overhead in the overlay. Another consideration is the rpc_timeout setting: I 
tested with rpc_timeout=300 and didn't really encounter any RPC timeouts, 
though operators may have other reasons for not bumping rpc_timeout up that 
high.

I failed to mention it much in previous write-ups, but I also encountered scale 
issues with listing ports past a certain threshold. I haven't gone back to 
identify the tipping point, but I did notice that Horizon began to really bog 
down as I added ports to the system. On the surface it didn't seem to matter 
whether these ports were used as subports or not; the sheer volume of ports 
added to the system caused both Horizon and, more importantly, GET on 
v2.0/ports to really bog down.
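For reference, the tuning described above corresponds roughly to settings like 
the following. The RPC timeout option in neutron is rpc_response_timeout; the 
file paths and values here are illustrative and should be checked against your 
release:

```ini
# /etc/neutron/neutron.conf -- raise the server/agent RPC timeout
[DEFAULT]
rpc_response_timeout = 300

# /etc/neutron/plugins/ml2/ml2_conf.ini -- avoid l2population entirely by
# using VLAN/flat segmentation (option A above); alternatively keep tunnel
# networks and simply omit l2population from mechanism_drivers (option B).
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
```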

-Ryan

From: Armando M. [mailto:arma...@gmail.com]
Sent: Monday, December 05, 2016 8:37 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron] trunk api performance and scale 
measurments



On 5 December 2016 at 08:07, Jay Pipes wrote:
On 12/05/2016 10:59 AM, Bence Romsics wrote:
Hi,

I measured how the new trunk API scales with lots of subports. You can
find the results here:

https://wiki.openstack.org/wiki/Neutron_Trunk_API_Performance_and_Scaling

Hope you find it useful. There are several open ends, let me know if
you're interested in following up some of them.

Great info in there, Ben, thanks very much for sharing!

Bence,

Thanks for the wealth of information provided, I was looking forward to it! The 
results of the experimentation campaign make me somewhat confident that the 
trunk feature design is solid, or at least that is what it looks like! I'll 
look into why there is a penalty on port-list, because that's surprising to me 
too.

I also know that the QE team internally at HPE has done some perf testing 
(though I don't have results publicly available yet). What I can share at this 
point is:

  *   They also disabled l2pop to push the boundaries of trunk deployments;
  *   They disabled the OVS firewall (though for reasons orthogonal to 
scalability limits introduced by the functionality);
  *   They flipped back to the ovsctl interface, as it turned out to be one of 
the components that introduced some penalty. Since you use the native 
interface, it'd be nice to see what happens if you flip this switch too;
  *   RPC timeout of 300.
On a testbed of 3 controllers and 7 computes, this is at a high level what they 
found:

  *   100 trunks with 1900 subports took about 30 minutes with no errors;
  *   500 subports take about 1 minute to bind to a trunk;
  *   Booting a VM on a trunk with 100 subports takes as little as 15 seconds 
to a successful ping. The trunk goes from BUILD -> ACTIVE within 60 seconds of 
booting the VM;
  *   Scaling to 700 VMs incrementally on trunks with 100 initial subports is 
constant (i.e. booting time stays at ~15 seconds).
I believe Ryan Tidwell may have more on this.

Cheers,
Armando



-jay




Re: [openstack-dev] [neutron] Is OVS implementation for supporting VLAN-Aware-VM completed?

2016-09-13 Thread Tidwell, Ryan
Cathy,

There are a few outstanding reviews to be wrapped up, including docs. However, 
this is mostly complete: the bulk of the functionality has merged and you can 
try it out.

Code Reviews: 
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/vlan-aware-vms
Docs: https://review.openstack.org/#/c/361776/

-Ryan

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Tuesday, September 13, 2016 11:25 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [neutron] Is OVS implementation for supporting 
VLAN-Aware-VM completed?

Hi All,

Sorry, I lost track of this work. Is the implementation completed? Can we start 
using the OVS version of VLAN-aware VMs?

Thanks,
Cathy


[openstack-dev] [Neutron] IRC meeting for VLAN-aware VM's

2016-06-20 Thread Tidwell, Ryan
At one point early in the Mitaka cycle we had a weekly IRC meeting going on 
this topic.  I got side-tracked by other work at the time and stopped 
attending, so my apologies if these are still happening.  If they're not, I'm 
wondering whether it would be useful to get them going again.  Thoughts?

-Ryan


[openstack-dev] [neutron][stable] Ryu 4.2 breaking python34 jobs

2016-05-18 Thread Tidwell, Ryan
I just wanted to give everyone a heads-up that a bug in Ryu 4.2, which was just 
recently pushed to PyPI, seems to be causing issues in the python34 jobs in 
neutron-dynamic-routing.  The issue will likely also cause problems for 
backports to stable/mitaka in the main neutron repository. I have filed 
https://bugs.launchpad.net/neutron/+bug/1583011 to track it.  The short version 
of the problem is that an incompatibility with Python 3 was briefly introduced. 
Note that master in the neutron repository is probably not affected, as we have 
spun out the BGP code that exercises the affected portions of Ryu. Please also 
note that a fix for the issue merged in Ryu several hours ago.

-Ryan


Re: [openstack-dev] [neutron] Social at the summit

2016-04-27 Thread Tidwell, Ryan
+1

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Monday, April 25, 2016 11:07 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron] Social at the summit

OK, there is enough interest, I'll find a place on 6th Street for us and get a 
reservation for Thursday around 7 or so.

Thanks folks!

On Mon, Apr 25, 2016 at 12:30 PM, Zhou, Han  wrote:
> +1 :)
>
> Han Zhou
> Irc: zhouhan
>
>
> On Monday, April 25, 2016, Korzeniewski, Artur 
>  wrote:
>>
>> Sign me up :)
>>
>> Artur
>> IRC: korzen
>>
>> -Original Message-
>> From: Darek Smigiel [mailto:smigiel.dari...@gmail.com]
>> Sent: Monday, April 25, 2016 7:19 PM
>> To: OpenStack Development Mailing List (not for usage questions) 
>> 
>> Subject: Re: [openstack-dev] [neutron] Social at the summit
>>
>> Count me in!
>> Will be good to meet all you guys!
>>
>> Darek (dasm) Smigiel
>>
>> > On Apr 25, 2016, at 12:13 PM, Doug Wiegley 
>> >  wrote:
>> >
>> >
>> >> On Apr 25, 2016, at 12:01 PM, Ihar Hrachyshka 
>> >> 
>> >> wrote:
>> >>
>> >> WAT???
>> >>
>> >> It was never supposed to be core only. Everyone is welcome!
>> >
>> > +2
>> >
>> > irony intended.
>> >
>> > Socials are not controlled by gerrit ACLs.  :-)
>> >
>> > doug
>> >
>> >>
>> >> Sent from my iPhone
>> >>
>> >>> On 25 Apr 2016, at 11:56, Edgar Magana 
>> >>> wrote:
>> >>>
>> >>> Would you extend it to ex-cores?
>> >>>
>> >>> Edgar
>> >>>
>> >>>
>> >>>
>> >>>
>>  On 4/25/16, 10:55 AM, "Kyle Mestery"  wrote:
>> 
>>  Ihar, Henry and I were talking and we thought Thursday night 
>>  makes sense for a Neutron social in Austin. If others agree, 
>>  reply on this thread and we'll find a place.
>> 
>>  Thanks!
>>  Kyle
>> 


Re: [openstack-dev] [Neutron] BGP support

2016-03-30 Thread Tidwell, Ryan
Gary,

I'm not sure I understand the relationship you're drawing between BGP and L2 
GW, could you elaborate?  The BGP code that landed in Mitaka is mostly geared 
toward the use case where you want to route your tenant networks directly 
without any NAT (i.e. no floating IPs, no SNAT).  Neutron peers with upstream 
routers and announces prefixes that tenants allocate dynamically.  We have 
talked about building on what merged in Mitaka to support L3 VPN in the future, 
but to my knowledge no concrete plan has emerged yet.
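For readers unfamiliar with the feature, wiring this up in a Mitaka-era 
deployment looks roughly like the following. These commands require a running 
deployment with the BGP service plugin enabled; the exact command names and 
options should be verified against the CLI docs for your release, and the 
names, addresses, and AS numbers are invented:

```shell
# Create a BGP speaker that will announce tenant prefixes.
neutron bgp-speaker-create --local-as 64512 --ip-version 4 demo-speaker

# Associate the external network whose address scope should be announced.
neutron bgp-speaker-network-add demo-speaker provider-net

# Define the upstream router as a peer and bind it to the speaker.
neutron bgp-peer-create --peer-ip 192.0.2.1 --remote-as 64513 upstream-peer
neutron bgp-speaker-peer-add demo-speaker upstream-peer

# Inspect what the speaker would advertise (prefix/next-hop pairs).
neutron bgp-speaker-advertiseroute-list demo-speaker
```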

-Ryan

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Sunday, March 27, 2016 11:36 PM
To: OpenStack List
Subject: [openstack-dev] [Neutron] BGP support

Hi,
In the M cycle BGP support was added in tree. I have seen specs in the L2 GW 
project for this support too. Are we planning to consolidate the efforts? Will 
the BGP code be moved from the Neutron git to the L2-GW project? Will a new 
project be created?
Sorry, a little in the dark here and it would be nice if someone could please 
provide some clarity here. It would be a pity that there were competing efforts 
and my take would be that the Neutron code would be the single source of truth 
(until we decide otherwise).
I think that the L2-GW project would be a very good place for that service code 
to reside. It can also have MPLS etc. support. So it may be a natural fit.
Thanks
Gary


Re: [openstack-dev] [neutron] BGP Dynamic Routing Development Going Forward

2016-01-25 Thread Tidwell, Ryan
Responses inline

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Friday, January 22, 2016 9:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] BGP Dynamic Routing Development Going 
Forward

The real question that needs to be asked (at least for me) is how this feature 
can work with other plugins/ML2 drivers
that are not the reference implementation.


-  Regardless of the ML2 drivers you use, ML2 is supported 
with the reference implementation.  The code we have only works with ML2, 
though, which is a concern for putting this in the main repo.

How hard (possible) it is to take the API part (or maybe even the agent) and 
use that in another Neutron implementation.
Then focus on which ever option that works best to achieve this.


-  The agent is actually very portable, in my opinion.  The server-side 
code is not so portable; as mentioned above, only ML2 is supported.  
Identifying next-hops is done by querying the DB, and it's hard to make that 
portable between plugins.

I personally think that if the long-term goal is to have this in a separate 
repo, then this should happen right now. "We will do this later" just won't 
work; it will be harder and it will just not happen (or it will cause a lot of 
pain to people who started deploying this).
At least that's my opinion; of course it depends a lot on the people who 
actually work on this...


-  I completely agree, which is why I'm not too excited 
about deferring a split.  Moving out to a separate repo doesn't really set our 
development efforts back.  We're quickly closing in on being functionally 
complete, and this code peels out of the main repo rather cleanly, so I feel we 
lose nothing by moving out of the main repo immediately if that's the direction 
we go for the long haul.  As you point out, it saves users some pain during a 
future upgrade.

Gal.

On Sat, Jan 23, 2016 at 2:15 AM, Vikram Choudhary 
<viks...@gmail.com<mailto:viks...@gmail.com>> wrote:

I agree with Armando and feel option 2 would be viable if we really want to 
deliver this feature in the Mitaka time frame. Adding a new stadium project 
invites more work and can be done in the N release.

Thanks
Vikram
On Jan 22, 2016 11:47 PM, "Armando M." 
<arma...@gmail.com<mailto:arma...@gmail.com>> wrote:


On 22 January 2016 at 08:57, Tidwell, Ryan 
<ryan.tidw...@hpe.com<mailto:ryan.tidw...@hpe.com>> wrote:
I wanted to raise the question of whether to develop BGP dynamic routing in the 
Neutron repo or spin it out as a stadium project.  This question has been 
raised recently on reviews and in offline discussions.  For those unfamiliar 
with this work, BGP efforts in Neutron entail admin-only APIs for configuring 
and propagating BGP announcements of next-hops for floating IPs, tenant 
networks, and host routes for each compute port when using DVR.  As we are 
getting late in the Mitaka cycle, I would like to be sure there is consensus on 
the approach for Mitaka.  As I see it, we have 3 courses of action:

1. Continue with development in the main repo without any intention of spinning 
out to a stadium project
2. Continue on the current development course for Mitaka while targeting a 
spin-out to a stadium project during the N cycle
3. Spin out to a stadium project immediately

Each has pros and cons.  This question seems to have arisen while looking at 
the sheer amount of code being proposed, its place in the Neutron model, and 
questioning whether we really want to bring that code into Neutron.  As such, 
continuing with option 1 definitely requires us to come to some consensus.  Let 
me be clear that I'm not opposed to any of these options; I'm simply looking 
for some guidance.  With that said, if the end game is a stadium project, I do 
question whether #2 makes sense.

Not sure if you followed the latest discussion on [1,2] ([1] capturing the 
latest events). Delivering something production-worthy goes well beyond simply 
posting code upstream. We, as a community, have promised to deliver BGP 
capabilities for many cycles, and failed so far. Choosing 3 is clearly going to 
defer this to N or even O because of the amount of effort required to set it 
all up (release, docs, testing, etc.). Option 2, as painful as it may sound, 
gives us immediate access to all that's required to deliver something to users 
so that they can play with it at the end of Mitaka if they choose to. In the 
meantime it gives us some breathing room to get ready as soon as N opens up.

I am operating under the assumption that what you guys have been working on is 
close to being functionally complete. If we don't even have that, then we're in 
trouble no matter which option we choose and we can defer this yet again :/

Having said that, we can all agree that option #1 is not what we all want. Just

[openstack-dev] [neutron] BGP Dynamic Routing Development Going Forward

2016-01-22 Thread Tidwell, Ryan
I wanted to raise the question of whether to develop BGP dynamic routing in the 
Neutron repo or spin it out as a stadium project.  This question has been 
raised recently on reviews and in offline discussions.  For those unfamiliar 
with this work, BGP efforts in Neutron entail admin-only API's for configuring 
and propagating BGP announcements of next-hops for floating IP's, tenant 
networks, and host routes for each compute port when using DVR.  As we are 
getting late in the Mitaka cycle, I would like to be sure there is consensus on 
the approach for Mitaka.  As I see it, we have 3 courses of action:

1. Continue with development in the main repo without any intention of spinning 
out to a stadium project
2. Continue on the current development course for Mitaka while targeting a 
spin-out to a stadium project during the N cycle
3. Spin out to a stadium project immediately

Each has pros and cons.  This question seems to have arisen while looking at 
the sheer amount of code being proposed, its place in the Neutron model, and 
questioning whether we really want to bring that code into Neutron.  As such, 
continuing with option 1 definitely requires us to come to some consensus.  Let 
me be clear that I'm not opposed to any of these options; I'm simply looking 
for some guidance.  With that said, if the end game is a stadium project, I do 
question whether #2 makes sense.

-Ryan

https://review.openstack.org/#/c/201621/
https://review.openstack.org/#/q/topic:bp/bgp-dynamic-routing



Re: [openstack-dev] [neutron] subnetallocation is in core resource, while there is a extension for it?

2015-09-01 Thread Tidwell, Ryan
This was a compromise we made toward the end of Kilo.  The subnetpools resource 
was implemented as a core resource, but for the sake of Horizon interaction, 
and lacking another method for evolving the Neutron API, we deliberately added 
a shim extension.  I believe this was also done for a couple of other 
“extensions,” like VLAN-transparent networks.  I don’t think we want to remove 
the shim extension.
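To illustrate what such a shim looks like: it advertises an alias through the 
API's extension listing but defines no new resources, since the attributes 
already live on the core resource. The sketch below mirrors the shape of 
neutron's extension descriptors without importing neutron; ExtensionDescriptor 
here is a simplified stand-in, not the real base class:

```python
class ExtensionDescriptor:
    """Simplified stand-in for neutron's extension descriptor base class."""
    def get_name(self):
        raise NotImplementedError

    def get_alias(self):
        raise NotImplementedError

    def get_resources(self):
        # No new resources: the extension exists purely so clients
        # (e.g. Horizon) can discover the capability via /extensions.
        return []

class Subnetallocation(ExtensionDescriptor):
    def get_name(self):
        return "Subnet Allocation"

    def get_alias(self):
        return "subnet_allocation"

ext = Subnetallocation()
assert ext.get_resources() == []   # shim: discoverability only
```

Removing the shim would make the capability undiscoverable to API clients even 
though the core resource remains.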

-Ryan

From: gong_ys2004 [mailto:gong_ys2...@aliyun.com]
Sent: Monday, August 31, 2015 9:45 PM
To: openstack-dev
Subject: [openstack-dev] [neutron] subnetallocation is in core resource, while 
there is a extension for it?




Hi, neutron guys,

Look at 
https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,
which defines an extension Subnetallocation but defines no extension resource. 
Actually, it is implemented as a core resource, so I think we should remove 
this extension.

I filed a bug for it:
https://bugs.launchpad.net/neutron/+bug/1490815

Regards,

yong sheng gong


[openstack-dev] [Neutron] Question about neutron-ovs-cleanup and L3/DHCP agents

2015-08-27 Thread Tidwell, Ryan
I was looking over the admin guide 
http://docs.openstack.org/admin-guide-cloud/networking_config-agents.html#configure-l3-agent
 and noticed this:

If you reboot a node that runs the L3 agent, you must run the 
neutron-ovs-cleanup command before the neutron-l3-agent service starts.

Taking a look at neutron-ovs-cleanup, it appears to remove stray veth pairs and 
tap ports from OVS.  The admin guide suggests ensuring neutron-ovs-cleanup runs 
before the L3 and DHCP agents start when rebooting a node.  My question is 
whether there is something special about a reboot vs. an agent restart that is 
the genesis of this note in the admin guide.  What conditions can get you into 
a state where neutron-ovs-cleanup is required?  Is it just a matter of the OVS 
agent getting out of sync and needing to go back to a clean slate?  Can anyone 
shed some light on this note?
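For what it's worth, one way to enforce the ordering the guide asks for is a 
unit drop-in; this sketch assumes a systemd-based host and these unit names, 
both of which vary by distribution and packaging:

```ini
# /etc/systemd/system/neutron-l3-agent.service.d/ovs-cleanup.conf
# Hypothetical drop-in: ensure neutron-ovs-cleanup has run (and succeeded)
# before the L3 agent starts after a reboot.
[Unit]
Requires=neutron-ovs-cleanup.service
After=neutron-ovs-cleanup.service
```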

-Ryan


Re: [openstack-dev] [Neutron][L3] Representing a networks connected by routers

2015-07-23 Thread Tidwell, Ryan
 John Belamaric made a good point that the closest thing that we have to 
representing an L3 domain right now is a subnet pool.

This is actually a really good point.  If you take the example of a L3 network 
that spans segments, you could put something like a /16 into a subnet pool.  
That /16 can be allocated in smaller, non-uniformly sized chunks across 
multiple network segments.  As you allocate chunks of the /16 across network 
segments, you need only identify the subnets allocated from your subnet pool 
and their associated Neutron networks to be able to stitch those segments 
together to represent the whole L3 network.  Conveniently, we currently enforce 
the restriction that subnets on any given segment must all be allocated from 
the same subnet pool which makes stitching segments together in this way 
feasible.  This is an existing construct that seems to model the world the way 
you want.  I think we should at least explore this angle, though there are still 
potentially some gotchas with regard to the interface with Nova that I haven't 
thought through.
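As a rough illustration of that stitching idea — grouping subnets on their subnet pool to reconstruct the L3 network that spans segments — here is a minimal Python sketch. The field names follow the Neutron subnet resource (cidr, network_id, subnetpool_id), but the data itself is invented:

```python
# Sketch: reconstruct an L3 network that spans segments by grouping
# subnets on their subnet pool.  The records below are invented examples.
subnets = [
    {"cidr": "10.10.0.0/24", "network_id": "segment-a", "subnetpool_id": "pool-1"},
    {"cidr": "10.10.1.0/25", "network_id": "segment-b", "subnetpool_id": "pool-1"},
    {"cidr": "172.16.0.0/24", "network_id": "other-net", "subnetpool_id": "pool-2"},
]

# All subnets carved from pool-1 together form one L3 network,
# even though they sit on different network segments.
l3_network = [s for s in subnets if s["subnetpool_id"] == "pool-1"]
print(sorted(s["network_id"] for s in l3_network))  # ['segment-a', 'segment-b']
```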

-Ryan

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net] 
Sent: Thursday, July 23, 2015 9:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][L3] Representing a networks connected by 
routers

On Wed, Jul 22, 2015 at 3:00 PM, Kevin Benton blak...@gmail.com wrote:
 I proposed the port scheduling RFE to deal with the part about 
 selecting a network that is appropriate for the port based on provided 
 hints and host_id. [1]

Thanks for the pointer.  I hadn't paid much attention to this RFE yet.

the neutron network might have been conceived as being just a 
broadcast  domain but, in practice, it is L2 and L3.

 I disagree with this and I think we need to be very clear on what our 
 API constructs mean. If we don't, we will have constant proposals to 
 smear the boundaries between things, which is sort of what we are 
 running into already.

We ran in to this long ago.

 Today I can create a standalone network and attach ports to it. That 
 network is just an L2 broadcast domain and has no IP addresses or any 
 L3 info associated with it, but the ports can communicate via L2. The 
 network doesn't know anything about the l3 addresses and just forwards 
 the traffic according to L2 semantics.

Sure, a network *can* be just L2.  But, my point is that when you start adding 
L3 on top of that network by adding subnets, the subnets don't fully 
encapsulate the L3 part.  A subnet is just a cidr but that's not enough.  To 
illustrate, the IPv4 part of the L3 network can have several cidrs lumped 
together.  The full IPv4 story on that network includes the collection of all 
of the IPv4 subnets associated to the network.  That collection belongs to the 
network.  Without going to the network, there is no way to describe L3 
addresses that is more than just a single cidr.

 The neutron subnet provides L3 addressing info that can be associated 
 with an arbitrary neutron network. To route between subnets, we attach 
 routers to subnets. It doesn't matter if those subnets are on the same 
 or different networks, because it's L3 and it doesn't matter.

 It is conceivable that we could remove the requirement for a subnet to 
 have an underlying network in the fully-routed case. However, that 
 would mean we would need to remove the requirement for a port to have 
 a network as well (unless this is only used for floating IPs).

If we remove the requirement then we have no way to group cidrs together.  A 
single cidr isn't sufficient to express the addressing needed for an L3 
network.  My L3 network could be a bunch of disjoint or fragmented cidrs lumped 
together.  They should all be considered equivalent addresses, they just aren't 
lined up in a perfect little cidr.
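To see that point concretely, here is a minimal sketch using Python's stdlib ipaddress module: three disjoint prefixes belonging to one L3 network cannot be expressed as a single cidr, so the grouping has to live somewhere else (in Neutron's case, on the network):

```python
import ipaddress

# Three disjoint IPv4 prefixes belonging to one L3 network; no single
# cidr covers exactly this collection.
prefixes = [ipaddress.ip_network(c)
            for c in ("10.0.0.0/24", "10.0.2.0/24", "10.1.0.0/26")]

# collapse_addresses merges adjacent/overlapping prefixes; these are
# neither, so the collection stays fragmented.
collapsed = list(ipaddress.collapse_addresses(prefixes))
print(len(collapsed))  # still 3
```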

John Belamaric made a good point that the closest thing that we have to 
representing an L3 domain right now is a subnet pool.  I'm still thinking about 
how we might be able to use this concept to help out this situation.

Carl

 1. https://bugs.launchpad.net/neutron/+bug/1469668



[openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-01 Thread Tidwell, Ryan
I see a fix for https://bugs.launchpad.net/neutron/+bug/1244589 merged during 
Kilo.  I'm wondering if we think we have identified a root cause and have 
merged an appropriate long-term fix, or if https://review.openstack.org/148718 
was merged just so there's at least a fix available while we investigate other 
alternatives.  Does anyone have an update to provide?

-Ryan


Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-01 Thread Tidwell, Ryan
Not seeing this on Kilo, we're seeing this on Juno builds (that's expected).  
I'm interested in a Juno backport, but mainly wanted to be see if others had 
confidence in the fix.  The discussion in the bug report also seemed to 
indicate there were other alternative solutions others might be looking into 
that didn't involve an iptables rule.

-Ryan

-Original Message-
From: Mark McClain [mailto:m...@mcclain.xyz] 
Sent: Monday, June 01, 2015 6:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease 
due packet has no checksum


 On Jun 1, 2015, at 7:26 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:
 
 I see a fix for https://bugs.launchpad.net/neutron/+bug/1244589 merged during 
 Kilo.  I'm wondering if we think we have identified a root cause and have 
 merged an appropriate long-term fix, or if 
 https://review.openstack.org/148718 was merged just so there's at least a fix 
 available while we investigate other alternatives.  Does anyone have an 
 update to provide?
 
 -Ryan

The fix works in environments we’ve tested in.  Are you still seeing problems?

mark


Re: [openstack-dev] [neutron] Neutron API rate limiting

2015-05-15 Thread Tidwell, Ryan
The Nova analog in Neutron is specifically what I was interested in.  Makes 
perfect sense.  Thanks!

-Ryan

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Friday, May 15, 2015 11:54 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] Neutron API rate limiting

On 05/14/2015 08:32 PM, Kevin Benton wrote:
 There isn't anything in neutron at this point that does that. I think 
 the assumption so far is that you could rate limit at your load 
 balancer or whatever distributes requests to neutron servers.

Right, which makes a lot of sense given the horizontally scalable nature of the API 
workers.  Nova had some rate limiting built-in but I think it may have even 
been disabled by default now because it's basically useless when you run 
multiple API workers.

--
Russell Bryant



Re: [openstack-dev] [neutron] Neutron API rate limiting

2015-05-15 Thread Tidwell, Ryan


From: Carl Baldwin [c...@ecbaldwin.net]
Sent: Thursday, May 14, 2015 9:10 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [neutron] Neutron API rate limiting

@Gal, your proposal sounds like packet or flow rate limiting of data through a 
port.  What Ryan is proposing is rate limiting of api requests to the server.  
They are separate topics, each may be a valid need on its own but should be 
considered separately.

@Ryan, I tend to agree that rate limiting belongs in front of the api servers 
at the load balancer level.  That is not to say we couldn't eventually use our 
own lbaas for this someday and integrate rate limiting there.  Thoughts?

Carl

On May 14, 2015 9:26 PM, Gal Sagie 
gal.sa...@gmail.com wrote:
Hello Ryan,

We have proposed a spec to liberty to add rate limit functionality to security 
groups [1].
We see two big use cases for it, one as you mentioned is DDoS for east-west and 
another
is brute force prevention (for example port scanning).

We are re-writing the spec as an extension to the current API, we also have a 
proposal
to enhance the Security Group / FWaaS implementation in order to make it easily 
extendible by such
new classes of security rules.

We are planning to discuss all of that in the SG/FWaaS future directions 
session [2].
I or Lionel will update you as soon as we have the fixed spec for review, and 
feel free to come to the discussion
as we are more then welcoming everyone to help this effort.

Gal.

[1] https://review.openstack.org/#/c/151247/
[2] https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction

On Fri, May 15, 2015 at 2:21 AM, Tidwell, Ryan 
ryan.tidw...@hp.com wrote:
I was batting around some ideas regarding IPAM functionality, and it occurred 
to me that rate-limiting at an API level might come in handy and as an example 
might help provide one level of defense against DoS for an external IPAM 
provider that Neutron might make calls off to.  I’m simply using IPAM as an 
example here, there are a number of other (ie better) reasons for rate-limiting 
at the API level.  I may just be ignorant (please forgive me if I am ☺ ), but 
I’m not aware of any rate-limiting functionality at the API level in Neutron.  
Does anyone know if such a feature exists that could point me at some 
documentation? If it doesn’t exist, has the Neutron community broached this 
subject before? I have to imagine someone has brought this up before and I just 
was out of the loop.  Anyone have thoughts they care to share? Thanks!

-Ryan





--
Best Regards ,

The G.



Certainly you want to do some rate-limiting before a request even hits Neutron. 
 I was asking the question since I believe Nova has a rate-limiting feature 
that is built-in, although it seems to serve a different purpose than just 
keeping generic DoS attacks at bay (which is why you want to put something in 
front of Neutron/Nova/etc.).  I simply wondered if there was any utility to 
per-tenant throttling which is what Nova seems to have.  I shared a very poor 
example and wasn't very clear, my apologies.
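For what it's worth, per-tenant throttling of the kind Nova offers boils down to something like a token bucket keyed on tenant ID. A minimal sketch follows — this is not Nova's actual implementation, and, as noted above, in-process state like this is of limited use once multiple API workers are involved:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative only)."""
    def __init__(self, rate, burst):
        self.rate = rate        # tokens refilled per second
        self.burst = burst      # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per tenant; first request for a tenant creates its bucket.
buckets = {}
def check(tenant_id, rate=10, burst=20):
    bucket = buckets.setdefault(tenant_id, TokenBucket(rate, burst))
    return bucket.allow()
```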

-Ryan


[openstack-dev] [neutron] Neutron API rate limiting

2015-05-14 Thread Tidwell, Ryan
I was batting around some ideas regarding IPAM functionality, and it occurred 
to me that rate-limiting at an API level might come in handy and as an example 
might help provide one level of defense against DoS for an external IPAM 
provider that Neutron might make calls off to.  I'm simply using IPAM as an 
example here, there are a number of other (ie better) reasons for rate-limiting 
at the API level.  I may just be ignorant (please forgive me if I am :) ), but 
I'm not aware of any rate-limiting functionality at the API level in Neutron.  
Does anyone know if such a feature exists that could point me at some 
documentation? If it doesn't exist, has the Neutron community broached this 
subject before? I have to imagine someone has brought this up before and I just 
was out of the loop.  Anyone have thoughts they care to share? Thanks!

-Ryan


Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs in Liberty?

2015-05-11 Thread Tidwell, Ryan
Erik,

I’m looking forward to seeing this blueprint re-proposed and am able to pitch 
in to help get this into Liberty.  Let me know how I can help.

-Ryan

From: Erik Moe [mailto:erik@ericsson.com]
Sent: Friday, May 08, 2015 6:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware 
VMs in Liberty?


Hi,

I have not been able to work with upstreaming of this for some time now. But 
now it looks like I may make another attempt. Who else is interested in this, 
as a user or to help contributing? If we get some traction we can have an IRC 
meeting sometime next week.

Thanks,
Erik


From: Scott Drennan [mailto:sco...@nuagenetworks.net]
Sent: den 4 maj 2015 18:42
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron]Anyone looking at support for VLAN-aware VMs 
in Liberty?

VLAN-transparent or VLAN-trunking networks have landed in Kilo, but I don't see 
any work on VLAN-aware VMs for Liberty.  There is a blueprint[1] and specs[2] 
which was deferred from Kilo - is this something anyone is looking at as a 
Liberty candidate?  I looked but didn't find any recent work - is there 
somewhere else work on this is happening?  No-one has listed it on the liberty 
summit topics[3] etherpad, which could mean it's uncontroversial, but given 
history on this, I think that's unlikely.

cheers,
Scott

[1]: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[2]: https://review.openstack.org/#/c/94612
[3]: https://etherpad.openstack.org/p/liberty-neutron-summit-topics


Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool feature

2015-03-30 Thread Tidwell, Ryan
I will quickly spin another patch set with the shim extension.  Hopefully this 
will be all it takes to get subnet allocation merged.

-Ryan

-Original Message-
From: Akihiro Motoki [mailto:amot...@gmail.com] 
Sent: Monday, March 30, 2015 2:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Core API vs extension: the subnet pool 
feature

Hi Carl,

I am now reading the detail from Salvatore, but would like to response this 
first.

I don't want to kill this useful feature too and move the thing forward.
I am fine with the empty/shim extension approach.
The subnet pool is regarded as a part of the core API, so I think this extension 
can always be enabled even if no plugin declares to use it.
Sorry for interrupting the work at the last stage, and thanks for understanding.

Akihiro

2015-03-31 5:28 GMT+09:00 Carl Baldwin c...@ecbaldwin.net:
 Akihiro,

 If we go with the empty extension you proposed in the patch will that 
 be acceptable?

 We've got to stop killing new functionality on the very last day like this.
 It just kills progress.  This proposal isn't new.

 Carl

 On Mar 30, 2015 11:37 AM, Akihiro Motoki amot...@gmail.com wrote:

 Hi Neutron folks
 (API folks may be interested on this)

 We have another discussion on Core vs extension in the subnet pool 
 feature reivew https://review.openstack.org/#/c/157597/.
 We did the similar discussion on VLAN transparency and MTU for a 
 network model last week.
 I would like to share my concerns on changing the core API directly.
 I hope this help us make the discussion productive.
 Note that I don't want to discuss the micro-versioning because it 
 mainly focues on Kilo FFE BP.

 I would like to discuss this topic in today's neutron meeting, but I 
 am not so confident I can get up in time, I would like to send this 
 mail.


 The extension mechanism in Neutron provides two points for extensibility:
 - (a) visibility of features in API (users can know which features 
 are available through the API)
 - (b) opt-in mechanism in plugins (plugin maintainers can decide to 
 support some feature after checking the detail)

 My concerns mainly comes from the first point (a).
 If we have no way to detect it, users (including Horizon) need to do 
 a dirty workaround to determine whether some feature is available. I 
 believe this is one important point in the API.

 On the second point, my only concern (not so important) is that we 
 are making the core API change at this moment of the release. Some 
 plugins do not consume db_base_plugin and such plugins need to 
 investigate the impact from now on.
 On the other hand, if we use the extension mechanism all plugins need 
 to update their extension list in the last moment :-(


 My vote at this moment is still to use an extension, but an extension 
 layer can be a shim.
  The idea is that the implementation can stay as-is and we just add 
  an extension module so that the new feature is visible through the 
  extension list.
 It is not perfect but I think it is a good compromise regarding the 
 first point.


 I know there was a suggestion to change this into the core API in the 
 spec review and I didn't notice it at that time, but I would like to 
 raise this before releasing it.

 For longer term (and Liberty cycle),  we need to define more clear 
 guideline on Core vs extension vs micro-versioning in spec reviews.

 Thanks,
 Akihiro





--
Akihiro Motoki amot...@gmail.com



Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-20 Thread Tidwell, Ryan
Great suggestion Kevin.  Passing 0.0.0.1 as gateway_ip_template (or whatever 
you call it) is essentially passing an address index, so when you OR 0.0.0.1 
with the CIDR you get your gateway set as the first usable IP in the subnet.  
The intent of the user is to allocate the first usable IP address in the subnet 
to the gateway.  The wildcard notation for gateway IP is really a more 
convoluted way of expressing this intent.  Something like address_index is a 
little more explicit in my mind.  I think Kevin is on to something.
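The address-index arithmetic described above is straightforward; here is a minimal sketch using Python's stdlib ipaddress module. Note that address_index is the proposal under discussion in this thread, not an existing Neutron API field:

```python
import ipaddress

def gateway_from_index(cidr, address_index):
    """Derive a gateway IP by offsetting from the subnet's network address."""
    net = ipaddress.ip_network(cidr)
    if address_index >= net.num_addresses:
        raise ValueError("address_index outside subnet")
    return net.network_address + address_index

# address_index=1 expresses "first usable IP" regardless of which
# CIDR the pool ends up allocating.
print(gateway_from_index("10.10.10.0/24", 1))  # 10.10.10.1
```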

-Ryan

From: Kevin Benton [mailto:blak...@gmail.com]
Sent: Friday, March 20, 2015 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][neutron] Best API for generating subnets 
from pool

What if we just call it 'address_index' and make it an integer representing the 
offset from the network start address?

On Fri, Mar 20, 2015 at 12:39 PM, Carl Baldwin 
c...@ecbaldwin.net wrote:
On Fri, Mar 20, 2015 at 1:34 PM, Jay Pipes 
jaypi...@gmail.com wrote:
 How is 0.0.0.1 a host address? That isn't a valid IP address, AFAIK.

It isn't a valid *IP* address without the network part.  However, it
can be referred to as the host address on the network or the host
part of the IP address.

Carl




--
Kevin Benton


Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-12 Thread Tidwell, Ryan
I agree with dropping support for the wildcards.  It can always be revisited at 
later. I agree that being locked into backward compatibility with a design that 
we really haven't thought through is a good thing to avoid.  Most importantly 
(to me anyway) is that this will help in getting subnet allocation completed 
for Kilo. We can iterate on it later, but at least the base functionality will 
be there.

-Ryan

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net] 
Sent: Tuesday, March 10, 2015 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [api][neutron] Best API for generating subnets 
from pool

On Tue, Mar 10, 2015 at 12:24 PM, Salvatore Orlando sorla...@nicira.com wrote:
 I guess that frustration has now become part of the norm for OpenStack.
 It is not the first time I frustrate people because I ask to reconsider
 decisions approved in specifications.

I'm okay revisiting decisions.  It is just the timing that is difficult.

 This is probably bad behaviour on my side. Anyway, I'm not suggesting to go
 back to the drawing board, merely trying to get larger feedback, especially
 since that patch should always have had the ApiImpact flag.

It did have the ApiImpact flag since PS1 [1].

 Needless to say, I'm happy to proceed with things as they've been agreed.

I'm happy to discuss and I value your input very highly.  I was just
hoping that it had come at a better time to react.

 There is nothing intrinsically wrong with it - in the sense that it does not
 impact the functional behaviour of the system.
 My comment is about RESTful API guidelines. What we pass to/from the API
 endpoint is a resource, in this case the subnet being created.
 You expect gateway_ip to be always one thing - a gateway address, whereas
 with the wildcarded design it could be an address or an incremental counter
 within a range, but with the counter being valid only in request objects.
 Differences in entities between requests and response are however fairly
 common in RESTful APIs, so if the wildcards sastisfy a concrete and valid
 use case I will stop complaining, but I'm not sure I see any use case for
 wildcarded gateways and allocation pools.

Let's drop the use case and the wildcards as we've discussed.

 Also, there might also be backward-compatible ways of switching from one
 approach to another, in which case I'm happy to keep things as they are and
 relieve Ryan from yet another worry.

I think dropping the use case for now allows us the most freedom and
doesn't commit us to supporting backward compatibility for a decision
that may end up proving to be a mistake in API design.

Carl



Re: [openstack-dev] [api][neutron] Best API for generating subnets from pool

2015-03-09 Thread Tidwell, Ryan
Thanks Salvatore.  Here are my thoughts, hopefully there’s some merit to them:

With implicit allocations, the thinking is that this is where a subnet is 
created in a backward-compatible way with no subnetpool_id and the subnet 
APIs continue to work as they always have.

In the case of a specific subnet allocation request (create-subnet passing a 
pool ID and specific CIDR), I would look in the pool’s available prefix list 
and carve out a subnet from one of those prefixes and ask for it to be reserved 
for me.  In that case I know the CIDR I’ll be getting up front.  In such a 
case, I’m not sure I’d ever specify my gateway using notation like 0.0.0.1, 
even if I was allowed to.  If I know I’ll be getting 10.10.10.0/24, I can 
simply pass gateway_ip as 10.10.10.1 and be done with it.  I see no added value 
in supporting that wildcard notation for a gateway on a specific subnet 
allocation.

In the case of an “any” subnet allocation request (create-subnet passing a pool 
ID, but no specific CIDR), I’m already delegating responsibility for addressing 
my subnet to Neutron.  As such, it seems reasonable to not have strong opinions 
about details like gateway_ip when making the request to create a subnet in 
this manner.

To me, this all points to not supporting wildcards for gateway_ip and 
allocation_pools on subnet create (even though it found its way into the spec). 
 My opinion (which I think lines up with yours) is that on an any request it 
makes sense to let the pool fill in allocation_pools and gateway_ip when 
requesting an “any” allocation from a subnet pool.  When creating a specific 
subnet from a pool, gateway IP and allocation pools could still be passed 
explicitly by the user.
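The "any" allocation path described above amounts to carving the first free prefix of the requested length out of the pool. A rough sketch of that selection logic (not the actual Neutron IPAM code; the function name and inputs are invented for illustration):

```python
import ipaddress

def allocate_any(pool_prefixes, prefix_len, allocated):
    """Return the first /prefix_len subnet in the pool not already allocated."""
    taken = {ipaddress.ip_network(c) for c in allocated}
    for prefix in pool_prefixes:
        for candidate in ipaddress.ip_network(prefix).subnets(new_prefix=prefix_len):
            if not any(candidate.overlaps(t) for t in taken):
                return candidate
    raise ValueError("subnet pool exhausted")

# 10.10.0.0/24 is already in use, so the next /24 is handed out.
print(allocate_any(["10.10.0.0/16"], 24, ["10.10.0.0/24"]))  # 10.10.1.0/24
```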

-Ryan

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Monday, March 09, 2015 6:06 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [api][neutron] Best API for generating subnets from 
pool

Greetings!

Neutron is adding a new concept of subnet pool. To put it simply, it is a 
collection of IP prefixes from which subnets can be allocated. In this way a 
user does not have to specify a full CIDR, but simply a desired prefix length, 
and then let the pool generate a CIDR from its prefixes. The full spec is 
available at [1], whereas two patches are up for review at [2] (CRUD) and [3] 
(integration between subnets and subnet pools).
While [2] is quite straightforward, I must admit I am not really sure that the 
current approach chosen for generating subnets from a pool might be the best 
one, and I'm therefore seeking your advice on this matter.

A subnet can be created with or without a pool.
Without a pool the user will pass the desired cidr:

POST /v2.0/subnets
{'network_id': 'meh',
  'cidr': '192.168.0.0/24'}

Instead with a pool the user will pass pool id and desired prefix lenght:
POST /v2.0/subnets
{'network_id': 'meh',
 'prefix_len': 24,
 'pool_id': 'some_pool'}

The response to the previous call would populate the subnet cidr.
So far it looks quite good. Prefix_len is a bit of duplicated information, but 
that's tolerable.
It gets a bit awkward when the user specifies also attributes such as desired 
gateway ip or allocation pools, as they have to be specified in a 
cidr-agnostic way. For instance:

POST /v2.0/subnets
{'network_id': 'meh',
 'gateway_ip': '0.0.0.1',
 'prefix_len': 24,
 'pool_id': 'some_pool'}

would indicate that the user wishes to use the first address in the range as 
the gateway IP, and the API would return something like this:

{'network_id': 'meh',
 'cidr': '10.10.10.0/24',
 'gateway_ip': '10.10.10.1',
 'prefix_len': 24,
 'pool_id': 'some_pool'}

The problem with this approach is, in my opinion, that attributes such as 
gateway_ip are used with different semantics in requests and responses; this 
might also need users to write client applications expecting the values in the 
response might differ from those in the request.

I have been considering alternatives, but could not find any that I would 
regard as winner.
I therefore have some questions for the neutron community and the API working 
group:

1) (this is more for neutron people) Is there a real use case for requesting 
specific gateway IPs and allocation pools when allocating from a pool? If not, 
maybe we should let the pool set a default gateway IP and allocation pools. The 
user can then update them with another call. Another option would be to provide 
subnet templates from which a user can choose. For instance one template 
could have the gateway as first IP, and then a single pool for the rest of the 
CIDR.

2) Is the action of creating a subnet from a pool better realized as a 
different way of creating a subnet, or should there be some sort of pool 
action? Eg.:

POST /subnet_pools/my_pool_id/subnet
{'prefix_len': 24}

which would return a subnet response like this (note prefix_len might not be 
needed in this case)

{'id': 'meh',
 

Re: [openstack-dev] openstack-dev] [neutron] [nfv]

2014-11-05 Thread Tidwell, Ryan
Keshava,

This sounds like you're asking how you might do service function chaining with 
Neutron.  Is that a fair way to characterize your thoughts? I think the concept 
of service chain provisioning in Neutron is worth some discussion, keeping in 
mind Neutron is not a fabric controller.

-Ryan

From: A, Keshava
Sent: Tuesday, November 04, 2014 11:28 PM
To: OpenStack Development Mailing List (not for usage questions); Singh, 
Gangadhar S
Subject: [openstack-dev] openstack-dev] [neutron] [nfv]

Hi,
I am thinking loud here, about NFV Service VM and OpenStack infrastructure.
Please let me know does the below scenario analysis make sense.

NFV Service VMs are hosted on a cloud (OpenStack) where there are 2 tenants 
with different Service orders of execution.
(The Service order mentioned here is just an example.)

* Does OpenStack control the order of Service execution for every packet?

* Will OpenStack have a different Service-Tag for each Service?

* If there are multiple features within a Service-VM, how is Service execution 
controlled in that VM?

* After completion of a particular Service, how will the next Service be invoked?

Will there be pre-configured flows from OpenStack to invoke the next service for 
a tagged packet from the Service-VM?



Thanks and regards,
keshava




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev