[openstack-dev] [neutron] dns-nameservers order not honored

2015-06-23 Thread Paul Ward
I haven't dug into the code yet, but from testing via CLI and REST API, 
it appears neutron does not honor the order in which users specify their 
dns-nameservers.  For example, no matter what order I specify 10.0.0.1 
and 10.0.0.2 for dns-nameservers, they are always ordered with the 
numerically lowest IP first when doing a subnet-show (i.e., 10.0.0.1 will 
be first, even if I specified 10.0.0.2 first).  As stated above, the CLI and 
REST API behave the same.


I believe this is a problem because these are passed to activation on a 
deployed VM in the order neutron lists them in the subnet.  A user may 
have a reason they want the numerically higher DNS IP listed first, say 
if they are trying to load balance their DNS servers.  By always 
ordering them numerically, we give them no way to do this.


So my question is... is this by design or an oversight?  If it's an 
oversight, I'll dig into the code and propose a patch.
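
For anyone who wants to reproduce this quickly, here is a minimal sketch
using the python neutronclient (credentials, endpoint, and the subnet id
below are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')
    subnet_id = 'SUBNET-UUID'  # placeholder
    neutron.update_subnet(subnet_id,
                          {'subnet': {'dns_nameservers': ['10.0.0.2',
                                                          '10.0.0.1']}})
    print(neutron.show_subnet(subnet_id)['subnet']['dns_nameservers'])
    # expected ['10.0.0.2', '10.0.0.1'], observed ['10.0.0.1', '10.0.0.2']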



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] No concept for user "owner" of a neutron port... security issue?

2015-03-13 Thread Paul Ward
From what I can tell, neutron ports do not have the concept of an 
"owner" that is a user.  They have "device_owner", which seems to be 
more for things like assigning to a router.


The reason I bring this up is because there seems to be no way to 
restrict the update/delete of a port to only the owner of the nova 
server it's attached to.  You can set the policy file to enforce 
tenant_id, but that would still allow any user in a tenant to delete any 
OTHER user's neutron port in that same tenant.


This actually seems like a security problem to me.  But given it deals 
with a core neutron object, maybe the best way to approach it is with a 
blueprint in Liberty rather than a bug...


Thoughts?
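
To make the tenant_id enforcement above concrete, the relevant policy.json
entries look roughly like this (rule bodies paraphrased; check the shipped
policy file for the exact defaults):

    "admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
    "update_port": "rule:admin_or_owner",
    "delete_port": "rule:admin_or_owner",

Note there is nothing like a user_id attribute on the port to write a rule
such as user_id:%(user_id)s against, which is exactly the gap described
above.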


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] ovs-neutron-agent wipes out all flows on startup

2014-06-30 Thread Paul Ward
The current design for ovs-neutron-agent is that it will wipe out all 
flows configured on the system when it starts up, recreating them for 
each neutron port it's aware of.  This has the not-so-desirable side 
effect that there's a temporary hiccup in network connectivity for the 
VMs on the host.


My questions to the list: Is there a reason it was designed this way 
(other than "Everything on the system must be managed by OpenStack")? 
Is there ongoing work to address this or would it be a worthwhile 
contribution from our side?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-06-05 Thread Paul Ward

Carl,

I haven't been able to try this yet as it requires us to run a pretty big
scale test.

But to try to summarize the current feeling on this thread... the retry
logic is being put into the neutronclient already (via
https://review.openstack.org/#/c/71464/), it's just that it's not
"automatic" and is being left up to the invoker to decide when to retry.
The idea of doing the retries automatically isn't the way to go because it
is dangerous for non-idempotent operations.

So... I think we leave the proposed change as is and will potentially need to
enhance individual users of the client as we see fit.  The invoker in our
failure case is nova trying to get network info, so this seems like a good
first one to try out.

Thoughts?

Thanks,
  Paul

Quoting Carl Baldwin :


Paul,

I'm curious.  Have you been able to update to a client using requests?
 Has it solved your problem?

Carl

On Thu, May 29, 2014 at 11:15 AM, Paul Ward  wrote:

Yes, we're still on a code level that uses httplib2.  I noticed that as
well, but wasn't sure if that would really
help here as it seems like an ssl thing itself.  But... who knows??  I'm not
sure how consistently we can
recreate this, but if we can, I'll try using that patch to use requests and
see if that helps.



"Armando M."  wrote on 05/29/2014 11:52:34 AM:


From: "Armando M."
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 05/29/2014 11:58 AM
Subject: Re: [openstack-dev] [neutron] Supporting retries in neutronclient

Hi Paul,

Just out of curiosity, I am assuming you are using the client that
still relies on httplib2. Patch [1] replaced httplib2 with requests,
but I believe that a new client that incorporates this change has not
yet been published. I wonder if the failures you are referring to
manifest themselves with the former http library rather than the
latter. Could you clarify?

Thanks,
Armando

[1] - https://review.openstack.org/#/c/89879/

On 29 May 2014 17:25, Paul Ward  wrote:
> Well, for my specific error, it was an intermittent ssl handshake error
> before the request was ever sent to the
> neutron-server.  In our case, we saw that 4 out of 5 resize operations
> worked, the fifth failed with this ssl
> handshake error in neutronclient.
>
> I certainly think a GET is safe to retry, and I agree with your
> statement
> that PUTs and DELETEs probably
> are as well.  This still leaves a change in nova needing to be made to
> actually a) specify a conf option and
> b) pass it to neutronclient where appropriate.
>
>
> Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:
>
>> From: Aaron Rosen 
>
>
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 05/28/2014 07:44 PM
>
>> Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> neutronclient
>>
>> Hi,
>>
>> I'm curious if other openstack clients implement this type of retry
>> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
>>
>> What types of errors do you see in the neutron-server when it fails
>> to respond? I think it would be better to move the retry logic into
>> the server around the failures rather than the client (or better yet
>> if we fixed the server :)). Most of the times I've seen this type of
>> failure is due to deadlock errors caused between (sqlalchemy and
>> eventlet *i think*) which cause the client to eventually timeout.
>>
>> Best,
>>
>> Aaron
>>
>
>> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
>> Would it be feasible to make the retry logic only apply to read-only
>> operations?  This would still require a nova change to specify the
>> number of retries, but it'd also prevent invokers from shooting
>> themselves in the foot if they call for a write operation.
>>
>>
>>
>> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
>>
>> > From: Aaron Rosen 
>>
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > ,
>> > Date: 05/27/2014 09:44 PM
>>
>> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
>> > neutronclient
>> >
>> > Hi,
>>
>> >
>> > Is it possible to detect when the ssl handshaking error occurs on
>> > the client side (and only retry for that)? If so I think we should
>> > do that rather than retrying multiple times. The danger here is
>> > mostly for POST operations (as Eugene pointed out) where it's
>> > possible for the response to not make it back to the client and for
>> > the operation to actually succeed.
>> >
>> 

Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Paul Ward
Yes, we're still on a code level that uses httplib2.  I noticed that as
well, but wasn't sure if that would really help here as it seems like an
ssl thing itself.  But... who knows??  I'm not sure how consistently we can
recreate this, but if we can, I'll try using that patch to use requests and
see if that helps.



"Armando M."  wrote on 05/29/2014 11:52:34 AM:

> From: "Armando M." 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 05/29/2014 11:58 AM
> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
>
> Hi Paul,
>
> Just out of curiosity, I am assuming you are using the client that
> still relies on httplib2. Patch [1] replaced httplib2 with requests,
> but I believe that a new client that incorporates this change has not
> yet been published. I wonder if the failures you are referring to
> manifest themselves with the former http library rather than the
> latter. Could you clarify?
>
> Thanks,
> Armando
>
> [1] - https://review.openstack.org/#/c/89879/
>
> On 29 May 2014 17:25, Paul Ward  wrote:
> > Well, for my specific error, it was an intermittent ssl handshake error
> > before the request was ever sent to the
> > neutron-server.  In our case, we saw that 4 out of 5 resize operations
> > worked, the fifth failed with this ssl
> > handshake error in neutronclient.
> >
> > I certainly think a GET is safe to retry, and I agree with your
statement
> > that PUTs and DELETEs probably
> > are as well.  This still leaves a change in nova needing to be made to
> > actually a) specify a conf option and
> > b) pass it to neutronclient where appropriate.
> >
> >
> > Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:
> >
> >> From: Aaron Rosen 
> >
> >
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> ,
> >> Date: 05/28/2014 07:44 PM
> >
> >> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
> >>
> >> Hi,
> >>
> >> I'm curious if other openstack clients implement this type of retry
> >> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
> >>
> >> What types of errors do you see in the neutron-server when it fails
> >> to respond? I think it would be better to move the retry logic into
> >> the server around the failures rather than the client (or better yet
> >> if we fixed the server :)). Most of the times I've seen this type of
> >> failure is due to deadlock errors caused between (sqlalchemy and
> >> eventlet *i think*) which cause the client to eventually timeout.
> >>
> >> Best,
> >>
> >> Aaron
> >>
> >
> >> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
> >> Would it be feasible to make the retry logic only apply to read-only
> >> operations?  This would still require a nova change to specify the
> >> number of retries, but it'd also prevent invokers from shooting
> >> themselves in the foot if they call for a write operation.
> >>
> >>
> >>
> >> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
> >>
> >> > From: Aaron Rosen 
> >>
> >> > To: "OpenStack Development Mailing List (not for usage questions)"
> >> > ,
> >> > Date: 05/27/2014 09:44 PM
> >>
> >> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
> >> > neutronclient
> >> >
> >> > Hi,
> >>
> >> >
> >> > Is it possible to detect when the ssl handshaking error occurs on
> >> > the client side (and only retry for that)? If so I think we should
> >> > do that rather than retrying multiple times. The danger here is
> >> > mostly for POST operations (as Eugene pointed out) where it's
> >> > possible for the response to not make it back to the client and for
> >> > the operation to actually succeed.
> >> >
> >> > Having this retry logic nested in the client also prevents things
> >> > like nova from handling these types of failures individually since
> >> > this retry logic is happening inside of the client. I think it would
> >> > be better not to have this internal mechanism in the client and
> >> > instead make the user of the client implement retry so they are
> >> > aware of failures.
> >> >
> >> > Aaron
> >> >

Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-29 Thread Paul Ward

Well, for my specific error, it was an intermittent ssl handshake error
before the request was ever sent to the neutron-server.  In our case, we
saw that 4 out of 5 resize operations worked; the fifth failed with this
ssl handshake error in neutronclient.

I certainly think a GET is safe to retry, and I agree with your statement
that PUTs and DELETEs probably are as well.  This still leaves a change in
nova needing to be made to actually a) specify a conf option and b) pass it
to neutronclient where appropriate.


Aaron Rosen  wrote on 05/28/2014 07:38:56 PM:

> From: Aaron Rosen 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 05/28/2014 07:44 PM
> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
>
> Hi,
>
> I'm curious if other openstack clients implement this type of retry
> thing. I think retrying on GET/DELETES/PUT's should probably be okay.
>
> What types of errors do you see in the neutron-server when it fails
> to respond? I think it would be better to move the retry logic into
> the server around the failures rather than the client (or better yet
> if we fixed the server :)). Most of the times I've seen this type of
> failure is due to deadlock errors caused between (sqlalchemy and
> eventlet *i think*) which cause the client to eventually timeout.
>
> Best,
>
> Aaron
>

> On Wed, May 28, 2014 at 11:51 AM, Paul Ward  wrote:
> Would it be feasible to make the retry logic only apply to read-only
> operations?  This would still require a nova change to specify the
> number of retries, but it'd also prevent invokers from shooting
> themselves in the foot if they call for a write operation.
>
>
>
> Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:
>
> > From: Aaron Rosen 
>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > ,
> > Date: 05/27/2014 09:44 PM
>
> > Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
> >
> > Hi,
>
> >
> > Is it possible to detect when the ssl handshaking error occurs on
> > the client side (and only retry for that)? If so I think we should
> > do that rather than retrying multiple times. The danger here is
> > mostly for POST operations (as Eugene pointed out) where it's
> > possible for the response to not make it back to the client and for
> > the operation to actually succeed.
> >
> > Having this retry logic nested in the client also prevents things
> > like nova from handling these types of failures individually since
> > this retry logic is happening inside of the client. I think it would
> > be better not to have this internal mechanism in the client and
> > instead make the user of the client implement retry so they are
> > aware of failures.
> >
> > Aaron
> >
>
> > On Tue, May 27, 2014 at 10:48 AM, Paul Ward  wrote:
> > Currently, neutronclient is hardcoded to only try a request once in
> > retry_request by virtue of the fact that it uses self.retries as the
> > retry count, and that's initialized to 0 and never changed.  We've
> > seen an issue where we get an ssl handshaking error intermittently
> > (seems like more of an ssl bug) and a retry would probably have
> > worked.  Yet, since neutronclient only tries once and gives up, it
> > fails the entire operation.  Here is the code in question:
> >
> > https://github.com/openstack/python-neutronclient/blob/master/
> > neutronclient/v2_0/client.py#L1296
> >
> > Does anybody know if there's some explicit reason we don't currently
> > allow configuring the number of retries?  If not, I'm inclined to
> > propose a change for just that.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-stable-maint] Stable exception

2014-05-28 Thread Paul Ward

I'll start by saying that we don't need this ported to icehouse as we've
included it in our distro, as Alan suggested.

However, I would like to explain why we needed it.  We do generate
cert files for the controller node.  However, for cases where the different
services are all running on the controller node, we use 127.0.0.1 as the
address they communicate on.  Since the cert was based on hostname,
this will fail unless we have the api_insecure flag set.  And since we're
communicating on 127.0.0.1, it's ok to ignore ssl errors.
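
To illustrate the mismatch, a minimal sketch (hostnames, port, and paths are
placeholders, and requests is just a stand-in for whichever http library the
service uses):

    import requests

    # The controller's cert carries the controller hostname as its subject,
    # but co-located services talk to 127.0.0.1, so hostname verification
    # fails:
    requests.get('https://127.0.0.1:9696/', verify='/etc/pki/controller-ca.pem')
    # -> requests.exceptions.SSLError (hostname mismatch) unless verification
    #    is relaxed, which is what the api_insecure flag gives us for this
    #    loopback-only case.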

Since this is in Juno, and we've patched it in Icehouse for our distro, we
have no pressing need to backport this one.  Thanks for keeping an
eye on it!

Alan Pevec wrote:
> https://bugs.launchpad.net/neutron/+bug/1306822
> https://bugs.launchpad.net/neutron/+bug/1309694
>
> Those bugs describe the missing options, but do not do a great job of
> describing the impact of not having them. My guess is that without those
> parameters, you have to rely on system certificates (as you can't
> provide your own and you can't disable the check). Is that a correct
> assumption ? Who is impacted by these bugs ?

I think you're right that 1309694 can be worked around by using the system
cert store.
Disabling the cert check (bug 1306822) is definitely not needed - why would
you use certs if you don't check them?
So unless more justification is provided in the bugs (the importance of
both is Undecided), I don't think we have a case for granting the
exception.

Distributions are of course free to take those patches if it suits
their policies.
BTW, having such backports proposed is fine even if they are denied for
stable merge; we can use stable reviews as a means to share patches among
distros.

> If my interpretation is correct, then this falls a bit in a grey area:
> it is a "feature" to allow your own certificate to be provided, but it
> could be seen as a bug (feature gap) if Neutron was the only project in
> Icehouse not having that feature (and people would generally expect
> those parameters to be present). Is Neutron the only project that misses
> those parameters ?

Currently yes, only Neutron has a new feature in Icehouse to send port
events to Nova, but Cinder will need the same to properly fix the race
with volumes during VM setup.

Cheers,
Alan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-28 Thread Paul Ward

Would it be feasible to make the retry logic only apply to read-only
operations?  This would still require a nova change to specify the number
of retries, but it'd also prevent invokers from shooting themselves in the
foot if they call for a write operation.
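
To sketch what that could look like on the invoker side (the exception class
caught and the retry/delay values are illustrative, not a recommendation):

    import time

    from neutronclient.common import exceptions as nc_exc

    def list_ports_with_retry(neutron, retries=3, delay=1):
        for attempt in range(retries + 1):
            try:
                return neutron.list_ports()  # GET, so safe to repeat
            except nc_exc.ConnectionFailed:
                if attempt == retries:
                    raise
                time.sleep(delay)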



Aaron Rosen  wrote on 05/27/2014 09:40:00 PM:

> From: Aaron Rosen 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 05/27/2014 09:44 PM
> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
>
> Hi,
>
> Is it possible to detect when the ssl handshaking error occurs on
> the client side (and only retry for that)? If so I think we should
> do that rather than retrying multiple times. The danger here is
> mostly for POST operations (as Eugene pointed out) where it's
> possible for the response to not make it back to the client and for
> the operation to actually succeed.
>
> Having this retry logic nested in the client also prevents things
> like nova from handling these types of failures individually since
> this retry logic is happening inside of the client. I think it would
> be better not to have this internal mechanism in the client and
> instead make the user of the client implement retry so they are
> aware of failures.
>
> Aaron
>

> On Tue, May 27, 2014 at 10:48 AM, Paul Ward  wrote:
> Currently, neutronclient is hardcoded to only try a request once in
> retry_request by virtue of the fact that it uses self.retries as the
> retry count, and that's initialized to 0 and never changed.  We've
> seen an issue where we get an ssl handshaking error intermittently
> (seems like more of an ssl bug) and a retry would probably have
> worked.  Yet, since neutronclient only tries once and gives up, it
> fails the entire operation.  Here is the code in question:
>
> https://github.com/openstack/python-neutronclient/blob/master/
> neutronclient/v2_0/client.py#L1296
>
> Does anybody know if there's some explicit reason we don't currently
> allow configuring the number of retries?  If not, I'm inclined to
> propose a change for just that.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-27 Thread Paul Ward

That is great information, thanks Eugene.


Eugene Nikanorov  wrote on 05/27/2014 03:51:36 PM:

> From: Eugene Nikanorov 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 05/27/2014 03:56 PM
> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
>
> In fact, nova should be careful about changing number of retries for
> neutron client.
> It's known that under significant load (people test serial VM
> creation) neutron client may timeout on POST operation which does
> port creation; retrying this again leads to multiple fixed IPs
> assigned to a VM
>
> Thanks,
> Eugene.
>

> On Wed, May 28, 2014 at 12:09 AM, Kyle Mestery  > wrote:
> I'm not aware of any such change at the moment, no.
>
> On Tue, May 27, 2014 at 3:06 PM, Paul Ward  wrote:
> > Great!  Do you know if there's any corresponding nova changes to
support
> > this as a conf option that gets passed in to this new parm?
> >
> >
> >
> > Kyle Mestery  wrote on 05/27/2014 01:56:12
PM:
> >
> >> From: Kyle Mestery 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> ,
> >> Date: 05/27/2014 02:00 PM
> >> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
> >
> >
> >>
> >> On Tue, May 27, 2014 at 12:48 PM, Paul Ward  wrote:
> >> > Currently, neutronclient is hardcoded to only try a request once in
> >> > retry_request by virtue of the fact that it uses self.retries as the
> >> > retry
> >> > count, and that's initialized to 0 and never changed.  We've seen an
> >> > issue
> >> > where we get an ssl handshaking error intermittently (seems like
more of
> >> > an
> >> > ssl bug) and a retry would probably have worked.  Yet, since
> >> > neutronclient
> >> > only tries once and gives up, it fails the entire operation.  Here
is
> >> > the
> >> > code in question:
> >> >
> >> > https://github.com/openstack/python-neutronclient/blob/master/
> >> neutronclient/v2_0/client.py#L1296
> >> >
> >> > Does anybody know if there's some explicit reason we don't currently
> >> > allow
> >> > configuring the number of retries?  If not, I'm inclined to propose
a
> >> > change
> >> > for just that.
> >> >
> >> There is a review to address this in place now [1]. I've given a -1
> >> due to a trivial reason which I hope Jakub will address soon so we can
> >> land this patch in the client code.
> >>
> >> Thanks,
> >> Kyle
> >>
> >> [1] https://review.openstack.org/#/c/90104/
> >>
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-27 Thread Paul Ward
Great!  Do you know if there's any corresponding nova changes to support
this as a conf option that gets passed in to this new parm?



Kyle Mestery  wrote on 05/27/2014 01:56:12 PM:

> From: Kyle Mestery 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 05/27/2014 02:00 PM
> Subject: Re: [openstack-dev] [neutron] Supporting retries in
neutronclient
>
> On Tue, May 27, 2014 at 12:48 PM, Paul Ward  wrote:
> > Currently, neutronclient is hardcoded to only try a request once in
> > retry_request by virtue of the fact that it uses self.retries as the
retry
> > count, and that's initialized to 0 and never changed.  We've seen an
issue
> > where we get an ssl handshaking error intermittently (seems like more
of an
> > ssl bug) and a retry would probably have worked.  Yet, since
neutronclient
> > only tries once and gives up, it fails the entire operation.  Here is
the
> > code in question:
> >
> > https://github.com/openstack/python-neutronclient/blob/master/
> neutronclient/v2_0/client.py#L1296
> >
> > Does anybody know if there's some explicit reason we don't currently
allow
> > configuring the number of retries?  If not, I'm inclined to propose a
change
> > for just that.
> >
> There is a review to address this in place now [1]. I've given a -1
> due to a trivial reason which I hope Jakub will address soon so we can
> land this patch in the client code.
>
> Thanks,
> Kyle
>
> [1] https://review.openstack.org/#/c/90104/
>
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Supporting retries in neutronclient

2014-05-27 Thread Paul Ward


Currently, neutronclient is hardcoded to only try a request once in
retry_request by virtue of the fact that it uses self.retries as the retry
count, and that's initialized to 0 and never changed.  We've seen an issue
where we get an ssl handshaking error intermittently (seems like more of an
ssl bug) and a retry would probably have worked.  Yet, since neutronclient
only tries once and gives up, it fails the entire operation.  Here is the
code in question:

https://github.com/openstack/python-neutronclient/blob/master/neutronclient/v2_0/client.py#L1296
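
As a standalone sketch of the shape of the change (illustrative only, not
the actual client.py code):

    class HTTPClientSketch(object):
        def __init__(self, retries=0):
            # The real client effectively hardcodes this to 0 today.
            self.retries = retries

        def retry_request(self, do_request):
            for attempt in range(self.retries + 1):
                try:
                    return do_request()
                except IOError:  # stand-in for the intermittent ssl error
                    if attempt == self.retries:
                        raise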

Does anybody know if there's some explicit reason we don't currently allow
configuring the number of retries?  If not, I'm inclined to propose a
change for just that.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] alembic migration not working? specifically in regards to ml2_port_bindings table

2014-04-08 Thread Paul Ward
My apologies, I didn't see that README.  Looks like we need to explicitly
call the migration as part of our upgrade path.  Thanks for pointing that
out!
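
For the record, the README boils down to invoking the alembic migrations
explicitly, something like this (the exact --config-file arguments depend on
the deployment):

    neutron-db-manage --config-file /etc/neutron/neutron.conf \
                      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
                      upgrade head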



Itzik Brown  wrote on 04/08/2014 04:06:32 PM:

> From: Itzik Brown 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 04/08/2014 04:11 PM
> Subject: Re: [openstack-dev] [neutron] alembic migration not
> working? specifically in regards to ml2_port_bindings table
>
> Hi,
> Have you looked at https://github.com/openstack/neutron/blob/master/
> neutron/db/migration/README ?
>
> Itzik

> On 08/04/2014 23:41, Paul Ward wrote:
> Is anyone else out there seeing failures that appear to be because
> alembic is not upgrading db tables in neutron?  I'm seeing, on an
> upgrade, that ml2_port_bindings is not being updated to remove
> column cap_port_filter or add columns vnic_type, profile, or
> vif_details,  I'm also seeing the subnets table not getting updated
> with the ipv6_ra_mode column.
>
> I'm not intimately familiar with alembic so I'm not really sure what
> is supposed to kick off the upgrade/downgrade or if the following
> revision chains are ok.  Does merely starting neutron-server
> initiate the upgrade?  In perusing some of the ml2_port_bindings
> alembic files, I came up with these revision chains:
>
> 32a65f71af51  (where ml2_port_bindings was first created)
> ^
> 14f24494ca31  (this is creating some arista tables... I don't know
> why it's a down_revision for ml2_port_bindings table creation above)
>
>
>
> 157a5d299379  (adds profile column to ml2_port_bindings table
> apparently not called in my environment's upgrade)
> ^
> 50d5ba354c23  (adds vif_details column to ml2_port_bindings table
> and removes cap_port_filter colume from ml2_port_bindings table
> apparently not called in my environment's upgrade)
> ^
> 27cc183af192  (first file to add a column, vnic_type,  to
> ml2_port_bindings apparently not called in my environment's upgrade)
> ^
> 4ca36cfc898c  (creates table neutron_nsx_router_mappings... don't
> see how this is related to ml2_port_bindings other than similar
> foreign key constraints)
>
> Notice the chains do not connect with each other.  It seems to me
> that 27cc183af192 should actually call out 32a65f71af51 as the
> down_revision as 32a65f71af51 is where the ml2_port_bindings table
> was first created.  4ca36cfc898c just deals with the
> neutron_nsx_router_mappings table... I don't see how that's related
> to ml2_port_bindings table other than having some similar foreign
> key constraints.
>
> Thanks in advance!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] alembic migration not working? specifically in regards to ml2_port_bindings table

2014-04-08 Thread Paul Ward


Is anyone else out there seeing failures that appear to be because alembic
is not upgrading db tables in neutron?  I'm seeing, on an upgrade, that
ml2_port_bindings is not being updated to remove column cap_port_filter or
add columns vnic_type, profile, or vif_details.  I'm also seeing the
subnets table not getting updated with the ipv6_ra_mode column.

I'm not intimately familiar with alembic so I'm not really sure what is
supposed to kick off the upgrade/downgrade or if the following revision
chains are ok.  Does merely starting neutron-server initiate the upgrade?
In perusing some of the ml2_port_bindings alembic files, I came up with
these revision chains:

32a65f71af51  (where ml2_port_bindings was first created)
^
14f24494ca31  (this is creating some arista tables... I don't know why it's
a down_revision for ml2_port_bindings table creation above)



157a5d299379  (adds profile column to ml2_port_bindings table;
apparently not called in my environment's upgrade)
^
50d5ba354c23  (adds vif_details column to ml2_port_bindings table and
removes cap_port_filter column from ml2_port_bindings table; apparently
not called in my environment's upgrade)
^
27cc183af192  (first file to add a column, vnic_type, to
ml2_port_bindings; apparently not called in my environment's upgrade)
^
4ca36cfc898c  (creates table neutron_nsx_router_mappings... don't see how
this is related to ml2_port_bindings other than similar foreign key
constraints)

Notice the chains do not connect with each other.  It seems to me that
27cc183af192 should actually call out 32a65f71af51 as the down_revision as
32a65f71af51 is where the ml2_port_bindings table was first created.
4ca36cfc898c just deals with the neutron_nsx_router_mappings table... I
don't see how that's related to ml2_port_bindings table other than having
some similar foreign key constraints.
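
For anyone less familiar with alembic: each of those revision ids comes from
a migration script whose header pins it to its predecessor, and alembic
builds the chain purely from those two identifiers.  A trimmed-down sketch of
one such script (paths and column definitions here are illustrative, not the
real file's contents):

    # revision identifiers, used by Alembic.
    revision = '50d5ba354c23'
    down_revision = '27cc183af192'

    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        # Column definitions are illustrative, not the real script's.
        op.add_column('ml2_port_bindings',
                      sa.Column('vif_details', sa.String(4095)))
        op.drop_column('ml2_port_bindings', 'cap_port_filter')


    def downgrade():
        op.add_column('ml2_port_bindings',
                      sa.Column('cap_port_filter', sa.Boolean()))
        op.drop_column('ml2_port_bindings', 'vif_details')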

Thanks in advance!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 plugin swallows mechanism driver exceptions

2014-01-28 Thread Paul Ward

FYI - I have pushed a change to gerrit for this:
https://review.openstack.org/#/c/69748/

I went the simple route of just including the last exception encountered.

All comments and reviews welcome!!



Andre Pech  wrote on 01/24/2014 03:43:24 PM:

> From: Andre Pech 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 01/24/2014 03:48 PM
> Subject: Re: [openstack-dev] [neutron] ML2 plugin swallows mechanism
> driver exceptions
>
> Hey Paul,
>
> This is by design, and reraising a single MechanismDriverError was
> really to have a nice defined API for the MechanismManager class,
> avoid blanket try/except calls in the caller. But I do agree that
> it's really annoying to lose the information about the underlying
> exception. I like your idea of including the original exception text
> in the MechanismDriverError message, I think that'd help a lot.
>
> Andre
>

> On Fri, Jan 24, 2014 at 1:19 PM, Paul Ward  wrote:
> In implementing a mechanism driver for ML2 today, I discovered that
> any exceptions thrown from your mechanism driver will get swallowed
> by the ML2 manager (https://github.com/openstack/neutron/blob/
> master/neutron/plugins/ml2/managers.py at line 164).
>
> Is this by design?  Sure, you can look at the logs, but it seems
> more user friendly to reraise the exception that got us here.  There
> could be multiple mechanism drivers being called in a chain, so
> changing this to reraise an exception that got us in trouble would
> really only be able to reraise the last exception encountered, but
> it seems that's better than none at all.  Or maybe even keep a list
> of exceptions raised and put all their texts into the
> MechanismDriverError message.
>
> Thoughts?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] ML2 plugin swallows mechanism driver exceptions

2014-01-24 Thread Paul Ward


In implementing a mechanism driver for ML2 today, I discovered that any
exceptions thrown from your mechanism driver will get swallowed by the ML2
manager (
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/managers.py
 at line 164).

Is this by design?  Sure, you can look at the logs, but it seems more user
friendly to reraise the exception that got us here.  There could be
multiple mechanism drivers being called in a chain, so changing this to
reraise an exception that got us in trouble would really only be able to
reraise the last exception encountered, but it seems that's better than
none at all.  Or maybe even keep a list of exceptions raised and put all
their texts into the MechanismDriverError message.
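
To make the "keep a list of exceptions" idea concrete, here is a standalone
sketch (illustrative only, not the actual managers.py loop):

    class MechanismDriverError(Exception):
        pass


    def call_on_drivers(method_name, drivers, context):
        errors = []
        for driver in drivers:
            try:
                getattr(driver, method_name)(context)
            except Exception as exc:
                # Remember which driver failed and why instead of
                # discarding it.
                errors.append("%s.%s failed: %s"
                              % (type(driver).__name__, method_name, exc))
        if errors:
            raise MechanismDriverError("; ".join(errors))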

Thoughts?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-24 Thread Paul Ward
Given your obviously much more extensive understanding of networking than
mine, I'm starting to move over to the "we shouldn't make this fix" camp.
Mostly because of this:

"CARVER, PAUL"  wrote on 01/23/2014 08:57:10 PM:

> Putting a friendly helper in Horizon will help novice users and
> provide a good example to anyone who is developing an alternate UI
> to invoke the Neutron API. I’m not sure what the benefit is of
> putting code in the backend to disallow valid but silly subnet
> masks. I include /30, /31, AND /32 in the category of “silly” subnet
> masks to use on a broadcast medium. All three are entirely
> legitimate subnet masks, it’s just that they’re not useful for end
> host networks.

My mindset has always been that we should programmatically prevent things
that are definitively wrong, and these netmasks apparently are not.  So it
would seem we should leave the neutron server code alone under the
assumption that those using the CLI to create networks *probably* know what
they're doing.

However, the UI is supposed to be the more friendly interface and perhaps
this is the more appropriate place for this change?  As I stated before,
horizon prevents /32, but allows /31.

I'm no UI guy, so maybe the best course of action is to abandon my change
in gerrit and move the launchpad bug back to unassigned and see if someone
with horizon experience wants to pick this up.  What do others think about
this?

Thanks again for your participation in this discussion, Paul.  It's been
very enlightening to me.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-23 Thread Paul Ward
FWIW, Horizon does prevent the /32 subnet with this message right in the
UI: "The subnet in the Network Address is too small (/32)."  However, it
does NOT prevent a /31 or smaller prefix.

Given your statement about routers potentially using a /30 network, I think
we should leave the restriction at /30 rather than /29.  I'm assuming your
statement that some routers use /30 subnets to connect to each other could
potentially apply to neutron-created routers.

My reasoning behind checking the number of IP addresses in the subnet
rather than the actual CIDR prefix length is that I want the code to be IP
version agnostic.  If we're talking IPv6, then /30 isn't going to be
relevant.  I'm not overly familiar with IPv6, but is it safe to say it has
the same restriction that there must be more than 2 IPs available as the
highest IP is the broadcast?  Thinking more about this, I think this would
be a better check (which still covers both IPv4 and IPv6):

if len(list(netaddr.IPNetwork(new_subnet_cidr))) < 3:
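
One refinement: netaddr can answer the same question without materializing
every address, which matters for IPv6-sized subnets, since IPNetwork exposes
a size attribute:

    if netaddr.IPNetwork(new_subnet_cidr).size < 3: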


So where I think we're at and need to go:
- Concurrence on whether this change is made at all.  I'm of the opinion
  that if a subnet is truly and completely unusable, we should prevent it in
  neutron rather than rely on horizon, since products built on openstack
  probably don't use horizon.  If agreed, proceed to next items.
- Change current fix to allow /(N-2) prefixes
- Potential horizon changes, in a separate changeset
- Change to fail on /(N-1) rather than only /(N)
- More descriptive failure message... though I kinda think the current one
  is sufficient.



"CARVER, PAUL"  wrote on 01/23/2014 02:22:06 PM:

> From: "CARVER, PAUL" 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 01/23/2014 02:26 PM
> Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>
> Paul Ward:

> Thank you to all who have participated in this thread.  I've just
> proposed a fix in gerrit.  For those involved thus far, if you could
> review I would be greatly appreciative!
>
> https://review.openstack.org/#/c/68742/1
>
> I wouldn’t go so far as to say this verification SHOULDN’T be added,
> but neither would I say it should. From a general use case
> perspective I don’t think IPv4 subnets smaller than /29 make sense.
> A /32 is a commonly used subnet length for some use cases (e.g.
> router loopback interface) but may not have an applicable use in a
> cloud network. I have never seen a /31 network used anywhere. Point
> to point links (e.g. T1/Frame Relay/etc) are often /30 but I’ve
> never seen a /30 subnet for anything other than connecting two routers.
>
> However, does it really benefit the user to specifically block them
> from entering /32 or block them from entering /30, /31, and /32?
>
> It might not be an equal amount of code, I think a much better
> effort to help the user would be to provide them with a subnet
> calculator directly in Horizon to show them how many usable IPs are
> in the subnet they’re defining. In this case, displaying “Usable
> addresses: 0” right when they enter /32 would be helpful and they
> would figure out for themselves whether they really wanted that mask
> or if they meant something else?
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-23 Thread Paul Ward
Thank you to all who have participated in this thread.  I've just proposed
a fix in gerrit.  For those involved thus far, if you could review I would
be greatly appreciative!

https://review.openstack.org/#/c/68742/1



Carl Baldwin  wrote on 01/21/2014 05:27:49 PM:

> From: Carl Baldwin 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 01/21/2014 05:32 PM
> Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>
> I think there may be some confusion between the two concepts:  subnet
> and allocation pool.  You are right that an ipv4 subnet smaller than
> /30 is not useable on a network.
>
> However, this method is checking the validity of an allocation pool.
> These pools should not include room for a gateway nor broadcast
> address.  Their relation to subnets is that the range of ips contained
> in the pool must fit within the allocatable IP space on the subnet
> from which they are allocated.  Other than that, they are simple
> ranges; they don't need to be cidr aligned or anything.  A pool of a
> single IP is valid.
>
> I just checked the method's implementation now.  It does check that
> the pool fits within the allocatable range of the subnet.  I think
> we're good.
>
> Carl
>
> On Tue, Jan 21, 2014 at 3:35 PM, Paul Ward  wrote:
> > Currently, NeutronDbPluginV2._validate_allocation_pools() does some
very
> > basic checking to be sure the specified subnet is valid.  One thing
that's
> > missing is checking for a CIDR of /32.  A subnet with one IP address in
it
> > is unusable as the sole IP address will be allocated to the gateway,
and
> > thus no IPs are left over to be allocated to VMs.
> >
> > The fix for this is simple.  In
> > NeutronDbPluginV2._validate_allocation_pools(), we'd check for start_ip
==
> > end_ip and raise an exception if that's true.
> >
> > I've opened lauchpad bug report 1271311
> > (https://bugs.launchpad.net/neutron/+bug/1271311) for this, but wanted
to
> > start a discussion here to see if others find this enhancement to be a
> > valuable addition.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-22 Thread Paul Ward

Thanks for your input, Carl.  You're right, it seems the more appropriate
place for this is _validate_subnet().  It checks ip version, gateway,
etc... but not the size of the subnet.



Carl Baldwin  wrote on 01/21/2014 09:22:55 PM:

> From: Carl Baldwin 
> To: OpenStack Development Mailing List
,
> Date: 01/21/2014 09:27 PM
> Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>
> The bottom line is that the method you mentioned shouldn't validate
> the subnet. It should assume the subnet has been validated and
> validate the pool.  It seems to do an adequate job of that.
> Perhaps there is a _validate_subnet method that you should be
> focused on?  (I'd check but I don't have convenient access to the
> code at the moment)
> Carl
> On Jan 21, 2014 6:16 PM, "Paul Ward"  wrote:
> You beat me to it. :)  I just responded about not checking the
> allocation pool start and end but rather, checking subnet_first_ip
> and subnet_last_ip, which is set as follows:
>
> subnet = netaddr.IPNetwork(subnet_cidr)
> subnet_first_ip = netaddr.IPAddress(subnet.first + 1)
> subnet_last_ip = netaddr.IPAddress(subnet.last - 1)
>
> However, I'm curious about your contention that we're ok... I'm
> assuming you mean that this should already be handled.   I don't
> believe anything is really checking to be sure the allocation pool
> leaves room for a gateway, I think it just makes sure it fits in the
> subnet.  A member of our test team successfully created a network
> with a subnet of 255.255.255.255, so it got through somehow.  I will
> look into that more tomorrow.
>
>
>
> Carl Baldwin  wrote on 01/21/2014 05:27:49 PM:
>
> > From: Carl Baldwin 
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > ,
> > Date: 01/21/2014 05:32 PM
> > Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
> >
> > I think there may be some confusion between the two concepts:  subnet
> > and allocation pool.  You are right that an ipv4 subnet smaller than
> > /30 is not useable on a network.
> >
> > However, this method is checking the validity of an allocation pool.
> > These pools should not include room for a gateway nor broadcast
> > address.  Their relation to subnets is that the range of ips contained
> > in the pool must fit within the allocatable IP space on the subnet
> > from which they are allocated.  Other than that, they are simple
> > ranges; they don't need to be cidr aligned or anything.  A pool of a
> > single IP is valid.
> >
> > I just checked the method's implementation now.  It does check that
> > the pool fits within the allocatable range of the subnet.  I think
> > we're good.
> >
> > Carl
> >
> > On Tue, Jan 21, 2014 at 3:35 PM, Paul Ward  wrote:
> > > Currently, NeutronDbPluginV2._validate_allocation_pools() does some
very
> > > basic checking to be sure the specified subnet is valid.  One thing
that's
> > > missing is checking for a CIDR of /32.  A subnet with one IP address
in it
> > > is unusable as the sole IP address will be allocated to the gateway,
and
> > > thus no IPs are left over to be allocated to VMs.
> > >
> > > The fix for this is simple.  In
> > > NeutronDbPluginV2._validate_allocation_pools(), we'd check for
start_ip ==
> > > end_ip and raise an exception if that's true.
> > >
> > > I've opened lauchpad bug report 1271311
> > > (https://bugs.launchpad.net/neutron/+bug/1271311) for this, but
wanted to
> > > start a discussion here to see if others find this enhancement to be
a
> > > valuable addition.
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-21 Thread Paul Ward

You beat me to it. :)  I just responded about not checking the allocation
pool start and end but rather, checking subnet_first_ip and subnet_last_ip,
which is set as follows:

subnet = netaddr.IPNetwork(subnet_cidr)
subnet_first_ip = netaddr.IPAddress(subnet.first + 1)
subnet_last_ip = netaddr.IPAddress(subnet.last - 1)
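
Worked through for a /32 (the address is just an example), which shows why
such a subnet can never hold a valid pool:

    >>> import netaddr
    >>> subnet = netaddr.IPNetwork('10.0.0.5/32')
    >>> netaddr.IPAddress(subnet.first + 1), netaddr.IPAddress(subnet.last - 1)
    (IPAddress('10.0.0.6'), IPAddress('10.0.0.4'))

subnet_first_ip ends up above subnet_last_ip, so the allocatable range is
empty.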

However, I'm curious about your contention that we're ok... I'm assuming
you mean that this should already be handled.   I don't believe anything is
really checking to be sure the allocation pool leaves room for a gateway, I
think it just makes sure it fits in the subnet.  A member of our test team
successfully created a network with a subnet of 255.255.255.255, so it got
through somehow.  I will look into that more tomorrow.



Carl Baldwin  wrote on 01/21/2014 05:27:49 PM:

> From: Carl Baldwin 
> To: "OpenStack Development Mailing List (not for usage questions)"
> ,
> Date: 01/21/2014 05:32 PM
> Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>
> I think there may be some confusion between the two concepts:  subnet
> and allocation pool.  You are right that an ipv4 subnet smaller than
> /30 is not useable on a network.
>
> However, this method is checking the validity of an allocation pool.
> These pools should not include room for a gateway nor broadcast
> address.  Their relation to subnets is that the range of ips contained
> in the pool must fit within the allocatable IP space on the subnet
> from which they are allocated.  Other than that, they are simple
> ranges; they don't need to be cidr aligned or anything.  A pool of a
> single IP is valid.
>
> I just checked the method's implementation now.  It does check that
> the pool fits within the allocatable range of the subnet.  I think
> we're good.
>
> Carl
>
> On Tue, Jan 21, 2014 at 3:35 PM, Paul Ward  wrote:
> > Currently, NeutronDbPluginV2._validate_allocation_pools() does some
very
> > basic checking to be sure the specified subnet is valid.  One thing
that's
> > missing is checking for a CIDR of /32.  A subnet with one IP address in
it
> > is unusable as the sole IP address will be allocated to the gateway,
and
> > thus no IPs are left over to be allocated to VMs.
> >
> > The fix for this is simple.  In
> > NeutronDbPluginV2._validate_allocation_pools(), we'd check for start_ip
==
> > end_ip and raise an exception if that's true.
> >
> > I've opened lauchpad bug report 1271311
> > (https://bugs.launchpad.net/neutron/+bug/1271311) for this, but wanted
to
> > start a discussion here to see if others find this enhancement to be a
> > valuable addition.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-21 Thread Paul Ward

Possibly, though I don't see code that checks the actual CIDR length.  It
seems to check CIDR correctness via IP correctness, i.e., things like the
ending IP not being smaller than the starting IP, etc.

One change to my original message on what the fix is, we'd have to compare
subnet_first_ip and subnet_last_ip... not start_ip and end_ip as those are
from the pool passed in, not the actual first and last IPs in the subnet.

In the launchpad bug report, it was mentioned you can create a subnet
without a gateway.   I would still contend this is invalid because then you
have a VM on a single-IP subnet without a gateway, which is also a dead
end.

Thoughts?



Edgar Magana  wrote on 01/21/2014 03:04:47 PM:

> From: Edgar Magana 
> To: OpenStack List ,
> Date: 01/21/2014 03:10 PM
> Subject: Re: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>
> Wouldn't be easier just to check if:
>
> cidr is 32?
>
>  I believe it is a good idea to not allow /32 network but this is
> just my opinion
>
> Edgar
>
> From: Paul Ward 
> Reply-To: OpenStack List 
> Date: Tuesday, January 21, 2014 12:35 PM
> To: OpenStack List 
> Subject: [openstack-dev] [neutron] Neutron should disallow /32 CIDR
>
> Currently, NeutronDbPluginV2._validate_allocation_pools() does some
> very basic checking to be sure the specified subnet is valid.  One
> thing that's missing is checking for a CIDR of /32.  A subnet with
> one IP address in it is unusable as the sole IP address will be
> allocated to the gateway, and thus no IPs are left over to be
> allocated to VMs.
>
> The fix for this is simple.  In
> NeutronDbPluginV2._validate_allocation_pools(), we'd check for
> start_ip == end_ip and raise an exception if that's true.
>
> I've opened lauchpad bug report 1271311 (https://bugs.launchpad.net/
> neutron/+bug/1271311) for this, but wanted to start a discussion
> here to see if others find this enhancement to be a valuable addition.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Neutron should disallow /32 CIDR

2014-01-21 Thread Paul Ward


Currently, NeutronDbPluginV2._validate_allocation_pools() does some very
basic checking to be sure the specified subnet is valid.  One thing that's
missing is checking for a CIDR of /32.  A subnet with one IP address in it
is unusable as the sole IP address will be allocated to the gateway, and
thus no IPs are left over to be allocated to VMs.

The fix for this is simple.  In
NeutronDbPluginV2._validate_allocation_pools(), we'd check for start_ip ==
end_ip and raise an exception if that's true.
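
To make that concrete, a minimal standalone sketch (the function name and
the exception used here are illustrative, not the exact Neutron code):

    import netaddr


    def validate_pool_is_not_single_ip(start_ip, end_ip):
        if netaddr.IPAddress(start_ip) == netaddr.IPAddress(end_ip):
            # A one-address pool leaves nothing to hand out once the
            # gateway takes the only IP.
            raise ValueError("allocation pool %s-%s contains only one IP"
                             % (start_ip, end_ip))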

I've opened launchpad bug report 1271311
(https://bugs.launchpad.net/neutron/+bug/1271311) for this, but wanted to
start a discussion here to see if others find this enhancement to be a
valuable addition.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 vlan type driver does not honor network_vlan_ranges

2014-01-17 Thread Paul Ward

Henry, thank you very much for your reply.  To try to tie together our
discussion here with what's in the launchpad bug report I opened
(https://bugs.launchpad.net/neutron/+bug/1269926), here is the method used
to create the network.  I'm creating the network via a UI, which does a
rest api POST to https:///powervc/openstack/network/v2.0//networks with
the following payload:

name: "test4094"
provider:network_type: "vlan"
provider:physical_network: "default"
provider:segmentation_id: 4094

Per the documentation, I assume the tenant_id is obtained via keystone.

Also interesting, I see this in /var/log/neutron/server.log:

2014-01-17 17:43:05.688 62718 DEBUG neutron.plugins.ml2.drivers.type_vlan
[req-484c7ddd-7f83-443b-9427-f7ac327dd99d 0
26e92528a0bc4d84ac0777b2d2b93a83] NT-E59BA3F Reserving specific vlan 4094
on physical network default outside pool
reserve_provider_segment 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/type_vlan.py:212

This indicates OpenStack realizes the vlan is outside the configured range
yet still allowed it, lending even more credence to the idea that I'm
incorrect in my thinking that this should have been prevented.  Further
information to help me understand why this is not being enforced would be
greatly appreciated.

Thanks!

- Paul

Henry Gessau  wrote on 01/16/2014 03:31:44 PM:

> Date: Thu, 16 Jan 2014 16:31:44 -0500
> From: Henry Gessau 
> To: "OpenStack Development Mailing List (not for usage questions)"
>
> Subject: Re: [openstack-dev] [neutron] ML2 vlan type driver does not
>honor network_vlan_ranges
> Message-ID: <52d84fc0.8020...@cisco.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> network_vlan_ranges is a 'pool' of vlans from which to pick vlans for
> tenant networks. Provider networks are not confined to this pool. In
> fact, I believe it is a more common use-case that provider vlans are
> outside the pool so that they do not conflict with tenant vlan allocation.
>
> -- Henry
>
> On Thu, Jan 16, at 3:45 pm, Paul Ward  wrote:
>
> > In testing some new function I've written, I've surfaced the problem that
> > the ML2 vlan type driver does not enforce the vlan range specified in the
> > network_vlan_ranges option in the ml2_conf.ini file.  It is properly enforcing
> > the physical network name, and even checking to be sure the segmentation_id
> > is valid in the sense that it's not outside the range of ALL valid vlan
> > ids.  But it does not actually enforce that the segmentation_id is within the
> > vlan range specified for the given physical network in network_vlan_ranges.
> >
> > The fix I propose is simple.  Add the following check to
> > /neutron/plugins/ml2/drivers/type_vlan.py
> > (TypeVlanDriver.validate_provider_segment()):
> >
> >     range_min, range_max = self.network_vlan_ranges[physical_network][0]
> >     if segmentation_id not in range(range_min, range_max):
> >         msg = (_("segmentation_id out of range (%(min)s through "
> >                  "%(max)s)") %
> >                {'min': range_min,
> >                 'max': range_max})
> >         raise exc.InvalidInput(error_message=msg)
> >
> > This would go near line 182 in
> > https://github.com/openstack/neutron/blob/master/neutron/plugins/
> ml2/drivers/type_vlan.py.
> >
> > One question I have is whether self.network_vlan_ranges
[physical_network]
> > could actually be an empty list rather than a tuple representing the
vlan
> > range.  I believe that should always exist, but the documentation is
not
> > clear on this.  For reference, the corresponding line in
> ml2_conf.ini is this:
> >
> > [ml2_type_vlan]
> > network_vlan_ranges = default:1:4093
> >
> > Thanks in advance to any that choose to provide some insight here!
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] ML2 vlan type driver does not honor network_vlan_ranges

2014-01-16 Thread Paul Ward


In testing some new function I've written, I've surfaced the problem that
the ML2 vlan type driver does not enforce the vlan range specified in the
network_vlan_ranges option in the ml2_conf.ini file.  It is properly enforcing
the physical network name, and even checking to be sure the segmentation_id
is valid in the sense that it's not outside the range of ALL valid vlan
ids.  But it does not actually enforce that the segmentation_id is within the
vlan range specified for the given physical network in network_vlan_ranges.

The fix I propose is simple.  Add the following check
to /neutron/plugins/ml2/drivers/type_vlan.py
(TypeVlanDriver.validate_provider_segment()):

    range_min, range_max = self.network_vlan_ranges[physical_network][0]
    if segmentation_id not in range(range_min, range_max):
        msg = (_("segmentation_id out of range (%(min)s through "
                 "%(max)s)") %
               {'min': range_min,
                'max': range_max})
        raise exc.InvalidInput(error_message=msg)

This would go near line 182 in
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_vlan.py.

One question I have is whether self.network_vlan_ranges[physical_network]
could actually be an empty list rather than a tuple representing the vlan
range.  I believe that should always exist, but the documentation is not
clear on this.  For reference, the corresponding line in ml2_conf.ini is
this:

[ml2_type_vlan]
network_vlan_ranges = default:1:4093
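
To fold in my own open question about the ranges list possibly being empty,
and noting that Python's range() excludes its upper bound, the check could be
sketched like this (inclusive bounds assumed from the default:1:4093 syntax
above; this is a sketch, not a finished patch):

    ranges = self.network_vlan_ranges.get(physical_network, [])
    if ranges:
        range_min, range_max = ranges[0]
        if not range_min <= segmentation_id <= range_max:
            msg = (_("segmentation_id %(id)s out of range (%(min)s through "
                     "%(max)s)") % {'id': segmentation_id,
                                    'min': range_min,
                                    'max': range_max})
            raise exc.InvalidInput(error_message=msg)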

Thanks in advance to any that choose to provide some insight here!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev