Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-04 Thread Christopher Yeoh
On Wed, Dec 4, 2013 at 5:22 PM, Alexandre Levine
alev...@cloudscaling.com wrote:

  It is not a problem to update the code in given direction when the
 decision is made. As I understood right now it's not about the code -
 that's why I'd canceled the code review until the blueprint is accepted -
 it's more about architecture and procedures such as which tests should be
 obligatory and alike.

 My vision of architecture when we'd started the project was:
 Yes, eventually GCE API as well as EC2 should be a separate service
 because of the following reasons:
 1. It covers wider functionality than compute - in fact it covers almost
 the whole cloud. That's why both EC2 and GCE have to go to Neutron, Cinder
 and other services out of the nova boundaries to perform tasks not related
 to compute at all.
 2. Nova is quite big already and has lots of services. It'll be great to
 disintegrate it a little bit for simplicity, loose coupling and such other
 reasons.

 But:
 As long as EC2 is in the nova other alike APIs might stay there as well.
 And it is rather a different task to separate them from it.


It does add a small amount of overhead to making changes to internal Nova
APIs: in addition to making the appropriate changes to the native Nova
API and EC2, patches will also have to modify the GCE API.



 The thing is - we can make GCE API a separate service but we need to be
 told about that. Some decisions should be made so that we could react or we
 won't have time and might miss even Icehouse.


Apologies if I've missed an answer to this question before, but would it be
possible to sit the GCE API on top of the Nova REST API which has much
higher guarantees of stability compared to the internal Nova APIs?

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][heat][keystone] RFC: introducing request identification

2013-12-04 Thread haruka tanizawa
Thank you for your reply.
It helped me understand instance-tasks-api more clearly.


2013/12/4 Andrew Laski andrew.la...@rackspace.com


  This is something that's entirely contained within Nova.  It's just
 adding a different representation of what is already occurring with
 task_states on an instance.


I've got it!



 I think it's better to think of it as a 'tag' field, not task_id.  task_id
 is something that would be generated within Nova, but a tag field would
 allow a client to specify a small amount of data to attach to the task.
  Like a token that could be used to identify requests that have been made.
  So if nothing is specified the field will remain blank.


Is the API for getting task information (e.g. list tasks) scoped per user,
or can anybody call it?
Without per-user scoping, a user may not be able to set a unique id,
because another user may already have used that id.
In that case the id doesn't work as a unique key for a request.



 2013/11/28 Andrew Laski andrew.la...@rackspace.com

 You're correct on request_id and task_id.  What I'm planning is a string
 field that a user can pass in with the request and it will be part of the
 task representation.  That field will have no meaning to Nova, but a
 client
 like Heat could use it to ensure that they don't send requests twice by
 checking if there's a task with that field set.


 Since task_id is auto-generated, I want to set a unique string in the 'tag'
field myself.
(Maybe letting the user supply the task_id him/herself would be hard to accept?)
I want to use this field to judge whether a request is a retry (duplicate)
or a new one.
So, how about making this 'tag' field a flexible metadata field that other
APIs (I don't know which yet) can refer to as well?
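The idempotency scheme being discussed can be sketched in a few lines. Everything below (FakeNova, list_tasks, ensure_server) is a hypothetical stand-in for illustration, not the real Nova or Heat API:

```python
import uuid

class FakeNova:
    """Stand-in for a compute API whose tasks carry a client-supplied tag."""
    def __init__(self):
        self.tasks = []

    def list_tasks(self, tag=None):
        return [t for t in self.tasks if tag is None or t["tag"] == tag]

    def create_server(self, name, tag):
        # The service generates its own task_id; the tag is opaque to it.
        task = {"task_id": str(uuid.uuid4()), "tag": tag, "name": name}
        self.tasks.append(task)
        return task

def ensure_server(client, name, tag):
    """Retry-safe create: only issue the request if no task carries the tag."""
    existing = client.list_tasks(tag=tag)
    if existing:
        return existing[0]            # request already made; don't duplicate
    return client.create_server(name, tag=tag)

nova = FakeNova()
tag = "heat-stack42-res7"             # unique within the caller's own scope
t1 = ensure_server(nova, "web0", tag)
t2 = ensure_server(nova, "web0", tag) # a retry of the same logical request
assert t1["task_id"] == t2["task_id"] and len(nova.tasks) == 1
```

Note the tag only works as an idempotency key if task listing is scoped per user/project, so the tag need only be unique within the caller's own scope — which is exactly the concern raised above.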

Sincerely, Haruka Tanizawa


[openstack-dev] [Glance] Anybody working on bug 1251055?

2013-12-04 Thread David koo
Hi All,

Bug 1251055 was taken on 14th Nov but there seems to have been no activity 
since then (no fix proposed).

I think I have a fix for this and would like to upload it, but is it 
acceptable to do so when the bug is assigned to somebody else? Can I just 
assign it to myself and upload the fix? Would such a move be considered 
(very) rude?

--
Koo



Re: [openstack-dev] [openstack-tc] [barbican] Curious about oslo.messaging (from Incubation Request for Barbican)

2013-12-04 Thread Sylvain Bauza

On 04/12/2013 06:01, John Wood wrote:

I was curious if there is an OpenStack project that would be a good example to 
follow as we convert Barbican over to oslo messaging.

I've been examining existing OpenStack projects such as Ceilometer and Keystone 
to see how they are utilizing oslo messaging. These projects appear to be 
utilizing packages such as 'rpc' and 'notifier' from the oslo-incubator 
project. It seems that the oslo.messaging project's structure is different than 
the messaging structure of oslo-incubator (there are newer classes such as 
Transport now for example). Is there an example OpenStack project utilizing the 
oslo.messaging structure that might be better for us to follow?

The RPC approach of Ceilometer in particular seems well suited to Barbican's 
use case, so seems to be a good option for us to follow, unless there are 
better options folks can suggest.

Thanks,


Hi John,

Climate (a Stackforge project) is currently using oslo-incubator rpc, 
but we have a review in progress for changing to oslo.messaging. Maybe 
you can take a look at it [1] and contact us on #openstack-climate if 
you need help.


-Sylvain

[1] : https://review.openstack.org/#/c/57880/


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread David Chadwick
I am happy with this as far as it goes. I would like to see it being
made more general, where domains, services and projects can also own and
name roles

regards

David


On 04/12/2013 01:51, Adam Young wrote:
 I've been thinking about your comment that nested roles are confusing
 
 
 What if we backed off and said the following:
 
 
 Some role-definitions are owned by services.  If a role definition is
 owned by a service, then in role assignment lists in tokens those roles
 will be prefixed by the service name.  / is a reserved character and will
 be used as the divider between segments of the role definition.
 
 That drops arbitrary nesting, and provides a reasonable namespace.  Then
 a role def would look like:
 
 glance/admin  for the admin role on the glance project.
 
 
 
 In theory, we could add the domain to the namespace, but that seems
 unwieldy.  If we did, a role def would then look like this
 
 
 default/glance/admin  for the admin role on the glance project.
 
 Is that clearer than the nested roles?
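A minimal sketch of the proposed namespacing, assuming '/' as the reserved divider (the helper and its behaviour are illustrative, not Keystone code):

```python
def parse_role(role):
    """Split a service-scoped role string; service is None when unscoped.
    Illustrative helper only, not Keystone code."""
    if "/" in role:
        service, _, name = role.partition("/")
        return service, name
    return None, role

assert parse_role("glance/admin") == ("glance", "admin")
assert parse_role("admin") == (None, "admin")
# Adding the domain would simply prepend one more segment:
assert "default/glance/admin".split("/") == ["default", "glance", "admin"]
```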
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad
 with proposal for nested role definition

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I
 am sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your
 comments and tried to explain why I am advocating this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 I have just added some comments to your blueprint page

 regards

 David


 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,

  
 Based on our discussion at the design summit, I have redone the service_id
 binding with roles BP:
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.

 I have added a new BP (link below) along with detailed use case to
 support this BP.

 https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition


 Below etherpad link has some proposals for Role REST representation and
 pros and cons analysis

  
 https://etherpad.openstack.org/p/service-scoped-role-definition

  
 Please take look and let me know your thoughts.

  
 It would be awesome if we can discuss it in tomorrow's meeting.

  
 Thanks,

 Arvind

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] [Oslo] Layering oslo.messaging usage of config

2013-12-04 Thread Julien Danjou
On Tue, Dec 03 2013, Joshua Harlow wrote:

 Another question that's come up recently: Python 3.3 support? Will
 oslo.messaging achieve that? Maybe it's a later goal, but it seems like one
 that is required (and should almost be expected of new libraries imho).
 Thoughts? Seems to be mainly eventlet that is the blocker for
 oslo.messaging (so maybe it can be adjusted to work with things other than
 eventlet to make it python33 compat).

I didn't dig into that yet, but I think the only blocker shall be
eventlet indeed.

However, the eventlet usage in oslo.messaging is smartly decoupled from
the rest of the code. It's only used in an executor. That means only
this executor module won't be available on Python 3. But there's still
the generic one, and any other somebody would add.
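The seam Julien describes might be sketched like this; the class names are illustrative stand-ins, not the actual oslo.messaging executor API:

```python
# The dispatch path depends only on an executor seam; an eventlet-based
# executor would be one optional implementation behind the same interface.
# Names are illustrative, not the actual oslo.messaging classes.

class BlockingExecutor:
    """Generic executor: runs each callback inline; works on any Python."""
    def submit(self, fn, *args):
        return fn(*args)

class Listener:
    def __init__(self, executor):
        self.executor = executor   # eventlet, if used, hides behind this

    def on_message(self, handler, payload):
        return self.executor.submit(handler, payload)

listener = Listener(BlockingExecutor())
assert listener.on_message(str.upper, "ack") == "ACK"
```

Dropping the eventlet executor module on Python 3 then costs nothing: any other executor satisfying `submit` slots in unchanged.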

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




Re: [openstack-dev] [openstack-tc] [barbican] Curious about oslo.messaging (from Incubation Request for Barbican)

2013-12-04 Thread Mark McLoughlin
On Wed, 2013-12-04 at 05:01 +, John Wood wrote:
 Hello folks,
 
 I was curious if there is an OpenStack project that would be a good
 example to follow as we convert Barbican over to oslo messaging. 
 
 I've been examining existing OpenStack projects such as Ceilometer and
 Keystone to see how they are utilizing oslo messaging. These projects
 appear to be utilizing packages such as 'rpc' and 'notifier' from the
 oslo-incubator project. It seems that the oslo.messaging project's
 structure is different than the messaging structure of oslo-incubator
 (there are newer classes such as Transport now for example). Is there
 an example OpenStack project utilizing the oslo.messaging structure
 that might be better for us to follow?
 
 The RPC approach of Ceilometer in particular seems well suited to
 Barbican's use case, so seems to be a good option for us to follow,
 unless there are better options folks can suggest.

The patch to port Nova might be helpful to you:

  https://review.openstack.org/39929

(Note - the patch is complete and ready to merge, it's just temporarily
blocked on a separate Nova issue being resolved)
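For a feel of the new-style layout John asks about, here is a rough sketch using minimal local stand-ins so it runs without the library; in real code the Transport, Target, and RPC server come from oslo.messaging (e.g. messaging.get_transport(), messaging.Target(), messaging.get_rpc_server()):

```python
# Local stand-ins mirroring the oslo.messaging shape: a Target names the
# destination, endpoint objects expose RPC-callable methods, and a server
# dispatches incoming calls to them.  None of this is the real library.

class Target:
    def __init__(self, topic, server=None):
        self.topic, self.server = topic, server

class SecretEndpoint:
    """Each public method becomes an RPC-callable method."""
    def generate(self, ctxt, length):
        return "x" * length

class FakeRPCServer:
    """Dispatches a named method to the first endpoint that has it."""
    def __init__(self, target, endpoints):
        self.target, self.endpoints = target, endpoints

    def dispatch(self, ctxt, method, **kwargs):
        for ep in self.endpoints:
            fn = getattr(ep, method, None)
            if callable(fn):
                return fn(ctxt, **kwargs)
        raise AttributeError("no endpoint exposes %r" % method)

server = FakeRPCServer(Target(topic="barbican"), [SecretEndpoint()])
assert server.dispatch({}, "generate", length=4) == "xxxx"
```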

Mark.




Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread David Chadwick
I have added comments 111 to 122

david

On 03/12/2013 23:58, Tiwari, Arvind wrote:
 Hi David,
 
 I have added my comments underneath line # 97 till line #110, it is mostly 
 aligned with your proposal with some modification.
  
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 
 Thanks for your time,
 Arvind
 
 
 
 -Original Message-
 From: Tiwari, Arvind 
 Sent: Monday, December 02, 2013 4:22 PM
 To: Adam Young; OpenStack Development Mailing List (not for usage questions); 
 David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 Hi Adam and David,
 
 Thank you so much for all the great comments, seems we are making good 
 progress.
 
 I have replied to your comments and also added some to support my proposal
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 David, I like your suggestion for role-def scoping which can fit in my Plan B 
 and I think Adam is cool with plan B.
 
 Please let me know if David's proposal for role-def scoping is cool for 
 everybody?
 
 
 Thanks,
 Arvind
 
 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Wednesday, November 27, 2013 8:44 AM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
 questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad with 
 proposal for nested role definition
 
 Updated.  I made my changes Green.  It isn't easy being green.
 

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I am 
 sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your comments 
 and tried to explain why I am advocating this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 I have just added some comments to your blueprint page

 regards

 David


 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,

   

 Based on our discussion at the design summit, I have redone the service_id
 binding with roles BP:
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.
 I have added a new BP (link below) along with detailed use case to
 support this BP.

 https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition

 Below etherpad link has some proposals for Role REST representation and
 pros and cons analysis

   

 https://etherpad.openstack.org/p/service-scoped-role-definition

   

 Please take look and let me know your thoughts.

   

 It would be awesome if we can discuss it in tomorrow's meeting.

   

 Thanks,

 Arvind

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] [Nova] New API requirements, review of GCE

2013-12-04 Thread Alexandre Levine


On 04.12.2013 11:57, Christopher Yeoh wrote:
On Wed, Dec 4, 2013 at 5:22 PM, Alexandre Levine 
alev...@cloudscaling.com mailto:alev...@cloudscaling.com wrote:


It is not a problem to update the code in given direction when the
decision is made. As I understood right now it's not about the
code - that's why I'd canceled the code review until the blueprint
is accepted - it's more about architecture and procedures such as
which tests should be obligatory and alike.

My vision of architecture when we'd started the project was:
Yes, eventually GCE API as well as EC2 should be a separate
service because of the following reasons:
1. It covers wider functionality than compute - in fact it covers
almost the whole cloud. That's why both EC2 and GCE have to go to
Neutron, Cinder and other services out of the nova boundaries to
perform tasks not related to compute at all.
2. Nova is quite big already and has lots of services. It'll be
great to disintegrate it a little bit for simplicity, loose
coupling and such other reasons.

But:
As long as EC2 is in the nova other alike APIs might stay there as
well. And it is rather a different task to separate them from it.


It does add a small amount of overhead to making changes to internal 
Nova APIs: in addition to making the appropriate changes to the 
native Nova API and EC2, patches will also have to modify the GCE API.


Agree.



The thing is - we can make GCE API a separate service but we need
to be told about that. Some decisions should be made so that we
could react or we won't have time and might miss even Icehouse.


Apologies if I've missed an answer to this question before, but would 
it be possible to sit the GCE API on top of the Nova REST API which 
has much higher guarantees of stability compared to the internal Nova 
APIs?




It is totally possible to sit the GCE API on top of the Nova, Neutron, 
Cinder and other REST APIs. In fact it already sits on top of the other 
REST APIs, except for the Nova services, which we implemented the way it's 
done in EC2 for uniformity's sake.
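As a toy illustration of such layering, a hypothetical GCE-style instance-insert body could be mapped onto a Nova REST create-server payload roughly like this (the GCE-side field names are simplified assumptions, not the real GCE schema):

```python
# Hypothetical translation layer: a GCE-style 'instances.insert' body
# mapped onto a Nova REST 'create server' body.  Field names on the
# GCE side are simplified for illustration.

def gce_insert_to_nova_server(gce_body):
    """Map a GCE-ish instance request to a Nova POST /servers payload."""
    return {
        "server": {
            "name": gce_body["name"],
            "flavorRef": gce_body["machineType"].rsplit("/", 1)[-1],
            "imageRef": gce_body["disks"][0]["sourceImage"],
        }
    }

req = {
    "name": "vm-1",
    "machineType": "zones/z1/machineTypes/42",
    "disks": [{"sourceImage": "img-123"}],
}
assert gce_insert_to_nova_server(req) == {
    "server": {"name": "vm-1", "flavorRef": "42", "imageRef": "img-123"}
}
```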



Chris




Best regards,
  Alex Levine





Re: [openstack-dev] [Neutron] Tuning QueuePool parameters?

2013-12-04 Thread Salvatore Orlando
I think this bug was considered fixed because at the time, once the patch
addressing it was merged, the bug automatically went into fix committed.
It should therefore be re-opened. Even if tweaking sql pool parameters
avoids the issue, this should be considered more of a mitigation than
a permanent fix; and I'm not sure the issue and mitigation have been
properly documented.

However, I think we still need to fully understand why the effect of this
issue is two interfaces on the same instance.
Even if sql session management is improved, there could always be a chance
of pool exhaustion; not making this happen is up to the deployer (who needs
appropriate documentation to that aim).
But in case of pool exhaustion (TimeoutError: QueuePool limit of size 5
overflow 10 reached, connection timed out, timeout 30) the effect should
probably be a 500 error when performing the neutron operation - not what's
being observed by bug 1160442.
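For reference, the quoted TimeoutError reflects SQLAlchemy's QueuePool defaults (pool_size=5, max_overflow=10, timeout=30); the exhaustion can be reproduced on a toy scale like this:

```python
import sqlite3
from sqlalchemy.pool import QueuePool
from sqlalchemy.exc import TimeoutError as PoolTimeoutError

# A tiny pool: one persistent connection, no overflow, short timeout.
# (create_engine exposes these as pool_size / max_overflow / pool_timeout.)
pool = QueuePool(lambda: sqlite3.connect(":memory:"),
                 pool_size=1, max_overflow=0, timeout=0.2)

held = pool.connect()      # first checkout occupies the only slot
try:
    pool.connect()         # second checkout: pool exhausted, times out
    exhausted = False
except PoolTimeoutError:   # "QueuePool limit of size 1 overflow 0 reached..."
    exhausted = True
held.close()               # returning the connection frees the slot
assert exhausted and pool.connect() is not None
```

Raising the limits only moves the cliff; the 500-vs-wrong-wiring question Salvatore raises is independent of where the cliff sits.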

Salvatore



On 3 December 2013 16:16, Maru Newby ma...@redhat.com wrote:

 I recently ran into this bug while trying to concurrently boot a large
 number (75) of VMs:

 https://bugs.launchpad.net/neutron/+bug/1160442

 I see that the fix for the bug added configuration of SQLAlchemy QueuePool
 parameters that should prevent the boot failures I was seeing.  However, I
 don't see a good explanation on the bug as to what values to set the
 configuration to or why the defaults weren't updated to something sane.  If
 that information is available somewhere, please share!  I'm not sure why
 this bug is considered fixed if it's still possible to trigger it with no
 clear path to resolution.


 Maru


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Oslo] Layering oslo.messaging usage of config

2013-12-04 Thread Flavio Percoco

On 04/12/13 10:14 +0100, Julien Danjou wrote:

On Tue, Dec 03 2013, Joshua Harlow wrote:


Another question that's come up recently: Python 3.3 support? Will
oslo.messaging achieve that? Maybe it's a later goal, but it seems like one
that is required (and should almost be expected of new libraries imho).
Thoughts? Seems to be mainly eventlet that is the blocker for
oslo.messaging (so maybe it can be adjusted to work with things other than
eventlet to make it python33 compat).


I didn't dig into that yet, but I think the only blocker shall be
eventlet indeed.

However, the eventlet usage in oslo.messaging is smartly decoupled from
the rest of the code. It's only used in an executor. That means only
this executor module won't be available on Python 3. But there's still
the generic one, and any other somebody would add.



Correct. The idea there is to be able to support different executors,
which will make it easier to port oslo.messaging to Py3K without
depending on eventlet / gevent support for it.

I just created this:
https://blueprints.launchpad.net/oslo.messaging/+spec/asyncio-executor

FF

--
@flaper87
Flavio Percoco




[openstack-dev] [Nova] Core sponsors wanted for BP user defined shutdown

2013-12-04 Thread Day, Phil
Hi Nova cores,

As per the discussion at the Summit, I need two (or more) nova cores to sponsor 
the BP that gives guests a chance to shut down cleanly rather than just yanking 
the virtual power cord out - which is approved and targeted for I2:

https://review.openstack.org/#/c/35303/

The non-API aspect of this has been kicking around for a while now (on patch 
set 30) and is passing all of the tests etc. (the change in timing was upsetting 
some of the long-running Tempest tests, but this has now been fixed) - and as 
far as I know there are no outstanding issues to be addressed. It would be 
really nice to get this landed now before it needs another rebase.

The API aspect is also under development and on target to be available for 
review soon.

Any takers ?

Cheers
Phil


Re: [openstack-dev] When should things get added to Oslo.Incubator

2013-12-04 Thread Flavio Percoco

On 04/12/13 01:13 +, Adrian Otto wrote:

Jay is right. What we have is probably close enough to what's in Nova to 
qualify for oslo-incubator. The simplifications seem to me to have general 
appeal so this code would be more attractive to other projects. One worry I 
have is that there is still a good deal of magic behavior in this code, as 
reviewers have made clear notes about in the code review. I'd like to try it 
and see if there are further simplifications we could entertain to make this 
code easier to debug and maintain. It would be great if such iterations 
happened in a place where other projects could easily leverage them.

I will remind us that Solum is not currently an incubated project. Should we 
address this concern now, or during an incubation phase?


This is not just a Solum issue but a general issue throughout
OpenStack. The sooner we sort this out, the better.



Some approaches for us to consider:

1) Merge this code in Solum, open a bug against it to move it back into 
oslo-incubation, open a stub project in oslo-incubation with a read me that 
references the bug, and continue to iterate on it in Solum until we are reasonably 
happy with it. Then during an incubation phase, we can resolve that bug by 
putting the code into oslo-incubation, and achieve the goal of making more 
reusable work between projects.

We could also address that bug at such time as any other ecosystem project is 
looking for a similar solution, and finds the stub project in oslo-incubation.

2) Just plunk all of this code into oslo-incubation as-is and do all iterating 
there. That might cause a bit more copying around of code during the 
simplification process, but would potentially achieve the reusability goal 
sooner, possibly by a couple of months.

3) Use pypi. In all honesty we have enough new developers (about half the 
engineers on this project) coming up to speed with how things work in the 
OpenStack ecosystem that I'm reluctant to throw that into the mix too.

What do you all prefer?



I'd personally prefer number 2. Besides the reasons already raised in
this thread we should also add the fact that having it in
oslo-incubator will make it easier for people from other projects to
contribute, review and improve that code.



On Dec 3, 2013, at 2:58 PM, Mark McLoughlin mar...@redhat.com
wrote:


On Tue, 2013-12-03 at 22:44 +, Joshua Harlow wrote:

Sure sure, let me not make that assumption (can't speak for them), but
even libraries on pypi have to deal with API instability.


Yes, they do ... either by maintaining stability, bumping their major
version number to reflect an incompatible change ... or by annoying the
hell out of their users!


Just more of a suggestion: might as well bite the bullet (if objects folks
feel ok with this) and just learn to deal with the pypi method for dealing
with API instability (versions, deprecation...). Since copying code around
is just creating a miniature version of the same 'learning experience',
except you lose the other parts (versioning, deprecation, ...) which come
along with pypi and libraries.


Yes, if the maintainers of the API are prepared to deal with the demands
of API stability, publishing the API as a standalone library would be
far more preferable.

Failing that, oslo-incubator offers a halfway house which sucks, but not
as much as the alternative - projects copying and pasting each
other's code and evolving their copies independently.


Agreed. Also, as mentioned above, keeping the code in oslo will bring
more eyeballs to the review, which helps a lot when designing APIs and
seeking stability.

Projects throughout OpenStack look for re-usable code in Oslo first -
or at least I think they should - and then elsewhere. Putting things
in oslo-incubator has also a community impact, not just technical
benefits. IMHO.

FF

--
@flaper87
Flavio Percoco




[openstack-dev] [Neutron] ML2 improvement, more extensible and more modular

2013-12-04 Thread Zang MingJie
Hi, all:

I have already written a patch [1] which makes ml2 segments more extensible,
so that segments can contain more fields than just physical network and
segmentation id, but there is still a lot of work to do to make ml2
more extensible. Here are some of my thoughts.

First, introduce a new extension to replace the provider extension. Currently
the provider extension only supports physical network and segmentation id,
and as a result the net-create and net-show calls can't handle any extra
fields. Because we already have multi-segment support, we may need an
extension which extends the network with only one field, segments; json can
be used to describe segments when accessing the API (net-create/show). But
there comes a new problem: type drivers must check the access policy of
fields inside a segment very carefully, as there is nowhere to enforce the
access permission other than the type driver. The multiprovidernet extension
is a good starting point, but some modification is still required.

Second, add segment calls to the mechanism driver. There is a one-to-many
relation between a network and its segments, but it is not explicit and is
hidden inside the multi-segment implementation; it should be clearer and
more extensible, so people can use it wisely. I want to add some new APIs
to the mechanism manager which handle segment-related operations, e.g.
segment_create/segment_release, and separate segment operations from
network operations.

Last, as our l2 agent (ovs-agent) only handles l2 segment operations and
does nothing with networks or subnets, I wonder if we can remove all
network-related code inside the agent implementation and only handle
segments, changing the lvm map from {network_id -> segment/ports} to
{segment_id -> segment/ports}. The goal is to make the ovs-agent a pure
l2 agent.

[1] https://review.openstack.org/#/c/37893/
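A rough sketch of that direction (purely illustrative, not actual ML2 code): segments managed directly by id, with field validation owned by the type driver:

```python
class VlanTypeDriver:
    """Hypothetical type driver owning the 'physical_network' and
    'segmentation_id' fields inside a segment dict."""
    network_type = "vlan"

    def validate(self, segment):
        vid = segment.get("segmentation_id") or 0
        if not (segment.get("physical_network") and 1 <= vid <= 4094):
            raise ValueError("invalid vlan segment: %s" % segment)

class SegmentManager:
    """Maps segment_id -> segment directly, independent of any network."""
    def __init__(self, drivers):
        self.drivers = {d.network_type: d for d in drivers}
        self.segments = {}

    def segment_create(self, segment_id, segment):
        # The owning type driver is the one place that checks field access.
        self.drivers[segment["network_type"]].validate(segment)
        self.segments[segment_id] = segment

    def segment_release(self, segment_id):
        self.segments.pop(segment_id, None)

mgr = SegmentManager([VlanTypeDriver()])
mgr.segment_create("seg-1", {"network_type": "vlan",
                             "physical_network": "physnet1",
                             "segmentation_id": 100})
assert "seg-1" in mgr.segments
```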

--
Zang MingJie


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Jiri Tomasek

Hi,

As the development of Tuskar-UI has somewhat stagnated recently, I have been 
focusing more on the Horizon project lately to get features we need for 
Tuskar-UI. I acknowledge that I haven't been paying enough attention and 
doing reviews in TripleO. The statistics say it all. But as the development 
of Tuskar-UI is about to pick up rapidly, it would be nice to be able to 
give +2's here. I'll try to get up to speed with TripleO together with the 
upcoming Tuskar-UI changes.


Jirka


On 12/04/2013 08:12 AM, Robert Collins wrote:

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Ghe Rivero for -core
  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm going
to throw in some boilerplate here for a few more editions... - I'm
going to talk about stats here, but they
are only part of the picture : folk that aren't really being /felt/ as
effective reviewers won't be asked to take on -core responsibility,
and folk who are less active than needed but still very connected to
the project may still keep them : it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

Our merger with Tuskar has now had plenty of time to bed down; folk
from the Tuskar project who have been reviewing widely within TripleO
for the last three months are not in any way disadvantaged vs previous
core reviewers when merely looking at the stats; and they've had three
months to get familiar with the broad set of codebases we maintain.

90 day active-enough stats:

+------------------+----------------------------------------+----------------+
|     Reviewer     | Reviews   -2  -1  +1  +2  +A    +/- %  | Disagreements* |
+------------------+----------------------------------------+----------------+
|   lifeless **    |     521   16 181   6 318 141    62.2%  |   16 (  3.1%)  |
|     cmsj **      |     416    1  30   1 384 206    92.5%  |   22 (  5.3%)  |
| clint-fewbar **  |     379    2  83   0 294 120    77.6%  |   11 (  2.9%)  |
|    derekh **     |     196    0  36   2 158  78    81.6%  |    6 (  3.1%)  |
|    slagle **     |     165    0  36  94  35  14    78.2%  |   15 (  9.1%)  |
|    ghe.rivero    |     150    0  26 124   0   0    82.7%  |   17 ( 11.3%)  |
|    rpodolyaka    |     142    0  34 108   0   0    76.1%  |   21 ( 14.8%)  |
|    lsmola **     |     101    1  15  27  58  38    84.2%  |    4 (  4.0%)  |
|    ifarkas **    |      95    0  10   8  77  25    89.5%  |    4 (  4.2%)  |
|     jistr **     |      95    1  19  16  59  23    78.9%  |    5 (  5.3%)  |
|      markmc      |      94    0  35  59   0   0    62.8%  |    4 (  4.3%)  |
|    pblaho **     |      83    1  13  45  24   9    83.1%  |   19 ( 22.9%)  |
|    marios **     |      72    0   7  32  33  15    90.3%  |    6 (  8.3%)  |
|   tzumainn **    |      67    0  17  15  35  15    74.6%  |    3 (  4.5%)  |
|    dan-prince    |      59    0  10  35  14  10    83.1%  |    7 ( 11.9%)  |
|      jogo        |      57    0   6  51   0   0    89.5%  |    2 (  3.5%)  |
+------------------+----------------------------------------+----------------+


This is a massive improvement over last month's report. \o/ Yay. The
cutoff line here is pretty arbitrary - I extended a couple of rows
below one-per-work-day because Dan and Joe were basically there - and
there is a somewhat bigger gap to the next most active reviewer below
that.

About half of Ghe's reviews are in the last 30 days, and ~85% in the
last 60 - but he has been doing significant numbers of thoughtful
reviews over the whole three months - I'd like to propose him for
-core.
Roman has very 

[openstack-dev] [Neutron][LBaaS] Vendor feedback needed

2013-12-04 Thread Eugene Nikanorov
Hi load balancing vendors!

I have a specific question: how are the drivers for your solutions
(devices/vms/processes) going to wire a VIP to external and tenant
networks?
As we're working on creating a suite for third-party testing, we would like
to make sure that the scenarios we create fit the usage patterns of all
providers, if that is possible at all.
If it is not possible, we need to think of a more comprehensive LBaaS API
and tests.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Joe Gordon
On Dec 4, 2013 5:41 AM, Maru Newby ma...@redhat.com wrote:


 On Dec 4, 2013, at 11:57 AM, Clint Byrum cl...@fewbar.com wrote:

  Excerpts from Maru Newby's message of 2013-12-03 08:08:09 -0800:
  I've been investigating a bug that is preventing VM's from receiving
IP addresses when a Neutron service is under high load:
 
  https://bugs.launchpad.net/neutron/+bug/1192381
 
  High load causes the DHCP agent's status updates to be delayed,
causing the Neutron service to assume that the agent is down.  This results
in the Neutron service not sending notifications of port addition to the
DHCP agent.  At present, the notifications are simply dropped.  A simple
fix is to send notifications regardless of agent status.  Does anybody have
any objections to this stop-gap approach?  I'm not clear on the
implications of sending notifications to agents that are down, but I'm
hoping for a simple fix that can be backported to both havana and grizzly
(yes, this bug has been with us that long).
 
  Fixing this problem for real, though, will likely be more involved.
 The proposal to replace the current wsgi framework with Pecan may increase
the Neutron service's scalability, but should we continue to use a 'fire
and forget' approach to notification?  Being able to track the success or
failure of a given action outside of the logs would seem pretty important,
and allow for more effective coordination with Nova than is currently
possible.
 
 
  Dropping requests without triggering a user-visible error is a pretty
  serious problem. You didn't mention if you have filed a bug about that.
  If not, please do or let us know here so we can investigate and file
  a bug.

 There is a bug linked to in the original message that I am already
working on.  The fact that that bug title is 'dhcp agent doesn't configure
ports' rather than 'dhcp notifications are silently dropped' is incidental.

 
  It seems to me that they should be put into a queue to be retried.
  Sending the notifications blindly is almost as bad as dropping them,
  as you have no idea if the agent is alive or not.

 This is more the kind of discussion I was looking for.

 In the current architecture, the Neutron service handles RPC and WSGI
with a single process and is prone to being overloaded such that agent
heartbeats can be delayed beyond the limit for the agent being declared
'down'.  Even if we increased the agent timeout as Yongsheng suggests, there
is no guarantee that we can accurately detect whether an agent is 'live'
with the current architecture.  Given that amqp can ensure eventual
delivery - it is a queue - is sending a notification blind such a bad idea?
 In the best case the agent isn't really down and can process the
notification.  In the worst case, the agent really is down but will be
brought up eventually by a deployment's monitoring solution and process the
notification when it returns.  What am I missing?
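Clint's queue-and-retry suggestion can be sketched in a few lines. This is an illustrative toy, not Neutron's actual AMQP notification code, and all names here are made up:

```python
import collections

class RetryingNotifier:
    """Queue notifications per agent and retry failed deliveries
    instead of dropping them (or sending them blind)."""

    def __init__(self, send, max_attempts=5):
        self._send = send          # callable(agent, payload) -> bool
        self._max_attempts = max_attempts
        self._pending = collections.deque()

    def notify(self, agent, payload):
        # Never drop silently: everything goes through the queue.
        self._pending.append((agent, payload, 0))

    def flush(self):
        """Try to deliver everything queued; requeue failures."""
        retry = []
        while self._pending:
            agent, payload, attempts = self._pending.popleft()
            if self._send(agent, payload):
                continue
            if attempts + 1 < self._max_attempts:
                retry.append((agent, payload, attempts + 1))
            # else: give up and surface a user-visible error rather
            # than failing silently.
        self._pending.extend(retry)
```

A periodic task would call flush(); the point is only that delivery state is tracked instead of fire-and-forget.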

 Please consider that while a good solution will track notification
delivery and success, we may need 2 solutions:

 1. A 'good-enough', minimally-invasive stop-gap that can be back-ported
to grizzly and havana.

 2. A 'best-effort' refactor that maximizes the reliability of the DHCP
agent.

 I'm hoping that coming up with a solution to #1 will allow us the
breathing room to work on #2 in this cycle.

I like the two-part approach, but I would phrase it slightly differently:

a short-term solution to help neutron meet the deprecate-nova-network goals
by icehouse-2, and a longer-term, more robust solution.



 m.



 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Advanced validation of config options

2013-12-04 Thread Maxim Kulkin
Hi guys.

There was a blueprint on adding advanced validation for configuration
options:

  https://blueprints.launchpad.net/oslo/+spec/oslo-config-options-validation

I'm glad to announce that I have finished the implementation, and I
invite everybody interested to review it:

  https://review.openstack.org/#/c/58960/

Thanks,
Max
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Daisuke Morita
Hi, everyone.

Which do you think is the best way to code test skipping: raising
cls.skipException in the setUpClass method, or using a skipIf decorator
on each test method?

This question comes to me in reviewing
https://review.openstack.org/#/c/59759/ . I think that work itself is
great and I hope this patch is merged to Tempest. I just want to focus
on coding styles and explicitness of test outputs.

If skipIf annotation is used, test output of Swift is as follows.

---
tempest.api.object_storage.test_account_quotas.AccountQuotasTest
test_admin_modify_quota[gate,smoke]
SKIP  1.15
test_upload_large_object[gate,negative,smoke]
SKIP  0.03
test_upload_valid_object[gate,smoke]
SKIP  0.03
test_user_modify_quota[gate,negative,smoke]
SKIP  0.03
tempest.api.object_storage.test_account_services.AccountTest
test_create_and_delete_account_metadata[gate,smoke]   OK
 0.32
test_list_account_metadata[gate,smoke]OK
 0.02
test_list_containers[gate,smoke]  OK
 0.02

...(SKIP)...

Ran 54 tests in 85.977s

OK
---


On the other hand, if cls.skipException is used, an output is changed as
follows.

---
setUpClass (tempest.api.object_storage.test_account_quotas
AccountQuotasTest)
SKIP  0.00
tempest.api.object_storage.test_account_services.AccountTest
test_create_and_delete_account_metadata[gate,smoke]   OK
 0.48
test_list_account_metadata[gate,smoke]OK
 0.02
test_list_containers[gate,smoke]  OK
 0.02

...(SKIP)...

Ran 49 tests in 81.475s

OK
---


I believe the output of the code using the skipIf annotation is better.
Since the coverage of tests is displayed more explicitly, it is easier
to find out which tests are really skipped.
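The two styles can be reproduced with plain stdlib unittest (Tempest's base classes use testtools, but the skip mechanics are the same; the config flag below is a stand-in, not a real Tempest option):

```python
import unittest

CONFIG_ACCOUNT_QUOTAS = False  # stand-in for a Tempest config flag

# Style 1: class-level skip raised in setUpClass -- the whole class
# collapses into a single "setUpClass (...)" SKIP line in the output.
class AccountQuotasClassSkip(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        if not CONFIG_ACCOUNT_QUOTAS:
            raise unittest.SkipTest("account quotas not enabled")

    def test_admin_modify_quota(self):
        pass

# Style 2: per-test skipIf decorator -- each test is reported
# individually as SKIP, so the skipped coverage stays visible.
class AccountQuotasPerTestSkip(unittest.TestCase):
    @unittest.skipIf(not CONFIG_ACCOUNT_QUOTAS,
                     "account quotas not enabled")
    def test_admin_modify_quota(self):
        pass

    @unittest.skipIf(not CONFIG_ACCOUNT_QUOTAS,
                     "account quotas not enabled")
    def test_user_modify_quota(self):
        pass
```

Running both classes shows the difference: the first produces one skip entry for the class, the second one entry per test method.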

I scanned the whole code of Tempest. The count of cls.skipException
statements is 63, and the count of skipIf annotations is 24. Replacing
them is not a trivial task, but I think the most important thing for
testing is to output a consistent and accurate log.


Am I missing something? Or has this kind of discussion already been done
in the past? If so, could you let me know?


Best Regards,

-- 
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-04 Thread Sean Dague
On 12/03/2013 11:21 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2013-12-03 16:05:47 -0800:
 On 12/03/2013 06:13 PM, Ben Nemec wrote:
 On 2013-12-03 17:09, Sean Dague wrote:
 On 12/03/2013 05:50 PM, Mark McLoughlin wrote:
 On Tue, 2013-12-03 at 16:23 -0600, Ben Nemec wrote:
 On 2013-12-03 15:56, Sean Dague wrote:
 This cinder patch - https://review.openstack.org/#/c/48935/

 Is blocked on failing upgrade because the updated oslo lockutils won't
 function until there is a specific configuration variable added to the
 cinder.conf.

 That work around is proposed here -
 https://review.openstack.org/#/c/52070/3

 However I think this is exactly the kind of forward breaks that we
 want
 to prevent with grenade, as cinder failing to function after a rolling
 upgrade because a config item wasn't added is exactly the kind of pain
 we are trying to prevent happening to ops.

 So the question is, how is this done correctly so that a default
 can be
 set in the cinder code for this value, and it not require a config
 change to work?

 You're absolutely correct, in principle - if the default value for
 lock_path worked for users before, we should absolutely continue to
 support it.

 I don't know that I have a good answer on how to handle this, but for
 context this change is the result of a nasty bug in lockutils that
 meant
 external locks were doing nothing if lock_path wasn't set.  Basically
 it's something we should never have allowed in the first place.

 As far as setting this in code, it's important that all of the
 processes
 for a service are using the same value to avoid the same bad situation
 we were in before.  For tests, we have a lockutils wrapper
 (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L282)
 that sets an environment variable to address this, but that only
 works if all of the processes are going to be spawned from within
 the same wrapper, and I'm not sure how secure that is for production
 deployments since it puts all of the lock files in a temporary
 directory.

 Right, I don't think the previous default really worked - if you used
 the default, then external locking was broken.

 I suspect most distros do set a default - I see RDO has this in its
 default nova.conf:

   lock_path = /var/lib/nova/tmp

 So, yes - this is all terrible.

 IMHO, rather than raise an exception we should log a big fat warning
 about relying on the default and perhaps just treat the lock as an
 in-process lock in that case ... since that's essentially what it was
 before, right?

 So a default of lock_path = /tmp will work (FHS says that path has to be
 there), even if not optimal. Could we make it a default value like that
 instead of the current default which is null (and hence the problem).

 IIRC, my initial fix was something similar to that, but it got shot down
 because putting the lock files in a known world writeable location was a
 security issue.

 Although maybe if we put them in a subdirectory of /tmp and ensured that
 the permissions were such that only the user running the service could
 use that directory, it might be acceptable?  We could still log a
 warning if we wanted.

 This seems like it would have implications for people running services
 on Windows too, but we can probably find a way to make that work if we
 decide on a solution.

 How is that a security issue? Are the lock files being written with some
 sensitive data in them and have g or o permissions on? The sticky bit
 (+t) on /tmp will prevent other users from deleting the file.

 
 Right, but it won't prevent users from creating a symlink with the same
 name.
 
 ln -s /var/lib/nova/instances/x/image.raw /tmp/well.known.location
 
 Now when you do
 
 with open('/tmp/well.known.location', 'w') as lockfile:
   lockfile.write('Stuff')
 
 Nova has just truncated the image file and written 'Stuff' to it.

So that's the generic case (and the way people often write this). But
the oslo lockutils code doesn't work that way. While it does open the
file for write, it does not actually write, it's using fcntl to hold
locks. That's taking a data structure on the fd in kernel memory (IIRC),
so it correctly gives it up if the process crashes.

I'm not saying there isn't some other possible security vulnerability
here as well, but it's not jumping out at me. So I'd love to understand
that, because if we can close that exposure we can provide a working
default, plus a strong recommendation for how to do that *right*. I'd be
totally happy with printing WARNING level at startup if lock_path = /tmp
that this should be adjusted.

 The typical solution is to use a lock directory, /var/run/yourprogram,
 that has restrictive enough permissions setup for your program to have
 exclusive use of it, and is created by root at boot time. That is what
 the packages do now.
 
 It would be good if everybody agreed on a default, %(nova_home)/locks
 or something, but root still must set it up with the right 

[openstack-dev] stable/havana 2013.2.1 freeze tomorrow

2013-12-04 Thread Alan Pevec
Hi,

first stable/havana release 2013.2.1 is scheduled[1] to be released
next week on December 12th, so freeze on stable/havana goes into
effect tomorrow EOD, one week before the release.
Everybody is welcome to help review proposed changes[2] taking into
account criteria for stable fixes[3].

Cheers,
Alan


[1] 
https://wiki.openstack.org/wiki/StableBranchRelease#Planned_stable.2Fhavana_releases
[2] 
https://review.openstack.org/#/q/status:open+AND+branch:stable/havana+AND+(project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer),n,z
[3] https://wiki.openstack.org/wiki/StableBranch#Appropriate_Fixes

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Sean Dague
I agree, preference should go to the function-level skip vs. the class
exception, especially as I have some sample fixture code from Robert
that will remove setUpClass in the future (we do have a long-term goal
of getting rid of it in favor of proper fixtures). It also gives us an
actual count of the number of tests skipped in the configuration, which
is nice.

-Sean

On 12/04/2013 06:46 AM, Daisuke Morita wrote:
 Hi, everyone.
 
 Which do you think is the best way of coding test skipping, writing
 cls.skipException statement in setUpClass method or skipIf annotation
 for each test method ?
 
 This question comes to me in reviewing
 https://review.openstack.org/#/c/59759/ . I think that work itself is
 great and I hope this patch is merged to Tempest. I just want to focus
 on coding styles and explicitness of test outputs.
 
 If skipIf annotation is used, test output of Swift is as follows.
 
 ---
 tempest.api.object_storage.test_account_quotas.AccountQuotasTest
 test_admin_modify_quota[gate,smoke]
 SKIP  1.15
 test_upload_large_object[gate,negative,smoke]
 SKIP  0.03
 test_upload_valid_object[gate,smoke]
 SKIP  0.03
 test_user_modify_quota[gate,negative,smoke]
 SKIP  0.03
 tempest.api.object_storage.test_account_services.AccountTest
 test_create_and_delete_account_metadata[gate,smoke]   OK
  0.32
 test_list_account_metadata[gate,smoke]OK
  0.02
 test_list_containers[gate,smoke]  OK
  0.02
 
 ...(SKIP)...
 
 Ran 54 tests in 85.977s
 
 OK
 ---
 
 
 On the other hand, if cls.skipException is used, an output is changed as
 follows.
 
 ---
 setUpClass (tempest.api.object_storage.test_account_quotas
 AccountQuotasTest)
 SKIP  0.00
 tempest.api.object_storage.test_account_services.AccountTest
 test_create_and_delete_account_metadata[gate,smoke]   OK
  0.48
 test_list_account_metadata[gate,smoke]OK
  0.02
 test_list_containers[gate,smoke]  OK
  0.02
 
 ...(SKIP)...
 
 Ran 49 tests in 81.475s
 
 OK
 ---
 
 
 I believe the output of the code using skipIf annotation is better.
 Since the coverage of tests is displayed more definitely, it is easier
 to find out what tests are really skipped.
 
 I scanned the whole code of Tempest. The count of cls.skipException
 statements is 63, and the count of skipIf annotations is 24. Replacing
 them is not a trivial task, but I think the most important thing for
 testing is to output a consistent and accurate log.
 
 
 Am I missing something? Or, this kind of discussion has been done
 already in the past? If so, could you let me know?
 
 
 Best Regards,
 


-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Top Gate Bugs

2013-12-04 Thread Joe Gordon
TL;DR: Gate is failing 23% of the time due to bugs in nova, neutron and
tempest. We need help fixing these bugs.


Hi All,

Before going any further: we have a bug that is affecting gate and stable,
so it's getting top priority here. elastic-recheck currently doesn't track
unit tests because we don't expect them to fail very often. Turns out that
assessment was wrong - we now have a nova py27 unit test bug in both the
trunk and stable gates.

https://bugs.launchpad.net/nova/+bug/1216851
Title: nova unit tests occasionally fail migration tests for mysql and
postgres
Hits
  FAILURE: 74
The failures appear multiple times for a single job, and some of those are
due to bad patches in the check queue.  But this is being seen in stable
and trunk gate so something is definitely wrong.

===


It's time for another edition of 'Top Gate Bugs.'  I am sending this out
now because in addition to our usual gate bugs a few new ones have cropped
up recently, and as we saw a few weeks ago it doesn't take very many new
bugs to wedge the gate.

Currently the gate has a failure rate of at least 23%! [0]

Note: this email was generated with
http://status.openstack.org/elastic-recheck/ and 'elastic-recheck-success'
[1]

1) https://bugs.launchpad.net/bugs/1253896
Title: test_minimum_basic_scenario fails with SSHException: Error reading
SSH protocol banner
Projects:  neutron, nova, tempest
Hits
  FAILURE: 324
This one has been around for several weeks now, and although we have made
some attempts at fixing it, we aren't any closer to resolving it than we
were a few weeks ago.

2) https://bugs.launchpad.net/bugs/1251448
Title: BadRequest: Multiple possible networks found, use a Network ID to be
more specific.
Project: neutron
Hits
  FAILURE: 141

3) https://bugs.launchpad.net/bugs/1249065
Title: Tempest failure: tempest/scenario/test_snapshot_pattern.py
Project: nova
Hits
  FAILURE: 112
This is a bug in nova's neutron code.

4) https://bugs.launchpad.net/bugs/1250168
Title: gate-tempest-devstack-vm-neutron-large-ops is failing
Projects: neutron, nova
Hits
  FAILURE: 94
This is an old bug that was fixed, but came back on December 3rd. So this
is a recent regression. This may be an infra issue.

5) https://bugs.launchpad.net/bugs/1210483
Title: ServerAddressesTestXML.test_list_server_addresses FAIL
Projects: neutron, nova
Hits
  FAILURE: 73
This has had some attempts made at fixing it but its still around.


In addition to the existing bugs, we have some new bugs on the rise:

1) https://bugs.launchpad.net/bugs/1257626
Title: Timeout while waiting on RPC response - topic: network, RPC
method: allocate_for_instance info: unknown
Project: nova
Hits
  FAILURE: 52
large-ops only bug. This has been around for at least two weeks, but we
have seen it in higher numbers starting around December 3rd. This may be
an infrastructure issue, as neutron-large-ops started failing more
around the same time.

2) https://bugs.launchpad.net/bugs/1257641
Title: Quota exceeded for instances: Requested 1, but already used 10 of 10
instances
Projects: nova, tempest
Hits
  FAILURE: 41
Like the previous bug, this has been around for at least two weeks but
appears to be on the rise.



Raw Data: http://paste.openstack.org/show/54419/


best,
Joe


[0] failure rate = 1-(success rate gate-tempest-dsvm-neutron)*(success rate
...) * ...

gate-tempest-dsvm-neutron = 0.00
gate-tempest-dsvm-neutron-large-ops = 11.11
gate-tempest-dsvm-full = 11.11
gate-tempest-dsvm-large-ops = 4.55
gate-tempest-dsvm-postgres-full = 10.00
gate-grenade-dsvm = 0.00

(I hope I got the math right here)
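The formula in [0] is just the complement of the product of the per-job success rates; a quick sanity-check helper (this is my own sketch, not elastic-recheck code):

```python
def combined_failure_rate(job_failure_percents):
    """Failure rate (in percent) of a gate run that fails if any one
    job fails, assuming job failures are independent."""
    success = 1.0
    for f in job_failure_percents:
        success *= 1.0 - f / 100.0
    return (1.0 - success) * 100.0

# Per-job failure rates quoted above, in percent.
rates = [0.00, 11.11, 11.11, 4.55, 10.00, 0.00]
overall = combined_failure_rate(rates)
```

Under the independence assumption this evaluates to roughly 32% for the rates above, consistent with the "at least 23%" claim.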

[1]
http://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/elastic_recheck/cmd/check_success.py
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multidomain User Ids

2013-12-04 Thread Dolph Mathews
On Sun, Nov 24, 2013 at 9:39 PM, Adam Young ayo...@redhat.com wrote:

 The #1 pain point I hear from people in the field is that they need to
 consume read-only LDAP but have service users in something Keystone
 specific.  We are close to having this, but we have not closed the loop.
  This was something that was Henry's to drive home to completion.  Do we
 have a plan?  Federation depends on this, I think, but this problem stands
 alone.


I'm still thinking through the idea of having keystone natively federate to
itself out of the box, where keystone presents itself as an IdP (primarily
for service users). It sounds like a simpler architectural solution than
having to shuffle around code paths for both federated identities and local
identities.



 Two Solutions:
 1 always require domain ID along with the user id for role assignments.


From an API perspective, how? (while still allowing for cross-domain role
assignments)


 2 provide some way to parse from the user ID what domain it is.


I think you meant this one the other way around: Determine the domain given
the user ID.



  I was thinking that we could do something along the lines of 2, where we
  provide a domain-specific user_id prefix - for example, if there is just
  one LDAP service, and they wanted to prefix anything out of LDAP with ldap@,
  then an id would be the prefix plus the field from LDAP.  This would be
  configured on a per-domain basis, and would be optional.

  The weakness is that it'd be log N to determine which domain a user_id came
  from.  A better approach would be to use a divider, like '@', and then the
  prefix would be the key for a hashtable lookup.  Since it is optional,
  domains could still be stored in SQL and user_ids could be uuids.

  One problem is if someone comes by later and must use email addresses as
  the userid, the @ would mess them up.  So the default divider should be
  something URL-safe but not likely to be part of a userid. I realize that it
  might be impossible to match this criterion.
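The divider/hashtable idea might look like the following sketch. The divider, prefix table and names here are hypothetical illustrations, not Keystone code:

```python
DIVIDER = '@'

# prefix -> domain_id mapping, built from per-domain configuration.
PREFIX_TABLE = {
    'ldap': 'corp_ldap_domain',
}

DEFAULT_DOMAIN = 'default'

def resolve_domain(user_id):
    """Return (domain_id, local_user_id) for a possibly-prefixed ID."""
    prefix, divider, local_id = user_id.partition(DIVIDER)
    if divider and prefix in PREFIX_TABLE:
        # Prefixed ID: O(1) hashtable lookup for the owning domain.
        return PREFIX_TABLE[prefix], local_id
    # No recognized prefix: a plain uuid stored in SQL, default domain.
    return DEFAULT_DOMAIN, user_id
```

The lookup cost is constant regardless of the number of domains, which is the point of the divider over a prefix scan.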


For usernames, sure... but I don't know why anyone would care to use email
addresses as ID's.



 Actually, there might be other reasons to forbid @ signs from IDs, as they
 look like phishing attempts in URLs.


Phishing attempts?? They need to be encoded anyway...





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][database] Update compute_nodes table

2013-12-04 Thread Murray, Paul (HP Cloud Services)
Hi Abbass,

I guess you read the blueprint Russell referred to. I think you actually are 
saying the same thing - but please read the steps below and tell me if they 
don't cover what you want.

This is what it will do:

1.   Add a column to the compute_nodes table for a JSON blob

2.   Add a plug-in framework for additional resources in the resource_tracker 
(just like filters in the filter scheduler)

3.   Resource plugin classes will implement things like:

a.   Claims test method

b.  An "add your data here" method (so it can populate the JSON blob)

4.   Additional column is available in host_state at filter scheduler

You will then be able to do any or all of the following:

1.   Add new parameters to requests in extra_specs

2.   Add new filter/weight classes as scheduler plugins

a.   Will have access to filter properties (including extra_specs)

b.  Will have access to extra resource data (from compute node)

c.   Can generate limits

3.   Add new resource classes as scheduler plugins

a.   Will have access to filter properties (including extra specs)

b.  Will have access to limits (from scheduler)

c.   Can generate extra resource data to go to scheduler

Does this match your needs?

There are also plans to change how data goes from compute nodes to scheduler 
(i.e. not through the database). This will remove the database from the 
equation. But that can be kept as a separate concern.
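To make the plug-in shape concrete, a toy sketch (class and method names here are illustrative, not the blueprint's actual API):

```python
import json

class ResourcePlugin:
    """Hypothetical base class for extensible-resource-tracking plug-ins."""

    def test_claim(self, request):
        """Return True if the requested resources fit (step 3a)."""
        raise NotImplementedError

    def add_data(self, blob):
        """Populate this plug-in's slice of the JSON blob (step 3b)."""
        raise NotImplementedError

class NetworkBandwidthPlugin(ResourcePlugin):
    """Example plug-in tracking a made-up bandwidth resource."""

    def __init__(self, total_mbps):
        self.total_mbps = total_mbps
        self.used_mbps = 0

    def test_claim(self, request):
        wanted = request.get('bandwidth_mbps', 0)
        return self.used_mbps + wanted <= self.total_mbps

    def add_data(self, blob):
        blob['bandwidth'] = {'total': self.total_mbps,
                             'used': self.used_mbps}

def build_extra_resources_column(plugins):
    """Serialize plug-in data into the compute_nodes JSON blob column
    (step 1), for the scheduler to read back from host_state (step 4)."""
    blob = {}
    for p in plugins:
        p.add_data(blob)
    return json.dumps(blob)
```

The filter scheduler side would deserialize the column and hand each plug-in's slice to the matching filter/weight classes.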

Paul.



From: Abbass MAROUNI [mailto:abbass.maro...@virtualscale.fr]
Sent: 03 December 2013 08:54
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][database] Update compute_nodes table

I am aware of this work; in fact I reused a column (pci_stats) in the 
compute_nodes table to store a JSON blob.
I track the resource in the resource_tracker, update the column, and then use 
the blob in a filter.
Maybe I should reformulate my question: how can I add a column to the table and 
use it in the resource_tracker without breaking something?

Best regards,

2013/12/2 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org


--

Message: 1
Date: Mon, 02 Dec 2013 12:06:21 -0500
From: Russell Bryant rbry...@redhat.commailto:rbry...@redhat.com
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova][database] Update compute_nodes
table
Message-ID: 529cbe0d@redhat.commailto:529cbe0d@redhat.com
Content-Type: text/plain; charset=ISO-8859-1

On 12/02/2013 11:47 AM, Abbass MAROUNI wrote:
 Hello,

 I'm looking for way to a add new attribute to the compute nodes by
  adding a column to the compute_nodes table in nova database in order to
 track a metric on the compute nodes and use later it in nova-scheduler.

 I checked the  sqlalchemy/migrate_repo/versions and thought about adding
 my own upgrade then sync using nova-manage db sync.

 My question is :
 What is the process of upgrading a table in the database ? Do I have to
 modify or add a new variable in some class in order to associate the
 newly added column with a variable that I can use ?

Don't add this.  :-)

There is work in progress to just have a column with a json blob in it
for additional metadata like this.

https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
https://wiki.openstack.org/wiki/ExtensibleResourceTracking

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-04 Thread Jarret Raim

 While I am all for adding a new program, I think we should only add one if
 we rule out all existing programs as a home. With that in mind, why not add
 this to the keystone program? Perhaps that may require a tweak to keystone's
 mission statement, but that is doable. I saw a partial answer to this
 somewhere but not a full one.

From our point of view, Barbican can certainly help solve some problems
related to identity like SSH key management and client certs. However,
there is a wide array of functionality that Barbican will handle that is
not related to identity.


Some examples, there is some additional detail in our application if you
want to dig deeper [1].


* Symmetric key management - These keys are used for encryption of data at
rest in various places including Swift, Nova, Cinder, etc. Keys are
resources that roll up to a project, much like servers or load balancers,
but they have no direct relationship to an identity.

* SSL / TLS certificates - The management of certificate authorities and
the issuance of keys for SSL / TLS. Again, these are resources rather than
anything attached to identity.

* SSH Key Management - These could certainly be managed through keystone
if we think that's the right way to go about it, but from Barbican's point
of view, these are just another type of key to be generated and tracked
that rolls up to an identity.


* Client certificates - These are most likely tied to an identity, but
again, just managed as resources from a Barbican point of view.

* Raw Secret Storage - This functionality is usually used by applications
residing in a cloud. An app can use Barbican to store secrets such as
sensitive configuration files, encryption keys and the like. This data
belongs to the application rather than any particular user in Keystone.
For example, some Rackspace customers don't allow their application dev /
maintenance teams direct access to the Rackspace APIs.

* Boot Verification - This functionality is used as part of the trusted
boot functionality for transparent disk encryption on Nova.

* Randomness Source - Barbican manages HSMs which allow us to offer a
source of true randomness.



In short (ha), I would encourage everyone to think of keys / certificates
as resources managed by an API in much the same way we think of VMs being
managed by the Nova API. A consumer of Barbican (either as an OpenStack
service or a consumer of an OpenStack cloud) will have an API to create
and manage various types of secrets that are owned by their project.

Keystone plays a critical role for us (as it does with every service) in
authenticating the user to a particular project and storing the roles that
the user has for that project. Barbican then enforces these restrictions.
However, keys / secrets are fundamentally divorced from identities in much
the same way that databases in Trove are, they are owned by a project, not
a particular user.

Hopefully our thought process makes some sense, let me know if I can
provide more detail.



Jarret





[1] https://wiki.openstack.org/wiki/Barbican/Incubation


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Creating recipes for mimicking behavior of nova-networking network managers.

2013-12-04 Thread Brent Eagles

Hi,

As part of the Icehouse nova-networking parity effort, we need to 
describe how nova-networking managers work and how the behavior is 
mapped to neutron. The benefits are:


1. It aids migration: deployers who are nova-network savvy can see how 
functionality maps from one to the other.


2. It aids implementation: if we cannot provide a mapping, there is a 
breakage in parity and it needs to be addressed somehow.


3. It aids testing (and debugging): by illuminating points where the 
implementations differ, it makes it easier to design and implement tests 
that can be expected to function the same for each networking 
implementation.


4. It aids acceptance: at some point, the proverbial *we* are going to 
decide whether neutron is ready to replace nova-network. The existence of 
working recipes is a pretty strong set of acceptance criteria.  Another 
way to look at it is that the recipes are essentially a high level 
description of how to go about manually testing parity.


5. It aids support and documentation efforts: nearly any point in the 
openstack user spectrum (casual experimenter to hard-core 
developer/implementer) who has anything to do with legacy deployments or 
parity itself will benefit from having these recipes on hand. NOT to 
mention the rtfm option when somebody asks "I'm using FlatManager in 
nova-network and want to do the same in neutron, how does that work?" (I 
love being able to write rtfm, don't you?)


Sounds great!?! Cool! Do you want to help or know someone who does (or 
should - third-person volunteering not discouraged!)? We need 
nova-networking savvy and neutron savvy folks alike, though you need not 
necessarily be both at the same time.


As some of the aforementioned benefits are directly relevant to the 
Icehouse development cycle AND the holiday season is upon us, it is 
important to get the ball rolling ASAP. To be specific, working recipes 
are most valuable if they are available for Icehouse-2 (see reasons 2, 3 
and most importantly 4).


Please respond if interested, want to volunteer someone or have comments 
and suggestions.


Cheers,

Brent



Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Jordan OMara

On 04/12/13 12:07 +0100, Jiri Tomasek wrote:

Hi,

As the development of Tuskar-UI has somewhat stagnated recently, I have been
focusing more on the Horizon project lately to get features we need for
Tuskar-UI. I acknowledge that I haven't been paying enough attention to
reviews in TripleO. The statistics say it all. However, as the
development of Tuskar-UI is about to pick up rapidly, it would be nice to
be able to give +2's here. I'll try to get up to speed with TripleO
together with the upcoming Tuskar-UI changes.


Jirka


I'm in exactly the same boat
--
Jordan O'Mara jomara at redhat.com
Red Hat Engineering, Raleigh 




Re: [openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Joe Hakim Rahme
I am in favor of class level exceptions for the obvious reasons:

+ It reduces code duplication. Copy/pasting a SkipIf decorator on every test
  method in the class is tedious and possibly error prone. Adding the exception
  as a guard in setUpClass() makes for a more elegant solution.

+ Function-level skips will waste time unnecessarily in the setup/teardown
  methods. If I know I'm skipping all the tests in a class, why should I bother
  executing all the boilerplate preliminary actions? In the context of heavy
  use, like the CI gate, this can accumulate and be a pain.

+ Using function level skips requires importing an extra module (from testtools
  import SkipIf) that would be otherwise unnecessary.
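For concreteness, the class-level guard described above can be sketched with plain unittest (this is illustrative, not Tempest code; the feature flag is a stand-in for a real config option):

```python
import unittest

SNAPSHOTS_ENABLED = False  # stand-in for a feature flag from configuration


class SnapshotTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # The guard runs once per class: skipping here avoids repeating a
        # decorator on every method and also skips any expensive fixtures.
        if not SNAPSHOTS_ENABLED:
            raise unittest.SkipTest("snapshot feature is not enabled")
        # ...expensive resource creation would follow here...

    def test_create(self):
        self.fail("never runs when the class is skipped")

    def test_delete(self):
        self.fail("never runs when the class is skipped")


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(SnapshotTests).run(result)
# No failures or errors are recorded: the whole class is reported as skipped.
print(len(result.failures) + len(result.errors))
```

unittest records the SkipTest raised in setUpClass as a skip and never runs the individual tests, which is exactly the behavior the bullet points argue for.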

If the output of the class level skipException needs to be improved, maybe there
should be a patch there to list all the methods skipped.

If proper fixtures are meant to replace setUpClass in the future (something I
would really love to see in Tempest), we still need to take into account that
setUpClass might do more than just fixtures, and certain guards are expected to
be found in there.

What do you guys think?

---
Joe H. Rahme





[openstack-dev] [All] Status oslo.messaging migration

2013-12-04 Thread Flavio Percoco

Greetings,

I'd like to give a heads up about the status of the migration to
oslo.messaging. Glance migration to oslo.messaging just landed and
it'll be part of the I-1 cut.

Remaining projects are:

- Nova
- Cinder
- Trove
- Keystone
- Heat
- Ceilometer
- Neutron
- Horizon

I know most of the projects already have a blueprint for it but some
might be missing one. In order to make it easier to track the progress
here, it'd be nice if such blueprints could use 'oslo-messaging' as
the blueprint name - it's possible to change it in the 'Change
details' section. These are the projects using 'oslo-messaging' as the
blueprint name[0].

[0]
https://blueprints.launchpad.net/openstack/?searchtext=oslo-messaging

Cheers,
FF

P.S: For people with a working copy of glance. After pulling this
change you'll have to clean up the .pyc files and remove the
'glance/notifier' package which was replaced with 'glance/notifier.py'

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Maru Newby

On Dec 4, 2013, at 8:55 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 Stephen, all,
 
 I agree that there may be some opportunity to split things out a bit.
 However, I'm not sure what the best way will be.  I recall that Mark
 mentioned breaking out the processes that handle API requests and RPC
 from each other at the summit.  Anyway, it is something that has been
 discussed.
 
 I actually wanted to point out that the neutron server now has the
 ability to run a configurable number of sub-processes to handle a
 heavier load.  Introduced with this commit:
 
 https://review.openstack.org/#/c/37131/
 
 Set api_workers to something > 1 and restart the server.
 
 The server can also be run on more than one physical host in
 combination with multiple child processes.

I completely misunderstood the import of the commit in question.  Being able to 
run the wsgi server(s) out of process is a nice improvement, thank you for 
making it happen.  Has there been any discussion around making the default for 
api_workers > 0 (at least 1) to ensure that the default configuration separates 
wsgi and rpc load?  This also seems like a great candidate for backporting to 
havana and maybe even grizzly, although api_workers should probably be 
defaulted to 0 in those cases.
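The api_workers mechanism is the classic pre-fork pattern: several processes block in accept() on one shared listening socket and the kernel distributes connections among them. A toy sketch with plain sockets (illustrative only, not Neutron's wsgi server):

```python
import os
import socket


def serve_once(listener):
    # Each worker blocks in accept() on the shared listening socket; the
    # kernel hands every incoming connection to exactly one worker.
    conn, _ = listener.accept()
    conn.sendall(str(os.getpid()).encode())
    conn.close()


listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
port = listener.getsockname()[1]

# Pre-fork two workers; each handles a single connection, then exits.
children = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        serve_once(listener)
        os._exit(0)
    children.append(pid)

pids = set()
for _ in range(2):
    c = socket.create_connection(("127.0.0.1", port))
    pids.add(c.recv(64))
    c.close()
for pid in children:
    os.waitpid(pid, 0)

# 2 -- the two connections were served by two distinct worker processes.
print(len(pids))
```

The parent (analogous to Neutron's main process) keeps only coordination work, while the forked children absorb the request load.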

FYI, I re-ran the test that attempted to boot 75 micro VM's simultaneously with 
api_workers = 2, with mixed results.  The increased wsgi throughput resulted in 
almost half of the boot requests failing with 500 errors due to QueuePool 
errors (https://bugs.launchpad.net/neutron/+bug/1160442) in Neutron.  It also 
appears that maximizing the number of wsgi requests has the side-effect of 
increasing the RPC load on the main process, and this means that the problem of 
dhcp notifications being dropped is little improved.  I intend to submit a fix 
that ensures that notifications are sent regardless of agent status, in any 
case.


m.

 
 Carl
 
 On Tue, Dec 3, 2013 at 9:47 AM, Stephen Gran
 stephen.g...@theguardian.com wrote:
 On 03/12/13 16:08, Maru Newby wrote:
 
 I've been investigating a bug that is preventing VM's from receiving IP
 addresses when a Neutron service is under high load:
 
 https://bugs.launchpad.net/neutron/+bug/1192381
 
 High load causes the DHCP agent's status updates to be delayed, causing
 the Neutron service to assume that the agent is down.  This results in the
 Neutron service not sending notifications of port addition to the DHCP
 agent.  At present, the notifications are simply dropped.  A simple fix is
 to send notifications regardless of agent status.  Does anybody have any
 objections to this stop-gap approach?  I'm not clear on the implications of
 sending notifications to agents that are down, but I'm hoping for a simple
 fix that can be backported to both havana and grizzly (yes, this bug has
 been with us that long).
 
 Fixing this problem for real, though, will likely be more involved.  The
 proposal to replace the current wsgi framework with Pecan may increase the
 Neutron service's scalability, but should we continue to use a 'fire and
 forget' approach to notification?  Being able to track the success or
 failure of a given action outside of the logs would seem pretty important,
 and allow for more effective coordination with Nova than is currently
 possible.
 
 
 It strikes me that we ask an awful lot of a single neutron-server instance -
 it has to take state updates from all the agents, it has to do scheduling,
 it has to respond to API requests, and it has to communicate about actual
 changes with the agents.
 
 Maybe breaking some of these out the way nova has a scheduler and a
 conductor and so on might be a good model (I know there are things people
 are unhappy about with nova-scheduler, but imagine how much worse it would
 be if it was built into the API).
 
 Doing all of those tasks, and doing it largely single threaded, is just
 asking for overload.
 
 Cheers,
 --
 Stephen Gran
 Senior Systems Integrator - theguardian.com
 Please consider the environment before printing this email.
 --
 Visit theguardian.com
 On your mobile, download the Guardian iPhone app theguardian.com/iphone and
 our iPad edition theguardian.com/iPad   Save up to 33% by subscribing to the
 Guardian and Observer - choose the papers you want and get full digital
 access.
 Visit subscribe.theguardian.com
 
 This e-mail and all attachments are confidential and may also
 be privileged. If you are not the named recipient, please notify
 the sender and delete the e-mail and all attachments immediately.
 Do not disclose the contents to another person. You may not use
 the information for any purpose, or store, or copy, it in any way.
 
 Guardian News & Media Limited is not liable for any computer
 viruses or other material transmitted with or as part of this
 e-mail. You should employ virus checking software.
 
 Guardian News & Media Limited
 
 A member of Guardian Media Group plc
 

Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread Adam Young

On 12/04/2013 04:08 AM, David Chadwick wrote:

I am happy with this as far as it goes. I would like to see it being
made more general, where domains, services and projects can also own and
name roles
Domains should be OK, but services would confuse the matter.  You'd have 
to end up with something like LDAP


role=  domain=default,service=glance

vs

role=  domain=default,project=glance

unless we have unambiguous implicit ordering, we'll need to make it 
explicit, which is messy.


I'd rather do:

One segment: globally defined roles.  These could also be considered 
roles defined in the default domain.

Two segments service defined roles in the default domain
Three Segments, service defined roles from non-default domain

To do domain scoped roles we could do something like:

domX//admin


But It seems confusing.

Perhaps a better approach for project roles is to have the rule that the 
default domain can show up as an empty string.  Thus, project scoped 
roles from the default domain  would be:


\glance\admin

and from a non default domain

domX\glance\admin
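The segment rules sketched above can be expressed as a small parser. This is a hypothetical helper for illustration only, not Keystone code; the separator and the empty-string-means-default-domain convention follow the proposal:

```python
def parse_role(name, sep="/"):
    """Split a role string into (domain, service, role).

    One segment   -> globally defined role (default domain, no service).
    Two segments  -> service-defined role in the default domain.
    Three segments-> service-defined role in an explicit domain; an empty
                     first segment means the default domain.
    """
    parts = name.split(sep)
    if len(parts) == 1:
        return ("default", None, parts[0])
    if len(parts) == 2:
        return ("default", parts[0], parts[1])
    if len(parts) == 3:
        return (parts[0] or "default", parts[1], parts[2])
    raise ValueError("too many segments: %r" % name)


print(parse_role("admin"))              # ('default', None, 'admin')
print(parse_role("glance/admin"))       # ('default', 'glance', 'admin')
print(parse_role("domX/glance/admin"))  # ('domX', 'glance', 'admin')
print(parse_role("/glance/admin"))      # ('default', 'glance', 'admin')
```

The last call shows the empty-default-domain form, mirroring the "\glance\admin" spelling in the message above.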









regards

David


On 04/12/2013 01:51, Adam Young wrote:

I've been thinking about your comment that nested roles are confusing


What if we backed off and said the following:


Some role-definitions are owned by services.  If a Role definition is
owned by a service, then in role assignment lists in tokens those roles will
be prefixed by the service name. '/' is a reserved character and will be 
used as the divider between segments of the role definition.

That drops arbitrary nesting, and provides a reasonable namespace.  Then
a role def would look like:

glance/admin  for the admin role on the glance project.



In theory, we could add the domain to the namespace, but that seems
unwieldy.  If we did, a role def would then look like this


default/glance/admin  for the admin role on the glance project.

Is that clearer than the nested roles?



On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:

Hi Adam,

Based on our discussion over IRC, I have updated the below etherpad
with proposal for nested role definition

https://etherpad.openstack.org/p/service-scoped-role-definition

Please take a look @ Proposal (Ayoung) - Nested role definitions, I
am sorry if I could not catch your idea.

Feel free to update the etherpad.

Regards,
Arvind


-Original Message-
From: Tiwari, Arvind
Sent: Tuesday, November 26, 2013 4:08 PM
To: David Chadwick; OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi David,

Thanks for your time and valuable comments. I have replied to your
comments and try to explain why I am advocating to this BP.

Let me know your thoughts, please feel free to update below etherpad
https://etherpad.openstack.org/p/service-scoped-role-definition

Thanks again,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: Monday, November 25, 2013 12:12 PM
To: Tiwari, Arvind; OpenStack Development Mailing List
Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

I have just added some comments to your blueprint page

regards

David


On 19/11/2013 00:01, Tiwari, Arvind wrote:

Hi,

  
Based on our discussion in design summit , I have redone the service_id

binding with roles BP
https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.

I have added a new BP (link below) along with detailed use case to
support this BP.

https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition


Below etherpad link has some proposals for Role REST representation and
pros and cons analysis

  
https://etherpad.openstack.org/p/service-scoped-role-definition


  
Please take look and let me know your thoughts.


  
It would be awesome if we can discuss it in tomorrow's meeting.


  
Thanks,


Arvind







[openstack-dev] [Swift] python-swiftclient, verifying SSL certs by default

2013-12-04 Thread Chmouel Boudjnah
Hello,

There has been a lengthy discussion going on for quite sometime on a review
for swiftclient here :

https://review.openstack.org/#/c/33473/

The review changes the way swiftclient works: it will refuse to connect to
insecure (i.e. self-signed) SSL swift proxies unless you specify the
--insecure flag to the CLI.

This changes the default behavior of the client, but that's for the greater
good of better security.

We are getting this merged now and want to make sure that people are aware
of it first.

We would probably bump the version of swiftclient to 2.0 since this is a
big change.

This would allow us to close this CVE:
https://bugs.launchpad.net/bugs/cve/2013-6396 and give distributors the
ability to provide updates.
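The new default mirrors what Python's own ssl module considers safe. A minimal sketch of the two modes (this is not swiftclient code, just an illustration of verify-by-default versus an explicit insecure opt-out):

```python
import ssl

# Default: verify peer certificates and hostnames -- the behavior the
# review makes standard for swiftclient connections.
secure = ssl.create_default_context()
print(secure.verify_mode == ssl.CERT_REQUIRED)  # True
print(secure.check_hostname)                    # True

# Explicit opt-out, equivalent in spirit to the new --insecure flag.
# (_create_unverified_context is a private helper; shown for contrast only.)
insecure = ssl._create_unverified_context()
print(insecure.verify_mode == ssl.CERT_NONE)    # True
```

With verification on by default, a self-signed proxy certificate fails the handshake unless the caller deliberately opts out.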

I'll announce it on -users and -operators after this is merged.

Chmouel.


[openstack-dev] Icehouse-1 milestone candidates available

2013-12-04 Thread Thierry Carrez
Hi everyone,

Milestone-proposed branches were created for Keystone, Glance, Nova,
Horizon, Neutron, Cinder, Ceilometer, Heat and Trove in preparation
for the icehouse-1 milestone publication tomorrow.

During this milestone (since the opening of the Icehouse development
cycle) we implemented 69 blueprints and fixed 738 bugs
(so far).

Please test proposed deliveries to ensure no critical regression found
its way in. Milestone-critical fixes will be backported to the
milestone-proposed branch until final delivery of the milestone, and
will be tracked using the icehouse-1 milestone targeting.

You can find candidate tarballs at:
http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
http://tarballs.openstack.org/ceilometer/ceilometer-milestone-proposed.tar.gz
http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

You can also access the milestone-proposed branches directly at:
https://github.com/openstack/keystone/tree/milestone-proposed
https://github.com/openstack/glance/tree/milestone-proposed
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/horizon/tree/milestone-proposed
https://github.com/openstack/neutron/tree/milestone-proposed
https://github.com/openstack/cinder/tree/milestone-proposed
https://github.com/openstack/ceilometer/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed
https://github.com/openstack/trove/tree/milestone-proposed

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][LBaaS] Vendor feedback needed

2013-12-04 Thread Samuel Bercovici
Hi Eugene,

We currently support out-of-the-box VIP and Nodes on the same network.
The VIP can be associated with a floating IP if it needs to be accessed from the 
external network.

We are considering other options but will address as we get to this.

Regards,
-Sam.

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, December 04, 2013 1:14 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Vendor feedback needed

Hi load balancing vendors!

I have a specific question: how are the drivers for your solutions 
(devices/VMs/processes) going to wire a VIP to external and tenant 
networks?
As we're working on creating a suite for third-party testing, we would like to 
make sure that the scenarios we create fit the usage patterns of all providers, if 
that is possible at all.
If it is not possible, we need to think of more comprehensive LBaaS API and 
tests.

Thanks,
Eugene.


Re: [openstack-dev] [Ironic] functional (aka integration) gate testing

2013-12-04 Thread Vladimir Kozhukalov
Some additional information about the case.

Infra at the moment has a limitation of one VM per Jenkins job, so we are
not able to launch two VMs and boot one of them via PXE from the other. We
need to start VM and then start another nested VM inside first one.
Besides, nested VM must be qemu, not kvm. It is because donated cloud
resources used for testing do not support hardware nesting.


Vladimir Kozhukalov


On Tue, Dec 3, 2013 at 7:32 PM, Vladimir Kozhukalov 
vkozhuka...@mirantis.com wrote:

 We are going to make integration testing gate scheme for Ironic and we've
 investigated several cases which are actual for TripleO.

 1) https://etherpad.openstack.org/p/tripleo-test-cluster
 This is the newest and most advanced initiative. It is something like
 test environment on demand. It is still not ready to use.

 2) https://github.com/openstack-infra/tripleo-ci
 This project seems not to be actively used at the moment. It contains
 toci_gate_test.sh, but this script is empty and is used as a gate hook. It
 is supposed that it will then implement the whole gate testing logic using
 test env on demand (see previous point).
 This project also has some shell code which is used to manage emulated
 bare metal environments. It is something like prepare libvirt VM xml and
 launch VM using virsh (nothing special).

 3) https://github.com/openstack/tripleo-incubator/blob/master/scripts (aka
 devtest)
 This is a set of shell scripts which are intended to reproduce the whole
 TripleO flow (seed, undercloud, overcloud). It is supposed to be used to
 perform testing actions (including gate tests).
 Documentation is available
 http://docs.openstack.org/developer/tripleo-incubator/devtest.html

 So, the situation looks like there is no fully working and mature scheme
 at the moment.

 My suggestion is to start from creating empty gate test flow (like in
 tripleo-ci). Then we can write some code implementing some testing logic.
 It is possible even before conductor manager is ready. We can just directly
 import driver modules and test them in a functional (aka integration)
 manner. As for managing emulated bare metal environments, here we can write
 (or copy from tripleo) some scripts for that (shell or python). What we
 actually need to be able to do is to launch one VM, then to install ironic
 on it, and then launch another VM and boot it via PXE from the first one.
 In the future we can use environment on demand scheme, when it is ready.
 So we can follow the same scenario as they use in TripleO.

 Besides, there is an idea about how to manage test environment using
 openstack itself. Right now nova can make VMs and it has advanced
 functionality for that. What it can NOT do is to boot them via PXE. There
 is a blueprint for that
 https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe.
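 For reference, the "prepare libvirt VM xml and launch VM using virsh" step
 for the PXE-booted test node might look roughly like this. This is an
 illustrative fragment only; the name, disk path and network name are
 assumptions, not anything from tripleo-ci:

```xml
<domain type='qemu'>  <!-- qemu, not kvm: donated clouds lack hw nesting -->
  <name>ironic-test-node</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='network'/>  <!-- PXE-boot from the first (ironic) VM -->
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/ironic-test-node.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='brbm'/>  <!-- network served by the ironic VM -->
    </interface>
  </devices>
</domain>
```

 Something like "virsh define node.xml; virsh start ironic-test-node" would
 then launch it, with the node attempting PXE boot against whatever DHCP/TFTP
 the first VM provides.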


 --
 Vladimir Kozhukalov



Re: [openstack-dev] [Neutron][IPv6] Webex Recording of IPv6 / Neutron Sync-up

2013-12-04 Thread Richard Woo
Shixiong,

Thank you for the updates, do you mind to share the slide to the openstack
mailing list?

Richard


On Tue, Dec 3, 2013 at 11:30 PM, Shixiong Shang 
sparkofwisdom.cl...@gmail.com wrote:

 Hi, guys:

 We had a great discussion tonight with stackers from Comcast, IBM, HP,
 Cisco and Nephos6! Here is the debrief of what we discussed during this
 1-hr session:

 1) Sean from Comcast provided clarification of his short-term and mid-term
 goals in the proposed blueprint.
 2) Da Zhao, Yu Yang, and Xu Han from IBM went through the patches and
 bug fixes they submitted.
 3) Brian from HP shared his view to support IPv6 and HA in the near future.
 4) Shixiong from Nephos6 and Randy from Cisco presented a slide to
 summarize the issues they encountered during POC together with the
 solutions.
 5) We reached consensus to leverage the work Sean, Da Zhao have done
 previously and integrate it with the L3 agent efforts brought by Shixiong
 and Randy.


 Please see below for Webex recording.


 https://cisco.webex.com/ciscosales/lsr.php?AT=pb&SP=MC&rID=73520027&rKey=8e508b63604bb9d0

 IPv6 / Neutron synch-up-20131204 0204-1
 Tuesday, December 3, 2013 9:04 pm New York Time
 1 Hour 4 Minutes

 Thanks!

 Shixiong





Re: [openstack-dev] [Neutron][IPv6] Webex Recording of IPv6 / Neutron Sync-up

2013-12-04 Thread Ian Wells
Next time, could you perhaps do it (a) with a bit more notice and (b) at a
slightly more amenable time for us Europeans?


On 4 December 2013 15:27, Richard Woo richardwoo2...@gmail.com wrote:

 Shixiong,

 Thank you for the updates, do you mind to share the slide to the openstack
 mailing list?

 Richard


 On Tue, Dec 3, 2013 at 11:30 PM, Shixiong Shang 
 sparkofwisdom.cl...@gmail.com wrote:

  Hi, guys:

 We had a great discussion tonight with stackers from Comcast, IBM, HP,
 Cisco and Nephos6! Here is the debrief of what we discussed during this
 1-hr session:

 1) Sean from Comcast provided clarification of his short-term and
 mid-term goals in the proposed blueprint.
 2) Da Zhao, Yu Yang, and Xu Han from IBM went through the patches and
 bug fixes they submitted.
 3) Brian from HP shared his view to support IPv6 and HA in the near
 future.
 4) Shixiong from Nephos6 and Randy from Cisco presented a slide to
 summarize the issues they encountered during POC together with the
 solutions.
 5) We reached consensus to leverage the work Sean, Da Zhao have done
 previously and integrate it with the L3 agent efforts brought by Shixiong
 and Randy.


 Please see below for Webex recording.


 https://cisco.webex.com/ciscosales/lsr.php?AT=pb&SP=MC&rID=73520027&rKey=8e508b63604bb9d0

 IPv6 / Neutron synch-up-20131204 0204-1
 Tuesday, December 3, 2013 9:04 pm New York Time
 1 Hour 4 Minutes

 Thanks!

 Shixiong








Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Chris Jones
Hi

On 4 December 2013 07:12, Robert Collins robe...@robertcollins.net wrote:

  - Ghe Rivero for -core


+1


  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core


I'm skipping voting on these for now, since not all have responded, but in
general I am +1 on de-core-ing folks who have shifted their focus elsewhere.
I thank them for their efforts on TripleO to date, and hope that the winds
of time and focus bring them back to us at some point in the future :)

Cheers,
-- 
Cheers,

Chris


Re: [openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Sean Dague
On 12/04/2013 09:24 AM, Joe Hakim Rahme wrote:
 I am in favor of class level exceptions for the obvious reasons:
 
 + It reduces code duplication. Copy/pasting a SkipIf decorator on every test
   method in the class is tedious and possibly error prone. Adding the 
 exception
   as a guard in the setUpClass() makes for a more elegant solution
 
 + function level skips will waste unnecessary time in the setup/teardown
   methods. If I know I'm skipping all the tests in a class, why should I 
 bother
   executing all the boilerplate preliminary actions? In the context of heavy
   use, like the CI gate, this can accumulate and be a pain.
 
 + Using function level skips requires importing an extra module (from 
 testtools
   import SkipIf) that would be otherwise unnecessary.
 
 If the output of the class level skipException needs to be improved, maybe 
 there
 should be a patch there to list all the methods skipped.
 
 If proper fixtures are meant to replace setUpClass in the future (something I
 would really love to see in Tempest), we still need to take into account that
 setUpClass might do more than just fixtures, and certain guards are expected 
 to
 be found in there.
 
 What do you guys think?

So I'd be ok with a compromise, which would build a decorator for the
setUpClass method, at least that would make it easier to refactor out later.

That will require someone signing up to writing that though.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-04 Thread Kurt Griffiths
Thanks! We touched on this briefly during the chat yesterday, and I will
make sure it gets further attention.

On 12/3/13, 3:54 AM, Julien Danjou jul...@danjou.info wrote:

On Mon, Dec 02 2013, Kurt Griffiths wrote:

 Following up on some conversations we had at the summit, I'd like to get
 folks together on IRC tomorrow to crystalize the design for a
notifications
 project under the Marconi program. The project's goal is to create a
service
 for surfacing events to end users (where a user can be a cloud app
 developer, or a customer using one of those apps). For example, a
developer
 may want to be notified when one of their servers is low on disk space.
 Alternatively, a user of MyHipsterApp may want to get a text when one of
 their friends invites them to listen to That Band You've Never Heard Of.

 Interested? Please join me and other members of the Marconi team
tomorrow,
 Dec. 3rd, for a brainstorming session in #openstack-marconi at 1500 UTC
(http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0).
 Your contributions are crucial to making this project awesome.

 I've seeded an etherpad for the discussion:

 https://etherpad.openstack.org/p/marconi-notifications-brainstorm

This might (partially) overlap with what Ceilometer is doing with its
alarming feature, and one of the blueprint our roadmap for Icehouse:

  https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification

While it doesn't solve the use case at the same level, the technical
mechanism is likely to be similar.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info




Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Ashok Kumaran
On Wed, Dec 4, 2013 at 8:30 PM, Maru Newby ma...@redhat.com wrote:


 On Dec 4, 2013, at 8:55 AM, Carl Baldwin c...@ecbaldwin.net wrote:

  Stephen, all,
 
  I agree that there may be some opportunity to split things out a bit.
  However, I'm not sure what the best way will be.  I recall that Mark
  mentioned breaking out the processes that handle API requests and RPC
  from each other at the summit.  Anyway, it is something that has been
  discussed.
 
  I actually wanted to point out that the neutron server now has the
  ability to run a configurable number of sub-processes to handle a
  heavier load.  Introduced with this commit:
 
  https://review.openstack.org/#/c/37131/
 
  Set api_workers to something > 1 and restart the server.
 
  The server can also be run on more than one physical host in
  combination with multiple child processes.

 I completely misunderstood the import of the commit in question.  Being
 able to run the wsgi server(s) out of process is a nice improvement, thank
 you for making it happen.  Has there been any discussion around making the
 default for api_workers > 0 (at least 1) to ensure that the default
 configuration separates wsgi and rpc load?  This also seems like a great
 candidate for backporting to havana and maybe even grizzly, although
 api_workers should probably be defaulted to 0 in those cases.


+1 for backporting the api_workers feature to havana as well as Grizzly :)


 FYI, I re-ran the test that attempted to boot 75 micro VM's simultaneously
 with api_workers = 2, with mixed results.  The increased wsgi throughput
 resulted in almost half of the boot requests failing with 500 errors due to
 QueuePool errors (https://bugs.launchpad.net/neutron/+bug/1160442) in
 Neutron.  It also appears that maximizing the number of wsgi requests has
 the side-effect of increasing the RPC load on the main process, and this
 means that the problem of dhcp notifications being dropped is little
 improved.  I intend to submit a fix that ensures that notifications are
 sent regardless of agent status, in any case.
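As a toy sketch of that stop-gap (names and structure assumed, not the actual Neutron code), the idea is simply to warn rather than drop when the agent looks dead:

```python
import logging

LOG = logging.getLogger(__name__)


def notify_dhcp_agent(agent_alive, send, payload):
    """Send a DHCP notification regardless of reported agent status.

    Toy model of the proposed fix: previously the notification was
    silently dropped when the agent looked down; here we log a warning
    and send it anyway.
    """
    if not agent_alive:
        LOG.warning('DHCP agent appears down; sending notification anyway')
    send(payload)
```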


 m.

 
  Carl
 
  On Tue, Dec 3, 2013 at 9:47 AM, Stephen Gran
  stephen.g...@theguardian.com wrote:
  On 03/12/13 16:08, Maru Newby wrote:
 
  I've been investigating a bug that is preventing VM's from receiving IP
  addresses when a Neutron service is under high load:
 
  https://bugs.launchpad.net/neutron/+bug/1192381
 
  High load causes the DHCP agent's status updates to be delayed, causing
  the Neutron service to assume that the agent is down.  This results in
 the
  Neutron service not sending notifications of port addition to the DHCP
  agent.  At present, the notifications are simply dropped.  A simple
 fix is
  to send notifications regardless of agent status.  Does anybody have
 any
  objections to this stop-gap approach?  I'm not clear on the
 implications of
  sending notifications to agents that are down, but I'm hoping for a
 simple
  fix that can be backported to both havana and grizzly (yes, this bug
 has
  been with us that long).
 
  Fixing this problem for real, though, will likely be more involved.
  The
  proposal to replace the current wsgi framework with Pecan may increase
 the
  Neutron service's scalability, but should we continue to use a 'fire
 and
  forget' approach to notification?  Being able to track the success or
  failure of a given action outside of the logs would seem pretty
 important,
  and allow for more effective coordination with Nova than is currently
  possible.
 
 
  It strikes me that we ask an awful lot of a single neutron-server
 instance -
  it has to take state updates from all the agents, it has to do
 scheduling,
  it has to respond to API requests, and it has to communicate about
 actual
  changes with the agents.
 
  Maybe breaking some of these out the way nova has a scheduler and a
  conductor and so on might be a good model (I know there are things
 people
  are unhappy about with nova-scheduler, but imagine how much worse it
 would
  be if it was built into the API).
 
  Doing all of those tasks, and doing it largely single threaded, is just
  asking for overload.
 
  Cheers,
  --
  Stephen Gran
  Senior Systems Integrator - theguardian.com

Re: [openstack-dev] [marconi] New meeting time

2013-12-04 Thread Kurt Griffiths
Sorry to change things up again, but it’s been requested that we move our
meeting to Tuesday at 1500 UTC instead of Monday, since a lot of people’s
Mondays are crazy busy as it is. Unless anyone objects, let’s plan on
doing that starting with our next meeting (Dec 10).

On 11/25/13, 6:22 PM, Kurt Griffiths kurt.griffi...@rackspace.com
wrote:

OK, I've changed the time. Starting next Monday (2 Dec.) we will be
meeting at 1500 UTC in #openstack-meeting-alt.

See also: https://wiki.openstack.org/wiki/Meetings/Marconi

On 11/25/13, 11:33 AM, Flavio Percoco fla...@redhat.com wrote:

On 25/11/13 17:05 +, Amit Gandhi wrote:
Works for me.

Works for me!

-- 
@flaper87
Flavio Percoco




Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-04 Thread Nikola Đipanov
On 11/19/2013 05:52 PM, Peter Feiner wrote:
 On Tue, Nov 19, 2013 at 11:19 AM, Chuck Short chuck.sh...@canonical.com 
 wrote:
 Hi


 On Tue, Nov 19, 2013 at 10:43 AM, Peter Feiner pe...@gridcentric.ca wrote:

 A substantive reason for switching from mox to mock is the derelict
 state of mox releases. There hasn't been a release of mox in three
 years: the latest, mox-0.5.3, was released in 2010 [1, 2]. Moreover,
 in the past 3 years, substantial bugs have been fixed in upstream mox.
 For example, with the year-old fix to
 https://code.google.com/p/pymox/issues/detail?id=16, a very nasty bug
 in nova would have been caught by an existing test [3].

 Alternatively, a copy of the upstream mox code could be added in-tree.

 Please no, I think we are in an agreement with mox3 and mock.
 
 That's cool. As long as the mox* is phased out, the false-positive
 test results will be fixed.
 
 Of course, there's _another_ alternative, which is to retrofit mox3
 with the upstream mox fixes (e.g., the bug I cited above exists in
 mox3). However, the delta between mox3 and upstream mox is pretty huge
 (I just checked), so effort is probably better spent switching to
 mock. To that end, I plan on changing the tests I cited above.
 

Resurrecting this thread because of an interesting review that came up
yesterday [1].

It seems that our lack of a firm decision on what to do with the mocking
framework has left people confused. In hope to help - I'll give my view
of where things are now and what we should do going forward, and
hopefully we'll reach some consensus on this.

Here's the breakdown:

We should abandon mox:
* It has not had a release in over 3 years [2], and fixes have sat upstream unreleased for 2
* There are bugs that are impacting the project with it (see above)
* It will not be ported to python 3

Proposed path forward options:
1) Port nova to mock now:
  * Literally unmanageable - huge review overhead and regression risk
for not so much gain (maybe) [1]

2) Opportunistically port nova (write new tests using mock, when fixing
tests, move them to mock):
 * Will take a really long time to move to mock, and is not really a
 solution since we are stuck with mox for an undetermined period of time
- it's what we are doing now (kind of).

3) Same as 2) but move current codebase to mox3
 * Buys us py3k compat, and fresher code
 * Mox3 and mox have diverged and we would need to backport mox fixes
onto the mox3 tree and become de-facto active maintainers (as per Peter
Feiner's last email - that may not be so easy).

I think we should follow path 3) if we can, but we need to:

1) Figure out what is the deal with mox3 and decide if owning it will
really be less trouble than porting nova. To be honest - I was unable to
even find the code repo for it, only [3]. If anyone has more info -
please weigh in. We'll also need volunteers

2) Make better testing guidelines when using mock, and maybe add some
testing helpers (like we do already have for mox) that will make porting
existing tests easier. mreidem already put this on this week's nova
meeting agenda - so that might be a good place to discuss all the issues
mentioned here as well.
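To illustrate the kind of guideline such a document could contain (a hedged sketch, not an agreed Nova convention — written here against the stdlib unittest.mock, where at the time the standalone mock package was used):

```python
import os
import unittest
from unittest import mock


class DriverSpawnTest(unittest.TestCase):
    def test_stub_return_value_and_assert_call(self):
        # A Mock records calls and can be given canned return values.
        driver = mock.Mock()
        driver.spawn.return_value = 'instance-1'

        result = driver.spawn('ctx')

        self.assertEqual('instance-1', result)
        driver.spawn.assert_called_once_with('ctx')

    def test_patch_as_context_manager(self):
        # mock.patch restores the original attribute automatically on exit.
        with mock.patch('os.path.exists', return_value=True) as exists:
            self.assertTrue(os.path.exists('/nonexistent'))
        exists.assert_called_once_with('/nonexistent')
```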

We should really take a stronger stance on this soon IMHO, as this comes
up with literally every commit.

Cheers,

Nikola

[1] https://review.openstack.org/#/c/59694/
[2] https://pypi.python.org/pypi/mox
[3] https://pypi.python.org/pypi/mox3/0.7.0



Re: [openstack-dev] [Hacking] License headers in empty files

2013-12-04 Thread Ben Nemec

On 2013-12-02 11:37, Julien Danjou wrote:

On Thu, Nov 28 2013, Julien Danjou wrote:


On Thu, Nov 28 2013, Sean Dague wrote:

I'm totally in favor of going further and saying empty files 
shouldn't

have license headers, because their content of emptiness isn't
copyrightable [1]. That's just not how it's written today.


I went ahead and sent a first patch:

  https://review.openstack.org/#/c/59090/

Help appreciated. :)


The patch is ready for review, but it is also a bit stricter as it
completely disallows files with _only_ comments in them.

This is something that sounds like a good idea, but Joe wanted to bring
this to the mailing list for attention first, in case there would be a
problem.


For reference, I believe the primary concern was that this would require 
the removal of a few author comments in empty files, such as this: 
https://github.com/openstack/nova/blob/master/nova/network/security_group/__init__.py#L18


I don't see a problem with that (the files with actual code also have 
the author comment, so it will still be clear who wrote it, and of 
course Git knows all of this too), but I agree that this is not 
something we want to do without giving people the opportunity to discuss 
it.


-Ben



Re: [openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Joe Hakim Rahme
On 04 Dec 2013, at 17:05, Sean Dague s...@dague.net wrote:
 That will require someone signing up to writing that though.

I could do that.

Since you know the code better than me, can you confirm that 
tempest/test.py is the best place to define this decorator?

Thanks.
---
Joe H. Rahme




Re: [openstack-dev] [ceilometer] [marconi] Notifications brainstorming session tomorrow @ 1500 UTC

2013-12-04 Thread Ian Wells
How frequently do you imagine these notifications being sent?  There's a wide
variation here between the 'blue moon' case where disk space is low and
frequent notifications of things like OS performance, which you might want
to display in Horizon or another monitoring tool on an every-few-seconds
basis, or instance state change, which is usually driven by polling at
present.

I'm not saying that we should necessarily design notifications for the
latter cases, because it introduces potentially quite a lot of
user-demanded load on the Openstack components, I'm just asking for a
statement of intent.
-- 
Ian.


On 4 December 2013 16:09, Kurt Griffiths kurt.griffi...@rackspace.comwrote:

 Thanks! We touched on this briefly during the chat yesterday, and I will
 make sure it gets further attention.

 On 12/3/13, 3:54 AM, Julien Danjou jul...@danjou.info wrote:

 On Mon, Dec 02 2013, Kurt Griffiths wrote:
 
   Following up on some conversations we had at the summit, I'd like to get
   folks together on IRC tomorrow to crystalize the design for a notifications
   project under the Marconi program. The project's goal is to create a service
   for surfacing events to end users (where a user can be a cloud app
   developer, or a customer using one of those apps). For example, a developer
   may want to be notified when one of their servers is low on disk space.
   Alternatively, a user of MyHipsterApp may want to get a text when one of
   their friends invites them to listen to That Band You've Never Heard Of.
  
   Interested? Please join me and other members of the Marconi team tomorrow,
   Dec. 3rd, for a brainstorming session in #openstack-marconi at 1500 UTC
   (http://www.timeanddate.com/worldclock/fixedtime.html?hour=15&min=0&sec=0).
   Your contributions are crucial to making this project awesome.
  
   I've seeded an etherpad for the discussion:
 
  https://etherpad.openstack.org/p/marconi-notifications-brainstorm
 
 This might (partially) overlap with what Ceilometer is doing with its
 alarming feature, and one of the blueprints on our roadmap for Icehouse:
 
   https://blueprints.launchpad.net/ceilometer/+spec/alarm-on-notification
 
 While it doesn't solve the use case at the same level, the technical
 mechanism is likely to be similar.
 
 --
 Julien Danjou
 # Free Software hacker # independent consultant
 # http://julien.danjou.info




Re: [openstack-dev] [Neutron] Interfaces file format, was [Tempest] Need to prepare the IPv6 environment for static IPv6 injection test case

2013-12-04 Thread Ian Wells
We seem to have bound our config drive file formats to those used by the
operating system we're running, which doesn't seem like the right approach
to take.

Firstly, the above format doesn't actually work even for Debian-based
systems - if you have a network without ipv6, ipv6 ND will be enabled on
the ipv4-only interfaces, which strikes me as wrong.  (This is a feature of
Linux - ipv4 is enabled on interfaces which are specifically configured
with ipv4, but ipv6 is enabled on all interfaces that are brought up.)

But more importantly, the above file template only works for Debian-based
machines - not Redhat, not Windows, not anything else - and we seem to have
made that a feature of Openstack from the relatively early days of file
injection.  That's not an ipv6 only thing but a general statement.  It
seems wrong to have to extend Openstack's config drive injection for every
OS that might come along, so is there a way we can make this work without
tying the two things together?  Are we expecting the cloud-init code in
whatever OS to parse and understand this file format, or are they supposed
to use other information?  In general, what would the recommendation be for
someone using a VM where this config format is not native?

-- 
Ian.


On 2 December 2013 03:01, Yang XY Yu yuyan...@cn.ibm.com wrote:

 Hi all stackers,

 Currently Neutron/Nova code has supported the static IPv6 injection, but
 there is no tempest scenario coverage to support IPv6 injection test case.
 So I finished the test case and run the it successfully in my local
 environment, and already submitted the code-review in community:
 https://review.openstack.org/#/c/58721/,
 but the community Jenkins env has not supported IPv6 and there are still a
 few pre-requisites setup below if running the test case correctly,

 1. Special Image needed to support IPv6 by using cloud-init, currently the
 cirros image used by tempest does not installed cloud-init.

 2. Prepare interfaces.template file below on compute node.
 edit  /usr/share/nova/interfaces.template

 # Injected by Nova on instance boot
 #
 # This file describes the network interfaces available on your system
 # and how to activate them. For more information, see interfaces(5).

 # The loopback network interface
 auto lo
 iface lo inet loopback

 {% for ifc in interfaces -%}
 auto {{ ifc.name }}
 {% if use_ipv6 -%}
 iface {{ ifc.name }} inet6 static
 address {{ ifc.address_v6 }}
 netmask {{ ifc.netmask_v6 }}
 {%- if ifc.gateway_v6 %}
 gateway {{ ifc.gateway_v6 }}
 {%- endif %}
 {%- endif %}

 {%- endfor %}
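For reference, with use_ipv6 set and a single interface, the template above renders to something like the following (the interface name and address values here are invented for illustration):

```
# Injected by Nova on instance boot
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet6 static
address 2001:db8::10
netmask 64
gateway 2001:db8::1
```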


 So considering these two pre-requisites, what should be done to enable
 this patch for IPv6 injection? Should I open a bug for cirros to enable
 cloud-init?   Or skip the test case because of this bug ?
 Any comments are appreciated!

 Thanks & Best Regards,
 
 Yang Yu(于杨)
 Cloud Solutions and OpenStack Development
 China Systems & Technology Laboratory Beijing
 E-mail: yuyan...@cn.ibm.com
 Tel: 86-10-82452757
 Address: Ring Bldg. No.28 Building, Zhong Guan Cun Software Park,
 No. 8 Dong Bei Wang West Road, ShangDi, Haidian District, Beijing 100193,
 P.R.China
 


Re: [openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Sean Dague
On 12/04/2013 11:32 AM, Joe Hakim Rahme wrote:
 On 04 Dec 2013, at 17:05, Sean Dague s...@dague.net wrote:
 That will require someone signing up to writing that though.
 
 I could do that.
 
 Since you know the code better than me, can you confirm that 
 tempest/test.py is the best place to define this decorator?

Yes, that would be the right place to add it.

And thanks for signing up for this!

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] [nova]:packing-filter-scheduler

2013-12-04 Thread Digambar Patil
Hi Russell,

   Thank you for your inputs on Launchpad. Can you tell me what exactly is
 expected in terms of design detail, so I can ensure that I send you
the correct details in the first go itself.

Best Regards,
Digambar


Re: [openstack-dev] [nova]:packing-filter-scheduler

2013-12-04 Thread Russell Bryant
On 12/04/2013 11:58 AM, Digambar Patil wrote:
 Hi Russell,
 
   Thank you for your inputs on Launchpad. Can you tell me what exactly is
  expected in terms of design detail, so I can ensure that I send
 you the correct details in the first go itself.

It looked like you described a scheduling use case, but not how you
would achieve it.  Requirement vs design.

I'm looking for some insight into things like ...

What filter(s) do you expect to add?  What will they do exactly?

Are there any changes proposed outside of additional filters?

Going through this now could save you a lot of implementation time.

-- 
Russell Bryant



Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-04 Thread Ben Nemec

On 2013-12-04 06:07, Sean Dague wrote:

On 12/03/2013 11:21 PM, Clint Byrum wrote:

Excerpts from Sean Dague's message of 2013-12-03 16:05:47 -0800:

On 12/03/2013 06:13 PM, Ben Nemec wrote:

On 2013-12-03 17:09, Sean Dague wrote:

On 12/03/2013 05:50 PM, Mark McLoughlin wrote:

On Tue, 2013-12-03 at 16:23 -0600, Ben Nemec wrote:

On 2013-12-03 15:56, Sean Dague wrote:

This cinder patch - https://review.openstack.org/#/c/48935/

Is blocked on failing upgrade because the updated oslo lockutils 
won't
function until there is a specific configuration variable added 
to the

cinder.conf.

That work around is proposed here -
https://review.openstack.org/#/c/52070/3

However I think this is exactly the kind of forward breaks that 
we

want
to prevent with grenade, as cinder failing to function after a 
rolling
upgrade because a config item wasn't added is exactly the kind 
of pain

we are trying to prevent happening to ops.

So the question is, how is this done correctly so that a default
can be
set in the cinder code for this value, and it not require a 
config

change to work?


You're absolutely correct, in principle - if the default value for
lock_path worked for users before, we should absolutely continue 
to

support it.

I don't know that I have a good answer on how to handle this, but 
for
context this change is the result of a nasty bug in lockutils 
that

meant
external locks were doing nothing if lock_path wasn't set.  
Basically

it's something we should never have allowed in the first place.

As far as setting this in code, it's important that all of the
processes
for a service are using the same value to avoid the same bad 
situation

we were in before.  For tests, we have a lockutils wrapper
(https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L282)
that sets an environment variable to address this, but that only
works if all of the processes are going to be spawned from within
the same wrapper, and I'm not sure how secure that is for 
production

deployments since it puts all of the lock files in a temporary
directory.


Right, I don't think the previous default really worked - if you 
used

the default, then external locking was broken.

I suspect most distros do set a default - I see RDO has this in 
its

default nova.conf:

  lock_path = /var/lib/nova/tmp

So, yes - this is all terrible.

IMHO, rather than raise an exception we should log a big fat 
warning

about relying on the default and perhaps just treat the lock as an
in-process lock in that case ... since that's essentially what it 
was

before, right?


So a default of lock_path = /tmp will work (FHS says that path has 
to be
there), even if not optimal. Could we make it a default value like 
that
instead of the current default which is null (and hence the 
problem).


IIRC, my initial fix was something similar to that, but it got shot 
down
because putting the lock files in a known world writeable location 
was a

security issue.

Although maybe if we put them in a subdirectory of /tmp and ensured 
that
the permissions were such that only the user running the service 
could

use that directory, it might be acceptable?  We could still log a
warning if we wanted.

This seems like it would have implications for people running 
services
on Windows too, but we can probably find a way to make that work if 
we

decide on a solution.


How is that a security issue? Are the lock files being written with 
some

sensitive data in them and have g or o permissions on? The sticky bit
(+t) on /tmp will prevent other users from deleting the file.



Right, but it won't prevent users from creating a symlink with the 
same

name.

ln -s /var/lib/nova/instances/x/image.raw /tmp/well.known.location

Now when you do

with open('/tmp/well.known.location', 'w') as lockfile:
  lockfile.write('Stuff')

Nova has just truncated the image file and written 'Stuff' to it.


So that's the generic case (and the way people often write this). But
the oslo lockutils code doesn't work that way. While it does open the
file for write, it does not actually write, it's using fcntl to hold
locks. That's taking a data structure on the fd in kernel memory 
(IIRC),

so it correctly gives it up if the process crashes.

I'm not saying there isn't some other possible security vulnerability
here as well, but it's not jumping out at me. So I'd love to understand
that, because if we can close that exposure we can provide a working
default, plus a strong recommendation for how to do that *right*. I'd 
be
totally happy with printing WARNING level at startup if lock_path = 
/tmp

that this should be adjusted.
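A minimal sketch of the pattern being described (assumptions: POSIX advisory fcntl locks and invented function names — this is not the actual oslo lockutils code):

```python
import fcntl
import os


def acquire_external_lock(path):
    """Open a lock file without truncating it and take an fcntl lock.

    Because nothing is written and O_TRUNC is not used, pointing the
    path at a symlink cannot clobber the link target's contents; the
    lock itself lives in kernel memory and is dropped automatically
    if the process crashes.
    """
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    fcntl.lockf(fd, fcntl.LOCK_EX)
    return fd


def release_external_lock(fd):
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```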


Full disclosure: I don't claim to be a security expert, so take my 
thoughts on this with a grain of salt.


tldr: I still don't see a way to do this without breaking _something_.

Unfortunately, while we don't actually write to the file, just opening 
it for write access truncates it.  So there remains the issue Clint 
raised if someone 

Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-04 Thread Russell Bryant
On 12/04/2013 11:16 AM, Nikola Đipanov wrote:
 Resurrecting this thread because of an interesting review that came up
 yesterday [1].
 
 It seems that our lack of a firm decision on what to do with the mocking
 framework has left people confused. In hope to help - I'll give my view
 of where things are now and what we should do going forward, and
 hopefully we'll reach some consensus on this.
 
 Here's the breakdown:
 
 We should abandon mox:
  * It has not had a release in over 3 years [2], and fixes have sat upstream unreleased for 2
 * There are bugs that are impacting the project with it (see above)
 * It will not be ported to python 3
 
 Proposed path forward options:
 1) Port nova to mock now:
   * Literally unmanageable - huge review overhead and regression risk
 for not so much gain (maybe) [1]
 
 2) Opportunistically port nova (write new tests using mock, when fixing
 tests, move them to mock):
  * Will take a really long time to move to mock, and is not really a
  solution since we are stuck with mox for an undetermined period of time
 - it's what we are doing now (kind of).
 
 3) Same as 2) but move current codebase to mox3
  * Buys us py3k compat, and fresher code
  * Mox3 and mox have diverged and we would need to backport mox fixes
  onto the mox3 tree and become de-facto active maintainers (as per Peter
 Feiner's last email - that may not be so easy).
 
 I think we should follow path 3) if we can, but we need to:
 
 1) Figure out what is the deal with mox3 and decide if owning it will
  really be less trouble than porting nova. To be honest - I was unable to
 even find the code repo for it, only [3]. If anyone has more info -
 please weigh in. We'll also need volunteers
 
 2) Make better testing guidelines when using mock, and maybe add some
 testing helpers (like we do already have for mox) that will make porting
  existing tests easier. mreidem already put this on this week's nova
 meeting agenda - so that might be a good place to discuss all the issues
 mentioned here as well.
 
 We should really take a stronger stance on this soon IMHO, as this comes
 up with literally every commit.

I think option 3 makes the most sense here (pending anyone saying we
should run away screaming from mox3 for some reason).  It's actually
what I had been assuming since this thread a while back.

This means that we don't need to *require* that tests get converted if
you're changing one.  It just gets you bonus imaginary internet points.

Requiring mock for new tests seems fine.  We can grant exceptions in
specific cases if necessary.  In general, we should be using mock for
new tests.

-- 
Russell Bryant



Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-04 Thread Peter Feiner
On Wed, Dec 4, 2013 at 11:16 AM, Nikola Đipanov ndipa...@redhat.com wrote:
 1) Figure out what is the deal with mox3 and decide if owning it will
 really be less trouble than porting nova. To be hones - I was unable to
 even find the code repo for it, only [3]. If anyone has more info -
 please weigh in. We'll also need volunteers

 [3] https://pypi.python.org/pypi/mox3/0.7.0

That's all I was able to find.



[openstack-dev] [Neutron] Calling a controller from within a session in the plugin

2013-12-04 Thread Mohammad Banikazemi


Have a question regarding calling an external SDN controller in a plugin.
The ML2 model brings up the fact that it is preferred not to call an
external controller within a database session by splitting up each call
into two calls: *_precommit and *_postcommit. Makes sense.

Looking at the existing monolithic plugins, I see some plugins that do not
follow this approach and have the call to the controller from within a
session. The obvious benefit of this approach would be a simpler cleanup
code segment for cases where the call to controller fails. So my question
is whether it is still OK to use this simpler approach in monolithic
plugins. As we move to the ML2 model we will adopt the ML2 approach, but in
the meantime we would leave calling the controller within a session as an
acceptable option. Would that be reasonable?

-Mohammad



Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread Tiwari, Arvind
Hi Adam,

I have added my comments in line. 

As per my request yesterday and David's proposal, the following role-def data
model looks generic enough and seems innovative enough to accommodate future
extensions.

{
  "role": {
    "id": "76e72a",
    "name": "admin",      (you can give whatever name you like)
    "scope": {
      "id": "---id--",    (ID should be 1-to-1 mapped with the resource in
                           "type" and must be an immutable value)
      "type": "service | file | domain etc.",  (type can be any type of
                           resource which explains the scoping context)
      "interface": "--interface--"  (We still need to work on this field. My
                           idea for this optional field is to indicate the
                           interface of the resource (endpoint for a service,
                           path for a file, ...) for which the role-def is
                           created; it can be empty.)
    }
  }
}

Based on the above data model, two admin roles for nova in two separate regions
would be as below:

{
  "role": {
    "id": "76e71a",
    "name": "admin",
    "scope": {
      "id": "110",         (suppose 110 is the Nova serviceId)
      "interface": "1101", (suppose 1101 is the Nova region East endpointId)
      "type": "service"
    }
  }
}

{
  "role": {
    "id": "76e72a",
    "name": "admin",
    "scope": {
      "id": "110",
      "interface": "1102", (suppose 1102 is the Nova region West endpointId)
      "type": "service"
    }
  }
}

This way we can keep role-assignments abstracted from the resource on which the
assignment is created. This also opens the door to service- and/or
endpoint-scoped tokens, as I mentioned in https://etherpad.openstack.org/p/1Uiwcbfpxq.

David, I have updated 
https://etherpad.openstack.org/p/service-scoped-role-definition line #118 
explaining the rationale behind the field.
I would also appreciate your vision on https://etherpad.openstack.org/p/1Uiwcbfpxq
too, which supports the
https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.


Thanks,
Arvind

-Original Message-
From: Adam Young [mailto:ayo...@redhat.com] 
Sent: Tuesday, December 03, 2013 6:52 PM
To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage questions)
Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

I've been thinking about your comment that nested roles are confusing
AT: Thanks for considering my comment about nested role-def.

What if we backed off and said the following:


Some role-definitions are owned by services.  If a role definition is
owned by a service, then in role assignment lists in tokens, those roles
will be prefixed by the service name.  '/' is a reserved character and will
be used as the divider between segments of the role definition.

That drops arbitrary nesting, and provides a reasonable namespace.  Then 
a role def would look like:

glance/admin  for the admin role on the glance project.

AT: It seems this approach is not going to help; a service rename would impact
all the role-defs for that service, and we are back to the same problem.
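(As a toy model of the '/' namespacing proposed above — an assumed helper for illustration, not Keystone code:)

```python
def parse_role(role_string):
    """Split a namespaced role like 'glance/admin' into (owner, name).

    Toy sketch of the proposal: '/' is reserved as the divider, so a
    plain role has no owner, and a domain-qualified role like
    'default/glance/admin' yields owner 'default/glance'.
    """
    if '/' not in role_string:
        return (None, role_string)
    owner, _, name = role_string.rpartition('/')
    return (owner, name)
```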

In theory, we could add the domain to the namespace, but that seems 
unwieldy.  If we did, a role def would then look like this


default/glance/admin  for the admin role on the glance project.

Is that clearer than the nested roles?
AT: It is definitely clearer, but it will create the same problems as the ones
we are trying to fix.



On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad with 
 proposal for nested role definition

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I am 
 sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your comments 
 and tried to explain why I am advocating for this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 I have just added some comments to your blueprint page

 regards

 David


 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,

   

 Based on our discussion in design summit , I have redone the service_id
 binding with roles BP
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.
 I have added a new BP (link below) along with detailed use case to
 support this BP.

 

Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread Tiwari, Arvind
Hi David, 

Thanks for your valuable comments. 

I have updated https://etherpad.openstack.org/p/service-scoped-role-definition 
line #118 explaining the rationale behind the field.

I would also appreciate your thoughts on 
https://etherpad.openstack.org/p/1Uiwcbfpxq too, which supports the 
https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens BP.


Thanks,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Wednesday, December 04, 2013 2:16 AM
To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
questions); Adam Young
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

I have added comments 111 to 122

david

On 03/12/2013 23:58, Tiwari, Arvind wrote:
 Hi David,
 
 I have added my comments underneath line # 97 till line #110, it is mostly 
 aligned with your proposal with some modification.
  
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 
 Thanks for your time,
 Arvind
 
 
 
 -Original Message-
 From: Tiwari, Arvind 
 Sent: Monday, December 02, 2013 4:22 PM
 To: Adam Young; OpenStack Development Mailing List (not for usage questions); 
 David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 Hi Adam and David,
 
 Thank you so much for all the great comments, seems we are making good 
 progress.
 
 I have replied to your comments and also added some to support my proposal
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 David, I like your suggestion for role-def scoping which can fit in my Plan B 
 and I think Adam is cool with plan B.
 
 Please let me know if David's proposal for role-def scoping is cool for 
 everybody?
 
 
 Thanks,
 Arvind
 
 -Original Message-
 From: Adam Young [mailto:ayo...@redhat.com] 
 Sent: Wednesday, November 27, 2013 8:44 AM
 To: Tiwari, Arvind; OpenStack Development Mailing List (not for usage 
 questions)
 Cc: Henry Nash; dolph.math...@gmail.com; David Chadwick
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad with 
 proposal for nested role definition
 
 Updated.  I made my changes Green.  It isn't easy being green.
 

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I am 
 sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your comments 
 and tried to explain why I am advocating for this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 I have just added some comments to your blueprint page

 regards

 David


 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,

   

 Based on our discussion in design summit , I have redone the service_id
 binding with roles BP
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.
 I have added a new BP (link below) along with detailed use case to
 support this BP.

 https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition

 Below etherpad link has some proposals for Role REST representation and
 pros and cons analysis

   

 https://etherpad.openstack.org/p/service-scoped-role-definition

   

 Please take look and let me know your thoughts.

   

 It would be awesome if we can discuss it in tomorrow's meeting.

   

 Thanks,

 Arvind

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Multidomain User Ids

2013-12-04 Thread Henry Nash

On 4 Dec 2013, at 13:28, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Sun, Nov 24, 2013 at 9:39 PM, Adam Young ayo...@redhat.com wrote:
 The #1 pain point I hear from people in the field is that they need to 
 consume read only  LDAP but have service users in something Keystone 
 specific.  We are close to having this, but we have not closed the loop.  
 This was something that was Henry's to drive home to completion.  Do we have 
 a plan?  Federation depends on this, I think, but this problem stands alone.
 
 I'm still thinking through the idea of having keystone natively federate to 
 itself out of the box, where keystone presents itself as an IdP (primarily 
 for service users). It sounds like a simpler architectural solution than 
 having to shuffle around code paths for both federated identities and local 
 identities.
  
 
 Two Solutions:
 1 always require domain ID along with the user id for role assignments.
 
 From an API perspective, how? (while still allowing for cross-domain role 
 assignments)
  
 2 provide some way to parse from the user ID what domain it is.
 
 I think you meant this one the other way around: Determine the domain given 
 the user ID.
  
 
 I was thinking that we could do something along the lines of 2, where we 
 provide a domain-specific user_id prefix: for example, if there is just one 
 LDAP service, and they wanted to prefix anything out of LDAP with ldap@, 
 then an id would be prefix + field from LDAP. This would be configured on 
 a per-domain basis and would be optional.
 
 The weakness is that it would be log N to determine which domain a user_id 
 came from.  A better approach would be to use a divider, like '@', and then 
 the prefix would be the key for a hashtable lookup.  Since it is optional, 
 domains could still be stored in SQL and user_ids could be UUIDs.
 
 One problem is if someone comes along later and must use an email address as 
 the user id; the @ would mess them up.  So the default divider should be 
 something URL-safe but not likely to be part of a user id.  I realize that it 
 might be impossible to match this criterion.
 
I know this sounds a bit like 'back to the future', but how about we make a 
user_id passed via the API a structured binary field, containing a 
concatenation of domain_id and (the actual) user_id, but rather than have a 
separator, encode the start positions in the first few digits, e.g. something 
like:

Digit # Meaning
0-1 Start position of domain_id, (e.g. this will usually be 4)
2-3 Start position of user_id
4-N domain_id
M-end   user_id

We would run a migration that would convert all existing mappings.  Further, we 
would ensure (with padding if necessary) that this new user_id is ALWAYS 
longer than 64 chars - hence we could easily detect which type of ID we had.
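A minimal sketch of the positional encoding proposed above (illustrative only, not actual Keystone code; the '=' padding character and the 65-char threshold are assumptions added here to make the example concrete):

```python
# Sketch of the scheme above: digits 0-1 hold the start of domain_id,
# digits 2-3 the start of user_id, followed by the two values
# concatenated.  Padding past 64 chars (here with '=', an assumed
# choice) lets legacy ids be recognised by length alone.
def encode_user_id(domain_id, user_id):
    domain_start = 4
    user_start = domain_start + len(domain_id)
    composite = "%02d%02d%s%s" % (domain_start, user_start, domain_id, user_id)
    return composite.ljust(65, "=")  # always longer than a bare 64-char id

def decode_user_id(composite):
    domain_start = int(composite[0:2])
    user_start = int(composite[2:4])
    domain_id = composite[domain_start:user_start]
    user_id = composite[user_start:].rstrip("=")
    return domain_id, user_id
```

For example, encode_user_id("default", "abc123") yields "0411defaultabc123" padded out to 65 characters, and decode_user_id reverses it without any separator character appearing in the ids themselves.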

 For usernames, sure... but I don't know why anyone would care to use email 
 addresses as ID's.
  
 
 Actually, there might be other reasons to forbid @ signs from IDs, as they 
 look like phishing attempts in URLs.
 
 Phishing attempts?? They need to be encoded anyway...
  
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 
 -Dolph
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread David Chadwick
Hi Adam

I understand your problem: having projects and services which have the
same name, then the lineage of a role containing this name is not
deterministically known without some other rule or syntax that can
differentiate between the two.

Since domains contain projects, which contain services, isn't the
containment hierarchy already known and predetermined? If it is, then:

4 name components mean it is a service specified role
3 name components mean it is a project specified role
2 name components mean it is a domain specified role
1 name component means it is globally named role (from the default domain)

a null string means the default domain or all projects in a domain. You
would never have null for a service name.

admin means the global admin role
/admin ditto
x/admin means the admin of the X domain
x/y/admin means the admin role for the y project in domain x
//x/admin means admin for service x from the default domain
etc.
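The component-count rule above could be sketched as follows (illustrative only; an empty segment stands for the default domain or all projects in a domain, as described):

```python
# Sketch of the naming rule above: split on '/', and the number of
# segments tells you which entity defined the role.
def classify_role(name):
    parts = name.split("/")
    kinds = {1: "global", 2: "domain", 3: "project", 4: "service"}
    # pair leading segments with their place in the containment hierarchy
    scope = dict(zip(["domain", "project", "service"], parts[:-1]))
    return kinds[len(parts)], scope, parts[-1]
```

So "x/y/admin" classifies as a project-specified role (domain x, project y), and "//x/admin" as a service-specified role for service x in the default domain.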

will that work?

regards

David


On 04/12/2013 15:04, Adam Young wrote:
 On 12/04/2013 04:08 AM, David Chadwick wrote:
 I am happy with this as far as it goes. I would like to see it being
 made more general, where domains, services and projects can also own and
 name roles
 Domains should be OK, but services would confuse the matter.  You'd have
 to end up with something like LDAP
 
 role=  domain=default,service=glance
 
 vs
 
 role=  domain=default,project=glance
 
 unless we have unambiguous implicit ordering, we'll need to make it
 explicit, which is messy.
 
 I'd rather do:
 
 One segment: globally defined roles.  These could also be considered
 roles defined in the default domain.
 Two segments service defined roles in the default domain
 Three Segments, service defined roles from non-default domain
 
 To do domain scoped roles we could do something like:
 
 domX//admin
 
 
 But It seems confusing.
 
 Perhaps a better approach for project roles is to have the rule that the
 default domain can show up as an empty string.  Thus, project scoped
 roles from the default domain  would be:
 
 \glance\admin
 
 and from a non default domain
 
 domX\glance\admin
 
 
 
 
 
 
 

 regards

 David


 On 04/12/2013 01:51, Adam Young wrote:
 I've been thinking about your comment that nested roles are confusing


 What if we backed off and said the following:


 Some role-definitions are owned by services.  If a Role definition is
 owned by a service, then in role assignment lists in tokens those roles will
 be prefixed by the service name.  / is a reserved character and will be
 used as the divider between segments of the role definition.

 That drops arbitrary nesting, and provides a reasonable namespace.  Then
 a role def would look like:

 glance/admin  for the admin role on the glance project.



 In theory, we could add the domain to the namespace, but that seems
 unwieldy.  If we did, a role def would then look like this


 default/glance/admin  for the admin role on the glance project.

 Is that clearer than the nested roles?



 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad
 with proposal for nested role definition

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I
 am sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your
 comments and tried to explain why I am advocating for this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi Arvind

 I have just added some comments to your blueprint page

 regards

 David


 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,

   Based on our discussion in design summit , I have redone the
 service_id
 binding with roles BP
 https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.


 I have added a new BP (link below) along with detailed use case to
 support this BP.

 https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition



 Below etherpad link has some proposals for Role REST representation
 and
 pros and cons analysis

   https://etherpad.openstack.org/p/service-scoped-role-definition

   Please take look and let me know your thoughts.

   It would be 

Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-04 Thread Nikola Đipanov
On 12/04/2013 06:15 PM, Peter Feiner wrote:
 On Wed, Dec 4, 2013 at 11:16 AM, Nikola Đipanov ndipa...@redhat.com wrote:
 1) Figure out what is the deal with mox3 and decide if owning it will
 really be less trouble than porting nova. To be honest - I was unable to
 even find the code repo for it, only [3]. If anyone has more info -
 please weigh in. We'll also need volunteers

 [3] https://pypi.python.org/pypi/mox3/0.7.0
 
 That's all I was able to find.
 

The package seems to be owned by people from the community - so maybe
someone will respond on this thread. Or if anyone knows who is behind
the package - let us know - it would be good to start figuring this out
sooner rather than later.

N.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread James Slagle
On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
robe...@robertcollins.net wrote:
 In this months review:
  - Ghe Rivero for -core

+1.  Has been doing very good reviews.

  - Jan Provaznik for removal from -core
  - Jordan O'Mara for removal from -core
  - Martyn Taylor for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core

 Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
 to TripleO and OpenStack, but I don't think they are tracking /
 engaging in the code review discussions enough to stay in -core: I'd
 be delighted if they want to rejoin as core - as we discussed last
 time, after a shorter than usual ramp up period if they get stuck in.

What's the shorter than usual ramp up period?

In general, I agree with your points about removing folks from core.

We do have a situation though where some folks weren't reviewing as
frequently when the Tuskar UI/API development slowed a bit post-merge.
 Since that is getting ready to pick back up, my concern with removing
this group of folks, is that it leaves less people on core who are
deeply familiar with that code base.  Maybe that's ok, especially if
the fast track process to get them back on core is reasonable.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-12-04 Thread Chuck Short
On Wed, Dec 4, 2013 at 11:16 AM, Nikola Đipanov ndipa...@redhat.com wrote:

 On 11/19/2013 05:52 PM, Peter Feiner wrote:
  On Tue, Nov 19, 2013 at 11:19 AM, Chuck Short chuck.sh...@canonical.com
 wrote:
  Hi
 
 
  On Tue, Nov 19, 2013 at 10:43 AM, Peter Feiner pe...@gridcentric.ca
 wrote:
 
  A substantive reason for switching from mox to mock is the derelict
  state of mox releases. There hasn't been a release of mox in three
  years: the latest, mox-0.5.3, was released in 2010 [1, 2]. Moreover,
  in the past 3 years, substantial bugs have been fixed in upstream mox.
  For example, with the year-old fix to
  https://code.google.com/p/pymox/issues/detail?id=16, a very nasty bug
  in nova would have been caught by an existing test [3].
 
  Alternatively, a copy of the upstream mox code could be added in-tree.
 
  Please no, I think we are in an agreement with mox3 and mock.
 
  That's cool. As long as the mox* is phased out, the false-positive
  test results will be fixed.
 
  Of course, there's _another_ alternative, which is to retrofit mox3
  with the upstream mox fixes (e.g., the bug I cited above exists in
  mox3). However, the delta between mox3 and upstream mox is pretty huge
  (I just checked), so effort is probably better spent switching to
  mock. To that end, I plan on changing the tests I cited above.
 

 Resurrecting this thread because of an interesting review that came up
 yesterday [1].

 It seems that our lack of a firm decision on what to do with the mocking
 framework has left people confused. In hope to help - I'll give my view
 of where things are now and what we should do going forward, and
 hopefully we'll reach some consensus on this.

 Here's the breakdown:

 We should abandon mox:
 * It has not had a release in over 3 years [2], and has had a patch pending upstream for 2
 * There are bugs that are impacting the project with it (see above)
 * It will not be ported to python 3

 Proposed path forward options:
 1) Port nova to mock now:
   * Literally unmanageable - huge review overhead and regression risk
 for not so much gain (maybe) [1]

 2) Opportunistically port nova (write new tests using mock, when fixing
 tests, move them to mock):
 * Will take a really long time to move to mock, and is not really a
 solution since we are stuck with mox for an undetermined period of time
 - it's what we are doing now (kind of).

 3) Same as 2) but move current codebase to mox3
  * Buys us py3k compat, and fresher code
 * Mox3 and mox have diverged and we would need to backport mox fixes
 onto the mox3 tree and become de-facto active maintainers (as per Peter
 Feiner's last email - that may not be so easy).


So I thought we cleared this up already. We convert the current codebase
over to mox3, new tests should be done in mock. Eventually we start
converting over code to use mock.



 I think we should follow path 3) if we can, but we need to:

  1) Figure out what is the deal with mox3 and decide if owning it will
  really be less trouble than porting nova. To be honest - I was unable to
  even find the code repo for it, only [3]. If anyone has more info -
  please weigh in. We'll also need volunteers


Monty and I did this last cycle; it's a part of the OpenStack project,
although it's not available in Gerrit. That should be fixed so we can start
getting bug fixes in for it.


  2) Make better testing guidelines when using mock, and maybe add some
  testing helpers (like we already have for mox) that will make porting
  existing tests easier. mriedem already put this on this week's nova
  meeting agenda - so that might be a good place to discuss all the issues
  mentioned here as well.

 We should really take a stronger stance on this soon IMHO, as this comes
 up with literally every commit.


totally



 Cheers,

 Nikola

 [1] https://review.openstack.org/#/c/59694/
 [2] https://pypi.python.org/pypi/mox
 [3] https://pypi.python.org/pypi/mox3/0.7.0

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread David Chadwick


On 04/12/2013 17:28, Tiwari, Arvind wrote:
 Hi David,
 
 Thanks for your valuable comments.
 
 I have updated
 https://etherpad.openstack.org/p/service-scoped-role-definition line
 #118 explaining the rationale behind the field.

#119 for my reply

 
 I wd also appreciate your thoughts on
 https://etherpad.openstack.org/p/1Uiwcbfpxq too,

I have added a comment to the original bug report -
https://bugs.launchpad.net/keystone/+bug/968696

I think you should be going for simplifying Keystone's RBAC model rather
than making it more complex. In essence this would mean that assigning
permissions to roles and users to roles are separate and independent
processes and that roles on creation do not have to have any baggage or
restrictions tied to them. Here are my suggestions:

1. Allow different entities to create roles, and use hierarchical role
naming to maintain global uniqueness and to show which entity created
(owns) the role definition. Creating a role does not imply anything
about a role's subsequent permissions unless a scope field is included
in the definition.

2. When a role is created allow the creator to optionally add a scope
field which will limit the permissions that can be assigned to the role
to the prescribed scope.

3. Permissions will be assigned to roles in policy files by resource
owners. They can assign any permissions on their resources to the role
that they want to, except that they cannot override the scope field (i.e.
grant permissions to resources which are out of the role's scope).

4. Remove any linkage of roles to tenants/projects on creation. This is
unnecessary baggage and only complicates the model for no good
functional reason.
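As a rough illustration of suggestions 1-3 (the class and names are hypothetical, not Keystone code): hierarchical names keep roles globally unique and show ownership, while an optional scope field caps what permissions may later be attached.

```python
# Sketch of the simplified model above: a role is created with a
# hierarchical name and an optional scope; resource owners may then
# attach any permission that falls within that scope, but no more.
class Role:
    def __init__(self, owner, name, scope=None):
        self.name = "%s/%s" % (owner, name)  # hierarchical, e.g. "glance/admin"
        self.scope = scope                   # e.g. "glance", or None (unrestricted)
        self.permissions = set()

    def grant(self, resource, action):
        # Suggestion 3: grants outside the role's scope are rejected.
        if self.scope is not None and not resource.startswith(self.scope):
            raise ValueError("permission outside role scope")
        self.permissions.add((resource, action))
```

Note that creating the role carries no permissions of its own; assignment of permissions (via grant) and assignment of users to the role stay independent, per suggestion 4.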

regards

David


 which is support
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens
 BP.
 
 
 Thanks, Arvind
 
 -Original Message- From: David Chadwick
 [mailto:d.w.chadw...@kent.ac.uk] Sent: Wednesday, December 04, 2013
 2:16 AM To: Tiwari, Arvind; OpenStack Development Mailing List (not
 for usage questions); Adam Young Subject: Re: [openstack-dev]
 [keystone] Service scoped role definition
 
 I have added comments 111 to 122
 
 david
 
 On 03/12/2013 23:58, Tiwari, Arvind wrote:
 Hi David,
 
 I have added my comments underneath line # 97 till line #110, it is
 mostly aligned with your proposal with some modification.
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 
 Thanks for your time, Arvind
 
 
 
 -Original Message- From: Tiwari, Arvind Sent: Monday,
 December 02, 2013 4:22 PM To: Adam Young; OpenStack Development
 Mailing List (not for usage questions); David Chadwick Subject: Re:
 [openstack-dev] [keystone] Service scoped role definition
 
 Hi Adam and David,
 
 Thank you so much for all the great comments, seems we are making
 good progress.
 
 I have replied to your comments and also added some to support my
 proposal
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 David, I like your suggestion for role-def scoping which can fit in
 my Plan B and I think Adam is cool with plan B.
 
 Please let me know if David's proposal for role-def scoping is cool
 for everybody?
 
 
 Thanks, Arvind
 
 -Original Message- From: Adam Young
 [mailto:ayo...@redhat.com] Sent: Wednesday, November 27, 2013 8:44
 AM To: Tiwari, Arvind; OpenStack Development Mailing List (not for
 usage questions) Cc: Henry Nash; dolph.math...@gmail.com; David
 Chadwick Subject: Re: [openstack-dev] [keystone] Service scoped
 role definition
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,
 
 Based on our discussion over IRC, I have updated the below
 etherpad with proposal for nested role definition
 
 Updated.  I made my changes Green.  It isn't easy being green.
 
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 Please take a look @ Proposal (Ayoung) - Nested role
 definitions, I am sorry if I could not catch your idea.
 
 Feel free to update the etherpad.
 
 Regards, Arvind
 
 
 -Original Message- From: Tiwari, Arvind Sent: Tuesday,
 November 26, 2013 4:08 PM To: David Chadwick; OpenStack
 Development Mailing List Subject: Re: [openstack-dev] [keystone]
 Service scoped role definition
 
 Hi David,
 
 Thanks for your time and valuable comments. I have replied to
 your comments and try to explain why I am advocating to this BP.
 
 Let me know your thoughts, please feel free to update below
 etherpad 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 Thanks again, Arvind
 
 -Original Message- From: David Chadwick
 [mailto:d.w.chadw...@kent.ac.uk] Sent: Monday, November 25, 2013
 12:12 PM To: Tiwari, Arvind; OpenStack Development Mailing List 
 Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee,
 Guang Subject: Re: [openstack-dev] [keystone] Service scoped role
 definition
 
 Hi Arvind
 
 I have just added some comments to your blueprint page
 
 regards
 
 David
 
 
 On 19/11/2013 00:01, Tiwari, Arvind wrote:
 Hi,
 

Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-04 Thread Sean Dague
On 12/04/2013 11:56 AM, Ben Nemec wrote:
 On 2013-12-04 06:07, Sean Dague wrote:
 On 12/03/2013 11:21 PM, Clint Byrum wrote:
 Excerpts from Sean Dague's message of 2013-12-03 16:05:47 -0800:
 On 12/03/2013 06:13 PM, Ben Nemec wrote:
 On 2013-12-03 17:09, Sean Dague wrote:
 On 12/03/2013 05:50 PM, Mark McLoughlin wrote:
 On Tue, 2013-12-03 at 16:23 -0600, Ben Nemec wrote:
 On 2013-12-03 15:56, Sean Dague wrote:
 This cinder patch - https://review.openstack.org/#/c/48935/

 Is blocked on failing upgrade because the updated oslo
 lockutils won't
 function until there is a specific configuration variable added
 to the
 cinder.conf.

 That work around is proposed here -
 https://review.openstack.org/#/c/52070/3

 However I think this is exactly the kind of forward breaks that we
 want
 to prevent with grenade, as cinder failing to function after a
 rolling
 upgrade because a config item wasn't added is exactly the kind
 of pain
 we are trying to prevent happening to ops.

 So the question is, how is this done correctly so that a default
 can be
 set in the cinder code for this value, and it not require a config
 change to work?

 You're absolutely correct, in principle - if the default value for
 lock_path worked for users before, we should absolutely continue to
 support it.

 I don't know that I have a good answer on how to handle this,
 but for
 context this change is the result of a nasty bug in lockutils that
 meant
 external locks were doing nothing if lock_path wasn't set. 
 Basically
 it's something we should never have allowed in the first place.

 As far as setting this in code, it's important that all of the
 processes
 for a service are using the same value to avoid the same bad
 situation
 we were in before.  For tests, we have a lockutils wrapper
 (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L282)

 that sets an environment variable to address this, but that only
 works if all of the processes are going to be spawned from within
 the same wrapper, and I'm not sure how secure that is for
 production
 deployments since it puts all of the lock files in a temporary
 directory.

 Right, I don't think the previous default really worked - if
 you used
 the default, then external locking was broken.

 I suspect most distros do set a default - I see RDO has this in its
 default nova.conf:

   lock_path = /var/lib/nova/tmp

 So, yes - this is all terrible.

 IMHO, rather than raise an exception we should log a big fat warning
 about relying on the default and perhaps just treat the lock as an
 in-process lock in that case ... since that's essentially what it
 was
 before, right?

 So a default of lock_path = /tmp will work (FHS says that path has to be
 there), even if not optimal. Could we make it a default value like that
 instead of the current default, which is null (and hence the problem)?

 IIRC, my initial fix was something similar to that, but it got shot
 down
 because putting the lock files in a known world writeable location
 was a
 security issue.

 Although maybe if we put them in a subdirectory of /tmp and ensured
 that
 the permissions were such that only the user running the service could
 use that directory, it might be acceptable?  We could still log a
 warning if we wanted.

 This seems like it would have implications for people running services
 on Windows too, but we can probably find a way to make that work if we
 decide on a solution.

 How is that a security issue? Are the lock files being written with
 some
 sensitive data in them and have g or o permissions on? The sticky bit
 (+t) on /tmp will prevent other users from deleting the file.


 Right, but it won't prevent users from creating a symlink with the same
 name.

 ln -s /var/lib/nova/instances/x/image.raw /tmp/well.known.location

 Now when you do

 with open('/tmp/well.known.location', 'w') as lockfile:
   lockfile.write('Stuff')

 Nova has just truncated the image file and written 'Stuff' to it.

 So that's the generic case (and the way people often write this). But
 the oslo lockutils code doesn't work that way. While it does open the
 file for write, it does not actually write, it's using fcntl to hold
 locks. That's taking a data structure on the fd in kernel memory (IIRC),
 so it correctly gives it up if the process crashes.
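A minimal sketch of that style of locking (illustrative, not the actual oslo lockutils code): the lock state lives on the fd in the kernel, nothing is ever written to the file, and the kernel releases the lock if the process dies.

```python
import fcntl
import os

def acquire_lock(path):
    # Open (creating if needed) but never write file contents; the lock
    # is held on the fd in kernel memory, not in the file itself, so a
    # symlink-swapped target is never truncated or overwritten.
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until the lock is acquired
    return fd

def release_lock(fd):
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)  # the kernel also drops the lock if the process crashes
```

This is advisory locking, so it only coordinates processes that all go through the same acquire/release path.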

 I'm not saying there isn't some other possible security vulnerability
 here as well, but it's not jumping out at me. So I'd love to understand
 that, because if we can close that exposure we can provide a working
 default, plus a strong recommendation for how to do that *right*. I'd be
 totally happy with printing WARNING level at startup if lock_path = /tmp
 that this should be adjusted.
 
 Full disclosure: I don't claim to be a security expert, so take my
 thoughts on this with a grain of salt.
 
 tldr: I still don't see a way to do this without breaking _something_.
 
 Unfortunately, while we don't actually write to the file, just opening
 it for 

Re: [openstack-dev] {TripleO] UI Wireframes - close to implementation start

2013-12-04 Thread Liz Blanchard

On Dec 3, 2013, at 9:30 AM, Tzu-Mainn Chen tzuma...@redhat.com wrote:

 Hey folks,
 
 I opened 2 issues on the UX discussion forum with TripleO UI topics:
 
 Resource Management:
 http://ask-openstackux.rhcloud.com/question/95/tripleo-ui-resource-management/
 - this section was already reviewed before; there are not many surprises, just 
 smaller updates
 - we are about to implement this area
 
 http://ask-openstackux.rhcloud.com/question/96/tripleo-ui-deployment-management/
 - these are completely new views and they need a lot of attention so that in 
 time we don't change direction drastically
 - any feedback here is welcome
 
 We need to get into implementation ASAP. It doesn't mean that we have 
 everything perfect from the very beginning, but that we have a direction and 
 move forward through enhancements.
 
 Therefore, implementation of the above-mentioned areas should start very soon.
 
 If at all possible, I will try to record a walkthrough with further explanations. 
 If you have any questions or feedback, please follow the threads on 
 ask-openstackux.
 
 Thanks
 -- Jarda
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 These wireframes look really good!  However, would it be possible to get the 
 list of requirements driving them?  For example, something on the level of:
 
 1) removal of resource classes and racks
 2) what happens behind the scenes when deployment occurs
 3) the purpose of compute class
 4) etc
 
I completely agree with Mainn on this. It would also be great to see use 
cases built from the requirements that we can consider while reviewing wireframes. 
These would allow us to think in terms of how the user would come to the UI and 
perform the actions needed to complete their task.

Here are some additional thoughts/questions regarding the current designs:

Deployment Management

Slide 3
- I like that we are making it clear to the user which types of nodes they 
could add to their deployment and how many of each node will be added, but why 
aren't we continuing to use this method as the user may want to scale out their 
deployment? It seems very inconsistent to use this design just for this one use 
case.
- The section is labeled "Resource distribution" but the user has a number 
of unallocated nodes. I think we should try to make these terms consistent: 
either "Resource distribution" and "Undistributed resources" or "Node 
Allocation" and "Unallocated Nodes". This would limit the terminology that a 
user would need to learn.
- Would the "More details" link bring up a description of the type of node? If 
so, should this just be a "?" next to the title that the user could roll over? 
Okay, after reviewing the YouTube video I understand the point of this link 
now. I think this should be labeled accordingly. The controller link should 
probably read "Controller" to match up with the navigation section. It might be 
better to label these both "Controller Nodes".
- Somehow, we should make it easier for the user to walk through these detail 
steps. It's great that this initial screen allows them to just assign nodes and 
then quickly deploy, but when they want to dive into the details I think we 
need to make it much clearer how to do this, without the user needing to 
remember that they have to select each node type section and click "Deploy". 
Maybe a wizard would work better?
- Would the user be able to click on the text and enter a number of nodes into 
the text box rather than have to click the up or down arrow?

Slide 5
- It might be a helpful hint to the user to list the number of nodes that 
need action in each section next to the navigation item. I'm just trying to 
think of better ways to walk the user through, if they want to look at more 
details on each of the nodes (or types of nodes) before deploying.
- Would there be other states that a node could be in within the "Nodes waiting 
for action" section? If not, this could just be the "Nodes waiting to be 
deployed" section.
- I like that the stacked representation of the nodes will save space, but will 
the user be able to perform the actions that they would need to quickly on this 
page? It's also inconsistent with the original first use design used in Slide 
3. Maybe we could repeat this interaction and allow the user to add/remove the 
number of nodes quickly here? We could still use this stacked representation to 
monitor like nodes as you are showing in Slide 11.

Slide 6
- The addition of "Create New Compute Class" makes it seem like the user will be 
able to deploy these 27 nodes without being attached to a class. Or is this 
initial group a default resource class? How would the user move nodes from one 
class to another?

Slide 7
- This icon seems to be a way of telling the user that they've added all of 
these nodes to their plan and now they need to act on it. Would these 
notifications show as they change 

Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread Tiwari, Arvind
Hi David,

The biggest problems, in my opinion, are:

1. We are overloading the role name, adding extra complexity to maintain 
the generalization of the role-def data model. 
2. Namespacing the role name is not going to resolve all the issues listed in 
the BP.
3. All the namespaces are derived from mutable strings (domain name, project 
name, service name, etc.), which makes the role name fragile.

I think it is time to break the generic role-def data model to accommodate more 
specialized use cases.


Thanks,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Wednesday, December 04, 2013 10:41 AM
To: Adam Young; Tiwari, Arvind; OpenStack Development Mailing List (not for 
usage questions)
Cc: Henry Nash; dolph.math...@gmail.com
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Adam

I understand your problem: when projects and services have the
same name, the lineage of a role containing this name is not
deterministically known without some other rule or syntax that can
differentiate between the two.

Since domains contain projects, which contain services, isn't the
containment hierarchy already known and predetermined? If it is, then:

4 name components mean it is a service specified role
3 name components mean it is a project specified role
2 name components mean it is a domain specified role
1 name component means it is globally named role (from the default domain)

a null string means the default domain or all projects in a domain. You
would never have null for a service name.

admin means the global admin role
/admin ditto
x/admin means the admin of the X domain
x/y/admin means the admin role for the y project in domain x
//x/admin means admin for service x from the default domain
etc.

will that work?

regards

David
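As a quick illustration, David's segment-count scheme is concrete enough to sketch in code. This is purely illustrative (not Keystone code), and the scope-level names are assumptions for the example:

```python
def parse_role_name(name):
    """Map a hierarchical role name to its scope, per the scheme above.

    The number of '/'-separated components selects the scope level;
    an empty component stands for the default domain (or all projects).
    """
    parts = name.split('/')
    levels = {1: 'global', 2: 'domain', 3: 'project', 4: 'service'}
    try:
        return levels[len(parts)], parts
    except KeyError:
        raise ValueError('too many segments in role name: %r' % name)

# The examples from the scheme:
print(parse_role_name('admin'))        # ('global', ['admin'])
print(parse_role_name('x/admin'))      # ('domain', ['x', 'admin'])
print(parse_role_name('x/y/admin'))    # ('project', ['x', 'y', 'admin'])
print(parse_role_name('//x/admin'))    # ('service', ['', '', 'x', 'admin'])
```

Note that "/admin" parses as a domain-scoped role with an empty (i.e. default) domain, which matches the "null string means the default domain" rule above.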


On 04/12/2013 15:04, Adam Young wrote:
 On 12/04/2013 04:08 AM, David Chadwick wrote:
 I am happy with this as far as it goes. I would like to see it being
 made more general, where domains, services and projects can also own and
 name roles
 Domains should be OK, but services would confuse the matter.  You'd have
 to end up with something like LDAP
 
 role=  domain=default,service=glance
 
 vs
 
 role=  domain=default,project=glance
 
 unless we have unambiguous implicit ordering, we'll need to make it
 explicit, which is messy.
 
 I'd rather do:
 
 One segment: globally defined roles.  These could also be considered
 roles defined in the default domain.
 Two segments: service-defined roles in the default domain.
 Three segments: service-defined roles from a non-default domain.
 
 To do domain scoped roles we could do something like:
 
 domX//admin
 
 
 But it seems confusing.
 
 Perhaps a better approach for project roles is to have the rule that the
 default domain can show up as an empty string.  Thus, project scoped
 roles from the default domain  would be:
 
 \glance\admin
 
 and from a non default domain
 
 domX\glance\admin
 
 
 
 
 
 
 

 regards

 David


 On 04/12/2013 01:51, Adam Young wrote:
 I've been thinking about your comment that nested roles are confusing


 What if we backed off and said the following:


 Some role-definitions are owned by services.  If a role definition is
 owned by a service, then in role assignment lists in tokens those roles
 will be prefixed by the service name.  "/" is a reserved character and
 will be used as the divider between segments of the role definition.

 That drops arbitrary nesting, and provides a reasonable namespace.  Then
 a role def would look like:

 glance/admin  for the admin role on the glance project.



 In theory, we could add the domain to the namespace, but that seems
 unwieldy.  If we did, a role def would then look like this


 default/glance/admin  for the admin role on the glance project.

 Is that clearer than the nested roles?



 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,

 Based on our discussion over IRC, I have updated the below etherpad
 with proposal for nested role definition

 https://etherpad.openstack.org/p/service-scoped-role-definition

 Please take a look @ Proposal (Ayoung) - Nested role definitions, I
 am sorry if I could not catch your idea.

 Feel free to update the etherpad.

 Regards,
 Arvind


 -Original Message-
 From: Tiwari, Arvind
 Sent: Tuesday, November 26, 2013 4:08 PM
 To: David Chadwick; OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [keystone] Service scoped role definition

 Hi David,

 Thanks for your time and valuable comments. I have replied to your
 comments and try to explain why I am advocating to this BP.

 Let me know your thoughts, please feel free to update below etherpad
 https://etherpad.openstack.org/p/service-scoped-role-definition

 Thanks again,
 Arvind

 -Original Message-
 From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
 Sent: Monday, November 25, 2013 12:12 PM
 To: Tiwari, Arvind; OpenStack Development Mailing List
 Cc: Henry Nash; 

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Robert Collins
On 5 December 2013 06:55, James Slagle james.sla...@gmail.com wrote:
 On Wed, Dec 4, 2013 at 2:12 AM, Robert Collins
 Jan, Jordan, Martyn, Jiri and Jaromir are still actively contributing
 to TripleO and OpenStack, but I don't think they are tracking /
 engaging in the code review discussions enough to stay in -core: I'd
 be delighted if they want to rejoin as core - as we discussed last
 time, after a shorter than usual ramp up period if they get stuck in.

 What's the shorter than usual ramp up period?

You know, we haven't actually put numbers on it. But I'd be
comfortable with a few weeks of sustained involvement.

 In general, I agree with your points about removing folks from core.

 We do have a situation though where some folks weren't reviewing as
 frequently when the Tuskar UI/API development slowed a bit post-merge.
  Since that is getting ready to pick back up, my concern with removing
 this group of folks, is that it leaves less people on core who are
 deeply familiar with that code base.  Maybe that's ok, especially if
 the fast track process to get them back on core is reasonable.

Well, I don't think we want a situation where, when a single org
decides to tackle something else for a bit, no one can comfortably
fix bugs in e.g. Tuskar - or, worse, the whole thing stalls - that's why
I've been so keen to get /everyone/ in TripleO-core familiar with the
entire collection of codebases we're maintaining.

So I think that after 3 months other cores should be reasonably familiar too ;).

That said, perhaps we should review these projects.

Tuskar, as an API to drive deployment and ops, clearly belongs in
TripleO - though we need to keep pushing features out of it into more
generalised tools like Heat, Nova and Solum. As for TuskarUI though: as far
as I know all the other programs have their web UI in Horizon itself, so
perhaps TuskarUI belongs in the Horizon program as a separate code
base for now, and we merge them once Tuskar begins integration?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-04 Thread Ben Nemec

On 2013-12-04 12:51, Sean Dague wrote:

On 12/04/2013 11:56 AM, Ben Nemec wrote:

On 2013-12-04 06:07, Sean Dague wrote:

On 12/03/2013 11:21 PM, Clint Byrum wrote:

Excerpts from Sean Dague's message of 2013-12-03 16:05:47 -0800:

On 12/03/2013 06:13 PM, Ben Nemec wrote:

On 2013-12-03 17:09, Sean Dague wrote:

On 12/03/2013 05:50 PM, Mark McLoughlin wrote:

On Tue, 2013-12-03 at 16:23 -0600, Ben Nemec wrote:

On 2013-12-03 15:56, Sean Dague wrote:

This cinder patch - https://review.openstack.org/#/c/48935/

Is blocked on failing upgrade because the updated oslo lockutils won't
function until there is a specific configuration variable added to the
cinder.conf.

That work around is proposed here -
https://review.openstack.org/#/c/52070/3

However I think this is exactly the kind of forward breaks that we
want to prevent with grenade, as cinder failing to function after a
rolling upgrade because a config item wasn't added is exactly the kind
of pain we are trying to prevent happening to ops.

So the question is, how is this done correctly so that a default
can be set in the cinder code for this value, and it not require a
config change to work?


You're absolutely correct, in principle - if the default value for
lock_path worked for users before, we should absolutely continue to
support it.


I don't know that I have a good answer on how to handle this, but for
context this change is the result of a nasty bug in lockutils that meant
external locks were doing nothing if lock_path wasn't set. Basically
it's something we should never have allowed in the first place.

As far as setting this in code, it's important that all of the processes
for a service are using the same value to avoid the same bad situation
we were in before.  For tests, we have a lockutils wrapper
(https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L282)
that sets an environment variable to address this, but that only
works if all of the processes are going to be spawned from within
the same wrapper, and I'm not sure how secure that is for production
deployments since it puts all of the lock files in a temporary
directory.


Right, I don't think the previous default really worked - if you used
the default, then external locking was broken.

I suspect most distros do set a default - I see RDO has this in its
default nova.conf:

  lock_path = /var/lib/nova/tmp

So, yes - this is all terrible.

IMHO, rather than raise an exception we should log a big fat warning
about relying on the default and perhaps just treat the lock as an
in-process lock in that case ... since that's essentially what it
was before, right?

So a default of lock_path = /tmp will work (FHS says that path has
to be there), even if not optimal. Could we make it a default value like
that instead of the current default which is null (and hence the problem).
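A rough sketch of what such a code-level fallback (rather than a null default) could look like — falling back to a per-user subdirectory of the system temp dir and warning loudly. The function name and fallback layout are illustrative assumptions, not the actual oslo patch, and `os.getuid` makes this POSIX-only:

```python
import os
import tempfile
import warnings

def resolve_lock_path(configured_path=None):
    # Honor an operator-supplied lock_path when one is set.
    if configured_path:
        return configured_path
    # Otherwise fall back to a per-user subdirectory of /tmp, created with
    # mode 0700 so other local users cannot pre-plant files or symlinks.
    fallback = os.path.join(tempfile.gettempdir(),
                            'oslo-locks-%d' % os.getuid())
    if not os.path.isdir(fallback):
        os.mkdir(fallback, 0o700)
    warnings.warn('lock_path is not set; falling back to %s -- set '
                  'lock_path explicitly for production' % fallback)
    return fallback
```

The per-user, mode-0700 directory is one possible answer to the world-writable-/tmp objection discussed in this thread.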


IIRC, my initial fix was something similar to that, but it got shot down
because putting the lock files in a known world writeable location was a
security issue.

Although maybe if we put them in a subdirectory of /tmp and ensured that
the permissions were such that only the user running the service could
use that directory, it might be acceptable?  We could still log a
warning if we wanted.

This seems like it would have implications for people running services
on Windows too, but we can probably find a way to make that work if we
decide on a solution.


How is that a security issue? Are the lock files being written with
some sensitive data in them and have g or o permissions on? The sticky
bit (+t) on /tmp will prevent other users from deleting the file.



Right, but it won't prevent users from creating a symlink with the same
name.

ln -s /var/lib/nova/instances/x/image.raw /tmp/well.known.location

Now when you do

with open('/tmp/well.known.location', 'w') as lockfile:
  lockfile.write('Stuff')

Nova has just truncated the image file and written 'Stuff' to it.
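One way to defeat that specific attack — shown here as an illustration, not as what lockutils did at the time — is to refuse to follow symlinks when opening the lock file, via O_NOFOLLOW:

```python
import errno
import os

def open_lock_file(path):
    """Create/open a lock file, refusing to traverse a planted symlink.

    With O_NOFOLLOW, open() fails with ELOOP if `path` is a symlink, so a
    pre-created `ln -s /var/lib/nova/instances/x/image.raw <path>` cannot
    redirect the open. There is also no O_TRUNC here, so even a regular
    file at `path` is not clobbered.
    """
    flags = os.O_CREAT | os.O_RDWR | os.O_NOFOLLOW
    try:
        return os.open(path, flags, 0o600)
    except OSError as e:
        if e.errno == errno.ELOOP:
            raise RuntimeError('%s is a symlink; refusing to open' % path)
        raise
```

O_NOFOLLOW is POSIX but not available on Windows, which ties back to the cross-platform concern raised earlier in the thread.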


So that's the generic case (and the way people often write this). But
the oslo lockutils code doesn't work that way. While it does open the
file for write, it does not actually write, it's using fcntl to hold
locks. That's taking a data structure on the fd in kernel memory (IIRC),
so it correctly gives it up if the process crashes.

I'm not saying there isn't some other possible security vulnerability
here as well, but it's not jumping out at me. So I'd love to understand
that, because if we can close that exposure we can provide a working
default, plus a strong recommendation for how to do that *right*. I'd be
totally happy with printing WARNING level at startup if lock_path = /tmp
that this should be adjusted.
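For reference, the pattern described here — take the lock with fcntl on an open fd, never write() through it — looks roughly like the following minimal model. This is a sketch, not the actual lockutils code (note it opens without O_TRUNC, which sidesteps the truncation concern; opening with Python mode 'w' would truncate):

```python
import fcntl
import os

def acquire_file_lock(path):
    # Open without O_TRUNC so an existing file's contents are untouched.
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    # The lock is a kernel-side record on the open file; it is released
    # automatically when the process exits or crashes.
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises if already held
    return fd

def release_file_lock(fd):
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)
```

Because the lock lives on the fd rather than in the file's contents, the lock file itself stays empty.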


Full disclosure: I don't claim to be a security expert, so take my
thoughts on this with a grain of salt.

tldr: I still don't see a way to do this without breaking _something_.

Unfortunately, while we don't actually write to the file, just opening
it for 

[openstack-dev] [Keystoneclient] [Keystone] Last released version of keystoneclient does not work with python33

2013-12-04 Thread Georgy Okrokvertskhov
Hi,

I have failed tests in gate-solum-python33 because keystoneclient fails to
import xmlrpclib.
The exact error is:
  File "/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py", line 42, in <module>
2013-11-28 18:27:12.655 | import xmlrpclib
2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib'

This issue appeared because xmlrpclib was renamed in Python 3.
Is there any plan to release a new version of keystoneclient with the fix
for that issue? As I see it is fixed in master.

If there is no new release for keystoneclient can you recommend any
workaround for this issue?

Thanks
Georgy
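Until a fixed keystoneclient release lands, the usual interim pattern — an assumption here, not an official recommendation — is a conditional import shim, which is conceptually what the master-branch fix does:

```python
# Import the module under a single name on both Python 2 and Python 3,
# where the stdlib module was renamed to xmlrpc.client.
try:
    import xmlrpclib                    # Python 2
except ImportError:
    import xmlrpc.client as xmlrpclib   # Python 3

print(xmlrpclib.__name__)
```

Recent six releases offer the same indirection as six.moves.xmlrpc_client.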


Re: [openstack-dev] creating a default for oslo config variables within a project?

2013-12-04 Thread Clint Byrum
Excerpts from Sean Dague's message of 2013-12-04 10:51:16 -0800:
 On 12/04/2013 11:56 AM, Ben Nemec wrote:
  On 2013-12-04 06:07, Sean Dague wrote:
  On 12/03/2013 11:21 PM, Clint Byrum wrote:
  Excerpts from Sean Dague's message of 2013-12-03 16:05:47 -0800:
  On 12/03/2013 06:13 PM, Ben Nemec wrote:
  On 2013-12-03 17:09, Sean Dague wrote:
  On 12/03/2013 05:50 PM, Mark McLoughlin wrote:
  On Tue, 2013-12-03 at 16:23 -0600, Ben Nemec wrote:
  On 2013-12-03 15:56, Sean Dague wrote:
  This cinder patch - https://review.openstack.org/#/c/48935/
 
  Is blocked on failing upgrade because the updated oslo
  lockutils won't
  function until there is a specific configuration variable added
  to the
  cinder.conf.
 
  That work around is proposed here -
  https://review.openstack.org/#/c/52070/3
 
  However I think this is exactly the kind of forward breaks that we
  want
  to prevent with grenade, as cinder failing to function after a
  rolling
  upgrade because a config item wasn't added is exactly the kind
  of pain
  we are trying to prevent happening to ops.
 
  So the question is, how is this done correctly so that a default
  can be
  set in the cinder code for this value, and it not require a config
  change to work?
 
  You're absolutely correct, in principle - if the default value for
  lock_path worked for users before, we should absolutely continue to
  support it.
 
  I don't know that I have a good answer on how to handle this,
  but for
  context this change is the result of a nasty bug in lockutils that
  meant
  external locks were doing nothing if lock_path wasn't set. 
  Basically
  it's something we should never have allowed in the first place.
 
  As far as setting this in code, it's important that all of the
  processes
  for a service are using the same value to avoid the same bad
  situation
  we were in before.  For tests, we have a lockutils wrapper
  (https://github.com/openstack/oslo-incubator/blob/master/openstack/common/lockutils.py#L282)
 
  that sets an environment variable to address this, but that only
  works if all of the processes are going to be spawned from within
  the same wrapper, and I'm not sure how secure that is for
  production
  deployments since it puts all of the lock files in a temporary
  directory.
 
  Right, I don't think the previous default really worked - if
  you used
  the default, then external locking was broken.
 
  I suspect most distros do set a default - I see RDO has this in its
  default nova.conf:
 
lock_path = /var/lib/nova/tmp
 
  So, yes - this is all terrible.
 
  IMHO, rather than raise an exception we should log a big fat warning
  about relying on the default and perhaps just treat the lock as an
  in-process lock in that case ... since that's essentially what it
  was
  before, right?
 
  So a default of lock_path = /tmp will work (FHS says that path has
  to be
  there), even if not optimal. Could we make it a default value like
  that
  instead of the current default which is null (and hence the problem).
 
  IIRC, my initial fix was something similar to that, but it got shot
  down
  because putting the lock files in a known world writeable location
  was a
  security issue.
 
  Although maybe if we put them in a subdirectory of /tmp and ensured
  that
  the permissions were such that only the user running the service could
  use that directory, it might be acceptable?  We could still log a
  warning if we wanted.
 
  This seems like it would have implications for people running services
  on Windows too, but we can probably find a way to make that work if we
  decide on a solution.
 
  How is that a security issue? Are the lock files being written with
  some
  sensitive data in them and have g or o permissions on? The sticky bit
  (+t) on /tmp will prevent other users from deleting the file.
 
 
  Right, but it won't prevent users from creating a symlink with the same
  name.
 
  ln -s /var/lib/nova/instances/x/image.raw /tmp/well.known.location
 
  Now when you do
 
  with open('/tmp/well.known.location', 'w') as lockfile:
lockfile.write('Stuff')
 
  Nova has just truncated the image file and written 'Stuff' to it.
 
  So that's the generic case (and the way people often write this). But
  the oslo lockutils code doesn't work that way. While it does open the
  file for write, it does not actually write, it's using fcntl to hold
  locks. That's taking a data structure on the fd in kernel memory (IIRC),
  so it correctly gives it up if the process crashes.
 
  I'm not saying there isn't some other possible security vulnerability
  here as well, but it's not jumping out at me. So I'd love to understand
  that, because if we can close that exposure we can provide a working
  default, plus a strong recommendation for how to do that *right*. I'd be
  totally happy with printing WARNING level at startup if lock_path = /tmp
  that this should be adjusted.
  
  Full disclosure: I don't claim to be a security expert, so take my
  

Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread Tiwari, Arvind
Thanks David,

I appended line #119 with my reply. "endpoint" sounds perfect to me.

In a nutshell, we are agreeing on the following new data model for role-def:

{
  "role": {
    "id": "76e72a",
    "name": "admin",          (you can give whatever name you like)
    "scope": {
      "id": "---id--",        (the ID should be 1-to-1 mapped with the 
resource in "type" and must be an immutable value)
      "type": "service | file | domain etc.",   (the type can be any type of 
resource which explains the scoping context)
      "endpoint": "--endpoint--"   (an optional field to indicate the 
interface of the resource (endpoint for a service, path for a file, ...) for 
which the role-def is created)
    }
  }
}
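To make the agreement concrete, here is a sketch of a validator for this payload. The field names follow the model above; the allowed scope types are an assumption for the example, since the model deliberately leaves the set open:

```python
KNOWN_SCOPE_TYPES = {'service', 'file', 'domain'}  # illustrative, not exhaustive

def validate_role_def(payload):
    """Return a list of problems with a role-def payload; empty means OK."""
    role = payload.get('role', {})
    scope = role.get('scope', {})
    errors = []
    if not role.get('id'):
        errors.append('role.id is required')
    if not role.get('name'):
        errors.append('role.name is required')
    if not scope.get('id'):
        errors.append('scope.id is required and must be immutable')
    if scope.get('type') not in KNOWN_SCOPE_TYPES:
        errors.append('scope.type must describe the scoping resource')
    # scope.endpoint is optional by design.
    return errors

example = {'role': {'id': '76e72a', 'name': 'admin',
                    'scope': {'id': 'svc-001', 'type': 'service',
                              'endpoint': 'http://glance.example.com:9292'}}}
print(validate_role_def(example))   # -> []
```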

If other community members are cool with this, shall I start drafting the API 
specs?


Regards,
Arvind


-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk] 
Sent: Wednesday, December 04, 2013 11:42 AM
To: Tiwari, Arvind; Adam Young
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] Service scoped role definition



On 04/12/2013 17:28, Tiwari, Arvind wrote:
 Hi David,
 
 Thanks for your valuable comments.
 
 I have updated
 https://etherpad.openstack.org/p/service-scoped-role-definition line
 #118 explaining the rationale behind the field.

#119 for my reply

 
 I wd also appreciate your thoughts on
 https://etherpad.openstack.org/p/1Uiwcbfpxq too,

I have added a comment to the original bug report -
https://bugs.launchpad.net/keystone/+bug/968696

I think you should be going for simplifying Keystone's RBAC model rather
than making it more complex. In essence this would mean that assigning
permissions to roles and users to roles are separate and independent
processes and that roles on creation do not have to have any baggage or
restrictions tied to them. Here are my suggestions:

1. Allow different entities to create roles, and use hierarchical role
naming to maintain global uniqueness and to show which entity created
(owns) the role definition. Creating a role does not imply anything
about a role's subsequent permissions unless a scope field is included
in the definition.

2. When a role is created allow the creator to optionally add a scope
field which will limit the permissions that can be assigned to the role
to the prescribed scope.

3. Permissions will be assigned to roles in policy files by resource
owners. They can assign any permissions for their resources to the role
that they want to, except that they cannot override the scope field (i.e.
grant permissions to resources which are out of the role's scope).

4. Remove any linkage of roles to tenants/projects on creation. This is
unnecessary baggage and only complicates the model for no good
functional reason.

regards

David


 which is support
 https://blueprints.launchpad.net/keystone/+spec/service-scoped-tokens
 BP.
 
 
 Thanks, Arvind
 
 -Original Message- From: David Chadwick
 [mailto:d.w.chadw...@kent.ac.uk] Sent: Wednesday, December 04, 2013
 2:16 AM To: Tiwari, Arvind; OpenStack Development Mailing List (not
 for usage questions); Adam Young Subject: Re: [openstack-dev]
 [keystone] Service scoped role definition
 
 I have added comments 111 to 122
 
 david
 
 On 03/12/2013 23:58, Tiwari, Arvind wrote:
 Hi David,
 
 I have added my comments underneath line # 97 till line #110, it is
 mostly aligned with your proposal with some modification.
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 
 Thanks for your time, Arvind
 
 
 
 -Original Message- From: Tiwari, Arvind Sent: Monday,
 December 02, 2013 4:22 PM To: Adam Young; OpenStack Development
 Mailing List (not for usage questions); David Chadwick Subject: Re:
 [openstack-dev] [keystone] Service scoped role definition
 
 Hi Adam and David,
 
 Thank you so much for all the great comments, seems we are making
 good progress.
 
 I have replied to your comments and also added some to support my
 proposal
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 David, I like your suggestion for role-def scoping which can fit in
 my Plan B and I think Adam is cool with plan B.
 
 Please let me know if David's proposal for role-def scoping is cool
 for everybody?
 
 
 Thanks, Arvind
 
 -Original Message- From: Adam Young
 [mailto:ayo...@redhat.com] Sent: Wednesday, November 27, 2013 8:44
 AM To: Tiwari, Arvind; OpenStack Development Mailing List (not for
 usage questions) Cc: Henry Nash; dolph.math...@gmail.com; David
 Chadwick Subject: Re: [openstack-dev] [keystone] Service scoped
 role definition
 
 
 
 On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:
 Hi Adam,
 
 Based on our discussion over IRC, I have updated the below
 etherpad with proposal for nested role definition
 
 Updated.  I made my changes Green.  It isn't easy being green.
 
 
 https://etherpad.openstack.org/p/service-scoped-role-definition
 
 Please take a look @ Proposal (Ayoung) - Nested role
 definitions, I am sorry if I could not catch your idea.
 
 Feel 

Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Lyle, David
On 5 December 2013 12:10, Robert Collins robe...@robertcollins.net wrote:

-snip-

 
 That said, perhaps we should review these projects.
 
 Tuskar as an API to drive deployment and ops clearly belongs in
 TripleO - though we need to keep pushing features out of it into more
 generalised tools like Heat, Nova and Solum. TuskarUI though, as far
 as I know all the other programs have their web UI in Horizon itself -
 perhaps TuskarUI belongs in the Horizon program as a separate code
 base for now, and merge them once Tuskar begins integration?


This sounds reasonable to me.  The code base for TuskarUI is building on 
Horizon and we are planning on integrating TuskarUI into Horizon once TripleO 
is part of the integrated release.  The review skills and focus for TuskarUI are 
certainly more consistent with Horizon than with the rest of the TripleO program.
 
 -Rob
 
 

-David



[openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Paul Montgomery
TLDR: Should Solum log a warning if operators do not use the InnoDB
storage engine with MySQL in Solum's control plane?


Details:

I was looking at: https://review.openstack.org/#/c/57024/
Models.py to be specific.

The default storage engine is InnoDB for MySQL, which is good.  I took a
quick look at the storage engines and only InnoDB seems reasonable for the
Solum control plane (it is ACID compliant).  I assume that we'll all be
coding towards an ACID-compliant database for performance (not having to
revalidate database writes and consistency and such) and ease of
development.

If all of that is true, should we log a warning to the operator that they
are using an untested and potentially problematic storage engine (which in
a worst-case scenario can corrupt their data)?  Should we even enable an
operator to change the storage engine through configuration?  I think
enabling that configuration is fine as long as we make sure that the
operator knows that they are on their own with this unsupported
configuration, but I welcome thoughts from the group on this topic.
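A sketch of what such a startup check could look like with SQLAlchemy — querying information_schema on MySQL and warning per table, and a harmless no-op on other backends. The function name and message are illustrative, not proposed Solum code:

```python
import logging

from sqlalchemy import create_engine, text

LOG = logging.getLogger(__name__)

def warn_on_non_innodb_tables(engine):
    """Return (and log) control-plane tables not backed by InnoDB.

    Only meaningful on MySQL; other dialects return an empty list.
    """
    if engine.dialect.name != 'mysql':
        return []
    suspect = []
    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT table_name, engine FROM information_schema.tables "
            "WHERE table_schema = DATABASE()"))
        for table_name, storage_engine in rows:
            if (storage_engine or '').lower() != 'innodb':
                suspect.append(table_name)
                LOG.warning('Table %s uses storage engine %s; only InnoDB '
                            'is tested and supported', table_name,
                            storage_engine)
    return suspect
```

Such a check would run once at service startup, after migrations, so the operator sees the warning before taking traffic.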




Re: [openstack-dev] [Neutron][LBaaS] Vendor feedback needed

2013-12-04 Thread Ivar Lazzaro
Hi Eugene,

Right now (before the model change discussed during the summit) our 
implementation, with regard to your question, can be summarized as follows:


-  Whenever a Pool is created, a port on the specific subnet/network is 
created and associated to it;

-  Whenever a VIP is associated to the Pool, our Service Manager places 
the interfaces of a newly created appliance based on the information obtained 
from the VIP's and Pool's ports (i.e. network_type and segmentation_id);

-  In order to wire the VIP with an external network, a Floating IP can 
be associated to its port (or the VIP created ON the external network itself);

-  VIP and Pool on the same network/subnet is supported as well.


The new model (LoadBalancer object) would change how and when the Load Balancer 
appliance is created, but the way the interfaces are placed into the networks 
remains the same.
I hope this answers your question.

Regards,
Ivar.

From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Wednesday, December 04, 2013 7:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Vendor feedback needed

Hi Eugene,

We currently support out-of-the-box VIP and Nodes on the same network.
The VIP can be associated with a floating IP if access is needed from the 
external network.

We are considering other options but will address as we get to this.

Regards,
-Sam.


From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, December 04, 2013 1:14 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Vendor feedback needed

Hi load balancing vendors!

I have a specific question: how are drivers for your solutions 
(devices/vms/processes) going to wire a VIP to external and tenant 
networks?
As we're working on creating a suite for third-party testing, we would like to 
make sure that the scenarios we create fit the usage patterns of all providers, 
if that is possible at all.
If it is not possible, we need to think of a more comprehensive LBaaS API and 
tests.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Clayton Coleman


- Original Message -
 TLDR: Should Solum log a warning if operators do not use the InnoDB
 storage engine with MySQL in Solum's control plane?
 
 
 Details:
 
 I was looking at: https://review.openstack.org/#/c/57024/
 Models.py to be specific.
 
 The default storage engine is InnoDB for MySQL which is good.  I took a
 quick look at the storage engines and only InnoDB seems reasonable for the
 Solum control plane (it is ACID compliant).  I assume that we'll all be
 coding towards an ACID compliant database for performance (not having to
 revalidate database writes and consistency and such) and ease of
 development.
 
 If all of that is true, should we log a warning to the operator that they
 are using an untested and potentially problematic storage engine (which in
 a worst case scenario can corrupt their data)?  Should we even enable an
 operator to change the storage engine through configuration?  I think
 enabling that configuration is fine as long as we make sure that the
 operator knows that they are on their own with this unsupported
 configuration but I welcome thoughts from the group on this topic.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

I'd be a +1 to InnoDB only until such a time as someone demonstrates a need.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] Core reviewer update Dec

2013-12-04 Thread Keith Basil
On Dec 4, 2013, at 2:44 PM, Lyle, David wrote:

 On 5 December 2013 12:10, Robert Collins robe...@robertcollins.net wrote:
 
 -snip-
 
 
 That said, perhaps we should review these projects.
 
 Tuskar as an API to drive deployment and ops clearly belongs in
 TripleO - though we need to keep pushing features out of it into more
 generalised tools like Heat, Nova and Solum. TuskarUI though, as far
 as I know all the other programs have their web UI in Horizon itself -
 perhaps TuskarUI belongs in the Horizon program as a separate code
 base for now, and merge them once Tuskar begins integration?
 
 
 This sounds reasonable to me.  The code base for TuskarUI is building on 
 Horizon and we are planning on integrating TuskarUI into Horizon once TripleO 
 is part of the integrated release.  The review skills and focus for TuskarUI 
are certainly more consistent with Horizon than with the rest of the TripleO 
 program.

Focus is needed on the Horizon bits to ensure that we have something to build on
for the operator side of a deployment.  One possible concern here is that the
Horizon folk won't understand what's driving the UI need for Tuskar.

-k


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Clint Byrum
Excerpts from Paul Montgomery's message of 2013-12-04 12:04:06 -0800:
 TLDR: Should Solum log a warning if operators do not use the InnoDB
 storage engine with MySQL in Solum's control plane?
 
 
 Details:
 
 I was looking at: https://review.openstack.org/#/c/57024/
 Models.py to be specific.
 
 The default storage engine is InnoDB for MySQL which is good.  I took a
 quick look at the storage engines and only InnoDB seems reasonable for the
 Solum control plane (it is ACID compliant).  I assume that we'll all be
 coding towards an ACID compliant database for performance (not having to
 revalidate database writes and consistency and such) and ease of
 development.
 
 If all of that is true, should we log a warning to the operator that they
 are using an untested and potentially problematic storage engine (which in
 a worst case scenario can corrupt their data)?  Should we even enable an
 operator to change the storage engine through configuration?  I think
 enabling that configuration is fine as long as we make sure that the
 operator knows that they are on their own with this unsupported
 configuration but I welcome thoughts from the group on this topic.
 

Just assume MyISAM _does not exist_. It is 2013 for crying out loud.

If somebody accidentally uses MyISAM, point at them and laugh, but then
do help them pick up the pieces when it breaks.

In all seriousness, if you can force the engine to InnoDB, do that.
Otherwise, just ignore this. We are all consenting adults here and if
people can't RTFM on MySQL, they shouldn't be storing data in it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Monty Taylor


On 12/04/2013 03:25 PM, Clint Byrum wrote:
 Excerpts from Paul Montgomery's message of 2013-12-04 12:04:06 -0800:
 TLDR: Should Solum log a warning if operators do not use the InnoDB
 storage engine with MySQL in Solum's control plane?


 Details:

 I was looking at: https://review.openstack.org/#/c/57024/
 Models.py to be specific.

 The default storage engine is InnoDB for MySQL which is good.  I took a
 quick look at the storage engines and only InnoDB seems reasonable for the
 Solum control plane (it is ACID compliant).  I assume that we'll all be
 coding towards an ACID compliant database for performance (not having to
 revalidate database writes and consistency and such) and ease of
 development.

 If all of that is true, should we log a warning to the operator that they
 are using an untested and potentially problematic storage engine (which in
 a worst case scenario can corrupt their data)?  Should we even enable an
 operator to change the storage engine through configuration?  I think
 enabling that configuration is fine as long as we make sure that the
 operator knows that they are on their own with this unsupported
 configuration but I welcome thoughts from the group on this topic.

 
 Just assume MyISAM _does not exist_. It is 2013 for crying out loud.
 
 If somebody accidentally uses MyISAM, point at them and laugh, but then
 do help them pick up the pieces when it breaks.
 
 In all seriousness, if you can force the engine to InnoDB, do that.
 Otherwise, just ignore this. We are all consenting adults here and if
  people can't RTFM on MySQL, they shouldn't be storing data in it.

+1000

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] recheck bug # for the django fail

2013-12-04 Thread Sean Dague
For people whose patches got killed earlier today because django 1.6 was
allowed into global requirements, please use the following bug for
recheck - https://bugs.launchpad.net/horizon/+bug/1257885

The horizon team is working towards 1.6 compatibility, but it's not
there yet. This will be a good tracking artifact for us getting there.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystoneclient] [Keystone] Last released version of keystoneclient does not work with python33

2013-12-04 Thread Adrian Otto
Dolph,

Is anyone already focusing on py33 compatibility for python-keystoneclient? Has 
that effort been scoped? I'd like to judge whether it's reasonable to expect us 
to patch it up to be compatible in the near term, or relax our expectations. 
For Solum, we are trying to make all our code py33 compatible from the start, 
so we take it seriously when the py33 gate fails. Please advise.

Thanks,

Adrian

On Dec 4, 2013, at 12:24 PM, Dolph Mathews 
dolph.math...@gmail.commailto:dolph.math...@gmail.com
 wrote:


On Wed, Dec 4, 2013 at 1:26 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.commailto:gokrokvertsk...@mirantis.com wrote:
Hi,

I have failed tests in gate-solum-python33 because keystoneclient fails to 
import xmlrpclib.
The exact error is:
File 
/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/keystoneclient/openstack/common/jsonutils.py,
 line 42, in module
2013-11-28 18:27:12.655 | import xmlrpclib
2013-11-28 18:27:12.655 | ImportError: No module named 'xmlrpclib'

This issue appeared because xmlrpclib was renamed in python33.
Is there any plan to release a new version of keystoneclient with the fix for 
that issue? As I see it is fixed in master.

If there is no new release for keystoneclient can you recommend any workaround 
for this issue?
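For code we control, the usual shim for this rename, until a fixed keystoneclient release is available, looks like the following sketch; note it does not help with the copy vendored inside keystoneclient itself:

```python
# Py2/Py3 compatible import: xmlrpclib was renamed to xmlrpc.client
# in Python 3, so fall back accordingly.
try:
    import xmlrpclib  # Python 2
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3

# Either way, the same API is then available under the old name.
marshalled = xmlrpclib.dumps((42,))
```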


I'd be happy to make a release of keystoneclient, but I don't believe that's the 
only issue with python 3 at the moment (at least on the CLI?). For example:

  https://bugs.launchpad.net/python-keystoneclient/+bug/1249165

In the current master, the above issue is only reproducible after syncing oslo 
(otherwise it fails on yet another python 3 incompatibility).

Thanks
Georgy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Service scoped role definition

2013-12-04 Thread Adam Young

On 12/04/2013 12:40 PM, David Chadwick wrote:

Hi Adam

I understand your problem: when projects and services have the same
name, the lineage of a role containing this name is not
deterministically known without some other rule or syntax that can
differentiate between the two.

Since domains contain projects, which contain services, isn't the
containment hierarchy already known and predetermined? If it is then:

4 name components mean it is a service specified role
3 name components mean it is a project specified role
2 name components mean it is a domain specified role
1 name component means it is globally named role (from the default domain)

a null string means the default domain or all projects in a domain. You
would never have null for a service name.

admin means the global admin role
/admin ditto
x/admin means the admin of the X domain
x/y/admin means the admin role for the y project in domain x
//x/admin means admin for service x from the default domain
etc.
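As a purely illustrative sketch (not actual Keystone code), the parsing rule above would be something like:

```python
def parse_role(name):
    """Parse a hierarchically named role per the component-count rule.

    The last '/'-separated component is the role name; the components
    before it fill domain, project and service, in that order.  An
    empty component means "default" (mapped here to None).
    """
    parts = name.split('/')
    result = {'role': parts[-1], 'domain': None,
              'project': None, 'service': None}
    for label, value in zip(('domain', 'project', 'service'), parts[:-1]):
        result[label] = value or None  # '' -> default
    return result
```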

will that work?

Very clean.  Yes. That will work.



regards

David


On 04/12/2013 15:04, Adam Young wrote:

On 12/04/2013 04:08 AM, David Chadwick wrote:

I am happy with this as far as it goes. I would like to see it being
made more general, where domains, services and projects can also own and
name roles

Domains should be OK, but services would confuse the matter.  You'd have
to end up with something like LDAP

role=  domain=default,service=glance

vs

role=  domain=default,project=glance

unless we have unambiguous implicit ordering, we'll need to make it
explicit, which is messy.

I'd rather do:

One segment: globally defined roles.  These could also be considered
roles defined in the default domain.
Two segments service defined roles in the default domain
Three Segments, service defined roles from non-default domain

To do domain scoped roles we could do something like:

domX//admin


But It seems confusing.

Perhaps a better approach for project roles is to have the rule that the
default domain can show up as an empty string.  Thus, project scoped
roles from the default domain  would be:

\glance\admin

and from a non default domain

domX\glance\admin








regards

David


On 04/12/2013 01:51, Adam Young wrote:

I've been thinking about your comment that nested roles are confusing


What if we backed off and said the following:


Some role-definitions are owned by services.  If a Role definition is
owned by a service, then in role assignment lists in tokens those roles will
be prefixed by the service name.  / is a reserved character and will be
used as the divider between segments of the role definition.

That drops arbitrary nesting, and provides a reasonable namespace.  Then
a role def would look like:

glance/admin  for the admin role on the glance project.



In theory, we could add the domain to the namespace, but that seems
unwieldy.  If we did, a role def would then look like this


default/glance/admin  for the admin role on the glance project.

Is that clearer than the nested roles?



On 11/26/2013 06:57 PM, Tiwari, Arvind wrote:

Hi Adam,

Based on our discussion over IRC, I have updated the below etherpad
with proposal for nested role definition

https://etherpad.openstack.org/p/service-scoped-role-definition

Please take a look @ Proposal (Ayoung) - Nested role definitions, I
am sorry if I could not catch your idea.

Feel free to update the etherpad.

Regards,
Arvind


-Original Message-
From: Tiwari, Arvind
Sent: Tuesday, November 26, 2013 4:08 PM
To: David Chadwick; OpenStack Development Mailing List
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi David,

Thanks for your time and valuable comments. I have replied to your
comments and try to explain why I am advocating to this BP.

Let me know your thoughts, please feel free to update below etherpad
https://etherpad.openstack.org/p/service-scoped-role-definition

Thanks again,
Arvind

-Original Message-
From: David Chadwick [mailto:d.w.chadw...@kent.ac.uk]
Sent: Monday, November 25, 2013 12:12 PM
To: Tiwari, Arvind; OpenStack Development Mailing List
Cc: Henry Nash; ayo...@redhat.com; dolph.math...@gmail.com; Yee, Guang
Subject: Re: [openstack-dev] [keystone] Service scoped role definition

Hi Arvind

I have just added some comments to your blueprint page

regards

David


On 19/11/2013 00:01, Tiwari, Arvind wrote:

Hi,

   Based on our discussion at the design summit, I have redone the
service_id
binding with roles BP
https://blueprints.launchpad.net/keystone/+spec/serviceid-binding-with-role-definition.


I have added a new BP (link below) along with detailed use case to
support this BP.

https://blueprints.launchpad.net/keystone/+spec/service-scoped-role-definition



Below etherpad link has some proposals for Role REST representation
and
pros and cons analysis

   https://etherpad.openstack.org/p/service-scoped-role-definition

   Please take look and let me know your thoughts.

   It would be awesome if we can 

Re: [openstack-dev] [keystone][py3] Usage of httpretty

2013-12-04 Thread Dolph Mathews
Looks like there's some recent progress here:

  https://github.com/gabrielfalcao/HTTPretty/pull/124

On Wed, Nov 20, 2013 at 9:30 PM, Morgan Fainberg m...@metacloud.com wrote:

 I'd be more willing to toss in and help to maintain/fix appropriately
 on StackForge if that is needed.  Though I am very much hoping
 upstream can be used.

 Cheers,
 Morgan Fainberg

 On Wed, Nov 20, 2013 at 7:21 PM, Chuck Short chuck.sh...@canonical.com
 wrote:
  Hi,
 
   So maybe if it gets to the point where it gets to be much of a problem
 we
  should just put it on stackforge.
 
  Regards
  chuck
 
 
  On Wed, Nov 20, 2013 at 9:08 PM, Jamie Lennox jamielen...@redhat.com
  wrote:
 
  Chuck,
 
  So it is being used to handle stubbing returns from requests and httplib
  rather than having to have fake handlers in place in our testing code,
  or stubbing out the request library and continually having to update the
  arguments being passed to keep the mocks working. From my looking around
  it is the best library for this sort of job.
 
  When I evaluated it for keystoneclient upstream
  (https://github.com/gabrielfalcao/HTTPretty/ ) was quickly responsive
  and had CI tests that seemed to be checking python 3 support. I haven't
  seen as much happening recently as there are pull requests upstream for
  python 3 fixes that just don't seem to be moving anywhere. The CI for
  python 3 was also commented out at some point.
 
  It also turns out to be a PITA to package correctly. I attempted this
  for fedora, and i know there was someone attempting the same for gentoo.
  I have a pull request upstream that would at least get the dependencies
  under control.
 
  I do not want to go back to stubbing the request library, or having a
  fake client path that is only used in testing. However I have also
  noticed it is the cause of at least some of our python 3 problems.
 
  If there are other libraries out there that can do the same job we
  should consider them though i am holding some hope for upstream.
 
 
  Jamie
 
 
  On Wed, 2013-11-20 at 14:27 -0800, Morgan Fainberg wrote:
   Chuck,
  
   The reason to use httpretty is that it handles everything at the
   socket layer, this means if we change out urllib for requests or some
    other transport to make HTTP requests, we don't need to refactor
    every one of the mock/mox stubouts to match the exact set of parameters
   to be passed.  Httpretty makes managing this significantly easier
   (hence was the reasoning to move towards it).  Though, I'm sure Jamie
   Lennox can provide more insight into deeper specifics as he did most
   of the work to convert it.
  
   At least the above is my understanding of the reasoning.
  
   --Morgan
  
   On Wed, Nov 20, 2013 at 2:08 PM, Dolph Mathews 
 dolph.math...@gmail.com
   wrote:
I don't have a great answer -- do any projects depend on it other
 than
python-keystoneclient? I'm happy to see it removed -- I see the
immediate
benefit but it's obviously not significant relative to python 3
support.
   
BTW, this exact issue is being tracked here-
https://bugs.launchpad.net/python-keystoneclient/+bug/1249165
   
   
   
   
On Wed, Nov 20, 2013 at 3:28 PM, Chuck Short
chuck.sh...@canonical.com
wrote:
   
Hi,
   
 I was wondering about the reason behind the usage of httpretty in
 python-keystoneclient. It seems to me like total overkill for a
test. It
also has some problems with python3 support that is currently
blocking
python3 porting as well.
   
Regards
chuck
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
   
   
--
   
-Dolph
   
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Adrian Otto

On Dec 4, 2013, at 12:32 PM, Monty Taylor mord...@inaugust.com wrote:

 On 12/04/2013 03:25 PM, Clint Byrum wrote:
 Excerpts from Paul Montgomery's message of 2013-12-04 12:04:06 -0800:
 TLDR: Should Solum log a warning if operators do not use the InnoDB
 storage engine with MySQL in Solum's control plane?
 
 
 Details:
 
 I was looking at: https://review.openstack.org/#/c/57024/
 Models.py to be specific.
 
 The default storage engine is InnoDB for MySQL which is good.  I took a
 quick look at the storage engines and only InnoDB seems reasonable for the
 Solum control plane (it is ACID compliant).  I assume that we'll all be
 coding towards an ACID compliant database for performance (not having to
 revalidate database writes and consistency and such) and ease of
 development.
 
 If all of that is true, should we log a warning to the operator that they
 are using an untested and potentially problematic storage engine (which in
 a worst case scenario can corrupt their data)?  Should we even enable an
 operator to change the storage engine through configuration?  I think
 enabling that configuration is fine as long as we make sure that the
 operator knows that they are on their own with this unsupported
 configuration but I welcome thoughts from the group on this topic.
 
 
 Just assume MyISAM _does not exist_. It is 2013 for crying out loud.
 
 If somebody accidentally uses MyISAM, point at them and laugh, but then
 do help them pick up the pieces when it breaks.
 
 In all seriousness, if you can force the engine to InnoDB, do that.
 Otherwise, just ignore this. We are all consenting adults here and if
  people can't RTFM on MySQL, they shouldn't be storing data in it.
 
 +1000

So are you suggesting we have a bit of database code in Solum that would 
quickly check the Engine of each table upon startup. Something like:

SHOW TABLE STATUS LIKE '%solum%';

…and iterate the Engine column looking for anything not InnoDB, and logging a 
warning error if other values are found?

Or, are you suggesting that we just trust people not to be fools, and leave 
this subject alone completely?
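For concreteness, the first option might be sketched like this (hypothetical helper, not existing Solum code; the iteration step is split out from the MySQL query so it can be exercised on its own):

```python
def non_innodb_tables(status_rows):
    """Return the names of tables whose storage engine is not InnoDB.

    status_rows is an iterable of (Name, Engine) pairs, e.g. the first
    two columns of:  SHOW TABLE STATUS LIKE '%solum%';
    fetched once at service startup.
    """
    return [name for name, engine in status_rows if engine != 'InnoDB']

# A caller would then log a warning for each offender, e.g.:
#   for name in non_innodb_tables(rows):
#       LOG.warning("table %s is not using InnoDB; this storage engine "
#                   "is untested and unsupported", name)
```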

Thanks,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Moving the QA meeting time

2013-12-04 Thread Matthew Treinish
Hi everyone,

I'm looking at changing our weekly QA meeting time to make it more globally
attendable. Right now the current time of 17:00 UTC doesn't really work for
people who live in Asia Pacific timezones. (which includes a third of the
current core review team) There are 2 approaches that I can see taking here:

 1. We could either move the meeting time later so that it makes it easier for
people in the Asia Pacific region to attend.

 2. Or we move to a alternating meeting time, where every other week the meeting
time changes. So we keep the current slot and alternate with something more
friendly for other regions.

I think trying to stick to a single meeting time would be a better call just for
simplicity. But it gets difficult to appease everyone that way, which is where
the appeal of the 2nd approach comes in.

Looking at the available time slots here: 
https://wiki.openstack.org/wiki/Meetings
there are plenty of open slots before 1500 UTC which would be early for people 
in
the US and late for people in the Asia Pacific region. There are plenty of slots
starting at 2300 UTC which is late for people in Europe.

Would something like 2200 UTC on Wed. or Thurs work for everyone?

What are people's opinions on this?

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Paul Czarkowski


On 12/4/13 3:06 PM, Adrian Otto adrian.o...@rackspace.com wrote:


On Dec 4, 2013, at 12:32 PM, Monty Taylor mord...@inaugust.com wrote:

 On 12/04/2013 03:25 PM, Clint Byrum wrote:
 Excerpts from Paul Montgomery's message of 2013-12-04 12:04:06 -0800:
 TLDR: Should Solum log a warning if operators do not use the InnoDB
 storage engine with MySQL in Solum's control plane?
 
 
 Details:
 
 I was looking at: https://review.openstack.org/#/c/57024/
 Models.py to be specific.
 
 The default storage engine is InnoDB for MySQL which is good.  I took
a
 quick look at the storage engines and only InnoDB seems reasonable
for the
 Solum control plane (it is ACID compliant).  I assume that we'll all
be
 coding towards an ACID compliant database for performance (not having
to
 revalidate database writes and consistency and such) and ease of
 development.
 
 If all of that is true, should we log a warning to the operator that
they
 are using an untested and potentially problematic storage engine
(which in
 a worst case scenario can corrupt their data)?  Should we even enable
an
 operator to change the storage engine through configuration?  I think
 enabling that configuration is fine as long as we make sure that the
 operator knows that they are on their own with this unsupported
 configuration but I welcome thoughts from the group on this topic.
 
 
 Just assume MyISAM _does not exist_. It is 2013 for crying out loud.
 
 If somebody accidentally uses MyISAM, point at them and laugh, but then
 do help them pick up the pieces when it breaks.
 
 In all seriousness, if you can force the engine to InnoDB, do that.
 Otherwise, just ignore this. We are all consenting adults here and if
  people can't RTFM on MySQL, they shouldn't be storing data in it.
 
 +1000

So are you suggesting we have a bit of database code in Solum that would
quickly check the Engine of each table upon startup. Something like:

SHOW TABLE STATUS LIKE '%solum%';

…and iterate the Engine column looking for anything not InnoDB, and
logging a warning error if other values are found?

Or, are you suggesting that we just trust people not to be fools, and
leave this subject alone completely?

Thanks,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I think if we're abstracting the database via a library like SQLAlchemy
then we should assume the person at the other end knows how to configure a
database.  Otherwise, do we have to do the same for every database
supported by that library?

If an operator can manage to install and run a production OpenStack,
plus Solum, we should be fairly confident they're able to set up a good
MySQL.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Paul Montgomery
With the comments that we've received, I would recommend that Clayton
remove the option to select the MySQL storage engine from Oslo config and
just hardcode InnoDB.  I believe it entails just removing a few lines of
code.  It doesn't sound like anyone has a reason to use any other storage
engine at this point.  Sound good?  I can update my review if that is the
path that we want to take.

(You can see the area in the code at this link under my comment:
https://review.openstack.org/#/c/57024/7/solum/objects/sqlalchemy/models.py
)
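For reference, pinning the engine at the table level is just a dialect-specific keyword argument in SQLAlchemy, and non-MySQL backends simply ignore it, so no configuration option is needed. An illustrative table definition (not Solum's actual schema):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

# Illustrative table.  mysql_engine pins the storage engine in the
# DDL generated for MySQL; other backends ignore the kwarg.
application = Table(
    'solum_application', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
    mysql_engine='InnoDB',
)
```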



On 12/4/13 3:06 PM, Adrian Otto adrian.o...@rackspace.com wrote:


On Dec 4, 2013, at 12:32 PM, Monty Taylor mord...@inaugust.com wrote:

 On 12/04/2013 03:25 PM, Clint Byrum wrote:
 Excerpts from Paul Montgomery's message of 2013-12-04 12:04:06 -0800:
 TLDR: Should Solum log a warning if operators do not use the InnoDB
 storage engine with MySQL in Solum's control plane?
 
 
 Details:
 
 I was looking at: https://review.openstack.org/#/c/57024/
 Models.py to be specific.
 
 The default storage engine is InnoDB for MySQL which is good.  I took
a
 quick look at the storage engines and only InnoDB seems reasonable
for the
 Solum control plane (it is ACID compliant).  I assume that we'll all
be
 coding towards an ACID compliant database for performance (not having
to
 revalidate database writes and consistency and such) and ease of
 development.
 
 If all of that is true, should we log a warning to the operator that
they
 are using an untested and potentially problematic storage engine
(which in
 a worst case scenario can corrupt their data)?  Should we even enable
an
 operator to change the storage engine through configuration?  I think
 enabling that configuration is fine as long as we make sure that the
 operator knows that they are on their own with this unsupported
 configuration but I welcome thoughts from the group on this topic.
 
 
 Just assume MyISAM _does not exist_. It is 2013 for crying out loud.
 
 If somebody accidentally uses MyISAM, point at them and laugh, but then
 do help them pick up the pieces when it breaks.
 
 In all seriousness, if you can force the engine to InnoDB, do that.
 Otherwise, just ignore this. We are all consenting adults here and if
  people can't RTFM on MySQL, they shouldn't be storing data in it.
 
 +1000

So are you suggesting we have a bit of database code in Solum that would
quickly check the Engine of each table upon startup. Something like:

SHOW TABLE STATUS LIKE '%solum%';

…and iterate the Engine column looking for anything not InnoDB, and
logging a warning error if other values are found?

Or, are you suggesting that we just trust people not to be fools, and
leave this subject alone completely?

Thanks,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Unicode strings in Python3

2013-12-04 Thread Georgy Okrokvertskhov
Hi,

I am working on unit tests for Solum; as a side effect of the new unit tests I
found that we use unicode strings in a way which is not compatible with
python3.

Here is an exception from the python3 gate:
Server-side error: global name 'unicode' is not defined. Detail:
2013-12-04 Traceback (most recent call last): File
/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py
 result = f(self, *args, **kwargs)
 File ./solum/api/controllers/v1/assembly.py, line 59, in get
raise wsme.exc.ClientSideError(unicode(error))
NameError: global name 'unicode' is not defined

Here is a documentation for python3:
http://docs.python.org/3.0/whatsnew/3.0.html

Quick summary: you can't use the unicode() function in Python 3 (and u'' string literals were only reinstated in 3.3).
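Until the code moves to a portable alias such as six.text_type, a minimal shim looks like this (sketch only, not actual Solum code):

```python
import sys

# On Python 3 the str type is already unicode; on Python 2 the
# builtin unicode() is needed.  six.text_type provides this same alias.
if sys.version_info[0] >= 3:
    text_type = str
else:
    text_type = unicode  # noqa: F821 -- name only exists on Python 2

def client_error_message(error):
    # Replacement for the failing: wsme.exc.ClientSideError(unicode(error))
    return text_type(error)
```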

Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Carl Baldwin
Sorry to have taken the discussion on a slight tangent.  I meant only
to offer the solution as a stop-gap.  I agree that the fundamental
problem should still be addressed.

On Tue, Dec 3, 2013 at 8:01 PM, Maru Newby ma...@redhat.com wrote:

 On Dec 4, 2013, at 1:47 AM, Stephen Gran stephen.g...@theguardian.com wrote:

 On 03/12/13 16:08, Maru Newby wrote:
 I've been investigating a bug that is preventing VM's from receiving IP 
 addresses when a Neutron service is under high load:

 https://bugs.launchpad.net/neutron/+bug/1192381

 High load causes the DHCP agent's status updates to be delayed, causing the 
 Neutron service to assume that the agent is down.  This results in the 
 Neutron service not sending notifications of port addition to the DHCP 
 agent.  At present, the notifications are simply dropped.  A simple fix is 
 to send notifications regardless of agent status.  Does anybody have any 
 objections to this stop-gap approach?  I'm not clear on the implications of 
 sending notifications to agents that are down, but I'm hoping for a simple 
 fix that can be backported to both havana and grizzly (yes, this bug has 
 been with us that long).

 Fixing this problem for real, though, will likely be more involved.  The 
 proposal to replace the current wsgi framework with Pecan may increase the 
 Neutron service's scalability, but should we continue to use a 'fire and 
 forget' approach to notification?  Being able to track the success or 
 failure of a given action outside of the logs would seem pretty important, 
 and allow for more effective coordination with Nova than is currently 
 possible.

 It strikes me that we ask an awful lot of a single neutron-server instance - 
 it has to take state updates from all the agents, it has to do scheduling, 
 it has to respond to API requests, and it has to communicate about actual 
 changes with the agents.

 Maybe breaking some of these out the way nova has a scheduler and a 
 conductor and so on might be a good model (I know there are things people 
 are unhappy about with nova-scheduler, but imagine how much worse it would 
 be if it was built into the API).

 Doing all of those tasks, and doing it largely single threaded, is just 
 asking for overload.

 I'm sorry if it wasn't clear in my original message, but my primary concern 
 lies with the reliability rather than the scalability of the Neutron service. 
  Carl's addition of multiple workers is a good stop-gap to minimize the 
 impact of blocking IO calls in the current architecture, and we already have 
 consensus on the need to separate RPC and WSGI functions as part of the Pecan 
 rewrite.  I am worried, though, that we are not being sufficiently diligent 
 in how we manage state transitions through notifications.  Managing 
 transitions and their associate error states is needlessly complicated by the 
 current ad-hoc approach, and I'd appreciate input on the part of distributed 
 systems experts as to how we could do better.


 m.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] MySQL Storage Engine

2013-12-04 Thread Clayton Coleman
- Original Message -
 With the comments that we've received, I would recommend that Clayton
 remove the option to select the MySQL storage engine from Oslo config and
 just hardcode InnoDB.  I believe it entails just removing a few lines of
 code.  It doesn't sound like anyone has a reason to use any other storage
 engine at this point.  Sound good?  I can update my review if that is the
 path that we want to take.
 
 (You can see the area in the code at this link under my comment:
 https://review.openstack.org/#/c/57024/7/solum/objects/sqlalchemy/models.py
 )
 

Will do.  I also need to force table_args() to be actually used.  When 
reviewing new models we just need to ensure table_args is set.

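A minimal sketch of what that might end up looking like, with the engine
hardcoded in a shared helper instead of read from Oslo config; the class and
attribute names only loosely mirror the review under discussion:

```python
def table_args():
    """Shared __table_args__ for every model: always InnoDB, no config lookup."""
    return {"mysql_engine": "InnoDB", "mysql_charset": "utf8"}

class Application(object):
    # In the real code this would be a SQLAlchemy declarative model; the
    # point here is only that __table_args__ always comes from the shared
    # helper, which is what reviewers would check on each new model.
    __tablename__ = "application"
    __table_args__ = table_args()
```

Enforcing "table_args() is actually used" then reduces to a review checklist
item (or a simple test over the model classes).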
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-04 Thread Adrian Otto
Am I interpreting this to mean that WSME is calling unicode()?

On Dec 4, 2013, at 1:32 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
 wrote:

Hi,

I am working on unit tests for Solum as a side effect of new unit tests I found 
that we use unicode strings in the way which is not compatible with python3.

Here is an exception form python3 gate:
Server-side error: global name 'unicode' is not defined. Detail: 2013-12-04 
Traceback (most recent call last): File 
/home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py
 result = f(self, *args, **kwargs)
 File ./solum/api/controllers/v1/assembly.py, line 59, in get
raise wsme.exc.ClientSideError(unicode(error))
NameError: global name 'unicode' is not defined

Here is a documentation for python3: 
http://docs.python.org/3.0/whatsnew/3.0.html

Quick summary: you can't use unicode() function and u' ' strings in Pyhton3.

Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] DHCP Agent Reliability

2013-12-04 Thread Carl Baldwin
I have offered up https://review.openstack.org/#/c/60082/ as a
backport to Havana.  Interest was expressed in the blueprint for doing
this even before this thread.  If there is consensus for this as the
stop-gap then it is there for the merging.  However, I do not want to
discourage discussion of other stop-gap solutions like what Maru
proposed in the original post.

Carl

On Wed, Dec 4, 2013 at 9:12 AM, Ashok Kumaran ashokkumara...@gmail.com wrote:



 On Wed, Dec 4, 2013 at 8:30 PM, Maru Newby ma...@redhat.com wrote:


 On Dec 4, 2013, at 8:55 AM, Carl Baldwin c...@ecbaldwin.net wrote:

  Stephen, all,
 
  I agree that there may be some opportunity to split things out a bit.
  However, I'm not sure what the best way will be.  I recall that Mark
  mentioned breaking out the processes that handle API requests and RPC
  from each other at the summit.  Anyway, it is something that has been
  discussed.
 
  I actually wanted to point out that the neutron server now has the
  ability to run a configurable number of sub-processes to handle a
  heavier load.  Introduced with this commit:
 
  https://review.openstack.org/#/c/37131/
 
  Set api_workers to something > 1 and restart the server.
 
  The server can also be run on more than one physical host in
  combination with multiple child processes.

 I completely misunderstood the import of the commit in question.  Being
 able to run the wsgi server(s) out of process is a nice improvement, thank
 you for making it happen.  Has there been any discussion around making the
 default for api_workers > 0 (at least 1) to ensure that the default
 configuration separates wsgi and rpc load?  This also seems like a great
 candidate for backporting to havana and maybe even grizzly, although
 api_workers should probably be defaulted to 0 in those cases.


 +1 for backporting the api_workers feature to havana as well as Grizzly :)


 FYI, I re-ran the test that attempted to boot 75 micro VM's simultaneously
 with api_workers = 2, with mixed results.  The increased wsgi throughput
 resulted in almost half of the boot requests failing with 500 errors due to
 QueuePool errors (https://bugs.launchpad.net/neutron/+bug/1160442) in
 Neutron.  It also appears that maximizing the number of wsgi requests has
 the side-effect of increasing the RPC load on the main process, and this
 means that the problem of dhcp notifications being dropped is little
 improved.  I intend to submit a fix that ensures that notifications are sent
 regardless of agent status, in any case.


 m.

 
  Carl
 
  On Tue, Dec 3, 2013 at 9:47 AM, Stephen Gran
  stephen.g...@theguardian.com wrote:
  On 03/12/13 16:08, Maru Newby wrote:
 
  I've been investigating a bug that is preventing VM's from receiving
  IP
  addresses when a Neutron service is under high load:
 
  https://bugs.launchpad.net/neutron/+bug/1192381
 
  High load causes the DHCP agent's status updates to be delayed,
  causing
  the Neutron service to assume that the agent is down.  This results in
  the
  Neutron service not sending notifications of port addition to the DHCP
  agent.  At present, the notifications are simply dropped.  A simple
  fix is
  to send notifications regardless of agent status.  Does anybody have
  any
  objections to this stop-gap approach?  I'm not clear on the
  implications of
  sending notifications to agents that are down, but I'm hoping for a
  simple
  fix that can be backported to both havana and grizzly (yes, this bug
  has
  been with us that long).
 
  Fixing this problem for real, though, will likely be more involved.
  The
  proposal to replace the current wsgi framework with Pecan may increase
  the
  Neutron service's scalability, but should we continue to use a 'fire
  and
  forget' approach to notification?  Being able to track the success or
  failure of a given action outside of the logs would seem pretty
  important,
  and allow for more effective coordination with Nova than is currently
  possible.
 
 
  It strikes me that we ask an awful lot of a single neutron-server
  instance -
  it has to take state updates from all the agents, it has to do
  scheduling,
  it has to respond to API requests, and it has to communicate about
  actual
  changes with the agents.
 
  Maybe breaking some of these out the way nova has a scheduler and a
  conductor and so on might be a good model (I know there are things
  people
  are unhappy about with nova-scheduler, but imagine how much worse it
  would
  be if it was built into the API).
 
  Doing all of those tasks, and doing it largely single threaded, is just
  asking for overload.
 
  Cheers,
  --
  Stephen Gran
  Senior Systems Integrator - theguardian.com

Re: [openstack-dev] [qa] Moving the QA meeting time

2013-12-04 Thread Miguel Lavalle
Matt,

2200 UTC Wed. or Thurs. is fine with me

Cheers


On Wed, Dec 4, 2013 at 3:04 PM, Matthew Treinish mtrein...@kortar.org wrote:

 Hi everyone,

 I'm looking at changing our weekly QA meeting time to make it more globally
 attendable. Right now the current time of 17:00 UTC doesn't really work for
 people who live in Asia Pacific timezones. (which includes a third of the
 current core review team) There are 2 approaches that I can see taking
 here:

  1. We could either move the meeting time later so that it makes it easier
 for
 people in the Asia Pacific region to attend.

 2. Or we move to an alternating meeting time, where every other week the
 meeting
 time changes. So we keep the current slot and alternate with something
 more
 friendly for other regions.

 I think trying to stick to a single meeting time would be a better call
 just for
 simplicity, but it gets difficult to appease everyone that way, which is
 where the
 appeal of the 2nd approach comes in.

 Looking at the available time slots here:
 https://wiki.openstack.org/wiki/Meetings
 there are plenty of open slots before 1500 UTC which would be early for
 people in
 the US and late for people in the Asia Pacific region. There are plenty of
 slots
 starting at 2300 UTC which is late for people in Europe.

 Would something like 2200 UTC on Wed. or Thurs work for everyone?

 What are people's opinions on this?

 -Matt Treinish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-04 Thread Georgy Okrokvertskhov
No,

This is Solum code:
https://github.com/stackforge/solum/blob/master/solum/api/controllers/v1/assembly.py#L59

Thanks
Georgy


On Wed, Dec 4, 2013 at 1:41 PM, Adrian Otto adrian.o...@rackspace.com wrote:

  Am I interpreting this to mean that WSME is calling unicode()?

  On Dec 4, 2013, at 1:32 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com
  wrote:

  Hi,

  I am working on unit tests for Solum as a side effect of new unit tests
 I found that we use unicode strings in the way which is not compatible with
 python3.

  Here is an exception form python3 gate:
 Server-side error: global name 'unicode' is not defined. Detail:
 2013-12-04 Traceback (most recent call last): File
 /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py
  result = f(self, *args, **kwargs)
  File ./solum/api/controllers/v1/assembly.py, line 59, in get
 raise wsme.exc.ClientSideError(unicode(error))
 NameError: global name 'unicode' is not defined

  Here is a documentation for python3:
 http://docs.python.org/3.0/whatsnew/3.0.html

  Quick summary: you can't use unicode() function and u' ' strings in
 Pyhton3.

  Thanks
 Georgy
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-04 Thread Ben Nemec
 

I don't think so. It looks like ./solum/api/controllers/v1/assembly.py
is calling unicode(). It will need to be changed to six.text_type() for
Python 3 compat. 

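A hedged sketch of that change: the shim below is roughly what six.text_type
resolves to on each interpreter, and the actual fix would simply call
six.text_type(error) at the ClientSideError call site in assembly.py:

```python
import sys

# six.text_type equivalent, shown inline so the example is self-contained.
if sys.version_info[0] >= 3:
    text_type = str
else:
    text_type = unicode  # noqa: F821 -- Python 2 only

def client_error_message(error):
    # Stand-in for: raise wsme.exc.ClientSideError(six.text_type(error))
    return text_type(error)
```

The same call then works unchanged on both Python 2 and Python 3.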
-Ben 

On 2013-12-04 15:41, Adrian Otto wrote: 

 Am I interpreting this to mean that WSME is calling unicode()? 
 
 On Dec 4, 2013, at 1:32 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com 
 wrote: 
 
 Hi, 
 
 I am working on unit tests for Solum as a side effect of new unit tests I 
 found that we use unicode strings in the way which is not compatible with 
 python3. 
 
 Here is an exception form python3 gate: 
 Server-side error: global name 'unicode' is not defined. Detail: 
 2013-12-04 Traceback (most recent call last): File 
 /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py
  
 result = f(self, *args, **kwargs) 
 File ./solum/api/controllers/v1/assembly.py, line 59, in get 
 raise wsme.exc.ClientSideError(unicode(error)) 
 NameError: global name 'unicode' is not defined 
 
 Here is a documentation for python3: 
 http://docs.python.org/3.0/whatsnew/3.0.html [1] 
 
 Quick summary: you can't use unicode() function and u' ' strings in Pyhton3. 
 
 Thanks 
 Georgy ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2]

 

Links:
--
[1] http://docs.python.org/3.0/whatsnew/3.0.html
[2] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Unicode strings in Python3

2013-12-04 Thread Georgy Okrokvertskhov
I opened a bug https://bugs.launchpad.net/solum/+bug/1257929 for that issue.

Ben, thank you for a quick fix proposal.

Thanks
Georgy


On Wed, Dec 4, 2013 at 1:41 PM, Ben Nemec openst...@nemebean.com wrote:

  I don't think so.  It looks like ./solum/api/controllers/v1/assembly.py
 is calling unicode().  It will need to be changed to six.text_type() for
 Python 3 compat.

 -Ben

 On 2013-12-04 15:41, Adrian Otto wrote:

 Am I interpreting this to mean that WSME is calling unicode()?

  On Dec 4, 2013, at 1:32 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com
  wrote:

  Hi,

 I am working on unit tests for Solum as a side effect of new unit tests I
 found that we use unicode strings in the way which is not compatible with
 python3.

 Here is an exception form python3 gate:
 Server-side error: global name 'unicode' is not defined. Detail:
 2013-12-04 Traceback (most recent call last): File
 /home/jenkins/workspace/gate-solum-python33/.tox/py33/lib/python3.3/site-packages/wsmeext/pecan.py
  result = f(self, *args, **kwargs)
  File ./solum/api/controllers/v1/assembly.py, line 59, in get
 raise wsme.exc.ClientSideError(unicode(error))
 NameError: global name 'unicode' is not defined

 Here is a documentation for python3:
 http://docs.python.org/3.0/whatsnew/3.0.html

 Quick summary: you can't use unicode() function and u' ' strings in
 Pyhton3.

 Thanks
 Georgy
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Suggestion about usage of mailing lists

2013-12-04 Thread Boris Pavlovic
Guys,

I really appreciate that all discussions are fully open, and you are doing a
great job discussing all the architectural aspects of the project. But for such
things there are:
1) IRC chats
2) Wiki pages
3) Etherpads

E.g. in Rally we have one page,
https://etherpad.openstack.org/p/Rally_Main, that contains all the important
information:
1) Roadmaps
2) Links to active discussions
3) Current active and open tasks
So everybody is able to discuss what they want with the rest of the project
community, without tons of emails.

Could I ask you guys to send emails only about:
1) Releases
2) Major changes and news
3) Questions
4) Incubation stuff
5) Integration with other projects

And avoid discussing every piece of architecture on the mailing list, because
if each project starts to discuss all its architecture on the mailing list, we
will have billions of emails per minute on openstack-dev.


Sorry for this email, and thank you for understanding.

Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Suggestion about usage of mailing lists

2013-12-04 Thread Russell Bryant
On 12/04/2013 05:07 PM, Boris Pavlovic wrote:
 Guys, 
 
 I really appreciate that all discussion are real open, and you are doing
 a great job, discussing all arch stuff of the project. But for such
 things there are:
 1) IRC chats
 2) WIKI pages
 3) ehterpads 
 
 E.g. in Rally we have 1 page:
 https://etherpad.openstack.org/p/Rally_Main that contains all important
 information about:
 1) RoadMaps
 2) Links to active discussions
 3) Current Active and Open Tasks
 So everybody is able to discuss what he want with the rest of the
 project community, without tons of emails..
 
 Could I ask you guys to send emails only about:
 1) Releases
 2) Major changes and news
 3) Questions 
 4) Incubation stuff 
 5) Integration with other projects
 
 And avoid discussing every piece of arch in mailing list. Because if
  each project will start to discuss all arch stuff in mailing list, we
 will have billions of emails per minute in openstack-dev.  
 
 
 Sorry for this email, and thank you for understanding. 

I have pretty much the exact opposite opinion of what you expressed here.

If you don't want to see it, please use filters or only subscribe to the
topics you care about.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When should things get added to Oslo.Incubator

2013-12-04 Thread Doug Hellmann
On Wednesday, December 4, 2013, Flavio Percoco wrote:

 On 04/12/13 01:13 +, Adrian Otto wrote:

 Jay is right. What we have is probably close enough to what's in Nova to
 qualify for oslo-incubator. The simplifications seem to me to have general
 appeal so this code would be more attractive to other projects. One worry I
 have is that there is still a good deal of magic behavior in this code, as
 reviewers have made clear notes about in the code review. I'd like to try
 it and see if there are further simplifications we could entertain to make
 this code easier to debug and maintain. It would be great if such
 iterations happened in a place where other projects could easily leverage
 them.

 I will remind us that Solum is not currently an incubated project. Should
 we address this concern now, or during an incubation phase?


 This is not just a Solum issue but a general issue throughout
 OpenStack. The sooner we sort this out, the better.


 Some approaches for us to consider:

 1) Merge this code in Solum, open a bug against it to move it back into
 oslo-incubation, open a stub project in oslo-incubation with a read me that
 references the bug, and continue iterate on it in Solum until we are
 reasonably happy with it. Then during an incubation phase, we can resolve
 that bug by putting the code into oslo-incubation, and achieve the goal of
 making more reusable work between projects.

 We could also address that bug at such time as any other ecosystem
 project is looking for a similar solution, and finds the stub project in
 oslo-incubation.

 2) Just plunk all of this code into oslo-incubation as-is and do all
 iterating there. That might cause a bit more copying around of code during
 the simplification process, but would potentially achieve the reusability
 goal sooner, possibly by a couple of months.

 3) Use pypi. In all honesty we have enough new developers (about half the
 engineers on this project) coming up to speed with how things work in the
 OpenStack ecosystem that I'm reluctant to throw that into the mix too.

 What do you all prefer?



 I'd personally prefer number 2. Besides the reasons already raised in
 this thread we should also add the fact that having it in
 oslo-incubator will make it easier for people from other projects to
 contribute, review and improve that code.



Exactly. This sounds like a feature we want to live in a common library,
but without a currently stable API. That's exactly what the incubator is
for.

Doug




 On Dec 3, 2013, at 2:58 PM, Mark McLoughlin mar...@redhat.com
 wrote:

  On Tue, 2013-12-03 at 22:44 +, Joshua Harlow wrote:

 Sure sure, let me not make that assumption (can't speak for them), but
 even libraries on pypi have to deal with API instability.


 Yes, they do ... either by my maintaining stability, bumping their major
 version number to reflect an incompatible change ... or annoying the
 hell out of their users!

  Just more of suggesting, might as well bite the bullet (if objects folks
 feel ok with this) and just learn to deal with the pypi method for
 dealing
 with API instability (versions, deprecation...). Since code copying
 around
 is just creating a miniature version of the same 'learning experience'
 except u lose the other parts (versioning, deprecation, ...) which comes
 along with pypi and libraries.


 Yes, if the maintainers of the API are prepared to deal with the demands
 of API stability, publishing the API as a standalone library would be
 far more preferable.

 Failing that, oslo-incubator offers a halfway house which sucks, but not
 as as much as the alternative - projects copying and pasting each
 other's code and evolving their copies independently.


 Agreed. Also, as mentioned above, keeping the code in oslo will bring
 more eyeballs to the review, which helps a lot when designing APIs and
 seeking for stability.

 Projects throughout OpenStack look for re-usable code in Oslo first -
 or at least I think they should - and then elsewhere. Putting things
 in oslo-incubator has also a community impact, not just technical
 benefits. IMHO.

 FF

 --
 @flaper87
 Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] capturing build details in images

2013-12-04 Thread Robert Collins
This is a follow-up to https://review.openstack.org/59621 to get
broader discussion.

So at the moment we capture a bunch of details in the image - what
parameters the image was built with and some environment variables.

Last week we were capturing everything, which there is broad consensus
was too much, but it seems to me that that is based on two things:
 - the security ramifications of unanticipated details being baked
into the image
 - many variables being irrelevant most of the time

I think those are both good points. But... the problem with diagnostic
information is you don't know that you need it until you don't have
it.

I'm particularly worried that things like bad http proxies, and third
party elements that need variables we don't know about will be
undiagnosable. Forcing everything through a DIB_FOO variable thunk
seems like just creating work for ourselves - I'd like to avoid that.

Further, some variables we should capture (like http_proxy) have
passwords embedded in them, so even whitelisting what variables to
capture doesn't solve the general problem.

So - what about us capturing this information outside the image: we
can create a uuid for the build, and write a file in the image with
that uuid, and outside the image we can write:
 - all variables (no security ramifications now as this file can be
kept by whomever built the image)
 - command line args
 - version information for the toolchain etc.

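One possible shape for this, sketched with illustrative paths and field names
(not actual diskimage-builder code): only an opaque UUID is baked into the
image, and everything sensitive or verbose goes into a sidecar file kept by
whoever ran the build.

```python
import json
import os
import sys
import uuid

def record_build(image_root, artifacts_dir):
    """Stamp the image with a build UUID; keep full details outside it."""
    build_id = str(uuid.uuid4())

    # Inside the image: just the UUID, nothing sensitive.
    with open(os.path.join(image_root, "etc", "dib-build-uuid"), "w") as f:
        f.write(build_id + "\n")

    # Outside the image: everything, including variables like http_proxy
    # that may embed passwords -- acceptable here because this file never
    # leaves the builder's hands.
    details = {
        "uuid": build_id,
        "argv": sys.argv,
        "environ": dict(os.environ),
    }
    with open(os.path.join(artifacts_dir, build_id + ".json"), "w") as f:
        json.dump(details, f, indent=2)
    return build_id
```

Diagnosing a bad image then means reading the UUID out of the image and
looking up the matching sidecar file.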
-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Suggestion about usage of mailing lists

2013-12-04 Thread Adrian Otto
Boris,

Thanks for expressing your concern about this. For those reading the Solum 
topic on this list, we really want this to be a community building resource 
were we are free to discuss ideas and as a primary mechanism for gauging 
consensus. Applying restrictions to how the list is used by project 
contributors will actually limit discussion and reduce our level of engagement. 
That's just bad.

Solum is in a rapid growth phase, and lots of discussion is required, including 
coordination of our design efforts. There are many decisions to be made, and 
even with three hours of scheduled meetings every week, there is just not 
enough time to make them all on IRC. We are treating the ML as a first class 
resource to make decisions that are not practical to make interactively in our 
regularly scheduled meetings. This is one of the peculiarities of starting a 
widely collaborative project from scratch, and sourcing input from a very 
diverse team.

We do have a wiki page where people can find recurring meeting times, and the 
schedule for things once scheduled. Please recognize that schedules have been 
changing as new working groups are forming, which is requiring us to coordinate 
them by email.

If what you really want is to ignore everything but major project 
announcements, I suggest you filter out [Solum] on openstack-dev and look for 
announcement messages on the regular openstack mailing list when we begin to 
engage the user community. That way you can clue in from time to time. We can 
also post a project status section on the wiki so you can get a sense of it at 
a glance. That's on my to-do list.

Lots of meaty email on this topic is a good thing, not a bad thing.

Adrian

On Dec 4, 2013, at 2:07 PM, Boris Pavlovic 
bpavlo...@mirantis.com
 wrote:

Guys,

I really appreciate that all discussion are real open, and you are doing a 
great job, discussing all arch stuff of the project. But for such things there 
are:
1) IRC chats
2) WIKI pages
3) ehterpads

E.g. in Rally we have 1 page:
https://etherpad.openstack.org/p/Rally_Main that contains all important 
information about:
1) RoadMaps
2) Links to active discussions
3) Current Active and Open Tasks
So everybody is able to discuss what he want with the rest of the project 
community, without tons of emails..

Could I ask you guys to send emails only about:
1) Releases
2) Major changes and news
3) Questions
4) Incubation stuff
5) Integration with other projects

And avoid discussing every piece of arch in mailing list. Because if  each 
project will start to discuss all arch stuff in mailing list, we will have 
billions of emails per minute in openstack-dev.


Sorry for this email, and thank you for understanding.

Best regards,
Boris Pavlovic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

