Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-22 Thread Deepak Shetty
That's incorrect; as I said in my original mail, I am using devstack+manila
and it wasn't very clear to me that mysql-devel needs to be installed, and
it didn't get installed. I am on F20; I'm not sure if that causes this, and
if it does, we need to debug and fix it.

Maybe it's a good idea to put a comment in requirements.txt stating that the
following C libs need to be installed for the venv to work smoothly. That
would help too for the short term.

On Sun, Sep 21, 2014 at 12:12 PM, Valeriy Ponomaryov 
vponomar...@mirantis.com wrote:

 The dep MySQL-python is already in the test-requirements.txt file. As Andreas
 said, the second one, mysql-devel, is a C library and cannot be installed via
 pip. So the project itself, like all projects in OpenStack, cannot install it.

 C lib deps are handled by Devstack, if it is used. See:
 https://github.com/openstack-dev/devstack/tree/master/files/rpms

 https://github.com/openstack-dev/devstack/blob/2f27a0ed3c609bfcd6344a55c121e56d5569afc9/functions-common#L895

 Yes, Manila could have its files in the same way in
 https://github.com/openstack/manila/tree/master/contrib/devstack , but
 this lib already exists in the deps for other projects. So I guess you used
 Manila's run_tests.sh on a host without a devstack installation; in that
 case all other projects would fail in the same way.

 On Sun, Sep 21, 2014 at 2:54 AM, Alex Leonhardt aleonhardt...@gmail.com
 wrote:

 And yet it's a dependency, so I'm with Deepak: it should at least be
 mentioned in the prerequisites on a webpage somewhere. :) I might even
 try to update/add that myself, as it caught me out a few times too.

 Alex
  On 20 Sep 2014 12:44, Andreas Jaeger a...@suse.com wrote:

 On 09/20/2014 09:34 AM, Deepak Shetty wrote:
  Thanks, that worked.
  Any idea why it doesn't get installed automatically and/or why it isn't
  present in requirements.txt?
  I thought that was the purpose of requirements.txt.

 AFAIU requirements.txt has only Python dependencies, while
 mysql-devel is a C development package.

 Andreas
 --
  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
   SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
 GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






 --
 Kind Regards
 Valeriy Ponomaryov
 www.mirantis.com
 vponomar...@mirantis.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] ./run_tests issue

2014-09-22 Thread Deepak Shetty
Even better: whenever ./run_tests fails, maybe print a message stating that
the following C libs need to be installed and have the user check the
same. Something like that would help too.
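A pre-flight check along these lines could be sketched as follows. This is purely illustrative, not actual Manila code: the package-to-library mapping, the function names, and how it would be wired into run_tests.sh are all assumptions.

```python
# Illustrative pre-flight check: verify that the C libraries some pip
# packages link against are present before building the venv, and print
# a helpful message if they are not.
import ctypes.util

# Assumed mapping: pip package -> C library it needs at build time.
REQUIRED_C_LIBS = {
    "MySQL-python": "mysqlclient",  # shipped by mysql-devel / libmysqlclient-dev
}

def missing_c_libs(requirements=REQUIRED_C_LIBS):
    """Return the pip packages whose C library cannot be located."""
    return [pkg for pkg, lib in sorted(requirements.items())
            if ctypes.util.find_library(lib) is None]

if __name__ == "__main__":
    for pkg in missing_c_libs():
        print("%s needs a C library that was not found; "
              "install the matching -devel package first" % pkg)
```

Running something like this before pip installs the requirements would turn a cryptic compile failure into an actionable message.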

On Mon, Sep 22, 2014 at 11:59 AM, Deepak Shetty dpkshe...@gmail.com wrote:

 That's incorrect; as I said in my original mail, I am using
 devstack+manila and it wasn't very clear to me that mysql-devel needs to be
 installed, and it didn't get installed. I am on F20; I'm not sure if that
 causes this, and if it does, we need to debug and fix it.

 Maybe it's a good idea to put a comment in requirements.txt stating that
 the following C libs need to be installed for the venv to work smoothly.
 That would help too for the short term.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit planning

2014-09-22 Thread Tom Fifield
On 18/09/14 16:03, Thierry Carrez wrote:
 Maish Saidel-Keesing wrote:
 On 17/09/2014 23:12, Anita Kuno wrote:
 On 09/17/2014 04:01 PM, Maish Saidel-Keesing wrote:
 This looks great - but I am afraid that something might be missing.

 As part of the Design summit in Atlanta there was an Ops Meetup track.
 [1] I do not see where this fits into the current planning process that
 has been posted.
 I would like to assume that part of the purpose of the summit is also to
 collect feedback from enterprise operators as well as from smaller ones.

 If that is so then I would kindly request that there be some other way
 of allowing that part of the community to voice their concerns, and
 provide feedback.

 Perhaps a track that is not only operator-centric but also end-user
 focused (mixing the two would be fine as well).

 Most of them are not on the openstack-dev list and they do not
 participate in the IRC team meetings, simply because they have no idea
 that these exist or maybe do not feel comfortable there. So they will
 not have any exposure to the process.

 My 0.02 Shekels.

 [1] - http://junodesignsummit.sched.org/overview/type/ops+meetup

 Hi Maish:

 This thread is about the Design Summit; the Operators Track is a
 different thing.

 In Atlanta the Operators Track was organized by Tom Fifield and I have
 every confidence he is working hard to ensure the operators have a voice
 in Paris and that those interested can participate.

 Last summit the Operators Track ran on the Monday and the Friday, giving
 folks who usually spend most of their time at the Design Summit a chance
 to participate and hear the operators' voices. I know I did, and I found
 it highly educational.

 Thanks,
 Anita.
 Thanks for the clarification Anita :)
 
 I think the plan is to have the Ops summit run on Monday, with an extra
 day on Thursday to continue the discussions. I CCed Tom for more input
 on that.


Sorry for the delay, all, and thanks for the kind notes. The ops meetup
will indeed return in Paris. Stand by for details and a planning etherpad
any day now on the openstack-operators mailing list.


Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Group-Based Policy Understanding and Queries

2014-09-22 Thread Sachi Gupta
Hi All,

I'd like to ask for your input on the understanding below.

Group-Based Policy (GBP) is a blueprint targeting the Juno-3 release of
OpenStack. It will extend OpenStack Networking with policy and
connectivity abstractions that enable significantly more simplified and
application-oriented interfaces than the current Neutron API model.
When will the code for Group-Based Policy be ready as open source?

The OpenStack group policy API will be an extension to the Neutron APIs.
There will be a policy manager to manage the policies and policy rules. Will
GBP be a part of Neutron? If yes, will GBP be part of Horizon under
Neutron?

There will be a policy driver that acts as an interface (the ODL policy
driver). For example, we used the Neutron ML2 plugin as an interface
between OpenStack Neutron and the ODL Neutron northbound. When will the
policy driver for ODL be available?

The OpenStack policy driver for ODL will act as an interface to ODL. Which
API in ODL will the policy calls from the OpenStack ODL policy driver hit?


Thanks & Regards
Sachi Gupta


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zaqar] PTL Candidacy

2014-09-22 Thread Flavio Percoco
Greetings,

I'd like to announce my candidacy for the Message Program PTL position.

I've been part of this program since its inception. Along with a really
amazing team, I've been working on changing the OpenStack messaging
service landscape for the last 2 years.

Zaqar has a set of features that I consider essential for different use
cases. However, some of these features are very specific and could be
supported differently.

Therefore, my focus for the next cycle will be working on shrinking
Zaqar's feature set down to what's really essential for most of the use
cases.

In addition to the above, I'd like to focus on expanding Zaqar's
adoption, starting from the world it belongs to: OpenStack. We've
identified a set of projects and use cases that we need/can cover. I
believe working together with those projects will help both Zaqar and
the rest of the ecosystem improve.

Last but not least, I'm planning to dedicate extra time to supporting
our development community and making sure Zaqar's mission, vision, goals
and internals are clear to everyone. I believe we're all working in a
community full of very talented folks, and the feedback from each one of
them has immense value for the project, the team and the rest of the
ecosystem.

Thanks for reading,
Flavio


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Flavio Percoco
On 09/20/2014 10:17 AM, Geoff O'Callaghan wrote:
 Hi all,
 I'm just trying to understand the messaging strategy in OpenStack. It
 seems we have at least 2 messaging layers:
 
 oslo.messaging and Zaqar. Can someone explain to me why there are
 two? To quote from the Zaqar FAQ:
 -
 How does Zaqar compare to oslo.messaging?
 
 oslo.messaging is an RPC library used throughout OpenStack to manage
 distributed commands by sending messages through different messaging
 layers. Oslo Messaging was originally developed as an abstraction over
 AMQP, but has since added support for ZeroMQ.
 
 As opposed to Oslo Messaging, Zaqar is a messaging service for the over-
 and under-cloud. As a service, it is meant to be consumed by using
 libraries for different languages. Zaqar currently supports 1 protocol
 (HTTP) and sits on top of other existing technologies (MongoDB as of
 version 1.0).
 
 It seems to my casual view that we could have one, scale it, and use
 it for SQS-style messages and internal messaging (which could include
 logging), all managed by message schemas and QoS. This would give a very
 robust and flexible system for endpoints to consume.
 
 Is there a plan to consolidate?

Hi Geoff,

No, there's no plan to consolidate.

As mentioned in the FAQ, oslo.messaging is a messaging *library* whereas
Zaqar is a messaging *service*. Moreover, oslo.messaging is tightly tied
to AMQP semantics whereas Zaqar is not.

Note that I'm not saying this isn't technically possible; I'm saying
these 2 projects have different goals, visions and scope, hence they
weren't merged, nor will they be.
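The library/service distinction can be illustrated with a toy sketch (plain Python with invented names, not real oslo.messaging or Zaqar code): an RPC library couples a caller to a named method and its result, while a queue-based service decouples producers from consumers.

```python
# Toy models of the two messaging semantics; purely illustrative.
from collections import deque

class TinyRpc:
    """RPC style (oslo.messaging-like): invoke a method, wait for the result."""
    def __init__(self):
        self._endpoints = {}

    def register(self, name, fn):
        self._endpoints[name] = fn

    def call(self, name, **kwargs):
        # Synchronous round trip: the caller blocks on the callee's answer.
        return self._endpoints[name](**kwargs)

class TinyQueue:
    """Queue style (Zaqar-like): post messages now, claim them later."""
    def __init__(self):
        self._messages = deque()

    def post(self, body):
        self._messages.append(body)

    def claim(self):
        # Consumers pull at their own pace; producers never wait for them.
        return self._messages.popleft() if self._messages else None
```

In this framing, consolidating the two would mean forcing one set of semantics onto workloads designed for the other.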

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack - horizon dashboard issues

2014-09-22 Thread Matthias Runge
On 21/09/14 22:55, Alex Leonhardt wrote:
 Hi guys,
 
 trying to get a devstack up and running on a VM, but I keep getting this:
 
 
 Error during template rendering
 
 In
 template 
 |/opt/stack/horizon/openstack_dashboard/templates/context_selection/_project_list.html|,
 error at line *7*
 
 
   Reverse for ''switch_tenants'' with arguments
   '(u'ca0fd29936a649e59850d7bb8c17e44c',)' and keyword arguments
   '{}' not found.
 
I'd say: please file a bug if that hasn't already happened; I couldn't find
one quickly.

Matthias


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-22 Thread Thomas Goirand
On 09/21/2014 11:30 PM, Ian Cordasco wrote:
 Hi Thomas,
 
 Several people, many of whom are core contributors to other projects, have
 asked that this discussion not be continued in this venue. Discussion of
 the decisions of the core developers of requests is not appropriate for
 this list. All three of us have email addresses that you can retrieve from
 anywhere you please. There’s a mailing list for requests, albeit very
 lightly trafficked, and there’s Twitter.
 
 Cheers,
 Ian

Ian,

I don't use non-free software-as-a-service platforms such as Twitter,
even less for discussing software development. I tried Twitter anyway
and didn't like it.

At first I didn't reply to any discussion about vendoring, but then I saw
*a lot* of this discussion happening on this list. Because of that, I
thought I couldn't leave it unanswered. Now that I have answered so many
points of your argumentation, you're telling me to go away from this
list and do it somewhere else.

In some cases, it looks like you're just closing discussions and telling
everyone to go away from your own channel of communication. It shows here:
https://github.com/kennethreitz/requests/pull/1812

Will the discussion stay open if I join your list? Will you guys be
open-minded with someone with a different view? If so, I may make
a new attempt. Please do open a topic on your list with my last reply as
a start, and just CC me (I don't really want to register to
yet-another-new-mailing-list...).

 Further, I’m disappointed that you felt it appropriate or necessary
 to resort to personal attacks on this list. At the very least you
 could have contained those to Twitter like others in this thread have
 done. I expected a more civil response on the openstack-dev mailing
 list.

I have re-read my mail multiple times to make sure that there was no such
thing as a personal attack. I tried to use a nice tone and have solid
argumentation. It looks like I failed. :(

If there are some words that you consider a personal attack, please
feel free to quote them in a private mail and let me know (away from
this list) where I was not nice enough, so that I know which part
you didn't like.

Though remember something: it's very common to read someone on a list
and believe they are being aggressive when, in fact, the intention is
only to be convincing. Please assume good faith.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] devstack - horizon dashboard issues

2014-09-22 Thread Alex Leonhardt
No probs - done that just now.

Alex

On 22 September 2014 08:20, Matthias Runge mru...@redhat.com wrote:

 On 21/09/14 22:55, Alex Leonhardt wrote:
  Hi guys,
 
  trying to get a devstack up and running, on a VM, but I keep getting
 this:
 
 
  Error during template rendering
 
  In
  template
 |/opt/stack/horizon/openstack_dashboard/templates/context_selection/_project_list.html|,
  error at line *7*
 
 
Reverse for ''switch_tenants'' with arguments
'(u'ca0fd29936a649e59850d7bb8c17e44c',)' and keyword arguments
'{}' not found.
 
 I'd say: please file a bug if that hasn't already happened; I couldn't find
 one quickly.

 Matthias



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][elections] Not running as PTL this cycle

2014-09-22 Thread Chris Jones
Hi

On 22 September 2014 04:26, Robert Collins robe...@robertcollins.net
wrote:

 I'm not running as PTL for the TripleO program this cycle.


As someone who's been involved in TripleO for a couple of years, I'd like
to say thank you very much for your efforts in bootstrapping and PTLing the
program. I think it has benefitted enormously from your contributions
(leadership and otherwise).

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] PTL Candidacy

2014-09-22 Thread David Lyle
I would like to announce my candidacy for Horizon PTL.

I've been actively contributing to Horizon since the Grizzly cycle and
I've been a core contributor since the Havana cycle. For the last two
cycles, I have had the pleasure of serving as PTL.

Horizon is in the midst of some large transitions that we've been laying
the foundation for in the past two releases. I would like to continue to
help guide those changes through to completion.

We've made progress on splitting the horizon repo into logical parts. The
vendored 3rd party JavaScript libraries have been extracted. This was a
major hurdle to completing the separation. Finishing the split will
improve maintainability and extensibility. I believe we can complete this
in the Kilo cycle.

We've also continued the transition to leveraging AngularJS to improve
usability and provide richer client experiences. I would like to see
this effort accelerate in Kilo, but I would like to see it driven by a
clear, unified strategy rather than numerous uncoordinated efforts. My goal
is to leave the Paris Summit with a plan we can work toward together. A
richer client side approach is key to addressing many of the usability
shortcomings in Horizon.

We successfully integrated the Sahara UI components into Horizon in Juno.
And in Kilo, we'll look to integrate Ironic support. In Kilo, there is
also potential for wider integration requirements on Horizon that may need
greater attention and likely a revised repo strategy.

Finally, Horizon, like most of OpenStack, is benefiting from a rapidly
growing contributor base. Like other projects, we aren't immune to the
stresses such growth creates. We've started taking steps toward improving
the blueprint process as well as changes to how we manage bugs. We need to
continue to refine these efforts to improve the overall direction of the
project.

Driven by a terrific community of contributors, reviewers and users,
Horizon has made great strides in Juno. I look forward to seeing what this
community can accomplish in Kilo. As PTL, my job is to enable this
community to continue to flourish.

Thank you,
David Lyle
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Gordon Sim

On 09/22/2014 08:00 AM, Flavio Percoco wrote:

oslo.messaging is tightly tied to AMQP semantics whereas Zaqar is not.


The API for oslo.messaging is actually quite high level and could easily 
be mapped to different messaging technologies.


There is some terminology that comes from older versions of AMQP, e.g. 
the use of 'exchanges' as essentially a namespace, but I don't myself 
believe these tie the API in any way to the original implementation from
which the naming arose.


In what way do you see oslo.messaging as being tied to AMQP semantics?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reverting recent refactorings of RBD support for config drives

2014-09-22 Thread Michael Still
Hi.

Today I encountered bug 1369627 [1] as I trawled the status of release-
critical bugs; it appears to be fallout from the decision to
implement support for config drives stored in RBD. While I have
no problem with that being a thing we do, I'm concerned by the way it
was implemented -- the image caching code for libvirt was being used
to cache the config drive, and then upload it to ceph as a side
effect of the image caching mechanism.

I'd prefer we don't do it that way, and given it was introduced as a
security bug, I have proposed the following reverts:

 - https://review.openstack.org/#/c/123070/
 - https://review.openstack.org/#/c/123071/
 - https://review.openstack.org/#/c/123072/

Now, because I want to move us forward, I've also proposed an
alternate implementation which achieves the same thing without using
the caching code:

 - https://review.openstack.org/#/c/123073/

The new implementation only supports RBD, but that's mostly because
it's the only image storage backend in the libvirt driver where it
makes immediate sense to do this sort of thing. I think this code
could do with a refactor, but I was attempting to produce the minimum
functional implementation given where we are in the release cycle.

Pursuant to our revert policy [2], I am asking cores to take a look at
these patches as soon as possible.

Thanks,
Michael

1: https://bugs.launchpad.net/nova/+bug/1369627
2: https://github.com/openstack/nova/blob/master/doc/source/devref/policies.rst

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Thierry Carrez
John Dickinson wrote:
 I propose that we can get the benefits of Monty's proposal and implement all 
 of his concrete suggestions (which are fantastic) by slightly adjusting our 
 usage of the program/project concepts.
 
 I had originally hoped that the program concept would have been a little
 higher-level instead of effectively spelling "project" as "program". I'd love
 to see a hierarchy of openstack-program-project/team-repos. Right now, we
 have added the program layer but have effectively mapped it 1:1 to the 
 project. For example, we used to have a few repos in the Swift project 
 managed by the same group of people, and now we have a few repos in the 
 object storage program, all managed by the same group of people. And every 
 time something is added to OpenStack, it's added as a new program, effectively
 putting us exactly where we were before we called it a program with the same 
 governance and management scaling problems.
 
 Today, I'd group existing OpenStack projects into programs as follows:
 
 Compute: nova, sahara, ironic
 Storage: swift, cinder, glance, trove
 Network: neutron, designate, zaqar
 Deployment/management: heat, triple-o, horizon, ceilometer
 Identity: keystone, barbican
 Support (not user facing): infra, docs, tempest, devstack, oslo
 (potentially even) stackforge: lots

I'm not clear on what those $THINGS would actually represent.

Would they have a single PTL ? (I guess not, otherwise we are back at
the team/theme duality I called out in my answer to Monty). Who decides
which $THINGS may exist ? (I suppose anybody can declare one, otherwise
we are back with TC blessing fields). Can $THINGS contain alternative
competing solutions ? (I suppose yes, otherwise we are back to
preventing innovation).

So what are they ? A tag you can apply to your project ? A category you
can file yourself under ?

I like Monty's end-user approach to defining layer #1. The use case
being, you want a functioning compute instance. Your taxonomy seems more
artificial. Swift, Cinder, Glance and Trove (and Manila) all represent
very different end-user use cases. Yes they all store stuff, but
that's about the only thing they have in common. And no user in their
right mind would use all of them at the same time (if they do, they are
probably doing it wrong).

I guess we could recognize more basic use cases. E.g., object storage
is an end-user use case all by itself, and it only requires Swift +
Keystone. So we could have a Layer #1bis that is Swift + Keystone. But
then we are back to blessing stuff as essential/official, and next thing
we know, Sahara will knock on the TC door to get Map/Reduce recognized
as essential layer-1-like end-user use case.

I can see how a give-me-a-damn-instance layer-1 definition is
restrictive, but that's the beauty and the predictability of Monty's
proposal. It limits integration where it really matters, unleashes
competition where it will help, and removes almost all of the
badge-granting role of the TC.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-22 Thread Kenichi Oomichi
Hi Alex,

Thank you for doing this.

 -Original Message-
 From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
 Sent: Friday, September 19, 2014 3:40 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] Some ideas for micro-version implementation
 
 Close to Kilo, it is time to think about what's next for the Nova API. In
 Kilo, we will continue developing the important micro-version feature.
 
 The previous v2-on-v3 proposal includes some implementations that can be
 used for micro-versions.
 (https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst)
 But in the end, those implementations were considered too complex.
 
 So I'm trying to find a simpler implementation and solution for
 micro-versions.
 
 I wrote down some ideas in a blog post at:
 http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/
 
 For those ideas I have also already done some POCs, which you can find in
 the blog post.
 
 As discussed in the Nova API meeting, we want to bring this up on the
 mailing list for discussion. Hopefully we can get more ideas and options
 from all developers.
 
 We will appreciate any comments and suggestions!

Before discussing how to implement this, I'd like to consider what we should
implement. IIUC, the purpose of the v3 API was to make the API consistent via
backwards-incompatible changes. Through the huge discussion in the Juno cycle,
we learned that backwards-incompatible changes to a REST API are a huge pain
for clients, and we should avoid such changes as much as possible. If new APIs
that are consistent within the Nova API alone are inconsistent across
OpenStack projects as a whole, we may need to change them again for
OpenStack-wide consistency.

To avoid such a situation, I think we need to define what a consistent
REST API across projects looks like. According to Alex's blog, the topics
might be:

 - Input/output attribute names
 - Resource names
 - Status codes

The following are hints for making consistent APIs from the Nova v3 API
experience; I'd like to know whether they are the best for API consistency.

(1) Input/output attribute names
(1.1) These names should be snake_case.
  eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId -> host_id
(1.2) These names should contain extension names if they are provided in case
of some extension loading.
  eg: security_groups -> os-security-groups:security_groups
      config_drive -> os-config-drive:config_drive
(1.3) Extension names should consist of hyphens and lowercase characters.
  eg: OS-EXT-AZ:availability_zone ->
      os-extended-availability-zone:availability_zone
      OS-EXT-STS:task_state -> os-extended-status:task_state
(1.4) Extension names should contain the prefix os- if the extension is not
core.
  eg: rxtx_factor -> os-flavor-rxtx:rxtx_factor
      os-flavor-access:is_public -> flavor-access:is_public (flavor-access
      extension became core)
(1.5) The depth of the first attribute should be one.
  eg: create a server API with scheduler hints
    -- v2 API input attribute sample ---
      {
          "server": {
              "imageRef": "e5468cc9-3e91-4449-8c4f-e4203c71e365",
              [..]
          },
          "OS-SCH-HNT:scheduler_hints": {
              "same_host": "5a3dec46-a6e1-4c4d-93c0-8543f5ffe196"
          }
      }
    -- v3 API input attribute sample ---
      {
          "server": {
              "image_ref": "e5468cc9-3e91-4449-8c4f-e4203c71e365",
              [..]
              "os-scheduler-hints:scheduler_hints": {
                  "same_host": "5a3dec46-a6e1-4c4d-93c0-8543f5ffe196"
              }
          }
      }
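The rename in (1.1) can be sketched with a small helper. This is a regex-based illustration, not the actual Nova implementation:

```python
# Convert the old camelCase attribute names to the snake_case form
# suggested in (1.1). Illustrative only; Nova's real code may differ.
import re

def to_snake_case(name):
    """e.g. imageRef becomes image_ref, hostId becomes host_id."""
    # Insert an underscore before each uppercase letter that follows a
    # lowercase letter or digit, then lowercase the whole string.
    return re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', name).lower()
```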

(2) Resource names
(2.1) Resource names should consist of hyphens and lowercase characters.
  eg: /os-instance_usage_audit_log -> /os-instance-usage-audit-log
(2.2) Resource names should contain the prefix os- if the extension is not
core.
  eg: /servers/diagnostics -> /servers/os-server-diagnostics
      /os-flavor-access -> /flavor-access (flavor-access extension became core)
(2.3) Action names should be snake_case.
  eg: os-getConsoleOutput -> get_console_output
      addTenantAccess -> add_tenant_access, removeTenantAccess ->
      remove_tenant_access

(3) Status codes
(3.1) Return 201 (Created) if a resource creation/update finishes before
returning a response.
  eg: create a keypair API: 200 -> 201
      create an agent API: 200 -> 201
      create an aggregate API: 200 -> 201
(3.2) Return 204 (No Content) if a resource deletion finishes before returning
a response.
  eg: delete a keypair API: 200 -> 204
      delete an agent API: 200 -> 204
      delete an aggregate API: 200 -> 204
(3.3) Return 202 (Accepted) if a request hasn't finished before returning a
response.
  eg: rescue a server API: 200 -> 202
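The conventions in (3.1)-(3.3) could be encoded in one place rather than hard-coded per API. A hypothetical helper (the function name and shape are assumptions, not Nova code):

```python
# Hypothetical helper encoding the status-code conventions in (3).
def response_code(operation, finished=True):
    """Pick the HTTP status the guidelines above suggest."""
    if not finished:
        return 202  # (3.3) Accepted: work continues after the response
    if operation == "create":
        return 201  # (3.1) Created: the resource exists when we respond
    if operation == "delete":
        return 204  # (3.2) No Content: the resource is gone, nothing to return
    return 200      # default for reads and other synchronous operations
```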

Any comments are welcome.

Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] libvirt boot parameters

2014-09-22 Thread Angelo Matarazzo

Hi all,
I need to add the option rebootTimeout when the instance boots.
If you use qemu-kvm, the boot parameter "reboot-timeout" allows a
virtual machine to retry booting if no bootable device is found:


# qemu-kvm --boot reboot-timeout=1000

Ref: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6-Beta/html-single/Virtualization_Host_Configuration_and_Guest_Installation_Guide/index.html


In OpenStack, a new boot parameter should be entered into the libvirt XML 
attributes where required:


<bios rebootTimeout='5000'/> under the <os> element in the libvirt.xml file.

My idea is to add an option to the "nova boot" command (changing the nova API, 
nova base, and python-novaclient), but I would like to know what you think 
about that.
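
For illustration, the bios rebootTimeout element Angelo describes could be generated like this (a sketch using ElementTree, not Nova's actual libvirt config classes, which use their own config-object layer):

```python
import xml.etree.ElementTree as ET

# Build a minimal <os> element carrying the BIOS reboot timeout,
# mirroring the libvirt domain XML schema (<os><bios rebootTimeout=.../></os>).
os_elem = ET.Element("os")
ET.SubElement(os_elem, "type").text = "hvm"
ET.SubElement(os_elem, "bios", {"rebootTimeout": "5000"})

# Prints the <os> fragment that would be embedded in libvirt.xml.
print(ET.tostring(os_elem, encoding="unicode"))
```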


Thank you beforehand
Angelo



Re: [openstack-dev] [nova] are we going to remove the novaclient v3 shell or what?

2014-09-22 Thread Kenichi Oomichi
 -Original Message-
 From: Day, Phil [mailto:philip@hp.com]
 Sent: Friday, September 19, 2014 7:08 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] are we going to remove the novaclient v3 
 shell or what?
 
 
  
   DevStack doesn't register v2.1 endpoint to keystone now, but we can use
   it with calling it directly.
   It is true that it is difficult to use v2.1 API now and we can check
   its behavior via v3 API instead.
 
  I posted a patch[1] for registering v2.1 endpoint to keystone, and I 
  confirmed
  --service-type option of current nova command works for it.
 
 Ah - I'd misunderstood where we'd got to with the v2.1 endpoint, thanks for 
 putting me straight.
 
 So with this in place then yes I agree we could stop fixing the v3 client.
 
 Since it's actually broken for even operations like boot, do we merge in the 
 changes I pushed this week so it can still do basic functions, or just go 
 straight to removing v3 from the client?

That would depend on what we will implement through v2.1+microversions.
If the interface of the microversions is almost the same as v3 API, current
v3 code of novaclient would be useful. Otherwise, it would be better to remove
the code from novaclient.
I posted a mail[1] about the direction of v2.1+microversions; I would be glad
if we could reach some consensus on the direction.

Thanks
Ken'ichi Ohmichi

---
[1]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046646.html




Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Flavio Percoco
On 09/22/2014 11:04 AM, Gordon Sim wrote:
 On 09/22/2014 08:00 AM, Flavio Percoco wrote:
 oslo messaging is highly tight to AMQP semantics whereas Zaqar is not.
 
 The API for oslo.messaging is actually quite high level and could easily
 be mapped to different messaging technologies.
 
 There is some terminology that comes from older versions of AMQP, e.g.
 the use of 'exchanges' as essentially a namespace, but I don't myself
 believe these tie the API in anyway to the original implementation from
 which the naming arose.
 
 In what way do you see olso.messaging as being tied to AMQP semantics?

I'm pretty sure it can be mapped to different messaging technologies
(there's a zmq driver). I could see myself experimenting on a Zaqar
driver for oslo.messaging in the future.

What I meant is that oslo.messaging is an rpc library and it depends on
a few very specific message delivery patterns that are somewhat tied to /
based on AMQP semantics. Implementing Zaqar's API in oslo.messaging would be
like trying to add an AMQP driver to Zaqar.

Probably highly tight to AMQP wasn't the best way to express this,
sorry about that.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [nova] libvirt boot parameters

2014-09-22 Thread Chen CH Ji
Do the following config options fit your purpose? I guess you want to
override the value through the boot command?

cfg.IntOpt("reboot_timeout",
           default=0,
           help="Automatically hard reboot an instance if it has been "
                "stuck in a rebooting state longer than N seconds. "
                "Set to 0 to disable."),
cfg.IntOpt("instance_build_timeout",
           default=0,
           help="Amount of time in seconds an instance can be in BUILD "
                "before going into ERROR status. "
                "Set to 0 to disable."),

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Angelo Matarazzo angelo.matara...@dektech.com.au
To: openstack-dev@lists.openstack.org
Date:   09/22/2014 05:30 PM
Subject:[openstack-dev] [nova] libvirt boot parameters



Hi all,
I need to add the option rebootTimeout when the instance boots.


If you use qemu-kvm, the boot parameter "reboot-timeout" allows a virtual
machine to retry booting if no bootable device is found:

# qemu-kvm --boot reboot-timeout=1000

Ref:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6-Beta/html-single/Virtualization_Host_Configuration_and_Guest_Installation_Guide/index.html


In Openstack a new boot parameter should be entered into Libvirt XML
attributes where required:

<bios rebootTimeout='5000'/> under the <os> element in the libvirt.xml file.

My idea is to add an option to the "nova boot" command (changing the nova API, nova
base, and python-novaclient), but I would like to know what you think about that.


Thank you beforehand
Angelo


Re: [openstack-dev] [nova] libvirt boot parameters

2014-09-22 Thread Daniel P. Berrange
On Mon, Sep 22, 2014 at 11:32:57AM +0200, Angelo Matarazzo wrote:
 Hi all,
 I need to add the option rebootTimeout when the instance boots.
 If you use qemu-kvm, the boot parameter "reboot-timeout" allows a virtual
 machine to retry booting if no bootable device is found:

What are the scenarios in which this is useful / required ?

 My idea is to add an option to the "nova boot" command (changing the nova API, nova
 base, and python-novaclient), but I would like to know what you think about that?

As a general rule we aim to avoid exposing such low level config parameters
via the APIs, since it is not in keeping with the cloud virtualization
model which is to avoid exposing hardware configuration details to the users.
It could be considered as a flavor extra specs parameter or image metadata
property, depending on the use cases for it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] No one replying on tempest issue?Please share your experience

2014-09-22 Thread Ken'ichi Ohmichi
Hi Nikesh,

 -Original Message-
 From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
 Sent: Saturday, September 20, 2014 9:49 PM
 To: openst...@lists.openstack.org; OpenStack Development Mailing List (not 
 for usage questions)
 Subject: Re: [Openstack] No one replying on tempest issue?Please share your 
 experience

 Still I did not get any reply.

Jay has already replied to this mail, please check the nova-compute
and cinder-volume log as he said[1].

[1]: 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

 Now i ran below command:
 ./run_tempest.sh 
 tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot

 and i am getting test failed.


 Actually, after analyzing tempest.log, I found that
 during creation of a volume from a snapshot, tearDownClass is called and it is 
 deleting the snapshot before the volume is created,
 and so my test fails.

I guess the failure you mentioned at the above is:

2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client
[req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request
(VolumesSnapshotTest:tearDownClass): 404 GET
http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6
0.029s

and

2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client
[req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request
(VolumesSnapshotTest:tearDownClass): 404 GET
http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b
0.034s

right?
If so, that is not a problem.
VolumesSnapshotTest creates two volumes, and its tearDownClass checks these
volumes' deletion by polling the volume status until 404 (NotFound) is returned [2].

[2]: 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128
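
The teardown behaviour Ken'ichi describes — GET the resource repeatedly until a 404 confirms deletion — is a standard wait loop. A generic sketch (not Tempest's actual code; `NotFound` stands in for the 404 exception a REST client would raise):

```python
import time

class NotFound(Exception):
    """Stand-in for the 404 exception a REST client raises."""

def wait_for_deletion(get_resource, timeout=60, interval=1.0):
    """Poll get_resource() until it raises NotFound (the 404 seen in the
    tearDownClass log lines above), or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            get_resource()
        except NotFound:
            return True        # resource is gone - deletion confirmed
        time.sleep(interval)
    return False               # still present - deletion timed out

# Fake client: the volume disappears on the third GET.
calls = {"n": 0}
def fake_get():
    calls["n"] += 1
    if calls["n"] >= 3:
        raise NotFound()

print(wait_for_deletion(fake_get, timeout=10, interval=0.01))  # True
```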

 I deployed a juno devstack setup for a cinder volume driver.
 I changed cinder.conf file and tempest.conf file for single backend and 
 restarted cinder services.
 Now i ran tempest test as below:
 /opt/stack/tempest/run_tempest.sh tempest.api.volume.test_volumes_snapshots

 I am getting below output:
  Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py, 
 line 176, in test_volume_from_snapshot
 snapshot = self.create_snapshot(self.volume_origin['id'])
   File /opt/stack/tempest/tempest/api/volume/base.py, line 112, in 
 create_snapshot
 'available')
   File /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, 
 line 126, in wait_for_snapshot_status
 value = self._get_snapshot_status(snapshot_id)
   File /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, 
 line 99, in _get_snapshot_status
 snapshot_id=snapshot_id)
 SnapshotBuildErrorException: Snapshot 6b1eb319-33ef-4357-987a-58eb15549520 
 failed to build and is in
 ERROR status

What happens if you run the same operations as Tempest does, by hand, on your
environment, like the following?

[1] $ cinder create 1
[2] $ cinder snapshot-create <id of the created volume at [1]>
[3] $ cinder create --snapshot-id <id of the created snapshot at [2]> 1
[4] $ cinder show <id of the created volume at [3]>

Please check whether the status of the volume created at [3] is "available" or not.

Thanks
Ken'ichi Ohmichi



Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Mark McLoughlin
Hey

On Thu, 2014-09-18 at 11:53 -0700, Monty Taylor wrote:
 Hey all,
 
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108

Lots of great stuff here, but too much to respond to everything in
detail.

I love the way you've framed this in terms of the needs of developers,
distributors, deployers and end users. I'd like to make sure we're
focused on tackling those places where we're failing these groups, so:


 - Developers

   I think we're catering pretty well to developers with the big tent
   concept of Programs. There's been some good discussion about how
   Programs could be better at embracing projects in their related area,
   and that would be great to pursue. But the general concept - of 
   endorsing and empowering teams of people collaborating in the
   OpenStack way - has a lot of legs, I think.

   I also think our release cadence does a pretty good job of serving 
   developers. We've talked many times about the benefit of it, and I'd 
   like to see it applied to more than just the server projects.

   OTOH, the integrated gate is straining, and a source of frustration 
   for everyone. You raise the question of whether everything currently 
   in the integrated release needs to be co-gated, and I totally agree 
   that needs re-visiting.


 - Distributors

   We may be doing a better job of catering to distributors than any 
   other group. For example, things like the release cadence, stable 
   branch and common set of dependencies works pretty well.

   The concept of an integrated release (with an incubation process) is
   great, because it nicely defines a set of stuff that distributors
   should ship. Certainly, life would be more difficult for distributors
   if there was a smaller set of projects in the release and a whole 
   bunch of other projects which are interesting to distro users, but 
   with an ambiguous level of commitment from our community. Right now, 
   our integration process has a huge amount of influence over what 
   gets shipped by distros, and that in turn serves distro users by 
   ensuring a greater level of commonality between distros.


 - Operators

   I think the feedback we've been getting over the past few cycles 
   suggests we are failing this group the most.

   Operators want to offer a compelling set of services to their users, 
   but they want those services to be stable, performant, and perhaps 
   most importantly, cost-effective. No operator wants to have to
   invest a huge amount of time in getting a new service running.

   You suggest a Production Ready tag. Absolutely - our graduation of 
   projects has been interpreted as meaning production ready, when 
   it's actually more useful as a signal to distros rather than 
   operators. Graduation does not necessarily imply that a service is
   ready for production, no matter how you define production.

   I'd like to think we could give more nuanced advice to operators than
   a simple tag, but perhaps the information gathering process that
   projects would need to go through to be awarded that tag would 
   uncover the more detailed information for operators.

   You could question whether the TC is the right body for this 
   process. How might it work if the User Committee owned this?

   There are many other ways we can and should help operators, 
   obviously, but this setting expectations is the aspect most 
   relevant to this discussion.


 - End users

   You're right that we don't pay sufficient attention to this group.
   For me, the highest priority challenge here is interoperability. 
   Particularly interoperability between public clouds.

   The only real interop effort to date you can point to is the 
   board-owned DefCore and RefStack efforts. The idea being that a
   trademark program with API testing requirements will focus minds on
   interoperability. I'd love us (as a community) to be making more
   rapid progress on interoperability, but at least there are now
   encouraging signs that we should make some definite progress soon.

   Your end-user focused concrete suggestions (#7-#10) are interesting,
   and I find myself thinking about how much of a positive effect on 
   interop each of them would have. For example, making our tools 
   multi-cloud aware would help encourage people to demand interop from 
   their providers. I also agree that end-user tools should support 
   older versions of our APIs, but don't think that necessarily implies 
   rolling releases.



So, if I was to pick the areas which I think would address our most
pressing challenges:

  1) Shrinking the integrated gate, and allowing per-project testing 
 strategies other than shoving every integrated project into the 
 gate.

  2) Giving more direction to operators about the readiness of our 
 projects for different use cases. A process around awarding 

[openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Day, Phil
Hi Folks,

I'd like to get some opinions on the use of pairs of notification messages for 
simple events.   I get that for complex operations on an instance (create, 
rebuild, etc) a start and end message are useful to help instrument progress 
and how long the operations took.However we also use this pattern for 
things like aggregate creation, which is just a single DB operation - and it 
strikes me as kind of overkill and probably not all that useful to any external 
system compared to a single ".create" event after the DB operation.

There is a change up for review to add notifications for service groups which 
is following this pattern (https://review.openstack.org/#/c/107954/) - the 
author isn't doing anything wrong in that they're just following that pattern, 
but it made me wonder if we shouldn't have some better guidance on when to use 
a single notification rather than a .start/.end pair.

Does anyone else have thoughts on this , or know of external systems that would 
break if we restricted .start and .end usage to long-lived instance operations ?

Phil



Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Zhipeng Huang
Great conversations here.

I'd like to echo Dean Troyer's comment on Suggestion 9: for multi-cloud
node pooling, we need standards. It'll make life easier when user tools
can be configured against a limited, standard set of rules instead of
numerous different rules per vendor. It is going to be a difficult task,
since this kind of thing is always hard to do, but some form of
inter-cloud standard has to be introduced and worked on so users could
actually have a resource pool.

Best
Regards

Zhipeng




-- 
Zhipeng Huang
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402
OpenStack, OpenDaylight, OpenCompute aficionado


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Daniel P. Berrange
On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:
 Hi Folks,
 
 I'd like to get some opinions on the use of pairs of notification
 messages for simple events.   I get that for complex operations on
 an instance (create, rebuild, etc) a start and end message are useful
 to help instrument progress and how long the operations took. However
 we also use this pattern for things like aggregate creation, which is
 just a single DB operation - and it strikes me as kind of overkill and
 probably not all that useful to any external system compared to a
 single event .create event after the DB operation.

A start + end pair is not solely useful for timing, but also potentially
detecting if it completed successfully. eg if you receive an end event
notification you know it has completed. That said, if this is a use case
we want to target, then ideally we'd have a third notification for the
failure case, so consumers don't have to wait for a timeout to detect errors.

 There is a change up for review to add notifications for service groups
 which is following this pattern (https://review.openstack.org/#/c/107954/)
 - the author isn't doing  anything wrong in that there just following that
 pattern, but it made me wonder if we shouldn't have some better guidance
 on when to use a single notification rather that a .start/.end pair.
 
 Does anyone else have thoughts on this , or know of external systems that
 would break if we restricted .start and .end usage to long-lived instance
 operations ?

I think we should aim to /always/ have 3 notifications using a pattern of

  try:
 ...notify start...

 ...do the work...

 ...notify end...
  except:
 ...notify abort...
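
Daniel's try/except pattern can be captured once as a context manager so call sites can't forget the abort path. A sketch under assumed names — `notify` and the `sent` list stand in for whatever notifier transport the service actually uses:

```python
from contextlib import contextmanager

sent = []  # stand-in for a real notifier transport

def notify(event):
    sent.append(event)

@contextmanager
def notified(operation):
    """Emit <op>.start, then <op>.end on success or <op>.abort on any
    exception - the three-notification pattern sketched above."""
    notify(operation + ".start")
    try:
        yield
        notify(operation + ".end")
    except Exception:
        notify(operation + ".abort")
        raise

with notified("aggregate.create"):
    pass  # ...do the work...

print(sent)  # ['aggregate.create.start', 'aggregate.create.end']
```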

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Day, Phil
Hi Daniel,


 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: 22 September 2014 12:24
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] - do we need .start and .end
 notifications in all cases ?
 
 On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:
  Hi Folks,
 
  I'd like to get some opinions on the use of pairs of notification
  messages for simple events.   I get that for complex operations on
  an instance (create, rebuild, etc) a start and end message are useful
  to help instrument progress and how long the operations took. However
  we also use this pattern for things like aggregate creation, which is
  just a single DB operation - and it strikes me as kind of overkill and
  probably not all that useful to any external system compared to a
  single event .create event after the DB operation.
 
 A start + end pair is not solely useful for timing, but also potentially 
 detecting
 if it completed successfully. eg if you receive an end event notification you
 know it has completed. That said, if this is a use case we want to target, 
 then
 ideally we'd have a third notification for this failure case, so consumers 
 don't
  have to wait for a timeout to detect error.
 

I'm just a tad worried that this sounds like it's starting to use notifications 
as a replacement for logging. If we did this for every CRUD operation on an 
object, don't we risk flooding the notification system?




Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Sandy Walsh
Hey Phil, (sorry for top-post, web client)

There's no firm rule for requiring .start/.end and I think your criteria 
defines it well: long-running transactions (or complex multi-step transactions).

The main motivator behind .start/.end code was .error notifications not getting 
generated in many cases. We had no idea where something was failing. Putting a 
.start before the db operation let us know "well, at least the service got the 
call".

For some operations like resize, migrate, etc., the .start/.end is good for 
auditing and billing. Although, we could do a better job by simply managing the 
launched_at, deleted_at times better.

Later, we found that by reviewing .start/.end deltas we were able to predict 
pending failures before timeouts actually occurred.
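
Sandy's point about deltas can be sketched concretely: given a stream of timestamped .start/.end events, flag operations that ran long and operations that started but never ended (the pending failures). The event format here is hypothetical, purely for illustration:

```python
def find_slow_operations(events, threshold):
    """events: (timestamp, event_name) pairs where names end in .start/.end.
    Returns (ops whose start->end delta exceeded threshold,
             ops that started but never emitted .end)."""
    starts, slow = {}, []
    for ts, name in events:
        op, _, phase = name.rpartition(".")
        if phase == "start":
            starts[op] = ts
        elif phase == "end" and op in starts:
            if ts - starts.pop(op) > threshold:
                slow.append(op)
    return slow, list(starts)   # slow ops, and ops still outstanding

events = [(0, "resize.start"), (1, "migrate.start"),
          (9, "resize.end")]
print(find_slow_operations(events, threshold=5))
# (['resize'], ['migrate'])  - resize took 9s; migrate never ended
```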

But no, they're not mandatory and a single notification should certainly be 
used for simple operations.

Cheers!
-S



From: Day, Phil [philip@hp.com]
Sent: Monday, September 22, 2014 8:03 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org)
Subject: [openstack-dev] [Nova] - do we need .start and .end notifications in 
all cases ?

Hi Folks,

I’d like to get some opinions on the use of pairs of notification messages for 
simple events.   I get that for complex operations on an instance (create, 
rebuild, etc) a start and end message are useful to help instrument progress 
and how long the operations took.However we also use this pattern for 
things like aggregate creation, which is just a single DB operation – and it 
strikes me as kind of overkill and probably not all that useful to any external 
system compared to a single “.create” event after the DB operation.

There is a change up for review to add notifications for service groups which 
is following this pattern (https://review.openstack.org/#/c/107954/) – the 
author isn’t doing anything wrong in that they’re just following that pattern, 
but it made me wonder if we shouldn’t have some better guidance on when to use 
a single notification rather than a .start/.end pair.

Does anyone else have thoughts on this , or know of external systems that would 
break if we restricted .start and .end usage to long-lived instance operations ?

Phil



Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Sandy Walsh
+1, the high-level code should deal with top-level exceptions and generate 
.error notifications (though it's a little spotty). Ideally we shouldn't need 
three events for simple operations. 

The use of .start/.end vs. logging is a bit of a blurry line. At its heart a 
notification should provide context around an operation: What happened? Who did 
it? Who did they do it to? Where did it happen? Where is it going to? etc.  
Stuff that could be used for auditing/billing. That's their main purpose.

But for mission critical operations (create instance, etc.) notifications give 
us a hot-line to god: "Something is wrong!" vs. having to pore through log 
files looking for problems. Real-time. Low latency.

I think it's a case-by-case judgement call which should be used.




From: Day, Phil [philip@hp.com]

I'm just a tad worried that this sounds like it's starting to use notifications 
as a replacement for logging. If we did this for every CRUD operation on an 
object, don't we risk flooding the notification system?




Re: [openstack-dev] [Horizon] PTL Candidacy

2014-09-22 Thread Tristan Cacqueray
confirmed

On 22/09/14 04:59 AM, David Lyle wrote:
 I would like to announce my candidacy for Horizon PTL.
 
 I've been actively contributing to Horizon since the Grizzly cycle and
 I've been a core contributor since the Havana cycle. For the last two
 cycles, I have had the pleasure of serving as PTL.
 
 Horizon is in the midst of some large transitions that we've been laying
 the foundation for in the past two releases. I would like to continue to
 help guide those changes through to completion.
 
 We've made progress on splitting the horizon repo into logical parts. The
 vendored 3rd party JavaScript libraries have been extracted. This was a
 major hurdle to completing the separation. Finishing the split will
 improve maintainability and extensibility. I believe we can complete this
 in the Kilo cycle.
 
 We've also continued the transition to leveraging AngularJS to improve
 usability and providing richer client experiences. I would like to see
 this effort accelerate in Kilo. But, I would like to see it driven by a
 clear unified strategy rather than numerous uncoordinated efforts. My goal
 is to leave the Paris Summit with a plan we can work toward together. A
 richer client side approach is key to addressing many of the usability
 shortcomings in Horizon.
 
 We successfully integrated the Sahara UI components into Horizon in Juno.
 And in Kilo, we'll look to integrate Ironic support. In Kilo, there is
 also potential for wider integration requirements on Horizon that may need
 greater attention and likely a revised repo strategy.
 
 Finally, Horizon like most of OpenStack is benefiting from a rapidly
 growing contributor base. Like other projects, we aren't immune to the
 stresses such growth creates. We've started taking steps toward improving
 the blueprint process as well as changes to how we manage bugs. We need to
 continue to refine these efforts to improve the overall direction of the
 project.
 
 Driven by a terrific community of contributors, reviewers and users,
 Horizon has made great strides in Juno. I look forward to seeing what this
 community can accomplish in Kilo. As PTL, my job is to enable this
 community to continue to flourish.
 
 Thank you,
 David Lyle
 
 
 
 






Re: [openstack-dev] [Zaqar] PTL Candidacy

2014-09-22 Thread Tristan Cacqueray
confirmed

On 22/09/14 02:49 AM, Flavio Percoco wrote:
 Greetings,
 
 I'd like to announce my candidacy for the Message Program PTL position.
 
 I've been part of this program since its inception. Along with a really
 amazing team, I've been working on changing the openstack messaging
 service reality for the last 2 years.
 
 Zaqar has a set of features that I consider essential for different use
 cases. However, some of this features are very specific and could be
 supported differently.
 
 Therefore, my focus for the next cycle will be working on shrinking
 Zaqar's feature set down to what's really essential for most of the use
 cases.
 
 In addition to the above, I'd like to focus on expanding Zaqar's
 adoption by starting from the world it belongs to, OpenStack. We've
 identified a set of projects and use-cases that we need/can cover. I
 believe working together with those projects will help Zaqar to improve
 and the rest of the ecosystem.
 
 Last but not least, I'm planning to dedicate extra time into supporting
 our development community and make sure Zaqar's mission, vision, goals
 and internals are clear to everyone. I believe we're all working in a
 community full of very talented folks and the feedback from each one of
 them has an immense value for the project, team and the rest of the
 ecosystem.
 
 Thanks for reading,
 Flavio
 
 






[openstack-dev] [Sahara][Doc] Updating documentation for overview; Needs changing image

2014-09-22 Thread Sharan Kumar M
Hi all,

The bug on updating documentation for overview / details
https://bugs.launchpad.net/sahara/+bug/1350063 also requires the changing
of image openstack-interop.png. So is there any specific tool used for
creating the image? Since I am working on fixing this bug, I thought I
could also update the image.

Thanks,
Sharan Kumar M


Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Gordon Sim

On 09/22/2014 10:56 AM, Flavio Percoco wrote:

What I meant is that oslo.messaging is an rpc library and it depends on
few very specific message delivery patterns that are somehow tight/based
on AMQP semantics.


RPC at its core is the request-response pattern, which is directly 
supported by many messaging technologies and implementable over almost 
any form of communication (it's mostly just convention for addressing).


In addition the oslo.messaging model offers the ability to invoke on one 
of a group of servers, which is the task-queue pattern and again is very 
general.


One-way messages are also supported either unicast (to a specific 
server) or broadcast (to all servers in a group).


I don't think any of these patterns are in any way tightly bound or 
specific to AMQP.
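
The three patterns Gordon lists — request-response, invoking one of a group of servers (task-queue), and one-way broadcast — can be modelled with plain in-process queues, which underlines how general they are. This is a toy illustration only, not oslo.messaging's API:

```python
import queue

class ServerGroup:
    """Toy model of the patterns above: request-response (call),
    task-queue (cast to one of a group), broadcast (fanout)."""
    def __init__(self, names):
        self.task_q = queue.Queue()   # shared queue: one server takes each message
        self.inbox = {n: queue.Queue() for n in names}  # per-server, for broadcast

    def call(self, msg):
        """Request-response: enqueue the request plus a reply queue,
        then wait for the answer."""
        reply_q = queue.Queue()
        self.task_q.put((msg, reply_q))
        self._serve_one("s1")   # stand-in for a server thread taking the message
        return reply_q.get()

    def cast(self, msg):
        """One-way task-queue send: whichever server next reads task_q gets it."""
        self.task_q.put((msg, None))

    def fanout(self, msg):
        """One-way broadcast to every server in the group."""
        for q in self.inbox.values():
            q.put(msg)

    def _serve_one(self, name):
        msg, reply_q = self.task_q.get()
        if reply_q is not None:
            reply_q.put("%s handled %s" % (name, msg))

g = ServerGroup(["s1", "s2"])
print(g.call("ping"))       # s1 handled ping
g.fanout("refresh")
print(g.inbox["s2"].get())  # refresh
```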


In fact Zaqar offers (mostly) the same patterns (there is no way to 
indicate the 'address' to reply to defined by the spec itself, but that 
could be easily added).



Implementing Zaqar's API in oslo.messaging would be
like trying to add an AMQP driver to Zaqar.


I don't think it would be. AMQP and Zaqar are wire protocols offering 
very similar levels of abstraction (sending, consuming, browsing, 
acknowledging messages). By contrast oslo.messaging is a language-level 
API, generally at a slightly higher level of abstraction, mapping method 
invocation to particular messaging patterns between processes.


Implementing Zaqar's model over oslo.messaging would be like 
implementing AMQP's model over oslo.messaging, i.e. reimplementing a 
general-purpose message-oriented abstraction on top of an API intended 
to hide such a model behind method invocation. Though technically 
possible, it seems a little pointless (and I don't think anyone is 
suggesting otherwise).


Zaqar drivers are really providing different implementations of 
(distributed) message storage. AMQP (and I'm talking primarily about 
version 1.0) is not intended for that purpose. It's intended to control 
the transfer of messages between processes. Exposing AMQP as an 
alternative interface for publishing/receiving/consuming messages 
through Zaqar on the other hand would be simple.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-22 Thread Mykola Grygoriev
Dear Infra team,

We have set up a Coraid CI system to test the Coraid Cinder driver. It is
located here - http://38.111.159.9:8080/job/CoraidCI/

We have met all the requirements for a Third Party CI system, listed here -
http://ci.openstack.org/third_party.html#requirements

Please add voting rights for the gerrit user coraid-ci. Currently it is
working in silent mode.

Please look at the Coraid CI system and give us a response.

On Thu, Sep 18, 2014 at 1:51 PM, Mykola Grygoriev mgrygor...@mirantis.com
wrote:

 Dear community members,

 Please have a look at the Coraid CI system -
 http://38.111.159.9:8080/job/CoraidCI/

 We have met all the requirements for a Third Party CI system, listed here -
 http://ci.openstack.org/third_party.html#requirements

 Please look at the Coraid third-party system one more time and tell us
 what we have to add or improve in order to get voting rights for the gerrit
 user coraid-ci.

 On Fri, Sep 12, 2014 at 3:15 PM, Roman Bogorodskiy 
 rbogorods...@mirantis.com wrote:

 Hi,

 Mykola has some problems sending emails to the list, so he asked me to
 post a response, here it goes:

 ---
 Remy, I have improved Coraid CI system and added logs of all components of
 devstack. Please have a look:

 http://38.111.159.9:8080/job/Coraid_CI/164/

 According to the requirements from
 http://ci.openstack.org/third_party.html#requesting-a-service-account ,
 Gerrit plugin from Jenkins should be given the following options:

 Successful: gerrit approve <CHANGE>,<PATCHSET> --message 'Build Successful <BUILDS_STATS>' --verified <VERIFIED> --code-review <CODE_REVIEW>
 Failed: gerrit approve <CHANGE>,<PATCHSET> --message 'Build Failed <BUILDS_STATS>' --verified <VERIFIED> --code-review <CODE_REVIEW>
 Unstable: gerrit approve <CHANGE>,<PATCHSET> --message 'Build Unstable <BUILDS_STATS>' --verified <VERIFIED> --code-review <CODE_REVIEW>

 I configured the gerrit plugin this way, so it sends the following comment
 after checking a patchset or a comment containing recheck. For example,
 https://review.openstack.org/#/c/120907/

 Patch Set 1:

 Build Successful

 http://38.111.159.9:8080/job/Coraid_CI/164/ : SUCCESS


 All logs are on this page. They are there as artifacts.

  I took a quick look and I don’t see which test cases are being run?
 We test the Coraid Cinder driver with the standard Tempest tests using the
 ./driver_certs/cinder_driver_cert.sh script. The test cases are in the log
 of the job.

 Please look at the Coraid third-party system one more time and tell us
 what we have to add or improve in order to get voting rights for the gerrit
 user coraid-ci.

 Also I have set gerrit plugin on our Jenkins to the silent mode as you
 suggested.

 Thank you in advance.
 

 On Fri, Sep 5, 2014 at 7:34 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

 -1 from me (non-cinder core)

 It's very nice to see you're making progress. I, personally, was very
 confused about voting.
 Here's my understanding: voting is the ability to provide an
 official +1/-1 vote in the gerrit system.

 I don't see a stable history [1]. Before requesting voting, you should
 enable your system on the cinder project itself.
 Initially, you should disable ALL gerrit comments, i.e. run in silent
 mode, per request from cinder PTL [2]. Once stable there, you can enable
 gerrit comments. At this point, everyone can see pass/fail comments with a
 vote=0.
 Once stable there on real patches, you can request voting again, where
 the pass/fail would vote +1/-1.

 Ramy
 [1] http://38.111.159.9:8080/job/Coraid_CI/35/console
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/043876.html


 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Friday, September 05, 2014 7:55 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Infra][Cinder] Coraid CI system

 +1 from me (Cinder core)


 On 5 September 2014 15:09, Mykola Grygoriev mgrygor...@mirantis.com
 wrote:
  Hi,
 
  My name is Mykola Grygoriev and I'm an engineer who is currently working on
  deploying a 3rd-party CI for the Coraid Cinder driver.
 
  Following instructions on
 
  http://ci.openstack.org/third_party.html#requesting-a-service-account
 
  we are asking for the gerrit CI account (coraid-ci) to be added to the
  Voting Third-Party CI Gerrit group.
 
 
 
  We have already added description of Coraid CI system to wiki page -
  https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI
 
  We used openstack-dev/sandbox project to test current CI
  infrastructure with OpenStack Gerrit system. Please find our history
 there.
 
  Please have a look at the results of the Coraid CI system. It currently
  takes updates from the openstack/cinder project:
  http://38.111.159.9:8080/job/Coraid_CI/32/
  http://38.111.159.9:8080/job/Coraid_CI/33/
 
  Thank you in advance.
 
  --
  Best regards,
  Mykola Grygoriev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  

Re: [openstack-dev] [Sahara][Doc] Updating documentation for overview; Needs changing image

2014-09-22 Thread Dmitry Mescheryakov
Hello,

I used Google Docs to create the initial image. If you want to edit that
one, copy the doc [1] to your drive and edit it. It is not the latest
version of the image, but the only difference is that this one has the very
first project name, EHO, in place of Sahara.

Thanks,

Dmitry

[1]
https://docs.google.com/a/mirantis.com/drawings/d/1kCahSrGI0OvPeQBcqjX9GV54GZYBZpt_W4nOt5nsKB8/edit

2014-09-22 16:29 GMT+04:00 Sharan Kumar M sharan.monikan...@gmail.com:

 Hi all,

 The bug on updating documentation for overview / details
 https://bugs.launchpad.net/sahara/+bug/1350063 also requires the changing
 of image openstack-interop.png. So is there any specific tool used for
 creating the image? Since I am working on fixing this bug, I thought I
 could also update the image.

 Thanks,
 Sharan Kumar M

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Flavio Percoco
On 09/22/2014 02:35 PM, Gordon Sim wrote:
 On 09/22/2014 10:56 AM, Flavio Percoco wrote:
  What I meant is that oslo.messaging is an rpc library and it depends on a
  few very specific message delivery patterns that are somewhat tied to/based
  on AMQP semantics.
 
  RPC at its core is the request-response pattern, which is directly
  supported by many messaging technologies and implementable over almost
  any form of communication (it's mostly just a convention for addressing).
 
 In addition the oslo.messaging model offers the ability to invoke on one
 of a group of servers, which is the task-queue pattern and again is very
 general.
 
 One-way messages are also supported either unicast (to a specific
 server) or broadcast (to all servers in a group).
 
 I don't think any of these patterns are in any way tightly bound or
 specific to AMQP.
 
  In fact Zaqar offers (mostly) the same patterns (there is no way, defined
  by the spec itself, to indicate the 'address' to reply to, but that
  could easily be added).
 
 Implementing Zaqar's API in oslo.messaging would be
 like trying to add an AMQP driver to Zaqar.
 
 I don't think it would be. AMQP and Zaqar are wire protocols offering
 very similar levels of abstraction (sending, consuming, browsing,
 acknowledging messages). By contrast oslo.messaging is a language level
 API, generally at a slightly higher level of abstraction, mapping method
 invocation to particular messaging patterns between processes.
 
  Implementing Zaqar's model over oslo.messaging would be like
  implementing AMQP's model over oslo.messaging, i.e. reimplementing a
  general-purpose message-oriented abstraction on top of an API intended
  to hide such a model behind method invocation. Though technically
  possible, it seems a little pointless (and I don't think anyone is
  suggesting otherwise).

I was referring to the messaging/storage technologies both projects
target, which IMHO, are different.

 
 Zaqar drivers are really providing different implementations of
 (distributed) message storage. AMQP (and I'm talking primarily about
 version 1.0) is not intended for that purpose. It's intended to control
 the transfer of messages between processes. Exposing AMQP as an
 alternative interface for publishing/receiving/consuming messages
 through Zaqar on the other hand would be simple.

Somehow, I keep failing at explaining things here.

The point is that IMHO, it doesn't make sense to merge both projects
because they both have different goals and purposes.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][db] End of meetings for neutron-db

2014-09-22 Thread Henry Gessau
https://wiki.openstack.org/wiki/Meetings/NeutronDB

The work on healing and reorganizing Neutron DB migrations is complete, and so
we will no longer hold meetings.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Troy Toman
FWIW, I think this is a great approach to evolving our thinking about the projects 
and ecosystem around OpenStack. I’m far too removed these days from the details 
of the day-to-day running of the programs and projects to comment on specifics. 
But I’ve long felt a need to go beyond the simple core + everyone else 
thinking. This is moving in the right direction IMHO.

Troy

 On Sep 22, 2014, at 5:49 AM, Mark McLoughlin mar...@redhat.com wrote:
 
 Hey
 
 On Thu, 2014-09-18 at 11:53 -0700, Monty Taylor wrote:
 Hey all,
 
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108
 
 Lots of great stuff here, but too much to respond to everything in
 detail.
 
 I love the way you've framed this in terms of the needs of developers,
 distributors, deployers and end users. I'd like to make sure we're
 focused on tackling those places where we're failing these groups, so:
 
 
 - Developers
 
   I think we're catering pretty well to developers with the big tent
   concept of Programs. There's been some good discussion about how
   Programs could be better at embracing projects in their related area,
   and that would be great to pursue. But the general concept - of 
   endorsing and empowering teams of people collaborating in the
   OpenStack way - has a lot of legs, I think.
 
   I also think our release cadence does a pretty good job of serving 
   developers. We've talked many times about the benefit of it, and I'd 
   like to see it applied to more than just the server projects.
 
   OTOH, the integrated gate is straining, and a source of frustration 
   for everyone. You raise the question of whether everything currently 
   in the integrated release needs to be co-gated, and I totally agree 
   that needs re-visiting.
 
 
 - Distributors
 
   We may be doing a better job of catering to distributors than any 
   other group. For example, things like the release cadence, stable 
   branch and common set of dependencies works pretty well.
 
    The concept of an integrated release (with an incubation process) is
   great, because it nicely defines a set of stuff that distributors
   should ship. Certainly, life would be more difficult for distributors
   if there was a smaller set of projects in the release and a whole 
   bunch of other projects which are interesting to distro users, but 
   with an ambiguous level of commitment from our community. Right now, 
   our integration process has a huge amount of influence over what 
   gets shipped by distros, and that in turn serves distro users by 
   ensuring a greater level of commonality between distros.
 
 
 - Operators
 
   I think the feedback we've been getting over the past few cycles 
   suggests we are failing this group the most.
 
   Operators want to offer a compelling set of services to their users, 
   but they want those services to be stable, performant, and perhaps 
   most importantly, cost-effective. No operator wants to have to
   invest a huge amount of time in getting a new service running.
 
   You suggest a Production Ready tag. Absolutely - our graduation of 
   projects has been interpreted as meaning production ready, when 
   it's actually more useful as a signal to distros rather than 
   operators. Graduation does not necessarily imply that a service is
   ready for production, no matter how you define production.
 
   I'd like to think we could give more nuanced advice to operators than
   a simple tag, but perhaps the information gathering process that
   projects would need to go through to be awarded that tag would 
   uncover the more detailed information for operators.
 
   You could question whether the TC is the right body for this 
   process. How might it work if the User Committee owned this?
 
   There are many other ways we can and should help operators, 
   obviously, but this setting expectations is the aspect most 
   relevant to this discussion.
 
 
 - End users
 
   You're right that we don't pay sufficient attention to this group.
   For me, the highest priority challenge here is interoperability. 
   Particularly interoperability between public clouds.
 
    The only real interop efforts to date you can point to are the 
    board-owned DefCore and RefStack efforts. The idea being that a
    trademark program with API testing requirements will focus minds on
    interoperability. I'd love us (as a community) to be making more
    rapid progress on interoperability, but at least there are now
    encouraging signs that we should make some definite progress soon.
 
   Your end-user focused concrete suggestions (#7-#10) are interesting,
   and I find myself thinking about how much of a positive effect on 
   interop each of them would have. For example, making our tools 
   multi-cloud aware would help encourage people to demand interop from 
   their providers. I also agree that end-user tools should 

Re: [openstack-dev] [Neutron][db] End of meetings for neutron-db

2014-09-22 Thread Akihiro Motoki
I updated the corresponding entry in https://wiki.openstack.org/wiki/Meetings .

Thanks for the great work of the team (and the folks assisting them).
I believe this kind of meeting, which focuses on a specific topic, was
really useful.

Thanks,
Akihiro

On Mon, Sep 22, 2014 at 10:26 PM, Henry Gessau ges...@cisco.com wrote:
 https://wiki.openstack.org/wiki/Meetings/NeutronDB

 The work on healing and reorganizing Neutron DB migrations is complete, and so
 we will no longer hold meetings.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Akihiro Motoki amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-22 Thread Gordon Sim

On 09/19/2014 09:13 PM, Zane Bitter wrote:

SQS offers very, very limited guarantees, and it's clear that the reason
for that is to make it massively, massively scalable in the way that
e.g. S3 is scalable while also remaining comparably durable (S3 is
supposedly designed for 11 nines, BTW).

Zaqar, meanwhile, seems to be promising the world in terms of
guarantees. (And then taking it away in the fine print, where it says
that the operator can disregard many of them, potentially without the
user's knowledge.)

On the other hand, IIUC Zaqar does in fact have a sharding feature
(Pools) which is its answer to the massive scaling question.


There are different dimensions to the scaling problem.

As I understand it, pools don't help scale a given queue, since all the 
messages for that queue must be in the same pool. At present, traffic 
through different Zaqar queues forms essentially orthogonal 
streams. Pooling can help scale the number of such orthogonal streams, 
but to be honest, that's the easier part of the problem.


There is also the possibility of using the sharding capabilities of the 
underlying storage. But the pattern of use will determine how effective 
that can be.


So for example, on the ordering question: if order is defined by a 
single sequence number held in the database and atomically incremented 
for every message published, the database's sharding is not likely 
to help in scaling the number of concurrent publications.
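[Editorial note: a toy model of that trade-off, in invented code that is not Zaqar's. A single atomically incremented marker gives a total order but makes every publisher contend on one counter; per-shard counters remove the bottleneck at the cost of any cross-shard ordering.]

```python
import itertools
import threading


class TotallyOrderedStore:
    """Global sequence: total order across shards, one contention point."""

    def __init__(self, n_shards=2):
        self._seq = itertools.count()
        self._lock = threading.Lock()      # serializes *all* publishers
        self.shards = [[] for _ in range(n_shards)]

    def publish(self, body):
        with self._lock:
            marker = next(self._seq)       # the global ordering bottleneck
        self.shards[marker % len(self.shards)].append((marker, body))
        return marker


class PerShardStore:
    """Per-shard sequences: publishers on different shards never contend,
    but consumers get no ordering guarantee across shards."""

    def __init__(self, n_shards=2):
        self.shards = [{'seq': itertools.count(), 'msgs': []}
                       for _ in range(n_shards)]

    def publish(self, shard_id, body):
        shard = self.shards[shard_id]
        marker = next(shard['seq'])        # orders this shard only
        shard['msgs'].append((marker, body))
        return marker
```

The choice between the two shapes is exactly the design question raised here: it is not a datastore policy tweak but a different internal structure for the queue.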


Though sharding would allow scaling the total number of messages on the 
queue (by distributing them over multiple shards), the total ordering of 
those messages reduces its effectiveness in scaling the number of 
concurrent getters (e.g. the concurrent subscribers in pub-sub), since 
they will all be getting the messages in exactly the same order.


Strict ordering impacts the competing consumers case also (and is in my 
opinion of limited value as a guarantee anyway). At any given time, the 
head of the queue is in one shard, and all concurrent claim requests 
will contend for messages in that same shard. Though the unsuccessful 
claimants may then move to another shard as the head moves, they will 
all again try to access the messages in the same order.


So if Zaqar's goal is to scale the number of orthogonal queues, and the 
number of messages held at any time within these, the pooling facility 
and any sharding capability in the underlying store for a pool would 
likely be effective even with the strict ordering guarantee.


If scaling the number of communicants on a given communication channel 
is a goal however, then strict ordering may hamper that. If it does, it 
seems to me that this is not just a policy tweak on the underlying 
datastore to choose the desired balance between ordering and scale, but 
a more fundamental question on the internal structure of the queue 
implementation built on top of the datastore.


I also get the impression, perhaps wrongly, that providing the strict 
ordering guarantee wasn't necessarily an explicit requirement, but was 
simply a property of the underlying implementation(?).


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-09-22 Thread Anne Gentle
On Thu, Aug 28, 2014 at 12:22 PM, Richard Woo richardwoo2...@gmail.com
wrote:

 I have another question about the incubator proposal, regarding the CLI and
 GUI: do we imply that the incubator feature will need to branch
 python-neutronclient, Horizon, and/or Nova (if changes are needed)?




This is as good a place for the docs perspective as any on this thread.

I think it's great to increase your dev velocity, but in the proposal I
need to see a well-thought-out plan for when documentation should be on
docs.openstack.org for deployers and users and when documentation should be
in the API reference page at http://developer.openstack.org/api-ref.html. A
section addressing the timing and requirements would be sufficient.

The docs affected are:
CLI Reference http://docs.openstack.org/cli-reference/content/
Config Reference
http://docs.openstack.org/icehouse/config-reference/content/
User Guide http://docs.openstack.org/user-guide/content/
Admin User Guide http://docs.openstack.org/user-guide-admin/content/
Cloud Admin User Guide http://docs.openstack.org/admin-guide-cloud/content/
API Reference http://developer.openstack.org/api-ref.html

I won't argue that the Install Guide should be included as these items
aren't exactly happy path quite yet.

Also in the wiki page, I see a line saying "link to commits in private
trees is acceptable" -- is it really? How would readers get to it?

Thanks,
Anne



 On Tue, Aug 26, 2014 at 7:09 PM, James E. Blair cor...@inaugust.com
 wrote:

 Hi,

 After reading https://wiki.openstack.org/wiki/Network/Incubator I have
 some thoughts about the proposed workflow.

 We have quite a bit of experience and some good tools around splitting
 code out of projects and into new projects.  But we don't generally do a
 lot of importing code into projects.  We've done this once, to my
 recollection, in a way that preserved history, and that was with the
 switch to keystone-lite.

 It wasn't easy; it's major git surgery and would require significant
 infra-team involvement any time we wanted to do it.

 However, reading the proposal, it occurred to me that it's pretty clear
 that we expect these tools to be able to operate outside of the Neutron
 project itself, to even be releasable on their own.  Why not just stick
 with that?  In other words, the goal of this process should be to create
 separate projects with their own development lifecycle that will
 continue indefinitely, rather than expecting the code itself to merge
 into the neutron repo.

 This has advantages in simplifying workflow and making it more
 consistent.  Plus it builds on known integration mechanisms like APIs
 and python project versions.

 But more importantly, it helps scale the neutron project itself.  I
 think that a focused neutron core upon which projects like these can
 build on in a reliable fashion would be ideal.

 -Jim

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Geoff O'Callaghan
On 22/09/2014 10:01 PM, Zane Bitter zbit...@redhat.com wrote:

 On 20/09/14 04:17, Geoff O'Callaghan wrote:

 Hi all,
  I'm just trying to understand the messaging strategy in OpenStack. It
  seems we have at least 2 messaging layers:

  oslo.messaging and Zaqar. Can someone explain to me why there are two?

 Is there a plan to consolidate?


 I'm trying to understand the database strategy in OpenStack. It seems we
have at least 2 database layers - sqlalchemy and Trove.

 Can anyone explain to me why there are two?


 Is there a plan to consolidate?
 /sarcasm


So the answer is because we can ;)  Not a great answer, but an answer
nonetheless.  :)

That being said, I'm not sure why a well-constructed Zaqar with an rpc
interface couldn't meet the requirements of oslo.messaging and much
more. It seems I need to dig some more.

Thanks all for taking the time to reply.

Geoff
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Pipes

On 09/22/2014 07:24 AM, Daniel P. Berrange wrote:

On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:

Hi Folks,

I'd like to get some opinions on the use of pairs of notification
messages for simple events. I get that for complex operations on
an instance (create, rebuild, etc.) a start and end message are useful
to help instrument progress and how long the operations took. However,
we also use this pattern for things like aggregate creation, which is
just a single DB operation - and it strikes me as kind of overkill and
probably not all that useful to any external system, compared to a
single .create event after the DB operation.


A start + end pair is not solely useful for timing, but also potentially
for detecting whether an operation completed successfully, e.g. if you
receive an end event notification you know it has completed. That said, if
this is a use case we want to target, then ideally we'd have a third
notification for the failure case, so consumers don't have to wait for a
timeout to detect errors.


There is a change up for review to add notifications for service groups
which follows this pattern (https://review.openstack.org/#/c/107954/)
- the author isn't doing anything wrong in that they're just following that
pattern, but it made me wonder whether we shouldn't have some better
guidance on when to use a single notification rather than a .start/.end pair.

Does anyone else have thoughts on this, or know of external systems that
would break if we restricted .start and .end usage to long-lived instance
operations?


I think we should aim to /always/ have 3 notifications using a pattern of

   try:
  ...notify start...

  ...do the work...

  ...notify end...
   except:
  ...notify abort...
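[Editorial note: that try/except pattern fits naturally in a context manager. A hedged sketch follows — the notify() helper and event names are stand-ins for illustration, not Nova's actual notifier API.]

```python
import contextlib

EVENTS = []  # stand-in sink; a real notifier would publish to the message bus


def notify(event_type, payload):
    EVENTS.append((event_type, payload))


@contextlib.contextmanager
def notified(operation, payload):
    """Emit <operation>.start, then .end on success or .error on failure."""
    notify(operation + '.start', payload)
    try:
        yield
    except Exception as exc:
        notify(operation + '.error', dict(payload, reason=str(exc)))
        raise
    notify(operation + '.end', payload)


# usage: the aggregate-creation example from above
with notified('aggregate.create', {'name': 'agg1'}):
    pass  # ...do the work...
```

Wrapping every notified operation this way keeps the three-event contract in one place instead of relying on each call site to remember the failure case.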


Precisely my viewpoint as well. Unless we standardize on the above, our 
notifications are less than useful, since they will be open to 
interpretation by the consumer as to what precisely they mean (and the 
consumer will need to go looking into the source code to determine when 
an event actually occurred...)


Smells like a blueprint to me. Anyone have objections to me writing one 
up for Kilo?


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Pipes

On 09/22/2014 07:35 AM, Day, Phil wrote:

I'm just a tad worried that this sounds like it's starting to use
notifications as a replacement for logging. If we did this for
every CRUD operation on an object, don't we risk flooding the
notification system?


Notifications of this sort aren't a replacement for logging. In fact, 
you'll notice that in many cases, a notification is sent at the same 
time a particular logging event is written to the log streams. The 
difference is that logging isn't consumable by the same systems that 
notifications are. They serve a different audience.


Both need to be standardized and cleaned up, though! :)

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Dolph Mathews
Dearest stackers and [key]stoners,

With the PTL candidacies officially open for Kilo, I'm going to take the
opportunity to announce that I won't be running again for the position.

I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno. There
was a perceived increase in stability [citation needed], which was one of
my foremost goals. We primarily achieved that by improving the
communication between developers which allowed developers to share their
intent early and often (by way of API designs and specs). As a result, we
had a lot more collaboration and a great working knowledge in the community
when it came time for bug fixes. I also think we raised the bar for user
experience, especially by way of reasonable defaults, strong documentation,
and effective error messages. I'm consistently told that we have the best
out-of-the-box experience of any OpenStack service. Well done!

I'll still be involved in OpenStack, and I'm super confident in our
incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
helping other developers be as productive as possible, and intend to
continue doing exactly that.

Keep hacking responsibly,

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-22 Thread Anne Gentle
On Mon, Sep 22, 2014 at 4:29 AM, Kenichi Oomichi oomi...@mxs.nes.nec.co.jp
wrote:

 Hi Alex,

 Thank you for doing this.

  -Original Message-
  From: Alex Xu [mailto:x...@linux.vnet.ibm.com]
  Sent: Friday, September 19, 2014 3:40 PM
  To: OpenStack Development Mailing List
  Subject: [openstack-dev] [Nova] Some ideas for micro-version
 implementation
 
   Close to Kilo, it is time to think about what's next for the Nova API.
   In Kilo, we will continue developing the important micro-version feature.
 
   In the previous v2-on-v3 proposal, some implementations could be
   used for micro-versions.
   (https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst)
   But in the end, those implementations were considered too complex.
 
   So I'm trying to find a simpler implementation and solution for
   micro-versions.
 
   I wrote down some ideas in a blog post at:
 
   For those ideas I have also already done some PoCs, which you can find
   in the blog post.
 
   As discussed in the Nova API meeting, we want to bring this up on the
   mailing list for discussion. Hopefully we can get more ideas and options
   from all developers.
 
   We will appreciate any comments and suggestions!


I would greatly appreciate this style guide being finalized for
documentation purposes as well. Thanks for starting this write-up. I'd be
happy to write it up on a wiki page while we reach agreement - would that be
helpful?



 Before discussing how to implement, I'd like to consider what we should
 implement. IIUC, the purpose of the v3 API was to make the API consistent
 via backwards-incompatible changes. Through the huge discussion in the Juno
 cycle, we learned that backwards-incompatible changes to the REST API would
 be a huge pain for clients and that we should avoid such changes as much as
 possible. If new APIs which are consistent within Nova only are inconsistent
 across OpenStack projects, we may need to change them again for
 OpenStack-wide consistency.

 To avoid such a situation, I think we need to define what a consistent
 REST API across projects looks like. According to Alex's blog, the topics
 might be:

  - Input/Output attribute names
  - Resource names
  - Status code

 The following are hints for making consistent APIs, drawn from the Nova v3
 API experience. I'd like to know whether they are the best choices for API
 consistency.

 (1) Input/Output attribute names
 (1.1) These names should be snake_case.
   eg: imageRef -> image_ref, flavorRef -> flavor_ref, hostId -> host_id
 (1.2) These names should contain extension names if they are provided in
 case of some extension loading.
   eg: security_groups -> os-security-groups:security_groups
   config_drive -> os-config-drive:config_drive


Do you mean that the os- prefix should be dropped? Or that it should be
maintained and added as needed?


 (1.3) Extension names should consist of hyphens and lowercase characters.
   eg: OS-EXT-AZ:availability_zone ->
 os-extended-availability-zone:availability_zone
   OS-EXT-STS:task_state -> os-extended-status:task_state


Yes, I don't like the shoutyness of the ALL CAPS.


 (1.4) Extension names should contain the prefix os- if the extension is
 not core.
   eg: rxtx_factor -> os-flavor-rxtx:rxtx_factor
   os-flavor-access:is_public -> flavor-access:is_public (flavor-access
 extension became core)


Do we have a list of core yet?


 (1.5) The number of top-level attributes should be one.
   eg: create a server API with scheduler hints
 -- v2 API input attribute sample
 ---
   {
       "server": {
           "imageRef": "e5468cc9-3e91-4449-8c4f-e4203c71e365",
           [..]
       },
       "OS-SCH-HNT:scheduler_hints": {
           "same_host": "5a3dec46-a6e1-4c4d-93c0-8543f5ffe196"
       }
   }
 -- v3 API input attribute sample
 ---
   {
       "server": {
           "image_ref": "e5468cc9-3e91-4449-8c4f-e4203c71e365",
           [..]
           "os-scheduler-hints:scheduler_hints": {
               "same_host": "5a3dec46-a6e1-4c4d-93c0-8543f5ffe196"
           }
       }
   }

 (2) Resource names
 (2.1) Resource names should consist of hyphens and lowercase characters.
   eg: /os-instance_usage_audit_log -> /os-instance-usage-audit-log
 (2.2) Resource names should contain the prefix os- if the extension is
 not core.
   eg: /servers/diagnostics -> /servers/os-server-diagnostics
   /os-flavor-access -> /flavor-access (flavor-access extension became
 core)
 (2.3) Action names should be snake_case.
   eg: os-getConsoleOutput -> get_console_output
   addTenantAccess -> add_tenant_access, removeTenantAccess ->
 remove_tenant_access


Yes to all of these.
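For illustration, the camelCase-to-snake_case renaming implied by rules (1.1) and (2.3) can be done mechanically; here is a hypothetical helper (not Nova code, just a sketch of the convention):

```python
import re

def to_snake_case(name):
    """Convert a legacy camelCase attribute/action name to snake_case."""
    # Insert an underscore before any capital letter that follows a
    # lowercase letter or digit, then lowercase the whole name.
    return re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', name).lower()

for legacy in ("imageRef", "flavorRef", "hostId",
               "addTenantAccess", "removeTenantAccess"):
    print(legacy, "->", to_snake_case(legacy))
```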


 (3) Status code
 (3.1) Return 201(Created) if a resource creation/updating finishes before
 returning a response.
   eg: create a keypair API: 200 -> 201
   create an agent API: 200 -> 201
   create an aggregate API: 200 -> 201
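A minimal sketch of rule (3.1): because the resource creation completes before the response is returned, 201 Created is the right status rather than the legacy 200 (a hypothetical handler, not actual Nova code):

```python
import json
from http import HTTPStatus

def create_keypair(body):
    """Hypothetical create handler following rule (3.1)."""
    keypair = {"name": body["name"], "public_key": "ssh-rsa AAAA..."}
    # The resource is fully created before the response is returned,
    # so 201 Created applies instead of a generic 200 OK.
    return HTTPStatus.CREATED, json.dumps({"keypair": keypair})

status, payload = create_keypair({"name": "demo"})
print(int(status))  # 201
```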


Do you mean a 200 

Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-22 Thread Dolph Mathews
On Sun, Sep 21, 2014 at 3:58 PM, Kevin Benton blak...@gmail.com wrote:

 So based on those guidelines there would be a problem with the IBM patch
 because it's storing the tenant name in a backend controller, right?

It would need to be regarded as an expiring cache if Neutron chose to go
that route. I'd wholly recommend against it though, because I don't see a
strong use case to use names instead of IDs here (correct me if I'm wrong).


 On Sep 21, 2014 12:18 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 Querying keystone for tenant names is certainly fair game.

 Keystone should be considered the only source of truth for tenant names
 though, as they are mutable and not globally unique on their own, so other
 services should not stash any names from keystone into long term
 persistence (users, projects, domains, groups, etc-- roles might be an odd
 outlier worth a separate conversation if anyone is interested).

 Store IDs where necessary, and use IDs on the wire where possible though,
 as they are immutable.

 On Sat, Sep 20, 2014 at 7:46 PM, Kevin Benton blak...@gmail.com wrote:

 Hello all,

 A patch has come up to query keystone for tenant names in the IBM
 plugin.[1] As I understand it, this was one of the reasons another
 mechanism driver was reverted.[2] Can we get some clarity on the level
 of integration with Keystone that is permitted?

 Thanks

 1. https://review.openstack.org/#/c/122382
 2. https://review.openstack.org/#/c/118456

 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev









Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Pipes

On 09/22/2014 07:37 AM, Sandy Walsh wrote:

For some operations like resize, migrate, etc., the .start/.end is
good for auditing and billing. Although, we could do a better job by
 simply managing the launched_at, deleted_at times better.


I'm sure I'll get no real disagreement from you or Andrew Laski on
this... but the above is one of the reasons we really should be moving
with pace towards a fully task-driven system, both internally in Nova 
and externally via the Compute REST API. This would allow us to get rid 
of the launched_at, deleted_at, created_at, updated_at, etc fields in 
many of the database tables and instead have a data store for tasks 
(RDBMS or otherwise) that had start and end times in the task record, 
along with codified task types.
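A hypothetical sketch of such a task record, with codified task types and start/end times in the record itself (all names here are illustrative, not the proposed schema):

```python
import datetime
import enum
import uuid
from dataclasses import dataclass, field
from typing import Optional

class TaskType(enum.Enum):
    """Codified task types replacing scattered *_at columns."""
    LAUNCH = "launch"
    RESIZE = "resize"
    MIGRATE = "migrate"
    DELETE = "delete"

@dataclass
class TaskRecord:
    task_type: TaskType
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    started_at: Optional[datetime.datetime] = None
    ended_at: Optional[datetime.datetime] = None

    def start(self):
        self.started_at = datetime.datetime.now(datetime.timezone.utc)

    def end(self):
        self.ended_at = datetime.datetime.now(datetime.timezone.utc)

task = TaskRecord(TaskType.RESIZE)
task.start()
# ... perform the resize ...
task.end()
print(task.ended_at >= task.started_at)  # True
```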


You can see what I had in mind for the public-facing side of this here:

http://docs.oscomputevnext.apiary.io/#schema

See the schema for server task and server task item.

Best,
-jay



Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-22 Thread Jay S. Bryant


On 09/21/2014 07:37 PM, Matt Riedemann wrote:



On 9/19/2014 8:13 AM, Sean Dague wrote:

I've spent the better part of the last 2 weeks in the Nova bug tracker
to try to turn it into something that doesn't cause people to run away
screaming. I don't remember exactly where we started at open bug count 2
weeks ago (it was north of 1400, with > 200 bugs in new, but it might
have been north of 1600), but as of this email we're at < 1000 open bugs
(I'm counting Fix Committed as closed, even though LP does not), and ~0
new bugs (depending on the time of the day).

== Philosophy in Triaging ==

I'm going to lay out the philosophy of triaging I've had, because this
may also set the tone going forward.

A bug tracker is a tool to help us make a better release. It does not
exist for its own good, it exists to help. Which means when evaluating
what stays in and what leaves we need to evaluate if any particular
artifact will help us make a better release. But also more importantly
realize that there is a cost for carrying every artifact in the tracker.
Resolving duplicates gets non-linearly harder as the number of artifacts
goes up. Triaging gets non-linearly harder as the number of artifacts goes up.

With this I was being somewhat pragmatic about closing bugs. An old bug
that is just a stacktrace is typically not useful. An old bug that is a
vague sentence that we should refactor a particular module (with no
specifics on the details) is not useful. A bug reported against a very
old version of OpenStack where the code has changed a lot in the
relevant area, and there aren't responses from the author, is not
useful. Not useful bugs just add debt, and we should get rid of them.
That makes the chance of pulling a random bug off the tracker something
that you could actually look at fixing, instead of mostly just 
stalling out.


So I closed a lot of stuff as Invalid / Opinion that fell into those 
camps.


== Keeping New Bugs at close to 0 ==

After driving the bugs in the New state down to zero last week, I found
it's actually pretty easy to keep it at 0.

We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
aren't actually a bug, and can be closed immediately. ~30% look like a
bug, but don't have anywhere near enough information in them, and
flipping them to incomplete with questions quickly means we have a real
chance of getting the right info. ~10% are fixable in < 30 minutes worth
of work. And the rest are real bugs, that seem to have enough to dive
into it, and can be triaged into Confirmed, set a priority, and add the
appropriate tags for the area.

But, more importantly, this means we can filter bug quality on the way
in. And we can also encourage bug reporters that are giving us good
stuff, or even easy stuff, as we respond quickly.

Recommendation #1: we adopt a 0 new bugs policy to keep this from
getting away from us in the future.

== Our worst bug reporters are often core reviewers ==

I'm going to pick on Dan Prince here, mostly because I have a recent
concrete example, however in triaging the bug queue much of the core
team is to blame (including myself).

https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
was set incomplete and no response. I'm almost 100% sure it's a dupe of
the multiprocess bug we've been tracking down but it's so terse that you
can't get to the bottom of it.

There were a ton of 2012 nova bugs that were basically post-it notes.
"Oh, we should refactor this function." Full stop. While those are fine
for personal tracking, their value goes to zero probably 3 months after
they are filed, especially if the reporter stops working on the issue at
hand. Nova has plenty of "wouldn't it be great if we..." ideas. I'm not
convinced using bugs for those is useful unless we go and close them out
aggressively if they stall.

Also, if Nova core can't file a good bug, it's hard to set the example
for others in our community.

Recommendation #2: hey, Nova core, let's be better about filing the kinds
of bugs we want to see! mkay!

Recommendation #3: Let's create a tag for personal work items or
something for these class of TODOs people are leaving themselves that
make them a ton easier to cull later when they stall and no one else has
enough context to pick them up.

== Tags ==

The aggressive tagging that Tracy brought into the project has been
awesome. It definitely helps slice out into better functional areas.
Here is the top of our current official tag list (and bug count):

95 compute
83 libvirt
74 api
68 vmware
67 network
41 db
40 testing
40 volumes
36 ec2
35 icehouse-backport-potential
32 low-hanging-fruit
31 xenserver
25 ironic
23 hyper-v
16 cells
14 scheduler
12 baremetal
9 ceph
9 security
8 oslo
...

So, good stuff. However I think we probably want to take a further step
and attempt to get champions for tags. So that tag owners would ensure
their bug list looks sane, and actually spend some time fixing them.
It's pretty clear, for instance, that the ec2 bugs 

Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Jay Pipes

Just want to say thanks for your leadership over the years, Dolph.

All the best,
-jay

On 09/22/2014 10:47 AM, Dolph Mathews wrote:

Dearest stackers and [key]stoners,

With the PTL candidacies officially open for Kilo, I'm going to take the
opportunity to announce that I won't be running again for the position.

I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno.
There was a perceived increase in stability [citation needed], which was
one of my foremost goals. We primarily achieved that by improving the
communication between developers which allowed developers to share their
intent early and often (by way of API designs and specs). As a result,
we had a lot more collaboration and a great working knowledge in the
community when it came time for bug fixes. I also think we raised the
bar for user experience, especially by way of reasonable defaults,
strong documentation, and effective error messages. I'm consistently
told that we have the best out-of-the-box experience of any OpenStack
service. Well done!

I'll still be involved in OpenStack, and I'm super confident in our
incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
helping other developers be as productive as possible, and intend to
continue doing exactly that.

Keep hacking responsibly,

-Dolph







[openstack-dev] [Heat] Question regarding Stack updates and templates

2014-09-22 Thread Anant Patil
Hi,

In convergence, we discuss having concurrent updates to a stack. I
wanted to know if it is safe to assume that an update will be a
superset of its previous updates. Understanding this is critical to
arriving at an implementation of concurrent stack operations.

Assuming that an admin will have a VCS set up and will issue requests by
checking out the template and modifying it, I could see that the updates
will be incremental and not discrete. Is this assumption correct? When
an update is issued before a previous update is complete, would its
template be based on the template of the previously issued
incomplete update or the last completed one?

- Anant



[openstack-dev] [all] Deprecating exceptions

2014-09-22 Thread Radomir Dopieralski
Horizon's tests were recently broken by a change in the Ceilometer
client that removed a deprecated exception. The exception was deprecated
for a while already, but as it often is, nobody did the
work of removing all references to it from Horizon before it was too
late. Sure, in theory we should all be reading the release notes of
all versions of all dependencies and acting upon things like this.
In practice, if there is no warning generated in the unit tests,
nobody is going to do anything about it.

So I sat down and started thinking about how to best generate a warning
when someone is trying to catch a deprecated exception. I came up with
this code:

http://paste.openstack.org/show/114170/

It's not pretty -- it is based on the fact that the `except` statement
has to do a subclass check on the exceptions it is catching. It requires
a metaclass and a class decorator to work, and it uses a global
variable. I'm sure it would be possible to do it in a little bit cleaner
way. But at least it gives us the warning (sure, only if an exception is
actually being thrown, but that's a test coverage problem).

I propose to do exception deprecating in this way in the future.
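Since the paste may not survive, here is a minimal illustrative sketch of the mechanism described (not the original paste). One caveat: whether an `except` clause actually goes through `__subclasscheck__` depends on the interpreter version (CPython 3 uses a fast subtype check for exception matching), so the warning is only guaranteed when `issubclass()` runs explicitly:

```python
import warnings

class DeprecatedExceptionMeta(type):
    """Warn when a deprecated exception class is used in a subclass check.

    Catching an exception conceptually performs a subclass check, which
    is what we hook via __subclasscheck__.
    """
    def __subclasscheck__(cls, subclass):
        # Only warn for classes explicitly marked, not their children.
        if cls.__dict__.get("_deprecated", False):
            warnings.warn("%s is deprecated" % cls.__name__,
                          DeprecationWarning, stacklevel=2)
        return super().__subclasscheck__(subclass)

def deprecated(cls):
    """Class decorator marking an exception class as deprecated."""
    cls._deprecated = True
    return cls

class BaseError(Exception, metaclass=DeprecatedExceptionMeta):
    pass

@deprecated
class OldError(BaseError):
    """Deprecated; catch NewError instead."""

class NewError(OldError):
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    issubclass(NewError, OldError)  # the check `except OldError:` implies
print([str(w.message) for w in caught])  # ['OldError is deprecated']
```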
-- 
Radomir Dopieralski



[openstack-dev] [Heat] Convergence: Backing up template instead of stack

2014-09-22 Thread Anant Patil
Hi,

One of the steps in the direction of convergence is to enable Heat
engine to handle concurrent stack operations. The main convergence spec
talks about it. Resource versioning would be needed to handle concurrent
stack operations.

As of now, while updating a stack, a backup stack is created with a new
ID and only one update runs at a time. If we keep the raw_template
linked to its previous completed template, i.e. back up the template
instead of the stack, we avoid having a backup stack.

Since there won't be a backup stack and only one stack_id to be dealt
with, resources and their versions can be queried for a stack with that
single ID. The idea is to identify resources for a stack by using stack
id and version. Please let me know your thoughts.

- Anant



Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Brad Topol
+1000!!!  Dolph, you did a fantastic job leading the Keystone project. 
Many thanks for all of your efforts!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Jay Pipes jaypi...@gmail.com
To: openstack-dev@lists.openstack.org, 
Date:   09/22/2014 11:03 AM
Subject:Re: [openstack-dev] [keystone] Stepping down as PTL



Just want to say thanks for your leadership over the years, Dolph.

All the best,
-jay

On 09/22/2014 10:47 AM, Dolph Mathews wrote:
 Dearest stackers and [key]stoners,

 With the PTL candidacies officially open for Kilo, I'm going to take the
 opportunity to announce that I won't be running again for the position.

 I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno.
 There was a perceived increase in stability [citation needed], which was
 one of my foremost goals. We primarily achieved that by improving the
 communication between developers which allowed developers to share their
 intent early and often (by way of API designs and specs). As a result,
 we had a lot more collaboration and a great working knowledge in the
 community when it came time for bug fixes. I also think we raised the
 bar for user experience, especially by way of reasonable defaults,
 strong documentation, and effective error messages. I'm consistently
 told that we have the best out-of-the-box experience of any OpenStack
 service. Well done!

 I'll still be involved in OpenStack, and I'm super confident in our
 incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
 helping other developers be as productive as possible, and intend to
 continue doing exactly that.

 Keep hacking responsibly,

 -Dolph








Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Lau
Hi Jay,

There was actually a discussion about filing a blueprint for object
notification: http://markmail.org/message/ztehzx2wc6dacnk2

But for patch https://review.openstack.org/#/c/107954/ , I'd like to keep
it as it is now, to satisfy the requirement of server group notifications
for 3rd party clients.

Thanks.

2014-09-22 22:41 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On 09/22/2014 07:24 AM, Daniel P. Berrange wrote:

 On Mon, Sep 22, 2014 at 11:03:02AM +, Day, Phil wrote:

 Hi Folks,

 I'd like to get some opinions on the use of pairs of notification
 messages for simple events.   I get that for complex operations on
 an instance (create, rebuild, etc) a start and end message are useful
 to help instrument progress and how long the operations took. However
 we also use this pattern for things like aggregate creation, which is
 just a single DB operation - and it strikes me as kind of overkill and
 probably not all that useful to any external system compared to a
  single .create event after the DB operation.


 A start + end pair is not solely useful for timing, but also potentially
 detecting if it completed successfully. eg if you receive an end event
 notification you know it has completed. That said, if this is a use case
 we want to target, then ideally we'd have a third notification for this
  failure case, so consumers don't have to wait & timeout to detect error.

  There is a change up for review to add notifications for service groups
 which is following this pattern (https://review.openstack.org/
 #/c/107954/)
 - the author isn't doing  anything wrong in that there just following
 that
 pattern, but it made me wonder if we shouldn't have some better guidance
 on when to use a single notification rather that a .start/.end pair.

 Does anyone else have thoughts on this , or know of external systems that
 would break if we restricted .start and .end usage to long-lived instance
 operations ?


 I think we should aim to /always/ have 3 notifications using a pattern of

try:
   ...notify start...

   ...do the work...

   ...notify end...
except:
   ...notify abort...
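That three-notification pattern can be packaged once as a context manager so individual operations cannot forget the abort case. A hypothetical sketch (not oslo.messaging code):

```python
import contextlib

@contextlib.contextmanager
def notify_span(notify, event):
    """Emit .start/.end around a block, or .abort if it raises."""
    notify(event + ".start")
    try:
        yield
    except Exception:
        notify(event + ".abort")
        raise
    else:
        notify(event + ".end")

sent = []
with notify_span(sent.append, "aggregate.create"):
    pass  # ... do the work ...
print(sent)  # ['aggregate.create.start', 'aggregate.create.end']
```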


 Precisely my viewpoint as well. Unless we standardize on the above, our
 notifications are less than useful, since they will be open to
 interpretation by the consumer as to what precisely they mean (and the
 consumer will need to go looking into the source code to determine when an
 event actually occurred...)

 Smells like a blueprint to me. Anyone have objections to me writing one up
 for Kilo?

 Best,
 -jay






-- 
Thanks,

Jay


Re: [openstack-dev] [all] Deprecating exceptions

2014-09-22 Thread Ihar Hrachyshka

On 22/09/14 17:07, Radomir Dopieralski wrote:
 Horizon's tests were recently broken by a change in the Ceilometer 
 client that removed a deprecated exception. The exception was
 deprecated for a while already, but as it often is, nobody did the 
 work of removing all references to it from Horizon before it was
 too late. Sure, in theory we should all be reading the release
 notes of all versions of all dependencies and acting upon things
 like this. In practice, if there is no warning generated in the
 unit tests, nobody is going to do anything about it.
 
 So I sat down and started thinking about how to best generate a
 warning when someone is trying to catch a deprecated exception. I
 came up with this code:
 
 http://paste.openstack.org/show/114170/
 
 It's not pretty -- it is based on the fact that the `except`
 statement has to do a subclass check on the exceptions it is
 catching. It requires a metaclass and a class decorator to work,
 and it uses a global variable. I'm sure it would be possible to do
 it in a little bit cleaner way. But at least it gives us the
 warning (sure, only if an exception is actually being thrown, but
 that's test coverage problem).
 
 I propose to do exception deprecating in this way in the future.
 

Aren't clients supposed to be backwards compatible? Isn't it the exact
reason why we don't maintain stable branches for client modules?

So, another reason to actually start maintaining those stable branches
for clients. We already do it in RDO (Red Hat backed Openstack
distribution) anyway.

/Ihar



Re: [openstack-dev] [all] Deprecating exceptions

2014-09-22 Thread Sean Dague
On 09/22/2014 11:24 AM, Ihar Hrachyshka wrote:
 On 22/09/14 17:07, Radomir Dopieralski wrote:
 Horizon's tests were recently broken by a change in the Ceilometer 
 client that removed a deprecated exception. The exception was
 deprecated for a while already, but as it often is, nobody did the 
 work of removing all references to it from Horizon before it was
 too late. Sure, in theory we should all be reading the release
 notes of all versions of all dependencies and acting upon things
 like this. In practice, if there is no warning generated in the
 unit tests, nobody is going to do anything about it.
 
 So I sat down and started thinking about how to best generate a
 warning when someone is trying to catch a deprecated exception. I
 came up with this code:
 
 http://paste.openstack.org/show/114170/
 
 It's not pretty -- it is based on the fact that the `except`
 statement has to do a subclass check on the exceptions it is
 catching. It requires a metaclass and a class decorator to work,
 and it uses a global variable. I'm sure it would be possible to do
 it in a little bit cleaner way. But at least it gives us the
 warning (sure, only if an exception is actually being thrown, but
 that's test coverage problem).
 
 I propose to do exception deprecating in this way in the future.
 
 
 Aren't clients supposed to be backwards compatible? Isn't it the exact
 reason why we don't maintain stable branches for client modules?
 
 So, another reason to actually start maintaining those stable branches
 for clients. We already do it in RDO (Red Hat backed Openstack
 distribution) anyway.

I think the real question is how much warning was given on the removal
of the exception. Was there a release out for 6 months with the
deprecation? That's about our normal time for delete threshold.
Honestly, I have no idea.

If ceilometer client did the right thing and gave enough deprecation
time before the remove, it's an issue about why horizon didn't respond
to the deprecation sooner.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [Keystone][Oslo] Federation and Policy

2014-09-22 Thread Doug Hellmann

On Sep 21, 2014, at 9:59 PM, Adam Young ayo...@redhat.com wrote:

 On 09/19/2014 01:43 PM, Doug Hellmann wrote:
 On Sep 19, 2014, at 6:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 
 On 18/09/2014 22:14, Doug Hellmann wrote:
 On Sep 18, 2014, at 4:34 PM, David Chadwick d.w.chadw...@kent.ac.uk
 wrote:
 
 
 On 18/09/2014 21:04, Doug Hellmann wrote:
 On Sep 18, 2014, at 12:36 PM, David Chadwick
 d.w.chadw...@kent.ac.uk wrote:
 
 Our recent work on federation suggests we need an improvement
 to the way the policy engine works. My understanding is that
 most functions are protected by the policy engine, but some are
 not. The latter functions are publicly accessible. But there is
 no way in the policy engine to specify public access to a
 function and there ought to be. This will allow an
 administrator to configure the policy for a function to range
 from very lax (publicly accessible) to very strict (admin
  only). A policy of "" means that any authenticated user can
  access the function. But there is no way in the policy to
 specify that an unauthenticated user (i.e. public) has access
 to a function.
 
 We have already identified one function (get trusted IdPs
 identity:list_identity_providers) that needs to be publicly
 accessible in order for users to choose which IdP to use for
 federated login. However some organisations may not wish to
 make this API call publicly accessible, whilst others may wish
 to restrict it to Horizon only etc. This indicates that that
 the policy needs to be set by the administrator, and not by
 changes to the code (i.e. to either call the policy engine or
 not, or to have two different API calls).
 I don’t know what list_identity_providers does.
 it lists the IDPs that Keystone trusts to authenticate users
 
 Can you give a little more detail about why some providers would
 want to make it not public
 I am not convinced that many cloud services will want to keep this
 list secret. Today if you do federated login, the public web page
 of the service provider typically lists the logos or names of its
 trusted IDPs (usually Facebook and Google). Also all academic
 federations publish their full lists of IdPs. But it has been
 suggested that some commercial cloud providers may not wish to
 publicise this list and instead require the end users to know which
 IDP they are going to use for federated login. In which case the
 list should not be public.
 
 
 if we plan to make it public by default? If we think there’s a
 security issue, shouldn’t we just protect it?
 
 Its more a commercial in confidence issue (I dont want the world to
 know who I have agreements with) rather than a security issue,
 since the IDPs are typically already well known and already protect
 themselves against attacks from hackers on the Internet.
 OK. The weak “someone might want to” requirement aside, and again
 showing my ignorance of implementation details, do we truly have to
 add a new feature to disable the policy check? Is there no way to
 have an “always allow” policy using the current syntax?
 You tell me. If there is, then problem solved. If not, then my request
 still stands
 From looking at the code, it appears that something like True:"True" (or 
 possibly True:True) would always pass, but I haven't tested that.
 
 Nope;  it errors out before it ever gets to the policy check, when it unpacks 
 the token.

Which “token” do you mean, something in the policy parser or the keystone 
token? Are you saying that the syntax I suggested isn’t valid and so can’t be 
parsed, or that some other part of the code is throwing an error before the 
policy module is ever being invoked?

 
 Doug
 
 regards
 
 David
 
 Doug
 
 regards
 
 David
 
 If we can invent some policy syntax that indicates public
  access, e.g. a reserved keyword of "public", then Keystone can
 always call the policy file for every function and there would
 be no need to differentiate between protected APIs and
 non-protected APIs as all would be protected to a greater or
 lesser extent according to the administrator's policy.
 
 Comments please
 It seems reasonable to have a way to mark a function as fully
 public, if we expect to really have those kinds of functions.
 
 Doug
 
 regards
 
 David
 
 
 
 
 
 
 
 
 
 
 
 
 

Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Sandy Walsh


From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, September 22, 2014 11:51 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] - do we need .start and .end notifications 
in all cases ?

On 09/22/2014 07:37 AM, Sandy Walsh wrote:
 For some operations like resize, migrate, etc., the .start/.end is
 good for auditing and billing. Although, we could do a better job by
  simply managing the launched_at, deleted_at times better.

I'm sure I'll get no real disagreement from you or Andrew Laski on
this... but the above is one of the reasons we really should be moving
with pace towards a fully task-driven system, both internally in Nova
and externally via the Compute REST API. This would allow us to get rid
of the launched_at, deleted_at, created_at, updated_at, etc fields in
many of the database tables and instead have a data store for tasks
(RDBMS or otherwise) that had start and end times in the task record,
along with codified task types.

You can see what I had in mind for the public-facing side of this here:

http://docs.oscomputevnext.apiary.io/#schema

See the schema for server task and server task item.

Totally agree. Though I would go one step further and say the Task state
transitions should be managed by notifications.

Then oslo.messaging is reduced to the simple notifications interface (no RPC).
Notifications follow proper retry semantics and control Tasks. 
Tasks themselves can restart/retry/etc.

(I'm sure I'm singing to the choir)

-S


Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Zane Bitter

On 22/09/14 10:40, Geoff O'Callaghan wrote:

So
On 22/09/2014 10:01 PM, Zane Bitter zbit...@redhat.com wrote:


On 20/09/14 04:17, Geoff O'Callaghan wrote:


Hi all,
I'm just trying to understand the messaging strategy in openstack. It
seems we have at least 2 messaging layers.

Oslo.messaging and zaqar,  Can someone explain to me why there are two?

Is there a plan to consolidate?



I'm trying to understand the database strategy in OpenStack. It seems we

have at least 2 database layers - sqlalchemy and Trove.


Can anyone explain to me why there are two?


Is there a plan to consolidate?
</sarcasm>



So the answer is "because we can" ;)  Not a great answer, but an answer
nonetheless.  :)


No, the answer is that they're completely different things :)


That being said, I'm not sure why a well-constructed Zaqar with an RPC
interface couldn't meet the requirements of oslo.messaging and much
more. It seems I need to dig some more.


Usually when people talk about consolidation they mean "why isn't
Zaqar just a front-end to oslo.messaging?". If you mean that there
should be a Zaqar back-end to oslo.messaging (alongside the existing 
AMQP and ZeroMQ back-ends) then that is a stronger possibility. (In my 
increasingly tortured analogy I guess this would be equivalent to using 
Trove in the undercloud to provision the RDBMS for other OpenStack 
services, which is a perfectly respectable idea).


That said, I'm not sure if the semantics fit. Most uses of 
oslo.messaging AFAIK are RPC-style calls (I'm not sure what the 
percentage breakdown of call vs. cast is, but I believe it's heavily 
weighted in favour of the former). So basically it's mostly used for 
synchronous stuff. To me, the big selling point of Zaqar is (or at least 
IMHO should be - see discussion in other thread) that it is end-to-end 
reliable even for completely asynchronous delivery. If that's not a 
requirement for OpenStack services then the stuff Zaqar does to achieve 
it (writing each message to multiple disks in a cluster before 
delivering it) is probably wasted overhead for this particular application.


tl;dr it's possible but probably inefficient due to differing requirements.

cheers,
Zane.



[openstack-dev] [all] In defence of faking

2014-09-22 Thread Matthew Booth
If you missed the inaugural OpenStack Bootstrapping Hour, it's here:
http://youtu.be/jCWtLoSEfmw . I think this is a fantastic idea and big
thanks to Sean, Jay and Dan for doing this. I liked the format, the
informal style and the content. Unfortunately I missed the live event,
but I can confirm that watching it after the event worked just fine
(thanks for reading out live questions for the stream!).

I'd like to make a brief defence of faking, which perhaps predictably in
a talk about mock took a bit of a bashing.

Firstly, when not to fake. As Jay pointed out, faking adds an element of
complexity to a test, so if you can achieve what you need to with a
simple mock then you should. But, as the quote goes, you should make
things as simple as possible, but not simpler.

Here are some simple situations where I believe fake is the better solution:

* Mock assertions aren't sufficiently expressive on their own

For example, imagine your code is calling:

def complex_set(key, value)

You want to assert that on completion of your unit, the final value
assigned to key was value. This is difficult to capture with mock
without risking false assertion failures if complex_set sets other keys
which you aren't interested in, or if key's value is set multiple
times, but you're only interested in the last one. A little fake
function which stores the final value assigned to key does this simply
and accurately without adding a great deal of complexity. e.g.

mykey = [None]
def fake_complex_set(key, value):
    if key == 'FOO':
        mykey[0] = value

with mock.patch.object(unit, 'complex_set', side_effect=fake_complex_set):
    run_test()
self.assertEqual('expected', mykey[0])

Summary: fake method is a custom mock assertion.

* A simple fake function is simpler than a complex mock dance

For example, you're mocking 2 functions: start_frobnicating(key) and
wait_for_frobnication(key). They can potentially be called overlapping
with different keys. The desired mock return value of one is dependent
on arguments passed to the other. This is better mocked with a couple of
little fake functions and some external state, or you risk introducing
artificial constraints on the code under test.
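A sketch of that "two fakes plus shared external state" pattern follows. The frobnication names are the hypothetical ones from above, and `Frobnicator` with its `run()` method is invented purely so there is something to patch:

```python
from unittest import mock

class Frobnicator:
    """Hypothetical unit under test; stands in for real driver code."""
    def start_frobnicating(self, key):
        raise NotImplementedError  # the real call would hit a backend
    def wait_for_frobnication(self, key):
        raise NotImplementedError

    def run(self, keys):
        for key in keys:
            self.start_frobnicating(key)
        return [self.wait_for_frobnication(key) for key in keys]

# Shared state ties the two fakes together without artificially
# constraining the order or interleaving of calls per key.
started = set()

def fake_start(key):
    started.add(key)

def fake_wait(key):
    assert key in started, '%s was never started' % key
    return 'frobnicated-%s' % key

unit = Frobnicator()
with mock.patch.object(unit, 'start_frobnicating', side_effect=fake_start), \
     mock.patch.object(unit, 'wait_for_frobnication', side_effect=fake_wait):
    results = unit.run(['a', 'b'])
```

Note that the fakes only constrain what the code under test actually promises (a key must be started before it is waited on), not the exact call sequence, which is the point being made above.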

Jay pointed out that faking methods creates more opportunities for
errors. For this reason, in the above cases, you want to keep your fake
function as simple as possible (but no simpler). However, there's a big
one: the fake driver!

This may make less sense outside of driver code, although faking the
image service came up in the talk. Without looking at the detail, that
doesn't necessarily sound awful to me, depending on context. In the
driver, though, the ultimate measure of correctness isn't a Nova call:
it's the effect produced on the state of an external system.

For the VMware driver we have nova.tests.virt.vmwareapi.fake. This is a
lot of code: 1599 lines as of writing. It contains bugs, and it contains
inaccuracies, and both of these can mess up tests. However:

* It's vastly simpler than the system it models (vSphere server)
* It's common code, so gets fixed over time
* It allows tests to run almost all driver code unmodified

So, for example, it knows that you can't move a file before you create
it. It knows that creating a VM creates a bunch of different files, and
where they're created. It knows what objects are created by the server,
and what attributes they have. And what attributes they don't have. If
you do an object lookup, it knows which objects to return, and what
their properties are.

All of this knowledge is vital to testing, and if it wasn't contained in
the fake driver, or something like it[1], would have to be replicated
across all tests which require it. i.e. It may be 1599 lines of
complexity, but it's all complexity which has to live somewhere anyway.

Incidentally, this is fresh in my mind because of
https://review.openstack.org/#/c/122760/ . Note the diff stat: +70,
-161, and the rewrite has better coverage, too :) It executes the
function under test, it checks that it produces the correct outcome, and
other than that it doesn't care how the function is implemented.

TL;DR

* Bootstrap hour is awesome
* Don't fake if you don't have to
* However, there are situations where it's a good choice

Thanks for reading :)

Matt

[1] There are other ways to skin this cat, but ultimately if you aren't
actually spinning up a vSphere server, you're modelling it somehow.
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



Re: [openstack-dev] [all] Deprecating exceptions

2014-09-22 Thread Akihiro Motoki
Regarding the ceilometer client, we on the Horizon team at least were not
aware of its deprecation.
It is not easy for the Horizon team to stay aware of such changes/warnings
in all integrated projects.
We might need liaisons from the integrated projects, as other programs do.

Client backward compatibility raises another question:
how long does a client need to keep backward compatibility?
OpenStack supports the two past integrated releases.
Is it enough for a client to support the past supported releases and the
current development version?
It seems clearer criteria are required, but I cannot find a good
pointer on this topic.

Akihiro


On Tue, Sep 23, 2014 at 12:32 AM, Sean Dague s...@dague.net wrote:
 On 09/22/2014 11:24 AM, Ihar Hrachyshka wrote:
 On 22/09/14 17:07, Radomir Dopieralski wrote:
 Horizon's tests were recently broken by a change in the Ceilometer
 client that removed a deprecated exception. The exception was
 deprecated for a while already, but as it often is, nobody did the
 work of removing all references to it from Horizon before it was
 too late. Sure, in theory we should all be reading the release
 notes of all versions of all dependencies and acting upon things
 like this. In practice, if there is no warning generated in the
 unit tests, nobody is going to do anything about it.

 So I sat down and started thinking about how to best generate a
 warning when someone is trying to catch a deprecated exception. I
 came up with this code:

 http://paste.openstack.org/show/114170/

 It's not pretty -- it is based on the fact that the `except`
 statement has to do a subclass check on the exceptions it is
 catching. It requires a metaclass and a class decorator to work,
 and it uses a global variable. I'm sure it would be possible to do
 it in a little bit cleaner way. But at least it gives us the
 warning (sure, only if an exception is actually being thrown, but
 that's test coverage problem).

 I propose to do exception deprecating in this way in the future.
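For comparison with the metaclass trick in the paste, which warns when the deprecated exception is *caught*, a simpler but less powerful variant warns when it is *raised*. It would not have flagged Horizon's `except` clause, but it needs no metaclass and keeps old raise sites working. The class names here are invented for illustration, not from any real client:

```python
import warnings

class ComputeError(Exception):
    """The replacement exception."""

class InstanceError(ComputeError):
    """Deprecated alias: subclassing the new exception keeps old
    `raise InstanceError` sites catchable via `except ComputeError`."""
    def __init__(self, *args, **kwargs):
        warnings.warn('InstanceError is deprecated; use ComputeError',
                      DeprecationWarning, stacklevel=2)
        super().__init__(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    try:
        raise InstanceError('boom')
    except ComputeError:
        pass  # old raises are still caught by the new name
```

Because the warning fires on instantiation, consumers who only *catch* the old name get no signal, which is exactly the gap the subclass-check approach above tries to close.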


 Aren't clients supposed to be backwards compatible? Isn't it the exact
 reason why we don't maintain stable branches for client modules?

 So, another reason to actually start maintaining those stable branches
 for clients. We already do it in RDO (Red Hat backed Openstack
 distribution) anyway.

 I think the real question is how much warning was given on the removal
 of the exception. Was there a release out for 6 months with the
 deprecation? That's about our normal time for delete threshold.
 Honestly, I have no idea.

 If ceilometer client did the right thing and gave enough deprecation
 time before the remove, it's an issue about why horizon didn't respond
 to the deprecation sooner.

 -Sean

 --
 Sean Dague
 http://dague.net






-- 
Akihiro Motoki amot...@gmail.com



Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Pipes

On 09/22/2014 11:42 AM, Sandy Walsh wrote:

Totally agree. Though I would go one step further and say the Task state
transitions should be managed by notifications.

Then oslo.messaging is reduced to the simple notifications interface (no RPC).
Notification follow proper retry semantics and control Tasks.
Tasks themselves can restart/retry/etc.

(I'm sure I'm singing to the choir)


Yep. I'm loving the music.

-jay



Re: [openstack-dev] [Nova] - do we need .start and .end notifications in all cases ?

2014-09-22 Thread Jay Pipes

On 09/22/2014 11:22 AM, Jay Lau wrote:

Hi Jay,

There was actually a discussion about file a blueprint for object
notification http://markmail.org/message/ztehzx2wc6dacnk2

But for patch https://review.openstack.org/#/c/107954/ , I'd like we
keep it as it is now to resolve the requirement of server group
notifications for 3rd party client.


Sure, no problem. I'll review the above shortly.

Best,
-jay



Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Gordon Sim

On 09/22/2014 03:40 PM, Geoff O'Callaghan wrote:

That being said I'm not sure why a well constructed zaqar with an rpc
interface couldn't meet the requirements of oslo.messsaging and much more.


What Zaqar is today and what it might become may of course be different 
things but as it stands today, Zaqar relies on polling which in my 
opinion is not a natural fit for RPC[1]. Though using an intermediary 
for routing/addressing can be of benefit, store and forward is not 
necessary and in my opinion even gets in the way[2].


Notifications on the other hand can benefit from store and forward and 
may be less latency sensitive, alleviating the polling concerns.


One of the use cases I've heard cited for Zaqar is as an inbox for 
recording certain sets of relevant events sent out by other open stack 
services. In my opinion using oslo.messaging's notification API on the 
openstack service side of this would seem - to me at least - quite 
sensible, even if the events are then stored in (or forwarded to) Zaqar 
and accessed by users through Zaqar's own protocol.




[1] The latency of an RPC call as perceived by the client is going to 
depend heavily on the polling frequency; to get lower latency, you'll 
need to poll more frequently both on the server and on the client.
However polling more frequently results in increased load even when no 
requests are being made.


[2] I am of the view that reliable RPC is best handled by replaying the 
request from the client when needed, rather than trying to make the 
request and reply messages durably recorded, replicated and reliably 
delivered. Doing so is more scalable and simpler. An end-to-end 
acknowledgement for the request (rather than a broker taking 
responsibility and acknowledging the request independent of delivery 
status) makes it easier to detect failures and trigger a resend.
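The client-driven alternative described in [2] can be sketched generically. `send` and `wait_for_reply` stand in for whatever transport sits underneath, and the stable message id is what lets the server deduplicate replayed requests; all names here are illustrative, not any real oslo.messaging API:

```python
import uuid

def call_with_replay(send, wait_for_reply, payload, attempts=3):
    """Reliable RPC via client-side replay: the request is resent on
    timeout instead of being durably stored by a broker."""
    msg_id = str(uuid.uuid4())  # stable across resends, for dedup
    for _ in range(attempts):
        send({'id': msg_id, 'body': payload})
        reply = wait_for_reply(msg_id)  # end-to-end acknowledgement
        if reply is not None:
            return reply
    raise TimeoutError('no reply for %s after %d attempts'
                       % (msg_id, attempts))

# A fake transport that loses the first request, to exercise the replay:
sent = []

def flaky_send(msg):
    sent.append(msg)

def fake_wait(msg_id):
    # Pretend the first copy was dropped; a reply only arrives once
    # the request has been (re)sent at least twice.
    return {'result': 'ok'} if len(sent) >= 2 else None

result = call_with_replay(flaky_send, fake_wait, {'op': 'resize'})
```

Reliability comes from the resend loop plus the end-to-end reply, so no component between client and server needs to record the message durably.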





Re: [openstack-dev] [all] In defence of faking

2014-09-22 Thread Monty Taylor
On 09/22/2014 08:58 AM, Matthew Booth wrote:
 If you missed the inaugural OpenStack Bootstrapping Hour, it's here:
 http://youtu.be/jCWtLoSEfmw . I think this is a fantastic idea and big
 thanks to Sean, Jay and Dan for doing this. I liked the format, the
 informal style and the content. Unfortunately I missed the live event,
 but I can confirm that watching it after the event worked just fine
 (thanks for reading out live questions for the stream!).
 
 I'd like to make a brief defence of faking, which perhaps predictably in
 a talk about mock took a bit of a bashing.
 
 Firstly, when not to fake. As Jay pointed out, faking adds an element of
 complexity to a test, so if you can achieve what you need to with a
 simple mock then you should. But, as the quote goes, you should make
 things as simple as possible, but not simpler.
 
 Here are some simple situations where I believe fake is the better solution:
 
 * Mock assertions aren't sufficiently expressive on their own
 
 For example, imagine your code is calling:
 
 def complex_set(key, value)
 
 You want to assert that on completion of your unit, the final value
 assigned to key was value. This is difficult to capture with mock
 without risking false assertion failures if complex_set sets other keys
 which you aren't interested in, or if key's value is set multiple
 times, but you're only interested in the last one. A little fake
 function which stores the final value assigned to key does this simply
 and accurately without adding a great deal of complexity. e.g.
 
 mykey = [None]
 def fake_complex_set(key, value):
   if key == 'FOO':
 mykey[0] = value
 
 with mock.patch.object(unit, 'complex_set', side_effect=fake_complex_set):
   run_test()
 self.assertEquals('expected', mykey[0])
 
 Summary: fake method is a custom mock assertion.
 
 * A simple fake function is simpler than a complex mock dance
 
 For example, you're mocking 2 functions: start_frobincating(key) and
 wait_for_frobnication(key). They can potentially be called overlapping
 with different keys. The desired mock return value of one is dependent
 on arguments passed to the other. This is better mocked with a couple of
 little fake functions and some external state, or you risk introducing
 artificial constraints on the code under test.
 
 Jay pointed out that faking methods creates more opportunities for
 errors. For this reason, in the above cases, you want to keep your fake
 function as simple as possible (but no simpler). However, there's a big
 one: the fake driver!
 
 This may make less sense outside of driver code, although faking the
 image service came up in the talk. Without looking at the detail, that
 doesn't necessarily sound awful to me, depending on context. In the
 driver, though, the ultimate measure of correctness isn't a Nova call:
 it's the effect produced on the state of an external system.
 
 For the VMware driver we have nova.tests.virt.vmwareapi.fake. This is a
 lot of code: 1599 lines as of writing. It contains bugs, and it contains
 inaccuracies, and both of these can mess up tests. However:
 
 * It's vastly simpler than the system it models (vSphere server)
 * It's common code, so gets fixed over time
 * It allows tests to run almost all driver code unmodified
 
 So, for example, it knows that you can't move a file before you create
 it. It knows that creating a VM creates a bunch of different files, and
 where they're created. It knows what objects are created by the server,
 and what attributes they have. And what attributes they don't have. If
 you do an object lookup, it knows which objects to return, and what
 their properties are.
 
 All of this knowledge is vital to testing, and if it wasn't contained in
 the fake driver, or something like it[1], would have to be replicated
 across all tests which require it. i.e. It may be 1599 lines of
 complexity, but it's all complexity which has to live somewhere anyway.
 
 Incidentally, this is fresh in my mind because of
 https://review.openstack.org/#/c/122760/ . Note the diff stat: +70,
 -161, and the rewrite has better coverage, too :) It executes the
 function under test, it checks that it produces the correct outcome, and
 other than that it doesn't care how the function is implemented.
 
 TL;DR
 
 * Bootstrap hour is awesome
 * Don't fake if you don't have to
 * However, there are situations where it's a good choice

Yes. I agree with all of these things.

For reference, we use several complex-ish fakes in nodepool testing
because it's the sanest thing to do.

Also, the nova fake-virt driver is actually completely amazing and
allows us to test system state things that otherwise would be hard to model.

However - 100 times yes - don't go faking things when a mock will do the
job - and that's almost always.

 Thanks for reading :)
 
 Matt
 
 [1] There are other ways to skin this cat, but ultimately if you aren't
 actually spinning up a vSphere server, you're modelling it somehow.
 



Re: [openstack-dev] [all] Deprecating exceptions

2014-09-22 Thread David Lyle
The larger problem here is that this breaks running Horizon unit tests on
all existing installs of Horizon including Havana, Icehouse and Juno if
those installations update to the newest python-ceilometerclient. I'm not
sure how to handle that type of deprecation other than forcing all existing
installs to pull a new patch to be able to continue development on Horizon.

David

On Mon, Sep 22, 2014 at 9:32 AM, Sean Dague s...@dague.net wrote:

 On 09/22/2014 11:24 AM, Ihar Hrachyshka wrote:
  On 22/09/14 17:07, Radomir Dopieralski wrote:
  Horizon's tests were recently broken by a change in the Ceilometer
  client that removed a deprecated exception. The exception was
  deprecated for a while already, but as it often is, nobody did the
  work of removing all references to it from Horizon before it was
  too late. Sure, in theory we should all be reading the release
  notes of all versions of all dependencies and acting upon things
  like this. In practice, if there is no warning generated in the
  unit tests, nobody is going to do anything about it.
 
  So I sat down and started thinking about how to best generate a
  warning when someone is trying to catch a deprecated exception. I
  came up with this code:
 
  http://paste.openstack.org/show/114170/
 
  It's not pretty -- it is based on the fact that the `except`
  statement has to do a subclass check on the exceptions it is
  catching. It requires a metaclass and a class decorator to work,
  and it uses a global variable. I'm sure it would be possible to do
  it in a little bit cleaner way. But at least it gives us the
  warning (sure, only if an exception is actually being thrown, but
  that's test coverage problem).
 
  I propose to do exception deprecating in this way in the future.
 
 
  Aren't clients supposed to be backwards compatible? Isn't it the exact
  reason why we don't maintain stable branches for client modules?
 
  So, another reason to actually start maintaining those stable branches
  for clients. We already do it in RDO (Red Hat backed Openstack
  distribution) anyway.

 I think the real question is how much warning was given on the removal
 of the exception. Was there a release out for 6 months with the
 deprecation? That's about our normal time for delete threshold.
 Honestly, I have no idea.

 If ceilometer client did the right thing and gave enough deprecation
 time before the remove, it's an issue about why horizon didn't respond
 to the deprecation sooner.

 -Sean

 --
 Sean Dague
 http://dague.net






[openstack-dev] [all] To slide or not to slide? OpenStack Bootstrapping Hour...

2014-09-22 Thread Jay Pipes

Hi all,

OK, so we had our inaugural OpenStack Bootstrapping Hour last Friday. 
Thanks to Sean and Dan for putting up with my rambling about unittest 
and mock stuff. And thanks to one of my pugs, Winnie, for, according to 
Shrews, looking like she was drunk. :)


One thing we're doing today is a bit of a post-mortem around what worked 
and what didn't.


For this first OBH session, I put together some slides [1] that were 
referenced during the hangout session. I'm wondering what folks who 
watched the video [2] thought about using the slides.


Were the slides useful compared to just providing links to code on the 
etherpad [3]?


Were the slides a hindrance to the flow of the OBH session?

Would producing the slides *after* an OBH session, instead of before,
work better?


Very much interested in hearing answers/opinions about the above 
questions. We're keen to provide the best experience possible, and are 
open to any ideas.


Best,
-jay

[1] http://bit.ly/obh-mock-best-practices
[2] http://youtu.be/jCWtLoSEfmw
[3] https://etherpad.openstack.org/p/obh-mock-best-practices



[openstack-dev] [Infra][Cinder] Request for voting permissions for Third-party Coraid CI system

2014-09-22 Thread Mykola Grygoriev
Hi,

We have finished deploying third-party CI for the Coraid Cinder driver.

Following instructions on
http://ci.openstack.org/third_party.html#requesting-a-service-account we
are asking to add gerrit CI account (coraid-ci) to the Voting Third-Party
CI Gerrit group https://review.openstack.org/#/admin/groups/91,members.

We have done all the requirements described on page
http://ci.openstack.org/third_party.html#requirements
1. We have already added description of Coraid CI system to wiki page -
https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI
2. The Jenkins Gerrit plugin has been configured to send +1 if the build
succeeds and -1 if it fails.
3. It sends only one comment per patch set. All comments from Coraid CI
system contain a link to the wiki page of Coraid CI system. Example of
comment is below:

Patch Set 1:

Coraid CI wiki: https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI

http://38.111.159.9:8080/job/Hello%20world/357/ : SUCCESS


4. The Coraid CI system supports "recheck" to request re-running a test.
5. The Coraid CI system is located at http://38.111.159.9:8080/job/CoraidCI/ .
All logs are browsable.
6. The Coraid CI system is currently working in silent mode on the
openstack/cinder project.

My previous request is here -
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045137.html
. But, unfortunately, I haven't got any response for a week on that thread.

Thank you in advance.

--
Best regards,
Mykola Grygoriev


[openstack-dev] [QA] PTL Candidacy

2014-09-22 Thread Matthew Treinish
I'd like to throw my hat into the ring for another term as PTL of the QA
program.

The Juno cycle has been very productive with some big developments in the QA
program including: being the first cycle to implement branchless Tempest,
starting tempest-lib, and consolidating the devstack program with the QA
program. This cycle was also unique in that we identified some large scaling
limits with our current integrated gating model. Together with infra we came up
with some substantial changes to try and address these issues.[1] 

Just as OpenStack itself has been growing and evolving the QA program has been
undergoing similar changes. I personally feel that scaling with the rest of
OpenStack has been one of the biggest challenges for the QA program. This will
continue to be true in Kilo as well. However, it's definitely something I feel
we can continue to address and solve by working together as a community
to come up with new solutions as these new issues appear. Moving forward
I expect that working towards this will be a top priority for everyone, not just
QA and Infra, but for the QA program specifically I expect that this change in
our gating model will continue to heavily influence the development goals into
Kilo.

For Kilo I also have a couple of other separate priorities, firstly to make the
QA projects more modular and reusable. [2] This is in order to make QA program
projects easier to consume on their own. Also by making each project
more targeted and externally consumable it should make it easier to build off of
them and use each project for different scenarios both in and outside of the
gate. Tempest-lib was the start of this effort late in Juno, and I expect this
trend will continue into Kilo.

My other larger priority for Kilo is to work on making the QA projects more
accessible. This is something I outlined as a priority for the Juno cycle, [3] and
I think we've made good progress on this so far. But, recent larger discussions
we've had in the OpenStack community have made it clear that there is still a
lot of work left on making it clearer how things work. Whether it be improving
the UX or the documentation around the projects in the QA program or coming up
with better channels to improve cross-project communication this is something I
view as very important for Kilo. Also, by working on this it will hopefully
enable growing the communities and number of contributors for the QA program
projects.

I'm looking forward to helping make Kilo another great cycle, and I hope to have
the opportunity to continue leading the QA program and work towards making a
better OpenStack with the rest of the community.

Thanks,

Matt Treinish

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
[2] http://blog.kortar.org/?p=4
[3] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031297.html




Re: [openstack-dev] [Infra][Cinder] Request for voting permissions for Third-party Coraid CI system

2014-09-22 Thread Anita Kuno
On 09/22/2014 12:31 PM, Mykola Grygoriev wrote:
 Hi,
 
 We finished deployment of 3d party CI for Coraid Cinder driver.
 
 Following instructions on
 http://ci.openstack.org/third_party.html#requesting-a-service-account we
 are asking to add gerrit CI account (coraid-ci) to the Voting Third-Party
 CI Gerrit group https://review.openstack.org/#/admin/groups/91,members.
 
 We have done all the requirements described on page
 http://ci.openstack.org/third_party.html#requirements
 1. We have already added description of Coraid CI system to wiki page -
 https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI
 2. Jenkins Gerrit plugin has been configured to sent +1 if build is in
 Success state and -1 if build fails.
 3. It sends only one comment per patch set. All comments from Coraid CI
 system contain a link to the wiki page of Coraid CI system. Example of
 comment is below:
 
 Patch Set 1:
 
 Coraid CI wiki: https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI
 
 http://38.111.159.9:8080/job/Hello%20world/357/ : SUCCESS
 
 
 4. Coraid CI system supports recheck to request re-running a test.
 5. Coraid CI system locates here - http://38.111.159.9:8080/job/CoraidCI/ .
 All logs are browsable.
 6. Now Coraid CI system is working in silent mode on project
 openstack/cinder.
 
 My previous request is here -
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045137.html
 . But, unfortunately, I haven't got any response for a week on that thread.
In future, please don't start a new thread if you haven't gotten a
response. Please join the #openstack-cinder irc channel and talk to
people. If you don't understand what they advised you, asking questions
on irc is a good follow up so they can exchange thoughts with you and
keep you moving.

Thanks,
Anita.
 
 Thank you in advance.
 
 --
 Best regards,
 Mykola Grygoriev
 
 
 
 




Re: [openstack-dev] [QA] PTL Candidacy

2014-09-22 Thread Tristan Cacqueray
confirmed

On 22/09/14 12:32 PM, Matthew Treinish wrote:
 I'd like to throw my hat into the ring for another term as PTL of the QA
 program.
 
 The Juno cycle has been very productive with some big developments in the QA
 program including: being the first cycle to implement branchless Tempest,
 starting tempest-lib, and consolidating the devstack program with the QA
 program. This cycle was also unique in that we identified some large scaling
 limits with our current integrated gating model. Together with infra we came 
 up
 with some substantial changes to try and address these issues.[1] 
 
 Just as OpenStack itself has been growing and evolving the QA program has been
 undergoing similar changes. I personally feel that scaling with the rest of
 OpenStack has been one of the biggest challenges for the QA program. This will
 continue to be true in Kilo as well. However, it's definitely something I feel
 that we can continue to address and be solved by working together as a 
 community
 to come up with new solutions to as these new issues appear. Moving forward
 I expect that working towards this will be a top priority for everyone, not 
 just
 QA and Infra, but for the QA program specifically I expect that this change in
 our gating model will continue to heavily influence the development goals into
 Kilo.
 
 For Kilo I also have a couple of other separate priorities, firstly to make 
 the
 QA projects more modular and reusable. [2] This is to in order to make 
 consuming
 QA program projects easier to use by themselves. Also by making each project
 more targeted and externally consumable it should make it easier to build off 
 of
 them and use each project for different scenarios both in and outside of the
 gate. Tempest-lib was the start of this effort late in Juno, and I expect this
 trend will continue into Kilo.
 
 My other larger priority for Kilo is to work on making the QA projects more
 accessible. This is something I outlined as a priority for the Juno cycle, [3]
 and I think we've made good progress on it so far. But recent larger
 discussions we've had in the OpenStack community have made it clear that there
 is still a lot of work left to make it clearer how things work. Whether it be
 improving the UX or the documentation around the projects in the QA program,
 or coming up with better channels for cross-project communication, this is
 something I view as very important for Kilo. Working on this will also
 hopefully grow the communities and the number of contributors for the QA
 program projects.
 
 I'm looking forward to helping make Kilo another great cycle, and I hope to
 have the opportunity to continue leading the QA program and working towards
 making a better OpenStack with the rest of the community.
 
 Thanks,
 
 Matt Treinish
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
 [2] http://blog.kortar.org/?p=4
 [3] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031297.html
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 






Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-22 Thread Akihiro Motoki
In Keystone, as Dolph says, a tenant name is not globally unique,
so IMHO tenant_id needs to be passed to a back-end controller
to ensure uniqueness of tenant (or project).

tenant_name can be additional information.
For example, it can be used in the GUI of a back-end controller,
so I think it may be useful for some purposes.

Thanks,
Akihiro


On Sun, Sep 21, 2014 at 8:46 AM, Kevin Benton blak...@gmail.com wrote:
 Hello all,

 A patch has come up to query keystone for tenant names in the IBM
 plugin.[1] As I understand it, this was one of the reasons another
 mechanism driver was reverted.[2] Can we get some clarity on the level
 of integration with Keystone that is permitted?

 Thanks

 1. https://review.openstack.org/#/c/122382
 2. https://review.openstack.org/#/c/118456

 --
 Kevin Benton




-- 
Akihiro Motoki amot...@gmail.com



Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-22 Thread Zane Bitter

On 22/09/14 10:11, Gordon Sim wrote:

On 09/19/2014 09:13 PM, Zane Bitter wrote:

SQS offers very, very limited guarantees, and it's clear that the reason
for that is to make it massively, massively scalable in the way that
e.g. S3 is scalable while also remaining comparably durable (S3 is
supposedly designed for 11 nines, BTW).

Zaqar, meanwhile, seems to be promising the world in terms of
guarantees. (And then taking it away in the fine print, where it says
that the operator can disregard many of them, potentially without the
user's knowledge.)

On the other hand, IIUC Zaqar does in fact have a sharding feature
(Pools) which is its answer to the massive scaling question.


There are different dimensions to the scaling problem.


Many thanks for this analysis, Gordon. This is really helpful stuff.


As I understand it, pools don't help scale a given queue, since all the
messages for that queue must be in the same pool. At present, traffic
through different Zaqar queues is essentially a set of entirely orthogonal
streams. Pooling can help scale the number of such orthogonal streams,
but to be honest, that's the easier part of the problem.


But I think it's also the important part of the problem. When I talk 
about scaling, I mean 1 million clients sending 10 messages per second 
each, not 10 clients sending 1 million messages per second each.


When a user gets to the point that individual queues have massive 
throughput, it's unlikely that a one-size-fits-all cloud offering like 
Zaqar or SQS is _ever_ going to meet their needs. Those users will want 
to spin up and configure their own messaging systems on Nova servers, 
and at that kind of size they'll be able to afford to. (In fact, they 
may not be able to afford _not_ to, assuming per-message-based pricing.)



There is also the possibility of using the sharding capabilities of the
underlying storage. But the pattern of use will determine how effective
that can be.

So for example, on the ordering question, if order is defined by a
single sequence number held in the database and atomically incremented
for every message published, that is not likely to be something where
the database's sharding is going to help in scaling the number of
concurrent publications.

Though sharding would allow scaling the total number of messages on the
queue (by distributing them over multiple shards), the total ordering of
those messages reduces its effectiveness in scaling the number of
concurrent getters (e.g. the concurrent subscribers in pub-sub), since
they will all be getting the messages in exactly the same order.

Strict ordering impacts the competing consumers case also (and is in my
opinion of limited value as a guarantee anyway). At any given time, the
head of the queue is in one shard, and all concurrent claim requests
will contend for messages in that same shard. Though the unsuccessful
claimants may then move to another shard as the head moves, they will
all again try to access the messages in the same order.

So if Zaqar's goal is to scale the number of orthogonal queues, and the
number of messages held at any time within these, the pooling facility
and any sharding capability in the underlying store for a pool would
likely be effective even with the strict ordering guarantee.


IMHO this is (or should be) the goal - support enormous numbers of 
small-to-moderate sized queues.



If scaling the number of communicants on a given communication channel
is a goal however, then strict ordering may hamper that. If it does, it
seems to me that this is not just a policy tweak on the underlying
datastore to choose the desired balance between ordering and scale, but
a more fundamental question on the internal structure of the queue
implementation built on top of the datastore.


I agree with your analysis, but I don't think this should be a goal.

Note that the user can still implement this themselves using 
application-level sharding - if you know that in-order delivery is not 
important to you, then randomly assign clients to a queue and then poll 
all of the queues in round-robin. This yields _exactly_ the same 
semantics as SQS.
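For concreteness, the application-level sharding described above can be sketched in a few lines of Python. The in-memory deques stand in for real Zaqar queues (a hypothetical simplification; a real client would POST messages and claim them over HTTP):

```python
import itertools
import random
from collections import deque

class ShardedQueue:
    """Application-level sharding over several independent FIFO queues.

    Producers pick a shard at random and consumers poll the shards in
    round-robin, so no ordering is preserved across shards -- the same
    semantics SQS gives you.
    """

    def __init__(self, num_shards=4):
        self.shards = [deque() for _ in range(num_shards)]
        self._rr = itertools.cycle(range(num_shards))

    def post(self, message):
        # Random shard assignment: acceptable when in-order delivery
        # does not matter to the application.
        random.choice(self.shards).append(message)

    def claim(self):
        # Poll the shards round-robin; return None when all are empty.
        for _ in range(len(self.shards)):
            shard = self.shards[next(self._rr)]
            if shard:
                return shard.popleft()
        return None
```

Each individual shard is still strictly ordered, so per-shard guarantees are unaffected; only the global ordering is given up.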


The reverse is true of SQS - if you want FIFO then you have to implement 
re-ordering by sequence number in your application. (I'm not certain, 
but it also sounds very much like this situation is ripe for losing 
messages when your client dies.)
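The client-side re-ordering this would require can be sketched as a small buffer keyed by sequence number (a hedged illustration, not SQS-specific code; note that `next_seq` would have to be persisted for the consumer to survive a crash, which is exactly where messages can get lost):

```python
class ReorderBuffer:
    """Reconstruct FIFO order on top of an out-of-order queue.

    Producers stamp each message with a monotonically increasing
    sequence number; the consumer holds back out-of-order arrivals
    until the gap before them is filled.
    """

    def __init__(self):
        self.next_seq = 0   # in a real consumer this must be persisted
        self.pending = {}   # seq -> payload, arrivals held back so far

    def receive(self, seq, payload):
        """Return the list of messages now deliverable in order."""
        self.pending[seq] = payload
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready

buf = ReorderBuffer()
buf.receive(2, "c")   # held back: 0 and 1 haven't arrived yet
buf.receive(0, "a")   # releases "a"
buf.receive(1, "b")   # releases "b", then the buffered "c"
```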


So the question is: in which use case do we want to push additional 
complexity into the application? The case where there are truly massive 
volumes of messages flowing to a single point? Or the case where the 
application wants the messages in order?


I'd suggest both that the former applications are better able to handle 
that extra complexity and that the latter applications are probably more 
common. So it seems that the Zaqar team made a good decision.


(Aside: it follows that Zaqar probably should have a maximum throughput 
quota for each queue; or that it should report usage information 

Re: [openstack-dev] [all] To slide or not to slide? OpenStack Bootstrapping Hour...

2014-09-22 Thread John Griffith
On Mon, Sep 22, 2014 at 10:27 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi all,

 OK, so we had our inaugural OpenStack Bootstrapping Hour last Friday.
 Thanks to Sean and Dan for putting up with my rambling about unittest and
 mock stuff. And thanks to one of my pugs, Winnie, for, according to Shrews,
 looking like she was drunk. :)

 One thing we're doing today is a bit of a post-mortem around what worked
 and what didn't.

 For this first OBH session, I put together some slides [1] that were
 referenced during the hangout session. I'm wondering what folks who watched
 the video [2] thought about using the slides.

 Were the slides useful compared to just providing links to code on the
 etherpad [3]?

 Were the slides a hindrance to the flow of the OBH session?

 Would a production of slides be better to do *after* an OBH session,
 instead of before?

 Very much interested in hearing answers/opinions about the above
 questions. We're keen to provide the best experience possible, and are open
 to any ideas.

 Best,
 -jay

 [1] http://bit.ly/obh-mock-best-practices
 [2] http://youtu.be/jCWtLoSEfmw
 [3] https://etherpad.openstack.org/p/obh-mock-best-practices



So first off, nice job to you, Sean and Dan. I watched the session last
night and thought it was a great idea with some really good content.

As far as slides go, I never thought I'd say I liked having them, but in
my case, viewing the session after the fact, I found the slides useful,
and personally I think they kept some boundaries around the context of
the discussion.


Re: [openstack-dev] [all] To slide or not to slide? OpenStack Bootstrapping Hour...

2014-09-22 Thread David Shrewsbury
On Mon, Sep 22, 2014 at 1:07 PM, John Griffith john.griffi...@gmail.com
wrote:



 On Mon, Sep 22, 2014 at 10:27 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi all,

 OK, so we had our inaugural OpenStack Bootstrapping Hour last Friday.
 Thanks to Sean and Dan for putting up with my rambling about unittest and
 mock stuff. And thanks to one of my pugs, Winnie, for, according to Shrews,
 looking like she was drunk. :)

 One thing we're doing today is a bit of a post-mortem around what worked
 and what didn't.

 For this first OBH session, I put together some slides [1] that were
 referenced during the hangout session. I'm wondering what folks who watched
 the video [2] thought about using the slides.

 Were the slides useful compared to just providing links to code on the
 etherpad [3]?

 Were the slides a hindrance to the flow of the OBH session?

 Would a production of slides be better to do *after* an OBH session,
 instead of before?

 Very much interested in hearing answers/opinions about the above
 questions. We're keen to provide the best experience possible, and are open
 to any ideas.

 Best,
 -jay

 [1] http://bit.ly/obh-mock-best-practices
 [2] http://youtu.be/jCWtLoSEfmw
 [3] https://etherpad.openstack.org/p/obh-mock-best-practices



 So first off.. nice job to You, Sean and Dan.  Watched the session last
 night and thought it was a great idea and had some really good content.

 As far as slides, never thought I'd say I liked having the slides but in
 my case viewing the session after the fact I found the slides useful and
 personally I think it kept some boundaries around context around the
 discussion.


++ to having the context and the ability to look forward and backward as
needed.

Also, ++ to having drunk pugs on YouTube.


David Shrewsbury (Shrews)


Re: [openstack-dev] [Neutron][db] End of meetings for neutron-db

2014-09-22 Thread Mark McClain

On Sep 22, 2014, at 9:26 AM, Henry Gessau ges...@cisco.com wrote:

 https://wiki.openstack.org/wiki/Meetings/NeutronDB
 
 The work on healing and reorganizing Neutron DB migrations is complete, and so
 we will no longer hold meetings.
 

Great work by all who worked on this.  
mark




Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-22 Thread Monty Taylor
On 09/21/2014 10:57 PM, Nader Lahouti wrote:
 Thanks Kevin for bringing it up in the ML. I was looking for a guideline or
 any document to clarify issues on this subject.
 
 I was told that even using the keystone API in neutron is not permitted.

I recognize that I'm potentially without context for neutron internals -
but could someone please shed some light on why using keystone API from
neutron would ever be forbidden? That sounds a bit craycray to me and
I'd like to understand more.

 
 
 On Sun, Sep 21, 2014 at 12:58 PM, Kevin Benton blak...@gmail.com wrote:
 
 So based on those guidelines there would be a problem with the IBM patch
 because it's storing the tenant name in a backend controller, right?
 On Sep 21, 2014 12:18 PM, Dolph Mathews dolph.math...@gmail.com wrote:

 Querying keystone for tenant names is certainly fair game.

 Keystone should be considered the only source of truth for tenant names
 though, as they are mutable and not globally unique on their own, so other
 services should not stash any names from keystone into long term
 persistence (users, projects, domains, groups, etc-- roles might be an odd
 outlier worth a separate conversation if anyone is interested).

 Store IDs where necessary, and use IDs on the wire where possible though,
 as they are immutable.

 On Sat, Sep 20, 2014 at 7:46 PM, Kevin Benton blak...@gmail.com wrote:

 Hello all,

 A patch has come up to query keystone for tenant names in the IBM
 plugin.[1] As I understand it, this was one of the reasons another
 mechanism driver was reverted.[2] Can we get some clarity on the level
 of integration with Keystone that is permitted?

 Thanks

 1. https://review.openstack.org/#/c/122382
 2. https://review.openstack.org/#/c/118456

 --
 Kevin Benton



[openstack-dev] [Glance] PTL Non-Candidacy

2014-09-22 Thread Mark Washenberger
Greetings,

I will not be running for PTL for Glance for the Kilo release.

I want to thank all of the nice folks I've worked with--especially the
attendees and sponsors of the mid-cycle meetups, which I think were a major
success and one of the highlights of the project for me.

Cheers,
markwash


[openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Kevin L. Mitchell
My team just ran into an issue where neutron was not passing unit tests
when run under Python 2.6.  We tracked this down to a test support
function using collections.OrderedDict.  This was in locally forked
code, but when I compared it to upstream code, I found that the code in
upstream neutron is identical…meaning that upstream neutron cannot
possibly be passing unit tests under Python 2.6.  Yet, somehow, the
neutron reviews I've looked at are passing the Python 2.6 gate!  Any
ideas as to how this could be happening?

For the record, the problem is in neutron/tests/unit/test_api_v2.py:148,
in the function _get_collection_kwargs(), which uses
collections.OrderedDict.  As there's no reason to use OrderedDict here
that I can see—there's no definite order on the initialization, and all
consumers pass it to an assert_called_once_with() method with the '**'
operator—I have proposed a review[1] to replace it with a simple dict.

[1] https://review.openstack.org/#/c/123189/
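For reference, code that genuinely needs insertion order on a stock Python 2.6 usually guards the import with a fallback to the `ordereddict` backport (already listed in neutron's test-requirements.txt); in this particular function, though, a plain dict really is the right fix:

```python
try:
    from collections import OrderedDict  # Python >= 2.7
except ImportError:
    from ordereddict import OrderedDict  # the 2.6 backport package

# Insertion order is then preserved on either interpreter.
d = OrderedDict([("b", 2), ("a", 1)])
```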
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Armando M.
What about:

https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12



On 22 September 2014 10:23, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
 My team just ran into an issue where neutron was not passing unit tests
 when run under Python 2.6.  We tracked this down to a test support
 function using collections.OrderedDict.  This was in locally forked
 code, but when I compared it to upstream code, I found that the code in
 upstream neutron is identical…meaning that upstream neutron cannot
 possibly be passing unit tests under Python 2.6.  Yet, somehow, the
 neutron reviews I've looked at are passing the Python 2.6 gate!  Any
 ideas as to how this could be happening?

 For the record, the problem is in neutron/tests/unit/test_api_v2.py:148,
 in the function _get_collection_kwargs(), which uses
 collections.OrderedDict.  As there's no reason to use OrderedDict here
 that I can see—there's no definite order on the initialization, and all
 consumers pass it to an assert_called_once_with() method with the '**'
 operator—I have proposed a review[1] to replace it with a simple dict.

 [1] https://review.openstack.org/#/c/123189/
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace




Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-22 Thread Mohammad Banikazemi

In the patch being referred to here and in the IBM controller, the project
ID is the unique identifier used. The name is simply an additional piece of
information that may perhaps be used for debugging. The back-end
(controller) keeps the name not as a unique identifier but in addition to
the unique identifier, which is the project ID. For all practical purposes,
we could set the project name for all projects to "Kevin Benton" and
nothing would change functionally.

This should be obvious from the code and from how the project ID, and not
the name, is used in the plugin. Perhaps the commit message can state this
clearly to avoid any confusion.

Best,

Mohammad





From:   Dolph Mathews dolph.math...@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   09/22/2014 10:53 AM
Subject:Re: [openstack-dev] [Neutron] - what integration with Keystone
is  allowed?




On Sun, Sep 21, 2014 at 3:58 PM, Kevin Benton blak...@gmail.com wrote:
  So based on those guidelines there would be a problem with the IBM patch
  because it's storing the tenant name in a backend controller, right?


It would need to be regarded as an expiring cache if Neutron chose to go
that route. I'd wholly recommend against it though, because I don't see a
strong use case to use names instead of IDs here (correct me if I'm wrong).
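If a deployment did want names for display, the "expiring cache" mentioned above could look roughly like this (a sketch; `fetch` stands in for whatever Keystone lookup the plugin uses, and the TTL is arbitrary):

```python
import time

class ExpiringNameCache:
    """Cache tenant names by ID, re-fetching once an entry goes stale.

    The ID remains the only persistent key; names are treated as
    disposable, since Keystone tenant names are mutable.
    """

    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch          # callable: tenant_id -> name
        self.ttl = ttl_seconds
        self._cache = {}            # tenant_id -> (name, fetched_at)

    def name_for(self, tenant_id):
        entry = self._cache.get(tenant_id)
        if entry is not None:
            name, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return name         # still fresh
        name = self.fetch(tenant_id)
        self._cache[tenant_id] = (name, time.time())
        return name
```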

  On Sep 21, 2014 12:18 PM, Dolph Mathews dolph.math...@gmail.com
  wrote:
   Querying keystone for tenant names is certainly fair game.

   Keystone should be considered the only source of truth for tenant names
   though, as they are mutable and not globally unique on their own, so
   other services should not stash any names from keystone into long term
   persistence (users, projects, domains, groups, etc-- roles might be an
   odd outlier worth a separate conversation if anyone is interested).

   Store IDs where necessary, and use IDs on the wire where possible
   though, as they are immutable.

   On Sat, Sep 20, 2014 at 7:46 PM, Kevin Benton blak...@gmail.com wrote:
 Hello all,

 A patch has come up to query keystone for tenant names in the IBM
 plugin.[1] As I understand it, this was one of the reasons another
 mechanism driver was reverted.[2] Can we get some clarity on the level
 of integration with Keystone that is permitted?

 Thanks

 1. https://review.openstack.org/#/c/122382
 2. https://review.openstack.org/#/c/118456

 --
 Kevin Benton



Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-22 Thread Mohammad Banikazemi


Akihiro Motoki amot...@gmail.com wrote on 09/22/2014 12:50:43 PM:

 From: Akihiro Motoki amot...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 09/22/2014 12:53 PM
 Subject: Re: [openstack-dev] [Neutron] - what integration with
 Keystone is allowed?

 In Keystone, as Dolph says, a tenant name is not globally unique,
 so IMHO tenant_id needs to be passed to a back-end controller
 to ensure uniqueness of tenant (or project).


Yes, that has been the case in this plugin (and the patch in question does
not change that).


 tenant_name can be an additional information.
 For example it can be used in a GUI of a back-end controller,
 so I think it may be useful for some purpose.


Exactly.


Best,

Mohammad


 Thanks,
 Akihiro


 On Sun, Sep 21, 2014 at 8:46 AM, Kevin Benton blak...@gmail.com wrote:
  Hello all,
 
  A patch has come up to query keystone for tenant names in the IBM
  plugin.[1] As I understand it, this was one of the reasons another
  mechanism driver was reverted.[2] Can we get some clarity on the level
  of integration with Keystone that is permitted?
 
  Thanks
 
  1. https://review.openstack.org/#/c/122382
  2. https://review.openstack.org/#/c/118456
 
  --
  Kevin Benton
 



 --
 Akihiro Motoki amot...@gmail.com



Re: [openstack-dev] Oslo messaging vs zaqar

2014-09-22 Thread Clint Byrum
Geoff, do you expect all of our users to write all of their messaging
code in Python?

oslo.messaging is a _python_ library.

Zaqar is a service with a REST API -- accessible to any application.

As Zane's sarcastic reply implied, these are as related as sharks are
to tornados. Could they be combined? Yes [1]. But the only result would be
dead people and sharks strewn about the landscape.

[1] http://www.imdb.com/title/tt2724064/

Excerpts from Geoff O'Callaghan's message of 2014-09-20 01:17:45 -0700:
 Hi all,
 I'm just trying to understand the messaging strategy in OpenStack. It
 seems we have at least 2 messaging layers.
 
 Oslo.messaging and zaqar,  Can someone explain to me why there are two?
 To quote from the zaqar faq :
 -
 How does Zaqar compare to oslo.messaging?
 
 oslo.messaging is an RPC library used throughout OpenStack to manage
 distributed commands by sending messages through different messaging
 layers. Oslo Messaging was originally developed as an abstraction over
 AMQP, but has since added support for ZeroMQ.
 
 As opposed to Oslo Messaging, Zaqar is a messaging service for the over and
 under cloud. As a service, it is meant to be consumed by using libraries
 for different languages. Zaqar currently supports 1 protocol (HTTP) and
 sits on top of other existing technologies (MongoDB as of version 1.0).
 
 It seems to my casual view that we could have one, scale it, and use it for
 SQS-style messages and internal messaging (which could include logging),
 all managed by message schemas and QoS. This would give a very robust and
 flexible system for endpoints to consume.
 
 Is there a plan to consolidate?
 
 Rgds
 Geoff



Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Kevin L. Mitchell
On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
 What about:
 
 https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12

Pulling in ordereddict doesn't do anything if your code doesn't use it
when OrderedDict isn't in collections, which is the case here.  Further,
there's no reason that _get_collection_kwargs() needs to use an
OrderedDict: it's initialized in an arbitrary order (generator
comprehension over a set), then later passed to functions with **, which
converts it to a plain old dict.
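The `**` point is easy to verify interactively: CPython always collects keyword arguments into a fresh plain dict, whatever mapping the caller unpacked:

```python
from collections import OrderedDict

def capture(**kwargs):
    return kwargs

# Unpack an OrderedDict; the callee still sees a plain dict.
result = capture(**OrderedDict([("fields", None), ("filters", None)]))
assert type(result) is dict
```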
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace




Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Nathanael Burton
Thanks Dolph!

On Mon, Sep 22, 2014 at 10:47 AM, Dolph Mathews dolph.math...@gmail.com
wrote:

 Dearest stackers and [key]stoners,

 With the PTL candidacies officially open for Kilo, I'm going to take the
 opportunity to announce that I won't be running again for the position.

 I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno.
 There was a perceived increase in stability [citation needed], which was
 one of my foremost goals. We primarily achieved that by improving the
 communication between developers which allowed developers to share their
 intent early and often (by way of API designs and specs). As a result, we
 had a lot more collaboration and a great working knowledge in the community
 when it came time for bug fixes. I also think we raised the bar for user
 experience, especially by way of reasonable defaults, strong documentation,
 and effective error messages. I'm consistently told that we have the best
 out-of-the-box experience of any OpenStack service. Well done!

 I'll still be involved in OpenStack, and I'm super confident in our
 incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
 helping other developers be as productive as possible, and intend to
 continue doing exactly that.

 Keep hacking responsibly,

 -Dolph



Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Monty Taylor
On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
 On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
 What about:

 https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
 
 Pulling in ordereddict doesn't do anything if your code doesn't use it
 when OrderedDict isn't in collections, which is the case here.  Further,
 there's no reason that _get_collection_kwargs() needs to use an
 OrderedDict: it's initialized in an arbitrary order (generator
 comprehension over a set), then later passed to functions with **, which
 converts it to a plain old dict.
 

So - as an update to this, this is due to RedHat once again choosing to
backport features from 2.7 into a thing they have labeled 2.6.

We test 2.6 on Centos6 - which means we get RedHat's patched version of
Python2.6 - which, it turns out, isn't really 2.6 - so while you might
want to assume that we're testing 2.6 - we're not - we're testing
2.6-as-it-appears-in-RHEL.

This brings up a question - in what direction do we care/what's the
point in the first place?

Some points to ponder:

 - 2.6 is end of life - so the fact that this is coming up is silly, we
should have stopped caring about it in OpenStack 2 years ago at least
 - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
point of supporting it at all
 - Maybe we ACTUALLY care about 2.6 support across the board, in which
case we should STOP testing using Centos6 which is not actually 2.6

I vote for just amending our policy right now and killing 2.6 with
prejudice.

(also, I have heard a rumor that there are people running in to problems
due to the fact that they are deploying onto a two-release-old version
of Debian. No offense - but there is no way we're supporting that)



Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Joshua Harlow
Just as an update to what exactly is RHEL python 2.6...

This is the expanded source rpm:

http://paste.ubuntu.com/8405074/

The main one here appears to be:

- python-2.6.6-ordereddict-backport.patch

Full changelog @ http://paste.ubuntu.com/8405082/

Overall I'd personally like to get rid of python 2.6, and move on, but then I'd 
also like to get rid of 2.7 and move on also ;)

- Josh

On Sep 22, 2014, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:

 On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
 On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
 What about:
 
 https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
 
 Pulling in ordereddict doesn't do anything if your code doesn't use it
 when OrderedDict isn't in collections, which is the case here.  Further,
 there's no reason that _get_collection_kwargs() needs to use an
 OrderedDict: it's initialized in an arbitrary order (generator
 comprehension over a set), then later passed to functions with **, which
 converts it to a plain old dict.
 
 
 So - as an update to this, this is due to RedHat once again choosing to
 backport features from 2.7 into a thing they have labeled 2.6.
 
 We test 2.6 on Centos6 - which means we get RedHat's patched version of
 Python2.6 - which, it turns out, isn't really 2.6 - so while you might
 want to assume that we're testing 2.6 - we're not - we're testing
 2.6-as-it-appears-in-RHEL.
 
 This brings up a question - in what direction do we care/what's the
 point in the first place?
 
 Some points to ponder:
 
 - 2.6 is end of life - so the fact that this is coming up is silly, we
 should have stopped caring about it in OpenStack 2 years ago at least
 - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
 point of supporting it at all
 - Maybe we ACTUALLY care about 2.6 support across the board, in which
 case we should STOP testing using Centos6 which is not actually 2.6
 
 I vote for just amending our policy right now and killing 2.6 with
 prejudice.
 
 (also, I have heard a rumor that there are people running in to problems
 due to the fact that they are deploying onto a two-release-old version
 of Debian. No offense - but there is no way we're supporting that)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Raildo Mascena
Thanks a lot Dolph!

2014-09-22 14:58 GMT-03:00 Nathanael Burton nathanael.i.bur...@gmail.com:

 Thanks Dolph!

 On Mon, Sep 22, 2014 at 10:47 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 Dearest stackers and [key]stoners,

 With the PTL candidacies officially open for Kilo, I'm going to take the
 opportunity to announce that I won't be running again for the position.

 I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno.
 There was a perceived increase in stability [citation needed], which was
 one of my foremost goals. We primarily achieved that by improving the
 communication between developers which allowed developers to share their
 intent early and often (by way of API designs and specs). As a result, we
 had a lot more collaboration and a great working knowledge in the community
 when it came time for bug fixes. I also think we raised the bar for user
 experience, especially by way of reasonable defaults, strong documentation,
 and effective error messages. I'm consistently told that we have the best
 out-of-the-box experience of any OpenStack service. Well done!

 I'll still be involved in OpenStack, and I'm super confident in our
 incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
 helping other developers be as productive as possible, and intend to
 continue doing exactly that.

 Keep hacking responsibly,

 -Dolph








-- 
Raildo Mascena
Software Engineer.
Bachelor of Computer Science.
Distributed Systems Laboratory
Federal University of Campina Grande
Campina Grande, PB - Brazil


Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Solly Ross
I'm in favor of killing Python 2.6 with fire.
Honestly, I think it's hurting code readability and productivity --

You have to constantly think about whether or not some feature that
the rest of the universe is already using is supported in Python 2.6
whenever you write code.

As for readability, things like 'contextlib.nested' can go away if we can
kill Python 2.6 (Python 2.7 supports nested context managers OOTB, in a much
more readable way).
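
[Editor's sketch of the readability difference Solly means, using
illustrative context managers rather than OpenStack code:]

```python
from contextlib import contextmanager

events = []

@contextmanager
def tag(name):
    events.append('enter ' + name)
    yield name
    events.append('exit ' + name)

# Python 2.6 needed contextlib.nested(tag('a'), tag('b')) to avoid deep
# indentation; since 2.7 the with statement takes several managers directly:
with tag('a') as a, tag('b') as b:
    events.append('body ' + a + b)

# Managers exit in reverse order of entry.
assert events == ['enter a', 'enter b', 'body ab', 'exit b', 'exit a']
```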

Best Regards,
Solly

- Original Message -
 From: Joshua Harlow harlo...@outlook.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, September 22, 2014 2:33:16 PM
 Subject: Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't 
 possibly be passing in neutron
 
 Just as an update to what exactly is RHEL python 2.6...
 
 This is the expanded source rpm:
 
 http://paste.ubuntu.com/8405074/
 
 The main one here appears to be:
 
 - python-2.6.6-ordereddict-backport.patch
 
 Full changelog @ http://paste.ubuntu.com/8405082/
 
 Overall I'd personally like to get rid of python 2.6, and move on, but then
 I'd also like to get rid of 2.7 and move on also ;)
 
 - Josh
 
 On Sep 22, 2014, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:
 
  On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
  On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
  What about:
  
  https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
  
  Pulling in ordereddict doesn't do anything if your code doesn't use it
  when OrderedDict isn't in collections, which is the case here.  Further,
  there's no reason that _get_collection_kwargs() needs to use an
  OrderedDict: it's initialized in an arbitrary order (generator
  comprehension over a set), then later passed to functions with **, which
  converts it to a plain old dict.
  
  
  So - as an update to this, this is due to RedHat once again choosing to
  backport features from 2.7 into a thing they have labeled 2.6.
  
  We test 2.6 on Centos6 - which means we get RedHat's patched version of
  Python2.6 - which, it turns out, isn't really 2.6 - so while you might
  want to assume that we're testing 2.6 - we're not - we're testing
  2.6-as-it-appears-in-RHEL.
  
  This brings up a question - in what direction do we care/what's the
  point in the first place?
  
  Some points to ponder:
  
  - 2.6 is end of life - so the fact that this is coming up is silly, we
  should have stopped caring about it in OpenStack 2 years ago at least
  - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
  point of supporting it at all
  - Maybe we ACTUALLY care about 2.6 support across the board, in which
  case we should STOP testing using Centos6 which is not actually 2.6
  
  I vote for just amending our policy right now and killing 2.6 with
  prejudice.
  
  (also, I have heard a rumor that there are people running into problems
  due to the fact that they are deploying onto a two-release-old version
  of Debian. No offense - but there is no way we're supporting that)
  
 
 
 



[openstack-dev] [Infra] Meeting Tuesday September 23rd at 19:00 UTC

2014-09-22 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday September 23rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Armando M.
I suspect that the very reason underlying the existence of this thread
is that some users out there are not quite ready to pull the plug on
Python 2.6.

Any decision about stopping the support of Python 2.6 should not be
based solely on making the developer's life easier, but maybe I am
stating the obvious.

Thanks,
Armando

On 22 September 2014 11:39, Solly Ross sr...@redhat.com wrote:
 I'm in favor of killing Python 2.6 with fire.
 Honestly, I think it's hurting code readability and productivity --

 You have to constantly think about whether or not some feature that
 the rest of the universe is already using is supported in Python 2.6
 whenever you write code.

 As for readability, things like 'contextlib.nested' can go away if we can
 kill Python 2.6 (Python 2.7 supports nested context managers OOTB, in a much
 more readable way).

 Best Regards,
 Solly

 - Original Message -
 From: Joshua Harlow harlo...@outlook.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, September 22, 2014 2:33:16 PM
 Subject: Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't 
 possibly be passing in neutron

 Just as an update to what exactly is RHEL python 2.6...

 This is the expanded source rpm:

 http://paste.ubuntu.com/8405074/

 The main one here appears to be:

 - python-2.6.6-ordereddict-backport.patch

 Full changelog @ http://paste.ubuntu.com/8405082/

 Overall I'd personally like to get rid of python 2.6, and move on, but then
 I'd also like to get rid of 2.7 and move on also ;)

 - Josh

 On Sep 22, 2014, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:

  On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
  On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
  What about:
 
  https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12
 
  Pulling in ordereddict doesn't do anything if your code doesn't use it
  when OrderedDict isn't in collections, which is the case here.  Further,
  there's no reason that _get_collection_kwargs() needs to use an
  OrderedDict: it's initialized in an arbitrary order (generator
  comprehension over a set), then later passed to functions with **, which
  converts it to a plain old dict.
 
 
  So - as an update to this, this is due to RedHat once again choosing to
  backport features from 2.7 into a thing they have labeled 2.6.
 
  We test 2.6 on Centos6 - which means we get RedHat's patched version of
  Python2.6 - which, it turns out, isn't really 2.6 - so while you might
  want to assume that we're testing 2.6 - we're not - we're testing
  2.6-as-it-appears-in-RHEL.
 
  This brings up a question - in what direction do we care/what's the
  point in the first place?
 
  Some points to ponder:
 
  - 2.6 is end of life - so the fact that this is coming up is silly, we
  should have stopped caring about it in OpenStack 2 years ago at least
  - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
  point of supporting it at all
  - Maybe we ACTUALLY care about 2.6 support across the board, in which
  case we should STOP testing using Centos6 which is not actually 2.6
 
  I vote for just amending our policy right now and killing 2.6 with
  prejudice.
 
  (also, I have heard a rumor that there are people running into problems
  due to the fact that they are deploying onto a two-release-old version
  of Debian. No offense - but there is no way we're supporting that)
 







Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Doug Hellmann

On Sep 19, 2014, at 10:37 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/19/2014 03:29 AM, Thierry Carrez wrote:
 Monty Taylor wrote:
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108
 
 Hey Monty,
 
 As you can imagine, I read that post with great attention. I generally
 like the concept of a tightly integrated, limited-by-design layer #1
 (I'd personally call it Ring 0) and a large collection of OpenStack
 things gravitating around it. That would at least solve the attraction
 of the integrated release, suppress the need for incubation, foster
 competition/innovation within our community, and generally address the
 problem of community scaling. There are a few details on the
 consequences though, and in those as always the devil lurks.
 
 ## The Technical Committee
 
 The Technical Committee is defined in the OpenStack bylaws, and is the
 representation of the contributors to the project. Teams work on code
 repositories, and at some point ask their work to be recognized as part
 of OpenStack. In doing so, they place their work under the oversight
 of the Technical Committee. In return, team members get to participate
 in electing the technical committee members (they become ATC). It's a
 balanced system, where both parties need to agree: the TC can't force
 itself as overseer of a random project, and a random project can't just
 decide by itself it is OpenStack.
 
 I don't see your proposal breaking that balanced system, but it changes
 its dynamics a bit. The big tent would contain a lot more members. And
 while the TC would arguably bring a significant share of its attention
 to Ring 0, its voters constituency would mostly consist of developers
 who do not participate in Ring 0 development. I don't really see it as
 changing dramatically the membership of the TC, but it's a consequence
 worth mentioning.
 
 Agree. I'm willing to bet it'll be better not worse to have a large
 constituency - but it's also possible that it's a giant disaster. I'm
 still on board with going for it.
 
 ## Programs
 
 Programs were created relatively recently as a way to describe which
 teams are in OpenStack vs. which ones aren't. They directly tie into
 the ATC system: if you contribute to code repositories under a blessed
 program, then you're an ATC, you vote in TC elections and the TC has
 some oversight over your code repositories. Previously, this was granted
 at a code repository level, but that failed to give flexibility for
 teams to organize their code in the most convenient manner for them. So
 we started to bless teams rather than specific code repositories.
 
 Now, that didn't work out so well. Some programs were a 'theme', like
 Infrastructure, or Docs. For those, competing efforts do not really make
 sense: there can only be one, and competition should happen inside those
 efforts rather than outside. Some programs were a 'team', like
 Orchestration/Heat or Deployment/TripleO. And that's where the model
 started to break: some new orchestration things need space, but the
 current Heat team is not really interested in maintaining them. What's
 the point of being under the same program then ? And TripleO is not the
 only way to deploy OpenStack, but its mere existence (and name)
 prevented other flowers from blooming in our community.
 
 You don't talk much about programs in your proposal. In particular, you
 only mention layer 1, Cloud Native applications, User Interface
 applications, and Operator applications. So I'm unsure of where, if
 anywhere, would Infrastructure or Docs repositories live.
 
 Here is how I see it could work. We could keep 'theme' programs (think
 Infra, Release cycle management, Docs, QA) with their current structure
 (collection of code repositories under a single team/PTL). We would get
 rid of 'team' programs completely, and just have a registry of
 OpenStack code repositories (openstack*/*, big tent). Each of those
 could have a specific PTL, or explicitly inherit its PTL from another
 code repository. Under that PTL, they could have separate or same core
 review team, whatever maps reality and how they want to work (so we
 could say that openstack/python-novaclient inherits its PTL from
 openstack/nova and doesn't need a specific one). That would allow us to
 map anything that may come in. Oslo falls a bit in between, could be
 considered a 'theme' program or several repos sharing PTL.
 
 I think we can do what you're saying and generalize a little bit. What
 if we declared programs, as needed, when we think there is a need to
 pick a winner. (I think we can all agree that early winner picking is
 an unintended but very real side effect of the current system)
 
 And when I say need to - I mean it in the same sense as Production
 Ready  The themes you listed are excellent ones - it makes no sense to
 have two Infras, two QAs or two Docs teams. On 

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Doug Hellmann

On Sep 19, 2014, at 6:29 AM, Thierry Carrez thie...@openstack.org wrote:

 Monty Taylor wrote:
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108
 
 Hey Monty,
 
 As you can imagine, I read that post with great attention. I generally
 like the concept of a tightly integrated, limited-by-design layer #1
 (I'd personally call it Ring 0) and a large collection of OpenStack
 things gravitating around it. That would at least solve the attraction
 of the integrated release, suppress the need for incubation, foster

I’m not sure I see this change reducing the number of incubated projects unless 
we no longer incubate and graduate projects at all. Would everything just live 
on stackforge and have a quality designation instead of an “officialness” 
designation? Or would we have both? ATC status seems to imply we need some sort 
of officialness designation, as you mention below.

 competition/innovation within our community, and generally address the
 problem of community scaling. There are a few details on the
 consequences though, and in those as always the devil lurks.
 
 ## The Technical Committee
 
 The Technical Committee is defined in the OpenStack bylaws, and is the
 representation of the contributors to the project. Teams work on code
 repositories, and at some point ask their work to be recognized as part
 of OpenStack. In doing so, they place their work under the oversight
 of the Technical Committee. In return, team members get to participate
 in electing the technical committee members (they become ATC). It's a
 balanced system, where both parties need to agree: the TC can't force
 itself as overseer of a random project, and a random project can't just
 decide by itself it is OpenStack.
 
 I don't see your proposal breaking that balanced system, but it changes
 its dynamics a bit. The big tent would contain a lot more members. And
 while the TC would arguably bring a significant share of its attention
 to Ring 0, its voters constituency would mostly consist of developers
 who do not participate in Ring 0 development. I don't really see it as
 changing dramatically the membership of the TC, but it's a consequence
 worth mentioning.
 
 ## Programs
 
 Programs were created relatively recently as a way to describe which
 teams are in OpenStack vs. which ones aren't. They directly tie into
 the ATC system: if you contribute to code repositories under a blessed
 program, then you're an ATC, you vote in TC elections and the TC has
 some oversight over your code repositories. Previously, this was granted
 at a code repository level, but that failed to give flexibility for
 teams to organize their code in the most convenient manner for them. So
 we started to bless teams rather than specific code repositories.
 
 Now, that didn't work out so well. Some programs were a 'theme', like
 Infrastructure, or Docs. For those, competing efforts do not really make
 sense: there can only be one, and competition should happen inside those
 efforts rather than outside. Some programs were a 'team', like
 Orchestration/Heat or Deployment/TripleO. And that's where the model
 started to break: some new orchestration things need space, but the
 current Heat team is not really interested in maintaining them. What's
 the point of being under the same program then ? And TripleO is not the
 only way to deploy OpenStack, but its mere existence (and name)
 prevented other flowers from blooming in our community.
 
 You don't talk much about programs in your proposal. In particular, you
 only mention layer 1, Cloud Native applications, User Interface
 applications, and Operator applications. So I'm unsure of where, if
 anywhere, would Infrastructure or Docs repositories live.
 
 Here is how I see it could work. We could keep 'theme' programs (think
 Infra, Release cycle management, Docs, QA) with their current structure
 (collection of code repositories under a single team/PTL). We would get
 rid of 'team' programs completely, and just have a registry of
 OpenStack code repositories (openstack*/*, big tent). Each of those
 could have a specific PTL, or explicitly inherit its PTL from another
 code repository. Under that PTL, they could have separate or same core
 review team, whatever maps reality and how they want to work (so we
 could say that openstack/python-novaclient inherits its PTL from
 openstack/nova and doesn't need a specific one). That would allow us to
 map anything that may come in. Oslo falls a bit in between, could be
 considered a 'theme' program or several repos sharing PTL.

I like the idea of chartered programs as a way of tackling our cross-project 
issues. We need fewer programs, but the ones we do keep still need to be 
integrated with the rest of our governance.

 
 ## The release and the development cycle
 
 You touch briefly on the consequences of your model for the common

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-22 Thread Doug Hellmann

On Sep 18, 2014, at 2:53 PM, Monty Taylor mord...@inaugust.com wrote:

 Hey all,
 
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108
 
 Enjoy.

I’ve read through this a few times now, and I think I can support most of it. 
It doesn’t address some of the issues we have, but most of the concrete 
proposals seem like they take us to incrementally better places than where we 
are now.

I definitely like the idea of making the integrated gate different for each 
project, based on the other projects it actually integrates with. I could see 
extending the two-project gate idea for projects outside of layer 1 to include 
more than 2 projects eventually.

The assumption that all layer 1 projects can depend on the other members of 
layer 1 being present may have ramifications for the trademark “core” 
definition. I don’t think those cause problems, based on the mechanisms for 
defining capabilities and designated sections being worked out now, but it’s 
worth pointing out as something we’ll need to keep in mind. The self-organizing 
groupings that Vish, John, and others have mentioned may well lead us to create 
additional trademarks in a more naturally evolving way than the single big mark 
we’re trying to squeeze everything into now.

Does the unified client SDK fit into layer 1 as one of the “common libraries … 
necessary for these”? Or do we anticipate the services still using their own 
individual libraries for talking to each other?

Having a quality designation will help distros and deployers. I’m not sure we 
want Cern specifically to be our arbiter of that quality, but I do like the 
idea of having users be involved in the determination. Maybe it's something the 
User Committee could help with in a more general way.

This proposal only addresses some of the challenges we have right now. If we 
maintain a big tent approach, and I think we should, we still need a way to 
implement cross-project policies and initiatives outside of the scope of any 
one of our existing programs.

I agree with Vish that we need a different name for Layer 1. A name that 
doesn’t imply “leveling” or “layering” would be good, since some of the 
cloud-native services don’t build on those layer 1 services.

Doug




Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't possibly be passing in neutron

2014-09-22 Thread Sean Dague
My general understanding is 2.6 support is deprecated for Juno, and fair
to be removed in Kilo.

So the killing it with fire should be able to happen soon.

-Sean

On 09/22/2014 02:52 PM, Armando M. wrote:
 I suspect that the very reason underlying the existence of this thread
 is that some users out there are not quite ready to pull the plug on
 Python 2.6.
 
 Any decision about stopping the support of Python 2.6 should not be
 based solely on making the developer's life easier, but maybe I am
 stating the obvious.
 
 Thanks,
 Armando
 
 On 22 September 2014 11:39, Solly Ross sr...@redhat.com wrote:
 I'm in favor of killing Python 2.6 with fire.
 Honestly, I think it's hurting code readability and productivity --

 You have to constantly think about whether or not some feature that
 the rest of the universe is already using is supported in Python 2.6
 whenever you write code.

 As for readability, things like 'contextlib.nested' can go away if we can
 kill Python 2.6 (Python 2.7 supports nested context managers OOTB, in a much
 more readable way).

 Best Regards,
 Solly

 - Original Message -
 From: Joshua Harlow harlo...@outlook.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, September 22, 2014 2:33:16 PM
 Subject: Re: [openstack-dev] [neutron] [infra] Python 2.6 tests can't 
 possibly be passing in neutron

 Just as an update to what exactly is RHEL python 2.6...

 This is the expanded source rpm:

 http://paste.ubuntu.com/8405074/

 The main one here appears to be:

 - python-2.6.6-ordereddict-backport.patch

 Full changelog @ http://paste.ubuntu.com/8405082/

 Overall I'd personally like to get rid of python 2.6, and move on, but then
 I'd also like to get rid of 2.7 and move on also ;)

 - Josh

 On Sep 22, 2014, at 11:17 AM, Monty Taylor mord...@inaugust.com wrote:

 On 09/22/2014 10:58 AM, Kevin L. Mitchell wrote:
 On Mon, 2014-09-22 at 10:32 -0700, Armando M. wrote:
 What about:

 https://github.com/openstack/neutron/blob/master/test-requirements.txt#L12

 Pulling in ordereddict doesn't do anything if your code doesn't use it
 when OrderedDict isn't in collections, which is the case here.  Further,
 there's no reason that _get_collection_kwargs() needs to use an
 OrderedDict: it's initialized in an arbitrary order (generator
 comprehension over a set), then later passed to functions with **, which
 converts it to a plain old dict.


 So - as an update to this, this is due to RedHat once again choosing to
 backport features from 2.7 into a thing they have labeled 2.6.

 We test 2.6 on Centos6 - which means we get RedHat's patched version of
 Python2.6 - which, it turns out, isn't really 2.6 - so while you might
 want to assume that we're testing 2.6 - we're not - we're testing
 2.6-as-it-appears-in-RHEL.

 This brings up a question - in what direction do we care/what's the
 point in the first place?

 Some points to ponder:

 - 2.6 is end of life - so the fact that this is coming up is silly, we
 should have stopped caring about it in OpenStack 2 years ago at least
 - Maybe we ACTUALLY only care about 2.6-on-RHEL - since that was the
 point of supporting it at all
 - Maybe we ACTUALLY care about 2.6 support across the board, in which
 case we should STOP testing using Centos6 which is not actually 2.6

 I vote for just amending our policy right now and killing 2.6 with
 prejudice.

 (also, I have heard a rumor that there are people running into problems
 due to the fact that they are deploying onto a two-release-old version
 of Debian. No offense - but there is no way we're supporting that)





 
 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [requirements] [nova] requirements freeze exception for websockify

2014-09-22 Thread Solly Ross
The reason it was bounded is that we (the websockify upstream maintainers)
made a backwards-incompatible change (for good reasons -- it brought websockify 
more inline with the Python standard library interfaces).
However, OpenStack had subclassed the WebSocketProxy code, and so the change 
would have broken OpenStack.

I did a commit a while ago that made it possible to use Nova with both the 
newest version and the older versions, but we never bumped the max version for 
OpenStack, even though we could.
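
[Editor's note: for readers unfamiliar with how such a cap is expressed,
the difference is a single specifier line in a requirements file. The
version numbers below are illustrative, not the actual websockify pins.]

```
websockify>=0.5,<0.6   # capped: shields subclassed code from the incompatible API
websockify>=0.5        # uncapped: assumes the code copes with newer releases
```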

Best Regards,
Solly

- Original Message -
 From: Doug Hellmann d...@doughellmann.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, September 19, 2014 1:54:08 PM
 Subject: Re: [openstack-dev] [requirements] [nova] requirements freeze
 exception for websockify
 
 On Sep 19, 2014, at 11:22 AM, Sean Dague s...@dague.net wrote:
 
  I'd like to request a requirements freeze exception for websockify -
  https://review.openstack.org/#/c/122702/
  
  The rationale for this is that websockify version bump fixes a Nova bug
  about zombie processes - https://bugs.launchpad.net/nova/+bug/1048703.
  It also sets g-r to the value we've been testing against for the entire
  last cycle.
  
  I don't believe it has any impacts on other projects, so should be a
  very safe change.
 
 Gantt, Ironic, and Nova all use websockify.
 
 I’m +1 on updating the minimum based on the fact that our current version
 spec is causing us to test with this version anyway.
 
 However, the proposed change also removes the upper bound. Do we know why
 that was bounded before? Have we had issues with API changes in that
 project? Is it safe to remove the cap?
 
 Doug
 
  
  -Sean
  
  --
  Sean Dague
  http://dague.net
  
 
 
 



Re: [openstack-dev] usability anti-pattern

2014-09-22 Thread Solly Ross
Monty, your messages are always super-entertaining to read.
They also generally have very good points, which is an added bonus :-P

- Original Message -
 From: Monty Taylor mord...@inaugust.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, September 19, 2014 9:52:46 PM
 Subject: [openstack-dev] usability anti-pattern
 
 Hey,
 
 Not to name names, but some of our client libs do this:
 
   client.Client(API_VERSION, os_username, ... )
 
 I'm pretty sure they got the idea from python-glanceclient, so I blame
 Brian Waldon, since he left us for CoreOS.
 
 PLEASE STOP DOING THIS - IT CAUSES BABIES TO CRY. MORE.
 
 As a developer, I have no way of knowing what to put here. Also, imagine
 I'm writing a script that wants to talk to more than one cloud to do
 things - like, say, nodepool for Infra, or an ansible openstack
 inventory module. NOW WHAT? What do I put??? How do I discover that?
 
 Let me make a suggestion...
 
 Default it to something. Make it an optional parameter for experts. THEN
 - when the client lib talks to keystone, check the service catalog for
 the API version.
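  
  [Editor's sketch of the pattern Monty is asking for: version optional,
  discovered when absent. All names here are hypothetical, not any real
  client library's API.]
  
  ```python
  DEFAULT_VERSION = '2'

  class Client(object):
      """Hypothetical client: the API version is an expert-only override."""

      def __init__(self, username, api_version=None, catalog=None):
          # `catalog` stands in for what the keystone service catalog
          # would return; fall back to a sane default instead of forcing
          # every caller to guess a magic number.
          if api_version is None:
              api_version = (catalog or {}).get('version', DEFAULT_VERSION)
          self.api_version = api_version

  assert Client('demo').api_version == '2'
  assert Client('demo', catalog={'version': '3'}).api_version == '3'
  ```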
 
 What's this you say? Sometimes your service doesn't expose a version in
 the keystone catalog?
 
 PLEASE STOP DOING THIS - IT CAUSES DOLPHINS TO WEEP
 
 If you have versioned APIs, put the version in keystone. Because
 otherwise, as as a developer have absolutely zero way to figure it out.
 
 Well, except for the algorithm jeblair suggested: just start with 11
 and count backwards until a number works
 
 This message brought to you by frustrated humans trying to use the cloud.
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - what integration with Keystone is allowed?

2014-09-22 Thread Kevin Benton
Right, I understand that. However, the point is that the tenant name is
being stored outside of Keystone and it doesn't ever appear to be updated.

I had proposed a spec to cache the tenant names for the Big Switch code and
it was declined because of the duplication of information.
On Sep 22, 2014 10:52 AM, Mohammad Banikazemi m...@us.ibm.com wrote:

 In the patch being referred to here and in the IBM controller, the project
 ID is the unique identifier used. The name is simply an additional piece of
 information that may perhaps be used for debugging. The back-end
 (controller) keeps a name not as a unique identifier but in addition to the
 unique identifier, which is the project ID. For all practical purposes, we
 could set the project name for all projects to Kevin Benton and nothing
 would change functionally.

 This should be obvious from the code and from how the project ID, not the
 name, is used in the plugin. Perhaps the commit message can specify this
 clearly to avoid any confusion.

 Best,

 Mohammad




 From: Dolph Mathews dolph.math...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 09/22/2014 10:53 AM
 Subject: Re: [openstack-dev] [Neutron] - what integration with Keystone
 is allowed?
 --




 On Sun, Sep 21, 2014 at 3:58 PM, Kevin Benton blak...@gmail.com wrote:

So based on those guidelines there would be a problem with the IBM
patch because it's storing the tenant name in a backend controller, right?


 It would need to be regarded as an expiring cache if Neutron chose to go
 that route. I'd wholly recommend against it though, because I don't see a
 strong use case to use names instead of IDs here (correct me if I'm wrong).


On Sep 21, 2014 12:18 PM, Dolph Mathews dolph.math...@gmail.com wrote:
   Querying keystone for tenant names is certainly fair game.

   Keystone should be considered the only source of truth for tenant
   names though, as they are mutable and not globally unique on their
   own, so other services should not stash any names from keystone into
   long term persistence (users, projects, domains, groups, etc-- roles
   might be an odd outlier worth a separate conversation if anyone is
   interested).

   Store IDs where necessary, and use IDs on the wire where possible
   though, as they are immutable.

   On Sat, Sep 20, 2014 at 7:46 PM, Kevin Benton blak...@gmail.com wrote:
  Hello all,

  A patch has come up to query keystone for tenant names in the IBM
  plugin.[1] As I understand it, this was one of the reasons
  another
  mechanism driver was reverted.[2] Can we get some clarity on the
  level
  of integration with Keystone that is permitted?

  Thanks

  1. https://review.openstack.org/#/c/122382
  2. https://review.openstack.org/#/c/118456

  --
  Kevin Benton



[openstack-dev] [Infra] PTL Candidacy

2014-09-22 Thread James E. Blair
I would like to announce my candidacy for the Infrastructure PTL.

I have developed and operated the project infrastructure for several
years and have been honored to serve as the PTL for the Juno cycle.

I was instrumental not only in creating the project gating system and
development process, but also in scaling it from three projects to 400.

During the Juno cycle, we have just started on a real effort to make the
project infrastructure consumable in its own right.  There is a lot of
interest from downstream consumers of our tools but our infrastructure
is not set up for that kind of re-use.  We're slowly changing that so
that people who run infrastructure systems similar to ours can
contribute back upstream just like any other OpenStack project.

I am anticipating a number of related changes to the OpenStack project:
the further acceptance of a Big Tent, and changes to the gating
structure to accommodate it.  I believe that the changes to our testing
methodology, including a smaller integrated gate and more functional
testing, which we outlined at the QA/Infra sprint, fit right into that.
These are multi-release efforts, and I am looking forward to continuing
them in Kilo.

All of these efforts mean a lot of new people working on a lot of new
areas of the Infrastructure program in parallel.  A big part of the work
in the next cycle will be helping to coordinate those efforts and make
the Infrastructure program a little less monolithic to support all of
this work.

I am thrilled to be a part of one of the most open free software project
infrastructures, and I would very much like to continue to serve as its
PTL.

Thanks,

Jim



Re: [openstack-dev] [keystone] Stepping down as PTL

2014-09-22 Thread Morgan Fainberg

-Original Message-
From: Dolph Mathews dolph.math...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 22, 2014 at 07:50:42
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [keystone] Stepping down as PTL

 Dearest stackers and [key]stoners,
  
 With the PTL candidacies officially open for Kilo, I'm going to take the
 opportunity to announce that I won't be running again for the position.
  
 I thoroughly enjoyed my time as PTL during Havana, Icehouse and Juno. There
 was a perceived increase in stability [citation needed], which was one of
 my foremost goals. We primarily achieved that by improving the
 communication between developers which allowed developers to share their
 intent early and often (by way of API designs and specs). As a result, we
 had a lot more collaboration and a great working knowledge in the community
 when it came time for bug fixes. I also think we raised the bar for user
 experience, especially by way of reasonable defaults, strong documentation,
 and effective error messages. I'm consistently told that we have the best
 out-of-the-box experience of any OpenStack service. Well done!
  
 I'll still be involved in OpenStack, and I'm super confident in our
 incredibly strong core team of reviewers on Keystone. I thoroughly enjoy
 helping other developers be as productive as possible, and intend to
 continue doing exactly that.
  
 Keep hacking responsibly,
  
 -Dolph

Thanks for bringing us this far! Your leadership has been nothing short of 
fantastic.

Cheers!


