Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Michael Chapman
On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/19/2014 11:28 PM, Robert Collins wrote:

 On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
 ...

  I'd like to see more unification of implementations in TripleO - but I
 still believe our basic principle of using OpenStack technologies that
 already exist in preference to third party ones is still sound, and
 offers substantial dogfood and virtuous circle benefits.



 No doubt Triple-O serves a valuable dogfood and virtuous cycle purpose.
 However, I would move that the Deployment Program should welcome the many
 projects currently in the stackforge/ code namespace that do deployment of
 OpenStack using traditional configuration management tools like Chef,
 Puppet, and Ansible. It cannot be argued that these configuration management
 systems are the de-facto way that OpenStack is deployed outside of HP, and
 they belong in the Deployment Program, IMO.


 I think you mean it 'can be argued'... ;).


 No, I definitely mean cannot be argued :) HP is the only company I know
 of that is deploying OpenStack using Triple-O. The vast majority of
 deployers I know of are deploying OpenStack using configuration management
 platforms and various systems or glue code for baremetal provisioning.

 Note that I am not saying that Triple-O is bad in any way! I'm only saying
 that it does not represent the way that the majority of real-world
 deployments are done.


  And I'd be happy if folk in
 those communities want to join in the deployment program and have code
 repositories in openstack/. To date, none have asked.


 My point in this thread has been and continues to be that by having the TC
 bless a certain project as "The OpenStack Way of X", we implicitly are
 saying to other valid alternatives "Sorry, no need to apply here."


  As a TC member, I would welcome someone from the Chef community proposing
 the Chef cookbooks for inclusion in the Deployment program, to live under
 the openstack/ code namespace. Same for the Puppet modules.


 While you may personally welcome the Chef community to propose joining the
 deployment Program and living under the openstack/ code namespace, I'm just
 saying that the impression our governance model and policies create is one
 of exclusion, not inclusion. Hope that clarifies better what I've been
 getting at.



(As one of the core reviewers for the Puppet modules)

Without a standardised package build process it's quite difficult to test
trunk Puppet modules vs trunk official projects. This means we cut release
branches some time after the projects themselves to give people a chance to
test. Until this changes and the modules can be released with the same
cadence as the integrated release I believe they should remain on
Stackforge.

In addition, and perhaps as a consequence, there isn't any public
integration testing at this time for the modules, although I know some
parties have developed and maintain their own.

The Chef modules may be in a different state, but it's hard for me to
recommend the Puppet modules become part of an official program at this
stage.



 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?

2014-08-22 Thread Steve Kowalik

Hi,

	TripleO has a bridging script we use to register nodes with a baremetal 
service (eg: Ironic or Nova-bm), called register-nodes, which given a 
list of node descriptions (in JSON), will register them with the 
appropriate baremetal service.


	At the moment, if you run register-nodes a second time with the same 
list of nodes, it will happily try to register them and then blow up 
when Ironic or Nova-bm returns an error. If operators are going to 
update their master list of nodes to add or remove machines and then run 
register-nodes again, we need a way to skip nodes that are already 
registered -- except that I don't really want to extract out the UUIDs 
of the registered nodes, because that puts an onus on the operators to 
make sure the UUID is listed in the master list, and that would mean 
requiring manual data entry, or some way to get that data back out of 
the tool they use to manage their master list, which may not even have 
an API. Because this is intended as a bridge between an operator's 
master list and a baremetal service, it needs to run again and again 
when changes happen.


	This means we need a way to uniquely identify the machines in the list 
so we can tell if they are already registered.


For the pxe_ssh driver, this means the set of MAC addresses must 
intersect.

	For other drivers, we think that the pm_address for each machine will 
be unique. Would it be possible to add some advice to that effect to 
Ironic's driver API?
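
To make the matching rule concrete, here is a minimal Python sketch of the
duplicate detection described above. The function name and the
node-description fields (driver, macs, pm_addr) are hypothetical, not
actual register-nodes code:

    def find_registered(node, registered_nodes):
        # For pxe_ssh, any overlap in MAC addresses counts as a match;
        # for other drivers, pm_address is assumed to be unique.
        for reg in registered_nodes:
            if node['driver'] == 'pxe_ssh':
                if set(node['macs']) & set(reg['macs']):
                    return reg
            elif node.get('pm_addr') and node['pm_addr'] == reg.get('pm_addr'):
                return reg
        return None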


Cheers,
--
Steve
Stop breathing down my neck!
My breathing is merely a simulation.
So is my neck! Stop it anyway.
 - EMH vs EMH, USS Prometheus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat] Murano split discussion

2014-08-22 Thread Clint Byrum
Excerpts from Angus Salkeld's message of 2014-08-21 20:14:12 -0700:
 On Fri, Aug 22, 2014 at 12:34 PM, Clint Byrum cl...@fewbar.com wrote:
 
  Excerpts from Georgy Okrokvertskhov's message of 2014-08-20 13:14:28 -0700:
   During the last Atlanta summit there were a couple of discussions about
   Application Catalog and Application space projects in OpenStack. These
   cross-project discussions occurred as a result of the Murano incubation
   request [1] during the Icehouse cycle. At the TC meeting devoted to
   Murano incubation there was an idea about splitting Murano into parts
   which might belong to different programs [2].
  
  
   Today, I would like to initiate a discussion about potential splitting of
   Murano between two or three programs.
  
  
   *App Catalog API to Catalog Program*
  
   Application Catalog part can belong to Catalog program, the package
   repository will move to artifacts repository part where Murano team
   already participates. API part of App Catalog will add a thin layer of
   API methods specific to Murano applications and potentially can be
   implemented as a plugin to artifacts repository. Also this API layer
   will expose other 3rd party systems API like CloudFoundry ServiceBroker
   API which is used by CloudFoundry marketplace feature to provide an
   integration layer between OpenStack Application packages and 3rd party
   PaaS tools.
  
  
 
  I thought this was basically already agreed upon, and that Glance was
  just growing the ability to store more than just images.
 
  
   *Murano Engine to Orchestration Program*
  
   Murano engine orchestrates the Heat template generation. Complementary
   to a Heat declarative approach, Murano engine uses imperative approach
   so that it is possible to control the whole flow of the template
   generation. The engine uses Heat updates to update Heat templates to
   reflect changes in applications layout. Murano engine has a concept of
   actions - special flows which can be called at any time after
   application deployment to change application parameters or update
   stacks. The engine is actually complementary to Heat engine and adds
   the following:
  
  
   - orchestrate multiple Heat stacks - DR deployments, HA setups,
     multiple datacenters deployment
 
  These sound like features already requested directly in Heat.
 
  - Initiate and controls stack updates on application specific events
 
  Sounds like workflow. :)
 
   - Error handling and self-healing - being imperative Murano allows you
     to handle issues and implement additional logic around error handling
     and self-healing.
 
  Also sounds like workflow.
 
  
 
 
  I think we need to re-think what a program is before we consider this.
 
  I really don't know much about Murano. I have no interest in it at
 
 
 get off my lawn;)
 

And turn down that music!

Sorry for the fist shaking, but I want to highlight that I'm happy to
consider it, just not with programs working the way they do now.

 http://stackalytics.com/?project_type=all&module=murano-group
 
 HP seems to be involved, you should check it out.
 

HP is involved in a lot of OpenStack things. It's a bit hard for me to
keep my eyes on everything we do. Good to know that others have been able
to take some time and buy into it a bit. +1 for distributing the load. :)

   all, and nobody has come to me saying "If we only had Murano in our
   orchestration toolbox, we'd solve xxx". But making them part of the
 
 I thought you were saying that opsworks was neat the other day?
 Murano from what I understand was partly inspired from opsworks, yes
 it's a layer up, but still really the same field.


I was saying that OpsWorks is reportedly popular, yes. I did not make
the connection at all from OpsWorks to Murano, and nobody had pointed
that out to me until now.

  Orchestration program would imply that we'll do design sessions together,
  that we'll share the same mission statement, and that we'll have just
 
 
 This is exactly what I hope will happen.
 

Which sessions from last summit would we want to give up to make room
for the Murano-only focused sessions? How much time in our IRC meeting
should we give to Murano-only concerns?

Forgive me for being harsh. We have a cloud to deploy using Heat,
and it is taking far too long to get Heat to do that in an acceptable
manner already. Adding load to our PTL and increasing the burden on our
communication channels doesn't really seem like something that will
increase our velocity. I could be dead wrong though; Murano could be
exactly what we need. I just don't see it, and I'm sorry to be so direct
about saying that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-22 Thread Radomir Dopieralski
On 21/08/14 18:05, Matthew Booth wrote:

[snip]

 This seems to mean different things to different people. There's a list
 here which contains some criteria for new commits:

[snip]


 Any more of these?

There is also https://wiki.openstack.org/wiki/CodeReviewGuidelines

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Clint Byrum
Excerpts from Michael Chapman's message of 2014-08-21 23:30:44 -0700:
 On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com wrote:
 
  On 08/19/2014 11:28 PM, Robert Collins wrote:
 
  On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com wrote:
  ...
 
   I'd like to see more unification of implementations in TripleO - but I
  still believe our basic principle of using OpenStack technologies that
  already exist in preference to third party ones is still sound, and
  offers substantial dogfood and virtuous circle benefits.
 
 
 
   No doubt Triple-O serves a valuable dogfood and virtuous cycle purpose.
   However, I would move that the Deployment Program should welcome the many
   projects currently in the stackforge/ code namespace that do deployment of
   OpenStack using traditional configuration management tools like Chef,
   Puppet, and Ansible. It cannot be argued that these configuration management
   systems are the de-facto way that OpenStack is deployed outside of HP, and
   they belong in the Deployment Program, IMO.
 
 
  I think you mean it 'can be argued'... ;).
 
 
  No, I definitely mean cannot be argued :) HP is the only company I know
  of that is deploying OpenStack using Triple-O. The vast majority of
  deployers I know of are deploying OpenStack using configuration management
  platforms and various systems or glue code for baremetal provisioning.
 
  Note that I am not saying that Triple-O is bad in any way! I'm only saying
  that it does not represent the way that the majority of real-world
  deployments are done.
 
 
    And I'd be happy if folk in
   those communities want to join in the deployment program and have code
   repositories in openstack/. To date, none have asked.
 
 
   My point in this thread has been and continues to be that by having the TC
   bless a certain project as "The OpenStack Way of X", we implicitly are
   saying to other valid alternatives "Sorry, no need to apply here."
 
 
   As a TC member, I would welcome someone from the Chef community proposing
  the Chef cookbooks for inclusion in the Deployment program, to live under
  the openstack/ code namespace. Same for the Puppet modules.
 
 
  While you may personally welcome the Chef community to propose joining the
  deployment Program and living under the openstack/ code namespace, I'm just
  saying that the impression our governance model and policies create is one
  of exclusion, not inclusion. Hope that clarifies better what I've been
  getting at.
 
 
 
 (As one of the core reviewers for the Puppet modules)
 
 Without a standardised package build process it's quite difficult to test
 trunk Puppet modules vs trunk official projects. This means we cut release
 branches some time after the projects themselves to give people a chance to
 test. Until this changes and the modules can be released with the same
 cadence as the integrated release I believe they should remain on
 Stackforge.
 

Seems like the distros that build the packages are all doing lots of
daily-build type stuff that could somehow be leveraged to get over that.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-22 Thread Nejc Saje

But I'm not sure about the real problem with moving the modules. My understanding is
- the ceilometer package has a dependency on ceilometerclient so it is easy to
  move them
- all callers of the moved modules must change their import paths.


The modules you are talking about are part of Ceilometer's core 
functionality; we can't move them to a completely separate code-tree 
that is meant only for client functionality.


Besides the conceptual difference, python-ceilometerclient is not 
tightly coupled with Ceilometer and has its own release schedule among 
other things.


Regards,
Nejc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-22 Thread Jesse Pretorius
On 21 August 2014 18:47, Sergii Golovatiuk sgolovat...@mirantis.com wrote:

 I think 15 minutes is not too bad. Additionally, it will reduce download
 time and the price of bandwidth. It's worth leaving lrzip in place for
 customers, as an upgrade is a one-time operation, so the user can wait for
 a while. For development it would be nice to have the fastest solution to
 boost development time.


I would agree. Perhaps for 6 an option can be made to allow the Fuel master
to use package repositories instead of an upgrade file - and the option can
be used for development and production?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?

2014-08-22 Thread Chris Jones
Hi

When register-nodes blows up, is the error we get from Ironic sufficiently 
unique that we can just consume it and move on?

I'm all for making the API more powerful wrt inspecting the current setup, but 
I also like idempotency :)

Cheers,
--
Chris Jones

 On 22 Aug 2014, at 07:32, Steve Kowalik ste...@wedontsleep.org wrote:
 
 Hi,
 
TripleO has a bridging script we use to register nodes with a baremetal 
 service (eg: Ironic or Nova-bm), called register-nodes, which given a list 
 of node descriptions (in JSON), will register them with the appropriate 
 baremetal service.
 
 At the moment, if you run register-nodes a second time with the same list 
 of nodes, it will happily try to register them and then blow up when Ironic 
 or Nova-bm returns an error. If operators are going to update their master 
 list of nodes to add or remove machines and then run register-nodes again, we 
 need a way to skip nodes that are already registered -- except that I don't 
 really want to extract out the UUIDs of the registered nodes, because that 
 puts an onus on the operators to make sure the UUID is listed in the 
 master list, and that would mean requiring manual data entry, or some way 
 to get that data back out of the tool they use to manage their master list, 
 which may not even have an API. Because this is intended as a bridge between 
 an operator's master list and a baremetal service, it needs to run again 
 and again when changes happen.
 
This means we need a way to uniquely identify the machines in the list so 
 we can tell if they are already registered.
 
For the pxe_ssh driver, this means the set of MAC addresses must intersect.
 
 For other drivers, we think that the pm_address for each machine will be 
 unique. Would it be possible to add some advice to that effect to Ironic's 
 driver API?
 
 Cheers,
 -- 
Steve
 Stop breathing down my neck!
 My breathing is merely a simulation.
 So is my neck! Stop it anyway.
 - EMH vs EMH, USS Prometheus
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat] Murano split discussion

2014-08-22 Thread Flavio Percoco
On 08/20/2014 10:14 PM, Georgy Okrokvertskhov wrote:
 During the last Atlanta summit there were a couple of discussions about
 Application Catalog and Application space projects in OpenStack. These
 cross-project discussions occurred as a result of the Murano incubation
 request [1] during the Icehouse cycle. At the TC meeting devoted to Murano
 incubation there was an idea about splitting Murano into parts which
 might belong to different programs [2].
 
 
 Today, I would like to initiate a discussion about potential splitting
 of Murano between two or three programs.
 
 
 *App Catalog API to Catalog Program*
 
 Application Catalog part can belong to Catalog program, the package
 repository will move to artifacts repository part where Murano team
 already participates. API part of App Catalog will add a thin layer of
 API methods specific to Murano applications and potentially can be
 implemented as a plugin to artifacts repository. Also this API layer
 will expose other 3rd party systems API like CloudFoundry ServiceBroker
 API which is used by CloudFoundry marketplace feature to provide an
 integration layer between OpenStack Application packages and 3rd party
 PaaS tools.
 
 

Makes sense to me!

Is this just going to consume the artifacts API? Or will it require some
changes to it?


Flavio

 
 *Murano Engine to Orchestration Program*
 
 Murano engine orchestrates the Heat template generation. Complementary
 to a Heat declarative approach, Murano engine uses imperative approach
 so that it is possible to control the whole flow of the template
 generation. The engine uses Heat updates to update Heat templates to
 reflect changes in applications layout. Murano engine has a concept of
 actions - special flows which can be called at any time after
 application deployment to change application parameters or update
 stacks. The engine is actually complementary to Heat engine and adds the
 following:
 
   * orchestrate multiple Heat stacks - DR deployments, HA setups,
 multiple datacenters deployment
   * Initiate and controls stack updates on application specific events
   * Error handling and self-healing - being imperative Murano allows you
 to handle issues and implement additional logic around error
 handling and self-healing.
 
 
 
 *Murano UI to Dashboard Program*
 
 Application Catalog requires a UI focused on user experience. Currently
 there is a Horizon plugin for Murano App Catalog which adds an Application
 Catalog page to browse, search and filter applications. It also adds
 dynamic UI functionality to render Horizon forms without writing
 actual code.
 
 
 
 
 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/027736.html
 
 [2]
 http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-03-04-20.02.log.txt
 
 
 
 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?

2014-08-22 Thread Steve Kowalik

On 22/08/14 17:35, Chris Jones wrote:

Hi

When register-nodes blows up, is the error we get from Ironic sufficiently 
unique that we can just consume it and move on?

I'm all for making the API more powerful wrt inspecting the current setup, but 
I also like idempotency :)


If the master nodes list changes, because, say, you add a second NIC and 
up the amount of RAM for a few of your nodes, we then want to update 
those details in the baremetal service, rather than skipping those nodes 
since they are already registered.
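
Building on the matching sketch earlier in this digest, a register-or-update
flow along these lines would cover both cases; the client wrapper and its
register/update methods are hypothetical:

    def register_or_update(node, registered_nodes, client):
        existing = find_registered(node, registered_nodes)
        if existing is None:
            client.register(node)
            return
        # Push only the details (RAM, NICs, power address) that changed
        # in the operator's master list since first registration.
        changed = dict((k, v) for k, v in node.items()
                       if existing.get(k) != v)
        if changed:
            client.update(existing['uuid'], changed)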



Cheers,
--
Chris Jones


On 22 Aug 2014, at 07:32, Steve Kowalik ste...@wedontsleep.org wrote:

Hi,

TripleO has a bridging script we use to register nodes with a baremetal service (eg: 
Ironic or Nova-bm), called register-nodes, which given a list of node 
descriptions (in JSON), will register them with the appropriate baremetal service.

At the moment, if you run register-nodes a second time with the same list 
of nodes, it will happily try to register them and then blow up when Ironic or 
Nova-bm returns an error. If operators are going to update their master list of 
nodes to add or remove machines and then run register-nodes again, we need a 
way to skip nodes that are already registered -- except that I don't really 
want to extract out the UUIDs of the registered nodes, because that puts an onus 
on the operators to make sure the UUID is listed in the master list, and 
that would mean requiring manual data entry, or some way to get that data 
back out of the tool they use to manage their master list, which may not even 
have an API. Because this is intended as a bridge between an operator's master 
list and a baremetal service, it needs to run again and again 
when changes happen.

This means we need a way to uniquely identify the machines in the list so 
we can tell if they are already registered.

For the pxe_ssh driver, this means the set of MAC addresses must intersect.

For other drivers, we think that the pm_address for each machine will be 
unique. Would it be possible to add some advice to that effect to Ironic's 
driver API?

Cheers,
--
Steve
Stop breathing down my neck!
My breathing is merely a simulation.
So is my neck! Stop it anyway.
 - EMH vs EMH, USS Prometheus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Steve
In the beginning was the word, and the word was content-type: text/plain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?

2014-08-22 Thread Chris Jones
Hi

Nice, sounds like a very good thing from an operator's perspective. 

Cheers,
--
Chris Jones

 On 22 Aug 2014, at 08:44, Steve Kowalik ste...@wedontsleep.org wrote:
 
 On 22/08/14 17:35, Chris Jones wrote:
 Hi
 
 When register-nodes blows up, is the error we get from Ironic sufficiently 
 unique that we can just consume it and move on?
 
 I'm all for making the API more powerful wrt inspecting the current setup, 
 but I also like idempotency :)
 
 If the master nodes list changes, because say you add a second NIC, and up 
 the amount of RAM for a few of your nodes, we then want to update those 
 details in the baremetal service, rather than skipping those nodes since they 
 are already registered.
 
 Cheers,
 --
 Chris Jones
 
 On 22 Aug 2014, at 07:32, Steve Kowalik ste...@wedontsleep.org wrote:
 
 Hi,
 
TripleO has a bridging script we use to register nodes with a baremetal 
 service (eg: Ironic or Nova-bm), called register-nodes, which given a 
 list of node descriptions (in JSON), will register them with the 
 appropriate baremetal service.
 
 At the moment, if you run register-nodes a second time with the same 
 list of nodes, it will happily try to register them and then blow up when 
 Ironic or Nova-bm returns an error. If operators are going to update their 
 master list of nodes to add or remove machines and then run register-nodes 
 again, we need a way to skip nodes that are already registered -- except 
 that I don't really want to extract out the UUIDs of the registered nodes, 
 because that puts an onus on the operators to make sure the UUID is 
 listed in the master list, and that would mean requiring manual data 
 entry, or some way to get that data back out of the tool they use to manage 
 their master list, which may not even have an API. Because this is intended 
 as a bridge between an operator's master list and a baremetal service, 
 it needs to run again and again when changes happen.
 
This means we need a way to uniquely identify the machines in the list 
 so we can tell if they are already registered.
 
For the pxe_ssh driver, this means the set of MAC addresses must 
 intersect.
 
 For other drivers, we think that the pm_address for each machine will 
 be unique. Would it be possible to add some advice to that effect to 
 Ironic's driver API?
 
 Cheers,
 --
Steve
 Stop breathing down my neck!
 My breathing is merely a simulation.
 So is my neck! Stop it anyway.
 - EMH vs EMH, USS Prometheus
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 -- 
Steve
 In the beginning was the word, and the word was content-type: text/plain
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promisc mode adapters

2014-08-22 Thread Andreas Scheuring
Thanks for your feedback. 

No, I do not yet have code for it. Just wanted to get a feeling if such
a feature would get acceptance in the community. 
But if that helps I can sit down and start some prototyping while I'm
preparing a blueprint spec in parallel. 

The main part of the implementation I wanted to do on my own to get more
familiar with the code base and to get more in touch with the community.
But of course advice and feedback of experienced neutron developers is
essential!

So I will proceed like this
- Create a blueprint
- Commit first pieces of code to get early feedback (e.g. ask via the
mailing list or irc)
- Upload a spec (as soon as the repo is available for K)

Does that make sense for you?

Thanks,
Andreas



On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
 I think this sounds reasonable. Do you have code for this already, or
 are you looking for a developer to help implement it?
 
 
 On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring
 scheu...@linux.vnet.ibm.com wrote:
 Hi,
 last week I started discussing an extension to the existing
 neutron
 openvswitch agent to support network adapters that are not in
 promiscuous mode. Now I would like to enhance the round to get
 feedback
 from a broader audience via the mailing list.
 
 
 The Problem
  When driving vlan or flat networking, openvswitch requires a
  network adapter in promiscuous mode.
 
 
  Why not have promiscuous mode in your adapter?
 - Admins like to have full control over their environment and
 which
 network packets enter the system.
 - The network adapter just does not have support for it.
 
 
 What to do?
  Linux net-dev drivers offer an interface to manually register
  additional mac addresses (also called secondary unicast addresses).
  Exploiting this, one can register additional mac addresses to the
  network adapter. This also works via a well-known ip user space tool:
  
  `bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`
 
 
 What to do in openstack?
  As neutron is aware of all the mac addresses that are in use, it's the
  perfect candidate for doing the mac registrations. The idea is to modify
  the neutron openvswitch agent so that it does the registration on port
  add and port remove via the bridge command.
  There would be a new optional configuration parameter, something like
  'non-promisc-mode', that is set to false by default. Only when it is
  set to true do macs get manually registered. Otherwise the agent
  behaves like it does today. So I guess only very small changes to the
  agent code are required. From my current point of view we do not need
  any changes to the ml2 plug-in.
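
  To make the proposed agent change concrete, a minimal Python sketch of
  the port add/remove hooks; the function names and how the port's MAC
  reaches them are assumptions, only the `bridge fdb` calls mirror the
  proposal above:

      import subprocess

      def _fdb(action, mac, dev):
          # `bridge fdb add|del <mac> dev <dev>` registers/unregisters
          # a secondary unicast address on the physical adapter.
          subprocess.check_call(['bridge', 'fdb', action, mac, 'dev', dev])

      def on_port_added(port_mac, phys_dev, non_promisc_mode=False):
          if non_promisc_mode:
              _fdb('add', port_mac, phys_dev)

      def on_port_removed(port_mac, phys_dev, non_promisc_mode=False):
          if non_promisc_mode:
              _fdb('del', port_mac, phys_dev)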
 
 
 Blueprint or a bug?
 I guess it's a blueprint.
 
 What's the timeframe?
 K would be great.
 
 
 
 I would be thankful for any feedback on this! Feel free to
 contact me
 anytime. Thanks in advance!
 
 Regards,
 Andreas
 
 (irc: scheuran)
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Kevin Benton
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-22 Thread Matthew Mosesohn
Dmitry, we already use lrzuntar in deploying Docker containers. Use a lower
compression ratio and it will decompress faster on virtual envs; it takes
under two minutes on my virtual env.

Compress:
https://github.com/stackforge/fuel-main/blob/master/docker/module.mk#L27

Decompress:
https://github.com/stackforge/fuel-main/blob/master/iso/bootstrap_admin_node.docker.sh#L63
On Aug 21, 2014 5:54 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Fuelers,

 Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce its size by
 2Gb with the lrzip tool (ticket https://bugs.launchpad.net/fuel/+bug/1356813,
 change in build system https://review.openstack.org/#/c/114201/, change
 in docs https://review.openstack.org/#/c/115331/), but it will
 dramatically increase unpacking time. I've run unpack on my virtualbox
 environment and got this result:
 [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
 Decompressing...
 100%7637.48 /   7637.48 MB
 Average DeCompression Speed:  8.014MB/s
 [OK] - 8008478720 bytes
 Total time: 00:15:52.93

 My suggestion is to reject this change, release 5.1 with the big tarball, and
 find another solution in the next release. Any objections?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-22 Thread Baohua Yang
+1
The number of agents should be strictly limited.


On Thu, Aug 21, 2014 at 8:56 AM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 20, 2014 at 7:03 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 As the original thread had a completely different subject, I'm starting a
 new one here.

 More specifically the aim of this thread is about:
 1) Define when a service is best implemented with a service plugin or
 with a ML2 driver
 2) Discuss how bindings between a core resource and the one provided by
 the service plugin should be exposed at the management plane, implemented
 at the control plane, and if necessary also at the data plane.

 Some more comments inline.

 Salvatore


  When a port is created, and it has Qos enforcement thanks to the service
  plugin, let's assume that a ML2 Qos Mech Driver can fetch Qos info and
  send them back to the L2 agent.
  We would probably need a Qos Agent which communicates with the plugin
  through a dedicated topic.


  A distinct agent has pros and cons. I think however that we should try and
  limit the number of agents on the hosts to a minimum. And this minimum in
  my opinion should be 1! There is already a proposal around a modular agent
  which should be capable of loading modules for handling distinct services.
  I think that's the best way forward.



 +1
  A consolidated modular agent can greatly reduce RPC communication with the
  plugin, and redundant code. If we can't merge it into a single Neutron
  agent now, we can at least merge into two agents: a modular L2 agent and
  a modular L3+ agent.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-22 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 04:52:37PM -0500, Dolph Mathews wrote:
 On Thu, Aug 21, 2014 at 11:53 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
 
  On Thu, Aug 21, 2014 at 11:34:48AM -0500, Dolph Mathews wrote:
   On Thu, Aug 21, 2014 at 11:21 AM, Daniel P. Berrange 
  berra...@redhat.com
   wrote:
  
On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.

 i.e. The project is better off without it.
   
A bit off topic, but I've never liked this message that gets added,
as I think it sounds overly negative. It would be better written
as
   
  This patch needs further work before it can be merged
   
  
   ++ This patch needs further work before it can be merged, and as a
   reviewer, I am either too lazy or just unwilling to checkout your patch
  and
   fix those issues myself.
 
 
  I find the suggestion that reviewers are either too lazy or unwilling
  to fix it themselves rather distasteful to be honest.
 
 
  I should have followed the above with a gentle sprinkling of /sarcasm and
  /dogfooding.

Ah, ok that explains it! The perils of communication via email :-)

[snip]

  I'd only recommend fixing & resubmitting someone else's patch if it is
  a really trivial thing that needed tweaking before approval for merge,
  or if they are known to be away for a prolonged time and the patch was
  blocking other important pending work.
 
 
 This is a great general rule. But with enough code reviews, there will be
 exceptions!

Of course. I'd always call these guidelines rather than rules since we
always want to retain the flexibility to ignore guidelines in exceptional
cases where they are counterproductive :-)

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Adding GateFailureFix tag to commit messages

2014-08-22 Thread Daniel P. Berrange
On Thu, Aug 21, 2014 at 09:02:17AM -0700, Armando M. wrote:
 Hi folks,
 
 According to [1], we have ways to introduce external references to commit
 messages.
 
 These are useful to mark certain patches and their relevance in the context
 of documentation, upgrades, etc.
 
 I was wondering if it would be useful considering the addition of another
 tag:
 
 GateFailureFix
 
 The objective of this tag, mainly for consumption by the review team, would
 be to make sure that some patches get more attention than others, as they
 affect the velocity of how certain critical issues are addressed (and gate
 failures affect everyone).
 
 As for machine consumption, I know that some projects use the
 'gate-failure' tag to categorize LP bugs that affect the gate. The use of a
 GateFailureFix tag in the commit message could make the tagging automatic,
 so that we can keep a log of what all the gate failures are over time.
 
 Not sure if this was proposed before, and I welcome any input on the matter.

We've tried a number of different tags in git commit messages before, in
an attempt to help prioritization of reviews, and unfortunately none of them
have been particularly successful so far. I think a key reason for this
is that tags in the commit message are invisible when people are looking at
lists of possible changes to choose for review. Whether in the gerrit web
UI reports / dashboards or in command line tools like my own gerrymander,
reviewers are looking at lists of changes and primarily choosing which
to review based on the subject line, or other explicitly recorded metadata
fields. You won't typically look at the commit message until you've already
decided you want to review the change. So while GateFailureFix may cause
me to pay more attention during the review of it, it probably won't make
me start review any sooner.
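
To make the proposal concrete, the tag would presumably sit in the commit
message footer alongside the existing external references Armando mentions;
the bug number and summary here are hypothetical:

    Fix intermittent DHCP agent failure in gate runs

    <description of the change>

    GateFailureFix
    Closes-Bug: #1234567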

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][DevStack] How to increase developer usage of Neutron

2014-08-22 Thread Baohua Yang
In my experience, RDO should be the most reliable way to do the
deployment.
Also, there are some more detailed installation scripts, like
https://github.com/ChaimaGhribi/OpenStack-Icehouse-Installation/blob/master/OpenStack-Icehouse-Installation.rst.

Still, I think, as a developer, it would be nice to have a deeper
understanding of the underlying implementation.
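
For the kind of recipe Mike asks for below, a minimal DevStack local.conf
sketch that swaps nova-network for Neutron; the service names match
DevStack of this era, but treat the exact values as assumptions to adapt,
not a verified recipe:

    [[local|localrc]]
    HOST_IP=10.9.8.7
    disable_service n-net
    enable_service q-svc q-agt q-dhcp q-l3 q-meta
    FIXED_RANGE=10.0.0.0/24
    FLOATING_RANGE=172.24.4.0/24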


On Thu, Aug 14, 2014 at 9:25 PM, Mike Spreitzer mspre...@us.ibm.com wrote:

 I'll bet I am not the only developer who is not highly competent with
 bridges and tunnels, Open VSwitch, Neutron configuration, and how DevStack
 transmutes all those.  My bet is that you would have more developers using
 Neutron if there were an easy-to-find and easy-to-follow recipe to use, to
 create a developer install of OpenStack with Neutron.  One that's a pretty
 basic and easy case.  Let's say a developer gets a recent image of Ubuntu
 14.04 from Canonical, and creates an instance in some undercloud, and that
 instance has just one NIC, at 10.9.8.7/16.  If there were a recipe for
 such a developer to follow from that point on, it would be great.

 Regards,
 Mike
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-22 Thread Dina Belova
Gordon, exactly. Moreover, I think the percentage of failures is something
close to 15-20% of all the runs - sometimes it passes for a change,
and the next run on exactly the same commit will fail for some
reason.

Thanks
Dina


On Thu, Aug 21, 2014 at 10:46 PM, gordon chung g...@live.ca wrote:

 is it possible that it's not on all the nodes?

 seems like it passed here: https://review.openstack.org/#/c/109207/ but
 another patch at roughly the same time failed
 https://review.openstack.org/#/c/113549/

 cheers,
 *gord*

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-22 Thread Mike Scherbakov
 I think 15 minutes is not too bad. Additionally, it will reduce download
time and price for bandwidth.
+1

But let's see if we can find some other solution still in 5.1 (hardlinks,
whatever else), and we obviously need to seriously address it in the next
release.

 Perhaps for 6 an option can be made to allow the Fuel master to use
package repositories instead of an upgrade file - and the option can be
used for development and production?
Jesse, could you please clarify this? Do you mean to use a remote repository
with packages, instead of tarballing everything into a single bundle?



On Fri, Aug 22, 2014 at 12:39 PM, Matthew Mosesohn mmoses...@mirantis.com
wrote:

 Dmitry, we already use lrzuntar in deploying Docker containers. Use a
 lower compression ratio and it will decompress faster on virtual envs;
 it takes under two minutes on my virtual env.

 Compress:
 https://github.com/stackforge/fuel-main/blob/master/docker/module.mk#L27

 Decompress:
 https://github.com/stackforge/fuel-main/blob/master/iso/bootstrap_admin_node.docker.sh#L63
 On Aug 21, 2014 5:54 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Fuelers,

 Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce its size by
 2Gb with the lrzip tool (ticket
 https://bugs.launchpad.net/fuel/+bug/1356813, change in build system
 https://review.openstack.org/#/c/114201/, change in docs
 https://review.openstack.org/#/c/115331/), but it will dramatically
 increase unpacking time. I've run unpack on my virtualbox environment and
 got this result:
 [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
 Decompressing...
 100%7637.48 /   7637.48 MB
 Average DeCompression Speed:  8.014MB/s
 [OK] - 8008478720 bytes
 Total time: 00:15:52.93

 My suggestion is to reject this change, release 5.1 with big tarball and
 find another solution in next release. Any objections?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-22 Thread Steven Hardy
On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
 I would prefer that you didn't merge this.
 
 i.e. The project is better off without it.

I'm not quite sure how you make that translation; I would interpret -2 as
meaning the project would be better off without a change.

FWIW, I've always interpreted "I would prefer that you didn't merge this"
as having an implied suffix of "(in its current state)".

 
 This seems to mean different things to different people. There's a list
 here which contains some criteria for new commits:
 
 https://wiki.openstack.org/wiki/ReviewChecklist.
 
 There's also a treatise on git commit messages and the structure of a
 commit here:
 
 https://wiki.openstack.org/wiki/GitCommitMessages
 
 However, these don't really cover the general case of what a -1 means.
 Here's my brain dump:
 
 * It contains bugs
 * It is likely to confuse future developers/maintainers
 * It is likely to lead to bugs
 * It is inconsistent with other solutions to similar problems
 * It adds complexity which is not matched by its benefits
 * It isn't flexible enough for future work landing RSN
 * It combines multiple changes in a single commit
 
 Any more? I'd be happy to update the above wiki page with any consensus.
 It would be useful if any generally accepted criteria were readily
 referenceable.
 
 I also think it's worth explicitly documenting a few things we
 might/should mention in a review, but which aren't a reason that the
 project would be better off without it:
 
 * Stylistic issues which are not covered by HACKING
 
 By stylistic, I mean changes which have no functional impact on the code
 whatsoever. If a purely stylistic issue is sufficiently important to
 reject code which doesn't adhere to it, it is important enough to add to
 HACKING.

I'll sometimes +1 a change if it looks functionally OK but has some
stylistic or cosmetic issues I would prefer to see fixed before giving a
+2.  I see that as a soft +2, it's not blocking anything, but I'm giving
the patch owner the chance to fix the problem (which they nearly always
do).

Although if a patch contains really significant uglies, I think giving an "I
would prefer you didn't merge this, in its current state" with lots of
constructive comments wrt how to improve things is perfectly reasonable.

 * I can think of a better way of doing this
 
 There may be a better solution, but there is already an existing
 solution. We should only be rejecting work that has already been done if
 it would detract from the project for one of the reasons above. We can
 always improve it further later if we find the developer time.

Agreed, although again I'd encourage folks to +1 and leave detailed
information about how to improve the solution - most people (myself
included) really appreciate learning better ways to do things.  I've
definitely become a much better python developer as a result of the
detailed scrutiny and feedback provided via code reviews.

So while I agree with the general message you seem to be proposing (e.g.
don't -1 for really trivial stuff), I think it's important to recognise
that if there are obvious and non-painful ways to improve code quality, the
review is the time to do that.

I've been flamed before for saying this, but I maintain that part of the
reason we have so many (mostly new and non-core) reviewers leaving -1
feedback for really trivial stuff is that we collect, publish and in some
cases over-analyse review statistics.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thought on service plugin architecture (was [Neutron][QoS] Request to be considered for neutron-incubator)

2014-08-22 Thread Mathieu Rohon
hi,

On Wed, Aug 20, 2014 at 1:03 PM, Salvatore Orlando sorla...@nicira.com wrote:
 As the original thread had a completely different subject, I'm starting a
 new one here.

 More specifically the aim of this thread is about:
 1) Define when a service is best implemented with a service plugin or with a
 ML2 driver
 2) Discuss how bindings between a core resource and the one provided by
 the service plugin should be exposed at the management plane, implemented at
 the control plane, and if necessary also at the data plane.

 Some more comments inline.

 Salvatore

 On 20 August 2014 11:31, Mathieu Rohon mathieu.ro...@gmail.com wrote:

 Hi

 On Wed, Aug 20, 2014 at 12:12 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  In the current approach QoS support is being hardwired into ML2.
 
   Maybe this is not the best way of doing that, as perhaps it will end up
   requiring that every mech driver which enforces VIF configuration
   support it.
   I see two routes. One is a mechanism driver similar to l2-pop, and then
   you might have a look at the proposed extension framework (and
   participate in the discussion).
  The other is doing a service plugin. Still, we'll have to solve how to
  implement the binding between a port/network and the QoS entity.

 We have exactly the same issue while implementing the BGPVPN service
 plugin [1].
 As for the Qos extension, the BGPVPN extension can extend network by
 adding route target infos.
 the BGPVPN data model has a foreign key to the extended network.

 If Qos is implemented as a service plugin, I assume that the
 architecture would be similar, with Qos datamodel
 having  foreign keys to ports and/or Networks.


 From a data model perspective, I believe so if we follow the pattern we've
 followed so far. However, I think this would be correct also if QoS is not
 implemented as a service plugin!


  When a port is created, and it has Qos enforcement thanks to the service
  plugin, let's assume that a ML2 Qos Mech Driver can fetch Qos info and
  send them back to the L2 agent.
  We would probably need a Qos Agent which communicates with the plugin
  through a dedicated topic.


  A distinct agent has pros and cons. I think however that we should try and
  limit the number of agents on the hosts to a minimum. And this minimum in my
  opinion should be 1! There is already a proposal around a modular agent
  which should be capable of loading modules for handling distinct services.
  I think that's the best way forward.

I totally agree, and when I was referring to an agent, I was speaking
of something like the current sec group agent,
or an extension driver in the proposed modular L2 agent semantic [2]




 But when a Qos info is updated through the Qos extension, backed with
 the service plugin,
 the driver that implements the Qos plugin should send the new Qos
 enforcment to the Qos agent through the Qos topic.


 I reckon that is pretty much correct. At the end of the day, the agent which
 enforces QoS at the data plane just needs to ensure the appropriate
 configuration is in place on all ports. Whether this information is coming
 from a driver or a service plugin, it does not matter a lot (as long as
 it's not coming from an untrusted source, obviously). If you look at sec
 group agent module, the concept is pretty much the same.


 So I feel like implementing a core resource extension with a service
 plugin needs :
 1 : a MD to interact with the service plugin
 2 : an agent and a mixin used by the the L2 agent.
 3 : a dedicated topic used by the MD and the driver of the service
 plugin to communicate with the new agent

 Am I wrong?


 There is nothing wrong with that. Nevertheless, the fact that we need a Mech
 driver _and_ a service plugin probably also implies that the service plugin
 at the end of the day has not succeeded in its goal of being orthogonal.
 I think it's worth trying to explore solutions which will allow us to
 completely decouple the service plugin from the core functionality, and
 therefore completely contain QoS management within its service plugin. If
 you too think this is not risible, I can perhaps put together something to
 validate this idea.

It doesn't seem risible to me at all. I feel quite uncomfortable having
to create an MD to deal with core resource modifications, when those core
resources are extended with a service plugin.
I have proposed a patch [3] to work around writing a dedicated MD.
The goal of this patch was to add the extension's information in
get_device_details(), by adding the get_resource()-generated dict to the
dict returned to the agent. The modular agent should dispatch the dict to
the extension drivers of the modular agent.
But I'm not keen on this method either, because extension drivers can
receive info from two channels:
1. the ML2 plugin, which communicates on the plugin/agent topics
through get_device_details()/update_port()
2. the service plugin, which communicates on a dedicated topic to the
dedicated agent

I think we 

[openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-08-22 Thread Thierry Carrez
TL;DR:
Let's create an Oslo projectgroup in Launchpad to track work across all
Oslo libraries. In library projects, let's use milestones connected to
published versions rather than the common milestones.

Long version:
As we graduate more Oslo libraries (which is awesome), tracking Oslo
work in Launchpad (bugs, milestones...) has become more difficult.

There used to be only one Launchpad project (oslo, which covered the
oslo incubator). It would loosely follow the integrated milestones
(juno-1, juno-2...), get blueprints and bugs targeted to those, get tags
pushed around those development milestones: same as the integrated
projects, just with no source code tarball uploads.

When oslo.messaging graduated, a specific Launchpad project was created
to track work around it. It still had integrated development milestones
-- only at the end it would publish a 1.4.0 release instead of a 2014.2
one. That approach creates two problems. First, it's difficult to keep
track of oslo work since it now spans two projects. Second, the
release management logic of marking bugs "Fix released" at development
milestones doesn't really apply (bugs should rather be marked released
when a published version of the lib carries the fix). Git tags and
Launchpad milestones no longer align, which creates a lot of confusion.

Then as more libraries appeared, some of them piggybacked on the general
oslo Launchpad project (generally adding tags to point to the specific
library), and some others created their own project. More confusion ensues.

Here is a proposal that we could use to solve that, until StoryBoard
gets proper milestone support and Oslo is just migrated to it:

1. Ask for an oslo project group in Launchpad

This would solve the first issue, by allowing to see all oslo work on
single pages (see examples at [1] or [2]). The trade-off here is that
Launchpad projects can't be part of multiple project groups (and project
groups can't belong to project groups). That means that Oslo projects
will no longer be in the openstack Launchpad projectgroup. I think the
benefits outweigh the drawbacks here: the openstack projectgroup is not
very strict anyway so I don't think it's used in people's workflows that much.

2. Create one project per library, adopt tag-based milestones

Each graduated library should get its own project (part of the oslo
projectgroup). It should use the same series/cycles as everyone else
(juno), but it should have milestones that match the alpha release
tags, so that you can target work to it and mark them "fix released"
when that means the fix is released. That would solve the issue of
misaligned tags/milestones. The trade-off here is that you lose the
common milestone rhythm (although I guess you can still align some
alphas to the common development milestones). That sounds a small price
to pay to better communicate which version has which fix.

3. Rename oslo project to oslo-incubator

Keep the Launchpad oslo project as-is, part of the same projectgroup,
to cover oslo-incubator work. This can keep the common development
milestones, since the incubator doesn't do releases anyway. However,
it has to be renamed to oslo-incubator so that it doesn't conflict
with the projectgroup namespace. Once it no longer contains graduated
libs, that name makes much more sense anyway.


This plan requires Launchpad admin assistance to create a projectgroup
and rename a project, so I'd like to get approval on it before moving to
ask them for help.

Comments, thoughts ?

[1] https://blueprints.launchpad.net/openstack
[2] https://bugs.launchpad.net/openstack

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Criteria for giving a -1 in a review

2014-08-22 Thread Daniel P. Berrange
On Fri, Aug 22, 2014 at 10:49:51AM +0100, Steven Hardy wrote:
 On Thu, Aug 21, 2014 at 05:05:04PM +0100, Matthew Booth wrote:
  I also think it's worth explicitly documenting a few things we
  might/should mention in a review, but which aren't a reason that the
  project would be better off without it:
  
  * Stylistic issues which are not covered by HACKING
  
  By stylistic, I mean changes which have no functional impact on the code
  whatsoever. If a purely stylistic issue is sufficiently important to
  reject code which doesn't adhere to it, it is important enough to add to
  HACKING.
 
 I'll sometimes +1 a change if it looks functionally OK but has some
 stylistic or cosmetic issues I would prefer to see fixed before giving a
 +2.  I see that as a soft +2, it's not blocking anything, but I'm giving
 the patch owner the chance to fix the problem (which they nearly always
 do).
 
 Although if a patch contains really significant uglies, I think giving an
 "I would prefer you didn't merge this in its current state" with lots of
 constructive comments wrt how to improve things is perfectly reasonable.
 
  * I can think of a better way of doing this
  
  There may be a better solution, but there is already an existing
  solution. We should only be rejecting work that has already been done if
  it would detract from the project for one of the reasons above. We can
  always improve it further later if we find the developer time.
 
 Agreed, although again I'd encourage folks to +1 and leave detailed
 information about how to improve the solution - most people (myself
 included) really appreciate learning better ways to do things.  I've
 definitely become a much better python developer as a result of the
 detailed scrutiny and feedback provided via code reviews.
 
 So while I agree with the general message you seem to be proposing (e.g
 don't -1 for really trivial stuff), I think it's important to recognise
 that if there are obvious and non-painful ways to improve code-quality, the
 review is the time to do that.

One thing I have seen some people (eg Mark McLoughlin) do a number
of times is to actually submit followup patches. eg they will point
out the minor style issue, or idea for a better approach, but still
leave a +1/+2 score. Then submit a followup change to deal with that
nitpicking.  This seems like it is quite an effective approach
because it ensures the original author's work gets through review
more quickly. Using a separate follow-on patch also avoids the idea
of the reviewer hijacking the original contributor's patches by editing
them and reposting directly.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Use lrzip for upgrade tarball - reject?

2014-08-22 Thread Matthew Mosesohn
Mike, others,

I believe Jesse was proposing an upgrade that downloads all the files
separately on the Fuel Master itself. This is a move that we've gone
away from since Fuel 2.0 because of intermittent issues with 3rd party
mirrors. It's often better to consume 1 large file that has everything
and can be verified than to try to pull hundreds of separate bits that
can't be verified, plus trying to track down errors when something
small doesn't work. Locking down to a single install base really
contributed to Fuel's success early on, so I don't think moving back
to separate file downloads is a good idea.

However, clearly we should do our best to compress the upgrade package
file as best as possible so it is less expensive to transfer and also
consume.
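
As a rough sketch of the knob involved, something along these lines would
trade compression ratio for unpack speed (this assumes the lrztar/lrzuntar
wrappers from the lrzip package are on PATH; the level and helper names
are illustrative):

  import subprocess

  def pack(src_dir, level=3):
      # -L selects the lrzip compression level (1..9): lower levels
      # produce a larger file but pack and unpack considerably faster.
      subprocess.check_call(['lrztar', '-L', str(level), src_dir])

  def unpack(archive):
      subprocess.check_call(['lrzuntar', archive])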

On Fri, Aug 22, 2014 at 1:43 PM, Mike Scherbakov
mscherba...@mirantis.com wrote:
 I think 15 minutes is not too bad. Additionally, it will reduce download
 time and the cost of bandwidth.
 +1

 But let's see if we can find some other solution still in 5.1 (hardlinks,
 whatever else), and we obviously need to seriously address it in next
 release.

 Perhaps for 6 an option can be made to allow the Fuel master to use
 package repositories instead of an upgrade file - and the option can be used
 for development and production?
 Jesse, could you please clarify this? Do you mean using a remote repository with
 packages, instead of tarballing everything into a single bundle?



 On Fri, Aug 22, 2014 at 12:39 PM, Matthew Mosesohn mmoses...@mirantis.com
 wrote:

  Dmitry, we already use lrzuntar in deploying Docker containers. Use a
  lower compression ratio and it will decompress faster on virtual envs; it
  takes under two minutes on my virtual env.

 Compress:
 https://github.com/stackforge/fuel-main/blob/master/docker/module.mk#L27

 Decompress:
 https://github.com/stackforge/fuel-main/blob/master/iso/bootstrap_admin_node.docker.sh#L63

 On Aug 21, 2014 5:54 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 Fuelers,

  Our upgrade tarball for 5.1 is more than 4.5Gb. We can reduce its size by
  2Gb with the lrzip tool (ticket, change in build system, change in docs), but it
  will dramatically increase unpacking time. I've run unpack on my virtualbox
  environment and got this result:
 [root@fuel var]# lrzuntar fuel-5.1-upgrade.tar.lrz
 Decompressing...
 100%7637.48 /   7637.48 MB
 Average DeCompression Speed:  8.014MB/s
 [OK] - 8008478720 bytes
 Total time: 00:15:52.93

  My suggestion is to reject this change, release 5.1 with the big tarball, and
  find another solution in the next release. Any objections?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-22 Thread Osanai, Hisashi

On Friday, August 22, 2014 2:55 PM, Dean Troyer wrote:
 As one data point, the keystone middleware (auth_token) was just recently
 moved out of keystoneclient and into its own repo, partially because it had
 dependencies that otherwise were not required for pure client installations.

Thank you for this info. I understand that pure client installations need
to be supported in future deployments, so I need to take care of that in
the spec.
(https://github.com/openstack/keystonemiddleware)

 I don't know what your middleware dependencies are, but I think it would be
 good to consider the effect that move would have on client-only installations.

We are talking about the swift middleware (swift_middleware), which is only
used by the swift proxy, so it would be better for it to have its own repo.

Cheers,
Hisashi Osanai
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-22 Thread Osanai, Hisashi

On Friday, August 22, 2014 4:15 PM, Nejc Saje wrote:
 The modules you are talking about are part of Ceilometer's core
 functionality, we can't move them to a completely separate code-tree
 that is meant only for client functionality.

Thank you for the explanation! I understand your point of the real problem.

 Besides the conceptual difference, python-ceilometerclient is not
 tightly coupled with Ceilometer and has its own release schedule among
 other things.

I checked requirements.txt in the ceilometer package and saw the line for
python-ceilometerclient, so we may be able to control the version of
ceilometerclient when ceilometer is released.
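
For example, a cap along these lines in ceilometer's requirements.txt would
express that control (the version numbers here are purely illustrative):

  python-ceilometerclient>=1.0.6,<2.0.0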

Cheers,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-08-22 Thread Davanum Srinivas
Sounds like a good plan going forward @ttx.

On Fri, Aug 22, 2014 at 5:59 AM, Thierry Carrez thie...@openstack.org wrote:
 TL;DR:
 Let's create an Oslo projectgroup in Launchpad to track work across all
 Oslo libraries. In library projects, let's use milestones connected to
 published versions rather than the common milestones.

 Long version:
 As we graduate more Oslo libraries (which is awesome), tracking Oslo
 work in Launchpad (bugs, milestones...) has become more difficult.

 There used to be only one Launchpad project (oslo, which covered the
 oslo incubator). It would loosely follow the integrated milestones
 (juno-1, juno-2...), get blueprints and bugs targeted to those, get tags
 pushed around those development milestones: same as the integrated
 projects, just with no source code tarball uploads.

 When oslo.messaging graduated, a specific Launchpad project was created
 to track work around it. It still had integrated development milestones
 -- only at the end it would publish a 1.4.0 release instead of a 2014.2
 one. That approach creates two problems. First, it's difficult to keep
 track of oslo work since it now spans two projects. Second, the
 release management logic of marking bugs Fix released at development
 milestones doesn't really apply (bugs should rather be marked released
 when a published version of the lib carries the fix). Git tags and
 Launchpad milestones no longer align, which creates a lot of confusion.

 Then as more libraries appeared, some of them piggybacked on the general
 oslo Launchpad project (generally adding tags to point to the specific
 library), and some others created their own project. More confusion ensues.

 Here is a proposal that we could use to solve that, until StoryBoard
 gets proper milestone support and Oslo is just migrated to it:

 1. Ask for an oslo project group in Launchpad

 This would solve the first issue, by allowing us to see all oslo work on
 a single page (see examples at [1] or [2]). The trade-off here is that
 Launchpad projects can't be part of multiple project groups (and project
 groups can't belong to project groups). That means that Oslo projects
 will no longer be in the openstack Launchpad projectgroup. I think the
 benefits outweigh the drawbacks here: the openstack projectgroup is not
 very strict anyway, so I don't think it's used in people's workflows that much.

 2. Create one project per library, adopt tag-based milestones

 Each graduated library should get its own project (part of the oslo
 projectgroup). It should use the same series/cycles as everyone else
 (juno), but it should have milestones that match the alpha release
 tags, so that you can target work to them and mark bugs fix released
 when that actually means the fix is released. That would solve the issue of
 misaligned tags/milestones. The trade-off here is that you lose the
 common milestone rhythm (although I guess you can still align some
 alphas to the common development milestones). That sounds like a small price
 to pay to better communicate which version has which fix.

 3. Rename oslo project to oslo-incubator

 Keep the Launchpad oslo project as-is, part of the same projectgroup,
 to cover oslo-incubator work. This can keep the common development
 milestones, since the incubator doesn't do releases anyway. However,
 it has to be renamed to oslo-incubator so that it doesn't conflict
 with the projectgroup namespace. Once it no longer contains graduated
 libs, that name makes much more sense anyway.


 This plan requires Launchpad admin assistance to create a projectgroup
 and rename a project, so I'd like to get approval on it before moving to
 ask them for help.

 Comments, thoughts ?

 [1] https://blueprints.launchpad.net/openstack
 [2] https://bugs.launchpad.net/openstack

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-08-22 Thread Flavio Percoco
On 08/22/2014 11:59 AM, Thierry Carrez wrote:
 TL;DR:
 Let's create an Oslo projectgroup in Launchpad to track work across all
 Oslo libraries. In library projects, let's use milestones connected to
 published versions rather than the common milestones.
 
 Long version:
 As we graduate more Oslo libraries (which is awesome), tracking Oslo
 work in Launchpad (bugs, milestones...) has become more difficult.
 
 There used to be only one Launchpad project (oslo, which covered the
 oslo incubator). It would loosely follow the integrated milestones
 (juno-1, juno-2...), get blueprints and bugs targeted to those, get tags
 pushed around those development milestones: same as the integrated
 projects, just with no source code tarball uploads.
 
 When oslo.messaging graduated, a specific Launchpad project was created
 to track work around it. It still had integrated development milestones
 -- only at the end it would publish a 1.4.0 release instead of a 2014.2
 one. That approach creates two problems. First, it's difficult to keep
 track of oslo work since it now spans two projects. Second, the
 release management logic of marking bugs Fix released at development
 milestones doesn't really apply (bugs should rather be marked released
 when a published version of the lib carries the fix). Git tags and
 Launchpad milestones no longer align, which creates a lot of confusion.
 
 Then as more libraries appeared, some of them piggybacked on the general
 oslo Launchpad project (generally adding tags to point to the specific
 library), and some others created their own project. More confusion ensues.
 
 Here is a proposal that we could use to solve that, until StoryBoard
 gets proper milestone support and Oslo is just migrated to it:
 
 1. Ask for an oslo project group in Launchpad
 
 This would solve the first issue, by allowing us to see all oslo work on
 a single page (see examples at [1] or [2]). The trade-off here is that
 Launchpad projects can't be part of multiple project groups (and project
 groups can't belong to project groups). That means that Oslo projects
 will no longer be in the openstack Launchpad projectgroup. I think the
 benefits outweigh the drawbacks here: the openstack projectgroup is not
 very strict anyway, so I don't think it's used in people's workflows that much.
 
 2. Create one project per library, adopt tag-based milestones
 
 Each graduated library should get its own project (part of the oslo
 projectgroup). It should use the same series/cycles as everyone else
 (juno), but it should have milestones that match the alpha release
 tags, so that you can target work to them and mark bugs fix released
 when that actually means the fix is released. That would solve the issue of
 misaligned tags/milestones. The trade-off here is that you lose the
 common milestone rhythm (although I guess you can still align some
 alphas to the common development milestones). That sounds like a small price
 to pay to better communicate which version has which fix.
 
 3. Rename oslo project to oslo-incubator
 
 Keep the Launchpad oslo project as-is, part of the same projectgroup,
 to cover oslo-incubator work. This can keep the common development
 milestones, since the incubator doesn't do releases anyway. However,
 it has to be renamed to oslo-incubator so that it doesn't conflict
 with the projectgroup namespace. Once it no longer contains graduated
 libs, that name makes much more sense anyway.
 
 
 This plan requires Launchpad admin assistance to create a projectgroup
 and rename a project, so I'd like to get approval on it before moving to
 ask them for help.
 
 Comments, thoughts ?

I like this proposal! +1 from me!

 
 [1] https://blueprints.launchpad.net/openstack
 [2] https://bugs.launchpad.net/openstack
 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Sean Dague
On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
 On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 
 On 08/19/2014 11:28 PM, Robert Collins wrote:
 
 On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 ...
 
 I'd like to see more unification of implementations in
 TripleO - but I
 still believe our basic principle of using OpenStack
 technologies that
 already exist in preference to third party ones is still
 sound, and
 offers substantial dogfood and virtuous circle benefits.
 
 
 
 No doubt Triple-O serves a valuable dogfood and virtuous
 cycle purpose.
 However, I would move that the Deployment Program should
 welcome the many
 projects currently in the stackforge/ code namespace that do
 deployment of
 OpenStack using traditional configuration management tools
 like Chef,
 Puppet, and Ansible. It cannot be argued that these
 configuration management
 systems are the de-facto way that OpenStack is deployed
 outside of HP, and
 they belong in the Deployment Program, IMO.
 
 
 I think you mean it 'can be argued'... ;).
 
 
 No, I definitely mean cannot be argued :) HP is the only company I
 know of that is deploying OpenStack using Triple-O. The vast
 majority of deployers I know of are deploying OpenStack using
 configuration management platforms and various systems or glue code
 for baremetal provisioning.
 
 Note that I am not saying that Triple-O is bad in any way! I'm only
 saying that it does not represent the way that the majority of
 real-world deployments are done.
 
 
  And I'd be happy if folk in
 
 those communities want to join in the deployment program and
 have code
 repositories in openstack/. To date, none have asked.
 
 
 My point in this thread has been and continues to be that by having
 the TC bless a certain project as The OpenStack Way of X, that we
 implicitly are saying to other valid alternatives Sorry, no need to
 apply here..
 
 
 As a TC member, I would welcome someone from the Chef
 community proposing
 the Chef cookbooks for inclusion in the Deployment program,
 to live under
 the openstack/ code namespace. Same for the Puppet modules.
 
 
 While you may personally welcome the Chef community to propose
 joining the deployment Program and living under the openstack/ code
 namespace, I'm just saying that the impression our governance model
 and policies create is one of exclusion, not inclusion. Hope that
 clarifies better what I've been getting at.
 
 
 
 (As one of the core reviewers for the Puppet modules)
 
 Without a standardised package build process it's quite difficult to
 test trunk Puppet modules vs trunk official projects. This means we cut
 release branches some time after the projects themselves to give people
 a chance to test. Until this changes and the modules can be released
 with the same cadence as the integrated release I believe they should
 remain on Stackforge.
 
 In addition and perhaps as a consequence, there isn't any public
 integration testing at this time for the modules, although I know some
 parties have developed and maintain their own.
 
 The Chef modules may be in a different state, but it's hard for me to
 recommend the Puppet modules become part of an official program at this
 stage.

Is the focus of the Puppet modules only stable releases with packages?
Puppet + git-based deploys would honestly be a really handy thing
(especially as lots of people end up having custom fixes for their
site). The lack of CM tools for git-based deploys is, I think, one of the
reasons we've seen people using DevStack as a generic installer.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [infra] Glance review patterns and their impact on the gate

2014-08-22 Thread Sean Dague
Earlier this week the freshness checks (the ones that required passing
results within 24 hrs for a change to go into the gate) were removed to
try to conserve nodes as we get to crunch time. The hope was that
review teams had enough of a handle on when code in their program got
into a state where it *could not* pass its own unit tests to not approve
that code.
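
For reference, the removed rule boiled down to a simple recency test, roughly
like this sketch in Python (the 24-hour window matches the description above;
the function and parameter names are illustrative):

  from datetime import datetime, timedelta

  def fresh_enough(last_passing_check, window=timedelta(hours=24)):
      # A change may enter the gate only if its most recent passing
      # check run is no older than the window.
      return datetime.utcnow() - last_passing_check <= window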

The attached screenshot shows that this is not currently true for
Glance: all Glance changes will fail their unit tests, 100% of the time. The root
fix might be here - https://review.openstack.org/#/c/115342/ - which has two
-1s and has been out for review for 3 days.

Realistically Glance was the biggest offender of this in the past, and
honestly the top reason for putting freshness checks in the gate in the
first place.

Does anyone from the Glance team have some ideas on better ways to keep
the team aware that Glance is in a non-functional state (100% of things
will fail) and to not have people approve things that can't pass?
These kinds of issues are the ones that make me uncomfortable with the
Glance team taking on more mission until basic review hygiene is under
control for the existing code.

-Sean

-- 
Sean Dague
http://dague.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [infra] Glance review patterns and their impact on the gate

2014-08-22 Thread Daniel P. Berrange
On Fri, Aug 22, 2014 at 07:09:10AM -0500, Sean Dague wrote:
 Earlier this week the freshness checks (the ones that required passing
 results within 24 hrs for a change to go into the gate) were removed to
 try to conserve nodes as we get to crunch time. The hope was that
 review teams had enough of a handle on when code in their program got
 into a state where it *could not* pass its own unit tests to not approve
 that code.

Doh, I had no idea that we disabled the freshness checks. Since those
checks have been in place I think reviewers have somewhat come to rely
on them existing. I know I've certainly approved newly uploaded patches
for merge now without waiting for 'check' jobs to finish, since we've
been able to rely on the fact that the freshness checks will ensure it
doesn't get into the 'gate' jobs queue if there was a problem. I see
other reviewers do this reasonably frequently too. Of course this
reliance does not work out for 3rd party jobs, so people shouldn't
really do that for changes where such jobs are important, but it is
hard to resist in general.

 Realistically Glance was the biggest offender of this in the past, and
 honestly the top reason for putting freshness checks in the gate in the
 first place.

I'm not commenting about the past transgressions, but as long as those
freshness jobs exist I think they sort of serve to actually reinforce
the bad behaviour you describe because people can start to rely on them.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Thierry Carrez
Hi everyone,

We all know being a project PTL is an extremely busy job. That's because
in our structure the PTL is responsible for almost everything in a project:

- Release management contact
- Work prioritization
- Keeping bugs under control
- Communicate about work being planned or done
- Make sure the gate is not broken
- Team logistics (run meetings, organize sprints)
- ...

They end up being completely drowned in those day-to-day operational
duties, miss the big picture, can't help in development that much
anymore, get burnt out. Since you're either the PTL or not the PTL,
you're very alone and succession planning is not working that great either.

There have been a number of experiments to solve that problem. John
Garbutt has done an incredible job at helping successive Nova PTLs
handling the release management aspect. Tracy Jones took over Nova bug
management. Doug Hellmann successfully introduced the concept of Oslo
liaisons to get clear points of contact for Oslo library adoption in
projects. It may be time to generalize that solution.

The issue is one of responsibility: the PTL is ultimately responsible
for everything in a project. If we can more formally delegate that
responsibility, we can avoid going up to the PTL for everything, and we
can rely on a team of people rather than just one person.

Enter the Czar system: each project should have a number of liaisons /
official contacts / delegates that are fully responsible to cover one
aspect of the project. We need to have Bugs czars, which are responsible
for getting bugs under control. We need to have Oslo czars, which serve
as liaisons for the Oslo program but also as active project-local oslo
advocates. We need Security czars, which the VMT can go to to progress
quickly on plugging vulnerabilities. We need release management czars,
to handle the communication and process with that painful OpenStack
release manager. We need Gate czars to serve as first-line-of-contact
getting gate issues fixed... You get the idea.

Some people can be czars of multiple areas. PTLs can retain some czar
activity if they wish. Czars can collaborate with their equivalents in
other projects to share best practices. We just need a clear list of
areas/duties and make sure each project has a name assigned to each.
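
One lightweight way to keep that list explicit would be a simple roster that
can be checked for gaps; this is just a sketch (the structure and the names
are hypothetical, not an existing governance file):

  CZAR_AREAS = ['release', 'bugs', 'oslo', 'security', 'gate']

  czars = {
      'nova': {'release': 'johnthetubaguy', 'bugs': 'tjones'},
  }

  def missing_czars(project):
      # Areas that still need a volunteer for a given project.
      assigned = czars.get(project, {})
      return [area for area in CZAR_AREAS if area not in assigned]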

Now, why czars ? Why not rely on informal activity ? Well, for that
system to work we'll need a lot of people to step up and sign up for
more responsibility. Making them czars makes sure that effort is
recognized and gives them something back. Also if we don't formally
designate people, we can't really delegate and the PTL will still be
directly held responsible. The Release management czar should be able to
sign off release SHAs without asking the PTL. The czars and the PTL
should collectively be the new project drivers.

At that point, why not also get rid of the PTL ? And replace him with a
team of czars ? If the czar system is successful, the PTL should be
freed from the day-to-day operational duties and will be able to focus
on the project health again. We still need someone to keep an eye on the
project-wide picture and coordinate the work of the czars. We need
someone to pick czars, in the event multiple candidates sign up. We also
still need someone to have the final say in case of deadlocked issues.

People say we don't have that many deadlocks in OpenStack for which the
PTL ultimate power is needed, so we could get rid of them. I'd argue
that the main reason we don't have that many deadlocks in OpenStack is
precisely *because* we have a system to break them if they arise. That
encourages everyone to find a lazy consensus. That part of the PTL job
works. Let's fix the part that doesn't work (scaling/burnout).

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] new testtools breaking gate

2014-08-22 Thread Ihar Hrachyshka

Hi all,

this week is quite bumpy for unit testing in gate. First, it was
the upgrade to a new 'tox' version that broke quite a few branches.

And today new testtools 0.9.36 was released and was caught by the gate,
which resulted in the following unit test failures in multiple projects:

TestCase.setUp was already called. Do not explicitly call setUp from
your tests. In your own setUp, use super to call the base setUp.

All branches are affected: havana, icehouse, and master.

This is because the following check was released with the new version
of the library:
https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517

And the temporary fix is to merge the version pin patch in global
requirements, backport it to stable branches, and merge the updates
from Openstack Proposal Bot to all affected projects. The patch for
master requirements is: https://review.openstack.org/#/c/116267/

In the meantime, projects will need to fix their tests not to call
setUp() and tearDown() twice. This will be the requirement to unpin
the version of the library.
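
For illustration, here is a minimal before/after sketch of the pattern the
new testtools rejects (it now errors out as soon as setUp() runs a second
time on the same TestCase):

  import testtools

  class Broken(testtools.TestCase):
      def test_something(self):
          self.setUp()  # the runner already called setUp(): now an error

  class Fixed(testtools.TestCase):
      def setUp(self):
          super(Fixed, self).setUp()  # call the base exactly once
          self.resource = object()    # any extra fixture setup goes here

      def test_something(self):
          self.assertIsNotNone(self.resource)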

So, please review, backport, and make sure it lands in project
requirements files.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat] Murano split discussion

2014-08-22 Thread Angus Salkeld
On Fri, Aug 22, 2014 at 4:53 PM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Angus Salkeld's message of 2014-08-21 20:14:12 -0700:
  On Fri, Aug 22, 2014 at 12:34 PM, Clint Byrum cl...@fewbar.com wrote:
 
   Excerpts from Georgy Okrokvertskhov's message of 2014-08-20 13:14:28
 -0700:
During last Atlanta summit there were couple discussions about
   Application
Catalog and Application space projects in OpenStack. These
 cross-project
discussions occurred as a result of Murano incubation request [1]
 during
Icehouse cycle.  On the TC meeting devoted to Murano incubation
 there was
an idea about splitting the Murano into parts which might belong to
different programs[2].
   
   
Today, I would like to initiate a discussion about potential
 splitting of
Murano between two or three programs.
   
   
*App Catalog API to Catalog Program*
   
Application Catalog part can belong to Catalog program, the package
repository will move to artifacts repository part where Murano team
   already
participates. API part of App Catalog will add a thin layer of API
   methods
specific to Murano applications and potentially can be implemented
 as a
plugin to artifacts repository. Also this API layer will expose
 other 3rd
party systems API like CloudFoundry ServiceBroker API which is used
 by
CloudFoundry marketplace feature to provide an integration layer
 between
OpenStack Application packages and 3rd party PaaS tools.
   
   
  
   I thought this was basically already agreed upon, and that Glance was
   just growing the ability to store more than just images.
  
   
*Murano Engine to Orchestration Program*
   
Murano engine orchestrates the Heat template generation.
 Complementary
   to a
Heat declarative approach, Murano engine uses imperative approach so
 that
it is possible to control the whole flow of the template generation.
 The
engine uses Heat updates to update Heat templates to reflect changes
 in
applications layout. Murano engine has a concept of actions - special
   flows
which can be called at any time after application deployment to
 change
application parameters or update stacks. The engine is actually
complementary to Heat engine and adds the following:
   
   
   - orchestrate multiple Heat stacks - DR deployments, HA setups,
   multiple
   datacenters deployment
  
   These sound like features already requested directly in Heat.
  
   - Initiate and controls stack updates on application specific
 events
  
   Sounds like workflow. :)
  
   - Error handling and self-healing - being imperative Murano
 allows you
   to handle issues and implement additional logic around error
 handling
   and
   self-healing.
  
   Also sounds like workflow.
  
   
  
  
   I think we need to re-think what a program is before we consider this.
  
   I really don't know much about Murano. I have no interest in it at
  
 
  get off my lawn;)
 

 And turn down that music!

 Sorry for the fist shaking, but I want to highlight that I'm happy to
 consider it, just not with programs working the way they do now.

  http://stackalytics.com/?project_type=allmodule=murano-group
 
  HP seems to be involved, you should check it out.
 

 HP is involved in a lot of OpenStack things. It's a bit hard for me to
 keep my eyes on everything we do. Good to know that others have been able
 to take some time and buy into it a bit. +1 for distributing the load. :)

   all, and nobody has come to me saying "If we only had Murano in our
   orchestration toolbox, we'd solve xxx". But making them part of the
  
 
  I thought you were saying that opsworks was neat the other day?
  Murano from what I understand was partly inspired from opsworks, yes
  it's a layer up, but still really the same field.
 

 I was saying that OpsWorks is reportedly popular, yes. I did not make
 the connection at all from OpsWorks to Murano, and nobody had pointed
 that out to me until now.

   Orchestration program would imply that we'll do design sessions
 together,
   that we'll share the same mission statement, and that we'll have just
  
 
  This is exactly what I hope will happen.
 

 Which sessions from last summit would we want to give up to make room
 for the Murano-only focused sessions? How much time in our IRC meeting
 should we give to Murano-only concerns?


 Forgive me for being harsh. We have a cloud to deploy using Heat,
 and it is taking far too long to get Heat to do that in an acceptable
 manner already. Adding load to our PTL and increasing the burden on our
 communication channels doesn't really seem like something that will
 increase our velocity. I could be dead wrong though, Murano could be
 exactly what we need. I just don't see it, and I'm sorry to be so direct
 about saying that.


No problem, we need to understand up front how this is all going to work.

AFAIK nova has sub-team meetings and they summarize at the main 

[openstack-dev] StackTach.v3 - Screencasts ...

2014-08-22 Thread Sandy Walsh
Hey y'all,

We've started a screencast series on the StackTach.v3 dev efforts [1]. It's
still early days, so subscribe to the playlist for updates.

The videos start with the StackTach/Ceilometer integration presentation at the
Hong Kong summit, which is useful for background and motivation, and then get
into our current dev strategy and state-of-the-union.

If you're interested, we will be at the Ops Meetup in San Antonio next week and 
would love to chat about your monitoring, usage and billing requirements. 

All the best!
-S

[1] https://www.youtube.com/playlist?list=PLmyM48VxCGaW5pPdyFNWCuwVT1bCBV5p3
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new testtools breaking gate

2014-08-22 Thread Ihar Hrachyshka

UPD: probably pinning the version is really overkill here, and instead
we should fix affected branches. Among them: Havana for neutron,
Havana+Icehouse+Juno for Glance. Other projects may also be affected.

For Neutron, it should be handled by the following backport:
https://review.openstack.org/116271

/Ihar

On 22/08/14 14:55, Ihar Hrachyshka wrote:
 Hi all,
 
 this week is quite bumpy for unit testing in gate. First, it was
 the upgrade to a new 'tox' version that broke quite a few branches.
 
 And today new testtools 0.9.36 was released and was caught by the
 gate, which resulted in the following unit test failures in
 multiple projects:
 
 TestCase.setUp was already called. Do not explicitly call setUp
 from your tests. In your own setUp, use super to call the base
 setUp.
 
 All branches are affected: havana, icehouse, and master.
 
 This is because the following check was released with the new
 version of the library: 
 https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517

  And the temporary fix is to merge the version pin patch in global 
 requirements, backport it to stable branches, and merge the
 updates from Openstack Proposal Bot to all affected projects. The
 patch for master requirements is:
 https://review.openstack.org/#/c/116267/
 
 In the meantime, projects will need to fix their tests not to call 
 setUp() and tearDown() twice. This will be the requirement to
 unpin the version of the library.
 
 So, please review, backport, and make sure it lands in project 
 requirements files.
 
 /Ihar
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat] Murano split discussion

2014-08-22 Thread Zane Bitter

On 21/08/14 04:30, Thierry Carrez wrote:

Georgy Okrokvertskhov wrote:

During last Atlanta summit there were couple discussions about
Application Catalog and Application space projects in OpenStack. These
cross-project discussions occurred as a result of Murano incubation
request [1] during Icehouse cycle.  On the TC meeting devoted to Murano
incubation there was an idea about splitting the Murano into parts which
might belong to different programs[2].


Today, I would like to initiate a discussion about potential splitting
of Murano between two or three programs.
[...]


I think the proposed split makes a lot of sense. Let's wait for the
feedback of the affected programs to see if it's compatible with their
own plans.


I want to start out by saying that I am a big proponent of doing stuff 
that makes sense, and wearing my PTL hat I will support the consensus of 
the community on whatever makes the most sense.


With the PTL hat off again, here is my 2c on what I think makes sense:

* The Glance thing makes total sense to me. Murano's requirements should 
be pretty much limited to an artifact catalog with some metadata - 
that's bread and butter for Glance. Murano folks should join the Glance 
team and drive their requirements into the artifact catalog.


* The Horizon thing makes some sense. I think at least part of the UI 
should be in Horizon, but I suspect there's also some stuff in there 
that is pretty specific to the domain that Murano is tackling and it 
might be better for that to live in the same program as the Murano 
engine. I believe that there's a close analogue here with Tuskar and the 
TripleO program, so maybe we could ask them about any lessons learned. 
Georgy suggested elsewhere that the Merlin framework should be in 
Horizon and the rest in the same program as the engine, and that would 
make total sense to me.


* The Heat thing doesn't make a lot of sense IMHO. I now understand that 
apparently different projects in the same program can have different 
core teams - which just makes me more confused about what a program is 
for, since I thought it was a single team. Nevertheless, I don't think 
that the Murano project would be well-served by being represented by the 
Heat PTL (which is, I guess, the only meaning still attached to a 
program). I don't think they want the Heat PTL triaging their bugs, 
and I don't think it's even feasible for one person to do that for both 
projects (that is to say, I already have a negative amount of extra time 
available for Launchpad just handling Heat). I don't think they want the 
Heat PTL to have control over their design summit sessions, and if I 
were the PTL doing that I would *hate* to be in the position of trying 
to balance the interests of the two projects - *especially*, given that 
I am in Clint's camp of not seeing a lot of value in Murano, when one 
project has not gone through the incubation process and therefore there 
would be no guidance available from the TC or consensus in the wider 
community as to whether that project warranted any time at all devoted 
to it. In fact, I would go so far as to say that it's completely 
unreasonable to put a single PTL in that position.


So, I don't think putting the Murano engine into the Orchestration 
program is being proposed because it makes sense. I think it's being 
proposed, despite not making sense, because people consider it unlikely 
that the TC would grant Murano a separate program due to some 
combination of:


(a) People won't think Murano is a good (enough) idea - in which case we 
shouldn't do it (yet); and/or
(b) People have an irrational belief that projects are lightweight but 
programs are heavyweight, when the reverse is true, and will block any 
new programs for fear of letting another person call themselves a PTL - 
in which case the structure of the OpenStack community is broken and we 
must fix it.


Y'all know my proposed solution to the latter already - rename programs 
to projects and get rid of PTLs :)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new testtools breaking gate

2014-08-22 Thread Ihar Hrachyshka

[resending to include stable-maint.]

UPD: probably pinning the version is really overkill here, and instead
we should fix affected branches. Among them: Havana for neutron,
Havana+Icehouse+Juno for Glance. Other projects may also be affected.

For Neutron, it should be handled by the following backport:
https://review.openstack.org/116271

/Ihar

On 22/08/14 14:55, Ihar Hrachyshka wrote:
 Hi all,
 
 this week is quite bumpy for unit testing in gate. First, it was
 the upgrade to a new 'tox' version that broke quite a few branches.
 
 And today new testtools 0.9.36 was released and was caught by the
 gate, which resulted in the following unit test failures in
 multiple projects:
 
 TestCase.setUp was already called. Do not explicitly call setUp
 from your tests. In your own setUp, use super to call the base
 setUp.
 
 All branches are affected: havana, icehouse, and master.
 
 This is because the following check was released with the new
 version of the library: 
 https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517

  And the temporary fix is to merge the version pin patch in global 
 requirements, backport it to stable branches, and merge the
 updates from Openstack Proposal Bot to all affected projects. The
 patch for master requirements is:
 https://review.openstack.org/#/c/116267/
 
 In the meantime, projects will need to fix their tests not to call 
 setUp() and tearDown() twice. This will be the requirement to
 unpin the version of the library.
 
 So, please review, backport, and make sure it lands in project 
 requirements files.
 
 /Ihar
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Russell Bryant
On 08/22/2014 08:33 AM, Thierry Carrez wrote:
 Hi everyone,
 
 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a project:
 
 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...
 
 They end up being completely drowned in those day-to-day operational
 duties, miss the big picture, can't help in development that much
 anymore, get burnt out. Since you're either the PTL or not the PTL,
 you're very alone and succession planning is not working that great either.
 
 There have been a number of experiments to solve that problem. John
 Garbutt has done an incredible job at helping successive Nova PTLs
 handling the release management aspect. Tracy Jones took over Nova bug
 management. Doug Hellmann successfully introduced the concept of Oslo
 liaisons to get clear points of contact for Oslo library adoption in
 projects. It may be time to generalize that solution.
 
 The issue is one of responsibility: the PTL is ultimately responsible
 for everything in a project. If we can more formally delegate that
 responsibility, we can avoid going up to the PTL for everything, and we
 can rely on a team of people rather than just one person.
 
 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.
 
 Some people can be czars of multiple areas. PTLs can retain some czar
 activity if they wish. Czars can collaborate with their equivalents in
 other projects to share best practices. We just need a clear list of
 areas/duties and make sure each project has a name assigned to each.
 
 Now, why czars ? Why not rely on informal activity ? Well, for that
 system to work we'll need a lot of people to step up and sign up for
 more responsibility. Making them czars makes sure that effort is
 recognized and gives them something back. Also if we don't formally
 designate people, we can't really delegate and the PTL will still be
 directly held responsible. The Release management czar should be able to
 sign off release SHAs without asking the PTL. The czars and the PTL
 should collectively be the new project drivers.
 
 At that point, why not also get rid of the PTL ? And replace him with a
 team of czars ? If the czar system is successful, the PTL should be
 freed from the day-to-day operational duties and will be able to focus
 on the project health again. We still need someone to keep an eye on the
 project-wide picture and coordinate the work of the czars. We need
 someone to pick czars, in the event multiple candidates sign up. We also
 still need someone to have the final say in case of deadlocked issues.
 
 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many deadlocks in OpenStack is
 precisely *because* we have a system to break them if they arise. That
 encourages everyone to find a lazy consensus. That part of the PTL job
 works. Let's fix the part that doesn't work (scaling/burnout).
 

+1 on czars.  That's what was working best for me to start scaling
things in Nova, especially through my 2nd term (Icehouse).  John and
Tracy were a big help, as you mentioned as examples.  There were others
that were stepping up, too.  I think it's been working well enough to
formalize it.

Another area worth calling out is a gate czar.  Having someone who
understands infra and QA quite well and is regularly on top of the
status of the project in the gate is helpful and quite important.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Russell Bryant
On 08/22/2014 09:40 AM, Russell Bryant wrote:
 Another area worth calling out is a gate czar.  Having someone who
 understands infra and QA quite well and is regularly on top of the
 status of the project in the gate is helpful and quite important.

Oops, you said this one, too.  Anyway, +1.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Anita Kuno
On 08/22/2014 07:33 AM, Thierry Carrez wrote:
 Hi everyone,
 
 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a project:
 
 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...
 
 They end up being completely drowned in those day-to-day operational
 duties, miss the big picture, can't help in development that much
 anymore, get burnt out. Since you're either the PTL or not the PTL,
 you're very alone and succession planning is not working that great either.
 
 There have been a number of experiments to solve that problem. John
 Garbutt has done an incredible job at helping successive Nova PTLs
 handling the release management aspect. Tracy Jones took over Nova bug
 management. Doug Hellmann successfully introduced the concept of Oslo
 liaisons to get clear points of contact for Oslo library adoption in
 projects. It may be time to generalize that solution.
 
 The issue is one of responsibility: the PTL is ultimately responsible
 for everything in a project. If we can more formally delegate that
 responsibility, we can avoid going up to the PTL for everything, and we
 can rely on a team of people rather than just one person.
 
 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.
 
 Some people can be czars of multiple areas. PTLs can retain some czar
 activity if they wish. Czars can collaborate with their equivalents in
 other projects to share best practices. We just need a clear list of
 areas/duties and make sure each project has a name assigned to each.
 
 Now, why czars ? Why not rely on informal activity ? Well, for that
 system to work we'll need a lot of people to step up and sign up for
 more responsibility. Making them czars makes sure that effort is
 recognized and gives them something back. Also if we don't formally
 designate people, we can't really delegate and the PTL will still be
 directly held responsible. The Release management czar should be able to
 sign off release SHAs without asking the PTL. The czars and the PTL
 should collectively be the new project drivers.
 
 At that point, why not also get rid of the PTL ? And replace him with a
him or her (those French pronouns again!)
 team of czars ? If the czar system is successful, the PTL should be
 freed from the day-to-day operational duties and will be able to focus
 on the project health again. We still need someone to keep an eye on the
 project-wide picture and coordinate the work of the czars. We need
 someone to pick czars, in the event multiple candidates sign up. We also
 still need someone to have the final say in case of deadlocked issues.
 
 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many deadlocks in OpenStack is
 precisely *because* we have a system to break them if they arise. That
 encourages everyone to find a lazy consensus. That part of the PTL job
 works. Let's fix the part that doesn't work (scaling/burnout).
 
I would like to work with a single point of contact for any program
engaged in the third party space.

Thanks,
Anita.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Daniel P. Berrange
On Fri, Aug 22, 2014 at 02:33:27PM +0200, Thierry Carrez wrote:
 Hi everyone,
 
 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a project:
 
 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...

This is a good list of responsibilities, but I feel like we're missing
something that is possibly a little too fuzzy to describe in a bullet
point. In terms of our governance process, PTL is an elected role, so
in that sense the PTL holds an implicit duty to represent the interests
of the project constituents. I'd characterise this as setting the overall
big picture direction, so that the core team (and/or czars) are focusing
their efforts in the right direction and generally ensuring that the
project is operating in a healthy manner. Perhaps you could class that
as 'team logistics' but I feel that the idea of 'representing the
voters' is worth calling out explicitly.

Is there any formal writeup of the PTL role's responsibilities ?
With Google so far I only found one paragraph of governance

  https://wiki.openstack.org/wiki/Governance/Foundation/Structure

  Project Technical Leads (PTLs) lead individual projects. A PTL
   is ultimately responsible for the direction for each project,
   makes tough calls when needed, organizes the work and teams in
   the project and determines if other forms of project leadership
   are needed. The PTL for a project is elected by the body of
   contributors to that particular project.

Anyway, with the idea of elections and representation in mind...

 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.
 
 Some people can be czars of multiple areas. PTLs can retain some czar
 activity if they wish. Czars can collaborate with their equivalents in
 other projects to share best practices. We just need a clear list of
 areas/duties and make sure each project has a name assigned to each.
 
 Now, why czars ? Why not rely on informal activity ? Well, for that
 system to work we'll need a lot of people to step up and sign up for
 more responsibility. Making them czars makes sure that effort is
 recognized and gives them something back. Also if we don't formally
 designate people, we can't really delegate and the PTL will still be
 directly held responsible. The Release management czar should be able to
 sign off release SHAs without asking the PTL. The czars and the PTL
 should collectively be the new project drivers.
 
 At that point, why not also get rid of the PTL ? And replace him with a
 team of czars ? If the czar system is successful, the PTL should be
 freed from the day-to-day operational duties and will be able to focus
 on the project health again. We still need someone to keep an eye on the
 project-wide picture and coordinate the work of the czars. We need
 someone to pick czars, in the event multiple candidates sign up. We also
 still need someone to have the final say in case of deadlocked issues.

 I'm wondering how people come to be czars ? You don't say explicitly,
but reading this it feels like the team of czars would be more or less
self-selecting amongst the project contributors or nominated by the PTL ?

Thus if we took the next step and got rid of the PTL, then we would seem to
have entirely removed the idea of democratically elected leadership from
the individual projects. Is this interpretation correct, wrt the idea of
a czar system with PTL role abolished ?

 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many deadlocks in OpenStack is
 precisely *because* we have a system to break them if they arise. That
 encourages everyone to find a lazy consensus. That part of the PTL job
 works. Let's fix the part that doesn't work (scaling/burnout).

Even if we didn't have deadlocks with a pure czar system, I fear that
we would be losing something important by no longer having the project
members directly elect their leader. The elections serve to give the
membership a sense of representation & control over the direction of
the project. Without that 

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Flavio Percoco
On 08/22/2014 02:33 PM, Thierry Carrez wrote:
 Hi everyone,
 
 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a project:
 
 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...
 
 They end up being completely drowned in those day-to-day operational
 duties, miss the big picture, can't help in development that much
 anymore, get burnt out. Since you're either the PTL or not the PTL,
 you're very alone and succession planning is not working that great either.
 
 There have been a number of experiments to solve that problem. John
 Garbutt has done an incredible job at helping successive Nova PTLs
 handling the release management aspect. Tracy Jones took over Nova bug
 management. Doug Hellmann successfully introduced the concept of Oslo
 liaisons to get clear point of contacts for Oslo library adoption in
 projects. It may be time to generalize that solution.
 
 The issue is one of responsibility: the PTL is ultimately responsible
 for everything in a project. If we can more formally delegate that
 responsibility, we can avoid getting up to the PTL for everything, we
 can rely on a team of people rather than just one person.
 
 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.
 
 Some people can be czars of multiple areas. PTLs can retain some czar
 activity if they wish. Czars can collaborate with their equivalents in
 other projects to share best practices. We just need a clear list of
 areas/duties and make sure each project has a name assigned to each.
 
 Now, why czars ? Why not rely on informal activity ? Well, for that
 system to work we'll need a lot of people to step up and sign up for
 more responsibility. Making them czars makes sure that effort is
 recognized and gives them something back. Also if we don't formally
 designate people, we can't really delegate and the PTL will still be
 directly held responsible. The Release management czar should be able to
 sign off release SHAs without asking the PTL. The czars and the PTL
 should collectively be the new project drivers.
 
 At that point, why not also get rid of the PTL ? And replace him with a
 team of czars ? If the czar system is successful, the PTL should be
 freed from the day-to-day operational duties and will be able to focus
 on the project health again. We still need someone to keep an eye on the
 project-wide picture and coordinate the work of the czars. We need
 someone to pick czars, in the event multiple candidates sign up. We also
 still need someone to have the final say in case of deadlocked issues.
 
 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many deadlocks in OpenStack is
 precisely *because* we have a system to break them if they arise. That
 encourages everyone to find a lazy consensus. That part of the PTL job
 works. Let's fix the part that doesn't work (scaling/burnout).
 


+1

FWIW, this is how things have worked in Zaqar since the very beginning.
We've split our duties across the team, which has helped clear some
of the PTL's duties. So far, we have people responsible for:

* Bugs for the server
* Bugs for the client
* Gate and Tests
* Client releases

The areas not listed there are still part of the PTL's responsibilities.

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP

2014-08-22 Thread Bao Wang
Thank you for your response. Could this be done natively with OpenStack
Neutron, or does it have to be done manually outside Neutron?  As we are
expecting to orchestrate hundreds of NFV instances with similar network
configurations, programmability is another key element.
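
As an illustration of the programmability angle, Kevin's suggestion of a
shared external network (quoted below) can be scripted with the standard
Python clients. A minimal sketch against the 2014-era client APIs - the
credentials, endpoint, network name, image and flavor IDs are all
placeholders, and exact signatures vary across client releases:

    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    # Authenticate against Keystone (placeholder credentials/endpoint).
    neutron = neutron_client.Client(username='admin',
                                    password='secret',
                                    tenant_name='admin',
                                    auth_url='http://keystone:5000/v2.0')

    # Make the external network attachable by tenant instances.
    ext_net = neutron.list_networks(name='ext-net')['networks'][0]
    neutron.update_network(ext_net['id'], {'network': {'shared': True}})

    # Boot an instance with a NIC directly on that network: its fixed IP
    # is then a public address, with no floating IP or NAT involved.
    nova = nova_client.Client('2', 'admin', 'secret', 'admin',
                              'http://keystone:5000/v2.0')
    nova.servers.create('nfv-vm-0',
                        image='<image-uuid>',
                        flavor='<flavor-id>',
                        nics=[{'net-id': ext_net['id']}])

Orchestrating hundreds of such instances is then just a loop over the same
calls.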


On Thu, Aug 21, 2014 at 3:52 PM, Kevin Benton blak...@gmail.com wrote:

 Have you tried making the external network shared as well? Instances that
 need a private IP with NAT attach to an internal network and go through the
 router like normal. Instances that need a public IP without NAT would just
 attach directly to the external network.


 On Thu, Aug 21, 2014 at 7:06 AM, Bao Wang bywan...@gmail.com wrote:

 I have a very complex OpenStack deployment for NFV. It could not be
 deployed as flat networking. It will have a lot of isolated private
 networks. Some interfaces of a group of VM instances will need a bridged
 network with their fixed IP addresses to communicate with the outside
 world, while other interfaces on the same set of VMs should stay isolated
 with truly private fixed IP addresses. What happens if we use public IP
 addresses directly as fixed IPs on those interfaces?  Will this work with
 OpenStack Neutron networking?  Will OpenStack do NAT automatically on those?

 Overall, the requirement is to use the fixed/public IP to communicate
 directly with the outside on some interfaces of some VM instances while
 keeping others private. The floating IP is not an option here

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-22 Thread Ildikó Váncsa
Hi,

I couldn’t reproduce this issue either. I’ve tried on precise and on a fresh 
trusty too, everything worked fine…

Cheers,
Ildikó

From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Friday, August 22, 2014 11:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically 
because of wsme autodoc extension

Gordon, exactly. Moreover, I think the failure rate is close to 15-20% of
all the runs - sometimes it passes for a change, and the next run on exactly
the same commit will fail for some reason.

Thanks
Dina

On Thu, Aug 21, 2014 at 10:46 PM, gordon chung g...@live.ca wrote:
is it possible that it's not on all the nodes?

seems like it passed here: https://review.openstack.org/#/c/109207/ but another 
patch at roughly the same time failed https://review.openstack.org/#/c/113549/

cheers,
gord

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Zane Bitter

On 22/08/14 08:33, Thierry Carrez wrote:

Hi everyone,

We all know being a project PTL is an extremely busy job. That's because
in our structure the PTL is responsible for almost everything in a project:

- Release management contact
- Work prioritization
- Keeping bugs under control
- Communicate about work being planned or done
- Make sure the gate is not broken
- Team logistics (run meetings, organize sprints)
- ...

They end up being completely drowned in those day-to-day operational
duties, miss the big picture, can't help in development that much
anymore, get burnt out. Since you're either the PTL or not the PTL,
you're very alone and succession planning is not working that great either.


Succession planning works as well as you want it to IMO. In Kilo, Heat 
will have its 5th PTL in 5 release cycles, and all of those successions 
except the first were planned. We have multiple potential candidates for 
the next election whom I think would be more than capable of doing a 
great job; the harder thing is finding people who are able to commit the 
time.



There have been a number of experiments to solve that problem. John
Garbutt has done an incredible job at helping successive Nova PTLs
handling the release management aspect. Tracy Jones took over Nova bug
management. Doug Hellmann successfully introduced the concept of Oslo
liaisons to get clear point of contacts for Oslo library adoption in
projects. It may be time to generalize that solution.


+1

My goal as the Heat PTL has been to try to identify all of the places 
where the PTL can be a single point of failure and work toward 
eliminating them, as a first step toward eliminating PTLs altogether.


BTW the big two remaining are scheduling sessions for the design summit 
and approving/targeting blueprints in Launchpad - the specs repos helped 
somewhat with the latter, but it won't be completely fixed until 
Launchpad goes away.



The issue is one of responsibility: the PTL is ultimately responsible
for everything in a project. If we can more formally delegate that
responsibility, we can avoid getting up to the PTL for everything, we
can rely on a team of people rather than just one person.


First off, the PTL is not responsible for everything in a project. 
*Everyone* is responsible for everything in a project.


The PTL is *accountable* for everything in a project. PTLs are the 
mechanism the TC uses to ensure that programs remain accountable to the 
wider community.



Enter the Czar system: each project should have a number of liaisons /
official contacts / delegates that are fully responsible to cover one
aspect of the project. We need to have Bugs czars, which are responsible
for getting bugs under control. We need to have Oslo czars, which serve
as liaisons for the Oslo program but also as active project-local oslo
advocates. We need Security czars, which the VMT can go to to progress
quickly on plugging vulnerabilities. We need release management czars,
to handle the communication and process with that painful OpenStack
release manager. We need Gate czars to serve as first-line-of-contact
getting gate issues fixed... You get the idea.


+1

Rather than putting it all on one person, we should enumerate the ways 
in which we want projects to be accountable to the wider community, and 
allow the projects themselves to determine who is accountable for each 
particular function. Furthermore, we should allow them to do so on their 
own cadence, rather than only at 6-month intervals.



Some people can be czars of multiple areas. PTLs can retain some czar
activity if they wish. Czars can collaborate with their equivalents in
other projects to share best practices. We just need a clear list of
areas/duties and make sure each project has a name assigned to each.


Exactly, maybe we'd have a wiki page or something listing the contact 
person for each area in each project.



Now, why czars ? Why not rely on informal activity ? Well, for that
system to work we'll need a lot of people to step up and sign up for
more responsibility. Making them czars makes sure that effort is
recognized and gives them something back. Also if we don't formally
designate people, we can't really delegate and the PTL will still be
directly held responsible. The Release management czar should be able to
sign off release SHAs without asking the PTL. The czars and the PTL
should collectively be the new project drivers.

At that point, why not also get rid of the PTL ?


+1 :)


And replace him with a
team of czars ? If the czar system is successful, the PTL should be
freed from the day-to-day operational duties and will be able to focus
on the project health again.


I don't see that as something the wider OpenStack community needs to 
dictate. We have a heavyweight election process for PTLs once every 
cycle because that used to be the process for electing the TC. Now that 
it no longer serves this dual purpose, PTL elections have outlived their 
usefulness.


If 

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Thierry Carrez
Russell Bryant wrote:
 On 08/22/2014 09:40 AM, Russell Bryant wrote:
 Another area worth calling out is a gate czar.  Having someone who
 understands infra and QA quite well and is regularly on top of the
 status of the project in the gate is helpful and quite important.
 
 Oops, you said this one, too.  Anyway, +1.

The one I forgot in my example list would be the Docs czar. I'm pretty
sure Anne would appreciate a clear point of contact for docs in every
integrated project.

Another interesting side-effect of having clear positions is that if
nobody fills them (which is a possibility), it's clear and public that
there is a gap. Today the gap just ends up on the PTL's list of other
things to also do.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Thierry Carrez
Daniel P. Berrange wrote:
 On Fri, Aug 22, 2014 at 02:33:27PM +0200, Thierry Carrez wrote:
 Hi everyone,

 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a project:

 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...
 
 This is a good list of responsibilities, but I feel like we're missing
 something that is possibly a little too fuzzy to describe in a bullet
 point. In terms of our governance process, PTL is an elected role, so
 in that sense the PTL holds an implicit duty to represent the interests
 of the project constituents. I'd characterise this as setting the overall
 big picture direction, so that the core team (and/or czars) are focusing
 their efforts in the right direction and generally ensuring that the
 project is operating in a healthy manner. Perhaps you could class that
 as 'team logistics' but I feel that the idea of 'representing the
 voters' is worth calling out explicitly.

Indeed. I touch on the "keep an eye on the big picture" aspect of the
job later in the email, but I didn't call it out in that list.

 Is there any formal writeup of the PTL role's responsibilities ?
 With Google so far I only found one paragraph of governance
 
   https://wiki.openstack.org/wiki/Governance/Foundation/Structure
 
   Project Technical Leads (PTLs) lead individual projects. A PTL
   is ultimately responsible for the direction for each project,
   makes tough calls when needed, organizes the work and teams in
   the project and determines if other forms of project leadership
   are needed. The PTL for a project is elected by the body of
   contributors to that particular project.

The reference would be:
https://wiki.openstack.org/wiki/PTLguide

 Anyway, with the idea of elections & representation in mind
 
 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.

 Some people can be czars of multiple areas. PTLs can retain some czar
 activity if they wish. Czars can collaborate with their equivalents in
 other projects to share best practices. We just need a clear list of
 areas/duties and make sure each project has a name assigned to each.

 Now, why czars ? Why not rely on informal activity ? Well, for that
 system to work we'll need a lot of people to step up and sign up for
 more responsibility. Making them czars makes sure that effort is
 recognized and gives them something back. Also if we don't formally
 designate people, we can't really delegate and the PTL will still be
 directly held responsible. The Release management czar should be able to
 sign off release SHAs without asking the PTL. The czars and the PTL
 should collectively be the new project drivers.

 At that point, why not also get rid of the PTL ? And replace him with a
 team of czars ? If the czar system is successful, the PTL should be
 freed from the day-to-day operational duties and will be able to focus
 on the project health again. We still need someone to keep an eye on the
 project-wide picture and coordinate the work of the czars. We need
 someone to pick czars, in the event multiple candidates sign up. We also
 still need someone to have the final say in case of deadlocked issues.
 
  I'm wondering how people come to be czars ? You don't say explicitly,
 but reading this it feels like the team of czars would be more or less
 self-selecting amongst the project contributors or nominated by the PTL ?

I think you would have volunteers (and volunteered people). In the
unlikely case people fight to become the czar, I guess the PTL could
have the final say, as it's also good to ensure that the czars don't
all happen to come from a single company, etc.

 Thus if we took the next step and got rid of the PTL, then we would seem to
 have entirely removed the idea of democratically elected leadership from
 the individual projects. Is this interpretation correct, wrt the idea of
 a czar system with PTL role abolished ?

If you also abolish the PTL, yes.

 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many 

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Thierry Carrez
Zane Bitter wrote:
 On 22/08/14 08:33, Thierry Carrez wrote:
 We also
 still need someone to have the final say in case of deadlocked issues.
 
 -1 we really don't.

I know we disagree on that :)

 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many deadlocks in OpenStack is
 precisely *because* we have a system to break them if they arise.
 
 s/that many/any/ IME and I think that threatening to break a deadlock by
 fiat is just as bad as actually doing it. And by 'bad' I mean
 community-poisoningly, trust-destroyingly bad.

I guess I've been active in too many dysfunctional free and open source
software projects -- I put a very high value on the ability to make a
final decision. Not being able to make a decision at all is just as
community-poisoning, and it also results in an inability to make any
significant change or decision.

 That
 encourages everyone to find a lazy consensus. That part of the PTL job
 works. Let's fix the part that doesn't work (scaling/burnout).
 
 Let's allow projects to decide for themselves what works. Not every
 project is the same.

The net effect of not having a PTL having the final call means the final
call would reside at the Technical Committee level. I don't feel like
the Technical Committee should have final say on a project-specific
matter. It's way better that the local leader, chosen by all the
contributors of THAT project every 6 months, makes that final decision.
Or do you also want to get rid of the Technical Committee ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] How to get new package requirements into the CI system using a PPA and EPEL?

2014-08-22 Thread Ken Giusti
On Tue, Jul 15, 2014 at 7:28 PM, Ian Wienand iwien...@redhat.com wrote:
 On 07/15/2014 11:55 PM, Ken Giusti wrote:
 Good to hear about epel's availability.  But on the Ubuntu/Debian
 side - is it possible to add the Qpid project's PPA to the config
 project?  From a quick 'grep' of the sources, it appears as if Pypy
 requires a PPA.  (It's configured in
 modules/openstack_project/manifests/slave_common.pp).  Can I use
 this as an example for adding Qpid's PPA?

 This is probably a good example of the puppet classes to use, but...

 As discussed, it's questionable what you want here.  Probably for your
 unit tests, you could mock-out calls to the library?  So you may not
 need it installed at all?

 If you want to test it for real; i.e. in a running environment with
 real RPC happening between components, etc, then that would be in a
 devstack environment.  It sounds like you'd probably be wanting to
 define a new rpc-backend [1] that could be optionally enabled.

 Once you had that in devstack, you'd have to start looking at the
 jenkins-job-builder configs [2] and add a specific test that enabled
 the flag for this back-end and add it as probably a non-voting job to
 some component.



Thanks.  I've spent some time hacking on this and have the following:

1) a patch to devstack that adds a configuration option to enable AMQP
1.0 as the RPC messaging protocol:

https://review.openstack.org/#/c/109118/

2) a patch to openstack-infra/config that adds a new non-voting job
for oslo.messaging that runs on a devstack node with AMQP 1.0 enabled.
The job runs the AMQP 1.0 functional tests via tox:

https://review.openstack.org/#/c/115752/

#2 is fairly straightforward - I've copy-pasted the code from existing
neutron-functional tests.  Still needs testing, however.

#1 is the change I'm most concerned about as it adds the Apache Qpid
PPA for Ubuntu systems.  I've tested this on my Trusty and CentOS 6
VMs and it works well for me.
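
For context, switching devstack's RPC backend is just a localrc toggle; a
sketch using the service names devstack already understands (the AMQP 1.0
switch itself is whatever option the devstack patch above defines):

    # localrc sketch - swap RabbitMQ for Qpid in devstack
    disable_service rabbit
    enable_service qpid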

Can anyone on the devstack or infra teams give me some feedback on
these changes?  I'm hoping these infrastructure changes will unblock
the AMQP 1.0 blueprint in time for Juno-3 (fingers, toes, eyes
crossed).

thanks!

 -i

 [1] http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/rpc_backend
 [2] 
 https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/jenkins_job_builder/config/



-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Launchpad tracking of oslo projects

2014-08-22 Thread Ben Nemec
I have no problem with the proposed change, so +1 from me.  Doug's
probably the important one to hear from though, since you and he are the
ones who deal with this stuff the most.  I think he's travelling today
so we'll see if he gets a chance to comment.

On 08/22/2014 04:59 AM, Thierry Carrez wrote:
 TL;DR:
 Let's create an Oslo projectgroup in Launchpad to track work across all
 Oslo libraries. In library projects, let's use milestones connected to
 published versions rather than the common milestones.
 
 Long version:
 As we graduate more Oslo libraries (which is awesome), tracking Oslo
 work in Launchpad (bugs, milestones...) has become more difficult.
 
 There used to be only one Launchpad project (oslo, which covered the
 oslo incubator). It would loosely follow the integrated milestones
 (juno-1, juno-2...), get blueprints and bugs targeted to those, get tags
 pushed around those development milestones: same as the integrated
 projects, just with no source code tarball uploads.
 
 When oslo.messaging graduated, a specific Launchpad project was created
 to track work around it. It still had integrated development milestones
 -- only at the end it would publish a 1.4.0 release instead of a 2014.2
 one. That approach creates two problems. First, it's difficult to keep
 track of oslo work since it now spans two projects. Second, the
 release management logic of marking bugs "Fix released" at development
 milestones doesn't really apply (bugs should rather be marked released
 when a published version of the lib carries the fix). Git tags and
 Launchpad milestones no longer align, which creates a lot of confusion.
 
 Then as more libraries appeared, some of them piggybacked on the general
 oslo Launchpad project (generally adding tags to point to the specific
 library), and some others created their own project. More confusion ensues.
 
 Here is a proposal that we could use to solve that, until StoryBoard
 gets proper milestone support and Oslo is just migrated to it:
 
 1. Ask for an oslo project group in Launchpad
 
 This would solve the first issue, by allowing to see all oslo work on
 single pages (see examples at [1] or [2]). The trade-off here is that
 Launchpad projects can't be part of multiple project groups (and project
 groups can't belong to project groups). That means that Oslo projects
 will no longer be in the openstack Launchpad projectgroup. I think the
 benefits outweigh the drawbacks here: the openstack projectgroup is not
 very strict anyway so I don't think it's used in people's workflows that much.
 
 2. Create one project per library, adopt tag-based milestones
 
 Each graduated library should get its own project (part of the oslo
 projectgroup). It should use the same series/cycles as everyone else
 (juno), but it should have milestones that match the alpha release
 tags, so that you can target work to it and mark them "Fix released"
 when that means the fix is released. That would solve the issue of
 misaligned tags/milestones. The trade-off here is that you lose the
 common milestone rhythm (although I guess you can still align some
 alphas to the common development milestones). That sounds like a small price
 to pay to better communicate which version has which fix.
 
 3. Rename the "oslo" project to "oslo-incubator"
 
 Keep the Launchpad oslo project as-is, part of the same projectgroup,
 to cover oslo-incubator work. This can keep the common development
 milestones, since the incubator doesn't do releases anyway. However,
 it has to be renamed to oslo-incubator so that it doesn't conflict
 with the projectgroup namespace. Once it no longer contains graduated
 libs, that name makes much more sense anyway.
 
 
 This plan requires Launchpad admin assistance to create a projectgroup
 and rename a project, so I'd like to get approval on it before moving to
 ask them for help.
 
 Comments, thoughts ?
 
 [1] https://blueprints.launchpad.net/openstack
 [2] https://bugs.launchpad.net/openstack
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new testtools breaking gate

2014-08-22 Thread Duncan Thomas
At least for glance, the tox fix and the double setup problem are both
blocking the gate, so it isn't possible to fix cleanly, since both
issues need to be fixed in one commit - I think the correct thing is
to merge https://review.openstack.org/#/c/116267/ and give projects
time to fix up their issues cleanly.

On 22 August 2014 13:55, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Hi all,

 this week has been quite bumpy for unit testing in the gate. First, an
 upgrade to a new 'tox' version broke quite a few branches.

 And today a new testtools, 0.9.36, was released and was picked up by the
 gate, which resulted in the following unit test failures in multiple projects:

 TestCase.setUp was already called. Do not explicitly call setUp from
 your tests. In your own setUp, use super to call the base setUp.

 All branches are affected: havana, icehouse, and master.

 This is because the following check was released with the new version
 of the library:
 https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517

 And the temporary fix is to merge the version pin patch in global
 requirements, backport it to stable branches, and merge the updates
 from Openstack Proposal Bot to all affected projects. The patch for
 master requirements is: https://review.openstack.org/#/c/116267/

 In the meantime, projects will need to fix their tests not to call
 setUp() and tearDown() twice. This will be the requirement to unpin
 the version of the library.
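
 A sketch of the pattern the new check enforces (class and attribute names
 made up):

     import testtools

     class MyTestCase(testtools.TestCase):
         def setUp(self):
             # Call the base setUp exactly once, via super(); calling it
             # a second time is what now raises "TestCase.setUp was
             # already called".
             super(MyTestCase, self).setUp()
             self.resource = object()  # per-test fixtures go here

         def test_resource_exists(self):
             # Never call self.setUp() or self.tearDown() from a test
             # body; the test runner invokes them.
             self.assertIsNotNone(self.resource)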

 So, please review, backport, and make sure it lands in project
 requirements files.

 /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-22 Thread Solly Ross
(response inline)

- Original Message -
 From: Daniel P. Berrange berra...@redhat.com
 To: Solly Ross sr...@redhat.com
 Cc: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, August 21, 2014 11:23:17 AM
 Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to 
 libvirt in unit tests
 
 On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:
  (reply inline)
  
  - Original Message -
   From: Daniel P. Berrange berra...@redhat.com
   To: OpenStack Development Mailing List (not for usage questions)
   openstack-dev@lists.openstack.org
   Sent: Thursday, August 21, 2014 11:05:18 AM
   Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to
   libvirt in unit tests
   
   On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:
FYI, the context of this is that I would like to be able to test some
of the libvirt storage pool code against a live file system, as we
currently test the storage pool code.  To do this, we need at least to
be able to get a proper connection to a session daemon.  IMHO, since
these calls aren't expensive, so to speak, it should be fine to have
them run against a real libvirt.
   
   No it really isn't OK to run against the real libvirt host system when
   in the unit tests. Unit tests must *not* rely on external system state
   in this way because it will lead to greater instability and unreliability
   of our unit tests. If you want to test stuff against the real libvirt
   storage pools then that becomes a functional / integration test suite
   which is pretty much what tempest is targeting.
  
  That's all well and good, but we *currently* manipulate the actual file
  system manually in tests.  Should we then say that we should never
  manipulate
  the actual file system either?  In that case, there are some tests which
  need to be refactored.
 
 Places where the tests manipulate the filesystem though should be doing
 so in an isolated playpen directory, not in the live location where
 a deployed nova runs, so that's not the same thing.

Ah, but in the case I mentioned before, we're dealing with storage pools,
which can just be created in the playpen directory.  In that case, libvirt
is simply acting as a library for filesystem access.  To further ensure
isolation, you could also connect to a session daemon instead of a system
daemon.

I'm of the opinion that requiring some form of libvirt to be installed to
run the *libvirt* unit tests isn't actually that big of a deal, since you
can build libvirt without extra stuff and get a libvirt that has just
enough for you to test against.  Generally it's the developers that will
be running the unit tests (and the CI), and if a developer is running the
libvirt unit tests, he or she is probably developing for the libvirt
driver, and thus should probably have libvirt installed in some form.

 
 So if we require libvirt-python for tests and that requires
 libvirt-bin, what's stopping us from just removing fakelibvirt since
 it's kind of useless now anyway, right?

The thing about fakelibvirt is that it allows us to operate against
a libvirt API without actually doing libvirt-y things like
launching VMs.  Now, libvirt does have a test:///default URI that
IIRC has similar functionality, so we could start to phase out fake
libvirt in favor of that.  However, there are probably still some
spots where we'll want to use fakelibvirt.
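
For reference, a quick sketch of what the test driver provides (assuming
libvirt-python is importable; no host state is touched):

    import libvirt

    # test:///default is libvirt's built-in mock hypervisor: it starts
    # with one predefined running guest and keeps all state in memory.
    conn = libvirt.open('test:///default')
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        print(dom.name(), dom.info())  # the stock guest is named 'test'
    conn.close()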
   
   I'm actually increasingly of the opinion that we should not in fact
   be trying to use the real libvirt library in the unit tests at all
    as it is not really adding any value. We typically mock out all the
   actual API calls we exercise so despite using libvirt-python we
   are not in fact exercising its code or even validating that we're
   passing the correct numbers of parameters to API calls. Pretty much
    all we are really relying on is the existence of the various global
   constants that are defined, and that has been nothing but trouble
   because the constants may or may not be defined depending on the
   version.
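
The usual defensive pattern for those version-dependent constants, for what
it's worth, is a getattr() fallback to the known numeric value - a sketch
using one real constant:

    import libvirt

    # VIR_DOMAIN_PMSUSPENDED only exists in newer libvirt-python builds;
    # the underlying enum value (7) is stable, so fall back to it.
    VIR_DOMAIN_PMSUSPENDED = getattr(libvirt, 'VIR_DOMAIN_PMSUSPENDED', 7)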
  
  Isn't that what 'test:///default' is supposed to be?  A version of libvirt
  that doesn't actually touch the rest of the system?
 
 Yes, that is what it allows for, however, even if we used that URI we
 still wouldn't be actually exercising any of the libvirt code in any
 meaningful way because our unit tests mock out all the API calls that
 get touched. So using libvirt-python + test:///default URI doesn't
 really seem to buy us anything, but it does still mean that developers
 need to have libvirt installed in order to run  the unit tests. I'm
 not convinced that is a beneficial tradeoff.

I think it would make writing unit tests easier, because you don't have
to worry about making sure that the fakelibvirt implementation matches
the real libvirt implementation, and you don't have to 

Re: [openstack-dev] [QA] Picking a Name for the Tempest Library

2014-08-22 Thread Matthew Treinish
On Fri, Aug 15, 2014 at 03:14:21PM -0400, Matthew Treinish wrote:
 Hi Everyone,
 
 So as part of splitting out common functionality from tempest into a library 
 [1]
 we need to create a new repository. Which means we have the fun task of coming
 up with something to name it. I personally thought we should call it:
 
  - mesocyclone
 
 Which has the advantage of being a cloud/weather thing, and the name sort of
 fits because it's a precursor to a tornado. Also, it's an available namespace
 on both launchpad and pypi. But concern has been expressed both that it is a
 bit on the long side (which might have 80 char line length implications) and
 that it's unclear from the name what it does.
 
 During the last QA meeting some alternatives were also brought up:
 
  - tempest-lib / lib-tempest
  - tsepmet
  - blackstorm
  - calm
  - tempit
  - integration-test-lib
 
 (although I'm not entirely sure I remember which ones were serious suggestions
 or just jokes)
 
 So as a first step I figured that I'd bring it up on the ML to see if anyone
 had any other suggestions (or maybe get a consensus around one choice). I'll
 take the list, check if the namespaces are available, and make a survey so
 that everyone can vote, and hopefully we'll have a clear choice for a name
 from that.
 

Since the consensus was for renaming tempest and making tempest the library
name, which wasn't really feasible, I opened up a survey to poll everyone on
which name to use:

https://www.surveymonkey.com/s/RLLZRGJ

The choices were taken from the initial list I posted and from the suggestions
which people posted based on the availability of the names.

I'll keep it open for about a week, or until a clear favorite emerges.

-Matt Treinish



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Non-readonly connection to libvirt in unit tests

2014-08-22 Thread Daniel P. Berrange
On Fri, Aug 22, 2014 at 11:32:31AM -0400, Solly Ross wrote:
 (response inline)
 
 - Original Message -
  From: Daniel P. Berrange berra...@redhat.com
  To: Solly Ross sr...@redhat.com
  Cc: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Thursday, August 21, 2014 11:23:17 AM
  Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to 
  libvirt in unit tests
  
  On Thu, Aug 21, 2014 at 11:14:33AM -0400, Solly Ross wrote:
   (reply inline)
   
   - Original Message -
From: Daniel P. Berrange berra...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Thursday, August 21, 2014 11:05:18 AM
Subject: Re: [openstack-dev] [nova][libvirt] Non-readonly connection to
libvirt in unit tests

On Thu, Aug 21, 2014 at 10:52:42AM -0400, Solly Ross wrote:
 FYI, the context of this is that I would like to be able to test some
 of the libvirt storage pool code against a live file system, as we
 currently test the storage pool code.  To do this, we need at least to
 be able to get a proper connection to a session daemon.  IMHO, since
 these calls aren't expensive, so to speak, it should be fine to have
 them run against a real libvirt.

No it really isn't OK to run against the real libvirt host system when
in the unit tests. Unit tests must *not* rely on external system state
in this way because it will lead to greater instability and 
unreliability
of our unit tests. If you want to test stuff against the real libvirt
storage pools then that becomes a functional / integration test suite
 which is pretty much what tempest is targeting.
   
   That's all well and good, but we *currently* manipulate the actual file
   system manually in tests.  Should we then say that we should never
   manipulate
   the actual file system either?  In that case, there are some tests which
   need to be refactored.
  
  Places where the tests manipulate the filesystem though should be doing
  so in an isolated playpen directory, not in the live location where
  a deployed nova runs, so that's not the same thing.
 
 Ah, but in the case I mentioned before, we're dealing with storage pools,
 which can just be created in the playpen directory.  In that case, libvirt
 is simply acting as a library for filesystem access.  To further ensure
 isolation, you could also connect to a session daemon instead of a system
 daemon.
 
 I'm of the opinion that requiring some form of libvirt to be installed to
 run the *libvirt* unit tests isn't actually that big of a deal, since you
 can build libvirt without extra stuff and get a libvirt that has just
 enough for you to test against.  Generally it's the developers that will
 be running the unit tests (and the CI), and if a developer is running the
 libvirt unit tests, he or she is probably developing for the libvirt
 driver, and thus should probably have libvirt installed in some form.

The unit tests are run regardless of whether the developer is working on
libvirt or not. The more libvirt setup we require for the tests the more
pain we're inflicting on non-libvirt developers. That is a big deal for
them.

I'm actually increasingly of the opinion that we should not in fact
be trying to use the real libvirt library in the unit tests at all
as it is not really adding any value. We typically mock out all the
actual API calls we exercise so despite using libvirt-python we
are not in fact exercising its code or even validating that we're
passing the correct numbers of parameters to API calls. Pretty much
all we are really relying on is the existence of the various global
constants that are defined, and that has been nothing but trouble
because the constants may or may not be defined depending on the
version.
   
   Isn't that what 'test:///default' is supposed to be?  A version of libvirt
   that doesn't actually touch the rest of the system?
  
  Yes, that is what it allows for, however, even if we used that URI we
  still wouldn't be actually exercising any of the libvirt code in any
  meaningful way because our unit tests mock out all the API calls that
  get touched. So using libvirt-python + test:///default URI doesn't
  really seem to buy us anything, but it does still mean that developers
  need to have libvirt installed in order to run  the unit tests. I'm
  not convinced that is a beneficial tradeoff.
 
 I think it would make writing unit tests easier, because you don't have
 to worry about making sure that the fakelibvirt implementation matches
 the real libvirt implementation, and you don't have to go adding extra
 methods to fakelibvirt to get things to work.

Those problems are all artifacts of the way fakelibvirt is /currently/
written. You can easily solve them by auto-generating the stub classes
and methods 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Duncan Thomas
On 21 August 2014 19:39, gordon chung g...@live.ca wrote:
 from the pov of a project that seems to be brought up constantly and maybe
 it's my naivety, i don't really understand the fascination with branding and
 the stigma people have placed on non-'openstack'/stackforge projects. it
 can't be a legal thing because i've gone through that potential mess. also,
 it's just as easy to contribute to 'non-openstack' projects as 'openstack'
 projects (even easier if we're honest).

It may be easier for you, but it certainly isn't inside big companies,
e.g. HP have pretty broad approvals for contributing to (official)
openstack projects, whereas individual approval may be needed to
contribute to non-openstack projects.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Mark McLoughlin
On Fri, 2014-08-22 at 11:01 -0400, Zane Bitter wrote:

 I don't see that as something the wider OpenStack community needs to 
 dictate. We have a heavyweight election process for PTLs once every 
 cycle because that used to be the process for electing the TC. Now that 
 it no longer serves this dual purpose, PTL elections have outlived their 
 usefulness.
 
 If projects want to have a designated tech lead, let them. If they want 
 to have the lead elected in a form of representative democracy, let 
 them. But there's no need to impose that process on every project. If 
 they want to rotate the tech lead every week instead of every 6 months, 
 why not let them? We'll soon see from experimentation which models work. 
 Let a thousand flowers bloom, c.

I like the idea of projects being free to experiment with their
governance rather than the TC mandating detailed governance models from
above.

But I also like the way Thierry is taking a trend we're seeing work out
well across multiple projects, and generalizing it. If individual
projects are to adopt explicit PTL duty delegation, then all the better
if those projects adopt it in similar ways.

i.e. this should turn out to be an optional best practice model that
projects can choose to adopt, in much the way the *-specs repo idea took
hold.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Mooney, Sean K
I would have to agree with Thomas.
Many organizations have already worked out strategies and have processes in
place to cover contributing to OpenStack which cover all official projects.
Contributing to additional non-OpenStack projects may introduce additional
barriers in large organizations, which require IP plan/legal approval on a
per-project basis.

Regards
sean 
-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, August 22, 2014 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] The future of the integrated release

On 21 August 2014 19:39, gordon chung g...@live.ca wrote:
 from the pov of a project that seems to be brought up constantly and 
 maybe it's my naivety, i don't really understand the fascination with 
 branding and the stigma people have placed on 
 non-'openstack'/stackforge projects. it can't be a legal thing because 
 i've gone through that potential mess. also, it's just as easy to contribute 
 to 'non-openstack' projects as 'openstack'
 projects (even easier if we're honest).

It may be easier for you, but it certainly isn't inside big companies, e.g. HP
have pretty broad approvals for contributing to (official) openstack projects,
whereas individual approval may be needed to contribute to non-openstack
projects.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral API v2 - discussion

2014-08-22 Thread Renat Akhmerov
Dmitri,

Thanks for sharing this.

We discussed the details of what's captured in the etherpad with the team
today, and it looks like folks support most of the ideas we came up with.
Anyway, let’s take a couple of days until the next team meeting on Monday to
think it over, discuss it all together, and make corrections if needed.


Renat Akhmerov
@ Mirantis Inc.



On 22 Aug 2014, at 11:12, Dmitri Zimine dzim...@stackstorm.com wrote:

 Hi Stackers, 
 
 we are discussing the API, v2. 
 
 The core team captured the initial thoughts 
 
 here (most recent): 
 https://etherpad.openstack.org/p/mistral-API-v2-discussion
 
 and here: 
 https://docs.google.com/a/stackstorm.com/document/d/12j66DZiyJahdnV8zXM_fCRJb0qxpsVS_eyhZvdHlIVU/edit
 
 Have a look, leave your comments, questions, suggestions. 
 On Monday’s IRC meeting we’ll talk more and finalize the draft. 
 
 Thanks, 
 
 DZ 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Jay Pipes

On 08/22/2014 08:33 AM, Thierry Carrez wrote:

Hi everyone,

We all know being a project PTL is an extremely busy job. That's because
in our structure the PTL is responsible for almost everything in a project:

- Release management contact
- Work prioritization
- Keeping bugs under control
- Communicate about work being planned or done
- Make sure the gate is not broken
- Team logistics (run meetings, organize sprints)
- ...

They end up being completely drowned in those day-to-day operational
duties, miss the big picture, can't help in development that much
anymore, get burnt out. Since you're either the PTL or not the PTL,
you're very alone and succession planning is not working that great either.

There have been a number of experiments to solve that problem. John
Garbutt has done an incredible job at helping successive Nova PTLs
handling the release management aspect. Tracy Jones took over Nova bug
management. Doug Hellmann successfully introduced the concept of Oslo
liaisons to get clear point of contacts for Oslo library adoption in
projects. It may be time to generalize that solution.

The issue is one of responsibility: the PTL is ultimately responsible
for everything in a project. If we can more formally delegate that
responsibility, we can avoid getting up to the PTL for everything, we
can rely on a team of people rather than just one person.

Enter the Czar system: each project should have a number of liaisons /
official contacts / delegates that are fully responsible to cover one
aspect of the project. We need to have Bugs czars, which are responsible
for getting bugs under control. We need to have Oslo czars, which serve
as liaisons for the Oslo program but also as active project-local oslo
advocates. We need Security czars, which the VMT can go to to progress
quickly on plugging vulnerabilities. We need release management czars,
to handle the communication and process with that painful OpenStack
release manager. We need Gate czars to serve as first-line-of-contact
getting gate issues fixed... You get the idea.

Some people can be czars of multiple areas. PTLs can retain some czar
activity if they wish. Czars can collaborate with their equivalents in
other projects to share best practices. We just need a clear list of
areas/duties and make sure each project has a name assigned to each.

Now, why czars ? Why not rely on informal activity ? Well, for that
system to work we'll need a lot of people to step up and sign up for
more responsibility. Making them czars makes sure that effort is
recognized and gives them something back. Also if we don't formally
designate people, we can't really delegate and the PTL will still be
directly held responsible. The Release management czar should be able to
sign off release SHAs without asking the PTL. The czars and the PTL
should collectively be the new project drivers.

At that point, why not also get rid of the PTL ? And replace him with a
team of czars ? If the czar system is successful, the PTL should be
freed from the day-to-day operational duties and will be able to focus
on the project health again. We still need someone to keep an eye on the
project-wide picture and coordinate the work of the czars. We need
someone to pick czars, in the event multiple candidates sign up. We also
still need someone to have the final say in case of deadlocked issues.

People say we don't have that many deadlocks in OpenStack for which the
PTL ultimate power is needed, so we could get rid of them. I'd argue
that the main reason we don't have that many deadlocks in OpenStack is
precisely *because* we have a system to break them if they arise. That
encourages everyone to find a lazy consensus. That part of the PTL job
works. Let's fix the part that doesn't work (scaling/burnout).


I think the czars approach is sensible and seems to have worked pretty 
well in a couple projects so far.


And, since I work for a software company with Russian origin, I support 
the term czar as well ;)


On the topic of whether a PTL is still needed once a czar system is put 
in place, I think that should be left up to each individual project to 
decide.


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread John Griffith
On Fri, Aug 22, 2014 at 10:51 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/22/2014 08:33 AM, Thierry Carrez wrote:

 Hi everyone,

 We all know being a project PTL is an extremely busy job. That's because
 in our structure the PTL is responsible for almost everything in a
 project:

 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...

 They end up being completely drowned in those day-to-day operational
 duties, miss the big picture, can't help in development that much
 anymore, get burnt out. Since you're either the PTL or not the PTL,
 you're very alone and succession planning is not working that great
 either.

 There have been a number of experiments to solve that problem. John
 Garbutt has done an incredible job at helping successive Nova PTLs
 handling the release management aspect. Tracy Jones took over Nova bug
 management. Doug Hellmann successfully introduced the concept of Oslo
 liaisons to get clear point of contacts for Oslo library adoption in
 projects. It may be time to generalize that solution.

 The issue is one of responsibility: the PTL is ultimately responsible
 for everything in a project. If we can more formally delegate that
 responsibility, we can avoid getting up to the PTL for everything, we
 can rely on a team of people rather than just one person.

 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, who are responsible
 for getting bugs under control. We need to have Oslo czars, who serve
 as liaisons for the Oslo program but also as active project-local Oslo
 advocates. We need Security czars, whom the VMT can go to in order to
 progress quickly on plugging vulnerabilities. We need release management
 czars to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.

 Some people can be czars of multiple areas. PTLs can retain some czar
 activity if they wish. Czars can collaborate with their equivalents in
 other projects to share best practices. We just need a clear list of
 areas/duties and make sure each project has a name assigned to each.

 Now, why czars? Why not rely on informal activity? Well, for that
 system to work we'll need a lot of people to step up and sign up for
 more responsibility. Making them czars makes sure that effort is
 recognized and gives them something back. Also if we don't formally
 designate people, we can't really delegate and the PTL will still be
 directly held responsible. The Release management czar should be able to
 sign off release SHAs without asking the PTL. The czars and the PTL
 should collectively be the new project drivers.

 At that point, why not also get rid of the PTL? And replace him with a
 team of czars? If the czar system is successful, the PTL should be
 freed from the day-to-day operational duties and will be able to focus
 on the project health again. We still need someone to keep an eye on the
 project-wide picture and coordinate the work of the czars. We need
 someone to pick czars, in the event multiple candidates sign up. We also
 still need someone to have the final say in case of deadlocked issues.

 People say we don't have that many deadlocks in OpenStack for which the
 PTL ultimate power is needed, so we could get rid of them. I'd argue
 that the main reason we don't have that many deadlocks in OpenStack is
 precisely *because* we have a system to break them if they arise. That
 encourages everyone to find a lazy consensus. That part of the PTL job
 works. Let's fix the part that doesn't work (scaling/burnout).


 I think the czars approach is sensible and seems to have worked pretty
 well in a couple projects so far.

 And, since I work for a software company with Russian origin, I support
 the term czar as well ;)

 On the topic of whether a PTL is still needed once a czar system is put in
 place, I think that should be left up to each individual project to decide.

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1 to czars
+1 to considering the future and role of TC versus future of PTL
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Michael Chapman
On Fri, Aug 22, 2014 at 9:51 PM, Sean Dague s...@dague.net wrote:

 On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
  On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com
  mailto:jaypi...@gmail.com wrote:
 
  On 08/19/2014 11:28 PM, Robert Collins wrote:
 
  On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com
  mailto:jaypi...@gmail.com wrote:
  ...
 
  I'd like to see more unification of implementations in
  TripleO - but I
  still believe our basic principle of using OpenStack
  technologies that
  already exist in preference to third party ones is still
  sound, and
  offers substantial dogfood and virtuous circle benefits.
 
 
 
  No doubt Triple-O serves a valuable dogfood and virtuous
  cycle purpose.
  However, I would move that the Deployment Program should
  welcome the many
  projects currently in the stackforge/ code namespace that do
  deployment of
  OpenStack using traditional configuration management tools
  like Chef,
  Puppet, and Ansible. It cannot be argued that these
  configuration management
  systems are the de-facto way that OpenStack is deployed
  outside of HP, and
  they belong in the Deployment Program, IMO.
 
 
  I think you mean it 'can be argued'... ;).
 
 
  No, I definitely mean cannot be argued :) HP is the only company I
  know of that is deploying OpenStack using Triple-O. The vast
  majority of deployers I know of are deploying OpenStack using
  configuration management platforms and various systems or glue code
  for baremetal provisioning.
 
  Note that I am not saying that Triple-O is bad in any way! I'm only
  saying that it does not represent the way that the majority of
  real-world deployments are done.
 
 
   And I'd be happy if folk in
 
  those communities want to join in the deployment program and
  have code
  repositories in openstack/. To date, none have asked.
 
 
  My point in this thread has been and continues to be that by having
  the TC bless a certain project as The OpenStack Way of X, that we
  implicitly are saying to other valid alternatives Sorry, no need to
  apply here..
 
 
  As a TC member, I would welcome someone from the Chef
  community proposing
  the Chef cookbooks for inclusion in the Deployment program,
  to live under
  the openstack/ code namespace. Same for the Puppet modules.
 
 
  While you may personally welcome the Chef community to propose
  joining the deployment Program and living under the openstack/ code
  namespace, I'm just saying that the impression our governance model
  and policies create is one of exclusion, not inclusion. Hope that
  clarifies better what I've been getting at.
 
 
 
  (As one of the core reviewers for the Puppet modules)
 
  Without a standardised package build process it's quite difficult to
  test trunk Puppet modules vs trunk official projects. This means we cut
  release branches some time after the projects themselves to give people
  a chance to test. Until this changes and the modules can be released
  with the same cadence as the integrated release I believe they should
  remain on Stackforge.
 
  In addition and perhaps as a consequence, there isn't any public
  integration testing at this time for the modules, although I know some
  parties have developed and maintain their own.
 
  The Chef modules may be in a different state, but it's hard for me to
  recommend the Puppet modules become part of an official program at this
  stage.

 Is the focus of the Puppet modules only stable releases with packages?



We try to target puppet module master at upstream OpenStack master, but
without CI/CD we fall behind. The missing piece is building packages and
creating a local repo before doing the puppet run, which I'm working on
slowly as I want a single system for both deb and rpm that doesn't make my
eyes bleed. fpm and pleaserun are the two key tools here.


 Puppet + git based deploys would be honestly a really handy thing
 (especially as lots of people end up having custom fixes for their
 site). The lack of CM tools for git based deploys is I think one of the
 reasons we've seen people using DevStack as a generic installer.


It's possible but it's also straight up a poor thing to do in my opinion.
If you're going to install nova from source, maybe you also want libvirt
from source to test a new feature, then you want some of libvirt's deps and
so on. Puppet isn't equipped to deal with this effectively. It runs yum
install x, and that brings in the dependencies.

It's much better to automate the package building process and 

[openstack-dev] [Cinder][third-party] How to make progress on cinder driver CI post J3

2014-08-22 Thread Asselin, Ramy
Many of us are still working on setting up and testing 3rd party CI for cinder
drivers.

None of them currently have Gerrit +1/-1 voting, but I heard there's a plan to
disable those that are not working post-J3 (please correct me if I
misunderstood).

However, post-J3 is a great time to work on 3rd party CI for a few reasons:

1.   Increases the overall QA effort when it's most needed.

2.   Many contributors have more time available since feature work is 
complete.

Since erroneous failing tests from non-voting CI accounts can still distract
reviewers, I'd like to propose a compromise:
Marking 3rd party CI jobs still undergoing testing and stabilization as
(non-voting).
e.g.
dsvm-tempest-my-driver http://15.126.198.151/67/106567/30/check/dsvm-tempest-hp-lefthand/a212407

SUCCESS in 39m 27s (non-voting)

dsvm-tempest-my-driver

FAILURE in 39m 54s (non-voting)


This way, progress can still be made by cinder vendors working on setting up
3rd party CI under 'real' load post-J3, while minimizing distractions for cinder
reviewers and other stakeholders.
I think this is consistent with how the OpenStack Jenkins CI marks 
potentially unstable jobs.

Please share your thoughts.

Thanks,
Ramy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] new testtools breaking gate

2014-08-22 Thread Clark Boylan
On Fri, Aug 22, 2014, at 05:55 AM, Ihar Hrachyshka wrote:
 Hi all,
 
 this week is quite bumpy for unit testing in gate. First, it was
 upgrade to new 'tox' version that broke quite some branches.
 
I did a ton of work to make the tox upgrade go smoothly because we knew
it would be somewhat painful. About a month ago I sent mail to this list
[0] describing the problem. This thread included a pointer to the bug
filed to track this [1] and example workaround changes [2], which I
wrote and proposed for as many projects and branches as I had time to
test at that point.

Updating tox to 1.7.2 is important for a couple of reasons. We get a lot
of confused developers wondering why using tox doesn't work to run their
tests when all of our documentation says to just run tox. Well, you needed
a special version (1.6.1), and communicating that to everyone who tries to
run tox is hard.

It is also important because tox adds new features like the hash seed
randomization. This is the cause of our problems, but it is exposing real
bugs in OpenStack [3]. We should be fixing these issues, and hopefully my
proposed workarounds are only temporary.
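For reference, the workarounds amount to pinning the previously-default
hash seed in each affected project's tox.ini until the ordering-sensitive
tests are fixed. Roughly (a sketch, not the exact change in each review):

    [testenv]
    setenv = PYTHONHASHSEED=0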

I decided to push ahead [4] and upgrade tox a couple of days ago for a
couple of reasons. This is an important change, as illustrated above, and
with feature freeze and stabilization rapidly approaching it probably
needed to get in soon to have a chance of getting in at all. I felt this
was appropriate because I had done a ton of work beforehand to make
things go as smoothly as possible.

Where things did not go smoothly was on the reviews for my workaround.
Some changes were basically ignored [5]; others ran into procedural
paperwork associated with stable branches that is not quite appropriate
for changes of this type [6][7]. I get that generally we only want to
backport things from master and that we have a specific way to cherry-pick
things, but this type of change addresses issues with stable/foo directly
and has nothing to do with master. I did eventually go through the
backport dance for most of these changes, despite them not actually
being true backports.

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041283.html
[1] https://bugs.launchpad.net/cinder/+bug/1348818
[2] https://review.openstack.org/#/c/109700/
[3]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041496.html
[4]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/042010.html
[5] https://review.openstack.org/#/c/109749/
[6] https://review.openstack.org/#/c/109759/
[7] https://review.openstack.org/#/c/109750/

With all of that out of the way, are there suggestions for how we can do
this better next time? Do we need more time (I gave us about 4 weeks,
which seemed like plenty to me)? Perhaps I should send more reminder
emails? Feedback is very welcome.

Thanks,
Clark

 And today new testtools 0.9.36 were released and were caught by gate,
 which resulted in the following unit test failures in multiple projects:
 
 TestCase.setUp was already called. Do not explicitly call setUp from
 your tests. In your own setUp, use super to call the base setUp.
 
 All branches are affected: havana, icehouse, and master.
 
 This is because the following check was released with the new version
 of the library:
 https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517
 
 And the temporary fix is to merge the version pin patch in global
 requirements, backport it to stable branches, and merge the updates
 from Openstack Proposal Bot to all affected projects. The patch for
 master requirements is: https://review.openstack.org/#/c/116267/
 
 In the meantime, projects will need to fix their tests not to call
 setUp() and tearDown() twice. This will be the requirement to unpin
 the version of the library.
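  To illustrate, the failing pattern and its fix look roughly like this
  (a sketch; real base classes vary per project):

      import testtools

      class GoodTest(testtools.TestCase):
          def setUp(self):
              # Correct: run the base setUp exactly once, via super().
              super(GoodTest, self).setUp()
              self.resource = object()

      class BadTest(testtools.TestCase):
          def setUp(self):
              super(BadTest, self).setUp()
              # Broken: this second, explicit call is what testtools
              # 0.9.36 now rejects with the error quoted above.
              testtools.TestCase.setUp(self)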
 
 So, please review, backport, and make sure it lands in project
 requirements files.
 
 /Ihar
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Zane Bitter

On 22/08/14 12:45, Dolph Mathews wrote:

I'm all for getting a final decision, but a 'final' decision that has been
imposed from outside rather than internalised by the participants is...
rarely final.


The expectation of a PTL isn't to stomp around and make final decisions,
it's to step in when necessary and help both sides find the best solution.
To moderate.


Oh sure, but that's not just the PTL's job. That's everyone's job. Don't 
you think?


I did that before I was the PTL and will continue to do it after I'm no 
longer the PTL. And if anyone in (especially) the core or wider Heat 
team sees an opportunity to step in and moderate a disagreement, I 
certainly expect them to take it and not wait for me to step in.


I'm not calling for no leadership here - I'm calling for leadership from 
_everyone_, not just from one person who holds a particular role.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [TripleO] How to gracefully quiesce a box?

2014-08-22 Thread Clint Byrum
It has been brought to my attention that Ironic uses the biggest hammer
in the IPMI toolbox to control chassis power:

https://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ipminative.py#n142

Which is

ret = ipmicmd.set_power('off', wait)

This is the most abrupt form, where the system power is simply flipped
off at a hardware level. The equivalent of a short press on the power
button would be 'shutdown' instead of 'off'.

I also understand that this has been brought up before, and that the
answer given was 'SSH in and shut it down yourself'. I can respect that
position, but I have run into a bit of a pickle using it. Observe:

- ssh box.ip poweroff
- poll ironic until power state is off.
  - This is a race. Ironic is asserting the power. As soon as it sees
that the power is off, it will turn it back on.

- ssh box.ip halt
  - NO way to know that this has worked. Once SSH is off and the network
stack is gone, I cannot actually verify that the disks were
unmounted properly, which is the primary area of concern that I
have.

This is particularly important if I'm issuing a rebuild + preserve
ephemeral, as it is likely I will have lots of I/O going on, and I want
to make sure that it is all quiesced before I reboot to replace the
software.

Perhaps I missed something. If so, please do educate me on how I can
achieve this without hacking around it. Currently my workaround is to
manually unmount the state partition, which is something system shutdown
is supposed to do and may become problematic if system processes are
holding it open.

It seems to me that Ironic should at least try to use the graceful
shutdown. There can be a timeout, but it would need to be something a user
can disable, so that if graceful shutdown never works we never just cut
the power on the box. Even a journaled filesystem will take quite a while
to recover with a full fsck.
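A minimal sketch of what that could look like (assuming pyghmi accepts
'shutdown' as the ACPI soft-off request and a numeric wait in seconds;
this is not the actual Ironic code):

    def power_off_gracefully(ipmicmd, soft_timeout=120):
        # Request an ACPI soft shutdown first, polling up to
        # soft_timeout seconds for the state change to complete.
        ipmicmd.set_power('shutdown', soft_timeout)
        if ipmicmd.get_power().get('powerstate') == 'off':
            return 'off'
        # Graceful shutdown didn't finish in time. Arguably this is
        # where an ERROR state belongs; the hard 'off' fallback should
        # be something the user can disable.
        ipmicmd.set_power('off', True)
        return ipmicmd.get_power().get('powerstate')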

The inability to gracefully shut down in a reasonable amount of time
is an error state really, and I need to go to the box and inspect it,
which is precisely the reason we have ERROR states.

Thanks for your time. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Joshua Harlow
Comment inline.

On Aug 22, 2014, at 10:13 AM, Michael Chapman wop...@gmail.com wrote:

 
 
 
 On Fri, Aug 22, 2014 at 9:51 PM, Sean Dague s...@dague.net wrote:
 On 08/22/2014 01:30 AM, Michael Chapman wrote:
 
 
 
  On Fri, Aug 22, 2014 at 2:57 AM, Jay Pipes jaypi...@gmail.com
  mailto:jaypi...@gmail.com wrote:
 
  On 08/19/2014 11:28 PM, Robert Collins wrote:
 
  On 20 August 2014 02:37, Jay Pipes jaypi...@gmail.com
  mailto:jaypi...@gmail.com wrote:
  ...
 
  I'd like to see more unification of implementations in
  TripleO - but I
  still believe our basic principle of using OpenStack
  technologies that
  already exist in preference to third party ones is still
  sound, and
  offers substantial dogfood and virtuous circle benefits.
 
 
 
  No doubt Triple-O serves a valuable dogfood and virtuous
  cycle purpose.
  However, I would move that the Deployment Program should
  welcome the many
  projects currently in the stackforge/ code namespace that do
  deployment of
  OpenStack using traditional configuration management tools
  like Chef,
  Puppet, and Ansible. It cannot be argued that these
  configuration management
  systems are the de-facto way that OpenStack is deployed
  outside of HP, and
  they belong in the Deployment Program, IMO.
 
 
  I think you mean it 'can be argued'... ;).
 
 
  No, I definitely mean cannot be argued :) HP is the only company I
  know of that is deploying OpenStack using Triple-O. The vast
  majority of deployers I know of are deploying OpenStack using
  configuration management platforms and various systems or glue code
  for baremetal provisioning.
 
  Note that I am not saying that Triple-O is bad in any way! I'm only
  saying that it does not represent the way that the majority of
  real-world deployments are done.
 
 
   And I'd be happy if folk in
 
  those communities want to join in the deployment program and
  have code
  repositories in openstack/. To date, none have asked.
 
 
  My point in this thread has been and continues to be that by having
  the TC bless a certain project as The OpenStack Way of X, that we
  implicitly are saying to other valid alternatives Sorry, no need to
  apply here..
 
 
  As a TC member, I would welcome someone from the Chef
  community proposing
  the Chef cookbooks for inclusion in the Deployment program,
  to live under
  the openstack/ code namespace. Same for the Puppet modules.
 
 
  While you may personally welcome the Chef community to propose
  joining the deployment Program and living under the openstack/ code
  namespace, I'm just saying that the impression our governance model
  and policies create is one of exclusion, not inclusion. Hope that
  clarifies better what I've been getting at.
 
 
 
  (As one of the core reviewers for the Puppet modules)
 
  Without a standardised package build process it's quite difficult to
  test trunk Puppet modules vs trunk official projects. This means we cut
  release branches some time after the projects themselves to give people
  a chance to test. Until this changes and the modules can be released
  with the same cadence as the integrated release I believe they should
  remain on Stackforge.
 
  In addition and perhaps as a consequence, there isn't any public
  integration testing at this time for the modules, although I know some
  parties have developed and maintain their own.
 
  The Chef modules may be in a different state, but it's hard for me to
  recommend the Puppet modules become part of an official program at this
  stage.
 
 Is the focus of the Puppet modules only stable releases with packages?
 
 
 We try to target puppet module master at upstream OpenStack master, but 
 without CI/CD we fall behind. The missing piece is building packages and 
 creating a local repo before doing the puppet run, which I'm working on 
 slowly as I want a single system for both deb and rpm that doesn't make my 
 eyes bleed. fpm and pleaserun are the two key tools here.
  
 Puppet + git based deploys would be honestly a really handy thing
 (especially as lots of people end up having custom fixes for their
 site). The lack of CM tools for git based deploys is I think one of the
  reasons we've seen people using DevStack as a generic installer.
 
 
 It's possible but it's also straight up a poor thing to do in my opinion. If 
 you're going to install nova from source, maybe you also want libvirt from 
 source to test a new feature, then you want some of libvirt's deps and so on. 
 Puppet isn't equipped to deal with this effectively. It runs 

Re: [openstack-dev] [all] new testtools breaking gate

2014-08-22 Thread Flavio Percoco
On 08/22/2014 05:30 PM, Duncan Thomas wrote:
 At least for glance, the tox fix and the double setup problem are both
 blocking the gate, so it isn't possible to fix cleanly, since both
 issues need to be fixed in one commit - I think the correct thing is
 to merge https://review.openstack.org/#/c/116267/ and give projects
 time to fix up their issues cleanly.

We merged both fixes in a single patch. Once this [0] patch lands, Glance
shouldn't be blocked by either of these two issues anymore.

[0] https://review.openstack.org/#/c/109749/

Flavio

 
 On 22 August 2014 13:55, Ihar Hrachyshka ihrac...@redhat.com wrote:
 Hi all,
 
 this week is quite bumpy for unit testing in gate. First, it was
 upgrade to new 'tox' version that broke quite some branches.
 
 And today new testtools 0.9.36 were released and were caught by gate,
 which resulted in the following unit test failures in multiple projects:
 
 TestCase.setUp was already called. Do not explicitly call setUp from
 your tests. In your own setUp, use super to call the base setUp.
 
 All branches are affected: havana, icehouse, and master.
 
 This is because the following check was released with the new version
 of the library:
 https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517
 
 And the temporary fix is to merge the version pin patch in global
 requirements, backport it to stable branches, and merge the updates
 from Openstack Proposal Bot to all affected projects. The patch for
 master requirements is: https://review.openstack.org/#/c/116267/
 
 In the meantime, projects will need to fix their tests not to call
 setUp() and tearDown() twice. This will be the requirement to unpin
 the version of the library.
 
 So, please review, backport, and make sure it lands in project
 requirements files.
 
 /Ihar

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-22 Thread Edgar Magana
Excellent Job!

Thanks a lot, I have updated the wiki.

Edgar

From: Hemanth Ravi hemanthrav...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Thursday, August 21, 2014 at 1:37 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to
be run

Edgar,

CI name: One Convergence CI

The logs issue has been fixed and earlier tests were failing due to  
https://review.openstack.org/#/c/114146

For reference:

1. Please take a look at the vote on patchset 7 in 
https://review.openstack.org/#/c/114968/
2. Logs at https://www.dropbox.com/sh/czydzz5bn2rc2lp/AABZByV8UQUIqWaZSSrZvzvDa

Thanks,
-hemanth



On Thu, Aug 21, 2014 at 12:41 PM, Kevin Benton
blak...@gmail.com wrote:
Our system would get backed up with patches and sometimes take up to 10 hours 
to respond with results for a change.
We should establish some maximum acceptable time to get the results for a patch.


On Thu, Aug 21, 2014 at 11:59 AM, Edgar Magana
edgar.mag...@workday.com wrote:
Maximum time to vote, can you clarify?

Edgar

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Tuesday, August 19, 2014 at 1:11 PM

To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to 
be run


I also added two more nodes to our cluster to reduce the delay. Can we
establish a maximum time to vote on the wiki?

On Aug 19, 2014 9:39 AM, Edgar Magana
edgar.mag...@workday.com wrote:
Kevin,

I just verified, Thanks a lot.

Edgar

From: Kevin Benton blak...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Friday, August 15, 2014 at 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to 
be run

You didn't wait long enough for the Big Switch CI to reach your negative test.
:-)
Currently the CI system is backed up with about 100 patches, pushing responses
out ~22 hours.

This does bring up an old discussion that was never resolved.
What is the minimum expected response time for CI systems?


On Fri, Aug 15, 2014 at 3:35 PM, Edgar Magana
edgar.mag...@workday.com wrote:
Team,

I did a quick audit on the Neutron CI. Very sad results. Only a few plugins
and drivers are running properly and testing all Neutron commits.
I created a report here:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers


We will discuss the actions to take at the next Neutron IRC meeting. So
please reach out to me to clarify the status of your CI.
I had two commits to quickly verify the CI reliability:

https://review.openstack.org/#/c/114393/

https://review.openstack.org/#/c/40296/


I would expect all plugins and drivers to pass on the first one and
fail on the second, but I got so many surprises.

Neutron code quality and reliability are a top priority; if you ignore this
report, your plugin/driver will be a candidate for removal from the Neutron tree.

Cheers,

Edgar

P.s. I hate to be the inquisitor here… but someone has to do the dirty job!


On 8/14/14, 8:30 AM, Kyle Mestery
mest...@mestery.com wrote:

Folks, I'm not sure if all CI accounts are running sufficient tests.
Per the requirements wiki page here [1], everyone needs to be running
more than just Tempest API tests, which I still see most neutron
third-party CI setups doing. I'd like to ask everyone who operates a
third-party CI account for Neutron to please look at the link below
and make sure you are running appropriate tests. If you have
questions, the weekly third-party meeting [2] is a great place to ask
questions.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-22 Thread Edgar Magana
Sorry, my bad - I just changed it.

Edgar

On 8/21/14, 2:13 PM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:

Edgar:

I'm still seeing the comment Results are not accurate. Needs
clarification...

Dane

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Thursday, August 21, 2014 2:58 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not for
usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
required to be run

Dane,

Wiki has been updated.

Thanks,

Edgar

On 8/21/14, 7:57 AM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:

Edgar:

The status on the wiki page says Results are not accurate. Needs
clarification from Cisco.
Can you please tell me what we are missing?

-Dane

-Original Message-
From: Dane Leblanc (leblancd)
Sent: Tuesday, August 19, 2014 3:05 PM
To: 'Edgar Magana'; OpenStack Development Mailing List (not for usage
questions)
Subject: RE: [openstack-dev] [neutron] [third-party] What tests are
required to be run

The APIC CI did run tests against that commit (after some queue latency):

http://128.107.233.28:8080/job/apic/1860/
http://cisco-neutron-ci.cisco.com/logs/apic/1860/

But the review comments never showed up on Gerrit. This seems to be an
intermittent quirk of Jenkins/Gerrit: We have 3 CIs triggered from this
Jenkins/Gerrit server. Whenever we disable one of our other
Jenkins jobs (in this case, we disabled DFA for some rework), the
review comments sometimes stop showing up on Gerrit.

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Tuesday, August 19, 2014 1:33 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not
for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
required to be run

I was looking to one of the most recent Neutron commits:
https://review.openstack.org/#/c/115175/


I could not find the APIC report.

Edgar

On 8/19/14, 9:48 AM, Dane Leblanc (leblancd) lebla...@cisco.com
wrote:

From which commit is it missing?
https://review.openstack.org/#/c/114629/
https://review.openstack.org/#/c/114393/

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Tuesday, August 19, 2014 12:28 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not
for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
required to be run

Dane,

Are you sure about it?
I just went to this commit and I could not find the APIC tests.

Thanks,

Edgar

On 8/17/14, 8:47 PM, Dane Leblanc (leblancd) lebla...@cisco.com
wrote:

Edgar:

The Cisco APIC should be reporting results for both APIC-related and
non-APIC related changes now.
(See http://cisco-neutron-ci.cisco.com/logs/apic/1738/).

Will you be updating the wiki page?

-Dane

-Original Message-
From: Dane Leblanc (leblancd)
Sent: Friday, August 15, 2014 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
required to be run

Also, you can add me as a contact person for the Cisco VPNaaS driver.

-Original Message-
From: Dane Leblanc (leblancd)
Sent: Friday, August 15, 2014 8:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [neutron] [third-party] What tests are
required to be run

Edgar:

For the Notes for the Cisco APIC, can you change the comment results
are fake to something like results are only valid for APIC-related
commits? I think this more accurately represents our current results
(for reasons we chatted about on another thread).

Thanks,
Dane

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Friday, August 15, 2014 6:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are
required to be run
Importance: High

Team,

I did a quick audit on the Neutron CI. Very sad results. Only a few
plugins and drivers are running properly and testing all Neutron
commits.
I created a report here:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers


We will discuss the actions to take at the next Neutron IRC meeting.
So please reach out to me to clarify the status of your CI.
I had two commits to quickly verify the CI reliability:

https://review.openstack.org/#/c/114393/

https://review.openstack.org/#/c/40296/


I would expect all plugins and drivers to pass on the first one and
fail on the second, but I got so many surprises.

Neutron code quality and reliability are a top priority; if you ignore
this report, your plugin/driver will be a candidate for removal from
the Neutron tree.

Cheers,

Edgar

P.s. I hate to be the inquisitor here… but someone has to do the
dirty job!


On 8/14/14, 8:30 AM, Kyle Mestery mest...@mestery.com wrote:

Folks, I'm not sure if all CI accounts 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread gordon chung
 It may be easier for you, but it certainly isn't inside big companies,
 e.g. HP have pretty broad approvals for contributing to (official)
 openstack projects, whereas individual approval may be needed to
 contribute to non-openstack projects.
i was referring to a company bigger than hp... maybe the legal team is nicer
there. :)  couldn't hurt to ask them anyways... plenty of good projects
exist in the stackforge domain.
cheers, gord

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TripleO] How to gracefully quiesce a box?

2014-08-22 Thread Jay Pipes

On 08/22/2014 01:48 PM, Clint Byrum wrote:

It has been brought to my attention that Ironic uses the biggest hammer
in the IPMI toolbox to control chassis power:

https://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ipminative.py#n142

Which is

 ret = ipmicmd.set_power('off', wait)

This is the most abrupt form, where the system power should be flipped
off at a hardware level. The short press on the power button would be
'shutdown' instead of 'off'.

I also understand that this has been brought up before, and that the
answer given was SSH in and shut it down yourself. I can respect that
position, but I have run into a bit of a pickle using it. Observe:

- ssh box.ip poweroff
- poll ironic until power state is off.
   - This is a race. Ironic is asserting the power. As soon as it sees
 that the power is off, it will turn it back on.

- ssh box.ip halt
   - NO way to know that this has worked. Once SSH is off and the network
 stack is gone, I cannot actually verify that the disks were
 unmounted properly, which is the primary area of concern that I
 have.

This is particularly important if I'm issuing a rebuild + preserve
ephemeral, as it is likely I will have lots of I/O going on, and I want
to make sure that it is all quiesced before I reboot to replace the
software and reboot.

Perhaps I missed something. If so, please do educate me on how I can
achieve this without hacking around it. Currently my workaround is to
manually unmount the state partition, which is something system shutdown
is supposed to do and may become problematic if system processes are
holding it open.

It seems to me that Ironic should at least try to use the graceful
shutdown. There can be a timeout, but it would need to be something a user
can disable so if graceful never works we never just dump the power on the
box. Even a journaled filesystem will take quite a bit to do a full fsck.

The inability to gracefully shutdown in a reasonable amount of time
is an error state really, and I need to go to the box and inspect it,
which is precisely the reason we have ERROR states.


What about placing a runlevel script in /etc/init.d/ and symlinking it 
to run on shutdown -- i.e. /etc/rc0.d/? You could run fsync or unmount 
the state partition in that script, which would ensure disk state was 
quiesced, no?


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][WSME] Sphinx failing sporadically because of wsme autodoc extension

2014-08-22 Thread gordon chung
 I couldn’t reproduce this issue either. I’ve tried on precise and on a fresh 
 trusty too, everything worked fine…
fun
from the limited error message, it's because the service path isn't found
(https://github.com/stackforge/wsme/blob/master/wsmeext/sphinxext.py#L133-L140)
and this code is returning None... so for some reason scan_services is not
finding what it needs to find
(https://github.com/stackforge/wsme/blob/master/wsmeext/sphinxext.py#L114-L130)
in most cases, but in the rare case it actually does find a path
looking at the build trends, there doesn't seem to be a noticeable trend; it's
more a crapshoot whether a check passes or not:
https://jenkins01.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins02.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins03.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins04.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins05.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins06.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
https://jenkins07.openstack.org/job/gate-ceilometer-docs/buildTimeTrend
cheers, gord

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TripleO] How to gracefully quiesce a box?

2014-08-22 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2014-08-22 11:16:05 -0700:
 On 08/22/2014 01:48 PM, Clint Byrum wrote:
  It has been brought to my attention that Ironic uses the biggest hammer
  in the IPMI toolbox to control chassis power:
 
  https://git.openstack.org/cgit/openstack/ironic/tree/ironic/drivers/modules/ipminative.py#n142
 
  Which is
 
   ret = ipmicmd.set_power('off', wait)
 
  This is the most abrupt form, where the system power should be flipped
  off at a hardware level. The short press on the power button would be
  'shutdown' instead of 'off'.
 
  I also understand that this has been brought up before, and that the
  answer given was SSH in and shut it down yourself. I can respect that
  position, but I have run into a bit of a pickle using it. Observe:
 
  - ssh box.ip poweroff
  - poll ironic until power state is off.
 - This is a race. Ironic is asserting the power. As soon as it sees
   that the power is off, it will turn it back on.
 
  - ssh box.ip halt
 - NO way to know that this has worked. Once SSH is off and the network
   stack is gone, I cannot actually verify that the disks were
   unmounted properly, which is the primary area of concern that I
   have.
 
  This is particularly important if I'm issuing a rebuild + preserve
  ephemeral, as it is likely I will have lots of I/O going on, and I want
  to make sure that it is all quiesced before I reboot to replace the
  software and reboot.
 
  Perhaps I missed something. If so, please do educate me on how I can
  achieve this without hacking around it. Currently my workaround is to
  manually unmount the state partition, which is something system shutdown
  is supposed to do and may become problematic if system processes are
  holding it open.
 
  It seems to me that Ironic should at least try to use the graceful
  shutdown. There can be a timeout, but it would need to be something a user
  can disable so if graceful never works we never just dump the power on the
  box. Even a journaled filesystem will take quite a bit to do a full fsck.
 
  The inability to gracefully shutdown in a reasonable amount of time
  is an error state really, and I need to go to the box and inspect it,
  which is precisely the reason we have ERROR states.
 
 What about placing a runlevel script in /etc/init.d/ and symlinking it 
 to run on shutdown -- i.e. /etc/rc0.d/? You could run fsync or unmount 
 the state partition in that script which would ensure disk state was 
 quiesced, no?

That's already what OSes do in their rc0.d.

My point is, I don't have any way to know that process happened, without
the box turning itself off after it succeeded.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Need community weigh-in on requests-mock

2014-08-22 Thread Paul Michali (pcm)
Hi! Need to get the community to weigh in on this…

In Neutron there is currently no mock library for the Requests package. During 
Icehouse, I created unit tests that used the httmock package (a 
context-manager-based Requests mock). However, the community did not want me to 
add this to global requirements, because httpretty (a URL-registration-based 
mock) was already approved for use (but not in Neutron). As a result, I 
disabled the UTs (renaming the module to have a “no” prefix).

Instead of modifying the UT to work with httpretty, and requesting that 
httpretty be added to Neutron test-requirements, I waited, as there was 
discussion on replacing httpretty.

Fast forward to now, and there is a new requests-mock package that has been 
implemented, added to global requirements, and is being used in keystone client 
and nova client projects.  My goal is to make use of the new mock library, as 
it has become the library of choice.
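For anyone who hasn't used it, requests-mock registers responses per
method and URL and patches Requests for the duration. A minimal sketch
(the URL and payload here are made up):

    import requests
    import requests_mock

    with requests_mock.Mocker() as m:
        m.register_uri('GET', 'http://csr1.example.com/api/v1/vpn-svc',
                       json={'status': 'ACTIVE'}, status_code=200)
        resp = requests.get('http://csr1.example.com/api/v1/vpn-svc')
        assert resp.json() == {'status': 'ACTIVE'}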

I have migrated my UT to use the requests-mock package, and would like to gain 
approval to add requests-mock to Neutron. I have two commits out for review. 
The first, https://review.openstack.org/#/c/115107/, is to add requests-mock to 
test-requirements for Neutron. The 
second, https://review.openstack.org/#/c/116018/, has the UT module reworked to 
use requests-mock, AND includes the addition of requests-mock to 
test-requirements (so that there is one commit showing the use of this new 
library - originally, I had just the UT, but there was a request to join the 
two changes).

Community questions:

Is it OK to add requests-mock to Neutron test-requirements?
If so, would you rather see this done as two commits (one for the package, one 
for the UT), or one combined commit?

Cores, you can “vote/comment” in the reviews, so that I can proceed in the 
right direction.

Thanks for your consideration!


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83





signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Acceptable methods for establishing per-test-suite behaviors

2014-08-22 Thread Mike Bayer
Hi all -

I’ve spent many weeks on a series of patches for which the primary goal is to 
provide very efficient patterns for tests that use databases and schemas within 
those databases, including compatibility with parallel tests, transactional 
testing, and scenario-driven testing (e.g. a test that runs multiple times 
against different databases).

To that end, the current two patches that achieve this behavior in a 
rudimentary fashion are part of oslo.db and are at: 
https://review.openstack.org/#/c/110486/ and 
https://review.openstack.org/#/c/113153/. They have been in the queue for 
about four weeks now.  The general theory of operation is that within a 
particular Python process, a fixed database identifier is established 
(currently via an environment variable).   As tests request the services of 
databases, such as a PostgreSQL database or a MySQL database, the system will 
provision a database with that fixed identifier within that backend and return 
it.   The test can then request that it make use of a particular “schema” - for 
example, Nova’s tests may request that they are using the “nova schema”, which 
means that the schema for Nova’s model will be created within this database, 
and will them remain permanently across the span of many tests which use this 
same schema.  Only when a test requests that it wants a different schema, or no 
schema, will the tables be dropped.To ensure the schema is “clean” for 
every test, the provisioning system ensures that each test runs within a 
transaction, which at test end is rolled back.In order to accommodate tests 
that themselves need to roll back, the test additionally runs within the 
context of a SAVEPOINT.   This system is entirely working, and for those that 
are wondering, yes it works with SQLite as well (see 
https://review.openstack.org/#/c/113152/).
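For those who haven't seen the transaction-per-test pattern, here is a
stripped-down sketch of the general technique (not the oslo.db code
itself; the engine URL is a placeholder):

    import sqlalchemy as sa
    from sqlalchemy import orm

    engine = sa.create_engine('postgresql://user:pw@localhost/testdb')

    class TransactionalTestContext(object):
        """Run a test inside an enclosing transaction plus a SAVEPOINT,
        rolling everything back afterwards so the schema stays clean."""

        def start(self):
            self.connection = engine.connect()
            self.transaction = self.connection.begin()
            # Tests that issue rollback() themselves unwind only to this
            # savepoint, never past the enclosing transaction.
            self.connection.begin_nested()
            self.session = orm.Session(bind=self.connection)

        def stop(self):
            self.session.close()
            self.transaction.rollback()  # discard all of the test's writes
            self.connection.close()

(The real system also restarts the SAVEPOINT when a test rolls it back,
which takes an event hook, but the shape is the same.)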

And as implied earlier, to ensure the operations upon this schema don’t 
conflict with parallel test runs, the whole thing is running within a database 
that is specific to the Python process.

So instead of the current behavior of generating the entire nova schema for 
every test and being hardcoded to SQLite, a particular test will be able to run 
itself against any specific backend or all available backends in series, 
without needing to do a CREATE for the whole schema on every test.   It will 
greatly expand database coverage as well as allow database tests to run 
dramatically faster, using entirely consistent systems for setting up schemas 
and database connectivity.

The “transactional test” system is one I’ve used extensively in other projects. 
 SQLAlchemy itself now runs tests against a py.test-specific variant which runs 
under parallel testing and generates ad-hoc schemas per Python process.The 
patches above achieve these patterns successfully and transparently in the 
context of Openstack tests, only the “scenarios” support for a single test to 
run repeatedly against multiple backends is still a todo.

However, the first patch has just been -1’ed by Robert Collins, the publisher 
of many of the various “testtools” libraries that are prevalent within 
Openstack projects.

Robert suggests that the approach integrate with the testresources library: 
https://pypi.python.org/pypi/testresources.   I’ve evaluated this system and 
after some initial resistance I can see that it would in fact work very nicely 
with the system I have, in that it provides the OptimisingTestSuite - a special 
unittest test suite that will take tests like the above which are marked as 
needing particular resources, and then sort them such that individual resources 
are set up and torn down a minimal number of times. It has heavy algorithmic 
logic to accomplish this which is certainly far beyond what would be 
appropriate to home-roll within oslo.db.
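To make that concrete, a rough sketch of how a schema could be exposed
through testresources (the provision/drop helpers are hypothetical):

    import testresources

    class NovaSchemaResource(testresources.TestResource):
        """Provision the 'nova' schema once per group of tests that
        declares it; OptimisingTestSuite minimises make/clean calls."""

        def make(self, dependency_resources):
            return provision_schema('nova')  # hypothetical helper

        def clean(self, resource):
            drop_schema(resource)            # hypothetical helper

    class NovaDBTest(testresources.ResourcedTestCase):
        resources = [('schema', NovaSchemaResource())]

        def test_query(self):
            self.assertIsNotNone(self.schema)

    def load_tests(loader, tests, pattern):
        # Reorder tests so each resource is made and cleaned a minimal
        # number of times across the whole run.
        return testresources.OptimisingTestSuite(tests)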

I like the idea of integrating this optimization a lot; however, it runs into 
a particular issue which I also hit upon with my more simplistic approach.

The issue is that being able to use a resource like a database schema across 
many tests requires that some kind of logic has access to the test run as a 
whole. At the very least, a hook that indicates “the tests are done, let's 
tear down these ad-hoc databases” is needed.

For my first iteration, I observed that OpenStack tests are generally run 
either via testr or via a shell script.  So to that end I expanded upon an 
approach that was already present in oslo.db, that is to use scripts which 
provision the names of databases to create, and then drop them at the end of 
all tests run.   For testr, I used the “instance_execute”, “instance_dispose”, 
and “instance_provision” hooks in testr.conf to call upon these sub-scripts:

instance_provision=${PYTHON:-python} -m oslo.db.sqlalchemy.provision echo 
$INSTANCE_COUNT
instance_dispose=${PYTHON:-python} -m oslo.db.sqlalchemy.provision drop 
--conditional $INSTANCE_IDS
instance_execute=OSLO_SCHEMA_TOKEN=$INSTANCE_ID 

[openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-22 Thread Zane Bitter
We held the inaugural Heat mid-cycle meetup in Raleigh, North Carolina 
this week. There were a dozen folks in attendance, and I think everyone 
agreed that it was a very successful event. Notes from the meetup are on 
the Etherpad here:


https://etherpad.openstack.org/p/heat-juno-midcycle-meetup

Here are a few of the conclusions:

* Everyone wishes the Design Summit worked like this.
The meetup seemed a lot more productive than the design summit ever is. 
It's really nice to be in a room small enough that you can talk normally 
and hear everyone, instead of in a room designed for 150 people. It's 
really nice to be able to discuss stuff that isn't related to a 
particular feature - we had a long discussion about how to get through 
the review backlog, for example. It's really nice to not have fixed time 
slots for discussions - because everyone was in the room the whole time, 
we could dip in and out of different topics at will. Often we came back 
to one that we'd previously discussed because we had discovered new 
information. Finally, it's critical to be in a room covered in 
full-sized whiteboards that everyone can see. A single tiny flip chart 
doesn't cut it.


* 3 days seems to be about the right length.
Not a lot got done on day 3, and people started to drift out at various 
times to catch flights, but the fact that everyone was there for _all_ 
of day 1 and 2 was essential (the critical Convergence plan was 
finalised around 7.30pm on Tuesday).


* There was a lot more discussion than hacking.
The main value of the meet-up was more in the discussions you'd hope to 
be able to have at the design summit than in working collaboratively on 
code.


* Marconi is now called Zaqar.
Who knew?

* Marc^W Zaqar is critical to pretty much every major non-Convergence 
feature on the roadmap.
We knew that we wanted to use it for notifications, but we also want to 
make those a replacement for events, and a conduit for warnings and 
debugging information to the user. This is becoming so important that 
we're going to push ahead with an implementation now without waiting to 
see when Zaqar will graduate. Zaqar would also be a good candidate for 
pushing metadata changes to servers, to resolve the performance issues 
currently caused by polling.


* We are on track to meet the immediate requirements of TripleO.
Obviously it would be nice to have Convergence now, but failing that the 
most critical bugs are under control. In the immediate future, we need 
to work on finding a consensus on running with multiple workers by 
default, split trees of nested stacks so that each nested stack runs in 
a separate engine, and find a way to push metadata out from Heat instead 
of having servers poll us for it.


* We have a plan for what Convergence will look like.
Here's some horrific photos of a whiteboard: 
https://www.dropbox.com/sh/tamoc8dhhckb81w/AAA6xp2be9xv20P7SWx-xnZba?dl=0
Clint is, I believe, working on turning that into something more 
consumable. This was definitely the biggest success of the meet-up. 
Before this I had no idea what convergence would look like; now I have a 
fair idea how it will work and where the tricky bits might be. I doubt 
this could have happened at a design summit.


* We probably don't need TaskFlow.
After coming up with the Convergence plan we realised that, while 
TaskFlow would be useful/essential if we were planning a more modest 
refactoring of Heat, the architecture of Convergence should actually 
eliminate the need for it. All we think we need is a bunch of work 
queues that can be provided by oslo.messaging. TaskFlow seems great for 
the problem it solves, but we have the opportunity to not create that 
problem for ourselves in the first place.


* Convergence probably won't buy us much in the short term.
I think we all hoped that our incremental work on Convergence would 
render incremental benefits for Heat. After figuring out the rough 
outline of how Convergence could work, we realised that the incremental 
steps along the way (like implementing the observer process) will 
actually not have a big impact. So while, of course, we'll continue to 
work incrementally, we don't expect to see major benefits until nearer 
the end of the process.



Thanks to everyone who made the trip, and of course also to everyone who 
contributed input via IRC and generally held down the fort while we were 
meeting. If I misstated or just plain missed anything above, please feel 
free to weigh in.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Heat] Murano split dsicussion

2014-08-22 Thread Georgy Okrokvertskhov
Hi,


Let me comment on the reasons why we’ve suggested adding the Murano engine to
the Orchestration program.

If we consider the previous Murano incubation discussions, we see that an
overlap with the Orchestration program was one of the TC’s concerns. We
also find that it was the Heat team that proposed to split Murano into
specific parts.

As for items (a) and (b) highlighted by Zane, we definitely shared some of
these concerns, but we were guided by TC feedback from the previous Murano
incubation review that recommended splitting Murano into separate
components. We are fine with having a separate program or joining existing
programs. In the current situation, with the Glance mission changed to a more
generic catalog mission and the Orchestration program mission covering
everything, it is better to join existing programs rather than to add a new
one that steps on everyone’s feet.


If the Orchestration program team believes that Murano does not intersect
with the mission of the Orchestration program and should start its own
program, let’s send this message to the TC; the Murano team is ready to go
that route.

Thanks
Georgy

On Fri, Aug 22, 2014 at 6:29 AM, Zane Bitter zbit...@redhat.com wrote:

 On 21/08/14 04:30, Thierry Carrez wrote:

 Georgy Okrokvertskhov wrote:

 During last Atlanta summit there were couple discussions about
 Application Catalog and Application space projects in OpenStack. These
 cross-project discussions occurred as a result of Murano incubation
 request [1] during Icehouse cycle.  On the TC meeting devoted to Murano
 incubation there was an idea about splitting the Murano into parts which
 might belong to different programs[2].


 Today, I would like to initiate a discussion about potential splitting
 of Murano between two or three programs.
 [...]


 I think the proposed split makes a lot of sense. Let's wait for the
 feedback of the affected programs to see if it's compatible with their
 own plans.


 I want to start out by saying that I am a big proponent of doing stuff
that makes sense, and wearing my PTL hat I will support the consensus of
the community on whatever makes the most sense.

 With the PTL hat off again, here is my 2c on what I think makes sense:

 * The Glance thing makes total sense to me. Murano's requirements should
be pretty much limited to an artifact catalog with some metadata - that's
bread and butter for Glance. Murano folks should join the Glance team and
drive their requirements into the artifact catalog.

 * The Horizon thing makes some sense. I think at least part of the UI
should be in Horizon, but I suspect there's also some stuff in there that
is pretty specific to the domain that Murano is tackling and it might be
better for that to live in the same program as the Murano engine. I believe
that there's a close analogue here with Tuskar and the TripleO program, so
maybe we could ask them about any lessons learned. Georgy suggested
elsewhere that the Merlin framework should be in Horizon and the rest in
the same program as the engine, and that would make total sense to me.

 * The Heat thing doesn't make a lot of sense IMHO. I now understand that
apparently different projects in the same program can have different core
teams - which just makes me more confused about what a program is for,
since I thought it was a single team. Nevertheless, I don't think that the
Murano project would be well-served by being represented by the Heat PTL
(which is, I guess, the only meaning still attached to a program). I
don't think they want the Heat PTL triaging their bugs, and I don't think
it's even feasible for one person to do that for both projects (that is to
say, I already have a negative amount of extra time available for Launchpad
just handling Heat). I don't think they want the Heat PTL to have control
over their design summit sessions, and if I were the PTL doing that I would
*hate* to be in the position of trying to balance the interests of the two
projects - *especially*, given that I am in Clint's camp of not seeing a
lot of value in Murano, when one project has not gone through the
incubation process and therefore there would be no guidance available from
the TC or consensus in the wider community as to whether that project
warranted any time at all devoted to it. In fact, I would go so far as to
say that it's completely unreasonable to put a single PTL in that position.

 So, I don't think putting the Murano engine into the Orchestration
program is being proposed because it makes sense. I think it's being
proposed, despite not making sense, because people consider it unlikely
that the TC would grant Murano a separate program due to some combination
of:

 (a) People won't think Murano is a good (enough) idea - in which case we
shouldn't do it (yet); and/or
 (b) People have an irrational belief that projects are lightweight but
programs are heavyweight, when the reverse is true, and will block any new
programs for fear of letting another person call themselves a PTL 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread gustavo panizzo (gfa)


On 08/22/2014 02:13 PM, Michael Chapman wrote:
 
 We try to target puppet module master at upstream OpenStack master, but
 without CI/CD we fall behind. The missing piece is building packages and
 creating a local repo before doing the puppet run, which I'm working on
 slowly as I want a single system for both deb and rpm that doesn't make
 my eyes bleed. fpm and pleaserun are the two key tools here.

I have used fpm to package Python apps; I would be happy to help if you can
provide pointers on where to start.
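
For the archives, a rough sketch of the kind of fpm invocation involved; the
package name, version and flags here are illustrative, not the actual tooling
Michael described:

    # Sketch only: build deb and rpm artifacts from a Python package with fpm.
    # The package name/version are placeholders for whatever is built from trunk.
    import subprocess

    def build_packages(python_pkg, version):
        for target in ('deb', 'rpm'):
            subprocess.check_call([
                'fpm',
                '-s', 'python',   # source type: a Python package
                '-t', target,     # output type: deb or rpm
                '-v', version,    # version to stamp on the artifact
                python_pkg,
            ])

    build_packages('oslo.config', '1.4.0')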


-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Acceptable methods for establishing per-test-suite behaviors

2014-08-22 Thread James E. Blair
Hi,

One of the things we've wanted for a while in some projects is a
completely separate database environment for each test when using MySQL.
To that end, I wrote a MySQL schema fixture that is in use in nodepool:

http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/tests/__init__.py#n75

While a per-test schema is more overhead than what you're asking about,
it's sometimes very desirable and quite simple.
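
For anyone who wants the general shape without clicking through, a fixture
along these lines does the trick (a simplified sketch, not the nodepool code
verbatim; the user/password and naming scheme are illustrative):

    # Simplified sketch of a per-test MySQL database fixture.
    import random
    import string

    import fixtures
    import MySQLdb

    class MySQLSchemaFixture(fixtures.Fixture):
        def setUp(self):
            super(MySQLSchemaFixture, self).setUp()
            # Random name so concurrently running tests don't collide.
            self.name = 'test_' + ''.join(
                random.choice(string.ascii_lowercase) for _ in range(8))
            self.db = MySQLdb.connect(user='openstack_citest',
                                      passwd='openstack_citest')
            self.db.cursor().execute('create database %s' % self.name)
            self.addCleanup(self.drop)

        def drop(self):
            self.db.cursor().execute('drop database %s' % self.name)
            self.db.close()

Each test gets its own schema, and the cleanup hook drops it when the test
finishes, whether it passes or fails.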

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Need community weigh-in on requests-mock

2014-08-22 Thread Carl Baldwin
I put this in the review but will repeat it here.  +1 to adding the
dependency along with the tests you've written that require it, once those
tests have been reviewed and accepted.  I don't have an objection to
adding requests-mock as a test-requirement.

Carl

On Fri, Aug 22, 2014 at 12:50 PM, Paul Michali (pcm) p...@cisco.com wrote:
 Hi! Need to get the community to weigh in on this…

 In Neutron there currently is no mock library for the Requests package.
 During Icehouse, I created unit tests that used the httmock package (a
 context-manager based Requests mock). However, the community did not want me
 to add this to global requirements, because there was httpretty (a URL
 registration based mock) already approved for use (but not in Neutron). As a
 result, I disabled the UTs (renaming the module to have a “no” prefix).

 Instead of modifying the UT to work with httpretty, and requesting that
 httpretty be added to Neutron test-requirements, I waited, as there was
 discussion on replacing httpretty.

 Fast forward to now, and there is a new requests-mock package that has been
 implemented, added to global requirements, and is being used in keystone
 client and nova client projects.  My goal is to make use of the new mock
 library, as it has become the library of choice.

 I have migrated my UT to use the requests-mock package, and would like to
 gain approval to add requests-mock to Neutron. I have two commits out for
 review. The first, https://review.openstack.org/#/c/115107/, is to add
 requests-mock to test-requirements for Neutron. The
 second, https://review.openstack.org/#/c/116018/, has the UT module reworked
 to use requests-mock, AND includes the addition of requests-mock to
 test-requirements (so that there is one commit showing the use of this new
 library - originally, I had just the UT, but there was a request to join the
 two changes).
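
 For reviewers unfamiliar with the library, usage in a unit test looks
 roughly like this; the URL and payload are made up for illustration:

     import requests
     import requests_mock

     with requests_mock.Mocker() as m:
         # Register a canned response for a made-up device endpoint.
         m.get('https://csr1.example.com/api/v1/global/host-name',
               text='{"host-name": "csr1"}', status_code=200)
         resp = requests.get('https://csr1.example.com/api/v1/global/host-name')
         assert resp.status_code == 200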

 Community questions:

 Is it OK to add requests-mock to Neutron test-requirements?
 If so, would you rather see this done as two commits (one for the package,
 one for the UT), or one combined commit?

 Cores, you can “vote/comment” in the reviews, so that I can proceed in the
 right direction.

 Thanks for your consideration!


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Use public IP address as instance fixed IP

2014-08-22 Thread Kevin Benton
Yes, you should be able to create a shared/external network within Neutron
to accomplish this.
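
Roughly, with python-neutronclient (credentials, names and CIDR below are
placeholders, not a recipe for any particular deployment):

    from neutronclient.v2_0 import client

    # Placeholder admin credentials for the deployment.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    # External (routable) and shared, so instances can plug into it
    # directly and get public fixed IPs with no NAT in the path.
    net = neutron.create_network({'network': {
        'name': 'public',
        'shared': True,
        'router:external': True,
    }})
    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '203.0.113.0/24',
    }})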


On Fri, Aug 22, 2014 at 7:25 AM, Bao Wang bywan...@gmail.com wrote:

 Thank you for your response. Could this be done natively with OpenStack
 Neutron, or does it have to be done manually outside Neutron?  As we are
 expecting to orchestrate hundreds of NFV instances with similar network
 configurations, programmability is another key element.


 On Thu, Aug 21, 2014 at 3:52 PM, Kevin Benton blak...@gmail.com wrote:

 Have you tried making the external network shared as well? Instances that
 need a private IP with NAT attach to an internal network and go through the
 router like normal. Instances that need a public IP without NAT would just
 attach directly to the external network.


 On Thu, Aug 21, 2014 at 7:06 AM, Bao Wang bywan...@gmail.com wrote:

 I have a very complex OpenStack deployment for NFV. It could not be
 deployed as flat networking. It will have a lot of isolated private
 networks. Some interfaces of a group of VM instances will need a bridged
 network with their fixed IP addresses to communicate with the outside
 world, while other interfaces on the same set of VMs should stay isolated
 with real private/fixed IP addresses. What happens if we use public IP
 addresses directly as fixed IPs on those interfaces? Will this work with
 OpenStack Neutron networking? Will OpenStack do NAT automatically on those?

 Overall, the requirement is to use the fixed/public IP to communicate
 with the outside directly on some interfaces of some VM instances while
 keeping others private. A floating IP is not an option here.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][third-party] How to make progress on cinder driver CI post J3

2014-08-22 Thread John Griffith
On Fri, Aug 22, 2014 at 11:19 AM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Many of us are still working on setting up and testing 3rd party ci for
 cinder drivers.



 None of them currently have gerrit +1/-1 voting, but I heard there’s a
 plan to “disable” those that are “not working” post-J3 (Please correct me
 if I misunderstood).



 However, Post-J3 is a great time to work on 3rd party ci for a few
 reasons:

 1.   Increases the overall QA effort when it’s most needed.

 2.   Many contributors have more time available since feature work is
 complete.



 Since erroneous failing tests from non-voting ci accounts can still
 distract reviewers, I’d like to propose a compromise:

  Marking 3rd party ci jobs still undergoing testing and stabilization as
  (non-voting).

 e.g.

  dsvm-tempest-my-driver  SUCCESS in 39m 27s (non-voting)
  http://15.126.198.151/67/106567/30/check/dsvm-tempest-hp-lefthand/a212407

  dsvm-tempest-my-driver  FAILURE in 39m 54s (non-voting)



 This way, progress can still be made by cinder vendors working on setting
 up 3rd party ci under ‘real’ load post J3 while minimizing distractions
 for cinder reviewers and other stakeholders.

 I think this is consistent with how the OpenStack “Jenkins” CI marks
 potentially unstable jobs.



 Please share your thoughts.



 Thanks,

  Ramy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Just FYI, I haven't necessarily heard that drivers etc. are going to be
removed.  I would say however that the job should be removed until it's
stable.  There's no reason you can't do that (just don't submit your
results) and still work on fixing things up.  My system for example has
been running on every OpenStack commit today but I'm dumping results to a
test location while I make sure that things are stable and probably won't
turn it on until late next week when I'm sure it will perform.  Submitting
a system that fails to start 80% of the time doesn't help any of us out IMO.

I realize there's a LOT of really hard work going on and progress being
made, not discounting that at all.  Just pointing out that populating every
review with "ci-system-xyz failed" doesn't help us out much either.  If
it's backend problems, they need to be fixed; if it's infra problems, they
need to be fixed.  That's the whole point of this exercise to begin with IIRC.

Thanks
John
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [infra] Glance review patterns and their impact on the gate

2014-08-22 Thread Fei Long Wang


On 23/08/14 00:09, Sean Dague wrote:

Earlier this week the freshness checks (the ones that required passing
results within 24 hrs for a change to go into the gate) were removed to
try to conserve nodes as we get to crunch time. The hope was that
review teams had enough of a handle on when code in their program got
into a state where it *could not* pass its own unit tests to not approve
that code.

The attached screenshot shows that this is not currently true with
Glance. All Glance changes will 100% fail their unit tests now. The root
fix might be here - https://review.openstack.org/#/c/115342/ - which has 2
-1s and has been out for review for 3 days.
The unit test failures are about the ordering of some collections, such as 
dicts and lists, introduced by bug 
https://bugs.launchpad.net/nova/+bug/1348818. That means some 
failures arise randomly, so a patch owner may just re-trigger Jenkins 
and the failure may be gone. As a result, it is not noticed by the 
reviewers. For example, patch 115342 failed again because of some other 
ordering issues. BTW, I'm working on that patch 
right now.
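
To illustrate the class of failure: any assertion on the serialized order of
a dict only passes for some hash seeds (a simplified example, not the actual
failing test):

    # Under PYTHONHASHSEED randomization, dict iteration order varies per run.
    params = {'name': 'cirros', 'disk_format': 'qcow2'}

    # Flaky: the joined string depends on the hash seed of the process.
    flaky = '&'.join('%s=%s' % item for item in params.items())

    # Stable: sort before serializing, so every run produces the same string.
    stable = '&'.join('%s=%s' % item for item in sorted(params.items()))
    assert stable == 'disk_format=qcow2&name=cirros'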

Realistically Glance was the biggest offender of this in the past, and
honestly the top reason for putting freshness checks in the gate in the
first place.

Does anyone from the Glance team have some ideas on better ways to keep
the team up on the fact that Glance is in a non functional state, 100%
of things will fail, and not have people approve things that can't pass?
These kinds of issues are the ones that make me uncomfortable with the
Glance team taking on more mission until basic review hygiene is under
control for the existing code.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-08-22 Thread Zane Bitter
Here's an interesting fact about Zaqar (the project formerly known as 
Marconi) that I hadn't thought about before this week: it's probably the 
first OpenStack project where a major part of the API primarily faces 
software running in the cloud rather than facing the user.


That is to say, nobody is going to be sending themselves messages on 
their laptop, from their laptop, via a cloud. At least one end of any 
given queue is likely to be on a VM in the cloud.


That makes me wonder: how does Zaqar authenticate users who are sending 
and receiving messages (as opposed to setting up the queues in the first 
place)? Presumably using Keystone, in which case it will run into a 
problem we've been struggling with in Heat since the very early days.


Keystone is generally a front end for an identity store with a 1:1 
correspondence between users and actual natural persons. Only the 
operator can add or remove accounts. This breaks down as soon as you 
need to authenticate automated services running in the cloud - in 
particular, you never ever want to store the credentials belonging to an 
actual natural person in a server in the cloud.


Heat has managed to work around this to some extent (for those running 
the Keystone v3 API) by creating users in a separate domain and more or 
less doing our own authorisation for them. However, this requires action 
on the part of the operator, and isn't an option for the end user. I 
guess Zaqar could do something similar and pass out sets of credentials 
good only for reading and writing to queues (respectively), but it seems 
like it would be better if the user could create the keystone accounts 
and set their own access control rules on the queues.
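
Mechanically, the workaround looks something like this with the v3
keystoneclient (the token, endpoint and names are illustrative; Heat also
applies its own authorisation on top):

    from keystoneclient.v3 import client

    # Illustrative admin token/endpoint.
    ks = client.Client(token='ADMIN_TOKEN',
                       endpoint='http://controller:35357/v3')

    # A domain owned by the service, kept apart from real user accounts.
    domain = ks.domains.create(name='heat_stack',
                               description='Heat-owned stack users')

    # A throwaway identity for software in the cloud; no natural person's
    # credentials ever get stored on a server.
    user = ks.users.create(name='stack_user_1234', domain=domain,
                           password='randomly-generated')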


On AWS the very first thing a user does is create a bunch of IAM 
accounts so that they virtually never have to use the credentials 
associated with their natural person ever again. There are both user 
accounts and service accounts - the latter IIUC have 
automatically-rotating keys. Is there anything like this planned in 
Keystone? Zaqar is likely only the first (I guess second, if you count 
Heat) of many services that will need it.


I have this irrational fear that somebody is going to tell me that this 
issue is the reason for the hierarchical-multitenancy idea - fear 
because that both sounds like it requires intrusive changes in every 
OpenStack project and fails to solve the problem. I hope somebody will 
disabuse me of that notion in 3... 2... 1...


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Procedure for adding official projects

2014-08-22 Thread Zane Bitter
Over the past couple of release cycles, the TC has put together a fairly 
comprehensive checklist for projects entering into incubation with a 
view to being included in the integrated release. However, I'm not aware 
of anything equivalent for projects that are becoming official (i.e. 
moving to the openstack/ namespace) but that are not targeting eventual 
integration - or, indeed, that _are_ targeting eventual integration but 
have not yet applied for incubation.


The current procedure afaict is to submit a review to the governance 
repo listing the repository under a particular program. It seems like at 
a minimum we should be checking for basic due diligence stuff like "Is 
it Apache licensed?", "Did everyone sign the CLA?" (may it diaf) and 
"Are there any trademark issues?". And maybe there are other things the 
TC should be looking at too.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-22 Thread Dane Leblanc (leblancd)
Thanks Edgar for updating the APIC status!!! 

Edgar and Kyle: *PLEASE NOTE*  I need your understanding and 
advice on the following:

We are still stuck with a problem stemming from a design limitation of Jenkins 
that prevents us from being compliant with Neutron 3rd Party CI requirements 
for our DFA CI.

The issue is that Jenkins only allows our scripts to (programmatically) return 
either Success or Fail. There is no option to return Aborted, Not Tested, 
or Skipped.

Why does this matter? The DFA plugin is just being introduced, and initial 
DFA-enabling change sets have not yet been merged. Therefore, all other change 
sets will fail our Tempest tests, since they are not DFA-enabled.

Similarly, we were recently blocked in our APIC CI with a critical bug, causing 
all change sets without this fix to fail on our APIC testbed.

In these cases, we would like to enter a throttled or partially blocked 
mode, where we would skip testing on change sets we know will fail, and (in an 
ideal world) signal this shortcoming to Gerrit, e.g. by returning a Skipped 
status. Unfortunately, that option is not available in Jenkins scripts, as 
Jenkins is currently designed. The only options we have available are Success 
and Fail, which are both misleading. We would also incorrectly report 
success or failure on one of the following test commits:
https://review.openstack.org/#/c/114393/
https://review.openstack.org/#/c/40296/

I've brought this issue up on the openstack-infra IRC, and jeblair confirmed 
the Jenkins limitation, but asked me to get consensus from the Neutron 
community as to this being a problem/requirement. I've also sent out an e-mail 
on the Neutron ML trying to start a discussion on this problem (no traction). I 
plan on bringing this up in the 3rd Party CI IRC on Monday, assuming there is 
time permitted in the open discussion.

For the short term, I would like to propose the following:
* We bring this up in the 3rd Party CI IRC on Monday to get a solution or 
workaround, if available. If a solution is available, let's consider including 
that as a hint when we come up with CI requirements for handling CIs blocked by 
some critical fix.
* I'm also looking into using a REST API to cancel a Jenkins job 
programmatically (see the sketch below).
* If no solution or workaround is available, we work with the infra team or 
the Jenkins team to create one.
* Until a solution is available, for plugins which are blocked by a critical 
bug, we post a status/notes entry describing the plugin's situation on our 3rd 
party CI status wiki, e.g.:

Vendor         | Plugin/Driver Name | Contact Name      | Status | Notes
My Vendor Name | My Plugin CI       | My Contact Person | T      | Throttled / Partially blocked / Awaiting Initial Commits

The status/notes should be clear and understood by the Neutron team.  The 
console logs for change sets where the tests were skipped should also contain a 
message that all testing is being skipped for that commit.
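
On the REST idea above: Jenkins does expose a stop endpoint per running
build, so a watchdog could abort jobs we know will fail. A rough sketch, with
placeholder host, job name, build number and credentials:

    import requests

    # Placeholders: Jenkins host, job name, build number, credentials.
    jenkins = 'http://jenkins.example.com:8080'
    job, build = 'dsvm-tempest-dfa', 123

    # POST to .../stop aborts a running build.
    requests.post('%s/job/%s/%d/stop' % (jenkins, job, build),
                  auth=('ci-account', 'api-token'))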

Note that when the DFA initial commits are merged, this issue will go 
away for the DFA CI. However, this problem will reappear every time a blocking 
critical bug shows up for a 3rd party CI setup, or a new plugin is introduced 
and the hardware-enabling commits are not yet merged.  (That is, until we have 
a solution for the Jenkins limitation.)

Let me know what you think.

Thanks,
Dane

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com] 
Sent: Friday, August 22, 2014 1:57 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are required to 
be run

Sorry my bad but I just changed.

Edgar

On 8/21/14, 2:13 PM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:

Edgar:

I'm still seeing the comment "Results are not accurate. Needs 
clarification..."

Dane

-Original Message-
From: Edgar Magana [mailto:edgar.mag...@workday.com]
Sent: Thursday, August 21, 2014 2:58 PM
To: Dane Leblanc (leblancd); OpenStack Development Mailing List (not 
for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] What tests are 
required to be run

Dane,

Wiki has been updated.

Thanks,

Edgar

On 8/21/14, 7:57 AM, Dane Leblanc (leblancd) lebla...@cisco.com wrote:

Edgar:

The status on the wiki page says "Results are not accurate. Needs 
clarification from Cisco."
Can you please tell me what we are missing?

-Dane

-Original Message-
From: Dane Leblanc (leblancd)
Sent: Tuesday, August 19, 2014 3:05 PM
To: 'Edgar Magana'; OpenStack Development Mailing List (not for usage
questions)
Subject: RE: [openstack-dev] [neutron] [third-party] What tests are 
required to be run

The APIC CI did run tests against that commit (after some queue latency):

http://128.107.233.28:8080/job/apic/1860/
http://cisco-neutron-ci.cisco.com/logs/apic/1860/

But the review comments never showed up on Gerrit. This 

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Rochelle.RochelleGrober
/flame-on
Ok, this is funny to some of us in the community.  The general populace of this 
community is so against the idea of management that they will use the term for 
a despotic dictator as a position name rather than manager.  Sorry, but this 
needed to be said.
/flame-off

Specific comments in line:

Thierry Carrez wrote:
 
 Hi everyone,
 
 We all know being a project PTL is an extremely busy job. That's
 because
 in our structure the PTL is responsible for almost everything in a
 project:
 
 - Release management contact
 - Work prioritization
 - Keeping bugs under control
 - Communicate about work being planned or done
 - Make sure the gate is not broken
 - Team logistics (run meetings, organize sprints)
 - ...
 

Point of clarification:  I've heard PTL=Project Technical Lead and PTL=Program 
Technical Lead. Which is it?  It is kind of important as OpenStack grows, 
because the first is responsible for *a* project, and the second is responsible 
for all projects within a program.

I'd also like to set out as an example of a Program that is growing to 
encompass multiple projects, the Neutron Program.  Look at how it is expanding:

Multiple sub-teams for:  LBAAS, DNAAS, GBP, etc.  This model could be extended 
such that:  
- the subteam is responsible for code reviews, including the first +2 for 
design, architecture and code of the sub-project, always also keeping an eye 
out that the sub-project code continues to both integrate well with the 
program, and that the program continues to provide the needed code bits, 
architecture modifications and improvements, etc. to support the sub-project.
- the final +2/A would be from the Program reviewers to ensure that all 
integrate nicely together into a single, cohesive program.  
- This would allow sub-projects to have core reviewers, along with the program 
and be a good separation of duties.  It would also help to increase the number 
of reviews moving to merged code.
- Taken to its logical next step, you would have project technical leads for 
each project, and they would make up a program council, with the program 
technical lead chairing the council.

This is a way to offload a good chunk of PTL tactical responsibilities and help 
them focus more on the strategic.

 They end up being completely drowned in those day-to-day operational
 duties, miss the big picture, can't help in development that much
 anymore, get burnt out. Since you're either the PTL or not the PTL,
 you're very alone and succession planning is not working that great
 either.
 
 There have been a number of experiments to solve that problem. John
 Garbutt has done an incredible job at helping successive Nova PTLs
 handling the release management aspect. Tracy Jones took over Nova bug
 management. Doug Hellmann successfully introduced the concept of Oslo
 liaisons to get clear point of contacts for Oslo library adoption in
 projects. It may be time to generalize that solution.
 
 The issue is one of responsibility: the PTL is ultimately responsible
 for everything in a project. If we can more formally delegate that
 responsibility, we can avoid getting up to the PTL for everything, we
 can rely on a team of people rather than just one person.
 
 Enter the Czar system: each project should have a number of liaisons /
 official contacts / delegates that are fully responsible to cover one
 aspect of the project. We need to have Bugs czars, which are
 responsible
 for getting bugs under control. We need to have Oslo czars, which serve
 as liaisons for the Oslo program but also as active project-local oslo
 advocates. We need Security czars, which the VMT can go to to progress
 quickly on plugging vulnerabilities. We need release management czars,
 to handle the communication and process with that painful OpenStack
 release manager. We need Gate czars to serve as first-line-of-contact
 getting gate issues fixed... You get the idea.
 
/flame-on
Let's call spades, spades here.  Czar is not only overkill, but the wrong 
metaphor.
/flame-off

Each position suggested here exists in corporate development projects:
- Bug czar == bug manager/administrator/QA engineer/whatever - someone in 
charge of making sure bugs get triaged, assigned and completed
- Oslo czar == systems engineers/project managers who make sure that the 
project is in line with the rest of the projects that together make an 
integrated release.  This position needs to stretch beyond just Oslo to 
encompass all the cross-project requirements and will likely be its own 
committee
- Gate Czar == integration engineer(manager)/QA engineer(manager)/build-release 
engineer.  This position would also likely be a liaison to Infra.
- Security Czar == security guru (that name takes me back ;-)
- Release management Czar == Project release manager
- Doc Czar == tech editor
- Tempest Czar == QA engineer(manager)

Yes, programs are now mostly big enough to require coordination and management. 
 The roles are long defined, so 

Re: [openstack-dev] [glance] Review priorities

2014-08-22 Thread Tripp, Travis S
Thanks for sending this out, Arnaud. I added two items to the etherpad for 
metadefs: the Glance client and the related Tempest patch.

Without the client, we can't get the related Horizon patches landed.

Glance Client: https://review.openstack.org/#/c/105231/ 
Tempest: https://review.openstack.org/113632/ 

For anybody looking at metadefs, I added a script to the etherpad [1] that can 
be run on a fresh devstack to get all of the Glance patches plus the first 
Horizon patch, which will enable this for image metadata.  (Go to Admin 
dashboard -> Images -> Update Metadata (row action).)  Please ensure that all 
code is committed or stashed before running the script (or snapshot your VM).

[1] https://etherpad.openstack.org/p/j3-glance-patches

Thanks,
Travis

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Thursday, August 21, 2014 4:27 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Review priorities

On 08/21/2014 11:59 PM, Arnaud Legendre wrote:
 Greetings,
 
 The Juno-3 feature freeze is not far away (September 4th). See [1] for 
 more information.
 We have a bunch of outstanding patches that need to get reviewed. We 
 are trying to come up with a list of the most important features/bugs [2].
 So far, we have the following:
 
 
 Features
 
 
 
 https://review.openstack.org/#/c/44355/ | Introduces eventlet
 executor for Glance Tasks
 
 https://review.openstack.org/#/c/80182/ | Add GPFS support as image
 store
 
 https://review.openstack.org/#/c/98737/ | Restrict users from
 downloading protected image
 
 https://review.openstack.org/111441 | Glance Metadata Definitions
 Catalog - DB
 
 https://review.openstack.org/111483 | Glance Metadata Definitions
 Catalog - Seed
 
 https://review.openstack.org/111455 | Glance Metadata Definitions
 Catalog - API
 

I'd like to add to this list the adoption of glance.store. The patch is not 
passing the gate because it depends on some changes that still need to happen 
in the gate and devstack. However, the work there is pretty much done and the 
patch is ready to be reviewed. The sooner we start addressing comments there, 
the better.

https://review.openstack.org/#/c/100636/

Flavio

 
 Bugs
 
 
 
 https://review.openstack.org/#/c/103959/ | Changes HTTP response
 code for unsupported methods
 
 https://review.openstack.org/#/c/115111/ | Add swift store upload
 recover
 
 https://bugs.launchpad.net/glance/+bug/1316233 | image data not
 deleted while deleting image in v2 api
 
 https://bugs.launchpad.net/glance/+bug/1316234 | Image delete calls
 slow due to synchronous wait on backend store delete
 
 Please edit the etherpad [2] if you think something is missing.
 I will add the J-3 tag to the bugs later today.
 
 Thanks,
 Arnaud
 
 
 [1]https://wiki.openstack.org/wiki/Juno_Release_Schedule
 [2]https://etherpad.openstack.org/p/j3-glance-patches
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


--
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder][third-party] How to make progress on cinder driver CI post J3

2014-08-22 Thread Asselin, Ramy
Fungi (on irc) suggested the following option to run the tests without 
reporting them to gerrit.
I’m still testing it out, but for the benefit of others using the “zuul” ci 
approach, try the “silent” pipeline [1]

Thanks,
Ramy

[1]https://github.com/openstack-infra/zuul/blob/master/doc/source/zuul.rst:
This will trigger jobs whenever a change is merged to a named branch (e.g., 
master). No output will be reported to Gerrit. This is useful for side effects 
such as creating per-commit tarballs.
- name: silent
  manager: IndependentPipelineManager
  trigger:
    - event: patchset-created

From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Friday, August 22, 2014 2:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder][third-party] How to make progress on 
cinder driver CI post J3



On Fri, Aug 22, 2014 at 11:19 AM, Asselin, Ramy 
ramy.asse...@hp.com wrote:
Many of us are still working on setting up and testing 3rd party ci for cinder 
drivers.

None of them currently have gerrit +1/-1 voting, but I heard there’s a plan to 
“disable” those that are “not working” post-J3 (Please correct me if I 
misunderstood).

However, Post-J3 is a great time to work on 3rd party ci for a few reasons:

1.   Increases the overall QA effort when it’s most needed.

2.   Many contributors have more time available since feature work is 
complete.

Since erroneous failing tests from non-voting ci accounts can still distract 
reviewers, I’d like to propose a compromise:
Marking 3rd party ci jobs still undergoing testing and stability as 
(non-voting).
e.g.
dsvm-tempest-my-driver

SUCCESS in 39m 27s (non-voting)

dsvm-tempest-my-driver

FAILURE in 39m 54s (non-voting)


This way, progress can still be made by cinder vendors working on setting up 
3rd party ci under ‘real’ load post J3 while minimizing distractions for cinder 
reviewers and other stakeholders.
I think this is consistent with how the OpenStack “Jenkins” CI marks 
potentially unstable jobs.

Please share your thoughts.

Thanks,
Ramy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Just FYI, I haven't necessarily heard that drivers etc. are going to be 
removed.  I would say however that the job should be removed until it's stable. 
There's no reason you can't do that (just don't submit your results) and still 
work on fixing things up.  My system for example has been running on every 
OpenStack commit today but I'm dumping results to a test location while I make 
sure that things are stable and probably won't turn it on until late next week 
when I'm sure it will perform.  Submitting a system that fails to start 80% of 
the time doesn't help any of us out IMO.

I realize there's a LOT of really hard work going on and progress being made, 
not discounting that at all.  Just pointing out that populating every review 
with "ci-system-xyz failed" doesn't help us out much either.  If it's backend 
problems, they need to be fixed; if it's infra problems, they need to be 
fixed.  That's the whole point of this exercise to begin with IIRC.
Thanks
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread Anne Gentle
On Fri, Aug 22, 2014 at 6:17 PM, Rochelle.RochelleGrober 
rochelle.gro...@huawei.com wrote:

 /flame-on
 Ok, this is funny to some of us in the community.  The general populace of
 this community is so against the idea of management that they will use the
 term for a despotic dictator as a position name rather than manager.
 Sorry, but this needed to be said.
 /flame-off

 Specific comments in line:

 Thierry Carrez wrote:
 
  Hi everyone,
 
  We all know being a project PTL is an extremely busy job. That's
  because
  in our structure the PTL is responsible for almost everything in a
  project:
 
  - Release management contact
  - Work prioritization
  - Keeping bugs under control
  - Communicate about work being planned or done
  - Make sure the gate is not broken
  - Team logistics (run meetings, organize sprints)
  - ...
 

 Point of clarification:  I've heard PTL=Project Technical Lead and
 PTL=Program Technical Lead. Which is it?  It is kind of important as
 OpenStack grows, because the first is responsible for *a* project, and the
 second is responsible for all projects within a program.


Now Program, formerly Project.


 I'd also like to set out as an example of a Program that is growing to
 encompass multiple projects, the Neutron Program.  Look at how it is
 expanding:

 Multiple sub-teams for:  LBAAS, DNAAS, GBP, etc.  This model could be
 extended such that:
 - the subteam is responsible for code reviews, including the first +2 for
 design, architecture and code of the sub-project, always also keeping an
 eye out that the sub-project code continues to both integrate well with the
 program, and that the program continues to provide the needed code bits,
 architecture modifications and improvements, etc. to support the
 sub-project.
 - the final +2/A would be from the Program reviewers to ensure that all
 integrate nicely together into a single, cohesive program.
 - This would allow sub-projects to have core reviewers, along with the
 program and be a good separation of duties.  It would also help to increase
 the number of reviews moving to merged code.
 - Taken to a logical stepping stone, you would have project technical
 leads for each project, and they would make up a program council, with the
 program technical lead being the chair of the council.

 This is a way to offload a good chunk of PTL tactical responsibilities and
 help them focus more on the strategic.

  They end up being completely drowned in those day-to-day operational
  duties, miss the big picture, can't help in development that much
  anymore, get burnt out. Since you're either the PTL or not the PTL,
  you're very alone and succession planning is not working that great
  either.
 
  There have been a number of experiments to solve that problem. John
  Garbutt has done an incredible job at helping successive Nova PTLs
  handling the release management aspect. Tracy Jones took over Nova bug
  management. Doug Hellmann successfully introduced the concept of Oslo
  liaisons to get clear point of contacts for Oslo library adoption in
  projects. It may be time to generalize that solution.
 
  The issue is one of responsibility: the PTL is ultimately responsible
  for everything in a project. If we can more formally delegate that
  responsibility, we can avoid getting up to the PTL for everything, we
  can rely on a team of people rather than just one person.
 
  Enter the Czar system: each project should have a number of liaisons /
  official contacts / delegates that are fully responsible to cover one
  aspect of the project. We need to have Bugs czars, which are
  responsible
  for getting bugs under control. We need to have Oslo czars, which serve
  as liaisons for the Oslo program but also as active project-local oslo
  advocates. We need Security czars, which the VMT can go to to progress
  quickly on plugging vulnerabilities. We need release management czars,
  to handle the communication and process with that painful OpenStack
  release manager. We need Gate czars to serve as first-line-of-contact
  getting gate issues fixed... You get the idea.
 
 /flame-on
 Let's call spades, spades here.  Czar is not only overkill, but the wrong
 metaphor.
 /flame-off


I'm with Rocky in the anti-czar-as-a-word camp. We all like clever names to
shed the corporate stigma, but this word ain't it. Liaison or lead?

Also wanted to point to https://wiki.openstack.org/wiki/PTLguide as it's
quite nice.

I think PTLs tend to find help when they absolutely are ready to fall over,
and I'm all for a plan that helps us not fall over. I've had people step up
for bug triaging, gate work, tests, and oslo, sometimes one person did
three or four at once. I certainly can't do it all for cross-project. Based
on what I've seen, I doubt that we can add this much formality to this
across 20+ programs. It's the bigger more integrated project vs. smaller
more focused project difference that won't let us do a pattern here. We
can certainly document the 

Re: [openstack-dev] [all] [ptls] The Czar system, or how to scale PTLs

2014-08-22 Thread John Dickinson
I think Anne makes some excellent points about the pattern being proposed being 
unlikely to be commonly implemented across all the programs (or, at best, very 
difficult). Let's not try to formalize another best practice that works many 
times and force it to work every time. Here's an alternate proposal:

Let's let PTLs be PTLs and effectively coordinate and manage the activity in 
their respective projects. And let's get the PTLs together for one or two days 
every cycle to discuss project issues. Just PTLs, and let's focus on the 
project management stuff and some cross-project issues.

Getting the PTLs together would allow them to discuss cross-project issues, 
share frustrations and solutions about what does and doesn't work. Basically, 
think of it as a mid-cycle meetup, but for PTLs. (Perhaps we could even ask the 
Foundation to sponsor it.)

--John

On Aug 22, 2014, at 6:02 PM, Anne Gentle a...@openstack.org wrote:

 
 
 
 On Fri, Aug 22, 2014 at 6:17 PM, Rochelle.RochelleGrober 
 rochelle.gro...@huawei.com wrote:
 /flame-on
 Ok, this is funny to some of us in the community.  The general populace of 
 this community is so against the idea of management that they will use the 
 term for a despotic dictator as a position name rather than manager.  
 Sorry, but this needed to be said.
 /flame-off
 
 Specific comments in line:
 
 Thierry Carrez wrote:
 
  Hi everyone,
 
  We all know being a project PTL is an extremely busy job. That's
  because
  in our structure the PTL is responsible for almost everything in a
  project:
 
  - Release management contact
  - Work prioritization
  - Keeping bugs under control
  - Communicate about work being planned or done
  - Make sure the gate is not broken
  - Team logistics (run meetings, organize sprints)
  - ...
 
 
 Point of clarification:  I've heard PTL=Project Technical Lead and 
 PTL=Program Technical Lead. Which is it?  It is kind of important as 
 OpenStack grows, because the first is responsible for *a* project, and the 
 second is responsible for all projects within a program.
 
 
 Now Program, formerly Project.
  
 I'd also like to set out as an example of a Program that is growing to 
 encompass multiple projects, the Neutron Program.  Look at how it is 
 expanding:
 
 Multiple sub-teams for:  LBAAS, DNAAS, GBP, etc.  This model could be 
 extended such that:
 - the subteam is responsible for code reviews, including the first +2 for 
 design, architecture and code of the sub-project, always also keeping an eye 
 out that the sub-project code continues to both integrate well with the 
 program, and that the program continues to provide the needed code bits, 
 architecture modifications and improvements, etc. to support the sub-project.
 - the final +2/A would be from the Program reviewers to ensure that all 
 integrate nicely together into a single, cohesive program.
 - This would allow sub-projects to have core reviewers, along with the 
 program and be a good separation of duties.  It would also help to increase 
 the number of reviews moving to merged code.
 - Taken to a logical stepping stone, you would have project technical leads 
 for each project, and they would make up a program council, with the program 
 technical lead being the chair of the council.
 
 This is a way to offload a good chunk of PTL tactical responsibilities and 
 help them focus more on the strategic.
 
  They end up being completely drowned in those day-to-day operational
  duties, miss the big picture, can't help in development that much
  anymore, get burnt out. Since you're either the PTL or not the PTL,
  you're very alone and succession planning is not working that great
  either.
 
  There have been a number of experiments to solve that problem. John
  Garbutt has done an incredible job at helping successive Nova PTLs
  handling the release management aspect. Tracy Jones took over Nova bug
  management. Doug Hellmann successfully introduced the concept of Oslo
  liaisons to get clear point of contacts for Oslo library adoption in
  projects. It may be time to generalize that solution.
 
  The issue is one of responsibility: the PTL is ultimately responsible
  for everything in a project. If we can more formally delegate that
  responsibility, we can avoid getting up to the PTL for everything, we
  can rely on a team of people rather than just one person.
 
  Enter the Czar system: each project should have a number of liaisons /
  official contacts / delegates that are fully responsible to cover one
  aspect of the project. We need to have Bugs czars, which are
  responsible
  for getting bugs under control. We need to have Oslo czars, which serve
  as liaisons for the Oslo program but also as active project-local oslo
  advocates. We need Security czars, which the VMT can go to to progress
  quickly on plugging vulnerabilities. We need release management czars,
  to handle the communication and process with that painful OpenStack
  release manager. We need Gate 

Re: [openstack-dev] [TripleO][Ironic] Unique way to get a registered machine?

2014-08-22 Thread Gregory Haynes
Excerpts from Steve Kowalik's message of 2014-08-22 06:32:04 +:
 At the moment, if you run register-nodes a second time with the same 
 list of nodes, it will happily try to register them and then blow up 
 when Ironic or Nova-bm returns an error. If operators are going to 
 update their master list of nodes to add or remove machines and then run 
 register-nodes again, we need a way to skip registering nodes that are 
 already registered -- except that I don't really want to extract out the 
 UUID of the registered nodes, because that puts an onus on the operators 
 to make sure that the UUID is listed in the master list, and that would 
 mean requiring manual data entry, or some way to get that data back out 
 of the tool they use to manage their master list, which may not even have 
 an API. Because this is intended as a bridge between an operator's 
 master list and a baremetal service, the intent is for it to run 
 again and again as changes happen.

I don't understand why inputting the UUID into the master list requires
manual entry. Why can't we, on insertion, also insert the UUID into the
nodes list? One potential downside is that operators cannot fully regenerate
the nodes list when editing it but have to 'merge in' changes, but IMO
this is a good enough start and preferable to some non-straightforward
implicit behavior done by our own routine.

 This means we need a way to uniquely identify the machines in the list 
 so we can tell if they are already registered.
 
 For the pxe_ssh driver, this means the set of MAC addresses must 
 intersect.
 
 For other drivers, we think that the pm_address for each machine will 
 be unique. Would it be possible to add some advice to that effect to 
 Ironic's driver API?

Building off my previous comment - if we really want to provide an
implicit updating mechanism so operators can regenerate node lists in
their entirety, then why not build it as a new processing stage? I'm thinking:

1) generate just_nodes.json
2) run update_nodes to update nodes.json with the new data and old
UUIDs where applicable
3) pass nodes.json into register-nodes

Your proposal of auto detecting updated nodes would live purely in the
update_nodes script but operators could elect to skip this if their node
generation tooling supports it.
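
A rough sketch of step 2, assuming the node lists are JSON lists of dicts
and that pm_address is the match key (both are assumptions, per the driver
discussion above):

    import json

    def update_nodes(just_nodes_path, nodes_path):
        # Merge a regenerated node list with the previously registered
        # one, carrying UUIDs across by pm_address.
        with open(just_nodes_path) as f:
            new = json.load(f)
        try:
            with open(nodes_path) as f:
                old = dict((n['pm_address'], n) for n in json.load(f))
        except IOError:
            old = {}  # first run: nothing registered yet
        for node in new:
            match = old.get(node.get('pm_address'))
            if match and 'uuid' in match:
                node['uuid'] = match['uuid']
        with open(nodes_path, 'w') as f:
            json.dump(new, f, indent=2)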

- Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-22 Thread Sumit Naiksatam
On Thu, Aug 21, 2014 at 7:28 AM, Kyle Mestery mest...@mestery.com wrote:
 On Thu, Aug 21, 2014 at 5:12 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 20/08/14 18:28, Salvatore Orlando wrote:
 Some comments inline.

 Salvatore

 On 20 August 2014 17:38, Ihar Hrachyshka ihrac...@redhat.com
 mailto:ihrac...@redhat.com wrote:

 Hi all,

 I've read the proposal for incubator as described at [1], and I
 have several comments/concerns/suggestions to this.

 Overall, the idea of giving some space for experimentation that
 does not alienate parts of the community from Neutron is good. In that
 way, we may relax review rules and quicken turnaround for preview
 features without losing too much control over those features.

 Though the way it's to be implemented leaves several concerns, as
 follows:

 1. From packaging perspective, having a separate repository and
 tarballs seems not optimal. As a packager, I would better deal with
 a single tarball instead of two. Meaning, it would be better to
 keep the code in the same tree.

 I know that we're afraid of shipping code for which some users
 may expect the usual level of support, stability and
 compatibility. This can be solved by making it explicit that the
 incubated code is unsupported and used at the user's own risk. 1) The
 experimental code probably wouldn't be installed unless explicitly
 requested, and 2) it would be put in a separate namespace (like
 'preview', 'experimental', or 'staging', as they call it in the Linux
 kernel world [2]).

 This would facilitate keeping commit history instead of losing it
 during graduation.

 Yes, I know that people don't like to be called experimental or
 preview or incubator... And maybe neutron-labs repo sounds more
 appealing than an 'experimental' subtree in the core project.
 Well, there are lots of EXPERIMENTAL features in Linux kernel that
 we actively use (for example, btrfs is still considered
 experimental by Linux kernel devs, while being exposed as a
 supported option to RHEL7 users), so I don't see how that naming
 concern is significant.


 I think this is the whole point of the discussion around the
 incubator and the reason for which, to the best of my knowledge,
 no proposal has been accepted yet.


 I wonder where the discussion around the proposal is taking place. Is it public?

 The discussion started out privately as the incubation proposal was
 put together, but it's now on the mailing list, in person, and in IRC
 meetings. Lets keep the discussion going on list now.


In the spirit of keeping the discussion going, I think we probably
need to iterate in practice on this idea a little bit before we can
crystallize on the policy and process for this new repo. Here are few
ideas on how we can start this iteration:

* Namespace for the new repo:
Should this be in the neutron namespace, or a completely different
namespace like neutron labs? Perhaps creating a separate namespace
will help the packagers to avoid issues of conflicting package owners
of the namespace.

* Dependency on Neutron (core) repository:
We would need to sort this out so that we can get UTs to run and pass
in the new repo. Can we set the dependency on Neutron milestone
releases? We already publish tarballs for the milestone releases, but
I am not sure we publish these as packages to PyPI. If not, could we
start doing that? With this in place, the incubator would always lag
the Neutron core by at most one milestone release.
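
Concretely, if milestone tarballs were on PyPI, the incubator's requirements
could pin against them; the version strings below are hypothetical:

    # Hypothetical requirements.txt entry in the incubator repo,
    # tracking the latest Neutron milestone rather than trunk:
    neutron>=2014.2.b2,<2014.2.b3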

* Modules overlapping with the Neutron (core) repository:
We could initially start with the features that required very little
or no changes to the Neutron core, to avoid getting into the issue of
blocking on changes to the Neutron (core) repository before progress
can be made in the incubator.

* Packaging of ancillary code (CLI, Horizon, Heat):
We start by adding these as subdirectories inside each feature. The
packaging folks are going to find it difficult to package this.
However, can we start with this approach, and have a parallel
discussion on how we can evolve this strategy? Perhaps the individual
projects might decide to allow support for the Neutron incubator
features once they can actually see what goes into the incubator,
and/or other projects might also follow the incubator approach.

If we have loose consensus on the above, some of us folks who are
involved with features that are candidates for the incubator (e.g.
GBP, LBaaS), can immediately start iterating on this plan, and report
back our progress in a specified time frame.

Thanks,
~Sumit.


 2. If those 'extras' are really moved into a separate repository
 and tarballs, this will raise questions on whether packagers even
 want to cope with it before graduation. When it comes to supporting
 another build manifest for a piece of code of unknown quality, this
 is not the same as just cutting part of the code into a separate
 experimental/labs package. So unless I'm explicitly asked to
 package the