Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Irena Berezovsky
Hi Gary,
I understand your concern. I think CI is mandatory to ensure that the code is not 
broken. While unit tests provide great value, we may still end up with code that 
does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena
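
A minimal sketch of what such a trigger could look like, assuming a Gerrit
stream-events listener; the path prefix, account and host names below are
illustrative, not a description of any existing CI system:

    import json
    import subprocess

    # Only trigger the CI job when a nova change touches PCI-related code.
    PCI_PATHS = ('nova/pci/',)  # assumed path prefix

    def changed_files(change_number):
        # 'gerrit query --files' includes the file list of the current patch set.
        out = subprocess.check_output(
            ['ssh', '-p', '29418', 'ci-user@review.openstack.org', 'gerrit',
             'query', '--format=JSON', '--current-patch-set', '--files',
             'change:%s' % change_number])
        record = json.loads(out.splitlines()[0])
        return [f['file'] for f in record['currentPatchSet'].get('files', [])]

    def should_trigger(event):
        # Events come from 'gerrit stream-events', one JSON object per line.
        if event.get('type') != 'patchset-created':
            return False
        if event['change']['project'] != 'openstack/nova':
            return False
        return any(f.startswith(PCI_PATHS)
                   for f in changed_files(event['change']['number']))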

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox has already established CI support on Mellanox SR-IOV NICs, as one of 
the jobs of the Mellanox External Testing CI 
(Check-MLNX-Neutron-ML2-Sriov-driver: 
http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
It is not voting yet, but it will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it to our CI testbed. I guess that mlnx is doing the same for 
their MD (mechanism driver) as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding the PCI support? I understand that this is something that requires 
specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Get Tenant Details in novaclient

2014-08-13 Thread Sachi Gupta
Hi,

nova --os-tenant-name admin list --tenant c40ad5830e194f2296ad11a96cefc487 
--all-tenants 1 works fine and returns all the servers available, where 
c40ad5830e194f2296ad11a96cefc487 is the id of the demo tenant, whereas 
nova --os-tenant-name admin list --tenant demo --all-tenants 1 returns 
nothing when the tenant name demo is passed in place of its id.

For the above bug, we need to get the tenant details in novaclient on the 
basis of the tenant name being passed to the nova API, so that the list of 
servers can be shown by either tenant_name or tenant_id.

Also, to interact between OpenStack components we can use REST calls.

Can anyone suggest how to get the keystone tenant details in novaclient to 
make the above functionality work?
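
A rough sketch of one way to do the lookup, assuming python-keystoneclient
and a Keystone v2 endpoint; the credentials and URL below are placeholders:

    from keystoneclient.v2_0 import client as keystone_client
    from novaclient.v1_1 import client as nova_client

    # Resolve the tenant name to its id via keystone first.
    keystone = keystone_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://127.0.0.1:5000/v2.0')
    tenant = keystone.tenants.find(name='demo')  # raises NotFound if absent

    # Then pass the resolved id to nova's server listing.
    nova = nova_client.Client('admin', 'secret', 'admin',
                              'http://127.0.0.1:5000/v2.0')
    servers = nova.servers.list(search_opts={'all_tenants': 1,
                                             'tenant_id': tenant.id})

novaclient itself could do the same resolution internally when --tenant is
not a valid id, at the cost of an extra keystone round trip.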

Thanks in advance
Sachi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-08-13 Thread Irena Berezovsky
Hi,
Mellanox CI was also failing due to the same issue, 
https://bugs.launchpad.net/neutron/+bug/1355780 (apparently a duplicate of 
https://bugs.launchpad.net/neutron/+bug/1353309).
We have fixed the issue locally for now, by patching the server-side RPC 
version support to 1.3.

BR,
Irena


From: Hemanth Ravi [mailto:hemanthrav...@gmail.com]
Sent: Wednesday, August 13, 2014 12:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [third-party] Update on third party CI 
in Neutron

Kyle,

One Convergence third-party CI is failing due to 
https://bugs.launchpad.net/neutron/+bug/1353309.

Let me know if we should turn off the CI logs until this is fixed or if we need 
to fix anything on the CI end. I think one other third-party CI (Mellanox) is 
failing due to the same issue.

Regards,
-hemanth

On Tue, Jul 29, 2014 at 6:02 AM, Kyle Mestery mest...@mestery.com wrote:
On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com wrote:
 Kyle,

 One Convergence CI has been fixed (setup issue) and is running without the
 failures for ~10 days now. Updated the etherpad.

Thanks for the update Hemanth, much appreciated!

Kyle

 Thanks,
 -hemanth


 On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com wrote:


 On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery mest...@noironetworks.com wrote:

 PLUMgrid

 Not saving enough logs

 All Jenkins slaves were just updated to upload all required logs. PLUMgrid
 CI should be good now.


 Thanks,
 Fawad Khaliq


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Get Tenant Details in novaclient

2014-08-13 Thread Chen CH Ji
This spec has some thoughts on functionality to validate the tenant or user
that is consumed by nova; not sure whether it's what you want, FYI:

https://review.openstack.org/#/c/92507/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Sachi Gupta sachi.gu...@tcs.com
To: openstack-dev@lists.openstack.org,
Date:   08/13/2014 01:58 PM
Subject:[openstack-dev] Get Tenant Details in novaclient



Hi,

nova --os-tenant-name admin list --tenant c40ad5830e194f2296ad11a96cefc487
--all-tenants 1 works fine and returns all the servers available, where
c40ad5830e194f2296ad11a96cefc487 is the id of the demo tenant, whereas
nova --os-tenant-name admin list --tenant demo --all-tenants 1 returns
nothing when the tenant name demo is passed in place of its id.

For the above bug, we need to get the tenant details in novaclient on the
basis of the tenant name being passed to the nova API, so that the list of
servers can be shown by either tenant_name or tenant_id.

Also, to interact between OpenStack components we can use REST calls.

Can anyone suggest how to get the keystone tenant details in novaclient to
make the above functionality work?

Thanks in advance
Sachi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][policy] Group Based Policy - Renaming

2014-08-13 Thread Baohua Yang
I like the policy-group naming.

policy-target is better than policy-point, but it still feels a little
confusing, as target usually suggests what the policy is for, not
what it applies to.

Hence, policy-endpoint might be more exact.


On Fri, Aug 8, 2014 at 11:43 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/07/2014 01:17 PM, Ronak Shah wrote:

 Hi,
 Following a very interesting and vocal thread on GBP for last couple of
 days and the GBP meeting today, GBP sub-team proposes following name
 changes to the resource.


 policy-point for endpoint
 policy-group for endpointgroup (epg)

 Please reply if you feel that it is not ok with reason and suggestion.


 Thanks Ronak and Sumit for sharing. I, too, wasn't able to attend the
 meeting (was in other meetings yesterday and today).

 I'm very happy with the change from endpoint-group - policy-group.

 policy-point is better than endpoint, for sure. The only other suggestion
 I might have would be to use policy-target instead of policy-point,
 since the former clearly delineates what the object is used for (a target
 for a policy).

 But... I won't raise a stink about this. Sorry for sparking long and
 tangential discussions on GBP topics earlier this week. And thanks to the
 folks who persevered and didn't take too much offense to my questioning.

 Best,
 -jay



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best wishes!
Baohua
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Gary Kotton
Hi,
If I understand correctly, the only way that this works is with nova and neutron 
running. My understanding would be to have the CI running with this as the 
configuration. I just think that this should be a prerequisite similar to 
having validations of virtualization drivers.
Does that make sense?
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Wednesday, August 13, 2014 at 9:01 AM
To: Gary Kotton gkot...@vmware.com, OpenStack List 
openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
I understand your concern. I think CI is mandatory to ensure that the code is not 
broken. While unit tests provide great value, we may still end up with code that 
does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox has already established CI support on Mellanox SR-IOV NICs, as one of 
the jobs of the Mellanox External Testing CI 
(Check-MLNX-Neutron-ML2-Sriov-driver: 
http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
It is not voting yet, but it will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it to our CI testbed. I guess that mlnx is doing the same for 
their MD (mechanism driver) as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding the PCI support? I understand that this is something that requires 
specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Angus Lees
I'm doing various small cleanup changes as I explore the neutron codebase.  
Some of these cleanups are to fix actual bugs discovered in the code.  Almost 
all of them are tiny and obviously correct.

A recurring reviewer comment is that the change should have had an 
accompanying bug report and that they would rather that change was not 
submitted without one (or at least, they've -1'ed my change).

I often didn't discover these issues by encountering an actual production 
issue so I'm unsure what to include in the bug report other than basically a 
copy of the change description.  I also haven't worked out the pattern yet of 
which changes should have a bug and which don't need one.

There's a section describing blueprints in NeutronDevelopment but nothing on 
bugs.  It would be great if someone who understands the nuances here could add 
some words on when to file bugs:
Which types of changes should have accompanying bug reports?
What is the purpose of that bug, and what should it contain?

-- 
Thanks,
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Kevin Benton
I'm not sure what the guideline is, but I would like to point out a good
reason to have the bug report even for obvious fixes.
When users encounter bugs, they go to Launchpad to report them. They don't
first scan the commits of the master branch to see what was fixed. Having
the bug in launchpad provides a way to track the status (fixed, backported,
impact, etc.) of the bug and reduces the chances of duplicate bugs.

Can you provide an example of a patch that you felt was trivial that a
reviewer requested a bug for so we have something concrete to discuss and
establish guidelines around?
On Aug 13, 2014 12:32 AM, Angus Lees g...@inodes.org wrote:

 I'm doing various small cleanup changes as I explore the neutron codebase.
 Some of these cleanups are to fix actual bugs discovered in the code.
  Almost
 all of them are tiny and obviously correct.

 A recurring reviewer comment is that the change should have had an
 accompanying bug report and that they would rather that change was not
 submitted without one (or at least, they've -1'ed my change).

 I often didn't discover these issues by encountering an actual production
 issue so I'm unsure what to include in the bug report other than basically
 a
 copy of the change description.  I also haven't worked out the pattern yet
 of
 which changes should have a bug and which don't need one.

 There's a section describing blueprints in NeutronDevelopment but nothing
 on
 bugs.  It would be great if someone who understands the nuances here could
 add
 some words on when to file bugs:
 Which types of changes should have accompanying bug reports?
 What is the purpose of that bug, and what should it contain?

 --
 Thanks,
  - Gus

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Julien Danjou
On Wed, Aug 13 2014, Osanai, Hisashi wrote:

 On Tuesday, August 12, 2014 10:14 PM, Julien Danjou wrote:
 The py33 gate shouldn't be activated for the stable/icehouse. I'm no
 infra-config expert, but we should be able to patch it for that (hint?).

 Thank you for the response. 

 Now we have two choices:
 (1) refrain from activating the py33 gate
 (2) patch happybase

 I prefer (1) first, because (2) is only a problem if we activate the py33
 gate in stable/icehouse together with python33, and as you mentioned the
 py33 gate shouldn't be activated in stable/icehouse; yet there is still an
 entry for the py33 gate in tox.ini, so I would like to remove it from
 stable/icehouse.

 If it's OK, I will file a bug report for tox.ini in stable/icehouse and
 commit a fix for it (then proceed with
 https://review.openstack.org/#/c/112806/).

This is not a problem in tox.ini; this is a problem in the
infrastructure config. Removing py33 from the envlist in tox.ini isn't
going to fix anything, unfortunately.

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gantt project

2014-08-13 Thread Dugger, Donald D
Our initial goal is to just split the scheduler out into a separate project, 
not make it a part of Nova compute.  The functionality will be exactly the same 
as the Nova scheduler (the vast majority of the code will be a copy of the Nova 
scheduler code modulo some path name changes).  When the split is complete and 
we've thoroughly tested it to show the same functionality with Gantt we can 
make Gantt the default Nova scheduler, target all new scheduler work into Gantt 
and deprecate use of the Nova scheduler.  Hopefully in the L or M time frame we 
would excise the scheduler code out of Nova.

I would certainly not advocate forced usage of Gantt by fiat for other 
projects.  Instead we should evaluate the scheduling requirements needed by 
other projects, see if they can be handled by a common scheduler and, if so, 
enhance Gantt appropriately so that other projects can use it. (Hopefully if 
we build Gantt, they will come :-)) This should be no worse than the current 
situation where projects are forced to create their own scheduler; projects 
will have the option to utilize Gantt and not waste effort duplicating 
scheduler functionality.

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: John Dickinson [mailto:m...@not.mn] 
Sent: Tuesday, August 12, 2014 9:24 AM
To: Dugger, Donald D
Cc: OpenStack Development Mailing List (not for usage questions); Michael 
Still; Mark Washenberger; Dolph Mathews; Lyle, David; Kyle Mestery; John 
Griffith; Eoghan Glynn; Zane Bitter; Nikhil Manchanda; Devananda van der Veen; 
Doug Hellmann; James E. Blair; Anne Gentle; Matthew Treinish; Robert Collins; 
Dean Troyer; Thierry Carrez; Kurt Griffiths; Sergey Lukjanov; Jarret Raim
Subject: Re: Gantt project

Thanks for the info. It does seem like most OpenStack projects have some 
concept of a scheduler, as you mentioned. Perhaps that's expected in any 
distributed system.

Is it expected or assumed that Gantt will become the common scheduler for all 
OpenStack projects? That is, is Gantt's plan and/or design goals to provide 
scheduling (or a scheduling framework) for all OpenStack projects? Perhaps 
this is a question for the TC rather than Don. [1]

Since Gantt is initially intended to be used by Nova, will it be under the 
compute program or will there be a new program created for it?


--John


[1] You'll forgive me, but I've certainly seen OpenStack projects move from 
"you can use it if you want" to "you must start using this" in the past.




On Aug 11, 2014, at 11:09 PM, Dugger, Donald D donald.d.dug...@intel.com 
wrote:

 This is to make sure that everyone knows about the Gantt project and to make 
 sure that no one has a strong aversion to what we are doing.
  
 The basic goal is to split the scheduler out of Nova and create a separate 
 project that, ultimately, can be used by other OpenStack projects that have a 
 need for scheduling services.  Note that we have no intention of forcing 
 people to use Gantt but it seems silly to have a scheduler inside Nova, 
 another scheduler inside Cinder, another scheduler inside Neutron and so 
 forth.  This is clearly predicated on the idea that we can create a common, 
 flexible scheduler that can meet everyone's needs but, as I said, there is 
 no rule that any project has to use Gantt; if we don't meet your needs you 
 are free to roll your own scheduler.
  
 We will start out by just splitting the scheduler code out of Nova into a 
 separate project that will initially only be used by Nova.  This will be 
 followed by enhancements, like a common API, that can then be utilized by 
 other projects.
  
 We are cleaning up the internal interfaces in the Juno release with the 
 expectation that early in the Kilo cycle we will be able to do the split and 
 create a Gantt project that is completely compatible with the current Nova 
 scheduler.
  
 Hopefully our initial goal (a separate project that is completely compatible 
 with the Nova scheduler) is not too controversial but feel free to reply with 
 any concerns you may have.
  
 --
 Don Dugger
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 Ph: 303/443-3786


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PCI support

2014-08-13 Thread Irena Berezovsky
Generally, I agree with you. But it's a little tricky.
There are different types of SR-IOV NICs, and what works for one vendor may 
be broken for another.
I think that both current SR-IOV networking flavors, embedded switching (Intel, 
Mellanox) and Cisco VM-FEX, should be verified for relevant nova patches.
What tests do you think should be run on the nova side?

Thanks,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Wednesday, August 13, 2014 10:10 AM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Hi,
If I understand correctly, the only way that this works is with nova and neutron 
running. My understanding would be to have the CI running with this as the 
configuration. I just think that this should be a prerequisite similar to 
having validations of virtualization drivers.
Does that make sense?
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Wednesday, August 13, 2014 at 9:01 AM
To: Gary Kotton gkot...@vmware.com, OpenStack List 
openstack-dev@lists.openstack.org
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
I understand your concern. I think CI is mandatory to ensure that the code is not 
broken. While unit tests provide great value, we may still end up with code that 
does not work...
I am not sure how this code can be checked for validity without running the 
neutron part.
Probably our CI job should be triggered by nova changes in the PCI area.
What do you suggest?

Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Tuesday, August 12, 2014 4:29 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks, the concern is for the code in Nova and not in Neutron. That is, there 
is quite a lot of PCI code being added and no way of knowing that it actually 
works (unless we trust the developers working on it :)).
Thanks
Gary

From: Irena Berezovsky ire...@mellanox.com
Date: Tuesday, August 12, 2014 at 10:25 AM
To: OpenStack List openstack-dev@lists.openstack.org
Cc: Gary Kotton gkot...@vmware.com
Subject: RE: [openstack-dev] [Nova] PCI support

Hi Gary,
Mellanox has already established CI support on Mellanox SR-IOV NICs, as one of 
the jobs of the Mellanox External Testing CI 
(Check-MLNX-Neutron-ML2-Sriov-driver: 
http://144.76.193.39/ci-artifacts/94888/13/Check-MLNX-Neutron-ML2-Sriov-driver).
It is not voting yet, but it will be soon.

BR,
Irena

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, August 11, 2014 5:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] PCI support

Thanks for the update.

From: Robert Li (baoli) ba...@cisco.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Monday, August 11, 2014 at 5:08 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova] PCI support

Gary,

Cisco is adding it to our CI testbed. I guess that mlnx is doing the same for 
their MD (mechanism driver) as well.

-Robert

On 8/11/14, 9:05 AM, Gary Kotton gkot...@vmware.com wrote:

Hi,
At the moment all of the drivers are required to have CI support. Are there any 
plans regarding the PCI support? I understand that this is something that requires 
specific hardware. Are there any plans to add this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.

That's fair.
 
 The proposal as it stands now is that we would have a public list of
 features that are ready to occupy a slot. That list would the ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.

Yeah, that's pretty much what I mean by the championing being subsumed
under the group will.

What's lost is not so much the ability to champion something, as the
freedom to do so in an independent/emergent way.

(Note that this is explicitly not verging into the retrospective veto
policy discussion on another thread[1], I'm totally assuming good faith
and good intent on the part of such champions)
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
 the process and setting expectations for our users. At the moment it's
 very confusing as a user: there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.

Yeah, so I guess it would be worth drilling down into that user
confusion.

Are users confused because they don't understand the current nature
of the group dynamic, the unseen hand that causes some blueprints to
prosper while others fester seemingly unnoticed?

(for example, in the sense of not appreciating the emergent championing
done by say the core subset interested in libvirt)

Or are they confused in that they read some implicit contract or
commitment into the targeting of those 100 blueprints to a release
cycle?

(in the sense of expecting that the core team will land all/most of those
100 targeted BPs within the cycle)

Cheers,
Eoghan 

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-August/042728.html

  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
 
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
 
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
 
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Michael
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Nikola Đipanov
On 08/13/2014 04:05 AM, Michael Still wrote:
 On Wed, Aug 13, 2014 at 4:26 AM, Eoghan Glynn egl...@redhat.com wrote:

 It seems like this is exactly what the slots give us, though. The core 
 review
 team picks a number of slots indicating how much work they think they can
 actually do (less than the available number of blueprints), and then
 blueprints queue up to get a slot based on priorities and turnaround time
 and other criteria that try to make slot allocation fair. By having the
 slots, not only is the review priority communicated to the review team, it
 is also communicated to anyone watching the project.

 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.
 
 I think that's because we've focussed in this discussion on the slots
 themselves, not the process of obtaining a slot.
 
 The proposal as it stands now is that we would have a public list of
  features that are ready to occupy a slot. That list would then be ranked
 in order of priority to the project, and the next free slot goes to
 the top item on the list. The ordering of the list is determined by
 nova-core, based on their understanding of the importance of a given
 thing, as well as what they are hearing from our users.
 
 So -- there's totally scope for lobbying, or for a subset of core to
 champion a feature to land, or for a company to explain why a given
 feature is very important to them.
 
 It sort of happens now -- there is a subset of core which cares more
 about xen than libvirt for example. We're just being more open about
 the process and setting expectations for our users. At the moment it's
 very confusing as a user: there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.
 

While I agree with the motivation for this (setting expectations), I
fail to see how this is different from what the Swift guys seem to be
doing, apart from more red tape.

I would love for us to say: "If you want your feature in, you need to
convince us that it's awesome and that we need to listen to you, by
being active in the community (not only by means of writing code, of
course)."

I fear that slots will have us saying: "Here's another check-box for you
to tick, and the code goes in", which, in addition to not communicating
that we are ultimately the ones who choose what goes in, regardless of
slots, also shifts the conversation away from what is really important:
the relative merit of the feature itself.

But it obviously depends on the implementation.

N.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-13 Thread David Pineau
Hello,

I have currently set up the Scality CI not to report, mostly because it
isn't fully functional yet (the machine it runs on turns out to be
undersized and thus the tests fail on some timeouts), and partly because
it's currently a nightly build. I have no way of testing multiple
patchsets at the same time, so it is easier this way.

How do you plan to make the different 3rd-party CIs official? I
remember that the cinder meeting about that at the Atlanta Summit
concluded that a nightly build would be enough, but such a build cannot
really report on gerrit.

David Pineau
gerrit: Joachim
IRC#freenode: joa

2014-08-13 2:28 GMT+02:00 Asselin, Ramy ramy.asse...@hp.com:
 I forked jaypipes' repos and am working on extending them to support
 nodepool, a log server, etc.

 Still WIP but generally working.



 If you need help, ping me on IRC #openstack-cinder (asselin)



 Ramy



 From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
 Sent: Monday, August 11, 2014 11:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems



 On 12 August 2014 07:26, Amit Das amit@cloudbyte.com wrote:

 I would like some guidance in this regards in form of some links, wiki pages
 etc.



 I am currently gathering the driver cert test results (i.e. tempest tests
 from devstack) in our environment, and CI setup would be my next step.



 This should get you started:

 http://ci.openstack.org/third_party.html



 Then Jay Pipes' excellent two part series will help you with the details of
 getting it done:

 http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

 http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing-system-part-2/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
David Pineau,
Developer R&D at Scality

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-13 Thread Daniel P. Berrange
On Tue, Aug 12, 2014 at 10:09:52PM +0100, Mark McLoughlin wrote:
 On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
  On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
   On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
While forcing people to move to a newer version of libvirt is
doable on most environments, do we want to do that now? What is
the benefit of doing so?
   [...]
   
   The only dog I have in this fight is that using the split-out
   libvirt-python on PyPI means we finally get to run Nova unit tests
   in virtualenvs which aren't built with system-site-packages enabled.
   It's been a long-running headache which I'd like to see eradicated
   everywhere we can. I understand though if we have to go about it
   more slowly, I'm just excited to see it finally within our grasp.
   -- 
   Jeremy Stanley
  
  We aren't quite forcing people to move to newer versions. Only those
  installing nova test-requirements need newer libvirt.
 
 Yeah, I'm a bit confused about the problem here. Is it that people want
 to satisfy test-requirements through packages rather than using a
 virtualenv?
 
 (i.e. if people just use virtualenvs for unit tests, there's no problem
 right?)
 
 If so, is it possible/easy to create new, alternate packages of the
 libvirt python bindings (from PyPI) on their own separately from the
 libvirt.so and libvirtd packages?

The libvirt python API is (mostly) automatically generated from a
description of the XML that is built from the C source files. In
tree we have fakelibvirt, which is a semi-crappy attempt to provide
a pure python libvirt client API with the same signature. IIUC, what
you are saying is that we should get a better fakelibvirt that is
truly identical, with the same API coverage/signatures as real libvirt?
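
For illustration, a tiny pure-python sketch of the idea, mirroring a couple
of real libvirt entry points (the actual fakelibvirt in the nova tree covers
far more of the API):

    VIR_DOMAIN_RUNNING = 1  # same value as the real module-level constant


    class virDomain(object):
        def __init__(self, name):
            self._name = name

        def name(self):
            return self._name

        def info(self):
            # Same shape as the real API:
            # [state, maxMem, memory, nrVirtCpu, cpuTime]
            return [VIR_DOMAIN_RUNNING, 2048, 2048, 1, 0]


    class virConnect(object):
        def lookupByName(self, name):
            return virDomain(name)


    def openAuth(uri, auth, flags=0):
        # Real signature: libvirt.openAuth(uri, auth, flags) -> virConnect
        return virConnect()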


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Using any username/password to create tempest clients

2014-08-13 Thread Udi Kalifon
Hello.

I am writing a tempest scenario for keystone. In this scenario I create a 
domain, project and a user with admin rights on the project. I then try to 
instantiate a Manager so I can call keystone using the new user credentials:

    creds = KeystoneV3Credentials(username=dom1proj1admin_name,
                                  password=dom1proj1admin_name,
                                  domain_name=dom1_name,
                                  user_domain_name=dom1_name)
    auth_provider = KeystoneV3AuthProvider(creds)
    creds = auth_provider.fill_credentials()
    admin_client = clients.Manager(interface=self._interface,
                                   credentials=creds)

The problem is that I get unauthorized return codes for every call I make 
with this client. I verified that the user is created properly and has the 
needed credentials, by manually authenticating and getting a token with his 
credentials and then using that token. Apparently, in my code I don't create 
the creds properly or I'm missing another step. How can I use the new user in 
tempest properly?
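
One thing worth checking (a guess; the attribute names below are assumed, so
verify them against the tempest auth code): the credentials above are never
scoped to the new project, so the resulting token may be unscoped and rejected
by project-scoped APIs. Something along these lines may behave differently:

    creds = KeystoneV3Credentials(username=dom1proj1admin_name,
                                  password=dom1proj1admin_name,
                                  project_name=dom1proj1_name,       # hypothetical variable
                                  user_domain_name=dom1_name,
                                  project_domain_name=dom1_name)     # attribute names assumed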

Thanks in advance,
Udi Kalifon.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Thierry Carrez
Nikola Đipanov wrote:
 While I agree with motivation for this - setting the expectations, I
 fail to see how this is different to what the Swift guys seem to be
 doing apart from more red tape.

It's not different imho. It's just that nova has significantly more
features being thrown at it, so the job of selecting priority features
is significantly harder, and the backlog is a lot bigger. The slot
system allows to visualize that backlog.

Currently we target all features to juno-3, everyone expects their stuff
to get review attention, nothing gets merged until the end of the
milestone period, and in the end we merge almost nothing. The
blueprint priorities don't cut it; what you want is a ranked list. See
how likely you are to be considered for a release. Communicate that the
feature will actually be a Kilo feature earlier. Set downstream
expectations right. Merge earlier.

That ties into the discussions we are having for StoryBoard to support
task lists[1], which are arbitrary ranked lists of tasks. Those are much
more flexible than mono-dimensional priorities that fail to express the
complexity of priority in a complex ecosystem like OpenStack development.

[1] https://wiki.openstack.org/wiki/StoryBoard/Task_Lists

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Thierry Carrez
Rochelle.RochelleGrober wrote:
 [...]
 So, with all that prologue, here is what I propose (and please consider 
 proposing your improvements/changes to it).  I would like to see for Kilo:
 
 - IRC meetings and mailing list meetings beginning with Juno release and 
 continuing through the summit that focus on core project needs (what Thierry 
 call strategic) that as a set would be considered the primary focus of the 
 Kilo release for each project.  This could include high priority bugs, 
 refactoring projects, small improvement projects, high interest extensions 
 and new features, specs that didn't make it into Juno, etc.
 - Develop the list and prioritize it into Needs and Wants. Consider these 
 the feeder projects for the two runways if you like.  
 - Discuss the lists.  Maybe have a community vote? The vote will freeze the 
 list, but as in most development project freezes, it can be a soft freeze 
 that the core, or drivers or TC can amend (or throw out for that matter).
 [...]

One thing we've been unable to do so far is to set "release goals" at
the beginning of a release cycle and stick to those. It used to be
because we were so fast moving that new awesome stuff was proposed
mid-cycle and ended up being a key feature (sometimes THE key feature)
for the project. Now it's because there is so much proposed that no one
knows what will actually get completed.

So while I agree that what you propose is the ultimate solution (and the
workflow I've pushed PTLs to follow every single OpenStack release so
far), we have struggled to have the visibility, long-term thinking and
discipline to stick to it in the past. If you look at the post-summit
plans and compare them to what we end up with in a release, you'll see
quite a lot of differences :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.
 
 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there is increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. Its the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.
 
 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like cores to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.

Personally I'm going to find it really hard to justify long distance
travel 4 times a year for OpenStack for personal / family reasons,
let alone company cost. I couldn't attend Icehouse mid-cycle because
I just had too much travel in a short time to be able to do another
week long trip away from family. I couldn't attend Juno mid-cycle
because it clashed we personal holiday. There are other opensource
related conferences that I also have to attend (LinuxCon, FOSDEM,
KVM Forum, etc), etc so doubling the expected number of openstack
conferences from 2 to 4 is really very undesirable from my POV.
I might be able to attend the occassional mid-cycle meetup if the
location was convenient, but in general I don't see myself being
able to attend them regularly.

I tend to view the fact that we're emphasising the need of in-person
meetups to be somewhat of an indication of failure of our community
operation. The majority of open source projects work very effectively
with far less face-to-face time. OpenStack is fortunate that companies
are currently willing to spend 6/7-figure sums flying 1000's of
developers around the world many times a year, but I don't see that
lasting forever so I'm concerned about baking the idea of f2f midcycle
meetups into our way of life even more strongly.

 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.

Travel funding is certainly an issue, but I'm not sure that Foundation
funding would be a solution, because the impact probably isn't directly
on the core devs. Speaking with my Red Hat hat on, if the midcycle meetup
is important enough, the core devs will likely get the funding to attend.
The fallout of this though is that every attendee at a mid-cycle summit
means fewer attendees at the next design summit. So the impact of having
more core devs at mid-cycle is that we'll get fewer non-core devs at
the design summit. This sucks big time for the non-core devs who want
to engage with our community.

Also having each team do a f2f mid-cycle meetup at a different location
makes it even harder for people who have a genuine desire / need to take
part in multiple teams. Going to multiple mid-cycle meetups is even more
difficult to justify so they're having to make difficult decisions about
which to go to :-(

I'm also not a fan of mid-cycle meetups because I feel they further
stratify our contributors into two increasingly distinct camps: core
vs non-core.

I can see that a big benefit of a mid-cycle meetup is to be a focal
point for collaboration, to forcibly break contributors out of their
day-to-day work pattern to concentrate on discussing specific issues.
It also obviously solves the distinct timezone problem we have with
our dispersed contributor base. I think that we should be examining
what we can achieve with some kind of virtual online mid-cycle meetups
instead. Using technology like google hangouts or some similar live
collaboration technology, not merely an IRC discussion. Pick a 2-3
day period, schedule formal agendas / talking slots as you would with
a physical summit and so on. I feel this would be more inclusive to
our community as a whole, avoid excessive travel costs, so allowing
more of our community to attend the bigger design summits. It would
even open possibility of having multiple meetups during a cycle (eg
could arrange mini virtual events around each milestone if we wanted)

Regards,
Daniel
-- 
|: http://berrange.com  -o-

Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Ihar Hrachyshka

On 13/08/14 09:28, Angus Lees wrote:
 I'm doing various small cleanup changes as I explore the neutron
 codebase. Some of these cleanups are to fix actual bugs discovered
 in the code.  Almost all of them are tiny and obviously correct.
 
 A recurring reviewer comment is that the change should have had an
  accompanying bug report and that they would rather that change was
 not submitted without one (or at least, they've -1'ed my change).
 
 I often didn't discover these issues by encountering an actual
 production issue so I'm unsure what to include in the bug report
 other than basically a copy of the change description.  I also
 haven't worked out the pattern yet of which changes should have a
 bug and which don't need one.
 
 There's a section describing blueprints in NeutronDevelopment but
 nothing on bugs.  It would be great if someone who understands the
 nuances here could add some words on when to file bugs: Which type
 of changes should have accompanying bug reports? What is the
 purpose of that bug, and what should it contain?
 

It was discussed before at:
http://lists.openstack.org/pipermail/openstack-dev/2014-May/035789.html

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 5.0.2

2014-08-13 Thread Mike Scherbakov
Hi Fuelers,
I'd like to clarify the 5.0.2 state. This is not planned to be an official ISO;
rather, 5.0.2 is going to be a set of packages and manifests which represent
bugfixes for bugs reported against the 5.0.2 milestone in Launchpad [1].

5.0.2 is going to be cut in stable/5.0 at the same time as 5.1 is produced
and tagged, and an upgrade tarball is created (with 5.0.2 packages). 5.0.2
will follow the maintenance release of 5.0.1. So in fact, for now all the
changes which are merged into stable/5.0 will be in 5.0.1. Currently, we are
running acceptance testing against the RC for 5.0.1. If it succeeds without
critical bugs, it's going to be released this Thursday, the 14th of August.
Right after that, all changes merged to stable/5.0 will become a part of
5.0.2.

All, please don't forget about 5.0.2. For all High/Critical issues we face
in 5.1, we need to consider whether we want to see a fix in 5.0.2. So
please do not forget about proposing those for the 5.0.2 milestone, then
proposing commits into the stable/5.0 branch, and helping out with reviewing
and merging those (if you have rights).

[1] https://launchpad.net/fuel/+milestone/5.0.2

Thanks,
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack Capacity Planning

2014-08-13 Thread Sylvain Bauza


Le 13/08/2014 03:48, Fei Long Wang a écrit :

Hi Adam,

Please refer to this: https://wiki.openstack.org/wiki/Blazar. Hope it's 
helpful. Cheers.


On 13/08/14 12:54, Adam Lawson wrote:
Something was presented at a meeting recently which had me curious: 
what sort of capacity planning tools/capabilities are being developed 
as an OpenStack program? It's another area where non-proprietary 
cloud control is needed and would be another way to kick a peg away 
from the stool of cloud resistance. Also, this ties quite nicely into 
Software Defined Datacenter, but appropriateness for the OpenStack 
suite itself is another matter...


Has this been given much thought at this stage of the game? I'd be 
more than happy to host a meeting to talk about it.


Mahalo,
Adam



Hi Adam,
As a Blazar developer, what do you want to know about capacity planning? 
This topic is pretty broad, so more details are welcome :-)


Thanks,
-Sylvain


*/
Adam Lawson/*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Osanai, Hisashi

On Wednesday, August 13, 2014 5:03 PM, Julien Danjou wrote:
 This is not a problem in tox.ini, this is a problem in the
 infrastructure config. Removing py33 from the envlist in tox.ini isn't
 going to fix anything unforunately.

Thank you for your quick response.

I may misunderstand this topic. Let me clarify...
My understanding is:
- the py33 gate failed because happybase-0.8 cannot work in a python33 env
  (execfile() calls don't work on python33)
- happybase is NOT an OpenStack component
- the py33 gate doesn't need to run on stable/icehouse

One idea to solve this problem: if the py33 gate doesn't need to run on
stable/icehouse, just eliminate it there.

 This is not a problem in tox.ini,
This means the py33 gate needs to run on stable/icehouse. Here I misunderstand 
something...

 this is a problem in the infrastructure config.
This means the execfile() calls in happybase on python33 are the problem. If my 
understanding is correct, I agree with you, and I think this is the direct 
cause of this problem.

Your idea to solve this is creating a patch for the direct cause, right?
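
For reference, execfile() was removed in Python 3, which is why a setup.py
that calls it breaks under a py33 env. The usual portable replacement looks
like this (the file name is illustrative):

    # Python 2 only:
    #   execfile('version.py')
    # Portable across Python 2 and 3:
    with open('version.py') as f:
        exec(compile(f.read(), 'version.py', 'exec'))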

Thanks in advance,
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Mon, Aug 11, 2014 at 10:30:12PM -0700, Joe Gordon wrote:
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
  I really like this idea, as Michael and others alluded to above, we
  are
   attempting to set cycle goals for Kilo in Nova. but I think it is worth
   doing for all of OpenStack. We would like to make a list of key goals
  before
   the summit so that we can plan our summit sessions around the goals. On a
   really high level one way to look at this is, in Kilo we need to pay down
   our technical debt.
  
   The slots/runway idea is somewhat separate from defining key cycle
  goals; we
   can be approve blueprints based on key cycle goals without doing slots.
   But
   with so many concurrent blueprints up for review at any given time, the
   review teams are doing a lot of multitasking and humans are not very
  good at
   multitasking. Hopefully slots can help address this issue, and hopefully
   allow us to actually merge more blueprints in a given cycle.
  
  I'm not 100% sold on what the slots idea buys us. What I've seen this
  cycle in Neutron is that we have a LOT of BPs proposed. We approve
  them after review. And then we hit one of two issues: Slow review
  cycles, and slow code turnaround issues. I don't think slots would
  help this, and in fact may cause more issues. If we approve a BP and
  give it a slot for which the eventual result is slow review and/or
  code review turnaround, we're right back where we started. Even worse,
  we may have not picked a BP for which the code submitter would have
  turned around reviews faster. So we've now doubly hurt ourselves. I
  have no idea how to solve this issue, but by over subscribing the
  slots (e.g. over approving), we allow for the submissions with faster
  turnaround a chance to merge quicker. With slots, we've removed this
  capability by limiting what is even allowed to be considered for
  review.
 
 
 Slow review: by limiting the number of blueprints up we hope to focus our
 efforts on fewer concurrent things
 slow code turn around: when a blueprint is given a slot (runway) we will
 first make sure the author/owner is available for fast code turnaround.
 
 If a blueprint review stalls out (slow code turnaround, stalemate in review
 discussions etc.) we will take the slot and give it to another blueprint.

This idea of fixed slots is not really very appealing to me. It sounds
like we're adding a significant amount of bureaucratic overhead to our
development process that is going to make us increasingly inefficient.
I don't want to waste time waiting for a stalled blueprint to time out
before we give the slot to another blueprint. On any given day when I
have spare review time available I'll just review anything that is up
and waiting for review. If we can set a priority for the things up for
review that is great since I can look at those first, but the idea of
having fixed slots for things we should review does not do anything to
help my review efficiency IMHO.

I also think it will kill our flexibility in approving & dealing with
changes that are not strategically important, but nonetheless go
through our blueprint/specs process. There have been a bunch of things
I've dealt with that are not strategic, but have low overhead to code
and review and are easily dealt with in the slack time between looking at
the high priority reviews. It sounds like we're going to lose our
flexibility to pull in stuff like this if it only gets a chance when
strategically important stuff is not occupying a slot.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-13 Thread Dave Tucker
I've been working on this for OpenDaylight
https://github.com/dave-tucker/odl-neutron-drivers

This seems to work for me (tested Devstack w/ML2) but YMMV.

-- Dave

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Giulio Fidente

On 08/07/2014 12:56 PM, Jay Pipes wrote:

On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:

On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:

On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez
thie...@openstack.org wrote:


We seem to be unable to address some key issues in the software we
produce, and part of it is due to strategic contributors (and core
reviewers) being overwhelmed just trying to stay afloat of what's
happening. For such projects, is it time for a pause ? Is it time to
define key cycle goals and defer everything else ?


[. . .]


We also talked about tweaking the ratio of tech debt runways vs
'feature' runways. So, perhaps every second release is focussed on
burning down tech debt and stability, whilst the others are focussed
on adding features.



I would suggest if we do such a thing, Kilo should be a 'stability'
release.


Excellent suggestion. I've wondered multiple times if we could
dedicate a good chunk (or the whole) of a specific release to heads-down
bug fixing/stabilization. As it has been stated elsewhere on this list:
there's no pressing need for a whole lot of new code submissions; rather
we should focus on fixing issues that affect _existing_ users/operators.


There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to
differ on that viewpoint. :)

That said, I entirely agree with you and wish efforts to stabilize would
take precedence over feature work.


I'm of this same opinion: I think a periodic, concerted effort to 
stabilize the existing features (which shouldn't be about bug fixing 
only) would be helpful to work on some of the issues mentioned.


I'm thinking of QA, infra, the tactical contributions, the code clean-up
and, more generally, the review backlog as some of these.


And I also think it would be useful to figure out which *strategic*
features are needed, as that would provide some time to gather feedback
from the field.


--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Dina Belova
Hisashi Osanai, I have a really strange feeling about this issue.
It happens only with the py33 job for the icehouse branch? Because happybase
is the same for the master-branch Jenkins jobs, so it looks like that
execfile issue should appear in master runs as well... Do I understand
everything right?

As I understand Julien, he proposes to run this job only for master (as it
works for now, magically, for master checks) and skip it for everything
earlier - mostly because it won't work for stable branches anyway - as
there is no fixed ceilometer code there.

Thanks,
Dina


On Wed, Aug 13, 2014 at 2:11 PM, Osanai, Hisashi 
osanai.hisa...@jp.fujitsu.com wrote:


 On Wednesday, August 13, 2014 5:03 PM, Julien Danjou wrote:
  This is not a problem in tox.ini, this is a problem in the
  infrastructure config. Removing py33 from the envlist in tox.ini isn't
  going to fix anything unfortunately.

 Thank you for your quick response.

 I may misunderstand this topic. Let me clarify ...
 My understanding is:
 - the py33 failed because there is a problem that the happybase-0.8 cannot
   work with python33 env. (execfile function calls on python33 doesn't
 work)
 - the happybase is NOT an OpenStack component.
 - the py33 doesn't need to execute on stable/icehouse

 One idea to solve this problem is:
 If the py33 doesn't need to execute on stable/icehouse, just eliminate the
 py33.

  This is not a problem in tox.ini,
 Means the py33 needs to execute on stable/icehouse. Here I misunderstand
 something...

  this is a problem in the infrastructure config.
 Means execfile function calls on python33 in happybase is a problem. If my
 understanding
 is correct, I agree with you and I think this is the direct cause of this
 problem.

 Your idea to solve this is creating a patch for the direct cause, right?

 Thanks in advance,
 Hisashi Osanai

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-13 Thread Sylvain Bauza


Le 12/08/2014 22:06, Sylvain Bauza a écrit :


Le 12/08/2014 18:54, Nikola Đipanov a écrit :

On 08/12/2014 04:49 PM, Sylvain Bauza wrote:

(sorry for reposting, missed 2 links...)

Hi Nikola,

Le 12/08/2014 12:21, Nikola Đipanov a écrit :

Hey Nova-istas,

While I was hacking on [1] I was considering how to approach the fact
that we now need to track one more thing (NUMA node utilization) in 
our

resources. I went with - I'll add it to compute nodes table thinking
it's a fundamental enough property of a compute host that it 
deserves to
be there, although I was considering  Extensible Resource Tracker 
at one

point (ERT from now on - see [2]) but looking at the code - it did not
seem to provide anything I desperately needed, so I went with 
keeping it

simple.

So fast-forward a few days, and I caught myself solving a problem 
that I

kept thinking ERT should have solved - but apparently hasn't, and I
think it is fundamentally a broken design without it - so I'd really
like to see it re-visited.

The problem can be described by the following lemma (if you take 
'lemma'

to mean 'a sentence I came up with just now' :)):


Due to the way scheduling works in Nova (roughly: pick a host based on
stale(ish) data, rely on claims to trigger a re-schedule), _same 
exact_

information that scheduling service used when making a placement
decision, needs to be available to the compute service when testing 
the

placement.


This is not the case right now, and the ERT does not propose any 
way to

solve it - (see how I hacked around needing to be able to get
extra_specs when making claims in [3], without hammering the DB). The
result will be that any resource that we add and needs user supplied
info for scheduling an instance against it, will need a buggy
re-implementation of gathering all the bits from the request that
scheduler sees, to be able to work properly.

Well, ERT does provide a plugin mechanism for testing resources at the
claim level. It is the plugin's responsibility to implement a test()
method [2.1], which will be called by test_claim() [2.2].

So, provided this method is implemented, a local host check can be done
based on the host's view of resources.
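
To make the pattern concrete, the shape of such a plugin is roughly the
following (a hypothetical sketch only; the class name, method signature and
return convention are assumptions for illustration, not nova's actual API):

    class InstanceCountResource(object):
        """Hypothetical ERT plugin tracking a simple per-host counter."""

        def test(self, usage, limits):
            # Called from the claim path on the compute host: check the
            # proposed usage against this host's own view of the resource.
            limit = limits.get('instances')
            if limit is not None and usage.get('instances', 0) + 1 > limit:
                return 'Too many instances on this host'
            return None  # None means the claim passes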



Yes - the problem is there is no clear API to get all the needed bits to
do so - especially the user supplied one from image and flavors.
On top of that, in current implementation we only pass a hand-wavy
'usage' blob in. This makes anyone wanting to use this in conjunction
with some of the user supplied bits roll their own
'extract_data_from_instance_metadata_flavor_image' or similar which is
horrible and also likely bad for performance.


I see your concern where there is no interface for user-facing 
resources like flavor or image metadata.
I also think indeed that the big 'usage' blob is not a good choice for 
long-term vision.


That said, I don't think we should, as we say in French, throw the baby out
with the bath water... i.e. the problem is with the RT, not the ERT (apart
from the mention of third-party API that you noted - I'll get to it later below)

This is obviously a bigger concern when we want to allow users to pass
data (through image or flavor) that can affect scheduling, but still a
huge concern IMHO.
And here is where I agree with you: at the moment, ResourceTracker (and
consequently Extensible RT) only provides the view of the resources the
host knows about (see my point above) and possibly some other resources
are missing.
So, whatever your choice of going with or without ERT, your patch [3]
still deserves it if we want not to lookup DB each time a claim goes.



As I see that there are already BPs proposing to use this IMHO broken
ERT ([4] for example), which will surely add to the proliferation of
code that hacks around these design shortcomings in what is already a
messy, but also crucial (for perf as well as features) bit of Nova 
code.

Two distinct implementations of that spec (i.e. instances and flavors)
have been proposed [2.3] [2.4], so reviews are welcome. If you look at the
test() method, it's a no-op for both plugins. I'm open to comments
because I have the stated problem: how can we define a limit on just a
counter of instances and flavors?


Will look at these - but none of them seem to hit the issue I am
complaining about, and that is that it will need to consider other
request data for claims, not only data available on instances.

Also - the fact that you don't implement test() in flavor ones tells me
that the implementation is indeed racy (but it is racy atm as well) and
two requests can indeed race for the same host, and since no claims are
done, both can succeed. This is I believe (at least in case of single
flavor hosts) unlikely to happen in practice, but you get the idea.


Agreed, these 2 patches probably require another iteration, in 
particular how we make sure that it won't be racy. So I need another 
run to think about what to test() for these 2 examples.
Another patch has to be done for aggregates, but it's still WIP so 

Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Julien Danjou
On Wed, Aug 13 2014, Osanai, Hisashi wrote:

 One idea to solve this problem is:
 If the py33 doesn't need to execute on stable/icehouse, just eliminate
 the py33.

Yes, that IS the solution.

But modifying tox.ini is not going to be a working implementation of that
solution.

 This is not a problem in tox.ini, 
 Means the py33 needs to execute on stable/icehouse. Here I misunderstand 
 something...

No it does not; that line in tox.ini is not used by the gate.

 this is a problem in the infrastructure config.
 Means execfile function calls on python33 in happybase is a problem. If my 
 understanding 
 is correct, I agree with you and I think this is the direct cause of this 
 problem.

 Your idea to solve this is creating a patch for the direct cause, right?

My idea to solve this is to create a patch on
http://git.openstack.org/cgit/openstack-infra/config/
to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Blueprint -- Floating IP Auto Association

2014-08-13 Thread Salvatore Orlando
Hi,

this discussion came up recently regarding a nodepool issue.
The blueprint was recently revived and there is a proposed specification [1]

I tend to disagree with the way nova implements this feature today.
A configuration-wide flag indeed has the downside that this creates
different API behaviour across deployments.
As an API consumer which wants a public IP for an instance, I would
probably have to check if such an IP is already available before allocating,
which, by the way, is what nodepool does [2].

The specification [1] tries to make this clearer to the user by allowing control
of this behaviour on a per-subnet basis. This is not bad, but I still think
it's not a great idea to introduce side effects in the neutron API (in this case
port create). Personally I think from the neutron side we can make the user's
life easier by tying a floating IP's lifecycle to the port it is associated
with, so that when the port is deleted, the floating IP is not just
disassociated but removed too. This won't give the same ease of use which
nova achieves today with auto_assign_floating_ips but will still be a
better level of automation without adding orchestration on the neutron side.
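
To make the manual flow concrete, this is roughly what an API consumer has
to script today with python-neutronclient (the credentials and IDs below
are placeholders); tying the floating IP to the port's lifecycle would
remove the need for the final cleanup step:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    ext_net_id = 'EXTERNAL-NET-UUID'  # placeholder
    port_id = 'INSTANCE-PORT-UUID'    # placeholder

    # Allocate a floating IP on the external network and associate it
    # with the instance's port in a single call:
    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': ext_net_id,
                        'port_id': port_id}})

    # Today, deleting the port merely disassociates the floating IP;
    # releasing it is the consumer's job:
    neutron.delete_floatingip(fip['floatingip']['id'])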

I've not yet made up my mind on this topic, but if you have any opinion,
please share it.

Salvatore


[1] https://review.openstack.org/#/c/106487/
[2]
http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/nodepool.py#n398


On 17 November 2013 01:08, Steven Weston steven-wes...@live.com wrote:

  Hi Salvatore!

 My responses (to your responses) are in-line. I think we could also use
 some feedback from the rest of the community on this, as well … would it be
 a good idea to discuss the implementation further at the next IRC meeting?

 Good Stuff!!

 Steven


 On 11/15/2013 7:39 AM, Salvatore Orlando wrote:




 On 14 November 2013 23:03, Steven Weston steven-wes...@live.com wrote:

  Hi Salvatore,

 My Launchpad ID is steven-weston.  I do not know who those other Steven
 Westons are … if someone has created clones of me, I am going to be upset!
 Anyway, Here are my thoughts on the implementation approach.

 I have now assigned the blueprint to you.


 Great, thank you!

 Is there any reason why the two alternatives you listed should be
 considered mutually exclusive?

 In principle they're not. But if we provide the facility in
 Neutron, doing the orchestration from nova for the association would be, in
 my opinion, just redundant.
 Unless I am not understanding what you suggest.


 I agree, implementing the functionality in nova and neutron would be
 redundant, although I was suggesting that the nova api be modified to allow
 for the auto association request on vm creation, which would then be passed
 to neutron for the port creation.  Currently it looks to only be available
 as a configuration option in nova.


   So far I understand the goal is to pass an 'autoassociate_fip' flag (or
 something similar) to POST /v2/port;
  the operation will create two resources: a floating IP and a port, with
 only the port being returned (hence the side-effect).


 This sounds good, unless we want to modify the api behavior to return a
 list of floating ips, as you already suggested below.  Or would it be
 better to return a mapping of fixed ips to floating ips, since that would
 technically be more accurate?



   I think that in consideration of loosely coupled design, it would be
 best to make the attribute addition to the port in neutron and create the
 ability for nova to orchestrate the call as well.  I do not see a way to
 prevent modification of the REST API, and in the interest of fulfilling
 your concern of atomicity, the fact that an auto association was requested
 will need to be stored somewhere, in addition to the state of the request
 as well.

 Storing the autoassociation could be achieved with a flag on the floating
 IP data model. But would that also imply that the association for an
 auto-associate floatingIP cannot be altered?


 I think that depends on how we want it to work … see my comments below.

 Plus, tracking the attribute in neutron would allow
 other events to fire that need to be performed in response to an auto
 associate request, such as split zone dns updates (for example).  The
 primary use case for this would be for request by nova, although I can
 think of other services which could use it as well -- load balancers,
 firewalls, vpn’s, and any component that would require connectivity to
 another network.  I think the default behavior of the auto association
 request would be to create ip addresses on the associated networks of the
 attached routers, unless a specific network is given.


  Perhaps I need more info on this specific point; I think the current
 floating_port_id -> port_id mapping might work to this aim; perhaps the reverse
 mapping would be needed too, and we might work to add it - but I don't see
 why we would need an 'auto_associate' flag. This is not a criticism. It's
 

Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Julien Danjou
On Wed, Aug 13 2014, Dina Belova wrote:

 Hisashi Osanai, I have a really strange feeling about this issue.
 It happens only with the py33 job for the icehouse branch? Because happybase
 is the same for the master-branch Jenkins jobs, so it looks like that
 execfile issue should appear in master runs as well... Do I understand
 everything right?

happybase is not installed when running py33 on master because master has a
requirements-py3.txt without happybase in it, which stable/icehouse does
not.

 As I understand Julien, he proposes to run this job only for master (as it
 works for now magically for master checks) and skip it for everything
 earlier - mostly because it won't work for stable branches anyway - as
 there were no fixed ceilometer code itself there.

That's what I propose, and that should be done by hacking
openstack-infra/config AFAIK.

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Dina Belova
Julien, will do right now.

Thanks
Dina


On Wed, Aug 13, 2014 at 2:35 PM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Aug 13 2014, Osanai, Hisashi wrote:

  One idea to solve this problem is:
  If the py33 doesn't need to execute on stable/icehouse, just eliminate
  the py33.

 Yes, that IS the solution.

 But modifying tox.ini is not going to be a working implementation of that
 solution.

  This is not a problem in tox.ini,
  Means the py33 needs to execute on stable/icehouse. Here I misunderstand
 something...

 No it does not; that line in tox.ini is not used by the gate.

  this is a problem in the infrastructure config.
  Means execfile function calls on python33 in happybase is a problem. If
 my understanding
  is correct, I agree with you and I think this is the direct cause of
 this problem.
 
  Your idea to solve this is creating a patch for the direct cause, right?

 My idea to solve this is to create a patch on
 http://git.openstack.org/cgit/openstack-infra/config/
 to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.

 --
 Julien Danjou
 # Free Software hacker
 # http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
 On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:
 On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:
 On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org 
 wrote:
 
 We seem to be unable to address some key issues in the software we
 produce, and part of it is due to strategic contributors (and core
 reviewers) being overwhelmed just trying to stay afloat of what's
 happening. For such projects, is it time for a pause ? Is it time to
 define key cycle goals and defer everything else ?
 
 [. . .]
 
 We also talked about tweaking the ratio of tech debt runways vs
 'feature' runways. So, perhaps every second release is focussed on
 burning down tech debt and stability, whilst the others are focussed
 on adding features.
 
 I would suggest if we do such a thing, Kilo should be a 'stability'
 release.
 
 Excellent suggestion. I've wondered multiple times if we could
 dedicate a good chunk (or the whole) of a specific release to heads-down
 bug fixing/stabilization. As it has been stated elsewhere on this list:
 there's no pressing need for a whole lot of new code submissions; rather
 we should focus on fixing issues that affect _existing_ users/operators.
 
 There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
 on that viewpoint. :)

Yeah, I think declaring entire cycles to be stabilization vs feature
focused is far too coarse & inflexible. The most likely effect
of it would be that people who would otherwise contribute useful
features to openstack will simply walk away from the project for
that cycle.

I think that in fact the time when we need the strongest focus on
bug fixing is immediately after sizeable features have merged. I
don't think you want to give people the message that stabilization
work doesn't take place until the next 6 month cycle - that's far
too long to live with unstable code.

Currently we have a bit of focus on stabilization at each milestone
but to be honest most of that focus is on the last milestone only.
I'd like to see us have a much more explicit push for regular
stabilization work during the cycle, to really reinforce the
idea that stabilization is an activity that should be taking place
continuously. Be really proactive in designating a day of the week
(e.g. Bug fix Wednesdays) and make a concerted effort during that
day to have reviewers & developers concentrate exclusively on
stabilization related activities.

 That said, I entirely agree with you and wish efforts to stabilize would
 take precedence over feature work.

I find it really contradictory that we have such a strong desire for
stabilization and testing of our code, but at the same time so many
people argue that the core teams should have nothing at all to do with
the stable release branches which a good portion of our users will
actually be running. By ignoring stable branches, leaving it up to a
small team to handle, I think we're giving the wrong message about what
our priorities as a team are. I can't help thinking this filters
through to impact the way people think about their work on master.
Stabilization is important and should be baked into the DNA of our
teams to the extent that identifying bug fixes for stable is just
an automatic part of our dev lifecycle. The quantity of patches going
into stable isn't so high that it takes up significant resources when
spread across the entire core team.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Dina Belova
Here it is: https://review.openstack.org/#/c/113842/

Thanks,
Dina


On Wed, Aug 13, 2014 at 2:40 PM, Dina Belova dbel...@mirantis.com wrote:

 Julien, will do right now.

 Thanks
 Dina


 On Wed, Aug 13, 2014 at 2:35 PM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Aug 13 2014, Osanai, Hisashi wrote:

  One idea to solve this problem is:
  If the py33 doesn't need to execute on stable/icehouse, just eliminate
  the py33.

 Yes, that IS the solution.

  But modifying tox.ini is not going to be a working implementation of that
 solution.

  This is not a problem in tox.ini,
  Means the py33 needs to execute on stable/icehouse. Here I
 misunderstand something...

  No it does not; that line in tox.ini is not used by the gate.

  this is a problem in the infrastructure config.
  Means execfile function calls on python33 in happybase is a problem. If
 my understanding
  is correct, I agree with you and I think this is the direct cause of
 this problem.
 
  Your idea to solve this is creating a patch for the direct cause, right?

 My idea to solve this is to create a patch on
 http://git.openstack.org/cgit/openstack-infra/config/
 to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.

 --
 Julien Danjou
 # Free Software hacker
 # http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --

 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.




-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] SoftwareDeployment resource is always in progress

2014-08-13 Thread david ferahi
Hello,


Thank you Steve for your reply !

Yes I'm using the same manual you provided to create my image.

https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements


In my network configuration, the tenant network is created in the same
subnet as OpenStack Management
network (in order to ensure that controller, compute and network nodes can
ping my instances and vice versa).

The problem is:
   - From the controller, network and compute nodes I cannot ping my
instance or the router address connected to the tenant network.
   - From my instance I can just ping the router address but not the
controller node.

Note that ICMP rules are added in the security group.

Maybe the deployments can't signal back because the instance cannot reach
the controller node.

Thank you in advance.

Regards,

David


2014-08-12 23:19 GMT+02:00 Steve Baker sba...@redhat.com:

  On 11/08/14 20:42, david ferahi wrote:

  Hello,

 I'm trying to create a simple stack with heat (Icehouse release).
 The template contains SoftwareConfig, SoftwareDeployment and a single
 server resource.

 The problem is that the SoftwareDeployment resource is always in progress!

   So first I'm going to assume you're using an image that you have
 created with diskimage-builder which includes the heat-config-script
 element:

 https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements

 When I am diagnosing deployments which don't signal back I do the following:
 - ssh into the server and sudo to root
 - stop the os-collect-config service:
   systemctl stop os-collect-config
 - run os-collect-config manually and check for errors:
   os-collect-config --one-time --debug


  After waiting for more than an hour the stack deployment failed and I
 got this error:

  TRACE heat.engine.resource HTTPUnauthorized: ERROR: Authentication
 failed. Please try again with option --include-password or export
 HEAT_INCLUDE_PASSWORD=1
 TRACE heat.engine.resource Authentication required

   This looks like a different issue, you should find out what is
 happening to your server configuration first.



  When I checked the log file (/var/log/heat/heat-engine.log), it shows
  the following message(every second):
 2014-08-10 19:41:09.622 2391 INFO urllib3.connectionpool [-] Starting new
 HTTP connection (1): 192.168.122.10
 2014-08-10 19:41:10.648 2391 INFO urllib3.connectionpool [-] Starting new
 HTTP connection (1): 192.168.122.10
 2014-08-10 19:41:11.671 2391 INFO urllib3.connectionpool [-] Starting new
 HTTP connection (1): 192.168.122.10
 2014-08-10 19:41:12.690 2391 INFO urllib3.connectionpool [-] Starting new
 HTTP connection (1): 192.168.122.10

 Here the template I am using :

 https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/wordpress/WordPress_software-config_1-instance.yaml

 Please help !



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-13 Thread ZZelle
Hi,


The important thing to understand is how to integrate with neutron through
stevedore/entrypoints:

https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34
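
For a concrete picture: the out-of-tree driver is just a class advertised
under the neutron.ml2.mechanism_drivers entry point namespace, which ML2
then loads by the name configured in ml2_conf.ini, roughly like this (the
driver name 'my_mech' is made up for illustration):

    from stevedore import driver

    # ML2 resolves each name listed in the mechanism_drivers option
    # against the 'neutron.ml2.mechanism_drivers' entry point namespace:
    mgr = driver.DriverManager(
        namespace='neutron.ml2.mechanism_drivers',
        name='my_mech',        # hypothetical name from setup.cfg/ml2_conf.ini
        invoke_on_load=True,   # instantiate the driver class on load
    )
    mech_driver = mgr.driver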


Cedric


On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker d...@dtucker.co.uk wrote:

 I've been working on this for OpenDaylight
 https://github.com/dave-tucker/odl-neutron-drivers

 This seems to work for me (tested Devstack w/ML2) but YMMV.

 -- Dave

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-13 Thread Robert Kukura
One thing to keep in mind is that the ML2 driver API does sometimes 
change, requiring updates to drivers. Drivers that are in-tree get 
updated along with the driver API change. Drivers that are out-of-tree 
must be updated by the owner.


-Bob

On 8/13/14, 6:59 AM, ZZelle wrote:

Hi,


The important thing to understand is how to integrate with neutron 
through stevedore/entrypoints:


https://github.com/dave-tucker/odl-neutron-drivers/blob/master/setup.cfg#L32-L34


Cedric


On Wed, Aug 13, 2014 at 12:17 PM, Dave Tucker d...@dtucker.co.uk wrote:


I've been working on this for OpenDaylight
https://github.com/dave-tucker/odl-neutron-drivers

This seems to work for me (tested Devstack w/ML2) but YMMV.

-- Dave

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [infra] periodic python2.6 checks for havana failing

2014-08-13 Thread Ihar Hrachyshka

Hi all,

several periodic checks for havana are failing due to missing
libffi-devel or missing rpm/yum tools on bare-centos (sic!) node.

For example, see [1] (rpm/yum missing) and [2] (compile failure due to
missing libffi-devel).

AFAIK there is a hack to overcome some issues in the gate in the infra config
[3], though it looks like it's not enough, or the hack is wrong.

I'm not involved in infra, so I lack knowledge to fix it on my own,
hence I ask the community for help.

Ideas/fixes?
/Ihar

===

[1]: http://logs.openstack.org/periodic-stable/periodic-cinder-python26-havana/093cf3d/console.html
[2]: http://logs.openstack.org/periodic-stable/periodic-glance-python26-havana/9be423b/console.html.gz
[3]:
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/python-bitrot-jobs.yaml#L10

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-05 at 18:03 +0200, Thierry Carrez wrote:
 Hi everyone,
 
 With the incredible growth of OpenStack, our development community is
 facing complex challenges. How we handle those might determine the
 ultimate success or failure of OpenStack.
 
 With this cycle we hit new limits in our processes, tools and cultural
 setup. This resulted in new limiting factors on our overall velocity,
 which is frustrating for developers. This resulted in the burnout of key
 firefighting resources. This resulted in tension between people who try
 to get specific work done and people who try to keep a handle on the big
 picture.

Always fun catching up on threads like this after being away ... :)

I think the thread has revolved around three distinct areas:

  1) The per-project review backlog, its implications for per-project 
 velocity, and ideas for new workflows or tooling

  2) Cross-project scaling issues that get worse as we add more 
 integrated projects

  3) The factors that go into deciding whether a project belongs in the 
 integrated release - including the appropriateness of its scope,
 the soundness of its architecture and how production ready it is.

The first is important - hugely important - but I don't think it has any
bearing on the makeup, scope or contents of the integrated release, though it
certainly will have a huge bearing on the success of the release and the
project more generally.

The third strikes me as a part of the natural evolution around how we
think about the integrated release. I don't think there's any particular
crisis or massive urgency here. As the TC considers proposals to
integrate (or de-integrate) projects, we'll continue to work through
this. These debates are contentious enough that we should avoid adding
unnecessary drama to them by conflating the issues with more pressing,
urgent issues.

I think the second area is where we should focus. We're concerned that
we're hitting a breaking point with some cross-project issues - like
release management, the gate, a high level of non-deterministic test
failures, insufficient cross-project collaboration on technical debt
(e.g. via Oslo), difficulty in reaching consensus on new cross-project
initiatives (Sean gave the examples of Group Based Policy and Rally) -
such that drastic measures are required. Like maybe we should not accept
any new integrated projects in this cycle while we work through those
issues.

Digging deeper into that means itemizing these cross-project scaling
issues, figuring out which of them need drastic intervention, discussing
what the intervention might be and the realistic overall effects of
those interventions.

AFAICT, the closest we've come in the thread to that level of detail is
Sean's email here:

  http://lists.openstack.org/pipermail/openstack-dev/2014-August/042277.html

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Steven Hardy
On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange wrote:
 On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
  On 08/07/2014 02:12 AM, Kashyap Chamarthy wrote:
  On Thu, Aug 07, 2014 at 07:10:23AM +1000, Michael Still wrote:
  On Wed, Aug 6, 2014 at 2:03 AM, Thierry Carrez thie...@openstack.org 
  wrote:
  
  We seem to be unable to address some key issues in the software we
  produce, and part of it is due to strategic contributors (and core
  reviewers) being overwhelmed just trying to stay afloat of what's
  happening. For such projects, is it time for a pause ? Is it time to
  define key cycle goals and defer everything else ?
  
  [. . .]
  
  We also talked about tweaking the ratio of tech debt runways vs
  'feature' runways. So, perhaps every second release is focussed on
  burning down tech debt and stability, whilst the others are focussed
  on adding features.
  
  I would suggest if we do such a thing, Kilo should be a 'stability'
  release.
  
  Excellent suggestion. I've wondered multiple times if we could
  dedicate a good chunk (or the whole) of a specific release to heads-down
  bug fixing/stabilization. As it has been stated elsewhere on this list:
  there's no pressing need for a whole lot of new code submissions; rather
  we should focus on fixing issues that affect _existing_ users/operators.
  
  There's a whole world of GBP/NFV/VPN/DVR/TLA folks that would beg to differ
  on that viewpoint. :)
 
 Yeah, I think declaring entire cycles to be stabilization vs feature
 focused is far too coarse & inflexible. The most likely effect
 of it would be that people who would otherwise contribute useful
 features to openstack will simply walk away from the project for
 that cycle.
 
 I think that in fact the time when we need the strongest focus on
 bug fixing is immediately after sizeable features have merged. I
 don't think you want to give people the message that stabilization
 work doesn't take place until the next 6 month cycle - that's far
 too long to live with unstable code.
 
 Currently we have a bit of focus on stabilization at each milestone
 but to be honest most of that focus is on the last milestone only.
 I'd like to see us have a much more explicit push for regular
 stabilization work during the cycle, to really reinforce the
 idea that stabilization is an activity that should be taking place
 continuously. Be really proactive in designating a day of the week
 (e.g. Bug fix Wednesdays) and make a concerted effort during that
 day to have reviewers & developers concentrate exclusively on
 stabilization related activities.
 
  That said, I entirely agree with you and wish efforts to stabilize would
  take precedence over feature work.
 
 I find it really contradictory that we have such a strong desire for
 stabilization and testing of our code, but at the same time so many
 people argue that the core teams should have nothing at all to do with
 the stable release branches which a good portion of our users will
 actually be running. 

Does such an argument actually exist?  My experience has been that
stable-maint folks are very accepting of help, and that it's relatively
easy for core reviewers with an interest in stable branch maintenance to
offer their services and become stable-maint core:

https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team

 By ignoring stable branches, leaving it upto a
 small team to handle, I think we giving the wrong message about what
 our priorities as a team team are. I can't help thinking this filters
 through to impact the way people think about their work on master.

Who is ignoring stable branches?  This sounds like a project specific
failing to me, as all experienced core reviewers should consider offering
their services to help with stable-maint activity.

I don't personally see any reason why the *entire* project core team has to
do this, but a subset of them should feel compelled to participate in the
stable-maint process if they have sufficient time, interest and historical
context; it's not some other team IMO.

 Stabilization is important and should be baked into the DNA of our
 teams to the extent that identifying bug fixes for stable is just
 an automatic part of our dev lifecycle. The quantity of patches going
 into stable isn't so high that it take up significant resources when
 spread across the entire core team.

+1

Also, contributors should be more actively encouraged to propose their
bugfixes as backports to stable branches themselves, instead of relying on
$someone_else to do it.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-13 Thread Boris Pavlovic
Matt,


On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
  On 11/08/14 16:21, Matthew Treinish wrote:
  I'm sorry, but the fact that the
  docs in the rally tree has a section for user testimonials [4] I feel
 speaks a
  lot about the intent of the project.


Yes, you are absolutely right, it speaks a lot about the intent of the
project.

One of the goals of Rally is to be a bridge between operators and the
OpenStack community.
In particular, this directory was made to create a common OpenStack
knowledge base about how different configurations & deployments impact
OpenStack, in numbers.
There are 2 nice things about using this approach for collecting user
experience:
1) Everybody is able to repeat exactly the same experiment locally and
prove that it is true
2) Collecting results from different operators is an absolutely
distributed process and scales really well.

Using these user stories, the OpenStack community (e.g. the Rally team)
will be able to create best practices for deployment configurations &
architecture that should be used in production.
And all this is based on real-life experience (not just feelings).


. I personally feel that those user stories
 would probably be more appropriate as a blog post, and shouldn't
 necessarily be
 in a doc tree. But, that's not the stinging indictment which didn't need
 any
 explanation that I apparently thought it was yesterday; it definitely isn't
 something worth calling out on this thread.



A PTL is not a dictator; it's just a person who collects the opinions of
the project team & users and manages work on the project in such a way
as to cover everybody's use cases.
In other words you shouldn't believe or feel, you should just ask the
users and community of the project what they think.
In my case I asked the Rally community and about 20 different operators
from various companies, and they like and support this idea. So I would
prefer to keep this section in the Rally tree and help with involving
more people in this work.


Best regards,
Boris Pavlovic







On Tue, Aug 12, 2014 at 9:47 PM, Matthew Treinish mtrein...@kortar.org
wrote:

 On Mon, Aug 11, 2014 at 07:06:11PM -0400, Zane Bitter wrote:
  On 11/08/14 16:21, Matthew Treinish wrote:
  I'm sorry, but the fact that the
  docs in the rally tree has a section for user testimonials [4] I feel
 speaks a
  lot about the intent of the project.
 
  What... does that even mean?

 Yeah, I apologize for that sentence, it was an unfair thing to say and
 uncalled
 for. Looking at it with fresh eyes this morning I'm not entirely sure what
 my intent
 was by pointing out that section. I personally feel that those user stories
 would probably be more appropriate as a blog post, and shouldn't
 necessarily be
 in a doc tree. But, that's not the stinging indictment which didn't need
 any
 explanation that I apparently thought it was yesterday; it definitely isn't
 something worth calling out on this thread.

 
  They seem like just the type of guys that would help Keystone with
  performance benchmarking!
  Burn them!

 I'm pretty sure that's not what I meant. :)

 
  I apologize if any of this is somewhat incoherent, I'm still a bit
 jet-lagged
  so I'm not sure that I'm making much sense.
 
  Ah.
 

 Yeah, let's chalk it up to dulled senses from insufficient sleep and
 trying to
 get back on my usual schedule from a trip down under.

  [4]
 http://git.openstack.org/cgit/stackforge/rally/tree/doc/user_stories

 -Matt Treinish

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Passing a list of ResourceGroup's attributes back to its members

2014-08-13 Thread Tomas Sedovic
On 12/08/14 01:06, Steve Baker wrote:
 On 09/08/14 11:15, Zane Bitter wrote:
 On 08/08/14 11:07, Tomas Sedovic wrote:
 On 08/08/14 00:53, Zane Bitter wrote:
 On 07/08/14 13:22, Tomas Sedovic wrote:
 Hi all,

 I have a ResourceGroup which wraps a custom resource defined in
 another
 template:

   servers:
 type: OS::Heat::ResourceGroup
 properties:
   count: 10
   resource_def:
 type: my_custom_server
 properties:
   prop_1: ...
   prop_2: ...
   ...

 And a corresponding provider template and environment file.

 Now I can get say the list of IP addresses or any custom value of each
 server from the ResourceGroup by using `{get_attr: [servers,
 ip_address]}` and outputs defined in the provider template.

 But I can't figure out how to pass that list back to each server in
 the
 group.

 This is something we use in TripleO for things like building a MySQL
 cluster, where each node in the cluster (the ResourceGroup) needs the
 addresses of all the other nodes.

 Yeah, this is kind of the perpetual problem with clusters. I've been
 hoping that DNSaaS will show up in OpenStack soon and that that will be
 a way to fix this issue.

 The other option is to have the cluster members discover each other
 somehow (mDNS?), but people seem loath to do that.

 Right now, we have the servers ungrouped in the top-level template
 so we
 can build this list manually. But if we move to ResourceGroups (or any
 other scaling mechanism, I think), this is no longer possible.

 So I believe the current solution is to abuse a Launch Config resource
 as a store for the data, and then later retrieve it somehow? Possibly
 you could do something along similar lines, but it's unclear how the
 'later retrieval' part would work... presumably it would have to
 involve
 something outside of Heat closing the loop :(

 Do you mean AWS::AutoScaling::LaunchConfiguration? I'm having trouble
figuring out how that would work. LaunchConfig represents an instance,
 right?


 We can't pass the list to ResourceGroup's `resource_def` section
 because
 that causes a circular dependency.

 And I'm not aware of a way to attach a SoftwareConfig to a
 ResourceGroup. SoftwareDeployment only allows attaching a config to a
 single server.

 Yeah, and that would be a tricky thing to implement well, because a
 resource group may not be a group of servers (but in many cases it may
 be a group of nested stacks that each contain one or more servers, and
 you'd want to be able to handle that too).

 Yeah, I worried about that, too :-(.

 Here's a proposal that might actually work, though:

 The provider resource exposes the reference to its inner instance by
 declaring it as one of its outputs. A SoftwareDeployment would learn to
 accept a list of Nova servers, too.

 Provider template:

  resources:
my_server:
  type: OS::Nova::Server
  properties:
...

... (some other resource hidden in the provider template)

  outputs:
inner_server:
  value: {get_resource: my_server}
ip_address:
  value: {get_attr: [my_server, networks, private, 0]}

 Based on my limited testing, this already makes it possible to use the
 inner server with a SoftwareDeployment from another template that uses
 my_server as a provider resource.

 E.g.:

  a_cluster_of_my_servers:
type: OS::Heat::ResourceGroup
properties:
  count: 10
  resource_def:
type: custom::my_server
...

  some_deploy:
type: OS::Heat::StructuredDeployment
properties:
  server: {get_attr: [a_cluster_of_my_servers,
 resource.0.inner_server]}
  config: {get_resource: some_config}


 So what if we allowed SoftwareDeployment to accept a list of servers in
 addition to accepting just one server? Or add another resource that does
 that.

 I approve of that in principle. Only Steve Baker can tell us for sure
 if there are any technical roadblocks in the way of that, but I don't
 see any.

 Maybe if we had a new resource type that was internally implemented as
 a nested stack... that might give us a way of tracking the individual
 deployment statuses for free.

 cheers,
 Zane.

 Then we could do:

  mysql_cluster_deployment:
type: OS::Heat::StructuredDeployment
properties:
  server_list: {get_attr: [a_cluster_of_my_servers,
 inner_server]}
  config: {get_resource: mysql_cluster_config}
  input_values:
cluster_ip_addresses: {get_attr: [a_cluster_of_my_servers,
 ip_address]}

 This isn't that different from having a SoftwareDeployment accepting a
 single server and doesn't have any of the problems of allowing a
 ResourceGroup as a SoftwareDeployment target.

 What do you think?
 All the other solutions I can think of will result in circular issues.
 
 I'll start looking at a spec to create a resource 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 12:55:48PM +0100, Steven Hardy wrote:
 On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange wrote:
  On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
   That said, I entirely agree with you and wish efforts to stabilize would
   take precedence over feature work.
  
  I find it really contradictory that we have such a strong desire for
  stabilization and testing of our code, but at the same time so many
  people argue that the core teams should have nothing at all todo with
  the stable release branches which a good portion of our users will
  actually be running. 
 
 Does such an argument actually exist?  My experience has been that
 stable-maint folks are very accepting of help, and that it's relatively
 easy for core reviewers with an interest in stable branch maintenance to
 offer their services and become stable-maint core:
 
 https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team

There are multiple responses to my mail here to the effect that core
teams should not be involved in stable branch work and leave it up to
the distro maintainers unless individuals wish to volunteer:

  http://lists.openstack.org/pipermail/openstack-dev/2014-July/041409.html


  By ignoring stable branches, leaving it up to a
  small team to handle, I think we're giving the wrong message about what
  our priorities as a team are. I can't help thinking this filters
  through to impact the way people think about their work on master.
 
 Who is ignoring stable branches?  This sounds like a project specific
 failing to me, as all experienced core reviewers should consider offering
 their services to help with stable-maint activity.

 I don't personally see any reason why the *entire* project core team has to
 do this, but a subset of them should feel compelled to participate in the
 stable-maint process, if they have sufficient time, interest and historical
 context, it's not some other team IMO.

I think that stable branch review should be a key responsibility for anyone
on the core team, not solely those few who volunteer for the stable team. As
the number of projects in openstack grows I think the idea of having a
single stable team with rights to approve across any project is ultimately
flawed because it doesn't scale efficiently and they don't have the same
level of domain knowledge as the respective project teams.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Deprecating CONF.block_device_allocate_retries_interval

2014-08-13 Thread Liyi Meng

Hi Nikola,

Thanks a lot for the input! May I kindly invite you to review the change 
as well?


BR/Liyi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Thu, 2014-08-07 at 09:30 -0400, Sean Dague wrote:

 While I definitely think re-balancing our quality responsibilities back
 into the projects will provide an overall better release, I think it's
 going to take a long time before it lightens our load to the point where
 we get more breathing room again.

I'd love to hear more about this re-balancing idea. It sounds like we
have some concrete ideas here and we're saying they're not relevant to
this thread because they won't be an immediate solution?

 This isn't just QA issues, it's a coordination issue on overall
 consistency across projects. Something that worked fine at 5 integrated
 projects, got strained at 9, and I think is completely untenable at 15.

I can certainly relate to that from experience with Oslo.

But if you take a concrete example - as more new projects emerge, it
became harder to get them all using oslo.messaging and using it
consistent ways. That's become a lot better with Doug's idea of Oslo
project delegates.

But if we had not added those projects to the release, the only reason
that the problem would be more manageable is that the use of
oslo.messaging would effectively become a requirement for integration.
So, projects requesting integration have to take cross-project
responsibilities more seriously for fear their application would be
denied.

That's a very sad conclusion. Our only tool for encouraging people to
take this cross-project issue seriously is being accepted into the release
and, once achieved, the cross-project responsibilities aren't taken so
seriously?

I don't think it's so bleak as that - given the proper support,
direction and tracking I think we're seeing in Oslo how projects will
play their part in getting to cross-project consistency.

 I think one of the big issues with a large number of projects is that
 implications of implementation of one project impact others, but people
 don't always realize. Locally correct decisions for each project may not
 be globally correct for OpenStack. The GBP discussion, the Rally
 discussion, all are flavors of this.

I think we need two things here - good examples of how these
cross-project initiatives can succeed so people can learn from them, and
for the initiatives themselves to be patiently led by those whose goal
is a cross-project solution.

It's hard work, absolutely no doubt. The point again, though, is that it
is possible to do this type of work in such a way that once a small
number of projects adopt the approach, most of the others will follow
quite naturally.

If I was trying to get a consistent cross-project approach in a
particular area, the least of my concerns would be whether Ironic,
Marconi, Barbican or Designate would be willing to fall in line behind a
cross-project consensus.

 People are frustrated in infra load, for instance. It's probably worth
 noting that the 'config' repo currently has more commits landed than any
 other project in OpenStack besides 'nova' in this release. It has 30%
 the core team size as Nova (http://stackalytics.com/?metric=commits).

Yes, infra is an extremely busy project. I'm not sure I'd compare
infra/config commits to Nova commits in order to illustrate that,
though.

Infra is a massive endeavor; it's as critical a part of the project as
any project in the integrated release, and like other strategic
efforts it struggles to attract contributors from as diverse a range of
companies as the integrated projects.

 So I do think we need to really think about what *must* be in OpenStack
 for it to be successful, and ensure that story is well thought out, and
 that the pieces which provide those features in OpenStack are clearly
 best of breed, so they are deployed in all OpenStack deployments, and
 can be counted on by users of OpenStack.

I do think we try hard to think this through, but no doubt we need to do
better. Is this conversation concrete enough to really move our thinking
along sufficiently, though?

 Because if every version of
 OpenStack deploys with a different Auth API (an example that's current
 but going away), we can't grow an ecosystem of tools around it.

There's a nice concrete example, but it's going away? What's the best
current example to talk through?

 This is organic definition of OpenStack through feedback with operators
 and developers on what's minimum needed and currently working well
 enough that people are happy to maintain it. And make that solid.
 
 Having a TC that is independently selected separate from the PTLs allows
 that group to try to make some holistic calls here.
 
 At the end of the day, that's probably going to mean saying No to more
 things. Every time I turn around everyone wants the TC to say No to
 things, just not to their particular thing. :) Which is human nature.
 But I think if we don't start saying No to more things we're going to
 end up with a pile of mud that no one is happy with.

That we're being so abstract about all of this is frustrating. I get
that no-one wants to start a 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Ihar Hrachyshka

On 13/08/14 14:07, Daniel P. Berrange wrote:
 On Wed, Aug 13, 2014 at 12:55:48PM +0100, Steven Hardy wrote:
 On Wed, Aug 13, 2014 at 11:42:52AM +0100, Daniel P. Berrange
 wrote:
 On Thu, Aug 07, 2014 at 03:56:04AM -0700, Jay Pipes wrote:
 That said, I entirely agree with you and wish efforts to
 stabilize would take precedence over feature work.
 
 I find it really contradictory that we have such a strong
 desire for stabilization and testing of our code, but at the
 same time so many people argue that the core teams should have
 nothing at all todo with the stable release branches which a
 good portion of our users will actually be running.
 
 Does such an argument actually exist?  My experience has been
 that stable-maint folks are very accepting of help, and that it's
 relatively easy for core reviewers with an interest in stable
 branch maintenance to offer their services and become
 stable-maint core:
 
 https://wiki.openstack.org/wiki/StableBranch#Joining_the_Team
 
 There are multiple responses to my mail here to the effect that
 core teams should not be involved in stable branch work and leave
  it up to the distro maintainers unless individuals wish to
 volunteer
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041409.html

It doesn't indicate that the stable maintainers' team is not willing to
get help from core developers. Any core can easily step in and ask for
+2 permission for stable branches; it should not take much time to get
it. Granting +2 should mean that the new member has read and
understood the stable branch maintainership procedures (which are short
and clear).

 
 
  By ignoring stable branches, leaving it up to a small team to
  handle, I think we're giving the wrong message about what our
  priorities as a team are. I can't help thinking this
 filters through to impact the way people think about their work
 on master.
 
 Who is ignoring stable branches?  This sounds like a project
 specific failing to me, as all experienced core reviewers should
 consider offering their services to help with stable-maint
 activity.
 
 I don't personally see any reason why the *entire* project core
 team has to do this, but a subset of them should feel compelled
 to participate in the stable-maint process, if they have
 sufficient time, interest and historical context, it's not some
 other team IMO.
 
 I think that stable branch review should be a key responsibility
 for anyone on the core team, not solely those few who volunteer for
 stable team. As the number of projects in openstack grows I think
 the idea of having a single stable team with rights to approve
 across any project is ultimately flawed because it doesn't scale
 efficiently and they don't have the same level of domain knowledge
 as the respective project teams.

Indeed, stable maintainers sometimes lack full understanding of the
proposed patch. Anyway, if a patch is easy and it has a clear
description in its commit message and Launchpad, it's usually easy to
determine whether it's applicable for stable branches.

Yes, sometimes a stable maintainer is not able to determine if a patch
should really go into stable; in that case core developers should be
asked to vote on the patch. In most cases though, it's generally
assumed that the patch contents are ok (they were already merged in
master, meaning, core developers already voted +2 on it before), and
there is no real need for special attention from core developers (that
are usually busy with ongoing work in master).

Note: there are sometimes patches that belong to stable branches only.
In those cases, stable maintainers should not be the ones to decide
whether the patch goes into the tree, because the patch never went
through due review in master.

/Ihar



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Russell Bryant
On 08/12/2014 06:57 PM, Michael Still wrote:
 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.

Note that we also have:

https://wiki.openstack.org/wiki/Nova/CoreTeam

so once new criteria reach consensus, they should be added there.

-- 
Russell Bryant



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Russell Bryant
On 08/12/2014 10:05 PM, Michael Still wrote:
 there are hundreds of proposed features for
 Juno, nearly 100 of which have been accepted. However, we're kidding
 ourselves if we think we can land 100 blueprints in a release cycle.

FWIW, I think this is actually a huge improvement over previous cycles.  I
think we had almost double that # of blueprints on the list in the past.

I also don't think 100 is *completely* out of the question.  We're in
the 50-100 range already:

Icehouse - 67
Havana - 91
Grizzly - 66

Anyway, just wanted to share some numbers ... some improvements to
prioritization within that 100 is certainly still a good thing.

-- 
Russell Bryant



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
 On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor mord...@inaugust.com wrote:

  Yes.
 
  Additionally, and I think we've been getting better at this in the 2 cycles
  that we've had an all-elected TC, I think we need to learn how to say no on
   technical merit - and we need to learn how to say "thank you for your
   effort, but this isn't working out". Breaking up with someone is hard to do,
  but sometimes it's best for everyone involved.
 
 
 I agree.
 
 The challenge is scaling the technical assessment of projects. We're
 all busy, and digging deeply enough into a new project to make an
  accurate assessment of it is time consuming. Sometimes, there are
 impartial subject-matter experts who can spot problems very quickly,
 but how do we actually gauge fitness?

Yes, it's important the TC does this and it's obvious we need to get a
lot better at it.

The Marconi architecture threads are an example of us trying harder (and
kudos to you for taking the time), but it's a little disappointing how
it has turned out. On the one hand there's what seems like a "this
doesn't make any sense" gut feeling, and on the other hand an earnest,
but hardly bite-sized justification for how the API was chosen and how
it led to the architecture. It's frustrating that this appears to not be
resulting in either improved shared understanding or improved
architecture. Yet everyone is trying really hard.

 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed in
 a while.

I think I recall us discussing a "must have feedback that it's
successfully deployed" requirement in the last cycle, but we recognized
that deployers often wait until a project is integrated.

 I'm not suggesting we make a policy of it, but if, after a
 few cycles, a project is still not meeting the needs of users, I think
 that's a very good reason to free up the hold on that role within the
 stack so other projects can try and fill it (assuming that is even a
 role we would want filled).

I'm certainly not against discussing de-integration proposals. But I
could imagine a case for de-integrating every single one of our
integrated projects. None of our software is perfect. How do we make
sure we approach this sanely, rather than run the risk of someone
starting a witch hunt because of a particular pet peeve?

I could imagine a really useful dashboard showing the current state of
projects along a bunch of different lines - summary of latest
deployments data from the user survey, links to known scalability
issues, limitations that operators should take into account, some
capturing of trends so we know whether things are improving. All of this
data would be useful to the TC, but also hugely useful to operators.
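
To make the shape of that concrete, here is a minimal sketch of the kind
of per-project record such a dashboard could aggregate (the field names
and values are purely illustrative, not an actual proposal):

    project_health = {
        'someproject': {
            # Deployment count pulled from the latest user survey.
            'user_survey_deployments': 0,
            # Links to tracked scalability bugs for the project.
            'known_scalability_issues': [],
            # Caveats operators should take into account when deploying.
            'operator_limitations': [],
            # Direction of travel relative to the previous survey.
            'trend': 'improving',
        },
    }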

Mark.




Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
    It seems like this is exactly what the slots give us, though. The core
  review team picks a number of slots indicating how much work they think
  they can actually do (less than the available number of blueprints), and
  then blueprints queue up to get a slot based on priorities and turnaround
  time and other criteria that try to make slot allocation fair. By having
  the slots, not only is the review priority communicated to the review
  team, it is also communicated to anyone watching the project.
 
 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.
 
 For example it might address some pain-point they've encountered, or
 impact on some functional area that they themselves have worked on in
 the past, or line up with their thinking on some architectural point.
 
 But for whatever motivation, such small groups of cores currently have
 the freedom to self-organize in a fairly emergent way and champion
 individual BPs that are important to them, simply by *independently*
 giving those BPs review attention.
 
 Whereas under the slots initiative, presumably this power would be
 subsumed by the group will, as expressed by the prioritization
 applied to the holding pattern feeding the runways?
 
 I'm not saying this is good or bad, just pointing out a change that
 we should have our eyes open to.

Yeah, I'm really nervous about that aspect.

Say a contributor proposes a new feature, a couple of core reviewers
think it's important and exciting enough for them to champion it but somehow
the 'group will' is that it's not a high enough priority for this
release, even if everyone agrees that it is actually cool and useful.

What does imposing that 'group will' on the two core reviewers and
contributor achieve? That the contributor and reviewers will happily
turn their attention to some of the higher priority work? Or we lose a
contributor and two reviewers because they feel disenfranchised?
Probably somewhere in the middle.

On the other hand, what happens if work proceeds ahead even if not
deemed a high priority? I don't think we can say that the contributor
and two core reviewers were distracted from higher priority work,
because blocking this work is probably unlikely to shift their focus in
a productive way. Perhaps other reviewers are distracted because they
feel the work needs more oversight than just the two core reviewers? It
places more of a burden on the gate?

I dunno ... the consequences of imposing group will worry me more than
the consequences of allowing small groups to self-organize like this.

Mark.




Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Russell Bryant
On 08/13/2014 05:57 AM, Daniel P. Berrange wrote:
 On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.

 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.

 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.

Yep, this bit is obviously the most important.  I would prefer a good
level of review activity be the only *hard* requirement.

  Additionally, there are increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
  agreed that summit attendance is really important to cores. It's the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.

 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
  is that I'd like cores to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.
 
 Personally I'm going to find it really hard to justify long distance
 travel 4 times a year for OpenStack for personal / family reasons,
 let alone company cost. I couldn't attend Icehouse mid-cycle because
 I just had too much travel in a short time to be able to do another
 week long trip away from family. I couldn't attend Juno mid-cycle
  because it clashed with a personal holiday. There are other opensource
 related conferences that I also have to attend (LinuxCon, FOSDEM,
 KVM Forum, etc), etc so doubling the expected number of openstack
 conferences from 2 to 4 is really very undesirable from my POV.
  I might be able to attend the occasional mid-cycle meetup if the
 location was convenient, but in general I don't see myself being
 able to attend them regularly.
 
 I tend to view the fact that we're emphasising the need of in-person
 meetups to be somewhat of an indication of failure of our community
 operation. The majority of open source projects work very effectively
 with far less face-to-face time. OpenStack is fortunate that companies
 are currently willing to spend 6/7-figure sums flying 1000's of
 developers around the world many times a year, but I don't see that
 lasting forever so I'm concerned about baking the idea of f2f midcycle
 meetups into our way of life even more strongly.

I'm concerned about this, as well.  There are lots of reasons people
can't attend things (budget or personal reasons).  I'd hate to think
that not being able to travel this much (which I think is *a lot*) hurts
someone's ability to be an important part of the nova team.
Unfortunately, that's the direction we're trending.

I also think it furthers the image of nova being an exclusive clique.  I
think we should always look at things as ways to be as inclusive as
possible.  Focusing the important conversations at the 4 in-person
meetups per year leaves most of the community out.

 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.
 
 Travel funding is certainly an issue, but I'm not sure that Foundation
 funding would be a solution, because the impact probably isn't directly
 on the core devs. Speaking with my Red Hat on, if the midcycle meetup
 is important enough, the core devs will likely get the funding to attend.
 The fallout of this though is that every attendee at a mid-cycle summit
 means fewer attendees at the next design summit. So the impact of having
 more core devs at mid-cycle is that we'll get fewer non-core devs at
 the design summit. This sucks big time for the non-core devs who want
 to engage with our community.

I can confirm that this is the effect I am seeing for our team.  There
were *a lot* of meetups this cycle, and it was expensive.

This was actually one of the arguments against splitting the design
summit out from the main conference, yet I'm afraid we've created the
problem anyway.

 Also having each team do a f2f mid-cycle meetup at a different location
 makes it even harder for people who have a genuine desire / need to take
 part in multiple teams. Going to multiple mid-cycle meetups is even more
 difficult to justify so they're having to make difficult decisions about
 which to go to :-(

Indeed, and we actually need to be strongly *encouraging* cross-project
participation.

 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Kyle Mestery
On Wed, Aug 13, 2014 at 5:15 AM, Daniel P. Berrange berra...@redhat.com wrote:
 On Mon, Aug 11, 2014 at 10:30:12PM -0700, Joe Gordon wrote:
 On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com wrote:
   I really like this idea, as Michael and others alluded to above, we are
   attempting to set cycle goals for Kilo in Nova, but I think it is worth
   doing for all of OpenStack. We would like to make a list of key goals
   before the summit so that we can plan our summit sessions around the
   goals. On a really high level, one way to look at this is: in Kilo we
   need to pay down our technical debt.

   The slots/runway idea is somewhat separate from defining key cycle
   goals; we can approve blueprints based on key cycle goals without doing
   slots. But with so many concurrent blueprints up for review at any given
   time, the review teams are doing a lot of multitasking and humans are
   not very good at multitasking. Hopefully slots can help address this
   issue, and hopefully allow us to actually merge more blueprints in a
   given cycle.
  
  I'm not 100% sold on what the slots idea buys us. What I've seen this
  cycle in Neutron is that we have a LOT of BPs proposed. We approve
  them after review. And then we hit one of two issues: Slow review
  cycles, and slow code turnaround issues. I don't think slots would
  help this, and in fact may cause more issues. If we approve a BP and
  give it a slot for which the eventual result is slow review and/or
  code review turnaround, we're right back where we started. Even worse,
  we may have not picked a BP for which the code submitter would have
  turned around reviews faster. So we've now doubly hurt ourselves. I
  have no idea how to solve this issue, but by over-subscribing the
  slots (e.g. over-approving), we give the submissions with faster
  turnaround a chance to merge quicker. With slots, we've removed this
  capability by limiting what is even allowed to be considered for
  review.
 

 Slow review: by limiting the number of blueprints up we hope to focus our
 efforts on fewer concurrent things
 Slow code turnaround: when a blueprint is given a slot (runway) we will
 first make sure the author/owner is available for fast code turnaround.

 If a blueprint review stalls out (slow code turnaround, stalemate in review
 discussions etc.) we will take the slot and give it to another blueprint.

 This idea of fixed slots is not really very appealing to me. It sounds
  like we're adding a significant amount of bureaucratic overhead to our
 development process that is going to make us increasingly inefficient.
  I don't want to waste time waiting for a stalled blueprint to time out
 before we give the slot to another blueprint. On any given day when I
 have spare review time available I'll just review anything that is up
 and waiting for review. If we can set a priority for the things up for
 review that is great since I can look at those first, but the idea of
 having fixed slots for things we should review does not do anything to
 help my review efficiency IMHO.

  I also think it will kill our flexibility in approving and dealing with
  changes that are not strategically important, but nonetheless go
  through our blueprint/specs process. There have been a bunch of things
  I've dealt with that are not strategic, but have low overhead to code
  and review and are easily dealt with in the slack time between looking at
  the high priority reviews. It sounds like we're going to lose our
  flexibility to pull in stuff like this if it only gets a chance when
  strategically important stuff is not occupying a slot.

I agree with all of Daniel's comments here, and these are the same
reasons I'm not in favor of fixed slots or runways. As ttx has
stated in this thread, we have done a really poor job as a project of
understanding what the priority items for a release are, and sticking
to those. Trying to solve that to put focus on the priority items,
while allowing for smaller, low-overhead code and reviews, should be
the priority here.

Thanks,
Kyle

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|




Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Joshua Harlow

A big +1 to what daniel said,

If f2f events are becoming so important and the only way to get things
done, IMHO we should really start to do some reflection on how our
community operates and start thinking about what we are doing wrong.
Expecting every company to send developers (core or non-core) to all
these events is unrealistic (and IMHO is the wrong path our community
should go down). If only cores go (they can probably convince their
employers they should/need to), these f2f events become something akin
to secret f2f meetings where decisions are made behind some set of
closed doors (maybe cores should then be renamed the 'secret society of
core reviewers', maybe even giving them an Illuminati-like logo, haha);
that doesn't seem very open to me (and as Daniel said, further
stratifies the people who work on openstack...).


Going the whole virtual route does seem better (although it still feels 
like something is wrong with how we are operating if that's even 
needed).


-Josh

On Wed, Aug 13, 2014 at 2:57 AM, Daniel P. Berrange 
berra...@redhat.com wrote:

On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:

 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.

 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there are increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. It's the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.

 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like cores to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.


Personally I'm going to find it really hard to justify long distance
travel 4 times a year for OpenStack for personal / family reasons,
let alone company cost. I couldn't attend Icehouse mid-cycle because
I just had too much travel in a short time to be able to do another
week long trip away from family. I couldn't attend Juno mid-cycle
because it clashed with a personal holiday. There are other opensource
related conferences that I also have to attend (LinuxCon, FOSDEM,
KVM Forum, etc), etc so doubling the expected number of openstack
conferences from 2 to 4 is really very undesirable from my POV.
I might be able to attend the occasional mid-cycle meetup if the
location was convenient, but in general I don't see myself being
able to attend them regularly.

I tend to view the fact that we're emphasising the need of in-person
meetups to be somewhat of an indication of failure of our community
operation. The majority of open source projects work very effectively
with far less face-to-face time. OpenStack is fortunate that companies
are currently willing to spend 6/7-figure sums flying 1000's of
developers around the world many times a year, but I don't see that
lasting forever so I'm concerned about baking the idea of f2f midcycle
meetups into our way of life even more strongly.


 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.


Travel funding is certainly an issue, but I'm not sure that Foundation
funding would be a solution, because the impact probably isn't directly
on the core devs. Speaking with my Red Hat on, if the midcycle meetup
is important enough, the core devs will likely get the funding to attend.
The fallout of this though is that every attendee at a mid-cycle summit
means fewer attendees at the next design summit. So the impact of having
more core devs at mid-cycle is that we'll get fewer non-core devs at
the design summit. This sucks big time for the non-core devs who want
to engage with our community.

Also having each team do a f2f mid-cycle meetup at a different location
makes it even harder for people who have a genuine desire / need to take
part in multiple teams. Going to multiple mid-cycle meetups is even more
difficult to justify so they're having to make difficult decisions about
which to go to :-(

I'm also not a fan of mid-cycle meetups because I feel 

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Kyle Mestery
On Wed, Aug 13, 2014 at 7:55 AM, Russell Bryant rbry...@redhat.com wrote:
 On 08/13/2014 05:57 AM, Daniel P. Berrange wrote:
 On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.

 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.

 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.

 Yep, this bit is obviously the most important.  I would prefer a good
 level of review activity be the only *hard* requirement.

 Additionally, there are increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. It's the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.

 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like cores to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.

 Personally I'm going to find it really hard to justify long distance
 travel 4 times a year for OpenStack for personal / family reasons,
 let alone company cost. I couldn't attend Icehouse mid-cycle because
 I just had too much travel in a short time to be able to do another
 week long trip away from family. I couldn't attend Juno mid-cycle
 because it clashed with a personal holiday. There are other opensource
 related conferences that I also have to attend (LinuxCon, FOSDEM,
 KVM Forum, etc), etc so doubling the expected number of openstack
 conferences from 2 to 4 is really very undesirable from my POV.
 I might be able to attend the occasional mid-cycle meetup if the
 location was convenient, but in general I don't see myself being
 able to attend them regularly.

 I tend to view the fact that we're emphasising the need of in-person
 meetups to be somewhat of an indication of failure of our community
 operation. The majority of open source projects work very effectively
 with far less face-to-face time. OpenStack is fortunate that companies
 are currently willing to spend 6/7-figure sums flying 1000's of
 developers around the world many times a year, but I don't see that
 lasting forever so I'm concerned about baking the idea of f2f midcycle
 meetups into our way of life even more strongly.

 I'm concerned about this, as well.  There are lots of reasons people
 can't attend things (budget or personal reasons).  I'd hate to think
 that not being able to travel this much (which I think is *a lot*) hurts
 someone's ability to be an important part of the nova team.
 Unfortunately, that's the direction we're trending.


+1

I've seen a definite uptick in travel for OpenStack, and it's not
sustainable for all the reasons stated here. We need to figure out a
better way to collaborate virtually, as we're a global Open Source
project and we can't assume that everyone can travel all the time for
all the mid-cycles, conferences, etc.


 I also think it furthers the image of nova being an exclusive clique.  I
 think we should always look at things as ways to be as inclusive as
 possible.  Focusing the important conversations at the 4 in-person
 meetups per year leaves most of the community out.


Again, I agree with this assessment. We need to shift things back to
the weekly IRC meetings, ML discussions, and perhaps some sort of
virtual conference scheduling as well.

 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.

 Travel funding is certainly an issue, but I'm not sure that Foundation
 funding would be a solution, because the impact probably isn't directly
 on the core devs. Speaking with my Red Hat on, if the midcycle meetup
 is important enough, the core devs will likely get the funding to attend.
 The fallout of this though is that every attendee at a mid-cycle summit
 means fewer attendees at the next design summit. So the impact of having
 more core devs at mid-cycle is that we'll get fewer non-core devs at
 the design summit. This sucks big time for the non-core devs who want
 to engage with our community.

 I can confirm that this is the effect I am seeing for our team.  There
 were *a lot* of meetups this cycle, and it was expensive.

Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-08-12 at 14:12 -0700, Joe Gordon wrote:


 Here is the full nova proposal on  Blueprint in Kilo: Runways and
 Project Priorities
  
 https://review.openstack.org/#/c/112733/
 http://docs-draft.openstack.org/33/112733/4/check/gate-nova-docs/5f38603/doc/build/html/devref/runways.html

Thanks again for doing this.

Four points in the discussion jump out at me. Let's see if I can
paraphrase without misrepresenting :)

  - ttx - we need tools to be able to visualize these runways

  - danpb - the real problem here is that we don't have good tools to
    help reviewers maintain a todo list which feeds, in part, off
    blueprint prioritization

  - eglynn - what are the implications for our current ability for
    groups within the project to self-organize?

  - russellb - why is this different from reviewers sponsoring
    blueprints, and how will it work better?


I've been struggling to articulate a tooling idea for a while now. Let
me try again based on the runways idea and the thoughts above ...


When a reviewer sits down to do some reviews, their goal should be to
work through the small number of runways they're signed up to and drive
the list of reviews that need their attention to zero.

Reviewers should be able to create their own runways and allow others
sign up to them.

The reviewers responsible for that runway are responsible for pulling
new reviews from explicitly defined feeder runways.

Some feeder runways could be automated; no more than a search query for
say new libvirt patches which aren't already in the libvirt driver
runway.
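
To make that concrete, here is a minimal sketch of such an automated
feeder query (this assumes the standard Gerrit REST API on
review.openstack.org; the file regex and the notion of a "libvirt
driver" runway are only illustrative):

    import json
    import requests

    # Open nova changes touching the libvirt driver; candidates for a
    # hypothetical "libvirt driver" feeder runway.
    query = 'status:open project:openstack/nova file:^nova/virt/libvirt/.*'
    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': query, 'n': 25})
    resp.raise_for_status()
    # Gerrit prefixes its JSON responses with a ")]}'" line; strip it.
    changes = json.loads(resp.text.split('\n', 1)[1])
    for change in changes:
        print('%s %s' % (change['_number'], change['subject']))

Anything the query returns that isn't already tracked in the runway
would then be pulled in automatically.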

All of this activity should be visible to everyone. It should be
possible to look at all the runways, see what runways a patch is in,
understand the flow between runways, etc.


There's a lot of detail that would have to be worked out, but I'm pretty
convinced there's an opportunity to carve up the review backlog, empower
people to help out with managing the backlog, give reviewers manageable
queues for them to stay on top of, help ensure that project prioritization
is one of the drivers of reviewer activity, and increase contributor
visibility into how decisions are made.

Mark.





Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Russell Bryant
On 08/13/2014 08:52 AM, Mark McLoughlin wrote:
 On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
   It seems like this is exactly what the slots give us, though. The core
 review team picks a number of slots indicating how much work they think
 they can actually do (less than the available number of blueprints), and
 then blueprints queue up to get a slot based on priorities and turnaround
 time and other criteria that try to make slot allocation fair. By having
 the slots, not only is the review priority communicated to the review
 team, it is also communicated to anyone watching the project.

 One thing I'm not seeing shine through in this discussion of slots is
 whether any notion of individual cores, or small subsets of the core
 team with aligned interests, can champion blueprints that they have
 a particular interest in.

 For example it might address some pain-point they've encountered, or
 impact on some functional area that they themselves have worked on in
 the past, or line up with their thinking on some architectural point.

 But for whatever motivation, such small groups of cores currently have
 the freedom to self-organize in a fairly emergent way and champion
 individual BPs that are important to them, simply by *independently*
 giving those BPs review attention.

 Whereas under the slots initiative, presumably this power would be
 subsumed by the group will, as expressed by the prioritization
 applied to the holding pattern feeding the runways?

 I'm not saying this is good or bad, just pointing out a change that
 we should have our eyes open to.
 
 Yeah, I'm really nervous about that aspect.
 
 Say a contributor proposes a new feature, a couple of core reviewers
 think it's important and exciting enough for them to champion it but somehow
 the 'group will' is that it's not a high enough priority for this
 release, even if everyone agrees that it is actually cool and useful.
 
 What does imposing that 'group will' on the two core reviewers and
 contributor achieve? That the contributor and reviewers will happily
 turn their attention to some of the higher priority work? Or we lose a
 contributor and two reviewers because they feel disenfranchised?
 Probably somewhere in the middle.
 
 On the other hand, what happens if work proceeds ahead even if not
 deemed a high priority? I don't think we can say that the contributor
 and two core reviewers were distracted from higher priority work,
 because blocking this work is probably unlikely to shift their focus in
 a productive way. Perhaps other reviewers are distracted because they
 feel the work needs more oversight than just the two core reviewers? It
 places more of a burden on the gate?
 
 I dunno ... the consequences of imposing group will worry me more than
 the consequences of allowing small groups to self-organize like this.

Yes, this is by far my #1 concern with the plan.

I think perhaps some middle ground makes sense.

1) Start doing a better job of generating a priority list, and
identifying the highest priority items based on group will.

2) Expect that reviewers use the priority list to influence their
general review time.

3) Don't actually block other things, should small groups self-organize
and decide it's important enough to them, even if not to the group as a
whole.

That sort of approach still sounds like an improvement to what we have
today, which is a lack of good priority communication to direct general
review time.

-- 
Russell Bryant



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 09:11:26AM -0400, Russell Bryant wrote:
 On 08/13/2014 08:52 AM, Mark McLoughlin wrote:
  On Tue, 2014-08-12 at 14:26 -0400, Eoghan Glynn wrote:
    It seems like this is exactly what the slots give us, though. The core
  review team picks a number of slots indicating how much work they think
  they can actually do (less than the available number of blueprints), and
  then blueprints queue up to get a slot based on priorities and turnaround
  time and other criteria that try to make slot allocation fair. By having
  the slots, not only is the review priority communicated to the review
  team, it is also communicated to anyone watching the project.
 
  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
 
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
 
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
 
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
  
  Yeah, I'm really nervous about that aspect.
  
  Say a contributor proposes a new feature, a couple of core reviewers
  think it's important and exciting enough for them to champion it but somehow
  the 'group will' is that it's not a high enough priority for this
  release, even if everyone agrees that it is actually cool and useful.
  
  What does imposing that 'group will' on the two core reviewers and
  contributor achieve? That the contributor and reviewers will happily
  turn their attention to some of the higher priority work? Or we lose a
  contributor and two reviewers because they feel disenfranchised?
  Probably somewhere in the middle.
  
  On the other hand, what happens if work proceeds ahead even if not
  deemed a high priority? I don't think we can say that the contributor
  and two core reviewers were distracted from higher priority work,
  because blocking this work is probably unlikely to shift their focus in
  a productive way. Perhaps other reviewers are distracted because they
  feel the work needs more oversight than just the two core reviewers? It
  places more of a burden on the gate?
  
  I dunno ... the consequences of imposing group will worry me more than
  the consequences of allowing small groups to self-organize like this.
 
 Yes, this is by far my #1 concern with the plan.
 
 I think perhaps some middle ground makes sense.
 
 1) Start doing a better job of generating a priority list, and
 identifying the highest priority items based on group will.
 
 2) Expect that reviewers use the priority list to influence their
 general review time.
 
 3) Don't actually block other things, should small groups self-organize
 and decide it's important enough to them, even if not to the group as a
 whole.
 
 That sort of approach still sounds like an improvement to what we have
 today, which is alack of good priority communication to direct general
 review time.

A key thing for the priority list is that it is in a machine consumable
format we can query somehow - even if that's a simple static text file
in a CSV format or something. As long as I can automate fetching and parsing
to correlate priorities with gerrit query results in some manner, that's
the key from my POV.
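
For illustration, a rough sketch of the sort of correlation I mean --
the CSV layout and file name are invented, and it assumes blueprint
changes carry the usual bp/<name> topic in Gerrit:

    import csv
    import json
    import requests

    # Hypothetical priority file: one "blueprint-name,priority" row per line.
    with open('kilo-priorities.csv') as f:
        priorities = {name: int(prio) for name, prio in csv.reader(f)}

    resp = requests.get('https://review.openstack.org/changes/',
                        params={'q': 'status:open project:openstack/nova'})
    changes = json.loads(resp.text.split('\n', 1)[1])  # drop XSSI prefix line

    # List open reviews for prioritized blueprints, highest priority first.
    matched = [(priorities[c['topic'][3:]], c) for c in changes
               if c.get('topic', '').startswith('bp/')
               and c['topic'][3:] in priorities]
    for prio, change in sorted(matched, key=lambda item: item[0]):
        print('P%d %s %s' % (prio, change['_number'], change['subject']))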

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread CARVER, PAUL
Daniel P. Berrange [mailto:berra...@redhat.com] wrote:

our dispersed contributor base. I think that we should be examining
what we can achieve with some kind of virtual online mid-cycle meetups
instead. Using technology like google hangouts or some similar live
collaboration technology, not merely an IRC discussion. Pick a 2-3
day period, schedule formal agendas / talking slots as you would with
a physical summit and so on. I feel this would be more inclusive to
our community as a whole, avoid excessive travel costs, so allowing
more of our community to attend the bigger design summits. It would
even open possibility of having multiple meetups during a cycle (eg
could arrange mini virtual events around each milestone if we wanted)

How about arranging some high quality telepresence rooms? A number of
the big companies associated with OpenStack either make or own some
pretty nice systems. Perhaps it could be negotiated for some of these
companies to open their doors to OpenStack developers for some
scheduled events.

With some scheduling and coordination effort it would probably be
possible to setup a bunch of local meet-up points interconnected
by telepresence links.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Eoghan Glynn


    It seems like this is exactly what the slots give us, though. The core
    review team picks a number of slots indicating how much work they think
    they can actually do (less than the available number of blueprints), and
    then blueprints queue up to get a slot based on priorities and turnaround
    time and other criteria that try to make slot allocation fair. By having
    the slots, not only is the review priority communicated to the review
    team, it is also communicated to anyone watching the project.
  
  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
  
  For example it might address some pain-point they've encountered, or
  impact on some functional area that they themselves have worked on in
  the past, or line up with their thinking on some architectural point.
  
  But for whatever motivation, such small groups of cores currently have
  the freedom to self-organize in a fairly emergent way and champion
  individual BPs that are important to them, simply by *independently*
  giving those BPs review attention.
  
  Whereas under the slots initiative, presumably this power would be
  subsumed by the group will, as expressed by the prioritization
  applied to the holding pattern feeding the runways?
  
  I'm not saying this is good or bad, just pointing out a change that
  we should have our eyes open to.
 
 Yeah, I'm really nervous about that aspect.
 
 Say a contributor proposes a new feature, a couple of core reviewers
 think it's important and exciting enough for them to champion it but somehow
 the 'group will' is that it's not a high enough priority for this
 release, even if everyone agrees that it is actually cool and useful.
 
 What does imposing that 'group will' on the two core reviewers and
 contributor achieve? That the contributor and reviewers will happily
 turn their attention to some of the higher priority work? Or we lose a
 contributor and two reviewers because they feel disenfranchised?
 Probably somewhere in the middle.

Yeah, the outcome probably depends on the motivation/incentives that
are operating for individual contributors.

If their brief or primary interest was to land *specific* features,
then they may sit out the cycle, or just work away on their pet features
anyway under the radar.

If, OTOH, they have more of an over-arching "make the project better"
goal, they may gladly (or reluctantly) apply themselves to the group-
defined goals.

However, human nature being what it is, I'd suspect that the energy
levels applied to self-selected goals may be higher in the average case.
Just a gut feeling on that, no hard data to back it up. 

 On the other hand, what happens if work proceeds ahead even if not
 deemed a high priority? I don't think we can say that the contributor
 and two core reviewers were distracted from higher priority work,
 because blocking this work is probably unlikely to shift their focus in
 a productive way. Perhaps other reviewers are distracted because they
 feel the work needs more oversight than just the two core reviewers? It
 places more of a burden on the gate?

Well I think we have to accept the reality that we can't force people to
work on stuff they don't want to, or entirely stop them working on the
stuff that they do.

So inevitably there will be some deviation from the shining path, as
set out in the group will. Agreed that blocking this work from say
being proposed on gerrit won't necessarily have the desired outcome

(OK, it could stop the transitive distraction of other reviewers, and
remove the gate load, but won't restore the time spent working off-piste
by the contributor and two cores in your example)

 I dunno ... the consequences of imposing group will worry me more than
 the consequences of allowing small groups to self-organize like this.

Yep, this capacity for self-organization of informal groups with aligned
interests (as opposed to corporate affiliations) is, or at least should
be IMO, seen as one of the primary strengths of the open source model.

Cheers,
Eoghan



Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Joshua Harlow



On Wed, Aug 13, 2014 at 5:37 AM, Mark McLoughlin mar...@redhat.com 
wrote:

On Fri, 2014-08-08 at 15:36 -0700, Devananda van der Veen wrote:
 On Tue, Aug 5, 2014 at 10:02 AM, Monty Taylor 
mord...@inaugust.com wrote:



  Yes.
 
  Additionally, and I think we've been getting better at this in the 2
  cycles that we've had an all-elected TC, I think we need to learn how
  to say no on technical merit - and we need to learn how to say "thank
  you for your effort, but this isn't working out". Breaking up with
  someone is hard to do, but sometimes it's best for everyone involved.
 
 
 I agree.
 
 The challenge is scaling the technical assessment of projects. We're
 all busy, and digging deeply enough into a new project to make an
 accurate assessment of it is time consuming. Sometimes, there are
 impartial subject-matter experts who can spot problems very quickly,
 but how do we actually gauge fitness?


Yes, it's important the TC does this and it's obvious we need to get a
lot better at it.

The Marconi architecture threads are an example of us trying harder (and
kudos to you for taking the time), but it's a little disappointing how
it has turned out. On the one hand there's what seems like a "this
doesn't make any sense" gut feeling, and on the other hand an earnest,
but hardly bite-sized justification for how the API was chosen and how
it led to the architecture. It's frustrating that this appears to not be
resulting in either improved shared understanding or improved
architecture. Yet everyone is trying really hard.


 Letting the industry field-test a project and feed their experience
 back into the community is a slow process, but that is the best
 measure of a project's success. I seem to recall this being an
 implicit expectation a few years ago, but haven't seen it discussed in
 a while.


I think I recall us discussing a "must have feedback that it's
successfully deployed" requirement in the last cycle, but we recognized
that deployers often wait until a project is integrated.


 I'm not suggesting we make a policy of it, but if, after a
 few cycles, a project is still not meeting the needs of users, I think
 that's a very good reason to free up the hold on that role within the
 stack so other projects can try and fill it (assuming that is even a
 role we would want filled).


I'm certainly not against discussing de-integration proposals. But I
could imagine a case for de-integrating every single one of our
integrated projects. None of our software is perfect. How do we make
sure we approach this sanely, rather than run the risk of someone
starting a witch hunt because of a particular pet peeve?

I could imagine a really useful dashboard showing the current state of
projects along a bunch of different lines - summary of latest
deployments data from the user survey, links to known scalability
issues, limitations that operators should take into account, some
capturing of trends so we know whether things are improving. All of this
data would be useful to the TC, but also hugely useful to operators.


+1

This seems to be the only way to determine when a project isn't working 
out for the users in the community.


With such unbiased data being available, it would make a great case for 
why de-integration could happen. It would then allow the project to go 
back and fix itself, or allow for a replacement to be created that 
doesn't have the same set of limitations/problems. This would seem like 
a way that lets the project that works best for users eventually be
selected (survival of the fittest); although we also have to be 
careful, software isn't static and instead can be reshaped and molded 
and we should give the project that has issues a chance to reshape 
itself (giving the benefit of the doubt vs not).





Mark.







[openstack-dev] Annoucing CloudKitty : an OpenSource Rating-as-a-Service project for OpenStack

2014-08-13 Thread Christophe Sauthier
We are very pleased at Objectif Libre to introduce CloudKitty, an effort
to provide a fully OpenSource Rating-as-a-Service component in
OpenStack.


Following a first POC presented during the last summit in Atlanta to
some Ceilometer devs (thanks again Julien Danjou for your great
support!), we continued our effort to create a real service for rating. Today
we are happy to share it with you all.



So what do we propose in CloudKitty?
 - a service for collecting metrics (using Ceilometer API)
 - a modular rating architecture to enable/disable modules and create 
your own rules on-the-fly, allowing you to use the rating patterns you 
like
 - an API to interact with the whole environment from core components 
to every rating module
 - a Horizon integration to allow configuration of the rating modules 
and display of pricing information in real time during instance 
creation
 - a CLI client to access this information and easily configure 
everything


Technically we are using all the elements that are used in the various
OpenStack projects like oslo, stevedore, pecan...
CloudKitty is highly modular and allows integration/development of
third-party collection and rating modules and output formats.
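
For a taste of how that modularity works in practice, here is a
simplified sketch of the rating-pipeline plugin loading via stevedore
(the entry-point namespace and the process() method name are
illustrative; see the source tree for the real ones):

    from stevedore import extension

    # Discover every installed rating module registered under the
    # entry-point namespace, instantiating each plugin as it is loaded.
    manager = extension.ExtensionManager(
        namespace='cloudkitty.rating.processors',
        invoke_on_load=True)

    def rate(usage_data):
        # Let each enabled rating module price the collected usage in turn.
        for ext in manager:
            usage_data = ext.obj.process(usage_data)
        return usage_data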


A roadmap is available on the project wiki page (the link is at the end 
of this email), but we are clearly hoping to have some feedback and 
ideas on how to improve the project and reach a tighter integration with 
OpenStack.


The project source code is available at 
http://github.com/stackforge/cloudkitty
More components will be available on stackforge as soon as their reviews
get validated, like python-cloudkittyclient and cloudkitty-dashboard, so
stay tuned.


The project's wiki page (https://wiki.openstack.org/wiki/CloudKitty) 
provides more information, and you can reach us via irc on freenode: 
#cloudkitty. Developer's documentation is on its way to readthedocs 
too.


We plan to present CloudKitty in detail during the Paris Summit, but we 
would love to hear from you sooner...


Cheers,

 Christophe and Objectif Libre


Christophe Sauthier                  Mail : christophe.sauth...@objectif-libre.com
CEO & Fondateur                      Mob : +33 (0) 6 16 98 63 96
Objectif Libre                       URL : www.objectif-libre.com
Infrastructure et Formations Linux   Twitter : @objectiflibre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2014-08-13 02:54:58 -0700:
 Rochelle.RochelleGrober wrote:
  [...]
  So, with all that prologue, here is what I propose (and please consider 
  proposing your improvements/changes to it).  I would like to see for Kilo:
  
  - IRC meetings and mailing list meetings beginning with Juno release and 
  continuing through the summit that focus on core project needs (what 
  Thierry call strategic) that as a set would be considered the primary 
  focus of the Kilo release for each project.  This could include high 
  priority bugs, refactoring projects, small improvement projects, high 
  interest extensions and new features, specs that didn't make it into Juno, 
  etc.
  - Develop the list and prioritize it into Needs and Wants. Consider 
  these the feeder projects for the two runways if you like.  
  - Discuss the lists.  Maybe have a community vote? The vote will freeze 
  the list, but as in most development project freezes, it can be a soft 
  freeze that the core, or drivers or TC can amend (or throw out for that 
  matter).
  [...]
 
 One thing we've been unable to do so far is to set release goals at
 the beginning of a release cycle and stick to those. It used to be
 because we were so fast moving that new awesome stuff was proposed
 mid-cycle and ended up being a key feature (sometimes THE key feature)
 for the project. Now it's because there is so much proposed no one knows
 what will actually get completed.
 
 So while I agree that what you propose is the ultimate solution (and the
 workflow I've pushed PTLs to follow every single OpenStack release so
 far), we have struggled to have the visibility, long-term thinking and
 discipline to stick to it in the past. If you look at the post-summit
 plans and compare to what we end up in a release, you'll see quite a lot
 of differences :)
 

I think that shows agility, and isn't actually a problem. 6 months
is quite a long time in the future for some business models. Strategic
improvements for the project should be able to stick to a 6 month
schedule, but companies will likely be tactical about where their
developer resources are directed for feature work.

The fact that those resources land code upstream is one of the greatest
strengths of OpenStack. Any potential impact on how that happens should
be carefully considered when making any changes to process and
governance.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Hoban, Adrian
 On Mon, Aug 11, 2014 at 10:30:12PM -0700, Joe Gordon wrote:
  On Fri, Aug 8, 2014 at 6:58 AM, Kyle Mestery mest...@mestery.com
 wrote:
    I really like this idea; as Michael and others alluded to above, we are
    attempting to set cycle goals for Kilo in Nova, but I think it is
    worth doing for all of OpenStack. We would like to make a list of
    key goals before the summit so that we can plan our summit sessions
    around the goals. On a really high level, one way to look at this is:
    in Kilo we need to pay down our technical debt.
   
    The slots/runway idea is somewhat separate from defining key cycle
    goals; we can approve blueprints based on key cycle goals without doing
    slots. But with so many concurrent blueprints up for review at any
    given time, the review teams are doing a lot of multitasking, and
    humans are not very good at multitasking. Hopefully slots can help
    address this issue, and hopefully allow us to actually merge more
    blueprints in a given cycle.
   
   I'm not 100% sold on what the slots idea buys us. What I've seen
   this cycle in Neutron is that we have a LOT of BPs proposed. We
   approve them after review. And then we hit one of two issues: Slow
   review cycles, and slow code turnaround issues. I don't think slots
   would help this, and in fact may cause more issues. If we approve a
   BP and give it a slot for which the eventual result is slow review
   and/or code review turnaround, we're right back where we started.
   Even worse, we may have not picked a BP for which the code submitter
   would have turned around reviews faster. So we've now doubly hurt
   ourselves. I have no idea how to solve this issue, but by over
   subscribing the slots (e.g. over approving), we allow for the
   submissions with faster turnaround a chance to merge quicker. With
   slots, we've removed this capability by limiting what is even
   allowed to be considered for review.
  
 
  Slow review: by limiting the number of blueprints up we hope to focus
  our efforts on fewer concurrent things. Slow code turnaround: when a
  blueprint is given a slot (runway) we will first make sure the
  author/owner is available for fast code turnaround.
 
  If a blueprint review stalls out (slow code turnaround, stalemate in
  review discussions etc.) we will take the slot and give it to another
 blueprint.

  On Wed, Aug 13, 2014 Daniel Berrange wrote:
 This idea of fixed slots is not really very appealing to me. It sounds like
 we're adding a significant amount of bureaucratic overhead to our development
 process that is going to make us increasingly inefficient.
 I don't want to waste time waiting for a stalled blueprint to time out before
 we give the slot to another blueprint. On any given day when I have spare
 review time available I'll just review anything that is up and waiting for
 review. If we can set a priority for the things up for review, that is great,
 since I can look at those first, but the idea of having fixed slots for things
 we should review does not do anything to help my review efficiency IMHO.
 
 I also think it will kill our flexibility in approving and dealing with
 changes that are not strategically important, but nonetheless go through our
 blueprint/specs process. There have been a bunch of things I've dealt with
 that are not strategic, but have low overhead to code and review and are
 easily dealt with in the slack time between looking at the high priority
 reviews. It sounds like we're going to lose our flexibility to pull in stuff
 like this if it only gets a chance when strategically important stuff is not
 occupying a slot.
 
 Regards,
 Daniel
 --

I am also not in favour of this fixed slots approach because of the potential 
lack of flexibility and overhead that could be introduced in the process. 

There has been lots of great mailing list traffic over the last month about 
blueprint spec freeze deadlines, exceptions, review priorities, inter-project 
dependencies on approvals, etc. We had a brief discussion in the NFV working 
group [1] and this is a really creative thread on how we can address some of 
the challenges in getting a proposal from concept through to blueprint 
acceptance and code integration.  I think some of the difficulty in converging 
on a proposal in this thread stems from the number of problem statements that 
are being addressed simultaneously. 

In no particular order and not an exhaustive list, here are some of the 
challenges that I've seen mentioned on this thread and others so far:
- There is an imbalance between strategic and tactical submissions.
- There is growing technical debt and lack of clarity on how that should be 
dealt with.
- There is inconsistency, and in some cases a lack of clarity, in how the 
entire lifecycle of a new proposal is dealt with within projects and across 
projects. E.g. What the various checkpoints on the lifecycle of a new proposal 
mean. What does 

Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-13 Thread Mike Wilson
Lee,

No problem about mixing up the Mike's, there's a bunch of us out there :-).
What are you are describing here is very much like a spec I wrote for
Nova[1] a couple months ago and then never got back to. At the time I
considered gearing the feature toward oslo.db and I can't remember exactly
why I didn't. I think it probably had more to do with having folks that are
familiar with the problem reviewing code in Nova than anything else.
Anyway, I'd like to revisit this in Kilo or if you see a nice way to
integrate this into oslo.db I'd love to see your proposal.

-Mike

[1] https://review.openstack.org/#/c/93466/


On Sun, Aug 10, 2014 at 10:30 PM, Li Ma skywalker.n...@gmail.com wrote:

  not sure if I said that :).  I know extremely little about galera.

 Hi Mike Bayer, I'm so sorry I mistook you for Mike Wilson in the last
 post. :-) Also, my apologies to Mike Wilson.

  I’d totally guess that Galera would need to first have SELECTs come from
 a slave node, then the moment it sees any kind of DML / writing, it
 transparently switches the rest of the transaction over to a writer node.

 You are totally right.

 
  @transaction.writer
  def read_and_write_something(arg1, arg2, …):
  # …
 
  @transaction.reader
  def only_read_something(arg1, arg2, …):
  # …

 The first approach that I had in mind is the decorator-based method to
 separate read/write ops, like what you said. To some degree, it is almost
 the same app-level approach as the master/slave configuration, due to its
 transparency to developers. However, as I stated before, the current
 approach is rarely used in OpenStack. A decorator is more friendly than
 use_slave_flag or something like that. Even if ideal transparency cannot be
 achieved, decorator-based app-level switching is, to say the least, a great
 improvement compared with the current implementation.
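 To make this concrete, here is a minimal self-contained sketch of such
 decorators (SQLite URLs stand in for real master/slave connections, and
 none of the names below are existing oslo.db API - they are just for
 illustration):

     import functools
     import threading

     from sqlalchemy import create_engine, text
     from sqlalchemy.orm import sessionmaker

     # Placeholder engines; a real deployment would point these at the
     # master_connection and slave_connection URLs from the config.
     ENGINES = {
         'writer': create_engine('sqlite:///master.db'),
         'reader': create_engine('sqlite:///replica.db'),
     }
     SESSIONS = {role: sessionmaker(bind=engine)
                 for role, engine in ENGINES.items()}
     _local = threading.local()

     def _routed(role):
         def decorator(func):
             @functools.wraps(func)
             def wrapper(*args, **kwargs):
                 _local.session = SESSIONS[role]()
                 try:
                     return func(*args, **kwargs)
                 finally:
                     _local.session.close()
             return wrapper
         return decorator

     writer = _routed('writer')
     reader = _routed('reader')

     @writer
     def write_something():
         # DML is routed to the master engine.
         _local.session.execute(text('SELECT 1'))

     @reader
     def read_something():
         # Read-only work is routed to a slave engine.
         return _local.session.execute(text('SELECT 1')).scalar()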

  OK so Galera would perhaps have some way to make this happen, and that's
 great.

 If any Galera expert is here, please correct me. At least in my experiment,
 transactions work in that way.

  this (the word “integrate”, and what does that mean) is really the only
 thing making me nervous.

 Mike, just feel free. What I'd like to do is to add a django-style routing
 method as a plus in oslo.db, like:

 [database]
 # Original master/slave configuration
 master_connection =
 slave_connection =

 # Only Support Synchronous Replication
 enable_auto_routing = True

 [db_cluster]
 master_connection =
 master_connection =
 ...
 slave_connection =
 slave_connection =
 ...
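 For illustration, enable_auto_routing could be backed by a routing Session
 along the lines of the custom vertical partitioning recipe in the
 SQLAlchemy docs (engine URLs are placeholders; this is a sketch, not a
 worked-out oslo.db design):

     import random

     from sqlalchemy import create_engine
     from sqlalchemy.orm import Session, sessionmaker

     engines = {
         'master': create_engine('sqlite:///master.db'),
         'slave1': create_engine('sqlite:///replica1.db'),
         'slave2': create_engine('sqlite:///replica2.db'),
     }

     class RoutingSession(Session):
         def get_bind(self, mapper=None, clause=None):
             # Writes (anything during a flush) go to the master; plain
             # reads are spread over the slaves. Only safe when the
             # replication is synchronous, as discussed above.
             if self._flushing:
                 return engines['master']
             return engines[random.choice(['slave1', 'slave2'])]

     RoutedSession = sessionmaker(class_=RoutingSession)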

 HOWEVER, I think it needs more investigation, which is why I'd like to
 bring it to the mailing list at this early stage to raise some in-depth
 discussion. I'm not a Galera expert. I really appreciate any challenges here.

 Thanks,
 Li Ma


 - Original Message -
 From: Mike Bayer mba...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: 星期日, 2014年 8 月 10日 下午 11:57:47
 Subject: Re: [openstack-dev] [oslo.db]A proposal for DB read/write
 separation


 On Aug 10, 2014, at 11:17 AM, Li Ma skywalker.n...@gmail.com wrote:

 
  How about Galera multi-master cluster? As Mike Bayer said, it is
 virtually synchronous by default. It is still possible that outdated rows
 are queried that make results not stable.

 not sure if I said that :).  I know extremely little about galera.


 
 
  Let's move forward to synchronous replication, like Galera with
 causal-reads on. The dominant advantage is that it has consistent
 relational dataset support. The disadvantages are that it uses optimistic
 locking and its performance sucks (also said by Mike Bayer :-). For
 optimistic locking problem, I think it can be dealt with by
 retry-on-deadlock. It's not the topic here.

 I *really* don’t think I said that, because I like optimistic locking, and
 I’ve never used Galera ;).

 Where I am ignorant here is of what exactly occurs if you write some rows
 within a transaction with Galera, then do some reads in that same
 transaction.   I’d totally guess that Galera would need to first have
 SELECTs come from a slave node, then the moment it sees any kind of DML /
 writing, it transparently switches the rest of the transaction over to a
 writer node.   No idea, but it has to be something like that?


 
 
  So, the transparent read/write separation is dependent on such an
 environment. SQLalchemy tutorial provides code sample for it [1]. Besides,
 Mike Bayer also provides a blog post for it [2].

 So this thing with the “django-style routers”, the way that example is, it
 actually would work poorly with a Session that is not in “autocommit” mode,
 assuming you’re working with regular old databases that are doing some
 simple behind-the-scenes replication.   Because again, if you do a flush,
 those rows go to the master; if the transaction is still open, then reading
 from the slaves you won’t see the rows you just inserted. So in reality,
 that example is kind of crappy, if 

[openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Kyle Mestery
Per this week's Neutron meeting [1], it was decided that offering a
rotating meeting slot for the weekly Neutron meeting would be a good
thing. This will allow for a much easier time for people in
Asia/Pacific timezones, as well as for people in Europe.

So, I'd like to propose we rotate the weekly as follows:

Monday 2100UTC
Tuesday 1400UTC

If people are ok with these time slots, I'll set this up and we'll
likely start with this new schedule in September, after the FPF.

Thanks!
Kyle

[1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Canceling today's parity meeting

2014-08-13 Thread Kyle Mestery
Sorry for the short notice, but lets cancel today's parity meeting.
We're still circling the wagons around the migration story at this
point, so hopefully next week we'll have more to discuss there.

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Paul Michali (pcm)
+1

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Aug 13, 2014, at 10:05 AM, Kyle Mestery mest...@mestery.com wrote:

 Per this week's Neutron meeting [1], it was decided that offering a
 rotating meeting slot for the weekly Neutron meeting would be a good
 thing. This will allow for a much easier time for people in
 Asia/Pacific timezones, as well as for people in Europe.
 
 So, I'd like to propose we rotate the weekly as follows:
 
 Monday 2100UTC
 Tuesday 1400UTC
 
 If people are ok with these time slots, I'll set this up and we'll
 likely start with this new schedule in September, after the FPF.
 
 Thanks!
 Kyle
 
 [1] 
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Ian Wells
On 13 August 2014 06:01, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Aug 13, 2014 at 5:15 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
  This idea of fixed slots is not really very appealing to me. It sounds
  like we're adding a significant amount of buerocratic overhead to our
  development process that is going to make us increasingly inefficient.
  I don't want to waste time wating for a stalled blueprint to time out
  before we give the slot to another blueprint.
 
 I agree with all of Daniel's comments here, and these are the same
 reason I'm not in favor of fixed slots or runways. As ttx has
 stated in this thread, we have done a really poor job as a project of
 understanding what are the priority items for a release, and sticking
 to those. Trying to solve that to put focus on the priority items,
 while allowing for smaller, low-overhead code and reviews should be
 the priority here.


It seems to me that we're addressing the symptom and not the cause of the
problem.  We've set ourselves up as more of a cathedral and less of a
bazaar in one important respect: core reviewers are inevitably going to be
a bottleneck.  The slots proposal is simply saying 'we can't think of a way
of scaling beyond what we have, and so let's restrict the inflow of changes to
a manageable level' - it doesn't increase capacity at all, it simply
improves the efficiency of using the current capacity and leaves us with a
hard limit that's fractionally higher than we're currently managing - but
we still have a capacity ceiling.

In Linux, to take another large project with significant feature velocity,
there's a degree of decentralisation.  The ultimate cores review code, but
getting code in depends more on a wider network of trusted associates.  We
don't have the same setup: even *proposed* changes have to be reviewed by
two cores before it's necessarily worth writing anything to make the change
in question.  Everything goes through Gerrit, which is one, centralised,
location for everyone to put in their code.

I have no great answer to this, but is there a way - perhaps via team
sponsorship from cores to ensure that the general direction is right, and
cloned repositories for purpose-specific changes, as one example - that we
can get an audience of people to check, try and test proposed changes long
before they need reviewing for final inclusion?
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Minesweeper behaving badly

2014-08-13 Thread Jeremy Stanley
On 2014-08-13 02:40:28 +0200 (+0200), Salvatore Orlando wrote:
[...]
 Finally, I have noticed the old grammar is still being used by
 other 3rd party CI. I do not have a list of them, but if you run a
 3rd party CI, and this is completely new to you then probably you
 should look at the syntax for issuing recheck commands.
[...]

I don't think there's any consensus yet (see also [1][2][3]) that
being able to rerun all CI systems, upstream and third-party, from a
single comment is undesirable. Rather, what should really be
avoided is running scripts which leave comments on dozens or
hundreds of reviews (as happened in this case) solely for the
purpose of retriggering jobs. If you need to rerun jobs on *your* CI
system (I'm speaking generally to all operators here, not just the
one mentioned in the subject line) because of some broad issue, do
so from within your system rather than trying to trigger it by
leaving unnecessary review comments on lots of changes.

[1] https://launchpad.net/bug/1355480
[2] https://review.openstack.org/109565
[3] http://lists.openstack.org/pipermail/openstack-infra/2014-August/001681.html

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Gary Kotton
Huge +1

On 8/13/14, 5:19 PM, Paul Michali (pcm) p...@cisco.com wrote:

+1

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Aug 13, 2014, at 10:05 AM, Kyle Mestery mest...@mestery.com wrote:

 Per this week's Neutron meeting [1], it was decided that offering a
 rotating meeting slot for the weekly Neutron meeting would be a good
 thing. This will allow for a much easier time for people in
 Asia/Pacific timezones, as well as for people in Europe.
 
 So, I'd like to propose we rotate the weekly as follows:
 
 Monday 2100UTC
 Tuesday 1400UTC
 
 If people are ok with these time slots, I'll set this up and we'll
 likely start with this new schedule in September, after the FPF.
 
 Thanks!
 Kyle
 
 [1] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Minesweeper behaving badly

2014-08-13 Thread Jeremy Stanley
On 2014-08-13 02:40:28 +0200 (+0200), Salvatore Orlando wrote:
[...]
 The problem has now been fixed, and once the account is
 re-enabled, rechecks should be issued with the command
 vmware-recheck.
[...]

Oh, I meant to add that I've reenabled the account now. Thanks for
the rapid response when it was brought to your attention!
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Assaf Muller
This is fantastic, thank you.

- Original Message -
 +1
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 On Aug 13, 2014, at 10:05 AM, Kyle Mestery mest...@mestery.com wrote:
 
  Per this week's Neutron meeting [1], it was decided that offering a
  rotating meeting slot for the weekly Neutron meeting would be a good
  thing. This will allow for a much easier time for people in
  Asia/Pacific timezones, as well as for people in Europe.
  
  So, I'd like to propose we rotate the weekly as follows:
  
  Monday 2100UTC
  Tuesday 1400UTC
  
  If people are ok with these time slots, I'll set this up and we'll
  likely start with this new schedule in September, after the FPF.
  
  Thanks!
  Kyle
  
  [1]
  http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread trinath.soman...@freescale.com
+1  and Like

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

-Original Message-
From: Gary Kotton [mailto:gkot...@vmware.com] 
Sent: Wednesday, August 13, 2014 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

Huge +1

On 8/13/14, 5:19 PM, Paul Michali (pcm) p...@cisco.com wrote:

+1

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Aug 13, 2014, at 10:05 AM, Kyle Mestery mest...@mestery.com wrote:

 Per this week's Neutron meeting [1], it was decided that offering a 
 rotating meeting slot for the weekly Neutron meeting would be a good 
 thing. This will allow for a much easier time for people in 
 Asia/Pacific timezones, as well as for people in Europe.
 
 So, I'd like to propose we rotate the weekly as follows:
 
 Monday 2100UTC
 Tuesday 1400UTC
 
 If people are ok with these time slots, I'll set this up and we'll 
 likely start with this new schedule in September, after the FPF.
 
 Thanks!
 Kyle
 
 [1]
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread mar...@redhat.com
On 13/08/14 17:05, Kyle Mestery wrote:
 Per this week's Neutron meeting [1], it was decided that offering a
 rotating meeting slot for the weekly Neutron meeting would be a good
 thing. This will allow for a much easier time for people in
 Asia/Pacific timezones, as well as for people in Europe.
 
 So, I'd like to propose we rotate the weekly as follows:
 
 Monday 2100UTC
 Tuesday 1400UTC


 HUGE +1 and thanks!


 
 If people are ok with these time slots, I'll set this up and we'll
 likely start with this new schedule in September, after the FPF.
 
 Thanks!
 Kyle
 
 [1] 
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Retrigger turbo-hipster

2014-08-13 Thread John Warren
* we had replied to your email to rcbau, let me know if you didn't
receive that, in case there is something wrong with our emails


Rackspace Australia

Thanks for your reply.  I do not seem to have received your reply to  
my original message.  Not sure what happened.  I'll admit that it is  
possible that I inadvertently deleted it.


Thanks,

John



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-sdk-php] Meeting canceled

2014-08-13 Thread Matthew Farina
The meeting for today is canceled. Sorry for the short notice.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-13 Thread Mark McLoughlin
On Wed, 2014-08-13 at 10:26 +0100, Daniel P. Berrange wrote:
 On Tue, Aug 12, 2014 at 10:09:52PM +0100, Mark McLoughlin wrote:
  On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
   On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
 While forcing people to move to a newer version of libvirt is
 doable on most environments, do we want to do that now? What is
 the benefit of doing so?
[...]

The only dog I have in this fight is that using the split-out
libvirt-python on PyPI means we finally get to run Nova unit tests
in virtualenvs which aren't built with system-site-packages enabled.
It's been a long-running headache which I'd like to see eradicated
everywhere we can. I understand though if we have to go about it
more slowly, I'm just excited to see it finally within our grasp.
-- 
Jeremy Stanley
   
   We aren't quite forcing people to move to newer versions. Only those
   installing nova test-requirements need newer libvirt.
  
  Yeah, I'm a bit confused about the problem here. Is it that people want
  to satisfy test-requirements through packages rather than using a
  virtualenv?
  
  (i.e. if people just use virtualenvs for unit tests, there's no problem
  right?)
  
  If so, is it possible/easy to create new, alternate packages of the
  libvirt python bindings (from PyPI) on their own separately from the
  libvirt.so and libvirtd packages?
 
 The libvirt python API is (mostly) automatically generated from a
 description of the XML that is built from the C source files. In
 tree we have fakelibvirt which is a semi-crappy attempt to provide
 a pure python libvirt client API with the same signature. IIUC, what
 you are saying is that we should get a better fakelibvirt that is
 truly identical, with the same API coverage/signatures as real libvirt?

No, I'm saying that people are installing packaged versions of recent
releases of python libraries. But they're skeptical about upgrading
their libvirt packages. With the work done to enable libvirt to be uploaded
to PyPI, can't the two be decoupled? Can't we have packaged versions of
the recent python bindings on PyPI that are independent of the base
packages containing libvirt.so and libvirtd?

(Or I could be completely misunderstanding the issue people are seeing)

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Minesweeper behaving badly

2014-08-13 Thread Ryan Hsu
Thanks very much Jeremy for getting us back up and running again. Without a 
doubt, we will be triggering directly from the CI system itself next time the 
need to do a mass recheck arises. The lesson was learned seconds after the 
script was run!

Again, my sincere apologies for all the havoc that was caused yesterday.

Regards,
Ryan

On Aug 13, 2014, at 7:38 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-08-13 02:40:28 +0200 (+0200), Salvatore Orlando wrote:
 [...]
 The problem has now been fixed, and once the account is
 re-enabled, rechecks should be issued with the command
 vmware-recheck.
 [...]
 
 Oh, I meant to add that I've reenabled the account now. Thanks for
 the rapid response when it was brought to your attention!
 -- 
 Jeremy Stanley
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Which changes need accompanying bugs?

2014-08-13 Thread Armando M.
I am gonna add more color to this story by posting my replies on review [1]:

Hi Angus,

You touched on a number of points. Let me try to give you an answer to all
of them.

 (I'll create a bug report too. I still haven't worked out which class of
changes need an accompanying bug report and which don't.)

The long story can be read below:

https://wiki.openstack.org/wiki/BugFilingRecommendations

https://wiki.openstack.org/wiki/GitCommitMessages

IMO, there's a grey area for some of the issues you found, but when I am
faced with a bug, I tend to ask myself: would a bug report be useful to
someone else? The author of the code? A consumer of the code? Not everyone
follows the code review system all the time, whereas Launchpad is pretty
much the tool everyone uses to stay abreast of the OpenStack release
cycle. Obviously if you're fixing a grammar nit, or filing a cosmetic
change that has no functional impact, then the lack of a bug report is
warranted, but in this case you're fixing a genuine error: let's say we
want to backport this to icehouse, how else would we make the release
manager aware of that? He/she is looking at Launchpad.

 I can add a unittest for this particular code path, but it would only
check this particular short segment of code, would need to be maintained as
the code changes, and wouldn't catch another occurrence somewhere else.
This seems an unsatisfying return on the additional work :(

You're looking at this from the wrong perspective. This is not about
ensuring that other code paths are valid, but that this code path stays
valid over time, ensuring that the code path is exercised and that no other
regression of any kind creeps in. The reason why this error was introduced
in the first place is that the code wasn't tested when it should have been.
Without that, this mechanical effort of fixing errors by static
analysis is kind of ineffective, which leads me to my last point.

 I actually found this via static analysis using pylint - and my question
is: should I create some sort of pylint unittest that tries to catch this
class of problem across the entire codebase? [...]

I value what you're doing; however, I would see even more value if we
prevented these types of errors from occurring in the first place via
automation. You run pylint today, but what about tomorrow, or a week from
now? Are you gonna be filing pylint fixes forever? We might be better off
automating the check and catching these types of errors before they land in
the tree. This means that the work you are doing is two-pronged: a)
automate the detection of some failures by hooking this into tox.ini via
HACKING/pep8 or an equivalent mechanism and b) file all the fixes required
for these validation tests to pass; then c) everyone is happy, or at least
they should be.
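As an illustration, a hacking/flake8-style check wired into the pep8 job is
only a few lines; everything below (the check, the N999 code, the regex) is
a made-up example rather than an existing Neutron check:

    import re

    # Catches e.g. LOG.debug('...' % foo), a typical pylint-style finding.
    log_string_interpolation = re.compile(r"LOG\.\w+\(.*['\"]\s*%")

    def check_delayed_string_interpolation(logical_line):
        """N999 - use LOG.debug('...', foo), not LOG.debug('...' % foo)."""
        if log_string_interpolation.search(logical_line):
            yield (0, 'N999: delayed string interpolation should be used '
                      'for log messages')

    def factory(register):
        # Wired up through the [hacking] local-check-factory hook in
        # tox.ini so the check runs on every proposed change.
        register(check_delayed_string_interpolation)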

I'd welcome exploring a better strategy to ensure a better quality of the
code base; without some degree of automation, nothing will stop these
conversations from happening again.

Cheers,

Armando

[1] https://review.openstack.org/#/c/113777/


On 13 August 2014 03:02, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 On 13/08/14 09:28, Angus Lees wrote:
  I'm doing various small cleanup changes as I explore the neutron
  codebase. Some of these cleanups are to fix actual bugs discovered
  in the code.  Almost all of them are tiny and obviously correct.
 
  A recurring reviewer comment is that the change should have had an
   accompanying bug report and that they would rather that change was
  not submitted without one (or at least, they've -1'ed my change).
 
  I often didn't discover these issues by encountering an actual
  production issue so I'm unsure what to include in the bug report
  other than basically a copy of the change description.  I also
  haven't worked out the pattern yet of which changes should have a
  bug and which don't need one.
 
  There's a section describing blueprints in NeutronDevelopment but
  nothing on bugs.  It would be great if someone who understands the
  nuances here could add some words on when to file bugs: Which type
  of changes should have accompanying bug reports? What is the
  purpose of that bug, and what should it contain?
 

 It was discussed before at:
 http://lists.openstack.org/pipermail/openstack-dev/2014-May/035789.html

 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

 iQEcBAEBCgAGBQJT6zfOAAoJEC5aWaUY1u570wQIAMpoXIK/p5invp+GW0aMMUK0
 C/MR6WIJ83e6e2tOVUrxheK6bncVvidOI4EWGW1xzP1sg9q+8Hs1TNyKHXhJAb+I
 c435MMHWsDwj6p1OeDxHnSOVMthcGH96sgRa1+CIk6+oktDF3IMmiOPRkxdpqWCZ
 7TkV75mryehrTNwAkVPfpWG3OhWO44d5lLnJFCIMCuOw2NHzyLIOoGQAlWNQpy4V
 a869s00WO37GEed6A5Zizc9K/05/6kpDIQVim37tw91JcZ69VelUlZ1THx+RTd33
 92r87APm3fC/LioKN3fq1UUo2c94Vzl3gYPFVl8ZateQNMKB7ONMBePOfWR9H1k=
 =wCJQ
 -END PGP SIGNATURE-

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Fawad Khaliq
like it! +1

Fawad Khaliq


On Wed, Aug 13, 2014 at 7:58 AM, mar...@redhat.com mandr...@redhat.com
wrote:

 On 13/08/14 17:05, Kyle Mestery wrote:
  Per this week's Neutron meeting [1], it was decided that offering a
  rotating meeting slot for the weekly Neutron meeting would be a good
  thing. This will allow for a much easier time for people in
  Asia/Pacific timezones, as well as for people in Europe.
 
  So, I'd like to propose we rotate the weekly as follows:
 
  Monday 2100UTC
  Tuesday 1400UTC


  HUGE +1 and thanks!


 
  If people are ok with these time slots, I'll set this up and we'll
  likely start with this new schedule in September, after the FPF.
 
  Thanks!
  Kyle
 
  [1]
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 04:24:57PM +0100, Mark McLoughlin wrote:
 On Wed, 2014-08-13 at 10:26 +0100, Daniel P. Berrange wrote:
  On Tue, Aug 12, 2014 at 10:09:52PM +0100, Mark McLoughlin wrote:
   On Wed, 2014-07-30 at 15:34 -0700, Clark Boylan wrote:
On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:
 On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:
  While forcing people to move to a newer version of libvirt is
  doable on most environments, do we want to do that now? What is
  the benefit of doing so?
 [...]
 
 The only dog I have in this fight is that using the split-out
 libvirt-python on PyPI means we finally get to run Nova unit tests
 in virtualenvs which aren't built with system-site-packages enabled.
 It's been a long-running headache which I'd like to see eradicated
 everywhere we can. I understand though if we have to go about it
 more slowly, I'm just excited to see it finally within our grasp.
 -- 
 Jeremy Stanley

We aren't quite forcing people to move to newer versions. Only those
installing nova test-requirements need newer libvirt.
   
   Yeah, I'm a bit confused about the problem here. Is it that people want
   to satisfy test-requirements through packages rather than using a
   virtualenv?
   
   (i.e. if people just use virtualenvs for unit tests, there's no problem
   right?)
   
   If so, is it possible/easy to create new, alternate packages of the
   libvirt python bindings (from PyPI) on their own separately from the
   libvirt.so and libvirtd packages?
  
  The libvirt python API is (mostly) automatically generated from a
  description of the XML that is built from the C source files. In
  tree we have fakelibvirt which is a semi-crappy attempt to provide
  a pure python libvirt client API with the same signature. IIUC, what
  you are saying is that we should get a better fakelibvirt that is
  truly identical, with the same API coverage/signatures as real libvirt?
 
 No, I'm saying that people are installing packaged versions of recent
 releases of python libraries. But they're skeptical about upgrading
 their libvirt packages. With the work done to enable libvirt to be uploaded
 to PyPI, can't the two be decoupled? Can't we have packaged versions of
 the recent python bindings on PyPI that are independent of the base
 packages containing libvirt.so and libvirtd?

It is already de-coupled - the libvirt-python module up on PyPI is capable
of building against any libvirt.so C library version 0.9.11 - $CURRENT.

The problem with Ubuntu precise is that it is C library version 0.9.6
which we can't build against because that vintage libvirt never
installed the libvirt-api.xml file from which we auto-generate the
python code.
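As a quick sanity check once the module has built, the binding can report
which C library version it is driving - a small sketch, assuming
libvirt-python is importable:

    import libvirt

    # virGetVersion() encodes the version as
    # major * 1000000 + minor * 1000 + micro.
    version = libvirt.getVersion()
    major, rest = divmod(version, 1000000)
    minor, micro = divmod(rest, 1000)
    print('libvirt C library: %d.%d.%d' % (major, minor, micro))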

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Aug 13, 2014

2014-08-13 Thread Anne Gentle
__In review and merged this past week__

We're cleaning up the Architecture Design Guide continually and I got my
proof copy yesterday. The green cover is lovely as part of the set. The
interior PDF is built from today's master branch, and you can get those
print copies rolling!

The landing page is now available at http://docs.openstack.org/arch/ and
you can order a printed copy for yourself. If anyone wants to place an
order for more than 50 copies of any of our books, contact me and I can get
them to you for half price. Go get a dead tree copy today at
http://www.lulu.com/content/15006967!

The common glossary work continues as well, see
http://specs.openstack.org/openstack/docs-specs/specs/juno/common-glossary-setup.html
for the spec.

We have review in progress for a templated set of landing pages so the HTML
can be generated more consistently and we don't have to write HTML landing
pages by hand. Thank you Christian! Review at
https://review.openstack.org/#/c/112239/

__High priority doc work__

Thanks to all who participated in the networking guide swarm last week!
It still has a few patches ready for review, and there's also a new spec for
that guide. Let's ensure we're all on the same page (ha!) by reviewing the
patch in docs-specs: https://review.openstack.org/#/c/113597/. The next
plans for the networking guide are to get the spec finished and then to
fill in the outline with tested sections. Shaun is handling the spec and
Nick Chase is coordinating.

__Ongoing doc work__

I've completed the WADL updates that enable removal of WADL from the
book-like deliverables, the API References. Now I need to get the API
References themselves merged.

Next, I'm working on a migration from docbook to RST for the API long-form
documents, which are found here:
http://docs.openstack.org/api/api-specs.html. This work is part of a
blueprint to move API specs to project repositories. [1]

What I'm hearing from Dolph Matthews is that those belong in the
project-specs repos, so I'll start there with proposals. I like this
approach for a couple of reasons: 1) it still has publishing available but
2) sets expectations that those are design specs. PTLs, I'll be in touch to
let you know whether you have a document that is affected. So far it's:
Block Storage v2
Compute API v2
Identity API v2.0
Networking API v2.0
Object Storage API v1

The API Reference page is the user-oriented deliverable for APIs:
http://developer.openstack.org/api-ref.html which is still sourced from
WADL in the openstack/api-site repository. I'm investigating replacements
and welcome collaborators.

__New incoming doc requests__

None, our focus is on the Networking Guide, API doc work, the Architecture
Guide readiness, and always the backlog of doc bugs and DocImpact.

__Doc tools updates__

We'll use a static site generator for the content in the
openstack-manuals/www directory that builds our landing pages. It's Jinja2 (
http://jinja.pocoo.org/, already listed in the global requirements).
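To give you an idea, rendering a landing page with Jinja2 boils down to a
few lines like these (the template path and variables are invented for
illustration):

    from jinja2 import Environment, FileSystemLoader

    env = Environment(loader=FileSystemLoader('www/templates'))
    template = env.get_template('index.html')
    html = template.render(title='OpenStack Documentation',
                           releases=['havana', 'icehouse', 'juno'])
    print(html)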

__Other doc news__

We're holding a Doc Bug Day Tuesday September 9th. Please join in during
your timezone or work all 24 hours!
https://wiki.openstack.org/wiki/Documentation/BugDay

1.
https://wiki.openstack.org/w/index.php?title=Blueprint-os-api-docs#Goal_4_-_Move_API_Specs_to_project_repositories_and_off_docs_landing_page
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [nova][core] Expectations of core reviewers

2014-08-13 Thread Maru Newby

On Aug 13, 2014, at 2:57 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.
 
 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there are increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. It's the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.
 
 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like core's to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.
 
 Personally I'm going to find it really hard to justify long distance
 travel 4 times a year for OpenStack for personal / family reasons,
 let alone company cost. I couldn't attend Icehouse mid-cycle because
 I just had too much travel in a short time to be able to do another
 week long trip away from family. I couldn't attend Juno mid-cycle
 because it clashed with a personal holiday. There are other opensource
 related conferences that I also have to attend (LinuxCon, FOSDEM,
 KVM Forum, etc), so doubling the expected number of openstack
 conferences from 2 to 4 is really very undesirable from my POV.
 I might be able to attend the occasional mid-cycle meetup if the
 location was convenient, but in general I don't see myself being
 able to attend them regularly.
 
 I tend to view the fact that we're emphasising the need of in-person
 meetups to be somewhat of an indication of failure of our community
 operation. The majority of open source projects work very effectively
 with far less face-to-face time. OpenStack is fortunate that companies
 are currently willing to spend 6/7-figure sums flying 1000's of
 developers around the world many times a year, but I don't see that
 lasting forever so I'm concerned about baking the idea of f2f midcycle
 meetups into our way of life even more strongly.

I was fortunate to attend both the Nova and Neutron mid-cycles last month, and 
I can attest to how productive these gatherings were.  Discussion moved quickly 
and misunderstandings were rapidly resolved.  Informal ('water-cooler') 
conversation led to many interactions that might not otherwise have occurred.  
Given your attendance of summit and other open source conferences, though, I'm 
assuming the value of f2f is not in question.

Nothing good is ever free.  The financial cost and exclusionary nature of an 
in-person meetup should definitely be weighed against the opportunity for 
focused and high-bandwidth communication.  It's clear to myself and other 
attendees just how valuable the recent mid-cycles were in terms of making 
technical decisions and building the relationships to support their 
implementation.  Maybe it isn't sustainable over the long-term to meet so 
often, but I don't think that should preclude us from deriving benefit in the 
short-term.  I also don't think we should ignore the opportunity for more 
effective decision-making on the grounds that not everyone can directly 
participate.  Not everyone is able to attend summit, but it is nonetheless a 
critical part of our community's decision-making process.  The topic lists for 
a mid-cycle are published beforehand, just like summit, to allow non-attendees 
the chance to present their views in advance and/or designate one or more 
attendees to advocate on their behalf.  It's not perfect, but the
alternative - not holding mid-cycles - would seem to be a case of throwing
out the baby with the bathwater.


Maru

 
 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.
 
 Travel funding is certainly an issue, but I'm not sure that Foundation
 funding would be a solution, because the impact probably isn't directly
 on the core devs. Speaking with my Red Hat hat on, if the midcycle meetup
 is important enough, the core devs will likely get the funding to attend.
 The fallout of this though is that every attendee at a mid-cycle summit
 

Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-13 Thread Sylvain Bauza


On 13/08/2014 12:21, Sylvain Bauza wrote:


On 12/08/2014 22:06, Sylvain Bauza wrote:


On 12/08/2014 18:54, Nikola Đipanov wrote:

On 08/12/2014 04:49 PM, Sylvain Bauza wrote:

(sorry for reposting, missed 2 links...)

Hi Nikola,

On 12/08/2014 12:21, Nikola Đipanov wrote:

Hey Nova-istas,

While I was hacking on [1] I was considering how to approach the fact
that we now need to track one more thing (NUMA node utilization) in our
resources. I went with - I'll add it to the compute nodes table - thinking
it's a fundamental enough property of a compute host that it deserves to
be there, although I was considering the Extensible Resource Tracker at one
point (ERT from now on - see [2]), but looking at the code it did not
seem to provide anything I desperately needed, so I went with keeping it
simple.

So fast-forward a few days, and I caught myself solving a problem that I
kept thinking ERT should have solved - but apparently hasn't, and I
think it is fundamentally a broken design without it - so I'd really
like to see it re-visited.

The problem can be described by the following lemma (if you take 'lemma'
to mean 'a sentence I came up with just now' :)):


Due to the way scheduling works in Nova (roughly: pick a host based on
stale(ish) data, rely on claims to trigger a re-schedule), the _same exact_
information that the scheduling service used when making a placement
decision needs to be available to the compute service when testing the
placement.


This is not the case right now, and the ERT does not propose any way to
solve it (see how I hacked around needing to be able to get
extra_specs when making claims in [3], without hammering the DB). The
result will be that any resource that we add that needs user-supplied
info for scheduling an instance against it will need a buggy
re-implementation of gathering all the bits from the request that the
scheduler sees, to be able to work properly.

Well, ERT does provide a plugin mechanism for testing resources at the
claim level. It is the plugin's responsibility to implement a test()
method [2.1] which will be called by test_claim() [2.2].

So, provided this method is implemented, a local host check can be done
based on the host's view of resources.
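To give an idea of the shape of such a plugin, here is a rough sketch (the
class layout and the test() signature are simplified assumptions for
illustration, not the actual ERT interface):

    class ExampleResource(object):
        """A made-up per-host resource with a claim-time test() hook."""

        def __init__(self, total=8):
            self.total = total
            self.used = 0

        def test(self, usage, limits):
            """Return None if the claim fits, else a reason string."""
            requested = usage.get('example', 0)
            limit = limits.get('example', self.total)
            free = limit - self.used
            if requested > free:
                return ('Free example resource %d < requested %d'
                        % (free, requested))
            return None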


Yes - the problem is there is no clear API to get all the needed bits to
do so - especially the user-supplied ones from images and flavors.
On top of that, in the current implementation we only pass a hand-wavy
'usage' blob in. This makes anyone wanting to use this in conjunction
with some of the user-supplied bits roll their own
'extract_data_from_instance_metadata_flavor_image' or similar, which is
horrible and also likely bad for performance.


I see your concern that there is no interface for user-facing
resources like flavor or image metadata.
I also think, indeed, that the big 'usage' blob is not a good choice
for the long-term vision.

That said, as we say in French, I don't think we should throw the baby out
with the bath water... i.e. the problem is with the RT, not the ERT (apart
from the mention of the third-party API that you noted - I'll get to it
later below).
This is obviously a bigger concern when we want to allow users to pass
data (through image or flavor) that can affect scheduling, but still a
huge concern IMHO.
And here is where I agree with you: at the moment, ResourceTracker (and
consequently the Extensible RT) only provides the view of the resources the
host knows about (see my point above), and possibly some other resources
are missing.
So, whatever your choice of going with or without ERT, your patch [3] is
still worthwhile if we don't want to look up the DB each time a claim is made.



As I see that there are already BPs proposing to use this IMHO broken
ERT ([4] for example), which will surely add to the proliferation of
code that hacks around these design shortcomings in what is already a
messy, but also crucial (for perf as well as features) bit of Nova code.

Two distinct implementations of that spec (i.e. instances and flavors)
have been proposed [2.3] [2.4], so reviews are welcome. If you look at the
test() method, it's a no-op for both plugins. I'm open to comments,
because I have the stated problem: how can we define a limit on just a
counter of instances and flavors?


Will look at these - but none of them seem to hit the issue I am
complaining about, and that is that it will need to consider other
request data for claims, not only data available on instances.

Also - the fact that you don't implement test() in the flavor one tells me
that the implementation is indeed racy (but it is racy atm as well) and
two requests can indeed race for the same host, and since no claims are
done, both can succeed. This is I believe (at least in case of single
flavor hosts) unlikely to happen in practice, but you get the idea.


Agreed, these 2 patches probably require another iteration, in 
particular how we make sure that it won't be racy. So I need another 
run to think about what to test() for these 2 examples.
Another 

Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-08-13 Thread Mark McLoughlin
On Tue, 2014-07-29 at 14:04 +0200, Thierry Carrez wrote:
 Ihar Hrachyshka wrote:
  On 29/07/14 12:15, Daniel P. Berrange wrote:
  Looking at the current review backlog I think that we have to
  seriously question whether our stable branch review process in
  Nova is working to an acceptable level
  
  On Havana
  
- 43 patches pending
- 19 patches with a single +2
- 1 patch with a -1
    - 0 patches with a -2
- Stalest waiting 111 days since most recent patch upload
- Oldest waiting 250 days since first patch upload
- 26 patches waiting more than 1 month since most recent upload
- 40 patches waiting more than 1 month since first upload
  
  On Icehouse:
  
- 45 patches pending
- 17 patches with a single +2
- 4 patches with a -1
- 1 patch with a -2
- Stalest waiting 84 days since most recent patch upload
- Oldest waiting 88 days since first patch upload
- 10 patches waiting more than 1 month since most recent upload
- 29 patches waiting more than 1 month since first upload
  
  I think those stats paint a pretty poor picture of our stable branch
  review process, particularly Havana.
  
  It should not take us 250 days for our review team to figure out whether
  a patch is suitable material for a stable branch, nor should we have
  nearly all the patches waiting more than 1 month in Havana.
  
  These branches are not getting sufficient reviewer attention and we need
  to take steps to fix that.
  
  If I had to set a benchmark, assuming CI passes, I'd expect us to either
  approve or reject submissions for stable within a 2 week window in the
  common case, 1 month at the worst case.
  
  Totally agreed.
 
 A bit of history.
 
 At the dawn of time there were no OpenStack stable branches; each
 distribution was maintaining its own stable branches, duplicating the
 backporting work.

I'm not sure how much backporting was going on at the time of the Essex
summit. I'm sure Ubuntu had some backports, but that was probably about
it?

  At some point it was suggested (mostly by RedHat and
 Canonical folks) that there should be collaboration around that task,
 and the OpenStack project decided to set up official stable branches
 where all distributions could share the backporting work. The stable
 team group was seeded with package maintainers from all over the distro
 world.

During that first design summit session, it was mainly you, me and
Daviey discussing. Both you and Daviey saw this primarily about distros
collaborating, but I never saw it that way.

I don't see how any self-respecting open-source project can throw a
release over the wall and have no ability to address critical bugs with
that release until the next release 6 months later which will also
include a bunch of new feature work with new bugs. That's not a distro
maintainer point of view.

At that Essex summit, we were lamenting how many critical bugs in Nova
had been discovered shortly after the Diablo release. Our inability to
do a bugfix release of Nova for Diablo seemed like a huge problem to me.

 So these branches originally only exist as a convenient place to
 collaborate on backporting work. This is completely separate from
 development work, even if these days backports are often proposed by
 developers themselves. The stable branch team is separate from the rest
 of OpenStack teams. We have always been very clear that if the stable
 branches are no longer maintained (i.e. if the distributions don't see
 the value of those anymore), then we'll consider removing them. We, as a
 project, only signed up to support those as long as the distros wanted them.

You can certainly argue that the project never signed up for the
responsibility. I don't see it that way, but there was certainly always
a debate whether this was the project taking responsibility for bugfix
releases or whether it was just downstream distros collaborating.

The thing about branches going away if they're not maintained isn't
anything unusual. If *any* effort within the project becomes so
unmaintained due to a lack of interest that we can't stand over it,
then we should consider retiring it.

 We have been adding new members to the stable branch teams recently, but
 those tend to come from development teams rather than downstream
 distributions, and that starts to bend the original landscape.
 Basically, the stable branch needs to be very conservative to be a
 source of safe updates -- downstream distributions understand the need
 to weigh the benefit of the patch vs. the disruption it may cause.
 Developers have another type of incentive, which is to get the fix they
 worked on into stable releases, without necessarily being very
 conservative. Adding more -core people to the stable team to compensate
 for the absence of distro maintainers will ultimately kill those branches.

That's quite a leap to say that -core team members will be so incapable
of the appropriate level of conservatism that the branch will be 

Re: [openstack-dev] Fwd: [nova][core] Expectations of core reviewers

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 09:01:59AM -0700, Maru Newby wrote:
 
 On Aug 13, 2014, at 2:57 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
  On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
  Hi.
  
  One of the action items from the nova midcycle was that I was asked to
  make nova's expectations of core reviews more clear. This email is an
  attempt at that.
  
  Nova expects a minimum level of sustained code reviews from cores. In
  the past this has been generally held to be in the order of two code
  reviews a day, which is a pretty low bar compared to the review
  workload of many cores. I feel that existing cores understand this
  requirement well, and I am mostly stating it here for completeness.
  
  Additionally, there are increasing levels of concern that cores need to
  be on the same page about the criteria we hold code to, as well as the
  overall direction of nova. While the weekly meetings help here, it was
  agreed that summit attendance is really important to cores. It's the
  way we decide where we're going for the next cycle, as well as a
  chance to make sure that people are all pulling in the same direction
  and trust each other.
  
  There is also a strong preference for midcycle meetup attendance,
  although I understand that can sometimes be hard to arrange. My stance
  is that I'd like cores to try to attend, but understand that
  sometimes people will miss one. In response to the increasing
  importance of midcycles over time, I commit to trying to get the dates
  for these events announced further in advance.
  
  Personally I'm going to find it really hard to justify long distance
  travel 4 times a year for OpenStack for personal / family reasons,
  let alone company cost. I couldn't attend Icehouse mid-cycle because
  I just had too much travel in a short time to be able to do another
  week long trip away from family. I couldn't attend Juno mid-cycle
  because it clashed with a personal holiday. There are other open-source
  related conferences that I also have to attend (LinuxCon, FOSDEM,
  KVM Forum, etc.), so doubling the expected number of openstack
  conferences from 2 to 4 is really very undesirable from my POV.
  I might be able to attend the occasional mid-cycle meetup if the
  location was convenient, but in general I don't see myself being
  able to attend them regularly.
  
  I tend to view the fact that we're emphasising the need of in-person
  meetups to be somewhat of an indication of failure of our community
  operation. The majority of open source projects work very effectively
  with far less face-to-face time. OpenStack is fortunate that companies
  are currently willing to spend 6/7-figure sums flying 1000's of
  developers around the world many times a year, but I don't see that
  lasting forever so I'm concerned about baking the idea of f2f midcycle
  meetups into our way of life even more strongly.
 
 I was fortunate to attend both the Nova and Neutron mid-cycles last
 month, and I can attest to how productive these gatherings were. 
 Discussion moved quickly and misunderstandings were rapidly resolved.
 Informal ('water-cooler') conversation led to many interactions that
 might not otherwise have occurred.  Given your attendance of summit
 and other open source conferences, though, I'm assuming the value of
 f2f is not in question.

I'm not questioning the value of f2f - I'm questioning the idea of
doing f2f meetings sooo many times a year. OpenStack is very much
the outlier here among open source projects - the vast majority of
projects get along very well with much less f2f time and a far
smaller % of their contributors attend those f2f meetings that do
happen. So I really do question what is missing from OpenStack's
community interaction that makes us believe that having 4 f2f
meetings a year is critical to our success.

 Nothing good is ever free.  The financial cost and exclusionary
 nature of an in-person meetup should definitely be weighed against
 the opportunity for focused and high-bandwidth communication.  It's
 clear to myself and other attendees just how valuable the recent
 mid-cycles were in terms of making technical decisions and building
 the relationships to support their implementation.  Maybe it isn't
 sustainable over the long-term to meet so often, but I don't think
 that should preclude us from deriving benefit in the short-term.

As pointed out this benefit for core devs has a direct negative
impact on other non-core devs. I'm questioning whether this is
really a net win overall vs other approaches to collaboration.

 I also don't think we should ignore the opportunity for more
 effective decision-making on the grounds that not everyone
 can directly participate.  Not everyone is able to attend
 summit, but it is nonetheless a critical part of our
 community's decision-making process.  The topic lists for a
 mid-cycle are published beforehand, just like summit, to
 allow non-attendees the chance to present their 

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Maru Newby

On Aug 13, 2014, at 2:57 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.
 
 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there are increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. It's the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.
 
 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like cores to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.
 
 Personally I'm going to find it really hard to justify long distance
 travel 4 times a year for OpenStack for personal / family reasons,
 let alone company cost. I couldn't attend Icehouse mid-cycle because
 I just had too much travel in a short time to be able to do another
 week long trip away from family. I couldn't attend Juno mid-cycle
 because it clashed with a personal holiday. There are other open-source
 related conferences that I also have to attend (LinuxCon, FOSDEM,
 KVM Forum, etc.), so doubling the expected number of openstack
 conferences from 2 to 4 is really very undesirable from my POV.
 I might be able to attend the occasional mid-cycle meetup if the
 location was convenient, but in general I don't see myself being
 able to attend them regularly.
 
 I tend to view the fact that we're emphasising the need of in-person
 meetups to be somewhat of an indication of failure of our community
 operation. The majority of open source projects work very effectively
 with far less face-to-face time. OpenStack is fortunate that companies
 are currently willing to spend 6/7-figure sums flying 1000's of
 developers around the world many times a year, but I don't see that
 lasting forever so I'm concerned about baking the idea of f2f midcycle
 meetups into our way of life even more strongly.

I was fortunate to attend both the Nova and Neutron mid-cycles last month, and 
I can attest to how productive these gatherings were.  Discussion moved quickly 
and misunderstandings were rapidly resolved.  Informal ('water-cooler') 
conversation led to many interactions that might not otherwise have occurred.  
Given your attendance of summit and other open source conferences, though, I'm 
assuming the value of f2f is not in question.

Nothing good is ever free.  The financial cost and exclusionary nature of an 
in-person meetup should definitely be weighed against the opportunity for 
focused and high-bandwidth communication.  It's clear to myself and other 
attendees just how valuable the recent mid-cycles were in terms of making 
technical decisions and building the relationships to support their 
implementation.  Maybe it isn't sustainable over the long-term to meet so 
often, but I don't think that should preclude us from deriving benefit in the 
short-term.  I also don't think we should ignore the opportunity for more 
effective decision-making on the grounds that not everyone can directly 
participate.  Not everyone is able to attend summit, but it is nonetheless a 
critical part of our community's decision-making process. The topic lists for a 
mid-cycle are published beforehand, just like summit, to allow non-attendees 
the chance to present their views in advance and/or designate one or more 
attendees to advocate on 
 their behalf.  It's not perfect, but the alternative - not holding mid-cycles 
- would seem to be a case of throwing out the baby with the bathwater.


Maru

 
 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.
 
 Travel funding is certainly an issue, but I'm not sure that Foundation
 funding would be a solution, because the impact probably isn't directly
 on the core devs. Speaking with my Red Hat hat on, if the midcycle meetup
 is important enough, the core devs will likely get the funding to attend.
 The fallout of this though is that every attendee at a mid-cycle summit
 

[openstack-dev] Fwd: [nova][core] Expectations of core reviewers

2014-08-13 Thread Maru Newby
My apologies, I managed to break the thread here.  Please respond to the thread 
with subject 'Re: [openstack-dev] [nova][core] Expectations of core reviewers' 
in preference to this one.


Maru

On Aug 13, 2014, at 9:01 AM, Maru Newby ma...@redhat.com wrote:

 
 On Aug 13, 2014, at 2:57 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
 On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
 Hi.
 
 One of the action items from the nova midcycle was that I was asked to
 make nova's expectations of core reviews more clear. This email is an
 attempt at that.
 
 Nova expects a minimum level of sustained code reviews from cores. In
 the past this has been generally held to be in the order of two code
 reviews a day, which is a pretty low bar compared to the review
 workload of many cores. I feel that existing cores understand this
 requirement well, and I am mostly stating it here for completeness.
 
 Additionally, there are increasing levels of concern that cores need to
 be on the same page about the criteria we hold code to, as well as the
 overall direction of nova. While the weekly meetings help here, it was
 agreed that summit attendance is really important to cores. It's the
 way we decide where we're going for the next cycle, as well as a
 chance to make sure that people are all pulling in the same direction
 and trust each other.
 
 There is also a strong preference for midcycle meetup attendance,
 although I understand that can sometimes be hard to arrange. My stance
 is that I'd like cores to try to attend, but understand that
 sometimes people will miss one. In response to the increasing
 importance of midcycles over time, I commit to trying to get the dates
 for these events announced further in advance.
 
 Personally I'm going to find it really hard to justify long distance
 travel 4 times a year for OpenStack for personal / family reasons,
 let alone company cost. I couldn't attend Icehouse mid-cycle because
 I just had too much travel in a short time to be able to do another
 week long trip away from family. I couldn't attend Juno mid-cycle
 because it clashed with a personal holiday. There are other open-source
 related conferences that I also have to attend (LinuxCon, FOSDEM,
 KVM Forum, etc.), so doubling the expected number of openstack
 conferences from 2 to 4 is really very undesirable from my POV.
 I might be able to attend the occasional mid-cycle meetup if the
 location was convenient, but in general I don't see myself being
 able to attend them regularly.
 
 I tend to view the fact that we're emphasising the need of in-person
 meetups to be somewhat of an indication of failure of our community
 operation. The majority of open source projects work very effectively
 with far less face-to-face time. OpenStack is fortunate that companies
 are currently willing to spend 6/7-figure sums flying 1000's of
 developers around the world many times a year, but I don't see that
 lasting forever so I'm concerned about baking the idea of f2f midcycle
 meetups into our way of life even more strongly.
 
 I was fortunate to attend both the Nova and Neutron mid-cycles last month, 
 and I can attest to how productive these gatherings were.  Discussion moved 
 quickly and misunderstandings were rapidly resolved.  Informal 
 ('water-cooler') conversation led to many interactions that might not 
 otherwise have occurred.  Given your attendance of summit and other open 
 source conferences, though, I'm assuming the value of f2f is not in question.
 
 Nothing good is ever free.  The financial cost and exclusionary nature of an 
 in-person meetup should definitely be weighed against the opportunity for 
 focused and high-bandwidth communication.  It's clear to myself and other 
 attendees just how valuable the recent mid-cycles were in terms of making 
 technical decisions and building the relationships to support their 
 implementation.  Maybe it isn't sustainable over the long-term to meet so 
 often, but I don't think that should preclude us from deriving benefit in the 
 short-term.  I also don't think we should ignore the opportunity for more 
 effective decision-making on the grounds that not everyone can directly 
 participate.  Not everyone is able to attend summit, but it is nonetheless a 
 critical part of our community's decision-making process.  The topic lists 
 for a mid-cycle are published beforehand, just like summit, to allow 
 non-attendees the chance to present their views in advance and/or designate 
 one or more attendees to advocate 
 on their behalf.  It's not perfect, but the alternative - not holding 
mid-cycles - would seem to be a case of throwing out the baby with the 
bathwater.
 
 
 Maru
 
 
 Given that we consider these physical events so important, I'd like
 people to let me know if they have travel funding issues. I can then
 approach the Foundation about funding travel if that is required.
 
 Travel funding is certainly an issue, but I'm not sure that Foundation
 

Re: [openstack-dev] [qa] Using any username/password to create tempest clients

2014-08-13 Thread Andrea Frittoli
Hello Udi,

I don't see anything wrong in principle with your code.
This said, the main use case I had in mind when I wrote the auth providers
and credentials classes was to abstract authentication for all tests, so
that it is possible to configure the target identity API version to be used
for authentication, and all tests will use it to obtain a token.

Identity tests are a bit different though, because when running an identity
test you typically want to specify which version of the identity API is to
be used - like you did in your code.
Nonetheless you should be able to use the same code from the auth provider
and credentials classes - but because identity tests use more complex
scenarios you may find issues or restrictions in the current implementation.
I see that your credentials do not have a tenant - even though there is a
simple unit test for that case, that case is not used in any other test
atm, so you may well have hit a bug.
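
For illustration, a minimal sketch of the same construction with the
credentials scoped to a project. Here proj1_name is a hypothetical
variable for the project created earlier in the scenario, and the
project_name / project_domain_name keyword names are assumptions about
the credentials class rather than verified tempest API:

    # Sketch only: as above, but supplying a project so the resulting
    # token is project-scoped rather than unscoped.
    creds = KeystoneV3Credentials(username=dom1proj1admin_name,
                                  password=dom1proj1admin_name,
                                  project_name=proj1_name,
                                  project_domain_name=dom1_name,
                                  user_domain_name=dom1_name)
    auth_provider = KeystoneV3AuthProvider(creds)
    creds = auth_provider.fill_credentials()
    admin_client = clients.Manager(interface=self._interface,
                                   credentials=creds)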

I think it would be helpful if you could push a WIP change - it would be
easier to see what is going wrong.

andrea



On 13 August 2014 10:41, Udi Kalifon ukali...@redhat.com wrote:

 Hello.

 I am writing a tempest scenario for keystone. In this scenario I create a
 domain, project and a user with admin rights on the project. I then try to
 instantiate a Manager so I can call keystone using the new user credentials:

 creds = KeystoneV3Credentials(username=dom1proj1admin_name,
                               password=dom1proj1admin_name,
                               domain_name=dom1_name,
                               user_domain_name=dom1_name)
 auth_provider = KeystoneV3AuthProvider(creds)
 creds = auth_provider.fill_credentials()
 admin_client = clients.Manager(interface=self._interface,
                                credentials=creds)

 The problem is that I get unauthorized return codes for every call I
 make with this client. I verified that the user is created properly and has
 the needed credentials, by manually authenticating and getting a token with
 his credentials and then using that token. Apparently, in my code I don't
 create the creds properly or I'm missing another step. How can I use the
 new user in tempest properly?

 Thanks in advance,
 Udi Kalifon.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Daniel P. Berrange
On Wed, Aug 13, 2014 at 09:18:09AM -0700, Maru Newby wrote:
 
 On Aug 13, 2014, at 2:57 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
  On Wed, Aug 13, 2014 at 08:57:40AM +1000, Michael Still wrote:
  Hi.
  
  One of the action items from the nova midcycle was that I was asked to
  make nova's expectations of core reviews more clear. This email is an
  attempt at that.
  
  Nova expects a minimum level of sustained code reviews from cores. In
  the past this has been generally held to be in the order of two code
  reviews a day, which is a pretty low bar compared to the review
  workload of many cores. I feel that existing cores understand this
  requirement well, and I am mostly stating it here for completeness.
  
  Additionally, there are increasing levels of concern that cores need to
  be on the same page about the criteria we hold code to, as well as the
  overall direction of nova. While the weekly meetings help here, it was
  agreed that summit attendance is really important to cores. It's the
  way we decide where we're going for the next cycle, as well as a
  chance to make sure that people are all pulling in the same direction
  and trust each other.
  
  There is also a strong preference for midcycle meetup attendance,
  although I understand that can sometimes be hard to arrange. My stance
  is that I'd like cores to try to attend, but understand that
  sometimes people will miss one. In response to the increasing
  importance of midcycles over time, I commit to trying to get the dates
  for these events announced further in advance.
  
  Personally I'm going to find it really hard to justify long distance
  travel 4 times a year for OpenStack for personal / family reasons,
  let alone company cost. I couldn't attend Icehouse mid-cycle because
  I just had too much travel in a short time to be able to do another
  week long trip away from family. I couldn't attend Juno mid-cycle
  because it clashed with a personal holiday. There are other open-source
  related conferences that I also have to attend (LinuxCon, FOSDEM,
  KVM Forum, etc.), so doubling the expected number of openstack
  conferences from 2 to 4 is really very undesirable from my POV.
  I might be able to attend the occasional mid-cycle meetup if the
  location was convenient, but in general I don't see myself being
  able to attend them regularly.
  
  I tend to view the fact that we're emphasising the need of in-person
  meetups to be somewhat of an indication of failure of our community
  operation. The majority of open source projects work very effectively
  with far less face-to-face time. OpenStack is fortunate that companies
  are currently willing to spend 6/7-figure sums flying 1000's of
  developers around the world many times a year, but I don't see that
  lasting forever so I'm concerned about baking the idea of f2f midcycle
  meetups into our way of life even more strongly.
 
 I was fortunate to attend both the Nova and Neutron mid-cycles last
 month, and I can attest to how productive these gatherings were.
 Discussion moved quickly and misunderstandings were rapidly resolved.
 Informal ('water-cooler') conversation led to many interactions that
 might not otherwise have occurred.  Given your attendance of summit
 and other open source conferences, though, I'm assuming the value of
 f2f is not in question.

I'm not questioning the value of f2f - I'm questioning the idea of
doing f2f meetings sooo many times a year. OpenStack is very much
the outlier here among open source projects - the vast majority of
projects get along very well with much less f2f time and a far
smaller % of their contributors attend those f2f meetings that do
happen. So I really do question what is missing from OpenStack's
community interaction that makes us believe that having 4 f2f
meetings a year is critical to our success.

 Nothing good is ever free.  The financial cost and exclusionary
 nature of an in-person meetup should definitely be weighed against
 the opportunity for focused and high-bandwidth communication.  It's
 clear to myself and other attendees just how valuable the recent
 mid-cycles were in terms of making technical decisions and building
 the relationships to support their implementation.  Maybe it isn't
 sustainable over the long-term to meet so often, but I don't think
 that should preclude us from deriving benefit in the short-term.

As pointed out this benefit for core devs has a direct negative
impact on other non-core devs. I'm questioning whether this is
really a net win overall vs other approaches to collaboration.

 I also don't think we should ignore the opportunity for more
 effective decision-making on the grounds that not everyone
 can directly participate.  Not everyone is able to attend
 summit, but it is nonetheless a critical part of our
 community's decision-making process.  The topic lists for a
 mid-cycle are published beforehand, just like summit, to
 allow non-attendees the chance to present their 

Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-13 Thread Asselin, Ramy
I remember the infra team objected to nightly builds. They wanted a run on
every patch set in order to report to gerrit.
In the short-term, I suggest you test on every patch set, but limit the 
resources. This will cause 'long delays' but jobs will eventually go through.
In the long-term, you'll need to scale.
Currently, we're just running 1 job per back-end at a time.

-Original Message-
From: David Pineau [mailto:dav.pin...@gmail.com] 
Sent: Wednesday, August 13, 2014 2:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

Hello,

I have currently set up the Scality CI not to report (mostly because it isn't
fully functional yet, as the machine it runs on turns out to be undersized and
thus the tests fail on some timeouts), and partly because it's currently a
nightly build. I have no way of testing multiple patchsets at the same time,
so it is easier this way.

How do you plan to officialize the different 3rd-party CIs? I remember that
the cinder meeting about this at the Atlanta Summit concluded that a nightly
build would be enough, but such a build cannot really report on gerrit.

David Pineau
gerrit: Joachim
IRC#freenode: joa

2014-08-13 2:28 GMT+02:00 Asselin, Ramy ramy.asse...@hp.com:
 I forked jaypipe’s repos & am working on extending them to support
 nodepool, log server, etc.

 Still WIP but generally working.



 If you need help, ping me on IRC #openstack-cinder (asselin)



 Ramy



 From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
 Sent: Monday, August 11, 2014 11:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI 
 systems



 On 12 August 2014 07:26, Amit Das amit@cloudbyte.com wrote:

 I would like some guidance in this regards in form of some links, wiki 
 pages etc.



 I am currently gathering the driver cert test results i.e. tempest
 tests from devstack in our environment & CI setup would be my next step.



 This should get you started:

 http://ci.openstack.org/third_party.html



 Then Jay Pipes' excellent two part series will help you with the 
 details of getting it done:

 http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing
 -system/

 http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing
 -system-part-2/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
David Pineau,
Developer RD at Scality

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

2014-08-13 Thread Duncan Thomas
If you limit yourself to only testing once jenkins has put a +1 on,
then you can cut down a bit... Not sure how to build that into Jay Pipes'
pipeline though
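
For illustration, a rough sketch of that approach: watch gerrit
stream-events and only enqueue a patch set once Jenkins has voted
Verified +1 on it. The account names and the 'Verified'/'VRIF' category
labels are assumptions for illustration, not a drop-in for any
particular CI:

    # Hypothetical trigger filter: act only on comment-added events where
    # the jenkins account has voted Verified >= +1 on the patch set.
    import json
    import subprocess

    def jenkins_verified(event):
        if event.get('type') != 'comment-added':
            return False
        if event.get('author', {}).get('username') != 'jenkins':
            return False
        return any(a.get('type') in ('Verified', 'VRIF') and
                   int(a.get('value', 0)) >= 1
                   for a in event.get('approvals', []))

    proc = subprocess.Popen(['ssh', '-p', '29418',
                             'my-ci-account@review.openstack.org',
                             'gerrit', 'stream-events'],
                            stdout=subprocess.PIPE)
    for line in proc.stdout:
        event = json.loads(line)
        if jenkins_verified(event):
            ref = event['patchSet']['ref']
            print('enqueue %s' % ref)  # hand the ref to the CI job queue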

On 13 August 2014 10:30, Asselin, Ramy ramy.asse...@hp.com wrote:
 I remember the infra team objected to nightly builds. They wanted a run on
 every patch set in order to report to gerrit.
 In the short-term, I suggest you test on every patch set, but limit the 
 resources. This will cause 'long delays' but jobs will eventually go through.
 In the long-term, you'll need to scale.
 Currently, we're just running 1 job per back-end at a time.

 -Original Message-
 From: David Pineau [mailto:dav.pin...@gmail.com]
 Sent: Wednesday, August 13, 2014 2:19 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI systems

 Hello,

 I have currently set up the Scality CI not to report (mostly because it isn't
 fully functional yet, as the machine it runs on turns out to be undersized
 and thus the tests fail on some timeouts), and partly because it's currently a
 nightly build. I have no way of testing multiple patchsets at the same time,
 so it is easier this way.

 How do you plan to officialize the different 3rd-party CIs? I remember
 that the cinder meeting about this at the Atlanta Summit concluded that a
 nightly build would be enough, but such a build cannot really report on gerrit.

 David Pineau
 gerrit: Joachim
 IRC#freenode: joa

 2014-08-13 2:28 GMT+02:00 Asselin, Ramy ramy.asse...@hp.com:
 I forked jaypipe’s repos & am working on extending them to support
 nodepool, log server, etc.

 Still WIP but generally working.



 If you need help, ping me on IRC #openstack-cinder (asselin)



 Ramy



 From: Jesse Pretorius [mailto:jesse.pretor...@gmail.com]
 Sent: Monday, August 11, 2014 11:33 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [OpenStack-Dev] [Cinder] 3'rd party CI
 systems



 On 12 August 2014 07:26, Amit Das amit@cloudbyte.com wrote:

 I would like some guidance in this regards in form of some links, wiki
 pages etc.



 I am currently gathering the driver cert test results i.e. tempest
 tests from devstack in our environment & CI setup would be my next step.



 This should get you started:

 http://ci.openstack.org/third_party.html



 Then Jay Pipes' excellent two part series will help you with the
 details of getting it done:

 http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing
 -system/

 http://www.joinfu.com/2014/02/setting-up-an-openstack-external-testing
 -system-part-2/




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 David Pineau,
 Developer RD at Scality

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] lists and merges

2014-08-13 Thread Ben Nemec
On 08/12/2014 05:21 PM, Robert Collins wrote:
 Just ran into a merge conflict with
 https://review.openstack.org/#/c/105878/ which looks like this:
 
 - name: nova_osapi
   port: 8774
   net_binds: *public_binds
 - name: nova_metadata
   port: 8775
   net_binds: *public_binds
 - name: ceilometer
   port: 8777
   net_binds: *public_binds
 - name: swift_proxy_server
   port: 8080
   net_binds: *public_binds
 <<<<<<< HEAD
 - name: rabbitmq
   port: 5672
   options:
 - timeout client 0
 - timeout server 0
 =======
 - name: mysql
   port: 3306
   extra_server_params:
 - backup
 >>>>>>> Change overcloud to use VIP for MySQL
 
 I'd like to propose that we make it a standard - possibly lint on it,
 certainly fix up things when we see it's wrong - to alpha-sort such
 structures: that avoids the textual-merge failure mode of 'append to
 the end'.
 
 -Rob
 

Works for me.  At the very least we could add it to the new guidelines
that are proposed: https://review.openstack.org/#/c/110565/

-Ben
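
For illustration, a minimal sketch of the sort of lint check being
proposed - flag any named list that is not alpha-sorted. This is purely
hypothetical (the top-level 'services' key is an assumption about the
file layout), not an existing tripleo tool:

    # Hypothetical lint: exit non-zero if service entries in a YAML file
    # are not alpha-sorted by name, to avoid append-to-the-end conflicts.
    import sys
    import yaml

    def check_sorted(path):
        with open(path) as f:
            data = yaml.safe_load(f)
        names = [entry['name'] for entry in data.get('services', [])]
        if names != sorted(names):
            print('%s: service list is not alpha-sorted' % path)
            return False
        return True

    if __name__ == '__main__':
        results = [check_sorted(p) for p in sys.argv[1:]]
        sys.exit(0 if all(results) else 1)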

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-13 Thread Brian Elliott

On Aug 12, 2014, at 5:21 AM, Nikola Đipanov ndipa...@redhat.com wrote:

 Hey Nova-istas,
 
 While I was hacking on [1] I was considering how to approach the fact
 that we now need to track one more thing (NUMA node utilization) in our
 resources. I went with - I'll add it to the compute nodes table, thinking
 it's a fundamental enough property of a compute host that it deserves to
 be there, although I was considering the Extensible Resource Tracker at one
 point (ERT from now on - see [2]) but looking at the code - it did not
 seem to provide anything I desperately needed, so I went with keeping it
 simple.
 
 So fast-forward a few days, and I caught myself solving a problem that I
 kept thinking ERT should have solved - but apparently hasn't, and I
 think it is fundamentally a broken design without it - so I'd really
 like to see it re-visited.
 
 The problem can be described by the following lemma (if you take 'lemma'
 to mean 'a sentence I came up with just now' :)):
 
 
 Due to the way scheduling works in Nova (roughly: pick a host based on
 stale(ish) data, rely on claims to trigger a re-schedule), the _same exact_
 information that the scheduling service used when making a placement
 decision needs to be available to the compute service when testing the
 placement.
 “

Correct

 
 This is not the case right now, and the ERT does not propose any way to
 solve it - (see how I hacked around needing to be able to get
 extra_specs when making claims in [3], without hammering the DB). The
 result will be that any resource that we add that needs user-supplied
 info for scheduling an instance against it will need a buggy
 re-implementation of gathering all the bits from the request that the
 scheduler sees in order to work properly.
Agreed, ERT does not attempt to solve this problem of ensuring RT has an 
identical set of information for testing claims.  I don’t think it was intended 
to.

ERT does solve the issue of bloat in the RT when adding just-one-more-thing to 
test usage-wise.  It gives a nice hook for inserting your claim logic for your 
specific use case.

 
 This is obviously a bigger concern when we want to allow users to pass
 data (through image or flavor) that can affect scheduling, but still a
 huge concern IMHO.
I think passing additional data through to compute just wasn’t a problem that 
ERT aimed to solve.  (Paul Murray?)  That being said, coordinating the passing 
of any extra data required to test a claim that is *not* sourced from the host 
itself would be a very nice addition.  You are working around it with some 
caching in your flavor db lookup use case, although one could of course cook up 
a cleaner patch to pass such data through on the “build this” request to the 
compute.

 
 I see that there are already BPs proposing to use this IMHO broken
 ERT ([4] for example), which will surely add to the proliferation of
 code that hacks around these design shortcomings in what is already a
 messy, but also crucial (for perf as well as features) bit of Nova code.
 
 I propose to revert [2] ASAP since it is still fresh, and see how we can
 come up with a cleaner design.
 
I think the ERT is forward-progress here, but am willing to review 
patches/specs on improvements/replacements.  

 Would like to hear opinions on this, before I propose the patch tho!
 
 Thanks all,
 
 Nikola
 
 [1] https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement
 [2] https://review.openstack.org/#/c/109643/
 [3] https://review.openstack.org/#/c/111782/
 [4] https://review.openstack.org/#/c/89893
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Miguel Angel Ajo Pelayo
+1

- Original Message -
 like it! +1
 
 Fawad Khaliq
 
 
 On Wed, Aug 13, 2014 at 7:58 AM, mar...@redhat.com  mandr...@redhat.com 
 wrote:
 
 
 
 On 13/08/14 17:05, Kyle Mestery wrote:
  Per this week's Neutron meeting [1], it was decided that offering a
  rotating meeting slot for the weekly Neutron meeting would be a good
  thing. This will allow for a much easier time for people in
  Asia/Pacific timezones, as well as for people in Europe.
  
  So, I'd like to propose we rotate the weekly as follows:
  
  Monday 2100UTC
  Tuesday 1400UTC
 
 
 HUGE +1 and thanks!
 
 
  
  If people are ok with these time slots, I'll set this up and we'll
  likely start with this new schedule in September, after the FPF.
  
  Thanks!
  Kyle
  
  [1]
  http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Dan Smith
 I'm not questioning the value of f2f - I'm questioning the idea of
 doing f2f meetings sooo many times a year. OpenStack is very much
 the outlier here among open source projects - the vast majority of
 projects get along very well with much less f2f time and a far
 smaller % of their contributors attend those f2f meetings that do
 happen. So I really do question what is missing from OpenStack's
 community interaction that makes us believe that having 4 f2f
 meetings a year is critical to our success.

How many is too many? So far, I have found the midcycles to be extremely
productive -- productive in a way that we don't see at the summits, and
I think other attendees agree. Obviously if budgets start limiting them,
then we'll have to deal with it, but I don't want to stop meeting
preemptively. IMHO, the reasons to cut back would be:

- People leaving with a "well, that was useless..." feeling
- Not enough people able to travel to make it worthwhile

So far, neither of those have been outcomes of the midcycles we've had,
so I think we're doing okay.

The design summits are structured differently, where we see a lot more
diverse attendance because of the colocation with the user summit. It
doesn't lend itself well to long and in-depth discussions about specific
things, but it's very useful for what it gives us in the way of
exposure. We could try to have less of that at the summit and more
midcycle-ish time, but I think it's unlikely to achieve the same level
of usefulness in that environment.

Specifically, the lack of colocation with too many other projects has
been a benefit. This time, Mark and Maru were there from Neutron. Last
time, Mark from Neutron and the other Mark from Glance were there. If
they were having meetups in other rooms (like at summit) they wouldn't
have been there exposed to discussions that didn't seem like they'd have
a component for their participation, but did after all (re: nova and
glance and who should own flavors).

 As pointed out this benefit for core devs has a direct negative
 impact on other non-core devs. I'm questioning whether this is
 really a net win overall vs other approaches to collaboration.

It's a net win, IMHO.

 As I explain in the rest of my email below I'm not advocating
 getting rid of mid-cycle events entirely. I'm suggesting that
 we can attain a reasonable % of the benefits of f2f meetings
 by doing more formal virtual meetups and so be more efficient
 and inclusive overall.

I'd love to see more high-bandwidth mechanisms used to have discussions
in between f2f meetings. In fact, one of the outcomes of this last
midcycle was that we should have one about APIv3 with the folks that
couldn't attend for other reasons. It came up specifically because we
made more progress in ninety minutes than we had in the previous eight
months (yes, even with a design summit in the middle of that).

Expecting cores to be at these sorts of things seems pretty reasonable
to me, given the usefulness (and gravity) of the discussions we've been
having so far. Companies with more cores will have to send more or make
some hard decisions, but I don't want to cut back on the meetings until
their value becomes unjustified.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-13 Thread Akihiro Motoki
Huge +1

On Wed, Aug 13, 2014 at 11:05 PM, Kyle Mestery mest...@mestery.com wrote:
 Per this week's Neutron meeting [1], it was decided that offering a
 rotating meeting slot for the weekly Neutron meeting would be a good
 thing. This will allow for a much easier time for people in
 Asia/Pacific timezones, as well as for people in Europe.

 So, I'd like to propose we rotate the weekly as follows:

 Monday 2100UTC
 Tuesday 1400UTC

 If people are ok with these time slots, I'll set this up and we'll
 likely start with this new schedule in September, after the FPF.

 Thanks!
 Kyle

 [1] 
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concerns around the Extensible Resource Tracker design - revert maybe?

2014-08-13 Thread Sylvain Bauza


On 13/08/2014 18:40, Brian Elliott wrote:

On Aug 12, 2014, at 5:21 AM, Nikola Đipanov ndipa...@redhat.com wrote:


Hey Nova-istas,

While I was hacking on [1] I was considering how to approach the fact
that we now need to track one more thing (NUMA node utilization) in our
resources. I went with - I'll add it to the compute nodes table, thinking
it's a fundamental enough property of a compute host that it deserves to
be there, although I was considering the Extensible Resource Tracker at one
point (ERT from now on - see [2]) but looking at the code - it did not
seem to provide anything I desperately needed, so I went with keeping it
simple.

So fast-forward a few days, and I caught myself solving a problem that I
kept thinking ERT should have solved - but apparently hasn't, and I
think it is fundamentally a broken design without it - so I'd really
like to see it re-visited.

The problem can be described by the following lemma (if you take 'lemma'
to mean 'a sentence I came up with just now' :)):


Due to the way scheduling works in Nova (roughly: pick a host based on
stale(ish) data, rely on claims to trigger a re-schedule), the _same exact_
information that the scheduling service used when making a placement
decision needs to be available to the compute service when testing the
placement.
“

Correct


This is not the case right now, and the ERT does not propose any way to
solve it - (see how I hacked around needing to be able to get
extra_specs when making claims in [3], without hammering the DB). The
result will be that any resource that we add that needs user-supplied
info for scheduling an instance against it will need a buggy
re-implementation of gathering all the bits from the request that the
scheduler sees in order to work properly.

Agreed, ERT does not attempt to solve this problem of ensuring RT has an 
identical set of information for testing claims.  I don’t think it was intended 
to.

ERT does solve the issue of bloat in the RT when adding just-one-more-thing to 
test usage-wise.  It gives a nice hook for inserting your claim logic for your 
specific use case.


I think Nikola and I agreed on the fact that ERT is not responsible for 
this design. That said, I can talk on behalf of Nikola...




This is obviously a bigger concern when we want to allow users to pass
data (through image or flavor) that can affect scheduling, but still a
huge concern IMHO.

I think passing additional data through to compute just wasn’t a problem that 
ERT aimed to solve.  (Paul Murray?)  That being said, coordinating the passing 
of any extra data required to test a claim that is *not* sourced from the host 
itself would be a very nice addition.  You are working around it with some 
caching in your flavor db lookup use case, although one could of course cook up 
a cleaner patch to pass such data through on the “build this” request to the 
compute.


Indeed, and that's why I think the problem can be resolved thanks to 2
different things:
1. Filters need to look at what ERT is giving them; that's what
isolate-scheduler-db is trying to do (see my patches [2.3 and 2.4] in
the previous emails).
2. Some extra user request needs to be checked in the test() method of
ERT plugins (where claims are done), so I provided a WIP patch for
discussing it: https://review.openstack.org/#/c/113936/
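
For illustration, a rough sketch of the shape being discussed - a plugin
whose test() sees both the tracked usage and the user-supplied request,
so the claim can re-check exactly what the scheduler decided on. The
class and method names here are hypothetical, not the actual ERT
interface:

    # Hypothetical plugin shape (illustrative only, not the real ERT API).
    class ExampleResource(object):
        def __init__(self):
            self.used = 0

        def add_instance(self, instance):
            # Track usage when an instance is claimed on this host.
            self.used += instance.get('example_units', 0)

        def test(self, limits, request):
            # Returning a reason string rejects the claim and triggers a
            # re-schedule; returning None accepts it.
            wanted = request.get('example_units', 0)
            if self.used + wanted > limits.get('example_units', 0):
                return 'example resource exhausted on host'
            return None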




As I see that there are already BPs proposing to use this IMHO broken
ERT ([4] for example), which will surely add to the proliferation of
code that hacks around these design shortcomings in what is already a
messy, but also crucial (for perf as well as features) bit of Nova code.

I propose to revert [2] ASAP since it is still fresh, and see how we can
come up with a cleaner design.


I think the ERT is forward-progress here, but am willing to review 
patches/specs on improvements/replacements.


Sure, your comments are welcome on https://review.openstack.org/#/c/113373/
You can find an example where the TypeAffinity filter is modified to look at
HostState and where ERT is used for updating HostState and for
claiming resources.






Would like to hear opinions on this, before I propose the patch tho!

Thanks all,

Nikola

[1] https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement
[2] https://review.openstack.org/#/c/109643/
[3] https://review.openstack.org/#/c/111782/
[4] https://review.openstack.org/#/c/89893

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-13 Thread Joe Gordon
On Wed, Aug 13, 2014 at 2:01 AM, Nikola Đipanov ndipa...@redhat.com wrote:

 On 08/13/2014 04:05 AM, Michael Still wrote:
  On Wed, Aug 13, 2014 at 4:26 AM, Eoghan Glynn egl...@redhat.com wrote:
 
  It seems like this is exactly what the slots give us, though. The core
  review team picks a number of slots indicating how much work they think
  they can actually do (less than the available number of blueprints), and
  then blueprints queue up to get a slot based on priorities and turnaround
  time and other criteria that try to make slot allocation fair. By having
  the slots, not only is the review priority communicated to the review
  team, it is also communicated to anyone watching the project.
 
  One thing I'm not seeing shine through in this discussion of slots is
  whether any notion of individual cores, or small subsets of the core
  team with aligned interests, can champion blueprints that they have
  a particular interest in.
 
  I think that's because we've focussed in this discussion on the slots
  themselves, not the process of obtaining a slot.
 
  The proposal as it stands now is that we would have a public list of
  features that are ready to occupy a slot. That list would be ranked
  in order of priority to the project, and the next free slot goes to
  the top item on the list. The ordering of the list is determined by
  nova-core, based on their understanding of the importance of a given
  thing, as well as what they are hearing from our users.
 
  So -- there's totally scope for lobbying, or for a subset of core to
  champion a feature to land, or for a company to explain why a given
  feature is very important to them.
 
  It sort of happens now -- there is a subset of core which cares more
  about xen than libvirt for example. We're just being more open about
  the process and setting expectations for our users. At the moment its
  very confusing as a user, there are hundreds of proposed features for
  Juno, nearly 100 of which have been accepted. However, we're kidding
  ourselves if we think we can land 100 blueprints in a release cycle.
 

 While I agree with the motivation for this - setting expectations - I
 fail to see how this is different from what the Swift guys seem to be
 doing, apart from more red tape.

 I would love for us to say: If you want your feature in - you need to
 convince us that it's awesome and that we need to listen to you, by
 being active in the community (not only by means of writing code of
 course).

 I fear that slots will have us saying: Here's another check-box for you
 to tick, and the code goes in, which in addition to not communicating
 that we are ultimately the ones who chose what goes in, regardless of
 slots, also shifts the conversation away from what is really important,
 and that is the relative merit of the feature itself.

 But it obviously depends on the implementation.



Proposed implementation: https://review.openstack.org/#/c/112733/


 N.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-13 Thread Russell Bryant
On 08/13/2014 01:09 PM, Dan Smith wrote:
 Expecting cores to be at these sorts of things seems pretty reasonable
 to me, given the usefulness (and gravity) of the discussions we've been
 having so far. Companies with more cores will have to send more or make
 some hard decisions, but I don't want to cut back on the meetings until
 their value becomes unjustified.

I disagree.  IMO, *expecting* people to travel, potentially across the
globe, 4 times a year is an unreasonable expectation, and quite
uncharacteristic of open source projects.  If we can't figure out a way
to have the most important conversations in a way that is inclusive of
everyone, we're failing with our processes.

By all means, if a subset wants to meet up and make progress on some
things, I think that's fine.  I don't think anyone thinks it's not
useful.  However, discussions need to be summarized and taken back to
the list for discussion before decisions are made.  That's not the way
things are trending here, and I think that's a problem.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

