Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Clay Gerrard
On Mon, Mar 2, 2015 at 8:07 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Why do you say auto-abandon is the wrong tool? I've no problem with the 1
 week warning if somebody wants to implement it - I can see the value. A
 change-set that has been ignored for X weeks is pretty much the dictionary
 definition of "abandoned".


+1 this

I think Tom's suggested "help us help you" is a great pre-abandon warning.
In swift, as often as not, the last message ended with something like "you
can catch me on freenode in #openstack-swift if you have any questions".

But I really can't fathom what's the harm in closing abandoned patches as
abandoned?

If the author doesn't care about the change enough to address the review
comments (or failing tests!), and the core reviewers don't care about it
enough to *fix it for them* - where do we think the change is going to
go?!  It sounds like the argument is that instead of using "abandoned"
as an explicit description of an implicit state, we should just filter these
changes out of every view we use to look for something useful as "no changes
for X weeks after negative feedback" rather than calling a spade a spade.
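The implicit state described here maps directly onto Gerrit's standard search operators. A small sketch (the label name and the 4-week threshold are assumptions for illustration, not a project policy):

```python
# Sketch: build a Gerrit search expression for "no changes for X weeks
# after negative feedback" using standard Gerrit query operators.
# The label name and default threshold are illustrative assumptions.
def stale_change_query(weeks=4):
    terms = [
        "status:open",
        "age:%dw" % weeks,           # no updates for this many weeks
        "label:Code-Review<=-1",     # carries negative review feedback
    ]
    return " ".join(terms)

# The result can be pasted into the Gerrit search box or passed
# (URL-encoded) to the REST endpoint /changes/?q=<query>.
print(stale_change_query())
```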

I *mostly* look at patches that don't have feedback.  notmyname maintains
the swift review dashboard AFAIK:

http://goo.gl/r2mxbe

It's possible that a pile of abandoned-changes-not-marked-as-abandoned
wouldn't actually interrupt my work-flow.  But I would imagine maintaining
the review dashboard might occasionally require looking at ALL the changes
in the queue in an effort to find a class of changes that aren't
getting adequate feedback - that workflow might find the extra noise less
than helpful.

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Kyle Mestery
On Mon, Mar 2, 2015 at 11:59 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Mar 2, 2015 at 9:57 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi Daniel,

 thanks for a clear write-up of the matter and food for thought.

 I think the idea of having a smoother development mode that would not
 make people wait 6+ months to release a new feature is great.

 ++


 It's insane to expect that feature priorities won't ever slightly
 shift in the next 6 months. Having a limited list of features targeted
 for the next 6 months is prone to mistakes, since the people behind some
 approved features may need to postpone the effort for any number of
 reasons (personal, job rush, bad resource allocation, ...), and we
 then end up with approved specs with no actual code drops, using
 review 'slots' that would be better spent on other features that were
 not lucky enough to get a rubber stamp during the spec phase. Prior resource
 allocation would probably work somehow if we were all working for the same
 company that defined our priorities, but that's not the case.

 It should be noted that even though Nova is using slots for reviews,
 Neutron is not. I've found that it's hard to try and slot people in to
 review specific things. During Juno I tried this for Neutron, and it failed
 miserably. For Kilo in Neutron, we're not using slots but instead I've
 tried to highlight the approved specs of Essential and High priority
 for review by all reviewers, core and non-core included. It's gone OK, but
 the reality is you can't force people to review things. There are steps
 submitters can take to try and get timely review (lots of small, easy to
 digest patches, quick turnaround of comments, engagement in IRC and ML,
 etc.).


It was pointed out to me that nova is NOT using slots. Apologies for my
misunderstanding here.

Clearly, this thread has elicited a lot of strong thoughts and emotions. I
hope we can all use this energy to figure out a good solution and a way
forward for the issues presented here.



 Anecdotally, in neutron, we have two Essential blueprints for Kilo,
 and there are no code drops or patches in review for any of those, so
 I would expect them to fail to merge. At the same time, I will need to
 wait for the end of Kilo to consider adding support for guru reports
 to the project. Or in oslo world, I will need to wait for Liberty to
 introduce features in oslo.policy that are needed by neutron to switch
 to it, etc.

 To be fair, there are many reasons those two Essential BPs do not have
 code. I still expect the Pecan-focused one to have code, but I already moved
 the Plugin one out of Kilo at this point because there was no chance the
 code would land.

 But I get your point here. I think this thread has highlighted the fact
 that the BP/spec process worked to some extent, but for small things, the
 core reviewer team should have the ability to say "Yes, we can easily merge
 that, let's approve that spec" even if it's late in the cycle.


 Another problem is that currently milestones are used merely for
 targeting bugs and features, but no one really cares about whether the
 actual milestone shipment actually works. Again, a story from neutron
 world: Kilo-1 was shipped in the middle of advanced services split,
 with some essential patches around distutils setup missing (no proper
 migration plan applied, conflicting config files in neutron and *aas
 repos, etc.)

 This is true: the milestone releases matter but are not given enough focus,
 and they ship (for the most part) regardless of the items in them, given
 they are not long-lived, etc.

 So I'm all for reforms around processes we apply.

 If there's one thing OpenStack is good at, it's change.


 That said, I don't believe the real problem here is that we don't
 generate project tarballs frequently enough.

 Major problems I see as critical to tackle in our dev process are:

 - - enforced spec/dev mode. Solution: allow proposing (and approving) a
 reasonable spec at any moment in the cycle; allow code drops for
 approved specs at any moment in the cycle (except pre-release
 stabilization time); stop targeting specs: if it's sane now, it's probably
 still sane N+2 cycles from now.

 I'd say this is fine for specs that are small and people agree can easily
 be merged. I'd say this is not the case for large features near the end of
 the release which are unlikely to gather enough review momentum to actually
 merge.


 - - core team rubber stamping a random set of specs and putting -2 on
 all other specs due to project priorities. Solution: stop pretending the
 core team (or core drivers) can reasonably estimate review and coding
 resources for the next cycle. Instead, allow the community to decide
 what's worth the effort by approving all technically reasonable specs
 and letting everyone invest time and effort in the specs they deem
 worth it.

 If you're referring to Neutron here, I think you fail to estimate the
 

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Joe Gordon
On Mon, Mar 2, 2015 at 9:59 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Mar 2, 2015 at 9:57 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi Daniel,

 thanks for a clear write-up of the matter and food for thought.

 I think the idea of having a smoother development mode that would not
 make people wait 6+ months to release a new feature is great.

 ++


 It's insane to expect that feature priorities won't ever slightly
 shift in the next 6 months. Having a limited list of features targeted
 for the next 6 months is prone to mistakes, since people behind some


* Sure, we have had a few things that popped up, nova EC2 split for
example. But this is fairly rare.


 approved features may need to postpone the effort for any number of
 reasons (personal, job rush, bad resource allocation, ...), and we

 then end up with approved specs with no actual code drops, using
 review 'slots' that would be better spent on other features that were
 not lucky enough to get a rubber stamp during the spec phase. Prior resource


* As stated below, spec approval is very much not rubber stamping.
* As stated below, this doesn't even make sense: we are *not* using review
slots.


 allocation would probably work somehow if we were all working for the same
 company that defined our priorities, but that's not the case.


 It should be noted that even though Nova is using slots for reviews,
 Neutron is not. I've found that it's hard to try and slot people in to
 review specific things. During Juno I tried this for Neutron, and it failed
 miserably. For Kilo in Neutron, we're not using slots but instead I've
 tried to highlight the approved specs of Essential and High priority
 for review by all reviewers, core and non-core included. It's gone OK, but
 the reality is you can't force people to review things. There are steps
 submitters can take to try and get timely review (lots of small, easy to
 digest patches, quick turnaround of comments, engagement in IRC and ML,
 etc.).


So this is a big fat lie, one that others believe as well. Nova is *not*
using slots for reviews. We discussed using slots for reviews but did not
adopt them.




 Anecdotally, in neutron, we have two Essential blueprints for Kilo,
 and there are no code drops or patches in review for any of those, so
 I would expect them to fail to merge. At the same time, I will need to
 wait for the end of Kilo to consider adding support for guru reports
 to the project. Or in oslo world, I will need to wait for Liberty to
 introduce features in oslo.policy that are needed by neutron to switch
 to it, etc.

 To be fair, there are many reasons those two Essential BPs do not have
 code. I still expect the Pecan-focused one to have code, but I already moved
 the Plugin one out of Kilo at this point because there was no chance the
 code would land.

 But I get your point here. I think this thread has highlighted the fact
 that the BP/spec process worked to some extent, but for small things, the
 core reviewer team should have the ability to say "Yes, we can easily merge
 that, let's approve that spec" even if it's late in the cycle.


 Another problem is that currently milestones are used merely for
 targeting bugs and features, but no one really cares about whether the
 actual milestone shipment actually works. Again, a story from neutron
 world: Kilo-1 was shipped in the middle of advanced services split,
 with some essential patches around distutils setup missing (no proper
 migration plan applied, conflicting config files in neutron and *aas
 repos, etc.)

 This is true: the milestone releases matter but are not given enough focus,
 and they ship (for the most part) regardless of the items in them, given
 they are not long-lived, etc.

 So I'm all for reforms around processes we apply.

 If there's one thing OpenStack is good at, it's change.


 That said, I don't believe the real problem here is that we don't
 generate project tarballs frequently enough.

 Major problems I see as critical to tackle in our dev process are:

 - - enforced spec/dev mode. Solution: allow proposing (and approving) a
 reasonable spec at any moment in the cycle; allow code drops for
 approved specs at any moment in the cycle (except pre-release
 stabilization time); stop targeting specs: if it's sane now, it's probably
 still sane N+2 cycles from now.

 I'd say this is fine for specs that are small and people agree can easily
 be merged. I'd say this is not the case for large features near the end of
 the release which are unlikely to gather enough review momentum to actually
 merge.


 - - core team rubber stamping a random set of specs and putting -2 on
 all other specs due to project priorities. Solution: stop pretending the

 core team (or core drivers) can reasonably estimate review and coding
 resources for the next cycle. Instead, allow the community to decide
 what's worth the effort by approving all technically reasonable specs
 and allowing 

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Kyle Mestery
On Mon, Mar 2, 2015 at 3:38 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Mon, Mar 2, 2015 at 9:59 AM, Kyle Mestery mest...@mestery.com wrote:

 On Mon, Mar 2, 2015 at 9:57 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi Daniel,

 thanks for a clear write-up of the matter and food for thought.

 I think the idea of having a smoother development mode that would not
 make people wait 6+ months to release a new feature is great.

 ++


 It's insane to expect that feature priorities won't ever slightly
 shift in the next 6 months. Having a limited list of features targeted
 for the next 6 months is prone to mistakes, since people behind some


 * Sure, we have had a few things that popped up, nova EC2 split for
 example. But this is fairly rare.


 approved features may need to postpone the effort for any number of
 reasons (personal, job rush, bad resource allocation, ...), and we

 then end up with approved specs with no actual code drops, using
 review 'slots' that would be better spent on other features that were
 not lucky enough to get a rubber stamp during the spec phase. Prior resource


 * As stated below, spec approval is very much not rubber stamping.
 * As stated below, this doesn't even make sense: we are *not* using review
 slots.


 allocation would probably work somehow if we were all working for the same
 company that defined our priorities, but that's not the case.


 It should be noted that even though Nova is using slots for reviews,
 Neutron is not. I've found that it's hard to try and slot people in to
 review specific things. During Juno I tried this for Neutron, and it failed
 miserably. For Kilo in Neutron, we're not using slots but instead I've
 tried to highlight the approved specs of Essential and High priority
 for review by all reviewers, core and non-core included. It's gone OK, but
 the reality is you can't force people to review things. There are steps
 submitters can take to try and get timely review (lots of small, easy to
 digest patches, quick turnaround of comments, engagement in IRC and ML,
 etc.).


 So this is a big fat lie, one that others believe as well. Nova is *not*
 using slots for reviews. We discussed using slots for reviews but did not
 adopt them.


But I read it on the internet, it must be true.

As I said in a prior email, I'm sorry for that. I recalled reading about
nova's use of slots.





 Anecdotally, in neutron, we have two Essential blueprints for Kilo,
 and there are no code drops or patches in review for any of those, so
 I would expect them to fail to merge. At the same time, I will need to
 wait for the end of Kilo to consider adding support for guru reports
 to the project. Or in oslo world, I will need to wait for Liberty to
 introduce features in oslo.policy that are needed by neutron to switch
 to it, etc.

 To be fair, there are many reasons those two Essential BPs do not have
 code. I still expect the Pecan-focused one to have code, but I already moved
 the Plugin one out of Kilo at this point because there was no chance the
 code would land.

 But I get your point here. I think this thread has highlighted the fact
 that the BP/spec process worked to some extent, but for small things, the
 core reviewer team should have the ability to say "Yes, we can easily merge
 that, let's approve that spec" even if it's late in the cycle.


 Another problem is that currently milestones are used merely for
 targeting bugs and features, but no one really cares about whether the
 actual milestone shipment actually works. Again, a story from neutron
 world: Kilo-1 was shipped in the middle of advanced services split,
 with some essential patches around distutils setup missing (no proper
 migration plan applied, conflicting config files in neutron and *aas
 repos, etc.)

 This is true: the milestone releases matter but are not given enough
 focus, and they ship (for the most part) regardless of the items in them,
 given they are not long-lived, etc.

 So I'm all for reforms around processes we apply.

 If there's one thing OpenStack is good at, it's change.


 That said, I don't believe the real problem here is that we don't
 generate project tarballs frequently enough.

 Major problems I see as critical to tackle in our dev process are:

 - - enforced spec/dev mode. Solution: allow proposing (and approving) a
 reasonable spec at any moment in the cycle; allow code drops for
 approved specs at any moment in the cycle (except pre-release
 stabilization time); stop targeting specs: if it's sane now, it's probably
 still sane N+2 cycles from now.

 I'd say this is fine for specs that are small and people agree can
 easily be merged. I'd say this is not the case for large features near the
 end of the release which are unlikely to gather enough review momentum to
 actually merge.


 - - core team rubber stamping a random set of specs and putting -2 on
 all other specs due to project priorities. Solution: stop 

Re: [openstack-dev] [OSSN 0044] Older versions of noVNC allow session theft

2015-03-02 Thread Solly Ross
Hi!

I just wanted to note that noVNC 0.5.1 is slated to be in Fedora 22 and
is currently in EPEL testing for EPEL 6 and EPEL 7
(https://apps.fedoraproject.org/packages/novnc).

Best Regards,
Solly Ross

- Original Message -
 From: Nathan Kinder nkin...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, March 2, 2015 4:09:06 PM
 Subject: [openstack-dev] [OSSN 0044] Older versions of noVNC allow session
 theft
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Older versions of noVNC allow session theft
 - ---
 
 ### Summary ###
 Commonly packaged versions of noVNC allow an attacker to hijack user
 sessions even when TLS is enabled. noVNC fails to set the secure flag
 when setting cookies containing an authentication token.
 
 ### Affected Services / Software ###
 Nova, when embedding noVNC prior to v0.5
 
 ### Discussion ###
 Versions of noVNC prior to October 28, 2013 do not properly set the
 secure flag on cookies for pages served over TLS. Since noVNC stores
 authentication tokens in these cookies, an attacker who can modify
 user traffic can steal these tokens and connect to the VNC session.
 
 Affected deployments can be identified by looking for the secure
 flag on the token cookie set by noVNC on TLS-enabled installations. If
 the secure flag is missing, the installation is vulnerable.
 
 At the time of writing, Debian, Ubuntu and Fedora do not provide
 versions of this package with the appropriate patch.
 
 ### Recommended Actions ###
 noVNC should be updated to version 0.5 or later. If this is not
 possible, the upstream patch should be applied individually.
 
 Upstream patch:
 https://github.com/kanaka/noVNC/commit/ad941faddead705cd611921730054767a0b32dcd
 
 ### Contacts / References ###
 This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0044
 Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1420942
 OpenStack Security ML : openstack-secur...@lists.openstack.org
 OpenStack Security Group : https://launchpad.net/~openstack-ossg
 CVE: in progress-http://www.openwall.com/lists/oss-security/2015/02/17/1
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1
 
 iQEcBAEBAgAGBQJU9NFyAAoJEJa+6E7Ri+EV5soH/3xK10vI3I4CM8Uhyk8pZcgA
 5+s7ukrcQWymExN4XGDRB5b2hwfmTpHjOJAkgLNvP7edNezE6QvXit6cBBNoXUo2
 nW/iC7QKmu7oS56F+OpqFf+PZNmxDqCF40ec9pjt0id5V/1cvePH+Vc9Kuus6Lig
 LwsIG4A8tRiCsN5d2OOdGULSBhCN/yCdDKbf2mdaB4Ebimb2+6c7Nfs1iskOIZAm
 Me0jC2a0rPP07Fh5dnS+4uDkAk+BU5UIrs64Ua63AQuvC6evHnMF6uByrFdATxk7
 DgDftsY/4ahexV6rTIBvjzbTngmOGWaegknH1dE2Peuv32fe6v3c68LD8lG6BgM=
 =SUiL
 -END PGP SIGNATURE-
 
 



Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-02 Thread Steve Baker

On 03/03/15 00:56, Chris Dent wrote:


I (and a few others) have been using gabbi[1] for a couple of months now,
and it has proven very useful and evolved a bit, so I thought it would be
worthwhile to follow up my original message and give an update.

Some recent reviews[2] give a sample of how it can be used to validate
an existing API as well as search for less-than-perfect HTTP behavior
(e.g. sending a 404 when a 405 would be correct).

Regular use has led to some important changes:

* It can now be integrated with other tox targets so it can run
  alongside other functional tests.
* Individual tests can be xfailed and skipped. An entire YAML test
  file can be skipped.
* For those APIs which provide insufficient hypermedia support, the
  ability to inspect and reference the prior test and use template
  variables in the current request has been expanded (with support for
  environment variables pending a merge).

My original motivation for creating the tool was to make it easier to
learn APIs by causing a body of readable YAML files to exist. This
remains important, but what I've found is that writing the tests is
itself an incredible tool. Not only is it very easy to write tests
(throw some stuff at a URL and see what happens) and find (many) bugs
as a result; the exploratory nature of test writing drives a
learning process.

You'll note that the reviews below are just the YAML files. That's
because the test loading and fixture python code is already merged.
Adding tests is just a matter of adding more YAML. An interesting
trick is to run a small segment of the gabbi tests in a project (e.g.
just one file that represents one type of resource) while producing
coverage data. Reviewing the coverage of just the controller for that
resource can help drive test creation and separation.
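For concreteness, a minimal sketch of what such a YAML file can look like, using gabbi's documented `url`/`status` keys (the paths are hypothetical):

```yaml
tests:
  - name: root responds
    url: /
    status: 200

  - name: missing resource is a clean 404
    url: /no-such-thing
    status: 404
```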

[1] http://gabbi.readthedocs.org/en/latest/
[2] https://review.openstack.org/#/c/159945/
https://review.openstack.org/#/c/159204/

This looks very useful; I'd like to use it in the heat functional
tests job.


Is it possible to write tests which do a POST/PUT and then a loop of GETs
until some condition is met (a response_json_paths match on IN_PROGRESS
-> COMPLETE)?


This would allow for testing of non-atomic PUT/POST operations for
entities like nova servers, heat stacks, etc.
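As far as I know gabbi has no built-in polling, so a test fixture or helper along these lines (the names are hypothetical) is one way to approximate the loop described above:

```python
import time

def wait_for_state(fetch, done=("COMPLETE",), failed=("FAILED",),
                   timeout=60, interval=2):
    """Poll fetch() until it returns a terminal state.

    `fetch` is any callable returning the resource's current state
    string, e.g. a closure that does a GET and extracts a JSON field.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = fetch()
        if state in done:
            return state
        if state in failed:
            raise RuntimeError("resource entered state %s" % state)
        time.sleep(interval)
    raise RuntimeError("timed out waiting for %s" % (done,))
```

A fixture could run this between two YAML-defined tests: one that issues the POST/PUT and one that asserts the final COMPLETE representation.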




[openstack-dev] [OSSN 0044] Older versions of noVNC allow session theft

2015-03-02 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Older versions of noVNC allow session theft
- ---

### Summary ###
Commonly packaged versions of noVNC allow an attacker to hijack user
sessions even when TLS is enabled. noVNC fails to set the secure flag
when setting cookies containing an authentication token.

### Affected Services / Software ###
Nova, when embedding noVNC prior to v0.5

### Discussion ###
Versions of noVNC prior to October 28, 2013 do not properly set the
secure flag on cookies for pages served over TLS. Since noVNC stores
authentication tokens in these cookies, an attacker who can modify
user traffic can steal these tokens and connect to the VNC session.

Affected deployments can be identified by looking for the secure
flag on the token cookie set by noVNC on TLS-enabled installations. If
the secure flag is missing, the installation is vulnerable.

At the time of writing, Debian, Ubuntu and Fedora do not provide
versions of this package with the appropriate patch.
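As a quick illustration of the check described above, one can inspect the Set-Cookie header returned by a TLS-enabled installation. The cookie name `token` follows the advisory's description; the helper itself is a hypothetical sketch, not part of noVNC:

```python
def cookie_is_secure(set_cookie_header, cookie_name="token"):
    """Return True if the named cookie carries the 'secure' attribute."""
    parts = [p.strip() for p in set_cookie_header.split(";")]
    if not parts or not parts[0].startswith(cookie_name + "="):
        return False
    return any(p.lower() == "secure" for p in parts[1:])

# A vulnerable installation sets something like:
#     token=abc123; path=/
# while a patched noVNC over TLS sets:
#     token=abc123; secure; path=/
```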

### Recommended Actions ###
noVNC should be updated to version 0.5 or later. If this is not
possible, the upstream patch should be applied individually.

Upstream patch:
https://github.com/kanaka/noVNC/commit/ad941faddead705cd611921730054767a0b32dcd

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0044
Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1420942
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
CVE: in progress-http://www.openwall.com/lists/oss-security/2015/02/17/1
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJU9NFyAAoJEJa+6E7Ri+EV5soH/3xK10vI3I4CM8Uhyk8pZcgA
5+s7ukrcQWymExN4XGDRB5b2hwfmTpHjOJAkgLNvP7edNezE6QvXit6cBBNoXUo2
nW/iC7QKmu7oS56F+OpqFf+PZNmxDqCF40ec9pjt0id5V/1cvePH+Vc9Kuus6Lig
LwsIG4A8tRiCsN5d2OOdGULSBhCN/yCdDKbf2mdaB4Ebimb2+6c7Nfs1iskOIZAm
Me0jC2a0rPP07Fh5dnS+4uDkAk+BU5UIrs64Ua63AQuvC6evHnMF6uByrFdATxk7
DgDftsY/4ahexV6rTIBvjzbTngmOGWaegknH1dE2Peuv32fe6v3c68LD8lG6BgM=
=SUiL
-END PGP SIGNATURE-



Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Clint Byrum
Excerpts from Doug Wiegley's message of 2015-03-02 12:47:14 -0800:
 
  On Mar 2, 2015, at 1:13 PM, James E. Blair cor...@inaugust.com wrote:
  
  Stefano branched this thread from an older one to talk about
  auto-abandon.  In the previous thread, I believe I explained my
  concerns, but since the topic split, perhaps it would be good to
  summarize why this is an issue.
  
  1) A core reviewer forcefully abandoning a change contributed by someone
  else can be a very negative action.  It's one thing for a contributor to
  say "I have abandoned this effort"; it's very different for a core
  reviewer to do that for them.  It is a very strong action and signal,
  and should not be taken lightly.
 
 I'm not arguing against better tooling, queries, or additional comment 
 warnings.  All of those are good things. But I think some of the push back in 
 this thread is challenging this notion that abandoning is negative, which you 
 seem to be treating as a given.
 
 I don't. At all. And I don't think I'm alone.
 
 I also don't understand your point that the review becomes invisible, since 
 it's a simple gerrit query to see closed reviews, and your own contention is 
 that gerrit queries solve this in the other direction, so it can't be too 
 hard in this one, either. I've done that many times to find mine and others' 
 abandoned reviews, the most recent example being resurrecting all of the 
 lbaas v2 reviews after it slipped out of juno and eventually was put into 
 its own repo.  Some of those reviews were abandoned, others not, and it was 
 roughly equivalent to find them, open or not, and then re-tool those for the 
 latest changes to master.
 

You are correct in saying that just like users can query for a proper
queue of things they should look at, people can also query for abandoned
patches.

However, I'm not sure these are actually the same things.

One is a simple query to hide things you don't want.

The other is a simple query to find things you don't know are missing.



Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Stefano Maffulli
On Mon, 2015-03-02 at 13:35 -0800, Clay Gerrard wrote:
 I think Tom's suggested "help us help you" is a great pre-abandon
 warning.  In swift, as often as not, the last message ended with
 something like "you can catch me on freenode in #openstack-swift if
 you have any questions"  
 
Good, this thread is starting to converge.
 
 But I really can't fathom what's the harm in closing abandoned patches
 as abandoned?

Jim Blair gave a lot of good reasons for not abandoning *automatically*
and instead leaving the decision to abandon to humans only. 

His message is worth reading again:
http://lists.openstack.org/pipermail/openstack-dev/2015-March/058104.html

/stef




Re: [openstack-dev] [nova] Shared storage support

2015-03-02 Thread Rochelle Grober


-----Original Message-----
From: Jay Pipes
Sent: Monday, March 02, 2015 16:24

On 02/25/2015 06:41 AM, Daniel P. Berrange wrote:
 On Wed, Feb 25, 2015 at 02:08:32PM +, Gary Kotton wrote:
 I understand that this is a high or critical bug but I think that
 we need to discuss more on it and try have a more robust model.

 What I'm not seeing from the bug description is just what part of
 the scheduler needs the ability to have total summed disk across
 every host in the cloud.

The scheduler does not need to know this information at all. One might 
say that a cloud administrator would want to know the total free disk 
space available in their cloud -- or at least get notified once the 
total free space falls below some threshold. IMO, there are better ways 
of accomplishing such a capacity-management task: use an NRPE/monitoring 
check that simply does a `df` or similar command every so often against 
the actual filesystem backend.
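For instance, a minimal NRPE-style check along the lines Jay suggests might look like this. The thresholds, and the idea of pointing it at the shared-storage mount, are assumptions for the sketch (Python 3's `shutil.disk_usage`; on Python 2 one would use `os.statvfs`):

```python
import shutil

# Nagios plugin convention: return code 0 = OK, 1 = WARNING, 2 = CRITICAL.
# Thresholds here are illustrative, not recommended values.
def check_free_space(path, warn_pct=20.0, crit_pct=10.0):
    usage = shutil.disk_usage(path)   # like `df` for one mount point
    free_pct = 100.0 * usage.free / usage.total
    if free_pct < crit_pct:
        return 2, "CRITICAL: %.1f%% free on %s" % (free_pct, path)
    if free_pct < warn_pct:
        return 1, "WARNING: %.1f%% free on %s" % (free_pct, path)
    return 0, "OK: %.1f%% free on %s" % (free_pct, path)

code, message = check_free_space("/")
print(message)
```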

IMHO, this isn't something that needs to be fronted by a 
management/admin-only REST API that needs to iterate over a potentially 
huge number of compute nodes just to enable some pretty graphical 
front-end that shows some pie chart of available disk space.
 
[Rockyg] ++  The scheduler doesn't need to know anything about the individual 
compute nodes attached to *the same* shared storage to do placement.  The scheduler 
can't increase or decrease the physical amount of storage available to the set 
of nodes. The hardware monitor for the shared storage provides the total amount 
of disk on the system, the amount already used and the amount still unused.  
Wherever the scheduler starts a new VM in this node set, it will see the same 
amount of available disk.

 What is the actual bad functional behaviour that results from this
 bug that means it is a high priority issue to fix ?

The higher priority thing would be to remove the wonky os-hypervisors 
REST API extension and its related cruft. This API extension is fatally 
flawed in a number of ways, including assumptions about things such as 
underlying providers of disk/volume resources and misleading 
relationships between the servicegroup API and the compute nodes table.

[Rockyg] IMO the most important piece of information from OpenStack software, for an 
operator with a set of nodes sharing a storage backend, is the current 
total commitment (more likely over-commitment) of the storage capacity on the 
attached node set.  That yields a simple go/no-go for starting 
another VM on the set, or a warning/error that the storage is 
over-committed and more is needed.
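The go/no-go figure Rocky describes could be computed as simply as this. The function, its names, and the allowed over-commit ratio are illustrative assumptions, not existing nova code:

```python
def storage_commitment(backend_total_gb, vm_allocations_gb,
                       max_overcommit=1.5):
    """Compare storage promised to VMs against the shared backend's real
    capacity; return (commitment ratio, ok-to-place-more)."""
    committed = sum(vm_allocations_gb)
    ratio = committed / float(backend_total_gb)
    return ratio, ratio < max_overcommit

# 900 GB promised against a 1000 GB backend: ratio 0.9, placement allowed
ratio, ok = storage_commitment(1000, [200, 300, 400])
```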

--Rocky



Best,
-jay




Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Clint Byrum
Excerpts from Angus Salkeld's message of 2015-03-02 17:08:15 -0800:
 On Tue, Mar 3, 2015 at 9:45 AM, James Bottomley 
 james.bottom...@hansenpartnership.com wrote:
 
  On Tue, 2015-02-24 at 12:05 +0100, Thierry Carrez wrote:
   Daniel P. Berrange wrote:
[...]
The key observations

   
The first key observation from the schedule is that although we have
a 6 month release cycle, we in fact make 4 releases in that six
months because there are 3 milestone releases approx 6-7 weeks apart
from each other, in addition to the final release. So one of the key
burdens of a more frequent release cycle is already being felt, to
some degree.
   
The second observation is that thanks to the need to support a
continuous deployment model, the GIT master branches are generally
considered to be production ready at all times. The tree does not
typically go through periods of major instability that can be seen
in other projects, particularly those which lack such comprehensive
testing infrastructure.
   
The third observation is that due to the relatively long cycle, and
increasing amounts of process, the work accomplished during the
cycles is becoming increasingly bursty. This is in turn causing
unacceptably long delays for contributors when their work is unlucky
enough to not get accepted during certain critical short windows of
opportunity in the cycle.
   
The first two observations strongly suggest that the choice of 6
months as a cycle length is a fairly arbitrary decision that can be
changed without unreasonable pain. The third observation suggests a
much shorter cycle length would smooth out the bumps and lead to a
more efficient and satisfying development process for all involved.
  
   I think you're judging the cycle from the perspective of developers
   only. 6 months was not an arbitrary decision. Translations and
   documentation teams basically need a month of feature/string freeze in
   order to complete their work. Since we can't reasonably freeze one month
   every 2 months, we picked 6 months.
 
  Actually, this is possible: look at Linux, it freezes for 10 weeks of a
  12 month release cycle (or 6 weeks of an 8 week one).  More on this
  below.
 
   It's also worth noting that we were on a 3-month cycle at the start of
   OpenStack. That was dropped after a cataclysmic release that managed the
   feat of (a) not having anything significant done, and (b) have out of
   date documentation and translations.
  
   While I agree that the packagers and stable teams can opt to skip a
   release, the docs, translations or security teams don't really have that
   luxury... Please go beyond the developers needs and consider the needs
   of the other teams.
  
   Random other comments below:
  
[...]
Release schedule

   
First the releases would probably be best attached to a set of
pre-determined fixed dates that don't ever vary from year to year.
e.g. releases happen Feb 1st, Apr 1st, Jun 1st, Aug 1st, Oct 1st, and
Dec 1st. If a particular release slips, don't alter following release
dates, just shorten the length of the dev cycle, so it becomes fully
self-correcting. The even numbered months are suggested to avoid a
release landing in xmas/new year :-)
  
   The Feb 1 release would probably be pretty empty :)
  
[...]
Stable branches
---
   
The consequences of a 2 month release cycle appear fairly severe for
the stable branch maint teams at first sight. This is not, however,
an insurmountable problem. The linux kernel shows an easy way forward
with their approach of only maintaining stable branches for a subset
of major releases, based around user / vendor demand. So it is still
entirely conceivable that the stable team only provide stable branch
releases for 2 out of the 6 yearly releases, i.e. no additional burden
over what they face today. Of course they might decide they want to
do more stable branches, but maintain each for a shorter time. So I
could equally see them choosing to do 3 or 4 stable branches a year.
Whatever is most effective for those involved and those consuming
them is fine.
  
   Stable branches may have the luxury of skipping releases and designate a
   stable one from time to time (I reject the Linux comparison because
   the kernel is at a very different moment in software lifecycle). The
   trick being, making one release special is sure to recreate the peak
   issues you're trying to solve.
 
  I don't disagree with the observation about different points in the
  lifecycle, but perhaps it might be instructive to ask if the linux
  kernel ever had a period in its development history that looks somewhat
  like OpenStack does now.  I would claim it did: before 2.6, we had the
  odd/even develop/stabilise cycle.  The theory driving it was that we
  

Re: [openstack-dev] [nova] Shared storage support

2015-03-02 Thread Jay Pipes

On 02/25/2015 06:41 AM, Daniel P. Berrange wrote:

On Wed, Feb 25, 2015 at 02:08:32PM +, Gary Kotton wrote:

I understand that this is a high or critical bug but I think that
we need to discuss more on it and try have a more robust model.


What I'm not seeing from the bug description is just what part of
the scheduler needs the ability to have total summed disk across
every host in the cloud.


The scheduler does not need to know this information at all. One might 
say that a cloud administrator would want to know the total free disk 
space available in their cloud -- or at least get notified once the 
total free space falls below some threshold. IMO, there are better ways 
of accomplishing such a capacity-management task: use an NPRE/monitoring 
check that simply does a `df` or similar command every so often against 
the actual filesystem backend.
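A check of that sort needs nothing from Nova at all. A minimal sketch of such a plugin, assuming the shared-storage backend is visible as an ordinary mount point (the path and threshold below are made up for illustration, not taken from any real deployment):

```python
import shutil

def disk_capacity_alert(path, threshold=0.9):
    """Return True when used space on the filesystem backing `path`
    exceeds `threshold`, expressed as a fraction of total capacity."""
    usage = shutil.disk_usage(path)  # (total, used, free) in bytes
    return (usage.used / usage.total) > threshold

# An NRPE plugin wrapping this would exit non-zero (and print a message)
# whenever the check returns True, e.g. for a hypothetical shared
# instance store:
#   disk_capacity_alert("/var/lib/nova/instances", threshold=0.85)
```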


IMHO, this isn't something that needs to be fronted by a 
management/admin-only REST API that needs to iterate over a potentially 
huge number of compute nodes just to enable some pretty graphical 
front-end that shows some pie chart of available disk space.



What is the actual bad functional behaviour that results from this
bug that means it is a high priority issue to fix ?


The higher priority thing would be to remove the wonky os-hypervisors 
REST API extension and its related cruft. This API extension is fatally 
flawed in a number of ways, including assumptions about things such as 
underlying providers of disk/volume resources and misleading 
relationships between the servicegroup API and the compute nodes table.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy

2015-03-02 Thread Li, Chen
Sorry, what do you mean by "Double-check to make sure that it's enabled"?

I do set the following in my local.conf:
   enable_service n-nonvc

   NOVA_VNC_ENABLED=True
   NOVNCPROXY_URL="http://192.168.6.91:6080/vnc_auto.html"
   VNCSERVER_LISTEN=0.0.0.0
   VNCSERVER_PROXYCLIENT_ADDRESS=192.168.6.91

Also, I tried to install the packages novnc and python-novnc via apt-get install.
Then I re-ran ./stack.sh, but the devstack installation failed, complaining 
that the version of the six module is wrong.
To make my devstack work again, I removed the 2 packages, but the devstack 
installation still failed with the same issue.

Thanks.
-chen

-Original Message-
From: Solly Ross [mailto:sr...@redhat.com] 
Sent: Tuesday, March 03, 2015 12:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy

Double-check to make sure that it's enabled.  A couple of months ago, noVNC got 
removed from the standard install because devstack was installing it from 
GitHub.

- Original Message -
 From: Chen Li chen...@intel.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, March 1, 2015 7:14:51 PM
 Subject: Re: [openstack-dev] [Devstack] Can't start service 
 nova-novncproxy
 
 That's the most confusing part.
 I don't even have a log for service nova-novncproxy.
 
 Thanks.
 -chen
 
 -Original Message-
 From: Kashyap Chamarthy [mailto:kcham...@redhat.com]
 Sent: Monday, March 02, 2015 12:16 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Devstack] Can't start service 
 nova-novncproxy
 
 On Sat, Feb 28, 2015 at 06:20:54AM +, Li, Chen wrote:
  Hi all,
  
  I'm trying to install a fresh all-in-one openstack environment by devstack.
  After the installation, all services looks well, but I can't open 
  instance console in Horizon.
  
  I did a little check, and found service nova-novncproxy was not started !
 
 What do you see in your 'screen-n-vnc.log' (I guess) log?
 
 I don't normally run Horizon or nova-vncproxy (only n-cpu, n-sch, 
 n-cond), these are the ENABLED_SERVICES in my minimal DevStack config 
 (Nova, Neutron, Keystone and Glance):
 
 
 ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit
 ,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
 
 [1]
 https://kashyapc.fedorapeople.org/virt/openstack/2-minimal_devstack_lo
 calrc.conf
 
  Anyone has idea why this happened ?
  
  Here is my local.conf : http://paste.openstack.org/show/183344/
  
  My os is:
  Ubuntu 14.04 trusty
  3.13.0-24-generic
  
  
 
 
 --
 /kashyap
 
 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread James Bottomley
On Tue, 2015-02-24 at 12:05 +0100, Thierry Carrez wrote:
 Daniel P. Berrange wrote:
  [...]
  The key observations
  
  
  The first key observation from the schedule is that although we have
  a 6 month release cycle, we in fact make 4 releases in that six
  months because there are 3 milestone releases approx 6-7 weeks apart
  from each other, in addition to the final release. So one of the key
  burdens of a more frequent release cycle is already being felt, to
  some degree.
  
  The second observation is that thanks to the need to support a
  continuous deployment model, the GIT master branches are generally
  considered to be production ready at all times. The tree does not
  typically go through periods of major instability that can be seen
  in other projects, particularly those which lack such comprehensive
  testing infrastructure.
  
  The third observation is that due to the relatively long cycle, and
  increasing amounts of process, the work accomplished during the
  cycles is becoming increasingly bursty. This is in turn causing
  unacceptably long delays for contributors when their work is unlucky
  enough to not get accepted during certain critical short windows of
  opportunity in the cycle.
  
  The first two observations strongly suggest that the choice of 6
  months as a cycle length is a fairly arbitrary decision that can be
  changed without unreasonable pain. The third observation suggests a
  much shorter cycle length would smooth out the bumps and lead to a
  more efficient and satisfying development process for all involved.
 
 I think you're judging the cycle from the perspective of developers
 only. 6 months was not an arbitrary decision. Translations and
 documentation teams basically need a month of feature/string freeze in
 order to complete their work. Since we can't reasonably freeze one month
 every 2 months, we picked 6 months.

Actually, this is possible: look at Linux, it freezes for 10 weeks of a
12 month release cycle (or 6 weeks of an 8 week one).  More on this
below.

 It's also worth noting that we were on a 3-month cycle at the start of
 OpenStack. That was dropped after a cataclysmic release that managed the
 feat of (a) not having anything significant done, and (b) have out of
 date documentation and translations.
 
 While I agree that the packagers and stable teams can opt to skip a
 release, the docs, translations or security teams don't really have that
 luxury... Please go beyond the developers needs and consider the needs
 of the other teams.
 
 Random other comments below:
 
  [...]
  Release schedule
  
  
  First the releases would probably be best attached to a set of
  pre-determined fixed dates that don't ever vary from year to year.
  e.g. releases happen Feb 1st, Apr 1st, Jun 1st, Aug 1st, Oct 1st, and
  Dec 1st. If a particular release slips, don't alter following release
  dates, just shorten the length of the dev cycle, so it becomes fully
  self-correcting. The even numbered months are suggested to avoid a
  release landing in xmas/new year :-)
 
 The Feb 1 release would probably be pretty empty :)
 
  [...]
  Stable branches
  ---
  
  The consequences of a 2 month release cycle appear fairly severe for
  the stable branch maint teams at first sight. This is not, however,
  an insurmountable problem. The linux kernel shows an easy way forward
  with their approach of only maintaining stable branches for a subset
  of major releases, based around user / vendor demand. So it is still
  entirely conceivable that the stable team only provide stable branch
  releases for 2 out of the 6 yearly releases, i.e. no additional burden
  over what they face today. Of course they might decide they want to
  do more stable branches, but maintain each for a shorter time. So I
  could equally see them choosing to do 3 or 4 stable branches a year.
  Whatever is most effective for those involved and those consuming
  them is fine.
 
 Stable branches may have the luxury of skipping releases and designate a
 stable one from time to time (I reject the Linux comparison because
 the kernel is at a very different moment in software lifecycle). The
 trick being, making one release special is sure to recreate the peak
 issues you're trying to solve.

I don't disagree with the observation about different points in the
lifecycle, but perhaps it might be instructive to ask if the linux
kernel ever had a period in its development history that looks somewhat
like OpenStack does now.  I would claim it did: before 2.6, we had the
odd/even develop/stabilise cycle.  The theory driving it was that we
needed a time for everyone to develop then a time for everyone to help
make stable.  You yourself said this in the other thread:

 Joe Gordon wrote:
  [...]
  I think a lot of the frustration with our current cadence comes out of
  the big stop everything (development, planning etc.), and stabilize the
  release process. Which in turn isn't 

Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Angus Salkeld
On Tue, Mar 3, 2015 at 9:45 AM, James Bottomley 
james.bottom...@hansenpartnership.com wrote:

 On Tue, 2015-02-24 at 12:05 +0100, Thierry Carrez wrote:
  Daniel P. Berrange wrote:
   [...]
   The key observations
   
  
   The first key observation from the schedule is that although we have
   a 6 month release cycle, we in fact make 4 releases in that six
   months because there are 3 milestone releases approx 6-7 weeks apart
   from each other, in addition to the final release. So one of the key
   burdens of a more frequent release cycle is already being felt, to
   some degree.
  
   The second observation is that thanks to the need to support a
   continuous deployment model, the GIT master branches are generally
   considered to be production ready at all times. The tree does not
   typically go through periods of major instability that can be seen
   in other projects, particularly those which lack such comprehensive
   testing infrastructure.
  
   The third observation is that due to the relatively long cycle, and
   increasing amounts of process, the work accomplished during the
   cycles is becoming increasingly bursty. This is in turn causing
   unacceptably long delays for contributors when their work is unlucky
   enough to not get accepted during certain critical short windows of
   opportunity in the cycle.
  
   The first two observations strongly suggest that the choice of 6
   months as a cycle length is a fairly arbitrary decision that can be
   changed without unreasonable pain. The third observation suggests a
   much shorter cycle length would smooth out the bumps and lead to a
   more efficient and satisfying development process for all involved.
 
  I think you're judging the cycle from the perspective of developers
  only. 6 months was not an arbitrary decision. Translations and
  documentation teams basically need a month of feature/string freeze in
  order to complete their work. Since we can't reasonably freeze one month
  every 2 months, we picked 6 months.

 Actually, this is possible: look at Linux, it freezes for 10 weeks of a
 12 month release cycle (or 6 weeks of an 8 week one).  More on this
 below.

  It's also worth noting that we were on a 3-month cycle at the start of
  OpenStack. That was dropped after a cataclysmic release that managed the
  feat of (a) not having anything significant done, and (b) have out of
  date documentation and translations.
 
  While I agree that the packagers and stable teams can opt to skip a
  release, the docs, translations or security teams don't really have that
  luxury... Please go beyond the developers needs and consider the needs
  of the other teams.
 
  Random other comments below:
 
   [...]
   Release schedule
   
  
   First the releases would probably be best attached to a set of
   pre-determined fixed dates that don't ever vary from year to year.
   e.g. releases happen Feb 1st, Apr 1st, Jun 1st, Aug 1st, Oct 1st, and
   Dec 1st. If a particular release slips, don't alter following release
   dates, just shorten the length of the dev cycle, so it becomes fully
   self-correcting. The even numbered months are suggested to avoid a
   release landing in xmas/new year :-)
 
  The Feb 1 release would probably be pretty empty :)
 
   [...]
   Stable branches
   ---
  
   The consequences of a 2 month release cycle appear fairly severe for
   the stable branch maint teams at first sight. This is not, however,
   an insurmountable problem. The linux kernel shows an easy way forward
   with their approach of only maintaining stable branches for a subset
   of major releases, based around user / vendor demand. So it is still
   entirely conceivable that the stable team only provide stable branch
   releases for 2 out of the 6 yearly releases, i.e. no additional burden
   over what they face today. Of course they might decide they want to
   do more stable branches, but maintain each for a shorter time. So I
   could equally see them choosing to do 3 or 4 stable branches a year.
   Whatever is most effective for those involved and those consuming
   them is fine.
 
  Stable branches may have the luxury of skipping releases and designate a
  stable one from time to time (I reject the Linux comparison because
  the kernel is at a very different moment in software lifecycle). The
  trick being, making one release special is sure to recreate the peak
  issues you're trying to solve.

 I don't disagree with the observation about different points in the
 lifecycle, but perhaps it might be instructive to ask if the linux
 kernel ever had a period in its development history that looks somewhat
 like OpenStack does now.  I would claim it did: before 2.6, we had the
 odd/even develop/stabilise cycle.  The theory driving it was that we
 needed a time for everyone to develop then a time for everyone to help
 make stable.  You yourself said this in the other thread:

  Joe Gordon wrote:
   [...]
   I think 

[openstack-dev] [stable] [Glance] Nomination for glance-stable-maint

2015-03-02 Thread Nikhil Komawar
Hi all,


I would like to propose Zhi Yan Liu for the role of stable maintainer for 
the Glance program.


Zhi Yan is currently a Glance core member as well as a go-to person for various 
features in Glance. He is also handling the additional responsibility of Oslo 
liaison for Glance quite well. Thus, he has helped us with porting the 
necessary changes to the stable/* branches and patching Oslo-related code 
in Glance.


The size of the code base in Glance is growing, and he has expressed interest in 
being a stable-maint. Hence, I strongly feel that he would be a good addition 
to the team.


Please provide your votes on this proposal by replying directly to this 
email, sending a private email, or messaging me on IRC.


In anticipation for your valuable input,

Sincerely,
-Nikhil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Tom Fifield
On 03/03/15 05:35, Clay Gerrard wrote:
 
 
 On Mon, Mar 2, 2015 at 8:07 AM, Duncan Thomas duncan.tho...@gmail.com
 mailto:duncan.tho...@gmail.com wrote:
 
 Why do you say auto-abandon is the wrong tool? I've no problem with
 the 1 week warning if somebody wants to implement it - I can see the
 value. A change-set that has been ignored for X weeks is pretty much
 the dictionary definition of abandoned
 
 
 +1 this
 
 I think Tom's suggested "help us help you" is a great pre-abandon
 warning.  In swift as often as not the last message ended with something
 like "you can catch me on freenode in #openstack-swift if you have any
 questions"
 
 But I really can't fathom what's the harm in closing abandoned patches
 as abandoned?

It might be an interesting exercise to consider how areas like
feedback, criticism or asking for help could potentially differ in
cultures and levels of skill other than the one with which one may be
most familiar.

Now, look at the wording of my above sentence and consider whether you'd
ever write it that way. Pretty damn indirect and vague, right?

It turns out that there are large swathes of the world that operate in
this much more nuanced way. Taking direct action against something
someone has produced using (from their perspective) strong/emotive
language can be at basically the same level as punching someone in the
face and yelling "You suck!" in other areas :)

I'm sure you are aware of these things - I don't mean to preach, but I
thought it would be a good chance to explain how the "help us
help you" message might come across to these kinds of folks:
* This isn't your fault, it's OK!
* We're here to help, and you have permission to ask us for help.
* Here are some steps you can take, and you have permission to take
those steps.
* Here are some standard procedures that everyone follows, so if you
follow them you won't be caught standing out.
* If something happens after this, it's a random third party actor
that's doing it (the system), not a person criticising you.

Anyway, I guess I better dig up jeepyb again ...


 If the author doesn't care about the change enough to address the review
 comments (or failing tests!) and the core reviewers don't care about it
 enough to *fix it for them* - where do we think the change is going to
 go?!  It sounds like the argument is just that instead of using
 "abandoned" as an explicit description of an implicit state we can just
 filter these out of every view we use to look for something useful as
 "no changes for X weeks after negative feedback" rather than calling a
 spade a spade.
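 (For what it's worth, the implicit filter described above can already be
 expressed directly in Gerrit's change-search syntax; a sketch only, the
 age window and label cutoff here are assumptions:

 ```
 status:open age:4w label:Code-Review<=-1
 ```

 which matches open changes untouched for four weeks that carry a
 negative code review.)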
 
 I *mostly* look at patches that don't have feedback.  notmyname
 maintains the swift review dashboard AFAIK:
 
 http://goo.gl/r2mxbe
 
 It's possible that a pile of abandoned-changes-not-marked-as-abandoned
 wouldn't actually interrupt my work-flow.  But I would imagine
 maintaining the review dashboard might occasionally require looking at
 ALL the changes in the queue in an effort to look for a class of changes
 that aren't getting adequate feedback - that workflow might find the
 extra noise less than helpful.
 
 -Clay
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] API testing coverage in tempest

2015-03-02 Thread Rohan Kanade
Hi,

So I have been tracking API test coverage in tempest in an ad-hoc way: I
check the actual tempest tests against the API documentation for a
specific component, e.g. Neutron.

Is there a better way, or documentation maintained by the Tempest team, for
data about API test coverage per OpenStack project?
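In the meantime, the ad-hoc tally can at least be automated by grouping test identifiers by service. A rough sketch, assuming test IDs follow the `tempest.api.<service>.<...>` naming convention (the IDs below are made up for illustration):

```python
from collections import Counter

def coverage_by_service(test_ids):
    """Count tempest API tests per service, keyed on the path component
    after 'tempest.api.' (e.g. 'compute', 'network')."""
    counts = Counter()
    for tid in test_ids:
        parts = tid.split(".")
        if len(parts) > 2 and parts[0] == "tempest" and parts[1] == "api":
            counts[parts[2]] += 1
    return counts

# Example with made-up test identifiers:
ids = [
    "tempest.api.compute.servers.test_list_servers.ListServersTest.test_list",
    "tempest.api.compute.flavors.test_flavors.FlavorsTest.test_get",
    "tempest.api.network.test_networks.NetworksTest.test_create",
]
print(coverage_by_service(ids))  # Counter({'compute': 2, 'network': 1})
```

Comparing those counts against the documented API calls for each service would give a coarse coverage figure per project.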

Regards,
Rohan Kanade
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 3/3

2015-03-02 Thread Dugger, Donald D
Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)

1) Status on cleanup work - https://wiki.openstack.org/wiki/Gantt/kilo

(No need to discuss the ‘Remove direct nova DB access’ spec; it’s been approved 
☺)

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Need +A (workflow +1) for https://review.openstack.org/156940

2015-03-02 Thread Deepak Shetty
 Hi all,
Can someone give +A to https://review.openstack.org/156940 - we have
the rest. Need to get this merged for glusterfs CI to pass the
snapshot_when_volume_in_use testcases.

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Core nominations.

2015-03-02 Thread Nikhil Komawar
Hi all,


After thinking thoroughly about the proposed rotation and evaluating its 
pros and cons at this point in time, I would like to make an 
alternate proposal.


New Proposal:

  1.  We should go ahead with adding more core members now.
  2.  Come up with a plan and give additional notice for the rotation. Get it 
implemented one month into Liberty.

Reasoning:


Traditionally, the Glance program did not implement rotation. This was probably 
with good reason, as the program was small and the developers were working 
closely together and were aware of each other's daily activities. If we go 
ahead with this rotation, it would be implemented for the first time and would 
appear to have happened out of the blue.


It would be good for us to make a modest attempt at maintaining the friendly 
nature of the Glance development team, give them additional notice and 
preferably send them a common email informing the same. We should propose at 
least a tentative plan for rotation so that all the other core members are 
aware of their responsibilities. This brings me to my questions: is the proposed 
list for rotation comprehensive? What is the basis for leaving some of them 
out? What would be a fair policy, or some level of determinism in expectations? 
I believe that we should have input from the general Glance community (and the 
OpenStack community too) on the same.


In order for all this to be sorted out, I kindly request all the members to 
wait until after the k3 freeze, preferably until a time at which people would 
have a bit more time in their hand to look at their mailboxes for unexpected 
proposals of rotation. Once a decent proposal is set, we can announce the 
change-in-dynamics of the Glance program and get everyone interested familiar 
with it during the summit. Meanwhile, we should not block the currently active 
to-be-core members from doing great work. Hence, we should go ahead and add 
them to the list.


I hope that made sense. If you've specific concerns, I'm free to chat on IRC as 
well.


(otherwise) Thoughts?


Cheers,
-Nikhil

From: Alexander Tivelkov ativel...@mirantis.com
Sent: Tuesday, February 24, 2015 7:26 AM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Cc: krag...@gmail.com
Subject: Re: [openstack-dev] [Glance] Core nominations.

+1 on both proposals: rotation is definitely a step in right direction.



--
Regards,
Alexander Tivelkov

On Tue, Feb 24, 2015 at 1:19 PM, Daniel P. Berrange 
berra...@redhat.com wrote:
On Tue, Feb 24, 2015 at 10:47:18AM +0100, Flavio Percoco wrote:
 On 24/02/15 08:57 +0100, Flavio Percoco wrote:
 On 24/02/15 04:38 +, Nikhil Komawar wrote:
 Hi all,
 
 I would like to propose the following members to become part of the Glance 
 core
 team:
 
 Ian Cordasco
 Louis Taylor
 Mike Fedosin
 Hemanth Makkapati
 
 Please, yes!

 Actually - I hope this doesn't come out harsh - I'd really like to
 stop adding new cores until we clean up our current glance-core list.
 This has *nothing* to do with the 4 proposals mentioned above, they
 ALL have been doing an AMAZING work.

 However, I really think we need to start cleaning up our core's list
 and this sounds like a good chance to make these changes. I'd like to
 propose the removal of the following people from Glance core:

 - Brian Lamar
 - Brian Waldon
 - Mark Washenberger
 - Arnaud Legendre
 - Iccha Sethi
 - Eoghan Glynn
 - Dan Prince
 - John Bresnahan

 None of the folks in the above list have provided reviews or
 participated in Glance discussions, meetings or summit
 sessions. These are just signs that their focus has changed.

 While I appreciate their huge efforts in the past, I think it's time
 for us to move forward.

 It goes without saying that all of the folks above are more than
 welcome to join the glance-core team again if their focus goes back to
 Glance.

Yep, rotating out inactive members is an important step to ensure that
the community has clear view of who the current active leadership is.

Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [oslo.policy] graduation status

2015-03-02 Thread Doug Hellmann


On Mon, Mar 2, 2015, at 05:01 AM, Osanai, Hisashi wrote:
 oslo.policy folks,
 
 I'm thinking about implementing policy-based access control in swift 
 using oslo.policy [1], so I would like to know oslo.policy's status 
 toward graduation.
 
 [1]
 https://github.com/openstack/oslo-specs/blob/master/specs/kilo/graduate-policy.rst

We're making good progress and expect to have a public release with a
stable API fairly soon.

Doug
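For readers unfamiliar with the concept, the kind of check a policy library performs can be sketched with a toy rule evaluator. This is purely illustrative — it mimics the spirit of oslo.policy's rule strings ("role:...", "is_admin:..."), not its actual `Enforcer` API:

```python
# Toy sketch of rule-based enforcement in the spirit of oslo.policy.
# The real library parses a much richer rule language; this handles
# only two simple check kinds, as an illustration of the idea.

def enforce(rule, creds):
    """Return True if the credentials satisfy one simple rule string."""
    kind, _, value = rule.partition(":")
    if kind == "role":
        return value in creds.get("roles", [])
    if kind == "is_admin":
        return creds.get("is_admin", False) == (value == "True")
    raise ValueError("unsupported rule: %s" % rule)

creds = {"roles": ["Member"], "is_admin": False}
print(enforce("role:Member", creds))  # True
print(enforce("role:admin", creds))   # False
```

The real library adds operators (and/or/not), rule references, and target substitution on top of this basic shape.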

 
 Thanks in advance,
 Hisashi Osanai
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kerberos in OpenStack

2015-03-02 Thread Adam Young
Posting response to the mailing list, as I suspect others have these 
questions.





 I understand that in the current proposed implementation only
Keystone runs on Apache httpd.

*1. My question is: is it possible to move the Nova server onto the
Apache httpd server, just like the way the Keystone server is running?
And if not, what are the technical challenges in moving it?* If these
services had the mod_auth_kerb module they would be able to validate
the token.


My Keystone work was based on a web page where someone did exactly 
this.  I don't know what it would take to make it happen today, but it 
should be possible.


Much of Nova deals with Eventlet and monkeypatching. Ideally, this 
code would be implemented in one place so that a single boolean at 
startup could say monkeypatch or not; this is what Keystone does.
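That single-switch idea can be sketched as follows. `maybe_monkey_patch` is a hypothetical helper, not actual Keystone or Nova code:

```python
def maybe_monkey_patch(enabled):
    """Sketch of a single startup switch for eventlet monkeypatching.

    The idea: decide once at process start, instead of scattering
    monkeypatch calls through the code base.
    """
    if not enabled:
        # e.g. running as a WSGI app under httpd: leave stdlib untouched
        return False
    try:
        import eventlet
    except ImportError:
        return False
    eventlet.monkey_patch()
    return True

print(maybe_monkey_patch(False))  # False
```

The point is that the same code base can then serve both deployment styles, with the decision made exactly once.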


Nova has more of a dependency on Eventlet than Keystone does, as Nova 
has to deal with reading messages from the message queue.  This is done 
using a dedicated greenthread, and I don't know how this would look in 
an HTTPD setup.




*2. Also, I was curious to know if you tried to add the Keystone 
middleware to Nova and the other services? In this way Keystone can 
itself act as a KDC.*


Not sure what you mean here.  Keystone already has middleware running in 
Nova.  Keystone data is more like a Kerberos PAC than a service 
ticket.  Keystone tokens are not limited to endpoints, and even if they 
were, we would need to pass a token from one endpoint to another for 
certain workflows.




Thanks,
Sanket

On Wed, Feb 25, 2015 at 12:39 PM, Sanket Lawangare 
sanket.lawang...@gmail.com wrote:


Thank you for replying back, Adam. I will let you know if I have any
further doubts (I am pretty sure I will have many).

Sanket

On Tue, Feb 24, 2015 at 1:26 PM, Adam Young ayo...@redhat.com wrote:

On 02/24/2015 01:53 PM, Sanket Lawangare wrote:

Hello  Everyone,

My name is Sanket Lawangare. I am a graduate student studying
at The University of Texas at San Antonio. For my Master's
Thesis I am working on the Identity component of OpenStack.
My research is to investigate external authentication with
Identity(keystone) using Kerberos.


Based on reading Jamie Lennox's blogs on the Kerberos
implementation in OpenStack and my understanding of Kerberos,
I have come up with a figure explaining the possible interaction
of the KDC with the OpenStack client, Keystone and the OpenStack
services (Nova, Cinder, Swift, ...).

These are the Blogs -


http://www.jamielennox.net/blog/2015/02/12/step-by-step-kerberized-keystone/

http://www.jamielennox.net/blog/2013/10/22/keystone-token-binding/

I am trying to understand the working of Kerberos in OpenStack.


Please click this link to view the figure:

https://docs.google.com/drawings/d/1re0lNbiMDTbnkrqGMjLq6oNoBtR_GA0x7NWacf0Ulbs/edit?usp=sharing


P.S. - [The steps in this figure are self-explanatory; a
basic understanding of Kerberos is expected]


Based on the figure I had a couple of questions:


1.

Is Nova or other services registered with the KDC?


Not yet.  Kerberos is only used for Keystone at the moment,
with work underway to make Horizon work with Keystone.  Since
many of the services only run in Eventlet, not in HTTPD,
Kerberos support is hard to add. Ideally, yes, we would do
Kerberos directly to Nova, and either use the token binding
mechanism, or better yet, not even provide a token...but that
is more work.





2.

What does Keystone do with Kerberos tickets/credentials?
Does Keystone authenticate the users and give them
direct access to other services such as Nova, Swift, etc.?



They are used for authentication, and then the Keystone server
uses the principal to resolve the username and user id.  The
rest of the data comes out of LDAP.



3.

After receiving the ticket from the KDC, does Keystone
embed some Kerberos credential information in the token?


No, it is mapped to the OpenStack user id and username.
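A purely illustrative sketch of that principal-to-username mapping step (the real resolution goes through Keystone's identity backend, e.g. LDAP; the helper name is invented):

```python
def map_principal(principal):
    """Strip the Kerberos realm to get a username (illustrative only)."""
    # e.g. 'jdoe@EXAMPLE.COM' -> 'jdoe'; the user id and the rest of
    # the data would then be looked up in the identity backend (LDAP).
    return principal.split("@", 1)[0]

print(map_principal("jdoe@EXAMPLE.COM"))  # jdoe
```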



4.

What information does the service (e.g. Nova) see in the
ticket and the token (does the token have some Kerberos
info or some customized info inside it?).



No kerberos ticket goes to Nova.



If you could share your insights and guide me on this, I
would really appreciate it. Thank you all for your time.




Let me know if you have more questions.  Really let me know if
you want to help coding.



Regards,

Sanket Lawangare





Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Doug Hellmann


On Mon, Mar 2, 2015, at 10:57 AM, Ihar Hrachyshka wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Hi Daniel,
 
 thanks for a clear write-up of the matter and food for thought.
 
 I think the idea of a smoother development mode that would not
 make people wait 6+ months to release a new feature is great.
 
 It's insane to expect that feature priorities won't ever slightly
 shift in the next 6 months. Having a limited list of features targeted
 for the next 6 months is prone to mistakes, since people behind some
 of approved features may need to postpone the effort for any type of
 reasons (personal, job rush, bad resource allocation, ...), and we
 then end up with approved specs with no actual code drops, using
 review 'slots' that would better be spent for other features that were
 not that lucky to get a rubber stamp during spec phase. Prior resource
 allocation would probably work somehow if we were working for the same
 company that would define priorities to us, but it's not the case.
 
 Anecdotally, in neutron, we have two Essential blueprints for Kilo,
 and there are no code drops or patches in review for any of those, so
 I would expect them to fail to merge. At the same time, I will need to
 wait for the end of Kilo to consider adding support for guru reports
 to the project. Or in oslo world, I will need to wait for Liberty to
 introduce features in oslo.policy that are needed by neutron to switch
 to it, etc.

The Oslo policy is a bit more lenient than some of the others I've heard
described. We don't follow the feature proposal freeze. Instead, we only
have a hard freeze for the master branches of libraries until the
release candidates are done. Any features proposed at any other time are
candidates, with the usual caveats for review team time and attention.

Doug

 
 Another problem is that currently milestones are used merely for
 targeting bugs and features, but no one really cares about whether the
 actual milestone shipment actually works. Again, a story from neutron
 world: Kilo-1 was shipped in the middle of advanced services split,
 with some essential patches around distutils setup missing (no proper
 migration plan applied, conflicting config files in neutron and *aas
 repos, etc.)
 
 So I'm all for reforms around processes we apply.
 
 That said, I don't believe the real problem here is that we don't
 generate project tarballs frequently enough.
 
 Major problems I see as critical to tackle in our dev process are:
 
 - - enforced spec/dev mode. Solution: allow proposing (and approving) a
 reasonable spec at any moment in the cycle; allow code drops for
 approved specs at any moment in the cycle (except pre-release
 stabilization time); stop targeting specs: if it's sane, it's probably
 sane N+2 cycles from now too.
 
 - - core team rubber stamping a random set of specs and putting -2 on
 all other specs due to project priorities. Solution: stop pretending
 core team (or core drivers) can reasonably estimate review and coding
 resources for the next cycle. Instead, allow the community to decide
 what's worth the effort by approving all technically reasonable specs
 and allowing everyone to invest time and effort in the specs (s)he deems
 worth it.
 
 - - no proper stabilization process before dev milestones. Solution:
 introduce one in your team workflow; be more strict about what goes in
 during pre-milestone stabilization time.
 
 If all above is properly applied, we would get into position similar
 to your proposal. The difference though is that upstream project would
 not call milestone tarballs 'Releases'. Still, if there are brave
 vendors to ship milestones, fine, they would have the same freedom as
 in your proposal.
 
 Note: all the steps mentioned above can be applied on *per team* basis
 without breaking existing release workflow.
 
 Some more comments from stable-maint/distribution side below.
 
 On 02/24/2015 10:53 AM, Daniel P. Berrange wrote:
 [...skip...]
  The modest proposal
 [...skip...]
  
  Stable branches
  
  The consequences of a 2 month release cycle appear fairly severe
  for the stable branch maint teams at first sight. This is not,
  however, an insurmountable problem. The linux kernel shows an easy
  way forward with their approach of only maintaining stable branches
  for a subset of major releases, based around user / vendor demand.
  So it is still entirely conceivable that the stable team only
  provide stable branch releases for 2 out of the 6 yearly releases.
  ie no additional burden over what they face today. Of course they
  might decide they want to do more stable branches, but maintain
  each for a shorter time. So I could equally see them choosing to do
  3 or 4 stable branches a year. Whatever is most effective for those
  involved and those consuming them is fine.
  
 
 Since it's not only stable branches that are affected (translators,
 documentation writers, VMT were already mentioned), those 

Re: [openstack-dev] [oslo][openstackclient] transferring maintenance of cliff to the OpenStack client team

2015-03-02 Thread Flavio Percoco

On 26/02/15 19:25 -0500, Davanum Srinivas wrote:

+1 to the move!


+1 from me too



On Feb 26, 2015 6:43 PM, Ben Nemec openst...@nemebean.com wrote:

   On 02/26/2015 03:57 PM, Doug Hellmann wrote:
The team behind the unified command line tool is preparing to ask the TC
   to allow us to become an official team. The openstack/
   python-openstackclient repository will be managed by this new team, and I
   suggested we also consider transferring ownership of openstack/cliff from
   Oslo at the same time.
   
Cliff was created specifically to meet the needs of
   python-openstackclient, and it is still primarily maintained by folks who
   will be members of the new team. Some projects outside of OpenStack have
   adopted it, so I think it makes sense to keep it as a separate repository
   and continue to release it that way (Oslo doesn’t have to be the only
   project team to do that, after all).
   
Dean will be submitting a governance change to create the new program,
   but before we do that I wanted to get general feedback from the team about
   the ownership change. We’ll repeat what we did with PyCADF and hacking when
   we transferred those, and offer anyone in oslo-core the chance to continue
   reviewing cliff code if they want to be on the cliff-core team.
   
Please let me know if you have objections or comments.

   Makes sense to me.  +1 to the move.

   
Thanks,
Doug
   
   
   
   __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?
   subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   


   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco




Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Doug Wiegley

 On Mar 2, 2015, at 1:13 PM, James E. Blair cor...@inaugust.com wrote:
 
 Stefano branched this thread from an older one to talk about
 auto-abandon.  In the previous thread, I believe I explained my
 concerns, but since the topic split, perhaps it would be good to
 summarize why this is an issue.
 
 1) A core reviewer forcefully abandoning a change contributed by someone
 else can be a very negative action.  It's one thing for a contributor to
 say I have abandoned this effort, it's very different for a core
 reviewer to do that for them.  It is a very strong action and signal,
 and should not be taken lightly.

I'm not arguing against better tooling, queries, or additional comment 
warnings.  All of those are good things. But I think some of the push back in 
this thread is challenging this notion that abandoning is negative, which you 
seem to be treating as a given.

I don't. At all. And I don't think I'm alone.

I also don't understand your point that the review becomes invisible, since 
it's a simple gerrit query to see closed reviews, and your own contention is 
that gerrit queries solve this in the other direction, so it can't be too hard 
in this one, either. I've done that many times to find my own and others' 
abandoned reviews, the most recent example being resurrecting all of the lbaas 
v2 reviews after they slipped out of juno and were eventually put into their own 
repo.  Some of those reviews were abandoned, others not, and it was roughly 
equivalent to find them, open or not, and then re-tool them for the latest 
changes to master.

Back to your question of what queries are most useful, I already answered, but 
to give you an idea of how we directed folks to find reviews relevant to 
kilo-2, I'll share this monster, which didn't even include targeted bugs we 
wanted looked at.  Some tighter integration with launchpad (or storyboard) 
would likely be necessary for this to be sane.

Neutron, kilo-2 blueprints:

https://review.openstack.org/#/q/project:openstack/neutron+status:open+(topic:bp/wsgi-pecan-switch+OR+topic:bp/plugin-interface-perestroika+OR+topic:bp/reorganize-unit-test-tree+OR+topic:bp/restructure-l2-agent+OR+topic:bp/rootwrap-daemon-mode+OR+topic:bp/retargetable-functional-testing+OR+topic:bp/refactor-iptables-firewall-driver+OR+topic:bp/vsctl-to-ovsdb+OR+topic:bp/lbaas-api-and-objmodel-improvement+OR+topic:bp/restructure-l3-agent+OR+topic:bp/neutron-ovs-dvr-vlan+OR+topic:bp/allow-specific-floating-ip-address+OR+topic:bp/ipset-manager-refactor+OR+topic:bp/agent-child-processes-status+OR+topic:bp/extra-dhcp-opts-ipv4-ipv6+OR+topic:bp/ipsec-strongswan-driver+OR+topic:bp/ofagent-bridge-setup+OR+topic:bp/arp-spoof-patch-ebtables+OR+topic:bp/report-ha-router-master+OR+topic:bp/conntrack-in-security-group+OR+topic:bp/allow-mac-to-be-updated+OR+topic:bp/specify-router-ext-ip+OR+topic:bp/a10-lbaas-v2-driver+OR+topic:bp/brocade-vyatta-fwaas-plugin+OR+topic:bp/netscaler-lbaas-v2-driver+OR+topic:bp/ofagent-sub-driver+OR+topic:bp/ml2-cisco-nexus-mechdriver-providernet+OR+topic:bp/ofagent-flow-based-tunneling+OR+topic:bp/ml2-ovs-portsecurity+OR+topic:bp/fwaas-cisco+OR+topic:bp/freescale-fwaas-plugin+OR+topic:bp/rpc-docs-and-namespace),n,z
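A query of that size is easier to maintain when generated from a topic list. A hypothetical sketch (spaces stand in for the "+" URL encoding of the gerrit query language; the topic list here is just the first few entries from the query above):

```python
# Sketch: building a blueprint-tracking gerrit query from a topic list
# instead of hand-maintaining one enormous URL.
topics = [
    "bp/wsgi-pecan-switch",
    "bp/plugin-interface-perestroika",
    "bp/reorganize-unit-test-tree",
]
clause = " OR ".join("topic:%s" % t for t in topics)
query = "project:openstack/neutron status:open (%s)" % clause
print(query)
```

Tighter integration with launchpad (or storyboard), as suggested above, would amount to generating that topic list automatically from the milestone's targeted blueprints.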

Thanks,
doug



 
 2) Many changes become inactive due to no fault of their authors.  For
 instance, a change to nova that missed a freeze deadline might need to
 be deferred for 3 months or more.  It should not be automatically
 abandoned.
 
 3) Abandoned changes are not visible by their authors.  Many
 contributors will not see the abandoned change.  Many contributors use
 their list of open reviews to get their work done, but if you abandon
 their changes, they will no longer see that there is work for them to be
 done.
 
 4) Abandoned changes are not visible to other contributors.  Other
 people contributing to a project may see a change that they could fix up
 and get merged.  However, if the change is abandoned, they are unlikely
 to find it.
 
 5) Abandoned changes are not able to be resumed by other contributors.
 Even if they managed to find changes despite the obstacles imposed by
 #3, they would be unable to restore the change and continue working on
 it.
 
 In short, there are a number of negative impacts to contributors, core
 reviewers, and maintainers of projects caused by automatically
 abandoning changes.  These are not hypothetical; I have seen all of
 these negative impacts on projects I contribute to.
 
 Now this is the most important part -- I can not emphasize this enough:
 
  Whatever is being achieved by auto-abandoning can be achieved through
  other, less harmful, methods.
 
 Core reviewers should not have to wade through lots of extra changes.
 They should not be called upon to deal with drive-by changes that people
 are not willing to collaborate on.  Abandoning changes is an imperfect
 solution to a problem, and we can find a better solution.
 
 We have tools that can filter out changes that are not active so that
 core 

Re: [openstack-dev] [Ironic] Adding vendor drivers in Ironic

2015-03-02 Thread Kyle Mestery
On Sun, Mar 1, 2015 at 4:32 AM, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 I am just relaying pain-points that we encountered in neutron. As I have
 said below it makes the development process a lot quicker for people
 working on external drivers. I personally believe that it fragments the
 community and feel that the external drivers lose the community
 contributions and inputs.
 Thanks
 Gary

 I find it unfair to say that the decomposition in Neutron caused
fragmentation. As of Kilo-2, we had 48 drivers/plugins in-tree in Neutron,
with another 10 or so proposed for Kilo. It's not scalable to continue down
that path! Further, most of those drivers/plugins were upstream but the
contributors were not really contributing to Neutron outside of their own
plugins/drivers. The decomposition lets them contribute and merge where
they are already contributing: In their own plugins/drivers. It also
removes a lot of code from Neutron which most core reviewers could never
test or run due to lacking proprietary hardware or software. All of these
reasons were in the decomposition spec [1].

I've not heard from anyone else who thinks the decomposition is a bad idea
or is not going well in practice. The opposite has actually been true:
Everyone has been happy with its execution and the results it's allowing.
I credit Armando for his great work leading this effort. It's been a huge
effort but the results have been pretty amazing.

Thanks,
Kyle

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html

On 2/28/15, 7:58 PM, Clint Byrum cl...@fewbar.com wrote:

 I'm not sure I understand your statement Gary. If Ironic defines
 what is effectively a plugin API, and the vendor drivers are careful
 to utilize that API properly, the two sets of code can be released
 entirely independent of one another. This is how modules work in the
 kernel, X.org drivers work, and etc. etc. Of course, vendors could be
 irresponsible and break compatibility with older releases of Ironic,
 but that is not in their best interest, so I don't see why anybody would
 need to tightly couple.
 
 As far as where generic code goes, that seems obvious: it all has to go
 into Ironic and be hidden behind the plugin API.
 
 Excerpts from Gary Kotton's message of 2015-02-28 09:28:55 -0800:
  Hi,
  There are pros and cons to what you have mentioned. My concern, as I
 mentioned with the neutron driver decomposition, is that we are
 losing the community inputs and contributions. Yes, one can certainly
 move faster and freer (which is a huge pain point in the community). How
 are generic code changes percolated to your repo? Do you have an
 automatic CI that detects this? Please note that when Ironic releases you
 will need to release your repo so that the relationship is 1:1...
  Thanks
  Gary
 
  From: Ramakrishnan G rameshg87.openst...@gmail.com
  Reply-To: OpenStack List openstack-dev@lists.openstack.org
  Date: Saturday, February 28, 2015 at 8:28 AM
  To: OpenStack List openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [Ironic] Adding vendor drivers in Ironic
 
 
  Hello All,
 
  This is about adding vendor drivers in Ironic.
 
  In Kilo, we have many vendor drivers getting added in Ironic which is a
 very good thing.  But something I noticed is that, most of these reviews
 have lots of hardware-specific code in them.  This is something most of
 the other Ironic folks cannot understand unless they go and read the
 hardware manuals of the vendor hardware about what is being done.
 Otherwise we just need to blindly mark the file as reviewed.
 
  Now let me pitch in with our story about this.  We added a vendor
 driver for HP Proliant hardware (the *ilo drivers in Ironic).  Initially
 we proposed this same thing in Ironic that we will add all the hardware
 specific code in Ironic itself under the directory drivers/modules/ilo.
 But few of the Ironic folks didn't agree on this (Devananda especially
 who is from my company :)). So we created a new module proliantutils,
 hosted in our own github and recently moved it to stackforge.  We gave a
 limited set of APIs for Ironic to use - like get_host_power_status(),
 set_host_power(), get_one_time_boot(), set_one_time_boot(), etc. (Entire
 list is here
 
 https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/operations.py )
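The shape of such a narrow wrapper surface can be sketched with a stand-in class. The method names come from the list quoted above; the body is invented purely for illustration and is not proliantutils code:

```python
class FakeIloClient:
    """Stand-in for a vendor library exposing a small, stable API."""

    def __init__(self):
        self._power = "OFF"

    def get_host_power_status(self):
        return self._power

    def set_host_power(self, power):
        # Real hardware calls would go here; the driver in Ironic only
        # ever sees this small method surface, not the protocol details.
        self._power = "ON" if power.upper() == "ON" else "OFF"

client = FakeIloClient()
client.set_host_power("on")
print(client.get_host_power_status())  # ON
```

Keeping the protocol details (RIBCL, RIS, ...) behind such a surface is what let the new-protocol support land with only a 14-line change in Ironic.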
  

Re: [openstack-dev] [cinder][horizon]Proper error handling/propagation to UI

2015-03-02 Thread Eduard Matei
@Duncan:
I tried with lvmdriver-1, fails with error:
ImageCopyFailure: Failed to copy image to volume: qemu-img:
/dev/mapper/stack--volumes--lvmdriver--1-volume--e8323fc5--8ce4--4676--bbec--0a85efd866fc:
error while converting raw: Could not open device: Permission denied

It's been configured with 2 drivers (ours, and lvmdriver), but our driver
works, so not sure where it fails.

Eduard

On Mon, Mar 2, 2015 at 8:23 AM, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:

 Thanks
 @Duncan: I'll try with the lvm driver.
 @Avishay, I'm not trying to delete a volume created from a snapshot; I'm
 trying to delete a snapshot that has volumes created from it (actually I
 need to prevent this action and properly report the cause of the failure:
 SnapshotIsBusy).


 Eduard

 On Mon, Mar 2, 2015 at 7:57 AM, Avishay Traeger avis...@stratoscale.com
 wrote:

 Deleting a volume created from a snapshot is permitted.  Performing
 operations on a volume created from snapshot should have the same behavior
 as volumes created from volumes, images, or empty (no source).  In all of
 these cases, the volume should be deleted, regardless of where it came
 from.  Independence from source is one of the differences between volumes
 and snapshots in Cinder.  The driver must take care to ensure this.

 As to your question about propagating errors without changing an object's
 state, that is unfortunately not doable in Cinder today (or any other
 OpenStack project as far as I know).  The object's state is currently the
 only mechanism for reporting an operation's success or failure.

 On Sun, Mar 1, 2015 at 6:07 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 I thought that case should be caught well before it gets to the driver.
 Can you retry with the LVM driver please?

 On 27 February 2015 at 10:48, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Hi,

 We've been testing our cinder driver extensively and found a strange
 behavior in the UI:
 - when trying to delete a snapshot that has clones (volumes created from
 the snapshot), an error is raised in our driver which turns into
 error_deleting in Cinder and the UI; further actions on that snapshot are
 impossible from the UI, so the user has to go to the CLI and run cinder
 snapshot-reset-state to be able to delete it (after having deleted the
 clones)
 - to help with that we implemented a check in the driver and now we
 raise exception.SnapshotIsBusy; now the snapshot remains available (as it
 should be) but no error bubble is shown in the UI (only the green one:
 Success. Scheduled deleting of...). So the user has to go to c-vol screen
 and check the cause of the error

 So question: how should we handle this so that
 a. The snapshot remains in state available
 b. An error bubble is shown in the UI stating the cause.

 Thanks,
 Eduard
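The driver-side guard described above can be sketched like this. `SnapshotIsBusy` is a local stand-in for the Cinder exception of the same name, and `clones_of` is a hypothetical lookup, both invented for illustration:

```python
class SnapshotIsBusy(Exception):
    """Stand-in for the Cinder exception of the same name."""

def delete_snapshot(snapshot, clones_of):
    """Refuse to delete a snapshot that still has dependent volumes."""
    dependents = clones_of(snapshot["id"])
    if dependents:
        # Raising here, before touching the backend, is what lets the
        # snapshot stay 'available' instead of going to 'error_deleting'.
        raise SnapshotIsBusy(
            "snapshot %s has dependent volumes: %s"
            % (snapshot["id"], dependents))
    # ... actual backend deletion would happen here ...
    return True

try:
    delete_snapshot({"id": "snap-1"}, clones_of=lambda sid: ["vol-1"])
except SnapshotIsBusy as exc:
    print("refused:", exc)
```

The remaining gap this thread is about is step (b): getting that refusal surfaced as an error bubble in Horizon rather than only in the c-vol log.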

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Duncan Thomas


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 *Avishay Traeger*
 *Storage R&D*

 Mobile: +972 54 447 1475
 E-mail: avis...@stratoscale.com



 Web http://www.stratoscale.com/ | Blog
 http://www.stratoscale.com/blog/ | Twitter
 https://twitter.com/Stratoscale | Google+
 

Re: [openstack-dev] [Ironic] Adding vendor drivers in Ironic

2015-03-02 Thread Anita Kuno
On 02/28/2015 09:36 PM, Ramakrishnan G wrote:
 You may not realize you do a disservice to those reading this post and
 those reviewing future patches if you set unreasonable expectations.
 
 Telling others that they can expect a patch merged in the same day is
 not reasonable, even if that has been your experience. While we do our
 best to keep current, we all are very busy and requests for repos are
 increasing. If folks want a repo they can submit a patch to create one,
 here is a good guide:
 http://docs.openstack.org/infra/manual/creators.html and it will be
 reviewed along with all other patches to project-config.
 
 Anita,
 
 Thanks for correcting me.  Yeah, I just quoted *my experience with
 openstack-infra* blindly.  Sorry for that.
 
 Rather, I also wanted to point out to our folks that things in infra are so
 automated that putting an openstack-related module into stackforge has
 become fully automatic and easy *(easy for the requestor, of course, keeping
 in mind that the request has to be correct and gets reviewed and approved
 by infra guys)*.  Kudos to you guys :-)
 
 Regards,
 Ramesh
You are welcome Ramesh.

I am glad you are having a good experience dealing with the infra team.

Going forward please be informed that I am a woman, I am not a guy. The
infra team has some members who are female.

Thank you,
Anita.
 
 
 On Sun, Mar 1, 2015 at 12:49 AM, Anita Kuno ante...@anteaya.info wrote:
 
 On 02/28/2015 01:28 AM, Ramakrishnan G wrote:
 Hello All,

 This is about adding vendor drivers in Ironic.

 In Kilo, we have many vendor drivers getting added in Ironic which is a
 very good thing.  But something I noticed is that, most of these reviews
 have lots of hardware-specific code in them.  This is something most of
 the
 other Ironic folks cannot understand unless they go and read the hardware
 manuals of the vendor hardware about what is being done.  Otherwise we
 just
 need to blindly mark the file as reviewed.

 Now let me pitch in with our story about this.  We added a vendor driver
 for HP Proliant hardware (the *ilo drivers in Ironic).  Initially we
 proposed this same thing in Ironic that we will add all the hardware
 specific code in Ironic itself under the directory drivers/modules/ilo.
 But few of the Ironic folks didn't agree on this (Devananda especially
 who
 is from my company :)). So we created a new module proliantutils, hosted
 in
 our own github and recently moved it to stackforge.  We gave a limited
 set
 of APIs for Ironic to use - like get_host_power_status(),
 set_host_power(),
 get_one_time_boot(), set_one_time_boot(), etc. (Entire list is here

 https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/operations.py
 ).

 We have only seen benefits in doing it.  Let me bring in some examples:

 1) We tried to add support for some lower version of servers.  We could
 do
 this without making any changes in Ironic (Review in proliantutils
 https://review.openstack.org/#/c/153945/)
 2) We are adding support for newer models of servers (earlier we use to
 talk to servers in protocol called RIBCL, newer servers we will use a
 protocol called RIS) - We could do this with just 14 lines of actual code
 change in Ironic (this was needed mainly because we didn't think we will
 have to use a new protocol itself when we started) -
 https://review.openstack.org/#/c/154403/

 Now talking about the advantages of putting hardware-specific code in
 Ironic:

 *1) It's reviewed by Openstack community and tested:*
 No. I doubt that if I throw in 600 lines of new iLO-specific code like
 what is here (
 https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/ris.py
 ) the Ironic folks will do more than take a cursory look at it.  And regarding
 testing, it's not tested in the gate unless we have a 3rd party CI for
 it.
  [We (iLO drivers) also don't have 3rd party CI right now, but we are
 working on it.]

 *2) Everything gets packaged into distributions automatically:*
 Now, the hardware-specific code that we add in Ironic under
 drivers/modules/vendor/ will get packaged into distributions, but this code
 in turn will have dependencies which need to be installed manually by the
 operator (I assume vendor-specific dependencies are not considered by Linux
 distributions while packaging OpenStack Ironic). Anyone installing Ironic
 and wanting to manage my company's servers will again need to install these
 dependencies manually.  Why not install the wrapper too, if there is one?

 I assume we only get these advantages by moving all of the hardware-specific
 code to a wrapper module in stackforge and just exposing some APIs for
 Ironic to use:
 * Ironic code would be much cleaner and easier to maintain
 * Any changes related to your hardware - support for newer hardware, bug
 fixes for particular models of hardware - would be very easy. You don't
 need to change Ironic code for that. You could just fix the bug in your
 module, release a new version and ask your users to install a newer version
 of the module.

Re: [openstack-dev] Fwd: [Neutron] ML2 + Nexus :: update_port_precommit ERROR

2015-03-02 Thread Rich Curran (rcurran)
Hi Adam –

Please email me directly w/ the log information.

Thanks,
Rich

From: Adam Lawson [mailto:alaw...@aqorn.com]
Sent: Sunday, March 01, 2015 11:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Fwd: [Neutron] ML2 + Nexus :: update_port_precommit 
ERROR

Hello,

Sending this message to the dev group instead since I may be running into a bug 
(https://bugs.launchpad.net/neutron/+bug/1247976/+activity). I'm running 
Icehouse (modules come up as 2014.1.3 generally).

Thoughts?

Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
[http://www.aqorn.com/images/logo.png]

-- Forwarded message --
From: Adam Lawson alaw...@aqorn.commailto:alaw...@aqorn.com
Date: Sat, Feb 28, 2015 at 2:30 PM
Subject: [Neutron] ML2 + Nexus :: update_port_precommit ERROR
To: openstack 
openst...@lists.openstack.orgmailto:openst...@lists.openstack.org

I'm experimenting with OpenStack Icehouse with a dedicated network node using 
the ML2 plugin and the cisco_nexus mechanism driver, and I am seeing Neutron 
errors relating to update_port_precommit and delete_port_precommit. The 
defined type driver is VLAN. The problem is that instances cannot boot using 
private or public networks.

Is this a known bug when using ML2 with Cisco Nexus switches or is it 
resolvable? Has anyone else run into this?

Mahalo,
Adam


Adam Lawson

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660tel:%2B1%20302-387-4660
Direct: +1 916-246-2072tel:%2B1%20916-246-2072
[http://www.aqorn.com/images/logo.png]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2015-03-02 Thread Dmitry Pyzhov
Actually, we have made some improvements to snapshot size, and we are going
to rethink our snapshots in upcoming releases. Miroslav, could you clarify
the importance of this change for 6.1?

On Thu, Jan 29, 2015 at 2:30 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:

 Guys,

 We have requests for this improvement. It will help with huge
 environments; we are talking about 5 GiB of logs.
 Is it on the agenda?

 Regards,


  On 22 Dec 2014, at 07:28, Bartlomiej Piotrowski 
 bpiotrow...@mirantis.com wrote:
 
  FYI, xz with multithreading support (the 5.2 release) was marked as
 stable yesterday.
 
  Regards,
  Bartłomiej Piotrowski
 
  On Mon, Nov 24, 2014 at 12:32 PM, Bartłomiej Piotrowski 
 bpiotrow...@mirantis.com wrote:
  On 24 Nov 2014, at 12:25, Matthew Mosesohn mmoses...@mirantis.com
 wrote:
   I did this exercise over many iterations during Docker container
   packing and found that as long as the data is under 1gb, it's going to
   compress really well with xz. Over 1gb and lrzip looks more attractive
   (but only on high memory systems). In reality, we're looking at log
   footprints from OpenStack environments on the order of 500mb to 2gb.
  
   xz is very slow on single-core systems with 1.5gb of memory, but it's
   quite a bit faster if you run it on a more powerful system. I've found
   level 4 compression to be the best compromise that works well enough
   that it's still far better than gzip. If increasing compression time
   by 3-5x is too much for you guys, why not just go to bzip? You'll
   still improve compression but be able to cut back on time.
  
   Best Regards,
   Matthew Mosesohn
 
  Alpha release of xz supports multithreading via the -T (or --threads)
 parameter.
  We could also use pbzip2 instead of regular bzip to cut some time on
 multi-core
  systems.
 
  Regards,
  Bartłomiej Piotrowski
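For a rough feel of the trade-off being discussed, the three formats can be
compared directly with Python's standard library (the lzma module implements
the xz format; note it does not expose xz's -T multithreading). Sizes depend
heavily on the input, so this synthetic, highly redundant log-like data is
only a sketch:

```python
import bz2
import gzip
import lzma

# Highly redundant sample data, standing in for OpenStack log files.
data = (b"2015-03-02 12:00:00 INFO nova.compute.manager instance started\n"
        * 20000)

sizes = {
    "gzip":  len(gzip.compress(data, compresslevel=9)),
    "bzip2": len(bz2.compress(data, compresslevel=9)),
    # preset=4 is the compromise level discussed above; lzma is the xz format.
    "xz-4":  len(lzma.compress(data, preset=4)),
}
for name, size in sorted(sizes.items(), key=lambda kv: kv[1]):
    print(f"{name}: {size} bytes (original: {len(data)})")
```

Real diagnostic snapshots compress less dramatically than this repeated line,
but the relative ranking and the time/ratio trade-off are what matter when
choosing a level.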
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] A new wiki page is created for Fuel Plugins

2015-03-02 Thread Irina Povolotskaya
Hi everyone,

I'd like to inform you that a new wiki page has been created for Fuel Plugins -
https://wiki.openstack.org/wiki/Fuel/Plugins.

Beginning with the 6.0 release, Fuel supports a pluggable architecture; that
means you can download and install plugins for Fuel, or even develop your
own.

The wiki page covers not only user-side instructions on installation and
usage, but also an FAQ.

As I've already mentioned, it also has information for developers who'd
like to create their own plugin for Fuel (what the Fuel Plugin Builder is,
what UI elements are added for the plugin, what files are generated by FPB,
etc.).

Note that the page also contains some tutorials (soon there will be even
more, so stay tuned).

Please feel free to read, review, or ask questions - you are always welcome!
If you write to the openstack-dev mailing list, do not forget to
use the [fuel][plugin] prefix.

Thanks!

-- 
Best regards,

Irina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ec2-api] Functional tests for EC2-API in nova

2015-03-02 Thread Davanum Srinivas
Alex,

It's better to do an experimental job first before trying a non-voting one:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n1509

-- dims

On Mon, Mar 2, 2015 at 7:24 AM, Alexandre Levine
alev...@cloudscaling.com wrote:
 All,

 We've finished setting up functional testing for the stackforge EC2 API
 project. The new test suite consists of 96 API and scenario tests covering
 almost all of the functionality initially covered by the original EC2 API
 tests in Tempest (tags functionality and client tokens are absent), and
 much more. This test suite is also periodically run against AWS to ensure
 its compatibility with Amazon. It now works as a gate for this standalone
 EC2 API project; however, the question is:

 Does it make sense and do we want to somehow employ this test suite against
 nova's API (without VPC-related tests, which leaves 54 tests altogether)?

 Internally we did this and it seems that nova's EC2 API is sound enough (it
 still does what it did, say, a year ago), however it's still quite short of
 some features and compatibility. So our tests run against it produce the
 following results:

 With nova-network:
 http://logs.openstack.org/02/160302/1/check/check-functional-nova-network-dsvm-ec2api/ab1af0d/console.html

 With neutron:
 http://logs.openstack.org/02/160302/1/check/check-functional-neutron-dsvm-ec2api/f478a19/console.html

 And the review which we used to run the tests:
 https://review.openstack.org/#/c/160302/

 So if we do want to somehow set this up against nova's EC2 API, I'm not sure
 how to most effectively do it. A non-voting job in Nova fetching tests from
 stackforge/ec2-api and running them as we did in the review above?

 Best regards,
   Alex Levine



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ec2-api] Functional tests for EC2-API in nova

2015-03-02 Thread Alexandre Levine

All,

We've finished setting up functional testing for the stackforge EC2 
API project. The new test suite consists of 96 API and scenario tests 
covering almost all of the functionality initially covered by the original 
EC2 API tests in Tempest (tags functionality and client tokens are absent), 
and much more. This test suite is also periodically run against AWS to 
ensure its compatibility with Amazon. It now works as a gate for this 
standalone EC2 API project; however, the question is:


Does it make sense and do we want to somehow employ this test suite 
against nova's API (without VPC-related tests, which leaves 54 tests 
altogether)?


Internally we did this and it seems that nova's EC2 API is sound enough 
(it still does what it did, say, a year ago), however it's still quite 
short of some features and compatibility. So our tests run against it 
produce the following results:


With nova-network:
http://logs.openstack.org/02/160302/1/check/check-functional-nova-network-dsvm-ec2api/ab1af0d/console.html

With neutron:
http://logs.openstack.org/02/160302/1/check/check-functional-neutron-dsvm-ec2api/f478a19/console.html

And the review which we used to run the tests:
https://review.openstack.org/#/c/160302/

So if we do want to somehow set this up against nova's EC2 API, I'm not 
sure how to most effectively do it. A non-voting job in Nova fetching 
tests from stackforge/ec2-api and running them as we did in the review 
above?


Best regards,
  Alex Levine



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-02 Thread Chris Dent


I (and a few others) have been using gabbi[1] for a couple of months now,
and it has proven very useful and evolved a bit, so I thought it would be
worthwhile to follow up my original message and give an update.

Some recent reviews[2] give a sample of how it can be used to validate
an existing API as well as search for less-than-perfect HTTP behavior
(e.g. sending a 404 when a 405 would be correct).

Regular use has led to some important changes:

* It can now be integrated with other tox targets so it can run
  alongside other functional tests.
* Individual tests can be xfailed and skipped. An entire YAML test
  file can be skipped.
* For those APIs which provide insufficient hypermedia support, the
  ability to inspect and reference the prior test and use template
  variables in the current request has been expanded (with support for
  environment variables pending a merge).

My original motivation for creating the tool was to make it easier to
learn APIs by causing a body of readable YAML files to exist. This
remains important, but what I've found is that writing the tests is
itself an incredible tool. Not only is it very easy to write tests
(throw some stuff at a URL and see what happens) and find (many) bugs
as a result, the exploratory nature of test writing drives a
learning process.
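For readers unfamiliar with gabbi, a test file is plain YAML along these
lines; the paths and expected codes in this fragment are hypothetical, chosen
only to show the shape of a test (including the 404-vs-405 style of check
mentioned above):

```yaml
# tests/gabbits/sample.yaml -- illustrative, not from a real project
tests:

- name: list resources returns json
  GET: /v1/resources
  status: 200
  response_headers:
      content-type: application/json

- name: wrong method yields 405 not 404
  DELETE: /v1/resources
  status: 405
```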

You'll note that the reviews below are just the YAML files. That's
because the test loading and fixture python code is already merged.
Adding tests is just a matter of adding more YAML. An interesting
trick is to run a small segment of the gabbi tests in a project (e.g.
just one file that represents one type of resource) while producing
coverage data. Reviewing the coverage of just the controller for that
resource can help drive test creation and separation.

[1] http://gabbi.readthedocs.org/en/latest/
[2] https://review.openstack.org/#/c/159945/
https://review.openstack.org/#/c/159204/

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][glusterfs] Online Snapshot fails with GlusterFS

2015-03-02 Thread Duncan Thomas
I'm assuming you mean the following lines in nova's policy.json:
compute_extension:os-assisted-volume-snapshots:create:rule:admin_api,
compute_extension:os-assisted-volume-snapshots:delete:rule:admin_api

These two calls are not intended to be made directly by an end user, but via
cinder, as a privileged user.

Please do not patch tempest, since it is highlighting a real bug.
The fix is to get cinder to use a privileged user account to make this
call. Please raise a cinder bug.

Thanks





 Hi,

As part of the tempest job gate-tempest-dsvm-full-glusterfs run [1], the
test case test_snapshot_create_with_volume_in_use [2] is failing.
This is because the demo user is unable to create online snapshots, due to
nova policy rules[3].

To avoid this issue we can modify the test case to make the demo user an
admin before creating the snapshot, and revert it afterwards.

Another approach is to use privileged user (
https://review.openstack.org/#/c/156940/) to create online snapshot.

[1]
http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/b2cb37e/
[2]
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_snapshots.py#L66
[3] https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L329

-- 
Warm Regards,
Bharat Kumar Kobagana
Software Engineer
OpenStack Storage – RedHat India


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Adding vendor drivers in Ironic

2015-03-02 Thread Dmitry Tantsur

On 02/28/2015 07:28 AM, Ramakrishnan G wrote:


Hello All,

Hi!



This is about adding vendor drivers in Ironic.

In Kilo, we have many vendor drivers getting added in Ironic, which is a
very good thing.  But something I noticed is that most of these reviews
have lots of hardware-specific code in them.  This is something most of
the other Ironic folks cannot understand unless they go and read the
vendor's hardware manuals to learn what is being done.
Otherwise we just need to blindly mark the file as reviewed.

Now let me pitch in with our story about this.  We added a vendor driver
for HP ProLiant hardware (the *ilo drivers in Ironic).  Initially we
proposed this same thing in Ironic: that we would add all the
hardware-specific code in Ironic itself under the directory
drivers/modules/ilo.  But a few of the Ironic folks didn't agree on this
(Devananda especially, who is from my company :)). So we created a new
module, proliantutils, hosted on our own GitHub and recently moved it to
stackforge.  We gave a limited set of APIs for Ironic to use - like
get_host_power_status(), set_host_power(), get_one_time_boot(),
set_one_time_boot(), etc. (The entire list is here:
https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/operations.py).
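The narrow-API pattern described here can be sketched in a few lines of
Python. The method names below follow the ones mentioned in the mail, but the
classes are an illustrative stand-in, not the actual proliantutils
implementation (which lives in the linked operations.py):

```python
import abc


class IloOperations(abc.ABC):
    """Illustrative vendor-library interface; Ironic would only ever
    call these few methods, never the protocol internals."""

    @abc.abstractmethod
    def get_host_power_status(self):
        """Return 'ON' or 'OFF'."""

    @abc.abstractmethod
    def set_host_power(self, power):
        """Set host power to 'ON' or 'OFF'."""


class FakeRibclOperations(IloOperations):
    """A stand-in protocol implementation; swapping in a RIS-based
    class later would require no changes on the Ironic side."""

    def __init__(self):
        self._power = 'OFF'

    def get_host_power_status(self):
        return self._power

    def set_host_power(self, power):
        self._power = power


ilo = FakeRibclOperations()
ilo.set_host_power('ON')
print(ilo.get_host_power_status())  # ON
```

This is exactly why the RIBCL-to-RIS change mentioned above needed only a
handful of lines in Ironic: the consumer depends on the interface, not on the
wire protocol.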

We have only seen benefits in doing it.  Let me bring in some examples:

1) We tried to add support for some lower versions of servers.  We could
do this without making any changes in Ironic (review in proliantutils:
https://review.openstack.org/#/c/153945/)
2) We are adding support for newer models of servers (earlier we used to
talk to servers in a protocol called RIBCL; newer servers will use a
protocol called RIS) - we could do this with just 14 lines of actual
code change in Ironic (this was needed mainly because we didn't anticipate
having to use a new protocol when we started) -
https://review.openstack.org/#/c/154403/

Now talking about the advantages of putting hardware-specific code in
Ironic:

*1) It's reviewed by the OpenStack community and tested:*
No. I doubt that if I throw 600 lines of new iLO-specific code like this
(https://github.com/stackforge/proliantutils/blob/master/proliantutils/ilo/ris.py)
at the Ironic folks, they will take more than a cursory look at it.  And
regarding testing, it's not tested in the gate unless we have a 3rd-party
CI for it.  [We (iLO drivers) also don't have 3rd-party CI right now, but
we are working on it.]

*2) Everything gets packaged into distributions automatically:*
Now, the hardware-specific code that we add in Ironic under
drivers/modules/vendor/ will get packaged into distributions, but this
code in turn will have dependencies which need to be installed
manually by the operator (I assume vendor-specific dependencies are not
considered by Linux distributions while packaging OpenStack Ironic).
Anyone installing Ironic and wanting to manage my company's servers will
again need to install these dependencies manually.  Why not install the
wrapper too, if there is one?

I assume we only get these advantages by moving all of the hardware-specific
code to a wrapper module in stackforge and just exposing some APIs for
Ironic to use:
* Ironic code would be much cleaner and easier to maintain
* Any changes related to your hardware - support for newer hardware, bug
fixes for particular models of hardware - would be very easy. You don't
need to change Ironic code for that. You could just fix the bug in your
module, release a new version and ask your users to install a newer
version of the module.
* python-fooclient could be used outside Ironic to easily manage foo
servers.
* OpenStack CI for free if you are in stackforge - unit tests, flake8
checks, doc generation, merge, PyPI release, everything handled
automatically.

I don't see any disadvantages.

Now, regarding the time taken to do this: if you already have all the code
ready in Ironic (which I assume you will), perhaps it will take a day -
half a day for putting it into a separate Python module on GitHub and half
a day for stackforge. The request to add a stackforge project should get
approved the same day (if everything is all right).

Let me know all of your thoughts on this.  If we agree, I feel we should
have some documentation on it in our Ironic docs directory.


Thanks for writing this out!

I understand the concern about splitting this community effort; however, 
I tend to agree with you. Reviewing vendor-specific code does make me 
feel weird: on one hand I do my best to check it, on the other I realize 
that I can approve code that has no chance of working, just because I 
can and will miss some hardware-specific detail.


My biggest concern in making it a rule is the support level for this 
external code. I'm not questioning the quality of proliantutils :) but I can 
envision people not putting too much effort into maintaining their 3rd-party 
stuff. E.g. I already saw a 3rd-party module that is not even on PyPI 
(and does not have any tests). I don't know what to do in 

Re: [openstack-dev] [Keystone] How to check admin authentication?

2015-03-02 Thread Dmitry Tantsur
2015-02-27 17:27 GMT+01:00 Dolph Mathews dolph.math...@gmail.com:


 On Fri, Feb 27, 2015 at 8:39 AM, Dmitry Tantsur dtant...@redhat.com
 wrote:

 Hi all!

 This (presumably) pretty basic question has been torturing me for several
 months now, so I kindly seek help here.

 I'm working on a Flask-based service [1] and I'd like to use Keystone
 tokens for authentication. This is an admin-only API, so we need to check
 for an admin role. We ended up with code [2] first accessing Keystone with
 a given token and a (configurable) admin tenant name, then checking the
 'admin' role. Things went well for a while.

 Now I'm writing an Ironic driver accessing the API of [1]. Pretty naively,
 I was trying to use the Ironic service user's credentials, which we use for
 accessing all other services. For TripleO-based installations it's a user
 named 'ironic' with a special tenant 'service'. Here is where the problems
 start. Our code perfectly authenticates a mere user (that has tenant
 'admin'), but tells Ironic to go away.

 We've spent some time researching documentation and the keystone middleware
 source code, but didn't find any more clues. Neither did we find a way to
 use keystone middleware without rewriting half of the project. What we need
 is two simple things in a simple Flask application:
 1. validate a token
 2. make sure it belongs to an admin


 I'm not really clear on what problem you're having, because I'm not sure
 if you care about an admin username, an admin tenant name, or an admin role
 name. If you're implementing RBAC, you only really need to care about the
 user having an admin role in their list of roles.


Yeah, I guess that's what I need.



 You can wrap your Flask application with a configured instance of the
 auth_token middleware; this is about the simplest way to do it, and this
 also demos the environment variables exposed to your application that you
 can use to validate authorization:


 https://github.com/dolph/keystone-deploy/blob/master/playbooks/roles/http/templates/echo.py#L33-L41
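To make the suggestion concrete: once the app is wrapped, keystonemiddleware's
auth_token validates the incoming token and hands the result to the
application via request headers. A minimal sketch of the application-side
check follows; the header names follow the middleware's conventions, but
treat the exact wiring as an assumption to verify against the linked demo:

```python
# Sketch of an RBAC check behind auth_token middleware. The middleware
# validates the token and sets headers such as X-Identity-Status and
# X-Roles (comma-separated) on the request it passes downstream.

def is_admin(headers):
    """Return True if the token was validated and carries the admin role."""
    if headers.get("X-Identity-Status") != "Confirmed":
        return False  # token missing or invalid
    roles = [r.strip() for r in headers.get("X-Roles", "").split(",")]
    return "admin" in roles


print(is_admin({"X-Identity-Status": "Confirmed",
                "X-Roles": "admin,_member_"}))   # True
print(is_admin({"X-Identity-Status": "Invalid",
                "X-Roles": "admin"}))            # False
```

In a Flask view this would read the headers from `flask.request.headers` and
return 403 when the check fails.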


Thanks a lot, I will give it a try!





 I'll thankfully appreciate any ideas how to fix our situation.
 Thanks in advance!

 Dmitry.

 [1] https://github.com/stackforge/ironic-discoverd
 [2] https://github.com/stackforge/ironic-discoverd/blob/master/
 ironic_discoverd/utils.py#L50-L65

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
 unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Dmitry Tantsur
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help in configuring keystone

2015-03-02 Thread Fargetta Marco
Hi Akshik, 

If you look at the log, you will find these lines: 

2015-02-27 22:36:38 CRIT Shibboleth.Application : no MetadataProvider 
available, configuration is probably unusable
2015-02-27 22:36:38 INFO Shibboleth.Application : no TrustEngine specified or 
installed, using default chain {ExplicitKey, PKIX}
2015-02-27 22:36:38 INFO Shibboleth.Application : building AttributeExtractor 
of type XML... 

It seems there is a problem with your shibboleth2.xml. Check it against a 
working one, or try to increase the log verbosity to 
figure out the problem. 
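If the MetadataProvider element is indeed missing from the active
ApplicationDefaults, adding one per IdP usually resolves this error. A
minimal, illustrative example follows; the TestShib metadata URL and file
paths here are assumptions to adapt to your deployment:

```xml
<!-- Illustrative only: tells the SP where to fetch the IdP's metadata. -->
<MetadataProvider type="XML"
                  uri="https://www.testshib.org/metadata/testshib-providers.xml"
                  backingFilePath="testshib-providers.xml"
                  reloadInterval="7200"/>
```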

Marco 

 From: Akshik DBK aks...@outlook.com
 To: OpenStack Development Mailing List not for usage questions
 openstack-dev@lists.openstack.org
 Sent: Saturday, 28 February, 2015 17:05:23
 Subject: Re: [openstack-dev] Need help in configuring keystone

 Hi Marco,
 did you get a chance to look at the logs?

 Regards,
 Akshik

 From: aks...@outlook.com
 To: openstack-dev@lists.openstack.org
 Date: Fri, 27 Feb 2015 22:50:47 +0530
 Subject: Re: [openstack-dev] Need help in configuring keystone

 Hi Marco,
 Thanks for responding. I've cleared the log file and restarted the shibd
 service.

 The metadata file got created; I've attached the log file and the metadata
 file as well.

 Regards,
 Akshik

 Date: Fri, 27 Feb 2015 15:12:39 +0100
 From: marco.farge...@ct.infn.it
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Need help in configuring keystone

 Hi Akshik,

 The metadata error is in your SP; if the error were on TestShib, you
 would not be redirected back after the login. Maybe there is a configuration
 problem with Shibboleth. Try to restart the service and look at the
 Shibboleth logs.
 Also check that the TestShib metadata is downloaded correctly, because from
 the error it seems you do not have the metadata of TestShib.

 Cheers,
 Marco

 On Fri, Feb 27, 2015 at 06:39:30PM +0530, Akshik DBK wrote:
  Hi Marek,
 I've registered with testshib; this is the keystone-apache-error.log entry
 I get:
 [error] [client 121.243.33.212] No MetadataProvider available., referer:
  https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO
  From: aks...@outlook.com
  To: openstack-dev@lists.openstack.org
  Date: Fri, 27 Feb 2015 15:56:57 +0530
  Subject: [openstack-dev] Need help in configuring keystone




  Hi, I'm new to SAML and am trying to integrate keystone with SAML. I'm
  using Ubuntu 12.04 with Icehouse, following
  http://docs.openstack.org/developer/k... When I try to configure keystone
  with two IdPs and access https://MYSERVER:5000/v3/OS-FEDERATIO... it gets
  redirected to testshib.org. It prompts for a username and password; when
  these are given, I get shibsp::ConfigurationException at
  ( https://MYSERVER:5000/Shibboleth.sso/... ): No MetadataProvider
  available. Here is my shibboleth2.xml content:

  <SPConfig xmlns="urn:mace:shibboleth:2.0:native:sp:config"
      xmlns:conf="urn:mace:shibboleth:2.0:native:sp:config"
      xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
      xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
      xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
      clockSkew="180">

    <ApplicationDefaults entityID="https://MYSERVER:5000/Shibboleth">
      <Sessions lifetime="28800" timeout="3600" checkAddress="false"
          relayState="ss:mem" handlerSSL="false">
        <SSO entityID="https://idp.testshib.org/idp/shibboleth" ECP="true">
          SAML2 SAML1
        </SSO>

        <Logout>SAML2 Local</Logout>

        <Handler type="MetadataGenerator" Location="/Metadata"
            signing="false"/>
        <Handler type="Status" Location="/Status"/>
        <Handler type="Session" Location="/Session"
            showAttributeValues="false"/>
        <Handler type="DiscoveryFeed" Location="/DiscoFeed"/>
      </Sessions>

      <Errors supportContact="root@localhost"
          logoLocation="/shibboleth-sp/logo.jpg"
          styleSheet="/shibboleth-sp/main.css"/>

      <AttributeExtractor type="XML" validate="true"
          path="attribute-map.xml"/>
      <AttributeResolver type="Query" subjectMatch="true"/>
      <AttributeFilter type="XML" validate="true"
          path="attribute-policy.xml"/>
      <CredentialResolver type="File" key="sp-key.pem"
          certificate="sp-cert.pem"/>

      <ApplicationOverride id="idp_1"
          entityID="https://MYSERVER:5000/Shibboleth">
        <Sessions lifetime="28800" timeout="3600" checkAddress="false"
            relayState="ss:mem" handlerSSL="false">
          <SSO entityID="https://portal4.mss.internalidp.com/idp/shibboleth"
              ECP="true">
            SAML2 SAML1
          </SSO>
          <Logout>SAML2 Local</Logout>
        </Sessions>

        <MetadataProvider type="XML"
            uri="https://portal4.mss.internalidp.com/idp/shibboleth"
            backingFilePath="/tmp/tata.xml" reloadInterval="18"/>
      </ApplicationOverride>

      <ApplicationOverride id="idp_2"
          entityID="https://MYSERVER:5000/Shibboleth">
        <Sessions lifetime="28800" timeout="3600" checkAddress="false"
  

Re: [openstack-dev] Need help in configuring keystone

2015-03-02 Thread Marek Denis

Akshik,

When you are beginning an adventure with SAML, Shibboleth, and so on, 
it's helpful to start by fetching the auto-generated shibboleth2.xml file 
from testshib.org. This should cover most of your use cases, at least 
in a testing environment.


Marek



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.policy] graduation status

2015-03-02 Thread Osanai, Hisashi
oslo.policy folks,

I'm thinking about implementing policy-based access control in swift 
using oslo.policy [1], so I would like to know oslo.policy's status for 
graduation.

[1] 
https://github.com/openstack/oslo-specs/blob/master/specs/kilo/graduate-policy.rst

Thanks in advance,
Hisashi Osanai
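For context, oslo.policy's model maps API actions to rule expressions such as
"role:admin". As a rough, self-contained illustration of that model only (a
toy evaluator, not the library's actual API, which is built around an
Enforcer class loading rules from policy.json):

```python
# Toy evaluator for a tiny subset of the policy rule language
# ("role:X" and "rule or rule"); the real oslo.policy Enforcer
# supports far more (and/not, rule references, target substitution).

def check(rule, creds):
    """Evaluate a simplified policy rule against request credentials."""
    if " or " in rule:
        return any(check(part, creds) for part in rule.split(" or "))
    kind, _, value = rule.partition(":")
    if kind == "role":
        return value in creds.get("roles", [])
    return False  # unknown check types fail closed


# Hypothetical swift-style policy entry, for illustration only.
policy = {"object:create": "role:admin or role:swiftoperator"}
creds = {"roles": ["swiftoperator"]}
print(check(policy["object:create"], creds))  # True
```

The point of graduating oslo.policy is that services like swift could consume
this kind of enforcement from one shared, maintained library instead of
reimplementing it.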


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.policy] graduation status

2015-03-02 Thread Jordan Pittier
Hi,
FYI, there might be something related to what you plan:
https://github.com/stackforge/swiftpolicy/
https://review.openstack.org/#/c/89568/

The project is abandoned, but the initial goal was to have the code
eventually proposed for merging into OpenStack Swift. Feel free to have a
look and continue this effort.

Jordan

On Mon, Mar 2, 2015 at 11:01 AM, Osanai, Hisashi 
osanai.hisa...@jp.fujitsu.com wrote:

 oslo.policy folks,

 I'm thinking about implementing policy-based access control in swift
 using oslo.policy [1], so I would like to know oslo.policy's graduation
 status.

 [1]
 https://github.com/openstack/oslo-specs/blob/master/specs/kilo/graduate-policy.rst

 Thanks in advance,
 Hisashi Osanai


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] what's the merge plan for current proposed microversions?

2015-03-02 Thread Sean Dague
This change for the additional attributes for ec2 looks like it's
basically ready to go, except it has the wrong microversion on it (as
they anticipated the other changes landing ahead of them) -
https://review.openstack.org/#/c/155853

What's the plan for merging the outstanding microversions? I believe
we're all conceptually approved on all of them, and it's an important part
of actually moving forward on the new API. It seems like we're in a bit
of a holding pattern on all of them right now, and I'd like to make sure
we start merging them this week so that they have breathing space before
the freeze.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New Driver for LBaaS enhancement

2015-03-02 Thread liuxin
Hello, everyone:

We are a team from China, and we have implemented an LBaaS driver for OpenStack.

This driver can drive a load-balancer device that also provides security
filtering for VM instances, including a DDoS shield, web access control,
and so on.

We provide security through the load balancer for public and private clouds.

Here is an introduction to our project; we hope it can be integrated with
the OpenStack community edition.

https://github.com/NeusoftSecurity/ADSG-LBaaS-Driver/

We would like to contribute to LBaaS in OpenStack on an ongoing basis.

Our company, Neusoft, has a strong interest in OpenStack,
and we hope to cooperate closely with the community.



Re: [openstack-dev] [nova] how safe is it to change NoAuthMiddlewareBase?

2015-03-02 Thread Sean Dague
On 02/28/2015 11:51 AM, Jay Pipes wrote:
 On 02/26/2015 04:27 AM, Sean Dague wrote:
 In trying to move the flavor manage negative tests out of Tempest and
 into the Nova functional tree, I ran into one set of tests which are
 permissions checking. Basically that a regular user isn't allowed to do
 certain things.

 In (nearly) all our tests we use auth_strategy=noauth which takes you to
 NoAuthMiddlewareBase instead of to keystone. That path makes you an
 admin regardless of what credentials you send in -
 https://github.com/openstack/nova/blob/master/nova/api/openstack/auth.py#L56-L59


 What I'd like to do is to change this so that if you specify
 user_id='admin' then is_admin is set true, and it's not true otherwise.

 That has a bunch of test fall out, because up until this point most of
 the test users are things like 'fake', which would regress to non admin.
 About 25% of the api samples tests fail in such a change, so they would
 need to be fixed.
 
 Taking a step back... what exactly is the purpose of the API samples
 functional tests? If the purpose of these tests has anything to do
 with validating some policy thing, then I suppose it's worth changing
 the auth middleware to support non-adminness. But, I don't think the API
 samples test purpose has anything to do with that (I think the purpose
 of the API samples tests is fuzzy, at best, actually). So, I'd just
 leave them as-is and not change anything at all.

If we are going to do things like bring API bounds testing into tree,
I'd like to have that also include permissions enforcement. Given that
permissions enforcement is currently happening at multiple levels in
Nova, having a way to actually test that surface in tree seems like a
good thing.
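A minimal sketch of the behavior change Sean describes (the class and function names below are stand-ins, not Nova's actual middleware code): is_admin would be derived from the user id instead of being unconditionally true.

```python
# Toy model of the proposed noauth change; "RequestContext" and
# "noauth_context" are illustrative names, not Nova's real API.
class RequestContext:
    def __init__(self, user_id, project_id, is_admin):
        self.user_id = user_id
        self.project_id = project_id
        self.is_admin = is_admin


def noauth_context(user_id, project_id):
    # Before: every noauth request was treated as admin.
    # After: only user_id == 'admin' is admin, so functional tests
    # can exercise non-admin permission paths (e.g. flavor manage).
    return RequestContext(user_id, project_id,
                          is_admin=(user_id == "admin"))


print(noauth_context("admin", "demo").is_admin)  # True
print(noauth_context("fake", "demo").is_admin)   # False
```

The fallout Sean mentions follows directly: any existing test using a user id like 'fake' would regress to non-admin under this rule.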

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Ihar Hrachyshka

Hi Daniel,

thanks for a clear write-up of the matter and food for thought.

I think the idea of a smoother development mode that would not
make people wait 6+ months to release a new feature is great.

It's insane to expect that feature priorities won't ever slightly
shift in the next 6 months. Having a limited list of features targeted
for the next 6 months is prone to mistakes, since people behind some
of approved features may need to postpone the effort for any type of
reasons (personal, job rush, bad resource allocation, ...), and we
then end up with approved specs with no actual code drops, using
review 'slots' that would better be spent for other features that were
not that lucky to get a rubber stamp during spec phase. Prior resource
allocation would probably work somehow if we were working for the same
company that would define priorities to us, but it's not the case.

Anecdotally, in neutron, we have two Essential blueprints for Kilo,
and there are no code drops or patches in review for any of those, so
I would expect them to fail to merge. At the same time, I will need to
wait for the end of Kilo to consider adding support for guru reports
to the project. Or in oslo world, I will need to wait for Liberty to
introduce features in oslo.policy that are needed by neutron to switch
to it, etc.

Another problem is that currently milestones are used merely for
targeting bugs and features, but no one really cares about whether the
actual milestone shipment works. Again, a story from the neutron
world: Kilo-1 was shipped in the middle of advanced services split,
with some essential patches around distutils setup missing (no proper
migration plan applied, conflicting config files in neutron and *aas
repos, etc.)

So I'm all for reforms around processes we apply.

That said, I don't believe the real problem here is that we don't
generate project tarballs frequently enough.

Major problems I see as critical to tackle in our dev process are:

- - enforced spec/dev mode. Solution: allow proposing (and approving) a
reasonable spec at any moment in the cycle; allow code drops for
approved specs at any moment in the cycle (except pre-release
stabilization time); stop targeting specs: if it's sane, it's probably
sane N+2 cycle from now too.

- - core team rubber stamping a random set of specs and putting -2 on
all other specs due to project priorities. Solution: stop pretending
core team (or core drivers) can reasonably estimate review and coding
resources for the next cycle. Instead, allow the community to decide
what's worth the effort by approving all technically reasonable specs
and letting everyone invest time and effort in the specs (s)he deems
worthwhile.

- - no proper stabilization process before dev milestones. Solution:
introduce one in your team workflow, be more strict in what goes in
during pre milestone stabilization time.

If all above is properly applied, we would get into position similar
to your proposal. The difference though is that upstream project would
not call milestone tarballs 'Releases'. Still, if there are brave
vendors to ship milestones, fine, they would have the same freedom as
in your proposal.

Note: all the steps mentioned above can be applied on *per team* basis
without breaking existing release workflow.

Some more comments from stable-maint/distribution side below.

On 02/24/2015 10:53 AM, Daniel P. Berrange wrote:
[...skip...]
 The modest proposal
 ===================
[...skip...]
 
 Stable branches
 ---------------
 
 The consequences of a 2 month release cycle appear fairly severe
 for the stable branch maint teams at first sight. This is not,
 however, an insurmountable problem. The linux kernel shows an easy
 way forward with their approach of only maintaining stable branches
 for a subset of major releases, based around user / vendor demand.
 So it is still entirely conceivable that the stable team only
 provide stable branch releases for 2 out of the 6 yearly releases.
 ie no additional burden over what they face today. Of course they
 might decide they want to do more stable branches, but maintain
 each for a shorter time. So I could equally see them choosing to do
 3 or 4 stable branches a year. Whatever is most effective for those
 involved and those consuming them is fine.
 

Since it's not only stable branches that are affected (translators,
documentation writers, VMT were already mentioned), those affected
will probably need to come up with some synchronized decision.

Let's say we still decide to support two out of six releases (same
scheme as is now). In that case no process that we usually attach to
releases will be running after release dates. This makes me wonder how
it's different from milestones we already have.

Do you think any downstream vendors will actually ship and support
upstream releases that upstream drops any guarantees for (no VMT, no
stable branches, no gate runs, ...) right after the 

Re: [openstack-dev] [Murano] Should we run tests on all supported database engines?

2015-03-02 Thread Serg Melikyan
Hi Andrew,

I think we should isolate the DB access layer and test that layer extensively
on MySQL & PostgreSQL.
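As a sketch of what an isolated, multi-backend DB-layer test could look like (assumptions: the table layout and function names are illustrative, and a real suite would register MySQL/PostgreSQL connection URLs in place of the in-memory SQLite stand-ins):

```python
import sqlite3

# Each backend is a connection factory; in a real suite the entries
# would be MySQL and PostgreSQL DSNs driven through oslo.db/SQLAlchemy.
BACKENDS = {
    "backend-a": lambda: sqlite3.connect(":memory:"),
    "backend-b": lambda: sqlite3.connect(":memory:"),
}


def search_packages(conn, term):
    # Portable case-insensitive search: plain LIKE is case-insensitive
    # on MySQL but case-sensitive on PostgreSQL -- a typical backend
    # difference that only shows up when tests actually run against
    # both engines. Normalizing with lower() behaves the same everywhere.
    cur = conn.execute(
        "SELECT name FROM packages WHERE lower(name) LIKE lower(?)",
        ("%" + term + "%",))
    return [row[0] for row in cur.fetchall()]


def run_db_layer_tests(connect):
    conn = connect()
    conn.execute("CREATE TABLE packages (name TEXT)")
    conn.executemany("INSERT INTO packages VALUES (?)",
                     [("Apache",), ("PostgreSQL",)])
    assert search_packages(conn, "APA") == ["Apache"]
    conn.close()


for name, factory in BACKENDS.items():
    run_db_layer_tests(factory)
```

Running the same test function against every configured backend keeps the coverage cost proportional to the size of the DB layer rather than the whole test suite.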

On Mon, Mar 2, 2015 at 6:57 PM, Andrew Pashkin apash...@mirantis.com
wrote:

 There is a bug:
 https://bugs.launchpad.net/murano/+bug/1419743

 It reports that package search fails when Postgres is used as the DB
 backend. The fix is pretty easy, but how should this bug be
 tested?

 I see such options:
 1) Run all tests for all DBs that we support.
 2) Run some tests for some DBs.
 3) Do not test such issues at all.

 Does anyone have any thoughts on this?

 --
 With kind regards, Andrew Pashkin.
 cell phone - +7 (985) 898 57 59
 Skype - waves_in_fluids
 e-mail - apash...@mirantis.com





-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread James E. Blair
John Griffith john.griffi...@gmail.com writes:

 For what it's worth, at one point the Cinder project setup an auto-abandon
 job that did purge items that had a negative mark either from a reviewer or
 from Jenkins and had not been updated in over two weeks.  This had
 absolutely nothing to do with metrics or statistical analysis of the
 project.  We simply had a hard time dealing with patches that the submitter
 didn't care about.  If somebody takes the time to review a patch, then I
 don't think it's too much to ask that the submitter respond to questions or
 comments within a two week period.  Note, the auto purge in our case only
 purged items that had no updates or activity at all.

 We were actually in a position where we had patches that were submitted,
 failed unit tests in the gate (valid failures that occurred 100% of the
 time) and had sat for an entire release without the submitter ever updating
 the patch. I don't think it's unreasonable at all to abandon these and
 remove them from the queue.  I don't think this is a bad thing, I think
 it's worse to leave them as active when they're bit-rotted and the
 submitter doesn't even care about them any longer.  The other thing is,
 those patches are still there, they can still be accessed and reinstated.

I understand and agree with where you are coming from -- I'm just saying
that abandon is the wrong tool to accomplish this.  If a patch is in
the queue but is failing tests and hasn't been updated in a release,
then we should find a way to remove it from the queue without
abandoning it.  The solution should not impinge on core reviewers' time.
I believe we have the tools needed to do this, but have either not
implemented them or communicated about them correctly.

I'm happy to help identify potential solutions that don't involve
abandoning others' patches if people would be willing to tell me where
this is causing them problems.
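For illustration, the selection rule being debated here ("negative feedback and no updates for X weeks") can be written down explicitly; the record layout below is made up for the sketch, not Gerrit's actual schema:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(weeks=2)


def is_stale(change, now):
    # Candidate for removal from review queues: at least one negative
    # score (reviewer -1 or failing CI) and no activity in the window.
    has_negative = any(score < 0 for score in change["scores"])
    return has_negative and (now - change["last_updated"]) > STALE_AFTER


now = datetime(2015, 3, 2)
changes = [
    {"id": "I1", "scores": [-1], "last_updated": datetime(2015, 1, 10)},
    {"id": "I2", "scores": [-1], "last_updated": datetime(2015, 2, 27)},
    {"id": "I3", "scores": [+2], "last_updated": datetime(2014, 12, 1)},
]
print([c["id"] for c in changes if is_stale(c, now)])  # -> ['I1']
```

Whether changes matched by such a rule are then abandoned, or merely filtered out of review dashboards, is exactly the policy question this thread is about.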

-Jim



[openstack-dev] [Murano] Should we run tests on all supported database engines?

2015-03-02 Thread Andrew Pashkin
There is a bug:
https://bugs.launchpad.net/murano/+bug/1419743

It reports that package search fails when Postgres is used as the DB
backend. The fix is pretty easy, but how should this bug be
tested?

I see such options:
1) Run all tests for all DBs that we support.
2) Run some tests for some DBs.
3) Do not test such issues at all.

Does anyone have any thoughts on this?

-- 
With kind regards, Andrew Pashkin.
cell phone - +7 (985) 898 57 59
Skype - waves_in_fluids
e-mail - apash...@mirantis.com



[openstack-dev] python-novaclient 2.22.0 released

2015-03-02 Thread Matt Riedemann

This is a bug fix and feature release.


Available here:

http://tarballs.openstack.org/python-novaclient/python-novaclient-2.22.0.tar.gz

https://pypi.python.org/pypi/python-novaclient/2.22.0


Changes:

mriedem@ubuntu:~/git/python-novaclient$ git log --oneline --no-merges 2.21.0..2.22.0

5f3f52e Fix description of parameters in nova-client functions
53f0c54 Enable check for E124 rule
4f9797a Removed unused 'e' from 'try except' statements
e6883d2 allow --endpoint-type internal|admin|public
f2a581e Fixed redeclared test_names
b00f675 Updated from global requirements
9a06348 add pretty_tox to nova functional tests
be41ae2 add functional test for nova volume-attach bug
ac6636a Revert Overhaul bash-completion to support non-UUID based IDs

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [tempest][glusterfs] Online Snapshot fails with GlusterFS

2015-03-02 Thread Deepak Shetty
Duncan, Attila,
Thanks for your response.

We found an issue when using os_priviledge_user_name, which is why we sent
this patch:
https://review.openstack.org/#/c/156940/

After that, we used the admin user, its password, and the admin tenant in
cinder.conf to make it work.
But since keeping admin creds (especially the password) in cinder.conf isn't
secure, we considered patching the tempest test case instead. Later, while
talking with Attila on IRC, we figured out that we can use the user 'nova',
which is added to the admin role in lib/nova as part of the devstack
setup.

So we plan to test using nova creds.

thanx,
deepak

On Mon, Mar 2, 2015 at 5:24 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 I'm assuming you mean the following lines in nova's policy.json:
 "compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
 "compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api"

 These 2 calls are not intended to be made directly by an end user, but
 via cinder, as a privileged user.

 Please do not patch tempest, since this is a real bug it is highlighting.
 The fix is to get cinder to use a privileged user account to make this
 call. Please raise a cinder bug.
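For reference, a hedged sketch of the cinder.conf fragment such a privileged-user setup might use (the option names follow the patch under review at https://review.openstack.org/#/c/156940/, and the values here are placeholders, not a tested configuration):

```ini
[DEFAULT]
# Privileged account cinder uses for Nova assisted-snapshot calls,
# instead of embedding the real admin credentials in cinder.conf.
os_privileged_user_name = nova
os_privileged_user_password = secret
os_privileged_user_tenant = service
```

This keeps the admin password out of cinder.conf and matches the plan later in the thread to test with the 'nova' user's credentials.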

 Thanks





  Hi,

 As part of tempest job  gate-tempest-dsvm-full-glusterfs
 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/b2cb37e/
 run [1], the test case test_snapshot_create_with_volume_in_use [2] is
 failing.
 This is because the demo user is unable to create online snapshots, due to
 nova policy rules [3].

 To avoid this issue we can modify the test case to make the demo user an
 admin before creating the snapshot and revert it after the test finishes.

 Another approach is to use privileged user (
 https://review.openstack.org/#/c/156940/) to create online snapshot.

 [1]
 http://logs.openstack.org/11/159711/1/experimental/gate-tempest-dsvm-full-glusterfs/b2cb37e/
 [2]
 https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_snapshots.py#L66
 [3]
 https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L329

 --
 Warm Regards,
 Bharat Kumar Kobagana
 Software Engineer
 OpenStack Storage – RedHat India








Re: [openstack-dev] [Congress]How to add tempest tests for testing murano driver

2015-03-02 Thread Tim Hinrichs
Hi Hong,

Aaron started working on this, but we don't have anything in place yet, as far
as I know.  Here's a starting point:

https://review.openstack.org/#/c/157166/

Tim

On Feb 26, 2015, at 2:56 PM, Wong, Hong 
hong.w...@hp.commailto:hong.w...@hp.com wrote:

Hi Aaron,

I am new to congress and trying to write tempest tests for the newly added
murano datasource driver.  The murano datasource tempest tests require
both the murano and python-congress clients as dependencies, and I was told
that I can't simply add the requirements to the tempest/requirements.txt file,
as both packages are not in the main branch, so CI will not be able to pick
them up.  Do you know of any workaround?

Thanks,
Hong



Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Duncan Thomas
Why do you say auto-abandon is the wrong tool? I've no problem with the 1
week warning if somebody wants to implement it - I can see the value. A
change-set that has been ignored for X weeks is pretty much the dictionary
definition of abandoned, and restoring it is one mouse click. Maybe put
something more verbose in the auto-abandon message than we have been,
encouraging those who feel it shouldn't have been marked abandoned to
restore it (and respond quicker in future), but other than that we seem to
be using the right tool, to my eyes.

On 2 March 2015 at 17:17, James E. Blair cor...@inaugust.com wrote:

 John Griffith john.griffi...@gmail.com writes:

  For what it's worth, at one point the Cinder project setup an
 auto-abandon
  job that did purge items that had a negative mark either from a reviewer
 or
  from Jenkins and had not been updated in over two weeks.  This had
  absolutely nothing to do with metrics or statistical analysis of the
  project.  We simply had a hard time dealing with patches that the
 submitter
  didn't care about.  If somebody takes the time to review a patch, then
 I
  don't think it's too much to ask that the submitter respond to questions
 or
  comments within a two week period.  Note, the auto purge in our case only
  purged items that had no updates or activity at all.
 
  We were actually in a position where we had patches that were submitted,
  failed unit tests in the gate (valid failures that occurred 100% of the
  time) and had sat for an entire release without the submitter ever
 updating
  the patch. I don't think it's unreasonable at all to abandon these and
  remove them from the queue.  I don't think this is a bad thing, I think
  it's worse to leave them as active when they're bit-rotted and the
  submitter doesn't even care about them any longer.  The other thing is,
  those patches are still there, they can still be accessed and
 reinstated.

 I understand and agree with where you are coming from -- I'm just saying
 that abandon is the wrong tool to accomplish this.  If a patch is in
 the queue but is failing tests and hasn't been updated in a release,
 then we should find a way to remove it from the queue without
 abandoning it.  The solution should not impinge on core reviewers' time.
 I believe we have the tools needed to do this, but have either not
 implemented them or communicated about them correctly.

 I'm happy to help identify potential solutions that don't involve
 abandoning others' patches if people would be willing to tell me where
 this is causing them problems.

 -Jim





-- 
Duncan Thomas


Re: [openstack-dev] [Murano] Should we run tests on all supported database engines?

2015-03-02 Thread Andrew Pashkin
What kind of layer do you mean?

Right now we have some kind of isolation in the form of separate
functions for DB lookups. For example for this case its
murano.db.catalog.api.package_search [1].

[1]
https://github.com/stackforge/murano/blob/b4918b512010875b9cf3b59d65ff1fbfeaf5f197/murano/db/catalog/api.py#L226-226

On 02.03.2015 19:05, Serg Melikyan wrote:
 Hi Andrew,
 
  I think we should isolate the DB access layer and test that layer
  extensively on MySQL & PostgreSQL.
 
 On Mon, Mar 2, 2015 at 6:57 PM, Andrew Pashkin apash...@mirantis.com
 mailto:apash...@mirantis.com wrote:
 
 There is a bug:
 https://bugs.launchpad.net/murano/+bug/1419743
 
  It reports that package search fails when Postgres is used as the DB
  backend. The fix is pretty easy, but how should this bug be
  tested?
 
 I see such options:
 1) Run all tests for all DBs that we support.
 2) Run some tests for some DBs.
 3) Do not test such issues at all.
 
  Does anyone have any thoughts on this?
 
 --
 With kind regards, Andrew Pashkin.
 cell phone - +7 (985) 898 57 59
 Skype - waves_in_fluids
 e-mail - apash...@mirantis.com mailto:apash...@mirantis.com
 
 
 
 
 
 -- 
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com http://mirantis.com/ | smelik...@mirantis.com
 mailto:smelik...@mirantis.com
 
 +7 (495) 640-4904, 0261
 +7 (903) 156-0836
 
 
 

-- 
With kind regards, Andrew Pashkin.
cell phone - +7 (985) 898 57 59
Skype - waves_in_fluids
e-mail - apash...@mirantis.com



Re: [openstack-dev] olso.db 1.5.0 release

2015-03-02 Thread Mike Bayer


Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 
 
 On 2/26/2015 3:22 AM, Victor Sergeyev wrote:
 Hi folks!
 
 The Oslo team is pleased to announce the release of: oslo.db - OpenStack
 common DB library
 
 Changes from the previous release:
 
 $ git log --oneline --no-merges 1.4.1..1.5.0
 7bfdb6a Make DBAPI class work with mocks correctly
 96cabf4 Updated from global requirements
 a3a1bdd Imported Translations from Transifex
 ab20754 Fix PyMySQL reference error detection
 99e2ab6 Updated from global requirements
 6ccea34 Organize provisioning to use testresources
 eeb7ea2 Add retry decorator allowing to retry DB operations on request
 d78e3aa Imported Translations from Transifex
 dcd137a Implement backend-specific drop_all_objects for provisioning.
 3fb5098 Refactor database migration manager to use given engine
 afcc3df Fix 0 version handling in migration_cli manager
 f81653b Updated from global requirements
 efdefa9 Fix PatchStacktraceTest for pypy
 c0a4373 Update Oslo imports to remove namespace package
 1b7c295 Retry query if db deadlock error is received
 046e576 Ensure DBConnectionError is raised on failed revalidate
 
 
 Please report issues through launchpad:
 http://bugs.launchpad.net/oslo.db
 
 
 
 Looks like something here might be related to a spike in DBDeadlock errors in 
 neutron in the last 24 hours:
 
 https://bugs.launchpad.net/neutron/+bug/1426543

the bug report has a lot of back and forth regarding several potential
issues; has oslo.db been ruled out yet?

 -- 
 
 Thanks,
 
 Matt Riedemann
 
 



Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy

2015-03-02 Thread Solly Ross
Double-check to make sure that it's enabled.  A couple of months ago, noVNC got 
removed from the standard install because devstack was installing it from 
GitHub.
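A hedged sketch of the local.conf fragment that turns the proxy back on (the service name follows DevStack conventions of the time; verify against your DevStack checkout):

```ini
[[local|localrc]]
# noVNC proxy is no longer enabled by default; turn it back on.
enable_service n-novnc
```

After re-running stack.sh, a screen window (and log) for the service should appear alongside the other nova services.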

- Original Message -
 From: Chen Li chen...@intel.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, March 1, 2015 7:14:51 PM
 Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy
 
 That's the most confusing part.
 I don't even have a log for service nova-novncproxy.
 
 Thanks.
 -chen
 
 -Original Message-
 From: Kashyap Chamarthy [mailto:kcham...@redhat.com]
 Sent: Monday, March 02, 2015 12:16 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Devstack] Can't start service nova-novncproxy
 
 On Sat, Feb 28, 2015 at 06:20:54AM +, Li, Chen wrote:
  Hi all,
  
  I'm trying to install a fresh all-in-one openstack environment by devstack.
  After the installation, all services looks well, but I can't open instance
  console in Horizon.
  
  I did a little check, and found service nova-novncproxy was not started !
 
 What do you see in your 'screen-n-vnc.log' (I guess) log?
 
 I don't normally run Horizon or nova-vncproxy (only n-cpu, n-sch, n-cond),
 these are the ENABLED_SERVICES in my minimal DevStack config (Nova, Neutron,
 Keystone and Glance):
 
 
 ENABLED_SERVICES=g-api,g-reg,key,n-api,n-cpu,n-sch,n-cond,mysql,rabbit,dstat,quantum,q-svc,q-agt,q-dhcp,q-l3,q-meta
 
 [1]
 https://kashyapc.fedorapeople.org/virt/openstack/2-minimal_devstack_localrc.conf
 
  Anyone has idea why this happened ?
  
  Here is my local.conf : http://paste.openstack.org/show/183344/
  
  My os is:
  Ubuntu 14.04 trusty
  3.13.0-24-generic
  
  
 
 
 --
 /kashyap
 
 
 



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Ihar Hrachyshka

On 02/26/2015 01:06 AM, Thomas Goirand wrote:
 On 02/24/2015 12:27 PM, Daniel P. Berrange wrote:
 I'm actually trying to judge it from the POV of users, not just 
 developers. I find it pretty untenable that in the fast moving 
 world of cloud, users have to wait as long as 6 months for a 
 feature to get into a openstack release, often much longer.
 
 If you were trying to judge from the POV of users, then you would 
 consider that basically, they don't really care about the brand new 
 shiny features that just appeared. They care about having long-term 
 support for whatever version of OpenStack they have installed, 
 without the head-aches of upgrading, which is famously 
 painful with OpenStack. This shows clearly in our user surveys 
 which are presented on every summit: users are lagging behind, with
 a majority still running with OpenStack releases which are already
 EOL.
 

As was already said in the thread, they care about long support AND
their specific subset of shiny new features they need (and they don't
care about all other features that are not in their subjective list of
shiny ones).

 In fact, if you want to judge from the POV of our users, we should
 *SLOW DOWN* our release cycles, and probably move to something like
 one release every year or 2. We should also try to have longer
 periods of support for our stable releases, which would (with my
 Debian package maintainer hat on!) help distributions to do such
 security support.
 
 Debian Jessie will be released in a few month from now, just
 before Icehouse (which it ships) will be EOL. RedHat, Canonical,
 IBM, and so many more are also on the same (sinking) ship.
 

That won't happen magically. If no people from Debian or IBM are
actively involved in solving issues for stable branches, long term
will be just a wish and not a reality. At the moment we have several
people from Red Hat (me, Alan) and Ubuntu (Chuck, Adam) side active in
the team. It would be great to see more diversity, and more hands
getting dirty with stable branches on a daily basis.

 As for my employer side of things, we've seen numerous cases with
 our customer requesting for LTS, which we have to provide by
 ourselves, since it's not supported upstream.
 

Every downstream vendor would be happy to see upstream freely support
their product. :) But you get what you pay for when it comes to upstream
horizontal initiatives like stable maintenance.

I've heard IBM still supports releases going back to the C release!

 I think the majority of translation work can be done in parallel
 with dev work and the freeze time just needs to tie up the small
 remaining bits.
 
 It'd be nice indeed, but I've never seen any project (open source
 or not) working this way for translations.

It's actually not true. I've coordinated multiple translation teams
for my language (Belarusian), and from what I can tell, the best
practice is to work on translations while development is ongoing. Yes,
it means some effort wasted, but it also means spreading the whole
effort throughout the year instead of doing all the work in
pre-release freeze time.

Freeze time is anyway good to make sure that no last minute patches
break translations, and I don't suggest we drop them completely.

 
 Documentation is again something I'd expect to be done more or
 less in parallel with dev.
 
 Let's go back to reality: the Juno install-guide is still not
 finished, and the doc team is lagging behind.
 
 It would be reasonable for the vulnerability team to take the
 decision that they'll support fixes for master, and any branches
 that the stable team decide to support. ie they would not
 neccessarily have to commit to shipping fixes for every single
 release made.
 
 I've been crying out for this type of decision, i.e. drop Juno
 support early, and continue to maintain Icehouse for longer. I wish
 this would happen, but the release team always complained that nobody
 works on maintaining the gate for the stable branches. Unless this
 changes, I don't see hope... :(
 

From Red Hat's perspective, we put our effort into what we're interested
in. We're more interested in keeping the branch for the latest release
working, not in sticking to an old release that e.g. Debian chose
to put into their next official release, so unless someone from Debian
comes to the team and starts to invest in Icehouse, it will eventually
be dropped. That's life.

 I'm really not trying to focus on the developers' woes. I'm trying
 to focus on making OpenStack better serve our users. My main
 motivation here is that I think we're doing a pretty terrible
 job at getting work done that is important to our users in a
 timely manner. This is caused by a workflow & release cycle that
 is negatively impacting the developers.
 
 Our workflow and the release cycle are 2 separate things. From my
 POV, it'd be a mistake to believe switching to a different release
 cycle will fix our workflow.

Here, I actually agree. There are lots of enhancements in 

Re: [openstack-dev] oslo.db 1.5.0 release

2015-03-02 Thread Ihar Hrachyshka

On 03/02/2015 05:51 PM, Mike Bayer wrote:
 
 
 Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 
 
 On 2/26/2015 3:22 AM, Victor Sergeyev wrote:
 Hi folks!
 
 The Oslo team is pleased to announce the release of: oslo.db -
 OpenStack common DB library
 
 Changes from the previous release:
 
 $ git log --oneline --no-merges 1.4.1..1.5.0 7bfdb6a Make DBAPI
 class work with mocks correctly 96cabf4 Updated from global
 requirements a3a1bdd Imported Translations from Transifex 
 ab20754 Fix PyMySQL reference error detection 99e2ab6 Updated
 from global requirements 6ccea34 Organize provisioning to use
 testresources eeb7ea2 Add retry decorator allowing to retry DB
 operations on request d78e3aa Imported Translations from
 Transifex dcd137a Implement backend-specific drop_all_objects
 for provisioning. 3fb5098 Refactor database migration manager
 to use given engine afcc3df Fix 0 version handling in
 migration_cli manager f81653b Updated from global requirements 
 efdefa9 Fix PatchStacktraceTest for pypy c0a4373 Update Oslo
 imports to remove namespace package 1b7c295 Retry query if db
 deadlock error is received 046e576 Ensure DBConnectionError is
 raised on failed revalidate
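Two of the changes above — the retry decorator and retrying queries on deadlock — follow one common pattern. The sketch below is not oslo.db's actual implementation (which is considerably more featureful); it is a minimal, hypothetical illustration of retry-on-deadlock, with DBDeadlockError standing in for the real exception class.

```python
import functools
import time


class DBDeadlockError(Exception):
    """Stand-in for the deadlock exception a real DB layer would raise."""


def retry_on_deadlock(max_retries=3, delay=0.0):
    """Retry the wrapped DB operation when a deadlock is detected."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlockError:
                    if attempt == max_retries:
                        raise  # give up after the final attempt
                    time.sleep(delay * (2 ** attempt))  # simple backoff
        return wrapper
    return decorator


calls = {"n": 0}

@retry_on_deadlock(max_retries=3)
def update_row():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DBDeadlockError()  # the first two attempts hit a deadlock
    return "committed"
```

With the toy function above, update_row() deadlocks twice and succeeds on the third attempt, without the caller ever seeing the error.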
 
 
 Please report issues through launchpad: 
 http://bugs.launchpad.net/oslo.db
 
 
 __

 
OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 
Looks like something here might be related to a spike in DBDeadlock
errors in neutron in the last 24 hours:
 
 https://bugs.launchpad.net/neutron/+bug/1426543
 
 the bug report has a lot of back and forth regarding several
 potential issues; has oslo.db been ruled out yet?
 

A recently merged neutron patch was found to be the culprit, and it
has already been reverted:

https://github.com/openstack/neutron/commit/fcfc5cd8e7836aa19df02e88a6f74e565995345c

 --
 
 Thanks,
 
 Matt Riedemann
 
 



[openstack-dev] [mistral] Team meeting minutes - 03/02/2015

2015-03-02 Thread Renat Akhmerov
Thanks for coming to our team meeting today.

Meeting minutes:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-03-02-16.00.html
Full log:
http://eavesdrop.openstack.org/meetings/mistral/2015/mistral.2015-03-02-16.00.log.html

We may change our team meeting time so we’ll notify about that separately.

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-03-02 Thread Tim Hinrichs
Inline.

On Feb 27, 2015, at 6:40 AM, ruby.krishnasw...@orange.com wrote:

My first suggestion: why don’t we set up a call together with Ramki, Yathi, and
Debo, as soon as possible?

- How to go forward concretely with the 8 steps for the PoC (details
within each step), including Nova integration points

- Identify “friction points” in the details above to resolve for
beyond the PoC

Happy to have a call.  But maybe it would be good to have something concrete to 
look at.

Yathi: could you outline how you envision the integration of Congress and
solver-scheduler working in that same google doc?  Maybe add a section on
“Integration with Nova Solver-scheduler”?

https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit#



Tim: Where does the rest of the program originate?  I’m not saying the entire
LP program is generated from the Datalog constraints; some of it is generated
by the solver independent of the Datalog.  In the text, I gave the example of
defining hMemUse[j].
Tim: The VM-placement engine does 2 things: (i) translates Datalog to LP and
(ii) generates additional LP constraints.  (Both work items could leverage any
constraints that are built in to a specific solver, e.g. the solver-scheduler.)
The point is that there are 2 distinct, conceptual origins of the LP
constraints: those that represent the Datalog and those that codify the domain.
Tim: Each domain-specific solver can do whatever it wants, so it’s not clear to
me what the value of choosing a modeling language actually is—unless we want to
build a library of common functionality that makes the construction of
domain-specific engine (wrappers) easier.  I’d prefer to spend our energy
understanding whether the proposed workflow/interface works for a couple of
different domain-specific policy engines OR to flesh this one out and build it.




=> The value of choosing a modeling language is related to how “best to
automate translations” from Datalog constraints (to LP)?

- We could look for one unique way of generation, rather than having “some of
it generated by the VM-placement engine solver independent of the Datalog”.

- Datalog imposes most constraints (== policies)

- Two constraints are not “policies”:

  - A VM is allocated to only one host.

  - Host capacity is not exceeded.

    - Over-subscription

=> Otherwise what was your suggestion?  As follows?

- Use (extend) the framework the nova-solver-scheduler currently implements
(itself using PuLP). This framework specifies an API to write constraints and
cost functions (in a domain-specific way). Modifying this framework:

  - To read data in from the DSE

  - To obtain the cost function from Datalog (e.g. minimize Y[host1]…)

  - To obtain Datalog constraints (e.g. the 75% memory allocation constraint
    for hosts of the special zone)

- We need to specify the “format” for this? It will likely be a string of
the form (?)

  - “hMemUse[0] – 0.75*hMemCap[0] <= 100*y[0]”, “Memory allocation constraint
    on Host 0”,




I wasn’t thinking of building another language for a front-end to the LP 
solver.  I envisioned we’d use one that already exists like PuLP.  I was just 
using notation in the doc that I thought everyone would understand (whether you 
know PuLP or not—I don’t).








=> From your doc (page 5, section 4)


warning(id) :-
nova:host(id, name, service, zone, memory_capacity),
legacy:special_zone(zone),
ceilometer:statistics(id, memory, avg, count, duration,
durstart, durend, max, min, period, perstart, perend,
sum, unit),
avg > 0.75 * memory_capacity


Notice that this is a soft constraint, identified by the use of “warning”
instead of “error”.  When compiling to LP, the VM-placement engine will attempt
to minimize the number of rows in the warning table.  That is, for each
possible row r it will create a variable Y[r] and assign it True if the row is
a warning and False otherwise.
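To make that compilation target concrete, the toy sketch below minimizes the number of warning rows by brute-force enumeration instead of an LP solver. The VM/host data and the 75% threshold are invented for illustration, but the objective — minimize the count of hosts whose Y[r] would be True — is the one described above.

```python
from itertools import product

# Hypothetical data: two VMs and two hosts.
vm_mem = {"vm1": 50, "vm2": 40}
host_cap = {"host1": 100, "host2": 100}

def warning_count(placement):
    """Number of hosts whose memory use exceeds 75% of capacity,
    i.e. the number of rows that would land in the warning table."""
    use = {h: 0 for h in host_cap}
    for vm, host in placement.items():
        use[host] += vm_mem[vm]
    return sum(1 for h in host_cap if use[h] > 0.75 * host_cap[h])

# Enumerate every assignment of VMs to hosts and keep the one with the
# fewest warnings (an LP/MIP solver finds this without enumeration).
best = min(
    (dict(zip(vm_mem, hosts)) for hosts in product(host_cap, repeat=len(vm_mem))),
    key=warning_count,
)
```

Placing both VMs on one host uses 90 of 100 memory units (above the 75-unit threshold) and yields one warning; splitting them yields zero, so the minimizer spreads the VMs across hosts.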


The policy (warning): when will it be evaluated? Should this be done
periodically? Then, if the table has even one True entry, the action should
be to generate the LP, solve it, activate the migrations, etc.

=> The “LP” cannot be generated when the VM-placement engine receives the
policy snippet.



It’s a good question about how often we re-generate/re-solve/re-migrate.  I had 
assumed that would be built into the domain-specific engine, since it should 
know enough about the domain to make intelligent decisions.  You could imagine 
doing that every time the engine received a new policy; every time it received 
new data; every 10 minutes; or every time new data arrives that is different 
enough.  The engine could even re-generate/re-solve the LP and then decide 
whether the benefit of migrating to the new solution is worth the cost.

And of course there’s a difference between the PoC and the long-term solution.  
I don’t know what the right answer is for the PoC.  Suggestions?

Tim



Ruby

Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-03-02 Thread Tim Hinrichs
Inline.

On Feb 26, 2015, at 3:32 PM, Ramki Krishnan r...@brocade.com wrote:

1)
Ruby: One of the issues highlighted in OpenStack (scheduler) and also elsewhere 
(e.g. Omega scheduler by google) is :

Reading “host utilization” state from the databases, DB (nova:host table)
updates, and the overhead of keeping in-memory state up to date.

=> This is expensive, and the current nova-scheduler does face this issue
(many blogs/discussions).

While the first goal is a PoC, this will likely become a concern in terms
of adoption.


Tim: So you’re saying we won’t have fresh enough data to make policy decisions? 
 If the data changes so frequently that we can’t get an accurate view, then I’m 
guessing we shouldn’t be migrating based on that data anyway. Could you point 
me to some of these discussions?

Ramki: We have to keep in mind that VM migration could be an expensive 
operation depending on the size of the VM and various other factors; such an 
operation cannot be performed frequently.

2)
From document: As soon as the subscription occurs, the DSE sends the 
VM-placement engine the current contents of those tables, and when these 
tables change, the DSE informs the VM-placement engine in the form of diffs 
(aka deltas or updates).

Ramki: Is the criteria for table change programmable? This would be useful to 
generate significant change events based on application needs.

Not as of now.  We’ve kicked around the idea of changing a subscription from an 
entire table to an arbitrary slice of a table (expressed via Datalog).  That 
functionality will be necessary for dealing with large datasources like 
Ceilometer.  But we don’t have the design fleshed out or the people to build it.
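For intuition, the diffs mentioned above can be pictured as set differences between successive snapshots of a table. This is a hypothetical toy sketch (rows as tuples, names invented), not the DSE implementation:

```python
def table_diff(old_rows, new_rows):
    """Delta between two snapshots of a table, treating rows as tuples."""
    old, new = set(old_rows), set(new_rows)
    return {"added": new - old, "removed": old - new}

# A subscriber first receives the full table contents, then only deltas.
old = {("host1", 100), ("host2", 50)}
new = {("host1", 100), ("host2", 60), ("host3", 70)}
delta = table_diff(old, new)
```

An update to a row shows up as one removal plus one addition; an unchanged table produces an empty delta, so nothing needs to be published.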

Tim



Thanks,
Ramki

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Thursday, February 26, 2015 10:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Inline.

From: ruby.krishnasw...@orange.com
Reply-To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: Wednesday, February 25, 2015 at 8:53 AM
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Hi Tim, All,


1) Step 3: The VM-placement engine is also a “Datalog engine”. Right?

When policies are delegated:

when policies are inserted? Once the VM-placement engine has registered
itself, are all policies given to it?




“In our example, this would mean the domain-specific policy engine executes the 
following API call over the DSE”

=> “domain-agnostic” …



Done.


2) Step 4:



Ok

But finally: if Congress will likely “delegate”



Not sure what you’re suggesting here.


3) Step 5:  Compilation of subpolicy to LP in VM-placement engine



For the PoC, it is likely that the LP program (in PuLP or some other ML) is
*not* completely generated by the compiler/translator.

=> Right?

Where does the rest of the program originate?  I’m not saying the entire LP 
program is generated from the Datalog constraints; some of it is generated by 
the solver independent of the Datalog.  In the text, I gave the example of 
defining hMemUse[j].


You also indicate that some category of constraints (“the LP solver
doesn’t know what the relationship between assign[i][j], hMemUse[j], and
vMemUse[i] actually is, so the VM-placement engine must also include
constraints”).
These constraints must be written “explicitly”?  (e.g. max_ram_allocation,
etc., which are constraints used in the solver-scheduler’s package).

The VM-placement engine does 2 things: (i) translates Datalog to LP and (ii)
generates additional LP constraints.  (Both work items could leverage any
constraints that are built in to a specific solver, e.g. the solver-scheduler.)
The point is that there are 2 distinct, conceptual origins of the LP
constraints: those that represent the Datalog and those that codify the domain.



So what “parts” will be generated:

- Cost function

- Constraint from policy: memory usage > 75%

Then the rest should be “filled up”?

Could we converge on an intermediary “modeling language”?
@Yathi: do you think we could use something like AMPL? Is it proprietary?


A detail: the example “Y[host1] = hMemUse[host1] > 0.75 * hMemCap[host1]”

=> To be changed to a linear form (if mi – Mi > 0 then Yi = 1, else Yi = 0),
so something like (mi – Mi) <= 100 yi
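The inequality signs here were mangled in the archive, but the intended big-M trick can be checked mechanically. Assuming usage values are bounded so that 100 is a valid big-M constant (an illustrative choice, not from the original), the constraint mi - Mi <= 100*yi forces yi = 1 exactly when usage exceeds the threshold:

```python
def feasible(m_use, m_thresh, y, big_m=100):
    """Big-M linking constraint: m_use - m_thresh <= big_m * y."""
    return m_use - m_thresh <= big_m * y

# Usage above the threshold: y = 0 violates the constraint, so a
# solver is forced to set y = 1 (the host counts as a warning).
over = [feasible(90, 75, y) for y in (0, 1)]

# Usage at or below the threshold: y = 0 is allowed, and an objective
# minimizing the sum of the y variables will drive it to 0.
under = [feasible(60, 75, y) for y in (0, 1)]
```

This is why the modeling question matters only for convenience: any LP front end that supports binary variables can express the same linking constraint.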

Each domain-specific solver can do whatever it wants, so it’s not clear to me 
what the value of choosing a modeling language actually is—unless we want to 
build a library of common 

[openstack-dev] [all] oslo.log 0.4.0 release

2015-03-02 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.log 0.4.0: oslo.log library

This release includes a critical bug fix for nova.

For more details, please see the git log history below and:

http://launchpad.net/oslo.log/+milestone/0.4.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

Changes in oslo.log 0.3.0..0.4.0


60e7730 Pickup instance from log format record

Diffstat (except docs and test files)
-

oslo_log/formatters.py  | 22 ++
2 files changed, 48 insertions(+), 4 deletions(-)
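To picture what “Pickup instance from log format record” is about, here is a hedged sketch using the stdlib logging module. InstanceFormatter is an invented name for illustration, not oslo.log's actual formatter, and the output format is an assumption:

```python
import io
import logging

class InstanceFormatter(logging.Formatter):
    """Prefix the message with the instance carried on the record, if any."""
    def format(self, record):
        instance = getattr(record, "instance", None)
        if instance is not None:
            record.msg = "[instance: %s] %s" % (instance, record.msg)
        return super().format(record)

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(InstanceFormatter("%(levelname)s %(message)s"))
log = logging.getLogger("demo")
log.addHandler(handler)

# Passing extra={...} attaches the attribute to the LogRecord, where
# the formatter can pick it up.
log.warning("spawning", extra={"instance": "inst-1234"})
```

Records logged without the extra attribute pass through unchanged, which is what makes this kind of pickup safe to apply globally.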



[openstack-dev] [Infra] Meeting Tuesday March 3rd at 19:00 UTC

2015-03-02 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday March 3rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

In case you missed it or would like a refresher, meeting logs and
minutes from our last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-24-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-24-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-24-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-03-02 Thread Kyle Mestery
On Mon, Mar 2, 2015 at 9:57 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi Daniel,

 thanks for a clear write-up of the matter and food for thought.

 I think the idea of having a more smooth development mode that would not
 make people wait for 6+ months to release a new feature is great.

 ++


 It's insane to expect that feature priorities won't ever slightly
 shift in the next 6 months. Having a limited list of features targeted
 for the next 6 months is prone to mistakes, since people behind some
 of approved features may need to postpone the effort for any type of
 reasons (personal, job rush, bad resource allocation, ...), and we
 then end up with approved specs with no actual code drops, using
 review 'slots' that would better be spent for other features that were
 not that lucky to get a rubber stamp during spec phase. Prior resource
 allocation would probably work somehow if we were working for the same
 company that would define priorities to us, but it's not the case.

 It should be noted that even though Nova is using slots for reviews,
Neutron is not. I've found that it's hard to try and slot people in to
review specific things. During Juno I tried this for Neutron, and it failed
miserably. For Kilo in Neutron, we're not using slots but instead I've
tried to highlight the approved specs of Essential and High priority
for review by all reviewers, core and non-core included. It's gone ok, but
the reality is you can't force people to review things. There are steps
submitters can take to try and get timely review (lots of small, easy to
digest patches, quick turnaround of comments, engagement in IRC and ML,
etc.).


 Anecdotally, in neutron, we have two Essential blueprints for Kilo,
 and there are no code drops or patches in review for any of those, so
 I would expect them to fail to merge. At the same time, I will need to
 wait for the end of Kilo to consider adding support for guru reports
 to the project. Or in oslo world, I will need to wait for Liberty to
 introduce features in oslo.policy that are needed by neutron to switch
 to it, etc.

 To be fair, there are many reasons those two Essential BPs do not have
code. I still expect the Pecan-focused one to have code, but I already moved
the Plugin one out of Kilo at this point because there was no chance the
code would land.

But I get your point here. I think this thread has highlighted the fact
that the BP/spec process worked to some extent, but for small things, the
core reviewer team should have the ability to say “Yes, we can easily merge
that, let's approve that spec” even if it's late in the cycle.


 Another problem is that currently milestones are used merely for
 targeting bugs and features, but no one really cares about whether the
 actual milestone shipment actually works. Again, a story from neutron
 world: Kilo-1 was shipped in the middle of the advanced services split,
 with some essential patches around distutils setup missing (no proper
 migration plan applied, conflicting config files in neutron and *aas
 repos, etc.)

 This is true; the milestone releases matter but are not given enough focus,
and they release (for the most part) regardless of the items in them, given
they are not long-lived, etc.

So I'm all for reforms around processes we apply.

 If there's one thing OpenStack is good at, it's change.


 That said, I don't believe the real problem here is that we don't
 generate project tarballs frequently enough.

 Major problems I see as critical to tackle in our dev process are:

 - enforced spec/dev mode. Solution: allow proposing (and approving) a
 reasonable spec at any moment in the cycle; allow code drops for
 approved specs at any moment in the cycle (except pre-release
 stabilization time); stop targeting specs: if it's sane, it's probably
 sane N+2 cycles from now too.

 I'd say this is fine for specs that are small and people agree can easily
be merged. I'd say this is not the case for large features near the end of
the release which are unlikely to gather enough review momentum to actually
merge.


 - core team rubber stamping a random set of specs and putting -2 on
 all other specs due to project priorities. Solution: stop pretending
 the core team (or core drivers) can reasonably estimate review and
 coding resources for the next cycle. Instead, allow the community to
 decide what's worth the effort by approving all technically reasonable
 specs and allowing everyone to invest time and effort in the specs
 (s)he deems worth it.

 If you're referring to Neutron here, I think you underestimate the
amount of time the neutron-drivers team (along with a handful of other
folks) spent reviewing specs and trying to allocate them for this release.
We're not just rubber stamping things, we're reviewing, providing comments,
and ensuring things fit in a consistent roadmap for the next release. In
the past, we've had this sort of wild west where all specs are approved,
everyone focuses on 

Re: [openstack-dev] [cinder][horizon]Proper error handling/propagation to UI

2015-03-02 Thread Eduard Matei
Thanks Avishay.
In our case the middle layer (our storage appliance) doesn't allow a
snapshot to be deleted if it has clones, due to an internal implementation
that optimizes storage by using a dependency tree, where the base
volume is the root of the tree, snapshots or clones are nodes, and their
clones are leaves. So deleting a middle point (a node) is impossible
without deleting all of its children.
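A minimal sketch of that dependency-tree rule (all names invented for illustration; not the appliance's actual code — SnapshotIsBusy here stands in for cinder's exception of the same name):

```python
class SnapshotIsBusy(Exception):
    """Stand-in for cinder.exception.SnapshotIsBusy."""

class DependencyTree:
    """Base volume at the root; snapshots/clones hang off their source."""
    def __init__(self):
        self.children = {}   # node name -> list of direct children
        self.parent = {}     # node name -> parent name (or None)

    def add(self, node, parent=None):
        self.children[node] = []
        self.parent[node] = parent
        if parent is not None:
            self.children[parent].append(node)

    def delete(self, node):
        if self.children[node]:
            # Deleting a middle node would orphan its clones.
            raise SnapshotIsBusy("%s still has dependents" % node)
        parent = self.parent.pop(node)
        if parent is not None:
            self.children[parent].remove(node)
        del self.children[node]

tree = DependencyTree()
tree.add("base")
tree.add("snap1", parent="base")
tree.add("clone1", parent="snap1")
```

Here tree.delete("snap1") raises SnapshotIsBusy until "clone1" has been deleted first, mirroring the behavior described above.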

Eduard

On Mon, Mar 2, 2015 at 8:10 PM, Avishay Traeger avis...@stratoscale.com
wrote:

 Sorry, I meant to say that the expected behavior is that volumes are
 independent entities, and therefore you should be able to delete a snapshot
 even if it has volumes created from it (just like you should be able to
 delete a volume that has clones from it).  The exception is that Cinder
 will not permit you to delete a volume that has snapshots.

 On Mon, Mar 2, 2015 at 3:22 PM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 @Duncan:
 I tried with lvmdriver-1, fails with error:
 ImageCopyFailure: Failed to copy image to volume: qemu-img:
 /dev/mapper/stack--volumes--lvmdriver--1-volume--e8323fc5--8ce4--4676--bbec--0a85efd866fc:
 error while converting raw: Could not open device: Permission denied

 It's been configured with 2 drivers (ours, and lvmdriver), but our driver
 works, so not sure where it fails.

 Eduard

 On Mon, Mar 2, 2015 at 8:23 AM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Thanks
 @Duncan: I'll try with the lvm driver.
 @Avishay, i'm not trying to delete a volume created from a snapshot, i'm
 trying to delete a snapshot that has volumes created from it (actually i
 need to prevent this action and properly report the cause of the failure:
 SnapshotIsBusy).


 Eduard

 On Mon, Mar 2, 2015 at 7:57 AM, Avishay Traeger avis...@stratoscale.com
  wrote:

 Deleting a volume created from a snapshot is permitted.  Performing
 operations on a volume created from snapshot should have the same behavior
 as volumes created from volumes, images, or empty (no source).  In all of
 these cases, the volume should be deleted, regardless of where it came
 from.  Independence from source is one of the differences between volumes
 and snapshots in Cinder.  The driver must take care to ensure this.

 As to your question about propagating errors without changing an
 object's state, that is unfortunately not doable in Cinder today (or any
 other OpenStack project as far as I know).  The object's state is currently
 the only mechanism for reporting an operation's success or failure.

 On Sun, Mar 1, 2015 at 6:07 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 I thought that case should be caught well before it gets to the
 driver. Can you retry with the LVM driver please?

 On 27 February 2015 at 10:48, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Hi,

 We've been testing our cinder driver extensively and found a strange
 behavior in the UI:
 - when trying to delete a snapshot that has clones (a volume created
 from the snapshot), an error is raised in our driver, which turns into
 error_deleting in cinder and the UI; further actions on that snapshot are
 impossible from the UI; the user has to go to the CLI and run cinder
 snapshot-reset-state to be able to delete it (after having deleted the
 clones)
 - to help with that, we implemented a check in the driver and now we
 raise exception.SnapshotIsBusy; now the snapshot remains available (as it
 should be) but no error bubble is shown in the UI (only the green one:
 “Success. Scheduled deleting of...”). So the user has to go to the c-vol
 screen and check the cause of the error

 So question: how should we handle this so that
 a. The snapshot remains in state available
 b. An error bubble is shown in the UI stating the cause.
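One way to read requirement (a) is as manager-side status handling: a "busy" exception restores the previous state, while any other failure lands in error_deleting. The sketch below is a hypothetical illustration of that flow, not Cinder's actual volume-manager code:

```python
class SnapshotIsBusy(Exception):
    """Stand-in for cinder.exception.SnapshotIsBusy."""

def run_delete(snapshot, driver_delete):
    """Apply the status transitions described above around a driver call."""
    snapshot["status"] = "deleting"
    try:
        driver_delete(snapshot)
    except SnapshotIsBusy:
        snapshot["status"] = "available"       # (a) keep the snapshot usable
        return "snapshot is busy"              # (b) a cause the UI can show
    except Exception:
        snapshot["status"] = "error_deleting"  # requires a manual state reset
        return "unknown error"
    snapshot["status"] = "deleted"
    return None

def busy_driver(snapshot):
    raise SnapshotIsBusy()

snap = {"id": "s1", "status": "available"}
msg = run_delete(snap, busy_driver)
```

The remaining gap — actually surfacing msg as an error bubble instead of the green "Success" toast — is the Horizon-side part of the question.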

 Thanks,
 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*

 Disclaimer:
 This email and any files transmitted with it are confidential and 
 intended solely for the use of the individual or entity to whom they are 
 addressed.
 If you are not the named addressee or an employee or agent responsible 
 for delivering this message to the named addressee, you are hereby 
 notified that you are not authorized to read, print, retain, copy or 
 disseminate this message or any part of it. If you have received this 
 email in error we request you to notify us by reply e-mail and to delete 
 all electronic files of the message. If you are not the intended 
 recipient you are notified that disclosing, copying, distributing or 
 taking any action in reliance on the contents of this information is 
 strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive 
 late or incomplete, or contain viruses. The sender therefore does not 
 accept liability for any errors or omissions in the content of this 
 message, and 

[openstack-dev] [Ironic][TripleO] IPMI controller for OpenStack instances

2015-03-02 Thread Ben Nemec
It's been quite a while since my last QuintupleO update, mostly due to
excessive busy-ness, but I've been asked about it enough times since
then that I decided to make time for it.  I made another blog post with
more details [1], but the gist is that I have a pyghmi-based IPMI
controller for OpenStack instances that allows Ironic deploys to work as
long as you have my Nova and Neutron hacks applied too.

I still haven't done a full end to end TripleO run in this environment
yet, but I believe this makes all of the necessary pieces available in
case someone wants to try it.  It should be possible to run against an
environment like this using the baremetal-focused setup since it behaves
more like baremetal than the virsh-based setups do (no pxe_ssh driver,
for example).

Anyway, hit me up on IRC if you have questions or want to try this out.
 Thanks.

[1]: http://blog.nemebean.com/content/ipmi-controller-openstack-instances

-Ben



Re: [openstack-dev] [cinder][horizon]Proper error handling/propagation to UI

2015-03-02 Thread Avishay Traeger
Sorry, I meant to say that the expected behavior is that volumes are
independent entities, and therefore you should be able to delete a snapshot
even if it has volumes created from it (just like you should be able to
delete a volume that has clones from it).  The exception is that Cinder
will not permit you to delete a volume that has snapshots.

On Mon, Mar 2, 2015 at 3:22 PM, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:

 @Duncan:
 I tried with lvmdriver-1, fails with error:
 ImageCopyFailure: Failed to copy image to volume: qemu-img:
 /dev/mapper/stack--volumes--lvmdriver--1-volume--e8323fc5--8ce4--4676--bbec--0a85efd866fc:
 error while converting raw: Could not open device: Permission denied

 It's been configured with 2 drivers (ours, and lvmdriver), but our driver
 works, so not sure where it fails.

 Eduard

 On Mon, Mar 2, 2015 at 8:23 AM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Thanks
 @Duncan: I'll try with the lvm driver.
 @Avishay, i'm not trying to delete a volume created from a snapshot, i'm
 trying to delete a snapshot that has volumes created from it (actually i
 need to prevent this action and properly report the cause of the failure:
 SnapshotIsBusy).


 Eduard

 On Mon, Mar 2, 2015 at 7:57 AM, Avishay Traeger avis...@stratoscale.com
 wrote:

 Deleting a volume created from a snapshot is permitted.  Performing
 operations on a volume created from snapshot should have the same behavior
 as volumes created from volumes, images, or empty (no source).  In all of
 these cases, the volume should be deleted, regardless of where it came
 from.  Independence from source is one of the differences between volumes
 and snapshots in Cinder.  The driver must take care to ensure this.

 As to your question about propagating errors without changing an
 object's state, that is unfortunately not doable in Cinder today (or any
 other OpenStack project as far as I know).  The object's state is currently
 the only mechanism for reporting an operation's success or failure.

 On Sun, Mar 1, 2015 at 6:07 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 I thought that case should be caught well before it gets to the driver.
 Can you retry with the LVM driver please?

 On 27 February 2015 at 10:48, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

 Hi,

 We've been testing our cinder driver extensively and found a strange
 behavior in the UI:
  - when trying to delete a snapshot that has clones (a volume created
  from the snapshot), an error is raised in our driver, which turns into
  error_deleting in cinder and the UI; further actions on that snapshot are
  impossible from the UI; the user has to go to the CLI and run cinder
  snapshot-reset-state to be able to delete it (after having deleted the
  clones)
  - to help with that, we implemented a check in the driver and now we
  raise exception.SnapshotIsBusy; now the snapshot remains available (as it
  should be) but no error bubble is shown in the UI (only the green one:
  “Success. Scheduled deleting of...”). So the user has to go to the c-vol
  screen and check the cause of the error

 So question: how should we handle this so that
 a. The snapshot remains in state available
 b. An error bubble is shown in the UI stating the cause.
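
 To illustrate the driver-side half of the question, here is a minimal,
 self-contained sketch of the SnapshotIsBusy approach described above.
 The class and helper names are hypothetical (real driver code would
 import the exception from cinder.exception and plug into Cinder's
 driver interface); this only shows the control flow.

 ```python
 class SnapshotIsBusy(Exception):
     """Stand-in for cinder.exception.SnapshotIsBusy (assumption: real
     code imports it from cinder.exception)."""
     def __init__(self, snapshot_name):
         super().__init__("snapshot %s has dependent volumes" % snapshot_name)

 class SketchDriver:
     """Hypothetical driver; names are illustrative, not Cinder's API."""
     def __init__(self, clones):
         self._clones = clones  # snapshot name -> list of clone names

     def delete_snapshot(self, snapshot):
         name = snapshot["name"]
         if self._clones.get(name):
             # Refusing with SnapshotIsBusy lets the manager keep the
             # snapshot 'available' instead of flipping it to
             # 'error_deleting'.
             raise SnapshotIsBusy(snapshot_name=name)
         return "deleted %s" % name

 driver = SketchDriver({"snap1": ["vol-a"]})
 try:
     driver.delete_snapshot({"name": "snap1"})
     outcome = "deleted"
 except SnapshotIsBusy:
     outcome = "busy"
 print(outcome)  # -> busy
 ```

 Part (b), surfacing the error bubble in Horizon, is a separate problem:
 the API call itself still succeeds, so the UI only sees the scheduled
 delete.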

 Thanks,
 Eduard

 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*

 Disclaimer:
 This email and any files transmitted with it are confidential and 
 intended solely for the use of the individual or entity to whom they are 
 addressed.
 If you are not the named addressee or an employee or agent responsible 
 for delivering this message to the named addressee, you are hereby 
 notified that you are not authorized to read, print, retain, copy or 
 disseminate this message or any part of it. If you have received this 
 email in error we request you to notify us by reply e-mail and to delete 
 all electronic files of the message. If you are not the intended 
 recipient you are notified that disclosing, copying, distributing or 
 taking any action in reliance on the contents of this information is 
 strictly prohibited.
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late 
 or incomplete, or contain viruses. The sender therefore does not accept 
 liability for any errors or omissions in the content of this message, and 
 shall have no liability for any loss or damage suffered by the user, 
 which arise as a result of e-mail transmission.



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Duncan Thomas



Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-03-02 Thread Tim Hinrichs
Hi Ramki,

Good suggestion.  I added a paragraph at the top of the doc in the Overview 
section to explain what Delegation means and mentioned that some policies won’t 
be delegated.

Tim

On Feb 26, 2015, at 3:15 PM, Ramki Krishnan r...@brocade.com wrote:

Hi Tim, All,

The document is in great shape! Any global policies such as those impacting 
compute and network (e.g. CPU utilization and network bandwidth utilization) 
would be handled in Congress and not delegated. It would be worthwhile to 
capture this.

Thanks,
Ramki

From: Tim Hinrichs [mailto:thinri...@vmware.com]
Sent: Monday, February 23, 2015 11:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Congress][Delegation] Initial workflow design


Hi all,



I made a heavy editing pass of the Delegation google doc, incorporating many of 
your comments and my latest investigations into VM-placement.  I left the old 
stuff in place at the end of the doc and put the new stuff at the top.



My goal was to propose an end-to-end workflow for a PoC that we could put 
together quickly to help us explore the delegation interface.  We should 
iterate on this design until we have something that we think is workable.   And 
by all means pipe up if you think we need a totally different starting point to 
begin the iteration.



(BTW I'm thinking of the integration with solver-scheduler as a long-term 
solution to VM-placement, once we get the delegation interface sorted out.)



https://docs.google.com/document/d/1ksDilJYXV-5AXWON8PLMedDKr9NpS8VbT0jIy_MIEtI/edit#



Tim



Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2015-03-02 Thread Anita Kuno
On 03/01/2015 09:13 PM, kwon-ho lee wrote:
 Hello, OpenStack dev members,
 
 Is there any problem with the Hyper-V Nova CI infrastructure?
 
 I don't know why my patch set failed.
 
 My first patch set succeeded, and then I changed the commit message,
 and it failed the Hyper-V test.
 
 Could you tell me the reason?
 
 Here is my test result links.
 http://64.119.130.115/156126/4/
 
 Thanks
 Kwonho
 
 
 
 
Thank you for asking the question, Kwonho.

I investigated this system and talked with one of the operators. The
system is offline now; you can track its status on their third-party
system wiki page:
https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI

Thank you,
Anita.



Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread James E. Blair
Duncan Thomas duncan.tho...@gmail.com writes:

 Why do you say auto-abandon is the wrong tool? I've no problem with the 1
 week warning if somebody wants to implement it - I can see the value. A
 change-set that has been ignored for X weeks is pretty much the dictionary
 definition of abandoned, and restoring it is one mouse click. Maybe put
 something more verbose in the auto-abandon message than we have been,
 encouraging those who feel it shouldn't have been marked abandoned to
 restore it (and respond quicker in future) but other than that we seem to
 be using the right tool to my eyes

Why do you feel the need to abandon changes submitted by other people?

Is it because you have a list of changes to review, and they persist on
that list?  If so, let's work on making a better list for you.  We have
the tools.  What query/page/list/etc are you looking at where you see
changes that you don't want to see?

-Jim



Re: [openstack-dev] [Manila] Ceph native driver for manila

2015-03-02 Thread Luis Pabon
What is the status of virtfs?  I am not sure if it is being maintained.  Does 
anyone know?

- Luis

- Original Message -
From: Danny Al-Gaaf danny.al-g...@bisect.de
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, ceph-de...@vger.kernel.org
Sent: Sunday, March 1, 2015 9:07:36 AM
Subject: Re: [openstack-dev] [Manila] Ceph native driver for manila

Am 27.02.2015 um 01:04 schrieb Sage Weil:
 [sorry for ceph-devel double-post, forgot to include
 openstack-dev]
 
 Hi everyone,
 
 The online Ceph Developer Summit is next week[1] and among other
 things we'll be talking about how to support CephFS in Manila.  At
 a high level, there are basically two paths:

We discussed the CephFS Manila topic also on the last Manila Midcycle
Meetup (Kilo) [1][2]

 2) Native CephFS driver
 
 As I currently understand it,
 
 - The driver will set up CephFS auth credentials so that the guest
 VM can mount CephFS directly - The guest VM will need access to the
 Ceph network.  That makes this mainly interesting for private
 clouds and trusted environments. - The guest is responsible for
 running 'mount -t ceph ...'. - I'm not sure how we provide the auth
 credential to the user/guest...

The auth credentials currently need to be handled by an application
orchestration solution, I guess. I see no solution at the Manila layer
level at the moment.

If Ceph provided OpenStack Keystone authentication for rados/cephfs
instead of CephX, it could be handled via app orchestration easily.

 This would perform better than an NFS gateway, but there are
 several gaps on the security side that make this unusable currently
 in an untrusted environment:
 
 - The CephFS MDS auth credentials currently are _very_ basic.  As
 in, binary: can this host mount or it cannot.  We have the auth cap
 string parsing in place to restrict to a subdirectory (e.g., this
 tenant can only mount /tenants/foo), but the MDS does not enforce
 this yet.  [medium project to add that]
 
 - The same credential could be used directly via librados to access
 the data pool directly, regardless of what the MDS has to say about
 the namespace.  There are two ways around this:
 
 1- Give each tenant a separate rados pool.  This works today.
 You'd set a directory policy that puts all files created in that
 subdirectory in that tenant's pool, then only let the client access
 those rados pools.
 
 1a- We currently lack an MDS auth capability that restricts which 
 clients get to change that policy.  [small project]
 
 2- Extend the MDS file layouts to use the rados namespaces so that
  users can be separated within the same rados pool.  [Medium
 project]
 
 3- Something fancy with MDS-generated capabilities specifying which
  rados objects clients get to read.  This probably falls in the
 category of research, although there are some papers we've seen
 that look promising. [big project]
 
 Anyway, this leads to a few questions:
 
 - Who is interested in using Manila to attach CephFS to guest VMs? 
 - What use cases are you interested? - How important is security in
 your environment?

As you know, we (Deutsche Telekom) are very interested in providing
shared filesystems to VMs via CephFS instead of e.g. NFS. We can
provide/discuss use cases at CDS.

For us security is critical, and so is performance. The first
solution, via ganesha, is not what we prefer (using CephFS via p9 and
NFS would not perform that well, I guess). The second solution, using
CephFS directly from the VM, would be bad from a security point of
view, since we can't expose the Ceph public network directly to the
VMs without all the security issues we discussed already.

We discussed a third option during the midcycle:

Mount CephFS directly on the host system and provide the filesystem to
the VMs via p9/virtfs. This needs Nova integration (I will work on a
PoC patch for this) to set up the libvirt config correctly for virtfs.
It solves the security issue and the auth key distribution for the
VMs, but it may introduce performance issues due to virtfs usage. We
have to check what the specific performance impact will be. Currently
this is the preferred solution for our use cases.

What's still missing in this solution is user/tenant/subtree
separation, as in the 2nd option. But this is needed for CephFS in
general anyway.
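
The per-tenant restriction Sage describes (option 1 plus the
path-restricted MDS cap) amounts to a CephX capability set created per
tenant. A sketch of what a driver might generate follows; the cap
syntax is an assumption based on the thread, and as noted above the
MDS does not yet enforce the path restriction:

```python
def tenant_caps(tenant, pool):
    """Sketch of per-tenant CephX caps: a path-restricted MDS cap plus
    access limited to the tenant's own rados pool. Syntax is an
    assumption; the MDS does not yet enforce the path part."""
    return {
        "mds": "allow rw path=/tenants/{}".format(tenant),
        "mon": "allow r",
        "osd": "allow rw pool={}".format(pool),
    }

caps = tenant_caps("foo", "manila-foo")
print(caps["mds"])  # -> allow rw path=/tenants/foo
```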

Danny

[1] https://etherpad.openstack.org/p/manila-kilo-midcycle-meetup
[2] https://etherpad.openstack.org/p/manila-meetup-winter-2015

--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html



Re: [openstack-dev] auto-abandon changesets considered harmful

2015-03-02 Thread Kyle Mestery
On Mon, Mar 2, 2015 at 1:28 PM, Stefano Maffulli stef...@openstack.org
wrote:

 On Mon, 2015-03-02 at 12:00 -0700, Doug Wiegley wrote:
 Why do you feel the need to keep them?  Do you regularly look at
 older patches? Do you know anyone who does?
 
 I don't think that's the point. The point is to try to improve
 contributors' lives by providing them one last useful comment before
 ignoring their contribution for good.

 To me, the abandon does this. It's basically giving a good reason for why
it's being abandoned (4 weeks old, one -2. Or, 4 weeks old, no new
comments), with a nice message on how to re-enable the patch and get fresh
test results.


 Tom gave a good suggestion IMHO, and I've found a couple of cases where
 maybe, if someone had reached out to the contributor and offered some
 help, their patch would probably have merged (or they would have learned
 a useful, explicit lesson). Instead, auto-abandon has an *implicit*
 connotation: a contributor may never know exactly why their patch was
 abandoned.

 I suggest giving Tom's proposal a second look, because I think it
 doesn't add any more workload for reviewers but provides a clean exit
 for a lagging changeset.

 /stef





Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Doug Wiegley

 On Mar 2, 2015, at 11:44 AM, James E. Blair cor...@inaugust.com wrote:
 
 Duncan Thomas duncan.tho...@gmail.com writes:
 
 Why do you say auto-abandon is the wrong tool? I've no problem with the 1
 week warning if somebody wants to implement it - I can see the value. A
 change-set that has been ignored for X weeks is pretty much the dictionary
 definition of abandoned, and restoring it is one mouse click. Maybe put
 something more verbose in the auto-abandon message than we have been,
 encouraging those who feel it shouldn't have been marked abandoned to
 restore it (and respond quicker in future) but other than that we seem to
 be using the right tool to my eyes
 
 Why do you feel the need to abandon changes submitted by other people?
 
Why do you feel the need to keep them?  Do you regularly look at older 
patches? Do you know anyone who does?

Speaking as a contributor, I personally vastly prefer clicking 'Restore' to 
having Gerrit be a haystack of cruft. I have/had many frustrations trying to 
become a useful contributor.  Abandoned patches were never one of them.

 Is it because you have a list of changes to review, and they persist on
 that list?  If so, let's work on making a better list for you.  We have
 the tools.  What query/page/list/etc are you looking at where you see
 changes that you don't want to see?

When I was starting out, hearing that the best way I could help out was to 
do some reviews, I'd naively browse to gerrit and look for something easy 
to get started with. The default query (status:open) means that there is 
about a 110% (hyperbole added) chance that I'll pick something to review that's 
a waste of time.

A default query that edited out old, jenkins failing, and -2 stuff would be 
helpful.  A default or easy query that highlighted things relevant to the 
current milestone's blueprints and bugs would be SUPER useful to guiding folks 
towards the most useful reviews to be doing for a given project.

The current system does not do a good job of intuitively guiding folks towards 
the right answer.  You have to know the tribal knowledge first, and/or which of 
six conflicting wiki pages has the right info to get started with  (more 
hyperbole added.)

Thanks,
doug


 
 -Jim
 


Re: [openstack-dev] auto-abandon changesets considered harmful

2015-03-02 Thread Stefano Maffulli
On Mon, 2015-03-02 at 12:00 -0700, Doug Wiegley wrote:
 Why do you feel the need to keep them?  Do you regularly look at
 older patches? Do you know anyone who does?
 
I don't think that's the point. The point is to try to improve
contributors' lives by providing them one last useful comment before
ignoring their contribution for good.

Tom gave a good suggestion IMHO, and I've found a couple of cases where
maybe, if someone had reached out to the contributor and offered some
help, their patch would probably have merged (or they would have learned
a useful, explicit lesson). Instead, auto-abandon has an *implicit*
connotation: a contributor may never know exactly why their patch was
abandoned.

I suggest giving Tom's proposal a second look, because I think it
doesn't add any more workload for reviewers but provides a clean exit
for a lagging changeset.
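
Tom's pre-abandon warning could be as simple as a bot posting a review
comment. The sketch below only builds the JSON body such a bot might
POST to Gerrit's review endpoint (/changes/<id>/revisions/current/review);
the message wording and the thresholds are assumptions, not an agreed
policy:

```python
import json

def warning_payload(weeks_idle=3, grace_weeks=1):
    """Build the review-comment body a pre-abandon warning bot might
    send. Endpoint, wording, and thresholds are assumptions."""
    message = (
        "This change has seen no activity for {} weeks after negative "
        "feedback and will be auto-abandoned in {} week(s) unless it is "
        "updated. Restoring an abandoned change is one click, and you "
        "can ask for help on IRC.".format(weeks_idle, grace_weeks))
    return json.dumps({"message": message})

print(warning_payload())
```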

/stef





Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread James E. Blair
Doug Wiegley doug...@parksidesoftware.com writes:

 A default query that edited out old, jenkins failing, and -2 stuff
 would be helpful.  A default or easy query that highlighted things
 relevant to the current milestone's blueprints and bugs would be SUPER
 useful to guiding folks towards the most useful reviews to be doing
 for a given project.

Great, this is the kind of information I'm looking for.  Please tell me
what page or query you specifically use to find reviews, and let's work
to make sure it shows you the right information.

-Jim



Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread James E. Blair
Stefano branched this thread from an older one to talk about
auto-abandon.  In the previous thread, I believe I explained my
concerns, but since the topic split, perhaps it would be good to
summarize why this is an issue.

1) A core reviewer forcefully abandoning a change contributed by someone
else can be a very negative action.  It's one thing for a contributor to
say I have abandoned this effort, it's very different for a core
reviewer to do that for them.  It is a very strong action and signal,
and should not be taken lightly.

2) Many changes become inactive through no fault of their authors.  For
instance, a change to nova that missed a freeze deadline might need to
be deferred for 3 months or more.  It should not be automatically
abandoned.

3) Abandoned changes are not visible to their authors.  Many
contributors will not see the abandoned change.  Many contributors use
their list of open reviews to get their work done, but if you abandon
their changes, they will no longer see that there is work for them to
do.

4) Abandoned changes are not visible to other contributors.  Other
people contributing to a project may see a change that they could fix up
and get merged.  However, if the change is abandoned, they are unlikely
to find it.

5) Abandoned changes are not able to be resumed by other contributors.
Even if they managed to find changes despite the obstacles imposed by
#3, they would be unable to restore the change and continue working on
it.

In short, there are a number of negative impacts to contributors, core
reviewers, and maintainers of projects caused by automatically
abandoning changes.  These are not hypothetical; I have seen all of
these negative impacts on projects I contribute to.

Now this is the most important part -- I can not emphasize this enough:

  Whatever is being achieved by auto-abandoning can be achieved through
  other, less harmful, methods.

Core reviewers should not have to wade through lots of extra changes.
They should not be called upon to deal with drive-by changes that people
are not willing to collaborate on.  Abandoning changes is an imperfect
solution to a problem, and we can find a better solution.

We have tools that can filter out changes that are not active so that
core reviewers are not bothered by them.  In fact, the auto-abandon
script itself is built on one or two exceedingly simple queries which,
when reversed, will show you only the changes it would not abandon.
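
The query-and-its-reverse idea can be made concrete. The functions
below build the two search strings; the exact operator syntax is an
assumption and varies by Gerrit version, so treat this as a sketch:

```python
def stale_query(age_weeks=4):
    # Roughly the kind of search an auto-abandon script runs: open
    # changes untouched for N weeks that carry negative feedback.
    # (Operator syntax is an assumption; check your Gerrit version.)
    return ("status:open age:{}w "
            "(label:Code-Review<=-1 OR label:Verified<=-1)".format(age_weeks))

def reviewer_query(age_weeks=4):
    # The reverse of the same filter: hide inactive changes from a
    # reviewer dashboard instead of abandoning them.
    return "status:open -age:{}w".format(age_weeks)

print(stale_query())
print(reviewer_query())
```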

What I hope to gain by this conversation is to identify where the gaps
in our tooling are.  If you feel strongly that you do not want to see
inactive changes, please tell me what query, dashboard, tool, page,
etc., that you use to find changes to review.  We can help make sure
that it is structured to filter out changes you are not interested in,
and helps surface changes you want to work on.

Thanks,

Jim



Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread John Griffith
On Mon, Mar 2, 2015 at 12:52 PM, James E. Blair cor...@inaugust.com wrote:

 Doug Wiegley doug...@parksidesoftware.com writes:

  A default query that edited out old, jenkins failing, and -2 stuff
  would be helpful.  A default or easy query that highlighted things
  relevant to the current milestone's blueprints and bugs would be SUPER
  useful to guiding folks towards the most useful reviews to be doing
  for a given project.

 Great, this is the kind of information I'm looking for.  Please tell me
 what page or query you specifically use to find reviews, and let's work
 to make sure it shows you the right information.

 -Jim



Wow, some really good (and some surprising) viewpoints.  There are
things here I hadn't thought about, so that's great.  I also like
Tom's idea of an automatic comment being added; that's pretty cool in
my opinion.  I also think James makes some great points about the
query tools; combining all of this sounds great.

Thinking more about this today, the reality is that it's been several
months since I've made it far enough down my review list in gerrit to
get to anything more than a few days old anyway, so if it presents a
better experience for the submitter, that's great. In other words, it
probably doesn't matter any more whether it's in the queue or not.

My own personal opinions (stated earlier) aside, I think a compromise
along the lines suggested is a great approach and a win for everybody
involved.


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-03-02 Thread Jay Pipes

On 03/02/2015 03:56 AM, Chris Dent wrote:


I (and a few others) have been using gabbi[1] for a couple of months now,
and it has proven very useful and evolved a bit, so I thought it would be
worthwhile to follow up on my original message and give an update.

Some recent reviews[1] give a sample of how it can be used to validate
an existing API as well as search for less-than-perfect HTTP behavior
(e.g. sending a 404 when a 405 would be correct).

Regular use has led to some important changes:

* It can now be integrated with other tox targets so it can run
   alongside other functional tests.
* Individual tests can be xfailed and skipped. An entire YAML test
   file can be skipped.
* For those APIs which provide insufficient hypermedia support, the
   ability to inspect and reference the prior test and use template
   variables in the current request has been expanded (with support for
   environment variables pending a merge).

My original motivation for creating the tool was to make it easier to
learn APIs by causing a body of readable YAML files to exist. This
remains important, but what I've found is that writing the tests is
itself an incredible tool. Not only is it very easy to write tests
(throw some stuff at a URL and see what happens) and find (many) bugs
as a result, but the exploratory nature of test writing also drives a
learning process.

You'll note that the reviews below are just the YAML files. That's
because the test loading and fixture python code is already merged.
Adding tests is just a matter of adding more YAML. An interesting
trick is to run a small segment of the gabbi tests in a project (e.g.
just one file that represents one type of resource) while producing
coverage data. Reviewing the coverage of just the controller for that
resource can help drive test creation and separation.
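
For readers who haven't seen gabbi, a test file is plain declarative
YAML. The snippet below embeds a minimal example as a string; the keys
shown (name, GET, status) are a sketch from memory of gabbi's format,
so consult the gabbi documentation for the authoritative schema:

```python
# A sketch of a gabbi YAML test file; keys are from memory, not
# authoritative. Each entry throws a request at a URL and asserts
# on the response.
SAMPLE_GABBI_YAML = """
tests:
  - name: root returns ok
    GET: /
    status: 200

  - name: unknown resource is a 404
    GET: /no-such-thing
    status: 404
"""

# Crude sanity checks without pulling in a YAML parser:
assert "status: 200" in SAMPLE_GABBI_YAML
print(SAMPLE_GABBI_YAML.count("- name:"))
```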


Total awesomesauce, Chris :)

-jay



[openstack-dev] [Nova] Liberty specs are now open

2015-03-02 Thread Michael Still
Hi,

this is just a quick note to let you know that Liberty specs are now
open for Nova. By open I mean that it is possible to upload such a
spec, but I wouldn't expect to see much review effort on these until
Kilo is ready.

If authors of previously approved specs (Juno or Kilo) want to use the
Previously-approved: release commit message tag, that will make
fast-tracking re-approvals easier.

For ease of reference, here's the guidelines we used in Kilo, although
the Liberty PTL might want to tweak these:

Blueprints approved in Juno or Kilo
===

For specs approved in Juno or Kilo, there is a fast track approval
process for Liberty. The steps to get your spec re-approved are:

 - Copy your spec from the specs/oldrelease/approved directory to
the specs/liberty/approved directory. Note that if we declared your
spec to be a partial implementation in Kilo, it might be in the
implemented directory. This was rare however.
 - Update the spec to match the new template
 - Commit, with the Previously-approved: oldrelease commit message tag
 - Upload using git review as normal

Reviewers will still do a full review of the spec, we are not offering
a rubber stamp of previously approved specs. However, we are requiring
only one +2 to merge these previously approved specs, so the process
should be a lot faster.

A note for core reviewers here -- please include a short note on why
you're doing a single +2 approval on the spec so future generations
remember why.

Trivial blueprints
==

We are not requiring specs for trivial blueprints in Liberty. Instead,
create a blueprint in Launchpad
at https://blueprints.launchpad.net/nova/+addspec and target the
specification to Liberty. New, targeted, unapproved specs will be
reviewed in weekly nova meetings once Kilo has been finished. If it is
agreed in the meeting that they are indeed trivial, they will be approved.

Other proposals
===

For other proposals, the process is the same as Juno and Kilo...
Propose a spec review against the specs/kilo/approved directory and
we'll review it from there.

-- 
Rackspace Australia
