Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Chris Friesen

On 04/01/2015 06:07 PM, Ian Wienand wrote:

On 04/02/2015 09:02 AM, Jeremy Stanley wrote:

but since parties who don't understand our mostly non-hierarchical
community can see those sets of access controls, they cling to them
as a sign of importance and hierarchy of the people listed within.



Once code is submitted, there *is* a hierarchy.  The only way
something gets merged in OpenStack is by Brownian motion of this
hierarchy.  These special cores float around and as a contributor
you just hope that two of them meet up and decide your change is
ready.  You have zero insight into when this might happen, if at all.
The efficiency is appalling but somehow we get there in the end.


This agrees with my experience as a new OpenStack contributor.  The process 
seemed very opaque, there was very little in the way of timely feedback, and it 
was frustrating not knowing if something was going to be reviewed in a day or a 
month (or two).  A year later I'm starting to see the relationships but it's 
still not as clear as it could be.



IMO requiring two cores to approve *every* change is too much.  What
we should do is move the responsibility downwards.  Currently, as a
contributor I am only 1/3 responsible for my change making it through.
I write it, test it, clean it up and contribute it; then require the
extra 2/3 to come from the hierarchy.  If you only need one core,
then core and myself share the responsibility for the change.  In my
mind, this better recognises the skill of the contributor -- we are
essentially saying we trust you.


Interesting idea.

Makes me wonder about the history... where did the two-core approval model 
come from originally?  Were there bad experiences previously with just one 
approver, or did it start out with two approvers?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread Kevin Benton
Coordinating communication between various backends for encapsulation
termination is something that would be really nice to address in Liberty.
I've added it to the etherpad to bring it up at the summit.[1]


1. http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
wrote:

 Hello,

 I think the easiest way could be to have your own mech_driver (AFAIK such
 drivers are meant for exactly this usage) to talk with external devices and
 tell them what tunnels they should establish.
 With the change to tun_ip that Henry proposes, the l2pop agent will be able
 to establish tunnels with external devices.

 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think about
  how external gateway could leverage the l2pop framework.
 
  Currently l2pop sends its fdb messages once the status of the port is
  modified. AFAIK, this status is only modified by agents, which send
  update_device_up/down().
  This issue also has to be addressed if we want agentless equipment to be
  announced through l2pop.
 
  Another way to do it is to introduce some BGP speakers with E-VPN
  capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
  [1] is an open-source BGP speaker which is able to do that.
  BGP is standardized, so equipment might already have it embedded.
 
  Last summit, we talked about this kind of idea [2]. We were going further
  by introducing the BGP speaker on each compute node, in use case B of [2].
 
  [1] https://github.com/Orange-OpenSource/bagpipe-bgp
  [2]
 http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in L2pop to store endpoints for ports on a
   tunnel type network, such as vxlan or gre. However this has some
   drawbacks:
  
   1) It only works with backends that have agents;
   2) Only one fixed IP is supported per agent;
   3) It is difficult to interact with other backends and the world outside
   of OpenStack.
  
   L2pop is already widely accepted and deployed in host-based overlays;
   however, because it uses agent_ip to populate the tunnel endpoint, it's
   very hard to co-exist and interoperate with other vxlan backends,
   especially agentless MDs.
  
   A small change is suggested: the tunnel endpoint should not be an
   attribute of the *agent*, but an attribute of the *port*. If we store
   it in something like *binding:tun_ip*, it is much easier for different
   backends to co-exist. The existing ovs and bridge agents need a small
   patch to put the local agent_ip into the port context binding fields
   when doing the port_up RPC.
  
   Several extra benefits may also be obtained this way:

   1) we can easily and naturally create an *external vxlan/gre port* which
   is not attached to a Nova-booted VM, with binding:tun_ip set at creation
   time;
   2) we can develop a *proxy agent* which manages a bunch of remote
   external backends, without the restriction of a single agent_ip.
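
   To make the proposal concrete, here is a rough sketch (editor's
   illustration, not code from the thread; all names are hypothetical) of
   how an l2pop fdb entry could be built from the port's own endpoint
   rather than from the owning agent's configuration:

       # Today the endpoint comes from the agent's configuration:
       agent = {'host': 'compute-1',
                'configurations': {'tunneling_ip': '10.0.0.5'}}

       # Proposed: the endpoint travels with the port, so agentless
       # backends (hardware VTEPs, external gateways) can join l2pop too.
       port = {
           'id': 'ab12...',
           'mac_address': 'fa:16:3e:aa:bb:cc',
           'binding:tun_ip': '192.0.2.10',  # set when the port is created
       }

       def fdb_entry(port, segmentation_id):
           """Build an l2pop-style forwarding entry keyed on the port's
           own tunnel endpoint instead of the agent's IP."""
           return {
               'segment_id': segmentation_id,
               'ports': {port['binding:tun_ip']:
                         [(port['mac_address'], None)]},
           }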
  
   Best Regards,
   Henry
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread Kevin Benton
Whoops, wrong link in last email.

https://etherpad.openstack.org/p/liberty-neutron-summit-topics

On Thu, Apr 2, 2015 at 12:50 AM, Kevin Benton blak...@gmail.com wrote:

 Coordinating communication between various backends for encapsulation
 termination is something that would be really nice to address in Liberty.
 I've added it to the etherpad to bring it up at the summit.[1]


 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

 On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
 wrote:

 Hello,

  I think the easiest way could be to have your own mech_driver (AFAIK such
  drivers are meant for exactly this usage) to talk with external devices and
  tell them what tunnels they should establish.
  With the change to tun_ip that Henry proposes, the l2pop agent will be able
  to establish tunnels with external devices.

 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think about
  how external gateway could leverage the l2pop framework.
 
   Currently l2pop sends its fdb messages once the status of the port is
   modified. AFAIK, this status is only modified by agents, which send
   update_device_up/down().
   This issue also has to be addressed if we want agentless equipment to be
   announced through l2pop.
 
   Another way to do it is to introduce some BGP speakers with E-VPN
   capabilities at the control plane of ML2 (as a MD for instance). Bagpipe
   [1] is an open-source BGP speaker which is able to do that.
   BGP is standardized, so equipment might already have it embedded.
 
   Last summit, we talked about this kind of idea [2]. We were going further
   by introducing the BGP speaker on each compute node, in use case B of [2].
 
   [1] https://github.com/Orange-OpenSource/bagpipe-bgp
  [2]
 http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in L2pop to store endpoints for ports on a
   tunnel type network, such as vxlan or gre. However this has some
   drawbacks:
  
    1) It only works with backends that have agents;
    2) Only one fixed IP is supported per agent;
    3) It is difficult to interact with other backends and the world outside
    of OpenStack.
  
    L2pop is already widely accepted and deployed in host-based overlays;
    however, because it uses agent_ip to populate the tunnel endpoint, it's
    very hard to co-exist and interoperate with other vxlan backends,
    especially agentless MDs.
  
    A small change is suggested: the tunnel endpoint should not be an
    attribute of the *agent*, but an attribute of the *port*. If we store
    it in something like *binding:tun_ip*, it is much easier for different
    backends to co-exist. The existing ovs and bridge agents need a small
    patch to put the local agent_ip into the port context binding fields
    when doing the port_up RPC.
  
    Several extra benefits may also be obtained this way:

    1) we can easily and naturally create an *external vxlan/gre port* which
    is not attached to a Nova-booted VM, with binding:tun_ip set at creation
    time;
    2) we can develop a *proxy agent* which manages a bunch of remote
    external backends, without the restriction of a single agent_ip.
  
   Best Regards,
   Henry
  
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Pozdrawiam
 Sławek Kapłoński
 sla...@kaplonski.pl

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] unit tests result in false negatives on system z platform CI

2015-04-02 Thread Markus Zoeller
Michael Still mi...@stillhq.com wrote on 04/01/2015 11:01:51 PM:

 From: Michael Still mi...@stillhq.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 04/01/2015 11:06 PM
 Subject: Re: [openstack-dev] [nova] unit tests result in false 
 negatives on system z platform CI
 
 Thanks for the detailed email on this. How about we add this to the
 agenda for this week's nova meeting?

Yes, that would be great. I see you have already put it on the agenda.
I will be in today's meeting.

Regards,
Markus Zoeller (markus_z)

 One option would be to add a fixture to some higher level test class,
 but perhaps someone has a better idea than that.
 
 Michael
 
 On Wed, Apr 1, 2015 at 8:54 PM, Markus Zoeller mzoel...@de.ibm.com 
wrote:
  [...]
  I'm looking for a way to express the assumption that x86 should be the
  default platform in the unit tests and to prevent calls to the underlying
  system. This has to be overridable if platform-specific code like [2] has
  to be tested.
 
  I'd like to discuss how that could be achieved in a maintainable way.
 
 
  References
  --
  [1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz
  [2] test_driver.py; test_get_guest_config_with_type_kvm_on_s390;
 
  https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L2592
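
A minimal sketch of the fixture idea mentioned above (editor's
illustration only; the class names are hypothetical): pin the reported
architecture to x86_64 in a base test class, and let individual tests
override it to exercise platform-specific paths like [2].

    import fixtures
    import testtools


    class PlatformFixture(fixtures.Fixture):
        """Force platform.machine() to report a fixed architecture."""

        def __init__(self, machine='x86_64'):
            super(PlatformFixture, self).__init__()
            self.machine = machine

        def setUp(self):
            super(PlatformFixture, self).setUp()
            self.useFixture(fixtures.MonkeyPatch(
                'platform.machine', lambda: self.machine))


    class TestCase(testtools.TestCase):
        def setUp(self):
            super(TestCase, self).setUp()
            # Default every unit test to x86_64 unless the test installs
            # its own PlatformFixture with a different machine value.
            self.useFixture(PlatformFixture())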




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Murano projects pylint job

2015-04-02 Thread Filip Blaha

Hi Serg

we can take inspiration from other projects like sahara. The important 
thing is that the pylint job should produce meaningful output of a 
reasonable size. Pylint without any configuration produces huge output, 
so we should pick which code checks are interesting for us and configure 
pylint accordingly. I will do some research on that.
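
For illustration, a minimal configuration along those lines might look
like the following (the selected message IDs are examples only, not an
agreed list):

    # .pylintrc -- disable everything, then re-enable only the checks
    # the team cares about; W0611 flags unused imports and R0801 is
    # the duplicate-code check mentioned above.
    [MESSAGES CONTROL]
    disable=all
    enable=E,W0611,R0801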


Regards
Filip

On 04/01/2015 05:57 PM, Serg Melikyan wrote:

Hi Filip,

I think adding a pylint job to the Murano gates is an awesome idea. Have 
you checked out how to do this?


On Wed, Apr 1, 2015 at 4:03 PM, Filip Blaha filip.bl...@hp.com wrote:


Hello

I have noticed that some OpenStack projects [1] use a pylint gate
job. From my point of view it could simplify code reviews, even as a
non-voting job, and generally it could improve code quality. Some
code issues, like code duplication, are not easy to discover during
code review, so an automatic job would be helpful. Please let me know
your opinion about that. Thanks

[1] https://review.openstack.org/#/c/164772/

Regards
Filip

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com


+7 (495) 640-4904, 0261
+7 (903) 156-0836


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Since there is no reply here, I have taken steps to become a core reviewer
of the (orphaned) repos [1], [2], [3], [4].

Should anyone want to take responsibility for them, please write to me.

I have also taken steps to get the fuel-qa script working and will make
sure the tests pass with the new manifests. I will also update the
manifests' version so that there are no more deprecation warnings.

P.

[1]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-glusterfs,access
[2]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-group-based-policy,access
[3]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-nfs,access
[4]
https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-cinder-netapp,access

On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
 Hello,
 
 I've been investigating bug [1] concentrating on the
 fuel-plugin-external-glusterfs.
 
 First of all: [2] there are no core reviewers in Gerrit for this repo,
 so even if there were a patch to fix [1], no one could merge it. I saw
 the same issue with fuel-plugin-external-nfs; I haven't checked other
 repos. Why is this? Can we fix this quickly?
 
 Second, the plugin throws:
 
 DEPRECATION WARNING: The plugin has old 1.0 package format, this format
 does not support many features, such as plugins updates, find plugin in
 new format or migrate and rebuild this one.
 
 I don't think this is appropriate for a plugin that is listed in the
 official catalog [3].
 
 Third, I created a candidate fix for this bug [4] and wanted to test it
 with the fuel-qa scripts. Basically, I built an .fp file with
 fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
 to point to that .fp file, and then ran the
 group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
 Then I reverted the changes from the patch and the test still failed
 [6]. But installing the plugin by hand shows that it's available there,
 so I don't know whether it's a broken plugin test or I'm still missing
 something.
 
 It would be nice to get some QA help here.
 
 P.
 
 [1] https://bugs.launchpad.net/fuel/+bug/1415058
 [2] https://review.openstack.org/#/admin/groups/577,members
 [3] https://fuel-infra.org/plugins/catalog.html
 [4] https://review.openstack.org/#/c/169683/
 [5]
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
 [6]
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][Horizon] What we can do for Heat in Horizon else?

2015-04-02 Thread Sergey Kraynev
Hi community.

I want to ask for feedback from our Heat team and also to involve the
Horizon team in this discussion.
AFAIK the following bp was implemented during Kilo:
https://blueprints.launchpad.net/horizon/+spec/heat-ui-improvement

This bp adds more base Heat functionality to Horizon.
I have asked the Heat folks for ideas. What else do we want to have here?

I have only one idea so far, about the topology view:
create some filters for displaying only particular resources (by their
type). E.g. a stack has 50 resources, but half of them are network
resources. As a user I want to see only the network level, so I enable
filtering by network resources.

Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Murano projects pylint job

2015-04-02 Thread Ekaterina Chernova
Hi Filip and Serg!

I support the idea!

Let's discuss more details in IRC and summarize everything on the next
community meeting on Tuesday.

Regards,
Kate.

On Thu, Apr 2, 2015 at 11:31 AM, Filip Blaha filip.bl...@hp.com wrote:

  Hi Serg

 we can take inspiration from other projects like sahara. The important
 thing is that the pylint job should produce meaningful output of a
 reasonable size. Pylint without any configuration produces huge output,
 so we should pick which code checks are interesting for us and configure
 pylint accordingly. I will do some research on that.

 Regards
 Filip


 On 04/01/2015 05:57 PM, Serg Melikyan wrote:

 Hi Filip,

  I think adding a pylint job to the Murano gates is an awesome idea. Have
 you checked out how to do this?

 On Wed, Apr 1, 2015 at 4:03 PM, Filip Blaha filip.bl...@hp.com wrote:

 Hello

 I have noticed that some OpenStack projects [1] use a pylint gate job. From
 my point of view it could simplify code reviews, even as a non-voting job,
 and generally it could improve code quality. Some code issues, like code
 duplication, are not easy to discover during code review, so an automatic
 job would be helpful. Please let me know your opinion about that. Thanks

 [1] https://review.openstack.org/#/c/164772/

 Regards
 Filip

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
  Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
  http://mirantis.com | smelik...@mirantis.com

 +7 (495) 640-4904, 0261
 +7 (903) 156-0836


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread Thierry Carrez
Sean Dague wrote:
 I just spent a chunk of the morning purging out some really old
 Incomplete bugs because about 9 months ago we disabled the auto
 expiration bit in launchpad -
 https://bugs.launchpad.net/nova/+configure-bugtracker
 
 This is a manually grueling task, which by looking at these bugs, no one
 else is doing. I'd like to turn that bit back on so we can actually get
 attention focused on actionable bugs.
 
 Any objections here?

No objection, just a remark:

One issue with auto-expiration is that it usually results in the
following story:

1. Someone reports bug
2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
3. Reporter provides details
4. Triager does not notice reply on bug since they ignore INCOMPLETE
5. Bug expires after n months and disappears forever
6. Reporter is frustrated and won't ever submit issues again

The problem is of course at step 4, not at step 5. Auto-expiration is
very beneficial if your bug triaging routine includes checking Launchpad
for INCOMPLETE bugs with an answer regularly. If nobody does this very
boring task, then auto-expiration can be detrimental.

Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
task 4 in https://wiki.openstack.org/wiki/BugTriage ...
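
For what it's worth, checking for those bugs can be scripted; a minimal
sketch (editor's illustration) with launchpadlib:

    from launchpadlib.launchpad import Launchpad

    # List INCOMPLETE Nova bugs that already have a reporter response,
    # i.e. the ones a triager should revisit before they auto-expire.
    lp = Launchpad.login_anonymously('bug-triage', 'production')
    for task in lp.projects['nova'].searchTasks(
            status=['Incomplete (with response)']):
        print(task.web_link, task.bug.title)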

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reducing noise of the ML

2015-04-02 Thread Thierry Carrez
Michael Still wrote:
 Actually, for some projects the +1 is part of a public voting process
 and therefore required.

Could that public voting process happen somewhere else? Like at an
IRC meeting?

Also, did anyone ever vote -1?

(FWIW originally we used lazy consensus -- PTL proposes, and approval is
automatic after a while unless someone *opposes*. Not sure when +1s or
public voting was added as a requirement).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] PTL Candidacy

2015-04-02 Thread Michael Still
I'd like another term as Nova PTL, if you'll have me.

I feel Kilo has gone reasonably well for Nova -- our experiment with
priorities has meant that we’ve got a lot of important work done. We
have progressed well with cells v2, our continued objects transition,
scheduler refactoring, and the v2.1 API. The introduction of the
trivial bug review list at the mid-cycle meetup has also seen 123 bug
fixes merged since the mid-cycle, which is a great start.

Kilo is our second release using specs, and I think this process is
still working well for us -- we’re having fewer arguments at code
review time about the fundamentals of design, and we’re signalling to
operators much better what we’re currently working on. Throughout Kilo
I wrote regular summaries of the currently approved specs, and that
seems to have been popular with operators.

We also pivoted a little in Kilo and created a trivial approval
process for Kilo specs which either were very small, or previously
approved in Juno. This released the authors of those specs from
meaningless paperwork, and meant that we were able to start merging
that work very early in the release cycle. I think we should continue
with this process in Liberty.

I think it's also a good idea to briefly examine some statistics about specs:

Juno:
   approved but not implemented: 40
   implemented: 49

Kilo:
   approved but not implemented: 30
   implemented: 32

For those previously approved in Juno, 12 were implemented in Kilo.
However, we’ve now approved 7 specs twice, but not merged an
implementation. I’d like to spend some time at the start of Liberty
trying to work out what’s happening with those 7 specs and why we
haven’t managed to land an implementation yet. Approving specs is a
fair bit of work, so doing it and then not merging an implementation
is something we should dig into.

There are certainly priorities which haven’t gone so well in Kilo. We
need to progress more on functional testing, the nova-network
migration effort, and CI testing consistency across our drivers. These are
obvious things to try to progress in Liberty, but I don’t want to
pre-empt the design summit discussions by saying that these should be
on the priority list for Liberty.

In my Kilo PTL candidacy email, I called for a “social approach” to
the problems we faced at the time, and that’s what I have focussed on
for this release cycle. At the start of the release we didn’t have an
agreed plan for how to implement the specifics for the v2.1 API, and
we talked through that really well. What we’ve ended up with is an
implementation in tree which I think will meet our needs going
forward. We are similarly still in a talking phase with the
nova-network migration work, and I think that might continue for a bit
longer -- the problem there is that we need a shared vision for what
this migration will look like while meeting the needs of the deployers
who are yet to migrate.

Our velocity continues to amaze me, and I don’t think we’re going
noticeably slower than we did in Juno. In Juno we saw 2,974 changes
with 16,112 patchsets, and 21,958 reviews. In Kilo we have seen 2,886
changes with 15,668 patchsets and 19,516 reviews at the time of
writing this email. For comparison, Neutron saw 11,333 patchsets and
Swift saw 1,139 patchsets for Kilo.

I’d like to thank everyone for their hard work during Kilo. I am
personally very excited by what we achieved in Nova in Kilo, and I’m
looking forwards to Liberty. I hope you are looking forward to our
next release as well!

Michael

-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Pavlo Shchelokovskyy
Hi all,

we have a problem with dependencies for the kilo-rc1 release of Heat - see
bug [1]. The root cause is that ceilometerclient was not updated for a long
time and only recently got an update. We are sure that Heat in Kilo would
not work with ceilometerclient <=1.0.12 (users would not be able to create
Ceilometer alarms in their stacks). At the same time, global requirements
allow ceilometerclient >=1.0.12. That works on the gate, but will fail for
any deployment that happens to use an outdated pypi mirror. I am also
afraid that if the version of ceilometerclient were upper-capped to
1.0.12 in stable/kilo, Heat in stable/kilo would be completely broken with
regard to Ceilometer alarm usage.

The patch to global requirements was already proposed [2] but is blocked by
the requirements freeze. Can we somehow apply for an exception and still
merge it? Are there any other OpenStack projects besides Heat that use
ceilometerclient's Python API (just asking to assess the testing burden)?

[1] https://bugs.launchpad.net/python-ceilometerclient/+bug/1423291

[2] https://review.openstack.org/#/c/167527/
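
For context, the change in [2] amounts to a one-line bump of the minimum
version in global-requirements.txt, along these lines (reconstructed
illustration only; see the review for the exact before/after):

    -python-ceilometerclient>=1.0.12
    +python-ceilometerclient>=1.0.13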


Best regards,
Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Duncan Thomas
On 2 April 2015 at 03:07, Ian Wienand iwien...@redhat.com wrote:


 IMO requiring two cores to approve *every* change is too much.  What
 we should do is move the responsibility downwards.  Currently, as a
 contributor I am only 1/3 responsible for my change making it through.
 I write it, test it, clean it up and contribute it; then require the
 extra 2/3 to come from the hierarchy.  If you only need one core,
 then core and myself share the responsibility for the change.  In my
 mind, this better recognises the skill of the contributor -- we are
 essentially saying we trust you.



I frankly disagree. There are a number of fixes that have come in that look
good, particularly to somebody not intimately familiar with a particular
area of code, that turn out to have all sorts of nasty side effects that
were only spotted by the second (or in some cases third, or fourth) core to
come along.

If you compare the velocity of OpenStack to many open-source projects, it is
*huge*. We really are making very rapid progress, in so many areas, every
single cycle. I'm starting to worry that we are pushing velocity above
vision, code quality, and many other things. I think we want to reduce the
expectation that your feature (or even contentious bug fix) is likely to
get merged in days - the project is sufficiently big that that is in fact
unlikely.


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler

2015-04-02 Thread Sylvain Bauza


On 02/04/2015 01:28, Dugger, Donald D wrote:

I think there's a lot of `a rose by any other name would smell as sweet' going 
on here; we're really just arguing about how we label things.  I admit I use 
the term gantt very expansively: this is the effort to clean up the current 
scheduler and create a separate scheduler-as-a-service project.  There should 
be no reason for this effort to turn people off; if you're interested in the 
scheduler then very quickly you will get pointed to gantt.

I'd like to hear what others think, but I still don't see a need to change the 
name (though I'm willing to change if the majority thinks we should drop gantt 
for now).


Erm, I discussed that point during the weekly meeting and I asked for 
people to give their opinion in this email thread.


http://eavesdrop.openstack.org/meetings/gantt/2015/gantt.2015-03-31-15.00.html

As a meeting is by definition a synchronous thing, should we maybe try 
to make that decision asynchronously using Gerrit? I could propose a 
resolution in Gerrit so that people could -1 or +1 it.


-Sylvain



--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Tuesday, March 31, 2015 1:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using 
Gantt for discussing about Nova scheduler


On 31/03/2015 02:57, Dugger, Donald D wrote:

I actually prefer to use the term Gantt; it neatly encapsulates the discussions, 
and it doesn't take much effort to realize that Gantt refers to the scheduler. 
If you feel there is confusion, we can clarify things in the wiki page to 
emphasize the process: clean up the current scheduler interfaces and then split 
off the scheduler.  The end goal will be the Gantt scheduler, and I'd prefer not 
to change the discussion.

Bottom line is I don't see a need to drop the Gantt reference.

While I agree with you that *most* of the scheduler effort is to spin off the 
scheduler as a dedicated repository whose codename is Gantt, there are some 
notes to make:

1. Not all the efforts are related to the split; some only reduce the tech 
debt within Nova (e.g. bp/detach-service-from-computenode has very little 
impact on the scheduler itself, but rather on what is passed to the scheduler 
as resources) and may confuse people who wonder why it is related to the 
split.

2. We haven't yet agreed on a migration path for Gantt and what will become of 
the existing nova-scheduler. I seriously doubt that the Nova community would 
accept keeping the existing nova-scheduler as a feature duplicate of the future 
Gantt codebase, but that has not yet been discussed and things are less clear.

3. Based on my experience, we are losing contributors or people interested in 
the scheduler area because they just don't know that Gantt is actually, at the 
moment, the Nova scheduler.


I seriously don't think that deciding to leave the Gantt codename unused while 
we're working on Nova will impact our capacity to later propose an alternative 
based on a separate repository, ideally as a cross-project service. It will 
just reflect the reality, i.e. that Gantt is at the moment more an idea than a 
project.

-Sylvain




--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

-Original Message-
From: Sylvain Bauza [mailto:sba...@redhat.com]
Sent: Monday, March 30, 2015 8:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] [scheduler] [gantt] Please stop using
Gantt for discussing about Nova scheduler

Hi,

tl;dr: I used the [gantt] tag for this e-mail, but I would prefer if we could 
do this for the last time until we spin-off the project.

As it is confusing for many people to understand the difference between the 
future Gantt project and the Nova scheduler effort we're doing, I'm proposing 
to stop using that name for all the efforts related to reducing the technical 
debt and splitting out the scheduler. That includes, not exhaustively, the 
topic name for our IRC weekly meetings on Tuesdays, any ML thread related to 
the Nova scheduler, and any discussion related to the scheduler happening on 
IRC.
Instead of using [gantt], please use [nova] [scheduler] tags.
Instead of using [gantt], please use [nova] [scheduler] tags.

That said, it of course makes sense to tag as Gantt any discussion related to the 
real future of a cross-project scheduler based on the existing Nova scheduler.


-Sylvain


__
 OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
 OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] Reducing noise of the ML (was: Re: [Ceilometer] proposal to add ZhiQiang Fan to Ceilometer core)

2015-04-02 Thread Amrith Kumar
I think Erno’s suggestion is an excellent one and we should just make that the 
norm: new core members are proposed as a change in gerrit. Even in the present 
system, once votes are cast in email there are changes (in Infra, I think) 
which need to be made. Why not just make those *the* mechanism for nominating 
and approving changes to core?

Maybe a message to the mailing list indicating that a change nominating John 
Smith to core has been proposed (with a link to the gerrit review) would 
suffice?

Is there a downside that I’m not seeing?

-amrith

From: Steve Martinelli [mailto:steve...@ca.ibm.com]
Sent: Wednesday, April 01, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Reducing noise of the ML (was: Re: [Ceilometer] 
proposal to add ZhiQiang Fan to Ceilometer core)

*puts mailing list police hat on*

Refer to 
http://lists.openstack.org/pipermail/openstack-dev/2015-March/059642.html

I know we're trying to show support for our peers, and +1'ing lets them know 
just that.
But it causes a lot of noise, and in the end it's up to the PTL.

Thanks,

Steve Martinelli
OpenStack Keystone Core

Fei Long Wang feil...@catalyst.net.nz wrote on 04/01/2015 04:34:12 PM:

 From: Fei Long Wang feil...@catalyst.net.nz
 To: openstack-dev@lists.openstack.org
 Date: 04/01/2015 04:39 PM
 Subject: Re: [openstack-dev] [Ceilometer] proposal to add ZhiQiang
 Fan to Ceilometer core

 +1 if this can be counted :)

 On 02/04/15 06:18, gordon chung wrote:
  hi,
 
  i'd like to nominate ZhiQiang Fan to the Ceilometer core team. he
 has been a leading reviewer in Ceilometer and consistently gives
 insightful reviews. he also contributes patches and helps triage bugs.
 
  reviews:
  https://review.openstack.org/#/q/reviewer:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z
 
  patches:
  https://review.openstack.org/#/q/owner:%22ZhiQiang+Fan%22+project:openstack/ceilometer,n,z
 
 
  cheers,
  gord
 
  ps. this isn't an april fool's joke as he initially thought when i
 asked him.
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Cheers  Best regards,
 Fei Long Wang (王飞龙)
 --
 Senior Cloud Software Engineer
 Tel: +64-48032246
 Email: flw...@catalyst.net.nz
 Catalyst IT Limited
 Level 6, Catalyst House, 150 Willis Street, Wellington
 --


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zaqar] Kilo summary from the team to the community

2015-04-02 Thread Flavio Percoco

Greetings,

The Zaqar team has been quiet in the last cycle but that doesn't mean
it's gone. This email is to share with you all what we've been focused
on during Kilo.

The team has been working on 3 main areas (order does not reflect
priority/importance):

* Relaxing some of the storage constraints by relying more on storage
capabilities and less on the guarantees. [0]

* Implement notifications to fulfill that missing part of the
service's goals. [1]

* Work on a non-wsgi transport[2]

Each of the above areas required important changes in the service, which
I'll try to summarize now:

Relaxing Storage Constraints


As some of you know, the service has support for something we call
flavors. These flavors differ from what we know as flavors in
services like Nova. The flavors in Zaqar represent the type of
storage a set of messages will be stored in.

Every flavor in Zaqar has a set of capabilities. These capabilities
represent what the storage can/cannot do. For example, durability is
considered a capability and so are high-throughput and claims.

As of Juno, these capabilities were created manually, but this has
changed. The capabilities are now introspected from the pools assigned
to the flavors, and they are now a read-only field.

In addition, as part of this work, FIFO has been made optional and it
can now be enabled through the pool. [3]

To know a bit more about flavors and pools, please read this post. [4]
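
As a rough illustration of the admin flow (the field names below are
approximate reconstructions; please check the API reference rather than
this sketch):

    PUT /v2/pools/mongo-fast
        {"uri": "mongodb://127.0.0.1:27017", "weight": 100}

    PUT /v2/flavors/high-throughput
        {"pool_group": "mongo-fast"}

The flavor's capabilities are then introspected from the pool's storage
driver instead of being supplied by the operator.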

Implementing notifications
==

Fei Long Wang (flwang) has done an amazing job on getting this done. The
feature itself was quite big and it required patches in several
places. If you're interested in the details of the feature, please,
read the spec[1].

If you're going to use notifications, please bear in mind that the
feature itself is quite new and requires some extra testing
(especially scale tests ;).

Non persistent transport


Victoria Martinez de la Cruz (vkmc) has dedicated her efforts to this
feature. She's done amazing research in this field to find the best
way to make this work with Zaqar. During the cycle, several websocket
libraries were examined, and we ended up choosing autobahn. Likewise,
the protocol went through some iterations until we finally reached an
agreement.

This feature, however, didn't make it entirely into Kilo. Part of it
will be completed during Liberty. To be precise, it's possible to
manage queues and connect to Zaqar using websockets, but it's still
not possible to fully operate the API.

We felt that it was not right to rush this in at its current state, and
that it'd be good to ship part of the feature as a preview for
some folks to play with.
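
For those who want to try the preview, a connection sketch (editor's
illustration; the endpoint and message layout follow the protocol as it
currently stands and may still change):

    import json
    import uuid

    import websocket  # pip install websocket-client

    # Open a websocket to a Zaqar server and ask it to create a queue.
    ws = websocket.create_connection('ws://localhost:9000')
    ws.send(json.dumps({
        'action': 'queue_create',
        'headers': {'Client-ID': str(uuid.uuid4()),
                    'X-Project-ID': 'demo'},
        'body': {'queue_name': 'example'},
    }))
    print(ws.recv())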

Move QueueController to the control plane
=

This might be confusing for some folks that are not familiar with
Zaqar's architecture but I think it's also worth mentioning.

TL;DR: Zaqar has two planes: one used for the actual data (messages)
and one used for the control data, i.e. metadata (subscription
information, flavors, etc). These two planes can either use the same
DB or different ones.

Queues used to live in the data plane, but we've moved them to the
control plane. The main reason is that queues are a lazy resource in
Zaqar and you don't really need to create them manually. The only
reason you'd create a queue manually is to set some metadata for it
(assign a flavor to a queue, for example). Since this is considered
metadata, we came to the conclusion that there was no value in keeping
queues in the data plane. This move brings other benefits, like saving
space on the data store, avoiding hits on the data store for queue
lookups and management, picking better stores for each plane, etc.

I'd like to thank Shaifali Agrawal for her persistence and hard work
on this task. It was not an easy task: it required internal
refactoring, configuration changes, and lots of fights with gerrit.
Thanks again!

The above changes, since they required breaking backwards compatibility,
have been implemented in a new version of the API, v2. The v2 API follows
closely what already existed in v1.1, but it also has support for these
new features and the breaking changes I've mentioned.

Unfortunately, as many of you know, our team has shrunk and each of us
works on other projects too. This means that there's little to no
client support for the above-mentioned features. However, we'll be
working on bringing the client up to speed.

The team spent the entire cycle heads-down silently working on these
tasks in the best way possible. For the next cycles, I believe the
main focus will be in improving the adoption of the service and
improving UX besides fixing bugs, obviously.

Hope you find this summary useful and please, don't hesitate to shoot
questions if you have any.

Flavio

[0] 
https://github.com/openstack/zaqar-specs/blob/master/specs/kilo/approved/storage-capabilities.rst
[1] 

Re: [openstack-dev] [Neutron] [ML2] using binding:tun_ip instead of agent_ip for l2pop to support agentless backend

2015-04-02 Thread henry hly
On Thu, Apr 2, 2015 at 3:51 PM, Kevin Benton blak...@gmail.com wrote:
 Whoops, wrong link in last email.

 https://etherpad.openstack.org/p/liberty-neutron-summit-topics

 On Thu, Apr 2, 2015 at 12:50 AM, Kevin Benton blak...@gmail.com wrote:

 Coordinating communication between various backends for encapsulation
 termination is something that would be really nice to address in Liberty.
 I've added it to the etherpad to bring it up at the summit.[1]


Thanks a lot, Kevin.
I think it's really important, as more customers are asking about
coordination between various backends.


 1.
 http://lists.openstack.org/pipermail/openstack-dev/2015-March/059961.html

 On Tue, Mar 31, 2015 at 2:57 PM, Sławek Kapłoński sla...@kaplonski.pl
 wrote:

 Hello,

 I think that easiest way could be to have own mech_driver (AFAIK such
 drivers are for such usage) to talk with external devices to tell them
 what tunnels should it establish.

Sure, I agree.

 With change to tun_ip Henry propese l2_pop agent will be able to
 establish tunnel with external device.

Maybe that's not necessary here; the key point is that interaction between
l2pop and the external device MD is needed. Below are just some very basic
ideas:

1) MD acting as the plugin-side agent?
*  each MD registers a hook in l2pop, and l2pop calls the hook list as
well as notifying the agent;
*  the MD simulates an update_device_up/down, but with binding:tun_ip
because it has no agent_ip;
* how the MD gets the port status remains unsolved.

2) Things may be much easier in the case of hierarchical port binding
(merged in Kilo):
* an ovs/linuxbridge agent still exists to produce update_device_up/down
messages;
* the external device MD gets the port status update, adds tun_ip to the
port context, and then triggers the l2pop MD?
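
A very rough sketch of idea 2) (editor's illustration; the driver, its
inventory lookup, and the binding:tun_ip field itself are all hypothetical
-- the field is exactly what this thread proposes to add):

    from neutron.plugins.ml2 import driver_api as api

    class ExternalVtepDriver(api.MechanismDriver):
        """Agentless MD that stamps the tunnel endpoint on the port."""

        def initialize(self):
            pass

        def update_port_postcommit(self, context):
            # When the port goes ACTIVE, record the backend's VTEP IP in
            # the binding so l2pop could read it instead of an agent_ip.
            if context.current['status'] == 'ACTIVE':
                context.current['binding:tun_ip'] = self._endpoint_for(
                    context.current)

        def _endpoint_for(self, port):
            # Would come from the external backend's inventory.
            return '192.0.2.10'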


 On Mon, Mar 30, 2015 at 10:19:38PM +0200, Mathieu Rohon wrote:
  hi henry,
 
  thanks for this interesting idea. It would be interesting to think
  about
  how external gateway could leverage the l2pop framework.
 
   Currently l2pop sends its fdb messages once the status of the port is
   modified. AFAIK, this status is only modified by agents, which send
   update_device_up/down().
   This issue also has to be addressed if we want agentless equipment to
   be announced through l2pop.
 
   Another way to do it is to introduce some BGP speakers with E-VPN
   capabilities at the control plane of ML2 (as a MD for instance).
   Bagpipe

Hi Mathieu,

Thanks for your idea; the interaction between l2pop and other MDs is
really the key point, and removing agent_ip is just the first step.
BGP speakers are interesting; however, I think the goal is not quite the
same, because I want to keep compatibility with existing deployed l2pop
solutions, and to extend and enhance l2pop rather than replace it
entirely.

  [1] is an open-source BGP speaker which is able to do that.
  BGP is standardized, so equipment might already have it embedded.
 
  Last summit, we talked about this kind of idea [2]. We were going
  further by introducing the BGP speaker on each compute node, in use
  case B of [2].
 
  [1] https://github.com/Orange-OpenSource/bagpipe-bgp
  [2] http://www.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
 
  On Thu, Mar 26, 2015 at 7:21 AM, henry hly henry4...@gmail.com wrote:
 
   Hi ML2er,
  
   Today we use agent_ip in L2pop to store endpoints for ports on a
   tunnel type network, such as vxlan or gre. However this has some
   drawbacks:
  
    1) It only works with backends that have agents;
    2) Only one fixed IP is supported per agent;
    3) It is difficult to interact with other backends and the world
    outside of OpenStack.
  
    L2pop is already widely accepted and deployed in host-based overlays;
    however, because it uses agent_ip to populate the tunnel endpoint,
    it's very hard to co-exist and interoperate with other vxlan backends,
    especially agentless MDs.
  
    A small change is suggested: the tunnel endpoint should not be an
    attribute of the *agent*, but an attribute of the *port*. If we store
    it in something like *binding:tun_ip*, it is much easier for
    different backends to co-exist. The existing ovs and bridge agents
    need a small patch to put the local agent_ip into the port context
    binding fields when doing the port_up RPC.
  
    Several extra benefits may also be obtained this way:

    1) we can easily and naturally create an *external vxlan/gre port*
    which is not attached to a Nova-booted VM, with binding:tun_ip set
    at creation time;
    2) we can develop a *proxy agent* which manages a bunch of remote
    external backends, without the restriction of a single agent_ip.
  
   Best Regards,
   Henry
  
  
   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

 
  __
  OpenStack 

Re: [openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Eoghan Glynn


 Hi all,
 
 we have a problem with dependencies for the kilo-rc1 release of Heat - see
 bug [1]. The root cause is that ceilometerclient was not updated for a long
 time and only recently got an update. We are sure that Heat in Kilo would
 not work with ceilometerclient <=1.0.12 (users would not be able to create
 Ceilometer alarms in their stacks). At the same time, global requirements
 allow ceilometerclient >=1.0.12. That works on the gate, but will fail for
 any deployment that happens to use an outdated pypi mirror. I am also
 afraid that if the version of ceilometerclient were upper-capped to 1.0.12
 in stable/kilo, Heat in stable/kilo would be completely broken with regard
 to Ceilometer alarm usage.
 
 The patch to global requirements was already proposed [2] but is blocked by
 the requirements freeze. Can we somehow apply for an exception and still
 merge it? Are there any other OpenStack projects besides Heat that use
 ceilometerclient's Python API (just asking to assess the testing burden)?

 [1] https://bugs.launchpad.net/python-ceilometerclient/+bug/1423291
 
 [2] https://review.openstack.org/#/c/167527/

Pavlo - part of the resistance here, I suspect, may be due to the
fact that I inadvertently broke the semver rules when cutting
the ceilometerclient 1.0.13 release, i.e. it was not sufficiently
backward compatible with 1.0.12 to warrant only a Z-version bump.

Sean - would you be any happier with making a requirements-freeze
exception to facilitate Heat if we were to cut a fresh ceilometerclient
release that's properly versioned, i.e. 2.0.0?

Cheers,
Eoghan
 
 
 Best regards,
 Pavlo Shchelokovskyy
 Software Engineer
 Mirantis Inc
 www.mirantis.com
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Sylvain Bauza


On 04/02/2015 03:19, Jay Pipes wrote:

On 04/01/2015 12:31 PM, Duncan Thomas wrote:

On 1 April 2015 at 10:04, Joshua Harlow harlo...@outlook.com wrote:

+1 to this. There will always be people who will want to work on fun
stuff and those who don't; it's the job of leadership in the
community to direct people if they can (but also the same job of
that leadership to understand that they can't direct everyone; it is
open-source after all and saying 'no' to people just makes them run
to some other project that doesn't do this...).

IMHO (and this is a rant probably better suited for another thread), I've
seen too many projects/specs/split-outs (i.e. scheduler tweaks, the
constraint-solving scheduler...) get abandoned because of cores saying
this or that is the priority right now (and this, in all honesty, pisses
me off). I don't feel this is right (cores should be leaders and
guides, not dictators); if a core is going to tell anyone that, then
they had better act as a guide to the person they are telling it to
and make sure they lead that person they just told no; after all,
any child can say no, but it takes a real man/woman to go the extra
distance...


So I think saying no is sometimes a vital part of the core team's role,
keeping up code quality and vision is really hard to do while new
features are flooding in, and doing architectural reworking while
features are merging is an epic task. There are also plenty of features
that don't necessarily fit the shared vision of the project; just
because we can do something doesn't mean we should. For example: there
are plenty of companies trying to turn OpenStack into a datacentre
manager rather than a cloud (i.e. too much focus on pets vs. cattle
style VMs), and I think we're right to push back against that.


Amen to the above. All of it.


Right now there are some strong indications that there are areas we are
very weak at (nova network still being preferred to neutron, the amount
of difficultly people had establishing 3rd party CI setups for cinder)
that really *should* be prioritised over new features.

That said, some projects can be worked on successfully in parallel with
the main development - I suspect that a scheduler split out proposal is
one of them. This doesn't need much/any buy-in from cores, it can be
demonstrated in a fairly complete state before it is evaluated, so the
only buyi-in needed is on the concept.


Ha, I had to laugh at this last paragraph :) You mention the fact that 
nova-network is still very much in use in the paragraph above (for 
good reasons that have been highlighted in other threads). And yet you 
then go on to suspect that a nova-scheduler split would be something that 
could be successfully worked on in parallel...


The Gantt project tried and failed to split the Nova scheduler out 
(before it had any public or versioned interfaces). The solver 
scheduler has not gotten any traction not because as Josh says some 
cores are acting like dictators but because it doesn't solve the 
right problem: it makes more complex scheduling placement decisions in 
a different way from the Nova scheduler, but it doesn't solve the 
distributed scale problems in the Nova scheduler architecture.


If somebody developed an external generic resource placement engine 
that scaled in a distributed, horizontal fashion and that had 
well-documented public interfaces, I'd welcome that work and quickly 
work to add a driver for it inside Nova. But both Gantt and the solver 
scheduler fall victim to the same problem: trying to use the existing 
Nova scheduler architecture when it's flat-out not scalable.


Alright, now that I've said that, I'll wait here for the inevitable 
complaints that as a Nova core, I'm being a dictator because I speak 
my mind about major architectural issues I see in proposals.




And that's also why, the more I'm reviewing, the more I'm thinking that 
people need a basic understanding of the whole Nova repository, and not 
only of a specific asset, if they want to provide a new feature -- just 
because of the technical debt and all the inherited interactions.


Take the scheduler as an example again: most of the commits related to 
it also impact objects, cells, and DB migrations, to quote the most 
closely related areas.


I was originally in favor of giving a limited set of merge powers to 
subteams for a specific codepath, but my personal experience makes me 
think that it can't work that way in Nova at the moment -- just because 
everything is intersected.


So, yeah, before kicking off new features, we need at least people 
aware enough of the global state to give their opinion on whether it's 
doable or not. I don't want to say it would be a clique or a gang blessing 
good people or bad people -- just architects who have enough knowledge to 
know whether it will work, or not.


Good ideas can turn into bad implementations just because of the 
existing tech debt. And there is nothing 

Re: [openstack-dev] [swift] swift memory usage in centos7 devstack jobs

2015-04-02 Thread Deepak Shetty
On Thu, Apr 2, 2015 at 4:25 AM, Ian Wienand iwien...@redhat.com wrote:

 Note: I haven't finished debugging the glusterfs job yet.  This
 relates to the OOM that started happening on CentOS after we moved to
 using as much pip packaging as possible.  glusterfs was still failing
 even before this.


Cool, and it's not related to glusterfs IMHO, since it was happening
even w/o glusterfs (with just the tempest all tests running with defaults).

thanx,
deepak



 On 04/01/2015 07:58 PM, Deepak Shetty wrote:

 1) So why did this happen on rax VM only, the same (Centos job)on hpcloud
 didn't seem to hit it even when we ran hpcloud VM with 8GB memory.


 I am still not entirely certain that hp wasn't masking the issue when
 we were accidentally giving hosts 32gb RAM.  We can get back to this
 once these changes merge.

  2) Should this also be sent to the centos-devel folks so that they don't
 upgrade/update pyopenssl in their distro repos until the issue
 is resolved?


 I think let's give the upstream issues a little while to play-out,
 then we decide our next steps around use of the library based on that
 information.

 thanks

 -i


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][Mistral] DSL improvements - certain and incubating

2015-04-02 Thread Dmitri Zimine
Team, thanks for your input so far. I fleshed out one more section; please take 
a look and comment:
https://docs.google.com/a/stackstorm.com/document/d/1Gy6V9YBt8W4llyErO_itHetkF1oNYv4ka-_5LdFKA18/edit#heading=h.n1jc8i9qhikt

DZ. 

On Mar 25, 2015, at 9:41 AM, Dmitri Zimine dzim...@stackstorm.com wrote:

 Folks, 
 
 we are discussing DSL improvements, based on Mistral field use and lessons 
 learned.
 
 Please join in: comments on the document are welcome, as are extra ideas, 
 preferably based on your experience writing Mistral workflows. 
 
 The summary is in the doc, it contains two sections:
 
 1) certain improvements, on which we have complete clarity and agreement, and 
 which we just need to do
 2) incubating: ideas with a more or less clear use case, but where we do not 
 yet have a certain, agreed solution.
 
 https://docs.google.com/a/stackstorm.com/document/d/1Gy6V9YBt8W4llyErO_itHetkF1oNYv4ka-_5LdFKA18/edit
 
 
 Regards, 
 Dmitri. 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] PTL Candidacy

2015-04-02 Thread Tristan Cacqueray
confirmed

On 04/02/2015 02:46 PM, Mike Perez wrote:
 Hello all,
 
 I'm announcing my candidacy for Cinder PTL for the Liberty release.
 
 I have contributed to block storage in OpenStack since Bexar, back when things
 were within nova-volume, before Cinder, and I have had the honor of serving as
 PTL for Cinder in the Kilo cycle.
 
 I've spoken in the past about focused participation, something I still feel is
 needed in the projects that are the basic foundation of OpenStack. Compute,
 storage and networking need to be solid. My work as core in Cinder and
 continuing as PTL has involved a lot of evangelizing and making new
 contributors feel comfortable with becoming part of the team. As a project
 grows, communication needs to be excellent, and coordination is key to making
 sure reviews don't stick around so long that contributors feel discouraged.
 I think the Cinder team has done an excellent job in managing this as we grow,
 based on the feedback received. I really do think participation in Cinder
 is getting better, and it's wonderful to be part of that.
 
 If we take the Kilo-3 milestone for example, we landed 44 blueprints in
 a single milestone [1]. That's huge progress. I would like to believe this
 happens because of focus, and that happens because of better tracking of what
 is a priority and clear communication. Lastly, participation: not just core
 folks, but any contributor who feels welcomed by the team and isn't burnt
 out on never-ending patch revisions.
 
 Most of 2014 in Cinder was spent on discussions of third-party CIs.
 Third-party CIs are a great way for vendors to verify whether a proposed
 upstream patch would break their integration. In addition, they identify
 whether a vendor really does work with the current state of the OpenStack
 project. There have been plenty of cases where vendors discovered that their
 integration in OpenStack really didn't work until they ran these tests. Last
 year, there was a real lack of coordination and communication with vendors on
 getting them on board with reporting third-party CI results. In 2015 I took on
 the responsibility of being the point of contact for the 70+ drivers in
 Cinder, emailing the mailing list, sending countless reminders on IRC,
 contacting maintainers directly, and actually making phone calls to companies
 if maintainers were not responsive by email.
 
 I'm happy to report that the majority of vendors have responded and are active
 in the Cinder community to ensure their integration is solid. Compare that to
 last year, when we had just one or two vendors reporting and the majority of
 vendors not having a clue! It's very exciting to help build a better
 experience for their users using OpenStack. The community's outpouring of
 support to me on this issue was hugely appreciated, and is what keeps me
 coming back to help.
 
 We added 14 new drivers to Cinder in the Kilo release. Coordination was
 beautiful, thanks to clear communication with the hard-working
 reviewers in the Cinder team.
 
 My priority for Cinder in the Kilo release was to make progress on rolling
 upgrades. I have spent a great deal of my time testing the work to allow
 Cinder services to not be dependent on database schemas. This is a big change,
 and doesn't completely solve rolling upgrades in Cinder, but is a building
 block needed to begin solving the other rolling upgrade problems. I'm really
 happy with the work done by the team in the Kilo release and excited with how
 comfortable I feel in terms of stability of the work thanks to the amount of
 testing we've done.
 
 This work, however, not only benefits Cinder, but is a general solution going
 into Oslo, in an attempt to help other OpenStack projects with upgrades.
 Upgrades are a huge problem that needs to be solved across OpenStack, and I'm
 proud of the Cinder team for doing their part to help drive adoption. Long
 term, I see this work contributing to an ideal single upgrade solution, so
 that operators aren't having to learn how to upgrade 12 different services
 they may deploy.
 
 My plan for Liberty is to work with the team on creating a better use of
 milestones for appropriate changes. While we started with some initial
 requirements, like making new drivers focus on the first milestone only, I
 think stabilization time needs to be stretched a bit longer, and I think
 others will agree Kilo didn't have as much of this as planned for Kilo-3.
 
 Cinder will continue its rolling upgrade efforts by now focusing on
 compatibility across Cinder services with RPC. This is a very important piece
 for making rolling upgrades complete. We will continue to work through
 projects like Oslo to make sure these solutions are general enough to benefit
 other OpenStack projects, so that, as a whole, we will improve together.
 
 Cinder volumes that end up in a stuck state: this has been a problem for ages,
 and I have heard about it from countless people at the Ops Midcycle Meetup that
 I attended. I'm happy to say, as reported from my take on 

Re: [openstack-dev] [Magnum] PTL Candidacy

2015-04-02 Thread Tristan Cacqueray
confirmed

On 04/02/2015 12:36 PM, Adrian Otto wrote:
 I respectfully request your support to continue as your Magnum PTL.
 
 Here are my achievements and OpenStack experience that make me the 
 best choice for this role:
 
 * Founder of the OpenStack Containers Team
 * Established vision and specification for Magnum
 * Served as PTL for Magnum since the first line of code was contributed in 
 November 2014
 * Successful addition of Magnum to the official OpenStack projects list on 
 2015-03-24
 * Planned and executed successful Magnum Midcycle meetup in March 2015
 * 3 terms of experience as elected PTL for Solum
 * Involved with OpenStack since Austin Design Summit in 2010
 
 What background and skills help me to do this role well:
 
 * 20 years of experience in technical leadership positions
 * Considerable experience leading multi-organization collaborations
 * Diplomacy skills for inclusion of numerous viewpoints, and ability to drive 
 consensus and shared vision
 * Considerable experience in public speaking, and running design summit 
 sessions
 * Deep belief in Open Source, Open Development, Open Design, and Open 
 Community
 * I love OpenStack and I love containers, probably more than anyone else in 
 the world in this combination.
 
 What to expect in the Liberty release cycle:
 
 We will continue to focus on making the best Containers-as-a-Service solution 
 for cloud operators. This requires a valuable vertical integration of 
 container management tools with OpenStack. Magnum is quickly maturing. Here 
 are key focus areas that I believe are important for us to work on during our 
 next release:
 
 * Functional testing, and unit test code coverage
 * Overlay networking 
 * Bay type choices (allow for plugging in prevailing container execution 
 engines as needed)
 * Surface additional (small/medium) features (TLS Security, Autoscaling, Load 
 balancer management, etc.)
 
 I look forward to your vote, and to helping us succeed together.
 
 Thanks,
 
 Adrian Otto
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] PTL Candidacy

2015-04-02 Thread Morgan Fainberg
Hello Everyone!



It’s been an exciting development cycle (Kilo) and it is now time to start
looking forward at Liberty and what that will hold. With that said, I’d
like to ask for the community’s support to continue as the Keystone PTL for
the Liberty release cycle.



I came to the table last cycle with a general goal of driving towards
stability and improvement on user experience[1]. For the most part the
Keystone team has managed to improve on a number of the big outstanding
issues:



* Token Persistence issues (table bloat, etc), solved with non-persistent
(Fernet) tokens.



* Improvements on the Federated identity use-cases.



* Hierarchical Multi-Tenancy (initial implementation)



* Significant progress on Keystone V3-only deployment models (a lot of work
in the Keystone Client and Keystone Middleware)



* A good deal of technical debt paydown / cleanup



This cycle I come back to say that I don’t want to shake things up too
much. I think we have a successful team of developers, reviewers,
bug-triagers, and operators collaborating to make Keystone a solid part of
the OpenStack Ecosystem. I remain committed to enabling the contributors
(of all walks) to be part of our community and achieve success.



For the Liberty cycle I would like to see a continued focus on performance,
user experience, deployer experience, and stability. What does this really
mean for everyone contributing to Keystone? It means there are two clear
sides for the Liberty cycle.



New Feature Work:

-



I want to see the development community pick a solid 5 or so “new” features
to land in Liberty and really hit those out of the park (focused
development from the very beginning of the cycle). Generally speaking, it
looks like the new feature list is lining up around providing support /
significantly better experience for the other project(s) under the
OpenStack tent. In short, I see new Keystone development being less about
the “interesting thing Keystone can do” and more about “the great things
Keystone can do for the other projects”.



Non-Feature Work:

-



We have a lot of drivers/plugins, backends, all with their own rapidly
moving interfaces that make it hard to know what to expect in the next
release. It is time we sit down and commit to the interfaces for the
backends and treat them as stable (just like the REST interface). A stable ABI
for the Keystone backends/plugins goes a long way towards enabling our
community to develop a rich set of backends/plugins for Identity,
Assignment, Roles, Policy, etc. This is a further embracing of the “Big
Tent” conversation; for example we can allow for constructive competition
in how Keystone retrieves Identity from an Identity store (such as LDAP,
AD, or SQL). Not all of the solutions need to be in the Keystone tree
itself, but a developer can be assured that their driver isn’t going to
need radical alterations between Liberty and the next release with this
commitment to stable ABIs.
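
As a rough illustration of the kind of contract being described here (the
class and method names below are invented for the sketch, not Keystone's
actual driver API), a versioned, stable backend interface could look like:

    import abc


    class IdentityDriverV8(abc.ABC):
        """Hypothetical versioned identity-backend interface.

        Once published as stable, methods here would only be added,
        never changed or removed, between releases.
        """

        @abc.abstractmethod
        def get_user(self, user_id):
            """Return a dict describing the user, or raise an error."""


    class ExampleLDAPIdentity(IdentityDriverV8):
        # An out-of-tree backend can rely on the V8 contract staying put.
        def get_user(self, user_id):
            return {'id': user_id, 'name': 'example'}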



Beyond the stable interface discussion, the other top “non-feature”
priorities are having a fully realized functional test suite (that can be
run against an arbitrary deployment of Keystone, with whichever
backend/configuration is desired), a serious look at performance profiling
and what we can do to solve the next level of scaling issues, the ability
to deploy OpenStack without Keystone V2 enabled, and finally looking at the
REST API itself so that we can identify how we can improve the end-user’s
experience (the user who consumes the API itself) especially when it comes
to interacting with deployments with different backend configurations.



Some Concluding Thoughts:





I’ll reiterate my conclusion from the last time I ran, as it still
absolutely sums up my feelings:



Above and beyond everything else, as PTL, I am looking to support the
outstanding community of developers so that we can continue Keystone’s
success. Without the dedication and hard work of everyone who has
contributed to Keystone we would not be where we are today. I am extremely
pleased with how far we’ve come and look forward to seeing the continued
success as we move into the Liberty release cycle and beyond not just for
Keystone but all of OpenStack.



Cheers,

Morgan Fainberg



[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046571.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] PTL candidacy

2015-04-02 Thread gordon chung
hi,
i'd like to announce my candidacy for PTL of Ceilometer.
as a quick introduction, i've been a contributor in OpenStack for the past few 
years and for the majority of that time i've been primarily focused on 
Ceilometer where i contribute regularly to the project with code[1] and 
reviews[2]. i'm currently an engineer at Red Hat and have previously worked for 
eNovance and IBM, where in the latter's case, i helped work on advancing the 
adoption of cloud auditing practices within the community, which i continue to 
do.
to model myself on past PTLs i've worked with, my main goal as PTL will be to 
support the community of contributors that exists within OpenStack with 
interests in telemetry. i believe we have a wide variety of contributors with 
various expertise in Ceilometer and OpenStack and while differing opinions have 
arisen, as a collective, we have -- and can continue to -- hash out the ideal 
solutions.
with the politically correct portion all covered, my other interests are: 
stability, the transition to time-series data modeling aka Gnocchi, and events.
- we've made strides in Ceilometer over the past cycles to improve stability by 
adding HA support, remodeling backends, and removing redundancies in 
Ceilometer. i want to make sure we remain critical of new changes and always 
ensure they do not add bloat to Ceilometer.
- in regards to data storage, while Ceilometer can be used solely as a data 
retrieval service, i believe Gnocchi -- the modeling of data as a time-series 
-- will provide a scalable means to storing measurement data regarding 
OpenStack[3]. this work, i believe, will be important for those using 
Ceilometer as a complete solution.
- the concept of events was realised by the team at RAX and was worked on 
further in the last cycle. i hope to see this functionality continue to expand 
by adding the ability to derive and act on events to allow greater insight into 
the system.
- lastly, i want to emphasise documentation. Ceilometer's scope touches all 
projects and because of that, it can be difficult to pick up. in Kilo, we 
started to emphasise the importance of documentation and i hope we continue to 
do so going forward.
[1] https://review.openstack.org/#/q/owner:%22gordon+chung%22+project:openstack/ceilometer,n,z
[2] http://russellbryant.net/openstack-stats/ceilometer-reviewers-365.txt
[3] http://www.slideshare.net/EoghanGlynn/rdo-hangout-on-gnocchi
cheers,

gord
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-02 Thread Brant Knudson
On Thu, Apr 2, 2015 at 4:52 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi stackers,

 Recently, I started working on speeding up Rally cli.

 What I realized immediately is that I didn't understand why it takes
 700-800ms just to run the rally version command, and it is an impossibly
 hard task to find out what takes so much time just by reading the code.

 I started playing with patching __import__ and made a simple but powerful
 tool that allows you to trace any imports and get pretty graphs of nested
 importing with timings:

 https://github.com/boris-42/profimp

 So now it's simple to understand which imports take the most time, just by
 running:

   profimp import lib --html


 Let's optimize OpenStack libs and clients?


 Best regards,
 Boris Pavlovic


There's a review in python-keystoneclient to do lazy importing of modules
here: https://review.openstack.org/#/c/164066/ . It would be interesting to
know if this improves the initial import time significantly. Also, this can
be an example of how to improve other libraries.
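
As a rough sketch of the general technique (an illustration only, not the
actual keystoneclient patch), a lazy import can be done with a small proxy
that defers the real import until first attribute access:

    import importlib


    class _LazyModule(object):
        """Proxy that imports the wrapped module on first attribute access."""

        def __init__(self, name):
            self._name = name
            self._module = None

        def __getattr__(self, attr):
            if self._module is None:
                self._module = importlib.import_module(self._name)
            return getattr(self._module, attr)


    # 'json' here stands in for any heavy dependency: importing this file
    # stays cheap, and the real import cost is paid only on first use.
    json = _LazyModule('json')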

 - Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Joshua Harlow

Joe Gordon wrote:



On Thu, Apr 2, 2015 at 3:14 AM, Thierry Carrez thie...@openstack.org
mailto:thie...@openstack.org wrote:

Joe Gordon wrote:
  My main objection to the model you propose is its binary nature. You
  bundle core reviewing duties with drivers duties into a single group.
  That simplification means that drivers have to be core reviewers, and
  that core reviewers have to be drivers. Sure, a lot of core reviewers
  are good candidates to become drivers. But I think bundling the two
  concepts excludes a lot of interesting people from being a driver.

  I cannot speak for all projects, but at least in Nova you have to be a
  nova-core to be part of nova-drivers.

And would you describe that as a good thing ? If John Garbutt is so deep
into release liaison work that he can't sustain a review rate suitable
to remain a core reviewer, would you have him removed from the
maintainers group ? If someone steps up and works full-time on
triaging bugs in Nova (and can't commit to do enough reviews as a
result), would you exclude that person from your maintainers group ?


I want to empower that person and recognize them in some semi-formal
capacity and make sure they have all the correct permissions.

I do not want a single flat 'maintainers' group; I think we need a
hierarchical notion of maintainers, where different people can end up
with very different responsibilities (and ACLs -- but that is an
implementation detail).


  If someone steps up and owns bug triaging in a project, that is very
  interesting and I'd like that person to be part of the drivers group.

  In our current model, not sure why they would need to be part of
  drivers. The bug triage group is open to anyone.

I think we are talking past each other. I'm not saying bug triagers have


It appears that we are talking past each other, at least we agree on
something.

to be drivers. I'm saying bug triagers should be *allowed* to
potentially become drivers, even if they aren't core reviewers. That is
inclusive of all forms of project leadership.

You are the one suggesting that maintainers and core reviewers are the
same thing, and therefore asking that all maintainers/drivers have to be
core reviewers, actively excluding non-reviewers from that project
leadership class.

  Saying core reviewers and maintainers are the same thing, you basically
  exclude people from stepping up to the project leadership unless they
  are code reviewers. I think that's a bad thing. We need more people
  volunteering to own bug triaging and liaison work, not less.

  I don't agree with this statement; I am not saying reviewing and
  maintenance need to be tightly coupled.

You've been proposing to rename core reviewers to maintainers. I'm
not sure how that can be more tightly coupled...


All core reviewers in our current model should be responsible for
maintenance of the project, but not all maintainers need to be
responsible for reviewing code anywhere in the project.


  [...]
  I really want to know what you meant by 'no aristocracy' and the why
  behind that.

Aristocracies are self-selecting, privileged groups. Aristocracies
require that current group members agree on any new member addition,
basically limiting the associated privilege to a caste. Aristocracies
result in a limited gene pool, tunnel vision, and echo-chamber effects.

OpenStack governance mandates that core developers are ultimately the
PTL's choice. Since the PTL is regularly elected by all contributors,
that prevents aristocracy.


Can you cite your source for this? Because the earliest reference to
'Core Developer' (what you are calling core reviewer -- even though that
is not the original name) that I could find says nothing about it
ultimately being the PTL's choice.

https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Where is the current documentation on this?


However in some projects, core reviewers have to be approved by existing
core reviewers. That is an aristocracy. In those projects, if you


Which projects do it differently?

So this is where you lose me. Has there ever been a case of a project's
PTL adding/removing people from the core team where the PTL goes against
the majority of the core developers?  You say that an early (unwritten?)
goal of the system we have is to prevent 'aristocracy,' but all I see is
'aristocracy'.

It sounds like if a goal was no aristocracy then we have miserably
failed at that. But frankly I don't know how to prevent what you call
aristocracy.

associate more rights and badges to core reviewing (like by renaming it
maintainer and bundle driver responsibilities with it), I think you
actually extend the aristocracy problem rather 

[openstack-dev] [Infra] PTL Candidacy

2015-04-02 Thread James E. Blair
I would like to announce my candidacy for the Infrastructure PTL.

I have developed and operated the project infrastructure for several
years and have been honored to serve as the PTL for the Kilo cycle.

I was instrumental not only in creating the project gating system and
development process, but also in scaling it from three projects to 600.

In Juno, we have begun a real effort to make the project infrastructure
consumable in its own right.  We've made a lot of progress and we're
seeing more people joining this effort as we get closer to the goal of
re-usability.  It's starting to pay off, but we still have a lot to do.

We're also looking at getting into the business of operating an
OpenStack cloud.  This is potentially a very exciting new venture, with
a heavily used production cloud available for anyone in OpenStack to see
and alter its operation.  We're only just starting this project and hope
to settle on some parameters as soon as we finish up our current
priorities.

I have also proposed some major changes to the key infrastructure
projects Zuul and Nodepool.  In both cases, the intent is to help us
continue to scale out the system to handle more projects easier, and to
make the overall system more comprehensible.

Those are just three of several important efforts I anticipate this
cycle and they span the spectrum from software development to
operations.  Once again, a large part of the work of the PTL this cycle
will be helping to coordinate and facilitate these efforts.

I am thrilled to be a part of one of the most open free software project
infrastructures, and I would very much like to continue to serve as its
PTL.

Thanks,

Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Deadline For Volume Drivers to Be Readded

2015-04-02 Thread Mike Perez
On 14:14 Thu 02 Apr , Marcus Vinícius Ramires do Nascimento wrote:
 Hi Mike,
 
 I'm working on test coverage improvement for the HDS/Hitachi drivers. As I
 mentioned to you in the #openstack-cinder channel, I'm facing trouble with 3
 tests (apparently those failures are not related to the driver), and I'm
 trying to discover whether it's a bug to report or an
 infrastructure/configuration problem.
 
 I'll switch the CI back to check mode and I'll continue working on these
 failures in isolation to investigate the problem. Is that OK? The CI is running
 300 tests now (http://177.84.241.119:1/27/164527/3/silent/).

Yes.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-02 Thread Boris Pavlovic
Hi stackers,

Recently, I started working on speeding up Rally cli.

What I realized immediately is that I didn't understand why it takes
700-800ms just to run the rally version command, and it is an impossibly
hard task to find out what takes so much time just by reading the code.

I started playing with patching __import__ and made a simple but powerful
tool that allows you to trace any imports and get pretty graphs of nested
importing with timings:

https://github.com/boris-42/profimp
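
The core trick can be sketched in a few lines (a simplified illustration of
the approach, not profimp's actual code; note that a parent's measured time
includes that of its nested imports):

    import builtins  # '__builtin__' on Python 2
    import time

    _real_import = builtins.__import__
    _timings = []


    def _timed_import(name, *args, **kwargs):
        start = time.time()
        try:
            return _real_import(name, *args, **kwargs)
        finally:
            _timings.append((name, time.time() - start))


    builtins.__import__ = _timed_import
    import json  # every import executed here is now traced
    builtins.__import__ = _real_import

    for name, spent in sorted(_timings, key=lambda t: t[1], reverse=True):
        print("%-30s %8.3f ms" % (name, spent * 1000.0))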

So now it's simple to understand which imports take the most time, just by
running:

  profimp import lib --html


Let's optimize OpenStack libs and clients?


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PTL Candidacy

2015-04-02 Thread Adam Young

On 04/02/2015 04:31 PM, Morgan Fainberg wrote:


Hello Everyone!

It’s been an exciting development cycle (Kilo) and it is now time to 
start looking forward at Liberty and what that will hold. With that 
said, I’d like to ask for the community’s support to continue as the 
Keystone PTL for the Liberty release cycle.


I came to the table last cycle with a general goal of driving towards 
stability and improvement on user experience[1]. For the most part the 
Keystone team has managed to improve on a number of the big 
outstanding issues:


* Token Persistence issues (table bloat, etc), solved with 
non-persistent (Fernet) tokens.


* Improvements on the Federated identity use-cases.

* Hierarchical Multi-Tenancy (initial implementation)

* Significant progress on Keystone V3-only deployment models (a lot of 
work in the Keystone Client and Keystone Middleware)


* A good deal of technical debt paydown / cleanup

This cycle I come back to say that I don’t want to shake things up too 
much. I think we have a successful team of developers, reviewers, 
bug-triagers, and operators collaborating to make Keystone a solid 
part of the OpenStack Ecosystem. I remain committed to enabling the 
contributors (of all walks) to be part of our community and achieve 
success.


For the Liberty cycle I would like to see a continued focus on 
performance, user experience, deployer experience, and stability. What 
does this really mean for everyone contributing to Keystone? It means 
there are two clear sides for the Liberty cycle.


New Feature Work:

-

I want to see the development community pick a solid 5 or so “new” 
features to land in Liberty and really hit those out of the park 
(focused development from the very beginning of the cycle). Generally 
speaking, it looks like the new feature list is lining up around 
providing support / significantly better experience for the other 
project(s) under the OpenStack tent. In short, I see new Keystone 
development being less about the “interesting thing Keystone can do” 
and more about “the great things Keystone can do for the other projects”.


Non-Feature Work:

-

We have a lot of drivers/plugins, backends, all with their own rapidly 
moving interfaces that make it hard to know what to expect in the next 
release. It is time we sit down and commit to the interfaces for the 
backends, treat them as stable (just like the REST interface). A 
stable ABI for the Keystone backends/plugins goes a long way towards 
enabling our community to develop a rich set of backends/plugins for 
Identity, Assignment, Roles, Policy, etc. This is a further embracing 
of the “Big Tent” conversation; for example we can allow for 
constructive competition in how Keystone retrieves Identity from an 
Identity store (such as LDAP, AD, or SQL). Not all of the solutions 
need to be in the Keystone tree itself, but a developer can be assured 
that their driver isn’t going to need radical alterations between 
Liberty and the next release with this commitment to stable ABIs.


Beyond the stable interface discussion, the other top “non-feature” 
priorities are having a fully realized functional test suite (that can 
be run against an arbitrary deployment of Keystone, with whichever 
backend/configuration is desired), a serious look at performance 
profiling and what we can do to solve the next level of scaling 
issues, the ability to deploy OpenStack without Keystone V2 enabled, 
and finally looking at the REST API itself so that we can identify how 
we can improve the end-user’s experience (the user who consumes the 
API itself) especially when it comes to interacting with deployments 
with different backend configurations.


Some Concluding Thoughts:



I’ll reiterate my conclusion from the last time I ran, as it still 
absolutely sums up my feelings:


Above and beyond everything else, as PTL, I am looking to support the 
outstanding community of developers so that we can continue Keystone’s 
success. Without the dedication and hard work of everyone who has 
contributed to Keystone we would not be where we are today. I am 
extremely pleased with how far we’ve come and look forward to seeing 
the continued success as we move into the Liberty release cycle and 
beyond not just for Keystone but all of OpenStack.


Cheers,

Morgan Fainberg

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/046571.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Please vote for Morgan.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [TripleO][Heat] Overcloud software updates and ResourceGroups

2015-04-02 Thread Fox, Kevin M
I'm not sure how to feel about this... It's clever...

It kind of feels like you're really trying to be able to register 'actions' in 
Heat so that Heat users can poke the VMs to do something... for example, 
perform a Chef run.

While using stack updates as listed below could be made to work, is that trying 
to fit a square peg into a round hole? Would it be better to add a separate API 
for that? Maybe in the end, though, that's just a matter of what command the 
user runs, rather than how it gets things done? It may be the same under the hood.

What about multiple update actions? Perhaps some types of updates could be run 
in parallel and others must be done serially? How would you let the Autoscaling 
group know which updates could run which way?

As for ResourceGroup vs AutoscalingGroup, it would be really good for 
ResourceGroup to support rolling updates properly too. Would it be very 
difficult to implement it there as well?

While having the updates happen in the template dependency order is 
interesting, is that really the correct thing to do? Why not reverse order? I'm 
guessing it may totally depend on the software. Maybe some app needs the 
clients upgraded before the server, or the server upgraded before the clients? 
It may even be version-specific. There may even be some steps where it isn't 
obvious where to run them: update the clients, upgrade the server packages, 
stop the servers, run the db upgrade script on one of the servers, start up all 
the servers...

Maybe this is a good place to hook Mistral and Heat together. Heat would have 
an API that allows actions to be performed on VMs. It would not have any 
ordering. Mistral could then poke the Heat actions API for the stack to 
assemble workflows... Or, for tighter integration, maybe a CompoundAction 
resource is created that really is a Mistral workflow that pokes the action 
API, and the workflow would be exposed right back through the Heat action API 
so users could invoke complicated workflows the same way as simple ones...

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, April 02, 2015 3:31 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [TripleO][Heat] Overcloud software updates and 
ResourceGroups

A few of us have been looking for a way to perform software updates to
servers in a TripleO Heat/Puppet-based overcloud that avoids an
impedance mismatch with Heat concepts and how Heat runs its workflow. As
many talented TripleO-ers who have gone before can probably testify,
that's surprisingly difficult to do, but we did come up with an idea
that I think might work and which I'd like to get wider feedback on. For
clarity, I'm speaking here in the context of the new
overcloud-without-mergepy templates.

The idea is that we create a SoftwareConfig that, when run, can update
some software on the server. (The exact mechanism for the update is not
important for this discussion; suffice to say that in principle it could
be as simple as [yum|apt-get] update.) The SoftwareConfig would have
at least one input, though it need not do anything with the value.

Then each server has that config deployed to it with a
SoftwareDeployment at the time it is created. However, it is set to
execute only on the UPDATE action. The value of (one of) the input(s) is
obtained from a parameter.

As a result, we can trigger the software update by simply changing the
value of the input parameter, and the regular Heat dependency graph will
be respected. The actual input value could be by convention a uuid, a
timestamp, a random string, or just about anything so long as it changes.

Here's a trivial example of what this deployment might look like:

   update_config:
     type: OS::Heat::SoftwareConfig
     properties:
       config: {get_file: do_sw_update.sh}
       inputs:
         - name: update_after_time
           description: Timestamp of the most recent update request

   update_deployment:
     type: OS::Heat::SoftwareDeployment
     properties:
       actions:
         - UPDATE
       config: {get_resource: update_config}
       server: {get_resource: my_server}
       input_values:
         update_after_time: {get_param: update_timestamp}


(A possible future enhancement is that if you keep a mapping between
previous input values and the system state after the corresponding
update, you could even automatically handle rollbacks in the event the
user decided to cancel the update.)

And now we should be able to trigger an update to all of our servers, in
the regular Heat dependency order, by simply (thanks to the fact that
parameters now keep their previous values on stack updates unless
they're explicitly changed) running a command like:

   heat stack-update my_overcloud -f $TMPL -P update_timestamp=$(date)

(A future goal of Heat is to make specifying the template again optional
too... I don't think that change landed yet, but in this case we can
always obtain the template from Tuskar, so 

Re: [openstack-dev] [all][oslo][clients] Let's speed up start of OpenStack libs and clients by optimizing imports with profimp

2015-04-02 Thread Monty Taylor
On 04/02/2015 06:22 PM, Brant Knudson wrote:
 On Thu, Apr 2, 2015 at 4:52 PM, Boris Pavlovic bo...@pavlovic.me wrote:
 
 Hi stackers,

 Recently, I started working on speeding up Rally cli.

  What I realized immediately is that I didn't understand why it takes
  700-800ms just to run the rally version command, and it is an impossibly
  hard task to find out what takes so much time just by reading the code.

  I started playing with patching __import__ and made a simple but powerful
  tool that allows you to trace any imports and get pretty graphs of nested
 importing with timings:

 https://github.com/boris-42/profimp

  So now it's simple to understand which imports take the most time, just by
 running:

   profimp import lib --html


 Let's optimize OpenStack libs and clients?


 Best regards,
 Boris Pavlovic


 There's a review in python-keystoneclient to do lazy importing of modules
 here: https://review.openstack.org/#/c/164066/ . It would be interesting to
 know if this improves the initial import time significantly. Also, this can
 be an example of how to improve other libraries.

Yes please.

Also - for libraries - let's try to not import lots of things.
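
One cheap way to keep module import light, sketched here as an illustration
(the function names and the choice of gzip as the "heavy" dependency are made
up), is to defer imports into the code paths that actually need them:

    def show_version():
        # Fast path: needs no heavy imports at all.
        print("client 1.0.0")


    def upload(path):
        # Deferred import: gzip stands in for a heavy dependency that only
        # this subcommand needs, so 'show_version' stays fast.
        import gzip
        with open(path, 'rb') as src:
            return gzip.compress(src.read())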


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PTL Candidacy

2015-04-02 Thread Jeremy Stanley
On 2015-04-02 19:32:52 -0400 (-0400), Adam Young wrote:
 Please vote for Morgan.

Please refrain from distributing campaign literature, placing
political advertising or soliciting votes within 25 meters of the
polling place. ;)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][Heat] Overcloud software updates and ResourceGroups

2015-04-02 Thread Zane Bitter
A few of us have been looking for a way to perform software updates to 
servers in a TripleO Heat/Puppet-based overcloud that avoids an 
impedance mismatch with Heat concepts and how Heat runs its workflow. As 
many talented TripleO-ers who have gone before can probably testify, 
that's surprisingly difficult to do, but we did come up with an idea 
that I think might work and which I'd like to get wider feedback on. For 
clarity, I'm speaking here in the context of the new 
overcloud-without-mergepy templates.


The idea is that we create a SoftwareConfig that, when run, can update 
some software on the server. (The exact mechanism for the update is not 
important for this discussion; suffice to say that in principle it could 
be as simple as [yum|apt-get] update.) The SoftwareConfig would have 
at least one input, though it need not do anything with the value.


Then each server has that config deployed to it with a 
SoftwareDeployment at the time it is created. However, it is set to 
execute only on the UPDATE action. The value of (one of) the input(s) is 
obtained from a parameter.


As a result, we can trigger the software update by simply changing the 
value of the input parameter, and the regular Heat dependency graph will 
be respected. The actual input value could be by convention a uuid, a 
timestamp, a random string, or just about anything so long as it changes.


Here's a trivial example of what this deployment might look like:

  update_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: {get_file: do_sw_update.sh}
      inputs:
        - name: update_after_time
          description: Timestamp of the most recent update request

  update_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      actions:
        - UPDATE
      config: {get_resource: update_config}
      server: {get_resource: my_server}
      input_values:
        update_after_time: {get_param: update_timestamp}


(A possible future enhancement is that if you keep a mapping between 
previous input values and the system state after the corresponding 
update, you could even automatically handle rollbacks in the event the 
user decided to cancel the update.)


And now we should be able to trigger an update to all of our servers, in 
the regular Heat dependency order, by simply (thanks to the fact that 
parameters now keep their previous values on stack updates unless 
they're explicitly changed) running a command like:


  heat stack-update my_overcloud -f $TMPL -P update_timestamp=$(date)

(A future goal of Heat is to make specifying the template again optional 
too... I don't think that change landed yet, but in this case we can 
always obtain the template from Tuskar, so it's not so bad.)



Astute readers may have noticed that this does not actually solve our 
problem. In reality groups of similar servers are deployed within 
ResourceGroups and there are no dependencies between the members. So, 
for example, all of the controller nodes would be updated in parallel, 
with the likely result that the overcloud could be unavailable for some 
time even if it is deployed with HA.


The good news is that a solution to this problem is already implemented 
in Heat: rolling updates. For example, the controller node availability 
problem can be solved by setting a rolling update batch size of 1. The 
bad news is that rolling updates are implemented only for 
AutoscalingGroups, not ResourceGroups.


Accordingly, I propose that we switch the implementation of 
overcloud-without-mergepy from ResourceGroups to AutoscalingGroups. This 
would be a breaking change for overcloud updates (although no worse than 
the change from merge.py over to overcloud-without-mergepy), but that 
also means that there'll never be a better time than now to make it.


I suspect that some folks (Tomas?) have possibly looked into this in the 
past... can anybody identify any potential obstacles to the change? Two 
candidates come to mind:


1) The SoftwareDeployments (plural) resource type. I believe we 
carefully designed that to work with both ResourceGroup and 
AutoscalingGroup though.
2) The elision feature (https://review.openstack.org/#/c/128365/). 
Steve, I think this was only implemented for ResourceGroup? An 
AutoscalingGroup version of this should be feasible though, or do we 
have better ideas for how to solve it in that context?


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Stanislaw Bogatkin
Hi Przemyslaw,
I would be glad to be a core reviewer for fuel-plugin-glusterfs, as it seems
that I was the only person who pushed any commits to it.

On Thu, Apr 2, 2015 at 10:47 AM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

 Since there is no reply here I have taken steps to become core reviewer
 of the (orphaned) repos [1], [2], [3], [4].

 Should anyone want to take responsibility for them please write me.

 I have also taken steps to get the fuel-qa script working and will make
 sure tests pass with new manifests. I will also update manifests'
 version so that there will be no deprecation warnings.

 P.

 [1]

 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-glusterfs,access
 [2]

 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-group-based-policy,access
 [3]

 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-nfs,access
 [4]

 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-cinder-netapp,access

 On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
  Hello,
 
  I've been investigating bug [1] concentrating on the
  fuel-plugin-external-glusterfs.
 
  First of all: [2] there are no core reviewers for Gerrit for this repo
  so even if there was a patch to fix [1] no one could merge it. I saw
  also fuel-plugin-external-nfs -- same issue, haven't checked other
  repos. Why is this? Can we fix this quickly?
 
  Second, the plugin throws:
 
  DEPRECATION WARNING: The plugin has old 1.0 package format, this format
  does not support many features, such as plugins updates, find plugin in
  new format or migrate and rebuild this one.
 
  I don't think this is appropriate for a plugin that is listed in the
  official catalog [3].
 
  Third, I created a supposed fix for this bug [4] and wanted to test it
  with the fuel-qa scripts. Basically I built an .fp file with
  fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
  to point to that .fp file and then ran the
  group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
  Then I reverted the changes from the patch and the test still failed
  [6]. But installing the plugin by hand shows that it's available there
  so I don't know if it's a broken plugin test or if I'm still missing
 something.
 
  It would be nice to get some QA help here.
 
  P.
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415058
  [2] https://review.openstack.org/#/admin/groups/577,members
  [3] https://fuel-infra.org/plugins/catalog.html
  [4] https://review.openstack.org/#/c/169683/
  [5]
 
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
  [6]
 
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread Sean Dague
On 04/02/2015 06:54 AM, James Bottomley wrote:
 On Thu, 2015-04-02 at 06:45 -0400, Sean Dague wrote:
 On 04/02/2015 06:33 AM, James Bottomley wrote:
 On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
 Sean Dague wrote:
 I just spent a chunk of the morning purging out some really old
 Incomplete bugs because about 9 months ago we disabled the auto
 expiration bit in launchpad -
 https://bugs.launchpad.net/nova/+configure-bugtracker

 This is a manually grueling task, which by looking at these bugs, no one
 else is doing. I'd like to turn that bit back on so we can actually get
 attention focused on actionable bugs.

 Any objections here?

 No objection, just a remark:

 One issue with auto-expiration is that it usually results in the
 following story:

 1. Someone reports bug
 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
 3. Reporter provides details
 4. Triager does not notice reply on bug since they ignore INCOMPLETE
 5. Bug expires after n months and disappears forever
 6. Reporter is frustrated and won't ever submit issues again

 The problem is of course at step 4, not at step 5. Auto-expiration is
 very beneficial if your bug triaging routine includes checking Launchpad
 for INCOMPLETE bugs with an answer regularly. If nobody does this very
 boring task, then auto-expiration can be detrimental.

 Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
 task 4 in https://wiki.openstack.org/wiki/BugTriage ...

 This actually looks to be a problem in the workflow to me.

 The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
 Need Info/Open states.  The difference is that in bugzilla, a reporter
 can clear the Need Info flag.  This is also what needs to happen in
 OpenStack (so the reporter doesn't need to wait on anyone looking at
 their input to move the bug on).

 I propose allowing the reporter to move the bug to Confirmed when they
 supply the information making it incomplete.  If the triager thinks this
 is wrong, they can set it back to incomplete again.  This has the net
 effect that Incomplete needs no real review; it marks bugs the reporter
 doesn't care enough about to reply to... and these can be auto-expired.

 This would make the initial state diagram


 +---+     Review      +----------+
 |New|---------------->|Incomplete|
 +---+                 +----------+
   |                      ^     |
   |     Still Needs Info |     | Reporter replies
   |                      |     v
   |       Review       +-----------+
   +-------------------->| Confirmed |
                         +-----------+


 James

 Reporters can definitely move it back to New, which is the expected
 flow; that means it gets picked up again on the next New bug sweep.
 That's Step #1 in triaging (for Nova we've aggressively worked to keep
 that very near 0). I don't remember if they can also move it into
 Confirmed themselves if they aren't in the nova-bugs group, though that
 is an open group.

 Mostly the concern is people that don't understand the tools or bug
 flow. So they respond and leave it in Incomplete. Or it's moved to
 Incomplete and they never respond because they don't understand that
 more info is needed. These things sit there for a year, and then there
 is some whiff of a real problem in them, but no path forward with that
 information.
 
 But we have automation: the system can move it to Confirmed when they
 reply.  The point is to try to make the states and timeouts
 self-classifying.  If Incomplete means no one cared enough about this bug
 to supply the requested information, then it's a no-brainer candidate for
 expiry.  The question I was asking is whether the states could be set up so
 this happens and I believe the answer based on the above workflow is
 yes.
 
 Now if it sits in Confirmed because the triager didn't read the supplied
 information, it's not a candidate for expiry, it's a candidate for
 kicking someone's arse.
 
 The fundamental point is to make the states align with time-triggered
 actionable consequences.  That's what I believe the problem with the
 current workflow is.  Someone has to look at each bug to determine what
 Incomplete actually means, which I'd view as unbelievably painful for
 that person (or group of people).

So, it's Launchpad; we only have so much control over things, as we don't
host it.

But that being said, New is a better state. It means that a bug triager
actually looks at the response and says 'yep, that's sufficient' and
moves it to Confirmed. I'm not sure what the problem is there as long as
New bugs are getting looked at regularly.

In almost all times Incomplete is set with a Question. We don't just
flag things Incomplete without commentary.

Realistically we've gotten a handle on the top end (Triaged, which means
directly actionable) and the bottom end (New) in Nova. Where things are
getting lost is in the giant pool of Confirmed Medium/Low bugs that sit and
rot and that people don't tend to work on.

-Sean

-- 
Sean Dague
http://dague.net


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Hello,

Done, added you.

I already created something that should fix the tests for glusterfs: [1]

Also the fuel-qa is not entirely correct for testing the glusterfs
plugin: here's the proposed fix [2].

Unfortunately the tests still fail with this message: [3]

I had an error about GLUSTER_CLUSTER_ENDPOINT being undefined so I set
it like: GLUSTER_CLUSTER_ENDPOINT=127.0.0.2:/mnt but I'm not sure if
it's correct (CI job has some custom-setup server with glusterfs for this).

Here are the logs [4]. Will you take over?

P.

[1] https://review.openstack.org/#/c/169683/
[2] https://review.openstack.org/170094
[3] http://sprunge.us/BYVY
[4]
https://www.dropbox.com/s/io6aeogidc49qxk/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_02__10_52_22.tar.xz?dl=0

On 04/02/2015 12:07 PM, Stanislaw Bogatkin wrote:
 Hi Przemyslaw,
 I would be glad to be core reviewer to fuel-plugin-glusterfs as long as
 seems than I was only one person who push some commits to it.
 
 On Thu, Apr 2, 2015 at 10:47 AM, Przemyslaw Kaminski
 pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:
 
 Since there is no reply here I have taken steps to become core reviewer
 of the (orphaned) repos [1], [2], [3], [4].
 
 Should anyone want to take responsibility for them please write me.
 
 I have also taken steps to get the fuel-qa script working and will make
 sure tests pass with new manifests. I will also update manifests'
 version so that there will be no deprecation warnings.
 
 P.
 
 [1]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-glusterfs,access
 [2]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-group-based-policy,access
 [3]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-external-nfs,access
 [4]
 
 https://review.openstack.org/#/admin/projects/stackforge/fuel-plugin-cinder-netapp,access
 
 On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
  Hello,
 
  I've been investigating bug [1] concentrating on the
  fuel-plugin-external-glusterfs.
 
  First of all: [2] there are no core reviewers for Gerrit for this repo
  so even if there was a patch to fix [1] no one could merge it. I saw
  also fuel-plugin-external-nfs -- same issue, haven't checked other
  repos. Why is this? Can we fix this quickly?
 
  Second, the plugin throws:
 
  DEPRECATION WARNING: The plugin has old 1.0 package format, this
 format
  does not support many features, such as plugins updates, find
 plugin in
  new format or migrate and rebuild this one.
 
  I don't think this is appropriate for a plugin that is listed in the
  official catalog [3].
 
  Third, I created a supposed fix for this bug [4] and wanted to test it
  with the fuel-qa scripts. Basically I built an .fp file with
  fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH
 variable
  to point to that .fp file and then ran the
  group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
  Then I reverted the changes from the patch and the test still failed
  [6]. But installing the plugin by hand shows that it's available there
  so I don't know if it's a broken plugin test or if I'm still missing
 something.
 
  It would be nice to get some QA help here.
 
  P.
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415058
  [2] https://review.openstack.org/#/admin/groups/577,members
  [3] https://fuel-infra.org/plugins/catalog.html
  [4] https://review.openstack.org/#/c/169683/
  [5]
 
 
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
  [6]
 
 
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reducing noise of the ML

2015-04-02 Thread Sean Dague
On 04/02/2015 05:37 AM, Thierry Carrez wrote:
 Michael Still wrote:
 Actually, for some projects the +1 is part of a public voting process
 and therefore required.
 
 Could that public voting process happen somewhere else ? Like at an
 IRC meeting ?

For global teams there is no IRC meeting that lets everyone have a voice.

 Also, did anyone ever vote -1 ?

Yes, there was a Cinder vote that had -1s (for invalid reasons); it was
useful to have that in the open because it straightened out some culture
issues.

Also I failed to gather the requisite 5 +1s the first time I was
proposed for nova-core. While embarrassing, I got over it. I do get that
this made a lot of people straw-poll more heavily first to avoid those
kinds of situations.

 (FWIW originally we used lazy consensus -- PTL proposes, and approval is
 automatic after a while unless someone *opposes*. Not sure when +1s or
 public voting was added as a requirement).

No idea myself. But it's been here since I started participating in the
project in March 2012.

Requiring a [core] tag for these so people can get rid of them if they
don't care seems fair.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread Sean Dague
On 04/02/2015 06:33 AM, James Bottomley wrote:
 On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
 Sean Dague wrote:
 I just spent a chunk of the morning purging out some really old
 Incomplete bugs because about 9 months ago we disabled the auto
 expiration bit in launchpad -
 https://bugs.launchpad.net/nova/+configure-bugtracker

 This is a manually grueling task, which by looking at these bugs, no one
 else is doing. I'd like to turn that bit back on so we can actually get
 attention focused on actionable bugs.

 Any objections here?

 No objection, just a remark:

 One issue with auto-expiration is that it usually results in the
 following story:

 1. Someone reports bug
 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
 3. Reporter provides details
 4. Triager does not notice reply on bug since they ignore INCOMPLETE
 5. Bug expires after n months and disappears forever
 6. Reporter is frustrated and won't ever submit issues again

 The problem is of course at step 4, not at step 5. Auto-expiration is
 very beneficial if your bug triaging routine includes checking Launchpad
 for INCOMPLETE bugs with an answer regularly. If nobody does this very
 boring task, then auto-expiration can be detrimental.

 Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
 task 4 in https://wiki.openstack.org/wiki/BugTriage ...
 
 This actually looks to be a problem in the workflow to me.
 
 The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
 Need Info/Open states.  The difference is that in bugzilla, a reporter
 can clear the Need Info flag.  This is also what needs to happen in
 OpenStack (so the reporter doesn't need to wait on anyone looking at
 their input to move the bug on).
 
 I propose allowing the reporter to move the bug to Confirmed when they
 supply the information making it incomplete.  If the triager thinks this
 is wrong, they can set it back to incomplete again.  This has the net
 effect that Incomplete needs no real review, it marks bugs the reporter
 doesn't care enough about to reply... and these can be auto expired.
 
 This would make the initial state diagram
 
 
 +---+      Review       +----------+
 |New|------------------>|Incomplete|
 +---+                   +----------+
   |                        ^    |
   |      Still Needs Info  |    | Reporter replies
   |                        |    v
   |       Review         +---------+
   +--------------------->|Confirmed|
                          +---------+
 
 
 James

Reporters can definitely move it back to New, which is the expected
flow; that means it gets picked up again on the next New bug sweep.
That's Step #1 in triaging (for Nova we've aggressively worked to keep
that very near 0). I don't remember if they can also move it into
Confirmed themselves if they aren't in the nova-bugs group, though that
is an open group.

Mostly the concern is people that don't understand the tools or bug
flow. So they respond and leave in Incomplete. Or it's moved to
Incomplete and they never respond because they don't understand that
more info is needed. These things sit there for a year, and then there
is some whiff of a real problem in them, but no path forward with that
information.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Sergey Kraynev
Hi Guys.


 A couple of concerns:

 #1 - would have been really nice if the commit message for the review
 included the above block of text. The current commit message is not
 clear that Heat *can not* work.


I will update the commit message with the info mentioned in this thread.



 #2 - why wasn't the fact that Heat *can not* work raised earlier,
 because I assume that means there are tests that are blocking all kinds
 of changes?


The reason why we didn't raise it earlier is:
 when the issue was found, we asked the ceilometer team to release a new
version; after that I uploaded a patch to global-requirements and believed
that it would be merged before the release.
It does not block the gates (for the reason mentioned above), but the
reality is that with version 1.0.12
users cannot use any ceilometer resources in Heat (as a result we lose
autoscaling templates, where ceilometer plays one of the major roles).
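
To make the impact concrete, the resource type that fails is the Ceilometer
alarm. A minimal sketch of the kind of snippet that cannot be created with
1.0.12 (meter name, thresholds and the referenced scaling policy are
illustrative, not taken from a real template):

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [scale_up_policy, alarm_url]}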



 If this is truly blocking we can raise it with Thierry; he has final
 override here. However, if this means that one resource type doesn't
 work quite as expected, I don't think that warrants a freeze bump. The
 libraries are set to >= here, so nothing in Kilo prevents users from
 deciding to take that upgrade.

 Forcing that upgrade on all users for 1 use case which a user may or may
 not be using is not the point of GR.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread James Bottomley
On Thu, 2015-04-02 at 07:03 -0400, Sean Dague wrote:
 On 04/02/2015 06:54 AM, James Bottomley wrote:
  On Thu, 2015-04-02 at 06:45 -0400, Sean Dague wrote:
  On 04/02/2015 06:33 AM, James Bottomley wrote:
  On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
  Sean Dague wrote:
  I just spent a chunk of the morning purging out some really old
  Incomplete bugs because about 9 months ago we disabled the auto
  expiration bit in launchpad -
  https://bugs.launchpad.net/nova/+configure-bugtracker
 
  This is a manually grueling task, which by looking at these bugs, no one
  else is doing. I'd like to turn that bit back on so we can actually get
  attention focused on actionable bugs.
 
  Any objections here?
 
  No objection, just a remark:
 
  One issue with auto-expiration is that it usually results in the
  following story:
 
  1. Someone reports bug
  2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
  3. Reporter provides details
  4. Triager does not notice reply on bug since they ignore INCOMPLETE
  5. Bug expires after n months and disappears forever
  6. Reporter is frustrated and won't ever submit issues again
 
  The problem is of course at step 4, not at step 5. Auto-expiration is
  very beneficial if your bug triaging routine includes checking Launchpad
  for INCOMPLETE bugs with an answer regularly. If nobody does this very
  boring task, then auto-expiration can be detrimental.
 
  Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
  task 4 in https://wiki.openstack.org/wiki/BugTriage ...
 
  This actually looks to be a problem in the workflow to me.
 
  The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
  Need Info/Open states.  The difference is that in bugzilla, a reporter
  can clear the Need Info flag.  This is also what needs to happen in
  OpenStack (so the reporter doesn't need to wait on anyone looking at
  their input to move the bug on).
 
  I propose allowing the reporter to move the bug to Confirmed when they
  supply the information making it incomplete.  If the triager thinks this
  is wrong, they can set it back to incomplete again.  This has the net
  effect that Incomplete needs no real review, it marks bugs the reporter
  doesn't care enough about to reply... and these can be auto expired.
 
  This would make the initial state diagram
 
 
  +---+      Review       +----------+
  |New|------------------>|Incomplete|
  +---+                   +----------+
    |                        ^    |
    |      Still Needs Info  |    | Reporter replies
    |                        |    v
    |       Review         +---------+
    +--------------------->|Confirmed|
                           +---------+
 
 
  James
 
  Reporters can definitely move it back to New, which is the expected
  flow; that means it gets picked up again on the next New bug sweep.
  That's Step #1 in triaging (for Nova we've aggressively worked to keep
  that very near 0). I don't remember if they can also move it into
  Confirmed themselves if they aren't in the nova-bugs group, though that
  is an open group.
 
  Mostly the concern is people that don't understand the tools or bug
  flow. So they respond and leave in Incomplete. Or it's moved to
  Incomplete and they never respond because they don't understand that
  more info is needed. These things sit there for a year, and then there
  is some whiff of a real problem in them, but no path forward with that
  information.
  
  But we have automation: the system can move it to Confirmed when they
  reply.  The point is to try to make the states and timeouts self
  classifying.  If incomplete means no-one cared enough about this bug to
  supply requested information, then it's a no brainer candidate for
  exipry.  The question I was asking is could the states be set up so
  this happens and I believe the answer based on the above workflow is
  yes.
  
  Now if it sits in Confirmed because the triager didn't read the supplied
  information, it's not a candidate for expiry, it's a candidate for
  kicking someone's arse.
  
  The fundamental point is to make the states align with time triggered
  actionable consequences.  That's what I believe the problem with the
  current workflow is.  Someone has to look at each bug to determine what
  Incomplete actually means which I'd view as unbelievably painful for
  that person (or group of people).
 
So, it's Launchpad; we only have so much control over things, as we don't
host it.

Hm ... I wonder if the company that hosts it might be responsive to
community requests ...?  They can only say no ... and it is (now,
finally) open source, so you could run your own.

 But that being said, New is a better state. It means that a bug triager
 actually looks at the response and says yep, that's sufficient and
 moves it to Confirmed. I'm not sure what the problem is there as long as
 New bugs are getting looked at regularly.
 
Almost all the time, Incomplete is set with a question. We don't just
flag things 

[openstack-dev] [Elections] Nominations for OpenStack PTLs (Program Technical Leads) are now open

2015-04-02 Thread Tristan Cacqueray
Nominations for OpenStack PTLs (Program Technical Leads) are now open
and will remain open until April 9, 2015 05:59 UTC.

To announce your candidacy please start a new thread on the openstack-dev at
lists.openstack.org mailing list with the program name as a tag, for example
"[Glance] PTL Candidacy", with the body as your announcement of
intent. People who are not the candidate, please refrain from posting +1
to the candidate announcement posting.

I'm sure the electorate would appreciate a bit of information about why
you would make a great PTL and the direction you would like to take the
program, though it is not required for eligibility.

In order to be an eligible candidate (and be allowed to vote) in a given
PTL election, you need to have contributed an accepted patch to one of
the corresponding program's projects[0] during the Juno-Kilo
timeframe (April 9, 2014 06:00 UTC to April 9, 2015 05:59 UTC).

We need to elect PTLs for 25 programs this round:
* Compute (Nova) - one position
* Object Storage (Swift) - one position
* Image Service (Glance) - one position
* Identity (Keystone) - one position
* Dashboard (Horizon) - one position
* Networking (Neutron) - one position
* Block Storage (Cinder) - one position
* Metering/Monitoring (Ceilometer) - one position
* Orchestration (Heat) - one position
* Database Service (Trove) - one position
* Bare metal (Ironic) - one position
* Common Libraries (Oslo) - one position
* Infrastructure - one position
* Documentation - one position
* Quality Assurance (QA) - one position
* Deployment (TripleO) - one position
* Release cycle management  - one position
* Message service (Zaqar) - one position
* Data Processing Service (Sahara) - one position
* Key Management Service (Barbican) - one position
* DNS Services (Designate) - one position
* Shared File Systems (Manila) - one position
* Command Line Client (OpenStackClient) - one position
* OpenStack Containers Service (Magnum) - one position
* Application Catalog (Murano) - one position

Additional information about the nomination process can be found here:
https://wiki.openstack.org/wiki/PTL_Elections_April_2015

As Elizabeth and I confirm candidates, we will reply to each email
thread with "confirmed" and add each candidate's name to the list of
confirmed candidates on the above wiki page.

Elections will begin on April 10, 2015 (as soon as we get each election
set up we will start it, it will probably be a staggered start) and run
until after 13:00 UTC April 16, 2015.

The electorate is requested to confirm their email address in gerrit
(review.openstack.org > Settings > Contact Information > Preferred
Email) prior to April 9, 2015 05:59 UTC, so that the emailed
ballots are mailed to the correct email address.

Happy running,
Tristan Cacqueray (tristanC)

[0]
https://github.com/openstack/governance/blob/master/reference/projects.yaml



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Flavio Percoco

On 02/04/15 12:26 +0200, Thierry Carrez wrote:

Maru Newby wrote:

[...] Many of us in the Neutron
community find this taxonomy restrictive and not representative
of all the work that makes the project possible.


We seem to be after the same end goal. I just disagree that renaming
core reviewers to maintainers is a positive step toward that goal.


Worse, 'cores'
are put on a pedestal, and not just in the project.  Every summit
a 'core reviewer dinner' is held that underscores the
glorification of this designation.


I deeply regret that, and communicated to the sponsor holding it the
problem with this +2 dinner the very first time it was held. FWIW it's
been renamed to VIP dinner and no longer limited to core reviewers,
but I'd agree with you that the damage was already done.


By proposing to rename 'core
reviewer' to 'maintainer' the goal was to lay the groundwork for
broadening the base of people whose valuable contribution could
be recognized.  The goal was to recognize not just review-related
contributors, but also roles like doc/bug/test czar and cross-project
liaison.  The statue of the people filling these roles today is less
if they are not also ‘core’, and that makes the work less attractive
to many.


That's where we disagree. You see renaming core reviewer to
maintainer as a way to recognize a broader type of contributions. I
see it as precisely resulting in the opposite.

Simply renaming core reviewers to maintainers just keeps us using a
single term (or class) to describe project leadership. And that class
includes +2 reviewing duties. So you can't be a maintainer if you don't
do core reviewing. That is exclusive, not inclusive.

What we need to do instead is reviving the drivers concept (we can
rename it maintainers if you really like that term), separate from the
core reviewers concept. One can be a project driver and a core
reviewer. And one can be a project driver *without* being a core
reviewer. Now *that* allows us to recognize all valuable contributions,
and to be representative of all the work that makes the project possible.


While I don't think renaming core reviewers to maintainers will
fix the problem, I do recognize it as a step forward in fixing the
issue. It states that we know there's been a misunderstanding about
what the purpose of the team is and that we're working on changing that.

This being said, there are projects that already have the drivers and
cores teams split. Glance is one of them. This allows people to focus
on the areas they are most interested in.

Cheers,
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Apr 2 2015

2015-04-02 Thread Anne Gentle | Just Write Click
_ Doc team meeting APAC/Pacific edition _
Thanks to Joseph Robinson for running the APAC doc team meeting this week. 
You can get logs and minutes:
Minutes: 
http://eavesdrop.openstack.org/meetings/docteam/2015/docteam.2015-04-01-01.01.html
Log: 
http://eavesdrop.openstack.org/meetings/docteam/2015/docteam.2015-04-01-01.01.log.html

_ Progress and status _
Over 100 patches merged this week, nice job everyone. Our bug backlog 
continues to be, well, high. With 116 new, untriaged bugs in 
openstack-manuals and 581 triaged, we have a lot of work to do, and need 
all the help we can get. 

_ API docs updates _
Great work by Paul Michali and Diane Fleming at Cisco: they completed the API 
reference info for LBaaS v2.0, and it's now available at 
http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0. 
Way to go.

Great work by the sahara team: they landed a series of patches this week that 
put the Data processing API reference information on 
http://developer.openstack.org/api-ref-data-processing-v1.1.html. Thanks 
and kudos to PTL Sergey Lukjanov.

Great progress on Telemetry API reference docs, really impressive review in 
progress on capabilities for their API. Thank you Ildiko Vancsa for your 
diligence!

_ Welcoming more docs-core members_
Welcome to the three newest members of openstack-docs-core who have shown 
great review numbers and patches to the docs:
Maria Zlatkova
Olga Gusarenko 
Alexander Adamov

More +2 power! We also have a handful of up-and-comers who we hope to 
level up so that we can increase resources reviewing documentation for 
docs.openstack.org and developer.openstack.org.

_ Legal use of docs.openstack.org _
I'm working with the OpenStack Foundation to understand which projects 
should publish to docs.openstack.org. Many newer projects started 
automating builds to docs.openstack.org/developer/projectname and I need 
to find out where the line is drawn for what's hosted where and with what 
themes around the content. Thanks for your patience while we sort this out.


-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread Kashyap Chamarthy
On Thu, Apr 02, 2015 at 11:32:44AM +0200, Thierry Carrez wrote:
 Sean Dague wrote:
  I just spent a chunk of the morning purging out some really old
  Incomplete bugs because about 9 months ago we disabled the auto
  expiration bit in launchpad -
  https://bugs.launchpad.net/nova/+configure-bugtracker
  
  This is a manually grueling task, which by looking at these bugs, no one
  else is doing. I'd like to turn that bit back on so we can actually get
  attention focused on actionable bugs.
  
  Any objections here?
 
 No objection, just a remark:
 
 One issue with auto-expiration is that it usually results in the
 following story:
 
 1. Someone reports bug
 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
 3. Reporter provides details
 4. Triager does not notice reply on bug since they ignore INCOMPLETE
 5. Bug expires after n months and disappears forever
 6. Reporter is frustrated and won't ever submit issues again
 
 The problem is of course at step 4, not at step 5. Auto-expiration is
 very beneficial if your bug triaging routine includes checking Launchpad
 for INCOMPLETE bugs with an answer regularly. If nobody does this very
 boring task, then auto-expiration can be detrimental.

This is an excellent point. 

A reporter takes time to file a bug; it should not be mindlessly expired
by a bot (with an Invalid rationale) without even any preliminary
investigation in the first place.

Unfortunately, I see most attention going only to refactoring or to
half-baked shiny new features, which fall apart if you sneeze. Perils of
moving fast.

 Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
 task 4 in https://wiki.openstack.org/wiki/BugTriage ...

I doubt that; I mostly only see Sean take up this drudgery. I should
admit I try sometimes, but I just feel overwhelmed by the sheer numbers
and get distracted by other work.

PS: On a side note, I wish Launchpad (or the upcoming Storyboard) had
a state like INSUFFICIENT_DATA instead of the current, lousy
Invalid state (used as a blanket rationale for most bugs we want
to close). A bug can be _valid_ but might not have sufficient data for
any number of reasons; it ought to be closed, accurately, as
INSUFFICIENT_DATA, not Invalid.


-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [heat] Application level HA via Heat

2015-04-02 Thread Huangtianhua
If we replace an autoscaling group member, we can't make sure the attached
resources stay the same. Why not call Nova's evacuate or rebuild API instead,
add meters for HA (VM state or host state) in Ceilometer, and then
signal an HA resource (such as HARestarter)?

-----Original Message-----
From: Steven Hardy [mailto:sha...@redhat.com] 
Sent: December 23, 2014 2:21
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat] Application level HA via Heat

Hi all,

So, lately I've been having various discussions around $subject, and I know 
it's something several folks in our community are interested in, so I wanted to 
get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with AutoScaling 
group, then give some initial ideas of how we might evolve that into something 
capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality should 
be available via AutoScalingGroups of size 1.  Turns out that shouldn't be too 
hard to do:

 resources:
   server_group:
     type: OS::Heat::AutoScalingGroup
     properties:
       min_size: 1
       max_size: 1
       resource:
         type: ha_server.yaml

   server_replacement_policy:
     type: OS::Heat::ScalingPolicy
     properties:
       # FIXME: this adjustment_type doesn't exist yet
       adjustment_type: replace_oldest
       auto_scaling_group_id: {get_resource: server_group}
       scaling_adjustment: 1
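
(For anyone unfamiliar with provider resources: ha_server.yaml above is just
a nested template whose resources describe the server to protect. A minimal
sketch, with illustrative image/flavor values:

   heat_template_version: 2013-05-23
   resources:
     server:
       type: OS::Nova::Server
       properties:
         image: my_image    # illustrative
         flavor: m1.small   # illustrative

Nothing else about it is special; the group just stamps out copies of it.)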

So, currently our ScalingPolicy resource can only support three adjustment 
types, all of which change the group capacity.  AutoScalingGroup already 
supports batched replacements for rolling updates, so if we modify the 
interface to allow a signal to trigger replacement of a group member, then the 
snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

 - Standardize the ScalingPolicy-AutoScalingGroup interface, so asynchronous 
adjustments (e.g. signals) between the two resources don't use the adjust 
method.

 - Add an option to replace a member to the signal interface of AutoScalingGroup

 - Add the new replace adjustment type to ScalingPolicy

I posted a patch which implements the first step, and the second will be 
required for TripleO, i.e. we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling action 
is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
  in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
  and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
  script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment 
resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the node is 
too bricked to respond and specifying DELETE action so it only runs when we 
replace the resource).
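
A minimal sketch of that pattern, to be placed inside ha_server.yaml (the
script contents and resource names are illustrative):

   quiesce_config:
     type: OS::Heat::SoftwareConfig
     properties:
       group: script
       config: |
         #!/bin/sh
         # Best-effort quiesce; must not block replacement if the node is dead
         service my_app stop || true

   quiesce_deployment:
     type: OS::Heat::SoftwareDeployment
     properties:
       config: {get_resource: quiesce_config}
       server: {get_resource: server}
       actions: [DELETE]            # only run when the member is deleted/replaced
       signal_transport: NO_SIGNAL  # don't wait on a possibly-bricked node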

The third step is possible either via a script inside the box which polls for 
the volume attachment, or possibly via an update-only software config.

The second step is the missing piece AFAICS.

I've been wondering if we can do something inside a new heat resource, which 
knows what the current active member of an ASG is, and gets triggered on a 
replace signal to orchestrate e.g deleting and creating a VolumeAttachment 
resource to move a volume between servers.

Something like:

 resources:
   server_group:
     type: OS::Heat::AutoScalingGroup
     properties:
       min_size: 2
       max_size: 2
       resource:
         type: ha_server.yaml

   server_failover_policy:
     type: OS::Heat::FailoverPolicy
     properties:
       auto_scaling_group_id: {get_resource: server_group}
       resource:
         type: OS::Cinder::VolumeAttachment
         properties:
           # FIXME: refs is a ResourceGroup interface not currently
           # available in AutoScalingGroup
           instance_uuid: {get_attr: [server_group, refs, 1]}

   server_replacement_policy:
     type: OS::Heat::ScalingPolicy
     properties:
       # FIXME: this adjustment_type doesn't exist yet
       adjustment_type: replace_oldest
       auto_scaling_policy_id: {get_resource: server_failover_policy}
       scaling_adjustment: 1

By chaining policies like this we could trigger an update on the attachment 
resource (or a nested template via a provider resource containing many 
attachments or other resources) every time the ScalingPolicy is triggered.

For the sake of clarity, I've not included the existing stuff like ceilometer 
alarm resources etc. above, but hopefully it gets the idea across so we can 
discuss 

Re: [openstack-dev] [Heat][Horizon] What we can do for Heat in Horizon else?

2015-04-02 Thread Pavlo Shchelokovskyy
Hi folks,

there is another uncovered feature that jumps out at me:

say you have a stack containing an autoscaling group (a nested stack) where
the scaled resource is also a nested stack. I can click on the uuid of the
asg to get a page, similar to other stacks, showing me the structure of the
nested stack described by the autoscaling group, but I cannot further click
on the uuid of the scaled resource to get its representation as a stack.
Copy-pasting the uuid of the scaled resource into the Horizon url as
<HorizonHost>/project/stacks/<uuid> does the trick, and the structure of the
resource as a stack is displayed.

As an example you can use the templates for the test_autoscaling_lb
integration test, under review [1]
[1] https://review.openstack.org/#/c/165944/

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Apr 2, 2015 at 11:55 AM, Sergey Kraynev skray...@mirantis.com
wrote:

 Hi community.

 I want to ask for feedback from our Heat team and also involve the Horizon
 team in this discussion.
 AFAIK this bp was implemented during Kilo:
 https://blueprints.launchpad.net/horizon/+spec/heat-ui-improvement

 This bp added more base Heat functionality to Horizon.
 I asked the Heat folks for ideas. What else do we want to have here?

 There is only one idea from me, about topology:
 add filters for displaying only particular resources (by their type).
 E.g. a stack has 50 resources, but half of them are network resources.
 As a user I want to see only the network level, so I enable filtering by
 network resources.


 Regards,
 Sergey.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Sean Dague
On 04/02/2015 05:42 AM, Eoghan Glynn wrote:
 
 
 Hi all,

 we have a problem with dependencies for the kilo-rc1 release of Heat - see
 bug [1]. The root cause is that ceilometerclient was not updated for a long
 time and just got an update recently. We are sure that Heat in Kilo will not
 work with ceilometerclient <=1.0.12 (users would not be able to create
 Ceilometer alarms in their stacks). At the same time, global requirements
 have ceilometerclient >=1.0.12. That works on the gate, but will fail for
 any deployment that happens to use an outdated pypi mirror. I am also afraid
 that if the version of ceilometerclient were upper-capped to 1.0.12 in
 stable/kilo, Heat in stable/kilo would be completely broken in regards to
 Ceilometer alarms usage.

 The patch to global requirements was already proposed [2] but is blocked by
 requirements freeze. Can we somehow apply for an exception and still merge
 it? Are there any other OpenStack projects besides Heat that use
 ceilometerclient's Python API (just asking to assert the testing burden)?

 [1] https://bugs.launchpad.net/python-ceilometerclient/+bug/1423291

 [2] https://review.openstack.org/#/c/167527/
 
 Pavlo - part of the resistance here I suspect may be due to the
 fact that I inadvertently broke the SEMVER rules when cutting
 the ceilometerclient 1.0.13 release, i.e. it was not sufficiently
 backward compatible with 1.0.12 to warrant only a Z-version bump.
 
 Sean - would you be any happier with making a requirements freeze
 exception to facilitate Heat if we were to cut a fresh ceiloclient
 release that's properly versioned, i.e. 2.0.0?

A couple of concerns:

#1 - would have been really nice if the commit message for the review
included the above block of text. The current commit message is not
clear that Heat *can not* work.

#2 - why wasn't the fact that Heat *can not* work raised earlier,
because I assume that means there are tests that are blocking all kinds
of changes?

If this is truly blocking we can raise it with Thierry; he has final
override here. However, if this means that one resource type doesn't
work quite as expected, I don't think that warrants a freeze bump. The
libraries are set to >= here, so nothing in Kilo prevents users from
deciding to take that upgrade.
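
For reference, the change under discussion is just a bump of the minimum
version in global-requirements, along the lines of the following sketch
(the actual review is [2] above):

   python-ceilometerclient>=1.0.13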

Forcing that upgrade on all users for 1 use case which a user may or may
not be using is not the point of GR.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler

2015-04-02 Thread Sean Dague
Gantt is a dead and abandoned source tree, it also means nothing to new
people joining into helping with the nova scheduler. It keeps generating
confusion.

Gantt is dead, the current effort is nova-scheduler improvement. Please
just call it that.

-Sean

On 04/02/2015 04:10 AM, Sylvain Bauza wrote:
 
 Le 02/04/2015 01:28, Dugger, Donald D a écrit :
 I think there's a lot of `a rose by any other name would smell as
 sweet' going on here; we're really just arguing about how we label
 things.  I admit I use the term gantt very expansively: it covers the
 effort to clean up the current scheduler and create a separate
 scheduler-as-a-service project.  There should be no reason for this
 effort to turn people off; if you're interested in the scheduler
 then very quickly you will get pointed to gantt.

 I'd like to hear what others think but I still don't see a need to
 change the name (but I'm willing to change if the majority thinks we
 should drop gantt for now).
 
 Erm, I discussed that point during the weekly meeting and I asked for
 people to give their opinion in this email thread.
 
 http://eavesdrop.openstack.org/meetings/gantt/2015/gantt.2015-03-31-15.00.html
 
 
 As a meeting is by definition a synchronous thing, should we maybe try
 to async that decision using Gerrit ? I could pop up a resolution in
 Gerrit so that people could -1 or +1 it.
 
 -Sylvain
 
 
 -- 
 Don Dugger
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 Ph: 303/443-3786

 -Original Message-
 From: Sylvain Bauza [mailto:sba...@redhat.com]
 Sent: Tuesday, March 31, 2015 1:49 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop
 using Gantt for discussing about Nova scheduler


 Le 31/03/2015 02:57, Dugger, Donald D a écrit :
 I actually prefer to use the term Gantt, it neatly encapsulates the
 discussions and it doesn't take much effort to realize that Gantt
 refers to the scheduler and, if you feel there is confusion, we can
 clarify things in the wiki page to emphasize the process: clean up
 the current scheduler interfaces and then split off the scheduler. 
 The end goal will be the Gantt scheduler and I'd prefer not to change
 the discussion.

 Bottom line is I don't see a need to drop the Gantt reference.
 While I agree with you that *most* of the scheduler effort is to
 spin off the scheduler as a dedicated repository whose codename is
 Gantt, there are a few points to note:
1. not all the efforts are related to the split, some are only
 reducing the tech debt within Nova (eg.
 bp/detach-service-from-computenode has very little impact on the
 scheduler itself, but rather on what is passed to the scheduler as
 resources) and may confuse people who could wonder why it is related
 to the split

 2. We haven't yet agreed on a migration path for Gantt and what will
 become the existing nova-scheduler. I seriously doubt that the Nova
 community would accept to keep the existing nova-scheduler as a
 feature duplicate to the future Gantt codebase, but that has been not
 yet discussed and things can be less clear

 3. Based on my experience, we are losing contributors or people
 interested in the scheduler area because they just don't know that
 Gantt is actually at the moment the Nova scheduler.


 I seriously don't think that deciding to leave the Gantt codename
 unused while we're working on Nova will impact our
 capacity to propose an alternative based on a separate repository,
 ideally as a cross-project service. It will just reflect the
 reality, i.e. that Gantt is at the moment more an idea than a project.

 -Sylvain



 -- 
 Don Dugger
 Censeo Toto nos in Kansa esse decisse. - D. Gale
 Ph: 303/443-3786

 -Original Message-
 From: Sylvain Bauza [mailto:sba...@redhat.com]
 Sent: Monday, March 30, 2015 8:17 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [nova] [scheduler] [gantt] Please stop using
 Gantt for discussing about Nova scheduler

 Hi,

 tl;dr: I used the [gantt] tag for this e-mail, but I would prefer if
 we could do this for the last time until we spin off the project.

 As it is confusing for many people to understand the difference
 between the future Gantt project and the Nova scheduler effort
 we're doing, I'm proposing to stop using that name for all the
 efforts related to reducing the technical debt and splitting out the
 scheduler. That includes, not exhaustively, the topic name for our
 IRC weekly meetings on Tuesdays, any ML thread related to the Nova
 scheduler or any discussed related to the scheduler happening on IRC.
 Instead of using [gantt], please use [nova] [scheduler] tags.

 That said, any discussion related to the real future of a
 cross-project scheduler based on the existing Nova scheduler makes
 sense to be tagged as Gantt, of course.


 -Sylvain


 

Re: [openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Pavlo Shchelokovskyy
Sean,

unfortunately, in Heat we do not yet have integration tests for all the
Heat resources (creating them in a real OpenStack), and Ceilometer alarms
are among those not covered. In unit tests the real client is of course
mocked out. When we stumbled on this issue during normal Heat usage, we
promptly raised a bug suggesting to make a new release, but propagating it
to requirements took some time. The gate is not affected, as it installs,
per the >= in requirements, the latest version, which is 1.0.13.

With ceilometerclient 1.0.12 and Heat-Kilo, the Ceilometer alarm resource
does not merely work not quite as expected; it cannot be created at all,
failing any stack that has it in the template.

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Apr 2, 2015 at 1:12 PM, Sean Dague s...@dague.net wrote:

 On 04/02/2015 05:42 AM, Eoghan Glynn wrote:
 
 
  Hi all,
 
  we have a problem with dependencies for the kilo-rc1 release of Heat -
 see
  bug [1]. Root cause is ceilometerclient was not updated for a long time
 and
  just got an update recently. We are sure that Heat in Kilo would not
 work
  with ceilometerclient <=1.0.12 (users would not be able to create
 Ceilometer
  alarms in their stacks). At the same time, global requirements have
  ceilometerclient >=1.0.12. That works on the gate, but will fail for any
  deployment that happens to use an outdated pypi mirror. I am also afraid
  that if the version of ceilometerclient would be upper-capped to 1.0.12
 in
  stable/kilo, Heat in stable/kilo would be completely broken in regards
 to
  Ceilometer alarms usage.
 
  The patch to global requirements was already proposed [2] but is
 blocked by
  requirements freeze. Can we somehow apply for an exception and still
 merge
  it? Are there any other OpenStack projects besides Heat that use
  ceilometerclient's Python API (just asking to assert the testing
 burden)?
 
  [1] https://bugs.launchpad.net/python-ceilometerclient/+bug/1423291
 
  [2] https://review.openstack.org/#/c/167527/
 
  Pavlo - part of the resistance here I suspect may be due to the
  fact that I inadvertently broke the SEMVER rules when cutting
  the ceilometerclient 1.0.13 release, i.e. it was not sufficiently
  backward compatible with 1.0.12 to warrant only a Z-version bump.
 
  Sean - would you be any happier with making a requirements freeze
  exception to facilitate Heat if we were to cut a fresh ceiloclient
  release that's properly versioned, i.e. 2.0.0?

 A couple of concerns:

 #1 - would have been really nice if the commit message for the review
 included the above block of text. The current commit message is not
 clear that Heat *can not* work.

 #2 - why wasn't the fact that Heat *can not* work raised earlier,
 because I assume that means there are tests that are blocking all kinds
 of changes?

 If this is truly blocking we can raise it with Thierry; he has final
 override here. However, if this means that one resource type doesn't
 work quite as expected, I don't think that warrants a freeze bump. The
 libraries are set to >= here, so nothing in Kilo prevents users from
 deciding to take that upgrade.

 Forcing that upgrade on all users for 1 use case which a user may or may
 not be using is not the point of GR.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [depfreeze] bug is unresolved due to requirements freeze

2015-04-02 Thread Ihar Hrachyshka

On 04/02/2015 01:58 PM, Eoghan Glynn wrote:
 
 Pavlo Shchelokovskyy wrote:
 unfortunately, in Heat we do not yet have integration tests for
 all the Heat resources (creating them in a real OpenStack), and
 Ceilometer alarms are among those not covered. In unit tests the
 real client is of course mocked out. When we stumbled on this
 issue during normal Heat usage, we promptly raised a bug
 suggesting to make a new release, but propagating it to
 requirements took some time. The gate is not affected, as it
 installs, per the >= in requirements, the latest version, which
 is 1.0.13.
 
 With ceilometerclient 1.0.12 and Heat-Kilo, the Ceilometer
 alarm resource does not merely work not quite as expected; it
 cannot be created at all, failing any stack that has it in the
 template.
 
 I'm +1 on the change.
 
 Let's wait until tomorrow to make sure this is not completely 
 unacceptable to packagers.
 

Packaging wise, it does not seem like a great deal, since all
new/updated dependencies that were touched from 1.0.12 to 1.0.13 are
already present in other Kilo components for a long time (all of them
are covered by neutron kilo deps). The package hasn't changed a lot
(just a new CONTRIBUTE file was introduced that can be easily
added/skipped from downstream packages).

Though, have we actually determined that the issue we try to tackle is
still present in Kilo? I don't see an update to the latest email from
Pavlo in the thread where he said that he cannot reproduce it.

So in the end, if it fixes a valid bug, I don't see a huge problem
packaging wise to bump the version. Though there can be other
concerns, like exposing a new code to users that could potentially get
new bugs with the bump. [I personally don't consider it a huge risk
though.]

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [rc1] bug is unresolved due to requirements freeze

2015-04-02 Thread Pavlo Shchelokovskyy
Hi all,

a (maybe hasty) update.

I just tried using a quite fresh devstack master that somehow still
(PIP_UPGRADE=False?) has ceilometerclient at 1.0.12,
and ceilometer alarms do work as expected; the template is [1]. Maybe the
actual bug/backward incompatibility was somewhere in oslo-incubator and
the latest syncs fixed it.

I urge the Heat team to double-check whether 1.0.12 does indeed work now, so
we can call off the dogs and close this issue.

[1]
https://github.com/pshchelo/stackdev/blob/master/templates/autoscaling/asg.yaml

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Apr 2, 2015 at 1:46 PM, Sergey Kraynev skray...@mirantis.com
wrote:

 Hi Guys.


 A couple of concerns:

 #1 - would have been really nice if the commit message for the review
 included the above block of text. The current commit message is not
 clear that Heat *can not* work.


  I will update the commit message with the info mentioned in this thread.



 #2 - why wasn't the fact that Heat *can not* work raised earlier,
 because I assume that means there are tests that are blocking all kinds
 of changes?


  The reason why we didn't raise it earlier is:
   when the issue was found, we asked the ceilometer team to release a new
  version; after that I uploaded a patch to global-requirements and believed
  that it would be merged before the release.
  It does not block the gates (for the reason mentioned above), but the
  reality is that with version 1.0.12
  users cannot use any ceilometer resources in Heat (as a result we lose
  autoscaling templates, where ceilometer plays one of the major roles).



  If this is truly blocking we can raise it with Thierry; he has final
  override here. However, if this means that one resource type doesn't
  work quite as expected, I don't think that warrants a freeze bump. The
  libraries are set to >= here, so nothing in Kilo prevents users from
  deciding to take that upgrade.

 Forcing that upgrade on all users for 1 use case which a user may or may
 not be using is not the point of GR.

 -Sean

 --
 Sean Dague
 http://dague.net

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 Regards,
 Sergey.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [depfreeze] bug is unresolved due to requirements freeze

2015-04-02 Thread Thierry Carrez
Pavlo Shchelokovskyy wrote:
 unfortunately, in Heat we do not yet have integration tests for all the
 Heat resources (creating them in a real OpenStack), and Ceilometer alarms
 are among those not covered. In unit tests the real client is of course
 mocked out. When we stumbled on this issue during normal Heat usage, we
 promptly raised a bug suggesting to make a new release, but propagating
 it to requirements took some time. The gate is not affected, as it
 installs, per the >= in requirements, the latest version, which is 1.0.13.
 
 With ceilometerclient 1.0.12 and Heat-Kilo, the Ceilometer alarm
 resource does not merely work not quite as expected; it cannot be
 created at all, failing any stack that has it in the template.

I'm +1 on the change.

Let's wait until tomorrow to make sure this is not completely
unacceptable to packagers.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Ceilometer] [depfreeze] bug is unresolved due to requirements freeze

2015-04-02 Thread Eoghan Glynn

 Pavlo Shchelokovskyy wrote:
  unfortunately, in Heat we do not yet have integration tests for all the
  Heat resources (creating them in a real OpenStack), and Ceilometer alarms
  are among those not covered. In unit tests the real client is of course
  mocked out. When we stumbled on this issue during normal Heat usage, we
  promptly raised a bug suggesting to make a new release, but propagating
  it to requirements took some time. The gate is not affected, as it
  installs, per the >= in requirements, the latest version, which is 1.0.13.
  
  With ceilometerclient 1.0.12 and Heat-Kilo, the Ceilometer alarm
  resource does not merely work not quite as expected; it cannot be
  created at all, failing any stack that has it in the template.
 
 I'm +1 on the change.
 
 Let's wait until tomorrow to make sure this is not completely
 unacceptable to packagers.

Excellent, thank you sir!

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread Sylvain Bauza


Le 02/04/2015 11:32, Thierry Carrez a écrit :

Sean Dague wrote:

I just spent a chunk of the morning purging out some really old
Incomplete bugs because about 9 months ago we disabled the auto
expiration bit in launchpad -
https://bugs.launchpad.net/nova/+configure-bugtracker

This is a manually grueling task, which by looking at these bugs, no one
else is doing. I'd like to turn that bit back on so we can actually get
attention focused on actionable bugs.

Any objections here?

No objection, just a remark:

One issue with auto-expiration is that it usually results in the
following story:

1. Someone reports bug
2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
3. Reporter provides details
4. Triager does not notice reply on bug since they ignore INCOMPLETE
5. Bug expires after n months and disappears forever
6. Reporter is frustrated and won't ever submit issues again

The problem is of course at step 4, not at step 5. Auto-expiration is
very beneficial if your bug triaging routine includes checking Launchpad
for INCOMPLETE bugs with an answer regularly. If nobody does this very
boring task, then auto-expiration can be detrimental.
Erm, I was thinking it was set back to the former state. That sounds like 
something we can automate, no ?


I can volunteer; my only concern is the limited bandwidth I have.




Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
task 4 in https://wiki.openstack.org/wiki/BugTriage ...



We already have trivial bug monkeys who are chasing up trivial 
bugfixes. That sounds like something we could discuss as an extension of 
the trivial bug chase, until the point I mentioned above is addressed, IMHO.


-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Thierry Carrez
Joe Gordon wrote:
 My main objection to the model you propose is its binary nature. You
 bundle core reviewing duties with drivers duties into a single
 group. That simplification means that drivers have to be core reviewers,
 and that core reviewers have to be drivers. Sure, a lot of core
 reviewers are good candidates to become drivers. But I think bundling
 the two concepts excludes a lot of interesting people from being a
 driver.
 
 I cannot speak for all projects, but at least in Nova you have to be a
 nova-core to be part of nova-drivers.

And would you describe that as a good thing ? If John Garbutt is so deep
into release liaison work that he can't sustain a review rate suitable
to remain a core reviewer, would you have him removed from the
maintainers group ? If someone steps up and works full-time on
triaging bugs in Nova (and can't commit to do enough reviews as a
result), would you exclude that person from your maintainers group ?

 If someone steps up and owns bug triaging in a project, that is very
 interesting and I'd like that person to be part of the drivers group.
 
 In our current model, not sure why they would need to be part of
 drivers. the bug triage group is open to anyone.

I think we are talking past each other. I'm not saying bug triagers have
to be drivers. I'm saying bug triagers should be *allowed* to
potentially become drivers, even if they aren't core reviewers. That is
inclusive of all forms of project leadership.

You are the one suggesting that maintainers and core reviewers are the
same thing, and therefore asking that all maintainers/drivers have to be
core reviewers, actively excluding non-reviewers from that project
leadership class.

 Saying core reviewers and maintainers are the same thing, you basically
 exclude people from stepping up to the project leadership unless they
 are code reviewers. I think that's a bad thing. We need more people
 volunteering to own bug triaging and liaison work, not less.
 
 I don't agree with this statement; I am not saying reviewing and
 maintenance need to be tightly coupled.

You've been proposing to rename core reviewers to maintainers. I'm
not sure how that can be more tightly coupled...

 [...]
 I really want to know what you meant by 'no aristocracy' and the why
 behind that.

Aristocracies are self-selecting, privileged groups. Aristocracies
require that current group members agree on any new member addition,
basically limiting the associated privilege to a caste. Aristocracies
result in limited gene pool, tunnel vision and echo chamber effects.

OpenStack governance mandates that core developers are ultimately the
PTL's choice. Since the PTL is regularly elected by all contributors,
that prevents aristocracy.

However in some projects, core reviewers have to be approved by existing
core reviewers. That is an aristocracy. In those projects, if you
associate more rights and badges to core reviewing (like by renaming it
maintainer and bundle driver responsibilities with it), I think you
actually extend the aristocracy problem rather than reduce it.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Thierry Carrez
Maru Newby wrote:
 [...] Many of us in the Neutron
 community find this taxonomy restrictive and not representative
 of all the work that makes the project possible.

We seem to be after the same end goal. I just disagree that renaming
core reviewers to maintainers is a positive step toward that goal.

 Worse, 'cores'
 are put on a pedestal, and not just in the project.  Every summit
 a 'core reviewer dinner' is held that underscores the
 glorification of this designation.

I deeply regret that, and communicated to the sponsor holding it the
problem with this +2 dinner the very first time it was held. FWIW it's
been renamed to VIP dinner and no longer limited to core reviewers,
but I'd agree with you that the damage was already done.

 By proposing to rename 'core
 reviewer' to 'maintainer' the goal was to lay the groundwork for
 broadening the base of people whose valuable contribution could
 be recognized.  The goal was to recognize not just review-related
 contributors, but also roles like doc/bug/test czar and cross-project
 liaison.  The stature of the people filling these roles today is less 
 if they are not also ‘core’, and that makes the work less attractive 
 to many.

That's where we disagree. You see renaming core reviewer to
maintainer as a way to recognize a broader type of contributions. I
see it as precisely resulting in the opposite.

Simply renaming core reviewers to maintainers just keeps us using a
single term (or class) to describe project leadership. And that class
includes +2 reviewing duties. So you can't be a maintainer if you don't
do core reviewing. That is exclusive, not inclusive.

What we need to do instead is reviving the drivers concept (we can
rename it maintainers if you really like that term), separate from the
core reviewers concept. One can be a project driver and a core
reviewer. And one can be a project driver *without* being a core
reviewer. Now *that* allows us to recognize all valuable contributions,
and to be representative of all the work that makes the project possible.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reducing noise of the ML

2015-04-02 Thread Flavio Percoco

On 02/04/15 06:19 -0400, Sean Dague wrote:

On 04/02/2015 05:37 AM, Thierry Carrez wrote:

Michael Still wrote:

Actually, for some projects the +1 is part of a public voting process
and therefore required.


Could that public voting process happen somewhere else ? Like at an
IRC meeting ?


For global teams there is no IRC meeting that lets everyone have a voice.


Also, FWIW, I don't mind knowing when teams grow!




Also, did anyone ever vote -1 ?


Yes, there was a Cinder vote that had -1s (for invalid reasons); it was
useful to have that in the open because it straightened out some culture
issues.

Also, I failed to gather the requisite 5 +1s the first time I was
proposed for nova-core. While embarrassing, I got over it. I do get that
it made a lot of people straw-poll more heavily first, to avoid those
kinds of situations.


And I believe there was a case in Neutron too.




(FWIW originally we used lazy consensus -- PTL proposes, and approval is
automatic after a while unless someone *opposes*. Not sure when +1s or
public voting was added as a requirement).


No idea myself. But it's been here since I started participating in the
project in March 2012.

Requiring a [core] tag for these so people can get rid of them if they
don't care seems fair.


+1

Flavio

--
@flaper87
Flavio Percoco


pgpetEp2QUIyU.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread James Bottomley
On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
 Sean Dague wrote:
  I just spent a chunk of the morning purging out some really old
  Incomplete bugs because about 9 months ago we disabled the auto
  expiration bit in launchpad -
  https://bugs.launchpad.net/nova/+configure-bugtracker
  
  This is a grueling manual task which, by the look of these bugs, no one
  else is doing. I'd like to turn that bit back on so we can actually get
  attention focused on actionable bugs.
  
  Any objections here?
 
 No objection, just a remark:
 
 One issue with auto-expiration is that it usually results in the
 following story:
 
 1. Someone reports bug
 2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
 3. Reporter provides details
 4. Triager does not notice reply on bug since they ignore INCOMPLETE
 5. Bug expires after n months and disappears forever
 6. Reporter is frustrated and won't ever submit issues again
 
 The problem is of course at step 4, not at step 5. Auto-expiration is
 very beneficial if your bug triaging routine includes checking Launchpad
 for INCOMPLETE bugs with an answer regularly. If nobody does this very
 boring task, then auto-expiration can be detrimental.
 
 Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
 task 4 in https://wiki.openstack.org/wiki/BugTriage ...

This actually looks to be a problem in the workflow to me.

The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
Need Info/Open states.  The difference is that in bugzilla, a reporter
can clear the Need Info flag.  This is also what needs to happen in
OpenStack (so the reporter doesn't need to wait on anyone looking at
their input to move the bug on).

I propose allowing the reporter to move the bug to Confirmed when they
supply the information whose absence made it Incomplete.  If the triager
thinks this is wrong, they can set it back to Incomplete again.  This has
the net effect that Incomplete needs no real review: it marks bugs the
reporter doesn't care enough about to reply to... and these can be
auto-expired.

This would make the initial state diagram


+---+        Review         +----------+
|New|---------------------->|Incomplete|
+---+                       +----------+
  |                           ^      |
  |         Still Needs Info  |      | Reporter replies
  |                           |      v
  |      Review             +-----------+
  +------------------------>| Confirmed |
                            +-----------+


James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug expiration

2015-04-02 Thread James Bottomley
On Thu, 2015-04-02 at 06:45 -0400, Sean Dague wrote:
 On 04/02/2015 06:33 AM, James Bottomley wrote:
  On Thu, 2015-04-02 at 11:32 +0200, Thierry Carrez wrote:
  Sean Dague wrote:
  I just spent a chunk of the morning purging out some really old
  Incomplete bugs because about 9 months ago we disabled the auto
  expiration bit in launchpad -
  https://bugs.launchpad.net/nova/+configure-bugtracker
 
  This is a grueling manual task which, by the look of these bugs, no one
  else is doing. I'd like to turn that bit back on so we can actually get
  attention focused on actionable bugs.
 
  Any objections here?
 
  No objection, just a remark:
 
  One issue with auto-expiration is that it usually results in the
  following story:
 
  1. Someone reports bug
  2. Triager notices NEW bug, asks reporter for details, sets INCOMPLETE
  3. Reporter provides details
  4. Triager does not notice reply on bug since they ignore INCOMPLETE
  5. Bug expires after n months and disappears forever
  6. Reporter is frustrated and won't ever submit issues again
 
  The problem is of course at step 4, not at step 5. Auto-expiration is
  very beneficial if your bug triaging routine includes checking Launchpad
  for INCOMPLETE bugs with an answer regularly. If nobody does this very
  boring task, then auto-expiration can be detrimental.
 
  Is anyone in Nova checking for INCOMPLETE bugs with an answer ? That's
  task 4 in https://wiki.openstack.org/wiki/BugTriage ...
  
  This actually looks to be a problem in the workflow to me.
  
  The OpenStack Incomplete/Confirmed seem to map roughly to the bugzilla
  Need Info/Open states.  The difference is that in bugzilla, a reporter
  can clear the Need Info flag.  This is also what needs to happen in
  OpenStack (so the reporter doesn't need to wait on anyone looking at
  their input to move the bug on).
  
  I propose allowing the reporter to move the bug to Confirmed when they
  supply the information whose absence made it Incomplete.  If the triager
  thinks this is wrong, they can set it back to Incomplete again.  This has
  the net effect that Incomplete needs no real review: it marks bugs the
  reporter doesn't care enough about to reply to... and these can be
  auto-expired.
  
  This would make the initial state diagram
  
  
  +---+        Review         +----------+
  |New|---------------------->|Incomplete|
  +---+                       +----------+
    |                           ^      |
    |         Still Needs Info  |      | Reporter replies
    |                           |      v
    |      Review             +-----------+
    +------------------------>| Confirmed |
                              +-----------+
  
  
  James
 
 Reporters can definitely move it back to New, which is the expected
 flow, that means it gets picked up again on the next New bug sweep.
 That's Step #1 in triaging (for Nova we've agressively worked to keep
 that very near 0). I don't remember if they can also move it into
 Confirmed themselves if they aren't in the nova-bugs group, though that
 is an open group.
 
 Mostly the concern is people that don't understand the tools or bug
 flow. So they respond and leave in Incomplete. Or it's moved to
 Incomplete and they never respond because they don't understand that
 more info is needed. These things sit there for a year, and then there
 is some whiff of a real problem in them, but no path forward with that
 information.

But we have automation: the system can move it to Confirmed when they
reply.  The point is to try to make the states and timeouts
self-classifying.  If Incomplete means no one cared enough about this bug
to supply requested information, then it's a no-brainer candidate for
expiry.  The question I was asking was whether the states could be set up
so this happens, and I believe the answer based on the above workflow is
yes.
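
As a sketch of what that automation could look like against the Launchpad
API (launchpadlib credentials assumed; 'Incomplete (with response)' is one
of the status filters that searchTasks already exposes):

from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('bug-triage-bot', 'production')
nova = lp.projects['nova']

# Launchpad distinguishes Incomplete bugs with and without a response.
for task in nova.searchTasks(status=['Incomplete (with response)']):
    # Someone replied after the bug went Incomplete, so move it on; a
    # triager can always push it back to Incomplete if info is missing.
    task.status = 'Confirmed'
    task.lp_save()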

Now if it sits in Confirmed because the triager didn't read the supplied
information, it's not a candidate for expiry, it's a candidate for
kicking someone's arse.

The fundamental point is to make the states align with time triggered
actionable consequences.  That's what I believe the problem with the
current workflow is.  Someone has to look at each bug to determine what
Incomplete actually means, which I'd view as unbelievably painful for
that person (or group of people).

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcement - The Security Team for OpenStack

2015-04-02 Thread Clark, Robert Graham
The OpenStack Security Group (OSSG) and the OpenStack Vulnerability Management
Team (VMT) have historically operated as independent teams, each with a focus on
different aspects of OpenStack security. To present a more coherent security
posture we are pleased to announce that the OSSG and VMT will be joining forces.

It is our hope that this merging of teams will help present a stronger and more
mature security posture, both to the outside world and within OpenStack, and
will make it easier for developers to engage with the security resources they
need.

Moving forward, the OSSG and VMT combined will apply to become a recognized
project within OpenStack. We seek to mirror the successes of the documentation
team and will be applying to become known simply as 'Security'.

We are excited about the new opportunities this creates and are hopeful that it
gives OpenStack a clearer security message.

What is changing? 

Initially a huge work effort will be undertaken to restructure and rebrand
existing documentation which will eventually be hosted under a new subdomain of
openstack.org [1]. This will allow developers and consumers of OpenStack to
easily find security resources such as the OpenStack Security Advisories, the
Security Guide, Security Notes and Best Practices.

Does this change how I report security issues? 

No. The existing vulnerability management process [2], and team members will
remain the same. The VMT will maintain its independence and will continue to
operate with the same level of confidentiality as before. 

How can I get involved? 

The security group is always looking for enthusiastic new members; there's a
wiki article on how to get involved[3]. If you are interested, please come along
to the weekly IRC meeting, or just start contributing.

Asking the security group questions? 

Any general security questions that do not relate to a vulnerability within the
OpenStack code base should be sent to the openstack-dev@lists.openstack.org
address with [security] in the subject line.


1. https://security.openstack.org
2. https://wiki.openstack.org/wiki/Vulnerability_Management
3. https://wiki.openstack.org/wiki/Security/How_To_Contribute

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Call for testing: 2014.2.3 candidate tarballs

2015-04-02 Thread Adam Gandelman
Hi all,

We are scheduled to publish 2014.2.3 on Thursday April 9th for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron, Nova,
Sahara and Trove.

We'd appreciate anyone who could test the candidate 2014.2.3 tarballs, which
include all changes aside from any pending freeze exceptions:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-juno.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-juno.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-juno.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-juno.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-juno.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-juno.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-juno.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-juno.tar.gz
  http://tarballs.openstack.org/sahara/sahara-stable-juno.tar.gz
  http://tarballs.openstack.org/trove/trove-stable-juno.tar.gz
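
If you are unsure how to exercise a tarball, a quick smoke test could look
something like this (illustrative only, not an official recipe):

  wget http://tarballs.openstack.org/nova/nova-stable-juno.tar.gz
  tar xzf nova-stable-juno.tar.gz
  cd nova-*/ && tox -e py27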

Thanks,
Adam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Heat] Overcloud software updates and ResourceGroups

2015-04-02 Thread Giulio Fidente

hi there,

thanks for sharing this, I have a few questions below.

On 04/03/2015 12:31 AM, Zane Bitter wrote:

A few of us have been looking for a way to perform software updates to
servers in a TripleO Heat/Puppet-based overcloud


[...]


Here's a trivial example of what this deployment might look like:

   update_config:
     type: OS::Heat::SoftwareConfig
     properties:
       config: {get_file: do_sw_update.sh}
       inputs:
         - name: update_after_time
           description: Timestamp of the most recent update request

   update_deployment:
     type: OS::Heat::SoftwareDeployment
     properties:
       actions:
         - UPDATE
       config: {get_resource: update_config}
       server: {get_resource: my_server}
       input_values:
         update_after_time: {get_param: update_timestamp}


[...]


   heat stack-update my_overcloud -f $TMPL -P "update_timestamp=$(date)"


Leaving the ResourceGroup/AutoScalingGroup question aside for more
knowledgeable people and trying instead to translate the templating
approach into user features, if I read it correctly this would also make
it possible to:


1. perform a config update without a software update as long as the 
update_timestamp param remains unchanged


2. perform software updates of each ResourceGroup independently of the
others by using one update_timestamp param per group


3. use different update.sh scripts per ResourceGroup

are the above correct?

My single minor concern is about the update script itself: if it is not
left to the user for editing but bundled with t-h-t instead, it should be
clever enough to cope with different distros and distro versions, because
we can't know that from the template ... but it seems this can be achieved
by abstracting it on top of Puppet itself (or whichever other
config management tool is deployed).
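
For illustration only, here is a sketch of the distro-agnostic shape such a
script could take (an assumption about its shape, not the actual t-h-t
script):

#!/bin/bash
# Hypothetical do_sw_update.sh: pick the package manager at runtime,
# since the template cannot know the distro in advance.
set -eux
if command -v yum >/dev/null 2>&1; then
    yum -y update
elif command -v apt-get >/dev/null 2>&1; then
    apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
else
    echo "unsupported distro" >&2
    exit 1
fi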

--
Giulio Fidente
GPG KEY: 08D733BA

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TC][Mistral] Workflow Service project proposal

2015-04-02 Thread Dmitri Zimine
Hi respected TC members and community:

We would like to propose Workflow Service project (code name Mistral), 
as a project in the OpenStack namespace, in accordance with the new governance 
changes [1].

The details are on the review: https://review.openstack.org/#/c/170225

Please review the proposal at your earliest convenience, 
happy to answer questions and give more information here, or IRC on the 
#openstack-mistral.

Thanks, 

Dmitri Zimine (irc dzimine)

on behalf of Mistral contributors.

[1] http://governance.openstack.org/reference/new-projects-requirements.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Deadline For Volume Drivers to Be Readded

2015-04-02 Thread Alex Meade
We believe we have satisfied the required criteria [1] to have NetApp’s
fibre channel drivers included in the Kilo release. We have submitted a
revert patch [2] along with posting an etherpad [3] to provide more detail
on our progress. Thanks for your consideration.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-March/059990.html

[2] https://review.openstack.org/#/c/169781/

[3] https://etherpad.openstack.org/p/NetApp-Kilo-Fibre-Channel
Thanks so much,

-Alex

On Thu, Mar 26, 2015 at 8:48 PM, Ryan Hsu r...@vmware.com wrote:

 Thanks for clarifying!

 Ryan

 On Mar 26, 2015, at 5:29 PM, Mike Perez thin...@gmail.com wrote:

  On 00:24 Fri 27 Mar , Ryan Hsu wrote:
  Rightfully so, but it doesn't hurt to offer suggestions that might
 improve
  the community. It would just be nice to have exclusions reconsidered if
 there
  are legitimate bugs behind them. You see them all the time in the
 tempest
  tests ala SKIPPED: Skipped until Bug: 1373513 is resolved so  it's
 hard to
  understand why we can't just apply the same principles to third-party
 CI.
 
  Your usage of exclusions is fine for fixing bugs in my opinion. My
 meaning of
  exclusion was not allowing these additional tests to be discovered.
 
  --
  Mike Perez
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Block migrations and Cinder volumes

2015-04-02 Thread Matthew Gilliard
 Thanks for the clarification, is there a  bug tracking this in libvirt
 already?

 Actually I don't think there is one, so feel free to file one

I took the liberty of doing so:
https://bugzilla.redhat.com/show_bug.cgi?id=1208588

On Wed, Mar 18, 2015 at 6:11 PM, Daniel P. Berrange berra...@redhat.com wrote:
 On Wed, Mar 18, 2015 at 10:59:19AM -0700, Joe Gordon wrote:
 On Wed, Mar 18, 2015 at 3:09 AM, Daniel P. Berrange berra...@redhat.com
 wrote:

  On Wed, Mar 18, 2015 at 08:33:26AM +0100, Thomas Herve wrote:
Interesting bug.  I think I agree with you that there isn't a good
  solution
currently for instances that have a mix of shared and not-shared
  storage.
   
I'm curious what Daniel meant by saying that marking the disk
  shareable is
not
as reliable as we would want.
  
   I think this is the bug I reported here:
  https://bugs.launchpad.net/nova/+bug/1376615
  
   My initial approach was indeed to mark the disks are shareable: the
  patch (https://review.openstack.org/#/c/125616/) has comments around the
  issues, mainly around I/O cache and SELinux isolation being disabled.
 
  Yep, those are both show stopper issues. The only solution is to fix the
  libvirt API for this first.
 

 Thanks for the clarification, is there a  bug tracking this in libvirt
 already?

 Actually I don't think there is one, so feel free to file one


 Regards,
 Daniel
 --
 |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Horizon] What we can do for Heat in Horizon else?

2015-04-02 Thread Fox, Kevin M
+1. I have to copy/paste in nested stack uuids all the time. :/

Thanks,
Kevin


From: Pavlo Shchelokovskyy
Sent: Thursday, April 02, 2015 3:05:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat][Horizon] What we can do for Heat in Horizon 
else?

Hi folks,

there is another feature not yet covered that jumps out at me:

say you have a stack containing an autoscaling group (a nested stack) where the 
scaled resource is also a nested stack. I can click on the uuid of the asg to 
get a page similar to other stacks showing me the structure of the nested stack 
described by autoscaling group, but I can not further click on the uuid of the 
scaled resource to get its representation as a stack. Copy-pasting the uuid of 
the scaled resource into the Horizon url as HorizonHost/project/stacks/uuid 
does the trick and the structure of the resource as a stack is displayed.

As an example you can use the templates for the test_autoscaling_lb integration 
test, under review [1]
[1] https://review.openstack.org/#/c/165944/

Best regards,

Pavlo Shchelokovskyy
Software Engineer
Mirantis Inc
www.mirantis.com

On Thu, Apr 2, 2015 at 11:55 AM, Sergey Kraynev 
skray...@mirantis.com wrote:
Hi community.

I want to ask feedback from our Heat team and also involve Horizon team in this 
discussion.
AFAIK during Kilo was implemented bp:
https://blueprints.launchpad.net/horizon/+spec/heat-ui-improvement

This bp adds more base Heat functionality to Horizon.
I asked the Heat guys for ideas. What else do we want to have here?

There is only one idea from me, about topology:
create some filters for displaying only particular resources (by their type).
F.e. a stack has 50 resources, but half of them are network resources.
As a user I want to see only the network level, so I enable filtering by
network resources.


Regards,
Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Running custom puppet manifests during overcloud post-deployment

2015-04-02 Thread Dan Prince
On Wed, 2015-04-01 at 21:31 -0400, Tzu-Mainn Chen wrote:
 Hey all,
 
 I've run into a requirement where it'd be useful if, as an end user, I could
 inject a personal ssh key onto all provisioned overcloud nodes.
 
 Obviously this is something that not every user would need or want.  I talked
 about some options with Dan Prince on IRC, and (besides suggesting that I
 bring the discussion to the mailing list) he proposed some generic solutions -
 and Dan, please feel free to correct me if I misunderstood any of your ideas.
 
 The first is to specify a pre-set custom puppet manifest to be run when the
 Heat stack is created by adding a post_deployment_customizations.pp puppet
 manifest to be run by all roles.  Users would simply override this manifest.
 
 The second solution is essentially the same as the first, except we'd perform
 the override at the Heat resource registry level: the user would update the
 resource reference to point to their custom manifest (rather than overriding
 the default post-deployment customization manifest).
 
 Do either of these solutions seem acceptable to others?  Would one be 
 preferred?

Talking about this a bit more on IRC this morning we all realized that
Puppet isn't a hard requirement. Simply providing a pluggable
mechanism to inject this sort of information into the nodes in a clean
way is all we need.

Steve Hardy's suggestion here is probably the cleanest way to support
this sort of configuration in a generic fashion.

https://review.openstack.org/170137

I don't believe this solution runs post deployment however. So if
running a hook post deployment is a requirement we may need to wire in a
similar generic config parameter for that as well.

Dan

 
 
 Thanks,
 Tzu-Mainn Chen
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Clinton Knight for core team

2015-04-02 Thread Tom Barron
On 4/2/15 9:16 AM, Ben Swartzlander wrote:
 Clinton Knight (cknight on IRC) has been working on OpenStack for the
 better part of a year, and starting in February, he shifted his focus
 from Cinder to Manila. I think everyone is already aware of his high
 quality contributions and code reviews. I would like to nominate him to
 join the Manila core reviewer team.
 
 -Ben Swartzlander
 Manila PTL
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 04/02/2015 03:22 AM, Sylvain Bauza wrote:

 I was originally pro giving a limited set of merge powers to
 subteams for a specific codepath, but my personal experience made
 me think that it can't work that way in Nova at the moment - just
 because everything is intersected.
 
 So, yeah, before kicking-off new features, we need at least people 
 enough aware of the global state to give their voice on if it's
 doable or not. I don't want to say it would be a clique or a gang
 blessing good people or bad people, just architects that have
 enough knowledge to know if it will work - or not.
 
 Good ideas can turn into bad implementations just because of the 
 existing tech debt. And there is nothing we can do to avoid that,
 unless we have mentors who can help us find the right path.
 
 Don't get me wrong : again, it's not giving more powers to people,
 it's basically stating that cores are by definition people who have
 the global knowledge.

It *can* work, as long as the cores hold the all-important +W. I'm
sure that having your +1 on a scheduler-related patch now makes cores
feel more confident that the change is correct for the scheduler, so
making your vote a +2 would streamline the process. But as you
correctly point out, they have a deeper knowledge of the entire
project, and need to validate that the change won't adversely affect
other parts, so holding the +W for cores would minimize the chance of
adding to tech debt.

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: GPGTools - https://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJVHVuCAAoJEKMgtcocwZqLjcwQAIOzkKTLCjWLV84bqoCJX6Ai
D/zzqMCoOOYyj341lrfVM2SxkJX6+awQfe75WNo46u4nEC1CZp5EscZpbrIUIxG2
Bic5l2cjKQmvKEIyvNrAb1g3mmC50mEUscZwfvrUr4QUamwL0wNm7jkZBMVlAGMd
uD2My/HRqnls/FhCMGprzTI+zeowa4KdUWuZUC5CfGZg/GFVty2/k4u0Jej5nla9
efoQ49VG3M2g+Ipxhg1sgVWvru+7E2pfCNd3lFVt+C/gwJUrQ7x36nQkOIR8OtyN
j9BH5NqJ+8qDIqEvQIaI1ToaBwwNn4JK1XsKfj41wt+YFym4qS0o8/wKuup+5Syj
q1m29yWZwFfFXqYoLqfqHQJ34bY3CTFS7OllXGX1KkMh7MfMffbwbPZn67ICYnoY
4Q4ed/QQOzDnzvH3MAOwVfaeNq8vL8u/x5C2rotXQmcihJrrgi5MWfATayiuNGpj
Sfi++C1MDgW61I1ehFrgHcW2sQ6vvuyErA9e8mm0xPBO/o+64gL4dmqnU2U6rfTx
KBR6Q9vXJXoFBvCmrFMv+F7vk+XDEUUP5uoXj8069/lXblQxZwowH4ijJql2RwlI
COUsWahnKAp05F8CiMKxaIF/R7UeHslTLer7yE+dwmNTqIVF1ylj0ToOxnSXV/WK
f7sPIo9aHtfZbPO+d8uV
=Euob
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] ‘az:force_host:force_node’ validation when booting instance

2015-04-02 Thread Lingxian Kong
hi, nova guys:

now I am working on this bug[1], and this is my patch[2], but met some
problems, so have to bring it here to get suggestions from broader
community.

In short, the problem is about validation on the
‘az:force_host:force_node’ parameter when booting instance. As we all
know that, user can specify ‘az:force_host:force_node’ or
‘az:force_host’ or ‘az::force_node’ or ‘az’. So, we need to check:

#1 for ‘az:force_host’ or ‘az:force_host:force_node’: the host must
belong to az;
#2 for 'az::force_node', the host to which the node belongs, must belong to az;

For #2, there is another problem: in one availability zone, there
might be more than one node with the same name (belonging to different
hosts). What should we do? Hans Lindgren suggested that an
InvalidRequestException with an appropriate message should be raised.
Alex Xu suggested we shouldn't allow the user to specify a force_node
without a force_host.

For #1, another workaround I could think of is to just let it through
the API layer and override the 'az' property of the instance with the
correct one at the nova-compute layer.
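
To make the two checks concrete, here is a standalone sketch of the
validation logic (the lookup mappings are hypothetical stand-ins, not Nova
internals):

class InvalidRequest(Exception):
    pass

def validate_forced_placement(az, force_host, force_node,
                              host_to_az, node_to_hosts):
    """host_to_az maps host -> az; node_to_hosts maps node -> list of hosts."""
    if force_node and not force_host:
        hosts = node_to_hosts.get(force_node, [])
        if len(hosts) != 1:
            # Node names are not unique across hosts, so a bare force_node
            # can be ambiguous -- reject it, as Hans suggested.
            raise InvalidRequest('node %s is ambiguous or unknown' % force_node)
        force_host = hosts[0]
    if force_host and host_to_az.get(force_host) != az:
        # Check #1 (and #2, via the resolved host): the forced host must
        # belong to the requested az.
        raise InvalidRequest('host %s is not in az %s' % (force_host, az))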

So, what's your opinion?

[1] https://launchpad.net/bugs/1431194
[2] https://review.openstack.org/#/c/163842/

-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] [gantt] Please stop using Gantt for discussing about Nova scheduler

2015-04-02 Thread Ed Leafe
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 04/02/2015 03:10 AM, Sylvain Bauza wrote:

 As a meeting is by definition a synchronous thing, should we maybe
 try to async that decision using Gerrit ? I could pop up a
 resolution in Gerrit so that people could -1 or +1 it.

To me, 'Gantt' refers to a set of code that was created to eventually
become a stand-alone scheduler service, replacing the current
scheduler in nova. It does not mean the effort to clean up the current
scheduler interface to nova, or any other code that may be proposed in
the future to provide a scheduling service.

Gantt was dead when I re-joined the nova team 7 months ago, and so its
name should also die.

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: GPGTools - https://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJVHV2uAAoJEKMgtcocwZqL1hgQAK4yR5kA7ajDKe/KR+MGaMI4
SXp92l0/1rgKO9cqtv2sq8tACClCrmfTGx9AhNfj34+i1qLnO6wS+kdGK6DasGCb
B9Ot6dUth9OljwElaYuy+Wc+6eDEtJnfEa+zxNl7UkDjrT2G4aOM/lfRhtbBgoED
9lkwDPJNLbhf82f/E6z5grex0sPa1YVAVCMXct9irP8uaAedQf21yMKFypXFfcfV
XEJjl4CCv20CO7UWKZ/mMccKR0An3O4aGAiKksFPG2dLjdElkp26YlW4FqKDIpeo
5zWOnyMv3Qqxpa4IrvgqwmvtPZbjbpdRZ72kdk1R6NpUhkwIiJJtWPZrwb7ey77G
PnWI5jmRw+7Qh8rSNpm77W7Ao1HbIH/tpV5PPGofnlsZXe1Dz+Gu3xB5UQDb/zg9
HRETW4UyKFxgRnnhgxoAzaYfrjKib85aDu1L3B/ouK1iJ+lxmJ353nK0ORCdveGb
XQBDliMi/eTk4LzcFDwXIuS8+z80pkZibjo6POI90hXsVtMHASuD++ShZTny1uec
vgNuTFo+RUqAPZIQ8Pt0I1YGq2ENQ5uhfkCFENe4dVYlFf2ln/1O7BqMW2URV5vm
BhnuLYCfo34F9wHSiKPNhyH3mDO05x+OyLOClKQa8Gjwgecz2D5s84n+zEXT5gcd
Ad8vakNbo6Q3TFU/62+P
=hMqn
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Liberty Design Summit planning

2015-04-02 Thread Thierry Carrez
Hello everyone,

It's that time of the year again... In less than 7 weeks a lot of us
will meet in Vancouver for 4 days of Design Summit craziness. The space
we'll be in is pretty awesome, I'm sure you will all like it.

Like every design summit, we introduced a number of changes, which I
already mentioned at:

http://lists.openstack.org/pipermail/openstack-dev/2015-January/054122.html

The TL;DR is that we'll have scheduled fishbowl sessions, our
traditional open discussions in a large room to gather as much feedback
as possible on a given topic. But we'll also have scheduled work
sessions in smaller rooms, for teams to gather in a quiet environment
to get specific work done. Those replace most of the uses of the
project pods we had in previous editions, although we plan to still
have a few roundtables set up as pod space for ad-hoc meetings.

The second change is that the Ops Summit is now fully integrated in
the Design Summit, as an Ops track. They will make use of Fishbowl
sessions and Work sessions as well, and share the same time and space.

The overall layout for the event is the following:

Tuesday: Cross-project track, Ops track fishbowl sessions
Wednesday: Project team tracks, Ops track working sessions
Thursday: Project team tracks
Friday: Contributors meetups

In the mean time, we need to determine what we want to discuss. Each
team is free to select its preferred tooling to achieve that. You can
find links to each team planning tool/etherpad at:

https://wiki.openstack.org/wiki/Summit/Planning

Beyond suggesting topics, it's interesting to discuss whether that topic
needs to be discussed in a fishbowl room (and announced on the summit
schedule), or needs to be worked on in a quieter setting in a working
session room. That might prove useful to determine more precisely how
much of each session type each team needs, and let us do late
adjustments on the room allocation.

NB: The current plan is to publish the proposed room allocation for all
projects on April 10, once we know which projects we need to include.

Feel free to reach out to me here or on IRC if you have questions over
this process. See you soon in Vancouver !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] PTL Candidacy

2015-04-02 Thread Tristan Cacqueray
confirmed

On 04/02/2015 10:53 AM, Flavio Percoco wrote:
 Greetings,
 
 I'd like to put my name out there for the Glance PTL position.
 
 Few words about me:
 
 I'm Flavio Percoco (flaper87 on IRC and everywhere). I've been an
 OpenStack fellow for the last 2 1/2 years. During this time I've
 spread my efforts on several projects but mainly on Glance, from which
 you may/may not know me.
 
 Glance
 ==
 
 Since my early days in OpenStack, I've been part of every core - core
 in terms of service, not membership - decision in Glance. From what to
 do with the registry service, glance_store, API stability etc.
 Throughout these decisions, I've participated in the release process,
 held leading positions, and served as a voice for those changes.
 
 I'm happy to say that our project has grown a lot and that we're
 facing new challenges that I'd love to be part of and more
 importantly, I'd love to help leading those efforts along side our,
 growing, community.
 
 Interesting things happened in Kilo but I'd like to focus now on
 what's coming next, Liberty.
 
 One of the things that is still pending for us is the work on
 Artifacts. Although I don't believe that Glance is the best place for it
 to live forever, I'd love to see this work done and for it to grow. The
 effort being put there, not only code-wise and feature-wise but
 also review-wise, is already good enough proof of the impact this
 could have on our community.
 
 In addition to the above, I'd love our team to improve Glance's API
 story throughout OpenStack and see such API grow and stabilize. This
 not only refers to the service itself but the libraries that Glance
 relies on too. I strongly believe that stability and consistency
 should be part of our main goals in the upcoming development cycle.
 
 New features will be proposed for sure and I'd love us all to review
 them together and decide together what's best for the project's future
 bearing in mind the goals of the cycle.
 
 Community
 =
 
 Thankfully enough, I've had the pleasure to be involved in many areas
 of our community and this has given me a good knowledge of how our
 community is structured and how the different parts interact with each
 other. From the stability team to our infrastructure team going
 through OpenStack's common ground (Oslo). I'd love to use this broad
 view in this position as I've been using it as a contributor.
 
 On the other hand, I'm also known for being noisy, speaking up and
 being ready to fight whenever it's needed (even when it is not :P). Just
 like with everything else, I'm looking forward to applying all this to
 this position as I've done in my current position.
 
 Last but not least, I've had the pleasure of being Zaqar's PTL during Kilo
 (and co-PTL since the beginning), which has as well prepared me for
 this task.
 
 Thanks for reading thus far, I hope you'll consider me as a good
 candidate for this position.
 Flavio
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PTL Candidacy

2015-04-02 Thread Tristan Cacqueray
confirmed

On 04/02/2015 02:20 AM, Michael Still wrote:
 I'd like another term as Nova PTL, if you'll have me.
 
 I feel Kilo has gone reasonably well for Nova -- our experiment with
 priorities has meant that we’ve got a lot of important work done. We
 have progressed well with cells v2, our continued objects transition,
 scheduler refactoring, and the v2.1 API. The introduction of the
 trivial bug review list at the mid-cycle meetup has also seen 123 bug
 fixes merged since the mid-cycle, which is a great start.
 
 Kilo is our second release using specs, and I think this process is
 still working well for us -- we’re having fewer arguments at code
 review time about the fundamentals of design, and we’re signalling to
 operators much better what we’re currently working on. Throughout Kilo
 I wrote regular summaries of the currently approved specs, and that
 seems to have been popular with operators.
 
 We also pivoted a little in Kilo and created a trivial approval
 process for Kilo specs which either were very small, or previously
 approved in Juno. This released the authors of those specs from
 meaningless paperwork, and meant that we were able to start merging
 that work very early in the release cycle. I think we should continue
 with this process in Liberty.
 
 I think it's a good idea also to briefly examine some statistics about specs:
 
 Juno:
approved but not implemented: 40
implemented: 49
 
 Kilo:
approved but not implemented: 30
implemented: 32
 
 For those previously approved in Juno, 12 were implemented in Kilo.
 However, we’ve now approved 7 specs twice, but not merged an
 implementation. I’d like to spend some time at the start of Liberty
 trying to work out what’s happening with those 7 specs and why we
 haven’t managed to land an implementation yet. Approving specs is a
 fair bit of work, so doing it and then not merging an implementation
 is something we should dig into.
 
 There are certainly priorities which haven’t gone so well in Kilo. We
 need to progress more on functional testing, the nova-network
 migration effort, and CI testing consistency across our drivers. These are
 obvious things to try and progress in Liberty, but I don’t want to
 pre-empt the design summit discussions by saying that these should be
 on the priority list of Liberty.
 
 In my Kilo PTL candidacy email, I called for a “social approach” to
 the problems we faced at the time, and that’s what I have focussed on
 for this release cycle. At the start of the release we didn’t have an
 agreed plan for how to implement the specifics for the v2.1 API, and
 we talked through that really well. What we’ve ended up with is an
 implementation in tree which I think will meet our needs going
 forward. We are similarly still in a talking phase with the
 nova-network migration work, and I think that might continue for a bit
 longer -- the problem there is that we need a shared vision for what
 this migration will look like while meeting the needs of the deployers
 who are yet to migrate.
 
 Our velocity continues to amaze me, and I don’t think we’re going
 noticeably slower than we did in Juno. In Juno we saw 2,974 changes
 with 16,112 patchsets, and 21,958 reviews. In Kilo we have seen 2,886
 changes with 15,668 patchsets and 19,516 reviews at the time of
 writing this email. For comparison, Neutron saw 11,333 patchsets and
 Swift saw 1,139 patchsets for Kilo.
 
 I’d like to thank everyone for their hard work during Kilo. I am
 personally very excited by what we achieved in Nova in Kilo, and I’m
 looking forwards to Liberty. I hope you are looking forward to our
 next release as well!
 
 Michael
 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] PTL Candidacy

2015-04-02 Thread Tristan Cacqueray
confirmed

On 04/02/2015 10:16 AM, Kyle Mestery wrote:
 Hi everyone:
 
 I'd like to announce my candidacy for another term as the Neutron PTL. I'm
 the current Neutron PTL, having been the Neutron PTL for the past two
 cycles (Juno and Kilo). I'd like a chance to lead the Neutron team for
 another cycle of development.
 
 During the Kilo cycle, we worked hard to expand the capabilities of all
 contributors in Neutron. Some examples include the following:
 
 * Plugin decomposition [1] has allowed us to enhance innovation and speed
 around plugin and driver development in Neutron.
 * Moving our API tests into the Neutron tree from Tempest has allowed us to
 better control our API testing destiny.
 * The advanced services split [2] has allowed us to continue to scale
 development of Neutron by breaking out the advanced services into their own
 repositories, with separate core reviewer teams.
 
 These changes have helped to increase the velocity of development for all
 parties involved, and yet still maintain testing quality to ensure
 stability of code. I'm proud of the work the team has done in this area.
 These are the types of things the team needed to do in order to put Neutron
 into solid ground to continue development in upcoming cycles.
 
 Looking forward to Liberty, we have a backlog of specs from Kilo which we
 hope to land early in Liberty. Things such as pluggable IPAM [3] and the
 flavor framework [4] are things which never quite made Kilo and will be
 fast tracked into development for Liberty. In addition, we have a large
 list of items people are interested in discussing at the upcoming Summit
 [5], we'll work to pare that list down into the things we can deliver for
 Liberty.
 
 Being PTL is effectively a full time job, and in a lot of cases it's even
 more than a full time job. What makes it rewarding is being able to work
 with a great group of upstream contributors as you work towards common
 goals for each release. I'm proud of the work the Neutron team has done for
 the Juno and Kilo cycles, and I graciously look forward to the chance to
 lead the team during the upcoming Liberty cycle.
 
 Thank you!
 Kyle
 
 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
 [2]
 http://specs.openstack.org/openstack/neutron-specs/specs/kilo/services-split.html
 [3]
 http://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-ipam.html
 [4]
 http://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html
 [5] https://etherpad.openstack.org/p/liberty-neutron-summit-topics
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Clinton Knight for core team

2015-04-02 Thread yang, xing
+1




 On Apr 2, 2015, at 9:21 AM, Ben Swartzlander b...@swartzlander.org wrote:
 
 Clinton Knight (cknight on IRC) has been working on OpenStack for the better 
 part of a year, and starting in February, he shifted his focus from Cinder 
 to Manila. I think everyone is already aware of his high quality 
 contributions and code reviews. I would like to nominate him to join the 
 Manila core reviewer team.
 
 -Ben Swartzlander
 Manila PTL
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Running custom puppet manifests during overcloud post-deployment

2015-04-02 Thread Tzu-Mainn Chen


- Original Message -
 On Thu, Apr 02, 2015 at 10:34:29AM -0400, Dan Prince wrote:
  On Wed, 2015-04-01 at 21:31 -0400, Tzu-Mainn Chen wrote:
   Hey all,
   
   I've run into a requirement where it'd be useful if, as an end user, I
   could inject
   a personal ssh key onto all provisioned overcloud nodes.
   
   Obviously this is something that not every user would need or want.  I
   talked about
   some options with Dan Prince on IRC, and (besides suggesting that I bring
   the
   discussion to the mailing list) he proposed some generic solutions - and
   Dan, please
   feel free to correct me if I misunderstood any of your ideas.
   
   The first is to specify a pre-set custom puppet manifest to be run when
   the Heat
   stack is created by adding a post_deployment_customizations.pp puppet
   manifest to
   be run by all roles.  Users would simply override this manifest.
   
   The second solution is essentially the same as the first, except we'd
   perform
   the override at the Heat resource registry level: the user would update
   the
   resource reference to point to a their custom manifest (rather than
   overriding
   the default post-deployment customization manifest).
   
   Do either of these solutions seem acceptable to others?  Would one be
   preferred?
  
  Talking about this a bit more on IRC this morning we all realized that
  Puppet isn't a hard requirement. Just simply providing a pluggable
  mechanism to inject this sort of information into the nodes in a clean
  way is all we need.
  
  Steve Hardy's suggestion here is probably the cleanest way to support
  this sort of configuration in a generic fashion.
  
  https://review.openstack.org/170137
  
  I don't believe this solution runs post deployment however. So if
  running a hook post deployment is a requirement we may need to wire in a
  similar generic config parameter for that as well.
 
 No that's correct, this will only run when the initial node boot happens
 and cloud-init runs, so it is pre-deployment only.
 
 If we need post-deployment hooks too, then we could add a similar hook at
 the end of *-post.yaml, which pulls in some deployer defined additional
 post-deployment config to apply.
 
 Steve

Post-deployment hooks would definitely be useful; one of the things we'd like
to do is create a user with very specific permissions on various openstack-
related files and executables.
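
For illustration, the kind of deployer-side override being discussed could
look something like this in a Heat environment file (the resource alias is
hypothetical, pending whatever interface actually lands):

resource_registry:
  # Point a post-deployment hook at a deployer-supplied template that
  # could, e.g., create the user and set the file permissions.
  OS::TripleO::NodeExtraConfigPost: my_post_deploy_customizations.yaml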

Mainn

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack clients] Client bash_completion script discovery

2015-04-02 Thread Kirill Zaitsev
Hello all,

Most OS python-xxxclient projects include a tools/xxx.bash_completion file 
that is invaluable for working with said clients. Correct me if I'm wrong, but 
as far as I can see the script is not packaged with any of the Python packages 
on PyPI. If the end user wants to install a client with

pip install python-xxxclient

she would have to go to that package's git repository and check out the file by 
hand (and source it of course).


I think it might be a good idea to include the completion script with the client 
in the PyPI package and add a separate command, something like 
`bash-completion-script` or `completion-script bash`, that would print the 
script to stdout.

Since this applies basically to any python-xxxclient — I’d like to ask for 
input on this idea. Maybe someone already has a similar solution?
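
As a starting point, here is a minimal sketch of such a command (the package
and data file names are made up for illustration):

import pkgutil
import sys

def do_bash_completion(args):
    """Print the completion script bundled as package data to stdout, so
    users can do: xxx bash-completion > /etc/bash_completion.d/xxx"""
    script = pkgutil.get_data('xxxclient', 'xxx.bash_completion')
    sys.stdout.write(script.decode('utf-8'))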

-- 
Kirill Zaitsev
Sent with Airmail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Request to adopt security as a project team

2015-04-02 Thread Clark, Robert Graham
Technical Committee,

Please consider this request to recognize the security team as an OpenStack
project team.

This is a milestone for the OpenStack Security Group and follows from our
merging with the VMT. Over the last few years what started as a small working
group has become a team of dedicated security experts who assist with security
advisories, create security notes and developer guidance. We've created
technologies and tools such as ephemeral PKI (Anchor) and Python static
analysis to help the community to build more secure services.

Following the new project team application process, we request that the
technical committee consider our application to become a recognised OpenStack
project team:

https://review.openstack.org/170172

Thank You.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] PTL Candidacy

2015-04-02 Thread Flavio Percoco

Greetings,

I'd like to put my name out there for the Glance PTL position.

Few words about me:

I'm Flavio Percoco (flaper87 on IRC and everywhere). I've been an
OpenStack fellow for the last 2 1/2 years. During this time I've
spread my efforts on several projects but mainly on Glance, from which
you may/may not know me.

Glance
==

Since my early days in OpenStack, I've been part of every core - core
in terms of service, not membership - decision in Glance. From what to
do with the registry service, glance_store, API stability etc.
Throughout these decisions, I've participated in the release process,
held leading positions, and served as a voice for those changes.

I'm happy to say that our project has grown a lot and that we're
facing new challenges that I'd love to be part of and more
importantly, I'd love to help leading those efforts along side our,
growing, community.

Interesting things happened in Kilo but I'd like to focus now on
what's coming next, Liberty.

One of the things that is still pending for us is the work on
Artifacts. Although I don't believe that Glance is the best place for it
to live forever, I'd love to see this work done and for it to grow. The
effort being put there, not only code-wise and feature-wise but
also review-wise, is already good enough proof of the impact this
could have on our community.

In addition to the above, I'd love our team to improve Glance's API
story throughout OpenStack and see such API grow and stabilize. This
not only refers to the service itself but the libraries that Glance
relies on too. I strongly believe that stability and consistency
should be part of our main goals in the upcoming development cycle.

New features will be proposed for sure and I'd love us all to review
them together and decide together what's best for the project's future
bearing in mind the goals of the cycle.

Community
=

Thankfully enough, I've had the pleasure to be involved in many areas
of our community and this has given me a good knowledge of how our
community is structured and how the different parts interact with each
other. From the stability team to our infrastructure team going
through OpenStack's common ground (Oslo). I'd love to use this broad
view in this position as I've been using it as a contributor.

On the other hand, I'm also known for being noisy, speaking up and
being ready to fight whenever it's needed (even when it is not :P). Just
like with everything else, I'm looking forward to applying all this to
this position as I've done in my current position.

Last but not least, I've had the pleasure of being Zaqar's PTL during Kilo
(and co-PTL since the beginning), which has as well prepared me for
this task.

Thanks for reading thus far, I hope you'll consider me as a good
candidate for this position.
Flavio

--
@flaper87
Flavio Percoco


pgpl5BUdvwxbO.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] The Evolution of core developer to maintainer?

2015-04-02 Thread Maru Newby

 On Apr 2, 2015, at 3:26 AM, Thierry Carrez thie...@openstack.org wrote:
 
 Maru Newby wrote:
 [...] Many of us in the Neutron
 community find this taxonomy restrictive and not representative
 of all the work that makes the project possible.
 
 We seem to be after the same end goal. I just disagree that renaming
 core reviewers to maintainers is a positive step toward that goal.
 
 Worse, 'cores'
 are put on a pedestal, and not just in the project.  Every summit
 a 'core reviewer dinner' is held that underscores the
 glorification of this designation.
 
 I deeply regret that, and communicated to the sponsor holding it the
 problem with this +2 dinner the very first time it was held. FWIW it's
 been renamed to VIP dinner and no longer limited to core reviewers,
 but I'd agree with you that the damage was already done.
 
 By proposing to rename 'core
 reviewer' to 'maintainer' the goal was to lay the groundwork for
 broadening the base of people whose valuable contribution could
 be recognized.  The goal was to recognize not just review-related
 contributors, but also roles like doc/bug/test czar and cross-project
 liaison.  The statue of the people filling these roles today is less 
 if they are not also ‘core’, and that makes the work less attractive 
 to many.
 
 That's where we disagree. You see renaming core reviewer to
 maintainer as a way to recognize a broader type of contribution. I
 see it as precisely resulting in the opposite.
 
 Simply renaming core reviewers to maintainers just keeps us using a
 single term (or class) to describe project leadership. And that class
 includes +2 reviewing duties. So you can't be a maintainer if you don't
 do core reviewing. That is exclusive, not inclusive.

The important part of my statement above was ‘lay the groundwork for’.
We intended to change the name as a _precursor_ to changing the
role itself to encompass more than just those with +2 rights.  Nobody
in their right mind would assume that changing the name by itself could
fix the situation, but we thought it would be a good signal as to our
intent to broaden the scope of recognized contribution.


 What we need to do instead is to revive the drivers concept (we can
 rename it maintainers if you really like that term), separate from the
 core reviewers concept. One can be a project driver and a core
 reviewer. And one can be a project driver *without* being a core
 reviewer. Now *that* allows to recognize all valuable contributions,
 and to be representative of all the work that makes the project possible.

As Joe and I have said, Nova and Neutron already have drivers teams and 
they fill a different role from what you are suggesting.  Can you think of a
more appropriate name that isn’t already in use for what you are proposing?


Maru
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Sergey Kulanov
I've just printed the tree with hidden files, so actually it's OK with fpb:

root@55725ffa6e80:~# tree -a test/
test/
|-- .gitignore
|-- LICENSE
|-- README.md
|-- deployment_scripts
|   `-- deploy.sh
|-- environment_config.yaml
|-- metadata.yaml
|-- pre_build_hook
|-- repositories
|   |-- centos
|   |   `-- .gitkeep
|   `-- ubuntu
|       `-- .gitkeep
`-- tasks.yaml
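
So for plugin repos that are missing those directories, recreating the
placeholders should be enough, e.g.:

mkdir -p repositories/{ubuntu,centos}
touch repositories/{ubuntu,centos}/.gitkeep
git add repositories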



2015-04-02 17:01 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:

 Well then we need to fix fuel-plugin-builder to accept such
 situations.

 Actually it is an issue with fpb, since git does not accept empty
 directories [1], so pulling fresh from such a repo will result in
 the 'repositories' dir missing even when the developer had it.

 I hope no files were accidentaly forgotten during commit there?

 P.

 [1]

 http://stackoverflow.com/questions/115983/how-can-i-add-an-empty-directory-to-a-git-repository

 On 04/02/2015 03:46 PM, Sergey Kulanov wrote:
  Hi, Przemyslaw
 
  1) There should be two repositories folders (ubuntu and centos).
  Please check the correct structure:
  mkdir -p repositories/{ubuntu,centos}
 
 
  root@55725ffa6e80:~/fuel-plugin-cinder-netapp# tree
  .
  |-- LICENSE
  |-- README.md
  |-- cinder_netapp-1.0.0.fp
  |-- deployment_scripts
  |   |-- puppet
  |   |   `-- plugin_cinder_netapp
  |   |   `-- manifests
  |   |   `-- init.pp
  |   `-- site.pp
  |-- environment_config.yaml
  |-- metadata.yaml
  |-- pre_build_hook
  |-- repositories
  |   |-- centos
  |   `-- ubuntu
  `-- tasks.yaml
 
  Then you can build the plugin.
 
  2) Actually, this should not be an issue when creating plugins from
  scratch using the fpb tool itself [1]:
 
  fpb --create test
 
  root@55725ffa6e80:~# tree test
  test
  |-- LICENSE
  |-- README.md
  |-- deployment_scripts
  |   `-- deploy.sh
  |-- environment_config.yaml
  |-- metadata.yaml
  |-- pre_build_hook
  |-- repositories
  |   |-- centos
  |   `-- ubuntu
  `-- tasks.yaml
 
 
 
  [1]. https://pypi.python.org/pypi/fuel-plugin-builder/1.0.2
 
 
  2015-04-02 16:30 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:
 
  Investigating the cinder-netapp plugin [1] (a 'certified' one) shows a
 fuel-plugin-builder error:
 
  (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ fpb
 --build
  .
 
 
  Unexpected error
  Cannot find directories ./repositories/ubuntu for release
  {'repository_path': 'repositories/ubuntu', 'version': '2014.2-6.0',
  'os': 'ubuntu', 'mode': ['ha', 'multinode'],
 'deployment_scripts_path':
  'deployment_scripts/'}
  (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
  deployment_scripts  environment_config.yaml  LICENSE  metadata.yaml
  pre_build_hook  README.md  tasks.yaml
  (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
  'repositories'
  metadata.yaml
  18:repository_path: repositories/ubuntu
  23:repository_path: repositories/centos
 
  Apparently some files are missing from the git repo or the manifest
 is
  incorrect. Does anyone know something about this?
 
  P.
 
  [1] https://github.com/stackforge/fuel-plugin-cinder-netapp
 
  On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
   Hello,
  
   I've been investigating bug [1] concentrating on the
   fuel-plugin-external-glusterfs.
  
   First of all: [2] there are no core reviewers for Gerrit for this
 repo
   so even if there was a patch to fix [1] no one could merge it. I
 saw
   also fuel-plugin-external-nfs -- same issue, haven't checked other
   repos. Why is this? Can we fix this quickly?
  
   Second, the plugin throws:
  
   DEPRECATION WARNING: The plugin has old 1.0 package format, this
  format
   does not support many features, such as plugins updates, find
  plugin in
   new format or migrate and rebuild this one.
  
   I don't think this is appropriate for a plugin that is listed in
 the
   official catalog [3].
  
   Third, I created a supposed fix for this bug [4] and wanted to
 test it
   with the fuel-qa scripts. Basically I built an .fp file with
   fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH
  variable
   to point to that .fp file and then ran the
   group=deploy_ha_one_controller_glusterfs tests. The test failed
 [5].
   Then I reverted the changes from the patch and the test still
 failed
   [6]. But installing the plugin by hand shows that it's available
 there
  so I don't know if it's a broken plugin test or if I'm still missing
 something.
  
   It would be nice to get some QA help here.
  
   P.
  
   [1] https://bugs.launchpad.net/fuel/+bug/1415058
   [2] https://review.openstack.org/#/admin/groups/577,members
   [3] https://fuel-infra.org/plugins/catalog.html
   [4] https://review.openstack.org/#/c/169683/
   [5]
  
 
 

Re: [openstack-dev] [cinder] Issue for backup speed

2015-04-02 Thread Murali Balcha
Just curious. What is the overhead of compression and the other backup
processes?  How much time does it take to upload a simple 50 GB file to swift,
compared to backing up 50 GB to swift?

From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Wednesday, April 01, 2015 6:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [cinder] Issue for backup speed

This is something we're working on (I work with the author of the patch you
referenced) but the refactoring of the backup code in this cycle has made
progress challenging. If you have a patch that works, please submit it, even if
it needs some cleaning up; we'd be happy to work with you on testing, cleanup
and improvements.

The basic problem is that backup is CPU bound (compression, ssl) so the 
existing parallelisation techniques used in cinder don't help. Running many 
cinder-backup processes can give you good aggregate throughput if you're 
running many backups at once, but this appears not to be a common case, even in 
a large public cloud.

On 1 April 2015 at 11:41, Jae Sang Lee hyan...@gmail.com wrote:
Hi,

I tested the Swift backup driver in cinder-backup and its performance isn't
high. In our test environment, the average time to back up a 50G volume is
20 minutes (roughly 43 MB/s).


I found a patch for this that adds multi-threading to the swift backup
driver (https://review.openstack.org/#/c/111314) but it's also too slow. It
looks like that patch doesn't implement threading properly.

Is there any way to improve this? I'd appreciate others' thoughts on
these issues.

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Stepping down as PTL

2015-04-02 Thread Eoghan Glynn

Hi Folks,

Just a quick note to say that I won't be running again for
ceilometer PTL over the liberty cycle.

I've taken on a new role internally that won't realistically
allow me the time that the PTL role deserves. But y'all haven't
seen the last of me, I'll be sticking around as a contributor,
bandwidth allowing.

I just wanted to take the opportunity to warmly thank everyone
in the ceilometer community for their efforts over the past two
cycles, and before.

And I'm sure I'll be leaving the reins in good hands :)

Cheers,
Eoghan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Clinton Knight for core team

2015-04-02 Thread Alex Meade
+1

On Thu, Apr 2, 2015 at 10:30 AM, Thomas Bechtold thomasbecht...@jpberlin.de
 wrote:

 On 02.04.2015 15:16, Ben Swartzlander wrote:
  Clinton Knight (cknight on IRC) has been working on OpenStack for the
  better part of the year, and starting in February, he shifted his focus
  from Cinder to Manila. I think everyone is already aware of his high
  quality contributions and code reviews. I would like to nominate him to
  join the Manila core reviewer team.

 +1

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Running custom puppet manifests during overcloud post-deployment

2015-04-02 Thread Steven Hardy
On Thu, Apr 02, 2015 at 10:34:29AM -0400, Dan Prince wrote:
 On Wed, 2015-04-01 at 21:31 -0400, Tzu-Mainn Chen wrote:
  Hey all,
  
  I've run into a requirement where it'd be useful if, as an end user, I 
  could inject
  a personal ssh key onto all provisioned overcloud nodes.
  
  Obviously this is something that not every user would need or want.  I 
  talked about
  some options with Dan Prince on IRC, and (besides suggesting that I bring 
  the
  discussion to the mailing list) he proposed some generic solutions - and 
  Dan, please
  feel free to correct me if I misunderstood any of your ideas.
  
  The first is to specify a pre-set custom puppet manifest to be run when the 
  Heat
  stack is created by adding a post_deployment_customizations.pp puppet 
  manifest to
  be run by all roles.  Users would simply override this manifest.
  
  The second solution is essentially the same as the first, except we'd 
  perform
  the override at the Heat resource registry level: the user would update the
  resource reference to point to their custom manifest (rather than
  overriding
  the default post-deployment customization manifest).
  
  Do either of these solutions seem acceptable to others?  Would one be 
  preferred?
 
 Talking about this a bit more on IRC this morning we all realized that
 Puppet isn't a hard requirement. Just simply providing a pluggable
 mechanism to inject this sort of information into the nodes in a clean
 way is all we need.
 
 Steve Hardy's suggestion here is probably the cleanest way to support
 this sort of configuration in a generic fashion.
 
 https://review.openstack.org/170137
 
 I don't believe this solution runs post deployment however. So if
 running a hook post deployment is a requirement we may need to wire in a
 similar generic config parameter for that as well.

No that's correct, this will only run when the initial node boot happens
and cloud-init runs, so it is pre-deployment only.

If we need post-deployment hooks too, then we could add a similar hook at
the end of *-post.yaml, which pulls in some deployer defined additional
post-deployment config to apply.
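
For illustration, the pre-deployment hook would be consumed by pointing the
resource registry at a user-supplied firstboot template, roughly like this
(a sketch only: it assumes a hook named OS::TripleO::NodeUserData of the
kind the review above proposes, and the key value is a placeholder):

# env.yaml, passed with -e at stack-create time
resource_registry:
  OS::TripleO::NodeUserData: firstboot-ssh-key.yaml

# firstboot-ssh-key.yaml
heat_template_version: 2014-10-16
resources:
  userdata:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        ssh_authorized_keys:
          - ssh-rsa AAAA... me@example.com
outputs:
  OS::stack_id:
    value: {get_resource: userdata}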

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Investigating the cinder-netapp plugin [1] (a 'certified' one) shows a
fuel-plugin-builder error:

(fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ fpb --build
.


Unexpected error
Cannot find directories ./repositories/ubuntu for release
{'repository_path': 'repositories/ubuntu', 'version': '2014.2-6.0',
'os': 'ubuntu', 'mode': ['ha', 'multinode'], 'deployment_scripts_path':
'deployment_scripts/'}
(fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
deployment_scripts  environment_config.yaml  LICENSE  metadata.yaml
pre_build_hook  README.md  tasks.yaml
(fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
'repositories'
metadata.yaml
18:repository_path: repositories/ubuntu
23:repository_path: repositories/centos

Apparently some files are missing from the git repo or the manifest is
incorrect. Does anyone know something about this?

P.

[1] https://github.com/stackforge/fuel-plugin-cinder-netapp

On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
 Hello,
 
 I've been investigating bug [1] concentrating on the
 fuel-plugin-external-glusterfs.
 
 First of all: [2] there are no core reviewers for Gerrit for this repo
 so even if there was a patch to fix [1] no one could merge it. I saw
 also fuel-plugin-external-nfs -- same issue, haven't checked other
 repos. Why is this? Can we fix this quickly?
 
 Second, the plugin throws:
 
 DEPRECATION WARNING: The plugin has old 1.0 package format, this format
 does not support many features, such as plugins updates, find plugin in
 new format or migrate and rebuild this one.
 
 I don't think this is appropriate for a plugin that is listed in the
 official catalog [3].
 
 Third, I created a supposed fix for this bug [4] and wanted to test it
 with the fuel-qa scripts. Basically I built an .fp file with
 fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
 to point to that .fp file and then ran the
 group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
 Then I reverted the changes from the patch and the test still failed
 [6]. But installing the plugin by hand shows that it's available there
 so I don't know if it's a broken plugin test or if I'm still missing something.
 
 It would be nice to get some QA help here.
 
 P.
 
 [1] https://bugs.launchpad.net/fuel/+bug/1415058
 [2] https://review.openstack.org/#/admin/groups/577,members
 [3] https://fuel-infra.org/plugins/catalog.html
 [4] https://review.openstack.org/#/c/169683/
 [5]
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
 [6]
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] PTL Candidacy

2015-04-02 Thread Matthew Booth
On 02/04/15 07:20, Michael Still wrote:
 I'd like another term as Nova PTL, if you'll have me.
 

...

 
 I think it's a good idea also to briefly examine some statistics about specs:
 
 Juno:
approved but not implemented: 40
implemented: 49
 
 Kilo:
approved but not implemented: 30
implemented: 32

Hi, Michael,

It has been my impression over at least the last 2 releases that the
most significant barrier to progress in Nova is the lack of core
reviewer bandwidth. This affects not only feature development, but also
the much less sexy paying down of technical debt. There have been
various attempts to redefine roles and create new processes, but no
attempt that I have seen to address the underlying fundamental issue:
the lack of people who can +2 patches.

There is a discussion currently ongoing on this list, The Evolution of
core developer to maintainer, which contains a variety of proposals.
However, none of these will gain anything close to a consensus. The
result of this will be that none of them will be implemented. We will be
left by default with the status quo, and the situation will continue not
to improve despite the new processes we will invent instead.

The only way we are going to change the status quo is by fiat. We should
of course make every effort to get as many people on board as possible.
However, when change is required, but nobody can agree precisely which
change, we need positive leadership from the PTL.

Would you like to take a position on how to improve core reviewer
throughput in the next cycle?

Thanks,

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] unit tests result in false negatives on system z platform CI

2015-04-02 Thread Matt Riedemann



On 4/2/2015 2:37 AM, Markus Zoeller wrote:

Michael Still mi...@stillhq.com wrote on 04/01/2015 11:01:51 PM:


From: Michael Still mi...@stillhq.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date: 04/01/2015 11:06 PM
Subject: Re: [openstack-dev] [nova] unit tests result in false
negatives on system z platform CI

Thanks for the detailed email on this. How about we add this to the
agenda for this week's nova meeting?


Yes, that would be great. I've seen you already put it on the agenda.
I will be in today's meeting.

Regards,
Markus Zoeller (markus_z)


One option would be to add a fixture to some higher level test class,
but perhaps someone has a better idea than that.
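
A minimal sketch of that kind of fixture (the class names are hypothetical,
not Nova's actual test base classes; it assumes the fixtures/testtools
libraries the unit tests already use):

import fixtures
import testtools

class X86PlatformFixture(fixtures.Fixture):
    """Make unit tests see an x86_64 host regardless of the real one."""

    def setUp(self):
        super(X86PlatformFixture, self).setUp()
        # MonkeyPatch restores the original attribute at cleanup time.
        self.useFixture(fixtures.MonkeyPatch(
            'platform.machine', lambda: 'x86_64'))

class BaseTestCase(testtools.TestCase):
    def setUp(self):
        super(BaseTestCase, self).setUp()
        # Platform-specific tests, like the s390 one in [2], would
        # override this default with their own platform fixture.
        self.useFixture(X86PlatformFixture())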

Michael

On Wed, Apr 1, 2015 at 8:54 PM, Markus Zoeller mzoel...@de.ibm.com

wrote:

[...]
I'm looking for a way to express the assumption that x86 should be the
default platform in the unit tests and prevent calls to the underlying
system. This has to be rewritable if platform specific code like in [2]
has to be tested.

I'd like to discuss how that could be achieved in a maintainable way.


References
--
[1] https://blueprints.launchpad.net/nova/+spec/libvirt-kvm-systemz
[2] test_driver.py; test_get_guest_config_with_type_kvm_on_s390;
https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/libvirt/test_driver.py#L2592





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



It's simple: don't run the unit tests on z.  We don't require the other 
virt driver CIs to run unit tests; I don't see why we'd make zKVM do it. 
 Any platform-specific code should be exercised via the APIs in Tempest 
runs and the zKVM CI should be focusing on running the Tempest tests 
that hit the APIs they support (which should be listed in the hypervisor 
support matrix).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Clinton Knight for core team

2015-04-02 Thread Thomas Bechtold
On 02.04.2015 15:16, Ben Swartzlander wrote:
 Clinton Knight (cknight on IRC) has been working on OpenStack for the
 better part of the year, and starting in February, he shifted his focus
 from Cinder to Manila. I think everyone is already aware of his high
 quality contributions and code reviews. I would like to nominate him to
 join the Manila core reviewer team.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] oslo.log hacking rules

2015-04-02 Thread Ivan Kolodyazhny
I created an issue in Launchpad:
https://bugs.launchpad.net/hacking/+bug/1439709
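
For anyone unfamiliar with the mechanism, a hacking check is just a small
plugin function that flake8 calls for each logical line; a minimal sketch
(the N999 code and the check itself are made up for illustration, in the
spirit of the checks Cinder and Nova already carry):

def no_log_warn(logical_line):
    """Disallow deprecated LOG.warn() in favour of LOG.warning().

    N999: LOG.warn('message')
    Okay: LOG.warning('message')
    """
    if logical_line.startswith('LOG.warn('):
        yield (0, 'N999: use LOG.warning() instead of LOG.warn()')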

Regards,
Ivan Kolodyazhny,
Software Engineer,
Mirantis Inc.

On Wed, Apr 1, 2015 at 1:38 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 03/31/2015 09:19 PM, Ivan Kolodyazhny wrote:
  Hi all,
 
  After moving to oslo.log, and after a lot of reviews in Cinder, we
  merged some hacking checks into our code [1], [2]. Some of them
  are also implemented in Nova [2], [3]. I didn't check other
  projects.

  We try to make our code follow the logging guidelines [5], [6], and
  cross-project hacking checks for all the logging guidelines would
  help every project.

  Is anybody from oslo or other projects interested in this? If it's
  needed for oslo.log (I really hope it is), I could volunteer to
  move the hacking checks into openstack-dev/hacking or the oslo.log
  project.
 

 We should definitely maintain the rules in a single place. Neutron also
 has those, btw.

 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJVG8qSAAoJEC5aWaUY1u57gNAH/RDMG5PB37dkXx3iR8WrWvdB
 cTXTH6vu4945Loyz6WlEsc3yAXQXtUdfAaPphVAURV3B8RbdXG8K25X37HI5WEp3
 IJ0dTGA7WvVVJcGcK4kNv9yiLvr06J5ijwXcLY+aYZ8I/8/uy1ZIuU3Jkxiys87f
 Eql2QidtgubBA+HDbhSxDJ0n8kGNP534zUQip5nOVBOyN0Vfh2xBUje1qMEnJnKR
 uQ2V73CBVXh6fZX2FArmpw1MB6BiWFOXI427fsG4OuM5f700+ECiDQ6wMZCWfaXK
 PFcRkEWVnag2wKP//iRdXpEYX3v7eIOatn5P5LoRYTf8XwF2+VxuWtYKNxGdk1A=
 =J/1a
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Nominate Clinton Knight for core team

2015-04-02 Thread Ben Swartzlander
Clinton Knight (cknight on IRC) has been working on OpenStack for the 
better part of the year, and starting in February, he shifted his focus 
from Cinder to Manila. I think everyone is already aware of his high 
quality contributions and code reviews. I would like to nominate him to 
join the Manila core reviewer team.


-Ben Swartzlander
Manila PTL


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Sergey Kulanov
Hi, Przemyslaw

1) There should be two repositories folders (ubuntu and centos). Please
check the correct structure:
mkdir -p repositories/{ubuntu,centos}


root@55725ffa6e80:~/fuel-plugin-cinder-netapp# tree
.
|-- LICENSE
|-- README.md
|-- cinder_netapp-1.0.0.fp
|-- deployment_scripts
|   |-- puppet
|   |   `-- plugin_cinder_netapp
|   |   `-- manifests
|   |   `-- init.pp
|   `-- site.pp
|-- environment_config.yaml
|-- metadata.yaml
|-- pre_build_hook
|-- repositories
|   |-- centos
|   `-- ubuntu
`-- tasks.yaml

Then you can build the plugin.

2) Actually, this should not be an issue when creating plugins from
scratch using the fpb tool itself [1]:

fpb --create test

root@55725ffa6e80:~# tree test
test
|-- LICENSE
|-- README.md
|-- deployment_scripts
|   `-- deploy.sh
|-- environment_config.yaml
|-- metadata.yaml
|-- pre_build_hook
|-- repositories
|   |-- centos
|   `-- ubuntu
`-- tasks.yaml



[1]. https://pypi.python.org/pypi/fuel-plugin-builder/1.0.2


2015-04-02 16:30 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:

 Investigating the cinder-netapp plugin [1] (a 'certified' one) shows a
 fuel-plugin-builder error:

 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ fpb --build
 .


 Unexpected error
 Cannot find directories ./repositories/ubuntu for release
 {'repository_path': 'repositories/ubuntu', 'version': '2014.2-6.0',
 'os': 'ubuntu', 'mode': ['ha', 'multinode'], 'deployment_scripts_path':
 'deployment_scripts/'}
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
 deployment_scripts  environment_config.yaml  LICENSE  metadata.yaml
 pre_build_hook  README.md  tasks.yaml
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
 'repositories'
 metadata.yaml
 18:repository_path: repositories/ubuntu
 23:repository_path: repositories/centos

 Apparently some files are missing from the git repo or the manifest is
 incorrect. Does anyone know something about this?

 P.

 [1] https://github.com/stackforge/fuel-plugin-cinder-netapp

 On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
  Hello,
 
  I've been investigating bug [1] concentrating on the
  fuel-plugin-external-glusterfs.
 
  First of all: [2] there are no core reviewers for Gerrit for this repo
  so even if there was a patch to fix [1] no one could merge it. I saw
  also fuel-plugin-external-nfs -- same issue, haven't checked other
  repos. Why is this? Can we fix this quickly?
 
  Second, the plugin throws:
 
  DEPRECATION WARNING: The plugin has old 1.0 package format, this format
  does not support many features, such as plugins updates, find plugin in
  new format or migrate and rebuild this one.
 
  I don't think this is appropriate for a plugin that is listed in the
  official catalog [3].
 
  Third, I created a supposed fix for this bug [4] and wanted to test it
  with the fuel-qa scripts. Basically I built an .fp file with
  fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH variable
  to point to that .fp file and then ran the
  group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
  Then I reverted the changes from the patch and the test still failed
  [6]. But installing the plugin by hand shows that it's available there
  so I don't know if it's a broken plugin test or if I'm still missing
 something.
 
  It would be nice to get some QA help here.
 
  P.
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415058
  [2] https://review.openstack.org/#/admin/groups/577,members
  [3] https://fuel-infra.org/plugins/catalog.html
  [4] https://review.openstack.org/#/c/169683/
  [5]
 
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
  [6]
 
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sergey
DevOps Engineer
IRC: SergK
Skype: Sergey_kul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] PTL Candidacy

2015-04-02 Thread Kyle Mestery
Hi everyone:

I'd like to announce my candidacy for another term as the Neutron PTL. I'm
the current Neutron PTL, having been the Neutron PTL for the past two
cycles (Juno and Kilo). I'd like a chance to lead the Neutron team for
another cycle of development.

During the Kilo cycle, we worked hard to expand the capabilities of all
contributors in Neutron. Some examples include the following:

* Plugin decomposition [1] has allowed us to enhance innovation and speed
around plugin and driver development in Neutron.
* Moving our API tests into the Neutron tree from Tempest has allowed us to
better control our API testing destiny.
* The advanced services split [2] has allowed us to continue to scale
development of Neutron by breaking out the advanced services into their own
repositories, with separate core reviewer teams.

These changes have helped to increase the velocity of development for all
parties involved, and yet still maintain testing quality to ensure
stability of code. I'm proud of the work the team has done in this area.
These are the types of things the team needed to do in order to put Neutron
on solid ground for continued development in upcoming cycles.

Looking forward to Liberty, we have a backlog of specs from Kilo which we
hope to land early in Liberty. Things such as pluggable IPAM [3] and the
flavor framework [4] are things which never quite made Kilo and will be
fast tracked into development for Liberty. In addition, we have a large
list of items people are interested in discussing at the upcoming Summit
[5], we'll work to pare that list down into the things we can deliver for
Liberty.

Being PTL is effectively a full time job, and in a lot of cases it's even
more than a full time job. What makes it rewarding is being able to work
with a great group of upstream contributors as you work towards common
goals for each release. I'm proud of the work the Neutron team has done for
the Juno and Kilo cycles, and I graciously look forward to the chance to
lead the team during the upcoming Liberty cycle.

Thank you!
Kyle

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
[2]
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/services-split.html
[3]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-ipam.html
[4]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html
[5] https://etherpad.openstack.org/p/liberty-neutron-summit-topics
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] glusterfs plugin

2015-04-02 Thread Przemyslaw Kaminski
Well then we need to fix fuel-plugin-builder to accept such
situations.

Actually it is an issue with fpb since git does not accept empty
directories [1], so pulling fresh from such a repo will result in the
'repositories' dir missing even when the developer had it.

I hope no files were accidentally forgotten during the commit there?

P.

[1]
http://stackoverflow.com/questions/115983/how-can-i-add-an-empty-directory-to-a-git-repository
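
The usual workaround, as the .gitkeep entries in the plugin skeleton shown
elsewhere in this thread suggest, is to commit placeholder files so git
tracks the otherwise-empty directories:

touch repositories/centos/.gitkeep repositories/ubuntu/.gitkeep
git add repositories/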

On 04/02/2015 03:46 PM, Sergey Kulanov wrote:
 Hi, Przemyslaw
 
 1) There should be two repositories folders (ubuntu and centos). Please
 check the correct structure:
 mkdir -p repositories/{ubuntu,centos}
 
 
 root@55725ffa6e80:~/fuel-plugin-cinder-netapp# tree
 .
 |-- LICENSE
 |-- README.md
 |-- cinder_netapp-1.0.0.fp
 |-- deployment_scripts
 |   |-- puppet
 |   |   `-- plugin_cinder_netapp
 |   |   `-- manifests
 |   |   `-- init.pp
 |   `-- site.pp
 |-- environment_config.yaml
 |-- metadata.yaml
 |-- pre_build_hook
 |-- repositories
 |   |-- centos
 |   `-- ubuntu
 `-- tasks.yaml
 
 Then you can build the plugin.
 
 2) Actually, this should not be an issue when creating plugins from
 scratch using the fpb tool itself [1]:
 
 fpb --create test
 
 root@55725ffa6e80:~# tree test
 test
 |-- LICENSE
 |-- README.md
 |-- deployment_scripts
 |   `-- deploy.sh
 |-- environment_config.yaml
 |-- metadata.yaml
 |-- pre_build_hook
 |-- repositories
 |   |-- centos
 |   `-- ubuntu
 `-- tasks.yaml
 
 
 
 [1]. https://pypi.python.org/pypi/fuel-plugin-builder/1.0.2
 
 
 2015-04-02 16:30 GMT+03:00 Przemyslaw Kaminski pkamin...@mirantis.com:
 
 Investigating the cinder-netapp plugin [1] (a 'certified' one) shows a
 fuel-plugin-builder error:
 
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ fpb --build
 .
 
 
 Unexpected error
 Cannot find directories ./repositories/ubuntu for release
 {'repository_path': 'repositories/ubuntu', 'version': '2014.2-6.0',
 'os': 'ubuntu', 'mode': ['ha', 'multinode'], 'deployment_scripts_path':
 'deployment_scripts/'}
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ls
 deployment_scripts  environment_config.yaml  LICENSE  metadata.yaml
 pre_build_hook  README.md  tasks.yaml
 (fuel)vagrant@ubuntu-14:/sources/fuel-plugin-cinder-netapp$ ag
 'repositories'
 metadata.yaml
 18:repository_path: repositories/ubuntu
 23:repository_path: repositories/centos
 
 Apparently some files are missing from the git repo or the manifest is
 incorrect. Does anyone know something about this?
 
 P.
 
 [1] https://github.com/stackforge/fuel-plugin-cinder-netapp
 
 On 04/01/2015 03:48 PM, Przemyslaw Kaminski wrote:
  Hello,
 
  I've been investigating bug [1] concentrating on the
  fuel-plugin-external-glusterfs.
 
  First of all: [2] there are no core reviewers for Gerrit for this repo
  so even if there was a patch to fix [1] no one could merge it. I saw
  also fuel-plugin-external-nfs -- same issue, haven't checked other
  repos. Why is this? Can we fix this quickly?
 
  Second, the plugin throws:
 
  DEPRECATION WARNING: The plugin has old 1.0 package format, this
 format
  does not support many features, such as plugins updates, find
 plugin in
  new format or migrate and rebuild this one.
 
  I don't think this is appropriate for a plugin that is listed in the
  official catalog [3].
 
  Third, I created a supposed fix for this bug [4] and wanted to test it
  with the fuel-qa scripts. Basically I built an .fp file with
  fuel-plugin-builder from that code, set the GLUSTER_PLUGIN_PATH
 variable
  to point to that .fp file and then ran the
  group=deploy_ha_one_controller_glusterfs tests. The test failed [5].
  Then I reverted the changes from the patch and the test still failed
  [6]. But installing the plugin by hand shows that it's available there
  so I don't know if it's a broken plugin test or if I'm still missing
 something.
 
  It would be nice to get some QA help here.
 
  P.
 
  [1] https://bugs.launchpad.net/fuel/+bug/1415058
  [2] https://review.openstack.org/#/admin/groups/577,members
  [3] https://fuel-infra.org/plugins/catalog.html
  [4] https://review.openstack.org/#/c/169683/
  [5]
 
 
 https://www.dropbox.com/s/1mhz8gtm2j391mr/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__11_39_11.tar.xz?dl=0
  [6]
 
 
 https://www.dropbox.com/s/ehjox554xl23xgv/fail_error_deploy_ha_one_controller_glusterfs_simple-2015_04_01__13_16_11.tar.xz?dl=0
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [Magnum] Containers and networking

2015-04-02 Thread Russell Bryant
On 04/02/2015 12:36 PM, Adrian Otto wrote:
 What to expect in the Liberty release cycle:
 
...
 * Overlay networking 
...

This is totally unrelated to your PTL email, but on this point, I'd be
curious what the Magnum team thinks of this proposal:

http://openvswitch.org/pipermail/dev/2015-March/052663.html

It's a proposed (and now merged) design for how containers that live
inside OpenStack managed VMs can be natively connected to virtual
networks managed by Neutron.  There are some parts of the process that are
handled by the container orchestration system being used.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Request to adopt security as a project team

2015-04-02 Thread Adam Young

On 04/02/2015 11:56 AM, Clark, Robert Graham wrote:

Technical Committee,

Please consider this request to recognize the security team as an OpenStack
project team.

This is a milestone for the OpenStack Security Group and follows from our
merging with the VMT. Over the last few years what started as a small working
group has become a team of dedicated security experts who assist with security
advisories, create security notes and developer guidance. We've created
technologies and tools such as ephemeral PKI (Anchor) and Python static
analysis to help the community build more secure services.

Following the new project team application process, we request that the
technical committee consider our application to become a recognised OpenStack
project team:

https://review.openstack.org/170172

Thank You.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Heartily endorsed!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Request to adopt security as a project team

2015-04-02 Thread Jeremy Stanley
On 2015-04-02 15:56:31 + (+), Clark, Robert Graham wrote:
 Please consider this request to recognize the security team as an
 OpenStack project team.
 
 This is a milestone for the OpenStack Security Group and follows
 from our merging with the VMT.
[...]

With my VMT hat donned, I second this request!
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

