Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Dmitry Tantsur

On 05/28/2015 06:41 PM, Devananda van der Veen wrote:

Hi all,

tl;dr;

At the summit, the Ironic team discussed the challenges we've had with
the current release model and came up with some ideas to address them.
I had a brief follow-up conversation with Doug and Thierry, but I'd
like this to be discussed more openly and for us (the Ironic dev
community) to agree on a clear plan before we take action.

If Ironic moves to a release:independent model, it shouldn't have any
direct effect on other projects we integrate with -- we will continue
to follow release:at-6mo-cycle-end -- but our processes for how we get
there would be different, and that will have an effect on the larger
community.

Longer version...

We captured some notes from our discussion on Thursday afternoon's etherpad:
https://etherpad.openstack.org/p/liberty-ironic-scaling-the-dev-team

Which I've summarized below, and mixed in several themes which didn't
get captured on the 'pad but were discussed somewhere, possibly in a
hallway or on Friday.

Current challenges / observations:
- six weeks' feature freeze is not actually having the desired
stabilizing effect
- the post-release/pre-summit slump only makes that worse
- many folks stop reviewing during this time because they know their
own features won't land, and instead focus their time downstream
- this creates pressure to land features which aren't ready, and
leaves few people to find bugs, write docs, and generally prepare the
release during this window

The alternative we discussed:
- use feature branches for risky / large work, keeping the total # of
branches small, and rebasing them regularly on master (see the sketch
after this list)
- keep trunk moving quickly for smaller / less risky / refactoring changes
- slow down for a week or two before a release, but don't actually
freeze master
- cut releases when new features are available
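A minimal sketch of that feature-branch flow (branch name hypothetical):

  # start a feature branch for a large, risky piece of work
  $ git checkout -b feature/new-driver-api origin/master
  # ...develop, review...
  # rebase regularly so the branch never drifts far from master
  $ git fetch origin && git rebase origin/master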


Note that this will need some scheduling anyway, so that we can slow down a 
week before. So probably some milestone process is still required, wdyt?



- OpenStack coordinated releases are taken from latest independent release
- that release will then get backports & stable maintenance, other
independent releases don't


So no stable branch for other independent releases? What if a serious 
security issue is found in one of these? Will we advise users to 
downgrade to the latest coordinated release?




We think this will accomplish a few things:
- make the developer experience better by being more consistent, thus
keeping developers engaged year-round and increasing the likelihood
they'll find and fix bugs
- reduce stress on core reviewers since there's no crunch time at
the end of a cycle
- allow big changes to bake in a feature branch, rather than in a
series of gerrit patches that need to be continually re-reviewed and
cherry-picked to test them.
- allow operators who wish to use Ironic outside of OpenStack to
consume feature releases more rapidly, while still consuming approved
releases instead of being forced to deploy from trunk


For reference, Michael has posted a tracking change to the governance
repo here: https://review.openstack.org/#/c/185203/

Before Ironic actually makes the switch, I would like us to discuss
and document the approach we're going to take more fully, and get
input from other teams on this approach. Often, the devil is in the
details - and, for instance, I don't yet understand how we'll fit this
approach into SemVer, or how this will affect our use of launchpad to
track features (maybe it means we stop doing that?).


I don't see a problem with using Launchpad, tbh.

Re versions, my biggest concern is pbr: I don't know how its version-
detection black magic is going to react if we go from 2015.1 to, say, 1.0.
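For reference, a minimal sketch of pbr's tag-driven versioning (tag value
hypothetical):

  $ git tag -s 1.0.0 -m "first semver release"
  $ python setup.py --version   # pbr derives the version from the latest tag
  1.0.0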




Input appreciated.

Thanks,
Devananda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominating Serge van Ginderachter to the os-ansible-deployment core team

2015-05-29 Thread Jesse Pretorius
On 29 May 2015 at 00:01, Kevin Carter kevin.car...@rackspace.com wrote:

 I would like to nominate Serge (svg on IRC) for the
 os-ansible-deployment-core team. Serge has been involved with the greater
 Ansible community for some time and has been working with the OSAD project
 for the last couple of months. He has been an active contributor in the
 #openstack-ansible channel and has been participating in the general
 deployment/evolution of the project since joining the channel. I believe
 that his expertise will be invaluable to moving OSAD forward and is an
 ideal candidate for the Core team.

 Please respond with +1/-1s and/or any other concerns


+1 from me :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-05-29 Thread Sahid Orentino Ferdjaoui
On Fri, May 29, 2015 at 08:47:01AM +0200, Jens Rosenboom wrote:
 As the discussion in https://review.openstack.org/179569 still
 continues about whether this is just a bug fix, or an API change
 that will need a new micro version, maybe it makes sense to take this
 issue over here to the ML.

Changing the version of the API probably also makes sense for a bug fix
if it changes the behavior of a command/option in a backward-incompatible
way. I do not believe that is the case for your change.

 My personal opinion is undecided, I can see either option as being
 valid, but maybe after having this open bug for four weeks now we can
 come to some conclusion either way.
 
 Yours,
 Jens
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Puppet] Package updates strategy

2015-05-29 Thread Jan Provazník

On 05/28/2015 08:24 PM, Zane Bitter wrote:

On 28/05/15 03:35, Jan Provaznik wrote:

On 05/28/2015 01:10 AM, Steve Baker wrote:

On 28/05/15 10:54, Richard Raseley wrote:

Zane Bitter wrote:

Steve is working on a patch to allow package-based updates of overcloud
nodes[1] using the distro's package manager (yum in the case of RDO, but
conceivably apt in others). Note we're talking exclusively about minor
updates, not version-to-version upgrades here.

Dan mentioned at the summit that this approach fails to take into
account the complex ballet of service restarts required to update
OpenStack services. (/me shakes fist at OpenStack services.) And
furthermore, that the Puppet manifests already encode the necessary
relationships to do this properly. (Thanks Puppeteers!) Indeed we'd be
doing the Wrong Thing by Puppet if we changed this stuff from under
it.

The problem of course is that neither Puppet nor yum/apt has a view of
the entire system. Yum doesn't know about the relationships between
services and Puppet doesn't know about all of the _other_ packages
that
they depend on.

One solution proposed was to do a yum update first but specifically
exclude any packages that Puppet knows about (the --exclude flag
appears sufficient for this); then follow that up with another Puppet
run using ensure => latest.


My only concern with this approach is how do we collect and maintain the
excludes list. Other than that it sounds reasonable.


Why not swap the order? First run puppet using ensure => latest, which
updates/restarts everything OpenStack depends on, then run yum/apt
update to update any remaining bits. You wouldn't need an exclude list then.


Will ensure => latest update all packages that the given one depends
on, even if it doesn't require new versions? I assumed that it wouldn't,
so by doing Puppet first we would just ensure that we are even less
likely to pick up library changes by restarting services after the
libraries are updated.



We could take advantage of this only when both a service and a 
dependency library are part of the upgrade. Other services depending on 
the lib would have to be restarted outside of Puppet in a post-yum-update 
phase anyway. I wonder if it is worth trying to generate the list of 
excluded packages (to be able to run yum first) given the quite limited 
benefit it provides, but if it's simple enough then +1 :).



A problem with that approach is that it still fails to restart
services
which have had libraries updated but have not themselves been updated.
That's no worse than the pure yum approach though. We might need an
additional way to just manually trigger a restart of services.


Maybe this could be handled at the packaging stage by revving the package
version when there is a known fix in a low-level library, thus
triggering a service restart in the puppet phase.



My concern is that then e.g. a libc update would mean repackaging (bumping
the version of) zillions of other packages; also, zillions of packages would
be downloaded/upgraded on each system because of a new package version.


Yes, no distro ever created works like this - for good reason - and it
is way out of scope for us to try to change that :)


I think that providing users a manual (scripted) way to restart services
after an update would be a sufficient solution (though not as sophisticated).


Maybe there's a way we can poke Puppet to do it.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Lucas Alvares Gomes
Hi

 Note that this will need some scheduling anyway, so that we can slow down a week
 before. So probably some milestone process is still required, wdyt?


We can cut a release and designate it one or two weeks before the
official OpenStack release. But prior to that we can just cut a
release whenever we think it's ready, without following any time
schedule.

 - OpenStack coordinated releases are taken from latest independent release
 - that release will then get backports & stable maintenance, other
 independent releases don't


 So no stable branch for other independent releases? What if a serious security
 issue is found in one of these? Will we advise users to downgrade to the
 latest coordinated release?


Good point, but I think it would be extremely hard and costly to
maintain a bunch of stable branches at once. So having a stable branch
every 6 months to follow the OpenStack model seems enough. If we find
a serious security issue, we could advise the user to downgrade to the
last coordinated release to which the fix was backported, or, if the user
is following the feature-based releases of Ironic, we can fix the problem,
cut a new release, and advise the user to use that instead.


 Input appreciated.


I think it's a good plan. We have to keep in mind that maybe it
won't work, but I think we should definitely try it out and see how
it goes. That said, I'm also confused about the versioning (with
Launchpad and pbr), but Swift already does something similar, right? We
should take a look at how they do it.

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-05-29 Thread Lingxian Kong
On Mon, May 25, 2015 at 11:37 PM, Chris Friesen
chris.frie...@windriver.com wrote:


 Also, I think that if nova can't schedule max count instances, but can
 schedule at least min count instances, then it shouldn't put the
 unscheduled ones into an error state--it should just delete them.

 Thoughts?
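For context, a minimal sketch of the call under discussion (flavor/image
names illustrative; flags as exposed by python-novaclient):

  # ask Nova for at least 2 and at most 5 instances in a single request
  $ nova boot --flavor m1.small --image cirros \
      --min-count 2 --max-count 5 test-vm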


Frankly speaking, I agree with you that it's indeed a problem we should fix.


-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Proposing Assaf Muller for the Neutron Core Reviewer Team

2015-05-29 Thread Daniel Comnea
My vote doesn't count, but as an operator I'd give Assaf +100 for all the
great knowledge he has shared through his blog - an incredible source of
information for everyone who wants to understand, in simple terms, how
Neutron does things.

Dani

On Fri, May 29, 2015 at 4:13 AM, Akihiro Motoki amot...@gmail.com wrote:

 +1

 2015-05-28 22:42 GMT+09:00 Kyle Mestery mest...@mestery.com:

 Folks, I'd like to propose Assaf Muller to be a member of the Neutron
 core reviewer team. Assaf has been a long time contributor in Neutron, and
 he's also recently become my testing Lieutenant. His influence and
 knowledge in testing will be critical to the team in Liberty and beyond. In
 addition to that, he's done some fabulous work for Neutron around L3 HA and
 DVR. Assaf has become a trusted member of our community. His review stats
 place him in the pack with the rest of the Neutron core reviewers.

 I'd also like to take this time to remind everyone that reviewing code is
 a responsibility, in Neutron the same as other projects. And core reviewers
 are especially beholden to this responsibility. I'd also like to point out
 that +1/-1 reviews are very useful, and I encourage everyone to continue
 reviewing code even if you are not a core reviewer.

 Existing Neutron cores, please vote +1/-1 for the addition of Assaf to
 the core reviewer team.

 Thanks!
 Kyle

 [1] http://stackalytics.com/report/contribution/neutron-group/180

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-05-29 Thread Jens Rosenboom
As the discussion in https://review.openstack.org/179569 still
continues about whether this is just a bug fix, or an API change
that will need a new micro version, maybe it makes sense to take this
issue over here to the ML.

My personal opinion is undecided, I can see either option as being
valid, but maybe after having this open bug for four weeks now we can
come to some conclusion either way.

Yours,
Jens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Alexander Tivelkov
Hi Kevin,

 Has the Glance Artifact Repository implemented enough bits to have Heat
 and/or Murano artefacts yet?


Most of the code is there already; a couple of patchsets are still in review,
but we'll land them soon. L1 is a likely milestone to have it ready in
master.


Also, has there been any work on Exporting/Importing them through some
 defined format (tarball?) that doesn't depend on the artefact type?


This one is not completely implemented: the design is ready (the spec has had
this feature from the very beginning) and a PoC was done. The final
implementation is likely to happen in the L cycle.


I've been talking with the Heat folks on starting a blueprint to allow heat
 templates to use relative URL's instead of absolute ones. That would allow
 a set of Heat templates to be stored in one artefact in Glance.


That's awesome.
I'd also consider allowing Heat to reference templates by their artifact
IDs in Glance, just as Nova does for images.



  --
 From: Alexander Tivelkov [ativel...@mirantis.com]
 Sent: Thursday, May 28, 2015 4:46 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

  Hi folks,

 I believe that at least part of the filtering we are discussing here may
 be done on the client side, if the client is sophisticated enough to be
 aware of the capabilities of the local cloud.
 And by "sophisticated client" I mean Glance V3 (previously known as the
 Artifact Repository), which may (and, in my vision, should) become the
 ultimate consumer of the app catalog on the cloud side.

 Each asset type (currently Image, Murano Package, Heat template; more to
 come) should be implemented as a Glance Artifact type (i.e. a plugin), and
 may define its required capabilities as type-specific metadata fields
 (for example, a Heat-template type may list plugins which are required to run
 the template; a Murano-package type may set the minimum required version of
 the Core library, etc). The logic needed to validate these capabilities
 may be put into this type-specific plugin as well. This custom logic
 gets executed when the artifact is exported from the app catalog
 into the particular cloud.

 In this case the compatibility of a particular artifact with a particular
 cloud will be validated by that cloud itself when the app catalog is
 browsed. Also, if the cloud does not support some artifact types at
 all (e.g. does not have Murano installed and thus cannot utilize Murano
 Packages), then it does not have the Murano plugin in its Glance and thus
 will not be able to import Murano artifacts from the catalog.

  Hope this makes sense.


  --
  Regards,
 Alexander Tivelkov

 On Thu, May 28, 2015 at 10:29 AM, Morgan Fainberg 
 morgan.fainb...@gmail.com wrote:



 On Wed, May 27, 2015 at 5:33 PM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Wed, May 27, 2015 at 4:27 PM, Fox, Kevin M kevin@pnnl.gov
 wrote:

  I'd say tools that utilize OpenStack, like the knife openstack
 plugin, are not something that you would probably go to the catalog to
 find. Also, the recipes that you would use with knife would not be
 specific to OpenStack in any way, so you would just be duplicating the
 config management system's own catalog in the OpenStack catalog, which
 would be error-prone. Duplicating all the chef recipes, docker
 containers, puppet stuff, and so on is a lot of work...


  I am very much against duplicating things, including chef recipes that
 use the openstack plugin for knife. But we can still easily point to
 external resources from apps.openstack.org. In fact we already do (
 http://apps.openstack.org/#tab=heat-templates&asset=Lattice).



 The vision I have for the Catalog (I can be totally wrong here, lets
 please discuss) is a place where users (non computer scientists) can visit
 after logging into their Cloud, pick some app of interest, hit launch, and
 optionally fill out a form. They then have a running piece of software,
 provided by the greater OpenStack Community, that they can interact with,
 and their Cloud can bill them for. Think of it as the Apple App Store for
 OpenStack.  Having a reliable set of deployment engines (Murano, Heat,
 whatever) involved is critical to the experience I think. Having too many
 of them though will mean it will be rare to have a cloud that has all of
 them, restricting the utility of the catalog. Too much choice here may
 actually be a detriment.


  Calling this a catalog, while it sounds accurate, is confusing since
 Keystone already has a catalog. Naming things is unfortunately a
 difficult problem.


  This in itself turns into a really unfortunate usability issue for
 a number of reasons; colliding namespaces that end users need to be aware of
 serve to generate confusion. Even the choices made naming things currently
 in use by OpenStack (I openly admit Keystone is particularly bad in 

Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-05-29 Thread John Garbutt
On 29 May 2015 at 09:00, Sahid Orentino Ferdjaoui
sahid.ferdja...@redhat.com wrote:
 On Fri, May 29, 2015 at 08:47:01AM +0200, Jens Rosenboom wrote:
 As the discussion in https://review.openstack.org/179569 still
 continues about whether this is just a bug fix, or an API change
 that will need a new micro version, maybe it makes sense to take this
 issue over here to the ML.

 Changing the version of the API probably also makes sense for a bug fix
 if it changes the behavior of a command/option in a backward-incompatible
 way. I do not believe that is the case for your change.

 My personal opinion is undecided, I can see either option as being
 valid, but maybe after having this open bug for four weeks now we can
 come to some conclusion either way.

Apologies for this; we are still trying to evolve the rules for when
to bump the API microversion, and there will be some pain while we work
that out :(


From the summit discussion, I think we got three things roughly agreed
(although we have not yet written them up into a devref document to make
the agreement formal, and we need to do that ASAP):

1)
We agreed changing a 500 error to an existing error (or making it
succeed in the usual way) is a change that doesn't need a version
bump; it's a bug fix.

2)
We also agreed that all microversion bumps need a spec, to help avoid
adding more bad things to the API as we try to move forward.
This is heavyweight. In time, we might find certain good patterns
where we want to relax that restriction, but we haven't done enough
changes to agree on those patterns yet. This will mean we are moving a
bit slower at first, but it feels like the right trade-off against
releasing (i.e. something that lands in any commit on master) an API
with a massive bug we have to support for a long time.

3)
Discuss other cases as they come up, and evolve the plans based on the
examples that arise, with a focus on bumping the version being
(almost) free, and useful for clients to work out what has changed.
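For readers following along, a client opts into a microversion per request
via a header; a minimal sketch (endpoint, token and version illustrative):

  $ curl -s http://controller:8774/v2.1/servers \
      -H "X-Auth-Token: $TOKEN" \
      -H "X-OpenStack-Nova-API-Version: 2.4"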

Is that how everyone else remembers that discussion?


Now, when it comes to your change: it is a bug in the default policy.
Sadly this policy is also quite hard-wired to admin vs non-admin. We
still need work to make policy more discoverable, so I don't think we
need to make this any more discoverable as such.

Having said all that, we probably need to look at this case more
carefully, after your patch has merged, and work out how this should
work now that we are assuming strong validation, granular policy, etc.

But maybe there is something I am missing here?


Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 11:47:36 +0200 (+0200), Thierry Carrez wrote:
[...]
 As far as vulnerability management goes, we already publish the
 master fix as part of the advisory, so people can easily find
 that. The only thing the VMT might want to reconsider is: when an
 issue is /only/ present in the master branch and was never part of
 a release, it currently gets fixed silently there, without an
 advisory being published. I guess that could be evolved to
 publish an advisory if the issue was in any released version.
 That would still not give users of intermediary versions a pure
 backport for their version, but give them notice and a patch to
 apply. I also suspect that for critical issues Ironic would issue
 a new intermediary release sooner rather than later.

This is what we've historically done for master-branch-only projects
anyway, so I don't see it as a new process. Works just fine, but as
you say we should make sure we know at the time of writing the
advisory what the next release version number will be (and hopefully
it comes along shortly after the fix merges so people can just
upgrade to it).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-05-29 Thread John Garbutt
On 27 May 2015 at 23:36, Robert Collins robe...@robertcollins.net wrote:
 On 26 May 2015 at 03:37, Chris Friesen chris.frie...@windriver.com wrote:

 Hi all,

 I've just opened a bug around booting multiple instances at once, and it was
 suggested on IRC that I mention it here to broaden the discussion around the
 ideal behaviour.

 The bug is at:  https://bugs.launchpad.net/nova/+bug/1458122

 Basically the problem is this:

 When booting up instances, nova allows the user to specify a min count and
 a max count.  So logically, this request should be considered successful
 if at least min count instances can be booted.

 Currently, if the user has quota space for max count instances, then nova
 will try to create them all. If any of them can't be scheduled, then the
 creation of all of them will be aborted and they will all be put into an
 error state.

The new quota ideas we discussed should make other options for this a
lot simpler, I think:
https://review.openstack.org/#/c/182445/
But lets skip over that for now...

 Arguably, if nova was able to schedule at least min count instances (which
 defaults to 1) then it should continue on with creating those instances that
 it was able to schedule. Only if nova cannot create at least min count
 instances should nova actually consider the request as failed.

 Also, I think that if nova can't schedule max count instances, but can
 schedule at least min count instances, then it shouldn't put the
 unscheduled ones into an error state--it should just delete them.

 I think taking successfully provisioned VMs and rolling them back is
 poor when the user's request was strictly met - I'm in favour of your
 proposals.

The problem here is having a nice way to explicitly tell the users
about what worked and what didn't. Currently the instances are put in an
error state because it's the clearest way to tell the user that the build
failed. Deleting them doesn't have the same visibility; it can look
like they just vanished.

We do have a (straw man) proposed solution for this. See the Task API
discussion here:
https://etherpad.openstack.org/p/YVR-nova-error-handling

Given this also impacts discussions around cancelling operations like
live-migrate, I would love for a sub-group to form and push forward
the important work on building a Task API. I think Andrew Laski has
committed to writing up a backlog spec for this current proposal (which
has gained a lot of support), so it could be taken on by some others
who want to move this forward. Do you fancy getting involved with
that?


Having said all that, I am very tempted to say we should deprecate the
min_count parameter in the API, keep the current behaviour for old
version requests, and maybe even remove the max_count parameter. We
could look to Heat to do a much better job of this kind of
orchestration. This is very much in the spirit of:
http://docs.openstack.org/developer/nova/devref/project_scope.html#no-more-orchestration


Either way, given the impact of the bug fix (i.e. it touches the
API, and would probably need a microversion bump), I think it would
be great to actually write up your proposal as a nova-spec (backlog or
targeted at Liberty; either way is cool). I think a spec review would
be a great way to reach agreement on the best approach here.


Chris, does that sound like an approach that would work for you?


Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][sahara]

2015-05-29 Thread Rob Cresswell (rcresswe)
This was a known bug with the modals, which was fixed yesterday. Update horizon 
master, and you’re good to go :)

Bug: https://bugs.launchpad.net/horizon/+bug/1459115
Patch: https://review.openstack.org/#/c/185927/

Rob


From: lu jander juvenboy1...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, 29 May 2015 03:14
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Horizon][sahara]

Hi, guys.

I installed the latest devstack many times, but it seems there are many issues 
with it - one snapshot pic below for example. Have you met this recently?

[image attachment 1]
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Derek Higgins



On 28/05/15 22:09, Thomas Goirand wrote:

On 05/28/2015 02:53 PM, Derek Higgins wrote:



On 28/05/15 12:07, Jaume Devesa wrote:

Hi Thomas,

Delorean is a tool to build rpm packages from master branches (maybe any
branch?) of OpenStack projects.

Check out here:
https://www.rdoproject.org/packaging/rdo-packaging.html#master-pkg-guide


Following those instructions you'll notice that the RPMs are being
built using rpmbuild inside a Docker container; if expanding to add deb
support, this is where we could plug in sbuild.


sbuild by itself already provides the single-use throwaway chroot
feature, with very effective back ends like AUFS or LVM snapshots.
Adding docker would only have the bad effect of removing the package
caching feature of sbuild, so it makes no sense to use it, as sbuild
would constantly download from the internet instead of using its package
cache.

Also, it is my understanding that infra will not accept long-living
VMs, and prefers to spawn new instances. In such a case, I don't see
the point of using docker, which would be a useless layer. In fact,
I was even thinking that in this case sbuild wouldn't be required,
and we could simply use mk-build-deps and git-buildpackage without
even using sbuild. The same dependency resolver (ie: apt) would then
be in use, just without the added sbuild layer. I have used that already
to automatically build backports, and it was really fast.
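For reference, a sketch of that sbuild-less flow with standard Debian
tooling:

  # install the build-dependencies via a throwaway metapackage
  $ mk-build-deps --install --remove debian/control
  # build straight from the packaging git tree
  $ gbp buildpackage --git-ignore-new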

Did I miss something here? Apart from the fact that Docker is trendy,
what feature would it bring?



The reason I chose docker was essentially to have a chroot to build the 
packages in while having the various distro images easily available; 
other people have shown interest in using mock in the past, so we may 
switch to it at some stage in the future. What's important, I think, is 
that we can change things to use sbuild without docker if that is what 
works best for you for debs.


I think the feature in delorean that is most useful is that it will 
continue to maintain a history of usable package repositories 
representing the OpenStack projects over time; for this we would need a 
long-running instance, but that can happen outside of infra.


Once we have all of the packaging available in infra we can use any tool 
to build it as part of CI; my preference for delorean is that it would 
match how we would want to run a long-running delorean server.


All of this needs to be preceded by actually importing the packaging 
into review.openstack.org, so let's talk to infra first about how we 
should go about that, and we can converge on processes after that.



I'm traveling for a lot of next week but would like to try to start 
working on importing things to gerrit soon, so I will try to get some 
prep done over the next week to import the RDO packaging, but in reality 
it will probably be the following week before it's ready (unless of 
course somebody else wants to do it).





By the way, one question: does Delorean use mock? We had the discussion
during an internal meeting, and we were not sure about this...


Nope, not using mock currently



Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Thierry Carrez
Devananda van der Veen wrote:
 [...]
 The alternative we discussed:
 - use feature branches for risky / large work, keeping total # of
 branches small, and rebasing them regularly on master
 - keep trunk moving quickly for smaller / less risky / refactoring changes
 - slow down for a week or two before a release, but don't actually
 freeze master
 - cut releases when new features are available
 - OpenStack coordinated releases are taken from latest independent release
 - that release will then get backports & stable maintenance, other
 independent releases don't

With the already-mentioned caveats on feature branch usage, I think this
makes sense, for simpler or more stable projects that can actually
release something that works without a large stabilization period.

That said, it's worth noting that the way Swift currently aligns with
the coordinated release at the end of a cycle is a little different:

- for intermediary releases, Swift would just soft-freeze at a proposed
SHA and tag that if everyone is fine with it after some time (which is
what you propose here)

- for the final release Swift tags an RC1 (which triggers a
stable/$SERIES release branch) and has the option of doing other RCs if
a critical issue is found before the coordinated release date; the
final is then re-tagged from the last RC

In all cases, for stable maintenance/CI reasons, we need to cut a
stable/$SERIES branch for every project in the weeks preceding the
coordinated release date -- but I guess we have two options there.

(1) we could adopt the Swift RC model and special-case the release
process for the final release.

(2) we could just create the stable branch from your presumed last
release, and in case a critical issue is found, backport the fix and tag
a point release there (and consider that point release the $SERIES
release)

Since I would only recommend simpler / more stable projects switch to
that model for the time being (projects that are less likely to need
release candidates), I think (2) makes sense (and I could see Swift
adopting it as well).
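A sketch of option (2) in git terms (tag and branch names hypothetical):

  # branch from the last independent release...
  $ git checkout -b stable/liberty 4.0.0
  # ...backport the critical fix and tag a point release
  $ git cherry-pick -x <fix-sha>
  $ git tag -s 4.0.1 -m "liberty point release"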

 [...]
 Before Ironic actually makes the switch, I would like us to discuss
 and document the approach we're going to take more fully, and get
 input from other teams on this approach. Often, the devil is in the
 details - and, for instance, I don't yet understand how we'll fit this
 approach into SemVer, or how this will affect our use of launchpad to
 track features (maybe it means we stop doing that?).

As far as semver goes, since you switch to independent releases you
can't stick to a common (2015.1.0) version anyway, so I think it's
less confusing to use semver versioning than conflicting
date-based ones.

As far as Launchpad goes, I don't think switching to that model changes
anything. Oslo libraries (which also follow a semver-driven,
multiple-release + final one model) track features and bugfixes using
Launchpad alright, and the series graph in LP actually helps in figuring
out where you are. Together with the proposed changes in release
tracking to report after the fact more than try to predict (see thread I
posted yesterday), I think it would work just fine.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Derek Higgins



On 28/05/15 20:58, Paul Belanger wrote:

On 05/27/2015 05:26 PM, Derek Higgins wrote:

On 27/05/15 09:14, Thomas Goirand wrote:


Hi all,

tl;dr:
- We'd like to push distribution packaging of OpenStack to upstream
gerrit with reviews.
- The intention is to better share the workload, and improve the overall
QA for packaging *and* upstream.
- The goal is *not* to publish packages upstream.
- There's an ongoing discussion about using stackforge or openstack.
This isn't, IMO, that important; what's important is to get started.
- There's an ongoing discussion about using a distribution-specific
namespace; my own opinion here is that using /openstack-pkg-{deb,rpm} or
/stackforge-pkg-{deb,rpm} would be the most convenient because of a
number of technical reasons, like the number of Git repositories.
- Finally, let's not discuss for too long and let's do it!!! :)

Longer version:

Before I start: some stuff below is just my own opinion, others are just
given facts. I'm sure the reader is smart enough to guess which is what,
and we welcome anyone involved in the project to voice an opinion if
he/she differs.

During the Vancouver summit, operations, Canonical, Fedora and Debian
people gathered and collectively expressed the will to maintain
packaging artifacts within the upstream OpenStack Gerrit infrastructure.
We haven't decided all the details of the implementation, but spent the
Friday morning together with members of the infra team (hi Paul!) trying
to figure out what and how.

A number of topics have been raised, which needs to be shared.

First, we've been told that such a topic deserved a message to the dev
list, in order to let groups who were not present at the summit know. Yes,
there was a consensus among distributions that this should happen, but
still, it's always nice to let everyone know.

So here it is. Suse people (and other distributions), you're welcome to
join the effort.

- Why do this

It's been clear to both the Canonical/Ubuntu teams and Debian (ie: myself)
that we'd be way more effective if we worked better together, in a
collaborative fashion, using a review process like upstream Gerrit.
But also, we'd like to welcome anyone, and especially the operations
folks, to contribute and give feedback. Using Gerrit is the obvious way
to give everyone a say on what we're implementing.

As OpenStack welcomes more and more projects every day, it makes
even more sense to spread the workload.

This is becoming easier for the Ubuntu guys as Launchpad now understands
not only Bazaar, but also Git.

We'd start by merging all of our packages that aren't core packages
(like all the non-OpenStack maintained dependencies, then the Oslo libs,
then the clients). Then we'll see how we can try merging core packages.

Another reason is that we believe working with the upstream OpenStack
infra will improve the overall quality of the packages. We want to be
able to run a set of tests at build time, which we already do on each
distribution, but now we want this on every proposed patch. Later on,
when we have everything implemented and working, we may explore doing a
package-based CI on every upstream patch (though we're far from doing
this, so let's not discuss it right now please; this is a very long-term
goal only, and we will have a huge improvement already *before*
this is implemented).

- What it will *not* be
===
We do not have the intention (yet?) to publish the resulting packages
built on upstream infra. Yes, we will share the same Git repositories,
and yes, the infra will need to keep a copy of all builds (for example,
because core packages will need oslo.db to build and run unit tests).
But we will still upload to each distribution's separate repositories.
So packages published by the infra aren't currently discussed. We could
get to this topic once everything is implemented, which may be nice
(because we'd have packages following trunk), though please refrain from
engaging in this topic right now: having the implementation done is more
important for the moment. Let's try to stay on track and be
constructive.

- Let's keep efficiency in mind
===
Over the last few years, I've been able to maintain all of OpenStack in
Debian with little to no external contribution. Let's hope that the
Gerrit workflow will not slow down the packaging work too much, even if
there's an unavoidable overhead. Hopefully, we can implement some
liberal ACL policies for the core reviewers so that the Gerrit workflow
doesn't slow anyone down too much. For example, we may be able to create
new repositories very fast, and it may be possible to self-approve some
of the most trivial patches (for things like a typo in a package
description, adding new debconf translations, and other obvious fixes, we
shouldn't waste our time).

There's a middle ground between the current system (ie: only write
access ACLs for 

Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Derek Higgins



On 29/05/15 02:54, Steve Kowalik wrote:

On 29/05/15 06:41, Haïkel wrote:

Here's the main script to rebuild an RPM package.
https://github.com/openstack-packages/delorean/blob/master/scripts/build_rpm.sh

The script basically uses rpmbuild to build packages; we could have a
build_deb.sh that uses sbuild, and add dockerfiles for the Debian/Ubuntu
supported releases.


I have a preliminary patch locally that adds support for building for
both Debian and Ubuntu. I will be applying some polish next week and
then working with the Delorean guys to get it landed.



Thanks Steve, looking forward to seeing it

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Thierry Carrez
Lucas Alvares Gomes wrote:
 - OpenStack coordinated releases are taken from latest independent release
 - that release will then get backports & stable maintenance, other
 independent releases don't

 So no stable branch for other independent releases? What if a serious security
 issue is found in one of these? Will we advise users to downgrade to the
 latest coordinated release?
 
 Good point, but I think it would be extremely hard and costly to
 maintain a bunch of stable branches at once. So having a stable branch
 every 6 months to follow the OpenStack model seems enough. If we find
 a serious security issue, we could advise the user to downgrade to the
 last coordinated release to which the fix was backported, or, if the user
 is following the feature-based releases of Ironic, we can fix the problem,
 cut a new release, and advise the user to use that instead.

Right, there is no way under our current setup to support more than a
(common) stable branch every 6 months. That means if people use
intermediary releases and a vulnerability is found, they can either
backport the fix to their code, work around the issue until the next
version is released, or downgrade to tracking the last stable branch.

As far as vulnerability management goes, we already publish the master
fix as part of the advisory, so people can easily find that. The only
thing the VMT might want to reconsider is: when an issue is /only/
present in the master branch and was never part of a release, it
currently gets fixed silently there, without an advisory being
published. I guess that could be evolved to publish an advisory if the
issue was in any released version. That would still not give users of
intermediary versions a pure backport for their version, but give them
notice and a patch to apply. I also suspect that for critical issues
Ironic would issue a new intermediary release sooner rather than later.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] IRC meetings agenda is now driven from Gerrit !

2015-05-29 Thread Thierry Carrez
yatin kumbhare wrote:
 Great Work!
 
 New meetings or meeting changes would be proposed in Gerrit, and
 check/gate tests would make sure that there aren't any conflict.
 
 --Will this tell us upfront which meeting slots (times and IRC meeting
channels) are available on any given day of the week? This could bring
down the number of patchsets and gate test failures, maybe :)

No, it doesn't tell you about empty slots. The simplest way to find them is
still to load the iCal into some calendaring app or Google Calendar and
check for interesting available slots.
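For example (URL as published by the irc-meetings jobs):

  $ curl -O http://eavesdrop.openstack.org/irc-meetings.ical
  # then import irc-meetings.ical into your calendaring app of choice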

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][plugin][astute][UI] DSL restrictions with an action: none to display a message?

2015-05-29 Thread Swann Croiset
Did the trick! Thanks

for the record, commands typed:
 $ workon fuel-web
 $ cd fuel-web/nailgun
 $ pip install -r requirements.txt
 $ pip install nodeenv
 $ nodeenv nenv
 $ source nenv/bin/activate
 $ npm install -g gulp bower
 $ gulp bower
 $ scp -r static/ root@master:/tmp/
 master# dockerctl copy /tmp/static nailgun:/usr/share/nailgun/
 master# dockerctl shell nailgun
 nailgun# rsync -arv --delete /tmp/static/ /usr/share/nailgun/static/
...
then clear the browser cache

On Thu, May 28, 2015 at 11:19 AM, Julia Aranovich jkirnos...@mirantis.com
wrote:

 You can set up the fake dev environment [1] based on the latest master,
 run 'npm install && gulp bower' from the fuel-web/nailgun directory to load
 the uncompressed UI and its dependencies, and replace the nailgun/static dir
 (compressed UI) on your master node with the static dir from the fake
 environment.

 But this is a rather cumbersome way, and something can go wrong :) so I
 would still suggest you re-install with the latest iso.

 Hope this will be helpful for you!

 [1] -
 https://github.com/stackforge/fuel-web/blob/master/docs/develop/nailgun/development/env.rst#running-nailgun-in-fake-mode

 On Thu, May 28, 2015 at 11:29 AM, Swann Croiset scroi...@mirantis.com
 wrote:

 Many thanks Julia for the quick fix!

 Is it possible to update my Fuel master to test this patch, without
 downloading a new ISO, to avoid reinstalling my env?

 On Wed, May 27, 2015 at 6:04 PM, Julia Aranovich jkirnos...@mirantis.com
  wrote:

 Hi,

 That's an issue, of course. Settings should definitely support the 'none'
 action in their restrictions. Thank you for catching it!
 And we've prepared the *fix*: https://review.openstack.org/#/c/186049/. It
 should be merged ASAP.

 Best regards,
 Julia

 On Wed, May 27, 2015 at 5:57 PM, Swann Croiset scroi...@mirantis.com
 wrote:

 Folks,

 With our plugin UI definition [0] I'm trying to use a restriction with
 'action: none' to display a message, but nothing happens.
 According to the doc this should just work [1]; btw I didn't find any
 similar example in fuel-web/nailgun.
 So I guess I hit a bug here, or something is wrong with the plugin
 integration, or I missed something.

 Can somebody confirm the bug and help determine whether it should be
 filed on the 'fuel-plugin' or 'fuel' Launchpad project?

 Thanks

 [0]
 https://review.openstack.org/#/c/184981/4/environment_config.yaml,cm
 [1]
 https://github.com/stackforge/fuel-web/blob/master/docs/develop/nailgun/customization/settings.rst#restrictions



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards,
 Julia Aranovich,
 Software Engineer,
 Mirantis, Inc
 +7 (905) 388-82-61 (cell)
 Skype: juliakirnosova
 www.mirantis.ru
 jaranov...@mirantis.com jkirnos...@mirantis.com


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kind Regards,
 Julia Aranovich,
 Software Engineer,
 Mirantis, Inc
 +7 (905) 388-82-61 (cell)
 Skype: juliakirnosova
 www.mirantis.ru
 jaranov...@mirantis.com jkirnos...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][tempest]How to enable lbaas related test cases in templest?

2015-05-29 Thread Lily.Sing
Hi all,

I'm trying to test neutron LBaaS with tempest, but I find that all the API
and scenario test cases related to it are skipped. Is there a way to enable
these test cases? I have the LBaaS service enabled already. Also, is there
any detailed documentation about LBaaS v2?
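(For reference, a sketch of the knob that usually gates these tests; tempest
skips extension tests that aren't advertised as enabled. Option names are
from tempest's network feature flags, and exact values depend on your
deployment:)

  [network-feature-enabled]
  api_extensions = all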

Thanks.

Best regards,
Lily Xing(邢莉莉)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nodepool: puppet fails to create image

2015-05-29 Thread Eduard Matei
Thanks,
I updated nodepool and nodepool scripts and now it seems the image is
building.

Eduard


On Thu, May 28, 2015 at 6:39 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-05-28 11:58:43 +0300 (+0300), Eduard Matei wrote:
  We had some issues with jenkins/devstack so i updated both the jenkins
  version and the devstack.
  When starting nodepool it tries to create a new image but it fails after
 ~5
  minutes:
 
  2015-05-28 10:49:21,509 ERROR nodepool.image.build.local_01.d-p-c: + sudo
  puppet apply --detailed-exitcodes
  --modulepath=/root/system-config/modules:/etc/puppet/modules
  -e 'class {'\''openstack_project::single_use_slave'\'':
  sudo => true, thin => true, python3 => false, include_pypy => false,
  all_mysql_privs => false, }'
 [...]
  2015-05-28 10:49:21,509 ERROR nodepool.image.build.local_01.d-p-c: Error:
  Invalid parameter python3 on Class[Openstack_project::Single_use_slave]
  at line 1 on node d-p-c-1432802582.template.openstack.org
 [...]
  Does anyone have any idea how to fix this?
 
  Nodepool was not updated.
  Jenkins is 1.596.3
  Devstack is 2015.1.1

 Looks like you've pulled in a new version of our openstack_project
 Puppet module which includes https://review.openstack.org/151714 but
 have not updated your nodepool prep scripts since
 https://review.openstack.org/151715 went in almost 4 months ago to
 support it.
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Eduard Biceri Matei, Senior Software Developer
www.cloudfounders.com | eduard.ma...@cloudfounders.com

CloudFounders, The Private Cloud Software Company

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Sphinx 1.3 is coming, breaking almost every docs

2015-05-29 Thread Thomas Goirand
Hi,

Currently, in the global requirements, we have:
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3

However, like it or not, sphinx 1.3 will soon reach Debian unstable.
Already, I see something broken with it:
https://gist.github.com/mitya57/1643a7c1666b650a0de5

(if you didn't notice, that's sphinxcontrib-programoutput)

sphinxcontrib-programoutput breaks *only* because the html theme needs
to be called 'classic' instead of 'default'. To make things worse, the
'classic' theme isn't available in version 1.2 of Sphinx.

So we need to fix things before they break badly. I have suggested to
the python-sphinx package maintainer that we make the transition smoother
than upstream did. Maybe just remove the warning, which may (like above) be
wrongly taken as a hard failure? Then maybe have reverse dependencies check
whether the classic theme is available, and use 'default' as a fall-back?
But then, how does one test for the presence of a given theme name on the
system? Suggestions would be warmly welcome to fix things at the
distribution level and avoid all the pain.
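One possible probe (a sketch; 'classic' only ships with Sphinx >= 1.3, and
this assumes sphinx.version_info is importable):

  $ python -c "import sphinx; print('classic' if sphinx.version_info >= (1, 3) else 'default')"
  classic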

BTW, I find this kind of problem really annoying. How come the upstream
Sphinx authors are breaking things like this? Just because of the rename
of the default sphinx theme from 'default' to 'classic'? That's really
not worth the amount of pain everyone will have. The person who did this
should be taking care of unbreaking all the things he broke. :(

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-05-29 Thread Sylvain Bauza



Le 29/05/2015 14:27, John Garbutt a écrit :

On 27 May 2015 at 23:36, Robert Collins robe...@robertcollins.net wrote:

On 26 May 2015 at 03:37, Chris Friesen chris.frie...@windriver.com wrote:

Hi all,

I've just opened a bug around booting multiple instances at once, and it was
suggested on IRC that I mention it here to broaden the discussion around the
ideal behaviour.

The bug is at:  https://bugs.launchpad.net/nova/+bug/1458122

Basically the problem is this:

When booting up instances, nova allows the user to specify a min count and
a max count.  So logically, this request should be considered successful
if at least min count instances can be booted.

Currently, if the user has quota space for max count instances, then nova
will try to create them all. If any of them can't be scheduled, then the
creation of all of them will be aborted and they will all be put into an
error state.

The new quota ideas we discussed should make other options for this a
lot simpler, I think:
https://review.openstack.org/#/c/182445/
But lets skip over that for now...


Arguably, if nova was able to schedule at least min count instances (which
defaults to 1) then it should continue on with creating those instances that
it was able to schedule. Only if nova cannot create at least min count
instances should nova actually consider the request as failed.

Also, I think that if nova can't schedule max count instances, but can
schedule at least min count instances, then it shouldn't put the
unscheduled ones into an error state--it should just delete them.
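
For illustration, the request being discussed looks roughly like this
with python-novaclient (the credentials, image and flavor IDs below are
placeholders, and the positional auth signature is the 2015-era one):

    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://keystone:5000/v2.0')
    # Ask for between 1 and 5 instances in a single request. Today, if
    # even one of the 5 can't be scheduled, all of them end up in an
    # error state; the proposal is to succeed whenever at least
    # min_count instances can be scheduled.
    nova.servers.create(name='test', image='IMAGE_UUID',
                        flavor='FLAVOR_ID',
                        min_count=1, max_count=5)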

I think taking successfully provisioned VMs and rolling them back is
poor when the user's request was strictly met -- I'm in favour of your
proposals.

The problem here is having a nice way to explicitly tell the users
what worked and what didn't. Currently the instance is put in an error
state because that's the good way to tell the user that the build
failed. Deleting instances doesn't have the same visibility; it can look
like they just vanished.

We do have a (straw man) proposed solution for this. See the Task API
discussion here:
https://etherpad.openstack.org/p/YVR-nova-error-handling

Given this also impacts discussions around cancelling operations like
live-migrate, I would love for a sub group to form and push forward
the important work on building a Task API. I think Andrew Laski has
committed to writing up a backlog spec for this current proposal (that
has gained a lot of support), so it could be taken on by some others
who want to move this forward. Do you fancy getting involved with
that?


Count me in as well, since I'm interested in contributing to the Tasks 
API (by the level of free time I have :-) )
I will discuss with Andrew once he's back from holidays to see when we 
can setup a subteam meeting for wrapping that up.


That said, all of that has nothing to do with the original question IIUC 
since it's a conductor/scheduler problem and not a quota issue.




Having said all that, I am very tempted to say we should deprecate the
min_count parameter in the API, keep the current behaviour for old
version requests, and maybe even remove the max_count parameter. We
could look to Heat to do a much better job of this kind of
orchestration. This is very much in the spirit of:
http://docs.openstack.org/developer/nova/devref/project_scope.html#no-more-orchestration


Either which way, given the impact of the bug fix (i.e. it touches the
API, and would probably need a micro version bump), I think it would
be great to actually write up your proposal as a nova-spec (backlog or
targeted at liberty, either way is cool). I think a spec review would
be a great way to reach a good agreement on the best approach here.


100% agreed with John: we should just drop the min-count parameter and 
leave the max-count parameter as deprecated.


Chris, happy to write the spec?

-Sylvain




Chris, does that sound like an approach that would work for you?


Thanks,
John



Re: [openstack-dev] [TripleO] [Puppet] Package updates strategy

2015-05-29 Thread Martin Mágr


On 05/28/2015 05:32 PM, James Slagle wrote:

Then I'm +1 for running Puppet with 'ensure => latest' first and then
'yum update -y'. Seems like the ideal solution from my point of view.

How would this solve the library update problem though? If a new
version of a library is released to address a security issue or
whatnot, first you'd run Puppet, which doesn't see anything new. Then
the 'yum update -y' would pick up the library update, but no services
would get restarted. I don't think we can convince all distros to rev
every package version that might depend on such a library update.

Take something like Heartbleed, for example: when the updated openssl
was released, there wasn't a corresponding rebuild of every package
out there that requires openssl to set a minimum dependency on the new
openssl version (that I know of).


Hmpf, it won't solve the library update problem. I didn't think about 
such a case.




I don't know puppet internals at all, but what about some type of
Puppet provider that overrides latest to make Puppet think that it's
actually updated a package even if no such update exists? That could
trigger Puppet to then restart the services b/c it thinks it's updated
something.


Service restarts are not triggered by package providers, but by 
dependencies stated in the modules.

So using a dummy package provider does not seem ideal to me.



Similar to what TripleO is doing with the norpm provider where the
install is a noop:
https://github.com/stackforge/puppet-tripleo/blob/master/lib/puppet/provider/package/norpm.rb

Could we then swap in this provider when we're triggering the update
via Heat? So we do a yum update -y, and then rerun puppet, and it
thinks it's updated everything, so it restarts as needed.

Each module would need to have a parameter implemented to enable such 
behaviour. But anyway, I don't like this solution because I see it as a 
dirty hack. AFAIR all service resources in the OpenStack modules are 
defined as 'refreshonly'. This means that we could implement logic in 
the TripleO manifests just to schedule a refresh on them when it is 
required. Thoughts?

Regards,
Martin




Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Thomas Goirand
On 05/28/2015 06:41 PM, Devananda van der Veen wrote:
 [...]

Just a quick follow-up to voice the distros' opinion (I'm sure other
package maintainers will agree with what I'm going to write).

It's fine if you use whatever release schedule you want. What's
important for us, the downstream distros, is that you make sure to
release a longer-term version at the same time as everything else, and
that security follow-up is done properly. It's really important that you
clearly identify the versions for which you will do security patch
backports, so that we don't (by mistake) release a stable version of a
given distribution with something you won't ever give long-term support
for.

I'm sure you guys know that already, but better safe than sorry, and
other less experienced projects may find the above useful.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-05-29 12:12:51 +0200:
 Devananda van der Veen wrote:
  [...]
  The alternative we discussed:
  - use feature branches for risky / large work, keeping total # of
  branches small, and rebasing them regularly on master
  - keep trunk moving quickly for smaller / less risky / refactoring changes
  - slow down for a week or two before a release, but dont actually
  freeze master
  - cut releases when new features are available
  - OpenStack coordinated releases are taken from latest independent release
  - that release will then get backports & stable maintenance, other
  independent releases don't

That last part is important -- you don't want to get bogged down with
managing umpteen different backports, so you want to maintain one stable
branch per cycle. Deployers running from master will get new stuff and
fixes quickly, and deployers running from stable releases will have
stability and fixes. We'll be serving both types of downstream consumers
well.

 
 With the already-mentioned caveats on feature branch usage, I think this
 makes sense, for simpler or more stable projects that can actually
 release something that works without a large stabilization period.
 
 That said, it's worth noting that the way Swift currently aligns with
 the coordinated release at the end of a cycle is a little different:
 
 - for intermediary releases, Swift would just soft-freeze at a proposed
 SHA and tag that if everyone is fine with it after some time (which is
 what you propose here)
 
 - for the final release Swift tags a RC1 (which triggers a
 stable/$SERIES release branch) and has the option of doing other RCs if
 a critical issue is found before the coordinated release date, then the
 final is re-tagged from the last RC
 
 In all cases, for stable maintenance/CI reasons, we need to cut a
 stable/$SERIES branch for every project in the weeks preceding the
 coordinated release date -- but I guess we have two options there.
 
 (1) we could adopt the Swift RC model and special-case the release
 process for the final release.
 
 (2) we could just create the stable branch from your presumed last
 release, and in case a critical issue is found, backport the fix and tag
 a point release there (and consider that point release the $SERIES
 release)
 
 Since I would only recommend simpler / more stable projects to switch to
 that model for the time being (projects that are less likely to need
 release candidates), I think (2) makes sense (and I could see Swift
 adopting it as well).

If I understand you correctly, option 2 is what we do for Oslo
libraries that release this way.

  [...]
  Before Ironic actually makes the switch, I would like us to discuss
  and document the approach we're going to take more fully, and get
  input from other teams on this approach. Often, the devil is in the
  details - and, for instance, I don't yet understand how we'll fit this
  approach into SemVer, or how this will affect our use of launchpad to
  track features (maybe it means we stop doing that?).
 
 As far as semver goes, since you switch to independent releases you
 can't stick to a common (2015.1.0) version anyway, so I think it's
 less confusing to use semver versioning than conflicting date-based
 ones.

I have a todo on my list to write up a spec summarizing the summit
discussion on the new version numbering scheme for all projects.
tl;dr is moving to semver, starting with version 12.0.0 (since Kilo
was our 11th release).

Doug



[openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Thierry Carrez
Hi everyone,

TL;DR:
- We propose to stop tagging coordinated point releases (like 2015.1.1)
- We continue maintaining stable branches as a trusted source of stable
updates for all projects though

Long version:

At the stable branch session in Vancouver we discussed recent
evolutions in the stable team processes and how to further adapt the
work of the team in a big tent world.

One of the key questions there was whether we should continue doing
stable point releases. Those were basically tags with the same version
number (2015.1.1) that we would periodically push to the stable
branches for all projects.

Those create three problems.

(1) Projects do not all follow the same versioning, so some projects
(like Swift) were not part of the stable point releases. More and more
projects, like Ironic, are considering issuing intermediary releases
(as Swift does). That would result in a variety of version numbers,
and ultimately fewer and fewer projects being able to have a common
2015.1.1-like version.

(2) Producing those costs a non-trivial amount of effort for a very small
team of volunteers, especially with projects caring about stable
branches to varying degrees. We were constantly missing the
pre-announced dates on those. It looks like that effort could be
better spent improving the stable branches themselves and keeping them
working.

(3) The resulting stable point releases are mostly useless. Stable
branches are supposed to be always usable, and the released version
did not undergo significantly more testing. Issuing them actually
discourages people from taking whatever point in stable branches makes
the most sense for them, testing and deploying that.

The suggestion we made during that session (and which was approved by
the session participants) is therefore to just get rid of the stable
point release concept altogether for non-libraries. That said:

- we'd still do individual point releases for libraries (for critical
bugs and security issues), so that you can still depend on a specific
version there

- we'd still very much maintain stable branches (and actually focus our
efforts on that work) to ensure they are a continuous source of safe
upgrades for users of a given series

Now we realize that the cross-section of our community which was present
in that session might not fully represent the consumers of those
artifacts, which is why we are expanding the discussion to this
mailing list (and soon to the operators ML).

If you were a consumer of those and will miss them, please explain why.
In particular, please let us know how consuming that version (which was
only made available every n months) is significantly better than picking
your preferred time and getting all the current stable branch HEADs at
that time.

Thanks in advance for your feedback,

[1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Ihar Hrachyshka

What about release notes? How can we now communicate some changes that
require operator consideration or action?

Ihar

On 05/29/2015 03:41 PM, Thierry Carrez wrote:
 [...]


[openstack-dev] [Monasca] Authentication through Keystone

2015-05-29 Thread Pradip Mukhopadhyay
Hello,


How does Monasca get authenticated through Keystone?

The reason for asking is: Monasca is in Java, whereas other services
(like Cinder) use the Keystone middleware authentication (when
configured) through api-paste.ini. And the middleware is in Python.


Does Monasca use some other scheme to authenticate to Keystone?



Thanks in advance,
Pradip
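
For context, what the Python keystonemiddleware does boils down to
obtaining a service token (a POST to Keystone's /v3/auth/tokens with
the service's credentials) and then validating each incoming user
token. Any service, Java included, could issue the same HTTP calls
itself. A rough sketch of the validation call, with placeholder URL
and tokens (this is not Monasca's actual mechanism):

    import requests

    KEYSTONE = 'http://keystone:5000/v3'

    def validate(service_token, user_token):
        # Ask Keystone whether user_token is valid, authenticating the
        # service itself with service_token; a 200 response carries the
        # token's user, project and role data.
        resp = requests.get(KEYSTONE + '/auth/tokens',
                            headers={'X-Auth-Token': service_token,
                                     'X-Subject-Token': user_token})
        if resp.status_code == 200:
            return resp.json()['token']
        return None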


[openstack-dev] What does neutron.ml2.type_drivers namespace mean?

2015-05-29 Thread ????
I found some code in plugins/ml2/managers.py. It calls
super(TypeManager, self).__init__('neutron.ml2.type_drivers', ...).
According to the documentation, 'neutron.ml2.type_drivers' is a
namespace. What does the namespace mean? Can I write a random
namespace like 'abcdef'?
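
For context: the namespace is a setuptools entry-point group name.
stevedore (which the ML2 managers build on) scans that group for
registered plugins, so an arbitrary string like 'abcdef' would simply
match no drivers. A minimal sketch, assuming a stock neutron install
where the in-tree type drivers are registered under that group in
setup.cfg:

    # In neutron's setup.cfg, drivers are advertised like:
    # [entry_points]
    # neutron.ml2.type_drivers =
    #     vlan = neutron.plugins.ml2.drivers.type_vlan:VlanTypeDriver
    from stevedore import named

    mgr = named.NamedExtensionManager(
        namespace='neutron.ml2.type_drivers',  # entry-point group to scan
        names=['vlan', 'vxlan'],               # which drivers to load
        invoke_on_load=True)
    for ext in mgr:
        print(ext.name, ext.obj)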


Re: [openstack-dev] [neutron] Impressions from Vancouver Neutron Design Summits

2015-05-29 Thread Neil Jerram

Hi Gal,

Thanks for starting this thread, and I'm sorry not to have met you 
during the summit.


Vancouver was my first summit, and I very much enjoyed it.  My 
reflection is that I came to the summit with two overall expectations, 
of which one proved correct and one false.


The correct expectation was that it would be great to meet everyone in 
person, associate faces to the names that I see on the ML and IRC, and - 
even if only to a small extent - gain more personal understandings of 
people in a way that will hopefully help make future ML/IRC/review 
conversations more effective.


The incorrect expectation was that the design summit would be a 
significantly more open forum, for a relative newbie such as myself, for 
the discussion of new ideas, and for the incorporation of new 
participants, than the regular ML and IRC forums that we use all the 
time.  In fact the summit topics and participation are as much driven by 
the core team as the regular forums, and in retrospect that should have 
been more obvious to me, because:


- the detailed schedule is fleshed out on ML/IRC before the summit

- the cores have a lot to discuss, and the face to face time is as 
valuable for them as it is for anyone else.


So there is no criticism here, just reflection and one main piece of 
advice for the next people thinking about going to their first summit: 
whatever you have to say, get it out there on the ML and IRC; don't save 
it up for the summit specifically.


Again, I really enjoyed Vancouver - thanks very much to everyone that I 
met and chatted with there, and to everyone that organized it!


Regards,
Neil


On 28/05/15 09:12, Gal Sagie wrote:

Hello All,

I wanted to very briefly share my thoughts and a suggestion regarding
the Neutron design summits (maybe only for the next one, but still..)

[...]



Re: [openstack-dev] [Puppet] Proposed Change in Nova Service Defaults

2015-05-29 Thread Matt Fischer
I'd like them to default to enabled as well since it matches the other
modules as you said. Was the intent here to allow bringing up new compute
hosts without them being enabled? If so there's another flag that could be
set to manage that state.

As for the patch itself, we need to change it for all the other services in
nova too, not just API.

On Fri, May 29, 2015 at 8:20 AM, Richard Raseley rich...@raseley.com
wrote:

 We are currently reviewing a patch[0] which will result in a change to the
 way Nova services are managed by default. Previously services were set to
 'Disabled' by default and had to be manually enabled in the manifests;
 this change will make the default value 'Enabled'.

 The consensus is that this will bring the Nova module more in-line with
 the other modules, but we understand this could result in some undesirable
 behavior for some implementations.

 If you have a strong opinion on the matter, or want to make sure your
 use-case is considered, please comment on the aforementioned review[0].

 Regards,

 Richard Raseley

 System Operations Engineer @ Puppet Labs

 [0] - https://review.openstack.org/#/c/184656/



Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Dave Walker
On 29 May 2015 at 14:41, Thierry Carrez thie...@openstack.org wrote:
 [...]

This is generally my opinion as well; I always hoped that *every*
commit would be considered a release rather than an arbitrary tagged
date. This empowers vendors and distributors to create their own
service-pack-style update on a cadence that suits their requirements
and users, rather than feeling tied to a cross-vendor schedule or
feeling bad about picking interim commits.

The primary pushback on this when we started the stable branches was
a vendor wanting to have known release versions for their customers,
and I don't think we have had comment from that vendor (or all
vendors). I hope this is seen as a positive thing, as it really is IMO.

I have a question about the library releases you mention still having,
as generally everything in Python is a library. I don't think we have
a definition of what in OpenStack is considered a mere library,
compared to a project that wouldn't have a release.

I also wondered if it might make sense for us to do a better job of
storing metadata about which shasums of projects were used to pass the
gate for a given commit - as this might be useful both as a known good
state and, slightly unrelated, might be helpful in debugging gate
blockages in the future.

Thanks

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matt Riedemann



On 5/29/2015 10:30 AM, Dave Walker wrote:

 [...]
I have a question about the library releases you mention still having,
as generally everything in Python is a library. I don't think we have
a definition of what in OpenStack is considered a mere library,
compared to a project that wouldn't have a release.


A library from an OpenStack POV, from my POV :), is anything that the 
'server' projects, e.g. nova, cinder, keystone, glance, etc, depend on. 
 Primarily the oslo libraries, the clients, and everything they depend on.


It's probably easier to think of it as anything in the 
global-requirements list:


https://github.com/openstack/requirements/blob/master/global-requirements.txt

Note that nova, keystone, glance, cinder, etc aren't in that list.







--

Thanks,

Matt 

Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Alexander Tivelkov
Hi Kevin,

I don't suggest using random IDs as artifact identifiers in the community
app catalog. Of course we will need to have some globally unique names
there (my initial idea on that is to have a combination of fully-qualified
namespace-based name + version + signature) - and such names should be used
to replicate artifacts across the cloud boundaries.

By "referencing by ID" I mean only local referencing: when the artifact
is already present in the cloud's local Glance (be it imported from the
community catalog, copied from another cloud or uploaded directly), the
particular service (Heat in our example) should be able to consume it by
ID, same as Nova currently does with Images.
This has its own purpose: to guarantee objects' immutability. Once the user
has selected an object in the local catalog, she may be sure that nobody
will interfere and modify it, as the object itself is immutable and the ID
is not reusable. If the object is referenced only by name, then it may be
deleted and a different artifact with the same name may be uploaded
instead, which may introduce a potential security issue. Using IDs will
prevent such behavior.
Fully qualified object names are still needed, of course - but it's
Glance's goal to locate an artifact based on its FQN and return the ID
for it.
At least, this was the design idea of the initial artifact concept.
At least, this was the design idea of the initial artifact concept.

But that's off-topic here, as this concept relates only to the local
artifact repos; the world-global app catalog has nothing to do with it.
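
A toy model (not Glance code) of the distinction being made -- an ID
names one immutable object forever, while a name is a mutable pointer
that can be re-bound to different content:

    import uuid

    store = {}   # id -> immutable artifact content
    names = {}   # fully-qualified name -> current id

    def publish(fqn, content):
        artifact_id = uuid.uuid4().hex  # never reused
        store[artifact_id] = content
        names[fqn] = artifact_id        # the name is re-bound
        return artifact_id

    first = publish('com.example.MyApp/1.0', 'original template')
    publish('com.example.MyApp/1.0', 'replacement template')
    assert store[first] == 'original template'      # ID still intact
    assert names['com.example.MyApp/1.0'] != first  # name moved on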


--
Regards,
Alexander Tivelkov

On Fri, May 29, 2015 at 6:47 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  Hi Alexander,

 Sweet. I'll have to kick the tires on the current state of Liberty soon. :)

 Reference by artifact IDs is going to be problematic I think. How do you
 release a generic set of resources to the world that reference specific
 randomly generated ID's?

 What about by Name? If not, then it will need to be some kind of mapping
 mechanism. :/

 Thanks,
 Kevin

   --
 From: Alexander Tivelkov [ativel...@mirantis.com]
 Sent: Friday, May 29, 2015 4:19 AM

 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

   Hi Kevin,

   Has the Glance Artifact Repository implemented enough bits to have Heat
 and/or Murano artefacts yet?


 Most of the code is there already; a couple of patchsets are still in
 review but we'll land them soon. L1 is a likely milestone to have it
 ready in master.


   Also, has there been any work on Exporting/Importing them through some
 defined format (tarball?) that doesn't depend on the artefact type?


 This one is not completely implemented: the design is ready (the spec
 had this feature from the very beginning) and a PoC was done. The final
 implementation is likely to happen in the L cycle.


   I've been talking with the Heat folks on starting a blueprint to allow
 heat templates to use relative URL's instead of absolute ones. That would
 allow a set of Heat templates to be stored in one artefact in Glance.


 That's awesome.
 Also I'd consider allowing Heat to reference templates by their artifact
 IDs in Glance, the same as Nova does for images.



    --
 From: Alexander Tivelkov [ativel...@mirantis.com]
 Sent: Thursday, May 28, 2015 4:46 AM

 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

Hi folks,

  I believe that at least part of the filtering we are discussing here
 may be done at the client side, if the client is sophisticated enough
 to be aware of the capabilities of the local cloud.
 And by "sophisticated client" I mean Glance V3 (previously known as
 Artifact Repository), which may (and, in my vision, should) become the
 ultimate consumer of the app catalog on the cloud side.

  Each asset type (currently Image, Murano Package, Heat template; more
 to come) should be implemented as a Glance Artifact type (i.e. a plugin),
 and may define the required capabilities as its type-specific metadata
 fields (for example, the Heat-template type may list plugins which are
 required to run this template; the Murano-package type may set the
 minimum required version of the Core library, etc.). The logic needed
 to validate these capabilities may be put into this type-specific plugin
 as well. This custom logic method will get executed when the artifact is
 being exported from the app catalog into a particular cloud.

  In this case the compatibility of a particular artifact with a
 particular cloud will be validated by that cloud itself when the app
 catalog is browsed. Also, if the cloud does not support some artifact
 types at all (e.g. does not have Murano installed and thus cannot
 utilize Murano Packages), then it does not have the Murano plugin in its
 Glance and thus will not be able to import murano-artifacts from the 

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 16:30:12 +0100 (+0100), Dave Walker wrote:
 This is generally my opinion as well; I always hoped that *every*
 commit would be considered a release rather than an arbitrary
 tagged date.
[...]

If we switch away from lockstep major/minor release versioning
anyway (again, a separate discussion is underway, but it seems a
distinct possibility) then I think the confusion over why stable point
releases are mismatched becomes less of an issue. At that point we
may want to reconsider and actually tag each of them with a
sequential micro ("patch" in semver terminology) version bump. That
could help in communication around security fixes in particular.

 I also wondered if it might make sense for us to do a better job of
 storing metadata of what the shasums of projects used to pass gate for
 a given commit - as this might be both useful as a known good state
 but also, slightly unrelated, might be helpful in debugging gate
 blockages in the future.

I think if we get stable branches back into the openstack/openstack
pseudo-repo it might help in this regard. Also Robert's plan for
requirements revamp should make it easier for us to keep track of
what versions of which dependencies were used when testing these.
-- 
Jeremy Stanley



Re: [openstack-dev] [neutron][tempest] [lbaas] How to enable lbaas related test cases in templest?

2015-05-29 Thread Madhusudhan Kandadai
Hi Lily,

Could you let us know how and which tests you are running for lbaasv2? To
my understanding, not all of the api/scenario tests for lbaasv2 are
skipped. Also, neutron lbaasv2 and its API tests are documented at [1]
and [2] respectively.

[1] https://wiki.openstack.org/wiki/Neutron/LBaaS
[2] https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0

Cheers,
Madhu

On Fri, May 29, 2015 at 3:09 AM, Lily.Sing lily.s...@gmail.com wrote:

 Hi all,

 I'm trying to test neutron LBaaS with tempest, but I find all the API and
 scenario test cases related to it are skipped. Is there a way to enable
 these test cases? I already have the LBaaS service enabled. Also, is
 there any detailed document about lbaasv2?

 Thanks.

 Best regards,
 Lily Xing(邢莉莉)



Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Brant Knudson
On Fri, May 29, 2015 at 8:11 AM, Thomas Goirand z...@debian.org wrote:

 On 05/28/2015 06:41 PM, Devananda van der Veen wrote:
   [...]

  Just a quick follow-up to voice the distros' opinion [...]. It's really
  important that you clearly identify the versions for which you will do
  security patch backports, so that we don't (by mistake) release a stable
  version of a given distribution with something you won't ever give
  long-term support for.



The only way I know of to get this information now is to look in the git
repos, so for example for keystoneclient I can find the security-supported
releases are
0.11 (icehouse) --
http://git.openstack.org/cgit/openstack/python-keystoneclient/log/?h=stable/icehouse
1.1 (juno)
1.3 (kilo)

The rest (for example, 1.2) are not supported.
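
One way to script that lookup (a sketch, assuming only git and network
access; the repo URL is an example):

    import subprocess

    REPO = 'https://git.openstack.org/openstack/python-keystoneclient'

    # List the stable/* branches the project actually maintains.
    out = subprocess.check_output(['git', 'ls-remote', '--heads', REPO])
    for line in out.decode().splitlines():
        sha, ref = line.split()
        if ref.startswith('refs/heads/stable/'):
            print(ref[len('refs/heads/'):], sha[:7])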

The place I would expect to find this information is here:
https://wiki.openstack.org/wiki/Releases

tschüß -- Brant





Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Fox, Kevin M
I've run into the opposite problem, though, with Glance. As an op, I really 
really want to atomically replace one image with a new one with security 
updates pre-applied. Think Shellshock, GHOST, etc. It will basically be the 
same exact image as before, but patched. Referencing local IDs explicitly 
makes it harder to ensure things are patched, since new VMs carrying the old 
vulnerabilities will tend to pop up after things are patched.

Thanks,
Kevin


From: Alexander Tivelkov [ativel...@mirantis.com]
Sent: Friday, May 29, 2015 9:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

[...]

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Thierry Carrez
Neil Jerram wrote:
 
 
 On 29/05/15 14:41, Thierry Carrez wrote:
 Hi everyone,

 TL;DR:
 - We propose to stop tagging coordinated point releases (like 2015.1.1)
 - We continue maintaining stable branches as a trusted source of stable
 updates for all projects though
 [...]
 
 I initially misunderstood your email as saying there will no longer be
 releases at all... but I see now that you are only talking about
 .X.N with N >= 1, and that we will continue to have .X.0.  Right?

Yes.

 Will the main releases continue to be tagged as .X.0 ?

That is an orthogonal topic which warrants its own thread, but we also
discussed it in Vancouver. The general idea there is that since we'll
have more and more Swift-like things with their own versions, the value
of common release versioning for the rest of them becomes questionable.
What we need is a convenient way of telling which version number is the
kilo release for each project that aligns with the 6-month cycle.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Fox, Kevin M
Hi Alexander,

Sweet. I'll have to kick the tires on the current state of Liberty soon. :)

Reference by artifact IDs is going to be problematic I think. How do you 
release a generic set of resources to the world that reference specific 
randomly generated ID's?

What about by Name? If not, then it will need to be some kind of mapping 
mechanism. :/

Thanks,
Kevin


From: Alexander Tivelkov [ativel...@mirantis.com]
Sent: Friday, May 29, 2015 4:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][app-catalog] App Catalog next steps

[...]

Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-05-29 Thread Chris Friesen

On 05/29/2015 06:27 AM, John Garbutt wrote:

On 27 May 2015 at 23:36, Robert Collins robe...@robertcollins.net wrote:

On 26 May 2015 at 03:37, Chris Friesen chris.frie...@windriver.com wrote:



Basically the problem is this:

When booting up instances, nova allows the user to specify a min count and
a max count.  So logically, this request should be considered successful
if at least min count instances can be booted.

Currently, if the user has quota space for max count instances, then nova
will try to create them all. If any of them can't be scheduled, then the
creation of all of them will be aborted and they will all be put into an
error state.



The problem here is having a nice way to explicitly tell the users
what worked and what didn't. Currently the instance is put in an error
state because that's the good way to tell the user that the build
failed. Deleting instances doesn't have the same visibility; it can look
like they just vanished.


Fair enough.  But we should only put the ones we couldn't schedule into an error 
state, not the ones that we could schedule (assuming we could schedule at least 
min_count instances).



Having said all that, I am very tempted to say we should deprecate the
min_count parameter in the API, keep the current behaviour for old
version requests, and maybe even remove the max_count parameter. We
could look to Heat to do a much better job of this kind of
orchestration. This is very much in the spirit of:
http://docs.openstack.org/developer/nova/devref/project_scope.html#no-more-orchestration


I'm totally in favour of doing a microversion bump and removing both min_count 
and max_count and saying if you want multiple instances then let something 
outside of nova handle it.



Either which way, given the impact of the bug fix (i.e. it touches the
API, and would probably need a micro version bump), I think it would
be great to actually write up your proposal as a nova-spec (backlog or
targeted at liberty, either way is cool). I think a spec review would
be a great way to reach a good agreement on the best approach here.


Chris, does that sound like an approach that would work for you?


I'm happy to write up a spec to remove min_count and max_count and bump the 
microversion.
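
For reference, a client opts in to a behaviour change like that through the
compute API microversion header; a rough sketch with the requests library
(the endpoint, token and version number are illustrative placeholders, not
the version any of these proposals actually landed as):

    import requests

    NOVA_URL = 'http://nova.example.com:8774/v2.1'   # placeholder
    TOKEN = 'AUTH_TOKEN'                             # placeholder

    resp = requests.get(
        NOVA_URL + '/servers/detail',
        headers={'X-Auth-Token': TOKEN,
                 # opt in to newer API semantics
                 'X-OpenStack-Nova-API-Version': '2.11'},
    )
    # the server echoes back the version it actually served
    print(resp.headers.get('X-OpenStack-Nova-API-Version'))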


I don't think the bugfix for the current API version should have a microversion 
bump.  I think this is a case of "Fixing a bug so that a request which resulted 
in an error response before is now successful," from 
http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html


The user asked for between min_count and max_count instances.  If we can 
schedule at least min_count instances then we should do that and not give them 
nothing just because we couldn't schedule max_count instances.
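
To make the request shape concrete, a sketch using python-novaclient (all
credentials and IDs below are placeholders; min_count/max_count are the
parameters under discussion):

    # Sketch only: ask for between 2 and 10 instances in one call.
    # Today, if fewer than max_count can be scheduled, nova puts *all*
    # of them into ERROR; the argument above is that satisfying
    # min_count should count as success.
    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')
    nova.servers.create(
        name='worker',
        image='IMAGE-UUID',   # placeholder
        flavor='1',           # placeholder
        min_count=2,
        max_count=10,
    )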


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] oslo namespace changes and murano-dashboard

2015-05-29 Thread Doug Hellmann
Hi, Murano team!

First, where do you hang out on IRC? I couldn’t find a channel that looked like 
it had any population, and there’s nothing mentioned on 
https://wiki.openstack.org/wiki/Murano

I’m trying to figure out whether the upcoming Oslo library releases are going 
to break your gate. Both murano-dashboard and python-muranoclient have landed 
the needed changes to stop using the oslo namespace package, but those versions 
haven’t been released. The list of unreleased changes for both projects is 
below.

It looks like the client is being versioned using something like semantic 
versioning, so I could create a new release (version 0.6.0, since there are 
requirements changes) for you but don’t want to do that without coordinating 
with someone on the development team. 
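
As a toy illustration of that SemVer reasoning (requirements changes warrant
a minor bump, so 0.5.6 becomes 0.6.0) — only a sketch of the rule, not a
release tool:

    def next_version(current, change):
        """Toy SemVer bump: 'breaking' -> major, 'feature' or
        'requirements' -> minor, anything else -> patch."""
        major, minor, patch = (int(p) for p in current.split('.'))
        if change == 'breaking':
            return '%d.0.0' % (major + 1)
        if change in ('feature', 'requirements'):
            return '%d.%d.0' % (major, minor + 1)
        return '%d.%d.%d' % (major, minor, patch + 1)

    assert next_version('0.5.6', 'requirements') == '0.6.0'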

The murano-dashboard seems to be versioned following the integrated release 
pattern, so I can’t tell if it needs to be released or if it is installed from 
source in the gate jobs.

I am putting together a list of libraries that need releases for Monday 
morning, but I will hold off on releasing python-muranoclient if I don’t hear 
from someone on the Murano team that it’s safe to release from master.

Thanks,
Doug



[ openstack/murano-dashboard ]

$ git log --oneline --no-merges 2015.1.0..HEAD
275b545 Call location.reload only once at successfull env deployment
f278e40 Fixed gramattical issues in README.rst
5d38554 Use catalog=True for app_by_fqn helper function
8aa77db Environments: reload page after all the environments have been updated
6a6f2a6 Add catalog parameter to packages list API call
f80519a Take advantage of better API filtering catalog filtering
d7f87ca Drop use of 'oslo' namespace package
ed04b14 Update information in README
1a8c01e Show ellipsis in case of app header overflows
f255e02 Updated YAQL requirement to >= 0.2.6


[ openstack/python-muranoclient ]

Changes in python-muranoclient 0.5.6..HEAD
--
d2182e1 2015-05-21 17:03:41 +0300 Point default MURANO_REPO_URL to 
http://storage.apps.openstack.org
af0a55c 2015-05-14 21:14:15 +0300 Remove hash check during image upload
55000b1 2015-05-12 16:00:21 +0300 Make post_test_hook.sh executable
7972beb 2015-05-09 15:28:48 +0300 Add post_test_hook for functional tests
7d1443f 2015-05-09 15:16:40 +0300 First pass at tempest_lib based functional 
testing
2bbc396 2015-05-09 14:53:24 +0300 Add OS_TEST_PATH to testr
5aadf96 2015-05-09 14:45:49 +0300 Move unit tests into unit test directory
1bb107c 2015-05-06 19:37:48 + Drop use of 'oslo' namespace package
22a28da 2015-04-29 17:55:53 +0300 Only delete 'owned' packages for 
--exists-action update
1ee951a 2015-04-29 13:08:31 +0300 Updated YAQL requirement to >= 0.2.6
33d1169 2015-04-27 17:24:09 +0300 Update Readme
2643696 2015-04-19 12:25:18 + Limit parameter is no longer ignored for 
packages
4e00450 2015-04-18 00:42:25 + Update .gitreview file to reflect repo rename
5a4e3ab 2015-04-13 18:35:22 +0300 Better error logging for package/bundle import
a864dce 2015-04-10 16:11:59 + Update package composing command
21504e6 2015-04-06 15:20:04 +0300 Tests for repository machinery
e4963ea 2015-04-02 17:40:21 +0300 Bash completion script for murano
1f47d2c 2015-04-02 16:46:26 +0300 Add bash-completion subparser to shell client
2a44544 2015-04-01 14:02:40 +0300 Update from global requirements



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Sphinx 1.3 is coming, breaking almost every docs

2015-05-29 Thread Matthew Thode
On 05/29/2015 07:56 AM, Thomas Goirand wrote:
 Hi,
 
 Currently, in the global requirements, we have:
 sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 
 However, like it or not, sphinx 1.3 will soon reach Debian unstable.
 Already, I see something broken with it:
 https://gist.github.com/mitya57/1643a7c1666b650a0de5
 
 (if you didn't notice, that's sphinxcontrib-programoutput)
 
 sphinxcontrib-programoutput breaks *only* because the html theme needs
 to be called 'classic' instead of 'default'. To make things worse, the
 'classic' theme isn't available in version 1.2 of Sphinx.
 
 So we need to fix things before they break badly. I have suggested to
 the python-sphinx package maintainer to make the transition smoother than
 upstream did. Maybe just remove the warning, which may (as above) wrongly be
 taken as a hard failure? Then maybe have reverse dependencies check whether
 the 'classic' theme is available, and use 'default' as a fall-back? But then,
 how does one test for the presence of a given theme name on the system?
 Suggestions would be warmly welcome to fix things at the distribution
 level and avoid all the pain.
 
 BTW, I find this kind of problem really annoying. How come upstream
 Sphinx authors are breaking things like this? Just because of the rename
 of the default sphinx theme from 'default' to 'classic'? That's really
 not worth the amount of pain everyone will have. The person who did this
 should be taking care of unbreaking all the things he broke. :(
 
 Cheers,
 
 Thomas Goirand (zigo)
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
We won't have a problem with doc generation on Gentoo at least.
We've had 1.3.1 available for some time now (though not stabilized yet;
even then it will not matter).  Are you not able to either create a
meta-package or package the two versions side by side (mutually
exclusive install, of course)?
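
One possible answer to the theme-detection question above is to key off the
Sphinx version rather than probing for the theme; a minimal conf.py sketch,
assuming only that sphinx.version_info is importable:

    # conf.py sketch: Sphinx 1.3 renamed the 'default' theme to
    # 'classic', and 1.2 does not know 'classic', so pick by version.
    import sphinx

    if sphinx.version_info >= (1, 3):
        html_theme = 'classic'
    else:
        html_theme = 'default'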

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matthew Thode
On 05/29/2015 09:44 AM, Matt Riedemann wrote:
 
 
 On 5/29/2015 8:41 AM, Thierry Carrez wrote:
 Hi everyone,

 TL;DR:
 - We propose to stop tagging coordinated point releases (like 2015.1.1)
 - We continue maintaining stable branches as a trusted source of stable
 updates for all projects though

 Long version:

 At the stable branch session in Vancouver we discussed recent
 evolutions in the stable team processes and how to further adapt the
 work of the team in a big tent world.

 One of the key questions there was whether we should continue doing
 stable point releases. Those were basically tags with the same version
 number (2015.1.1) that we would periodically push to the stable
 branches for all projects.

 Those create three problems.

 (1) Projects do not all follow the same versioning, so some projects
 (like Swift) were not part of the stable point releases. More and more
 projects are considering issuing intermediary releases (like Swift
 does), like Ironic. That would result in a variety of version numbers,
 and ultimately fewer and fewer projects being able to have a common
 2015.1.1-like version.

 (2) Producing those costs a non-trivial amount of effort on a very small
 team of volunteers, especially with projects caring about stable
 branches in various amounts. We were constantly missing the
 pre-announced dates on those ones. Looks like that effort could be
 better spent improving the stable branches themselves and keeping them
 working.

 (3) The resulting stable point releases are mostly useless. Stable
 branches are supposed to be always usable, and the released version
 did not undergo significantly more testing. Issuing them actually
 discourages people from taking whatever point in stable branches makes
 the most sense for them, testing and deploying that.

 The suggestion we made during that session (and which was approved by
 the session participants) is therefore to just get rid of the stable
 point release concept altogether for non-libraries. That said:

 - we'd still do individual point releases for libraries (for critical
 bugs and security issues), so that you can still depend on a specific
 version there

 - we'd still very much maintain stable branches (and actually focus our
 efforts on that work) to ensure they are a continuous source of safe
 upgrades for users of a given series

 Now we realize that the cross-section of our community which was present
 in that session might not fully represent the consumers of those
 artifacts, which is why we expand the discussion on this mailing-list
 (and soon on the operators ML).

 If you were a consumer of those and will miss them, please explain why.
 In particular, please let us know how consuming that version (which was
 only made available every n months) is significantly better than picking
 your preferred time and getting all the current stable branch HEADs at that
 time.

 Thanks in advance for your feedback,

 [1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch

 
 To reiterate what I said in the session, for my team personally (IBM),
 we don't align with the point release schedules on stable anyway, we
 release our own stable release fix packs as needed on our own schedules,
 so in that regard I don't see a point in the stable point releases -
 especially since most of the time I don't know when those are going to
 be anyway so we can't plan for them accurately.
 
 Having said that, what I mentioned in IRC the other day is that the one
 upside I see to the point releases is that they are a milestone that requires
 focus from the stable maintainers, which means if stable has been broken
 for a few weeks and no one has really noticed, converging on a stable
 point release at least forces attention there.
 
 I don't think that is a very good argument for keeping stable point
 releases though, since as you said we don't do any additional testing
 above and beyond what normally happens in the Jenkins runs.  Some of the
 distributions might have extra regression testing scenarios, I'm not
 sure, but no one really spoke to that in the session from the distros
 that were present - I assume they do, but they can do that on their own
 schedule anyway IMO.
 
 I am a bit cynical about thinking that dropping point releases will make
 people spend more time caring about the health of the stable branches
 (persistent gate failures) or stale changes out for review.  I combed
 through a lot of open stable/icehouse changes yesterday and there were
 many that should have been abandoned 6 months ago but were just sitting
 there, and others that were good fixes to have and should have been
 merged by now.
 
 Personally I've been trying to point out some of these in the
 #openstack-stable IRC channel when I see them so that we don't wait so
 long on these that they fall into a stable support phase where we don't
 think they are appropriate for merging anymore, but if we had acted
 sooner they'd be in.
 
 But I'm also the new guy on the team so 

[openstack-dev] [Neutron][bgpvpn] IRC meetings on BGP VPN interconnection API

2015-05-29 Thread thomas.morin

Hi everyone,

As a follow-up to discussions last week on a BGP VPN interconnection API 
and the work started with the people already involved, we are going to 
hold IRC meetings to discuss how to progress the different pieces of 
work, in particular on the API itself [1] and its implementation+drivers 
[2].


The slot we propose is ** Tuesday 15:00 UTC ** with the first meeting 
next Tuesday (June 2nd).


Note that, based on last week's feedback, we submitted the existing 
stackforge project for inclusion in the Neutron big tent earlier this 
week [3].


We will do a proper meeting registration (patch to openstack-infra 
irc-meeting) and send meeting info with wiki and meeting room before 
next Tuesday.


Looking forward to discussing with everyone interested!

-Thomas & Mathieu

[1] currently being discussed at https://review.openstack.org/#/c/177740
[2] https://github.com/stackforge/networking-bgpvpn
[3] https://review.openstack.org/#/c/186041



_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 17:50:01 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
 if we attempt to fix a security issue that has a backwards
 incompatible fix, then we are forced into introducing a new
 configuration option to opt in to the new secure world.
[...]

To my knowledge that's how we've handled these in the past anyway,
accompanied by publication of a security note (not advisory)
suggesting the steps necessary to enable the breaking change when
opting into the bug fix.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Neil Jerram



On 29/05/15 14:41, Thierry Carrez wrote:

Hi everyone,

TL;DR:
- We propose to stop tagging coordinated point releases (like 2015.1.1)
- We continue maintaining stable branches as a trusted source of stable
updates for all projects though

[...]

I initially misunderstood your email as saying there will no longer be 
releases at all... but I see now that you are only talking about 
.X.N with N >= 1, and that we will continue to have .X.0.  Right?


Will the main releases continue to be tagged as .X.0?

Thanks,
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matt Riedemann



On 5/29/2015 8:41 AM, Thierry Carrez wrote:

Hi everyone,

TL;DR:
- We propose to stop tagging coordinated point releases (like 2015.1.1)
- We continue maintaining stable branches as a trusted source of stable
updates for all projects though

Long version:

At the stable branch session in Vancouver we discussed recent
evolutions in the stable team processes and how to further adapt the
work of the team in a big tent world.

One of the key questions there was whether we should continue doing
stable point releases. Those were basically tags with the same version
number (2015.1.1) that we would periodically push to the stable
branches for all projects.

Those create three problems.

(1) Projects do not all follow the same versioning, so some projects
(like Swift) were not part of the stable point releases. More and more
projects are considering issuing intermediary releases (like Swift
does), like Ironic. That would result in a variety of version numbers,
and ultimately fewer and fewer projects being able to have a common
2015.1.1-like version.

(2) Producing those costs a non-trivial amount of effort on a very small
team of volunteers, especially with projects caring about stable
branches in various amounts. We were constantly missing the
pre-announced dates on those ones. Looks like that effort could be
better spent improving the stable branches themselves and keeping them
working.

(3) The resulting stable point releases are mostly useless. Stable
branches are supposed to be always usable, and the released version
did not undergo significantly more testing. Issuing them actually
discourages people from taking whatever point in stable branches makes
the most sense for them, testing and deploying that.

The suggestion we made during that session (and which was approved by
the session participants) is therefore to just get rid of the stable
point release concept altogether for non-libraries. That said:

- we'd still do individual point releases for libraries (for critical
bugs and security issues), so that you can still depend on a specific
version there

- we'd still very much maintain stable branches (and actually focus our
efforts on that work) to ensure they are a continuous source of safe
upgrades for users of a given series

Now we realize that the cross-section of our community which was present
in that session might not fully represent the consumers of those
artifacts, which is why we expand the discussion on this mailing-list
(and soon on the operators ML).

If you were a consumer of those and will miss them, please explain why.
In particular, please let us know how consuming that version (which was
only made available every n months) is significantly better than picking
your preferred time and getting all the current stable branch HEADs at that
time.

Thanks in advance for your feedback,

[1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch



To reiterate what I said in the session, for my team personally (IBM), 
we don't align with the point release schedules on stable anyway, we 
release our own stable release fix packs as needed on our own schedules, 
so in that regard I don't see a point in the stable point releases - 
especially since most of the time I don't know when those are going to 
be anyway so we can't plan for them accurately.


Having said that, what I mentioned in IRC the other day is that the one 
upside I see to the point releases is that they are a milestone that requires 
focus from the stable maintainers, which means if stable has been broken 
for a few weeks and no one has really noticed, converging on a stable 
point release at least forces attention there.


I don't think that is a very good argument for keeping stable point 
releases though, since as you said we don't do any additional testing 
above and beyond what normally happens in the Jenkins runs.  Some of the 
distributions might have extra regression testing scenarios, I'm not 
sure, but no one really spoke to that in the session from the distros 
that were present - I assume they do, but they can do that on their own 
schedule anyway IMO.


I am a bit cynical about thinking that dropping point releases will make 
people spend more time caring about the health of the stable branches 
(persistent gate failures) or stale changes out for review.  I combed 
through a lot of open stable/icehouse changes yesterday and there were 
many that should have been abandoned 6 months ago but were just sitting 
there, and others that were good fixes to have and should have been 
merged by now.


Personally I've been trying to point out some of these in the 
#openstack-stable IRC channel when I see them so that we don't wait so 
long on these that they fall into a stable support phase where we don't 
think they are appropriate for merging anymore, but if we had acted 
sooner they'd be in.


But I'm also the new guy on the team, so I've got a belly full of fire; feel free 
to tell me to shut up. :)


--

Thanks,

Matt Riedemann



Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matthew Thode
On 05/29/2015 10:18 AM, Ihar Hrachyshka wrote:
 What about release notes? How can we now communicate some changes that
 require operator consideration or action?
 
 Ihar
 
 On 05/29/2015 03:41 PM, Thierry Carrez wrote:
 Hi everyone,
 
 TL;DR: - We propose to stop tagging coordinated point releases
 (like 2015.1.1) - We continue maintaining stable branches as a
 trusted source of stable updates for all projects though
 
 Long version:
 
 At the stable branch session in Vancouver we discussed recent 
 evolutions in the stable team processes and how to further adapt
 the work of the team in a big tent world.
 
 One of the key questions there was whether we should continue
 doing stable point releases. Those were basically tags with the
 same version number (2015.1.1) that we would periodically push to
 the stable branches for all projects.
 
 Those create three problems.
 
 (1) Projects do not all follow the same versioning, so some
 projects (like Swift) were not part of the stable point releases.
 More and more projects are considering issuing intermediary
 releases (like Swift does), like Ironic. That would result in a
 variety of version numbers, and ultimately fewer and fewer projects
 being able to have a common 2015.1.1-like version.
 
 (2) Producing those costs a non-trivial amount of effort on a very
 small team of volunteers, especially with projects caring about
 stable branches in various amounts. We were constantly missing the 
 pre-announced dates on those ones. Looks like that effort could be 
 better spent improving the stable branches themselves and keeping
 them working.
 
 (3) The resulting stable point releases are mostly useless.
 Stable branches are supposed to be always usable, and the
 released version did not undergo significantly more testing.
 Issuing them actually discourages people from taking whatever point
 in stable branches makes the most sense for them, testing and
 deploying that.
 
 The suggestion we made during that session (and which was approved
 by the session participants) is therefore to just get rid of the
 stable point release concept altogether for non-libraries. That
 said:
 
 - we'd still do individual point releases for libraries (for
 critical bugs and security issues), so that you can still depend on
 a specific version there
 
 - we'd still very much maintain stable branches (and actually focus
 our efforts on that work) to ensure they are a continuous source of
 safe upgrades for users of a given series
 
 Now we realize that the cross-section of our community which was
 present in that session might not fully represent the consumers of
 those artifacts, which is why we expand the discussion on this
 mailing-list (and soon on the operators ML).
 
 If you were a consumer of those and will miss them, please explain
 why. In particular, please let us know how consuming that version
 (which was only made available every n months) is significantly
 better than picking your preferred time and getting all the current
 stable branch HEADs at that time.
 
 Thanks in advance for your feedback,
 
 [1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

for release notes just do git log between commit hashes?

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 What about release notes? How can we now communicate some changes that
 require operator consideration or action?

Arguably each and every change requiring operator consideration or
action mentioned in these release notes is a failure in the stable
branch process -- this branch is supposedly safe to draw updates from.
We should at the very least work to eliminate most of them.

I guess we'd replace those discrete release notes with continuous notes
(like a wiki page for each stable branch) that downstream users could
consult whenever they want to draw from the branch.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Ihar Hrachyshka

On 05/29/2015 05:26 PM, Thierry Carrez wrote:
 Ihar Hrachyshka wrote:
 What about release notes? How can we now communicate some changes
 that require operator consideration or action?
 
 Arguably each and every change requiring operator consideration
 or action mentioned in these release notes is a failure in the
 stable branch process -- this branch is supposedly safe to draw
 updates from. We should at the very least work to eliminate most of
 them.

In most cases, yes. Though if we attempt to fix a security issue that
has a backwards-incompatible fix, then we are forced into introducing a
new configuration option to opt in to the new secure world.
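
A minimal sketch of that opt-in pattern using oslo.config (the option name
here is made up for illustration):

    # Sketch only: ship the secure behaviour disabled by default so a
    # stable-branch update does not break existing deployments.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('strict_validation',   # hypothetical option name
                    default=False,         # old behaviour preserved on upgrade
                    help='Opt in to the backwards-incompatible security fix.'),
    ])

    def handle(request):
        if CONF.strict_validation:
            pass  # new, secure code path goes here
        else:
            pass  # legacy code path kept for compatibility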

Or there is an update to rootwrap filters that in some distributions
are considered configuration files (why they do it is probably a
separate question).

 
 I guess we'd replace those discrete release notes by continuous
 notes (like a wiki page for each stable branch), that downstream
 users could consult whenever they want to draw from the branch.
 

OK.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] oslo namespace changes and murano-dashboard

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 12:28:37 -0400 (-0400), Doug Hellmann wrote:
 First, where do you hang out on IRC? I couldn’t find a channel
 that looked like it had any population, and there’s nothing
 mentioned on https://wiki.openstack.org/wiki/Murano
[...]


https://wiki.openstack.org/wiki/IRC says it's #murano (though I
haven't /joined to see if anyone actually uses it, I would assume
they do).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread David Medberry
On Fri, May 29, 2015 at 7:41 AM, Thierry Carrez thie...@openstack.org
wrote:

 If you were a consumer of those and will miss them, please explain why.
 In particular, please let us know how consuming that version (which was
 only made available every n months) is significantly better than picking
 your preferred time and get all the current stable branch HEADs at that
 time.


If vendor packages are used (as many folks do) they will need to weigh in
before operators can really give valid feedback.
I've already heard from one vendor that they will continue to do
point-like releases that they will support, but we probably need a more
complete answer.

Another issue: operators pulling from stable will just need to do a bit
more diligence themselves (and this is probably appropriate). One thing we
will do as part of that diligence is track the rate of new bugs and
look for windows of opportunity where there may be semi-quiescence.

The other issue I'm aware of is that there will essentially be no syncing
across projects (except by the vendors). Operators using upstream will need
to do a better job (i.e., more burden) of making sure all of the packages
work together.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][bgpvpn] IRC meetings on BGP VPN interconnection API

2015-05-29 Thread Paul Michali
You can use the VPNaaS IRC channel/time... we don't have much on the agenda
right now, other than discussing VPN flavors for Liberty, in which context it
would be good to discuss BGP VPN and Edge VPN.

Regards,

Paul Michali (pc_m)

On Fri, May 29, 2015 at 11:08 AM thomas.mo...@orange.com wrote:

 Hi everyone,

 As a follow-up to discussions last week on a BGP VPN interconnection API
 and the work started with the people already involved, we are going to
 hold IRC meetings to discuss how to progress the different pieces of
 work, in particular on the API itself [1] and its implementation+drivers
 [2].

 The slot we propose is ** Tuesday 15:00 UTC ** with the first meeting
 next Tuesday (June 2nd).

 Note that, based on last week's feedback, we submitted the existing
 stackforge project for inclusion in the Neutron big tent earlier this
 week [3].

 We will do a proper meeting registration (patch to openstack-infra
 irc-meeting) and send meeting info with wiki and meeting room before
 next Tuesday.

 Looking forward to discussing with everyone interested!

 -Thomas & Mathieu

 [1] currently being discussed at https://review.openstack.org/#/c/177740
 [2] https://github.com/stackforge/networking-bgpvpn
 [3] https://review.openstack.org/#/c/186041




 _

 Ce message et ses pieces jointes peuvent contenir des informations
 confidentielles ou privilegiees et ne doivent donc
 pas etre diffuses, exploites ou copies sans autorisation. Si vous avez
 recu ce message par erreur, veuillez le signaler
 a l'expediteur et le detruire ainsi que les pieces jointes. Les messages
 electroniques etant susceptibles d'alteration,
 Orange decline toute responsabilite si ce message a ete altere, deforme ou
 falsifie. Merci.

 This message and its attachments may contain confidential or privileged
 information that may be protected by law;
 they should not be distributed, used or copied without authorisation.
 If you have received this email in error, please notify the sender and
 delete this message and its attachments.
 As emails may be altered, Orange is not liable for messages that have been
 modified, changed or falsified.
 Thank you.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Dave Walker
On 29 May 2015 at 16:25, Matthew Thode prometheanf...@gentoo.org wrote:
SNIP

 for release notes just do git log between commit hashes?


For Ubuntu, I wrote a tool to do just this and generate a Debian style
changelog (later cleaned up quite a lot by adamg!).  It parses the git
log looking for LP bug references and uses the bug title as the
changelog entry, and adds the bug number to the change if there is a
Ubuntu task on the bug tracker.  If there is no bug reference in the
commit, it simply uses the first line of the git commit message.

Seems to work well enough for Ubuntu..

Here is an example of how it presents it
http://changelogs.ubuntu.com/changelogs/pool/main/n/nova/nova_2014.2.3-0ubuntu1/changelog

Let me know if you want a hand with it, as it should be pretty
portable to other distros quite easily.
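
For the curious, a rough sketch of that kind of changelog generator (it
assumes git on PATH and Launchpad references in commit trailers like
'Closes-Bug: #NNNN'; the real tool described above does considerably more):

    import re
    import subprocess

    BUG_RE = re.compile(r'(?:Closes|Partial|Related)-Bug:\s*#?(\d+)', re.I)

    def changelog(repo, old, new):
        """Yield one changelog line per commit in old..new."""
        raw = subprocess.check_output(
            ['git', '-C', repo, 'log', '--no-merges',
             '--format=%s%n%b%x00', '%s..%s' % (old, new)])
        for commit in raw.decode('utf-8', 'replace').split('\x00'):
            commit = commit.strip()
            if not commit:
                continue
            subject = commit.splitlines()[0]
            bugs = BUG_RE.findall(commit)
            if bugs:
                yield '%s (LP: %s)' % (
                    subject, ', '.join('#' + b for b in bugs))
            else:
                yield subject

    # e.g.: for line in changelog('.', '2014.2.2', '2014.2.3'): print(line)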

Thanks

--
Kind Regards,
Dave Walker

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] development team

2015-05-29 Thread Adam Lawson
I have a question I'd like to ask offline... Can someone from the ironic
team ping me? Have a question about pxe...
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Impressions from Vancouver Neutron Design Summits

2015-05-29 Thread Anita Kuno
On 05/28/2015 05:11 AM, Gal Sagie wrote:
 Hi Kevin,
 
 The main idea that I tried to express is that we need a course of action after
 the discussions. I understand that we want to discuss high-level points,
 but we still need to decide on a follow-up plan, because otherwise this is
 all wasted.
 
 What I was suggesting is that we assign action items in the session, and that
 the moderator of the talk or someone else
 try to make sure they are being followed.

I have found great success any time I have compiled
a list of agreed-upon items and followed up on them. I don't have much
networking experience, but all the folks I have worked with in Neutron
have been happy when I have taken on this task myself. Also, it tends to
produce results, in that the task gets attention and moves forward or, if
stalled and important enough, moves up and gets more attention.

I think anyone who decided to do this work would get support.

Thanks,
Anita.

 
 The mailing list suggestion was just that we get some of the points sorted
 beforehand, or at least get preliminary feedback,
 which we can use to focus the agenda so the meetings themselves will
 be more focused.
 
 Gal.
 
 On Thu, May 28, 2015 at 11:50 AM, Kevin Benton blak...@gmail.com wrote:
 
 Thanks for the feedback. What to discuss at the summit has always been an
 issue.

  The only issue I see with your proposal is that you assume we can agree on
  high-level things on the mailing list. One of the reasons to discuss
  high-level things at the summit is that it's a great place to get operators,
  developers, product managers, etc. all in the same room to discuss issues.
  If we try to decide everything on openstack-dev, we end up operating somewhat
  in a vacuum.

 From another angle, what's the point in meeting up at the summit if the
 only point is to look for volunteers to take action items? That could be
 done completely via the mailing list as well.

 The summit is our only opportunity for high-bandwidth interaction with
 users and contributors from other projects as well. We need to make sure we
 take advantage of that.

 On Thu, May 28, 2015 at 1:12 AM, Gal Sagie gal.sa...@gmail.com wrote:

 Hello All,

 I wanted to very briefly share my thoughts and a suggestion regarding the
 Neutron design summits (maybe only for the next one, but still..)

  I would first like to say that I was amazed by the number of people that
  attended the summit; it was really great to see, and it was very nice
  meeting most of you and linking the person with the IRC nick.

  Anyone that knows me, or has talked with me in the past, knows that I wanted
  to get involved with Open Source for quite some time, and I really feel
  fortunate to be able to contribute and join OpenStack, and hope to continue
  with these efforts going forward.

  I have to share that I was expecting the design summits to be completely
  different from what went on, and although some issues were interesting, I
  feel they were not useful for moving forward but rather were just a stage for
  people to express opinions/ideas without a distinct decision.

  Since I am new and have never been in one of these, I don’t want to judge or
  declare whether that’s good or bad :). I do think that it's hard to actually
  achieve something with so many people, but I also understand and appreciate
  the great interest of so many people in Neutron.

  A small suggestion I think might improve things: have the session moderator
  (or any other volunteer) write a follow-up email with action items that
  need to be addressed, with people actually volunteering to take
  some of those action items.
  This can and should happen live in the meeting, where anyone requesting or
  making a suggestion would take it on him/herself to push it forward and
  report status.
  (I think every point in the agenda should be converted into an action item.)

  We can also do this before the actual summit on the agenda points and
  hopefully get a discussion going before the actual session.

  I also think we should try to re-write the agenda points into
  lower-level points before the summit rather than trying to tackle the
  high-level stuff (for example, discussing the pros and cons on the
  mailing lists/IRC before the summit and having a more concrete discussion in
  the session itself).


  I do hope this is the right place to express this, and again, that’s only
  my impression; I could be completely wrong here, as I don’t consider myself
  experienced enough with the Open Source way of doing things.
  Would love to help in the future with some of the points I raised here.

 Thanks
 Gal.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development 

Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Haïkel
2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
 Hi everyone,

 TL;DR:
 - We propose to stop tagging coordinated point releases (like 2015.1.1)
 - We continue maintaining stable branches as a trusted source of stable
 updates for all projects though


Hi,

I'm one of the main maintainers of the packages for Fedora/RHEL/CentOS.
We try to stick as closely as possible to upstream (almost zero
downstream patches),
and without intermediate releases, that will get difficult.

I'm personally not fond of this, as it will lead to more fragmentation.
It may encourage
bad behaviors like shipping downstream patches for bug fixes and CVEs instead
of collaborating upstream, in order to differentiate themselves.
For instance, if we had no point-based releases, then for issue-tracking
purposes we would
have to maintain our own sets of tags somewhere.

There's also the release notes issue that has already been mentioned.
Still, continuous release notes won't solve the problem, as you wouldn't
be able to map them to the actual packages. Will we require operators
to find which git commit the packages were built from and then try to figure
out which fixes are and are not included?

 Long version:

 At the stable branch session in Vancouver we discussed recent
 evolutions in the stable team processes and how to further adapt the
 work of the team in a big tent world.

 One of the key questions there was whether we should continue doing
 stable point releases. Those were basically tags with the same version
 number (2015.1.1) that we would periodically push to the stable
 branches for all projects.

 Those create three problems.

 (1) Projects do not all follow the same versioning, so some projects
 (like Swift) were not part of the stable point releases. More and more
 projects are considering issuing intermediary releases (like Swift
 does), like Ironic. That would result in a variety of version numbers,
 and ultimately less and less projects being able to have a common
 2015.1.1-like version.


And it's actually a pain point to track which OpenStack branch these
releases belong to. And this is probably something that needs to
be resolved.

 (2) Producing those costs a non-trivial amount of effort on a very small
 team of volunteers, especially with projects caring about stable
 branches in various amounts. We were constantly missing the
 pre-announced dates on those ones. Looks like that effort could be
 better spent improving the stable branches themselves and keeping them
 working.


Agreed, but why not switch to time-based releases?
Regularly, we tag/generate/upload tarballs; this could even be automated.
As far as I'm concerned, I would be happier to have more frequent releases.
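
As a sketch of how little automation a time-based snapshot tag would need
(the date-based version scheme below is illustrative only):

    # Sketch only: tag the current stable-branch HEAD with a date-based
    # snapshot version, e.g. 2015.1.20150601.
    import datetime
    import subprocess

    def tag_stable_snapshot(repo, series='2015.1'):
        stamp = datetime.date.today().strftime('%Y%m%d')
        tag = '%s.%s' % (series, stamp)
        subprocess.check_call(
            ['git', '-C', repo, 'tag', '-a', tag, '-m',
             'stable snapshot %s' % tag])
        return tag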

 (3) The resulting stable point releases are mostly useless. Stable
 branches are supposed to be always usable, and the released version
 did not undergo significantly more testing. Issuing them actually
 discourages people from taking whatever point in stable branches makes
 the most sense for them, testing and deploying that.

 The suggestion we made during that session (and which was approved by
 the session participants) is therefore to just get rid of the stable
 point release concept altogether for non-libraries. That said:

 - we'd still do individual point releases for libraries (for critical
 bugs and security issues), so that you can still depend on a specific
 version there

 - we'd still very much maintain stable branches (and actually focus our
 efforts on that work) to ensure they are a continuous source of safe
 upgrades for users of a given series

 Now we realize that the cross-section of our community which was present
 in that session might not fully represent the consumers of those
 artifacts, which is why we expand the discussion on this mailing-list
 (and soon on the operators ML).


Thanks, I was not able to join this discussion, and that was the kind
of proposal
that I was afraid would happen.

 If you were a consumer of those and will miss them, please explain why.
 In particular, please let us know how consuming that version (which was
 only made available every n months) is significantly better than picking
 your preferred time and get all the current stable branch HEADs at that
 time.


We provide both types of builds:
* git continuous builds => for testing/CI and early feedback on potential issues
* point-release based builds => for GA and production

Anyway, I won't force anyone to do something they don't want to do, but I'm
willing to step in to keep point releases in one form or another.

Regards,
H.

 Thanks in advance for your feedback,

 [1] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch

 --
 Thierry Carrez (ttx)

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] library releases planned for Monday 1 June 2015

2015-05-29 Thread Doug Hellmann
We have several library projects outside of Oslo that need releases anyway, and 
since some of them are blocking work we want to do to drop the “oslo” namespace 
package, I have collected a list and will tag releases on Monday morning. I have 
confirmed all of these releases with the PTLs of the related projects, but I’m 
pre-announcing them here since there are several and communication is a Good 
Thing.

version sha project
---
3.2.0 b1eb91a python-barbicanclient
0.7.0 2ce7fd2 python-ironicclient
1.2.0 13429c8 python-manilaclient
2.6.0 da39d9e python-neutronclient
0.2.0 13b842c os-brick
0.1.35 163030c os-collect-config
1.3.0 cab38ce oslo.middleware
1.2.0 025191a python-troveclient

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Updates from the Summit

2015-05-29 Thread Joshua Harlow

Gorka Eguileor wrote:

On Thu, May 28, 2015 at 10:51:35AM -0700, Joshua Harlow wrote:

Btw, if you *do not* want to do a spec (using [1]) or blueprint, let me know
and I'll probably make a spec so that we can ensure (and agree on) the
semantics of this, because the specifics of what 'cancel' or 'abort' means
start to matter.

[1] https://github.com/openstack/oslo-specs/blob/master/specs/template.rst

-Josh



Cool!

I will review the patch and come up with a Spec about abort/cancel.
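
To make the semantics question concrete for that spec, here is a bare-bones
illustration of externally cancelling a long-running task with a plain
threading.Event; this is deliberately not TaskFlow's API, just the behaviour
a spec would need to pin down (what 'abort' means mid-step, cleanup, etc.):

    import threading
    import time

    def long_running_task(cancel):
        for step in range(100):
            if cancel.is_set():
                print('cancelled at step %d' % step)  # revert/cleanup would go here
                return
            time.sleep(0.1)  # one unit of work

    cancel = threading.Event()
    worker = threading.Thread(target=long_running_task, args=(cancel,))
    worker.start()
    cancel.set()    # the external abort/cancel request
    worker.join()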

New features discussed in the fishbowl will add considerable benefits
and probably facilitate its adoption in more projects.



Sounds great!

-Josh


Cheers,
Gorka.


Joshua Harlow wrote:

Feel free to add it, or open a blueprint, or make a spec to :)

Any of the above are ok with me.

I did start prototyping something @
https://review.openstack.org/#/c/184663/ (external cancellation) but it
might be going in the wrong direction (or a different direction); so
feel free to comment on that review if so...

-Josh

Gorka Eguileor wrote:

On Wed, May 27, 2015 at 08:47:42AM -0400, Davanum Srinivas wrote:

Hi Team,

Here are the etherpads from the summit[1].

I remember that in Taskflow's fishbowl session we discussed not only the
pause/yield option but abort/cancel for long-running tasks as well, but
reviewing the etherpad now I don't see it there.

Should I just add it to the Ideas for Liberty section, or is there a
specific reason why it wasn't included?

Cheers,
Gorka.


Some highlights are as follows:
Oslo.messaging : Took status of the existing zmq driver, proposed a
new driver in parallel to the existing zmq driver. Also looked at
possibility of using Pika with RabbitMQ. Folks from pivotal promised
to help with our scenarios as well.
Oslo.rootwrap : Debated daemon vs a new privileged service. The Nova
change to add rootwrap as daemon is on hold pending progress on the
privsep proposal/activity.
Oslo.versionedobjects : We had a nice presentation from Dan about what
o.vo can do and a deepdive into what we could do in next release.
Taskflow : Josh and team came up with several new features and how to
improve usability

We will also have several new libraries in Liberty (oslo.cache,
oslo.service, oslo.reports, futurist, automaton etc). We talked about
our release processes, functional testing, deprecation strategies and
debated a but about how best to move to async models as well. Please
see etherpads for detailed information.

thanks,
dims

[1] https://wiki.openstack.org/wiki/Design_Summit/Liberty/Etherpads#Oslo

--
Davanum Srinivas :: https://twitter.com/dims

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Georgy Okrokvertskhov
I believe the current app store approach uses only the image name rather than
the ID. So you can replace an image with a new one without affecting application
definitions. Both Heat and Murano can use just the image name without IDs.  I
think this should be supported in the artifact repository too.
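
A small sketch of that name-versus-ID pattern, using python-glanceclient's
v2 API (the endpoint, token and image name are placeholders):

    # Sketch only: resolve an image by name once, then pin the
    # immutable ID; replacing the image under the same name later does
    # not disturb anything already holding the old ID.
    from glanceclient import Client

    glance = Client('2', endpoint='http://glance.example.com:9292',
                    token='AUTH_TOKEN')
    matches = list(glance.images.list(filters={'name': 'ubuntu-14.04-patched'}))
    if len(matches) != 1:
        raise RuntimeError('expected exactly one image with that name')
    image_id = matches[0]['id']  # stable reference from here on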

As for entities other than Heat, Murano and Glance in the catalog, the
reason these types were selected is the current OpenStack infrastructure.
The application catalog uses standard OpenStack infrastructure for CI/CD, and we
know how to test Murano packages, Heat templates and Glance images via
tempest. It should be pretty straightforward to do this, and I believe it
was on the app catalog roadmap. For other artifact types it is not clear yet
how to do this CI/CD automation. I am not sure if we have OpenStack
infrastructure for ansible, puppet or chef testing.

Thanks
Gosha

On Fri, May 29, 2015 at 9:55 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  I've run into the opposite problem, though, with Glance. As an op, I
  really really want to replace one image with a new one atomically, with
  security updates pre-applied. Think shellshock, ghost, etc. It will
  basically be the same exact image as before, but patched. Referencing local
  IDs explicitly makes it harder to ensure things are patched, since new
  VMs booted after things are patched will tend to pop up with the old
  vulnerabilities.

 Thanks,
 Kevin

  --
 *From:* Alexander Tivelkov [ativel...@mirantis.com]
 *Sent:* Friday, May 29, 2015 9:24 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [new][app-catalog] App Catalog next steps

   Hi Kevin,

  I don't suggest using random IDs as artifact identifiers in the
 community app catalog. Of course we will need to have some globally unique
 names there (my initial idea on that is to have a combination of
 fully-qualified namespace-based name + version + signature) - and such
 names should be used to replicate artifacts across the cloud boundaries.

  By "referencing by ID" I mean only local referencing: when the
  artifact is already present in the cloud's local Glance (be it imported from
  the community catalog, copied from another cloud or uploaded directly), the
  particular service (Heat in our example) should be able to consume it by
  ID, same as Nova currently does with images.
  This has its own purpose: to guarantee objects' immutability. Once the user
  has selected an object in the local catalog, she may be sure that nobody
  will interfere and modify it, as the object itself is immutable and the ID
  is not reusable. If the object is referenced only by name, then it may be
  deleted and a different artifact with the same name may be uploaded
  instead, which may introduce a potential security issue. Using IDs will
  prevent such behavior.
  Fully qualified object names are still needed, of course - but it is
  Glance's job to locate an artifact based on its FQN and return the ID for
  it.
 At least, this was the design idea of the initial artifact concept.

  But that's off-topic here, as this concept relates only to
  local artifact repos, and the world-global app catalog has nothing to do with
  it.


  --
  Regards,
 Alexander Tivelkov

 On Fri, May 29, 2015 at 6:47 PM, Fox, Kevin M kevin@pnnl.gov wrote:

  Hi Alexander,

 Sweet. I'll have to kick the tires on the current state of Liberty soon.
 :)

  Referencing by artifact IDs is going to be problematic, I think. How do you
  release a generic set of resources to the world that reference specific,
  randomly generated IDs?

  What about by name? If not, then there will need to be some kind of mapping
  mechanism. :/

 Thanks,
 Kevin

  --
 *From:* Alexander Tivelkov [ativel...@mirantis.com]
 *Sent:* Friday, May 29, 2015 4:19 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [new][app-catalog] App Catalog next steps

Hi Kevin,

   Has the Glance Artifact Repository implemented enough bits to have
 Heat and/or Murano artefacts yet?


  Most of the code is there already; a couple of patchsets are still in
  review, but we'll land them soon. L1 is a likely milestone to have it ready
  in master.


   Also, has there been any work on Exporting/Importing them through some
 defined format (tarball?) that doesn't depend on the artefact type?


  This one is not completely implemented: the design is ready (the spec
  had this feature from the very beginning) and a PoC was done. The final
  implementation is likely to happen in the L cycle.


   I've been talking with the Heat folks on starting a blueprint to allow
 heat templates to use relative URL's instead of absolute ones. That would
 allow a set of Heat templates to be stored in one artefact in Glance.


  That's awesome.
  Also, I'd consider allowing Heat to reference templates by their artifact
  IDs in Glance, same as Nova does for images.



   --
 *From:* 

Re: [openstack-dev] [infra] [qa] A place for gate job contacts?

2015-05-29 Thread Anita Kuno
On 05/29/2015 10:15 AM, Doug Hellmann wrote:
 Excerpts from Anita Kuno's message of 2015-05-28 17:35:06 -0400:
 On 05/28/2015 04:46 PM, Jeremy Stanley wrote:
 On 2015-05-28 13:30:44 -0700 (-0700), Clint Byrum wrote:
 [...]
 Do we already have this hidden somewhere, or would it make sense to
 maybe add this as something in openstack-infra/project-config along side
 the jjb definition that creates the job/class of job somehow?
 [...]

 We don't (yet anyway). It's a little tricky since a job isn't
 necessarily a particular configuration element but rather often
 arises by instantiating a parameterized template. So the assembly of
 a particular template along with a specific set of parameters is
 what we might want to associate with a given contact. It's possible
 we could add a contact parameter to these so that it accompanies
 each job configuration as metadata and transform that into a
 responsible parties list somewhere easy to reference... but I expect
 there are plenty of alternative options for this. Also we should get
 input from the QA team as well (tag added) since they'd be one of
 the more frequent consumers of this information.

  I tend to default to the PTL of the repo on which the main job is
  running (for example, if the job runs on cinder, devstack-gate and
  tempest, and contains the word cinder, I would ask the cinder PTL for
  further direction on the job).

  This mostly works, and for me is preferable to a secondary file, which
  would be hard to keep updated.
 
 The jobs Clint is thinking of map more closely to the third-party CI
 jobs, where someone with specialty expertise on a messaging driver may
 need to get involved in debugging a functional test failure. IIRC,
 there's a wiki page for third-party CI owners, so maybe we should do
 something similar for these jobs.
 
 Doug
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Jim has created this patch; Clint, will this serve your purpose?

https://review.openstack.org/#/c/186852/

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Matt Riedemann



On 5/29/2015 10:00 AM, David Medberry wrote:


On Fri, May 29, 2015 at 7:41 AM, Thierry Carrez thie...@openstack.org wrote:

If you were a consumer of those and will miss them, please explain why.
In particular, please let us know how consuming that version (which was
only made available every n months) is significantly better than picking
your preferred time and get all the current stable branch HEADs at that
time.


If vendor packages are used (as many folks do) they will need to weigh
in before operators can really give valid feedback.
I've already heard from one vendor that they will continue to do
point-like releases that they will support, but we probably need a
more complete answer.

Another issue: operators pulling from stable will just need to do a bit
more diligence themselves (and this is probably appropriate). One thing
we will do in this diligence is track the rate of new bugs and look for
windows of opportunity where there may be semi-quiescence.


This, IMO, is about the only time right now that I see doing point 
releases on stable as worthwhile.  In other words, things have been very 
touchy in stable for at least the last 6 months, so the rare moments of 
stability in the gate on stable are when I'd cut a release, before the 
next gate breaker arrives.  You can see some examples of why here:


https://etherpad.openstack.org/p/stable-tracker



The other issue I'm aware of is that there will essentially be no
syncing across projects (except by the vendors). Operators using
upstream will need to do a better job (i.e., take on more burden) of
making sure all of the packages work together.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] Poll for IRC meeting

2015-05-29 Thread Christopher Aedo
I’ve started a doodle poll to vote on the initial IRC meeting
schedule. If you’re interested in helping improve and build up this
catalog, please vote for the day/time that works best and get involved!
http://doodle.com/vf3husyn4bdkui8w

The poll will close in one week (June 5).  Just sending this reminder
again as I've seen a lot of interest this week, so I think some of the
people on that thread might be interested in being part of this on an
ongoing basis.

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Georgy Okrokvertskhov
I think the next logical step is to add versioning and predefined tags so
we can distinguish application versions and application status. Right now
the catalog has just information about applications, without any strict
schema enforced.

The separate problem is how this information should be handled by
OpenStack services. Potentially it should be done by the Artifact repository
during the import phase. It will be nice to meet with the Glance team and
discuss this, so we can draft this import process and land something in the L
release.

Thanks
Gosha

On Fri, May 29, 2015 at 3:16 PM, Christopher Aedo ca...@mirantis.com
wrote:

 On Fri, May 29, 2015 at 9:55 AM, Fox, Kevin M kevin@pnnl.gov wrote:
  As an Op, I really
  really want to replace one image with a new one atomically with security
  updates preapplied. Think shellshock, ghost, etc. It will basically be
  the same exact image as before, but patched.

 On Fri, May 29, 2015 at 11:16 AM, Georgy Okrokvertskhov
 gokrokvertsk...@mirantis.com wrote:
  I believe that the current app store approach uses only the image name
  rather than the ID. So you can replace an image with a new one without
  affecting application definitions.

 In my opinion this is a pretty serious shortcoming with the app
 catalog as it stands right now.  There's no concept of versions for
 catalog assets, only whatever is put in with the asset name.  It's not
 obvious when the binary component of an asset has been replaced for
 instance.  Maybe the latest one has the security updates applied,
 maybe it doesn't?  If you are watching the repo you might catch it,
 but that's not very user friendly.  We are also unable to account for
 duplicate names currently (i.e. no protection against having two
 identically named glance images).

 I think the easiest way to handle at least the versions is by
 including additional information in the metadata.  If we eventually
 switch to using the artifacts implementation in glance, I think some
 of this is resolved, but a switch like that is a long way off.  Any
 thoughts on what we could do in the near term?

 -Christopher

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Haïkel
2015-05-29 21:36 GMT+02:00 Dave Walker em...@daviey.com:
 Responses inline.

 On 29 May 2015 6:15 pm, Haïkel hgue...@fedoraproject.org wrote:

 2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
  Hi everyone,
 
  TL;DR:
  - We propose to stop tagging coordinated point releases (like 2015.1.1)
  - We continue maintaining stable branches as a trusted source of stable
  updates for all projects though
 

 Hi,

 I'm one of the main maintainers of the packages for Fedora/RHEL/CentOS.
 We try to stick as much as possible to upstream (almost zero
 downstream patches),
 and without intermediate releases, it will get difficult.

 If you consider *every* commit to be a release, then your life becomes
 easier. This is just a case of bumping the SemVer patch version per commit
 (as eloquently put by Jeremy).  We even have tooling to automate the version
 generation via pbr.

 Therefore, you might want to jump from X.X.100 to X.X.200 which would mean
 100 commits since the last update.
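
(For illustration, a minimal sketch of reading a pbr-generated version at
runtime; the package name is only an example, and pbr is what derives the
dev suffix from the git state:)

    # Minimal sketch: reading the pbr-generated version of an installed
    # package. The package name "nova" is only an example.
    from pbr import version

    info = version.VersionInfo("nova")
    print(info.version_string())   # e.g. "2015.1.1"
    print(info.release_string())   # full version, including any .devN suffix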


We have had continuous builds for every commit on master for a while now, and
it's been a great tool with CI to get early feedback (missing deps, integration
issues, etc.).
We could easily reuse that platform to track stable branches.
We could easily reuse that platform to track stable branches.

The problem is that the downstream QA/CI cycle of a package could be much
longer than the time between two commits, so we'd end up jamming updates.
I'd rather not drop downstream QA, as it tests integration bits, and that is
unlikely to be something that could be done upstream.


 I'm personally not fond of this as it will lead to more fragmentation.
 It may encourage
 bad behaviors like shipping downstream patches for bug fixes and CVEs
 instead
 of collaborating upstream to differentiate themselves.
 For instance, if we had no point-based release, for issues tracking
 purposes, we would
 have to maintain our sets of tags somewhere.

 I disagree; each distro already does security patching, and whilst I expect
 this to still happen, it actually *encourages* an upstream-first workflow, as
 you can select a release on your own cadence that includes the commits you
 need, for your users.


If they choose to rebase upon stable branches, you could also cherry-pick.

 There's also the release notes issue that has already been mentioned.
 Still, continuous release notes won't solve the problem, as you wouldn't
 be able to map these to the actual packages. Will we require operators
 to find from which git commit the packages were built and then try to
 figure out which fixes are and are not included?

 Can you provide more detail? I'm not understanding the problem.


A release version makes it easy to know what fixes are shipped in a package.
If you rebase on stable branches, then you can just put the git sha1sum
(though it's not very friendly) in the version, and leverage git branch
--contains to find out if your fix is included.
Some distributors may choose to use their own release scheme, adding
complexity to this simple but common problem.
Others may choose to cherry-pick, which adds more complexity than the
previous scenario.

Let's say you're an operator and you want to check whether a CVE fix is
shipped on all your nodes: if you can't check with just the release version,
it will be complicated. It could be a barrier for heterogeneous systems.
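
(A minimal sketch of the check described above, assuming a local clone and
that the package version embeds a git sha1; the paths and sha1 used here are
illustrative:)

    import subprocess

    def fix_included(repo_path, commit_sha, branch="stable/kilo"):
        # Return True if commit_sha is reachable from branch.
        out = subprocess.run(
            ["git", "-C", repo_path, "branch", "--all",
             "--contains", commit_sha],
            capture_output=True, text=True, check=True,
        ).stdout
        return any(branch in line for line in out.splitlines())

    print(fix_included("/srv/git/nova", "abc123de"))  # illustrative values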

  Long version:
 
  At the stable branch session in Vancouver we discussed recent
  evolutions in the stable team processes and how to further adapt the
  work of the team in a big tent world.
 
  One of the key questions there was whether we should continue doing
  stable point releases. Those were basically tags with the same version
  number (2015.1.1) that we would periodically push to the stable
  branches for all projects.
 
  Those create three problems.
 
  (1) Projects do not all follow the same versioning, so some projects
  (like Swift) were not part of the stable point releases. More and more
  projects are considering issuing intermediary releases (like Swift
  does), like Ironic. That would result in a variety of version numbers,
  and ultimately less and less projects being able to have a common
  2015.1.1-like version.
 

 And it's actually a pain point to track which OpenStack branch these
 releases belong to. And this is probably something that needs to be
 resolved.

  (2) Producing those costs a non-trivial amount of effort on a very small
  team of volunteers, especially with projects caring about stable
  branches in various amounts. We were constantly missing the
  pre-announced dates on those ones. Looks like that effort could be
  better spent improving the stable branches themselves and keeping them
  working.
 

 Agreed, but why not switch to a time-based release?
 Regularly, we tag/generate/upload tarballs, this could even be automated.
 As far as I'm concerned, I would be more happy to have more frequent
 releases.

  (3) The resulting stable point releases are mostly useless. Stable
  branches are supposed to be always usable, and the released version
  did 

Re: [openstack-dev] [all][release] summit session summary: Release Versioning for Server Applications

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 17:38:04 -0400 (-0400), Doug Hellmann wrote:
[...]
 This will result in resetting the version numbers to values lower
 than they are currently (12  2015), but the distributions can
 prepend an epoch value to their version number to ensure upgrades
 work properly.
[...]

Also for those who have been around since at least Essex, you'll
recall that we've previously done this for some of our software (for
example python-novaclient 2012.1 followed by 2.6.0), so distros who
have been packaging us for that long or longer have already been
through this to some degree and should probably not be entirely
surprised.
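
(To make the epoch point concrete, a small sketch using python-apt's version
comparison; it assumes python-apt is installed:)

    import apt_pkg

    apt_pkg.init()
    # Without an epoch, the old date-based version sorts higher,
    # which would break upgrades:
    assert apt_pkg.version_compare("2015.1.2-1", "12.0.0-1") > 0
    # Prepending an epoch of 1 makes the new, lower-numbered version win:
    assert apt_pkg.version_compare("1:12.0.0-1", "2015.1.2-1") > 0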
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat-translator] [heat] IRC Channel for heat-translator

2015-05-29 Thread Sahdev P Zala
Hi everyone,

I just wanted to let you know that the heat-translator project now has IRC 
Channel in place - #openstack-heat-translator

Thanks!

Regards, 
Sahdev Zala 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Renaming the IRC channel to #openstack-puppet

2015-05-29 Thread Mike Dorman
+1 Let’s do it.


From: Matt Fischer
Reply-To: OpenStack Development Mailing List (not for usage questions)
Date: Friday, May 29, 2015 at 1:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] Renaming the IRC channel to 
#openstack-puppet

I would love to do this. +2!

On Fri, May 29, 2015 at 1:39 PM, Mathieu Gagné mga...@iweb.com wrote:
Hi,

We recently asked for our IRC channel (#puppet-openstack) to be logged
by the infra team. We happen to be the only channel suffixing the word
openstack instead of prefixing it. [1]

I would like to propose renaming our IRC channel to #openstack-puppet
to better fit the mold (convention) already in place and be more
intuitive for newcomers to discover.

Jeremy Stanley (fungi) explained to me that previous IRC channel renames
were done following the Ubuntu procedure. [2] The last rename I remember
was #openstack-stable to #openstack-release, and it went smoothly without
any serious problem.

What do you guys think about the idea?

[1] http://eavesdrop.openstack.org/irclogs/
[2] https://wiki.ubuntu.com/IRC/MovingChannels

Note: I already registered the channel name as a safety measure.

--
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] inactive projects

2015-05-29 Thread Joe Gordon
Hi All,

It turns out we have a few repositories that have been inactive for almost
a year and appear to be dead, but don't have any documentation reflecting
that. (And a lot more have been dead for over half a year.)


format: days since last updated - name

List of stackforge that have not been updated in almost a year.

816 stackforge/MRaaS
732 stackforge/occi-os
713 stackforge/bufunfa
437 stackforge/milk

We also have one openstack project that appears to be inactive and possibly
dead:

171 openstack/kite


If you know if one of these repos is not maintained, please update the
README to reflect the state of the repo. For example:
https://git.openstack.org/cgit/stackforge/fuel-ostf-plugin/tree/README.rst

Source code: http://paste.openstack.org/show/246039/
Raw output: http://paste.openstack.org/show/246060
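
(A rough sketch of this kind of staleness check; this is not Joe's actual
script, and the repo paths are illustrative:)

    import subprocess
    import time

    def days_since_last_commit(repo_path):
        # %ct = committer timestamp of the most recent commit
        ts = int(subprocess.run(
            ["git", "-C", repo_path, "log", "-1", "--format=%ct"],
            capture_output=True, text=True, check=True,
        ).stdout.strip())
        return int((time.time() - ts) // 86400)

    for repo in ("stackforge/bufunfa", "openstack/kite"):
        print(days_since_last_commit("/srv/git/" + repo), repo)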
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] IRC meetings agenda is now driven from Gerrit !

2015-05-29 Thread Tony Breeds
On Fri, May 29, 2015 at 10:42:58AM +0530, yatin kumbhare wrote:
 Great Work!
 
 New meetings or meeting changes would be proposed in Gerrit, and
 check/gate tests would make sure that there aren't any conflict.
 
  --Will this tell us upfront which meeting slots (time and IRC meeting
 channels) are available on any given day of the week? This could bring
 down the number of patchsets and gate test failures, maybe :)

That'd be an interesting feature, but in reality it's trivial to run yaml2ical
locally against the irc-meetings repo (which you already need in order to
propose the change).  Or you can just push the review and see if it fails.

 There are some meetings with a weekly interval of 2; I assume we would get
 the iCal by SUMMARY?

I'm not sure I follow what you're asking here.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [new][app-catalog] App Catalog next steps

2015-05-29 Thread Christopher Aedo
On Fri, May 29, 2015 at 9:55 AM, Fox, Kevin M kevin@pnnl.gov wrote:
 As an Op, I really
 really want to replace one image with a new one atomically with security
 updates preapplied. Think shellshock, ghost, etc. It will basically be
 the same exact image as before, but patched.

On Fri, May 29, 2015 at 11:16 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:
 I believe that the current app store approach uses only the image name
 rather than the ID. So you can replace an image with a new one without
 affecting application definitions.

In my opinion this is a pretty serious shortcoming with the app
catalog as it stands right now.  There's no concept of versions for
catalog assets, only whatever is put in with the asset name.  It's not
obvious when the binary component of an asset has been replaced for
instance.  Maybe the latest one has the security updates applied,
maybe it doesn't?  If you are watching the repo you might catch it,
but that's not very user friendly.  We are also unable to account for
duplicate names currently (i.e. no protection against having two
identically named glance images).

I think the easiest way to handle at least the versions is by
including additional information in the metadata.  If we eventually
switch to using the artifacts implementation in glance, I think some
of this is resolved, but a switch like that is a long way off.  Any
thoughts on what we could do in the near term?
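
(To make that concrete, a sketch of what richer asset metadata might look
like; no schema was settled at the time, so every field name here is
hypothetical:)

    # Hypothetical richer metadata for a catalog asset; all field names
    # are illustrative, not an agreed app-catalog schema.
    asset = {
        "name": "redis-cluster",
        "version": "1.2.0",           # version of the asset itself
        "binary_sha256": "0123abcd",  # placeholder digest, to spot a
                                      # silently replaced image
        "updated": "2015-05-29",
        "notes": "rebuilt with shellshock/ghost fixes applied",
    }
    print(asset["name"], asset["version"])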

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [qa] A place for gate job contacts?

2015-05-29 Thread Clint Byrum
Excerpts from Anita Kuno's message of 2015-05-29 10:51:35 -0700:
 On 05/29/2015 10:15 AM, Doug Hellmann wrote:
  Excerpts from Anita Kuno's message of 2015-05-28 17:35:06 -0400:
  On 05/28/2015 04:46 PM, Jeremy Stanley wrote:
  On 2015-05-28 13:30:44 -0700 (-0700), Clint Byrum wrote:
  [...]
  Do we already have this hidden somewhere, or would it make sense to
  maybe add this as something in openstack-infra/project-config alongside
  the jjb definition that creates the job/class of job somehow?
  [...]
 
  We don't (yet anyway). It's a little tricky since a job isn't
  necessarily a particular configuration element but rather often
  arises by instantiating a parameterized template. So the assembly of
  a particular template along with a specific set of parameters is
  what we might want to associate with a given contact. It's possible
  we could add a contact parameter to these so that it accompanies
  each job configuration as metadata and transform that into a
  responsible parties list somewhere easy to reference... but I expect
  there are plenty of alternative options for this. Also we should get
  input from the QA team as well (tag added) since they'd be one of
  the more frequent consumers of this information.
 
  I tend to default to the ptl of the repo on which the main job is
  running (for example if the job runs on cinder, devstack-gate and
  tempest, and contains the word cinder, I would ask the cinder ptl for
  further direction on the job).
 
  This works mostly and for me is preferable to a secondary file which
  would be hard to keep updated.
  
  The jobs Clint is thinking of map more closely to the third-party CI
  jobs, where someone with specialty expertise on a messaging driver may
  need to get involved in debugging a functional test failure. IIRC,
  there's a wiki page for third-party CI owners, so maybe we should do
  something similar for these jobs.
  
  Doug
  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 Jim has created this patch; Clint, will this serve your purpose?
 
 https://review.openstack.org/#/c/186852/

+1, this looks like a way to do that. I will see about adding some
language to the messaging drivers policy that requires that people +1
or submit a review that adds their information as a contact macro for
the job(s) that meet the policy requirements for gate testing.

Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Please test each backport of the heat templates before commit

2015-05-29 Thread Steven Dake (stdake)
If you are backporting heat templates for Kubernetes, please test each commit 
with a bay create followed by the creation of a redis application.  For an 
example redis application, check out my demo at:

https://github.com/stackforge/kolla/tree/master/demos/magnum

Master is currently broken, I assume because of the few backports that have hit 
the repo.  Debugging definitely showed a problem with the templates.

See:
https://bugs.launchpad.net/magnum/+bug/1460232

For more details.

This points out gaps in our functional gate.  We need our functional gate to do 
something simple like launch the redis RC/service/POD and make sure it enters 
the running state.
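
(A hedged sketch of such a check; it assumes kubectl is available on the
gate node and that redis-master.yaml, an illustrative file name, defines
the pod:)

    import subprocess
    import time

    # Create the redis resources, then poll until the pod reports Running.
    subprocess.run(["kubectl", "create", "-f", "redis-master.yaml"],
                   check=True)

    deadline = time.time() + 600
    while time.time() < deadline:
        pods = subprocess.run(["kubectl", "get", "pods"],
                              capture_output=True, text=True,
                              check=True).stdout
        if any("redis" in line and "Running" in line
               for line in pods.splitlines()):
            break
        time.sleep(10)
    else:
        raise SystemExit("redis pod never entered the Running state")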

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] gate failing sporadically

2015-05-29 Thread Steven Dake (stdake)
Hey Folks,

I noticed the Kolla functional gate is failing sporadically.

It seems that sometimes an image doesn’t build.

http://logs.openstack.org/75/186475/1/check/check-kolla-functional-f21/8f23913/console.html

One thing that looks off there is that barbican is not building a --release
image (i.e. tag = latest).  Shouldn’t it?

The other failures I’ve noticed follow the same pattern.  Is there a problem in 
our build scripts?

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] dashboard-app split in horizon

2015-05-29 Thread Tripp, Travis S
Hi all,

I just added my 2 cents to a couple of these patches, but in my opinion all of 
the existing patches in gerrit for angular tests that were written as part of 
the launch instance effort should be finalized and merged prior to any further 
reorganization patches going through.

Creating a phased chain of patches is nice for seeing the steps, but as we 
saw earlier this week with a breakage, even with smaller steps the 
reorganization is making a lot of changes that require a lot of verification. I 
would be a lot more comfortable if we could get the outstanding test patches 
into the system.  I would like to see the tests passing at each stage of the 
reorganization to help avoid corner case breakages. Can we make a concerted 
effort to review those and get them in prior to more reorganization?

I realize file globbing, etc. will make the listing of patches easier to do, but 
maybe the outstanding test patches should get a chain of dependencies to avoid 
merge conflicts themselves?

Thanks,
Travis

From: Richard Jones r1chardj0...@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, May 28, 2015 at 12:55 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Horizon] dashboard-app split in horizon

I have now submitted the patch to do the html shenanigans as a new step in the 
current ngReorg pipeline (inserted because it should fix a test failure popping 
up in the dashboard-app move final step).

https://review.openstack.org/#/c/186295/

Ryan's API reorg should probably depend on this patch as well, since that 
should fix *its* test failures too.

And before anyone says anything, no I'm not particularly thrilled about the new 
horizon/test/templates/base.html but frankly I'm not sure how else to make it 
work. We could probably cull the JS from that file though. I'm pretty sure none 
of the django unit tests exercise JS, and I believe Selenium works off a 
different interface (but I've run out of time today to investigate).


 Richard


On Thu, 28 May 2015 at 02:15 Thai Q Tran tqt...@us.ibm.com wrote:
Yes Rob, you are correct. ToastService was something Cindy wrote to replace 
horizon.alert (aka messages). We can't remove it because legacy still uses it.

-Rob Cresswell (rcresswe) rcres...@cisco.com wrote: -
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
From: Rob Cresswell (rcresswe) rcres...@cisco.com
Date: 05/26/2015 11:29PM

Subject: Re: [openstack-dev] [Horizon] dashboard-app split in horizon

Went through the files myself and I concur. Most of these files define pieces 
specific to our implementation of the dashboard, so should be moved.

I’m not entirely sure where _messages should sit. As we move forward, won’t 
that file just end up as a toast element and nothing more? Maybe I’m 
misinterpreting it; I’m not familiar with toastService.

Rob


From: Richard Jones r1chardj0...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, 26 May 2015 01:35
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: Johanson, Tyr H t...@hp.com
Subject: Re: [openstack-dev] [Horizon] dashboard-app split in horizon

As a follow-up to this [in the misguided hope that anyone will actually read 
this conversation with myself ;-)] I've started looking at the base.html split. 
At the summit last week, we agreed to:

1. move base.html over from the framework to the dashboard, and
2. move the _conf.html and _scripts.html over as well, since they configure the 
application (dashboard).

Upon starting the work it occurs to me that all of the other files referenced 
by base.html should also move. So, here's the complete list of base.html 
components and whether they should move over in my opinion:

- horizon/_custom_meta.html
  Yep, is an empty file in horizon, intended as an extension point in 
dashboard. The empty file (plus an added comment) should move.
- horizon/_stylesheets.html
  Is just a dummy in horizon anyway, should move.
- horizon/_conf.html
  Yep, should move.
- horizon/client_side/_script_loader.html
  Looks to be a framework component not intended for override, so we should 
leave it there.
- horizon/_custom_head_js.html
  Yep, is an empty file in horizon, intended as an extension point in 
dashboard. Move, with a comment added.
- horizon/_header.html
  There is a basic implementation in framework but the real (used) 
implementation is in dashboard, so should move.
- horizon/_messages.html
  This is a 

[openstack-dev] [nova] Progressing/tracking work on libvirt / vif drivers

2015-05-29 Thread Neil Jerram

Hi all,

Per yesterday's IRC meeting [1], and discussion in Vancouver, Nova work 
is being somewhat driven by the etherpad at [2].  But this etherpad 
doesn't have a section for libvirt / vif driver changes.  The log at [1] 
briefly touched on this, but moved on after noting that Dan PB had 
disbanded a libvirt subteam for lack of interest.


So, what should folk interested in libvirt / vif work (including me) now 
do?  I think the answer is that we should self-organize, then report to 
the next IRC on how we've done that, and I'm happy to lead that if no 
one else wants to - but is there someone else who should or wants to own 
this?


Thanks,
Neil


[1] 
http://eavesdrop.openstack.org/meetings/nova/2015/nova.2015-05-28-14.02.log.html

[2] https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Ian Cordasco


On 5/29/15, 12:14, Haïkel hgue...@fedoraproject.org wrote:

2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
 Hi everyone,

 TL;DR:
 - We propose to stop tagging coordinated point releases (like 2015.1.1)
 - We continue maintaining stable branches as a trusted source of stable
 updates for all projects though


Hi,

I'm one of the main maintainers of the packages for Fedora/RHEL/CentOS.
We try to stick as much as possible to upstream (almost zero
downstream patches),
and without intermediate releases, it will get difficult.

Can you expound on why this is difficult? I believe you, but I want to
understand it better.

I'm personally not fond of this as it will lead to more fragmentation.

Could you explain this as well? Do you mean fragmentation between what
distros are offering? In other words, Ubuntu is packaging Kilo @ SHA1 and
RHEL is at SHA2. I'm not entirely certain that's a bad thing. That seems
to give the packagers more freedom.

It may encourage
bad behaviors like shipping downstream patches for bug fixes and CVEs
instead
of collaborating upstream to differentiate themselves.

Perhaps I'm wrong, but when a CVE is released, don't the downstream
packagers usually patch whatever version they have and push that out?
Isn't that the point of them being on a private list to receive embargoed
notifications with the patches?

For instance, if we had no point-based release, for issues tracking
purposes, we would
have to maintain our sets of tags somewhere.

But, if I understand correct, downstream sometimes has patches they apply
(or develop) to ensure the package is rock solid on their distribution.
Those aren't always relevant upstream so you maintain them. How is this
different?

There's also the release notes issue that has already been mentioned.
Still, continuous release notes won't solve the problem, as you wouldn't
be able to map these to the actual packages. Will we require operators
to find from which git commit the packages were built and then try to
figure out which fixes are and are not included?

I think this is wrong. If it's a continuously updated set of notes, then
whatever SHA the head of stable/X is at will be the correct set of notes
for that branch. If you decide to package a SHA earlier than that, then
you would need to do this, but I'm not sure why you would want to package
a SHA that isn't at the HEAD of that branch.


 Long version:

 At the stable branch session in Vancouver we discussed recent
 evolutions in the stable team processes and how to further adapt the
 work of the team in a big tent world.

 One of the key questions there was whether we should continue doing
 stable point releases. Those were basically tags with the same version
 number (2015.1.1) that we would periodically push to the stable
 branches for all projects.

 Those create three problems.

 (1) Projects do not all follow the same versioning, so some projects
 (like Swift) were not part of the stable point releases. More and more
 projects are considering issuing intermediary releases (like Swift
 does), like Ironic. That would result in a variety of version numbers,
 and ultimately less and less projects being able to have a common
 2015.1.1-like version.


And it's actually a pain point to track which OpenStack branch these
releases belong to. And this is probably something that needs to
be resolved.

Well there's been a lot of discussion around not integrating releases at
all. That said, I'm not sure I disagree. Coordinating release numbers is
fine. Coordinating release dates seems less so, especially since they
prevent the project from delivering what it's promised so that it can
manage to get something that's super stable by an arbitrary date.


 (2) Producing those costs a non-trivial amount of effort on a very small
 team of volunteers, especially with projects caring about stable
 branches in various amounts. We were constantly missing the
 pre-announced dates on those ones. Looks like that effort could be
 better spent improving the stable branches themselves and keeping them
 working.


Agreed, but why not switch to a time-based release?
Regularly, we tag/generate/upload tarballs, this could even be automated.
As far as I'm concerned, I would be more happy to have more frequent
releases.

A time-based release would probably consist of just tagging whatever SHA
is at the HEAD of that branch, especially if it's automated. And if it is
automated, the release notes will mostly just be the one-line log info
between the last release and HEAD (kind of like the oslo release notes).
That seems to me to be something that y'all could do just as easily as we
could.
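
(For example, an automated time-based point release could be little more
than the following sketch; the branch and tag names are illustrative:)

    import subprocess

    def tag_and_notes(repo, branch="stable/kilo",
                      last_tag="2015.1.1", new_tag="2015.1.2"):
        # Tag the branch HEAD and return oslo-style one-line release notes.
        git = ["git", "-C", repo]
        subprocess.run(git + ["checkout", branch], check=True)
        subprocess.run(git + ["tag", "-m", "Release " + new_tag, new_tag],
                       check=True)
        return subprocess.run(
            git + ["log", "--oneline",
                   "{}..{}".format(last_tag, new_tag)],
            capture_output=True, text=True, check=True,
        ).stdout

    print(tag_and_notes("/srv/git/nova"))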


 (3) The resulting stable point releases are mostly useless. Stable
 branches are supposed to be always usable, and the released version
 did not undergo significantly more testing. Issuing them actually
 discourages people from taking whatever point in stable branches makes
 the most sense for them, testing and deploying 

Re: [openstack-dev] [Ironic] [TC] Discussion: changing Ironic's release model

2015-05-29 Thread Doug Hellmann
Excerpts from Thomas Goirand's message of 2015-05-29 15:11:25 +0200:
 On 05/28/2015 06:41 PM, Devananda van der Veen wrote:
  Hi all,
  
  tl;dr;
  
  At the summit, the Ironic team discussed the challenges we've had with
  the current release model and came up with some ideas to address them.
  I had a brief follow-up conversation with Doug and Thierry, but I'd
  like this to be discussed more openly and for us (the Ironic dev
  community) to agree on a clear plan before we take action.
  
  If Ironic moves to a release:independent model, it shouldn't have any
  direct effect on other projects we integrate with -- we will continue
  to follow release:at-6mo-cycle-end -- but our processes for how we get
  there would be different, and that will have an effect on the larger
  community.
 
 Just a quick follow-up to voice the Distro's opinion (I'm sure other
 package maintainers will agree with what I'm going to write).
 
  It's fine if you use whatever release schedule you want. Though
  what's important for us, the downstream distros, is that you make sure to
  release a longer-term version at the same time as everything else, and
  that you make sure security follow-up is done properly. It's really
  important that you clearly identify the versions for which you will do
  security patch backports, so that we don't (by mistake) release a stable
  version of a given distribution with something you won't ever give
  long-term support for.
 
  I'm sure you guys know that already, but better safe than sorry, and
  other less experienced projects may find the above useful.

We do still plan to use stable branches for those supported releases, so
as long as you're pulling from the right branch when packaging it should
work fine, right?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Dave Walker
Responses inline.

On 29 May 2015 6:15 pm, Haïkel hgue...@fedoraproject.org wrote:

 2015-05-29 15:41 GMT+02:00 Thierry Carrez thie...@openstack.org:
  Hi everyone,
 
  TL;DR:
  - We propose to stop tagging coordinated point releases (like 2015.1.1)
  - We continue maintaining stable branches as a trusted source of stable
  updates for all projects though
 

 Hi,

 I'm one of the main maintainers of the packages for Fedora/RHEL/CentOS.
 We try to stick as much as possible to upstream (almost zero
 downstream patches),
 and without intermediate releases, it will get difficult.

If you consider *every* commit to be a release, then your life becomes
easier. This is just a case of bumping the SemVer patch version per commit
(as eloquently put by Jeremy).  We even have tooling to automate the
version generation via pbr.

Therefore, you might want to jump from X.X.100 to X.X.200 which would mean
100 commits since the last update.

 I'm personally not fond of this as it will lead to more fragmentation.
 It may encourage
 bad behaviors like shipping downstream patches for bug fixes and CVEs
instead
 of collaborating upstream to differentiate themselves.
 For instance, if we had no point-based release, for issues tracking
 purposes, we would
 have to maintain our sets of tags somewhere.

I disagree; each distro already does security patching, and whilst I expect
this to still happen, it actually *encourages* an upstream-first workflow, as
you can select a release on your own cadence that includes the commits you
need, for your users.

 There's also the release notes issue that has already been mentioned.
 Still, continuous release notes won't solve the problem, as you wouldn't
 be able to map these to the actual packages. Will we require operators
 to find from which git commit the packages were built and then try to
 figure out which fixes are and are not included?

Can you provide more detail? I'm not understanding the problem.

  Long version:
 
  At the stable branch session in Vancouver we discussed recent
  evolutions in the stable team processes and how to further adapt the
  work of the team in a big tent world.
 
  One of the key questions there was whether we should continue doing
  stable point releases. Those were basically tags with the same version
  number (2015.1.1) that we would periodically push to the stable
  branches for all projects.
 
  Those create three problems.
 
  (1) Projects do not all follow the same versioning, so some projects
  (like Swift) were not part of the stable point releases. More and more
  projects are considering issuing intermediary releases (like Swift
  does), like Ironic. That would result in a variety of version numbers,
  and ultimately less and less projects being able to have a common
  2015.1.1-like version.
 

 And it's actually a pain point to track which OpenStack branch these
 releases belong to. And this is probably something that needs to
 be resolved.

  (2) Producing those costs a non-trivial amount of effort on a very small
  team of volunteers, especially with projects caring about stable
  branches in various amounts. We were constantly missing the
  pre-announced dates on those ones. Looks like that effort could be
  better spent improving the stable branches themselves and keeping them
  working.
 

 Agreed, but why not switch to a time-based release?
 Regularly, we tag/generate/upload tarballs, this could even be automated.
 As far as I'm concerned, I would be more happy to have more frequent
releases.

  (3) The resulting stable point releases are mostly useless. Stable
  branches are supposed to be always usable, and the released version
  did not undergo significantly more testing. Issuing them actually
  discourages people from taking whatever point in stable branches makes
  the most sense for them, testing and deploying that.
 
  The suggestion we made during that session (and which was approved by
  the session participants) is therefore to just get rid of the stable
  point release concept altogether for non-libraries. That said:
 
  - we'd still do individual point releases for libraries (for critical
  bugs and security issues), so that you can still depend on a specific
  version there
 
  - we'd still very much maintain stable branches (and actually focus our
  efforts on that work) to ensure they are a continuous source of safe
  upgrades for users of a given series
 
  Now we realize that the cross-section of our community which was present
  in that session might not fully represent the consumers of those
  artifacts, which is why we expand the discussion on this mailing-list
  (and soon on the operators ML).
 

 Thanks, I was not able to join this discussion, and that was the kind
 of proposal that I was afraid to see happen.

  If you were a consumer of those and will miss them, please explain why.
  In particular, please let us know how consuming that version (which was
  only made available every n months) is significantly 

[openstack-dev] [puppet] Renaming the IRC channel to #openstack-puppet

2015-05-29 Thread Mathieu Gagné
Hi,

We recently asked for our IRC channel (#puppet-openstack) to be logged
by the infra team. We happen to be the only channel suffixing the word
openstack instead of prefixing it. [1]

I would like to propose renaming our IRC channel to #openstack-puppet
to better fit the mold (convention) already in place and be more
intuitive for newcomers to discover.

Jeremy Stanley (fungi) explained to me that previous IRC channel renames
were done following the Ubuntu procedure. [2] The last rename I remember
was #openstack-stable to #openstack-release, and it went smoothly without
any serious problem.

What do you guys think about the idea?

[1] http://eavesdrop.openstack.org/irclogs/
[2] https://wiki.ubuntu.com/IRC/MovingChannels

Note: I already registered the channel name as a safety measure.

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [stable] No longer doing stable point releases

2015-05-29 Thread Dave Walker
On 29 May 2015 7:41 pm, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
SNIP

 This, IMO, is about the only time right now that I see doing point
releases on stable as worthwhile.  In other words, things have been very
touchy in stable for at least the last 6 months, so in the rare moments of
stability with the gate on stable is when I'd cut a release before the next
gate breaker.  You can get some examples of why here:
SNIP

I disagree this would help things, as every commit that lands through the
gate was perfectly functional at that time by definition.

It is usually in retrospect that the gate fails to pass, with the usual
case being changes to bound direct or indirect dependencies.

If we grab a recent point release tarball and put it back through the gate
with the same declared requirement bounds, it will still fail (even
though it passed when the release was cut).

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-05-29 Thread Joshua Harlow

John Garbutt wrote:

On 27 May 2015 at 23:36, Robert Collins robe...@robertcollins.net  wrote:

On 26 May 2015 at 03:37, Chris Friesen chris.frie...@windriver.com  wrote:

Hi all,

I've just opened a bug around booting multiple instances at once, and it was
suggested on IRC that I mention it here to broaden the discussion around the
ideal behaviour.

The bug is at:  https://bugs.launchpad.net/nova/+bug/1458122

Basically the problem is this:

When booting up instances, nova allows the user to specify a min count and
a max count.  So logically, this request should be considered successful
if at least min count instances can be booted.

Currently, if the user has quota space for max count instances, then nova
will try to create them all. If any of them can't be scheduled, then the
creation of all of them will be aborted and they will all be put into an
error state.
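
(For reference, the request shape under discussion, sketched with
python-novaclient; the credentials, endpoint, and resource names are
illustrative:)

    from novaclient import client

    nova = client.Client("2", "user", "password", "project",
                         "http://keystone.example.com:5000/v2.0")
    image = nova.images.find(name="cirros")
    flavor = nova.flavors.find(name="m1.small")

    # Ask for at least 2 and at most 5 instances. Today, if fewer than 5
    # can be scheduled, all of them end up in an ERROR state; the proposal
    # is to keep going whenever at least min_count can be scheduled.
    nova.servers.create("worker", image, flavor, min_count=2, max_count=5)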


The new quota ideas we discussed should make other options for this a
lot simpler, I think:
https://review.openstack.org/#/c/182445/
But lets skip over that for now...


Arguably, if nova was able to schedule at least min count instances (which
defaults to 1) then it should continue on with creating those instances that
it was able to schedule. Only if nova cannot create at least min count
instances should nova actually consider the request as failed.

Also, I think that if nova can't schedule max count instances, but can
schedule at least min count instances, then it shouldn't put the
unscheduled ones into an error state--it should just delete them.

I think taking successfully provisioned VMs and rolling them back is
poor when the user's request was strictly met; I'm in favour of your
proposals.


The problem here is having a nice way to explicitly tell the users
about what worked and what didn't. Currently the instance is put in an
error state because that's the best way we have to tell the user that the
build failed. Deleting instances doesn't have the same visibility; it can
look like they just vanished.

We do have a (straw man) proposed solution for this. See the Task API
discussion here:
https://etherpad.openstack.org/p/YVR-nova-error-handling

Given this also impacts discussions around cancelling operations like
live-migrate, I would love for a sub group to form and push forward
the important work on building a Task API. I think Andrew Laski has
committed to writing up a backlog spec for this current proposal (that
has gained a lot of support), so it could be taken on by some others
who want to move this forward. Do you fancy getting involved with
that?


+1 to all the above for tasks API(s)




Having said all that, I am very tempted to say we should deprecate the
min_count parameter in the API, keep the current behaviour for old
version requests, and maybe even remove the max_count parameter. We
could look to Heat to do a much better job of this kind of
orchestration. This is very much in the spirit of:
http://docs.openstack.org/developer/nova/devref/project_scope.html#no-more-orchestration



What about the EC2 API (I think/thought that's the reason the nova API 
has these parameters in the first place?)


https://github.com/openstack/nova/blob/stable/icehouse/nova/api/ec2/cloud.py#L1279

Something to think about: maybe the EC2 API working group/project will 
be handling that anyway in the end (via heat?).


Neat history (from git blame)...

- https://github.com/openstack/nova/commit/1f99e500a99
- https://github.com/openstack/nova/commit/b1a08af4
- https://github.com/openstack/nova/commit/1188dd
-  day zero



Either which way, given the impact of the bug fix (i.e. it touches the
API, and would probably need a micro version bump), I think it would
be great to actually write up your proposal as a nova-spec (backlog or
targeted at liberty, either way is cool). I think a spec review would
be a great way to reach a good agreement on the best approach here.


Chris, does that sounds like an approach that would work for you?


Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Renaming the IRC channel to #openstack-puppet

2015-05-29 Thread Matt Fischer
I would love to do this. +2!

On Fri, May 29, 2015 at 1:39 PM, Mathieu Gagné mga...@iweb.com wrote:

 Hi,

 We recently asked for our IRC channel (#puppet-openstack) to be logged
 by the infra team. We happen to be the only channel suffixing the word
 openstack instead of prefixing it. [1]

 I would like to propose renaming our IRC channel to #openstack-puppet
 to better fit the mold (convention) already in place and be more
 intuitive for newcomers to discover.

 Jeremy Stanley (fungi) explained to me that previous IRC channel renames
 were done following the Ubuntu procedure. [2] The last rename I remember
 was #openstack-stable to #openstack-release, and it went smoothly without
 any serious problem.

 What do you guys think about the idea?

 [1] http://eavesdrop.openstack.org/irclogs/
 [2] https://wiki.ubuntu.com/IRC/MovingChannels

 Note: I already registered the channel name as a safety measure.

 --
 Mathieu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] oslo namespace changes and murano-dashboard

2015-05-29 Thread Georgy Okrokvertskhov
Hi,

We do monitor both IRC and openstack-dev; #murano is our IRC channel. Most
of the time there is someone who can respond quickly.  The Mirantis and
Telefonica folks are located in Europe, so they will respond during the US
morning but not during the US day. The HP folks are US-based, so they can
respond more quickly in US timezones.

As for the releases, murano-dashboard is installed from master on the
gates, so it should already be compatible with the oslo changes, but
python-muranoagent is release-based, and we need to release a new version
and upload it to the pip repository, as it is installed via pip.

Thanks
Gosha

On Fri, May 29, 2015 at 9:34 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-05-29 12:28:37 -0400 (-0400), Doug Hellmann wrote:
  First, where do you hang out on IRC? I couldn’t find a channel
  that looked like it had any population, and there’s nothing
  mentioned on https://wiki.openstack.org/wiki/Murano
 [...]


 https://wiki.openstack.org/wiki/IRC says it's #murano (though I
 haven't /joined to see if anyone actually uses it, I would assume
 they do).
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-05-29 Thread Jay Pipes

On 05/29/2015 05:04 AM, John Garbutt wrote:

On 29 May 2015 at 09:00, Sahid Orentino Ferdjaoui
sahid.ferdja...@redhat.com wrote:

On Fri, May 29, 2015 at 08:47:01AM +0200, Jens Rosenboom wrote:

As the discussion in https://review.openstack.org/179569 still
continues about whether this is just a bug fix, or an API change
that will need a new micro version, maybe it makes sense to take this
issue over here to the ML.


Changing the version of the API probably makes sense for a bug fix too, if
it changes the behavior of a command/option in a backward-incompatible
way. I do not believe that is the case for your change.


My personal opinion is undecided; I can see either option as being
valid, but maybe after having this bug open for four weeks now we can
come to some conclusion either way.


Apologies for this; we are still trying to evolve the rules for when
to bump the API micro versions, and there will be some pain while we work
that out :(


 From the summit discussion, I think we got three things roughly agreed
(although we have not yet got them written up into a devref document
to make the agreement formal, and we need to do that ASAP):

1)
We agreed changing a 500 error to an existing error (or making it
succeed in the usual way) is a change that doesn't need a version
bump, its a bug fix.

2)
We also agreed that all micro version bumps need a spec, to help avoid
us adding more bad things to the API as we try to move forward.
This is heavy weight. In time, we might find certain good patterns
where we want to relax that restriction, but we haven't done enough
changes to agree on those patterns yet. This will mean we are moving a
bit slower at first, but it feels like the right trade off against
releasing (i.e. something that lands in any commit on master) an API
with a massive bug we have to support for a long time.

3)
Discuss other cases as they come up, and evolve the plans based on the
examples that arise, with a focus on bumping the version being
(almost) free, and useful for clients to work out what has changed.

Is that how everyone else remembers that discussion?


Yes.


Now, when it comes to your change: it is a bug in the default policy.
Sadly this policy is also quite hard-wired to admin vs non-admin. We
still need work to make policy more discoverable, so I don't think we
need to make this any more discoverable as such.

Having said all that, we probably need to look at this case more
carefully, after your patch has merged, and work out how this should
work now we assuming strong validation, and granular policy, etc.


Actually, after reading the IRC conversation between Dims and Sean, I 
believe Sean is right to want a microversion bump for this patch. If two 
clouds are deployed, one with this patch and another without, a client 
issuing commands to both will have no idea whether the ip6 filter will 
be considered or not. Having a microversion increment in the patch would 
allow clients to request behaviour they want (the ip6 filter).
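
(Concretely, a client could pin the behaviour it wants with the
microversion header; the version number carrying the fix is hypothetical,
and the endpoint and token are illustrative:)

    import requests

    resp = requests.get(
        "http://nova.example.com/v2.1/servers",
        headers={
            "X-Auth-Token": "TOKEN",
            # Hypothetical microversion that includes the ip6-filter fix.
            "X-OpenStack-Nova-API-Version": "2.5",
        },
        params={"ip6": "2001:db8::1"},
    )
    print(resp.status_code)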


Best,
-jay


But maybe there is something massive here?


Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Need volunteers for tempest bug triage

2015-05-29 Thread David Kranz
The rotation has gotten a bit thin, and the untriaged bug count is growing, 
with no one signing up for this past week:


https://etherpad.openstack.org/p/qa-bug-triage-rotation

It would help if every core reviewer could be doing this every other 
month. Getting some more sign-ups would be very helpful!


 -David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Jeremy Stanley
On 2015-05-28 22:45:37 + (+), Fox, Kevin M wrote:
 You could pass the cache through with a volume

Yeah, from the "what can we do with our current CI infrastructure?"
perspective, we would just need a way to identify what bits benefit
from being cached for these particular builds and then we would bake
them into our worker base images just like we do to pre-cache all
sorts of other resources used by all our jobs.

The trick is to first understand what mechanisms we already have in
the OpenStack CI to handle performance and isolation, and then try
not to propose new solutions that redundantly reimplement them.
Hopefully as we get more project infrastructure team members
involved with this we can weed out the unnecessary additional
complexities which are being suggested in the initial brainstorming
process.

 As for what docker buys you, it would allow a vm (or my desktop)
 running centos (as an example) to build packages for multiple
 distro's using the distro's own native tools. I see that as a
 plus.

I think the question was really not "what good is isolation?" but
rather, "how is a container any better than a chroot for this
purpose?"
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Jeremy Stanley
On 2015-05-28 23:09:41 +0200 (+0200), Thomas Goirand wrote:
[...]
 Also, it is my understanding that infra will not accept using
 long-lived VMs, and prefers to spawn new instances.
[...]

Right, after we run arbitrary user-submitted code on a server, we
cease to be able to trust it and so immediately delete and replace
it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 10:37:43 +0100 (+0100), Derek Higgins wrote:
[...]
 I think the feature in delorean that is most useful is that it
 will continue to maintain a history of usable package repositories
 representing the openstack projects over time. For this we would
 need a long-running instance, but that can happen outside of
 infra.
[...]

It could potentially even happen in our infrastructure, as long as
the package builds happen on different machines from the servers
serving the results of those builds. We don't eschew all
long-running hosts, just long-running job workers.

However, whether OpenStack's project infrastructure wants to be
responsible for serving package download repositories for things it
doesn't use is a separate discussion entirely. To me, the main
benefits are that you have somewhere to collaborate on packaging
work with a code review system, and our CI can further assist you in
that effort by providing feedback on whether or not a proposed
packaging change is able to successfully build a package (and
perhaps even whether that package can be installed and exercised in
some arbitrary way).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] summit session summary: Moving Apps to Python 3

2015-05-29 Thread Doug Hellmann
This message is a summary of the discussion about enabling integration testing 
under Python 3 from the summit. The full notes are in 
https://etherpad.openstack.org/p/liberty-cross-project-python3 and the spec is 
up at https://review.openstack.org/#/c/177375/

tl;dr: Most of our libraries now support Python 3, and run their unit tests 
there. We have several application-level projects ready to start porting their 
code to Python 3 as well. Running unit tests for the application code under 
Python 3 will be an important step, but won’t be sufficient to claim Python 3 
support. The proposed changes to devstack allow each application project to add 
a test job to run their code under Python 3 when they think they are ready, 
allowing applications to port one at a time.
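
As a rough illustration of the unit-test side of this, assuming a project with 
the usual tox layout, the Python 3.4 job boils down to a tox environment whose 
name selects the interpreter:

    # illustrative tox.ini fragment: tox derives the python3.4 interpreter
    # from the environment name, while deps and commands are inherited
    # from the project's base [testenv] section
    [testenv:py34]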


We agreed that we need to keep supporting Python 2.7 and 3.x simultaneously for 
some time, until we are at a point where we can reliably tell deployers to 
shift most/all of their stack. We are explicitly putting off discussion of more 
detailed criteria for that until we actually have *any* project that runs fully 
on Python 3.

We agreed that we would only support one version of python 3 at a time. Some 
distros may choose to package 3.5 when it is available (soon), but we agreed 
that we would continue working with python 3.4 because it is more widely 
supported on our platforms (directly, or through supported extra repositories). 
Ubuntu has an issue with their 3.4 package due to missing a backport that 
causes segfaults under some circumstances. The packagers are aware of the 
issue, and zul is going to work with them to raise the priority of updating the 
package. If the update isn’t released in a reasonable amount of time, we can 
fall back to testing on Debian Jessie instead, but that will take some work 
from the infra team.

We agreed to use environment markers in our packaging metadata to control 
version-specific requirements, and remove the use of version-specific 
requirements*.txt files. The current release of pbr understands environment 
markers in setup.cfg, but we still need to update projects with 
version-specific requirements to list their dependencies there instead of 
requirements.txt and to fix up the global-requirements tools to update 
setup.cfg instead of the requirements files. I think lifeless has signed up for 
some of that work, but maybe not all of it?
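
For example, a Python-2-only dependency can be declared once using the standard 
environment-marker syntax that pbr and pip understand, instead of being 
duplicated in a version-specific requirements file (the entries below are 
illustrative, not taken from any particular project):

    # markers are evaluated at install time, so one list serves both interpreters
    futures>=3.0;python_version=='2.7'
    enum34;python_version=='2.7'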

We agreed to move to PyMySQL as our database driver, since the one we are using 
now appears to be unmaintained and does not support Python 3. sdague has a 
series of patches up for devstack to make this change, starting with 
https://review.openstack.org/#/c/184489 and we will need to update the 
dependencies of oslo.db as well.
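
Concretely, the switch amounts to changing the driver prefix in the SQLAlchemy 
connection URL (credentials and host below are placeholders):

    from sqlalchemy import create_engine

    # old: the C-based MySQL-Python driver, which does not support Python 3
    # engine = create_engine('mysql://nova:secret@127.0.0.1/nova')

    # new: the pure-Python PyMySQL driver
    engine = create_engine('mysql+pymysql://nova:secret@127.0.0.1/nova')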

We still need to approve the spec (https://review.openstack.org/#/c/177375/) 
and I need to set up the job template so that projects can turn the job on when 
they are ready to try it out.

Please let me know if you think I missed any details, or misunderstood 
something as agreed to. Or, of course, if you weren’t there and see something 
wrong with the plan after looking at it now.

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Dmitry Borodaenko
On Fri, May 29, 2015 at 1:48 PM Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-05-28 23:09:41 +0200 (+0200), Thomas Goirand wrote:
  Also, it is my understanding that infra will not accept using
  long-lived VMs, and prefers to spawn new instances.

 Right, after we run arbitrary user-submitted code on a server, we cease to
 be able to trust it and so immediately delete and replace it.


I think this is unnecessarily maximalist. Trust is not an all-or-nothing boolean
flag: why can't you trust that server, at the same level of trust, to do more
work and run another batch of user-submitted code?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-05-29 Thread Jeremy Stanley
On 2015-05-28 23:19:36 +0200 (+0200), Thomas Goirand wrote:
[...]
 By the way, I was thinking about the sbuild package caching system, and
 thought: how about network mounting /var/cache/pbuilder/aptcache using
 something like Manila (or any other distributed filesystem)? Does infra
 have such tooling in place?

We pre-cache resources onto the local filesystems of the images used
to boot our job workers, and update those images daily.

 What would be the distributed filesystem of choice in such a case?

We have an AFS cell which we could consider using for this
eventually. We're working on using it as a distributed backend for
package mirrors, git repositories, documentation and potentially
lots of other things. However, as we've seen repeatedly, any actions
in a job which require network access (even locally to other servers
in the same cloud provider/region) have a variable percentage of
failures associated with network issues. The more you can avoid
depending on network resources, the better off your jobs will be.

 Also, could we set up approx somewhere? Or do we have Debian and Ubuntu
 mirrors available within infra?
[...]

There is a separate effort underway to maintain distributed
multi-distro package mirrors in all our providers/regions like we
already do for our PyPI mirrors. As mentioned above, it will likely
be updated in one place on a writeable AFS volume, automatically
consistency-checked, and then released to the read-only volumes
published from each of the mirror servers.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Renaming the IRC channel to #openstack-puppet

2015-05-29 Thread Steve Martinelli
Good call; it's always at the bottom of my IRC client. We might want to 
change the channel topic to remind folks to move to #openstack-puppet.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:   Mathieu Gagné mga...@iweb.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date:   05/29/2015 03:46 PM
Subject:[openstack-dev] [puppet] Renaming the IRC channel to 
#openstack-puppet



Hi,

We recently asked for our IRC channel (#puppet-openstack) to be logged
by the infra team. We happen to be the only channel suffixing the word
openstack instead of prefixing it. [1]

I would like to propose renaming our IRC channel to #openstack-puppet
to better fit the convention already in place and be more intuitive
for newcomers to discover.

Jeremy Stanley (fungi) explained to me that previous IRC channel renames
were done following the Ubuntu procedure. [2] The last rename I remember
was #openstack-stable to #openstack-release, and it went smoothly without
any serious problems.

What do you guys think about the idea?

[1] http://eavesdrop.openstack.org/irclogs/
[2] https://wiki.ubuntu.com/IRC/MovingChannels

Note: I have already registered the channel name as a precaution.

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Availability of device names for operations with volumes and BDM and other features.

2015-05-29 Thread Nikola Đipanov
On 05/29/2015 12:55 AM, Feodor Tersin wrote:
 Nikola, I would add a few words to Alexandre's response.
 
 We (the standalone ec2api project guys) have filed some bugs (the main
 one is [1]), but we don't know how to fix them, since the direction
 Nova's device names are moving in is unclear to us. Neither the BP nor
 the wiki you've mentioned above explains what happened to device names
 in images.
 Another bug, which we filed based on the results of the bdm v2
 implementation [2], was resolved, but the fix returns only two devices
 (even if more than two volumes are defined in the image) instead of
 writing the device names to the image and returning the full bdm.
 
 I hope you can clarify this question (Alexandre referred to the patch
 that explicitly eliminates device names for images).
 
 Also, you mentioned that we can still use bdm v1. We do that now for
 instance launch, but we would like to switch to v2 to use new features
 like blank volumes, which AWS provides as well (see the sketch after
 the references below). However, a v2-based launch has a suspicious
 behaviour which I asked about on the ML [3], but no one answered.
 
 [1] https://bugs.launchpad.net/nova/+bug/1370177
 [2] https://bugs.launchpad.net/nova/+bug/1370265
 [3] http://lists.openstack.org/pipermail/openstack-dev/2015-May/063769.html
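
For the blank-volume case mentioned above, a minimal sketch of a bdm v2 launch
via python-novaclient might look like the following (all credentials, names,
and IDs are hypothetical placeholders; this only illustrates the v2 format,
not the fix under discussion):

    from novaclient import client

    # hypothetical credentials and endpoint
    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')

    IMAGE_ID = '11111111-2222-3333-4444-555555555555'  # hypothetical image
    FLAVOR_ID = '42'                                   # hypothetical flavor

    # bdm v2: boot from an image-backed volume plus an AWS-style blank volume
    bdm_v2 = [
        {'source_type': 'image', 'destination_type': 'volume',
         'uuid': IMAGE_ID, 'volume_size': 10, 'boot_index': 0,
         'delete_on_termination': True},
        {'source_type': 'blank', 'destination_type': 'volume',
         'volume_size': 5, 'boot_index': -1},  # -1 marks it non-bootable
    ]

    server = nova.servers.create('test-vm', image=None, flavor=FLAVOR_ID,
                                 block_device_mapping_v2=bdm_v2)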
 

Hey Feodor and Alexandre - Thanks for the detailed information!

I have already commented on some of the bugs above and provided a small
patch that I think fixes one bit of it. As described on the bug - device
names might be a bit trickier, but I hope to have something posted next
week.

Help with testing (while patches are in review) would be hugely appreciated!

On 05/28/2015 02:24 PM, Alexandre Levine wrote:
 1. RunInstance. Change parameters of devices while booting an instance
 from an image. In Grizzly this worked: we could specify a changed BDM
 in the parameters; it overwrote the one coming from the image in the
 nova DB, and the instance then started with the new parameters. The
 only key for addressing devices in this use case is the device name
 itself. And now we don't have it for the volumes in the BDM coming from
 the image, because nova stopped putting this information into the image.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html

 2. Device names for Xen-backed instances to work fully. We should be
 able to specify the required device names during initial instance
 creation; they should be stored into an image when the instance is
 snapshotted; we should be able to fetch info and change parameters of
 such a volume during subsequent operations; and the device names
 inside the instance should match exactly.

 3. DescribeInstances and DescribeInstanceAttributes to return BDM with
 device names ideally corresponding to the actual device naming in the
 instance.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html


http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceAttribute.html


 4. DescribeImages and DescribeImageAttributes to return BDM with device
 names ideally corresponding to the ones in the instance before snapshotting.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html


http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImageAttribute.html


I think all of the above is pretty much covered by
https://bugs.launchpad.net/nova/+bug/1370177


 5. AttachVolume with the specified device name.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AttachVolume.html

 6. ModifyInstanceAttribute with BDM as parameter.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyInstanceAttribute.html


 7. ModifyImageAttribute with BDM as parameter.

http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyImageAttribute.html

I am not sure about these 3 cases. Would it be possible to actually
report bugs for them? I don't think I have enough information otherwise.

N.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Renaming the IRC channel to #openstack-puppet

2015-05-29 Thread Jeremy Stanley
On 2015-05-29 15:39:20 -0400 (-0400), Mathieu Gagné wrote:
[...]
 Jeremy Stanley (fungi) explained to me that previous IRC channel renames
 were done following the Ubuntu procedure. [2] Last rename I remember of
 was #openstack-stable to #openstack-release and it went smoothly without
 any serious problem.
[...]

Well, after revisiting my notes, it was actually #openstack-packaging
that we moved to #openstack-stable (not the other way around), but
yeah, the only real blocker was that we hadn't previously been
granting +s to our admins. We do that now, so we don't need to worry
about that step. I provided a bit of a blow-by-blow in
https://launchpad.net/bugs/1360324 for those who may be curious. If
there's consensus, I'm happy to help with the bits that require
operator/admin intervention; just let me know.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][bgpvpn] IRC meetings on BGP VPN interconnection API

2015-05-29 Thread Vikram Choudhary
Hi Thomas/Mathieu,

Thanks for starting this mail thread. Let's discuss on IRC as
suggested by Paul.

Thanks
Vikram

On 5/29/15, Paul Michali p...@michali.net wrote:
 You can use the VPNaaS IRC channel/time... we don't have much on the agenda
 right now, other than discussing VPN flavors for Liberty, where it would
 be good to discuss BGP VPN and Edge VPN.

 Regards,

 Paul Michali (pc_m)

 On Fri, May 29, 2015 at 11:08 AM thomas.mo...@orange.com wrote:

 Hi everyone,

 As a follow-up to discussions last week on a BGP VPN interconnection API
 and the work started with the people already involved, we are going to
 hold IRC meetings to discuss how to progress the different pieces of
 work, in particular on the API itself [1] and its implementation+drivers
 [2].

 The slot we propose is ** Tuesday 15:00 UTC ** with the first meeting
 next Tuesday (June 2nd).

 Note that, based on last week feedback, we submitted the existing
 stackforge project for inclusion in the Neutron big tent earlier this
 week [3].

 We will do a proper meeting registration (patch to openstack-infra
 irc-meeting) and send meeting info with wiki and meeting room before
 next Tuesday.

 Looking forward to discussing with everyone interested!

 -Thomas  Mathieu

 [1] currently being discussed at https://review.openstack.org/#/c/177740
 [2] https://github.com/stackforge/networking-bgpvpn
 [3] https://review.openstack.org/#/c/186041









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Puppet] Proposed Change in Nova Service Defaults

2015-05-29 Thread Richard Raseley

Matt Fischer wrote:

Was the intent here to allow bringing up new
compute hosts without them being enabled? If so, there's another flag
that could be set to manage that state.


Mathieu posited the following intent in the review:

It was used in some active/passive setup (as stated in the bug report) 
where service state was managed by an external cluster/resource manager.


I think it is reasonable that people would manage the state of their 
nodes via the composition layer, but it is also reasonable that we 
might want to put an additional option in place.


I'd love to hear more input on that.


As for the patch itself, we need to change it for all the other services
in nova too, not just the API.


Agreed. I will see if Matt wants to do that work and, if not, will be 
happy to do it over the weekend.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

