[openstack-dev] [astara] PTL non-candidacy

2016-03-19 Thread Adam Gandelman
Hi All-

I've decided that I will not be running for Astara PTL this coming cycle.
It's been great helping the project grow and find its place within the
OpenStack tent over the last 8 months or so.  We're a small group of
developers, and I think it's important to stir the pot and let a
contributor from another organization help drive the next 6 months.  I've
gotten verbal confirmation that someone from another company will be
stepping up to volunteer to take the reins, so do be on the lookout for their
PTL candidacy email soon.

Thanks again and see you all in Austin,
Adam


[openstack-dev] [astara] Outstanding mitaka-2 bugs

2016-01-18 Thread Adam Gandelman
Hi All-

I mentioned this in today's meeting but wanted to drop a note here since
most of us are out on a US holiday...

This week is Mitaka-2 [1] and we still have quite a few outstanding bugs
that need resolution. The good news is that (I think) all of them
have had patches up for review for some time, and they should all be in pretty
good shape for merging. Some are less trivial than others, with the most
complex being the remaining pieces of the dynamic management addresses work
(bug #1524068); see the gerrit topic [2].

I'd like to ask core reviewers to make this backlog a priority for the
first part of the week, so we can get our M2 tags pushed to all our repos
before the end of the week.  There are also a number of non-critical patches
up that we should try to clear out as well. It'd be great to get a good
burn-down of reviews/bugs going this week and dedicate the remainder of
this cycle to finishing up the feature blueprints we've been planning
for Mitaka.

Thanks!
Adam

[1] https://launchpad.net/astara/+milestone/mitaka-2
[2] https://review.openstack.org/#/q/topic:bug/1524068+and+branch:master


Re: [openstack-dev] [astara][requirements] astara-appliance has requirements not in global-requirements

2015-12-15 Thread Adam Gandelman
Thanks for the heads up, Andreas. I've opened
https://bugs.launchpad.net/astara/+bug/1526527 and hope to resolve it in
the coming days.

Cheers
Adam


On Sun, Dec 13, 2015 at 3:26 AM, Andreas Jaeger  wrote:

> Astara team,
>
> The requirements proposal job complains about astara-appliance with:
> 'gunicorn' is not in global-requirements.txt
>
> Please get this requirement into global-requirements or remove it.
>
> Details:
>
> https://jenkins.openstack.org/job/propose-requirements-updates/602/consoleFull
> http://docs.openstack.org/developer/requirements/
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


Re: [openstack-dev] [astara] contributor meetup during Mitaka summit

2015-10-27 Thread Adam Gandelman
So after chatting with folks it looks like the best and only time for us to
get everyone together will be tomorrow (Thursday) AM.  Let's plan on
meeting in the developer lounge in the design summit area (downstairs of
the Grand Prince Hotel Takanawa) at 10AM Thursday.

Topics we hope to cover have been noted on the etherpad [1], but feel free
to add more if there's something you'd like to discuss.

Cheers,
Adam

[1] https://etherpad.openstack.org/p/akanda-mitaka-planning


On Mon, Oct 26, 2015 at 7:00 PM, Adam Gandelman <gandelma...@gmail.com>
wrote:

> Should we shoot for a contributor meetup Friday in the developer lounge,
> alongside the other project meetups?  Does this *not* work for anyone?  Is
> afternoon better than the AM?  I propose we meet before lunch and carry on
> into the afternoon if necessary.  Please jot your name down on the etherpad
> [1] with your preferences; we should finalize this today so we don't
> double-book ourselves.
>
> Cheers
> Adam
>
> [1] https://etherpad.openstack.org/p/akanda-mitaka-planning
>


[openstack-dev] [astara] contributor meetup during Mitaka summit

2015-10-26 Thread Adam Gandelman
Should we shoot for a contributor meetup Friday in the developer lounge,
alongside the other project meetups?  Does this *not* work for anyone?  Is
afternoon better than the AM?  I propose we meet before lunch and carry on
into the afternoon if necessary.  Please jot your name down on the etherpad
[1] with your preferences; we should finalize this today so we don't
double-book ourselves.

Cheers
Adam

[1] https://etherpad.openstack.org/p/akanda-mitaka-planning


[openstack-dev] [astara] Announcing the Astara Liberty release (formerly Akanda)

2015-10-23 Thread Adam Gandelman
Hi All-

The Astara team is pleased to announce the availability of the Astara
Liberty release.  You can find a list of closed bugs, implemented
blueprints and release tarballs on launchpad at
https://launchpad.net/astara/+milestone/7.0.0.  We are proud of what we've
accomplished this release and look forward to another productive run during
the Mitaka cycle.

Our stable/liberty branches have been created and any critical bugs
discovered should be filed at http://bugs.launchpad.net/astara and brought
to developers' attention with a 'backport-potential' tag.

It's worth noting here that we are in the process of renaming the project
from Akanda to Astara.  We are about halfway there at the moment, with
package names, console scripts and some of the code updated, but we
haven't completely renamed the code repositories.  We hope to get this
completed when we return from Tokyo and apologize in advance for any
confusion in the meantime.

Thanks and we hope to see many of you in Tokyo,
Adam


Re: [openstack-dev] [stable][releases] OpenStack 2014.2.3 released

2015-04-13 Thread Adam Gandelman
Correction. Also included in the list of released projects for 2014.2.3,
Sahara: https://launchpad.net/sahara/juno/2014.2.3

Apologies,
Adam

On Mon, Apr 13, 2015 at 10:30 AM, Adam Gandelman gandelma...@gmail.com
wrote:

 Hello everyone,

 The OpenStack Stable Maintenance team is happy to announce the release
 of the 2014.2.3 stable Juno release.  We have been busy reviewing and
 accepting backported bugfixes to the stable/juno branches according
 to the criteria set at:

 https://wiki.openstack.org/wiki/StableBranch

 A total of 109 bugs have been fixed across all projects. These
 updates to Juno are intended to be low risk with no
 intentional regressions or API changes. The list of bugs, tarballs and
 other milestone information for each project may be found on Launchpad:

 https://launchpad.net/ceilometer/juno/2014.2.3
 https://launchpad.net/cinder/juno/2014.2.3
 https://launchpad.net/glance/juno/2014.2.3
 https://launchpad.net/heat/juno/2014.2.3
 https://launchpad.net/horizon/juno/2014.2.3
 https://launchpad.net/keystone/juno/2014.2.3
 https://launchpad.net/nova/juno/2014.2.3
 https://launchpad.net/neutron/juno/2014.2.3
 https://launchpad.net/trove/juno/2014.2.3

 Release notes may be found on the wiki:

 https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.3

 The freeze on the stable/juno branches will be lifted today as we
 begin working toward the 2014.2.4 release.

 Thanks,
 Adam




[openstack-dev] [stable][releases] OpenStack 2014.2.3 released

2015-04-13 Thread Adam Gandelman
Hello everyone,

The OpenStack Stable Maintenance team is happy to announce the
2014.2.3 stable Juno release.  We have been busy reviewing and
accepting backported bugfixes to the stable/juno branches according
to the criteria set at:

https://wiki.openstack.org/wiki/StableBranch

A total of 109 bugs have been fixed across all projects. These
updates to Juno are intended to be low risk with no
intentional regressions or API changes. The list of bugs, tarballs and
other milestone information for each project may be found on Launchpad:

https://launchpad.net/ceilometer/juno/2014.2.3
https://launchpad.net/cinder/juno/2014.2.3
https://launchpad.net/glance/juno/2014.2.3
https://launchpad.net/heat/juno/2014.2.3
https://launchpad.net/horizon/juno/2014.2.3
https://launchpad.net/keystone/juno/2014.2.3
https://launchpad.net/nova/juno/2014.2.3
https://launchpad.net/neutron/juno/2014.2.3
https://launchpad.net/trove/juno/2014.2.3

Release notes may be found on the wiki:

https://wiki.openstack.org/wiki/ReleaseNotes/2014.2.3

The freeze on the stable/juno branches will be lifted today as we
begin working toward the 2014.2.4 release.

Thanks,
Adam


Re: [openstack-dev] [qa][nova][ironic] How to run microversions tests on the gate

2015-04-08 Thread Adam Gandelman
FWIW, the Ironic microversion test patch mentioned on gerrit is only
targeted at Tempest because that's where the API tests currently live and
where our infra is set up to run them. The eventual goal is to move all of
tempest.api.baremetal.* to the Ironic tree, and there's no reason why those
proposed new tests couldn't move as well.

Those tests were designed to allow running against all available
microversions or some configured subset, and to ensure tests for previous
microversions run against newer ones.  I think it's perfectly feasible to test
many microversions, in tree or out, provided test coverage is kept
sufficiently up to date as the APIs evolve.
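
To make that concrete, here is a rough sketch of what "run the same check
against a configured set of microversions" can look like.  It is purely
illustrative: the endpoint, the version list and the test itself are
assumptions, not the actual Tempest/Ironic test code.

  # Illustrative sketch only; endpoint and version list are assumptions.
  # Exercise the same API call against a configured set of Ironic
  # microversions, selected via the version request header.
  import requests

  IRONIC_URL = 'http://127.0.0.1:6385'   # assumed local devstack endpoint
  MICROVERSIONS = ['1.1', '1.6']         # e.g. the lowest and biggest configured

  def list_nodes(version):
      headers = {'X-OpenStack-Ironic-API-Version': version}
      resp = requests.get(IRONIC_URL + '/v1/nodes', headers=headers)
      resp.raise_for_status()
      return resp.json()['nodes']

  def test_list_nodes_across_microversions():
      # tests written for older microversions should keep passing on newer ones
      for version in MICROVERSIONS:
          assert isinstance(list_nodes(version), list)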

Adam

On Wed, Apr 8, 2015 at 7:45 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 04/08/2015 05:24 AM, Sean Dague wrote:

 On 04/08/2015 07:38 AM, Dmitry Tantsur wrote:

 On 04/08/2015 12:53 PM, Sean Dague wrote:

 On 04/08/2015 03:58 AM, Dmitry Tantsur wrote:

 On 04/08/2015 06:23 AM, Ken'ichi Ohmichi wrote:

 Hi,

 Now Nova and Ironic have implemented API microversions in Kilo.
 Nova's microversions are v2.1 - v2.3.
 Ironic's microversions are v1.1 - v1.6.

 Now Tempest is testing the lowest microversion on the gate, and
 Ironic's microversion test patch [1] is on gerrit.
 Before merging the patch, I'd like to propose a consistent way to test
 microversions of Nova and Ironic.

 My suggestion is that the target microversions for testing are:
 * the lowest microversion
 * the biggest microversion, but without using the keyword latest in the
 header; these microversion tests would run in different gate
 jobs.

 The lowest microversion is already tested on check-tempest-dsvm-full
 or something, so this proposes just to add the biggest microversion
 job like check-tempest-dsvm-full-big-microversion.

 [background]
 In the long term, these microversions will continue increasing and it is
 difficult to run Tempest for all microversions on the gate because of
 the test workload. So I feel we need to select which microversions are
 tested on the gate for efficient testing.

 [the lowest microversion]
 In the microversion mechanism, if a client *doesn't* specify a
 microversion in its request header, a Nova/Ironic server treats the
 request as the lowest microversion. So the lowest microversion is the
 default behavior and important. I think we need to test it at least.

 [the biggest microversion]
 In the microversion mechanism, if a client specifies the keyword latest in
 its request header instead of a microversion, a Nova/Ironic server works
 with the biggest microversion behavior.
 During development, there is a time lag between each project's dev and
 Tempest dev. After a new API is added to a project, corresponding tests
 are added to Tempest in most cases. So if the keyword
 latest is specified, Tempest would not handle the request/response and
 would fail, because Tempest cannot catch the latest API changes until the
 corresponding Tempest patch is merged.
 So it is necessary to have a target microversion config option in
 Tempest and pass the specific biggest microversion to Tempest with
 openstack-infra/project-config.

 Any thoughts?


 Hi! I've already stated this point in #openstack-ironic and I'd like to
 reiterate: testing only the lowest and the highest microversions
 essentially means (or at least might mean) that the others are broken. At
 least in Ironic only some unit tests actually touch code paths for
 versions 1.2-1.5. As we really can't test too many versions, I suggest
 we stop producing a microversion for every API feature change in
 L. No idea what to do with 1.2-1.5 now except for politely asking people
 not to use them :D


 Tempest shouldn't be the *only* test for a project API. The projects
 themselves need to take some ownership for their API getting full
 coverage with in tree testing, including whatever microversion strategy
 they are employing.


 Agreed, but in-tree testing is also not feasible with too many versions.
 Even now we have 7 (1.0-1.6); if it continues, we'll have not less than
 12 after L, 18 after M, etc. And we have to test every one of them for
 regressions at least occasionally, provided that we don't start to
 aggressively deprecate microversions. If we do start, then we'll start
 breaking people even more often than we should. E.g. if someone writes
 a tool targeted at 1.1 and we deprecate 1.1 in the M cycle, the tool will
 break, though maybe it could actually work with the new API.


 I do not understand how in-tree testing is not feasible. In tree you
 have insight into all the branching that occurs within the code, so you
 can very clearly understand what paths aren't possible. It should be a lot
 more straightforward than external black-box testing, where that can't be
 assumed.


 Exactly.

 The whole *point* of microversions was to allow the APIs to evolve in a
 backwards-compatible, structured and advertised way. The evolution of the
 APIs response and request payloads should be tested fully for each
 microversion added to the codebase -- in tree.

 -jay


 

[openstack-dev] [stable] Call for testing: 2014.2.3 candidate tarballs

2015-04-02 Thread Adam Gandelman
Hi all,

We are scheduled to publish 2014.2.3 on Thursday April 9th for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron, Nova,
Sahara and Trove.

We'd appreciate anyone who could test the candidate 2014.2.3 tarballs, which
include all changes aside from any pending freeze exceptions:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-juno.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-juno.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-juno.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-juno.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-juno.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-juno.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-juno.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-juno.tar.gz
  http://tarballs.openstack.org/sahara/sahara-stable-juno.tar.gz
  http://tarballs.openstack.org/trove/trove-stable-juno.tar.gz

Thanks,
Adam


[openstack-dev] [stable] Preparing for 2014.2.3 -- branches freeze April 2nd

2015-03-30 Thread Adam Gandelman
Hi All-

We'll be freezing the stable/juno branches for integrated Juno projects this
Thursday April 2nd in preparation for the 2014.2.3 stable release on
Thursday April 9th.  You can view the current queue of proposed patches
on gerrit [1].  I'd like to request that all interested parties review current
bugs affecting Juno and help ensure any relevant fixes are proposed
soon and merged by Thursday, or notify the stable-maint-core team of
anything critical that may land late and require a freeze exception.

Thanks,
Adam

[1] https://review.openstack.org/#/q/status:open+branch:stable/juno,n,z


Re: [openstack-dev] [stable] Icehouse 2014.1.4 freeze exceptions

2015-03-11 Thread Adam Gandelman
Big +2 on both of those, but I'll leave it up to Alan to make the final call.
Cheers,

On Wed, Mar 11, 2015 at 3:42 PM, Ihar Hrachyshka ihrac...@redhat.com
wrote:


 On 03/11/2015 12:21 PM, Alan Pevec wrote:
  Hi,
 
  next Icehouse stable point release 2014.1.4 has been slipping last
  few weeks due to various gate issues, see Recently closed section
  in https://etherpad.openstack.org/p/stable-tracker for details.
  Branch looks good enough now to push the release tomorrow
  (Thursdays are traditional release days) and I've put freeze -2s on
  the open reviews. I'm sorry about the short freeze period but
  branch was effectively frozen last two weeks due to gate issues and
  further delay doesn't make sense. Attached is the output from the
  stable_freeze script for thawing after tags are pushed.
 
  At the same time I'd like to propose following freeze exceptions
  for the review by stable-maint-core:
 
  * https://review.openstack.org/144714 - Eventlet green threads not
  released back to pool Justification: while not OSSA fix, it does
  have SecurityImpact tag
 
  * https://review.openstack.org/163035 - [OSSA 2015-005] Websocket
  Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259)
  Justification: pending merge on master and juno
 
 

 There seems to be a regression [1] introduced into Icehouse and Juno
 trees by backporting [2] that may leave instances in db that are not
 deletable (and we have no force-delete in Icehouse). The fix is merged
 in master and backported to both stable branches [3]. I suggest we
 consider it as an exception for the upcoming release to avoid the
 regression in it.

 [1]: https://bugs.launchpad.net/nova/+bug/1423952
 [2]:

 https://review.openstack.org/#/q/Ife712c43c5a61424bc68b2f5ab47cefdb46ac168,n,z
 [3]:

 https://review.openstack.org/#/q/I70f464120c798422f9a3d601b7cdf3b0a8320690,n,z

 Comments?

 Thanks,
 /Ihar



Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Adam Gandelman
On Fri, Feb 20, 2015 at 3:06 AM, Sean Dague s...@dague.net wrote:


 It sounds like you are suggesting we take the tool we use to ensure that
 all of OpenStack is installable together in a unified way, and change
 its installation so that it doesn't do that any more.

 Which I'm fine with.

 But if we are doing that we should just whole hog give up on the idea
 that OpenStack can be run all together in a single environment, and just
 double down on the devstack venv work instead.

 -Sean



Not necessarily. There'd be some tweaks to the tooling, but we'd still be
doing the same fundamental thing (installing everything OpenStack together),
except using a strict set of dependencies that we know won't break each
other when that happens.

This would help tremendously with testing around global-requirements, too.
Currently, a local devstack run today likely produces a set of dependencies
different from what was tested by Jenkins on the last change to
global-requirements.  If proposed changes to global-requirements produced a
compiled list of pinned dependencies and tested against that, we'd know
that the next day's devstack runs are still testing against the dependency
chain produced by the last change to GR.
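
To illustrate the idea, here is a minimal sketch of how such a compiled list
could be produced; the file names and approach are assumptions for
illustration, not the actual openstack/requirements tooling:

  # Hedged sketch: pin each requirement to the version installed in a
  # known-good environment, writing a compiled list the gate could reuse.
  import pkg_resources

  def compile_pins(source='global-requirements.txt',
                   target='requirements.gate'):
      pins = []
      with open(source) as src:
          for line in src:
              line = line.split('#')[0].strip()
              if not line or line.startswith('-'):
                  continue  # skip blanks, comments and pip options such as -e/-f
              name = pkg_resources.Requirement.parse(line).project_name
              try:
                  dist = pkg_resources.get_distribution(name)
              except pkg_resources.DistributionNotFound:
                  continue  # not installed here, so nothing to pin
              pins.append('%s==%s' % (dist.project_name, dist.version))
      with open(target, 'w') as out:
          out.write('\n'.join(sorted(pins)) + '\n')

  if __name__ == '__main__':
      compile_pins()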


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-20 Thread Adam Gandelman
On Fri, Feb 20, 2015 at 10:16 AM, Joshua Harlow harlo...@outlook.com
wrote:


 It'd be interesting to see what a distribution (canonical, redhat...)
 would think about this movement. I know yahoo! has been looking into it for
 similar reasons (but we are more flexible than I think a packager such as
 canonical/redhat/debian/... would/could be). With a move to venvs, that
 seems like it would just offload the work of finding the set of dependencies
 that work together (in a single install) to packagers instead.

 Is that ok/desired at this point?

 -Josh


I share this concern, as well. I wonder if the compiled list of pinned
dependencies will be the only thing we look at upstream. Once functional on
stable branches, will we essentially forget about the non-pinned
requirements.txt that downstreams are meant to use?

One way of looking at it, though (especially wrt stable) is that the pinned
list of compiled dependencies more closely resembles how distros are
packaging this stuff.  That is, instead of providing explicit dependencies
via a pinned list, they are providing them via a frozen package archive
(i.e., Ubuntu 14.04) that is known to provide a working set.  It'd be up to
distros to make sure that everything is functional prior to freezing that,
and I imagine they already do that.

-Adam


[openstack-dev] [ironic] Ironic 2014.2.1 released

2015-02-19 Thread Adam Gandelman
Hello-

The Ironic team is pleased to announce the 2014.2.1 stable
Juno release.  This release contains a number of backported fixes that have
accrued in our stable branch since the release of 2014.2. These updates to
Juno are intended to be low risk with no intentional
regressions or API changes. The list of bugs, tarballs and other milestone
information for this release may be found on Launchpad:

https://bugs.launchpad.net/ironic/+milestone/2014.2.1

Note that Ironic is not included in the set of 2014.2 projects that receive
scheduled point releases from the stable maintenance team. This release comes
separately from those coordinated point releases.

Thanks,
Adam


Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Adam Gandelman
This creates a bit of a problem for downstream (packagers and probably
others).  Shipping a requirements.txt with explicit pins will end up
producing an egg with a requires.txt that reflects those pins, unless there
is some other magic planned that I'm not aware of.  I can't speak for all
packaging flavors, but I know Debian packaging interacts quite closely with
things like requirements.txt and the resulting egg's requires.txt to determine
appropriate system-level package dependencies.  This would require a lot of
tedious work on packagers' part to get something functional.

What if it's flipped? How about keeping requirements.txt with the caps, and
using that as input to produce something like requirements.gate that is
passed to 'pip install --no-deps' on our slaves?  We'd end up installing and
using the explicitly pinned requirements while the services/libraries
themselves remain flexible.  This might run into the issue Doug pointed out,
where requirements updates across projects are not synchronized.
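
For illustration, the slave-side half of that idea is tiny; the file name
requirements.gate and this wrapper are hypothetical, not existing tooling:

  # Hypothetical helper: install exactly the pinned set from a compiled file,
  # without letting pip resolve transitive dependencies on its own.
  import subprocess

  def install_pinned(pin_file='requirements.gate'):
      subprocess.check_call(['pip', 'install', '--no-deps', '-r', pin_file])

  if __name__ == '__main__':
      install_pinned()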

Adam



On Thu, Feb 19, 2015 at 12:59 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
 wrote:



 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
   On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  
  
  
   On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
   On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
   requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out in
 the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
   requirements
   to
   make them far more explicit and less misleading to the human
 eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
   projects.
   Now we can apply updates to them by hand, to either move the
 lower
   bounds down (as in the case Ihar pointed out with stevedore) or
 clean
   up
   the range definitions. We should not raise the limits of any Oslo
   libraries, and we should consider raising the limits of
 third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we can
 see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what is
 the
   right
   way forward? What is the best way to both maintain a stable
 branch
   with
   known working dependencies while helping out those who do so
 much work
   for
   us (downstream and stable-maint) and not permanently pinning to
   certain
   working versions?
  
   Managing the upper bounds is still under discussion. Sean
 pointed out
   that we might want hard caps so that updates to stable branch
 were
   explicit. I can see either side of that argument and am still on
 the
   fence about the best approach.
  
   History has shown that it's too much work keeping testing
 functioning
   for stable branches if we leave dependencies uncapped. If
 particular
   people are interested in bumping versions when releases happen,
 it's
   easy enough to do with a requirements proposed update. It will
 even run
   tests that in most cases will prove that it works.
  
   It might even be possible for someone to build some automation
 that did
   that as stuff from pypi released so we could have the best of both
   worlds. But I think capping is definitely something we want as a
   project, and it reflects the way that most deployments will
 consume this
   code.
  
   -Sean
  
   --
   Sean Dague
   http://dague.net
  
   Right. No one is arguing the very clear benefits of all of this.
  
   I’m just wondering if for the example version identifiers that I
 gave in
   my original message (and others that are very similar) if we want
 to make
   the strings much simpler for people who tend to work from them
 (i.e.,
   downstream re-distributors 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-19 Thread Adam Gandelman
It's more than just the naming.  In the original proposal, requirements.txt
is the compiled list of all pinned deps (direct and transitive), while
requirements.in reflects what people will actually use.  Whatever is in
requirements.txt affects the egg's requires.txt. Instead, we can keep
requirements.txt unchanged and have it still be the canonical list of
dependencies, while
requirements.out/requirements.gate/requirements.whatever is an upstream
utility we produce and use to keep things sane on our slaves.

Maybe all we need is:

* update the existing post-merge job on the requirements repo to produce a
requirements.txt (as it does now) as well as the compiled version.

* modify devstack in some way with a toggle to have it process dependencies
from the compiled version when necessary.

I'm not sure how the second bit jibes with the existing devstack
installation code, specifically with the libraries from git-or-master, but
we can probably add something to warm the system with dependencies from the
compiled version prior to calling pip/setup.py/etc.

Adam



On Thu, Feb 19, 2015 at 2:31 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Thu, Feb 19, 2015 at 1:48 PM, Adam Gandelman ad...@ubuntu.com wrote:

 This creates a bit of a problem for downstream (packagers and probably
 others)  Shipping a requirements.txt with explicit pins will end up
 producing an egg with a requires.txt that reflects those pins, unless there
 is some other magic planned that I'm not aware of.  I can't speak for all
 packaging flavors, but I know debian packaging interacts quite closely with
 things like requirements.txt and resulting egg's requires.txt to determine
 appropriate system-level package dependencies.  This would require a lot of
 tedious work on packagers part to get something functional.

 What if its flipped? How about keeping requirements.txt with the caps,
 and using that as input to produce something like requirements.gate that
 passed to 'pip install --no-deps'  on our slaves?  We'd end up installing
 and using the explicit pinned requirements while the services/libraries
 themselves remain flexible.  This might the issue Doug pointed out, where
 requirements updates across projects are not synchronized.


 Switching them to requirements.txt and requirements.gate works for me. If
 a simple renaming makes things better, then great!

 As for Doug's comment, yes we need to work something out to overwrite
 requirements.gate, under your proposed naming, with global requirements.


 Adam



 On Thu, Feb 19, 2015 at 12:59 PM, Joe Gordon joe.gord...@gmail.com
 wrote:



 On Wed, Feb 18, 2015 at 7:14 AM, Doug Hellmann d...@doughellmann.com
 wrote:



 On Wed, Feb 18, 2015, at 10:07 AM, Donald Stufft wrote:
 
   On Feb 18, 2015, at 10:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  
  
  
   On Tue, Feb 17, 2015, at 03:17 PM, Joe Gordon wrote:
   On Tue, Feb 17, 2015 at 4:19 AM, Sean Dague s...@dague.net
 wrote:
  
   On 02/16/2015 08:50 PM, Ian Cordasco wrote:
   On 2/16/15, 16:08, Sean Dague s...@dague.net wrote:
  
   On 02/16/2015 02:08 PM, Doug Hellmann wrote:
  
  
   On Mon, Feb 16, 2015, at 01:01 PM, Ian Cordasco wrote:
   Hey everyone,
  
   The os-ansible-deployment team was working on updates to add
 support
   for
   the latest version of juno and noticed some interesting
 version
   specifiers
   introduced into global-requirements.txt in January. It
 introduced some
   version specifiers that seem a bit impossible like the one for
   requests
   [1]. There are others that equate presently to pinning the
 versions of
   the
   packages [2, 3, 4].
  
   I understand fully and support the commit because of how it
 improves
   pretty much everyone’s quality of life (no fires to put out
 in the
   middle
   of the night on the weekend). I’m also aware that a lot of the
   downstream
   redistributors tend to work from global-requirements.txt when
   determining
   what to package/support.
  
   It seems to me like there’s room to clean up some of these
   requirements
   to
   make them far more explicit and less misleading to the human
 eye (even
   though tooling like pip can easily parse/understand these).
  
   I think that's the idea. These requirements were generated
   automatically, and fixed issues that were holding back several
   projects.
   Now we can apply updates to them by hand, to either move the
 lower
   bounds down (as in the case Ihar pointed out with stevedore)
 or clean
   up
   the range definitions. We should not raise the limits of any
 Oslo
   libraries, and we should consider raising the limits of
 third-party
   libraries very carefully.
  
   We should make those changes on one library at a time, so we
 can see
   what effect each change has on the other requirements.
  
  
   I also understand that stable-maint may want to occasionally
 bump the
   caps
   to see if newer versions will not break everything, so what
 is the
   right
   way forward? What is the best way to both

Re: [openstack-dev] [Ironic] volunteer to be rep for third-party CI

2015-01-30 Thread Adam Gandelman
Hi-

I believe I'm the one who was volunteered for this on IRC. I'm still fine
with being the contact for these matters, but making the meetings at
0800/1500 UTC will be difficult for me.  Feel free to sign me up if that
is not a blocker. I've been driving the Ironic CI/QA stuff over the last
couple of cycles and know it and the rest of the upstream gate well enough
to be able to provide guidance to any other Ironic devs hoping to get
their own CI up and running.

adam_g

On Thu, Jan 29, 2015 at 1:28 PM, Anita Kuno ante...@anteaya.info wrote:

 On 01/29/2015 02:32 PM, Adam Lawson wrote:
  Hi ruby, I'd be interested in this. Let me know next steps when ready?
 
  Thanks!
 Hi Adam:

 It requires someone who knows the code base really well. While core
 review permissions are not required, the person fulfilling this role
 needs to have the confidence of the cores for support of decisions they
 make.

 Since folks with these abilities spend much of their time in irc and
 read backscroll I had brought the subject up in channel. I hadn't
 expected a post to the mailing list as folks in the larger community may
 not have the skill set that would make them effective in this role.

 Which is not to say they can't learn the role. The starting place would
 be to contribute to the code base as a contributor
 (http://docs.openstack.org/infra/manual/developers.html) and earn the
 trust of the program's cores through participation in channel and in
 reviews.

 To be honest, I had thought someone already had said they would do this
 but since Ironic doesn't have much third party ci activity, I have
 forgotten who said they would. Mostly I was asking if anyone else
 remembered who this was.

 Thanks Adam,
 Anita.
  On Jan 29, 2015 11:14 AM, Ruby Loo rlooya...@gmail.com wrote:
 
  Hi,
 
  Want to contribute even more to the Ironic community? Here's your
  opportunity!
 
  Anita Kuno (anteaya) would like someone to be the Ironic representative
  for third party CIs. What would you have to do? In her own words:
 mostly
  I need to know who they are so that when someone has questions I can
 work
  with that person to learn the answers so that they can learn to answer
 the
  questions
 
  There are regular third party meetings [1] and it would be great if you
  would attend them, but that isn't necessary.
 
  Let us know if you're interested. No resumes need to be submitted. In
 case
  there is a lot of interest, hmm..., the PTL, Devananda, will decide.
  (That's what he gets for not being around now. ;))
 
  Thanks in advance for all your interest,
  --ruby
 
  [1] https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting
 
 


Re: [openstack-dev] [infra] Re: [stable] Stable check of openstack/nova failed - EnvironmentError: mysql_config not found

2015-01-26 Thread Adam Gandelman
Looking at the image build logs @ nodepool.openstack.org, it *looks* like
package dependencies are not being installed properly on one of the slaves
(hpcloud-b2.bare-precise): http://paste.ubuntu.com/9886045/  The failing
periodic jobs all land on this b2 node. nodepool is, however, pre-caching
devstack's required packages (including libmysqlclient-dev).  I wonder if that
b2 node is incorrectly classified somewhere in puppet/etc as a devstack
slave?

Adam



On Mon, Jan 26, 2015 at 3:55 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 The issue is still there, and I haven't heard anything from infra.
 Updates, anyone?
 /Ihar


 On 01/21/2015 11:56 AM, Ihar Hrachyshka wrote:

 Hi all,

 Any updates from infra on why it occurs? It's still one of the issues
 that make periodic stable jobs fail.

 We also have other failures due to missing packages on nodes. F.e.,

 keystone python-ldap installation failing due to missing devel files for
 openldap:
 http://logs.openstack.org/periodic-stableperiodicx-
 keystone-docs-icehouse/30c89e8/console.html
 http://logs.openstack.org/periodic-stableperiodic-
 keystone-python27-icehouse/2a77792/console.html

 /Ihar

 On 01/19/2015 09:17 AM, Alan Pevec wrote:

 - periodic-nova-docs-icehouse http://logs.openstack.org/
 periodic-stableperiodic-nova-docs-icehouse/a3d88ed/ : FAILURE in 1m 15s

 Same symptom as https://bugs.launchpad.net/openstack-ci/+bug/1336161
 which is marked as Fix released, could infra team check if all images
 are alright?
 This showed up in 3 periodic icehouse jobs over weekend, all on
 bare-precise-hpcloud-b2 nodes, I've listed them in
 https://etherpad.openstack.org/p/stable-tracker


 Cheers,
 Alan



Re: [openstack-dev] [infra] Re: [stable] Stable check of openstack/nova failed - EnvironmentError: mysql_config not found

2015-01-26 Thread Adam Gandelman
https://review.openstack.org/150120 should address it.

On Mon, Jan 26, 2015 at 11:16 AM, Adam Gandelman ad...@ubuntu.com wrote:

 Looking at the image build logs @ nodepool.openstack.org, it *looks* like
 package dependencies are not being installed properly on one of the slaves
 (hpcloud-b2.bare-precise): http://paste.ubuntu.com/9886045/  The failing
 periodic jobs all land on this b2 node. nodepool is, however, pre-caching
 devstack required packages (including libmysqlclient-dev) I wonder if that
 b2 node is incorrectly classified somewhere in puppet/etc as a devstack
 slave?

 Adam



 On Mon, Jan 26, 2015 at 3:55 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:

 The issue is still there, and I haven't heard anything from infra.
 Updates, anyone?
 /Ihar


 On 01/21/2015 11:56 AM, Ihar Hrachyshka wrote:

 Hi all,

 Any updates from infra on why it occurs? It's still one of the issues
 that make periodic stable jobs fail.

 We also have other failures due to missing packages on nodes. F.e.,

 keystone python-ldap installation failing due to missing devel files for
 openldap:
 http://logs.openstack.org/periodic-stableperiodicx-
 keystone-docs-icehouse/30c89e8/console.html
 http://logs.openstack.org/periodic-stableperiodic-
 keystone-python27-icehouse/2a77792/console.html

 /Ihar

 On 01/19/2015 09:17 AM, Alan Pevec wrote:

 - periodic-nova-docs-icehouse http://logs.openstack.org/
 periodic-stableperiodic-nova-docs-icehouse/a3d88ed/ : FAILURE in 1m
 15s

 Same symptom as https://bugs.launchpad.net/openstack-ci/+bug/1336161
 which is marked as Fix released, could infra team check if all images
 are alright?
 This showed up in 3 periodic icehouse jobs over weekend, all on
 bare-precise-hpcloud-b2 nodes, I've listed them in
 https://etherpad.openstack.org/p/stable-tracker


 Cheers,
 Alan



Re: [openstack-dev] [stable] Stable branch status and proposed stable-maint members mentoring

2015-01-14 Thread Adam Gandelman
Thanks, Ihar. Let's make this the 'official' tracking pad for stable
branches.  We were previously using branch-specific pads (i.e.,
https://etherpad.openstack.org/p/StableJuno) but it makes sense to track
both releases in one place.  I've updated it with the current status and
added a reference on the stable branch wiki page [1].

Cheers,
Adam

[1] https://wiki.openstack.org/wiki/StableBranch#Gate_Status

On Wed, Jan 14, 2015 at 4:37 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 On 01/09/2015 01:02 PM, Ihar Hrachyshka wrote:

 I think we should have some common document (etherpad?) with branch
 status and links.


 OK, I moved forward and created an Etherpad. I also filled it in with the
 current state. Please fill it in with updates.

 https://etherpad.openstack.org/p/stable-tracker



Re: [openstack-dev] [nova]nova not work with eventlet 0.16.0

2015-01-13 Thread Adam Gandelman
So eventlet 0.16.x has started hitting slaves and breaking stable branches
(it's not like we weren't warned :\ )

https://bugs.launchpad.net/nova/+bug/1410626

Should hopefully be resolved by the eventlet version caps in icehouse + juno's
requirements:

https://review.openstack.org/#/q/I4bbbeb5bf9c22ed36f5c9a74fec6b487d2c15697,n,z
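
For third-party code hitting the same ImportError, a defensive import is one
way to keep working across eventlet versions until the caps land everywhere;
a minimal sketch of that pattern (not the actual Nova fix):

  # eventlet 0.16.0 dropped the long-deprecated eventlet.util module, so
  # tolerate both old and new releases rather than failing at import time.
  try:
      from eventlet import util as eventlet_util  # removed in eventlet 0.16.0
  except ImportError:
      eventlet_util = None  # callers must cope with the module being absent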

Cheers,
Adam


On Tue, Jan 6, 2015 at 1:18 AM, Eli Qiao ta...@linux.vnet.ibm.com wrote:

  hi all,
 I set up a nova environment with the latest upstream code.
 nova-compute cannot boot up because it fails to load the libvirt driver.
 Further debugging found that eventlet (0.16.0) removed util, which is
 referenced by the libvirt driver.
 After I downgraded to 0.15.2, it works.

 *0.15.2*

 In [3]: print eventlet.__version__
 0.15.2

 In [5]: import eventlet.util

 In [6]:


 -
 *0.16.0*


 In [1]: import eventlet.util
 ---------------------------------------------------------------------------
 ImportError                               Traceback (most recent call last)
 <ipython-input-1-a23626d6f273> in <module>()
 ----> 1 import eventlet.util

 ImportError: No module named util

 In [3]: import eventlet

 In [4]: print eventlet.__version__
 0.16.0

 In [5]:

 


 In [1]: import nova.virt.libvirt.LibvirtDriver
 ---------------------------------------------------------------------------
 ImportError                               Traceback (most recent call last)
 <ipython-input-1-2bdce28fc3dd> in <module>()
 ----> 1 import nova.virt.libvirt.LibvirtDriver

 /opt/stack/nova/nova/virt/libvirt/__init__.py in <module>()
      13 #under the License.
      14
 ---> 15 from nova.virt.libvirt import driver
      16
      17 LibvirtDriver = driver.LibvirtDriver

 /opt/stack/nova/nova/virt/libvirt/driver.py in <module>()
      96 from nova.virt.libvirt import dmcrypt
      97 from nova.virt.libvirt import firewall as libvirt_firewall
 ---> 98 from nova.virt.libvirt import host
      99 from nova.virt.libvirt import imagebackend
     100 from nova.virt.libvirt import imagecache

 /opt/stack/nova/nova/virt/libvirt/host.py in <module>()
      37 from eventlet import patcher
      38 from eventlet import tpool
 ---> 39 from eventlet import util as eventlet_util
      40
      41 from nova import exception

 ImportError: cannot import name util

 In [2]: import eventlet

 In [3]: from eventlet import util
 ---------------------------------------------------------------------------
 ImportError                               Traceback (most recent call last)
 <ipython-input-3-f6f91e4749eb> in <module>()
 ----> 1 from eventlet import util

 ImportError: cannot import name util

 In [4]:

 --
 Thanks,
 Eli (Li Yong) Qiao




[openstack-dev] [stable] Proposal to add Flavio Percoco to stable-maint-core

2015-01-06 Thread Adam Gandelman
Hiya-

Flavio has been actively involved in stable branch maintenance for as long
as I can remember, but it looks like his +2 abilities were removed after
the organizational changes made to the stable maintenance teams.  He has
expressed interest in continuing on with general stable maintenance and I
think his proven understanding of branch policies make him a valuable
contributor. I propose we add him to the stable-maint-core team.

Cheers,
Adam


[openstack-dev] check-grenade-dsvm-ironic-sideways failing, blocking much code

2014-12-29 Thread Adam Gandelman
Heads up here since IRC seems to be crickets this week...

Fallout from newer pip's creation and usage of ~/.cache is still biting the
sideways ironic grenade job:

https://bugs.launchpad.net/ironic/+bug/1405626

This is blocking patches to ironic, nova, devstack, tempest, grenade master
+ stable/juno.  master's main tempest job was broken by the same issue
during the holiday, and fixed with some patches to devstack.  Those need to
be backported to stable/juno devstack and grenade (via a functions-common
sync), but the two remaining patches are wedged and cannot merge without
each other:

https://review.openstack.org/#/c/144352/
https://review.openstack.org/#/c/144374/

I've proposed temporary workaround to devstack-gate, which will hopefully
allow those two to backport:

https://review.openstack.org/#/c/144408/

If anyone has a minute while they switch from eggnog to champagne, it would
be appreciated!

Thanks
Adam


Re: [openstack-dev] [CI][TripleO][Ironic] check-tripleo-ironic-xxx fails

2014-12-26 Thread Adam Gandelman
This is fallout from a new upstream release of pip that went out earlier in
the week.  It looks like no formal bug ever got filed, though the same problem
was discovered in devstack and Trove's integration testing repository.  I've
added some comments to the bug.

On Thu, Dec 25, 2014 at 10:59 PM, James Polley j...@jamezpolley.com wrote:

 Thanks for the alert

 The earliest failure I can see because of this is
 http://logs.openstack.org/43/141043/6/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/36c9771/

 I've raised https://bugs.launchpad.net/tripleo/+bug/1405732 and I've put
 some preliminary notes on
 https://etherpad.openstack.org/p/tripleo-ci-breakages

 On Fri, Dec 26, 2014 at 3:42 AM, ZhiQiang Fan aji.zq...@gmail.com wrote:

 check-tripleo-ironic-xxx failed because:

 rm -rf /home/jenkins/.cache/image-create/pypi/mirror/
 rm: cannot remove `/home/jenkins/.cache/image-create/pypi/mirror/':
 Permission denied

 see

 http://logs.openstack.org/37/143937/1/check-tripleo/check-tripleo-ironic-overcloud-precise-nonha/9be729b/console.html

 search on logstash.openstack.org:
 message:rm: cannot remove
 `/home/jenkins/.cache/image-create/pypi/mirror/\': Permission denied
 there are 59 hits in last 48 hours





[openstack-dev] [stable] Proposal to add Dave Walker back to stable-maint-core

2014-11-26 Thread Adam Gandelman
Hi All-

Daviey was an original member of the stable-maint team and one of the
driving forces behind the creation of the team and branches back in the
early days. He was removed from the team later on during a pruning of
inactive members. Recently, he has began focusing on the stable branches
again and has been providing valuable reviews across both branches:

https://review.openstack.org/#/q/reviewer:%22Dave+Walker+%253Cemail%2540daviey.com%253E%22++branch:stable/icehouse,n,z

https://review.openstack.org/#/q/reviewer:%22Dave+Walker+%253Cemail%2540daviey.com%253E%22++branch:stable/juno,n,z

I think his understanding of policy, attention to detail and willingness to
question the appropriateness of proposed backports would make him a great
member of the team (again!).  Having worked with him in Ubuntu-land, I also
think he'd be a great candidate to help out with the release management
aspect of things (if he wanted to).

Cheers,
Adam


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Adam Gandelman
The stable-maint team has been more active over the last couple of months at
keeping on top of stable-branch-specific gate breakage (usually identified
by periodic job failures).  We managed to flush a bunch of reviews through
the gate over the last couple of weeks [1].  Yeah, many required rechecks, but
the biggest bugs I hit were http://pad.lv/1323658 and
http://pad.lv/1374175 which, according to elastic-recheck, are
project-wide, affecting master and not specific to the stable branches.

mikal's right, code review has indeed been lagging over the last cycle..
Though over the last month or two a number of new faces have shown up and are
actively helping get things reviewed in a timely manner.

I'm curious what else is failing that is specific to the stable trees?  I
spent time over the weekend babysitting many stable merges and found it to
be no more / no less painful than trying to get a Tempest patch merged.

Cheers,
-Adam

[1]
https://review.openstack.org/#/q/status:merged+branch:stable/icehouse,n,z

On Wed, Oct 1, 2014 at 9:42 AM, Sean Dague s...@dague.net wrote:

 As stable branches got discussed recently, I'm kind of curious who is
 actually stepping up to make icehouse able to pass tests in any real
 way. Because right now I've been trying to fix devstack icehouse so that
 icehouse requirements can be unblocked (and to land code that will
 reduce grenade failures)

 I'm on retry #7 of modifying the tox.ini file in devstack.

 During the last summit people said they wanted to support icehouse for
 15 months. Right now we're at 6 months and the tree is basically unable
 to merge code.

 So who is actually standing up to fix these things, or are we going to
 just leave it broken and shoot icehouse in the head early?

 -Sean

 --
 Sean Dague
 http://dague.net



[openstack-dev] Call for testing: 2014.1.3 candidate tarballs

2014-09-29 Thread Adam Gandelman
Hi all,

We are scheduled to publish 2014.1.3 on Thurs Oct. 2nd for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron, Nova
and Trove.

The list of issues fixed so far can be seen here:

  https://launchpad.net/ceilometer/+milestone/2014.1.3
  https://launchpad.net/cinder/+milestone/2014.1.3
  https://launchpad.net/heat/+milestone/2014.1.3
  https://launchpad.net/horizon/+milestone/2014.1.3
  https://launchpad.net/keystone/+milestone/2014.1.3
  https://launchpad.net/neutron/+milestone/2014.1.3
  https://launchpad.net/nova/+milestone/2014.1.3
  https://launchpad.net/trove/+milestone/2014.1.3

We'd appreciate anyone who could test the candidate 2014.1.3 tarballs, which
include all changes aside from a few trickling through the gate and any
pending freeze exceptions:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-icehouse.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-icehouse.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-icehouse.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-icehouse.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-icehouse.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-icehouse.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-icehouse.tar.gz
  http://tarballs.openstack.org/trove/trove-stable-icehouse.tar.gz

Thanks,
Adam


[openstack-dev] Preparing for 2014.1.3 -- branches freeze September 25th

2014-09-22 Thread Adam Gandelman
Hi All-

We'll be freezing the stable/icehouse branches for integrated projects this
Thursday September 25th in preparation for the 2014.1.3 stable release on
Thursday October 2nd.  You can view the current queue of proposed patches
on gerrit [1].  I'd like to request all interested parties review current
bugs affecting Icehouse and help ensure any relevant fixes be proposed
soon and approved by Thursday, or notify the stable-maint team of
anything critical that may land late.

A couple of notes:

- We are quickly approaching the Juno RC period and, if last cycle was any
indication, expect a slower-than-normal Zuul queue during this time.  Any
help with getting things approved and on its way to merging would be
appreciated, the earlier the better!  We're currently waiting on a fix to
land
in python-keystoneclient [2] to fix the icehouse gate, but should be in
good shape for merges after.

- The next release (2014.1.4) will be the last planned stable/icehouse
release and the release date is TBD for after Kilo-3.  If there are any
critical fixes that need backporting, now would be the time to propose
them as it will be a while before the next release.
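
For anyone who hasn't proposed a backport before, the flow is roughly as
follows (a sketch; substitute your project and the change being backported):

  git checkout -b icehouse-backport origin/stable/icehouse
  git cherry-pick -x <sha-of-the-master-commit>
  # resolve any conflicts, then:
  git review stable/icehouse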

Thanks,
Adam

[1]
https://review.openstack.org/#/q/status:open+AND+branch:stable/icehouse+AND+(project:openstack/nova+OR+project:openstack/keystone+OR+project:openstack/glance+OR+project:openstack/cinder+OR+project:openstack/neutron+OR+project:openstack/horizon+OR+project:openstack/heat+OR+project:openstack/ceilometer+OR+project:openstack/trove),n,z

[2] https://review.openstack.org/123198
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Error in deploying ironic on Ubuntu 12.04

2014-09-17 Thread Adam Gandelman
On Mon, Sep 15, 2014 at 11:38 PM, Peeyush Gupta gpeey...@linux.vnet.ibm.com
 wrote:


 What I am interested to know is if this is a problem with precise or with
 devstack script?


As mentioned, we test this in the gate using Ubuntu Trusty 14.04.  When we
were using Precise, we would set up access to the Ubuntu Cloud Archive's
Icehouse repository to pull in a couple of newer dependencies (libvirt,
OVS) but the docker dependency (which was added after the upgrade to
Trusty) cannot be satisfied there, so you are better off just using a 14.04
base for your devstack + ironic testing.
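
For reference, on Precise that setup looked roughly like this (from memory,
so treat it as a sketch rather than a recipe):

  sudo apt-get install -y ubuntu-cloud-keyring
  echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main" | \
      sudo tee /etc/apt/sources.list.d/cloud-archive-icehouse.list
  sudo apt-get update
  sudo apt-get install -y libvirt-bin openvswitch-switch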

Hope that helps,
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [neutron] make mac address updatable - port status for ironic servers

2014-08-27 Thread Adam Gandelman
Yes, it's to be expected.   IIRC, you were helping me investigate the same
thing at the TripleO mid-cycle. :) The port gets the necessary
configuration set up by the DHCP agent to take care of PXE, but this being
baremetal, there is no hypervisor on which some agent is running and
plugging VIFs into vSwitch ports at the compute host level.   We rely (at
least in devstack) on a flat networking environment setup where baremetal
nodes are already tapped into the tenant network namespace without the help
of the agents. I believe devtest sets up something similar on the host
that's running the devtest VMs.  The end result is ports showing DOWN though
they are functional.
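
You can see it from the API side with something like this (a sketch; the
field values below are what I'd expect to see, not captured output):

  neutron port-show <port-uuid> -F status -F binding:vif_type
  # expect status=DOWN and binding:vif_type=binding_failed, even though
  # traffic to and from the node works fine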


On Wed, Aug 27, 2014 at 5:06 PM, Carlino, Chuck chuck.carl...@hp.com
wrote:

 Hi all,

 I'm working on bug [1] and have code in review[2].  The bug wants
 neutron's port['mac_address'] to be updatable to handle the case where an
 ironic server's nic is replaced.  A comment in the review suggests that we
 only allow mac address update when the port status is not ACTIVE.  While
 I've made another change which may (or may not) address the underlying
 concern, I want to ask if port status is correct for ironic server ports.
 When I run devtest (tripleo) on my laptop, I get VM ironic servers, which
 when booted all have neutron ports with 'binding:vif_type':
 'binding_failed' and 'status': 'DOWN'.  Is this expected?  Does this happen
 with hardware ironic servers?

 Thanks in advance!
 Chuck

 [1] https://bugs.launchpad.net/neutron/+bug/1341268
 [2] https://review.openstack.org/112129


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Problematic gate-tempest-dsvm-virtual-ironic job

2014-06-12 Thread Adam Gandelman
I've opened https://bugs.launchpad.net/openstack-ci/+bug/1329430 to track
the progress of getting the ironic job in better shape.  Monty had a great
suggestion this morning about how cache_devstack.py can be updated to cache
the UCA stuff for any job that may need it.  I'm putting together a patch
to address that now.

Thanks,
Adam



On Thu, Jun 12, 2014 at 11:09 AM, Ben Nemec openst...@nemebean.com wrote:

 On 06/12/2014 06:40 AM, Sean Dague wrote:
  Current gate-tempest-dsvm-virtual-ironic has only a 65% pass rate
  *in the gate* over the last 48 hrs -
 
 http://logstash.openstack.org/#eyJzZWFyY2giOiJidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kc3ZtLXZpcnR1YWwtaXJvbmljIEFORCAobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAyNTcyMjk4NTE4LCJtb2RlIjoic2NvcmUiLCJhbmFseXplX2ZpZWxkIjoiYnVpbGRfc3RhdHVzIn0=
 
  This job is run on diskimage-builder and ironic jobs in the
  gate queue. Those jobs are now part of the integrated gate queue
  due to the overlap with oslotest jobs.
 
  This is *really* problematic, and too low to be voting. Anything <
  90% pass rate is really an issue.
 
  It looks like these issues are actually structural with the job,
  because unlike our other configurations which aggressively try to
  avoid network interaction (which we've found is too unreliable),
  this job adds the cloud archive repository on the fly, and pulls
  content from there. That's never going to have a high success
  rate.
 
  I'm proposing we turn this off -
  https://review.openstack.org/#/c/99630/
 
  The ironic team needs to go back to the drawing board a little here
  and work on getting all the packages and repositories they need
  pulled down into nodepool so we can isolate from network effects
  before we can make this job gating again.
 
  -Sean

 On a related note, that oslotest cross-testing job probably needs to
 be removed for the moment.  It's completely broken on our stable
 branches and at some point all of these cross-test jobs will be
 generated automatically, so the manually added ones will probably need
 to go anyway.

 I rechecked https://review.openstack.org/#/c/92910/ and it still looks
 ready to go.

  -Ben


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] pacemaker management tools

2014-06-12 Thread Adam Gandelman
It's been a while since I've used these tools and I'm not 100% surprised
they've fragmented once again. :)  That said, does pcs support creating the
CIB configuration in bulk from a file? I know that crm shell would let you
dump the entire cluster config and restore from file.  Unless the CIB
format has differs now, couldn't we just create the entire thing first and
use a single pcs or crm command to import it to the cluster, rather than
building each resource command-by-command?
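
Something along these lines is what I had in mind (untested and from memory,
so double-check the exact subcommands):

  # crmsh can dump and re-apply the whole configuration:
  crm configure save /tmp/cluster.config
  crm configure load update /tmp/cluster.config

  # pcs can build up a CIB file offline and push it in one shot:
  pcs cluster cib /tmp/cluster.xml
  pcs -f /tmp/cluster.xml resource create ClusterIP IPaddr2 ip=192.168.0.120 cidr_netmask=32
  pcs cluster cib-push /tmp/cluster.xml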

Adam


On Wed, Jun 11, 2014 at 4:28 AM, Jan Provazník jprov...@redhat.com wrote:

 Hi,
 ceilometer-agent-central element was added recently into overcloud image.
 To be able scale out overcloud control nodes, we need HA for this central
 agent. Currently central agent can not scale out (until [1] is done). For
 now, the simplest way is add the central agent to Pacemaker, which is quite
 simple.

 The issue is that distributions supported in TripleO provide different
 tools for managing Pacemaker. Ubuntu/Debian provides crmsh, Fedora/RHEL
 provides pcs, OpenSuse provides both. I didn't find packages for all our
 distros for any of the tools. Also if there is a third-party repo providing
 packages for various distros, adding dependency on an untrusted third-party
 repo might be a problem for some users.

  Although it's a little bit annoying, I think we will end up managing
  commands for both config tools; a resource creation sample:

  if $USE_PCS; then
    pcs resource create ClusterIP IPaddr2 ip=192.168.0.120 cidr_netmask=32
  else
    crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params \
      ip=192.168.122.120 cidr_netmask=32 op monitor interval=30s
  fi

 There are not many places where pacemaker configuration would be required,
 so I think this is acceptable. Any other opinions?

 Jan


 [1] https://blueprints.launchpad.net/ceilometer/+spec/central-
 agent-improvement

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] VIF event callbacks implementation

2014-05-02 Thread Adam Gandelman
On Tue, Apr 29, 2014 at 12:23 PM, Dan Smith d...@danplanet.com wrote:


 Yeah, we've already got plans in place to get Cinder to use the
 interface to provide us more detailed information and eliminate some
 polling. We also have a very purpose-built notification scheme between
 nova and cinder that facilitates a callback for a very specific
 scenario. I'd like to get that converted to use this mechanism as well,
 so that it becomes the way you tell nova that things it's waiting for
 have happened.

 --Dan


We actually need something *very* similar in Ironic right now to address
many of the same issues that os-external-events solves for Nova - Neutron
coordination.  I've been looking at implementing an almost identical thing
in Ironic and was hoping to file a BP to get some discussion going in
Atlanta.  There are a few places currently where the same mechanism would
fix bugs or be a general improvement, and more stuff coming in Juno where
this will be required. I would love to find out whether parts of what is
currently in Nova can be factored out and shared across projects to
make this easier, and to provide all projects with a way to tell some
other service that the things it's waiting for have happened.
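
For anyone not familiar with it, the Nova side boils down to a single REST
call; roughly the following (placeholders throughout, and the URL follows
the Icehouse-era v2 layout):

  curl -s -X POST "http://nova-api:8774/v2/${TENANT_ID}/os-server-external-events" \
    -H "X-Auth-Token: ${TOKEN}" -H "Content-Type: application/json" \
    -d "{\"events\": [{\"name\": \"network-vif-plugged\",
                       \"server_uuid\": \"${INSTANCE_UUID}\",
                       \"tag\": \"${PORT_ID}\",
                       \"status\": \"completed\"}]}"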

Cheers,
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][QA] Ironic functional gate tests stable, thoughts on extending?

2014-04-28 Thread Adam Gandelman
Hi All--

We've finally got the check-tempest-dsvm-virtual-ironic job passing
successfully in the Ironic gate.  Among other things, this test runs the
tempest.scenario.test_baremetal_basic_ops which is a functional
provisioning test directly stressing Ironic, Nova and Neutron as well as
devstack and diskimage-builder indirectly.  It is non-voting currently, but
will eventually move to a voting job once it's proven stable.  I've also
proposed adding this to the non-experimental checks of Devstack, Nova,
Tempest, diskimage-builder and devstack-gate as the stability of the job is
dependent on all of them.  If you see this failing, please investigate
and/or ping me (adam_g).  There have been multiple breakages that slipped
into trunk over the last month or so that this test would have caught, and
I'd love for it to do its job now that it is stable.
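
If you want to run the same scenario locally against a devstack+ironic
environment, something like this should work (a sketch; adjust to however
you normally drive tempest):

  cd /opt/stack/tempest
  testr run tempest.scenario.test_baremetal_basic_ops
  # or via tox:
  tox -eall -- tempest.scenario.test_baremetal_basic_ops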

The current scenario test only tests the pxe_ssh driver.  As we begin to
think about things like the IPA and other new features in Juno, we should
also think about how they fit in to the current test framework.  I'd love
it if we can capture some thoughts on the list or in the IronicCI etherpad
[1] about what we'd like to test and how those tests would look.  It's
important to note that Tempest is generally a blackbox test suite that only
has access to the various APIs (both admin and non) and guests via SSH, so
any assertions we hope to make must be possible by poking APIs or SSHing to
a provisioned node.

Any thoughts we can collect before and during the summit would be great.

Thanks,
Adam

[1] https://etherpad.openstack.org/p/IronicCI
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Selecting Compute tests by driver/hypervisor

2014-04-22 Thread Adam Gandelman
On Tue, Apr 22, 2014 at 4:23 AM, Sean Dague s...@dague.net wrote:


 Agreed. Though I think we probably want the Nova API to be explicit
 about what parts of the API it's ok to throw a Not Supported. Because I
 don't think it's a blanket ok. On API endpoints where this is ok, we can
 convert not supported to a skip.

 -Sean


I'd favor going even further and letting any such exception convert to a skip,
at least in the main test suite.  Keep Tempest a point-and-shoot suite that
can be pointed at any cloud and do the right thing.  We can add another
test or utility (perhaps in tools/?) to interpret results and attach
meaning to them WRT skips/fails, validated against individual projects'
current notion of what is mandatory: a list of "these tests must pass
against any driver" instead of a driver feature matrix.   This would allow
such policies to easily change over time outside of the actual test code.
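
As a rough sketch of the kind of tools/ utility I mean (required_tests.txt
here is a hypothetical per-driver list, one test id per line; nothing like
this exists yet):

  testr failing --list | sort > /tmp/failed.txt
  sort required_tests.txt > /tmp/required.txt
  if comm -12 /tmp/required.txt /tmp/failed.txt | grep -q .; then
      echo "required tests failed for this driver:"
      comm -12 /tmp/required.txt /tmp/failed.txt
      exit 1
  fi
  echo "all required tests passed (or were skipped/not run)"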

-Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for testing: 2013.2.3 candidate tarballs

2014-03-28 Thread Adam Gandelman
Hi all,

We are scheduled to publish the 2013.2.3 release on Thursday April 3 for
Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Neutron and Nova.

The list of issues fixed so far can be seen here:

  https://launchpad.net/ceilometer/+milestone/2013.2.3
  https://launchpad.net/cinder/+milestone/2013.2.3
  https://launchpad.net/glance/+milestone/2013.2.3
  https://launchpad.net/heat/+milestone/2013.2.3
  https://launchpad.net/horizon/+milestone/2013.2.3
  https://launchpad.net/keystone/+milestone/2013.2.3
  https://launchpad.net/neutron/+milestone/2013.2.3
  https://launchpad.net/nova/+milestone/2013.2.3

We'd appreciate anyone who could test the candidate 2013.2.3 tarballs:

  http://tarballs.openstack.org/ceilometer/ceilometer-stable-havana.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-havana.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-havana.tar.gz
  http://tarballs.openstack.org/heat/heat-stable-havana.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-havana.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-havana.tar.gz
  http://tarballs.openstack.org/neutron/neutron-stable-havana.tar.gz
  http://tarballs.openstack.org/nova/nova-stable-havana.tar.gz

Effective immediately, stable/havana branches enter freeze until release on
Thursday (April 3).

Thanks
Adam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Preparing for 2013.2.3 -- branches freeze March 27th

2014-03-21 Thread Adam Gandelman
Hi All-

We'll be freezing the stable/havana branches for integrated projects this
Thursday March 27th in preparation for the 2013.2.3 stable release on
Thursday April 3rd.  You can view the current queue of proposed patches
on gerrit [1].  I'd like to request all interested parties review current
bugs affecting Havana and help ensure any relevant fixes be proposed
soon and merged by Thursday, or notify the stable-maint team of
anything critical that may land late.

Thanks,
Adam

[1] https://review.openstack.org/#/q/status:open+branch:stable/havana,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Functional testing, Tempest

2014-03-20 Thread Adam Gandelman
Hi--

We've made some progress over the last week or so toward more thorough
Ironic CI:

* Initial devstack support has been merged enabling an all-in-one Ironic
environment that uses libvirt+VMs+OVS to support baremetal deployments [1]
(a rough localrc sketch is included after this list)

* Devananda and myself have been getting patches pushed into infra. that
add an experimental devstack gate check using the new support
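
To expand on the devstack item above, the localrc for the all-in-one
environment looks roughly like this (variable names from memory and may
differ slightly from what finally merged in [1]):

  enable_service ironic ir-api ir-cond
  VIRT_DRIVER=ironic
  IRONIC_BAREMETAL_BASIC_OPS=True
  IRONIC_VM_COUNT=3
  IRONIC_VM_SPECS_RAM=512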

While we continue to work out the kinks in the devstack gate, I've started
thinking about what functional Ironic tests in Tempest would look like.
 I've been capturing my thoughts in an etherpad [2] and invite anyone
interested to add theirs.

I've put together a Tempest scenario test that validates instance spawn and
corresponding Ironic orchestration [3].  This initial test assumes it's
being tested against the pxe_ssh driver (which devstack enrolls nodes with)
and verifies assumptions accordingly.  A longer-term goal would be for this
same test to be run against other non-devstack environments (i.e., the TripleO
undercloud) and verify other things specific to the drivers in use there.

I'm curious to know what others think should be, or even can be, stressed
and tested from the outside by Tempest.  Since Tempest cannot assume it has
access to poke at management resources like libvirt, IPMI, etc. it can
really only inspect what is provided by the APIs.  Validating that
nova/neutron/etc injects the correct data into Ironic node/port/etc objects
seems like the most that can happen beyond simply spawning an instance and
waiting for it to show up on the network.
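
In other words, the assertions available to us are roughly of this shape
(a sketch using the CLIs; names and fields are illustrative):

  nova boot --flavor baremetal --image $IMAGE_UUID --key-name default test-bm
  nova show test-bm | grep status        # poll until ACTIVE
  ironic node-show $NODE_UUID | grep -E 'provision_state|instance_uuid'
  ping -c 3 $INSTANCE_IP                 # the node actually came up on the network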

On a related note, running Tempest in the gate against Ironic presents some
interesting challenges that we'll need to work with the Tempest team to
solve.  The API and smoke tests that are run assume many things about the
supported features of the running compute driver. Many of these are not
supported in Ironic (e.g., boot from volume).  This is not specific to Ironic
(e.g., lxc has the same issue) but will require some discussion and work before Tempest is just
point-and-shoot against Ironic.   I'm wondering if it would make sense to
lean on the soon-to-be-deprecated devstack exercises for verification in
the short-term, while we work through the larger issues in Tempest.

-Adam


[1] https://review.openstack.org/#/c/70348/
[2] https://etherpad.openstack.org/p/IronicCI
[3] https://review.openstack.org/#/c/81958/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable/grizzly

2013-10-10 Thread Adam Gandelman
On 10/10/2013 04:42 AM, Gary Kotton wrote:
 Trunk - https://review.openstack.org/50904
 Stable/Grizzly - https://review.openstack.org/#/c/50905/
 There is an alternative patch - https://review.openstack.org/#/c/50873/7
 I recall seeing the same problem a few month ago and the bot version was
 excluded - not sure why the calling code was not updated. Maybe someone
 who is familiar with that can chime in.
 Thanks
 Gary

 On 10/10/13 12:20 PM, Alan Pevec ape...@gmail.com wrote:


Missed the chance to weigh in on the stable review, but is there a
reason we're also bumping the minimum required boto version for a stable
point update? 


Adam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-01 Thread Adam Gandelman
On 10/01/2013 12:02 AM, Alan Pevec wrote:
 1) Please *do not* Approve or Reverify stable/* patches. The pyparsing
 requirements conflict with neutron client from earlier in the week is still
 not resolved on stable/*.
 Also there's an issue with quantumclient and Nova stable/grizzly:
 https://jenkins01.openstack.org/job/periodic-nova-python27-stable-grizzly/34/console
 ...nova/network/security_group/quantum_driver.py, line 101, in get
  id = quantumv20.find_resourceid_by_name_or_id(
 AttributeError: 'module' object has no attribute 
 'find_resourceid_by_name_or_id'
 That should be fixed by https://review.openstack.org/49006 + new
 quantumclient release, thanks Matt!

 Adam, Thierry - given that stable/grizzly is still blocked by this, I
 suppose we should delay 2013.1.4 freeze (was planned this Thursday)
 until stable/grizzly is back in shape?


 Cheers,
 Alan


This sounds okay to me.  To be clear:

2013.1.4 Freeze goes into effect Oct. 10th
2013.1.4 Release Oct. 17th

- Adam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev