Re: [openstack-dev] [OpenStack][Cinder] Driver qualification

2013-07-28 Thread Avishay Traeger
John Griffith john.griff...@solidfire.com wrote on 07/26/2013 03:44:12
AM:
snip
 I think it would be a very useful tool for initial introduction of a
 new driver and even perhaps some sort of check that's run and
 submitted again prior to milestone releases.
snip

+1.  Do you see this happening for Havana?  Or should this be a summit
topic?

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] support for multiple active scheduler policies/drivers

2013-07-28 Thread Day, Phil


 From: Joe Gordon [mailto:joe.gord...@gmail.com] 
 Sent: 26 July 2013 23:16
 To: OpenStack Development Mailing List
 Subject: Re: [openstack-dev] [Nova] support for multiple active scheduler 
 policies/drivers



 On Wed, Jul 24, 2013 at 6:18 PM, Alex Glikson glik...@il.ibm.com wrote:
 Russell Bryant rbry...@redhat.com wrote on 24/07/2013 07:14:27 PM:

 
 I really like your point about not needing to set things up via a config
 file.  That's fairly limiting since you can't change it on the fly via
 the API.

True. As I pointed out in another response, the ultimate goal would be to 
have policies as 'first class citizens' in Nova, including a DB table, API, 
etc. Maybe even a separate policy service? But in the meantime, it seems 
that the approach with a config file is a reasonable compromise in terms of 
usability, consistency and simplicity. 

I think we need to be looking to the future and to being able to delegate large 
parts of the functionality that is currently admin-only in Nova, and a large 
part of that is moving things like this from the config file into APIs. Once 
we have the Domain capability in Keystone fully available to services like 
Nova, we need to think more about ownership of resources like hosts, and about 
being able to delegate this kind of capability.


I do like your idea of making policies first-class citizens in Nova, but I am 
not sure doing this in Nova alone is enough.  Wouldn't we need similar things in 
Cinder and Neutron?  Unfortunately this does tie into how to do good 
scheduling across multiple services, which is another rabbit hole 
altogether.

 I don't like the idea of putting more logic in the config file; as it is, the 
 config files are already too complex, making running any OpenStack 
 deployment require some config file templating and some metadata magic 
 (like Heat).  I would prefer to keep things like this in aggregates, or 
 something else with a REST API.  So why not build a tool on top of 
 aggregates to push the appropriate metadata into the aggregates?  This would 
 give you a central point to manage policies that can easily be updated on 
 the fly (unlike config files).  

I agree with Joe on this point, and this is the approach we're taking with the 
Pcloud / whole-host-allocation blueprint:

https://review.openstack.org/#/c/38156/
https://wiki.openstack.org/wiki/WholeHostAllocation

I don't think realistically we'll be able to land this in Havana now (as much 
as anything, I don't think it has had enough air time yet to be sure we have a 
consensus on all of the details), but Rackspace are now helping with part of 
this and we do expect to have something in a PoC / demonstrable state for the 
Design Summit to provide a more focused discussion.  Because the code is 
layered on top of existing aggregate and scheduler features, it's pretty easy to 
keep it as something we can just keep rebasing.

Regards,
Phil
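
A rough sketch of the aggregate-based approach Joe describes above, using
python-novaclient to push policy metadata behind the REST API rather than into
nova.conf (the aggregate name, the 'policy' metadata key, and the credentials
below are placeholders, not an agreed convention):

    from novaclient import client

    # Credentials and endpoint are illustrative only.
    nova = client.Client('1.1', 'admin', 'secret', 'admin',
                         auth_url='http://keystone.example.com:5000/v2.0')

    # Create an aggregate, attach a host, and record the policy as metadata,
    # so a policy-aware scheduler filter can read it via the API on the fly.
    agg = nova.aggregates.create('gold-hosts', 'az-gold')
    nova.aggregates.add_host(agg.id, 'compute-01')
    nova.aggregates.set_metadata(agg.id, {'policy': 'gold'})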


 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: Blueprint information

2013-07-28 Thread Ofer Blaut
Hi,

I am interested in helping out with QE efforts on upstream
OpenStack, specifically around Neutron.

I'm trying to understand the following blueprint; can you please point me to 
a more detailed design?

https://blueprints.launchpad.net/neutron/+spec/auto-associate-floating-ip


Thanks

Ofer Blaut  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] property protections -- final call for comments

2013-07-28 Thread Brian Rosmaita
Stuart,

I agree with Mark's comments, wanted to address this:
3) we could potentially link roles to the regex

eg this could allow role1_xxx to be writable only if you have 'role1'.
By assigning appropriate roles (com.provider/com.partner/nova?) you
could provide the ability to write to that prefix without config file
changes.
I like your idea, and I think the config we're proposing would be able to cover 
this use case.  Since the plan is to allow reference to roles defined in 
policy.json, it would just be up to the provider to make sure the config file 
and the policy.json were in sync.  (Not as nice as having it work 
automatically, but should be doable.)

cheers,
brian
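
For what it's worth, the role-to-regex idea in (3) boils down to a check along
the lines of the sketch below (illustrative only; the rule list, role names,
and deny-by-default behaviour are placeholders, not the proposed Glance config
format):

    import re

    # Each rule maps a property-name regex to the roles allowed to write it.
    PROPERTY_RULES = [
        (re.compile(r'^role1_'), {'role1'}),
        (re.compile(r'^com\.provider\.'), {'provider_admin'}),
        (re.compile(r'^owner_'), {'member', 'admin'}),
    ]

    def can_write(property_name, user_roles):
        """Return True if any of the user's roles may write this property."""
        for pattern, allowed_roles in PROPERTY_RULES:
            if pattern.match(property_name):
                return bool(allowed_roles & set(user_roles))
        return False  # no rule matched: deny by default (a policy choice)

    print(can_write('role1_rating', ['role1']))        # True
    print(can_write('com.provider.tier', ['member']))  # False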


From: Mark Washenberger [mark.washenber...@markwash.net]
Sent: Friday, July 26, 2013 2:56 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Glance] property protections -- final call for 
comments




On Fri, Jul 26, 2013 at 9:56 AM, 
stuart.mcla...@hp.com wrote:
Hi Brian,

Firstly, thanks for all your great work here!

Some feedback:

1) Is there a clash with existing user properties?

For currently deployed systems a user may have an existing property 'foo: bar'.
If we restrict property access (by virtue of allowing only owner_xxx)
can the user update this previously existing property?

No, a user would not be able to update the previously existing property. 
However, I do not view requiring owner_ as a prefix for generic metadata 
properties to be the typical use case, so I am not concerned about this 
conflict. Those who wish to take on the extra responsibility of completely 
isolating owner metadata into a prefix may also take on the responsibility of 
migrating existing general properties to that prefix.


2) A nice feature of this scheme is that the cloud provider can pick an 
arbitrary
informal namespace for this purpose and educate users appropriately.

How about having the user properties area be always the same?
It would be more consistent/predictable -- is there a down side?

I'm not sure that the need is great enough--the downside is that this user 
properties area may not be appropriate for a majority of deployers.


3) we could potentially link roles to the regex

eg this could allow role1_xxx to be writable only if you have 'role1'.
By assigning appropriate roles (com.provider/com.partner/nova?) you
could provide the ability to write to that prefix without config file
changes.

Thanks,

-Stuart

After lots of discussion, I think we've come to a consensus on what property 
protections should look like in Glance.  Please reply with comments!

The blueprint: 
https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection

The full specification: 
https://wiki.openstack.org/wiki/Glance-property-protections
  (it's got a Prior Discussion section with links to the discussion etherpads)

A product approach to describing the feature: 
https://wiki.openstack.org/wiki/Glance-property-protections-product

cheers,
brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quick README? (Re: [vmware] VMwareAPI sub-team status update 2013-07-22)

2013-07-28 Thread Dan Wendlandt
Hi dims,

All of the deployments I've been a part of so far use 5.0 or newer, as do
most of the developers I interact with.  I'm not aware of anything specific
as to why 4.1 would not work (someone more familiar with details of the
vSphere APIs than I may be able to chime in), but based on customer input I
expect most of our testing to focus on 5.x series, at least for now.

Feel free to file a launchpad bug with logs describing what is happening on
your 4.1 setup that is resulting in a problem.   Please remember to add the
tag 'vmware', so that it shows up when the vmwareapi sub-team searches for
bugs.

Dan



On Fri, Jul 26, 2013 at 6:12 AM, Davanum Srinivas dava...@gmail.com wrote:

 Shawn, Dan,

 What are the versions of ESX and vCenter that should work by the time
 Havana gets out? Havana Trunk + ESX 4.1 and Havana Trunk + vSphere
 4.1 + ESX 4.1 do not seem to work. I was only able to get Havana
 Trunk + vCenter 5.1 + ESX 5.1 to work.

 thanks,
 dims

 On Wed, Jul 24, 2013 at 1:10 PM, Shawn Hartsock hartso...@vmware.com
 wrote:
  I am trying to put everything here for now:
 
  https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide
 
  Let me know if you need more.
  # Shawn Hartsock
 
  Davanum Srinivas dava...@gmail.com wrote:
 
  Shawn, or others involved in this effort,
 
  Is there a quick README or equivalent on how to use the latest code
  say with devstack and vCenter to get a simple deploy working?
 
  thanks,
  -- dims
 
  On Mon, Jul 22, 2013 at 9:15 PM, Shawn Hartsock hartso...@vmware.com
 wrote:
 
  ** No meeting this week **
 
  I have a conflict and can't run the meeting this week. We'll be back next week:
 http://www.timeanddate.com/worldclock/fixedtime.html?iso=20130731T1700
 
  Two of us ran into a problem with an odd pep8 failure:
  E: nova.conf.sample is not up to date, please run
 tools/conf/generate_sample.sh
 
  Yaguang Tang gave the workaround: run tools/conf/generate_sample.sh to
  regenerate nova.conf.sample, then resubmit.
 
  I've put all these reviews under the re-work section. Hopefully this
 is simple and we can fix them this week.
 
  Blueprints targeted for Havana-3:
  *
 https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy -
 nova.conf.sample out of date
  *
 https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service -
  needs review
 
  New Blueprint:
  *
 https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
 
  Needs one more +2 / Approve button:
  * https://review.openstack.org/#/c/33504/
  * https://review.openstack.org/#/c/36411/
 
  Ready for core-reviewer:
  * https://review.openstack.org/#/c/33100/
 
  Needs VMware API expert review (no human reviews):
  * https://review.openstack.org/#/c/30282/
  * https://review.openstack.org/#/c/30628/
  * https://review.openstack.org/#/c/32695/
  * https://review.openstack.org/#/c/37389/
  * https://review.openstack.org/#/c/37539/
 
  Work/re-work in progress:
  * https://review.openstack.org/#/c/30822/ - weird Jenkins issue, fault
 is not in the patch
  * https://review.openstack.org/#/c/37819/ - weird Jenkins issue, fault
 is not in the patch
  * https://review.openstack.org/#/c/34189/ - in danger of becoming
 abandoned
 
  Needs help/discussion (has a -1):
  * https://review.openstack.org/#/c/34685/
 
  Meeting info:
  * https://wiki.openstack.org/wiki/Meetings/VMwareAPI
 
  # Shawn Hartsock
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  --
  Davanum Srinivas :: http://davanum.wordpress.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: http://davanum.wordpress.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Alembic support

2013-07-28 Thread Jamie Lennox




- Original Message -
 From: Doug Hellmann doug.hellm...@dreamhost.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Saturday, 27 July, 2013 4:15:53 AM
 Subject: Re: [openstack-dev] [Keystone] Alembic support
 
 
 
 
 On Fri, Jul 26, 2013 at 2:04 PM, Adam Young  ayo...@redhat.com  wrote:
 
 
 
 On 07/26/2013 01:55 PM, Doug Hellmann wrote:
 
 
 
 It makes sense, though, if the data is going to be (potentially) coming from
 separate databases. Is that any different for sqlalchemy-migrate?
 
 As far as tracking the separate migrations for the different schemas, that's
 handled by running alembic init for each schema to create a different
 environment. The environments should then have unique version_table values
 defined in the alembic.ini so that the versions of each schema can be tracked
 separately. I suggest alembic_version_${schema_owner}, where schema_owner is
 the subset of the schema (i.e., policy, tokens, or identity), or the
 extension name.
 
 I think that this will require enough of a change that I would like to do it
 in Icehouse, and have a detailed blueprint written up for it.
 
 
 That seems reasonable. Splitting one database into 3 (or more) will take some
 consideration.

+1. I think we can leave this until Icehouse, though we should continue discussing the how.
I'm interested to know how the changeover will work. 

 
 Doug
 
 
 
 
 
 
 
 
 On Fri, Jul 26, 2013 at 1:42 PM, Dolph Mathews  dolph.math...@gmail.com 
 wrote:
 
 
 
 Based on the docs, it looks like you need to start with separate sqlalchemy
 engines with their own metadata instances for each environment you want to
 migrate. That sounds like a significant refactor from where we are today
 (everything shares keystone.common.sql.core.ModelBase).
 
 
 On Thu, Jul 25, 2013 at 10:41 PM, Morgan Fainberg  m...@metacloud.com  
 wrote:
 
 
 +1 to getting the multiple repos in place. Moving to Alembic later on in H
 or even as the first commit of I should meet our goals to be on Alembic in
 a reasonable timeframe. This also allows us to ensure we aren't rushing the
 work to get our migration repos over to Alembic.
 
 I think that allowing the extensions to have their own repos sooner is
 better, and if we end up with an extension that has more than 1 or 2
 migrations, we have probably accepted code that is far from fully baked (and
 we should evaluate how that code made it in).
 
 I am personally in favor of making the first commit of Icehouse (barring any
 major issue) the point at which we move to Alembic. We can be selective in
 taking extension modifications that add migration repos if it is a major
 concern that moving to Alembic is going to be really painful.
 
 Cheers,
 Morgan Fainberg
 
 On Thu, Jul 25, 2013 at 7:35 PM, Adam Young  ayo...@redhat.com  wrote:
 
 
 I've been looking into Alembic support. It seems that there is one thing
 missing that I was counting on: multiple migration repos. It might be
 supported, but the docs are thin, and reports vary.
 
 In the current Keystone implementation, we have a table like this:
 mysql> desc migrate_version;
 +-----------------+--------------+------+-----+---------+-------+
 | Field           | Type         | Null | Key | Default | Extra |
 +-----------------+--------------+------+-----+---------+-------+
 | repository_id   | varchar(250) | NO   | PRI | NULL    |       |
 | repository_path | text         | YES  |     | NULL    |       |
 | version         | int(11)      | YES  |     | NULL    |       |
 +-----------------+--------------+------+-----+---------+-------+
 
 
 Right now we only have one row in there:
 
 keystone | /opt/stack/keystone/keystone/common/sql/migrate_repo | 0
 
 
 However, we have been lumping all of our migrations together into a single
 repo, and we are just now looking to sort them out. For example, Policy,
 Tokens, and Identity do not really need to share a database. As such, they
 could go into separate migration repos, and it would keep changes to one
 from stepping on changes to another, and avoiding the continuous rebasing
 problem we currently have.
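 
 To illustrate, sqlalchemy-migrate already keys migrate_version by repository,
 so per-component repos mostly mean versioning and upgrading each repo
 separately (a rough sketch; the paths are illustrative, not the proposed
 layout):
 
     from migrate import exceptions as migrate_exc
     from migrate.versioning import api as versioning_api
 
     DB_URL = 'mysql://keystone:secret@localhost/keystone'
     REPOS = [
         'keystone/common/sql/migrate_repo',      # core tables
         'keystone/contrib/oauth1/migrate_repo',  # a per-extension repo
     ]
 
     for repo in REPOS:
         try:
             # Adds one row per repository to migrate_version.
             versioning_api.version_control(DB_URL, repo)
         except migrate_exc.DatabaseAlreadyControlledError:
             pass
         # Runs only this repository's migrations.
         versioning_api.upgrade(DB_URL, repo)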
 
 In addition, we want to put each of the extensions into their own repos. This
 happens to be an important time for that, as we have three extensions coming
 in that need SQL repos: OAuth, KDS, and Attribute Mapping.
 
 I think we should delay moving Keystone to Alembic until the end of Havana,
 or as the first commit in Icehouse. That way, we have a clean cut over
 point. We can decide then whether to backport the Extension migrations or
 leave them under sqlalchemy-migrate. Mixing the two technologies side by
 side for a short period of time is going to be required, and I think we need
 to have a clear approach in place to avoid a mess.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

[openstack-dev] Grenade issues

2013-07-28 Thread Monty Taylor
Hey all!

There is currently an issue which is causing a very high failure
rate in the gate. From IRC:

18:32:19  clarkb | the grenade failures seem to get very
consistent in the gate at 2013-0-27 1552UTC
18:32:27  clarkb | before that the success rate is much higher
18:34:53  clarkb | *2013-07-27
18:40:01  clarkb | https://review.openstack.org/#/c/38810/ was
the last change to pass grenade when it was semi consistently passing
18:41:31  clarkb | 38587 and 28082 seem like strong candidates
for the breakage

The working hypothesis is that since the grenade gate is asymmetrical
(it consumes grizzly and trunk but only gates trunk) that a change to
grizzly went in that broke something for trunk. Obviously this is
something we want to avoid - but since this is our first time gating on
upgrade patterns in this way, it's also probably a good chance for us to
learn about the process of doing that.

In any case, although I'm sure dtroyer and sdague will take a look as
soon as they are online, it's unlikely that anything is going to land
until this is sorted, so I'm sure they'd appreciate any help from anyone
who can look into the actual issue.

Thanks!
Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-28 Thread Angus Salkeld

On 26/07/13 09:43 -0700, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2013-07-26 06:37:09 -0700:

On 25/07/13 19:07, Bartosz Górski wrote:
 We want to start from something simple. At the beginning we are assuming
 no dependencies between resources from different regions. Our first use
 case (the one on the wiki page) uses this assumption, so it can easily be
 split into two separate single-region templates.

 Our goal is to support dependencies between resources from different
 regions. Our second use case (I will add it with more details to the
 wiki page soon) is similar to deploying a two-instance (app server + db
 server) wordpress setup in two different regions (app server in the first
 region and db server in the second). The regions will be connected to each
 other via a VPN connection. In this case the configuration of the app server
 depends on the db server: we need to know the IP address of the created DB
 server to properly configure the app server, which forces us to wait to
 create the app server until the DB server has been created.

That's still a fairly simple case that could be handled by a pair of
OS::Heat::Stack resources (one provides a DBServerIP output that is passed
as a parameter to the other region using {'Fn::GetAtt':
['FirstRegionStack', 'Outputs.DBServerIP']}). But it's possible to
imagine circumstances where that approach is at least suboptimal (e.g.
when creating the actual DB server is comparatively quick, but we have
to wait for the entire template, which might be slow).



How about we add an actual heat resource?

So you could aggregate stacks.

We kinda have one with OS::Heat::Stack, but it doesn't use
python-heatclient. We could solve this by adding an endpoint
property to the OS::Heat::Stack resource. Then, if it is not
local, it uses python-heatclient to create the nested stack
remotely.

Just a thought.

-Angus
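
A rough sketch of what the remote branch of such a resource might do with
python-heatclient (the 'endpoint' property, names, and parameters below are
illustrative only, not an agreed design):

    from heatclient.client import Client as heat_client

    def create_remote_stack(endpoint, token, name, template, params):
        """Create the nested stack in another region via its Heat endpoint."""
        heat = heat_client('1', endpoint=endpoint, token=token)
        return heat.stacks.create(
            stack_name=name,
            template=template,   # body of the nested template
            parameters=params,   # e.g. the DBServerIP from the other region
        )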



If you break that stack up into two stacks, the db and the other slow stuff,
then you can get the output of the db stack earlier, so that is a
solvable problem.


 More complicated use case with load balancers and more regions are also
 in ours minds.

Good to know, thanks. I'll look forward to reading more about it on the
wiki.

What I'd like to avoid is a situation where anything _appears_ to be
possible (Nova server and Cinder volume in different regions? Sure!
Connect 'em together? Sure!), and the user only finds out later that it
doesn't work. It would be much better to structure the templates in such
a way that only things that are legitimate are expressible. That's not
an achievable goal, but IMO we want to be much closer to the latter than
the former.



These are all predictable limitations and can be handled at the parsing
level.  You will know as soon as you have template + params whether or
not that cinder volume in region A can be attached to the nova server
in region B.

I'm still convinced that none of this matters if you rely on a single Heat
in one of the regions. The whole point of multi region is to eliminate
a SPOF.
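
To illustrate the kind of parse-level check meant above (a toy sketch; the
resource and property names are made up and this is not Heat code):

    # Reject cross-region volume attachments as soon as template + params
    # are known, before creating anything.
    def check_attachments(resources):
        for name, res in resources.items():
            if res['type'] != 'VolumeAttachment':
                continue
            server = resources[res['properties']['server']]
            volume = resources[res['properties']['volume']]
            if server['region'] != volume['region']:
                raise ValueError(
                    '%s: volume in %s cannot attach to server in %s'
                    % (name, volume['region'], server['region']))

    check_attachments({
        'web': {'type': 'Server', 'region': 'region-b', 'properties': {}},
        'data': {'type': 'Volume', 'region': 'region-a', 'properties': {}},
        'mount': {'type': 'VolumeAttachment', 'region': 'region-b',
                  'properties': {'server': 'web', 'volume': 'data'}},
    })  # raises ValueError: cross-region attachment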

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grenade issues

2013-07-28 Thread Sean Dague

On 07/28/2013 08:02 PM, Monty Taylor wrote:

Hey all!

There is currently an issue with which is causing a very high failure
rate in the gate. From IRC:

18:32:19  clarkb | the grenade failures seem to get very
consistent in the gate at 2013-0-27 1552UTC
18:32:27  clarkb | before that the success rate is much higher
18:34:53  clarkb | *2013-07-27
18:40:01  clarkb | https://review.openstack.org/#/c/38810/ was
the last change to pass grenade when it was semi consistently passing
18:41:31  clarkb | 38587 and 28082 seem like strong candidates
for the breakage

The working hypothesis is that since the grenade gate is assymetrical
(it consumes grizzly and trunk but only gates trunk) that a change to
grizzly went in that broke something for trunk. Obviously this is
something we want to avoid - but since this is our first time gating on
upgrade patterns in this way, it's also probably a good chance for us to
learn about the process of doing that.

In any case, although I'm sure dtroyer and sdague will take a look as
soon as they are online, it's unlikely that anything is going to land
until this is sorted- so I'm sure they'd appreciate any help from anyone
who can look in to the actual issue.


I won't be able to get to trying code until tomorrow morning; however, 
the most likely culprit line I'm seeing in the logs is this - 
http://logs-dev.openstack.org/51/38951/2/check/gate-grenade-devstack-vm/22154/logs/new/screen-c-sch.txt.gz#2013-07-28%2006%3A27%3A34.974


Cinder isn't able to schedule volumes, which is bad. How we got to this 
bad state post-upgrade is unknown to me.


The other thing that is suspicious is this - 
http://logs-dev.openstack.org/51/38951/2/check/gate-grenade-devstack-vm/22154/logs/new/screen-c-vol.txt.gz#2013-07-28%2006%3A26%3A36.466


Especially given that the last change that passed the gate was adding 
jsonschema to the tempest requirements list. Maybe this is all just a 
crazy requirements unwind?


Anyway, help appreciated on debugging. This is actually catching a real 
problem with cinder, which is what it was designed to do. How we got to 
the real problem is, however, kind of up in the air.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grenade issues

2013-07-28 Thread Davanum Srinivas
Monty,

I picked up the latest run
https://jenkins.openstack.org/job/gate-grenade-devstack-vm/22236/
which led me to
http://logs.openstack.org/51/38951/2/check/gate-grenade-devstack-vm/22236/logs/new/screen-c-vol.txt.gz

So could the problem be 38810 itself?

2013-07-29 00:46:37.562 27821 TRACE cinder.service Stderr: 'Traceback
(most recent call last):\n  File "/usr/local/bin/cinder-rootwrap",
line 4, in <module>\n    from pkg_resources import require;
require(\'cinder==2013.2.a78.g465eb62\')\n  File
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2707, in
<module>\n    working_set.require(__requires__)\n  File
"/usr/lib/python2.7/dist-packages/pkg_resources.py", line 686, in
require\n    needed = self.resolve(parse_requirements(requirements))\n
 File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 584,
in resolve\n    raise
DistributionNotFound(req)\npkg_resources.DistributionNotFound:
jsonschema>=0.7,<3\n'

On Sun, Jul 28, 2013 at 8:02 PM, Monty Taylor mord...@inaugust.com wrote:
 Hey all!

 There is currently an issue with which is causing a very high failure
 rate in the gate. From IRC:

 18:32:19  clarkb | the grenade failures seem to get very
 consistent in the gate at 2013-0-27 1552UTC
 18:32:27  clarkb | before that the success rate is much higher
 18:34:53  clarkb | *2013-07-27
 18:40:01  clarkb | https://review.openstack.org/#/c/38810/ was
 the last change to pass grenade when it was semi consistently passing
 18:41:31  clarkb | 38587 and 28082 seem like strong candidates
 for the breakage

 The working hypothesis is that since the grenade gate is assymetrical
 (it consumes grizzly and trunk but only gates trunk) that a change to
 grizzly went in that broke something for trunk. Obviously this is
 something we want to avoid - but since this is our first time gating on
 upgrade patterns in this way, it's also probably a good chance for us to
 learn about the process of doing that.

 In any case, although I'm sure dtroyer and sdague will take a look as
 soon as they are online, it's unlikely that anything is going to land
 until this is sorted- so I'm sure they'd appreciate any help from anyone
 who can look in to the actual issue.

 Thanks!
 Monty

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grenade issues

2013-07-28 Thread Sean Dague

On 07/28/2013 09:28 PM, Davanum Srinivas wrote:

Monty,

I picked up a latest run
https://jenkins.openstack.org/job/gate-grenade-devstack-vm/22236/
which lead me to
http://logs.openstack.org/51/38951/2/check/gate-grenade-devstack-vm/22236/logs/new/screen-c-vol.txt.gz

So could the problem be 38810 itself?

2013-07-29 00:46:37.562 27821 TRACE cinder.service Stderr: 'Traceback
(most recent call last):\n  File /usr/local/bin/cinder-rootwrap,
line 4, in module\nfrom pkg_resources import require;
require(\'cinder==2013.2.a78.g465eb62\')\n  File
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 2707, in
module\nworking_set.require(__requires__)\n  File
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 686, in
require\nneeded = self.resolve(parse_requirements(requirements))\n
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 584,
in resolve\nraise
DistributionNotFound(req)\npkg_resources.DistributionNotFound:
jsonschema=0.7,3\n'


Well, it's a think regardless. Tempest is definitely downgrading 
jsonschema when it hits it's requirements phase:


Downloading/unpacking jsonschema>=1.0.0,!=1.4.0,<2 (from -r 
/opt/stack/new/tempest/requirements.txt (line 6))

  Downloading jsonschema-1.3.0.tar.gz
  Storing download in cache at 
/var/cache/pip/http%3A%2F%2Fpypi.openstack.org%2Fopenstack%2Fjsonschema%2Fjsonschema-1.3.0.tar.gz

  Running setup.py egg_info for package jsonschema

http://logs.openstack.org/51/38951/2/check/gate-grenade-devstack-vm/22236/logs/grenade.sh.log.2013-07-29-003611

Seems weird, but I guess entrypoints now mean lots of things blow up if 
we do things like this. I'm rushing through a revert on this commit now 
- https://review.openstack.org/#/c/39005/ as it's low impact and we can 
revisit later. Will look harder in the morning.
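
For anyone following along, the failure mode is just pkg_resources enforcing 
cinder's pins at console-script startup; a standalone illustration (not taken 
from the gate logs; the pin is the one shown in the traceback):

    import pkg_resources

    try:
        # The cinder-rootwrap console script effectively does this before
        # importing any cinder code; if the installed jsonschema no longer
        # satisfies (or is no longer visible for) this requirement, it dies
        # with DistributionNotFound. On a healthy install this succeeds
        # silently.
        pkg_resources.require('jsonschema>=0.7,<3')
    except pkg_resources.DistributionNotFound as exc:
        print('rootwrap would fail here: %s' % exc)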



On Sun, Jul 28, 2013 at 8:02 PM, Monty Taylor mord...@inaugust.com wrote:

Hey all!

There is currently an issue with which is causing a very high failure
rate in the gate. From IRC:

18:32:19  clarkb | the grenade failures seem to get very
consistent in the gate at 2013-0-27 1552UTC
18:32:27  clarkb | before that the success rate is much higher
18:34:53  clarkb | *2013-07-27
18:40:01  clarkb | https://review.openstack.org/#/c/38810/ was
the last change to pass grenade when it was semi consistently passing
18:41:31  clarkb | 38587 and 28082 seem like strong candidates
for the breakage

The working hypothesis is that since the grenade gate is assymetrical
(it consumes grizzly and trunk but only gates trunk) that a change to
grizzly went in that broke something for trunk. Obviously this is
something we want to avoid - but since this is our first time gating on
upgrade patterns in this way, it's also probably a good chance for us to
learn about the process of doing that.

In any case, although I'm sure dtroyer and sdague will take a look as
soon as they are online, it's unlikely that anything is going to land
until this is sorted- so I'm sure they'd appreciate any help from anyone
who can look in to the actual issue.

Thanks!
Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grenade issues

2013-07-28 Thread Sean Dague

On 07/28/2013 09:42 PM, Sean Dague wrote:

On 07/28/2013 09:28 PM, Davanum Srinivas wrote:

Monty,

I picked up a latest run
https://jenkins.openstack.org/job/gate-grenade-devstack-vm/22236/
which lead me to
http://logs.openstack.org/51/38951/2/check/gate-grenade-devstack-vm/22236/logs/new/screen-c-vol.txt.gz


So could the problem be 38810 itself?

2013-07-29 00:46:37.562 27821 TRACE cinder.service Stderr: 'Traceback
(most recent call last):\n  File /usr/local/bin/cinder-rootwrap,
line 4, in module\nfrom pkg_resources import require;
require(\'cinder==2013.2.a78.g465eb62\')\n  File
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 2707, in
module\nworking_set.require(__requires__)\n  File
/usr/lib/python2.7/dist-packages/pkg_resources.py, line 686, in
require\nneeded = self.resolve(parse_requirements(requirements))\n
  File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 584,
in resolve\nraise
DistributionNotFound(req)\npkg_resources.DistributionNotFound:
jsonschema=0.7,3\n'



Well, it's a think regardless. Tempest is definitely downgrading
jsonschema when it hits it's requirements phase:


Gr... s/think/thing/... reasons why I shouldn't do this on a Sunday 
night... :)


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev