Re: [openstack-dev] [nova] how to unit test scripts outside of nova/nova?

2014-07-01 Thread Matt Riedemann



On 7/1/2014 4:03 PM, Matthew Treinish wrote:

On Tue, Jul 01, 2014 at 03:21:06PM -0500, Matt Riedemann wrote:

As part of the enforce-unique-instance-uuid-in-db blueprint [1] I'm writing
a script to scan the database and find any NULL instance_uuid records that
will cause the new database migration to fail so that operators can run this
before they run the migration, otherwise the migration blocks if these types
of records are found.

I have the script written [2], but wanted to also write unit tests for it. I
guess I assumed the script would go under nova/tools/db like the
schema_diff.py script, but I'm not sure how to unit test anything outside of
the nova/nova tree.

Nova's testr configuration is only discovering tests within nova/tests [3].
But I don't think I can put the unit tests under nova/tests and then import
the module from nova/tools.


So we hit a similar issue in tempest when we wanted to unit test some utility
scripts in tempest/tools. Changing the discovery path to find tests outside of
nova/tests is actually a pretty easy change[4], but I don't think that will 
solve
the use case with tox. What happened when we tried to do this in tempest use
case was that when the project was getting installed the tools dir wasn't
included so when we ran with tox it couldn't find the files we were trying to
test. The solution we came up with there was to put the script under the tempest
namespace and add unit tests in tempest/tests. (We also added an entry point for
the script to expose it as a command when tempest was installed.)



So I'm a bit stuck.  I could take the easy way out and just throw the script
under nova/db/sqlalchemy/migrate_repo and put my unit tests under
nova/tests/db/, and I'd also get pep8 checking with that, but that doesn't
seem right - but I'm also possibly over-thinking this.

Anyone else have any ideas?


I think it really comes down to how you want to present the utility to the end
users. To enable unit testing it, it's just easier to put it in the nova
namespace. I couldn't come up with a good way to get around the
install/namespace issue. (maybe someone else who is more knowledgeable here has
a good way to get around this) So then you can symlink it to the tools dir or
add an entry point (or bake it into nova-manage) to make it easy to find. I
think the issue with putting it in nova/db/sqlalchemy/migrate_repo is that it's
hard to find.



[1] 
https://blueprints.launchpad.net/nova/+spec/enforce-unique-instance-uuid-in-db
[2] https://review.openstack.org/#/c/97946/
[3] http://git.openstack.org/cgit/openstack/nova/tree/.testr.conf#n5

[4] 
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/test_discover/test_discover.py

-Matt Treinish



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Matt,

Thanks for the help. I completely forgot about making the new script an 
entry point in setup.cfg; that's a good idea.


Before I saw this I did move the script under 
nova/db/sqlalchemy/migrate_repo and moved the tests under nova/tests/db, 
and I have that all working now, so I'll probably just move forward with 
that rather than try to do some black magic with test discovery and 
getting the module imported.
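
For reference, if we do end up going the entry point route later, I think it 
would just be a console_scripts stanza in setup.cfg roughly like this (the 
command name and module path here are made up for illustration, not what the 
patch actually adds):

[entry_points]
console_scripts =
    nova-null-instance-uuid-scan = nova.cmd.null_instance_uuid_scan:main

pbr/setuptools then generates the wrapper script at install time, so the tool 
shows up on $PATH like the other nova-* commands while the module itself stays 
importable for unit tests under nova/tests.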


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Matt Riedemann



On 7/2/2014 8:23 PM, Mike Bayer wrote:


I've just added a new section to this wiki, "MySQLdb + eventlet = sad",
summarizing some discussions I've had in the past couple of days about
the ongoing issue that MySQLdb and eventlet were not meant to be used
together.   This is a big one to solve as well (though I think it's
pretty easy to solve).

https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad



On 6/30/14, 12:56 PM, Mike Bayer wrote:

Hi all -

For those who don't know me, I'm Mike Bayer, creator/maintainer of
SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
I've become a full time Openstack developer working for Red Hat, given
the task of carrying Openstack's database integration story forward.
To that end I am focused on the oslo.db project, which going forward
will serve as the basis for database patterns used by other Openstack
applications.

I've summarized what I've learned from the community over the past month
in a wiki entry at:

https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy

The page also refers to an ORM performance proof of concept which you
can see at https://github.com/zzzeek/nova_poc.

The goal of this wiki page is to publish to the community what's come up
for me so far, to get additional information and comments, and finally
to help me narrow down the areas in which the community would most
benefit by my contributions.

I'd like to get a discussion going here, on the wiki, on IRC (where I am
on freenode with the nickname zzzeek) with the goal of solidifying the
blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
on as well as recruiting contributors to help in all those areas.  I
would welcome contributors on the SQLAlchemy / Alembic projects directly
as well, as we have many areas that are directly applicable to Openstack.

I'd like to thank Red Hat and the Openstack community for welcoming me
on board and I'm looking forward to digging in more deeply in the coming
months!

- mike



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Regarding the eventlet + mysql sadness, I remembered this [1] in the 
nova.db.api code.


I'm not sure if that's just nova-specific right now; I'm a bit too lazy 
at the moment to check if it's in other projects, but I'm not seeing it 
in neutron, for example, which makes me wonder if it could help with the 
neutron db lock timeouts we see in the gate [2].  Don't let the bug 
status fool you, that thing is still showing up, or a variant of it is.


There are at least 6 lock-related neutron bugs hitting the gate [3].

[1] https://review.openstack.org/59760
[2] https://bugs.launchpad.net/neutron/+bug/1283522
[3] http://status.openstack.org/elastic-recheck/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-07 Thread Matt Riedemann



On 7/7/2014 3:28 PM, Jay Pipes wrote:



On 07/07/2014 04:17 PM, Mike Bayer wrote:


On 7/7/14, 3:57 PM, Matt Riedemann wrote:




Regarding the eventlet + mysql sadness, I remembered this [1] in the
nova.db.api code.

I'm not sure if that's just nova-specific right now, I'm a bit too
lazy at the moment to check if it's in other projects, but I'm not
seeing it in neutron, for example, and makes me wonder if it could
help with the neutron db lock timeouts we see in the gate [2].  Don't
let the bug status fool you, that thing is still showing up, or a
variant of it is.

There are at least 6 lock-related neutron bugs hitting the gate [3].

[1] https://review.openstack.org/59760
[2] https://bugs.launchpad.net/neutron/+bug/1283522
[3] http://status.openstack.org/elastic-recheck/



yeah, tpool. Correct me if I'm misunderstanding: we take some API code
that is 90% fetching from the database and run it all under eventlet,
the purpose of which is that IO can be shoveled out to an arbitrary degree
(e.g. the 500 concurrent connections type of thing), but then we take all
of the IO (the MySQL access) and put it into a thread pool anyway.
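
Concretely, the tpool wrapping being described boils down to something like
this rough Python sketch (illustrative names, not nova's actual code):

from eventlet import tpool

def fetch_instances(db_connection):
    # The MySQLdb call blocks inside C code, so eventlet can't switch away
    # from it; tpool.execute() pushes it onto a real OS thread and lets the
    # other greenthreads keep servicing requests in the meantime.
    return tpool.execute(db_connection.execute, "SELECT * FROM instances")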


Yep. It makes no sense to do that, IMO.

The solution is to use a non-blocking MySQLdb library which will yield
appropriately for evented solutions like gevent and eventlet.

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, never mind my comment, since it's not working without an eventlet 
patch; details are in the nova bug here [1].  And it sounds like it's still 
not 100% with the patch.


[1] https://bugs.launchpad.net/nova/+bug/1171601

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] new nasty gate bug 1338844 with nova-network races

2014-07-07 Thread Matt Riedemann
I noticed the bug [1] today.  Given the trend in logstash, it might be 
related to some fixes proposed to try and resolve the other big nova ssh 
timeout bug 1298472.  It appears to only be in jobs using nova-network.


[1] https://bugs.launchpad.net/nova/+bug/1338844

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] whatever happened to removing instance.locked in icehouse?

2014-07-08 Thread Matt Riedemann
I came across this [1] today and noticed the note to remove 
instance.locked in favor of locked_by is still in master, so apparently 
it was not removed in Icehouse.


Is anyone aware of any intention to remove instance.locked, or do we not 
care, or something else?  If we don't care, maybe we should remove the 
note in the code.


I found it and thought about this because the check_instance_lock 
decorator in nova.compute.api doesn't check the locked_by field [2], but 
I'm guessing it probably should...


[1] https://review.openstack.org/#/c/38196/13/nova/objects/instance.py
[2] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py?id=2014.2.b1#n184


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] new nasty gate bug 1338844 with nova-network races

2014-07-09 Thread Matt Riedemann



On 7/7/2014 9:29 PM, Matt Riedemann wrote:

I noticed the bug [1] today.  Given the trend in logstash, it might be
related to some fixes proposed to try and resolve the other big nova ssh
timeout bug 1298472.  It appears to only be in jobs using nova-network.

[1] https://bugs.launchpad.net/nova/+bug/1338844



Looks like jogo got the fix here:

https://review.openstack.org/#/c/105651/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] concurrent workers are overwhelming postgresql in the gate - bug 1338841

2014-07-09 Thread Matt Riedemann
Bug 1338841 [1] started showing up yesterday and I first noticed it on 
the change to set osapi_volume_workers equal to the number of CPUs 
available by default.  Similar patches for trove (api/conductor workers) 
and glance (api/registry workers) have landed in the last week also, and 
nova has been running with multiple api/conductor workers by default 
since Icehouse.


It looks like the cinder change tipped the default postgresql 
max_connections over and we started getting asynchronous connection 
failures in that job. [2]


We can also note that the postgresql job is the only one that runs the 
nova api-metadata service, which has its own workers.


The VMs the jobs are running on have 8 VCPUs, so that's at least 88 
workers between nova (3), cinder (1), glance (2), trove (2), neutron, 
heat and ceilometer.


So osapi_volume_workers (8) + n-api-meta workers (8) seems to have 
tipped it over.


The first attempt at a fix is to simply double the default 
max_connections value [3].


While looking up the postgresql configuration docs, I also read a bit about 
synchronous_commit=off and fsync=off, which sound like something we might 
want to think about using in devstack runs, since they are supposed to be 
more performant if you don't care about disaster recovery 
(which we don't in gate runs on VMs).


Anyway, bumping max_connections might fix the gate; I'm just sending 
this out to see if there are any postgresql experts out there with 
additional tips or insights on things we can tweak or look for, 
including whether or not it might be worthwhile to set 
synchronous_commit=off or fsync=off for gate runs.
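
For concreteness, the kind of postgresql.conf tweaks being discussed would
look something like this (the values are illustrative, not what the devstack
patch actually sets):

# postgresql.conf settings for throwaway gate VMs (illustrative values)
max_connections = 200      # default is 100; raise it to absorb the extra API workers
synchronous_commit = off   # don't wait for WAL flush on every commit
#fsync = off               # even more aggressive; only acceptable when crash recovery doesn't matter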


[1] https://bugs.launchpad.net/nova/+bug/1338841
[2] http://goo.gl/yRBDjQ
[3] https://review.openstack.org/#/c/105854/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] concurrent workers are overwhelming postgresql in the gate - bug 1338841

2014-07-09 Thread Matt Riedemann



On 7/9/2014 2:59 PM, Matt Riedemann wrote:

Bug 1338841 [1] started showing up yesterday and I first noticed it on
the change to set osapi_volume_workers equal to the number of CPUs
available by default.  Similar patches for trove (api/conductor workers)
and glance (api/registry workers) have landed in the last week also, and
nova has been running with multiple api/conductor workers by default
since Icehouse.

It looks like the cinder change tipped the default postgresql
max_connections over and we started getting asynchronous connection
failures in that job. [2]

We can also note that the postgresql job is the only one that runs the
nova api-metadata service, which has its own workers.

The VMs the jobs are running on have 8 VCPUs, so that's at least 88
workers between nova (3), cinder (1), glance (2), trove (2), neutron,
heat and ceilometer.

So osapi_volume_workers (8) + n-api-meta workers (8) seems to have
tipped it over.

The first attempt at a fix is to simply double the default
max_connections value [3].

While looking up the postgresql configuration docs, I also read a bit on
synchronous_commit=off and fsync=off, which sound like we might want to
also think about using one of those in devstack runs since they are
supposed to be more performant if you don't care about disaster recovery
(which we don't in gate runs on VMs).

Anyway, bumping max connections might fix the gate, I'm just sending
this out to see if there are any postgresql experts out there with
additional tips or insights on things we can tweak or look for,
including whether or not it might be worthwhile to set
synchronous_commit=off or fsync=off for gate runs.

[1] https://bugs.launchpad.net/nova/+bug/1338841
[2] http://goo.gl/yRBDjQ
[3] https://review.openstack.org/#/c/105854/



Typo in my math on the workers, it should be:

nova (3*8), cinder (1*8), glance (2*8), trove (2*8), neutron (1), heat 
(1) and ceilometer (1) = 67.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Inter cloud resource federation [Alliance]

2014-07-09 Thread Matt Riedemann



On 7/9/2014 12:33 PM, Tiwari, Arvind wrote:

Hi All,

I am investigating inter cloud resource federation across OpenStack based
cloud deployments; this is needed to support multiple regions, cloud
bursting, VPC and more use cases. I came up with a design (link below)
which advocates a new service (a.k.a. Alliance); this service sits close
to Keystone and helps abstract all the inter cloud concerns from
Keystone. This service will be abstracted from end users and there won't
be any direct interactions between users and the Alliance service. Keystone
will delegate all inter cloud concerns to Alliance.

https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation

Apart from basic resource federation use cases, the Alliance service will
add the following features:

1. UUID token support across cloud

2. PKI Token support

3. Inter Cloud Token Validation

4. Inter Cloud Communication to allow

   • Region/endpoint Discovery

   • Service Discovery

   • Remote Resource Provisioning

5. Resource Access Across Clouds

6. SSO Across Cloud

7. SSOut Across Cloud (or Inter Cloud Token Revocation)

8. Notification to propagate meter info, resource de-provisioning ...

I would appreciate if you guys take a look and share your perspective. I
am open to any questions, suggestions, discussions on the same.

Thanks for your time,

Arvind

Please excuse any typographical error.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is this only identity (keystone), or are other things like booting instances 
in nova from public/private clouds also abstracted from the client?  And if 
so, have you heard of nova-cells?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fastest way to run individual tests ?

2014-07-09 Thread Matt Riedemann



On 6/12/2014 6:17 AM, Daniel P. Berrange wrote:

On Thu, Jun 12, 2014 at 07:07:37AM -0400, Sean Dague wrote:

On 06/12/2014 06:59 AM, Daniel P. Berrange wrote:

Does anyone have any tips on how to actually run individual tests in an
efficient manner, i.e. something that adds no more than a 1 second penalty
over & above the time to run the test itself?  NB, assume that I've primed
the virtual env with all prerequisite deps already.



The overhead is in the fact that we have to discover the world, then
throw out the world.

You can actually run an individual test via invoking the testtools.run
directly:


python -m testtools.run nova.tests.test_versions


(Also, when testr explodes because of an import error this is about the
only way to debug what's going on).


Most excellent, thank you. I knew someone must know a way to do it :-)

Regards,
Daniel



I've been beating my head against the wall a bit on unit tests too this 
week, and here is another tip that just uncovered something for me when 
python -m testtools.run and nosetests didn't help.


I sourced the tox virtualenv and then ran the test from there, which 
gave me the actual error, so something like this:


source .tox/py27/bin/activate
python -m testtools.run 

Props to Matt Odden for helping me with the source of the venv tip.

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Treating notifications as a contract

2014-07-10 Thread Matt Riedemann



On 7/10/2014 3:48 AM, Eoghan Glynn wrote:


TL;DR: do we need to stabilize notifications behind a versioned
and discoverable contract?

Folks,

One of the issues that has been raised in the recent discussions with
the QA team about branchless Tempest relates to some legacy defects
in the OpenStack notification system.

Now, I don't personally subscribe to the PoV that ceilometer, or
indeed any other consumer of these notifications (e.g. StackTach), was
at fault for going ahead and depending on this pre-existing mechanism
without first fixing it.

But be that as it may, we have a shortcoming here that needs to be
called out explicitly, and possible solutions explored.

In many ways it's akin to the un-versioned RPC that existed in nova
before the versioned-rpc-apis BP[1] was landed back in Folsom IIRC,
except that notification consumers tend to be at arms-length from the
producer, and the effect of a notification is generally more advisory
than actionable.

A great outcome would include some or all of the following:

  1. more complete in-tree test coverage of notification logic on the
 producer side

  2. versioned notification payloads to protect consumers from breaking
 changes in payload format

  3. external discoverability of which event types a service is emitting

  4. external discoverability of which event types a service is consuming

If you're thinking that sounds like a substantial chunk of cross-project
work & co-ordination, you'd be right :)

So the purpose of this thread is simply to get a read on the appetite
in the community for such an effort. At the least it would require:

  * thrashing out the details in, say, a cross-project-track session at
    the K* summit

  * buy-in from the producer-side projects (nova, glance, cinder etc.)
in terms of stepping up to make the changes

  * acquiescence from non-integrated projects that currently consume
these notifications

(we shouldn't, as good citizens, simply pull the rug out from under
projects such as StackTach without discussion upfront)

  * dunno if the TC would need to give their imprimatur to such an
approach, or whether we could simply self-organize and get it done
without the need for governance resolutions etc.

Any opinions on how desirable or necessary this is, and how the
detailed mechanics might work, would be welcome.

Apologies BTW if this has already been discussed and rejected as
unworkable. I see a stalled versioned-notifications BP[2] and some
references to the CADF versioning scheme in the LP fossil-record.
Also an inconclusive ML thread from 2012[3], and a related grizzly
summit design session[4], but it's unclear to me whether these
aspirations got much traction in the end.

Cheers,
Eoghan

[1] https://blueprints.launchpad.net/nova/+spec/versioned-rpc-apis
[2] https://blueprints.launchpad.net/nova/+spec/versioned-notifications
[3] http://osdir.com/ml/openstack/2012-10/msg3.html
[4] https://etherpad.openstack.org/p/grizzly-common-messaging

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I didn't read all of this, but in nova-land, yes, we've wanted versioned 
notifications for a long time because there are things we can't change 
in the notification payload without that.  There have been a few ML 
threads in the past about this, but no one has ever really stepped up to 
work on it seriously from what I can tell.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Python 2.6 being dropped in K? What does that entail?

2014-07-11 Thread Matt Riedemann
I'm hearing that python 2.6 will no longer be supported in the K release, 
but I'm not sure if there is an official statement about that somewhere (wiki?).


I realize this means turning off the 2.6 unit test jobs, but what other 
runtime things are going to be explicitly removed, or if not removed, 
at least no longer blocked, that are not compatible with 2.6?


It sounds like dict comprehensions, for one, but for a lot of other stuff I 
thought we were moving to six anyway to support python 3?


I'm not as concerned about unit tests with 2.6, since I think a lot of 
development happens against 2.7; I'm thinking more about distro support, 
like RHEL 6.5 vs RHEL 7, which would mean upgrading to RHEL 7 if you want K.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] proposal for moving forward on cells/tempest testing

2014-07-14 Thread Matt Riedemann
Today we only gate on exercises in devstack for cells testing coverage 
in the gate-devstack-dsvm-cells job.


The cells tempest non-voting job was moved to the experimental queue 
here [1] since it doesn't work with a lot of the compute API tests.


I think we all agreed to tar and feather comstud if he didn't get 
Tempest "working" (read: passing) with cells enabled in Juno.


The first part of this is just figuring out where we sit with what's 
failing in Tempest (in the check-tempest-dsvm-cells-full job).


I'd like to propose that we do the following to get the ball rolling:

1. Add an option to tempest.conf under the compute-feature-enabled 
section to toggle cells and then use that option to skip tests that we 
know will fail in cells, e.g. security group tests.


2. Open bugs for all of the tests we're skipping so we can track closing 
those down, assuming they aren't already reported. [2]


3. Once the known failures are being skipped, we can move 
check-tempest-dsvm-cells-full out of the experimental queue.  I'm not 
proposing that it'd be voting right away, I think we have to see it burn 
in for awhile first.


With at least this plan we should be able to move forward on identifying 
issues and getting some idea for how much of Tempest doesn't work with 
cells and the effort involved in making it work.
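
For reference, the toggle in item 1 would amount to something like this in
tempest.conf (the option name is just a strawman until the qa-spec settles
on the real one):

[compute-feature-enabled]
cells = True

Tests that are known to fail under cells would then skip themselves when that
flag is set, the same way other compute-feature-enabled options are used today.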


Thoughts? If there aren't any objections, I said I'd work on the qa-spec 
and can start doing the grunt-work of opening bugs and skipping tests.


[1] https://review.openstack.org/#/c/87982/
[2] https://bugs.launchpad.net/nova/+bugs?field.tag=cells+

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Matt Riedemann
I opened bug 1341777 [1] against glance but it looks like it's due to 
the default log level for requests.packages.urllib3.connectionpool in 
oslo's log module.


The problem is this warning shows up nearly 420K times in 7 days in 
Tempest runs:


WARNING urllib3.connectionpool [-] HttpConnectionPool is full, 
discarding connection: 127.0.0.1


So either glance is doing something wrong, or that's logging at too high 
a level (I think it should be debug in this case).  I'm not really sure 
how to scope this down though, or figure out what is so damn chatty in 
glance-api that is causing this.  It doesn't seem to be causing test 
failures, but the rate at which this is logged in glance-api is surprising.


[1] https://bugs.launchpad.net/glance/+bug/1341777

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Matt Riedemann



On 7/14/2014 4:09 PM, Matt Riedemann wrote:

I opened bug 1341777 [1] against glance but it looks like it's due to
the default log level for requests.packages.urllib3.connectionpool in
oslo's log module.

The problem is this warning shows up nearly 420K times in 7 days in
Tempest runs:

WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
discarding connection: 127.0.0.1

So either glance is doing something wrong, or that's logging too high of
a level (I think it should be debug in this case).  I'm not really sure
how to scope this down though, or figure out what is so damn chatty in
glance-api that is causing this.  It doesn't seem to be causing test
failures, but the rate at which this is logged in glance-api is surprising.

[1] https://bugs.launchpad.net/glance/+bug/1341777



I found this older thread [1], which led to this in oslo [2], but I'm not 
really sure how to use it to make the connectionpool logging quieter in 
glance; any guidance there?  It looks like in Joe's change to nova for 
oslo.messaging he just changed the value directly in the log module in 
nova, something I thought was forbidden.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html

[2] https://review.openstack.org/#/c/94001/
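
For context, the oslo knob in question is the default_log_levels option, so 
in principle this could be quieted per-service with something like the 
following in the service's config file (a sketch only, and note the logger 
would have to be pushed all the way to ERROR, since the message itself is 
emitted at WARNING):

[DEFAULT]
default_log_levels = amqp=WARN,amqplib=WARN,sqlalchemy=WARN,requests.packages.urllib3.connectionpool=ERROR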

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-14 Thread Matt Riedemann



On 7/14/2014 5:18 PM, Ben Nemec wrote:

On 07/14/2014 04:21 PM, Matt Riedemann wrote:



On 7/14/2014 4:09 PM, Matt Riedemann wrote:

I opened bug 1341777 [1] against glance but it looks like it's due to
the default log level for requests.packages.urllib3.connectionpool in
oslo's log module.

The problem is this warning shows up nearly 420K times in 7 days in
Tempest runs:

WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
discarding connection: 127.0.0.1

So either glance is doing something wrong, or that's logging too high of
a level (I think it should be debug in this case).  I'm not really sure
how to scope this down though, or figure out what is so damn chatty in
glance-api that is causing this.  It doesn't seem to be causing test
failures, but the rate at which this is logged in glance-api is surprising.

[1] https://bugs.launchpad.net/glance/+bug/1341777



I found this older thread [1] which led to this in oslo [2] but I'm not
really sure how to use it to make the connectionpool logging quieter in
glance, any guidance there?  It looks like in Joe's change to nova for
oslo.messaging he just changed the value directly in the log module in
nova, something I thought was forbidden.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html
[2] https://review.openstack.org/#/c/94001/



There was a change recently in incubator to address something related,
but since it's setting to WARN I don't think it would get rid of this
message:
https://github.com/openstack/oslo-incubator/commit/3310d8d2d3643da2fc249fdcad8f5000866c4389

It looks like Joe's change was a cherry-pick of the incubator change to
add oslo.messaging, so discouraged but not forbidden (and apparently
during feature freeze, which is understandable).

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, it sounds like either a problem in glance, because they don't allow 
configuring the max pool size so it defaults to 1, or an issue in 
python-swiftclient that is being tracked in a different bug:


https://bugs.launchpad.net/python-swiftclient/+bug/1295812
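
For what it's worth, with requests the per-host pool sizing is controlled by
the HTTPAdapter that a client mounts on its Session, so whatever the eventual
fix is, it presumably looks something like this generic sketch (not the actual
swiftclient/glanceclient patch):

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Allow more pooled connections per host before requests starts discarding
# them (which is what triggers the "pool is full" warning).
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=25)
session.mount('http://', adapter)
session.mount('https://', adapter)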

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][qa] proposal for moving forward on cells/tempest testing

2014-07-15 Thread Matt Riedemann



On 7/15/2014 12:36 AM, Sean Dague wrote:

On 07/14/2014 07:44 PM, Matt Riedemann wrote:

Today we only gate on exercises in devstack for cells testing coverage
in the gate-devstack-dsvm-cells job.

The cells tempest non-voting job was moving to the experimental queue
here [1] since it doesn't work with a lot of the compute API tests.

I think we all agreed to tar and feather comstud if he didn't get
Tempest "working" (read: passing) with cells enabled in Juno.

The first part of this is just figuring out where we sit with what's
failing in Tempest (in the check-tempest-dsvm-cells-full job).

I'd like to propose that we do the following to get the ball rolling:

1. Add an option to tempest.conf under the compute-feature-enabled
section to toggle cells and then use that option to skip tests that we
know will fail in cells, e.g. security group tests.


I don't think we should do that. Part of creating the feature matrix in
devstack gate included the follow on idea of doing extension selection
based on branch or feature.

I'm happy if that gets finished, then tests are skipped by known not
working extensions, but just landing a ton of tempest ifdefs that will
all be removed is feeling very gorpy. Especially as we're now at Juno 2,
which was supposed to be the checkpoint for this being "on track for
completion" and... people are just talking about starting.


2. Open bugs for all of the tests we're skipping so we can track closing
those down, assuming they aren't already reported. [2]

3. Once the known failures are being skipped, we can move
check-tempest-dsvm-cells-full out of the experimental queue.  I'm not
proposing that it'd be voting right away, I think we have to see it burn
in for awhile first.

With at least this plan we should be able to move forward on identifying
issues and getting some idea for how much of Tempest doesn't work with
cells and the effort involved in making it work.

Thoughts? If there aren't any objections, I said I'd work on the qa-spec
and can start doing the grunt-work of opening bugs and skipping tests.

[1] https://review.openstack.org/#/c/87982/
[2] https://bugs.launchpad.net/nova/+bugs?field.tag=cells+



All the rest is fine, I just think we should work on the proper way to
skip things.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK, I don't know anything about the extensions in devstack-gate or how 
the skips would work then; I'll have to bug some people in IRC unless 
there is an easy example that can be pointed out here.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance] what to do about tons of http connection pool is full warnings in g-api log?

2014-07-15 Thread Matt Riedemann



On 7/14/2014 5:28 PM, Matt Riedemann wrote:



On 7/14/2014 5:18 PM, Ben Nemec wrote:

On 07/14/2014 04:21 PM, Matt Riedemann wrote:



On 7/14/2014 4:09 PM, Matt Riedemann wrote:

I opened bug 1341777 [1] against glance but it looks like it's due to
the default log level for requests.packages.urllib3.connectionpool in
oslo's log module.

The problem is this warning shows up nearly 420K times in 7 days in
Tempest runs:

WARNING urllib3.connectionpool [-] HttpConnectionPool is full,
discarding connection: 127.0.0.1

So either glance is doing something wrong, or that's logging too
high of
a level (I think it should be debug in this case).  I'm not really sure
how to scope this down though, or figure out what is so damn chatty in
glance-api that is causing this.  It doesn't seem to be causing test
failures, but the rate at which this is logged in glance-api is
surprising.

[1] https://bugs.launchpad.net/glance/+bug/1341777



I found this older thread [1] which led to this in oslo [2] but I'm not
really sure how to use it to make the connectionpool logging quieter in
glance, any guidance there?  It looks like in Joe's change to nova for
oslo.messaging he just changed the value directly in the log module in
nova, something I thought was forbidden.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/030763.html

[2] https://review.openstack.org/#/c/94001/



There was a change recently in incubator to address something related,
but since it's setting to WARN I don't think it would get rid of this
message:
https://github.com/openstack/oslo-incubator/commit/3310d8d2d3643da2fc249fdcad8f5000866c4389


It looks like Joe's change was a cherry-pick of the incubator change to
add oslo.messaging, so discouraged but not forbidden (and apparently
during feature freeze, which is understandable).

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah it sounds like either a problem in glance because they don't allow
configuring the max pool size so it defaults to 1, or it's an issue in
python-swiftclient and is being tracked in a different bug:

https://bugs.launchpad.net/python-swiftclient/+bug/1295812



It looks like the issue for the g-api logs was bug 1295812 in 
python-swiftclient, around the time it moved to using python-requests.


I noticed last night that the n-cpu/c-vol logs started spiking with the 
urllib3 connectionpool warning on 7/11, which is when python-glanceclient 
started using requests, so I've changed bug 1341777 to a 
python-glanceclient bug.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-glanceclient with requests is spamming the logs

2014-07-15 Thread Matt Riedemann
I've been looking at bug 1341777 since yesterday, originally because of the 
g-api logs and this warning:


"HttpConnectionPool is full, discarding connection: 127.0.0.1"

But that's been around for a while and it sounds like an issue with 
python-swiftclient since it started using python-requests (see bug 1295812).


I also noticed that the warning started spiking in the n-cpu and 
c-vol logs on 7/11 and traced that back to this change in 
python-glanceclient to start using requests:


https://review.openstack.org/#/c/78269/

This is nasty because it has generated around 166K warnings in those logs 
since 7/11:


http://goo.gl/p0urYm

It's a big change in glanceclient so I wouldn't want to propose a revert 
for this, but hopefully the glance team can sort this out quickly since 
it's going to impact our elastic search cluster.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] request to tag novaclient 2.18.0

2014-07-18 Thread Matt Riedemann



On 7/17/2014 5:48 PM, Steve Baker wrote:

On 18/07/14 00:44, Joe Gordon wrote:




On Wed, Jul 16, 2014 at 11:28 PM, Steve Baker <sba...@redhat.com> wrote:

On 12/07/14 09:25, Joe Gordon wrote:




On Fri, Jul 11, 2014 at 4:42 AM, Jeremy Stanley <fu...@yuggoth.org> wrote:

On 2014-07-11 11:21:19 +0200 (+0200), Matthias Runge wrote:
> this broke horizon stable and master; heat stable is
affected as
> well.
[...]

I guess this is a plea for applying something like the oslotest
framework to client libraries so they get backward-compat
jobs run
against unit tests of all dependant/consuming software...
branchless
tempest already alleviates some of this, but not the case of
changes
in a library which will break unit/functional tests of another
project.


We actually do have some tests for backwards compatibility, and
they all passed. Presumably because both heat and horizon have
poor integration test.

We ran

  * check-tempest-dsvm-full-havana SUCCESS in 40m 47s (non-voting)
    <http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-havana/8e09faa>
  * check-tempest-dsvm-neutron-havana SUCCESS in 36m 17s (non-voting)
    <http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-havana/b4ad019>
  * check-tempest-dsvm-full-icehouse SUCCESS in 53m 05s
    <http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-full-icehouse/c0c62e5>
  * check-tempest-dsvm-neutron-icehouse SUCCESS in 57m 28s
    <http://logs.openstack.org/66/94166/3/check/check-tempest-dsvm-neutron-icehouse/a54aedb>


on the offending patches (https://review.openstack.org/#/c/94166/)

Infra patch that added these tests:
https://review.openstack.org/#/c/80698/



Heat-proper would have continued working fine with novaclient
2.18.0. The regression was with raising novaclient exceptions,
which is only required in our unit tests. I saw this break coming
and switched to raising via from_response
https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py

Unit tests tend to deal with more internals of client libraries
just for mocking purposes, and there have been multiple breaks in
unit tests for heat and horizon when client libraries make
internal changes.

This could be avoided if the client gate jobs run the unit tests
for the projects which consume them.

That may work but isn't this exactly what integration testing is for?

If you mean tempest then no, this is different.

Client projects have done a good job of keeping their public library
APIs stable. An exception type is public API, but the constructor for
raising that type arguably is more of a gray area since only the client
library should be raising its own exceptions.

However heat and horizon unit tests need to raise client exceptions to
test their own error condition handling, so exception constructors could
be considered public API, but only for unit test mocking in other projects.

This problem couldn't have been caught in an integration test because
nothing outside the unit tests directly raises a client exception.

There have been other breakages where internal client library changes
have broken the mocking in our unit tests (I recall a neutronclient
internal refactor).

In many cases the cause may be inappropriate mocking in the unit tests,
but that is cold comfort when the gates break when a client library is
released.

Maybe we can just start with adding heat and horizon to the check jobs
of the clients they consume, but the following should also be considered:
grep "python-.*client" */requirements.txt

This could give client libraries more confidence that internal changes
don't break anything, and allows them to fix mocking in other projects
before their changes land.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't think we should have to change the gate jobs just so that other 
projects can test against the internals of their dependent clients; that 
sounds like a flawed unit test design to me.


Looking at 
https://review.openstack.org/#/c/97977/22/heat/tests/v1_1/fakes.py for 
example, why is a fake_exception needed to mock out novaclient's 
NotFound exception?  A better way to do this is that whatever is 
expected to raise the NotFound should use mock with a side_effect to 
raise novaclient.exceptions.NotFound; then mock handles the spec being 
set on the mock and you don't have to worry about the internal 
construction of the exception class in your unit tests.
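
In other words, something along these lines (a minimal sketch, not the actual
heat test code):

import mock
from novaclient import exceptions as nova_exceptions

# Wherever the code under test talks to novaclient, hand it a mock whose
# call raises the real exception type; the test never has to know how
# novaclient constructs the exception internally.
fake_nova = mock.Mock()
fake_nova.servers.get.side_effect = nova_exceptions.NotFound(404)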


--

Thanks,

Matt Riedemann


___

Re: [openstack-dev] [gate] Automatic elastic rechecks

2014-07-18 Thread Matt Riedemann



On 7/17/2014 9:01 AM, Matthew Booth wrote:

Elastic recheck is a great tool. It leaves me messages like this:

===
I noticed jenkins failed, I think you hit bug(s):

check-devstack-dsvm-cells: https://bugs.launchpad.net/bugs/1334550
gate-tempest-dsvm-large-ops: https://bugs.launchpad.net/bugs/1334550

We don't automatically recheck or reverify, so please consider doing
that manually if someone hasn't already. For a code review which is not
yet approved, you can recheck by leaving a code review comment with just
the text:

 recheck bug 1334550

For bug details see: http://status.openstack.org/elastic-recheck/
===

In an ideal world, every person seeing this would diligently check that
the fingerprint match was accurate before submitting a recheck request.

In the real world, how about we just do it automatically?

Matt



We don't want automatic rechecks because then we're just piling on to 
races: you can have jenkins failures where we have a fingerprint for one 
job failure, but some other job failing on your patch is an unrecognized 
failure (no e-r fingerprint query yet).  If we never force people to 
investigate the failures and write fingerprints, because we're just always 
automatically rechecking things for them, we'll drop our categorization 
rates and most likely eventually fall into a locked gate once we hit 2-3 
really nasty races at the same time.


So the best way to avoid a locked gate is to stay on top of managing the 
worst offenders and making sure everyone is actually looking at what 
failed so we can quickly identify new races.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About the ERROR:cliff.app Service Unavailable during deploy openstack by devstack.

2014-07-24 Thread Matt Riedemann



On 7/14/2014 3:47 AM, Meng Jie MJ Li wrote:

HI,


I tried to use devstack to deploy OpenStack, but encountered an issue:
ERROR: cliff.app Service Unavailable (HTTP 503).  I tried several times,
all with the same result.

2014-07-14 05:53:39.430 | + create_keystone_accounts
2014-07-14 05:53:39.431 | ++ get_or_create_project admin
2014-07-14 05:53:39.433 | +++ openstack project show admin -f value -c id
2014-07-14 05:53:40.147 | +++ openstack project create admin -f value -c id
2014-07-14 05:53:40.771 | ERROR: cliff.app Service Unavailable (HTTP 503)


2014-07-14 05:53:41.519 | +++ openstack user create admin --password
admin --project --email ad...@example.com -f value -c id
2014-07-14 05:53:42.080 | usage: openstack user create [-h] [-f
{shell,table,value}] [-c COLUMN]
2014-07-14 05:53:42.080 |  [--max-width ]
[--prefix PREFIX]
2014-07-14 05:53:42.080 |  [--password
] [--password-prompt]
2014-07-14 05:53:42.080 |  [--email ]
[--project ]
2014-07-14 05:53:42.080 |  [--enable | --disable]
2014-07-14 05:53:42.080 |  
2014-07-14 05:53:42.081 | openstack user create: error: argument
--project: expected one argument
2014-07-14 05:53:42.109 | ++ USER_ID=
2014-07-14 05:53:42.109 | ++ echo
2014-07-14 05:53:42.109 | + ADMIN_USER=
2014-07-14 05:53:42.110 | ++ get_or_create_role admin
2014-07-14 05:53:42.111 | +++ openstack role show admin -f value -c id
2014-07-14 05:53:42.682 | +++ openstack role create admin -f value -c id
2014-07-14 05:53:43.235 | ERROR: cliff.app Service Unavailable (HTTP 503)





By checking in Google, I found someone who encountered the same problem,
logged in https://bugs.launchpad.net/devstack/+bug/129, and I tried the
workaround but it didn't work. The workaround was as follows.
=
1st, I tried setting HOST_IP to 127.0.0.1.
Next, I set it to 9.21.xxx.xxx, which is the address of my eth0
interface, and added
export no_proxy=localhost,127.0.0.1,9.21.xxx.xxx

Neither of these fixed the problem.





My localrc file:

HOST_IP=9.21.xxx.xxx
FLAT_INTERFACE=eth0
#FIXED_RANGE=10.4.128.0/20
#FIXED_NETWORK_SIZE=4096
#FLOATING_RANGE=192.168.42.128/25
MULTI_HOST=1
LOGFILE=/opt/stack/logs/stack.sh.log
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=admin
RABBIT_PASSWORD=admin
SERVICE_PASSWORD=admin
SERVICE_TOKEN=xyzpdqlazydog
===

Any help appreciated


Regards
Mengjie







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



There was a recent change to devstack to default to running keystone in 
apache; that might be what you're hitting.  There is an env var to 
disable that so it doesn't run in apache, but you'd have to look up the 
change for the details.  It should be in the devstack lib/keystone file 
history.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] requesting python-neutronclient release for MacAddressInUseClient exception

2014-07-28 Thread Matt Riedemann
Nova needs a python-neutronclient release to use the new 
MacAddressInUseClient exception type defined here [1].


[1] https://review.openstack.org/#/c/109052/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] requesting python-neutronclient release for MacAddressInUseClient exception

2014-07-29 Thread Matt Riedemann



On 7/29/2014 9:15 AM, Kyle Mestery wrote:

On Tue, Jul 29, 2014 at 7:46 AM, Kyle Mestery  wrote:

On Mon, Jul 28, 2014 at 6:45 PM, Matt Riedemann
 wrote:

Nova needs a python-neutronclient release to use the new
MacAddressInUseClient exception type defined here [1].


I'll spin a new client release today Matt, and reply back on this
thread once that's complete.


FYI, I just pushed this release out, see the email here:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/041438.html

Thanks,
Kyle


Thanks,
Kyle


[1] https://review.openstack.org/#/c/109052/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks for the quick turnaround!

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] turbo-hipster seems very unhappy

2014-07-29 Thread Matt Riedemann
I've seen t-h failing on many patches today, most of which aren't touching 
the database migrations, but it's primarily catching my attention 
because of the failure on this change:


https://review.openstack.org/#/c/109660/

It looks like a pretty simple issue of the decorator package not being 
in whatever pypi mirror that t-h is using.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-29 Thread Matt Riedemann



On 7/25/2014 2:38 PM, Clark Boylan wrote:

Hello,

The recent release of tox 1.7.2 has fixed the {posargs} interpolation
issues we had with newer tox which forced us to be pinned to tox==1.6.1.
Before we can remove the pin and start telling people to use latest tox
we need to address a new default behavior in tox.

New tox sets a random PYTHONHASHSEED value by default. Arguably this is
a good thing as it forces you to write code that handles unknown hash
seeds, but unfortunately many projects' unittests don't currently deal
with this very well. A work around is to hard set a PYTHONHASHSEED of 0
in tox.ini files. I have begun to propose these changes to the projects
that I have tested and found to not handle random seeds. It would be
great if we could get these reviewed and merged so that infra can update
the version of tox used on our side.

I probably won't be able to test every single project and propose fixes
with backports to stable branches for everything. It would be a massive
help if individual projects tested and proposed fixes as necessary too
(these changes will need to be backported to stable branches). You can
test by running `tox -epy27` in your project with tox version 1.7.2. If
that fails add PYTHONHASHSEED=0 as in
https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
confirm that succeeds.
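
(The workaround boils down to a tox.ini stanza roughly like this; see the
linked review for the real change:

[testenv]
setenv = PYTHONHASHSEED=0

Projects that already have a setenv block would just add the line to it.)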

This will get us over the immediate hump of the tox upgrade, but we
should also start work to make our tests run with random hashes. This
shouldn't be too hard to do as it will be a self gating change once
infra is able to update the version of tox used in the gate. Most of the
issues appear related to dict entry ordering. I have gone ahead and
created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
work.

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is this in any way related to the fact that tox is unable to 
find/install the oslo alpha packages for me in nova right now (config, 
messaging, rootwrap) after I rebased on master?  I had to go into 
requirements.txt and remove the minimum versions on the alpha packages to 
get tox to install dependencies for nova unit tests.  I'm running with 
tox 1.6.1, but I'm not sure if that would be related anyway.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][nova] can't rebuild local tox due to oslo alpha packages

2014-07-30 Thread Matt Riedemann
I noticed yesterday that trying to rebuild the tox environment in nova fails 
because it won't pull down the oslo alpha packages (config, messaging, 
rootwrap).


It looks like you need the --pre option with pip install to get these 
normally.


It also sounds like tox should already be doing --pre, but it doesn't 
appear to be, at least with tox 1.6.1 in site-packages.


I'm using pip 1.5.6, which I thought was the latest.
--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?
--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 9:20 AM, Matt Riedemann wrote:



On 7/30/2014 6:43 AM, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?


You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.

Regards,
Daniel



Can we be more specific because this would also need to be updated in
the devref docs for setting up your development environment with Ubuntu.

Sorry for being a newb but I went here:

https://wiki.ubuntu.com/ServerTeam/CloudArchive/

And tried doing:

sudo add-apt-repository cloud-archive:icehouse

Which failed, I guess it doesn't know about what that means or something?

I added a repo to
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/
manually but it's not finding any newer libvirt packages.

If I can get some help I can push a patch to update the docs since I'm
assuming I won't be the only one that hits this and it sounds like
minesweeper hit it recently too. [1]

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html



Yay for the docs team; I was missing this:

apt-get install python-software-properties

Found it here:

http://docs.openstack.org/havana/install-guide/install/apt/content/basics-packages.html

The devref env setup doc in nova should still probably be updated to say 
something like, 'hey, if you're on juno using precise, you need to enable 
cloud archive to update libvirt'.
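
Roughly, the full sequence on Precise looks like this (from memory, so the
package names should be double-checked before it goes into devref):

sudo apt-get install python-software-properties
sudo add-apt-repository cloud-archive:icehouse
sudo apt-get update
sudo apt-get install libvirt-bin libvirt-dev python-libvirt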


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 6:43 AM, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?


You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.

Regards,
Daniel



Can we be more specific?  This would also need to be updated in 
the devref docs for setting up your development environment with Ubuntu.


Sorry for being a newb but I went here:

https://wiki.ubuntu.com/ServerTeam/CloudArchive/

And tried doing:

sudo add-apt-repository cloud-archive:icehouse

Which failed; I guess it doesn't know what that means, or something?

I added a repo to 
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/ 
manually but it's not finding any newer libvirt packages.


If I can get some help I can push a patch to update the docs since I'm 
assuming I won't be the only one that hits this and it sounds like 
minesweeper hit it recently too. [1]


[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-30 Thread Matt Riedemann



On 7/30/2014 2:27 AM, Michele Paolino wrote:

On 30/07/2014 07:53, Matt Riedemann wrote:



On 7/25/2014 2:38 PM, Clark Boylan wrote:

Hello,

The recent release of tox 1.7.2 has fixed the {posargs} interpolation
issues we had with newer tox which forced us to be pinned to tox==1.6.1.
Before we can remove the pin and start telling people to use latest tox
we need to address a new default behavior in tox.

New tox sets a random PYTHONHASHSEED value by default. Arguably this is
a good thing as it forces you to write code that handles unknown hash
seeds, but unfortunately many projects' unittests don't currently deal
with this very well. A work around is to hard set a PYTHONHASHSEED of 0
in tox.ini files. I have begun to propose these changes to the projects
that I have tested and found to not handle random seeds. It would be
great if we could get these reviewed and merged so that infra can update
the version of tox used on our side.

I probably won't be able to test every single project and propose fixes
with backports to stable branches for everything. It would be a massive
help if individual projects tested and proposed fixes as necessary too
(these changes will need to be backported to stable branches). You can
test by running `tox -epy27` in your project with tox version 1.7.2. If
that fails add PYTHONHASHSEED=0 as in
https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
confirm that succeeds.
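
For reference, the tox.ini change amounts to something like this (a
sketch; the exact [testenv] sections differ per project):

    [testenv]
    setenv = PYTHONHASHSEED=0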

This will get us over the immediate hump of the tox upgrade, but we
should also start work to make our tests run with random hashes. This
shouldn't be too hard to do as it will be a self gating change once
infra is able to update the version of tox used in the gate. Most of the
issues appear related to dict entry ordering. I have gone ahead and
created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
work.

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is this in any way related to the fact that tox is unable to
find/install the oslo alpha packages for me in nova right now (config,
messaging, rootwrap) after I rebased on master?  I had to go into
requirements.txt and remove the min versions on the alpha versions to
get tox to install dependencies for nova unit tests. I'm running with
tox 1.6.1 but not sure if that would be related anyhow.


Problem confirmed from my side. The error is:
Downloading/unpacking oslo.config>=1.4.0.0a3 (from -r
/media/repos/nova/requirements.txt (line 34))
   Could not find a version that satisfies the requirement
oslo.config>=1.4.0.0a3 (from -r /media/repos/nova/requirements.txt (line
34)) (from versions: 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.3.0)



It looks like you have to use the --pre option with pip install to get 
pre-release packages, but then why isn't that in every project's tox.ini 
that is using these alpha oslo packages?
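
For example, something like this gets the alpha packages installed 
locally (just a sketch -- whether --pre belongs in every tox.ini 
install_command is exactly the open question):

    # one-off, straight from PyPI
    pip install --pre 'oslo.config>=1.4.0.0a3'

    # or let tox allow pre-releases when it builds the venv
    [testenv]
    install_command = pip install --pre -U {opts} {packages}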


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 9:57 AM, Matt Riedemann wrote:



On 7/30/2014 9:20 AM, Matt Riedemann wrote:



On 7/30/2014 6:43 AM, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:

This change:

https://review.openstack.org/#/c/105501/

Tries to pull in libvirt-python >= 1.2.5 for testing.

I'm on Ubuntu Precise for development which has libvirt 0.9.8.

The latest libvirt-python appears to require libvirt >= 0.9.11.

So do I have to move to Trusty?


You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.

Regards,
Daniel



Can we be more specific because this would also need to be updated in
the devref docs for setting up your development environment with Ubuntu.

Sorry for being a newb but I went here:

https://wiki.ubuntu.com/ServerTeam/CloudArchive/

And tried doing:

sudo add-apt-repository cloud-archive:icehouse

Which failed, I guess it doesn't know what that means or something?

I added a repo to
http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/

manually but it's not finding any newer libvirt packages.

If I can get some help I can push a patch to update the docs since I'm
assuming I won't be the only one that hits this and it sounds like
minesweeper hit it recently too. [1]

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html



Yay for docs team, I was missing this:

apt-get install python-software-properties

Found it here:

http://docs.openstack.org/havana/install-guide/install/apt/content/basics-packages.html


The devref env setup doc in nova should still probably be updated to say
something like, 'hey if you're on juno using precise you need to enable
cloud archive to update libvirt'.



Hopefully this helps people:

https://review.openstack.org/#/c/110720/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-07-30 Thread Matt Riedemann



On 7/30/2014 11:49 AM, Joe Gordon wrote:




On Wed, Jul 30, 2014 at 6:43 AM, Daniel P. Berrange <berra...@redhat.com> wrote:

On Wed, Jul 30, 2014 at 06:39:56AM -0700, Matt Riedemann wrote:
 > This change:
 >
 > https://review.openstack.org/#/c/105501/
 >
 > Tries to pull in libvirt-python >= 1.2.5 for testing.
 >
 > I'm on Ubuntu Precise for development which has libvirt 0.9.8.
 >
 > The latest libvirt-python appears to require libvirt >= 0.9.11.
 >
 > So do I have to move to Trusty?

You can use the CloudArchive repository to get newer libvirt and
qemu packages for Precise, which is what anyone deploying the
Ubuntu provided OpenStack packages would be doing.


I am not a fan of this approach: the patch above, along with [0], broke
Minesweeper [1] and Matt, and I am worried that we will be breaking other
folks as well. I don't think we should force folks to upgrade to a newer
version of libvirt just to do some code cleanup. I think we should
revert these patches.

"Increase the min required libvirt version to 0.9.11 since


we require that for libvirt-python from PyPI to build
successfully. Kill off the legacy CPU model configuration
and legacy OpenVSwitch setup code paths only required by
libvirt < 0.9.11"


[0] https://review.openstack.org/#/c/58494/
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html


Regards,
Daniel
--
|: http://berrange.com  -o-
http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o- http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So https://review.openstack.org/#/c/58494/ is new to me as of today.

The 0.9.8 on ubuntu precise broke me (and our internal CI system which 
is running against precise images, but that's internal so meh).  The 
gate is running against ubuntu trusty and I have a way forward on 
getting updated libvirt in ubuntu precise (with updated docs on how 
others can as well), which is a short-term fix until I move my dev 
environment to ubuntu trusty.


My bigger concern here was how this impacts RHEL 6.5 which I'm running 
Juno on, but looks like that has libvirt 0.10.2 so I'm good.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][infra] Adding support for AMQP 1.0 Messaging to Oslo.Messaging and infra/config

2014-07-30 Thread Matt Riedemann



On 7/30/2014 11:59 AM, Ken Giusti wrote:

On Wed, 30 Jul 2014 14:25:46 +0100, Daniel P. Berrange wrote:

On Wed, Jul 30, 2014 at 08:54:01AM -0400, Ken Giusti wrote:

Greetings,

Apologies for the cross-post: this should be of interest to both infra
and olso.messaging developers.

The blueprint [0] that adds support for version 1.0 of the AMQP messaging
protocol is blocked due to CI test failures [1]. These failures are due
to a new package dependency this blueprint adds to oslo.messaging.

The AMQP 1.0 functionality is provided by the Apache Qpid's Proton
AMQP 1.0 toolkit.  The blueprint uses the Python bindings for this
toolkit, which are available on Pypi.  These bindings, however, include
a C extension that depends on the Proton toolkit development libraries
in order to build and install.  The lack of this toolkit is the cause
of the blueprint's current CI failures.

This toolkit is written in C, and thus requires platform-specific
libraries.

Now here's the problem: packages for Proton are not included by
default in most distro's base repositories (yet).  The Apache Qpid
team has provided packages for EPEL, and has a PPA available for
Ubuntu.  Packages for Debian are also being proposed.

I'm proposing this patch to openstack-infra/config to address the
dependency problem [2].  It adds the proton toolkit packages to the
common slave configuration.  Does this make sense?  Are there any
better alternatives?


For other cases where we need more native packages, we typically
use devstack to ensure they are installed. This is preferable
since it works for ordinary developers as well as the CI system.



Thanks Daniel.  It was my understanding - which may be wrong - that
having devstack install the 'out of band' packages would only help in
the case of the devstack-based integration tests, not in the case of
CI running the unit tests.  Is that indeed the case?

At this point, there are no integration tests that exercise the
driver.  However, the new unit tests include a test 'broker', which
allow the unit tests to fully exercise the new driver, right down to
the network.  That's a bonus of AMQP 1.0 - it can support peer-2-peer
messaging.

So it's the new unit tests that have the 'hard' requirement of the
proton libraries. And mocking out the proton libraries really
doesn't allow us to do any meaningful tests of the driver.

But if devstack is the preferred method for getting 'special case'
packages installed, would it be acceptable to have the new unit tests
run as a separate oslo.messaging integration test, and remove them
from the unit tests?

I'm open to any thoughts on how best to solve this, thanks.
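
For context, the devstack route would presumably boil down to something
like this in a devstack lib or plugin (a sketch only -- the proton
package names here are assumptions, not an actual proposed change):

    # install the platform packages before pip installs the Python
    # bindings that need them to build
    if is_fedora; then
        install_package qpid-proton-c-devel
    elif is_ubuntu; then
        install_package libqpid-proton2-dev
    fi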


Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



If your unit tests are dependent on a specific external library, aren't 
they no longer unit tests but integration tests anyway?


Just wondering, not trying to put up road-blocks because I'd like to see 
how this code performs but haven't had time yet to play with it.
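
One pattern for this kind of optional dependency is to guard the import
and skip the tests when the library isn't installed, rather than mocking
it out; a rough sketch (not from the actual patch):

    import unittest

    try:
        import proton  # Python bindings for the Proton toolkit
    except ImportError:
        proton = None


    @unittest.skipUnless(proton, "proton library is not installed")
    class TestAmqp10Driver(unittest.TestCase):
        def test_driver_loads(self):
            # real tests would exercise the driver against the
            # in-test 'broker' described above
            self.assertIsNotNone(proton)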


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Nominating Jay Pipes for nova-core

2014-07-30 Thread Matt Riedemann



On 7/30/2014 2:10 PM, Russell Bryant wrote:

On 07/30/2014 05:02 PM, Michael Still wrote:

Greetings,

I would like to nominate Jay Pipes for the nova-core team.

Jay has been involved with nova for a long time now.  He's previously
been a nova core, as well as a glance core (and PTL). He's been around
so long that there are probably other types of core status I have
missed.

Please respond with +1s or any concerns.


+1



+1

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Cinder tempest api volume tests failed

2014-08-01 Thread Matt Riedemann



On 8/1/2014 4:16 AM, Nikesh Kumar Mahalka wrote:

Hi Mike, the test which failed for me is:
tempest.api.volume.admin.test_volume_types.VolumeTypesTest

I am getting an error in the following call in that test:
  "self.volumes_client.wait_for_volume_status(volume['id'], 'available')".

That call is made in:
@test.attr(type='smoke')
def test_create_get_delete_volume_with_volume_type_and_extra_specs(self)


I looked in the c-sch log and found this major issue:
2014-08-01 14:08:05.773 11853 ERROR
cinder.scheduler.flows.create_volume
[req-ceafd00c-30b1-4846-a555-6116556efb3b
43af88811b2243238d3d9fc732731565 a39922e8e5284729b07fcd045cfd5a88 - - -]
Failed to run task
cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create:
No valid host was found. No weighed hosts available

Analyzing the test, I found:
1) it creates a volume type with extra_specs
2) it creates a volume with that volume type, and this is where it fails.


Below is my new local.conf file.
Am I missing anything in this?

[[local|localrc]]
ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR="CLIENT"
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]
[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip=192.168.2.192
san_login=some_name
san_password=some_password
client_iscsi_ips = 192.168.2.193


Below is my cinder.conf:
[keystone_authtoken]
auth_uri = http://192.168.2.64:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = some_password
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.64:35357

[DEFAULT]
rabbit_password = some_password
rabbit_hosts = 192.168.2.64
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
default_volume_type = client_driver
enabled_backends = client_driver
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection =
mysql://root:some_password@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.64
verbose = True
debug = True
auth_strategy = keystone

[client_driver]
client_iscsi_ips = 192.168.2.193
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver =
cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver



Regards
Nikesh









On Fri, Aug 1, 2014 at 1:56 AM, Mike Perez <thin...@gmail.com> wrote:

On 11:30 Thu 31 Jul , Nikesh Kumar Mahalka wrote:
 > I deployed a single node devstack on Ubuntu 14.04.
 > This devstack belongs to Juno.
 >
 > When i am running tempest api volume test, i am getting some
tests failed.

Hi Nikesh,

To further figure out what's wrong, take a look at the c-vol, c-api
and c-sch
tabs in the stack screen session. If you're unsure where to go from
there after
looking at the output, set the `SCREEN_LOGDIR` setting in your
local.conf [1]
and copy the logs from those tabs to paste.openstack.org for us to see.

[1] - http://devstack.org/configuration.html

--
Mike Perez

___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Can you please start tagging your threads on an out-of-tree cinder 
driver with [cinder] in the subject line so this gets filtered into the 
cinder channel at least.


Generally when people come to the openstack-dev list asking for help 
with a deployment they get sent to ask.openstack.org or the general 
openstack mailing list.


This sort of falls in between since it sounds like you're doing 
development on a new driver and trying to get tempest working, but if 
this is going to be an openstack-dev list discussion, please isolate it 
to [cinder], or go to the #openstack-cinder channel in

Re: [openstack-dev] how and which tempest tests to run

2014-08-01 Thread Matt Riedemann



On 8/1/2014 3:32 AM, Nikesh Kumar Mahalka wrote:

I deployed a single node devstack on Ubuntu 14.04.
This devstack belongs to Juno.
I have written a cinder-volume driver for my client backend.
I want to contribute this driver in Juno release.
As I understand the contribution process, it requires running tempest
tests for Continuous Integration.

Could anyone tell me how, and which, tempest tests to run on this
devstack deployment for a cinder volume driver?
Also, tempest has many test cases. Do I have to pass all of them to
contribute my driver?

Also, am I missing anything in the local.conf below?

Below are the steps for my devstack deployment:

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR="CLIENT"
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Please tag your cinder-specific driver test questions with [cinder] so 
these threads are filtered appropriately in people's mail clients.
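
That said, to get started you can usually just run the volume tests from
the tempest tree on your devstack node, something like (a sketch; exact
invocations vary with the tempest version):

    cd /opt/stack/tempest
    testr init  # first time only
    # volume API tests
    testr run tempest.api.volume
    # a volume-centric scenario test
    testr run tempest.scenario.test_volume_boot_pattern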


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Containers][Nova] Containers Team Mid-Cycle Meetup to join Nova Meetup

2014-08-05 Thread Matt Riedemann



On 7/16/2014 10:44 AM, Adrian Otto wrote:

Additional Update:

Two important additions:

1) No Formal Thursday Meetings.

We are eliminating our plans to meet formally on the 31st. You are
still welcome to meet informally. We want to keep these discussions
as productive as possible, and want to avoid attendee burnout. My
deepest apologies to those who have made travel plans around this.
See me if there are financial considerations to resolve.


2) Containers Team Registration

To better manage attendance expectations, register for the event
that you will attend as a primary. For those attending primarily for
Containers, register here:


https://www.eventbrite.com/e/openstack-containers-team-juno-mid-cycle-developer-meetup-tickets-12304951441


If you are registering for Nova, use this link:


https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

If you are already registered for the Nova Meetup, but will be
attending in the Containers Team Meetup as the primary, you can
return your tickets for Nova as long as you have a Containers Team
Meetup ticket. That will allow for a more accurate count, and make
sure that all the Nova devs who need to attend can.


Logistics details:

https://wiki.openstack.org/wiki/Sprints/BeavertonJunoSprint


Event Etherpad:

https://etherpad.openstack.org/p/juno-containers-sprint


Thanks,

Adrian


On Jul 11, 2014, at 3:31 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:


CORRECTION: This event happens *July* 28-31. Sorry for any confusion!
Corrected Announcement:

Containers Team,

We have decided to hold our Mid-Cycle meetup along with the Nova
Meetup in Beaverton, Oregon on *July* 28-31. The Nova Meetup is
scheduled for *July* 28-30.

https://www.eventbrite.com.au/e/openstack-nova-juno-mid-cycle-developer-meetup-tickets-11878128803

Those of us interested in Containers topic will use one of the
breakout rooms generously offered by Intel. We will also stay on
Thursday to focus on implementation plans and to engage with those
members of the Nova Team who will be otherwise occupied on *July*
28-30, and will have a chance to focus entirely on Containers on the 31st.

Please take a moment now to register using the link above, and I look
forward to seeing you there.

Thanks,

Adrian Otto





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Adrian,

Can you share a summary of notes that came out of the containers meetup, 
specifically related to the integration with nova, i.e. the slides you 
shared in one of the nova sessions?  Wondering what the plans/details 
are for Kilo.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm on precise?

2014-08-06 Thread Matt Riedemann



On 8/5/2014 12:39 PM, Solly Ross wrote:

Just to add my two cents, while I get that people need to run on older versions 
of software,
at a certain point you have to bump the minimum version.  Even libvirt 0.9.11 
is from April 3rd 2012.
That's two and a third years old at this point.  I think at a certain point we need 
to say "if you want
to run OpenStack on an older platform, then you'll need to run an older 
OpenStack or backport the required
packages."

Best Regards,
Solly Ross

- Original Message -

From: "Joe Gordon" 
To: "OpenStack Development Mailing List" 
Sent: Wednesday, July 30, 2014 7:07:13 PM
Subject: Re: [openstack-dev] [nova] so what do i do about libvirt-python if i'm 
on precise?




On Jul 30, 2014 3:36 PM, "Clark Boylan" < cboy...@sapwetik.org > wrote:


On Wed, Jul 30, 2014, at 03:23 PM, Jeremy Stanley wrote:

On 2014-07-30 13:21:10 -0700 (-0700), Joe Gordon wrote:

While forcing people to move to a newer version of libvirt is
doable on most environments, do we want to do that now? What is
the benefit of doing so?

[...]

The only dog I have in this fight is that using the split-out
libvirt-python on PyPI means we finally get to run Nova unit tests
in virtualenvs which aren't built with system-site-packages enabled.
It's been a long-running headache which I'd like to see eradicated
everywhere we can. I understand though if we have to go about it
more slowly, I'm just excited to see it finally within our grasp.
--
Jeremy Stanley


We aren't quite forcing people to move to newer versions. Only those
installing nova test-requirements need newer libvirt. This does not
include people using eg devstack. I think it is reasonable to expect
people testing tip of nova master to have a reasonably newish test bed
to test it (its not like the Infra team moves at a really fast pace :)
).


Based on
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041457.html
this patch is breaking people, which is the basis for my concerns. Perhaps
we should get some further details from Salvatore.



Avoiding system site packages in virtualenvs is a huge win particularly
for consistency of test results. It avoids pollution of site packages
that can happen differently across test machines. This particular type
of inconsistency has been the cause of the previously mentioned
headaches.
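
In tox terms that is the sitepackages knob; a sketch, assuming the usual
[testenv] layout:

    [testenv]
    # False means the venv no longer sees distro-installed packages,
    # e.g. the system libvirt-python bindings
    sitepackages = False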


I agree this is a huge win, but I am just concerned we don't have any
deprecation cycle and just roll out a new requirement without a heads up.



Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah, I agree, I'm just, you know, a curmudgeon.  I was doing a 
stable/havana backport though on my ubuntu precise + libvirt 1.2.2 from 
cloud-archive:icehouse and hit this bug:


https://bugs.launchpad.net/nova/+bug/1266711

I guess I should just get off my ass and setup a Trusty VM for Juno+ 
development and leave my Precise one alone for stable branch work.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] fair standards for all hypervisor drivers

2014-08-07 Thread Matt Riedemann



On 7/18/2014 2:55 AM, Daniel P. Berrange wrote:

On Thu, Jul 17, 2014 at 12:13:13PM -0700, Johannes Erdfelt wrote:

On Thu, Jul 17, 2014, Russell Bryant  wrote:

On 07/17/2014 02:31 PM, Johannes Erdfelt wrote:

It kind of helps. It's still implicit in that you need to look at what
features are enabled at what version and determine if it is being
tested.

But the behavior is still broken since code is still getting merged that
isn't tested. Saying that is by design doesn't help the fact that
potentially broken code exists.


Well, it may not be tested in our CI yet, but that doesn't mean it's not
tested some other way, at least.


I'm skeptical. Unless it's tested continuously, it'll likely break at
some time.

We seem to be selectively choosing the continuous part of CI. I'd
understand if it was reluctantly because of immediate problems but
this reads like it's acceptable long-term too.


I think there are some good ideas in other parts of this thread to look
at how we can more reguarly rev libvirt in the gate to mitigate this.

There's also been work going on to get Fedora enabled in the gate, which
is a distro that regularly carries a much more recent version of libvirt
(among other things), so that's another angle that may help.


That's an improvement, but I'm still not sure I understand what the
workflow will be for developers.


That's exactly why we want to have the CI system using newer libvirt
than it does today. The patch to cap the version doesn't change what
is tested - it just avoids users hitting untested paths by default
so they're not exposed to any potential instability until we actually
get a more updated CI system


Do they need to now wait for Fedora to ship a new version of libvirt?
Fedora is likely to help the problem because of how quickly it generally
ships new packages and their release schedule but it would still hold
back some features?


Fedora has an add-on repository ("virt-preview") which contains the
latest QEMU + libvirt RPMs for current stable release - this is lags
upstream by a matter of days, so there would be no appreciable delay
in getting access to newest possible releases.


Also, this explanation doesn't answer my question about what happens
when the gate finally gets around to actually testing those potentially
broken code paths.


I think we would just test out the bump and make sure it's working fine
before it's enabled for every job.  That would keep potential breakage
localized to people working on debugging/fixing it until it's ready to go.


The downside is that new features for libvirt could be held back by
needing to fix other unrelated features. This is certainly not a bigger
problem than users potentially running untested code simply because they
are on a newer version of libvirt.

I understand we have an immediate problem and I see the short-term value
in the libvirt version cap.

I try to look at the long-term and unless it's clear to me that a
solution is proposed to be short-term and there are some understood
trade-offs then I'll question the long-term implications of it.


Once CI system is regularly tracking upstream releases within a matter of
days, then the version cap is a total non-issue from a feature
availability POV. It is none the less useful in the long term, for example,
if there were a problem we miss in testing, which a deployer then hits in
the field, the version cap would allow them to get their deployment to
avoid use of the newer libvirt feature, which could be a useful workaround
for them until a fix is available.

Regards,
Daniel



FYI, there is a proposed revert of the libvirt version cap change 
mentioned previously in this thread [1].


Just bringing it up again here since the discussion should happen in the 
ML rather than gerrit.


[1] https://review.openstack.org/#/c/110754/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] oslo.config and import chains

2014-08-07 Thread Matt Riedemann



On 8/7/2014 10:27 AM, Kevin L. Mitchell wrote:

On Thu, 2014-08-07 at 12:15 +0100, Matthew Booth wrote:

A (the?) solution is to register_opts() in foo before importing any
modules which might also use oslo.config.


Actually, I disagree.  The real problem here is the definition of
bar_func().  The default value of the parameter "arg" will likely always
be the default value of foo_opt, rather than the configured value,
because "CONF.foo_opt" will be evaluated at module load time.  The way
bar_func() should be defined would be:

 def bar_func(arg=None):
 if not arg:
 arg = CONF.foo_opt
 …

That ensures that arg will be the configured value, and should also
solve the import conflict.



Surely you mean:

if arg is None:

right?! I'm pretty sure there is a hacking check for that now too...

:)
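
To spell out the pattern with the foo_opt/bar_func names from Matthew's
example, just as a sketch:

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('foo_opt', default='foo-default')])


    def bar_func(arg=None):
        # Read the option at call time rather than import time so the
        # configured value is used instead of the import-time default.
        if arg is None:
            arg = CONF.foo_opt
        return arg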

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][neutron] Need a quantumclient patch pushed through the gate

2013-10-01 Thread Matt Riedemann
This patch is a fix for a bug that's blocking all stable branch patches 
from getting through Jenkins:

https://review.openstack.org/#/c/49006/

Jenkins is failing on it because it runs devstack from master which uses 
python-neutronclient but this is a fix for the quantumclient branch (which 
stable branch patches are tested against).

From what I've seen on other patches like this (and what Monty was saying 
in IRC), the only way to get it in is to force merge it, so I'm requesting 
that someone from the infra team check that out please.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-02 Thread Matt Riedemann
I'm tracking that with this bug:

https://bugs.launchpad.net/openstack-ci/+bug/1234181 

There are a lot of sys.exit(1) calls in the neutron code on stable/grizzly 
(and in master too for that matter) so I'm wondering if something is 
puking but the error doesn't get logged before the process exits.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Alan Pevec 
To: Gary Kotton , 
Cc: OpenStack Development Mailing List 

Date:   10/02/2013 10:45 AM
Subject:Re: [openstack-dev] Gate issues - what you can do to help



Hi,

quantumclient is now fixed for stable/grizzly but there are issues
with check-tempest-devstack-vm-neutron job where devstack install is
dying in the middle of create_quantum_initial_network() without trace
e.g. 
http://logs.openstack.org/71/49371/1/check/check-tempest-devstack-vm-neutron/6da159d/console.html


Any ideas?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Tenant isolation gate failures?

2013-10-07 Thread Matt Riedemann
These tempest patches were directly related to tenant isolation also:

https://review.openstack.org/#/c/49431/ 

https://review.openstack.org/#/c/49447/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Gary Kotton 
To: OpenStack Development Mailing List 
, 
Date:   10/07/2013 05:18 AM
Subject:Re: [openstack-dev] [Neutron] Tenant isolation gate 
failures?



https://review.openstack.org/#/c/46900/


On 10/7/13 10:36 AM, "Maru Newby"  wrote:

>The tenant isolation gates that have been failing so frequently seem to
>be passing all of a sudden.  I didn't see any merges that claimed to fix
>the issue, so maybe this is just a lull due to a lower volume of gate
>jobs.  If it was intentional, though, I would appreciate knowing which
>patch or patches resolved the problem.
>
>Thanks in advance,
>
>
>Maru
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-07 Thread Matt Riedemann
Akihiro and Gary - thanks for working on this!  I've rechecked several 
stable/grizzly nova patches and everything is passing now.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Akihiro Motoki 
To: OpenStack Development Mailing List 
, 
Date:   10/06/2013 06:50 PM
Subject:Re: [openstack-dev] Gate issues - what you can do to help



 Hi all,

The blocking issue of Neutron in stable/grizzly gate has been fixed.
The workaround patch of neutronclient from Gary was merged about
a couple of hours ago and I confirmed stable/grizzly gate recovered.
You can see Jenkins check in some stable/grizzly patches become +1.

On Sun, Oct 6, 2013 at 6:18 PM, Akihiro Motoki  wrote:
> Regarding https://review.openstack.org/#/c/49942/ (against
> quantumclient branch),
> the gate for quantumclient branch of python-neutronclient seems broken.
> It seems the script expects master branch of python-neutronclient.
> I am not sure what is the right direction to propose patch to
> quantumclient branch.
>
> Gary's patch https://review.openstack.org/#/c/49943/ looks a short way
> to fix the gate issue
> since once the fix is merged the gate issue will be fixed.
> I am fine with the patch as a temporary solution.
>
> Thanks,
> Akihiro
>
>
> On Sun, Oct 6, 2013 at 5:51 PM, Akihiro Motoki  
wrote:
>> Hi Gary,
>>
>> Almost the same I have posted another way of the fix.
>> https://review.openstack.org/#/c/49942/
>>
>> Both try to fix the same issue.
>> Gary's one changes neutronclient itself and mine chnages quantumclient 
proxy.
>> I am not sure which is the direction, but at least one of them should
>> be merged ASAP
>> to fix the stable/grizzly blocking failure.
>>
>> Thanks,
>>
>>
>> On Sun, Oct 6, 2013 at 5:45 PM, Gary Kotton  wrote:
>>> Hi,
>>> Can some Neutron cores please look at
>>> https://review.openstack.org/#/c/49943/. I have tested this locally 
and it
>>> addresses the issues that I have encountered.
>>> Thanks
>>> Gary
>>>
>>>>On Fri, Oct 4, 2013 at 2:06 AM, Akihiro Motoki  
wrote:
>>>>> Hi,
>>>>>
>>>>> I would like to share what Gary and I investigated, while it is not
>>>>> addressed yet.
>>>>>
>>>>> The cause is the failure of quantum-debug command in 
setup_quantum_debug
>>>>>
>>>>>(
https://github.com/openstack-dev/devstack/blob/stable/grizzly/stack.sh#L
>>>>>996).
>>>>> We can reproduce the issue in local environment by setting
>>>>> Q_USE_DEBUG_COMMAND=True in localrc.
>>>>>
>>>>> Mark proposed a patch https://review.openstack.org/#/c/49584/ but it
>>>>> does not address the issue.
>>>>> We need another way to proxy quantumclient to neutronclient.
>>>>>
>>>>> Note that there is a case devstack log in the gate does not contain
>>>>> the end of the console logs.
>>>>> In
>>>>>
http://logs.openstack.org/39/48539/5/check/check-tempest-devstack-vm-neut
>>>>>ron/b9e6559/,
>>>>> the last command logged is "quantum subnet-create", but actually
>>>>> quantum-debug command was executed
>>>>> and it failed.
>>>>>
>>>>> Thanks,
>>>>> Akihiro
>>>>>
>>>>> On Fri, Oct 4, 2013 at 12:05 AM, Alan Pevec  
wrote:
>>>>>>> The problems occur when the when the the following line is 
invoked:
>>>>>>>
>>>>>>>
>>>>>>>
https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quant
>>>>>>>um#L302
>>>>>>
>>>>>> But that line is reached only in case baremetal is enabled which 
isn't
>>>>>> the case in gate, is it?
>>>>>>
>>>>>> Cheers,
>>>>>> Alan
>>>>>>
>>>>>> ___
>>>>>> OpenStack-dev mailing list
>>>>>> OpenStack-dev@lists.openstack.org
>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Akihiro MOTOKI 
>>>>
>>>>
>>>>
>>>>--
>>>>Akihiro MOTOKI 
>>>
>>
>>
>>
>> --
>> Akihiro MOTOKI 
>
>
>
> --
> Akihiro MOTOKI 



-- 
Akihiro MOTOKI 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-10 Thread Matt Riedemann
Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't 
implement that API.  I started looking at other drivers that did and found 
that libvirt, vmware and xenapi at least had code for the get_diagnostics 
method.  I found that the vmware driver was re-using it's get_info method 
for get_diagnostics which led to bug 1237622 [2] but overall caused some 
confusion about the difference between the compute driver's get_info and 
get_diagnostics mehods.  It looks like get_info is mainly just used to get 
the power_state of the instance.

First, the get_info method has a nice docstring for what it needs returned 
[3] but the get_diagnostics method doesn't [4].  From looking at the API 
docs [5], the diagnostics API basically gives an example of values to get 
back which is completely based on what the libvirt driver returns. Looking 
at the xenapi driver code, it looks like it does things a bit differently 
than the libvirt driver (maybe doesn't return the exact same keys, but it 
returns information based on what Xen provides).

I'm thinking about implementing the diagnostics API for the powervm driver 
but I'd like to try and get some help on defining just what should be 
returned from that call.  There are some IVM commands available to the 
powervm driver for getting hardware resource information about an LPAR so 
I think I could implement this pretty easily.

I think it basically comes down to providing information about the 
processor, memory, storage and network interfaces for the instance but if 
anyone has more background information on that API I'd like to hear it.

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d
 

[2] https://bugs.launchpad.net/nova/+bug/1237622 
[3] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144 

[4] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L299 

[5] http://paste.openstack.org/show/48236/ 
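
For what it's worth, the libvirt output in [5] is basically a flat dict,
so I'd imagine a powervm implementation returning something in the same
spirit; purely illustrative (not real powervm code, key names borrowed
from the libvirt example):

    class PowerVMDriverSketch(object):
        """Illustrative only -- not the actual powervm driver."""

        def get_diagnostics(self, instance):
            # hypothetical: real values would come from IVM commands
            # for the LPAR backing this instance
            return {
                'cpu0_time': 17300000000,
                'memory': 524288,
                'vda_read_req': 169,
                'vda_read': 262144,
                'vnet0_rx': 2070139,
                'vnet0_tx': 140208,
            }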



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-10 Thread Matt Riedemann
Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html 

https://lists.launchpad.net/openstack/msg08555.html 

But they seem to kind of end up in the same place I already am - it seems 
to be an open-ended API that is hypervisor-specific.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM
To: "OpenStack Development Mailing List" 
, 
Date:   10/10/2013 02:12 PM
Subject:[nova] Looking for clarification on the diagnostics API


Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't 
implement that API.  I started looking at other drivers that did and found 
that libvirt, vmware and xenapi at least had code for the get_diagnostics 
method.  I found that the vmware driver was re-using it's get_info method 
for get_diagnostics which led to bug 1237622 [2] but overall caused some 
confusion about the difference between the compute driver's get_info and 
get_diagnostics mehods.  It looks like get_info is mainly just used to get 
the power_state of the instance.

First, the get_info method has a nice docstring for what it needs returned 
[3] but the get_diagnostics method doesn't [4].  From looking at the API 
docs [5], the diagnostics API basically gives an example of values to get 
back which is completely based on what the libvirt driver returns. Looking 
at the xenapi driver code, it looks like it does things a bit differently 
than the libvirt driver (maybe doesn't return the exact same keys, but it 
returns information based on what Xen provides).

I'm thinking about implementing the diagnostics API for the powervm driver 
but I'd like to try and get some help on defining just what should be 
returned from that call.  There are some IVM commands available to the 
powervm driver for getting hardware resource information about an LPAR so 
I think I could implement this pretty easily.

I think it basically comes down to providing information about the 
processor, memory, storage and network interfaces for the instance but if 
anyone has more background information on that API I'd like to hear it.

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d
 

[2] https://bugs.launchpad.net/nova/+bug/1237622 
[3] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144 

[4] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L299 

[5] http://paste.openstack.org/show/48236/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Matt Riedemann
Based on the discussion with Russell and Dan Smith in the nova meeting 
today, here are some of my notes from the meeting that can continue the 
discussion.  These are all pretty rough at the moment so please bear with 
me, this is more to just get the ball rolling on ideas.

Notes on powervm CI:

1. What OS to run on?  Fedora 19, RHEL 6.4?
- Either of those is probably fine, we use RHEL 6.4 right now 
internally.
2. Deployment - RDO? SmokeStack? Devstack?
- SmokeStack is preferable since it packages rpms which is what 
we're using internally.
3. Backing database - mysql or DB2 10.5?
- Prefer DB2 since that's what we want to support in Icehouse and 
it's what we use internally, but there are differences in how long it 
takes to create a database with DB2 versus MySQL so when you multiply that 
times 7 databases (keystone, cinder, glance, nova, heat, neutron, 
ceilometer) it's going to add up unless we can figure out a better way to 
do it (single database with multiple schemas?).  Internally we use a 
pre-created image with the DB2 databases already created, we just run the 
migrate scripts against them so we don't have to wait for the create times 
every run - would that fly in community?
4. What is the max amount of time for us to report test results?  Dan 
didn't seem to think 48 hours would fly. :)
5. What are the minimum tests that need to run (excluding APIs that the 
powervm driver doesn't currently support)?
- smoke/gate/negative/whitebox/scenario/cli?  Right now we have 
1152 tempest tests running, those are only within api/scenario/cli and we 
don't run everything.
6. Network service? We're running with openvswitch 1.10 today so we 
probably want to continue with that if possible.
7. Cinder backend? We're running with the storwize driver but what do we do 
about the remote v7000?

Again, just getting some thoughts out there to help us figure out our 
goals for this, especially around 4 and 5.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-10 Thread Matt Riedemann
Getting integration testing hooked up for the hyper-v driver with tempest 
should go a long way here which is a good reason to have it.  As has been 
mentioned, there is a core team of people that understand the internals of 
the hyper-v driver and the subtleties of when it won't work, and only 
those with a vested interest in using it will really care about it.

My team has the same issue with the powervm driver.  We don't have 
community integration testing hooked up yet.  We run tempest against it 
internally so we know what works and what doesn't, but besides standard 
code review practices that apply throughout everything (strong unit test 
coverage, consistency with other projects, hacking rules, etc), any other 
reviewer has to generally take it on faith that what's in there works as 
it's supposed to.  Sure, there is documentation available on what the 
native commands do and anyone can dig into those to figure it out, but I 
wouldn't expect that low-level of review from anyone that doesn't 
regularly work on the powervm driver.  I think the same is true for 
anything here.  So the equalizer is a rigorously tested and broad set of 
integration tests, which is where we all need to get to with tempest and 
continuous integration.

We've had the same issues as mentioned in the original note about things 
slipping out of releases or taking a long time to get reviewed, and we've 
had to fork code internally because of it which we then have to continue 
to try and get merged upstream - and it's painful, but it is what it is, 
that's the nature of the business.

Personally my experience has been that the more I give the more I get. The 
more I'm involved in what others are doing and the more I review other's 
code, the more I can build a relationship which is mutually beneficial. 
Sometimes I can only say 'hey, you need unit tests for this or this 
doesn't seem right but I'm not sure', but unless you completely automate 
code coverage metrics and build that back into reviews, e.g. does your 
1000 line blueprint have 95% code coverage in the tests, you still need 
human reviewers on everything, regardless of context.  Even then it's not 
going to be enough, there will always be a need for people with a broader 
vision of the project as a whole that can point out where things are going 
in the wrong direction even if it fixes a bug.

The point is I see both sides of the argument, I'm sure many people do. In 
a large complicated project like this it's inevitable.  But I think the 
quality and adoption of OpenStack speaks for itself and I believe a key 
component of that is the review system and that's only as good as the 
people which are going to uphold the standards across the project.  I've 
been on enough development projects that give plenty of lip service to 
code quality and review standards which are always the first thing to go 
when a deadline looms, and those projects are always ultimately failures.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Tim Smith 
To: OpenStack Development Mailing List 
, 
Date:   10/10/2013 07:48 PM
Subject:Re: [openstack-dev] [Hyper-V] Havana status



On Thu, Oct 10, 2013 at 1:50 PM, Russell Bryant  
wrote:
 
Please understand that I only want to help here.  Perhaps a good way for
you to get more review attention is get more karma in the dev community
by helping review other patches.  It looks like you don't really review
anything outside of your own stuff, or patches that touch hyper-v.  In
the absence of significant interest in hyper-v from others, the only way
to get more attention is by increasing your karma.

NB: I don't have any vested interest in this discussion except that I want 
to make sure OpenStack stays "Open", i.e. inclusive. I believe the concept 
of "reviewer karma", while seemingly sensible, is actually subtly counter 
to the goals of openness, innovation, and vendor neutrality, and would 
also lead to overall lower commit quality.

Brian Kernighan famously wrote: "Debugging is twice as hard as writing the 
code in the first place." A corollary is that constructing a mental model 
of code is hard; perhaps harder than writing the code in the first place. 
It follows that reviewing code is not an easy task, especially if one has 
not been intimately involved in the original development of the code under 
review. In fact, if a reviewer is not intimately familiar with the code 
under review, and therefore only able to perform the functions of human 
compiler and style-checker (functions which can be and typically are 
performed by automatic tools), the rigor of their review is at best 
less-than-ideal, a

Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-10 Thread Matt Riedemann
Dan Smith  wrote on 10/10/2013 08:26:14 PM:

> From: Dan Smith 
> To: OpenStack Development Mailing List 
, 
> Date: 10/10/2013 08:31 PM
> Subject: Re: [openstack-dev] [nova][powervm] my notes from the 
> meeting on powervm CI
> 
> > 4. What is the max amount of time for us to report test results?  Dan
> > didn't seem to think 48 hours would fly. :)
> 
> Honestly, I think that 12 hours during peak times is the upper limit of
> what could be considered useful. If it's longer than that, many patches
> could go into the tree without a vote, which defeats the point.

Yeah, I was just joking about the 48 hour thing, 12 hours seems excessive
but I guess that has happened when things are super backed up with gate
issues and rechecks.

Right now things take about 4 hours, with Tempest being around 1.5 hours
of that. The rest of the time is setup and install, which includes heat
and ceilometer. So I guess that raises another question, if we're really
setting this up right now because of nova, do we need to have heat and
ceilometer installed and configured in the initial delivery of this if
we're not going to run tempest tests against them (we don't right now)?

I think some aspect of the slow setup time is related to DB2 and how
the migrations perform with some of that, but the overall time is not
considerably different from when we were running this with MySQL so
I'm reluctant to blame it all on DB2.  I think some of our topology
could have something to do with it too since the IVM hypervisor is running
on a separate system and we are gated on how it's performing at any
given time.  I think that will be our biggest challenge for the scale
issues with community CI.

> 
> > 5. What are the minimum tests that need to run (excluding APIs that 
the
> > powervm driver doesn't currently support)?
> > - smoke/gate/negative/whitebox/scenario/cli?  Right now we 
have
> > 1152 tempest tests running, those are only within api/scenario/cli and
> > we don't run everything.
> 
> I think that "a full run of tempest" should be required. That said, if
> there are things that the driver legitimately doesn't support, it makes
> sense to exclude those from the tempest run, otherwise it's not useful.
> 
> I think you should publish the tempest config (or config script, or
> patch, or whatever) that you're using so that we can see what it means
> in terms of the coverage you're providing.

Just to clarify, do you mean publish what we are using now or publish
once it's all working?  I can certainly attach our nose.cfg and
latest x-unit results xml file.

> 
> > 6. Network service? We're running with openvswitch 1.10 today so we
> > probably want to continue with that if possible.
> 
> Hmm, so that means neutron? AFAIK, not much of tempest runs with
> Nova/Neutron.
> 
> I kinda think that since nova-network is our default right now (for
> better or worse) that the run should include that mode, especially if
> using neutron excludes a large portion of the tests.
> 
> I think you said you're actually running a bunch of tempest right now,
> which conflicts with my understanding of neutron workiness. Can you 
clarify?

Correct, we're running with neutron using the ovs plugin. We basically 
have
the same issues that the neutron gate jobs have, which are related to 
concurrency
issues and tenant isolation (we're doing the same as devstack with neutron
in that we don't run tempest with tenant isolation).  We are running most
of the nova and most of the neutron API tests though (we don't have all
of the neutron-dependent scenario tests working though, probably more due
to incompetence in setting up neutron than anything else).

> 
> > 7. Cinder backend? We're running with the storwize driver but we do we
> > do about the remote v7000?
> 
> Is there any reason not to just run with a local LVM setup like we do in
> the real gate? I mean, additional coverage for the v7000 driver is
> great, but if it breaks and causes you to not have any coverage at all,
> that seems, like, bad to me :)

Yeah, I think we'd just run with a local LVM setup, that's what we do for
x86_64 and s390x tempest runs. For whatever reason we thought we'd do
storwize for our ppc64 runs, probably just to have a matrix of coverage.

> 
> > Again, just getting some thoughts out there to help us figure out our
> > goals for this, especially around 4 and 5.
> 
> Yeah, thanks for starting this discussion!
> 
> --Dan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] dd performance for wipe in cinder

2013-10-11 Thread Matt Riedemann
Have you looked at the volume_clear and volume_clear_size options in 
cinder.conf?

https://github.com/openstack/cinder/blob/2013.2.rc1/etc/cinder/cinder.conf.sample#L1073
 


The default is to zero out the volume.  You could try 'none' to see if 
that helps with performance.
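
For example, something like this in cinder.conf should skip the dd wipe
entirely (values here are just illustrative; see the sample config above for
the exact semantics):

    [DEFAULT]
    # how old volumes are wiped on delete: 'zero', 'shred' or 'none'
    volume_clear = none
    # size in MiB to wipe at the start of the volume, 0 means the whole volume
    volume_clear_size = 0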



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   cosmos cosmos 
To: openstack-dev@lists.openstack.org, 
Date:   10/11/2013 04:26 AM
Subject:[openstack-dev]  dd performance for wipe in cinder



Hello.
My name is Rucia for Samsung SDS.

Now I am in trouble in cinder volume deleting.
I am developing for supporting big data storage in lvm 

But it takes too much time for deleting of cinder lvm volume because of 
dd.
Cinder volume is 200GB for supporting hadoop master data.
When i delete cinder volume in using 'dd if=/dev/zero of $cinder-volume 
count=100 bs=1M' it takes about 30 minutes.

Is there the better and quickly way for deleting?

Cheers. 
Rucia.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-11 Thread Matt Riedemann
I'd like to see the powervm driver fall into that first category.  We 
don't nearly have the rapid development that the hyper-v driver does, but 
we do have some out of tree stuff anyway simply because it hasn't landed 
upstream yet (DB2, config drive support for the powervm driver, etc), and 
maintaining that out of tree code is not fun.  So I definitely don't want 
to move out of tree.

Given that, I think at least I'm trying to contribute overall [1][2] by 
doing reviews outside my comfort zone, bug triage, fixing bugs when I can, 
and because we run tempest in house (with neutron-openvswitch) we find 
issues there that I get to push patches for.

Having said all that, it's moot for the powervm driver if we don't get the 
CI hooked up in Icehouse and I completely understand that so it's a top 
priority.


[1] 
http://stackalytics.com/?release=havana&metric=commits&project_type=openstack&module=&company=&user_id=mriedem
 

[2] 
https://review.openstack.org/#/q/reviewer:6873+project:openstack/nova,n,z 


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant 
To: openstack-dev@lists.openstack.org, 
Date:   10/11/2013 11:33 AM
Subject:Re: [openstack-dev] [Hyper-V] Havana status



On 10/11/2013 12:04 PM, John Griffith wrote:
> 
> 
> 
> On Fri, Oct 11, 2013 at 9:12 AM, Bob Ball <bob.b...@citrix.com> wrote:
> 
> > -Original Message-
> > From: Russell Bryant [mailto:rbry...@redhat.com]
> > Sent: 11 October 2013 15:18
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Hyper-V] Havana status
> >
> > > As a practical example for Nova: in our case that would simply
> include the
> > following subtrees: "nova/virt/hyperv" and 
"nova/tests/virt/hyperv".
> >
> > If maintainers of a particular driver would prefer this sort of
> > autonomy, I'd rather look at creating new repositories.  I'm
> completely
> > open to going that route on a per-driver basis.  Thoughts?
> 
> I think that all drivers that are officially supported must be
> treated in the same way.
> 
> If we are going to split out drivers into a separate but still
> official repository then we should do so for all drivers.  This
> would allow Nova core developers to focus on the architectural side
> rather than how each individual driver implements the API that is
> presented.
> 
> Of course, with the current system it is much easier for a Nova core
> to identify and request a refactor or generalisation of code written
> in one or multiple drivers so they work for all of the drivers -
> we've had a few of those with XenAPI where code we have written has
> been pushed up into Nova core rather than the XenAPI tree.
> 
> Perhaps one approach would be to re-use the incubation approach we
> have; if drivers want to have the fast-development cycles uncoupled
> from core reviewers then they can be moved into an incubation
> project.  When there is a suitable level of integration (and
> automated testing to maintain it of course) then they can graduate.
>  I imagine at that point there will be more development of new
> features which affect Nova in general (to expose each hypervisor's
> strengths), so there would be fewer cases of them being restricted
> just to the virt/* tree.
> 
> Bob
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> I've thought about this in the past, but always come back to a couple of
> things.
> 
> Being a community driven project, if a vendor doesn't want to
> participate in the project then why even pretend (ie having their own
> project/repo, reviewers etc).  Just post your code up in your own github
> and let people that want to use it pull it down.  If it's a vendor
> project, then that's fine; have it be a vendor project.
> 
> In my opinion pulling out and leaving things up to the vendors as is
> being described has significant negative impacts.  Not the least of
> which is consistency in behaviors.  On the Cinder side, the core team
> spends the bulk of their review tim

Re: [openstack-dev] [nova] Looking for clarification on the diagnostics API

2013-10-12 Thread Matt Riedemann
There is also a tempest patch now to ease some of the libvirt-specific 
keys checked in the new diagnostics tests there:

https://review.openstack.org/#/c/51412/ 

To relay some of my concerns that I put in that patch:

I'm not sure how I feel about this. It should probably be more generic but 
I think we need more than just a change in tempest to enforce it, i.e. we 
should have a nova patch that changes the doc strings for the abstract 
compute driver method to specify what the minimum keys are for the info 
returned, maybe a doc api sample change, etc?

For reference, here is the mailing list post I started on this last week:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016385.html

There are also docs here (these examples use xen and libvirt):

http://docs.openstack.org/grizzly/openstack-compute/admin/content/configuring-openstack-compute-basics.html

And under procedure 4.4 here:

http://docs.openstack.org/admin-guide-cloud/content/ch_introduction-to-openstack-compute.html#section_manage-the-cloud


=

I also found this wiki page related to metering and the nova diagnostics 
API:

https://wiki.openstack.org/wiki/EfficientMetering/FutureNovaInteractionModel 


So it seems like if at some point this will be used with ceilometer, it 
should be standardized a bit, which is what the Tempest part starts to do, 
but I don't want it to get lost there.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Gary Kotton 
To: OpenStack Development Mailing List 
, 
Date:   10/12/2013 01:42 PM
Subject:Re: [openstack-dev] [nova] Looking for clarification on 
the diagnostics API



Yup, it seems to be hypervisor specific. I have added in the VMware 
support following your correction in the VMware driver.
Thanks
Gary 

From: Matt Riedemann 
Reply-To: OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>
Date: Thursday, October 10, 2013 10:17 PM
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [nova] Looking for clarification on the 
diagnostics API

Looks like this has been brought up a couple of times:

https://lists.launchpad.net/openstack/msg09138.html

https://lists.launchpad.net/openstack/msg08555.html

But they seem to kind of end up in the same place I already am - it seems 
to be an open-ended API that is hypervisor-specific.



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:    Matt Riedemann/Rochester/IBM
To:"OpenStack Development Mailing List" <
openstack-dev@lists.openstack.org>, 
Date:10/10/2013 02:12 PM
Subject:[nova] Looking for clarification on the diagnostics API


Tempest recently got some new tests for the nova diagnostics API [1] which 
failed when I was running against the powervm driver since it doesn't 
implement that API.  I started looking at other drivers that did and found 
that libvirt, vmware and xenapi at least had code for the get_diagnostics 
method.  I found that the vmware driver was re-using it's get_info method 
for get_diagnostics which led to bug 1237622 [2] but overall caused some 
confusion about the difference between the compute driver's get_info and 
get_diagnostics mehods.  It looks like get_info is mainly just used to get 
the power_state of the instance.

First, the get_info method has a nice docstring for what it needs returned 
[3] but the get_diagnostics method doesn't [4].  From looking at the API 
docs [5], the diagnostics API basically gives an example of values to get 
back which is completely based on what the libvirt driver returns. Looking 
at the xenapi driver code, it looks like it does things a bit differently 
than the libvirt driver (maybe doesn't return the exact same keys, but it 
returns information based on what Xen provides). 

I'm thinking about implementing the diagnostics API for the powervm driver 
but I'd like to try and get some help on defining just what should be 
returned from that call.  There are some IVM commands available to the 
powervm driver for getting hardware resource information about an LPAR so 
I think I could implement this pretty easily.

I think it basically comes down to providing information about the 
processor, memory, storage and network interfaces for the instance but if 
anyone has more background information on that API I'd like to hear it.
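
For the sake of discussion, here is a purely illustrative sketch of the kind
of flat key/value dict the diagnostics API returns today (the key names below
just mimic the libvirt-style example in the API docs, which is exactly the
standardization problem; a powervm implementation would fill these in from
the IVM commands):

    # Illustrative only - not the actual powervm driver code.
    def get_diagnostics_sketch():
        return {
            'cpu0_time': 17300000000,   # processor time
            'memory': 524288,           # configured memory (kB)
            'memory_used': 131072,      # memory in use (kB)
            'vda_read': 262144,         # storage reads
            'vda_write': 5778432,       # storage writes
            'vnet0_rx': 2070139,        # network bytes received
            'vnet0_tx': 140208,         # network bytes transmitted
        }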

[1] 
https://github.com/openstack/tempest/commit/da0708587432e47f85241201968e6402190f0c5d

[2] https://bugs.launchpad.net/nova/+bug/1237622
[3] 
https://github.com/openstack/nova/blob/2013.2.rc1/nova/virt/driver.py#L144
[4] 
https://git

Re: [openstack-dev] [Nova] UpgradeImpact commit message tag

2013-10-15 Thread Matt Riedemann
Good idea, I'd only ask that you post back here when you have something up 
in the wiki so we can remember to review it.  This is more for me so I 
have this in mind when doing reviews, i.e. does the change fall into this 
category without tagging it.  Thinking maybe DB migration issues, 
backwards-incompatible API changes (which is I think what started this 
discussion), etc.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant 
To: OpenStack Development Mailing List 
, 
Cc: Dan Smith 
Date:   10/14/2013 06:38 PM
Subject:[openstack-dev] [Nova] UpgradeImpact commit message tag



I was talking to Dan Smith today about a patch series I was starting to
work on.  These changes affect people doing continuous deployment, so we
came up with the idea of tagging the commits with "UpgradeImpact",
similar to how we use DocImpact for changes that affect docs.

This seems like a good convention to start using for all changes that
affect upgrades in some way.

Any comments/suggestions/objections?  If not, I'll get this documented on:

https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references


-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Hyper-V] Havana status

2013-10-15 Thread Matt Riedemann
Sorry to pile on, but:

"this was not particularly the case for what our driver is concerned. As I 
already wrote, almost all the reviews so far have been related to unit 
tests or minor formal corrections."

As was pointed out by me in patch set 1 here: 
https://review.openstack.org/#/c/43592/ 

There was no unit test coverage for an entire module 
(nova.virt.hyperv.volumeops) before that patch.

So while I agree that driver maintainers know their code best and how 
it all works in the dirty details, they are also going to be the 
ones to cut corners to get things fixed, which usually shows up in a lack 
of test coverage - and that's a good reason to have external reviewers on 
everything, to keep us all honest.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Alessandro Pilotti 
To: OpenStack Development Mailing List 
, 
Date:   10/15/2013 10:39 AM
Subject:Re: [openstack-dev] [Hyper-V] Havana status




On Oct 15, 2013, at 18:14 , Duncan Thomas  wrote:

On 11 October 2013 15:41, Alessandro Pilotti
 wrote:
Current reviews require:

+1 "de facto" driver X mantainer(s)
+2  core reviewer
+2A  core reviewer

While with the proposed scenario we'd get to a way faster route:

+2  driver X mantainer
+2A another driver X mantainer or a core reviewer

This would make a big difference in terms of review time.

Unfortunately I suspect it would also lead to a big difference in
review quality, and not in a positive way. The things that are
important / obvious to somebody who focuses on one driver are totally
different, and often far more limited, than the concerns of somebody
who reviews many drivers and core code changes.

Although the eyes of somebody who comes from a different domain usually 
bring additional points of view and benefits, this was not particularly 
the case for what our driver is concerned. As I already wrote, almost all 
the reviews so far have been related to unit tests or minor formal 
corrections. 

I disagree on the "far more limited": driver devs (at least in our case) 
have to work on a wider range of projects besides Nova (e.g. Neutron, 
Cinder, Ceilometer and, outside OpenStack proper, OpenVSwitch and Crowbar, 
to name the most relevant cases). 





-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-18 Thread Matt Riedemann
I just opened this bug, it's going to be one of the blockers for us to get 
PowerVM CI going in Icehouse:

https://bugs.launchpad.net/nova/+bug/1241619 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM@IBMUS
To: OpenStack Development Mailing List 
, 
Date:   10/11/2013 10:59 AM
Subject:Re: [openstack-dev] [nova][powervm] my notes from the 
meeting on  powervm CI







Matthew Treinish  wrote on 10/10/2013 10:31:29 PM:

> From: Matthew Treinish  
> To: OpenStack Development Mailing List 
, 
> Date: 10/10/2013 11:07 PM 
> Subject: Re: [openstack-dev] [nova][powervm] my notes from the 
> meeting on powervm CI 
> 
> On Thu, Oct 10, 2013 at 07:39:37PM -0700, Joe Gordon wrote:
> > On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann  
wrote:
> > > >
> > > > > 4. What is the max amount of time for us to report test results? 
 Dan
> > > > > didn't seem to think 48 hours would fly. :)
> > > >
> > > > Honestly, I think that 12 hours during peak times is the upper 
limit of
> > > > what could be considered useful. If it's longer than that, many 
patches
> > > > could go into the tree without a vote, which defeats the point.
> > >
> > > Yeah, I was just joking about the 48 hour thing, 12 hours seems 
excessive
> > > but I guess that has happened when things are super backed up with 
gate
> > > issues and rechecks.
> > >
> > > Right now things take about 4 hours, with Tempest being around 1.5 
hours
> > > of that. The rest of the time is setup and install, which includes 
heat
> > > and ceilometer. So I guess that raises another question, if we're 
really
> > > setting this up right now because of nova, do we need to have heat 
and
> > > ceilometer installed and configured in the initial delivery of this 
if
> > > we're not going to run tempest tests against them (we don't right 
now)?
> > >
> > 
> > 
> > In general the faster the better, and if things get to slow enough 
that we
> > have to wait for powervm CI to report back, I
> > think its reasonable to go ahead and approve things without hearing 
back.
> >  In reality if you can report back in under 12 hours this will rarely
> > happen (I think).
> > 
> > 
> > >
> > > I think some aspect of the slow setup time is related to DB2 and how
> > > the migrations perform with some of that, but the overall time is 
not
> > > considerably different from when we were running this with MySQL so
> > > I'm reluctant to blame it all on DB2.  I think some of our topology
> > > could have something to do with it too since the IVM hypervisor is 
running
> > > on a separate system and we are gated on how it's performing at any
> > > given time.  I think that will be our biggest challenge for the 
scale
> > > issues with community CI.
> > >
> > > >
> > > > > 5. What are the minimum tests that need to run (excluding 
> APIs that the
> > > > > powervm driver doesn't currently support)?
> > > > > - smoke/gate/negative/whitebox/scenario/cli?  Right 
> now we have
> > > > > 1152 tempest tests running, those are only within 
api/scenario/cli and
> > > > > we don't run everything.
> 
> Well that's almost a full run right now, the full tempest jobs have 1290 
tests
> of which we skip 65 because of bugs or configuration. (don't run neutron 
api
> tests without neutron) That number is actually pretty high since you are
> running with neutron. Right now the neutron gating jobs only have 221 
jobs and
> skip 8 of those. Can you share the list of things you've got working 
with
> neutron so we can up the number of gating tests? 

Here is the nose.cfg we run with: 



Some of the tests are excluded because of performance issues that still 
need to 
be worked out (like test_list_image_filters - it works but it takes over 
20 
minutes sometimes). 

Some of the tests are excluded because of limitations with DB2, e.g. 
test_list_servers_filtered_by_name_wildcard 

Some of them are probably old excludes on bugs that are now fixed. We have 
to 
go back through what's excluded every once in awhile to figure out what's 
still broken and clean things up. 

Here is the tempest.cfg we use on ppc64: 



And here are the xunit results from our latest run: 



Note that we have known issues with some cinder and neutron failure

Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-18 Thread Matt Riedemann
And this guy: https://bugs.launchpad.net/nova/+bug/1241628 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM@IBMUS
To: OpenStack Development Mailing List 
, 
Date:   10/18/2013 09:25 AM
Subject:Re: [openstack-dev] [nova][powervm] my notes from the 
meeting on  powervm CI



I just opened this bug, it's going to be one of the blockers for us to get 
PowerVM CI going in Icehouse: 

https://bugs.launchpad.net/nova/+bug/1241619 



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:    Matt Riedemann/Rochester/IBM@IBMUS 
To:OpenStack Development Mailing List 
, 
Date:10/11/2013 10:59 AM 
Subject:Re: [openstack-dev] [nova][powervm] my notes from the 
meeting onpowervm CI 







Matthew Treinish  wrote on 10/10/2013 10:31:29 PM:

> From: Matthew Treinish  
> To: OpenStack Development Mailing List 
, 
> Date: 10/10/2013 11:07 PM 
> Subject: Re: [openstack-dev] [nova][powervm] my notes from the 
> meeting on powervm CI 
> 
> On Thu, Oct 10, 2013 at 07:39:37PM -0700, Joe Gordon wrote:
> > On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann  
wrote:
> > > >
> > > > > 4. What is the max amount of time for us to report test results? 
 Dan
> > > > > didn't seem to think 48 hours would fly. :)
> > > >
> > > > Honestly, I think that 12 hours during peak times is the upper 
limit of
> > > > what could be considered useful. If it's longer than that, many 
patches
> > > > could go into the tree without a vote, which defeats the point.
> > >
> > > Yeah, I was just joking about the 48 hour thing, 12 hours seems 
excessive
> > > but I guess that has happened when things are super backed up with 
gate
> > > issues and rechecks.
> > >
> > > Right now things take about 4 hours, with Tempest being around 1.5 
hours
> > > of that. The rest of the time is setup and install, which includes 
heat
> > > and ceilometer. So I guess that raises another question, if we're 
really
> > > setting this up right now because of nova, do we need to have heat 
and
> > > ceilometer installed and configured in the initial delivery of this 
if
> > > we're not going to run tempest tests against them (we don't right 
now)?
> > >
> > 
> > 
> > In general the faster the better, and if things get to slow enough 
that we
> > have to wait for powervm CI to report back, I
> > think its reasonable to go ahead and approve things without hearing 
back.
> >  In reality if you can report back in under 12 hours this will rarely
> > happen (I think).
> > 
> > 
> > >
> > > I think some aspect of the slow setup time is related to DB2 and how
> > > the migrations perform with some of that, but the overall time is 
not
> > > considerably different from when we were running this with MySQL so
> > > I'm reluctant to blame it all on DB2.  I think some of our topology
> > > could have something to do with it too since the IVM hypervisor is 
running
> > > on a separate system and we are gated on how it's performing at any
> > > given time.  I think that will be our biggest challenge for the 
scale
> > > issues with community CI.
> > >
> > > >
> > > > > 5. What are the minimum tests that need to run (excluding 
> APIs that the
> > > > > powervm driver doesn't currently support)?
> > > > > - smoke/gate/negative/whitebox/scenario/cli?  Right 
> now we have
> > > > > 1152 tempest tests running, those are only within 
api/scenario/cli and
> > > > > we don't run everything.
> 
> Well that's almost a full run right now, the full tempest jobs have 1290 
tests
> of which we skip 65 because of bugs or configuration. (don't run neutron 
api
> tests without neutron) That number is actually pretty high since you are
> running with neutron. Right now the neutron gating jobs only have 221 
jobs and
> skip 8 of those. Can you share the list of things you've got working 
with
> neutron so we can up the number of gating tests? 

Here is the nose.cfg we run with: 



Some of the tests are excluded because of performance issues that still 
need to 
be worked out (like test_list_image_fi

Re: [openstack-dev] [Neutron] IPv6 & DHCP options for dnsmasq

2013-10-22 Thread Matt Riedemann
FWIW, we've wanted IPv6 support too but there are limitations in 
sqlalchemy and python 2.6 and since openstack is still supporting both of 
those, we are gated on that.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   "Sean M. Collins" 
To: OpenStack Development Mailing List 
, 
Date:   10/22/2013 10:33 AM
Subject:Re: [openstack-dev] [Neutron] IPv6 & DHCP options for 
dnsmasq



On Tue, Oct 22, 2013 at 08:58:52AM +0200, Luke Gorrie wrote:
> Deutsche Telekom too. We are working on making Neutron interoperate well
> with a service provider network that's based on IPv6. I look forward to
> talking about this with people in Hong Kong :)

I may be mistaken, but I don't see a summit proposal for Neutron, on the
subject of IPv6. Are there plans to have one?

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

2013-10-22 Thread Matt Riedemann
We run openstack on ppc64 with RHEL 6.4 using the powervm nova virt 
driver.  What do you want to know?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Qing He 
To: OpenStack Development Mailing List 
, 
Date:   10/22/2013 05:49 PM
Subject:[openstack-dev]  [nova] Openstack on power pc/Freescale 
linux



All,
I'm wondering if anyone tried OpenStack on Power PC/ free scale Linux?

Thanks,
Qing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux

2013-10-22 Thread Matt Riedemann
Yeah, my team does.  We're using openvswitch 1.10, qpid 0.22, DB2 10.5 
(but MySQL also works).  Do you have specific issues/questions?

We're working on getting continuous integration testing working for the 
nova powervm driver in the icehouse release, so you can see some more 
details about what we're doing with openstack on power in this thread:

http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html 




Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Qing He 
To: OpenStack Development Mailing List 
, 
Date:   10/22/2013 07:43 PM
Subject:Re: [openstack-dev] [nova] Openstack on power pc/Freescale 
linux



Thanks Matt.
I'd like to know if anyone has tried to run the controller, API server, 
MySQL database, message queue, etc. - the brain of OpenStack - on ppc.
Qing
 
From: Matt Riedemann [mailto:mrie...@us.ibm.com] 
Sent: Tuesday, October 22, 2013 4:17 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [nova] Openstack on power pc/Freescale linux
 
We run openstack on ppc64 with RHEL 6.4 using the powervm nova virt 
driver.  What do you want to know?



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:Qing He  
To:OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>, 
Date:10/22/2013 05:49 PM 
Subject:[openstack-dev]  [nova] Openstack on power pc/Freescale 
linux 




All,
I'm wondering if anyone tried OpenStack on Power PC/ free scale Linux?

Thanks,
Qing

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sqlalchemy-migrate] Blueprint for review: Add DB2 10.5 Support

2013-10-31 Thread Matt Riedemann
I've got a sqlalchemy-migrate blueprint up for review to add DB2 support 
in migrate.

https://blueprints.launchpad.net/sqlalchemy-migrate/+spec/add-db2-support 

This is a pre-req for getting DB2 support into Nova so I'm targeting 
icehouse-1.  We've been running with the migrate patches internally since 
Folsom, but getting them into migrate was difficult before OpenStack took 
over maintenance of the project.

Please let me know if there are any questions/issues or something I need 
to address here.

Thanks,

Matt Riedemann
Cloud Solutions and OpenStack Development
Email: mrie...@us.ibm.com
Office Phone: 507-253-7622___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-10 Thread Matt Riedemann
I don't see anything explicit in the wiki and hacking guides; they 
mainly just say to have unit tests for everything and tell you how to 
run/debug them.


Generally mock is supposed to be used over mox now for python 3 support.

There is also a blueprint to remove the usage of mox in neutron:

https://blueprints.launchpad.net/neutron/+spec/remove-mox

For all new patches, we should be using mock over mox because of the 
python 3 support of mock (and lack thereof for mox).


As for when to use mock vs stubs, I think you'll get different opinions 
from different people. Stubs are quick and easy and that's what I used 
early on when I started contributing to the project, but since then I have 
preferred mox/mock since they validate that methods are actually called 
with specific parameters, which can get lost when simply stubbing a 
method call out. In other words, if I'm stubbing a method and doing 
assertions within it (which you'll usually see) and that method is never 
called (maybe the code changed since the test was written), the 
assertions are lost and the test is essentially broken.


So I think in general it's best to use mock now unless you have a good 
reason not to.
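
To make that concrete, here's a contrived sketch (illustrative names only,
not real nova code) of why a mock assertion is harder to silently lose than
an assertion buried inside a stub:

    import unittest

    import mock


    class Notifier(object):
        def notify(self, event, payload):
            pass  # imagine this sends a notification somewhere


    def resize_instance(notifier):
        # ... do the resize ...
        notifier.notify('resize.end', {'status': 'done'})


    class TestWithStub(unittest.TestCase):
        def test_notify_stubbed(self):
            notifier = Notifier()

            def fake_notify(event, payload):
                # if the code under test stops calling notify(), this
                # assertion simply never runs and the test still passes
                self.assertEqual('resize.end', event)

            notifier.notify = fake_notify
            resize_instance(notifier)


    class TestWithMock(unittest.TestCase):
        def test_notify_mocked(self):
            notifier = mock.Mock()
            resize_instance(notifier)
            # fails loudly if notify() was never called, or was called
            # with different arguments
            notifier.notify.assert_called_once_with('resize.end',
                                                    {'status': 'done'})


    if __name__ == '__main__':
        unittest.main()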


On 11/10/2013 7:40 AM, Jay Lau wrote:

Hi,

I noticed that we are now using mock, mox and stub for unit test, just
curious do we have any guidelines for this, in which condition shall we
use mock, mox or stub?

Thanks,

Jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate needs a new release

2013-11-12 Thread Matt Riedemann
I don't know what's all involved in putting out a release for 
sqlalchemy-migrate but if there is a way that I can help, please let me 
know.  I'll try to catch dripton in IRC today.


As for CI with DB2, it's in the blueprint as a work item, I just don't 
know enough about the infra side of things to get that going, so I'd 
need some help there.


DB2 Express-C is the free version, which is what we plan to use to run the 
unit tests in CI, but the only problem I see with that is that it's a trial 
license and I wouldn't want to have to redo images or licenses every 3 
months or however long it lasts. I would think that IBM would be able to 
provide a permanent license for CI though; otherwise our alternative is 
running the tests in-house and reporting the results back (something 
like what the nova virt drivers have to do and vmware is already doing).


Thanks,

Matt Riedemann

On 11/12/2013 1:50 AM, Roman Podoliaka wrote:

Hey David,

Thank you for undertaking this task!

I agree, that merging of DB2 support can be postponed for now, even if
it looks totally harmless (though I see no way to test it, as we don't
have DB2 instances running on Infra test nodes).

Thanks,
Roman

On Mon, Nov 11, 2013 at 10:54 PM, Davanum Srinivas  wrote:

@dripton, @Roman Many thanks :)

On Mon, Nov 11, 2013 at 3:35 PM, David Ripton  wrote:

On 11/11/2013 11:37 AM, Roman Podoliaka wrote:


As you may know, in our global requirements list [1] we are currently
depending on SQLAlchemy 0.7.x versions (which is 'old stable' branch
and will be deprecated soon). This is mostly due to the fact, that the
latest release of sqlalchemy-migrate from PyPi doesn't support
SQLAlchemy 0.8.x+.

At the same time, distros have been providing patches for fixing this
incompatibility for a long time now. Moreover, those patches have been
merged to sqlalchemy-migrate master too.

As we are now maintaining sqlalchemy-migrate, we could make a new
release of it. This would allow us to bump the version of SQLAlchemy
release we are depending on (as soon as we fix all the bugs we have)
and let distros maintainers stop carrying their own patches.

This has been discussed at the design summit [2], so we just basically
need a volunteer from [3] Gerrit ACL group to make a new release.

Is sqlalchemy-migrate stable enough to make a new release? I think,
yes. The commits we've merged since we adopted this library, only fix
a few issues with SQLAlchemy 0.8.x compatibility and enable running of
tests (we are currently testing all new changes on py26/py27,
SQLAlchemy 0.7.x/0.8.x, SQLite/MySQL/PostgreSQL).

Who wants to help? :)

Thanks,
Roman

[1]
https://github.com/openstack/requirements/blob/master/global-requirements.txt
[2] https://etherpad.openstack.org/p/icehouse-oslo-db-migrations
[3] https://review.openstack.org/#/admin/groups/186,members



I'll volunteer to do this release.  I'll wait 24 hours from the timestamp of
this email for input first.  So, if anyone has opinions about the timing of
this release, please speak up.

(In particular, I'd like to do a release *before* Matt Riedemann's DB2
support patch https://review.openstack.org/#/c/55572/ lands, just in case it
breaks anything.  Of course we could do another release shortly after it
gets in, to make folks who use DB2 happy.)

--
David Ripton   Red Hat   drip...@redhat.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Do we have some guidelines for mock, stub, mox when writing unit test?

2013-11-13 Thread Matt Riedemann



On 11/12/2013 5:04 PM, Chuck Short wrote:




On Tue, Nov 12, 2013 at 4:49 PM, Mark McLoughlin <mar...@redhat.com> wrote:

On Tue, 2013-11-12 at 16:42 -0500, Chuck Short wrote:
 >
 > Hi
 >
 >
     > On Tue, Nov 12, 2013 at 4:24 PM, Mark McLoughlin <mar...@redhat.com>
 > wrote:
 > On Tue, 2013-11-12 at 13:11 -0800, Shawn Hartsock wrote:
 > > Maybe we should have some 60% rule... that is: If you
change
 > more than
 > > half of a test... you should *probably* rewrite the test in
 > Mock.
 >
 >
 > A rule needs a reasoning attached to it :)
 >
 > Why do we want people to use mock?
 >
 > Is it really for Python3? If so, I assume that means we've
 > ruled out the
 > python3 port of mox? (Ok by me, but would be good to hear
why)
 > And, if
 > that's the case, then we should encourage whoever wants to
 > port mox
 > based tests to mock.
 >
 >
 >
 > The upstream maintainer is not going to port mox to python3 so we
have
 > a fork of mox called mox3. Ideally, we would drop the usage of mox in
 > favour of mock so we don't have to carry a forked mox.

Isn't that the opposite conclusion you came to here:

http://lists.openstack.org/pipermail/openstack-dev/2013-July/012474.html

i.e. using mox3 results in less code churn?

Mark.



Yes that was my original position but I though we agreed in thread
(further on) that we would use mox3 and then migrate to mock further on.

Regards
chuck


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



So it sounds like we're good with using mox for new tests again? Given 
Chuck got it into global-requirements here:


https://github.com/openstack/requirements/commit/998dda263d7c7881070e3f16e4523ddcd23fc36d

We can stave off the need to transition everything from mox to mock?

I can't seem to find the nova blueprint to convert everything from mox 
to mock, maybe it was obsoleted already.


Anyway, if mox(3) is OK and we don't need to use mock, it seems like we 
could add something to the developer guide here because I think this 
question comes up frequently:


http://docs.openstack.org/developer/nova/devref/unit_tests.html

Does anyone disagree?

BTW, I care about this because I've been keeping in mind the mox/mock 
transition when doing code reviews and giving a -1 when new tests are 
using mox (since I thought that was a no-no now).

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sqlalchemy-migrate needs a new release

2013-11-14 Thread Matt Riedemann



On 11/14/2013 2:43 PM, David Ripton wrote:

On 11/11/2013 03:35 PM, David Ripton wrote:


I'll volunteer to do this release.  I'll wait 24 hours from the
timestamp of this email for input first.  So, if anyone has opinions
about the timing of this release, please speak up.

(In particular, I'd like to do a release *before* Matt Riedemann's DB2
support patch https://review.openstack.org/#/c/55572/ lands, just in
case it breaks anything.  Of course we could do another release shortly
after it gets in, to make folks who use DB2 happy.)


Update:

There's now a "0.8" tag in Git but that release failed to reach PyPI, so
please ignore it.

Thanks fungi and mordred for helping debug what went wrong.

https://review.openstack.org/#/c/56449/ (a one-liner) should fix the
problem.  Once it gets approved, I will attempt to push "0.8.1".



Any particular reason to go with 0.8 rather than 0.7.3 as a bug fix release?

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sqlalchemy-migrate] Blueprint for review: Add DB2 10.5 Support

2013-11-14 Thread Matt Riedemann

Joe,

Hey, I missed this question.  I moved email accounts for the 
openstack-dev mailing list and missed this in my old pile.


So I touched on this a bit in response here [1] and also a bit when 
talking about the plans for CI for the nova PowerVM virt driver here 
[2].  The blueprint for adding DB2 support to sqlalchemy-migrate and the 
DB2 enablement wiki [3] does call out CI.  Getting the 
sqlalchemy-migrate unit tests to run against DB2 isn't that hard, I just 
haven't figured out if it's something I can do with community 
infrastructure or running as an external third party test, and I think 
whether we use Express-C or not would matter there since that has a 
trial license.


I'm open to suggestions/comments/ideas.

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/018714.html 

[2] 
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html

[3] https://wiki.openstack.org/wiki/DB2Enablement

--

Thanks,

Matt Riedemann


From: Joe Gordon 
To: "OpenStack Development Mailing List (not for usage questions)"
,
Date: 11/07/2013 09:41 PM
Subject: Re: [openstack-dev] [sqlalchemy-migrate] Blueprint for review:
Add DB2 10.5 Support




With OpenStack's test- and gating-oriented mindset, how can we gate on
this functionality working going forward?


On Fri, Nov 1, 2013 at 3:30 AM, Matt Riedemann <mrie...@us.ibm.com> wrote:
I've got a sqlalchemy-migrate blueprint up for review to add DB2 support
in migrate.
https://blueprints.launchpad.net/sqlalchemy-migrate/+spec/add-db2-support

This is a pre-req for getting DB2 support into Nova so I'm targeting
icehouse-1.  We've been running with the migrate patches internally
since Folsom, but getting them into migrate was difficult before
OpenStack took over maintenance of the project.

Please let me know if there are any questions/issues or something I need
to address here.

Thanks,

Matt Riedemann
Cloud Solutions and OpenStack Development
Email: mrie...@us.ibm.com
Office Phone: 507-253-7622
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sqlalchemy-migrate] Blueprint for review: Add DB2 10.5 Support

2013-11-15 Thread Matt Riedemann



On 11/14/2013 10:38 PM, Matt Riedemann wrote:

Joe,

Hey, I missed this question.  I moved email accounts for the
openstack-dev mailing list and missed this in my old pile.

So I touched on this a bit in response here [1] and also a bit when
talking about the plans for CI for the nova PowerVM virt driver here
[2].  The blueprint for adding DB2 support to sqlalchemy-migrate and the
DB2 enablement wiki [3] does call out CI.  Getting the
sqlalchemy-migrate unit tests to run against DB2 isn't that hard, I just
haven't figured out if it's something I can do with community
infrastructure or running as an external third party test, and I think
whether we use Express-C or not would matter there since that has a
trial license.

I'm open to suggestions/comments/ideas.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/018714.html

[2]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html
[3] https://wiki.openstack.org/wiki/DB2Enablement



Thanks to Brant Bknudson for pointing out that DB2 Express-C doesn't 
have a time restriction:


http://www.ibm.com/developerworks/downloads/im/db2express/

"It is a fully licensed product available for free download. It does not 
have any time restrictions."


I must have confused that with Enterprise Server Edition, which we were 
using in house for some bigger deployments for CI with Tempest.


So it sounds like Express-C is what we could use to get 
sqlalchemy-migrate unit tests running against DB2 using the community 
infrastructure (I hope), I just need some help with getting that going. 
I know Roman got the migrate UT running for MySQL and PostgreSQL here:


https://review.openstack.org/#/c/40436/

I'll try working with Roman, Monty and any infra guys that will talk to 
me to get this going.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sqlalchemy-migrate] Blueprint for review: Add DB2 10.5 Support

2013-11-15 Thread Matt Riedemann



On 11/15/2013 10:15 AM, Matt Riedemann wrote:



On 11/14/2013 10:38 PM, Matt Riedemann wrote:

Joe,

Hey, I missed this question.  I moved email accounts for the
openstack-dev mailing list and missed this in my old pile.

So I touched on this a bit in response here [1] and also a bit when
talking about the plans for CI for the nova PowerVM virt driver here
[2].  The blueprint for adding DB2 support to sqlalchemy-migrate and the
DB2 enablement wiki [3] does call out CI.  Getting the
sqlalchemy-migrate unit tests to run against DB2 isn't that hard, I just
haven't figured out if it's something I can do with community
infrastructure or running as an external third party test, and I think
whether we use Express-C or not would matter there since that has a
trial license.

I'm open to suggestions/comments/ideas.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-November/018714.html


[2]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/016395.html

[3] https://wiki.openstack.org/wiki/DB2Enablement



Thanks to Brant Bknudson for pointing out that DB2 Express-C doesn't
have a time restriction:

http://www.ibm.com/developerworks/downloads/im/db2express/

"It is a fully licensed product available for free download. It does not
have any time restrictions."

I must have confused that with Enterprise Server Edition, which we were
using in house for some bigger deployments for CI with Tempest.

So it sounds like Express-C is what we could use to get
sqlalchemy-migrate unit tests running against DB2 using the community
infrastructure (I hope), I just need some help with getting that going.
I know Roman got the migrate UT running for MySQL and PostgreSQL here:

https://review.openstack.org/#/c/40436/

I'll try working with Roman, Monty and any infra guys that will talk to
me to get this going.



Just to circle back on this before anyone throws in their two cents and 
tells me that 3rd party CI is the way to go, I caught Monty in IRC and 
came to that conclusion already.


While DB2 Express-C is free and doesn't expire, it's closed source, so 
there's an issue with the infra team being able to maintain it, and the 
community infrastructure doesn't run any closed source code.


So I'll plan on getting the sqlalchemy-migrate unit tests reporting back 
for DB2 using 3rd party CI and triggers.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list (summary so far)

2013-11-16 Thread Matt Riedemann
[openstack-dev] [Solum] Command Line Interface for Solum (20 messages)
[openstack-dev] [nova] future fate of nova-network? (3 messages)
[openstack-dev] [Nova] New API requirements, review of GCE (6 messages)
[openstack-dev] how can I know a new instance is created from the code ?
(3 messages)
[openstack-dev] [Nova] Icehouse Blueprints (2 messages)
[openstack-dev] [Solum/Heat] Is Solum really necessary? (14 messages)
[openstack-dev] Nova XML serialization bug 1223358 moving discussion
here to get more people involved (4 messages)
[openstack-dev] [RFC] Straw man to start the incubation / graduation
requirements discussion (11 messages)
[openstack-dev] [Savanna] DiskBuilder / savanna-image-elements (4 messages)
[openstack-dev] [Keystone] Blob in keystone v3 certificate API (2 messages)
[openstack-dev] [oslo] team meeting Friday 15 November @ 14:00 UTC (2
messages)
[openstack-dev] [Trove][Savanna][Murano] Unified Agent proposal
discussion at Summit (6 messages)
[openstack-dev] [oslo] tracking graduation status for incubated code
[openstack-dev] [OpenStack-dev][Neutron][Tempest]Can Tempest embrace
some complicated network scenario tests (3 messages)
[openstack-dev] [nova][cinder][oslo][scheduler] How to leverage oslo
schduler/filters for nova and cinder (6 messages)
[openstack-dev] [Nova] Hypervisor CI requirement and deprecation  plan
[openstack-dev] [Ceilometer] compute agent cannot start (7 messages)
[openstack-dev] [Horizon] Use icon set instead of instance Action (4
messages)
[openstack-dev] [OpenStack][Horizon] poweroff/shutdown action in horizon
(3 messages)
[openstack-dev] [Murano] Implementing Elastic Applications (3 messages)

Now - tell me in the above list where the mass of StackForge related
email overwhelming madness is coming from. I count 4 topics and 26
messages out of a total of 44 topics and 328 messages.

So - before we take the extreme move of segregation, can we just try
threaded mail readers for a while and see if it helps?

Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks for the tip Monty. I just started using Thunderbird last week 
and already had my tags sorting most of the dev list into folders, but 
just installed the Conversations add-on to further clean things up.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable/havana] gate broken

2013-11-17 Thread Matt Riedemann



On Sunday, November 17, 2013 7:46:39 AM, Gary Kotton wrote:

Hi,
The gating for the stable version is broken when running the
neutron gate. Locally this works but the gate has a problem. All of the
services are up and running correctly. There are some exceptions with
the ceilometer service but that is not related to the neutron gating.

The error message is as follows:
2013-11-17 11:00:05.855
<http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_05_855>
| 2013-11-17 11:00:05
2013-11-17 11:00:17.239  
<http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_239>
  | Process leaked file descriptors. 
Seehttp://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build  for more 
information
2013-11-17 11:00:17.437  
<http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_17_437>
  | Build step 'Execute shell' marked build as failure
2013-11-17 11:00:19.129  
<http://logs.openstack.org/46/56746/1/check/check-tempest-devstack-vm-neutron/a02894b/console.html#_2013-11-17_11_00_19_129>
  | [SCP] Connecting to static.openstack.org
Thanks
Gary


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I've seen this fail on at least two stable/havana patches in nova 
today, so I opened this bug:


https://bugs.launchpad.net/openstack-ci/+bug/1252024

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [api] How to handle bug 1249526?

2013-11-17 Thread Matt Riedemann
This is mainly just a newbie question, but it looks like it could be an easy 
fix. The bug report is just asking for the nova os-fixed-ips API 
extension to return the 'reserved' status for the fixed IP. I don't see 
that in the v3 API list though - was that dropped in v3? If it's not 
being ported to v3 I'm sure there was a good reason, so maybe this isn't 
worth implementing in the v2 API, even though it seems like a pretty 
harmless backwards-compatible change. Am I missing something here?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] How to determine patch set load for a given project

2013-11-19 Thread Matt Riedemann
We have a team working on getting CI set up for DB2 10.5 in 
sqlalchemy-migrate and they were asking me if there was a way to 
calculate the patch load for that project.


I asked around in the infra IRC channel and Jeremy Stanley pointed out 
that there might be something available in 
http://graphite.openstack.org/ by looking for the project's test stats.


I found that if you expand stats_counts > zuul > job and then search for 
your project (sqlalchemy-migrate in this case), you can find the jobs 
and their graphs for load. In my case I care about stats for 
gate-sqlalchemy-migrate-python27.


I'm having a little trouble interpreting the data though. From looking 
at what's out there for review now, there is one new patch created on 
11/19 and the last new one before that was on 11/15. I see spikes in the 
graph around 11/15, 11/18 and 11/19, but I'm not sure what the 11/18 
spike is from?


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Misspelled pbr parameter in quantum setup.cfg

2013-06-20 Thread Matt Riedemann
It's a bug, this is the code in pbr that references it:

https://github.com/openstack-dev/pbr/blob/master/pbr/packaging.py#L304 

Good catch.

I've opened bug https://bugs.launchpad.net/quantum/+bug/1192987 .
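
(For reference, the corrected spelling Ilya is pointing at - i.e. dropping
the extra 'n' - would just be:

    [pbr]
    single-version-externally-managed = true

though, as Ilya asks, removing the option entirely may turn out to be the
simpler fix.)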


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Ilya Shakhat 
To: mord...@inaugust.com, 
Cc: "OpenStack Development Mailing List 
\(openstack-dev@lists.openstack.org\)" 
Date:   06/20/2013 05:04 AM
Subject:[openstack-dev] Misspelled pbr parameter in quantum 
setup.cfg



Hi Monty,

I've noticed that the pbr section of Quantum's setup.cfg contains a misspelled 
parameter:
[pbr]
single-version-externally-mananged = true
(extra 'n' in word managed)
Should this parameter be fixed or removed completely?

Ilya___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Metrics][Nova] Another take on review turnaround stats

2013-06-28 Thread Matt Riedemann
Hey I made the list!

https://review.openstack.org/#/c/25355/ 

Just wanted to point out for nova in longest-waiting reviews based on 
first revision:


1.  94 days, 12 hours, 49 minutes - https://review.openstack.org/25355
 (PowerVM resize and migrate test cases)

This one is a bit skewed because it was abandoned due to inactivity and 
then I picked it back up by assigning the bug to myself and contributing 
to the original review.

Is there a way to take that into account in the metrics?  Or is this a 
process issue, i.e. should I have left this abandoned and pushed up a new 
review based on the original?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant 
To: OpenStack Development Mailing List 
, 
Date:   06/27/2013 09:45 PM
Subject:[openstack-dev] [Metrics][Nova] Another take on review 
turnaround  stats



Greetings,

The key metric I have been using for knowing whether we are keeping up
with review requests is the average wait time for getting a review.  In
a previous thread, we set a goal of keeping that under 4 days (at least
by the end of the week, may be higher after a weekend).  This is
calculated using the time that the *latest* patch revision was posted.
We have been keeping up with this (Nova at 3.5 days right now).

I've been getting a lot of complaints this week about review turnaround.
 It's important to me that we're doing this well, but action needs to be
based on real data.

One of the theories was that patches are having to be rebased a bunch,
so they have been waiting longer than the stats say.  True, but by how
much?  The answer is now in the stats:

http://russellbryant.net/openstack-stats/all-openreviews.html

The results are much better than I was afraid of.  On average across all
projects, patches waiting for review have an age of just under 14 days
since they were first posted.  Nova is below average, sitting at an
average of just over 10 days.  That doesn't seem bad at all, to me.

So, if we have a problem, it's not Nova specific, at least.  It's harder
to set a goal for this metric since it's not entirely in the hands of
reviewers like the other one.

Suggestions for additional tweaks welcome.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-01 Thread Matt Riedemann
Thomas,

+1

I don't know anything about the background here, but for what it's worth, 
I work with a team that packages RPMs for the core components (nova, 
keystone, etc.) and we only compile the .mo's at rpmbuild time; nothing 
binary is stored in git, only the .po files (which are converted to .mo's 
using Babel's compile_catalog).
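
As a rough sketch (not our actual spec file), the build step amounts to 
something like this in the RPM %build section:

    %build
    python setup.py build
    python setup.py compile_catalog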



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Thomas Goirand 
To: OpenStack Development Mailing List 
, 
Date:   07/01/2013 12:29 PM
Subject:[openstack-dev] [horizon] Removing the .mo files from 
Horizon git



Hi,

Is there any reason why the .mo files of Horizon are included in the
Git? IMO, they have nothing to do there. They take time to download, and
are the major causes of "git merge" failures for me when preparing
Havana packages. Removing the .mo files would also be a good step to
make sure we follow the guidelines of Debian (as binary generated files
could be considered non-free, and we should make sure everything is
built from source).

So, shouldn't the .mo files be generated at build time only, and be kept
out of the Git?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Separate out 'soft-deleted' instances from 'deleted' ones?

2013-07-01 Thread Matt Riedemann
For everyone's awareness, there is a bug related to this: 
https://bugs.launchpad.net/nova/+bug/1196255 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Yufang Zhang 
To: openstack-dev@lists.openstack.org, 
Date:   06/30/2013 10:01 AM
Subject:[openstack-dev] Seperate out 'soft-deleted' instances from 
'deleted' ones?



In nova-api, both vm_states.DELETED and vm_states.SOFT_DELETED states are 
mapped to the 'DELETED' status. Thus although nova-api supports filtering 
instances by instance status, we cannot get instances which are in 
'soft-deleted' status, like: 

nova list --status SOFT_DELETED

So does it make sense to separate out 'soft-deleted' instances from 
'deleted' ones at the API level? 

To achieve this, we can modify the state-status mappings in nova-api to map 
vm_states.SOFT_DELETED to a dedicated status (like SOFT_DELETED) and vice 
versa. Of course, some modification will also be needed in the instance filter 
logic.

Could anyone give some opinions before I am working on it?

Thanks.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] too many tokens

2013-07-03 Thread Matt Riedemann
For some history, there was an attempt at consolidating some of this here:

https://github.com/openstack/nova/commit/dd9c27f999221001bae9faa03571645824d2a681
 


But that caused some issues and was reverted here:

https://github.com/openstack/nova/commit/ee5d9ae8d376e41e852b06488e922400cf69b4ac
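
The idea Ala describes below would amount to something roughly like this 
(sketch only, approximate names, not the actual nova/network/quantumv2 code):

    from oslo.config import cfg
    from quantumclient.v2_0 import client

    CONF = cfg.CONF

    def get_client(context, admin=False):
        # Authenticate once per request context and stash the token on it,
        # so the other client instances built for the same context reuse it.
        if not context.auth_token:
            # _get_admin_token() is a hypothetical stand-in for whatever
            # performs the keystone authentication today.
            context.auth_token = _get_admin_token()
        return client.Client(token=context.auth_token,
                             endpoint_url=CONF.quantum_url)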




Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Ala Rezmerita 
To: OpenStack Development Mailing List 
, 
Cc: gong...@unitedstack.com, hrushikesh.gan...@hp.com
Date:   07/03/2013 11:26 AM
Subject:[openstack-dev] [Nova] too many tokens



Hi everyone, 
I have a question about too many tokens being generated in nova when using 
quantumclient (also related to bug reports 
https://bugs.launchpad.net/nova/+bug/1192383 + 
https://bugs.launchpad.net/nova-project/+bug/1191159) 
For instance, during the periodic task heal_instance_info_cache (every 
60s) nova calls the quantum API method get_instance_nw_info, which calls 
_build_network_info_model (backtrace at the end of the mail).  
During the execution of this method, 4 quantum client instances are 
created (all of them using the same context object), and for each of them a 
new token is generated.   
Is it possible to change this behavior by updating the context.auth_token 
property the first time a quantumclient for a given context is created, so 
that the same token is reused among the 4 client instances?  Are there any 
security issues with that?
Thanks
Ala Rezmerita
Cloudwatt

The backtrace :
  
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py(194)main()
-> result = function(*args, **kwargs)
  /opt/stack/nova/nova/openstack/common/loopingcall.py(125)_inner()
-> idle = self.f(*self.args, **self.kw)
  /opt/stack/nova/nova/service.py(283)periodic_tasks()
-> return self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)
  /opt/stack/nova/nova/manager.py(100)periodic_tasks()
-> return self.run_periodic_tasks(context, raise_on_error=raise_on_error)
  
/opt/stack/nova/nova/openstack/common/periodic_task.py(179)run_periodic_tasks()
-> task(self, context)
  /opt/stack/nova/nova/compute/manager.py(3654)_heal_instance_info_cache()
-> self._get_instance_nw_info(context, instance)
  /opt/stack/nova/nova/compute/manager.py(767)_get_instance_nw_info()
-> instance, conductor_api=self.conductor_api)
  /opt/stack/nova/nova/network/quantumv2/api.py(367)get_instance_nw_info()
-> result = self._get_instance_nw_info(context, instance, networks)
  
/opt/stack/nova/nova/network/quantumv2/api.py(375)_get_instance_nw_info()
-> nw_info = self._build_network_info_model(context, instance, networks)
  
/opt/stack/nova/nova/network/quantumv2/api.py(840)_build_network_info_model()
-> client = quantumv2.get_client(context, admin=True)
> /opt/stack/nova/nova/network/quantumv2/__init__.py(67)get_client()
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-03 Thread Matt Riedemann
If you use Babel, I don't think you need gettext by itself, since I thought 
Babel has its own conversion/compile code built in?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Monty Taylor 
To: openstack-dev@lists.openstack.org, 
Date:   07/02/2013 01:22 PM
Subject:Re: [openstack-dev] [horizon] Removing the .mo files from 
Horizon git





On 07/02/2013 01:13 AM, Mark McLoughlin wrote:
> On Tue, 2013-07-02 at 09:58 +0200, Thierry Carrez wrote:
>> Thomas Goirand wrote:
>>> So, shouldn't the .mo files be generated at build time only, and be 
kept
>>> out of the Git?
>>
>> +1
> 
> Yep, agree too.
> 
> Interestingly, last time I checked, devstack doesn't actually compile
> the message catalogs (python setup.py compile_catalog).
> 
> I've been meaning to fix that for a while now, but it's fallen by the
> wayside. I've unassigned myself from the bug for now:
> 
>   https://bugs.launchpad.net/devstack/+bug/995287

Should we make python setup.py install do this if gettext is installed?
Or keep it as a separate step for people who care?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Help with database migration error

2013-07-03 Thread Matt Riedemann
What is the sql_connection value in your cisco_plugins.ini file?  Looks 
like sqlalchemy is having issues parsing the URL.
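
The traceback ends with "Could not parse rfc1738 URL from string ''", which 
suggests the option is coming back empty.  It would normally be set to a full 
database URL in the plugin ini's database section, e.g. (placeholder values):

    sql_connection = mysql://quantum:password@127.0.0.1:3306/quantum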



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Henry Gessau 
To: OpenStack Development Mailing List 
, 
Date:   07/02/2013 09:05 PM
Subject:[openstack-dev] [Neutron] Help with database migration 
error



I have not worked with databases much and this is my first attempt
at a database migration. I am trying to follow this Howto:
https://wiki.openstack.org/wiki/Neutron/DatabaseMigration

I get the following error at step 3:

/opt/stack/quantum[master] $ quantum-db-manage --config-file 
/etc/quantum/quantum.conf --config-file 
/etc/quantum/plugins/cisco/cisco_plugins.ini stamp head
Traceback (most recent call last):
  File "/usr/local/bin/quantum-db-manage", line 9, in 
load_entry_point('quantum==2013.2.a882.g0fc6605', 'console_scripts', 
'quantum-db-manage')()
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 136, in main
CONF.command.func(config, CONF.command.name)
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 81, in 
do_stamp
sql=CONF.command.sql)
  File "/opt/stack/quantum/quantum/db/migration/cli.py", line 54, in 
do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 
221, in stamp
script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 
193, in run_env
util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 177, 
in load_python_file
module = imp.load_source(module_id, path, open(path, 'rb'))
  File 
"/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py", line 
100, in 
run_migrations_online()
  File 
"/opt/stack/quantum/quantum/db/migration/alembic_migrations/env.py", line 
73, in run_migrations_online
poolclass=pool.NullPool)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/__init__.py", 
line 338, in create_engine
return strategy.create(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", 
line 48, in create
u = url.make_url(name_or_url)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 
178, in make_url
return _parse_rfc1738_args(name_or_url)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 
219, in _parse_rfc1738_args
"Could not parse rfc1738 URL from string '%s'" % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Removing the .mo files from Horizon git

2013-07-04 Thread Matt Riedemann
I use Babel simply because the setup.cfg files on the core projects are 
already configured for running Babel, so all I have to do to generate the 
.mo files is "python setup.py compile_catalog".



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Thomas Goirand 
To: OpenStack Development Mailing List 
, 
Date:   07/04/2013 08:15 AM
Subject:Re: [openstack-dev] [horizon] Removing the .mo files from 
Horizon git



On 07/04/2013 10:14 AM, Matt Riedemann wrote:
> If you use Babel, I don't think you need gettext by itself since I
> thought Babel has it's own conversion/compile code built-in?

But why would you use something in Python, when GNU gettext in C does
the job lightning fast, and is available everywhere?

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Guidelines for setting a milestone target for a bug?

2013-07-09 Thread Matt Riedemann
I understand setting milestone targets for blueprints, but was wondering 
what guidelines/best practices there are around setting a target milestone 
for a bug?  Does it mean anything, especially if there is already a patch 
up in review for it?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][docs] Why is the neutron security group extension disabled by default?

2013-07-13 Thread Matt Riedemann
I had to figure out via the code that unless you specify a firewall driver 
in the neutron plugin's ini file (I'm using openvswitch in this case), the 
neutron security group extension is disabled.

The admin doc tells you what to do in nova.conf to get nova to proxy 
security group calls through neutron:

http://docs.openstack.org/trunk/openstack-network/admin/content/nova_config_security_groups.html
 


But there is no mention of setting the firewall_driver property in the 
[securitygroup] section of your plugin's ini file.  For OVS, it would be 
setting this:

http://gerrit.rtp.raleigh.ibm.com/gitweb?p=osee-tools.git;a=blob;f=install/build.include;h=2089a32f1da4ad92a61601a4d46a5b34b312f644;hb=refs/heads/osee-havana#l103
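
In other words, something like this in the OVS plugin's ini (driver class 
path from memory, so double-check it against the tree):

    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver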
 


In nova, security groups work out of the box (well, at least they are 
enabled; you still have to set up the rules).

Is there a design reason why the neutron security group extension is 
disabled by default (maybe so it doesn't interfere with nova somehow)?  If 
so, we can work on getting the docs updated.  Otherwise it seems like a 
bug in the code.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Horizon] Is there precedent for validating user input on data types to APIs?

2013-07-14 Thread Matt Riedemann
I'm triaging nova bug 1199539 and trying to determine if this should be 
routed to Horizon, checked in the nova API layer, or just rejected as a 
usage error.  In this case, the DB expects an integer but an empty string 
is being passed in from the user via Horizon.  I don't know if Horizon is 
doing type checking on user input already and if so, we should just route 
to Horizon?  Or if this is something to check in the nova code itself (or 
just reject it)?
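
Just to illustrate the kind of check I mean if it goes in at the nova API 
layer (rough sketch, not the actual nova validation code):

    import webob.exc

    def validated_int(name, value):
        # Reject non-integer input (e.g. an empty string) with a 400 rather
        # than letting it blow up at the DB layer.
        try:
            return int(value)
        except (TypeError, ValueError):
            msg = "%s must be an integer, got %r" % (name, value)
            raise webob.exc.HTTPBadRequest(explanation=msg)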



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Separate out 'soft-deleted' instances from 'deleted' ones?

2013-07-15 Thread Matt Riedemann
I have a patch up for review on this:

https://review.openstack.org/#/c/35061/ 

However, this doesn't fix the vm_states.SOFT_DELETED mapping in 
nova.api.openstack.common so if you show an instance with 
vm_states.SOFT_DELETED, the response status will be 'DELETED'.

I'd like to see if there are any opinions on whether this should come back as 
'SOFT_DELETED' or if everyone is OK with mapping soft-delete to 'DELETED' 
in the v3 API?

As far as the bug is concerned, I've at least done what I wanted which was 
to make the filtering work when searching on SOFT_DELETED and raise a 
BadRequest on unmapped status values to shore up the usability problem 
there.
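
To make the open question concrete, the mapping change amounts to roughly 
this (sketch with approximate names, not the actual nova/api/openstack/common 
code):

    from nova.compute import vm_states

    # Today both DELETED and SOFT_DELETED map to the 'DELETED' status; the
    # question is whether soft-deleted instances should get their own status.
    _STATE_TO_STATUS = {
        vm_states.DELETED: 'DELETED',
        vm_states.SOFT_DELETED: 'SOFT_DELETED',   # instead of 'DELETED'
    }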



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM@IBMUS
To: OpenStack Development Mailing List 
, 
Date:   07/01/2013 04:14 PM
Subject:Re: [openstack-dev] [Nova] Seperate out 'soft-deleted' 
instances   from 'deleted' ones?



For everyone's awareness, there is a bug related to this: 
https://bugs.launchpad.net/nova/+bug/1196255 



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:Yufang Zhang  
To:openstack-dev@lists.openstack.org, 
Date:06/30/2013 10:01 AM 
Subject:[openstack-dev] Seperate out 'soft-deleted' instances from 
   'deleted' ones? 



In nova-api, both vm_states.DELETED and vm_states.SOFT_DELETED states are 
mapped to the 'DELETED' status. Thus although nova-api supports filtering 
instances by instance status, we cannot get instances which are in 
'soft-deleted' status, like: 

nova list --status SOFT_DELETED 

So does it make sense to separate out 'soft-deleted' instances from 
'deleted' ones at the API level? 

To achieve this, we can modify the state-status mappings in nova-api to map 
vm_states.SOFT_DELETED to a dedicated status (like SOFT_DELETED) and vice 
versa. Of course, some modification will also be needed in the instance filter 
logic. 

Could anyone give some opinions before I am working on it? 

Thanks.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] non-default quota not set for new tenant - bug?

2013-07-17 Thread Matt Riedemann
I'm wondering if this is a bug or working as designed and I'm just not 
aware of the design point.

Running with the latest nova havana master level of code, I'm setting up 
tenants and users for running Tempest on a RHEL 6.3 box.  I create two 
tenants and two users, tempest1 and tempest2.  Before creating the tenants 
and users I update the instance quota in nova.conf from 10 to 20 and 
restart nova api, conductor, scheduler and compute services.
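
The nova.conf change itself is just the instances quota default (option name 
from memory):

    [DEFAULT]
    quota_instances = 20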

The problem is I'm eventually getting failures due to instance quota being 
maxed out.  The weird thing is if I do quota-show on the tenant name, it 
says the instance quota is 20, but if I query on the tenant ID, it says 
the quota is 10. Here is the paste:

http://paste.openstack.org/show/40675/ 

So my questions are:

1. Should nova quota-show be showing me different things depending on if I 
pass the name or ID of the tenant?
2. Since I'm creating the tenant/user after updating the default quotas in 
nova.conf, shouldn't the tenant also get the updated default of 20 (instead 
of 10)?

I'm using the 2.13 python-novaclient.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] need to pin jsonschema version for glance?

2013-07-17 Thread Matt Riedemann
I recently synched up on the latest glance and ran tempest on my RHEL 6.3 
box and the image v2 tests all started failing due to json schema 
validation errors:

http://paste.openstack.org/show/40684/ 

I found that the version of jsonschema on the system is 0.7, probably 
because of the dependency from warlock in python-glanceclient:

https://github.com/openstack/python-glanceclient/blob/master/requirements.txt#L8
 


I started looking at what recent changes in glance might be causing the 
issue and I found this one:

https://review.openstack.org/#/c/35134/ 

As pointed out in the test output from that patch, since there is no 
version constraint on jsonschema in glance or tempest, it's getting the 
latest version from pypi (2.0.0 in this case).

When I updated my test box to jsonschema 1.3.0, I got past the schema 
validation error.

So this leads me to believe that we need to pin the jsonschema version in 
glance and tempest to >= 1.3.0.
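
In other words, something like this in the requirements files:

    jsonschema>=1.3.0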

Thoughts?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] headsup - transient test failures on py26 ' cannot import name OrderedDict'

2013-07-17 Thread Matt Riedemann
What do you mean in (b) about upstream python not supporting python 2.6? 
From what I understand here, it's the version of testrepository being used 
that doesn't support py26, not python itself or openstack.

Anyone doing development or test on RHEL 6 (which doesn't have python 2.7) 
is going to have an issue with this.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Robert Collins 
To: OpenStack Development Mailing List 
, 
Date:   07/17/2013 04:13 AM
Subject:[openstack-dev] headsup - transient test failures on py26 
' cannotimport name OrderedDict'



Python 2.6 isn't one of the official supported Pythons for
testrepository, and I managed to break that when I fixed tests on
Python3.3 (which has more random dicts). So Testrepository 0.0.16
breaks on 2.6, 0.0.17 is fixed.

However until the fixed version propogates into the OpenStack-infra
PyPI mirror, I think every Python2.6 run will fail in this way.

a) sorry.
b) Can we not say 'if you want to run OpenStack on a Python version
upstream python don't support, it's your problem, not ours' ?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Cloud Services

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] need to pin jsonschema version for glance?

2013-07-17 Thread Matt Riedemann
Mark, I have glanceclient 0.9.0 installed.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Mark Washenberger 
To: OpenStack Development Mailing List 
, 
Date:   07/17/2013 01:07 PM
Subject:Re: [openstack-dev] [glance] need to pin jsonschema 
version for glance?



Actually, when I build out a virtual environment and install 
python-glanceclient, I get jsonschema 2.0.0. So maybe the problem is 
elsewhere? I also get python-glanceclient 0.9.0, but I notice that tempest 
requires python-glanceclient >0.5.0 ( 
https://github.com/openstack/tempest/blob/master/requirements.txt#L11 ). 
What version of python-glanceclient do you have installed in the 
environment where there is a problem?


On Wed, Jul 17, 2013 at 9:52 AM, Mark Washenberger <
mark.washenber...@markwash.net> wrote:



On Wed, Jul 17, 2013 at 7:16 AM, Matt Riedemann  
wrote:
I recently synched up on the latest glance and ran tempest on my RHEL 6.3 
box and the image v2 tests all started failing due to json schema 
validation errors: 

http://paste.openstack.org/show/40684/ 

I found that the version of jsonschema on the system is 0.7, probably 
because of the dependency from warlock in python-glanceclient: 

https://github.com/openstack/python-glanceclient/blob/master/requirements.txt#L8
 


I started looking at what recent changes in glance might be causing the 
issue and I found this one: 

https://review.openstack.org/#/c/35134/ 

As pointed out in the test output from that patch, since there is no 
version constraint on jsonschema in glance or tempest, it's getting the 
latest version from pypi (2.0.0 in this case). 

When I updated my test box to jsonschema 1.3.0, I got past the schema 
validation error. 

So this leads me to believe that we need to pin the jsonschema version in 
glance and tempest to >= 1.3.0. 

Thoughts?

This sounds correct. Another alternative would be to switch back to the 
"old" syntax and pin < 1.3.0, which sounds like its not really forward 
progress, but might be easier.
 



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] need to pin jsonschema version for glance?

2013-07-17 Thread Matt Riedemann
Just FYI that there is a bug related to this now in launchpad: 
https://bugs.launchpad.net/glance/+bug/1202391 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matthew Treinish 
To: OpenStack Development Mailing List 
, 
Date:   07/17/2013 01:49 PM
Subject:Re: [openstack-dev] [glance] need to pin jsonschema 
version for glance?



On Wed, Jul 17, 2013 at 11:03:53AM -0700, Mark Washenberger wrote:
> Actually, when I build out a virtual environment and install
> python-glanceclient, I get jsonschema 2.0.0. So maybe the problem is
> elsewhere? I also get python-glanceclient 0.9.0, but I notice that 
tempest
> requires python-glanceclient >0.5.0 (
> https://github.com/openstack/tempest/blob/master/requirements.txt#L11 ).
> What version of python-glanceclient do you have installed in the
> environment where there is a problem?

The glance v2 testing doesn't use glanceclient though. The glanceclient
dependency is only used for scenario testing. It makes http calls using
tempest's rest client:

https://github.com/openstack/tempest/blob/master/tempest/services/image/v2/json/image_client.py


Tempest uses jsonschema directly to verify requests before it sends them
by comparing against the schema it gets from the v2 api. I wrote it this 
way to
avoid having a broken schema pushed into glance.

I think that setting a requirement of >=1.3.0 is fine it should get us
around this.

-Matt Treinish

> 
> 
> On Wed, Jul 17, 2013 at 9:52 AM, Mark Washenberger <
> mark.washenber...@markwash.net> wrote:
> 
> >
> >
> >
> > On Wed, Jul 17, 2013 at 7:16 AM, Matt Riedemann 
wrote:
> >
> >> I recently synched up on the latest glance and ran tempest on my RHEL 
6.3
> >> box and the image v2 tests all started failing due to json schema
> >> validation errors:
> >>
> >> http://paste.openstack.org/show/40684/
> >>
> >> I found that the version of jsonschema on the system is 0.7, probably
> >> because of the dependency from warlock in python-glanceclient:
> >>
> >> https://github.com/openstack/python-glanceclient/blob/master/requirements.txt#L8
> >>
> >> I started looking at what recent changes in glance might be causing 
the
> >> issue and I found this one:
> >>
> >> https://review.openstack.org/#/c/35134/
> >>
> >> As pointed out in the test output from that patch, since there is no
> >> version constraint on jsonschema in glance or tempest, it's getting 
the
> >> latest version from pypi (2.0.0 in this case).
> >>
> >> When I updated my test box to jsonschema 1.3.0, I got past the schema
> >> validation error.
> >>
> >> So this leads me to believe that we need to pin the jsonschema 
version in
> >> glance and tempest to >= 1.3.0.
> >>
> >> Thoughts?
> >>
> >
> > This sounds correct. Another alternative would be to switch back to 
the
> > "old" syntax and pin < 1.3.0, which sounds like its not really forward
> > progress, but might be easier.
> >
> >
> >>
> >>
> >>
> >> Thanks,
> >>
> >> MATT RIEDEMANN
> >> Advisory Software Engineer
> >> Cloud Solutions and OpenStack Development
> >> --
> >> Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
> >> E-mail: mrie...@us.ibm.com
> >>
> >> 3605 Hwy 52 N
> >> Rochester, MN 55901-1407
> >> United States
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >



> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] core review request - 34947

2013-07-18 Thread Matt Riedemann
Hi, looking for some cores to check this out again: 
https://review.openstack.org/#/c/34947/ 

Basically it was already approved but failed to merge.  I resolved the 
merge conflict, now it just needs approval again.  I haven't been able to 
catch the previous approvers on IRC so trying the mailing list.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] generate_sample.sh

2013-07-19 Thread Matt Riedemann
Looks like it's complaining because you changed nova.conf.sample.  Based 
on the readme:

https://github.com/openstack/nova/tree/master/tools/conf 

Did you run ./tools/conf/analyze_opts.py?  I'm assuming you need to 
run the tools, and if there are issues you have to resolve them before 
pushing up your changes.  I've personally never run this myself, though.
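
Going off the readme, the flow appears to be basically:

    ./tools/conf/generate_sample.sh
    ./tools/conf/analyze_opts.py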



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Gary Kotton 
To: "OpenStack Development Mailing List 
(openstack-dev@lists.openstack.org)" , 
Date:   07/19/2013 07:03 PM
Subject:[openstack-dev] [nova] generate_sample.sh



Hi,
I have run into a problem with pep8 for 
https://review.openstack.org/#/c/37539/. The issue is that have run the 
script in the subject and the pep8 fails.
Any ideas?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to add Nikola Đipanov to nova-core

2013-07-31 Thread Matt Riedemann
+1



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant 
To: OpenStack Development Mailing List 
, 
Date:   07/31/2013 02:14 PM
Subject:[openstack-dev] [Nova] Proposal to add Nikola Đipanov to 
nova-core



Greetings,

I propose that we add Nikola Đipanov to the nova-core team [1].

Nikola has been actively contributing to nova for a while now, both in
code and reviews.  He provides high quality reviews. so I think he would
make a good addition to the review team.

https://review.openstack.org/#/q/reviewer:ndipa...@redhat.com,n,z

https://review.openstack.org/#/q/owner:ndipa...@redhat.com,n,z

Please respond with +1/-1.

Thanks,

[1] https://wiki.openstack.org/wiki/Nova/CoreTeam

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] instances fail to boot on el6 (glance schema error issue)

2013-08-09 Thread Matt Riedemann
Dan, I ran into problems with the glance v2 schema tests in tempest on 
RHEL 6.4 (python 2.6) until I updated jsonschema (1.3.0) and warlock 
(1.0.1).  Could that be related to your issue?

This was the related bug: https://bugs.launchpad.net/glance/+bug/1202391 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Dan Prince 
To: OpenStack Development Mailing List 
, 
Date:   08/06/2013 09:45 AM
Subject:Re: [openstack-dev] instances fail to boot on el6 (glance 
schema errorissue)



Okay. The quick fix is to remove the extra Glance V2 call when 
CONF.allowed_direct_url_schemes is disabled.

 https://review.openstack.org/#/c/40426/1

This effectively avoids calling the Glance V2 API on python 2.6 (thus 
avoiding the schema validation issue).

The real issue here is still unresolved however and it looks like we still 
have some work to do to get all the fancy new Glance V2 stuff fully 
working on python 2.6 distros (RHEL/Centos, etc).

Dan

- Original Message -
> From: "Dan Prince" 
> To: "OpenStack Development Mailing List" 

> Sent: Monday, August 5, 2013 9:01:40 PM
> Subject: [openstack-dev] instances fail to boot on el6 (glance schema 
errorissue)
> 
> As of an hour ago the el6 (Centos) builds in SmokeStack all started 
failing.
> I've documented the initial issue I'm seeing in this ticket:
> 
>  https://bugs.launchpad.net/nova/+bug/1208656
> 
> The issue seems to be that we now hit a SchemaError which bubbles up 
from
> glanceclient when the new direct download plugin code runs. This only 
seems
> to happen on distributions using python 2.6 as I'm not seeing the same 
thing
> on Fedora.
> 
> This stack trace also highlights the fact that the Glance v2 API now 
seems to
> be a requirement for Nova... and I'm not sure this is a good thing
> considering we still use the v1 API for many things as well. Ideally 
we'd
> have all Nova -> Glance communication use a single version of the Glance 
API
> (either v1 or v2... not both) right?
> 
> Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Flakiness with launchpad tracking bug status?

2013-08-10 Thread Matt Riedemann
I've been seeing some flakiness lately with launchpad not tracking Gerrit 
status when a patch is proposed (changing the bug to 'In Progress') or 
merged (changing the bug to 'Fix Committed').  Has anyone else experienced 
this?  Note that the bugs even show when a patch is proposed, change the 
assignee and track when it's merged in github, but doesn't change the 
status.  I'm primarily looking at nova.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Flakiness with launchpad tracking bug status?

2013-08-10 Thread Matt Riedemann
Here is one I saw today that was in progress but status was New: 
https://bugs.launchpad.net/nova/+bug/1209134 

This is another one I saw today in nova bug triage that was New but had 
actually already been merged: https://bugs.launchpad.net/nova/+bug/1206330 


A bit older but the first occurrence that I've seen: 
https://bugs.launchpad.net/nova/+bug/1197506  (In this case it didn't even 
get the github merge/commit information).


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   John Griffith 
To: OpenStack Development Mailing List 
, 
Date:   08/10/2013 01:47 PM
Subject:Re: [openstack-dev] Flakiness with launchpad tracking bug 
status?






On Sat, Aug 10, 2013 at 12:20 PM, Matt Riedemann  
wrote:
I've been seeing some flakiness lately with launchpad not tracking Gerrit 
status when a patch is proposed (changing the bug to 'In Progress') or 
merged (changing the bug to 'Fix Committed').  Has anyone else experienced 
this?  Note that the bugs even show when a patch is proposed, change the 
assignee and track when it's merged in github, but doesn't change the 
status.  I'm primarily looking at nova.



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Any specific examples to look at?  Could be a badly formed commit message 
or branch name such that the bug ID isn't picked up?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Flakiness with launchpad tracking bug status?

2013-08-10 Thread Matt Riedemann
OK, I have the post on the new tags bookmarked and have been sure to use 
them in my latest commits.  Looking back at the ones I pointed out, though, 
they were using something different, so I suppose that's the issue (although 
the hyper-v one of mine was merged back on 7/9, before those changes went 
into effect, I think).

Note that these two use "Fix bug" on a separate line, though not the 
"fixes bug" form the post mentions.

https://review.openstack.org/#/c/41064/ 
https://review.openstack.org/#/c/39651/  
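
For comparison, the footers the new hook looks for are (as I understand the 
announcement):

    Closes-Bug: #1234567
    Partial-Bug: #1234567
    Related-Bug: #1234567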

Anyway, I'll keep a lookout for the new tags in commit messages when doing 
reviews.  Does this mean we should be giving a -1 if the commit message 
doesn't follow the new patterns?



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Jeremy Stanley 
To: OpenStack Development Mailing List 
, 
Date:   08/10/2013 04:43 PM
Subject:Re: [openstack-dev] Flakiness with launchpad tracking bug 
status?



On 2013-08-10 13:20:41 -0500 (-0500), Matt Riedemann wrote:
> I've been seeing some flakiness lately with launchpad not tracking
> Gerrit status when a patch is proposed (changing the bug to 'In
> Progress') or merged (changing the bug to 'Fix Committed'). Has
> anyone else experienced this? Note that the bugs even show when a
> patch is proposed, change the assignee and track when it's merged
> in github, but doesn't change the status.
[...]

Yes, this was an announced change in the bug integration hook for
commit message processing a little over a week ago...

http://lists.openstack.org/pipermail/openstack-dev/2013-August/012945.html

In short, some of the old commonly-used patterns were reimplemented
for convenience, but the new parser is not 100% backward compatible
so some wordier bug-related comments will not be picked up
(particularly if they're not all on one line, "bug" is not one of
the first two words or there are additional words after it prior to
the bug number). Best of course is to use the new headers mentioned
there, so you can also benefit from the new partial and related bug
associations.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] gauging feelings on a new config option for powervm_lpar_operation_timeout

2013-08-11 Thread Matt Riedemann
While working on a patch to implement hard reboot for the powervm driver 
[1], I noticed that the stop_lpar (power_off) method has a timeout 
argument with a default value of 30 seconds but it's not overridden 
anywhere in the code and it's not configurable.  The start_lpar (power_on) 
method doesn't have a timeout at all.  I was thinking about creating a 
patch to (1) make start_lpar poll until the instance is running and (2) 
make the stop/start timeouts configurable with a new config option, 
something like powervm_lpar_operation_timeout that defaults to 60 seconds.
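
Roughly what I have in mind (sketch only; option name and default are 
obviously up for debate):

    from oslo.config import cfg

    powervm_opts = [
        cfg.IntOpt('powervm_lpar_operation_timeout',
                   default=60,
                   help='Seconds to wait for an LPAR power on/off operation '
                        'to complete before raising a timeout error'),
    ]

    cfg.CONF.register_opts(powervm_opts)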

I looked in nova.conf.sample for existing timeout options and found this 
one for the xen virt driver:

# number of seconds to wait for instance to go to running
# state (integer value)
#xenapi_running_timeout=60

I'm looking to basically get a feeling of what kind of reaction I'll get 
if I put a patch up with a new config option for this in the powervm 
driver.  If there aren't any major pushes against it, I'll start working a 
patch.

[1] https://review.openstack.org/#/c/40748/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] gauging feelings on a new config option for powervm_lpar_operation_timeout

2013-08-12 Thread Matt Riedemann
Russell, with powervm the hypervisor (IVM) is running on a detached system, 
and depending on how many compute nodes are going through the same 
hypervisor, I would think the load would vary as it processes multiple 
requests.  In my runs with Tempest I haven't seen any timeouts in the 
power_off operation, but I have seen other problems which are making me 
conscious of timeout issues with the powervm driver, mainly around taking 
a snapshot of a running instance (but that's a different issue related 
more to disk performance on the IVM and network topology - still 
investigating).

Honestly, I just don't like hard-coded timeouts which can't be configured 
if the need arises.  I don't know why there is a timeout argument in the 
code that defaults to 30 seconds if it can't be overridden (or is 
overridden in the code).  We could do like the libvirt driver and just not 
have a timeout on stop/start, but that scares me for some reason.



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Russell Bryant 
To: openstack-dev@lists.openstack.org, 
Date:   08/12/2013 09:49 AM
Subject:Re: [openstack-dev] [nova] gauging feelings on a new 
config option for powervm_lpar_operation_timeout



On 08/11/2013 04:04 PM, Matt Riedemann wrote:
> While working on a patch to implement hard reboot for the powervm driver
> [1], I noticed that the stop_lpar (power_off) method has a timeout
> argument with a default value of 30 seconds but it's not overridden
> anywhere in the code and it's not configurable.  The start_lpar
> (power_on) method doesn't have a timeout at all.  I was thinking about
> creating a patch to (1) make start_lpar poll until the instance is
> running and (2) making the stop/start timeouts configurable with a new
> config option, something like powervm_lpar_operation_timeout that
> defaults to 60 seconds.

Why would someone change this?  What makes one person's environment need
a different timeout than another?  If those questions have good answers,
it's probably fine IMO.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-13 Thread Matt Riedemann
I have the same issue.  I run a subset of the tempest tests via nose on a 
RHEL 6.4 VM directly against the site-packages (not using virtualenv). I'm 
running on x86_64, ppc64 and s390x and have different issues on all of 
them (a mix of DB2 on x86_64 and MySQL on the others, and different 
nova/cinder drivers on each).  What I had to do was just make a nose.cfg 
for each of them and throw that into ~/ for each run of the suite.
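
The file itself is nothing fancy, just the standard nose config with an 
exclude regex per box, e.g. (test names here are made up):

    [nosetests]
    exclude=(test_foo_volume|test_bar_resize)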

The switch from nose to testr hasn't impacted me because I'm not using a 
venv.  However, there was a change this week that broke me on python 2.6 
and I opened this bug:

https://bugs.launchpad.net/tempest/+bug/1212071 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Ian Wienand 
To: openstack-dev@lists.openstack.org, 
Date:   08/13/2013 09:13 PM
Subject:[openstack-dev] Skipping tests in tempest via config file



Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem and it is preferable
to keep everything else running by skipping the problematic tests
while its being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by tempest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Skipping tests in tempest via config file

2013-08-14 Thread Matt Riedemann
I put a nose.cfg with my excludes in the user's home directory, and it works to run 
nosetests via the virtual environment like this:

tempest/tools/./with_venv.sh nosetests

I had to use the run_tests.sh script in tempest to create the virtual 
environment, but after that running tempest via nose within the venv 
wasn't a problem.  Of course, I didn't want to duplicate the test runs 
when setting up the venv via run_tests.sh, so I created it with the -p 
option to only run pep8 after it was set up (I'm not aware of a way to tell 
it not to run any tests and simply set up the environment).

Going back to the bug I opened last night for failures on py26, it's fixed 
with this patch: https://review.openstack.org/#/c/39346/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Alexius Ludeman 
To: OpenStack Development Mailing List 
, 
Date:   08/14/2013 11:39 AM
Subject:Re: [openstack-dev] Skipping tests in tempest via config 
file



I am running tempest on OEL 6.3 (aka RHEL 6.3) and I had issues with 
python 2.6 and skipException[3], so now I'm using python 2.7 just for 
tempest.  I also had difficulty with yum and python module dependency and 
made the transition to venv.  This has reduced the yum dependency 
nightmare greatly.

now that testr is default for tempest.  testr does not appear to support 
--exclusion[1] or --stop[2].

I have a work around for --exclusion, by:
testr list-tests | egrep -v regex-exclude-list > unit-tests.txt
testr --load-list unit-tests.txt

I do not have a work around for --stop.

[1]https://bugs.launchpad.net/testrepository/+bug/1208610
[2]https://bugs.launchpad.net/testrepository/+bug/1211926
[3]https://bugs.launchpad.net/tempest/+bug/1202815



On Tue, Aug 13, 2013 at 7:25 PM, Matt Riedemann  
wrote:
I have the same issue.  I run a subset of the tempest tests via nose on a 
RHEL 6.4 VM directly against the site-packages (not using virtualenv). 
 I'm running on x86_64, ppc64 and s390x and have different issues on all 
of them (a mix of DB2 on x86_64 and MySQL on the others, and different 
nova/cinder drivers on each).  What I had to do was just make a nose.cfg 
for each of them and throw that into ~/ for each run of the suite. 

The switch from nose to testr hasn't impacted me because I'm not using a 
venv.  However, there was a change this week that broke me on python 2.6 
and I opened this bug: 

https://bugs.launchpad.net/tempest/+bug/1212071 



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:Ian Wienand  
To:openstack-dev@lists.openstack.org, 
Date:08/13/2013 09:13 PM 
Subject:[openstack-dev] Skipping tests in tempest via config file 




Hi,

I proposed a change to tempest that skips tests based on a config file
directive [1].  Reviews were inconclusive and it was requested the
idea be discussed more widely.

Of course issues should go upstream first.  However, sometimes test
failures are triaged to a local/platform problem and it is preferable
to keep everything else running by skipping the problematic tests
while its being worked on.

My perspective is one of running tempest in a mixed CI environment
with RHEL, Fedora, etc.  Python 2.6 on RHEL doesn't support testr (it
doesn't do the setUpClass calls required by temptest) and nose
upstream has some quirks that make it hard to work with the tempest
test layout [2].

Having a common place in the tempest config to set these skips is
more convenient than having to deal with the multiple testing
environments.

Another proposal is to have a separate JSON file of skipped tests.  I
don't feel strongly but it does seem like another config file.

-i

[1] https://review.openstack.org/#/c/39417/
[2] https://github.com/nose-devs/nose/pull/717

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


<    1   2   3   4   5   6   7   8   9   10   >