[Openstack] running Tempest continuously in the openstack project?

2012-02-22 Thread Ionuț Arțăriși

Hello,

I'm trying to get started with Tempest and I would really like to be 
able to see a running setup somewhere. I'm seeing a lot of failing 
tests and I can't tell whether that's because of my setup, bugs in 
Tempest, or incompatibilities with the current code in the other 
components.


I looked at Jenkins, but it seems there are only two jobs related to 
Tempest there: a git-merge check and pep8. Does the OpenStack 
project actively run the full Tempest test suite anywhere?


It would be great if anyone could share a working configuration, or at 
least say whether they have Tempest running successfully and against 
which versions of nova, glance, swift, etc.


Thanks,
Ionuț

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] running Tempest continuously in the openstack project?

2012-02-27 Thread Ionuț Arțăriși

On 02/22/2012 07:17 PM, Jay Pipes wrote:

On 02/22/2012 10:49 AM, James E. Blair wrote:

Indeed, as soon as someone says "I have Tempest working on a system
configured by devstack with a repeatable process, here's how I did
it..." we'll start running it in Jenkins. But so far I've heard from
the Tempest developers that it's not quite ready yet.


That would be correct. There is still work needed to get things working
more consistently.



How can I help with this effort? Who should I contact?

Right now I'm having trouble setting up and running the tests against 
devstack (a configuration problem on my part, I suppose, but I blame it 
on the lack of documentation).


-Ionuț


Re: [Openstack] running Tempest continuously in the openstack project?

2012-02-27 Thread Ionuț Arțăriși

On 02/27/2012 05:13 PM, Daryl Walleck wrote:

I'm actively looking into any issues. I have all these tests passing locally in 
my environment, so the issues seem to be centered on people using devstack. 
I've made some merge proposals that will take care of a few of the issues. One 
issue that will certainly come up is that if you have rate limiting enabled, 
most of the tests will fail. I've also included the link to the Tempest 
documentation, which should help make using it a bit clearer. I'm working on 
updating this document as I use devstack so that I can either smooth over or 
enumerate issues that may come up when running the tests. If you run into 
anything, though, please file a report with the Tempest project and I'll 
have a look.

http://qa.openstack.org/integration.html

https://launchpad.net/tempest

Daryl



Can you give more details about your environment? What operating system 
are you running the tests on? Are all the tested openstack components 
the latest versions from master? What specific configuration changes have 
you made?


The documentation that you linked to does not explain how to actually 
set up the tests.


There are currently three README files in the repository which say to 
rename the tempest.conf.sample and config.ini.sample files and then 
edit the variables to fit your test environment, but none of them 
explain what that actually means. Which values need to be changed? Where 
would I get the information to be able to set them?
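
I assume it boils down to things like the identity endpoint, credentials 
and a reference image/flavor for the cloud under test - something along 
these lines, with section and option names invented purely for 
illustration:

    # Invented names, only to show the kind of information involved --
    # the real option names are in tempest.conf.sample / config.ini.sample.
    [identity]
    auth_url = http://127.0.0.1:5000/v2.0/
    username = demo
    password = secret
    tenant_name = demo

    [compute]
    image_ref = <id of an image registered in glance>
    flavor_ref = 1

But that is guesswork on my part.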


I'm trying hard to understand how all of this works, but without a 
canonical working configuration I don't know which errors are mine and 
which are tempest bugs.


-Ionuț



[Openstack] Backporting test fixes

2012-05-02 Thread Ionuț Arțăriși

I recently submitted a few fixes to the test suites of various components
of openstack. These fixes are being merged into master, but the code
remains broken in the stable/essex branch. Review requests for 
stable/essex either get rejected or get stuck in limbo because people 
don't seem to know what to do with them.


I am aware of the procedure for backporting fixes[1], but I think it
does not deal with this issue correctly.

Fixes to test scripts are for our benefit, the developers'. They don't
affect the users in any way. I don't think test code should be lumped
together with application code when deciding what changes are allowed.

Accepting test changes only on the master branch reflects a belief that
tests are only used during development. I don't think that's true.
Tests, especially functional tests, are also incredibly useful during
maintenance: for example, they help us test against a different library
version or distro than the one used for development, and with different
deployment configurations.

I suspect we're not the only downstream running the various test suites
against their own packaged versions of different openstack branches.
Backporting these changes not only spares other projects the time of
running into these bugs on the stable branches later, it also gives all
of us the benefit of not having to fork the project just to attach our
patches. On the other hand, blocking test backports removes the
incentive that downstream projects have to report those bugs and send
fixes for them upstream.

So can we talk about separating the tests from the application code at
least as far as the backports are concerned? What about having the 
'test/' directory as a git submodule?
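
Roughly, I imagine something like the following (the repository URL is 
just a placeholder to illustrate the idea):

    git submodule add <tests-repo-url> test
    git submodule update --init

so the tests tree could move forward independently of the application 
code on a stable branch.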


Or maybe I don't understand this problem enough. What are the downsides
to backporting test-only fixes? Do they really outweigh the advantages?

Thanks,
Ionuț


[1] http://wiki.openstack.org/StableBranch



Re: [Openstack] Backporting test fixes

2012-05-03 Thread Ionuț Arțăriși

Hi Mark, thanks for your answer.

On 05/03/2012 10:25 AM, Mark McLoughlin wrote:

Hi,

On Wed, 2012-05-02 at 14:37 +0200, Ionuț Arțăriși wrote:

I recently submitted a few fixes to the test suite in various components
of openstack.


Thanks for that!



These fixes are being merged into master, but the code remains broken in
the stable/essex branch. Review requests for stable/essex either get
rejected or get stuck in limbo because people don't seem to know what to
do with them.


We're talking about this?

   https://review.openstack.org/#/c/6619/



and this: https://bugs.launchpad.net/keystone/+bug/983800/ (comment #2)



The issue here is that Swift is different :)

Swift's releases are intended to be stable updates and AFAIK the Swift
team feels that they don't need to do separate stable update releases.
So the stable branch process we've put in place for the other projects
doesn't apply to Swift.

I've added a note to the wiki page about this.



Ok, good to know. Thanks.


I am aware of the procedure for backporting fixes[1], but I think it
does not deal with this issue correctly.

Fixes to test scripts are for our benefit, the developers'. They don't
affect the users in any way. I don't think test code should be lumped
together with application code when deciding what changes are allowed.

Accepting test changes only on the master branch reflects a belief that
tests are only used during development. I don't think that's true.
Tests, especially functional tests, are also incredibly useful during
maintenance: for example, they help us test against a different library
version or distro than the one used for development, and with different
deployment configurations.

I suspect we're not the only downstream running the various test suites
against their own packaged versions of different openstack branches.
Backporting these changes not only spares other projects the time of
running into these bugs on the stable branches later, it also gives all
of us the benefit of not having to fork the project just to attach our
patches. On the other hand, blocking test backports removes the
incentive that downstream projects have to report those bugs and send
fixes for them upstream.

So can we talk about separating the tests from the application code at
least as far as the backports are concerned? What about having the
'test/' directory as a git submodule?

Or maybe I don't understand this problem enough. What are the downsides
to backporting test-only fixes? Do they really outweigh the advantages?


I think you've misinterpreted the response to your review so I won't go
into the specifics of your points, but here's how I think about the
backporting of unit tests:

   - Unit tests' main value, IMHO, is preventing regressions during
     development.

 Adding new unit tests can occasionally find new bugs too but, if I
 want to find bugs on a stable branch, it would be functional tests
 I'd write.

   - As such, I don't think a concerted effort across the project to
 systematically backport unit tests from master to the stable branch
 is time well spent.


I'm not advocating that someone spend time looking through all the 
unit test patches in master and backporting them to stable. What I'm 
asking is that when someone submits a test-fix backport, it be 
*accepted* into a stable branch after the proper review process.




 However, anyone is welcome to maintain their own tree based off the
 stable branch and use that as a venue for finding bugs with new
 unit tests. I'd hope those tests would go into master first, though.

   - Also, a worthwhile goal on a stable branch is to keep churn to a
     minimum. You could argue that tests deserve a free pass because
     changes to them can't introduce regressions but, when it comes to
     a stable branch, I'm sceptical that any patch is zero risk, never
     mind a whole class of (generally quite large) patches.



I think you're again talking about backporting all test fixes in bulk. 
But apart from that, I don't really see how testsuite-only patches, 
which go through the same review process as normal patches, could break 
anything in application code or be deemed risky.




     Part of keeping churn to a minimum on a stable branch is that
     reviewers' default answer should be "no". Unless a patch is a
     safe fix for a high-impact, user-visible issue, it doesn't
     belong in the stable branch.

   - That said, if someone adds a new test to master and it uncovers a
     significant user-visible bug (or uncovers a bug and adds a test for
     it), then I think it makes sense to backport the unit test to the
     stable branch along with the bug fix. It helps verify that the
     backported patch does fix the bug.

   - And finally, I think it's sane for downstreams to run the unit
 tests. As you say, it can catch issues where downstream is using a
 different version of a library than upstream. If a downstream

Re: [Openstack] Swift on Webob-1.2 anyone?

2012-05-09 Thread Ionuț Arțăriși

On 05/09/2012 05:45 AM, Pete Zaitcev wrote:

I ran .unittests on a box with python-webob-1.2b3 and it throws left
and right: errors=72, failures=6. I'm wondering if anyone is working
on adapting Swift for WebOb 1.2 and if a patch is available somewhere.
I see Ionut fixed lp:984042, but clearly it wasn't enough.

If nobody's done it yet, I suppose I could take a swing at it. New webob
is required to address the S3-with-colon problem lp:936998.

-- Pete


We started with 1.2b3, but saw too many failing unit tests, so we downgraded 
to 1.1.1, which we now have working.


It would be great to have it on 1.2b3, though.

-Ionuț



Re: [Openstack] [Keystone] Quotas: LDAP Help

2012-07-25 Thread Ionuț Arțăriși

On 07/17/2012 11:33 PM, Joseph Heck wrote:

That's the general direction I was going to head in with the Active Directory 
backend I'm hacking on. Chris Hoge of UOregon presented today (@ OSCON) on a 
local keystone hack they did to enable LDAP AuthN with a fallback to an 
SQL-based system for their scientific computing cluster - it follows a very 
similar model.

-joe

On Jul 17, 2012, at 2:16 PM, Tim Bell tim.b...@cern.ch wrote:

+1. The corporate LDAP should be read-only as a source of users, roles and
attributes. Updating the corporate LDAP is not an option in many
environments that could otherwise significantly benefit from the structured
directory information available.

Thus, at minimum, allow a read-only LDAP plus a local DB store for any
openstack-specific information that needs updating.

Tim


-Original Message-
From: openstack-bounces+tim.bell=cern...@lists.launchpad.net
[mailto:openstack-bounces+tim.bell=cern...@lists.launchpad.net] On Behalf
Of Ryan Lane
Sent: 17 July 2012 20:43
To: Adam Young
Cc: Joseph Heck; openstack
Subject: Re: [Openstack] [Keystone] Quotas: LDAP Help


I haven't been thinking about quotas, so bear with me here. A few
thoughts:

Certain deployments might not be able to touch the LDAP backend. I am
thinking specifically of cases where there is a corporate AD/LDAP server. I
tried to keep the schema dependency simple enough that it could be
layered onto a read-only scenario. If we put quotas into LDAP, it
might break on those deployments.


Many, many deployments won't be able to. Applications should generally
assume they have read-only access to LDAP.


I can see that we don't want to define them in the Nova database, as
Swift might not have access to that, and Swift is going to be one of
the primary consumers of quotas. I am assuming Quantum will have them
as well.

As you are aware, there is no metadata storage in the LDAP driver;
instead it is generated from the tenant and role information on the
fly. There is no place to store metadata in groupOfNames, which is the
lowest-common-denominator grouping used for tenants. Probably the most
correct thing to do would be to use a seeAlso attribute that points to
where the quota data is stored.


Let's try not to force things into attributes if possible.

When LDAP is used, is the SQL backend not used at all? Why not store quota
info in Keystone's SQL backend, but pull user info from LDAP, when
enabled?

We should only consider storing something in LDAP if it's going to be
reused by other applications. LDAP has a strict schema for exactly this
purpose. If the quota information isn't directly usable by other
applications we shouldn't store it in LDAP.

Many applications with an LDAP backend also have an SQL backend, and use
the SQL as primary storage for most things, and as a cache for LDAP, if
it's used. I think this is likely a sane approach here, as well.

- Ryan



Hi,

I just wanted to add a bit to this thread. We're currently working on a 
hybrid backend between LDAP and SQL. I have a working version for a 
specific setup in which the user accounts are stored in LDAP, but 
tenants and roles are all stored in SQL together with other openstack 
user accounts such as the nova admin account.


I basically just Frankensteined the two backends together for user 
processing and left everything else to be handled by the SQL backend. 
I'd like to hear other people's opinions on this, or about alternative 
implementations.


https://gist.github.com/3176390
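
In rough outline the driver looks something like this (a simplified 
sketch rather than the actual code in the gist; the module paths and 
method names below are only indicative):

    # Simplified sketch only -- the real implementation is in the gist
    # above; module paths and method names are indicative, not exact.
    from keystone.identity.backends import ldap as ldap_backend
    from keystone.identity.backends import sql as sql_backend


    class Identity(sql_backend.Identity):
        """Users come from LDAP, everything else comes from SQL."""

        def __init__(self, *args, **kwargs):
            super(Identity, self).__init__(*args, **kwargs)
            self.ldap = ldap_backend.Identity()

        def get_user(self, user_id):
            # Look the user up in the corporate LDAP first and fall back
            # to the local SQL store for service accounts such as the
            # nova admin user.
            try:
                return self.ldap.get_user(user_id)
            except Exception:
                return super(Identity, self).get_user(user_id)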

-Ionuț



[Openstack] Keystone rate-limiting with turnstile

2012-09-11 Thread Ionuț Arțăriși

Hi!

I've been working on a solution for rate-limiting requests to keystone. 
I based it on the existing turnstile [0] and nova_limits [1] projects 
by Kevin L. Mitchell. The project is basically a refactoring of 
nova_limits to work with keystone, so I've named it keystone_limits:


https://github.com/mapleoin/keystone_limits

Turnstile already provides a distributed rate-limiting WSGI middleware 
with a Redis backend. The way keystone_limits works is that it tracks the 
IPs (from the REMOTE_ADDR WSGI variable) of incoming requests to keystone 
and matches them against a set of rules. The rules are defined in an XML 
document which also describes the rate limits themselves, such as: 90 POST 
requests per minute to the '/tokens' URL. If a request exceeds a limit, a 
'413 Request Entity Too Large' error response is returned.
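
For illustration only, a rule of that shape might look roughly like the 
following (element and attribute names here are invented, not the actual 
keystone_limits schema - see the README for the real format):

    <?xml version="1.0"?>
    <!-- invented names, for illustration only -->
    <limits>
        <limit verb="POST" uri="/tokens" value="90" unit="MINUTE"/>
    </limits>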


Now there's still a problem. In the case of Dashboard, for example, all 
the users will show up to keystone with the same IP, which is the IP of 
the Dashboard server. I've opened a bug [2] and proposed changing both 
Dashboard and python-keystoneclient so that the original IP address of 
the user is forwarded and makes it safely to keystone.
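
On the keystone_limits side that would mean preferring a forwarded 
address over REMOTE_ADDR - roughly something like this (an illustration 
only, assuming the conventional X-Forwarded-For header ends up being 
used; not necessarily what the final patch will look like):

    # Illustration only: prefer a forwarded client address, if present,
    # over the direct peer address of the request.
    def client_address(environ):
        forwarded = environ.get('HTTP_X_FORWARDED_FOR')
        if forwarded:
            # the left-most entry is the original client
            return forwarded.split(',')[0].strip()
        return environ.get('REMOTE_ADDR')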


To start using it, you should check out the README. It should be pretty 
clear, but if there's anything muddy, don't hesitate to ask.


I'd appreciate any feedback or patches or help on the launchpad bug.

-Ionuț

[0] https://github.com/klmitch/turnstile
[1] https://github.com/klmitch/nova_limits
[2] https://bugs.launchpad.net/keystone/+bug/1046837
