Re: [openstack-dev] Building on Debian: Havana unit tests at build time report

2013-10-17 Thread Roman Podolyaka
Hi all,

Being a bit familiar with both SQLAlchemy and sqlalchemy-migrate, I decided
to look into the failure of the migration tests in Nova when run with
SQLAlchemy 0.8.x. The long story is here:
https://bugs.launchpad.net/sqlalchemy-migrate/+bug/1241038

TL;DR
1. It's really an issue in sqlalchemy-migrate and has nothing to do with
Nova.
2. sqlalchemy-migrate DOES seem to support SQLAlchemy 0.8.x, but I believe
we are going to face more and more similar issues.
3. It's a downgrade, so it should be a minor issue for packaging of the new
OpenStack release in Debian.
4. Everyone's life would be much easier if we dropped migration support
for SQLite. Alembic doesn't support ALTER for SQLite at all, on purpose. And
I would really like us to switch to Alembic in the following releases.
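On the SQLite point: its ALTER TABLE is so limited that dropping a column
means rebuilding the whole table, which is what makes downgrade scripts
painful there. A sketch of that rebuild pattern (the table and column names
are invented for illustration, not taken from Nova's schema):

```python
# "Drop" a column on SQLite by rebuilding the table: create a new table
# without the column, copy the rows over, drop the old table, rename.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE services (id INTEGER PRIMARY KEY, topic TEXT, extra TEXT)")
conn.execute("INSERT INTO services (id, topic, extra) VALUES (1, 'compute', 'x')")

conn.executescript("""
    CREATE TABLE services_new (id INTEGER PRIMARY KEY, topic TEXT);
    INSERT INTO services_new (id, topic) SELECT id, topic FROM services;
    DROP TABLE services;
    ALTER TABLE services_new RENAME TO services;
""")

cols = [row[1] for row in conn.execute("PRAGMA table_info(services)")]
# cols is now ["id", "topic"]: the "extra" column is gone, data preserved
```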

I'll try to fix this in sqlalchemy-migrate and maybe we could release the
fix later.

Thanks,
Roman


On Thu, Oct 17, 2013 at 6:31 PM, Thomas Goirand  wrote:

> On 10/17/2013 09:11 PM, Monty Taylor wrote:
> > I understand what you are saying and I also understand your frustration.
> > However, OpenStack does not, as of yet, support SQLAlchemy 0.8, and as
> > you can see, the requirements file does, in fact, communicate reality,
> > we depend on <0.8.
>
> It does, with the exception of this little issue when running on Sid with
> SQLite 3.
>
> > I support the bug being fixed and the requirement being raised, but it
> > has to happen in that order.
>
> Sure!
>
> >> So yes, we need a fix for the above problem, and Nova needs to support
> >> SQLAlchemy 0.8.2 which is what we have in Sid [1]. If I remember well,
> >> there's also the problem on RPM based systems. BTW, it looks like it is
> >> an isolated unique problem, and it seems to be the only one left in this
> >> release of OpenStack. It is also new (Nova Havana B3 didn't have the
> >> issue). So I really think it shouldn't be hard to fix the problem.
> >>
> >> Also, note that the issue isn't only in Debian, it's also a problem for
> >> Ubuntu 2013.10, which also has SQLAlchemy 0.8.2. [2]
> >
> > Fascinating. What have they done to make that work?
>
> I'm not sure what you are talking about, since I don't agree that
> OpenStack doesn't work with SQLAlchemy 0.8.2 (apart from this one bug).
>
> > And why has none of
> > that work made it into OpenStack so that we could raise the SQLAlchemy
> > requirement?
>
> If you didn't know, I already have Grizzly (currently in Sid) working
> with SQLAlchemy 0.8.2... (I just backported from Havana to Grizzly a few
> patches from upstream, namely for Cinder and Heat IIRC.) These were
> reported by me during this summer, and fixed in July for all projects,
> and just right before Havana B3 for Heat.
>
> >> I really hope that for the next release cycle, the above will be taken
> >> into consideration, and that someone will come up with a fix quickly for
> >> this release. Maybe *now* is time to switch the gate to SQLAlchemy 0.8.2
> >> for the Icehouse branch?
> >
> > Can't change the req until it works - but yes, I agree with you, we
> > clearly need to upgrade.
>
> Let's have that one fixed, and let's switch...
>
> > Thanks for your help and attention Thomas! I appreciate all the work!!!
>
> My pleasure! :)
>
> Thomas
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] odd behaviour from sqlalchemy

2013-10-12 Thread Roman Podolyaka
Hello Chris,

I thought it was a bug in the SQLAlchemy code, so I wrote a snippet [1] to
check my assumption, but I haven't managed to reproduce the problem with
SQLAlchemy versions 0.7.9, 0.7.10 or 0.8.2.

I would suggest starting by enabling logging of all the SQL queries
SQLAlchemy issues [2] and, if needed, examining the session/model instance
state with pdb.
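As a generic illustration (this is plain SQLAlchemy, not nova's own
configuration option from [2]), query logging can be switched on like this:

```python
# Log every SQL statement SQLAlchemy issues by raising the level of the
# "sqlalchemy.engine" logger (equivalent to create_engine(..., echo=True)).
import logging

import sqlalchemy as sa

logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

engine = sa.create_engine("sqlite://")
with engine.connect() as conn:
    # The statement below is now printed to the log before it is executed.
    result = conn.execute(sa.text("SELECT 1 + 1")).scalar()
```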

As for your second question: you can set a column to its current value by
using the literal_column() expression [3].
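A minimal sketch of the literal_column() trick (the table here is a made-up
stand-in, not nova's actual Service model): because updated_at appears in
the SET clause, any onupdate hook is suppressed, yet the value written is
the column's own current value, so no read-then-write race is involved:

```python
# Update a row while keeping updated_at unchanged: literal_column() makes
# the database emit "SET ..., updated_at = updated_at".
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
meta = sa.MetaData()
services = sa.Table(
    "services", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("topic", sa.String(255)),
    sa.Column("updated_at", sa.DateTime),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(services.insert().values(
        id=1, topic="old", updated_at=sa.func.now()))

with engine.connect() as conn:
    before = conn.execute(sa.text("SELECT updated_at FROM services")).scalar()

with engine.begin() as conn:
    conn.execute(
        services.update()
        .where(services.c.id == 1)
        .values(topic="new", updated_at=sa.literal_column("updated_at"))
    )
```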

Can you elaborate a bit more on your use case? Why do you update the table
row but keep the updated_at column value unchanged?

Thanks,
Roman

[1] http://paste.openstack.org/show/48335/
[2]
https://github.com/openstack/nova/blob/stable/grizzly/nova/openstack/common/db/sqlalchemy/session.py#L296
[3]
https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/api.py#L1088


On Sat, Oct 12, 2013 at 2:31 AM, Chris Friesen
wrote:

> Hi,
>
> I'm using grizzly with sqlalchemy 0.7.9.
>
> I'm seeing some funny behaviour related to the automatic update of
> "updated_at" column for the Service class in the sqlalchemy model.
>
> I added a new column to the Service class, and I want to be able to update
> that column without triggering the automatic update of the "updated_at"
> field.
>
> While trying to do this, I noticed the following behaviour.  If I do
>
> values = {'updated_at': new_value}
> self.service_update(context, service, values)
>
> this sets the "updated_at" column to new_value as expected.  However, if I
> do
>
> values = {'updated_at': new_value, 'other_key': other_value}
> self.service_update(context, service, values)
>
> then the other key is set as expected, but "updated_at" gets auto-updated
> to the current timestamp.
>
> The "onupdate" description in the sqlalchemy docs indicates that it "will
> be invoked upon update if this column is not present in the SET clause of
> the update".  Anyone know why it's being invoked even though I'm passing in
> an explicit value?
>
>
> On a slightly different note, does anyone have a good way to update a
> column in the Service class without triggering the "updated_at" field to be
> changed?  Is there a way to tell the database "set this column to this
> value, and set the updated_at column to its current value"?  I don't want
> to read the "updated_at" value and then write it back in another operation
> since that leads to a potential race with other entities accessing the
> database.
>
> Thanks,
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] what is the code organization of nova

2013-10-09 Thread Roman Podolyaka
Hello Aparna,

I would suggest starting with the Nova Developer Guide [1] to understand
what Nova is in general, what services it consists of, and so on.

There are different approaches to learning new stuff. I personally prefer
the 'top-down' one, i.e. starting from high-level concepts and gradually
proceeding to concrete details. If you like this approach, you could
install devstack [2], boot a few VMs and start learning Nova by checking
the logs of its services: API, scheduler, compute, etc. That way you can
follow the boot of a new VM from the request to the Nova API all the way to
a qemu process running on the compute node.

When you are familiar with the basic concepts of how Nova works, it might
be about time to actually check the code. Basically, it looks like this
(I've described a few important subsystems):

rpodolyaka@rpodolyaka-pc:~/sandbox/nova/nova$ tree -L 1 -d

├── api          - OpenStack Compute/EC2 APIs are defined here; mapping of HTTP requests to the functions which handle them is done here too
├── CA
├── cells
├── cert
├── cloudpipe
├── cmd          - executables of Nova (nova-api, scheduler, compute, etc) are defined here
├── compute      - implementation of the nova-compute service
├── conductor    - implementation of the nova-conductor service
├── console
├── consoleauth
├── db           - DB access layer (also known as DBAPI)
├── hacking
├── image
├── ipv6
├── keymgr
├── locale
├── network      - implementation of nova-network + Neutron bindings
├── objects
├── objectstore
├── openstack    - common code for all OpenStack projects (utils, logs, DB, RPC, etc)
├── pci
├── scheduler    - implementation of the Nova scheduler
├── servicegroup
├── spice
├── storage
├── tests        - tests (mostly unit) of Nova live here
├── virt         - bindings to supported hypervisors (libvirt, xen, etc)
├── vnc
└── volume

I hope this helps.

Thanks,
Roman

[1] http://docs.openstack.org/developer/nova/devref/
[2] http://devstack.org/


On Wed, Oct 9, 2013 at 4:58 AM, Aparna Datt  wrote:

> Hi, I was going through the code of Nova on GitHub... but there are no
> README files available regarding the code organization of Nova. Can anyone
> provide me with a link from where I can begin reading the code? Or can
> anyone help me with pointers on which files/folders Nova begins its
> processing from?
>
> Regards,
>
> Aparna
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] Requirements syncing job is live

2013-10-02 Thread Roman Podolyaka
Hello ZhiQiang,

I'm not sure which HEADs you mean: oslo-incubator doesn't contain git
submodules, but rather regular Python packages.

On the other hand, oslo.version/oslo.messaging/oslo.* are separate
libraries with their own releases, so syncing the global requirements
will effectively make projects use newer versions of those libs.

Thanks,
Roman


On Wed, Oct 2, 2013 at 5:02 AM, ZhiQiang Fan  wrote:

> great job! thanks
>
> (how about auto sync from oslo too?
> - projects.txt: projects want to be automatically synced from oslo
> - heads.txt: HEAD for each module in oslo
>
> whenever a module maintainer thinks the current module is strong enough to
> publish, he/she can edit that module's line in heads.txt, and then
> jenkins will propose a sync patch for the projects listed in projects.txt
>
> this behavior would be dangerous, since it may pass the gate test when
> merged but cause internal bugs which are not well covered by tests)
>
>
> On Wed, Oct 2, 2013 at 1:27 AM, Monty Taylor  wrote:
>
>> Hey all!
>>
>> The job to automatically propose syncs from the openstack/requirements
>> repo went live today - as I'm sure you all noticed, since pretty much
>> everyone got a patch of at least some size.
>>
>> The job works the same way as the translations job - it will propose a
>> patch any time the global repo changes - but if there is already an
>> outstanding change that has not been merged, it will simply amend that
>> change. So there should only ever be one change per branch per project
>> in the topic openstack/requirements submitted by the jenkins user.
>>
>> If a change comes in and you say to yourself "ZOMG, that version would
>> break us" - then you should definitely go and propose an update to the
>> global list itself, which is in the global-requirements.txt file in the
>> openstack/requirements repo.
>>
>> The design goal, as discussed at the last two summits, is that we should
>> converge on alignment by the release at the very least. With this and
>> the changes that exist now in the gate to block non-aligned
>> requirements, once we get aligned, we shouldn't probably be too far out
>> from each other moving forward.
>>
>> Additionally, the list of projects to receive updates is managed in a
>> file, projects.txt, in the openstack/requirements repo. If you are
>> running a project and would like to receive syncing patches, feel free
>> to add yourself to the list.
>>
>> Enjoy!
>> Monty
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> blog: zqfan.github.com
> git: github.com/zqfan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] configapplier licensing

2013-09-19 Thread Roman Podolyaka
Hi Thomas,

I believe all OpenStack projects (including diskimage-builder [1] and
os-apply-config [2]) are distributed under the Apache license.

Thanks,
Roman

[1] https://github.com/openstack/diskimage-builder/blob/master/LICENSE
[2] https://github.com/openstack/os-apply-config/blob/master/LICENSE


On Fri, Sep 20, 2013 at 8:02 AM, Thomas Goirand  wrote:

> Hi,
>
> While trying to package diskimage-builder for Debian, I saw that in some
> files, it's written "this file is release under the same license as
> configapplier". However, I haven't been able to find the license of
> configapplier anywhere.
>
> So, under which license is configapplier released? I need this
> information to populate the debian/copyright file before uploading to
> Sid (to pass the NEW queue).
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] Oslo.db possible module?

2013-09-16 Thread Roman Podolyaka
Hi Joshua,

This looks great!

We should definitely consider making this the base of oslo.db, as the DB
code currently in oslo-incubator depends on oslo.config and has a few
drawbacks (e.g. global engine and session instances).

We could discuss this in detail at the summit (Boris has already proposed
a session for an oslo.db lib - http://summit.openstack.org/cfp/details/13).

Thanks,
Roman


On Fri, Sep 13, 2013 at 9:04 PM, Joshua Harlow wrote:

>  Hi guys,
>
>  In my attempt to not use oslo.cfg in taskflow I ended up re-creating a
> lot of what oslo-incubator db has but without the strong connection to
> oslo.cfg,
>
>  I was thinking that a majority of this code (which is also partially
> ceilometer influenced) could become oslo.db,
>
>
> https://github.com/stackforge/taskflow/blob/master/taskflow/persistence/backends/impl_sqlalchemy.py
> (search for SQLAlchemyBackend as the main class).
>
>  It should be generic enough that it could be easily extracted to be the
> basis for oslo.db if that is desirable,
>
>  Thoughts/comments/questions welcome :-)
>
>  -Josh
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] Fwd: [Openstack] Neutron debug in eclipse (devstack): anyone ??

2013-09-13 Thread Roman Podolyaka
Oops... I replied to Otavio directly rather than to the mailing list...

-- Forwarded message --
From: Roman Podolyaka 
Date: Fri, Sep 13, 2013 at 1:05 PM
Subject: Re: [Openstack] Neutron debug in eclipse (devstack): anyone ??
To: Otávio Augusto 


Hello Otavio,

You must be facing problems with eventlet's monkey patching of the Python
threading module.

Try modifying the "neutron/server/__init__.py" file by replacing the line
"eventlet.monkey_patch()" with "eventlet.monkey_patch(os=False,
thread=False)".

(This solution is actually taken from
https://wiki.openstack.org/wiki/NeutronDevelopment)

Thanks,
Roman


On Fri, Sep 13, 2013 at 2:29 AM, Otávio Augusto 
wrote:
>
> Anyone able to debug neutron on eclipse? I'd like to run neutron's server
> in pydev but I'm still missing something.
> I've followed examples available on the internet for horizon and they
> work but I might be missing something for neutron.
> I'd appreciate any help on this.
>
> Regards
>
> /O
> Otavio Augusto
>
> "A human being should be able to change a diaper, plan an invasion,
> butcher a hog, conn a ship, design a building, write a sonnet, balance
> accounts, build a wall, set a bone, comfort the dying, take orders, give
> orders, cooperate, act alone, solve equations, analyze a new problem, pitch
> manure, program a computer, cook a tasty meal, fight efficiently, die
> gallantly. Specialization is for insects."
> Robert A. Heinlein
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to: openst...@lists.openstack.org
> Unsubscribe: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-12 Thread Roman Podolyaka
I can't agree more with Robert.

Even if it were possible to downgrade all migrations without data loss, it
would still be necessary to make backups before a DB schema
upgrade/downgrade.

E.g. MySQL doesn't support transactional DDL. So if a migration script
can't be executed successfully for whatever reason (let's say we haven't
tested it well enough on real data and it turns out it has a few bugs),
you will end up in a situation where the migration is partially applied...
And migrations can fail before the backup tables are created, during
this process, or after it.
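To make the MySQL caveat concrete by contrast: SQLite does wrap DDL in
transactions, so a sketch like the following (invented table names, nothing
from Nova's schema) can cleanly undo a half-applied "migration" — exactly
what MySQL cannot do:

```python
# A two-step "migration" whose second step fails. Because SQLite's DDL is
# transactional, ROLLBACK also undoes step 1; on MySQL the first ALTER
# would remain permanently applied, leaving the schema half-migrated.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None              # manage transactions explicitly
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("ALTER TABLE instances ADD COLUMN hostname TEXT")   # step 1 ok
try:
    conn.execute("ALTER TABLE no_such_table ADD COLUMN x TEXT")  # step 2 fails
    conn.execute("COMMIT")
except sqlite3.OperationalError:
    conn.execute("ROLLBACK")             # step 1 is rolled back too

cols = [row[1] for row in conn.execute("PRAGMA table_info(instances)")]
```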

Thanks,
Roman


On Thu, Sep 12, 2013 at 8:30 AM, Robert Collins
wrote:

> I think having backup tables adds substantial systematic complexity,
> for a small use case.
>
> Perhaps a better answer is to document in 'take a backup here' as part
> of the upgrade documentation and let sysadmins make a risk assessment.
> We can note that downgrades are not possible.
>
> Even in a public cloud doing trunk deploys, taking a backup shouldn't
> be a big deal: *those* situations are where you expect backups to be
> well understood; and small clouds don't have data scale issues to
> worry about.
>
> -Rob
>
> On 12 September 2013 17:09, Joshua Hesketh 
> wrote:
> > On 9/4/13 6:47 AM, Michael Still wrote:
> >>
> >> On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
> >>  wrote:
> >>>
> >>> +1 I think we should be reconstructing data where we can, but keeping
> >>> track of
> >>> deleted data in a backup table so that we can restore it on a downgrade
> >>> seems
> >>> like overkill.
> >>
> >> I guess it comes down to use case... Do we honestly expect admins to
> >> regret an upgrade and downgrade instead of just restoring from
> >> backup? If so, then we need to have backup tables for the cases where
> >> we can't reconstruct the data (i.e. it was provided by users and
> >> therefore not something we can calculate).
> >
> >
> > So assuming we don't keep the data in some kind of backup state is there
> a
> > way we should be documenting which migrations are backwards incompatible?
> > Perhaps there should be different classifications for data-backwards
> > incompatible and schema incompatibilities.
> >
> > Having given it some more thought, I think I would like to see migrations
> > keep backups of obsolete data. I don't think it is unforeseeable that an
> > administrator would upgrade a test instance (or less likely, a
> production)
> > by accident or not realising their backups are corrupted, outdated or
> > invalid. Being able to roll back from this point could be quite useful. I
> > think potentially more useful than that though is that if somebody ever
> > needs to go back and look at some data that would otherwise be lost it is
> > still in the backup table.
> >
> > As such I think it might be good to see all migrations be downgradable
> > through the use of backup tables where necessary. To couple this I think
> it
> > would be good to have a standard for backup table naming and maybe schema
> > (similar to shadow tables) as well as an official list of backup tables
> in
> > the documentation stating which migration they were introduced and how to
> > expire them.
> >
> > In regards to the backup schema, it could be exactly the same as the
> table
> > being backed up (my preference) or the backup schema could contain just
> the
> > lost columns/changes.
> >
> > In regards to the name, I quite like "backup_table-name_migration_214".
> The
> > backup table name could also contain a description of what is backed up
> (for
> > example, 'uuid_column').
> >
> > In terms of expiry they could be dropped after a certain release/version
> or
> > left to the administrator to clear out similar to shadow tables.
> >
> > Thoughts?
> >
> > Cheers,
> > Josh
> >
> > --
> > Rackspace Australia
> >
> >>
> >> Michael
> >>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread Roman Podolyaka
Hi Steven,

Nice catch! This is not the first time MySQL has played a joke on us...

I think we can fix this easily by adding a callback function which sets
the proper sql_mode value whenever a DB connection is retrieved from the
connection pool.

We'll provide a fix to oslo-incubator soon.
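A hedged sketch of that callback approach, using SQLAlchemy's connect
event (shown against SQLite with a PRAGMA as a stand-in statement; on
MySQL the callback would issue SET SESSION sql_mode = 'TRADITIONAL'
instead):

```python
# Run a setup statement on every new DB-API connection the pool creates.
import sqlalchemy as sa
from sqlalchemy import event

engine = sa.create_engine("sqlite://")

@event.listens_for(engine, "connect")
def set_session_options(dbapi_conn, connection_record):
    cursor = dbapi_conn.cursor()
    cursor.execute("PRAGMA foreign_keys = ON")  # MySQL: SET SESSION sql_mode...
    cursor.close()

with engine.connect() as conn:
    fk_enabled = conn.execute(sa.text("PRAGMA foreign_keys")).scalar()
```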

Thanks,
Roman

[1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/


On Wed, Sep 11, 2013 at 1:37 PM, Steven Hardy  wrote:

> Hi all,
>
> I'm investigating some issues, where data stored to a text column in mysql
> is silently truncated if it's too big.
>
> It appears that the default configuration of mysql, and the sessions
> established via sqlalchemy is to simply warn on truncation rather than
> raise an error.
>
> This seems to me to be almost never what you want, since on retrieval the
> data is corrupt and bad/unexpected stuff is likely.
>
> This AFAICT is a mysql specific issue[1], which can be resolved by setting
> sql_mode to "traditional"[2,3], after which an error is raised on
> truncation,
> allowing us to catch the error before the data is stored.
>
> My question is, how do other projects, or oslo.db, handle this atm?
>
> It seems we either have to make sure the DB enforces the schema/model, or
> validate every single value before attempting to store, which seems like an
> unreasonable burden given that the schema changes pretty regularly.
>
> Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
> this?
>
> Thanks!
>
> Steve
>
> [1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/
> [2]
> http://rpbouman.blogspot.co.uk/2009/01/mysqls-sqlmode-my-suggestions.html
> [3] http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] OpenStack + PyPy: Status and goals

2013-09-09 Thread Roman Podolyaka
Hi Alex,

That's really cool! I believe performance is not the only benefit we can
get from running OpenStack projects on PyPy. We can also improve the
overall "correctness" of our code (as PyPy behaves differently with
non-closed files, etc.), just like compiling a C/C++ app using different
compilers can reveal hidden errors.

And what about eventlet? Does it work well on PyPy? (It is used in Nova,
Neutron, etc.)

Thanks,
Roman


On Tue, Sep 10, 2013 at 12:28 AM, Alex Gaynor  wrote:

> Hi all,
>
> Many of you have probably seen me send review requests in the last few
> weeks
> about adding PyPy support to various OpenStack projects. A few people were
> confused by these, so I wanted to fill everyone in on what I'm up to :)
>
> First, for those who aren't familiar with what PyPy is: PyPy is an
> implementation of the Python language which includes a high performance
> tracing
> just-in-time compiler and which is faster than CPython (the reference, and
> most
> widely deployed, implementation) on almost all workloads.
>
> The current status is:
>
> Two major projects work, both Marconi and Swift, Marconi is gating against
> PyPy
> already, Swift isn't yet since I needed to fix a few small PyPy bugs and
> those
> aren't in a release yet, expect it soon :)
>
> In terms of results, I've observed 30% performance improvements on GET
> workloads for Swift under PyPy vs. CPython (other workloads haven't been
> benchmarked yet). I believe the Marconi folks have also observed some
> performance wins, but I'll let them speak to that, I don't have the full
> details.
>
> Many python-clients projects are also working out of the box and gating:
> including novaclient, swiftclient, marconiclient, ceilometerclient,
> heatclient,
> and ironicclient!
>
> There's a few outstanding reviews to add PyPy gating for cinderclient,
> troveclient, and glanceclient.
>
> In terms of future direction:
>
> I'm going to continue to work on getting more projects running and gating
> against PyPy.
>
> Right now I'm focusing a lot of my attention on improving Swift
> performance,
> particularly under PyPy, but also under CPython.
>
> I'm hoping some day PyPy will be the default way to deploy OpenStack!
>
>
> If you're interested in getting your project running on PyPy, or looking at
> performance under it, please let me know, I'm always interested in helping!
>
> Thanks,
> Alex
>
> --
> "I disapprove of what you say, but I will defend to the death your right
> to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
> "The people's good is the highest law." -- Cicero
> GPG Key fingerprint: 125F 5C67 DFE9 4084
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Nova][Baremetal][DB] Core review request: bugfix for 1221620

2013-09-09 Thread Roman Podolyaka
Robert,

Cool! That would be really nice! nova-bm functional tests are the bare
minimum we need.

Thanks,
Roman


On Mon, Sep 9, 2013 at 2:27 PM, Robert Collins wrote:

> We desperately want TripleO in the gate and are working towards it.
> However, doing that acceptably fast makes it non-trivial [nested VM's
> are kindof slow...]. If there was BYOI in RackSpace and in HP's
> non-Beta regions, and custom networking in both clouds, then we could
> avoid a layer of nesting and test the different TripleO layers
> independently (and concurrently) but sadly thats not the case yet, so
> we're having to work on tuning and the occasional hack.
>
> We hope to have at least functional tests for nova-bm in soon, with
> TripleO as a whole following on subsequently to that.
>
> -Rob
>
> On 9 September 2013 23:12, Roman Podolyaka 
> wrote:
> > Hi,
> >
> > I'm ok with both accepting this patch and reverting the commit, which
> > introduced the regression, but it would be really nice to have these DB
> > optimizations in Nova.
> >
> > As for your concern of accepting such optimizations. I don't think, it's
> a
> > problem of such patches themselves, but rather with the lack of
> > comprehensive tests of complex OpenStack installations in our CI (at the
> > same time I personally believe our CI is the best thing ever happened to
> > OpenStack :), CI team you really rock!).
> >
> > Anyway, TripleO-CI found this regression. Maybe we should consider adding
> > its job to Nova check/gate pipelines?
> >
> > Thanks,
> > Roman
> >
> >
> > On Mon, Sep 9, 2013 at 1:59 PM, Nikola Đipanov 
> wrote:
> >>
> >> On 09/09/13 11:25, Roman Podolyaka wrote:
> >> > Hi,
> >> >
> >> > There is a patch on review (https://review.openstack.org/#/c/45422/)
> >> > fixing https://bugs.launchpad.net/tripleo/+bug/1221620 which has
> >> > importance 'Critical' in Nova and TripleO (long story short: currently
> >> > Nova Baremetal deployments with more than one baremetal node won't
> >> > work).
> >> >
> >> > It would be really nice to have this patch reviewed by core
> developers,
> >> > so we can fix the bug ASAP.
> >> >
> >>
> >> Hey - thanks for responding quickly - I commented on the patch and tbh I
> >> am starting to be -1 on this due to issues mentioned on the review.
> >>
> >> I will accept that my take on this is too conservative :) and remove a
> >> -1 if needed to get this in, but at this point, I have some doubts
> >> whether this is the right approach.
> >>
> >> Cheers,
> >>
> >> N.
> >>
> >>
> >> > Thanks,
> >> > Roman
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Nova][Baremetal][DB] Core review request: bugfix for 1221620

2013-09-09 Thread Roman Podolyaka
Hi,

I'm OK with either accepting this patch or reverting the commit that
introduced the regression, but it would be really nice to have these DB
optimizations in Nova.

As for your concern about accepting such optimizations: I don't think it's
a problem with such patches themselves, but rather with the lack of
comprehensive tests of complex OpenStack installations in our CI (at the
same time, I personally believe our CI is the best thing that ever
happened to OpenStack :) CI team, you really rock!).

Anyway, TripleO-CI found this regression. Maybe we should consider adding
its job to the Nova check/gate pipelines?

Thanks,
Roman


On Mon, Sep 9, 2013 at 1:59 PM, Nikola Đipanov  wrote:

> On 09/09/13 11:25, Roman Podolyaka wrote:
> > Hi,
> >
> > There is a patch on review (https://review.openstack.org/#/c/45422/)
> > fixing https://bugs.launchpad.net/tripleo/+bug/1221620 which has
> > importance 'Critical' in Nova and TripleO (long story short: currently
> > Nova Baremetal deployments with more than one baremetal node won't work).
> >
> > It would be really nice to have this patch reviewed by core developers,
> > so we can fix the bug ASAP.
> >
>
> Hey - thanks for responding quickly - I commented on the patch and tbh I
> am starting to be -1 on this due to issues mentioned on the review.
>
> I will accept that my take on this is too conservative :) and remove a
> -1 if needed to get this in, but at this point, I have some doubts
> whether this is the right approach.
>
> Cheers,
>
> N.
>
>
> > Thanks,
> > Roman
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [Nova][Baremetal][DB] Core review request: bugfix for 1221620

2013-09-09 Thread Roman Podolyaka
Hi,

There is a patch on review (https://review.openstack.org/#/c/45422/) fixing
https://bugs.launchpad.net/tripleo/+bug/1221620 which has importance
'Critical' in Nova and TripleO (long story short: currently Nova Baremetal
deployments with more than one baremetal node won't work).

It would be really nice to have this patch reviewed by core developers, so
we can fix the bug ASAP.

Thanks,
Roman


Re: [openstack-dev] headsup - transient test failures on py26 ' cannot import name OrderedDict'

2013-07-19 Thread Roman Podolyaka
Hi guys,

Both 0.0.16 and 0.0.17 seem to have a broken test counter: they report
twice as many tests as were actually run.

Thanks,
Roman


On Thu, Jul 18, 2013 at 2:29 AM, David Ripton  wrote:

> On 07/17/2013 04:54 PM, Robert Collins wrote:
>
>> On 18 July 2013 08:48, Chris Jones  wrote:
>>
>>> Hi
>>>
>>> On 17 July 2013 21:27, Robert Collins  wrote:
>>>

 Surely that's fixable by having a /opt/ install of Python 2.7 built for
 RHEL? That would make life so much easier for all concerned, and is super

>>>
>>>
>>> Possibly not easier for those tasked with keeping OS security patches up
>>> to
>>> date, which is part of what a RHEL customer is paying Red Hat a bunch of
>>> money to do.
>>>
>>
>> I totally agree, which is why it would make sense for Red Hat to
>> supply the build of Python 2.7 :).
>>
>
> FYI,
>
> http://developerblog.redhat.com/2013/06/05/red-hat-software-collections-1-0-beta-now-available/
>
> (TL;DR : Red Hat Software Collections is a way to get newer versions of
> Python and some other software on RHEL 6.  It's still in beta though.)
>
> --
> David Ripton   Red Hat   drip...@redhat.com
>
>


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-17 Thread Roman Podolyaka
Hi,

Indeed, stable/grizzly contains the following code in the base test case
class (quantum/tests/base.py):

if os.environ.get('OS_STDOUT_NOCAPTURE') not in TRUE_STRING:
    stdout = self.useFixture(fixtures.StringStream('stdout')).stream
    self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))

so stdout is captured by default, and you should use OS_STDOUT_NOCAPTURE=1
instead.

The behavior was changed in this commit:
https://github.com/openstack/neutron/commit/91bd4bbaeac37d12e61c9c7b033f55ec9f1ab562
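For illustration, here is a stdlib-only sketch of what that base-class
snippet effectively does (the real code uses the fixtures library, as
quoted above; the helper name and the return-a-restore-callable shape are
made up for the example):

```python
import io
import sys

TRUE_STRINGS = ('1', 't', 'true', 'True')


def maybe_capture_stdout(environ):
    """Mimic the Grizzly-era base test class: replace sys.stdout with an
    in-memory stream unless OS_STDOUT_NOCAPTURE is set to a true-ish value.

    Returns (stream, restore); a real fixture would register `restore`
    as a cleanup instead of handing it back to the caller.
    """
    if environ.get('OS_STDOUT_NOCAPTURE') in TRUE_STRINGS:
        return sys.stdout, lambda: None
    original = sys.stdout
    captured = io.StringIO()
    sys.stdout = captured
    return captured, lambda: setattr(sys, 'stdout', original)
```

With capture active, everything a test prints, including an interactive
debugger prompt, ends up in the in-memory stream, which is why pdb/ipdb
appears to hang unless capture is disabled.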

Thanks,
Roman


On Wed, Jul 17, 2013 at 8:44 AM, Qiu Yu  wrote:

> On Wed, Jul 17, 2013 at 12:00 PM, Roman Podolyaka
>  wrote:
> > Hi,
> >
> > Ensure that stdout isn't captured by the corresponding fixture:
> >
> > OS_STDOUT_CAPTURE=0 python -m testtools.run
> >
> neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
> > Tests running...
>
> Thanks Roman, ipdb works fine with test cases in Neutron master
> branch. And if you run 'python -m testtools.run {testcase}', stdout is
> not captured by default.
>
> However, the issue still exists with Neutron stable/grizzly branch,
> even with OS_STDOUT_CAPTURE=0. Not quite sure which change in trunk
> resolved this issue.
>
> Thanks,
> --
> Qiu Yu
>


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-16 Thread Roman Podolyaka
Hi,

Ensure that stdout isn't captured by the corresponding fixture:

OS_STDOUT_CAPTURE=0 python -m testtools.run
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
Tests running...
>
/home/rpodolyaka/src/neutron/neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py(251)test_port_update()
250
--> 251 with contextlib.nested(
252 mock.patch.object(self.agent.int_br,
"get_vif_port_by_id"),


OS_STDOUT_CAPTURE=1 python -m testtools.run
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
Tests running...
==
ERROR:
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
--
Empty attachments:
  pythonlogging:''
  stdout

Traceback (most recent call last):
  File "neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py", line
248, in test_port_update
import ipdb

()

AttributeError: '_io.BytesIO' object has no attribute 'name'

Thanks,
Roman


On Wed, Jul 17, 2013 at 5:58 AM, Qiu Yu  wrote:

> Hi,
>
> I'm wondering is there any one ever tried using ipdb in Neutron test
> cases? The same trick that used to be working with Nova, cannot be
> applied in Neutron.
>
> For example, you can trigger one specific test case. But once ipdb
> line is added, following exception will be raised from ipython.
>
> Any thoughts? How can I make ipdb work with Neutron test case? Thanks!
>
> $ source .venv/bin/activate
> (.venv)$ python -m testtools.run
>
> quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update
>
> ==
> ERROR:
> quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update
> --
> Empty attachments:
>   pythonlogging:''
>   stderr
>   stdout
>
> Traceback (most recent call last):
>   File "quantum/tests/unit/openvswitch/test_ovs_quantum_agent.py",
> line 163, in test_port_update
> from ipdb import set_trace; set_trace()
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__init__.py",
> line 16, in 
> from ipdb.__main__ import set_trace, post_mortem, pm, run,
> runcall, runeval, launch_ipdb_on_exception
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__main__.py",
> line 26, in 
> import IPython
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/__init__.py",
> line 43, in 
> from .config.loader import Config
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/__init__.py",
> line 16, in 
> from .application import *
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/application.py",
> line 31, in 
> from IPython.config.configurable import SingletonConfigurable
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/configurable.py",
> line 26, in 
> from loader import Config
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/loader.py",
> line 27, in 
> from IPython.utils.path import filefind, get_ipython_dir
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/path.py",
> line 25, in 
> from IPython.utils.process import system
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/process.py",
> line 27, in 
> from ._process_posix import _find_cmd, system, getoutput, arg_split
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/_process_posix.py",
> line 27, in 
> from IPython.utils import text
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/text.py",
> line 29, in 
> from IPython.utils.io import nlprint
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py",
> line 78, in 
> stdout = IOStream(sys.stdout, fallback=devnull)
>   File
> "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py",
> line 42, in __init__
> setattr(self, meth, getattr(stream, meth))
> AttributeError: '_io.BytesIO' object has no attribute 'name'
>
>
> --
> Qiu Yu
>


[openstack-dev] [DB][Migrations] Switching to using of Alembic

2013-07-16 Thread Roman Podolyaka
Hello, stackers!

Most of you who are interested in the DB work in OpenStack have probably
read this thread [1], started by Boris Pavlovic, in which he gave an
overview of the work our team is doing to improve the DB code.

One of our main goals is to switch from sqlalchemy-migrate to Alembic for
applying DB schema migrations. sqlalchemy-migrate was unmaintained for a
long time, and even now that it has been adopted by the OpenStack
community, we'd be better off using a project that is actively supported
upstream (especially given that Alembic's author is the same person who
wrote SQLAlchemy).

The switch isn't going to be simple though. We have a few problems:

1) stable releases must be supported for some time, so we can't switch from
migrate to alembic immediately

The switch should probably be made when the existing migration scripts are
"compacted", so that all new migration scripts use Alembic. Switching
projects as big as Nova is hard, so we decided to gain some experience by
porting smaller ones first. Alexei Kornienko is currently working on
adding support for Alembic migrations to Ceilometer [3].

Our long-term goal is to switch all projects from sqlalchemy-migrate to
Alembic.

2) we rely on schema migrations to set up an SQLite database for running
tests

Nova and possibly other projects use schema migrations to set up an SQLite
database for running tests. Unfortunately, we can't use the model
definitions to generate the initial DB schema, because those definitions
do not correspond to the migration scripts. Our team is working on fixing
this issue [2].

As you may know, SQLite has limited support for ALTER DDL statements [4].
Nova's code contains a few auxiliary functions to make ALTER work on
SQLite. Unfortunately, Alembic deliberately doesn't support ALTER on
SQLite [5]. In order to run our tests on SQLite using Alembic as the
schema migration tool, we would first have to add ALTER support to it.

We are going to implement ALTER support in Alembic for SQLite in the next
few weeks.
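For context, the standard workaround on SQLite (what sqlalchemy-migrate's
helpers do, and roughly what such ALTER support boils down to) is to
rebuild the table. A rough sketch using the stdlib sqlite3 module; the
helper name and the table are made up, and column types/constraints are
omitted for brevity:

```python
import sqlite3


def drop_column_sqlite(conn, table, keep_columns):
    """Emulate 'ALTER TABLE ... DROP COLUMN', which SQLite lacks, by
    renaming the table, recreating it and copying the surviving data."""
    cols = ', '.join(keep_columns)
    cur = conn.cursor()
    cur.execute('ALTER TABLE %s RENAME TO %s_old' % (table, table))
    cur.execute('CREATE TABLE %s (%s)' % (table, cols))
    cur.execute('INSERT INTO %s (%s) SELECT %s FROM %s_old'
                % (table, cols, cols, table))
    cur.execute('DROP TABLE %s_old' % table)
    conn.commit()


conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE instances (id INTEGER, uuid TEXT, unused INTEGER)')
conn.execute("INSERT INTO instances VALUES (1, 'fake-uuid', 42)")
conn.commit()
drop_column_sqlite(conn, 'instances', ['id', 'uuid'])
```

Every ALTER beyond RENAME has to go through this copy dance, which is why
supporting it generically in Alembic is non-trivial.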

As always, your comments on the ML and in reviews are welcome.

Thanks,
Roman

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-July/011253.html
[2]
https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations
[3]
https://review.openstack.org/#/q/status:open+project:openstack/ceilometer+branch:master+topic:bp/convert-to-alembic,n,z
[4] http://www.sqlite.org/lang_altertable.html
[5] https://bitbucket.org/zzzeek/alembic


Re: [openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Roman Podolyaka
Hello Sean, all,

Currently there are ~30 test classes in the DB API tests, containing ~370
test cases. setUpClass()/tearDownClass() would definitely be an
improvement, but applying all DB schema migrations on MySQL 30 times is
still going to take a long time...

Thanks,
Roman


On Fri, Jun 21, 2013 at 3:02 PM, Sean Dague  wrote:

> On 06/21/2013 07:40 AM, Roman Podolyaka wrote:
>
>> Hi, all!
>>
>> In Nova we've got a DB access layer known as "DB API" and tests for it.
>> Currently, those tests are run only for SQLite in-memory DB, which is
>> great for speed, but doesn't allow us to spot backend-specific errors.
>>
>> There is a blueprint
>> (https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
>> by Boris Pavlovic, whose goal is to run the DB API tests on all DB
>> backends (e.g. SQLite, MySQL and PostgreSQL). Recently, I've been
>> working on the implementation of this BP
>> (https://review.openstack.org/#/c/33236/).
>>
>> The chosen approach for implementing this is best explained by going
>> through a list of problems which must be solved:
>>
>> 1. Tests should be executed concurrently by testr.
>>
>> testr creates a few worker processes each running a portion of test
>> cases. When SQLite in-memory DB is used for testing, each of those
>> processes has its own DB in its address space, so no race conditions
>> are possible. If we used a shared MySQL/PostgreSQL DB, the test suite
>> would fail due to various race conditions. Thus, we must create a
>> separate DB for each of test running processes and drop those, when all
>> tests end.
>>
>> The question is, where we should create/drop those DBs? There are a few
>> possible places in our code:
>> 1) setUp()/tearDown() methods of test cases. These are executed for
>> each test case (there are ~370 tests in test_db_api). So it must be a
>> bad idea to create/apply migrations/drop DB 370 times, if MySQL or
>> PostgreSQL are used instead of SQLite in-memory DB
>> 2) testr supports creation of isolated test environments
>> (https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
>> Long story short: we can specify commands to execute before tests are
>> run, after tests have ended, and how to run tests
>>  3) module/package level setUp()/tearDown(), but these are probably
>> supported only in nosetest
>>
>
> How many classes are we talking about? We're actually going after a
> similar problem in Tempest, namely that setUp isn't cheap; Matt Treinish
> has an experimental patch to testr which allows class-level partitioning
> instead. Then you can use setUpClass / tearDownClass for expensive
> resource setup.
>
>
>  So:
>> 1) before tests are run, a few test DBs are created (the number of
>> created DBs is equal to the used concurrency level value)
>> 2) for each test running process an env variable, containing the
>> connection string to the created DB, is set;
>> 3) after all test running processes have ended, the created DBs are
>> dropped.
>>
>> 2. Tests cleanup should be fast.
>>
>> For SQLite in-memory DB we use "create DB/apply migrations/run test/drop
>> DB" pattern, but that would be too slow for running tests on MySQL or
>> PostgreSQL.
>>
>> Another option would be to create DB only once for each of test running
>> processes, apply DB migrations and then run each test case within a DB
>> transaction which is rolled back after a test ends. Combining with
>> something like "fsync = off" option of PostgreSQL this approach works
>> really fast (on my PC it takes ~5 s to run DB API tests on SQLite and
>> ~10 s on PostgreSQL).
>>
>
> I like the idea of creating a transaction in setup, and triggering
> rollback in teardown, that's pretty clever.
>
>
>  3. Tests should be easy to run for developers as well as for Jenkins.
>>
>> DB API tests are the only tests which should be run on different
>> backends. All other test cases can be run on SQLite. The convenient way
>> to do this is to create a separate tox env, running only DB API tests.
>> Developers specify the DB connection string which effectively defines
>> the backend that should be used for running tests.
>>
>> I'd rather not run t

[openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Roman Podolyaka
Hi, all!

In Nova we've got a DB access layer known as "DB API" and tests for it.
Currently, those tests are run only for SQLite in-memory DB, which is great
for speed, but doesn't allow us to spot backend-specific errors.

There is a blueprint
(https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
by Boris Pavlovic, whose goal is to run the DB API tests on all DB backends
(e.g. SQLite, MySQL and PostgreSQL). Recently, I've been working on the
implementation of this BP (https://review.openstack.org/#/c/33236/).

The chosen approach for implementing this is best explained by going
through a list of problems which must be solved:

1. Tests should be executed concurrently by testr.

testr creates a few worker processes each running a portion of test cases.
When SQLite in-memory DB is used for testing, each of those processes has
its own DB in its address space, so no race conditions are possible. If we
used a shared MySQL/PostgreSQL DB, the test suite would fail due to various
race conditions. Thus, we must create a separate DB for each of test
running processes and drop those, when all tests end.

The question is where we should create/drop those DBs. There are a few
possible places in our code:
   1) setUp()/tearDown() methods of test cases. These are executed for each
test case (there are ~370 tests in test_db_api), so it would be a bad idea
to create the DB, apply migrations and drop it 370 times when MySQL or
PostgreSQL is used instead of an SQLite in-memory DB
   2) testr supports creation of isolated test environments (
https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
Long story short: we can specify commands to execute before tests are run,
after tests have ended, and how to run tests
3) module/package level setUp()/tearDown(), but these are probably
supported only in nosetest

So:
   1) before tests are run, a few test DBs are created (the number of
created DBs is equal to the concurrency level used)
   2) for each test running process an env variable, containing the
connection string to the created DB, is set;
   3) after all test running processes have ended, the created DBs are
dropped.
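These three steps map naturally onto testr's per-instance hooks in
.testr.conf (see the manual linked above). A rough sketch; the script names
and the OS_TEST_DBAPI_CONNECTION variable are placeholders, not what the
patch actually uses:

```ini
[DEFAULT]
test_command=python -m subunit.run discover . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
# 1) create one test DB per concurrency slot, printing one id per line
instance_provision=./tools/provision_test_dbs.sh $INSTANCE_COUNT
# 2) run each worker against "its" DB via an environment variable
instance_execute=OS_TEST_DBAPI_CONNECTION=$INSTANCE_ID $COMMAND
# 3) drop the DBs once all workers have finished
instance_dispose=./tools/drop_test_dbs.sh $INSTANCE_IDS
```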

2. Tests cleanup should be fast.

For SQLite in-memory DB we use "create DB/apply migrations/run test/drop
DB" pattern, but that would be too slow for running tests on MySQL or
PostgreSQL.

Another option would be to create the DB only once for each test-running
process, apply DB migrations, and then run each test case within a DB
transaction which is rolled back after the test ends. Combined with
something like PostgreSQL's "fsync = off" option, this approach is really
fast (on my PC it takes ~5 s to run the DB API tests on SQLite and ~10 s
on PostgreSQL).

3. Tests should be easy to run for developers as well as for Jenkins.

DB API tests are the only tests which should be run on different backends.
All other test cases can be run on SQLite. A convenient way to do this is
to create a separate tox env that runs only the DB API tests. Developers
specify the DB connection string, which effectively defines the backend
used for running the tests.
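Such a tox environment could look roughly like this (the env name, test
path and variable name are placeholders, and the exact tox options may
differ):

```ini
[testenv:db-api]
# Let the developer-chosen backend reach the tests; the tests fall back
# to an in-memory SQLite DB when the variable is unset.
passenv = OS_TEST_DBAPI_CONNECTION
commands = python setup.py testr --testr-args='nova.tests.test_db_api'
```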

I'd rather not run those tests 'opportunistically' in py26 and py27 as we
do for migrations, because they are going to be broken for some time (most
problems are described here
https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
So it would be really nice to have a separate non-voting gate test.


I would really like to receive some comments from Nova and Infra guys
on whether this is an acceptable approach to running DB API tests and how
we can improve this.

Thanks,
Roman