Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
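[Editor's note: for context, a hypothetical query (table and column names invented for illustration) shows the SQL that with_lockmode('update') emits, and why InnoDB's index-record locking makes it risky under parallel, non-committing transactions:]

```sql
-- Hypothetical example; instance_uuid is a non-unique secondary index.
-- query.with_lockmode('update') appends FOR UPDATE to the SELECT:
SELECT * FROM block_device_mapping
 WHERE instance_uuid = 'abc-123'
 FOR UPDATE;
-- InnoDB locks every index record the scan touches (plus gaps), not just
-- the matching rows, so two concurrent uncommitted transactions can block
-- or deadlock each other even when they touch disjoint rows.
```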

On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi Mike,

 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all OpenStack apps, there's no 
 reason that these tables need to be recreated for every test.

 This is a very good point. I tried to use the recipe from SQLAlchemy
 docs to run Nova DB API tests (yeah, I know, this might sound
 confusing, but these are actually methods that access the database in
 Nova) on production backends (MySQL and PostgreSQL). The abandoned
 patch is here [1]. Julia Varlamova has been working on rebasing this
 on master and should upload a new patch set soon.

 Overall, the approach of executing a test within a transaction and
 then emitting ROLLBACK worked quite well. The only problem I ran into
 was tests doing ROLLBACK on purpose. But you've updated the recipe
 since then, and this can probably be solved by using savepoints. I
 used a separate DB per test-running process to prevent race
 conditions, but we should definitely give the READ COMMITTED approach
 a try. If it works, that will be awesome.

 With a few tweaks of PostgreSQL config I was able to run Nova DB API
 tests in 13-15 seconds, while SQLite in memory took about 7s.

 Action items for me and Julia, probably: [2] needs a spec, with [1]
 updated accordingly. Using this 'test in a transaction' approach
 seems to be the way to go for running all db-related tests except the
 ones using DDL statements (as any DDL statement commits the current
 transaction implicitly on MySQL and SQLite, AFAIK).
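[Editor's note: the recipe Roman refers to can be sketched with the standard library's sqlite3 as a stand-in backend. The real recipe wraps a SQLAlchemy Session, and the table and savepoint names here are invented: the fixture opens an outer transaction, each test runs inside a savepoint (so even a test that deliberately issues ROLLBACK only unwinds to the savepoint), and the outer rollback leaves the schema intact for the next test.]

```python
import sqlite3

# A minimal stdlib sketch of the "test in a transaction" recipe (the real
# version wraps a SQLAlchemy Session; sqlite3 and the table name are
# stand-ins). The fixture opens an outer transaction, each test runs inside
# a SAVEPOINT, and everything is rolled back afterwards, so the tables are
# never recreated.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("BEGIN")                # outer transaction for this test
conn.execute("SAVEPOINT test_case")  # a test that issues ROLLBACK on purpose
                                     # would only unwind to this savepoint
conn.execute("INSERT INTO nodes (name) VALUES ('node-1')")  # the test's writes
count_during = conn.execute("SELECT COUNT(*) FROM nodes").fetchone()[0]
conn.execute("ROLLBACK TO SAVEPOINT test_case")  # undo the test's writes
conn.execute("ROLLBACK")             # close the outer transaction
count_after = conn.execute("SELECT COUNT(*) FROM nodes").fetchone()[0]
print(count_during, count_after)
```

As the DDL caveat above notes, migration tests cannot use this pattern, since any DDL statement commits the enclosing transaction on MySQL.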

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/33236/
 [2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

 On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:

 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com
 wrote:

 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Mike Bayer

On Jun 9, 2014, at 12:50 PM, Devananda van der Veen devananda@gmail.com 
wrote:

 There may be some problems with MySQL when testing parallel writes in
 different non-committing transactions, even in READ COMMITTED mode,
 due to InnoDB locking, if the queries use non-unique secondary indexes
 for UPDATE or SELECT..FOR UPDATE queries. This is done by the
 with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
 in Nova. So I would not recommend this approach, even though, in
 principle, I agree it would be a much more efficient way of testing
 database reads/writes.
 
 More details here:
 http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
 http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html

OK, but just to clarify my understanding: what is the approach to testing 
writes in parallel right now? Are we doing CREATE DATABASE for two entirely 
distinct databases, with some kind of generated name for each one?  Otherwise, 
if the parallel tests are against the same database, this issue exists 
regardless (unless autocommit mode is used; is FOR UPDATE accepted under those 
conditions?)




 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:50 PM, Devananda van der Veen wrote:

There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
with_lockmode('update') SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html


Hi Deva,

MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not 
READ_COMMITTED... are you saying that somewhere in the Ironic codebase 
we are setting the isolation mode manually to READ_COMMITTED for some 
reason?


Best,
-jay



Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
On Mon, Jun 9, 2014 at 10:49 AM, Jay Pipes jaypi...@gmail.com wrote:


 Hi Deva,

 MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not
 READ_COMMITTED... are you saying that somewhere in the Ironic codebase we
 are setting the isolation mode manually to READ_COMMITTED for some reason?

 Best,
 -jay


Jay,

Not saying that at all. I was responding to Mike's suggested approach
for testing DB changes (which was actually off topic from my original
post), in which he suggested using READ_COMMITTED.

-Deva



Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Jay Pipes

On 06/09/2014 02:57 PM, Devananda van der Veen wrote:




Jay,

Not saying that at all. I was responding to Mike's suggested approach
for testing DB changes (which was actually off topic from my original
post), in which he suggested using READ_COMMITTED.


Apologies, thx for the clarification, Deva,

-jay


-Deva





Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Mike Bayer

On Jun 9, 2014, at 1:08 PM, Mike Bayer mba...@redhat.com wrote:

 
 
 OK, but just to clarify my understanding, what is the approach to testing 
 writes in parallel right now, are we doing CREATE DATABASE for two entirely 
 distinct databases with some kind of generated name for each one?  Otherwise, 
 if the parallel tests are against the same database, this issue exists 
 regardless (unless autocommit mode is used, is FOR UPDATE accepted under 
 those conditions?)

Took a look and this seems to be the case, from oslo.db:

def create_database(engine):
    """Provide temporary user and database for each particular test."""
    driver = engine.name

    auth = {
        'database': ''.join(random.choice(string.ascii_lowercase)
                            for i in moves.range(10)),
        # ...
    }

    sqls = [
        "drop database if exists %(database)s;",
        "create database %(database)s;",
    ]

Just thinking out loud here; I'll move these ideas to a new wiki page after 
this post. My idea now is: OK, we provide ad-hoc databases for tests, but 
look into the idea that we create N ad-hoc databases, corresponding to 
parallel test runs - e.g. if we are running five tests concurrently, we make 
five databases. Tests that use a database will be dished out among this pool 
of available schemas. In the *typical* case (which means not the case where 
we're testing actual migrations; that's a special case) we build up the schema 
on each database via migrations or even create_all() just once, run tests 
within rolled-back transactions, one per database, then the DBs are torn down 
when the suite is finished.
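[Editor's note: a minimal sketch of that pool idea, with hypothetical names and a plain queue standing in for the real fixture plumbing:]

```python
import queue

# Sketch of the "pool of N ad-hoc schemas" idea; names are hypothetical. One
# database is created per concurrent worker up front; tests borrow a schema
# and return it afterwards, once their transaction has been rolled back.
class SchemaPool:
    def __init__(self, n):
        self._pool = queue.Queue()
        for i in range(n):
            # Real code would CREATE DATABASE here and build the schema once,
            # via migrations or metadata.create_all(), before any tests run.
            self._pool.put("test_schema_%d" % i)

    def acquire(self):
        # Blocks until some worker returns a schema to the pool.
        return self._pool.get()

    def release(self, name):
        self._pool.put(name)

pool = SchemaPool(5)
first = pool.acquire()   # a test borrows a schema...
pool.release(first)      # ...and hands it back when done
print(first)
```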

Sorry for the thread hijack.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
Mike,

For the typical case, your proposal sounds reasonable to me. That
should protect against cross-session locking while still getting the
benefits of testing DML without committing to disk.

The issue I was originally raising is, of course, the special case
-- testing of migrations -- which, I think, could be solved in much
the same way. Given N test runners, create N empty schemata, hand each
migration-test-runner a schema from that pool. When that test runner
is done, drop and recreate that schema.

AIUI, Nodepool is already doing something similar here:
  
https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/tests/__init__.py#n71
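[Editor's note: the drop-and-recreate flow for the migration special case could look like this sketch, with sqlite3 standing in for a real backend and only the alembic_version table shown:]

```python
import sqlite3

# Sketch of the special case for migration tests: because migrations issue
# DDL (which commits implicitly on MySQL), each runner owns one schema and
# drops and recreates it between tests instead of rolling back. sqlite3
# stands in for a real backend here.
def reset_schema(conn):
    conn.execute("DROP TABLE IF EXISTS alembic_version")
    conn.execute("CREATE TABLE alembic_version (version_num TEXT)")

conn = sqlite3.connect(":memory:")
reset_schema(conn)
conn.execute("INSERT INTO alembic_version VALUES ('abc123')")  # a migration "ran"
reset_schema(conn)  # the next migration test starts from a clean slate
rows = conn.execute("SELECT COUNT(*) FROM alembic_version").fetchone()[0]
print(rows)
```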

Regards,
Devananda






Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Roman Podoliaka
Hi Deva,

I haven't actually touched Ironic db migrations tests code yet, but
your feedback is very valuable for oslo.db maintainers, thank you!

So currently, there are two ways to run migrations tests:
1. Opportunistically (using openstack_citest user credentials; this is
how we test migrations in the gate in Nova/Glance/Cinder/etc). I'm
surprised we don't provide this out of the box in common db code.
2. By providing database credentials in test_migrations.conf
(Nova/Glance/Cinder/etc have test_migrations.conf, though I haven't
ever tried to put mysql/postgresql credentials there and run unit
tests).

The latter came to common db code directly from Nova and I'm not sure
if anyone is using the incubator version of it in the consuming
projects. Actually, I'd really like us to drop this feature and stick
to the opportunistic tests of migrations (fyi, there is a patch on
review to oslo.db [1]) to ensure there is only one way to run the
migrations tests, and that it is the way we run the tests in the gates.

[1] uses opportunistic DB test cases provided by oslo.db to prevent
race conditions: a db is created on demand per test (which is
obviously not fast, but safe and easy). And it's perfectly normal to
use a separate db per migrations test case, as this is the kind of
test that needs total control over the database, which cannot be
provided even by high transaction isolation levels (so unfortunately
we can't use the solution proposed by Mike here).

Migration tests using test_migrations.conf, on the other hand, leave
it up to you to isolate separate test cases that share the same
database. You could take a file lock around each conflicting test
case to prevent race conditions, but this is not really handy, of
course.
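For reference, the file-lock workaround could be sketched like this (POSIX-only, since it uses `fcntl`; the lock path is arbitrary):

```python
import fcntl
import os
import tempfile


class InterProcessLock(object):
    """Blockingly acquire an exclusive flock on a lock file."""

    def __init__(self, path):
        self.path = path
        self.lockfile = None

    def __enter__(self):
        self.lockfile = open(self.path, "w")
        # Blocks until no other process holds the exclusive lock.
        fcntl.flock(self.lockfile, fcntl.LOCK_EX)
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.lockfile, fcntl.LOCK_UN)
        self.lockfile.close()


LOCK_PATH = os.path.join(tempfile.mkdtemp(), "test_migrations.lock")
```

Wrapping each conflicting test case body in `with InterProcessLock(LOCK_PATH): ...` serializes them across processes, at the cost of losing the parallelism you were after.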

Overall, I think, this is a good example of a situation when we put
code to incubator when it wasn't really ready to be reused by other
projects. We should have added docs at least on how to use those
migrations tests properly. This is something we should become better
at as a team.

Ok, so at least we know about the problem and [1] should make it
easier for everyone in the consuming projects to run their migrations
tests.

Thanks,
Roman

[1] https://review.openstack.org/#/c/93424/

On Sat, Jun 7, 2014 at 3:12 AM, Devananda van der Veen
devananda@gmail.com wrote:
 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111


 So, Ironic devs -- if you see a DB migration proposed, pay extra attention
 to it. We aren't running migration tests in our check or gate queues right
 now, and we shouldn't enable them until this is fixed.

 Regards,
 Devananda

 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Roman Podoliaka
Hi Mike,

 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all Openstack apps, there’s no 
 reason that these tables need to be recreated for every test.

This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach of executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
was tests doing ROLLBACK on purpose. But you've updated the recipe
since then, and this can probably be solved by using savepoints. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach
a try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia, probably: [2] needs a spec, with [1]
updated accordingly. Using this 'test in a transaction' approach
seems to be the way to go for running all db-related tests except the
ones using DDL statements (as any DDL statement commits the current
transaction implicitly on MySQL and SQLite, AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:

 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com
 wrote:

 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111


 Hello -

 Just an introduction, I’m Mike Bayer, the creator of SQLAlchemy and Alembic
 migrations. I’ve just joined on as a full time Openstack contributor,
 and trying to help improve processes such as these is my primary
 responsibility.

 I’ve had several conversations already about how migrations are run within
 test suites in various openstack projects.   I’m kind of surprised by this
 approach of dropping and recreating the whole database for individual tests.
 Running tests in parallel is obviously made very difficult by this style,
 but even beyond that, a lot of databases don’t respond well to lots of
 dropping/rebuilding of tables and/or databases in any case; while SQLite and
 MySQL are probably the most forgiving of this, 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Mike Bayer

On Jun 8, 2014, at 11:46 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 
 Overall, the approach of executing a test within a transaction and
 then emitting ROLLBACK worked quite well. The only problem I ran into
 was tests doing ROLLBACK on purpose. But you've updated the recipe
 since then, and this can probably be solved by using savepoints.

yup, I went and found the gist, that is here:

https://gist.github.com/zzzeek/8443477





Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-07 Thread Mike Bayer

On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com 
wrote:

 I think some things are broken in the oslo-incubator db migration code.
 
 Ironic moved to this when Juno opened and things seemed fine, until recently 
 when Lucas tried to add a DB migration and noticed that it didn't run... So I 
 looked into it a bit today. Below are my findings.
 
 Firstly, I filed this bug and proposed a fix, because I think that tests that 
 don't run any code should not report that they passed -- they should report 
 that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine
 
 Then, I edited the test_migrations.conf file appropriately for my local mysql 
 service, ran the tests again, and verified that migration tests ran -- and 
 they passed. Great!
 
 Now, a little background... Ironic's TestMigrations class inherits from 
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end, 
 if it's available. This opportunistic checking was inherited from Nova so 
 that tests could pass on developer workstations where not all backends are 
 present (eg, I have mysql installed, but not postgres), and still 
 transparently run on all backends in the gate. I couldn't find such 
 opportunistic testing in the oslo db migration test code, unfortunately - but 
 maybe it's well hidden.
 
 Anyhow. When I stopped the local mysql service (leaving the configuration 
 unchanged), I expected the tests to be skipped, but instead I got two 
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception 
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises an 
 exception before running the test itself
 
 Unfortunately, there's one more problem... when I run the tests in parallel, 
 they fail randomly because sometimes two test threads run different migration 
 tests, and the setUp() for one thread (remember, it calls _reset_databases) 
 blows up the other test.
 
 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations'; 
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table 
 'test_migrations.alembic_version' doesn't exist)
 
 As far as I can tell, this is all coming from:
   
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111

Hello -

Just an introduction, I’m Mike Bayer, the creator of SQLAlchemy and Alembic 
migrations. I’ve just joined on as a full time Openstack contributor, and 
trying to help improve processes such as these is my primary responsibility.

I’ve had several conversations already about how migrations are run within test 
suites in various openstack projects.   I’m kind of surprised by this approach 
of dropping and recreating the whole database for individual tests.   Running 
tests in parallel is obviously made very difficult by this style, but even 
beyond that, a lot of databases don’t respond well to lots of 
dropping/rebuilding of tables and/or databases in any case; while SQLite and 
MySQL are probably the most forgiving of this, a backend like Postgresql is 
going to lock tables from being dropped more aggressively, if any open 
transactions or result cursors against those tables remain, and on a backend 
like Oracle, the speed of schema operations starts to become prohibitively 
slow.   Dropping and creating tables is in general not a very speedy task on 
any backend, and on a test suite that runs many tests against a fixed schema, I 
don’t see why a full drop is necessary.

If you look at SQLAlchemy’s own tests, they do in fact create tables on each 
test, or just as often for a specific suite of tests.  However, this is due to 
the fact that SQLAlchemy tests are testing SQLAlchemy itself, so the database 
schemas used for these tests are typically built explicitly for small groups or 
individual tests, and there are ultimately thousands of small “mini schemas” 
built up and torn down for these tests.   A lot of framework code is involved 
within the test suite to keep more picky databases like Postgresql and Oracle 
happy when building up and dropping tables so frequently.

However, when testing an application that uses a fixed set of tables, as should 
be the case for the majority if not all Openstack apps, there’s no reason that 
these tables need to be recreated for every test.   Typically, the way I 
recommend is that the test suite includes a “per suite” activity which creates 
the test schema just once (with or without using CREATE DATABASE; I’m not a fan 
of tests running CREATE DATABASE as this is not a command so easily available 
in some environments).   The tests themselves then run within a transactional 
container, such that each test performs all of its work within a