Jim Fulton wrote:
>> A large proportion of our tests use a relational database. Some of them
>> want an empty database, some of them want just the schema created but no
>> data, some of them want the schema created and the data. Some of them need
>> the component architecture, and some of them don't. Some of them need one
>> or more twisted servers running, some of them don't.
>> Note that we mix and match. We have 4 different types of database fixture
>> (none, empty, schema, populated), 2 different types of database connection
>> mechanisms (psycopgda, psycopg), 2 types of CA fixture (none, loaded), and
>> (currently) 4 states of external daemons needed. If we were to arrange
>> this in layers, it would take 56 different layers, and this will double
>> every time we add a new daemon, or add more database templates (e.g. fat
>> for lots of sample data to go with the existing thin).
>> As a way of supporting this better, instead of specifying a layer a test
>> could specify the list of resources it needs:
>> import testresources as r
>> class FooTest(unittest.TestCase):
>>     resources = [r.LaunchpadDb, r.Librarian, r.Component]
>>     [...]
>> class BarTest(unittest.TestCase):
>>     resources = [r.EmptyDb]
>> class BazTest(unittest.TestCase):
>>     resources = [r.LaunchpadDb, r.Librarian]
> This is pretty much how layers work.  Layers can be arranged in
> a DAG (much like a traditional multiple-inheritance class graph).
> So, you can model each resource as a layer and specific combinations
> of resources as layers.  The test runner will attempt to run the layers
> in an order that minimizes set-up and tear-down of layers.

So my example could be modeled using layers like:

import layers as l

class FooLayer(l.LaunchpadDb, l.Librarian, l.Component): pass
class FooTest(unittest.TestCase):
    layer = 'FooLayer'

class BarLayer(l.EmptyDb): pass
class BarTest(unittest.TestCase):
    layer = 'BarLayer'

class BazLayer(l.LaunchpadDb, l.Librarian): pass
class BazTest(unittest.TestCase):
    layer = 'BazLayer'

In general I would need to define a layer for each test case (because the
number of combinations makes it impractical to explode all the possible
combinations into a tree of layers, if for no other reason than naming them).

If I tell the test runner to run all the tests, will the LaunchpadDb,
Librarian and Component layers each be initialized just once?

If I tell the test runner to run the Librarian layer tests, will all three
tests be run?

What happens if I go and define a new test:

class LibTest(unittest.TestCase):
    layer = 'l.Librarian'

If I run all the tests, will the Librarian setup/teardown be run once (by
running the tests in the order LibTest, BazTest, FooTest, BarTest and
initializing the Librarian layer before the LaunchpadDb layer)? I expect
not, as 'layer' indicates a hierarchy, which isn't as useful to me as a set
of resources.

If layers don't work this way, it might be possible to emulate resources:

class ResourceTest(unittest.TestCase):
    def layer(self):
        return type(optimize_order(self.resources))

However, optimize_order would need to know about all the other tests, so it
would really be the responsibility of the test runner (so it would need to be
customized/overridden), and the test runner would need to support the layer
attribute possibly being a class rather than a string.
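For what it's worth, here's a rough sketch of how such layer classes could be
built on the fly from a resource list. Everything here is hypothetical: the
resource classes are empty placeholders, and the canonical ordering is a
naive name sort standing in for a real optimize_order in the runner.

```python
# Sketch only: LaunchpadDb and Librarian are placeholder resource
# classes; a real implementation would use the actual layer/resource
# base classes, and the runner would supply the ordering.

class LaunchpadDb:
    pass

class Librarian:
    pass

_layer_cache = {}

def layer_for(resources):
    """Build (and cache) a layer class for a set of resource classes."""
    # Canonicalize the order so [Db, Librarian] and [Librarian, Db]
    # map to the same layer class (and are only set up once).
    ordered = tuple(sorted(resources, key=lambda cls: cls.__name__))
    if ordered not in _layer_cache:
        name = "Layer_" + "_".join(cls.__name__ for cls in ordered)
        # type(name, bases, dict) creates the layer class dynamically.
        _layer_cache[ordered] = type(name, ordered, {})
    return _layer_cache[ordered]
```

The cache matters: the runner groups tests by layer identity, so two tests
asking for the same resource set must get the very same class object back.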

> Ah, so the layer specifies additional per-test setUp and tearDown
> that is used in addition to the tests's own setUp and tearDown.  This
> sounds reasonable.

But what to call them? setUpPerTest? The pretest and posttest names I used
are a bit sucky.

>> On another note, enforcing isolation of tests has been a continuous
>> problem for us. For example, a developer registering a utility or
>> otherwise mucking around with the global environment and forgetting to
>> reset things in tearDown. This goes unnoticed for a while, and other
>> tests get written that actually depend on this corruption. But at some
>> point, the order the tests are run changes for some reason and suddenly
>> test 500 starts failing. It turns out the global state has been screwed,
>> and you have the fun task of tracking down which of the preceding 499
>> tests screwed it. I think this is a use case for some sort of global
>> posttest hook.
> How so?
> How so?

In order to diagnose the problem I describe (which has happened far too
often!), you would add a posttest check that is run after each test. The
first test that fails due to this check is the culprit.

I see now though that this could be easily modeled by having a 'global' or
'base' layer in your test suite, and mandating its use by all tests in your
application. Or the check could go in a more specific layer if appropriate.
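A rough sketch of what such a base layer might look like, assuming the runner
calls per-test testSetUp/testTearDown hooks on layers. global_registry here is
a hypothetical stand-in for whatever global state (e.g. the CA registries)
needs guarding:

```python
# Sketch only: global_registry is a placeholder for real global state.

global_registry = {}
_snapshot = None

class BaseLayer:
    @classmethod
    def testSetUp(cls):
        # Record the global state before each test runs.
        global _snapshot
        _snapshot = dict(global_registry)

    @classmethod
    def testTearDown(cls):
        # Fail loudly in the first test that leaks state, instead of
        # letting test 500 fail mysteriously much later.
        if global_registry != _snapshot:
            raise AssertionError(
                "test left global state modified: %r" % global_registry)
```

With this in place, the culprit test fails immediately at teardown rather
than poisoning every test that runs after it.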

>> These sorts of policies are important for us as we run our tests in an
>> automated environment (we can't commit to our trunk. Instead, we send a
>> request to a daemon which runs the test suite and commits on our behalf
>> if the tests all pass).
> Hm, seems rather restrictive...
> Hm, seems rather restrictive...

We like it ;) Our RCS (Bazaar) allows us to trivially merge branches into
other branches, so we can avoid fallout from any delays in landing stuff to
the trunk. And it is guaranteed that any changes landing in the trunk run,
and more importantly, run with the current versions of all the dependent
libraries and tools. So for example, if we add some sanity checks to
SQLObject stopping certain dangerous operations, nobody can accidentally
commit code that breaks under the new version. It means that every day the
trunk is rolled out to a staging server automatically and actually runs, and
production rollouts can be done with confidence by simply picking an
arbitrary revision on the trunk, tagging it and pushing it out.

> I guess making the new test runner class-based would more easily allow
> this sort of customization.
>> Our full test suite currently takes 45 minutes to run and it is
>> becoming an issue.
> Hm, I can see why you'd like to parallelize things.  Of course, this
> only helps you if you have enough hardware to benefit from the
> parallelization.

I think the parallelization might take some work (perhaps not the test
runner, but making our existing test suite work with it ;) ). I think there
is lower-hanging fruit to reach for first ;)

Stuart Bishop <[EMAIL PROTECTED]>


Zope-Dev maillist  -  Zope-Dev@zope.org