Stuart Bishop wrote:
Jim Fulton wrote:
A large proportion of our tests use a relational database. Some of
them want an empty database, some of them want just the schema created
but no data, some of them want the schema created and the data loaded.
Some of them need the component architecture, and some of them don't.
Some of them need one or more Twisted servers running, some of them don't.
Note that we mix and match. We have 4 different types of database fixture
(none, empty, schema, populated), 2 different types of database
mechanism (psycopgda, psycopg), 2 types of CA fixture (none, loaded),
and (currently) 4 states of external daemons needed. If we were to arrange
these in layers, it would take 56 different layers, and this will double
every time we add a new daemon or add more database templates (e.g. a fat
template full of sample data to go with the existing thin one).
As a way of supporting this better, instead of specifying a layer, a test
case could specify the list of resources it needs:

    import testresources as r

    resources = [r.LaunchpadDb, r.Librarian, r.Component]
    resources = [r.EmptyDb]
    resources = [r.LaunchpadDb, r.Librarian]

(one such resources list per test case)
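As a rough illustration of the resource-set idea (this is not the real
testresources API -- the Resource classes and the runner loop below are
invented for the sketch), a runner can keep live resources in a cache and
only tear down what the next test doesn't need:

```python
# Hypothetical sketch: each test declares a set of resource classes;
# the runner sets a resource up on first use and tears it down only
# when the next test no longer needs it.

class Resource:
    """Base class: subclasses would override make/clean."""
    def make(self):
        return object()          # stand-in for a real fixture
    def clean(self, fixture):
        pass                     # stand-in for real teardown

class LaunchpadDb(Resource): pass
class Librarian(Resource): pass
class Component(Resource): pass

def run(tests):
    """Run (name, resource_classes, func) triples, reusing live fixtures."""
    live = {}                    # resource class -> live fixture
    log = []
    for name, wanted, func in tests:
        for cls in list(live):   # drop resources this test doesn't need
            if cls not in wanted:
                cls().clean(live.pop(cls))
                log.append(('teardown', cls.__name__))
        for cls in wanted:       # create what is missing
            if cls not in live:
                live[cls] = cls().make()
                log.append(('setup', cls.__name__))
        func()
        log.append(('run', name))
    for cls in list(live):       # final cleanup
        cls().clean(live.pop(cls))
        log.append(('teardown', cls.__name__))
    return log
```

With two tests sharing LaunchpadDb and Librarian, those two fixtures are
set up once each, while Component is torn down as soon as it stops being
needed.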
This is pretty much how layers work. Layers can be arranged in
a DAG (much like a traditional multiple-inheritance class graph).
So, you can model each resource as a layer and specific combinations
of resources as layers. The test runner will attempt to run the layers
in an order that minimizes set-up and tear-down of layers.
So my example could be modeled using layers like:

    import layers as l

    class FooLayer(l.LaunchpadDb, l.Librarian, l.Component): pass

    class FooTest(unittest.TestCase):
        layer = 'FooLayer'

    class BarLayer(l.LaunchpadDb, l.Librarian, l.Component): pass

    class BarTest(unittest.TestCase):
        layer = 'BarLayer'

    class BazLayer(l.LaunchpadDb, l.Librarian): pass

    class BazTest(unittest.TestCase):
        layer = 'BazLayer'
In general I would need to define a layer for each test case (because the
number of combinations makes it impractical to explode all the possible
combinations into a tree of layers, if for no other reason than naming them).
That's too bad. Perhaps layers don't fit your need then.
If I tell the test runner to run all the tests, will the LaunchpadDb,
Librarian and Component layers each be initialized just once?
If all of the tests means these 3, then yes.
If I tell the test runner to run the Librarian layer tests, will all three
tests be run?
No, no tests will be run. None of the tests are in the Librarian layer.
They are in layers built on the Librarian layer.
What happens if I go and define a new test:

    class LibTest(unittest.TestCase):
        layer = 'l.Librarian'
If I run all the tests, will the Librarian setup/teardown be run once (by
running the tests in the order LibTest, BazTest, FooTest, BarTest and
initializing the Librarian layer before the LaunchpadDb layer)?
I expect not, as 'layer' indicates a hierarchy, which isn't as useful
to me as a set of resources.
I don't follow this.
If layers don't work this way, it might be possible to emulate resources
on top of them.

If each test *really* has a unique set of resources, then perhaps
layers don't fit.
However, optimize_order would need to know about all the other tests, so it
would really be the responsibility of the test runner (so it would need to be
customized/overridden), and the test runner would need to support the layer
attribute possibly being a class rather than a string.
Layers can be classes. In fact, I typically use classes with class
methods for setUp and tearDown.
Ah, so the layer specifies additional per-test setUp and tearDown
that is used in addition to the test's own setUp and tearDown. This
makes sense. But what to call them? setUpPerTest? The pretest and
posttest names I used are a bit sucky.
On another note, enforcing isolation of tests has been a continuous
problem for us. For example, a developer registering a utility or otherwise
messing around with the global environment and forgetting to reset things in
tearDown. This goes unnoticed for a while, and other tests get written that
actually depend on this corruption. But at some point, the order the tests
are run in changes for some reason and suddenly test 500 starts failing. It
turns out the global state has been screwed, and you have the fun task of
tracking down which of the preceding 499 tests screwed it. I think this is
a use case for some sort of global posttest hook.
In order to diagnose the problem I describe (which has happened far too
often!), you would add a posttest check that is run after each test. The
first test that fails due to this check is the culprit.
So your post-test thing would check to make sure there weren't any left-over
bits from tests. This makes sense.
I see now though that this could be easily modeled by having a 'global' or
'base' layer in your test suite, and mandating its use by all tests in your
application. Or the check could go in a more specific layer if appropriate.
I think it makes more sense to be able to provide a hook. I would be inclined
to make this a command-line option (two, actually).
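A sketch of the leak check being discussed, written as a base layer. The
testTearDown classmethod is the per-test hook a layer-aware runner calls
after each test; the `registry` dict here is just a stand-in for whatever
global environment a test might corrupt.

```python
# Hypothetical base layer: fail the FIRST test that leaves global
# state behind, instead of some unrelated test 499 runs later.

registry = {}   # stand-in for the global environment tests may corrupt

class BaseLayer:
    @classmethod
    def setUp(cls):
        pass
    @classmethod
    def tearDown(cls):
        pass
    @classmethod
    def testTearDown(cls):
        # Run after every test by a layer-aware runner: the first test
        # to trip this assertion is the culprit.
        assert not registry, 'test leaked global state: %r' % registry
```

Mandating this layer (directly or via more specific layers built on it)
turns a "test 500 mysteriously fails" hunt into an immediate failure at
the leaking test.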
Jim Fulton mailto:[EMAIL PROTECTED] Python Powered!
CTO (540) 361-1714 http://www.python.org
Zope Corporation http://www.zope.com http://www.zope.org