On Aug 25, 2012, at 1:15 AM, Russell Keith-Magee wrote:

> Hi all,
> 
> So, I've been working on a Django branch [1] to implement the approach
> to pluggable user models that was decided upon earlier this year [2]
> 
> [1] https://github.com/freakboy3742/django/tree/t3011
> [2] https://code.djangoproject.com/wiki/ContribAuthImprovements
> 
> The user-swapping code itself is coming together well. However, I've
> hit a snag during the process of writing tests that may require a
> little yak shaving.
> 
> The problem is this: With pluggable auth models, you lose the
> certainty over exactly which User model is present, which makes it
> much harder to write some tests, especially ones that will continue to
> work in an unpredictable end-user testing environment.
> 
> With regards to pluggable Users, there are three types of tests that can be run:
> 
> 1) Contrib.auth tests that validate that the internals of
> contrib.auth work as expected
> 
> 2) Contrib.auth tests that validate that the internals of
> contrib.auth work with a custom User model
> 
> 3) Contrib.auth tests that validate that the currently specified user
> model meets the requirements of the User contract
> 
> The problem is that because of the way syncdb works during testing
> some of these tests are effectively mutually exclusive. The test
> framework runs syncdb at the start of the test run, which sets up the
> models that will be available for the duration of testing -- which
> then constrains the tests that can actually be run.
> 
> This doesn't affect the Django core tests so much; Django's tests will
> synchronise auth.User by default, which allows tests of type 1 to run.
> It can also provide a custom User model, and use @override_settings to
> swap in that model as required for tests of type 2. Tests of type 3
> are effectively integration tests which will pass with *any* interface
> compliant User model.
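
(To make the type 2 case concrete, I picture something roughly like the sketch
below -- assuming the branch ends up exposing an AUTH_USER_MODEL setting and a
get_user_model() helper, which is only my reading of the wiki page, and
assuming a CustomUser model defined in one of the test apps:)

    from django.contrib.auth import get_user_model
    from django.test import TestCase
    from django.test.utils import override_settings

    # Swap the custom model in for this TestCase only. 'auth_tests.CustomUser'
    # is a made-up app/model label; the model has to live in an app that is in
    # INSTALLED_APPS so that syncdb creates its table before the tests run.
    @override_settings(AUTH_USER_MODEL='auth_tests.CustomUser')
    class CustomUserInternalsTests(TestCase):
        def test_swapped_model_is_used(self):
            self.assertEqual(get_user_model().__name__, 'CustomUser')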
> 
> However, if I have my own project that includes contrib.auth in
> INSTALLED_APPS, ./manage.py test will attempt to run *all* the tests
> from contrib.auth. If I have a custom User model in play, that means
> that the tests of type 1 *can't* pass, because auth.User won't be
> synchronised to the database. I can't even use @override_settings to
> force auth.User into use -- the opportunity for syncdb to pick up
> auth.User has passed.
> 
> We *could* just mark the affected tests that require auth.User as
> "skipUnless(user model == auth.User)", but that would mean some
> projects would run the tests, and some wouldn't. That seems like an
> odd inconsistency to me -- the tests either should be run, or they
> shouldn't.
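
(Concretely, I read that suggestion as something along these lines -- again
assuming the swappable model is exposed as a settings.AUTH_USER_MODEL string,
which is an assumption about the final API:)

    from django.conf import settings
    from django.test import TestCase
    from django.utils.unittest import skipUnless

    # Only meaningful when the stock User model is active; silently skipped
    # in projects that swap in their own model.
    @skipUnless(settings.AUTH_USER_MODEL == 'auth.User',
                "requires the stock auth.User model")
    class StockUserTests(TestCase):
        def test_create_user(self):
            from django.contrib.auth.models import User
            user = User.objects.create_user('alice', 'alice@example.com', 'pw')
            self.assertTrue(user.check_password('pw'))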
> 
> In thinking about this problem, it occurred to me that what is needed
> is for us to finally solve an old problem with Django's testing -- the
> fact that there is a difference between different types of tests.
> There are tests in contrib.auth that Django's core team needs to run
> before we cut a release, and there are integration tests that validate
> that when contrib.auth is deployed in your own project, it will
> operate as designed. The internal tests need to run against a clean,
> known environment; integration tests must run against your project's
> native environment.
> 
> Thinking more broadly, there may be other categories -- "smoke tests"
> for a quick sanity check that a system is working; "interaction tests"
> that run live browser tests; and so on.
> 
> Python's unittest library contains the concept of Suites, which seems
> to me like a reasonable analog of what I'm talking about here. What is
> missing is a syntax for executing those suites, and maybe some helpers
> to make it easier to build those suites in the first place.
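
(For reference, plain unittest already lets you build and run such a suite by
hand -- roughly as below; what's missing is a manage.py-level way of naming and
selecting these suites, plus helpers for assembling them per category:)

    import unittest

    class SmokeTests(unittest.TestCase):
        def test_basic_sanity(self):
            self.assertTrue(True)

    def smoke_suite():
        # A hand-rolled suite for one category of tests; building and naming
        # these is the part with no supporting syntax or helpers today.
        loader = unittest.TestLoader()
        return unittest.TestSuite(loader.loadTestsFromTestCase(SmokeTests))

    if __name__ == '__main__':
        unittest.TextTestRunner(verbosity=2).run(smoke_suite())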
> 
> I don't have a concrete proposal at this point (beyond the high level
> idea that suites seem like the way to handle this). This is an attempt
> to feel out community opinion about the problem as a whole. I know
> there are efforts underway to modify Django's test discovery mechanism
> (#17365), and there might be some overlap here. There are also a range
> of tickets relating to controlling the test execution process (#9156,
> #11593), so there has been plenty of thought put into this general
> problem in the past. If anyone has any opinions or alternate
> proposals, I'd like to hear them.

Perhaps some inspiration could be found in other testing frameworks. In 
particular, py.test has an interesting 'marking' feature [1], and I believe that 
Nose also has something similar. It makes it possible to limit execution to 
tests that have been marked with a particular tag -- in other words, to 
dynamically specify which sub-suite to execute.
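
For example, with plain py.test (the mark names here are arbitrary):

    import pytest

    @pytest.mark.smoke
    def test_homepage_is_reachable():
        assert True  # placeholder body

    @pytest.mark.integration
    def test_user_model_contract():
        assert True  # placeholder body

Running "py.test -m smoke" then executes only the tests carrying that mark, and
"py.test -m 'not smoke'" the rest.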

The challenge is that everyone in the industry seems to have a different 
definition of what constitutes a smoke, unit, functional, integration or 
system test. Since the meaning of those categories is so subjective, it's 
probably best to follow a flexible, non-authoritative approach. My suggestion 
would therefore be to:
- add a feature for marking tests in a similar fashion to py.test (a rough 
sketch of what this could look like follows this list). Any app's maintainer 
would be free to mark their tests with whatever tags they want, as long as 
they document those tags so that users can apply them sensibly when building 
their own test suites. The Django contrib apps' own tests could be marked with 
certain tags ('smoke', 'unit' or whatever) if we can agree on what makes sense 
in the context of Django core.
- automatically mark the tests of every app with the app's name and the 
TestCase name. This would make it possible to keep doing things like: 
manage.py test auth gis.GEOSTest
- for convenience, allow explicitly excluding tests marked with certain tags 
(either automatic or custom), e.g. manage.py test --exclude=auth 
--exclude=gis.GEOSTest --exclude=sessions-custom-bleh 
--exclude=tastypie-custom-tag-blah
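
To sketch what the marking piece could look like on the unittest side -- 
everything below is hypothetical, neither the tag() helper nor the runner-side 
filtering exists anywhere today:

    import unittest

    def tag(*names):
        # Hypothetical helper: record a set of tags on a test class.
        def decorator(cls):
            cls.tags = set(names) | set(getattr(cls, 'tags', ()))
            return cls
        return decorator

    @tag('smoke', 'auth')
    class LoginSmokeTests(unittest.TestCase):
        def test_login_page(self):
            self.assertTrue(True)  # placeholder body

    def exclude_tags(suite, excluded):
        # The kind of filtering a runner could apply for a hypothetical
        # "manage.py test --exclude=...": rebuild the suite, dropping any
        # test whose class carries an excluded tag.
        kept = unittest.TestSuite()
        for test in suite:
            if isinstance(test, unittest.TestSuite):
                kept.addTest(exclude_tags(test, excluded))
            elif not (set(getattr(test, 'tags', ())) & set(excluded)):
                kept.addTest(test)
        return kept

The automatic app-name and TestCase-name tags from the second point would just 
be added to the same set by the test loader.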

Hopefully this could be a backwards-compatible, yet flexible way of creating 
custom test suites that make sense in the specific context of any project.

My 2 cents :)

[1] http://pytest.org/latest/example/markers.html#mark-examples
