On Mon, Jun 29, 2009 at 10:20 AM, Russell Keith-Magee <
[email protected]> wrote:

>
> On Sat, Jun 27, 2009 at 2:22 PM, Kevin Kubasik<[email protected]> wrote:
> > Should windmill tests default to non-isolated/non-transactional DB
> behavior?
> > Basically, we are providing the means for functional tests, these should
> be
> > tests like 'Register and Complete a Profile', then 'Edit Profile'. We
> don't
> > want this as one massive test, and it seems like that would be the
> expected
> > behavior most of the time, and still allowing for the option of specific
> > tests being run in isolation seems like the best move. However, this
> could
> > be confusing, or just bad practice, so I wanted to get some feedback.
>
> You need to clarify your terms here - when you say "in isolation", do
> you mean in the sense that the effects of test 1 shouldn't affect test
> 2 (i.e., the basic Unit Test premise), or are you referring to the
> transactional testing framework that has been introduced for Django
> v1.1? What are you trying to isolate from what?

Sorry, yes I was referring to the transactional framework.

>
>
> > What is the general interest in test-only models as a public api? The
> > mechanics of it have been worked out for the regression suite, but the
> > debate falls to one of the following systems.
> >
> > A class used instead of db.Model (db.TestModel)
> > A module in the app (test_models.py)
> > Similar to fixtures (a property on tests)
> > A settings option
>
> It's not entirely obvious to me what these alternatives mean. You're
> describing a relatively complex feature, yet your explanation of four
> options doesn't dig much deeper than 4 words in a parenthetical
> comment. That isn't much to base a design judgement upon.
>
> Here are my expectations as a potential end user of this feature:
>
>  * I should be able to define a test model in exactly the same way as
> any other model (i.e., subclassing models.Model)
>
>  * Test models shouldn't be defined in the same module as application
> models. Putting the test models somewhere in the tests namespace
> (e.g., myapp/tests/models.py) would make some sort of sense to me, but
> I'm open to other suggestions.
>
>  * There must be sufficient flexibility so that I can have different
> sets of models for different tests. For example, the admin app should
> be testing behavior when there are no models defined. It should also
> test behavior when there are N models defined. These two test
> conditions cannot co-exist if there is a single models file and all
> models in that file are automatically loaded as part of the test app.


My thought on how best to solve this is a property (similar to how we use
'fixtures') that, for each test, points to any model modules that should be
explicitly loaded. Thoughts?


>
>  * The test models should appear to be part of a separate test
> application - i.e., if I have a test model called Foo in myapp, it
> should be myapp_test.Foo (or something similar), not myapp.Foo.

I hadn't planned on doing this; do you have a use case in which myapp.Foo
would cause problems? I'm not entirely sure I understand why this would be
ideal.

>
>
>  * The appropriate housekeeping should be performed to ensure that app
> caches are flushed/purged at the end of each test so that when the
> second test runs, it can't accidentally find out about a model that
> should only be present for the first test.
>

This is a tough(er) problem, since my initial approach (flushing the entire
app cache after each test and forcing a call to _populate()) made things
unbearably slow. My main issue has been trying to determine behavior based
on changes to the AppCache; are there any docs for the AppCache that I might
be missing?
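The snapshot/restore idea could be sketched like this — a plain dict stands
in for the app cache here, since the real AppCache internals are Django-specific
and undocumented, and the name `isolated_registry` is just for illustration:

```python
from contextlib import contextmanager

@contextmanager
def isolated_registry(registry):
    """Snapshot a model registry, let a test mutate it, then restore it.

    `registry` is a plain dict standing in for Django's internal app
    cache; the real AppCache would need its own snapshot/restore logic.
    """
    snapshot = dict(registry)
    try:
        yield registry
    finally:
        # Undo anything the test registered, cheaply, without a full
        # flush-and-repopulate cycle.
        registry.clear()
        registry.update(snapshot)

# A "test" registers a temporary model, which disappears afterwards.
models = {'myapp.Foo': object()}
with isolated_registry(models) as reg:
    reg['myapp_test.TempModel'] = object()
    assert 'myapp_test.TempModel' in reg
assert 'myapp_test.TempModel' not in models
```

The point is that restoring a saved snapshot avoids the cost of flushing
everything and re-running _populate() after every test.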

Otherwise, it looks like manual manipulation of the cache is the key. I have
just checked in a super-alpha sample of me messing around a bit, but usage
is pretty straightforward:

    from django.test import TransactionTestCase

    class TestMyViews(TransactionTestCase):
        # Model modules to load for this test case (like 'fixtures').
        test_models = ['test_models']

        def testIndexPageView(self):
            # Here you'd test your view using ``Client``.
            pass

Here, ``test_models.py`` has models declared for use. There is no limit on
the number of modules that can be loaded for one test; if a user wants
different models for different tests, they just have to declare several
modules.
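To make the "different models for different tests" case concrete (including
Russ's zero-models admin scenario), the runner would just read the per-class
attribute. A standalone sketch — `TransactionTestCase` is replaced by a stub
so the snippet runs outside Django, and the module names are made up:

```python
class TransactionTestCase:
    """Stand-in for django.test.TransactionTestCase."""
    test_models = []  # default: no extra model modules

class TestWithModels(TransactionTestCase):
    # These tests see models from a hypothetical myapp/test_models.py.
    test_models = ['test_models']

class TestWithoutModels(TransactionTestCase):
    # The "admin with zero models" scenario: nothing extra is loaded.
    test_models = []

def modules_to_load(test_cls):
    """What the runner would hand to its model-loading machinery."""
    return list(getattr(test_cls, 'test_models', []))

assert modules_to_load(TestWithModels) == ['test_models']
assert modules_to_load(TestWithoutModels) == []
```

Both test conditions can then coexist in one app, since each class declares
exactly the model modules it needs.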




>
> I'm open to almost any design suggestion that enables this use case.
>
> > I am assuming that code coverage of windmill tests isn't that useful of a
> > number, given the specialized execution paths etc. But I wanted to double
> > check that people wouldn't be surprised by that.
>
> I wouldn't rule out the proposition that someone might be interested
> in this number.
>
Certainly, and it is easy to get; however, I was referring to what the
--coverage flag does by default in the runtests.py script.

>
> I'm also a little confused as to why this decision is even required.
> My understanding was that determining code coverage is one problem;
> starting a Windmill test was a separate problem. As I understood it,
> both features were being layered over the top of the standard UnitTest
> framework, so if you wanted to determine code coverage of a Windmill
> test, it would just be a matter of turning on the coverage flag on
> your django.test.WindmillTest instance. Have I missed something
> important here?

Somewhat. WindmillTests are special cases, and do not extend unittest. This
is for two main reasons:

   1. Windmill is very tightly coupled with functests, and functests don't
   play well with unittests.
   2. Running windmill tests from Django's unittest runner presents a few
   problems, including proper reporting of test success/failure and the ability
   to filter when windmill tests are run.

Since windmill tests are very slow and represent a special case, I have
instead opted for a special runner which only runs windmill tests. This also
means that the specialized error handling, threading environment and
transaction behavior don't pollute the django TestCase.

While it's not ideal, it is more flexible, and more in line with how I
imagine most people using windmill tests. I am planning on writing a twill
runner as well, to extend and help me abstract the windmill runner.

I am open to a design discussion regarding how Windmill tests are integrated
into Django. Basically, there are two routes:

   1. A specialized runner locates windmilltests directories and runs them
   with functests.
   2. A specialized subclass of TestCase, which starts the windmill runner
   and loads tests from windmilltests.

As mentioned above, I prefer option 1 because of the specialized hacks it
takes to run windmill tests, and the performance hits we take when bringing
the background server up and down.
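Route 1 amounts to a discovery pass that is entirely separate from the
unittest runner. Roughly — the function name and the `windmilltests`
directory convention are illustrative, not committed API:

```python
import os

def find_windmill_suites(project_root):
    """Collect directories named 'windmilltests' so a dedicated runner
    can execute them apart from the regular unittest suite, bringing the
    background server up once for the whole batch."""
    suites = []
    for dirpath, dirnames, filenames in os.walk(project_root):
        if os.path.basename(dirpath) == 'windmilltests':
            suites.append(dirpath)
    return suites
```

Because the runner owns discovery, it can also skip windmill tests entirely
unless explicitly asked for them, which addresses the filtering problem above.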



>
> Yours,
> Russ Magee %-)
>
> >
>


-- 
Kevin Kubasik
http://kubasik.net/blog

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django developers" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/django-developers?hl=en
-~----------~----~----~----~------~----~------~--~---
