On Wed, Jul 1, 2009 at 7:01 PM, Kevin Kubasik<[email protected]> wrote:
>
>
> On Mon, Jun 29, 2009 at 10:20 AM, Russell Keith-Magee
> <[email protected]> wrote:
>>
>> On Sat, Jun 27, 2009 at 2:22 PM, Kevin Kubasik<[email protected]> wrote:
>> > Should windmill tests default to non-isolated/non-transactional DB
>> > behavior?
>> > Basically, we are providing the means for functional tests; these should
>> > be tests like 'Register and Complete a Profile', then 'Edit Profile'. We
>> > don't want this as one massive test, and it seems like that would be the
>> > expected behavior most of the time; still allowing for the option of
>> > specific tests being run in isolation seems like the best move. However,
>> > this could be confusing, or just bad practice, so I wanted to get some
>> > feedback.
>>
>> You need to clarify your terms here - when you say "in isolation", do
>> you mean in the sense that the effects of test 1 shouldn't affect test
>> 2 (i.e., the basic Unit Test premise), or are you referring to the
>> transactional testing framework that has been introduced for Django
>> v1.1? What are you trying to isolate from what?
>
> Sorry, yes I was referring to the transactional framework.

Ok - so you mean the transactional framework, but what problem are you
referring to? I don't see how this relates to your original
question/feedback request.

A Windmill test exists to test views. Views will often (but not
always) use transactions. A Windmill test suite will contain multiple
view tests. People will want to invoke individual tests, as well as
invoking the entire suite and getting a report. What exactly is it
that you need feedback on?

>> > What is the general interest in test-only models as a public API? The
>> > mechanics of it have been worked out for the regression suite, but the
>> > debate falls to one of the following systems:
>> >
>> >  * A class used instead of models.Model (models.TestModel)
>> >  * A module in the app (test_models.py)
>> >  * Similar to fixtures (a property on tests)
>> >  * A settings option
>>
>> It's not entirely obvious to me what these alternatives mean. You're
>> describing a relatively complex feature, yet your explanation of four
>> options doesn't dig much deeper than four words in a parenthetical
>> comment. That isn't much to base a design judgement upon.
>>
>> Here are my expectations as a potential end user of this feature:
>>
>>  * I should be able to define a test model in exactly the same way as
>> any other model (i.e., subclassing models.Model)
>>
>>  * Test models shouldn't be defined in the same module as application
>> models. Putting the test models somewhere in the tests namespace
>> (e.g., myapp/tests/models.py) would make some sort of sense to me, but
>> I'm open to other suggestions.
>>
>>  * There must be sufficient flexibility so that I can have different
>> sets of models for different tests. For example, the admin app should
>> be testing behavior when there are no models defined. It should also
>> test behavior when there are N models defined. These two test
>> conditions cannot co-exist if there is a single models file and all
>> models in that file are automatically loaded as part of the test app.
>
> My thought on best solving this is a property (similar to how we use
> 'fixtures') that, for each test, points to any model modules that should be
> explicitly loaded. Thoughts?

Again - I think you've given a one line response to a problem that
isn't that simple. You need to explain your thoughts. You need to
elaborate on your plans. I can imagine any number of ways that "a
property" could be used to solve this problem. What do you have in
mind?
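
For instance, is it something along these lines? This is a minimal
sketch only - the 'test_models' attribute and its semantics here are my
guesses, not anything that exists today:

    from django.test import TestCase

    class ProfileTests(TestCase):
        # Hypothetical attribute, by analogy with 'fixtures': a list of
        # modules whose models should be loaded into the app cache for
        # this test case only.
        test_models = ['myapp.tests.profile_models']

        def test_edit_profile(self):
            pass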

>>  * The test models should appear to be part of a separate test
>> application - i.e., if I have a test model called Foo in myapp, it
>> should be myapp_test.Foo (or something similar), not myapp.Foo.
>
> I hadn't planned on doing this; do you have a use case in which myapp.Foo
> would cause problems? I'm not entirely sure I understand why this would be
> ideal.

Think of what it is that we could be testing. For example:

 * Admin needs to test layouts when there are multiple applications.
 * Schema evolution projects need to test migrations when there are
cross-app dependencies.

These are just two examples - it shouldn't be too hard to think of
others. My point is that contrib.admin is a single application, and it
has models of its own. You can't do a comprehensive test of
contrib.admin by putting all the test models in the same namespace -
you need to be able to define multiple test app namespaces within the
contrib.admin test suite.
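
To make that concrete, a hypothetical layout (the names are invented
for illustration) might be:

/contrib/admin/tests
   __init__.py
   test_app1/
       __init__.py
       models.py
   test_app2/
       __init__.py
       models.py

where the models in test_app1 and test_app2 register as two separate
applications in the app cache, rather than all landing in admin's own
namespace.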

>>  * The appropriate housekeeping should be performed to ensure that app
>> caches are flushed/purged at the end of each test so that when the
>> second test runs, it can't accidentally find out about a model that
>> should only be present for the first test.
>
> This is a tough(er) problem, since my initial approach (flush the entire app
> cache after each test and force a call to _populate()) made things
> unbearably slow. My main issue has been trying to determine behavior based
> on changes to the AppCache - are there any docs for the AppCache that I
> might be missing?

Nope :-)

Unfortunately, this is one of those internal areas that hasn't been
fully documented. The closest we come to any form of documentation is
"documentation by test" - that is, the test suite expects that the app
cache behaves the way it does, so if tests start failing, you've
probably broken something important. Not very helpful, I know, but
that's the way it is.
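
For what it's worth, the housekeeping I have in mind is roughly the
following - a sketch only, poking at internal attributes of
django.db.models.loading as they stand in 1.1, which may change without
warning:

    from django.db.models.loading import cache

    class AppCacheRestoringMixin(object):
        def setUp(self):
            # Snapshot the app cache before the test loads its own models.
            self._old_app_store = cache.app_store.copy()
            self._old_app_models = cache.app_models.copy()

        def tearDown(self):
            # Restore the snapshot, so the next test can't accidentally
            # find a model that should only exist for this one.
            cache.app_store = self._old_app_store
            cache.app_models = self._old_app_models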

> Otherwise, it looks like manual manipulation of the cache is the key. I have
> just checked in a super-alpha sample with me messing around a bit, but usage
> is pretty straightforward:
>
>     from django.test import TransactionTestCase
>
>     class TestMyViews(TransactionTestCase):
>         test_models = ['test_models']
>
>         def testIndexPageView(self):
>             # Here you'd exercise the view using the test ``Client``, e.g.:
>             response = self.client.get('/')
>             self.assertEqual(response.status_code, 200)
>
> ``test_models.py`` has the models declared for use; there is no limit on the
> number of modules that can be loaded for one test. If a user wants different
> models for different tests, they just declare several modules.

Ok - so the idea here is that you have:
/myapp
   __init__.py
   models.py
   tests.py
   test_models.py
   test_models2.py

and then the TestMyViews test case can assume the existence of the
'test_models' app in the app cache? Some quick queries:

 * What happens if I want to put in some namespace modules (e.g., a
tests directory)? How does test_models resolve in this case?

 * How does this integrate with tests that depend on non-test
applications? For example, admin requires auth, so the test case should
be able to specify contrib.auth as a test dependency (see the sketch
below).
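
One purely hypothetical shape that dependency declaration could take -
neither attribute exists today, and both names are mine:

    from django.test import TransactionTestCase

    class TestAdminViews(TransactionTestCase):
        # Speculative attributes: modules of test-only models, plus
        # real applications the tests depend on.
        test_models = ['tests.test_app1.models']
        test_apps = ['django.contrib.auth']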

>> I'm also a little confused as to why this decision is even required.
>> My understanding was that determining code coverage is one problem;
>> starting a Windmill test was a separate problem. As I understood it,
>> both features were being layered over the top of the standard unittest
>> framework, so if you wanted to determine code coverage of a Windmill
>> test, it would just be a matter of turning on the coverage flag on
>> your django.test.WindmillTest instance. Have I missed something
>> important here?
>
> Somewhat: WindmillTests are special cases, and do not extend unittest. This
> is for two main reasons:
>
>  * Windmill is very tightly coupled with functest, and functest doesn't
> play well with unittest.
>  * Running windmill tests from Django's unittest runner presents a few
> problems, including proper reporting of test success/failure and the
> ability to filter when windmill tests are run.
>
> Since windmill tests are very slow, and represent a special case, I have
> instead opted for a special runner which only runs windmill tests. This
> also means that the specialized error-casing, threading environment and
> transaction behavior don't pollute the django TestCase.
>
> While it's not ideal, it is more flexible, and more in line with how I
> imagine most people using windmill tests. I am planning on writing a twill
> runner as well, to extend and help me abstract the windmill runner.
>
> I am open to a design discussion regarding how Windmill tests are
> integrated into django. Basically, there are two routes:
>
>  1. A specialized runner locates windmilltests directories and runs them
> with functest.
>  2. A specialized subclass of TestCase, which starts the windmill runner
> and loads tests from windmilltests.
>
> As mentioned above, I prefer option 1 because of the specialized hacks it
> takes to run windmill tests, and the performance hits we take when
> bringing the background server up and down.

Ok, that's a little clearer. I agree that the runner approach is
better. Is there any reason that the code for invoking and reporting
coverage couldn't be factored out so it could be shared between the
two runners?
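
Something like this, perhaps - a minimal sketch assuming the
third-party coverage.py package (3.x API); the helper name is
illustrative, not existing code:

    import coverage

    def run_with_coverage(run_tests, *args, **kwargs):
        # Shared wrapper: either runner calls its own test function
        # through this when the coverage flag is set.
        cov = coverage.coverage()
        cov.start()
        try:
            return run_tests(*args, **kwargs)
        finally:
            cov.stop()
            cov.report()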

Russ %-)
