On Mon, Jun 29, 2009 at 11:20 AM, Russell Keith-Magee <[email protected]> wrote:
> On Sat, Jun 27, 2009 at 2:22 PM, Kevin Kubasik <[email protected]> wrote:
> > Should windmill tests default to non-isolated/non-transactional DB
> > behavior? Basically, we are providing the means for functional tests;
> > these should be tests like 'Register and Complete a Profile', then
> > 'Edit Profile'. We don't want this as one massive test, and it seems
> > like that would be the expected behavior most of the time, and still
> > allowing for the option of specific tests being run in isolation seems
> > like the best move. However, this could be confusing, or just bad
> > practice, so I wanted to get some feedback.
>
> You need to clarify your terms here - when you say "in isolation", do
> you mean in the sense that the effects of test 1 shouldn't affect test
> 2 (i.e., the basic Unit Test premise), or are you referring to the
> transactional testing framework that has been introduced for Django
> v1.1? What are you trying to isolate from what?
>
> > What is the general interest in test-only models as a public API? The
> > mechanics of it have been worked out for the regression suite, but the
> > debate falls to one of the following options:
> >
> >  * A class used instead of db.Model (db.TestModel)
> >  * A module in the app (test_models.py)
> >  * Similar to fixtures (a property on tests)
> >  * A settings option
>
> It's not entirely obvious to me what these alternatives mean. You're
> describing a relatively complex feature, yet your explanation of the four
> options doesn't dig much deeper than four words in a parenthetical
> comment. That isn't much to base a design judgement upon.
>
> Here are my expectations as a potential end user of this feature:
>
>  * I should be able to define a test model in exactly the same way as
>    any other model (i.e., by subclassing models.Model).
>
>  * Test models shouldn't be defined in the same module as application
>    models.
>    Putting the test models somewhere in the tests namespace
>    (e.g., myapp/tests/models.py) would make some sort of sense to me,
>    but I'm open to other suggestions.
>
>  * There must be sufficient flexibility so that I can have different
>    sets of models for different tests. For example, the admin app should
>    be testing behavior when there are no models defined. It should also
>    test behavior when there are N models defined. These two test
>    conditions cannot co-exist if there is a single models file and all
>    models in that file are automatically loaded as part of the test app.
>
>  * The test models should appear to be part of a separate test
>    application - i.e., if I have a test model called Foo in myapp, it
>    should be myapp_test.Foo (or something similar), not myapp.Foo.
>
>  * The appropriate housekeeping should be performed to ensure that app
>    caches are flushed/purged at the end of each test, so that when the
>    second test runs it can't accidentally find out about a model that
>    should only be present for the first test.
>
> I'm open to almost any design suggestion that enables this use case.
>
> > I am assuming that code coverage of windmill tests isn't that useful a
> > number, given the specialized execution paths etc. But I wanted to
> > double-check that people wouldn't be surprised by that.
>
> I wouldn't rule out the proposition that someone might be interested
> in this number.
>
> I'm also a little confused as to why this decision is even required.
> My understanding was that determining code coverage is one problem;
> starting a Windmill test is a separate problem. As I understood it,
> both features were being layered over the top of the standard UnitTest
> framework, so if you wanted to determine code coverage of a Windmill
> test, it would just be a matter of turning on the coverage flag on
> your django.test.WindmillTest instance. Have I missed something
> important here?
> Yours,
> Russ Magee %-)

Another thought just occurred to me. As part of my multi-db work I've had
to update the testing harness for syncing more than one DB. That work
itself is probably 100% orthogonal to what you're doing. However, Russ
mentioned subclasses of UnitTest, which reminded me of a problem I've
already hit: doing all of the setUp and tearDown work when multiple
databases aren't needed is wasteful, and can kill some of the speedups we
got from transactional test cases.

What I'd like to do is introduce a MultipleDatabaseTestCase, and only for
this would the syncing of all DBs be done. The issue here is we'd end up
with four classes: TestCase, TransactionalTestCase, MultiDBTestCase, and
MultiDBTransactionalTestCase. This is Bad (tm). If you're introducing
additional TestCase subclasses (like one for Windmill), this situation
would be further exacerbated. I'm not sure what the right solution to this
is, but I figured I'd put it out there.

Alex

--
"I disapprove of what you say, but I will defend to the death your right
to say it." -- Voltaire
"The people's good is the highest law." -- Cicero

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"Django developers" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/django-developers?hl=en
-~----------~----~----~----~------~----~------~--~---
