On Mon, 2007-08-13 at 21:28 -0700, Brian Harring wrote:
> On Mon, Aug 13, 2007 at 04:31:42PM -0500, Adrian Holovaty wrote:
> > So I was writing Django view unit tests and setting up fixtures
> > with sample data, and it hit me -- wouldn't it be useful if we made
> > it easy to run the Django development server with fixture data?
> >
> > I'm proposing a "--with-fixture" flag to django-admin.py, so that
> > you could do something like this:
> >
> >     django-admin.py runserver --with-fixture=mydata.json
> >
> > With this command, Django would:
> >
> >  * Create a new test database (following the TEST_DATABASE_NAME
> >    setting).
> >  * Import the fixture data into that fresh test database (just as
> >    the unit test framework does).
> >  * Change the DATABASE_NAME setting in memory to point to the test
> >    database.
> >  * Run the development server, pointing at the test database.
> >  * Delete the test database when the development server is stopped.
> >
> > The main benefit of this feature is that it would let developers
> > poke around their fixture data and, essentially, walk through unit
> > tests manually (in a Web browser rather than programmatically).
>
> unittests, or doctests? Walking through unittests is already viable;
> doctests, not really (or at least not even remotely pleasant).
>
> > In the future, the next step would be to keep track of any database
> > changes and optionally serialize them back into the fixture when
> > the server is stopped. This would let people add fixture data
> > through their Web application itself, whether it's the Django admin
> > site or something else. Now *that* would be cool and useful! But
> > I'm only proposing the first part of this for now.
>
> That one actually worries me a bit; personally, I view doctests as
> great for *verifying* documentation, but when used as API tests (as
> Django does), they have two failings, imo:
>
> 1) doctests are really a pita if you've grown accustomed to the
> capabilities of normal unittests;

And vice-versa. You're going to find that opinions vary quite a bit
here, from the "unittests are the new sliced bread" camp to the
"doctests are teh k00l" camp, to people who don't mind (or possibly
enjoy) either. Personally, I prefer doctests to the unittest framework
for many things, although unittests are invaluable in some cases
(client tests, for example, would often be hard to write as doctests),
so it's a matter of the right tool for the job, and some flexibility
is good to have.

Each system has its advantages and disadvantages. A couple of the
advantages of doctests are that they are easier to write, have less
setup overhead (so they often run faster as a whole) and are
self-descriptive. Working out the pertinent bits of some unittest
methods is often a little tricky, because you have to mentally filter
out the test-specific setup (which doesn't necessarily belong in setUp
or tearDown in a suite).

A couple of the disadvantages are that narrowing in on a specific line
is harder; that the environment stays around for the whole docstring,
so it sometimes inadvertently gets corrupted by something earlier; and
that the output is highly ASCII-based, so you can break error
reporting pretty easily. The first one is manageable by breaking
things into smaller blocks (see below). The second one is not that big
a problem until it bites you, and then you fix it. The last one is
because the doctest infrastructure in Python is Broken As Designed in
that respect, so there's not a lot we can do about it.
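To make the "easier to write and self-descriptive" point concrete,
here is the same pair of trivial checks written both ways. This is a
contrived illustration (add_error_class is made up for this mail, not
anything in Django):

    # The doctest version reads like documentation and needs no
    # scaffolding:
    def add_error_class(css_class):
        """
        Append " error" to a widget's CSS class string.

        >>> add_error_class('textinput')
        'textinput error'
        >>> add_error_class('')
        ' error'
        """
        return css_class + ' error'

    # The same checks as unittests: more boilerplate, but each check
    # is isolated and individually runnable.
    import unittest

    class AddErrorClassTests(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(add_error_class('textinput'),
                             'textinput error')

        def test_empty(self):
            self.assertEqual(add_error_class(''), ' error')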
> further, they grow unwieldy rather quickly- a good example would be
> regressiontests/forms/tests.py; I challenge anyone to try debugging
> a failure around line 2500 in that beast; sure, you can try chunking
> everything leading up to that point, but it still is a colossal pain
> in the ass to do so when compared to running

Fortunately, it's also not something you have to do that often
(compared to how often you run or read the tests). It's not
impossible, and that file is by far the worst example; many files are
a lot better than that. Bad unittest examples accrete, too, and
sometimes that's unavoidable. Usually, if you manage to trigger a
failure in forms/tests.py, it's fairly easy to read the error, check
your latest small change and narrow down what broke it. Not always,
but I'll maintain that the exceptions are rare (and, yes, I've had to
do it a lot, too -- but let's play the averages a bit).

We keep encouraging anybody who is interested to supply a patch that
lets you run a particular sub-test inside a test file (so, say,
forms.tests.firstBit); once that exists, it becomes worthwhile to
split the big string in forms.tests into half a dozen or so smaller
tests that can be run in isolation. At the moment, it's not that
useful to do: having them all in one file is handy, because it's fast
to read and search (we've only split out things unrelated to the main
form tests into other files in that directory). Readability,
maintainability and general runtime speed are other aspects of the
same area that we are trading off against here.
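For what it's worth, here is roughly what that splitting could look
like, using the doctest module's standard __test__ hook. This is a
sketch of the idea, not what an eventual patch would necessarily do;
"firstBit" and "secondBit" are made-up names:

    # regressiontests/forms/tests.py -- hypothetical layout
    import doctest

    firstBit = r"""
    >>> from django.newforms import CharField
    >>> CharField(max_length=10).clean('hello')
    u'hello'
    """

    secondBit = r"""
    >>> from django.newforms import CharField
    >>> CharField(required=True).clean('')
    Traceback (most recent call last):
    ...
    ValidationError: [u'This field is required.']
    """

    # Doctest collectors (doctest.testmod included) pick up the
    # entries in this dict as individually named tests.
    __test__ = {
        'firstBit': firstBit,
        'secondBit': secondBit,
    }

    if __name__ == '__main__':
        doctest.testmod()

A nice side-effect is that each entry gets a fresh execution
environment, which also helps with the corrupted-by-something-earlier
problem I mentioned above.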
Regards,
Malcolm
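P.S. Back on Adrian's original --with-fixture idea: for anyone who
wants to experiment before any patch lands, the pieces mostly exist
already. Here is a very rough, untested sketch of the five steps he
lists; the helper locations and names below are approximate (they
have a habit of moving around on trunk), so treat the whole thing as
an assumption-laden illustration rather than working code:

    # Very rough sketch, not a real patch. Helper locations are
    # approximate and version-dependent.
    from django.conf import settings
    from django.test.utils import create_test_db, destroy_test_db

    def runserver_with_fixture(fixture):
        old_name = settings.DATABASE_NAME
        # Create the test database (honours TEST_DATABASE_NAME) and
        # repoint settings.DATABASE_NAME at it.
        create_test_db(verbosity=1)
        try:
            # call_command is assumed here; on some versions you may
            # need to shell out to manage.py instead.
            from django.core.management import call_command
            # Import the fixture, just as the unit test framework does.
            call_command('loaddata', fixture)
            # Run the dev server against the test database; this
            # blocks until the server is stopped.
            call_command('runserver', use_reloader=False)
        finally:
            # Drop the test database and restore the old name.
            destroy_test_db(old_name, verbosity=1)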
--
Despite the cost of living, have you noticed how popular it remains?
http://www.pointy-stick.com/blog/