Just a few observations that I've had when running the test suite that may be relevant.
- There are lots of different test modules that may be relevant to a particular change, and some may not seem relevant until you run the entire suite.
- bug* modules are hard to classify without reading the tests or the ticket.
- *_regress modules seem too complex, and should be folded back into the main test module.
- Creating a new test module is not as easy as it could be: basically, copy another test module and search/replace the fixtures (if they exist).
- Setup/teardown can be quite expensive across a large number of tests. Perhaps individual tests could be longer (which is bad practice, but practical with respect to time).
- There are 13 individual admin test modules. Perhaps it'd be nicer to have them all under a single admin module, with separate TestCases within. The same applies to other systems, like the ORM.
- It'd be nice if test modules could be parallelised, to improve the total run time of the test suite.

I think that restructuring the modules could take the place of tagging or classification in a naive kind of way, but it does not allow marking a test as relevant to two systems. For example, the checks framework has tests relating to the admin, which should be run alongside any admin tests, but wouldn't necessarily live in the admin module.

Would pytest help with any of the issues observed with the current test suite?

- Josh

On Sunday, 16 February 2014 02:17:37 UTC+11, Chris Wilson wrote:
> Hi all,
>
> On Sat, 15 Feb 2014, Russell Keith-Magee wrote:
>
> > One of the improvements I see is classification of test cases.
> > Classifying them into categories (read multiple categories) would make
> > it easier for users/developers/maintainers to run them. Basis of
> > classification, etc. is what I am still thinking on. But surely
> > classification will help in deciding which all test cases to run.
> > For example - just running third-party app test cases, or just run my test
> > cases, or those which check part ABC of my project, or just those with
> > priority set to important.
> [...]
> > I would envisage that this would be a declarative process - in code,
> > marking a specific test as a "system" test or an "integration" test (or
> > whatever other categories we develop).
>
> It just occurred to me that most classification systems are completely
> arbitrary and therefore not very useful. What's a "system" test and how
> would I know whether I need to run it?
>
> But some ideas that I can think of that might be useful are:
>
> * Automatically building test coverage maps for each test, and reversing
> them, so we can see which tests touch the line(s) of code that we just
> modified, and rerun them easily. A good smoke test to run while modifying
> part of Django.
>
> * Categorising by imports: run all tests that import django.db or
> django.core.http for example. Not perfect - some tests may touch facilities
> without needing to actually import them - but it would be quick and cheap.
>
> * Profile and speed up the test suite, so that we can run all tests more
> quickly, especially with databases like postgres where it takes an hour to
> run them all.
>
> Cheers, Chris.
> --
> Aptivate | http://www.aptivate.org | Phone: +44 1223 967 838
> Citylife House, Sturton Street, Cambridge, CB1 2QF, UK
>
> Aptivate is a not-for-profit company registered in England and Wales
> with company number 04980791.
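Chris's "categorising by imports" idea can be prototyped with nothing but the standard library: parse each test module with `ast` and record what it imports, without ever executing the module. This is only a sketch of the idea - the function names, the `test*.py` glob, and the directory layout are assumptions, not anything that exists in Django's test runner.

```python
"""Sketch: statically find test modules that import a given package
(e.g. django.db), per Chris's "categorise by imports" suggestion.
All names here are illustrative, not an existing API."""
import ast
from pathlib import Path


def imported_packages(source: str) -> set[str]:
    """Return the dotted names imported by a module's source code."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names


def modules_importing(test_dir: Path, package: str) -> list[str]:
    """List test modules under test_dir importing `package` or a submodule."""
    hits = []
    for path in sorted(test_dir.rglob("test*.py")):
        names = imported_packages(path.read_text())
        if any(n == package or n.startswith(package + ".") for n in names):
            hits.append(str(path))
    return hits
```

As Chris notes, this misses tests that touch a facility indirectly (through fixtures or settings, say), but as a cheap first-pass filter it needs no instrumentation at all.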
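On the pytest question: its markers (`@pytest.mark.admin`, selected with `pytest -m admin`) plus pytest-xdist (`-n auto`) would cover both the multi-system classification and the parallelisation wishes. As a framework-neutral illustration of the declarative marking Russell describes, here is a stdlib-only sketch - the `tags` decorator, the `_tags` attribute, and `filter_by_tag` are hypothetical names invented for this example, not an existing Django or unittest API.

```python
"""Sketch: declarative test classification with plain unittest.
A test can carry several labels, so a checks-framework test can be
marked relevant to both "admin" and "checks" at once."""
import unittest


def tags(*labels):
    """Attach classification labels to a test method (hypothetical helper)."""
    def decorator(func):
        func._tags = set(labels)
        return func
    return decorator


class AdminChecksTests(unittest.TestCase):
    @tags("admin", "checks")  # relevant to two systems at once
    def test_admin_checks(self):
        self.assertTrue(True)

    @tags("orm")
    def test_orm_only(self):
        self.assertTrue(True)


def filter_by_tag(suite, label):
    """Return a new suite containing only tests carrying the given label."""
    picked = unittest.TestSuite()
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            picked.addTests(filter_by_tag(test, label))
        else:
            method = getattr(test, test._testMethodName)
            if label in getattr(method, "_tags", set()):
                picked.addTest(test)
    return picked
```

Because labels are sets rather than module locations, running "everything admin-related" would pick up the checks test above even though it doesn't live in an admin module - which is exactly the overlap Josh's example describes.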