I have some model tests that use a PostgreSQL database set up just for
testing. These tests run slowly, so I did some digging and discovered
that between every single test, the system drops every table and then
recreates them. This is really expensive for me because I have about
80 tables with lots of indexes and constraints.

My tests are all subclasses of testutil.DBTest and I have a bunch of
methods defined like "test_a" and "test_b".

I can understand wiping out all the tables at the beginning of a test,
but between each test seems expensive.

I defined my own tearDown method that does nothing, and now my tests
run much faster.
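For reference, here's roughly what I mean, as a minimal sketch. The `DBTest` base class below is a hypothetical stand-in for testutil.DBTest (I'm only assuming its tearDown is what drops/recreates the tables); the subclass just overrides tearDown with a no-op:

```python
import unittest

class DBTest(unittest.TestCase):
    # Hypothetical stand-in for testutil.DBTest: in the real class,
    # tearDown drops and recreates every table after each test method.
    def tearDown(self):
        pass  # imagine: drop and recreate ~80 tables here

class MyModelTest(DBTest):
    def tearDown(self):
        # No-op override: skip the per-test drop/recreate so the
        # schema survives between test_a and test_b.
        pass

    def test_a(self):
        self.assertTrue(True)

    def test_b(self):
        self.assertTrue(True)
```

Of course this means any rows a test leaves behind are visible to later tests, so tests have to clean up after themselves or tolerate leftover data.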

I'm curious about why testutil was designed this way.