On Apr 20, 2009, at 10:23 PM, C. Titus Brown wrote:

>
> Then someone (probably the list ;) will have to work out the proper
> division of efforts between nose and runtest -- a good default is that
> anything that is 100% supported in nose can be left to nose to run,
> and anything else we need can be put in runtest.
>
> Thoughts / comments?

Let me see if I understand.  Being able to test with nose would give  
us the automated test exclusion I wanted, for free.  And since nose is  
compatible with unittest style test cases, anyone who wants to / needs  
to can continue to run the tests using runtest / unittest?
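If that's right, then a plain unittest-style case like this sketch would run unchanged under both runners, and the exclusion would come from nose's command line (I believe nose takes an --exclude / -e regex option, but I'm going from memory, so worth double-checking):

```python
import unittest

class TestExample(unittest.TestCase):
    """Plain unittest-style case: nose discovers and runs these as-is,
    while runtest / unittest users can keep running them the old way."""

    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == '__main__':
    # unchanged unittest entry point for non-nose users
    unittest.main(exit=False)
```

If so, something like `nosetests -e 'slow'` could skip tests matching a regex without any changes to the test modules themselves.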


>
>
> ->  Suggestions:
> ->
> ->    * does runtest have a "make clean" option that ensures no side-
> -> effects from previous test runs are possible (deletes any leftover
> -> files)?  Also, in my experience, runtest *never* cleans up the reams
> -> of files it saves in tests/tempdir, so "make clean" ought to delete
> -> those as well.
> ->
> ->    * provide an option to strictly insulate each test from any side
> -> effects.  For example, each test should have its own testutil.TEMPDIR,
> -> and runtest would replace testutil.datafile() calls with
> -> testutil.tempdatafile(copyData=True) calls.  Right now I have to do a
> -> bunch of work by hand to assess whether a test error is being caused
> -> by side effects, but that could be automated quite trivially.
>
> Interesting.  There's a tension between some of these ideas and Istvan's
> goal of keeping things fairly simple for post-test-run debugging... but
> I don't want to start that discussion again until we've dealt with 0.8 ;)

Why can't we implement these as command-line options, as I suggested?
That would not change the default runtest behavior, so I don't see how
it conflicts with Istvan's preference for keeping the default behavior
simple.
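A minimal sketch of what I mean by opt-in flags (the option names, and the assumption that runtest's shared scratch space lives at tests/tempdir, are just illustrative; runtest's real argument handling would differ):

```python
import optparse
import os
import shutil
import tempfile

def parse_options(argv):
    """Hypothetical runtest flags; both default to off, so the
    default behavior stays exactly as it is today."""
    parser = optparse.OptionParser()
    parser.add_option('--clean', action='store_true', default=False,
                      help='delete leftover files from previous runs '
                           '(e.g. tests/tempdir) before testing')
    parser.add_option('--insulate', action='store_true', default=False,
                      help='give every test its own private tempdir')
    return parser.parse_args(argv)

def make_test_tempdir(options, base='tests/tempdir'):
    """Default: the shared tempdir as now; --insulate: a fresh private
    directory per test; --clean: wipe the shared one first."""
    if options.insulate:
        return tempfile.mkdtemp(prefix='runtest-')
    if options.clean and os.path.isdir(base):
        shutil.rmtree(base)          # "make clean" behaviour, opt-in only
    if not os.path.isdir(base):
        os.makedirs(base)
    return base
```

With --insulate, runtest could hand each test its own testutil.TEMPDIR and tear it down afterwards; without either flag, nothing changes.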

>
>
> ->    * it would be awesome if runtest provided some way of helping to
> -> track down order sensitivity effects to their cause, sort of like git
> -> bisect.  For example, if I define a Python function that can check
> -> whether the "bad side-effect condition has occurred yet", I'd like
> -> runtest to run it for me before and after each test leading up to the
> -> test that actually fails, and report to me which test actually caused
> -> the bad side-effect.
>
> I don't know much about git bisect; can you send me some useful docs?
> This makes it look like it's fairly simple to use in non-automated
> fashion:
>
> http://www.kernel.org/doc/local/git-quick.html#what

All I meant is that git bisect makes it easy to search for the cause of
a problem between any two reference points, automating a process that
developers used to have to carry out through many manual steps.  It
struck me by analogy that a really trivial hook in runtest (i.e. run a
user-supplied side-effect-detector function after each test, its
setUp() and its tearDown()) would automate what I had to do rather
arduously by hand on Friday to track down the cause of a side-effect.

- Chris

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"pygr-dev" group.
To post to this group, send email to pygr-dev@googlegroups.com
To unsubscribe from this group, send email to 
pygr-dev+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/pygr-dev?hl=en
-~----------~----~----~----~------~----~------~--~---