On Thu, Oct 6, 2011 at 11:03 AM, Benjamin Root <ben.r...@ou.edu> wrote:
> That's valid.  I guess I am just wondering if there is a decent error
> message to the user explaining that the test could not proceed.

Rig the test runner to properly skip them instead of failing?  The
test data should be considered a dependency for those tests, and
absent the dependency, users simply get fewer tests, but not a ton
of failures.
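
For instance, a skip condition on the data directory is enough.  A
rough sketch with the stdlib unittest decorators -- the directory name
here is made up, and with nose you could equally raise SkipTest from
inside the test:

    import os
    import unittest

    # Hypothetical location of the baseline test data -- adjust to
    # wherever the data actually gets installed.
    DATA_DIR = os.path.join(os.path.dirname(__file__), 'baseline_images')

    @unittest.skipUnless(os.path.isdir(DATA_DIR),
                         'test data not installed; skipping comparisons')
    class TestImageComparison(unittest.TestCase):
        def test_simple_plot(self):
            # ...compare a rendered figure against the stored baseline...
            pass

With that in place, a source checkout runs everything and a bare
install just reports those tests as skipped, with the reason string
telling the user why.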

Not saying this should be done *now*, but I think in general letting
users run the test suite in their own environments is useful, even if
parts are skipped for some reason.  You never know when that will
uncover real failures...

That's the approach we take in ipython: we have a lot of tests that
depend on various tools, environments, or operating systems, and those
simply get skipped.  In fact, there is no way to run the *entire*
ipython test suite in one go, since there are mutually exclusive tests
(things that only run on OSX or on Windows, say).  So inevitably,
every test run is *always* partial.  Once you think about it that way,
this is just one more dependency to handle like any other.
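
The platform-dependent case is the same mechanism.  ipython ships its
own skip decorators for this, but the stdlib equivalents below show
the idea -- on any given machine at most one of these two can ever run:

    import sys
    import unittest

    class TestPlatformSpecific(unittest.TestCase):

        @unittest.skipUnless(sys.platform == 'darwin', 'OSX-only test')
        def test_osx_behavior(self):
            pass

        @unittest.skipUnless(sys.platform.startswith('win'),
                             'Windows-only test')
        def test_windows_behavior(self):
            pass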

Cheers,

f
