On Wed, Dec 22, 2010 at 02:27:09PM +0100, Alan Franzoni wrote:
> I've tried submitting this as a launchpad answer but it expired
> without further notice; I'm posting this here, I hope it's the right
> place to discuss this.

Probably.  More people read this list than Launchpad answers, I'm sure.

> I'm indirectly using zope.testrunner through zc.buildout
> zc.recipe.testrunner:

(One of my beefs with zope.testrunner is that I've no idea how to use it
without zc.recipe.testrunner, assuming that's even possible.)

> [testunits]
> recipe = zc.recipe.testrunner
> eggs = pydenji
> defaults = [ "--auto-color", "-vv", "--tests-pattern", "^test_.*" ]
> It works; but what happens if a test file matching the pattern has got
> a serious failure, let's say an import or syntax error ?

It is reported near the beginning of the output (highlighted in red, to
stand out), mentioned in the summary at the end to ensure you don't
miss it, and the test runner exits with a non-zero status code (I hope;
if it doesn't, that's a bug).

>  Ran 73 tests with 0 failures and 0 errors in 0.188 seconds.
> Tearing down left over layers:
>  Tear down zope.testrunner.layer.UnitTests in 0.000 seconds.
> Test-modules with import problems:
>  pydenji.test.test_cmdline
> I can't show you such things here, but the "73" is green, which is the
> colour for "all ok" - if any failure happens, the colors turn red,
> while the "test modules with import problems" are just a tiny line
> after that, and often gets overlooked.

That's an interesting perspective.

Note that even when there are failures, the number of tests and the
number of seconds are highlighted in green.  (The colours there are
mainly to make the numbers stand out so they're easier to notice in the
output.)

Perhaps it would make sense to increment the number of errors if there
are modules that cannot be imported.  The number of errors is
highlighted in red (unless it is 0), so that would give you a visual
clue if you missed the report near the beginning of the output, or
ignored the summary list at the end.
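A rough sketch of the idea (hypothetical code -- `summarize` is my own name, and the real zope.testrunner internals are organized differently): fold the count of unimportable test modules into the error total before rendering the summary line, so the red "N errors" figure reflects them.

```python
# Hypothetical sketch: count each test module that failed to import
# as one additional error in the summary line.

def summarize(tests_run, failures, errors, import_problems):
    """Return the summary line, treating every unimportable
    test module as an extra error."""
    total_errors = errors + len(import_problems)
    return "Ran %d tests with %d failures and %d errors." % (
        tests_run, failures, total_errors)

# With the output quoted above, the error count would no longer be 0:
print(summarize(73, 0, 0, ["pydenji.test.test_cmdline"]))
# -> Ran 73 tests with 0 failures and 1 errors.
```

That way the summary would turn red whenever any module could not be imported, even if every test that did run passed.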

> Also, a test file matching the pattern but which not defines any test
> is treated the very same way as a file with import problems, which is
> probably not what it want.

What's "it"?

> The issues I can find are:
> - I need to dig to the top in order to get the traceback; other
> frameworks, like twisted's own trial, print all the tracebacks at the
> bottom of the test run for easy debugging;

Having used other frameworks, I appreciate zope.testrunner's eagerness
to show me the traceback at once, so I can start examining an error
without having to stare at an F in the middle of a sea of dots and
wonder what it might be, while waiting 20 minutes for the rest of the
test suite to finish running.

Then again, I agree that having to scroll back to the first traceback
is a bit bothersome.  I don't think printing the tracebacks at the end
of the run would help when there are multiple tracebacks -- you'd want
the first one anyway, since the others are likely caused by it.  Also,
tracebacks tend to be long, requiring me to scroll quite a bit either
way.

Perhaps my experiences are coloured by working on Zope'ish code --
doctests (causing error cascades by default), deep function call nesting
causing long tracebacks, etc.

I see that zope.testrunner has finally acquired a -x (--stop-on-error)
option, which terminates the test run after the first failure.  That
might help, although it might not help with doctests and their error
cascades.

For myself I've ended up running the tests like this:

  bin/test -c 2>&1 | less -RFX

This means I can start reading the results from the top down, starting
with the first failure, without having to wait for the test suite to
run to completion.  (-R passes the colour escape sequences through, -F
makes less quit immediately if the output fits on one screen, and -X
keeps the output on the terminal after less exits.)

I sometimes wish zope.testrunner had a --pager option that would spawn
a pager on its output if and only if there were any errors.

> - test colors should not turn green if any test with import problem is
> around; maybe an import/syntax error should count as a generic error.

Maybe.  I'm feeling +0 about this.

> - while an import issue is a serious fact - meaning the test can't be
> run, and should be reported, a test module which does not define any
> test could just issue a warning - it could just be a placeholder test,
> or a not-yet-finished test case, and should not be a blocking issue.

An import error could be a placeholder as well.  What makes a module
with no tests different?  If you added a module, it's reasonable to
assume you added it for a reason, and what reason could there be other
than for it to have tests?

Adding a single placeholder test to assuage the test runner is not that
difficult.  Or you could simply ignore that error while you're working
on other tests.  It's not a _blocking_ issue, in my book, since it
doesn't abort your test run -- all the other tests continue to run.
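For illustration, a minimal placeholder module (the module and class names here are made up) that keeps the runner quiet until the real tests arrive:

```python
# test_cmdline.py -- hypothetical placeholder until real tests exist.
import unittest

class TestPlaceholder(unittest.TestCase):

    def test_placeholder(self):
        # TODO: replace with real tests for the cmdline module.
        self.assertTrue(True)
```

One trivially passing test is enough to keep the module off the "import problems" list while you work on the rest.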

Marius Gedminas
http://pov.lt/ -- Zope 3/BlueBream consulting and development


Zope-Dev maillist  -  Zope-Dev@zope.org
**  No cross posts or HTML encoding!  **
(Related lists - 
 https://mail.zope.org/mailman/listinfo/zope )
