Hi all,

The automated tests are running on wyvern nightly:

    http://wyvern.cs.uni-duesseldorf.de/pypytest/summary.html

As the tests are parallelized, running them all takes on the order of
two hours, so we could consider running them more than once per day.

Something else I'd like to consider is a tool that checks for new
failures shown by these test runs, finds the exact revision that seems
to have broken them, and actively signals the problem somewhere - maybe
as a notice from elbowtone on #pypy and as an e-mail to e.g. the author
of the revision?  What do you think?
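
A minimal sketch of the revision-hunting part, assuming a working copy
that can be updated to a given revision and a test command whose exit
code tells pass from fail (the checkout path and test command below are
placeholders, not anything we have today):

    import subprocess

    def test_passes(revision, checkout_dir, test_cmd):
        # Update the checkout to the given revision and run the test
        # command; a zero exit code is taken to mean "all tests pass".
        subprocess.check_call(['svn', 'up', '-r', str(revision),
                               checkout_dir])
        return subprocess.call(test_cmd, cwd=checkout_dir) == 0

    def find_breaking_revision(good, bad, checkout_dir, test_cmd):
        # Binary search between a known-good and a known-bad revision;
        # the invariant is that 'good' passes and 'bad' fails.
        while bad - good > 1:
            mid = (good + bad) // 2
            if test_passes(mid, checkout_dir, test_cmd):
                good = mid
            else:
                bad = mid
        return bad  # the first revision at which the tests fail

The tool would only need the last revision known to pass and any later
failing one; the revision it finds could then be posted on #pypy and
mailed to that revision's author.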

Longer term, we could also have a way to ask for a complete test run on
wyvern, so that we can check within two hours whether we broke something
after a delicate check-in, or just after we fixed a couple of problems.
If we manage 0-failure runs at least every few days, we could then have
a URL to which we copy the trunk whenever all tests pass, so that
"outside" people have the option of following, with 'svn up', a URL
that is at least fully tested instead of the bleeding-edge trunk.
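
For the copy-on-green part, replacing a fixed URL with a fresh copy of
the trunk after each 0-failure run might be enough; a sketch, where
both URLs are placeholders rather than anything that exists now:

    import subprocess

    TRUNK = 'http://codespeak.net/svn/pypy/trunk'    # placeholder URL
    TESTED = 'http://codespeak.net/svn/pypy/tested'  # placeholder URL

    def publish_tested(revision):
        # Drop the previous tested copy (ignoring the failure on the
        # very first run, when TESTED does not exist yet), then copy
        # the trunk at the revision that just passed all tests.
        subprocess.call(['svn', 'rm', TESTED, '-m',
                         'replacing tested copy'])
        subprocess.check_call(['svn', 'copy', '-r', str(revision),
                               TRUNK, TESTED, '-m',
                               'all tests pass at r%s' % revision])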


A bientot,

Armin