On Wed, Feb 3, 2016 at 9:18 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
> On Tue, Feb 2, 2016 at 8:45 AM, Pauli Virtanen <p...@iki.fi> wrote:
> > 01.02.2016, 23:25, Ralf Gommers wrote:
> > [clip]
> >> So: it would really help if someone could pick up the automation part of
> >> this and improve the stack testing, so the numpy release manager doesn't
> >> have to do this.
> >
> > quick hack: https://github.com/pv/testrig
> >
> > Not that I'm necessarily volunteering to maintain the setup, though, but
> > if it seems useful, move it under numpy org.
>
> That's pretty cool :-). I also was fiddling with a similar idea a bit,
> though much less fancy... my little script cheats and uses miniconda to
> fetch pre-built versions of some packages, and then runs the tests
> against numpy 1.10.2 (as shipped by anaconda) + the numpy master, and
> does a diff (with a bit of massaging to make things more readable, like
> summarizing warnings):

Whoops, got distracted talking about the results and forgot to say --
I guess we should think about how to combine these? I like the
information on warnings, because it helps gauge the impact of
deprecations, which takes up a lot of our attention. But
your approach is clearly fancier in terms of how it parses the test
results. (Do you think the fanciness is worth it? I can see an
argument for crude and simple if the fanciness ends up being fragile,
but I haven't read the code -- mostly I was just being crude and
simple because I'm lazy :-).)

An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds) -- certainly
I can see an argument for enabling it by default on the
maintenance/1.x branches. Running N extra test suites ourselves is not
actually more expensive than asking N projects to run one more test suite
:-). The trickiest part is getting it to give actually-useful
automated pass/fail feedback, as opposed to requiring someone to
remember to look at it manually :-/

Maybe it should be uploading the reports somewhere? So there'd be a
readable "what's currently broken by 1.x" page, plus with persistent
storage we could get travis to flag if new additions to the release
branch cause any new failures to appear? (That way we only have to
remember to look at the report manually once per release, instead of
constantly throughout the process.)
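
The "flag only new failures" part could be as simple as something like
this (a sketch -- the JSON report format and the file names are made up;
the real reports would come out of whatever the test-running machinery
produces):

    # Compare the failing-test ids from the current run against a stored
    # baseline taken when the release branch was cut, and go red only for
    # regressions. Report format and file names are made-up assumptions.
    import json
    import sys

    def load_failures(path):
        """Assumed format: a JSON list of failing test ids."""
        with open(path) as f:
            return set(json.load(f))

    def main(baseline_path, current_path):
        baseline = load_failures(baseline_path)
        current = load_failures(current_path)
        new = sorted(current - baseline)
        fixed = sorted(baseline - current)
        for test_id in fixed:
            print("now passing: %s" % test_id)
        for test_id in new:
            print("NEW FAILURE: %s" % test_id)
        # Nonzero exit only when something regressed relative to the
        # baseline, so pre-existing breakage doesn't hide new breakage.
        return 1 if new else 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1], sys.argv[2]))

That way travis stays green for breakage that was already known when the
branch was cut, and only goes red for regressions introduced afterwards.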

-n

-- 
Nathaniel J. Smith -- https://vorpus.org