Pandas has for quite a while had a travis build where we install numpy
master and then run our test suite.

e.g. here: https://travis-ci.org/pydata/pandas/jobs/77256007

Over the last year this has uncovered a couple of changes that affected
pandas (mainly places where we were using something deprecated that then
got turned off :)

This was pretty simple to set up. Note that it adds 2+ minutes to the
build (though our builds take a while anyhow, so it's not a big deal).
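Roughly, the relevant piece of the build looks something like this (a
hedged sketch of the idea, not our exact config -- the real details live
in the pandas repo's .travis.yml):

```yaml
# Illustrative .travis.yml fragment for testing pandas against numpy
# master; the exact commands here are a sketch, not the actual config.
install:
  # install numpy from the current master branch...
  - pip install git+https://github.com/numpy/numpy.git
  # ...then build and install pandas on top of it
  - pip install .
script:
  # run the full pandas test suite against the development numpy
  - python -c "import pandas; pandas.test()"
```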



On Wed, Aug 26, 2015 at 7:14 AM, Matthew Brett <matthew.br...@gmail.com>
wrote:

> Hi,
>
> On Wed, Aug 26, 2015 at 7:59 AM, Nathaniel Smith <n...@pobox.com> wrote:
> > [Popping this off to its own thread to try and keep things easier to
> follow]
> >
> > On Tue, Aug 25, 2015 at 9:52 AM, Nathan Goldbaum <nathan12...@gmail.com>
> wrote:
> >>>   - Lament: it would be really nice if we could get more people to
> >>>     test our beta releases, because in practice right now 1.x.0 ends
> >>>     up being where we actually discover all the bugs, and 1.x.1 is
> >>>     where it actually becomes usable. Which sucks, and makes it
> >>>     difficult to have a solid policy about what counts as a
> >>>     regression, etc. Is there anything we can do about this?
> >>
> >> Just a note in here - have you all thought about running the test
> suites for
> >> downstream projects as part of the numpy test suite?
> >
> > I don't think it came up, but it's not a bad idea! The main problems I
> > can foresee are:
> > 1) Since we don't know the downstream code, it can be hard to
> > interpret test suite failures. OTOH for changes we're uncertain of we
> > already do often end up running some downstream test suites by hand,
> > so it can only be an improvement on that...
> > 2) Sometimes everyone including downstream agrees that breaking
> > something is actually a good idea and they should just deal, but what
> > do you do then?
> >
> > These both seem solvable though.
> >
> > I guess a good strategy would be to compile a travis-compatible wheel
> > of $PACKAGE version $latest-stable against numpy 1.x, and then in the
> > 1.(x+1) development period numpy would have an additional travis run
> > which, instead of running the numpy test suite, instead does:
> >   pip install .
> >   pip install $PACKAGE-$latest-stable.whl
> >   python -c 'import package; package.test()' # adjust as necessary
> > ? Where $PACKAGE is something like scipy / pandas / astropy / ...
> > matplotlib would be nice but maybe impractical...?
> >
> > Maybe someone else will have objections but it seems like a reasonable
> > idea to me. Want to put together a PR? Asides from fame and fortune
> > and our earnest appreciation, your reward is you get to make sure that
> > the packages you care about are included so that we break them less
> > often in the future ;-).
>
> One simple way to get going would be for the release manager to
> trigger a build from this repo:
>
> https://github.com/matthew-brett/travis-wheel-builder
>
> This build would then upload a wheel to:
>
> http://travis-wheels.scikit-image.org/
>
> The upstream packages would have a test grid which included an entry
> with something like:
>
> pip install -f http://travis-wheels.scikit-image.org --pre numpy
>
> Cheers,
>
> Matthew
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
>
