Re: [Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Jens Nielsen
As a Matplotlib developer I try to test our code manually with all betas
and RCs of new numpy versions.
(I already pushed fixes for a few new deprecation warnings found with
1.10beta1, which otherwise passes our test suite. I forgot to report this
back since there were no issues to report.)
However, we could do this automatically if numpy betas were uploaded as
pre-releases on PyPI.

We are already using Travis's allow-failures mode to test Python 3.5 betas
and RCs, with all our dependencies installed via `pip install --pre`:
https://pip.pypa.io/en/latest/reference/pip_install.html#pre-release-versions

Putting pre-releases on PyPI would thus automate most of the testing of
new numpy versions for us.
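
For concreteness, the relevant Travis step is roughly the following (a
sketch only; the requirements file name and the test command are
illustrative, not our actual CI script):

    # Let pip consider alpha/beta/rc versions uploaded to PyPI.
    pip install --upgrade --pre numpy
    pip install --upgrade --pre -r requirements.txt  # hypothetical file
    # Then run the test suite as usual.
    python -c 'import matplotlib; matplotlib.test()'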

Best
Jens

On Wed, 26 Aug 2015 at 07:59, Nathaniel Smith n...@pobox.com wrote:

 [Popping this off to its own thread to try and keep things easier to
 follow]

 On Tue, Aug 25, 2015 at 9:52 AM, Nathan Goldbaum nathan12...@gmail.com
 wrote:
- Lament: it would be really nice if we could get more people to
  test our beta releases, because in practice right now 1.x.0 ends
  up being where we actually discover all the bugs, and 1.x.1 is
  where it actually becomes usable. Which sucks, and makes it
  difficult to have a solid policy about what counts as a
  regression, etc. Is there anything we can do about this?
 
  Just a note in here - have you all thought about running the test suites
 for
  downstream projects as part of the numpy test suite?

 I don't think it came up, but it's not a bad idea! The main problems I
 can foresee are:
 1) Since we don't know the downstream code, it can be hard to
 interpret test suite failures. OTOH, for changes we're uncertain about,
 we often already end up running some downstream test suites by hand, so
 this could only be an improvement on that...
 2) Sometimes everyone including downstream agrees that breaking
 something is actually a good idea and they should just deal, but what
 do you do then?

 These both seem solvable though.

 I guess a good strategy would be to compile a travis-compatible wheel
 of $PACKAGE version $latest-stable against numpy 1.x, and then in the
 1.(x+1) development period numpy would have an additional travis run
 which, instead of running the numpy test suite, does:
   pip install .
   pip install $PACKAGE-$latest-stable.whl
   python -c 'import package; package.test()' # adjust as necessary
 ? Where $PACKAGE is something like scipy / pandas / astropy / ...
 matplotlib would be nice but maybe impractical...?
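
 To make that concrete, a sketch (package and version names are purely
 illustrative):

   # once, against the current stable numpy:
   pip install numpy==1.9.2
   pip wheel --no-deps --wheel-dir=wheelhouse scipy==0.16.0

   # then on every numpy dev build:
   pip install .                              # numpy from the checkout under test
   pip install wheelhouse/scipy-0.16.0-*.whl  # the prebuilt downstream wheel
   python -c 'import scipy; scipy.test()'     # run the downstream suite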

 Maybe someone else will have objections but it seems like a reasonable
 idea to me. Want to put together a PR? Aside from fame and fortune
 and our earnest appreciation, your reward is you get to make sure that
 the packages you care about are included so that we break them less
 often in the future ;-).

 -n

 --
 Nathaniel J. Smith -- http://vorpus.org



Re: [Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Matthew Brett
Hi,

On Wed, Aug 26, 2015 at 7:59 AM, Nathaniel Smith n...@pobox.com wrote:
 [snip]

 Maybe someone else will have objections but it seems like a reasonable
 idea to me. Want to put together a PR? Aside from fame and fortune
 and our earnest appreciation, your reward is you get to make sure that
 the packages you care about are included so that we break them less
 often in the future ;-).

One simple way to get going would be for the release manager to
trigger a build from this repo:

https://github.com/matthew-brett/travis-wheel-builder

This build would then upload a wheel to:

http://travis-wheels.scikit-image.org/

The downstream packages would have a test grid which included an entry
with something like:

pip install -f http://travis-wheels.scikit-image.org --pre numpy
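
For example, one entry in a downstream project's travis matrix might then
be (a sketch; the package name and test call are illustrative):

    # Test the downstream project against the numpy pre-release wheel.
    pip install -f http://travis-wheels.scikit-image.org --pre numpy
    pip install .                                   # the downstream project itself
    python -c 'import mypackage; mypackage.test()'  # hypothetical package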

Cheers,

Matthew


Re: [Numpy-discussion] testing numpy with downstream testsuites (was: Re: Notes from the numpy dev meeting at scipy 2015)

2015-08-26 Thread Jeff Reback
Pandas has for quite a while had a travis build where we install numpy
master and then run our test suite.

e.g. here: https://travis-ci.org/pydata/pandas/jobs/77256007

Over the last year this has uncovered a couple of changes which affected
pandas (mainly our use of something deprecated that was later turned off :)

This was pretty simple to set up. Note that it adds 2+ minutes to the
build (though our builds take a while anyhow, so it's not a big deal).
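
The install step for such a build can be as simple as the following (a
sketch of the general idea, not our actual CI scripts):

    # Cython is needed to build numpy from a git checkout; nose runs the
    # pandas test suite of that era.
    pip install cython nose
    pip install git+https://github.com/numpy/numpy.git  # current master
    pip install .       # pandas itself, from its checkout
    nosetests pandas    # run the pandas tests against numpy master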



On Wed, Aug 26, 2015 at 7:14 AM, Matthew Brett matthew.br...@gmail.com
wrote:

 [snip]

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion