Thanks, that is very helpful!
On 01/30/2016 01:40 PM, Jeff Reback wrote:
just my 2c
it's fairly straightforward to add a test to the Travis matrix that grabs
numpy wheels built from numpy master (works for conda or pip installs).
So in pandas we are testing 2.7/3.5 against numpy master continuously:
https://github.com/pydata/pandas/blob/master/ci/install-3.5_NUMPY_DEV.sh
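(For anyone who wants to replicate this elsewhere: a minimal sketch of the
kind of install step such a Travis matrix entry could run is below. The
wheel index URL is a placeholder, not the one the pandas script actually
uses.)

    # Sketch of a CI install step that swaps in a numpy wheel built from master.
    # WHEELS_URL is a hypothetical index hosting the dev wheels.
    WHEELS_URL="https://example.invalid/numpy-dev-wheels"
    pip install --pre --upgrade --find-links "$WHEELS_URL" numpy
    python -c "import numpy; print(numpy.__version__)"  # confirm the dev wheel is in use
    # ...then build the project and run its test suite as usual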
On Jan 30, 2016, at 1:16 PM, Nathaniel Smith <n...@pobox.com> wrote:
On Jan 30, 2016 9:27 AM, "Ralf Gommers" <ralf.gomm...@gmail.com> wrote:
>
> On Fri, Jan 29, 2016 at 11:39 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> It occurs to me that the best solution might be to put together a
.travis.yml for the release branches that does: "for pkg in
IMPORTANT_PACKAGES: pip install $pkg; python -c 'import pkg; pkg.test()'"
>> This might not be viable right now, but will be made more viable
if pypi starts allowing official Linux wheels, which looks likely to
happen before 1.12... (see PEP 513)
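(Spelled out as a shell step, that loop could look roughly like the sketch
below. The package list is only illustrative, and for some projects the pip
name and the import name differ, e.g. scikit-learn vs. sklearn, so a real
script would need a small mapping.)

    # Rough shell version of the "install and smoke-test downstream packages" idea.
    # Note: pkg.test() typically returns a result object rather than failing the
    # process, so a real script would also need to inspect that result.
    IMPORTANT_PACKAGES="scipy pandas"    # illustrative list
    for pkg in $IMPORTANT_PACKAGES; do
        pip install "$pkg"
        python -c "import $pkg; $pkg.test()"
    done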
>>
>> On Jan 29, 2016 9:46 AM, "Andreas Mueller" <t3k...@gmail.com> wrote:
>> >
>> > Is this the point when scikit-learn should build against it?
>>
>> Yes please!
>>
>> > Or do we wait for an RC?
>>
>> This is still all in flux, but I think we might actually want a
rule that says it can't become an RC until after we've tested
scikit-learn (and a list of similarly prominent packages). On the
theory that RC means "we think this is actually good enough to
release" :-). OTOH I'm not sure the alpha/beta/RC distinction is very
helpful; maybe they should all just be betas.
>>
>> > Also, we need a scipy build against it. Who does that?
>>
>> Like Julian says, it shouldn't be necessary. In fact using old
builds of scipy and scikit-learn is even better than rebuilding them,
because it tests numpy's ABI compatibility -- if you find you *have*
to rebuild something then we *definitely* want to know that.
>>
>> > Our continuous integration doesn't usually build scipy or numpy,
so it will be a bit tricky to add to our config.
>> > Would you run our master tests? [did we ever finish this
discussion?]
>>
>> We didn't, and probably should... :-)
>
> Why would that be necessary if scikit-learn simply tests
pre-releases of numpy as you suggested earlier in the thread (with
--pre)?
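(For reference: --pre is what makes pip consider pre-releases at all, so
once numpy alphas/betas/RCs are on PyPI a downstream CI job only needs
something like the line below to pick them up.)

    # opt in to numpy pre-releases published on PyPI
    pip install --upgrade --pre numpy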
>
> There's also https://github.com/MacPython/scipy-stack-osx-testing
by the way, which could have scikit-learn and scikit-image added to it.
>
> That's two options that are imho both better than adding more
workload for the numpy release manager. Also from a principled point
of view, packages should test with new versions of their
dependencies, not the other way around.
Sorry, that was unclear. I meant that we should finish the
discussion, not that we should necessarily be the ones running the
tests. "The discussion" being this one:
https://github.com/numpy/numpy/issues/6462#issuecomment-148094591
https://github.com/numpy/numpy/issues/6494
I'm not saying that the release manager necessarily should be running
the tests (though it's one option). But the 1.10 experience seems to
indicate that we need *some* process for the release manager to make
sure that some basic downstream testing has happened. Another option
would be keeping a checklist of downstream projects and making sure
they've all checked in and confirmed that they've run tests before
making the release.
-n
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion