Re: [Numpy-discussion] Starting work on ufunc rewrite

2016-04-02 Thread Nathaniel Smith
On Thu, Mar 31, 2016 at 3:09 PM, Irwin Zaid  wrote:
> Hey guys,
>
> I figured I'd just chime in here.
>
> Over in DyND-town, we've spent a lot of time figuring out how to structure
> DyND callables, which are actually more general than NumPy gufuncs. We've
> just recently got them to a place where we are very happy, and are able to
> represent a wide range of computations.
>
> Our callables use a two-fold approach to evaluation. The first pass is a
> resolution pass, where a callable can specialize what it is doing based on
> the input types. It is able to deduce the return type, multidispatch, or
> even perform some sort of recursive analysis in the form of computations
> that call themselves. The second pass is construction of a kernel object
> that is exactly specialized to the metadata (e.g., strides, contiguity, ...)
> of the array.
>
> The callable itself can store arbitrary data, as can each pass of the
> evaluation. Either (or both) of these passes can be done ahead of time,
> allowing one to have a callable exactly specialized for your array.
>
> If NumPy is looking to change its ufunc design, we'd be happy to share our
> experiences with this.

Yeah, this all sounds very relevant :-). You can even see some of the
kernel of that design in numpy's current ufuncs, with their
first-stage "resolver" choosing which inner loop to use, but we
definitely need to make these semantics richer if we want to allow for
things like inner loops that depend on kwargs (e.g. sort(...,
kind="quicksort") versus sort(..., kind="mergesort")) or dtype
attributes. Is your design written up anywhere?

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] linux wheels coming soon

2016-04-02 Thread Matthew Brett
On Fri, Mar 25, 2016 at 6:39 AM, Peter Cock  wrote:
> On Fri, Mar 25, 2016 at 3:02 AM, Robert T. McGibbon  
> wrote:
>> I suspect that many of the maintainers of major scipy-ecosystem projects are
>> aware of these (or other similar) travis wheel caches, but would guess that
>> the pool of travis-ci python users who weren't aware of these wheel caches
>> is much much larger. So there will still be a lot of travis-ci clock cycles
>> saved by manylinux wheels.
>>
>> -Robert
>
> Yes exactly. Availability of NumPy Linux wheels on PyPI is definitely 
> something
> I would suggest adding to the release notes. Hopefully this will help trigger
> a general availability of wheels in the numpy-ecosystem :)
>
> In the case of Travis CI, their VM images for Python already have a version
> of NumPy installed, but having the latest version of NumPy and SciPy etc
> available as Linux wheels would be very nice.

We're very nearly there now.

The latest versions of the numpy, scipy, scikit-image, pandas, numexpr,
and statsmodels wheels are up for testing at:
http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/

Please do test with:

python -m pip install --upgrade pip

pip install \
  --trusted-host=ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com \
  --find-links=http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com \
  numpy scipy scikit-learn numexpr

python -c 'import numpy; numpy.test("full")'
python -c 'import scipy; scipy.test("full")'

We would love to get any feedback as to whether these work on your machines.

Cheers,

Matthew


Re: [Numpy-discussion] Starting work on ufunc rewrite

2016-04-02 Thread Nathaniel Smith
On Thu, Mar 31, 2016 at 1:00 PM, Jaime Fernández del Río
 wrote:
> I have started discussing with Nathaniel the implementation of the ufunc ABI
> break that he proposed in a draft NEP a few months ago:
>
> http://thread.gmane.org/gmane.comp.python.numeric.general/61270
>
> His original proposal was to make the public portion of PyUFuncObject be:
>
> typedef struct {
> PyObject_HEAD
> int nin, nout, nargs;
> } PyUFuncObject;
>
> Of course the idea is that internally we would use a much larger struct that
> we could change at will, as long as its first few entries matched those of
> PyUFuncObject. My problem with this, and I may very well be missing
> something, is that in PyUFunc_Type we need to set the tp_basicsize to the
> size of the extended struct, so we would end up having to expose its
> contents.

How so? tp_basicsize tells you the size of the real struct, but that
doesn't let you actually access any of its fields. Unless you decide
to start cheating and reaching into random bits of memory by hand,
but, well, this is C, we can't really prevent that :-).

> This is somewhat similar to what now happens with PyArrayObject:
> anyone can #include "ndarraytypes.h", cast PyArrayObject* to
> PyArrayObjectFields*, and access the guts of the struct without using the
> supplied API inline functions. Not the end of the world, but if you want to
> make something private, you might as well make it truly private.

Yeah, there is also an issue here where we don't always do a great job
of separating our internal headers from our public headers. But that's
orthogonal -- any solution for hiding PyUFunc's internals will require
handling that somehow.

> I think it would be better to have something similar to what NpyIter does::
>
> typedef struct {
> PyObject_HEAD
> NpyUFunc *ufunc;
> } PyUFuncObject;

A few points:

We have to leave nin, nout, nargs where they are in PyUFuncObject,
because there is code out there that accesses them.

This technique is usually used when you want to allow subclassing of a
struct, while also allowing you to add fields later without breaking
ABI. We don't want to allow subclassing of PyUFunc (regardless of what
happens here -- subclassing just creates tons of problems), so AFAICT
it isn't really necessary. It adds a bit of extra complexity (two
allocations instead of one, extra pointer chasing, etc.), though to be
fair the hidden struct approach also adds some complexity (you have to
cast to the internal type), so it's not a huge deal either way.

If the NpyUFunc pointer field is public then in principle people could
refer to it and create problems down the line in case we ever decided
to switch to a different strategy... not very likely given that it'd
just be a meaningless opaque pointer, but mentioning for
completeness's sake.

> where NpyUFunc would, at this level, be an opaque type of which nothing
> would be known. We could have some of the NpyUFunc attributes cached on the
> PyUFuncObject struct for easier access, as is done in NewNpyArrayIterObject.

Caching sounds like *way* more complexity than we want :-). As soon as
you have two copies of data then they can get out of sync...

> This would also give us more liberty in making NpyUFunc be whatever we want
> it to be, including a variable-sized memory chunk that we could use and
> access at will.

Python objects are allowed to be variable size: tp_basicsize is the
minimum size. Built-ins like lists and strings have variable size
structs.

> NpyIter is again a good example, where rather than storing
> pointers to strides and dimensions arrays, these are made part of the
> NpyIter memory chunk, effectively being equivalent to having variable sized
> arrays as part of the struct. And I think we will probably no longer trigger
> the Cython warnings about size changes either.
>
> Any thoughts on this approach? Is there anything fundamentally wrong with
> what I'm proposing here?

Modulo the issue with nin/nout/nargs, I don't see any compelling
advantages to your proposal given our particular situation, but it
doesn't make a huge difference either way. Maybe I'm missing something.

> Also, this is probably going to end up being a rewrite of a pretty large and
> complex codebase. I am not sure that working on this on my own and
> eventually sending a humongous PR is the best approach. Any thoughts on how
> best to handle turning this into a collaborative, incremental effort? Anyone
> who would like to join in the fun?

I'd strongly recommend breaking it up into individually mergeable
pieces to the absolute maximum extent possible, and merging them back
as we go, so that we never have a giant branch diverging from master.
(E.g., refactor a few functions -> submit a PR -> merge, refactor some
more -> merge, add a new feature enabled by the refactoring -> merge,
repeat). There are limits to how far you can take this, e.g. the 

[Numpy-discussion] rational custom dtype example

2016-04-02 Thread Steve Mitchell
I have noticed a few issues with the "rational" custom C dtype example.


1. It doesn't build on Windows. I managed to tweak it to build.
   Mainly, the MSVC9 compiler is C89.

2. A few tests don't pass on Windows, due to integer sizes.

3. The copyswap and copyswapn routines don't do in-place swapping if src
   == NULL, as specified in the docs:
   http://docs.scipy.org/doc/numpy-1.10.0/reference/c-api.types-and-structures.html

  --Steve



[Numpy-discussion] Call for Proposals || PyCon India 2016

2016-04-02 Thread Ayush Kesarwani
Hello Everyone

The Call for Proposals (CFP) for PyCon India 2016, New Delhi, is live now.
We have started accepting proposals.

Those interested in giving a talk should submit a proposal at the given
link [1].

More information about the event is present at the official website [2].

Kindly adhere to the guidelines mentioned for the submission of proposals.

Please help us spread the word. Kindly use #inpycon in your social updates.

Any queries regarding the CFP can be sent to cont...@in.pycon.org .

Regards
Team InPycon

[1] bit.ly/inpycon2016cfp
[2] http://bit.ly/inpycon2016