Hi Julian,
Any chance that NPY_MAXARGS could be increased to something more than
the current value of 32? There is a discussion about this in:
https://github.com/numpy/numpy/pull/226
but I think that, as Charles was suggesting, just increasing NPY_MAXARGS
to something more reasonable would do the trick.
Hm, increasing it for PyArrayMapIterObject would break the public ABI.
While nobody should be using this part of the ABI, it's not appropriate
for a bugfix release.
Note that, as it currently stands in numpy 1.9.dev, we will break this
ABI anyway for the indexing improvements. Though for nditer and some
other interfaces an increase might be possible without an ABI break.
Well, what numexpr is using is basically NpyIter_AdvancedNew:
https://github.com/pydata/numexpr/blob/master/numexpr/interpreter.cpp#L1178
and nothing else. If NPY_MAXARGS could be increased just for that, and
without breaking the ABI, then fine. If not, we will have to wait until
1.9, I am afraid.
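For reference, the same cap is visible from Python, since np.nditer wraps
the same C iterator that NpyIter_AdvancedNew builds. A small sketch (the
operand count of 100 is just an arbitrary value above the limit):

```python
import numpy as np

# Sketch (not numexpr's actual code): passing more operands than
# NPY_MAXARGS allows makes the iterator constructor refuse the request.
ops = [np.ones(1) for _ in range(100)]  # well above any NPY_MAXARGS so far
try:
    np.nditer(ops)
    hit_limit = False
except ValueError:
    hit_limit = True
print("operand limit enforced:", hit_limit)
```

So any increase of the limit would show up here as well.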
Performance should not be impacted as long as we stay on the stack; it
just increases the offset of the stack pointer a bit more.
E.g. nditer and einsum use temporary stack arrays of this type for their
initialization:
int op_axes_arrays[NPY_MAXARGS][NPY_MAXDIMS]; // both 32 currently
The resulting nditer object itself lives on the heap.
On 2/28/14, 3:00 PM, Charles R Harris wrote:
> On Fri, Feb 28, 2014 at 5:52 AM, Julian Taylor
> <jtaylor.deb...@googlemail.com> wrote:
>> performance should not be impacted as long as we stay on the stack, it
>> just increases offset of a stack pointer a bit more.
Hi,
We want to start preparing the release candidate for the bugfix release
1.8.1rc1 this weekend; I'll start preparing the changelog tomorrow.
So if you want a certain issue fixed, please scream now, or better,
create a pull request/patch on the maintenance/1.8.x branch.
Please only consider bug fixes, as this is a bugfix release.