Perhaps I in turn am missing something, but I would suppose that any
algorithm that requires multiple passes over the data is off the table?
Maybe I am being a little old-fashioned and performance-oriented here, but
making the vast majority of use cases suffer a factor-of-two performance
penalty for the sake of an odd use case that already has a plethora of
fine and dandy solutions? I'd vote against, FWIW...


On Sat, Jul 26, 2014 at 6:34 PM, Sturla Molden <sturla.mol...@gmail.com>
wrote:

> Sturla Molden <sturla.mol...@gmail.com> wrote:
> > Sebastian Berg <sebast...@sipsolutions.net> wrote:
> >
> >> Yes, it is much more complicated and incompatible with naive ufuncs if
> >> you want your memory access to be optimized. And optimizing that is very
> >> much worth it speed-wise...
> >
> > Why? Couldn't we just copy the data chunk-wise to a temporary buffer of,
> > say, 2**13 numbers and then reduce that? I don't see why we need another
> > iterator for that.
>
> I am sorry if this is a stupid suggestion. My knowledge of how NumPy ufuncs
> work could be better.
>
> Sturla
>
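
For concreteness, the chunk-wise buffering described above might look
roughly like this in pure Python (just a sketch; chunked_sum and the
2**13 block size are illustrative, and the real thing would live in C
inside the ufunc machinery):

    import numpy as np

    def chunked_sum(x, bufsize=2**13):
        # Copy successive chunks into one small temporary buffer
        # (which stays hot in cache) and reduce each chunk there.
        flat = x.ravel()
        buf = np.empty(bufsize, dtype=flat.dtype)
        total = flat.dtype.type(0)
        for start in range(0, flat.size, bufsize):
            n = min(bufsize, flat.size - start)
            np.copyto(buf[:n], flat[start:start + n])  # fill the buffer
            total += buf[:n].sum()                     # reduce the chunk
        return total

The temporary buffer trades one extra copy for a cache-resident inner
reduction loop; whether that copy is acceptable is exactly the cost being
debated above.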
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
