On 26.07.2014 15:38, Eelco Hoogendoorn wrote:
> 
> Why is it not always used?

For 1d reductions the iterator blocks by 8192 elements even when no
buffering is required. There is a TODO in the source to fix that by
adding additional checks, but unfortunately nobody knows what those
additional checks would need to be, and Mark Wiebe, who wrote it, has
not replied to a ping yet.
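
To illustrate why that matters, here is a rough pure Python sketch
(not the actual C implementation): pairwise summation only happens
within each block, and the per-block results are accumulated with a
plain running sum, so for long 1d arrays the error grows with the
number of blocks instead of logarithmically.

import numpy as np

BLOCK = 8192  # the iterator block size mentioned above

def pairwise(x):
    # recursive pairwise summation of a 1d array, error grows ~ log(n)
    if x.size <= 8:
        return float(x.sum())
    half = x.size // 2
    return pairwise(x[:half]) + pairwise(x[half:])

def blocked_sum(x):
    # sum each block pairwise, but accumulate the blocks naively
    total = 0.0
    for start in range(0, x.size, BLOCK):
        total += pairwise(x[start:start + BLOCK])
    return total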

Also, along the non-fast axes the iterator optimizes the reduction to
remove strided access, see:
https://github.com/numpy/numpy/pull/4697#issuecomment-42752599
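
The gist of that optimization, as a Python sketch of the idea only
(the real work happens inside the nditer): instead of striding down
each column to reduce along axis 0, iterate over the contiguous rows
and accumulate them elementwise into the output.

import numpy as np

def reduce_axis0(a):
    # sum a 2d C-contiguous array along axis 0 with contiguous reads only
    out = a[0].copy()      # output has the shape of one row
    for row in a[1:]:      # each row is contiguous in memory
        out += row         # elementwise accumulation, no strided inner loop
    return out

a = np.arange(12.0).reshape(3, 4)
assert np.array_equal(reduce_axis0(a), a.sum(axis=0))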


Instead of adding a keyword argument to mean I would prefer a context
manager that changes algorithms for different requirements.
This would easily allow changing the accuracy and performance of
third-party functions using numpy without changing the third-party
library, as long as it uses numpy as the base.
E.g.
with np.precisionstate(sum="kahan"):
  scipy.stats.nanmean(d)
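
I have not designed any of this, but roughly the machinery could look
something like the following (all names are made up, np.precisionstate
does not exist; modeled on how np.errstate saves and restores state):

import contextlib
import threading

_precision_state = threading.local()

@contextlib.contextmanager
def precisionstate(**algorithms):
    # temporarily select algorithms, e.g. precisionstate(sum="kahan");
    # the inner loops would consult _precision_state to pick a kernel
    old = getattr(_precision_state, "algorithms", {})
    _precision_state.algorithms = dict(old, **algorithms)
    try:
        yield
    finally:
        _precision_state.algorithms = old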

We also have cases where numpy uses algorithms that are far more
precise than most people need them to be. E.g. np.hypot and the
related complex absolute value and division: with glibc these are very
slow because it provides 1ulp accuracy, which is hardly ever needed.
Another case that could use dynamic switching is flushing subnormals
to zero. (A rough illustration of both is below.)
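
For illustration only (timings and results depend on the machine and
the libm), the naive formula is typically much faster than the 1ulp
hypot but overflows for extreme inputs, and a subnormal is just any
nonzero value below the smallest normal double:

import numpy as np

x = np.random.rand(10**6)
y = np.random.rand(10**6)

careful = np.hypot(x, y)             # 1ulp libm-based routine
fast = np.sqrt(x*x + y*y)            # naive formula, usually much faster
print(np.abs(careful - fast).max())  # tiny for well-scaled inputs

big = np.float64(1e200)
print(np.hypot(big, big))            # ~1.414e+200
with np.errstate(over="ignore"):
    print(np.sqrt(big*big + big*big))  # inf, the naive formula overflows

tiny = np.finfo(np.float64).tiny     # smallest normal double, ~2.2e-308
print(tiny / 2)                      # subnormal: nonzero, slow on many CPUs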

But this API, like Nathaniel's parameterizable dtypes, is just an idea
floating in my head which needs a proper design and implementation
written down. The issue is, as usual, ENOTIME.