On Mon, 2019-01-07 at 12:15 -0800, Keith Goodman wrote:
> Numpy uses pairwise summation along the fast axis if that axis
> contains no more than 8192 elements. How was 8192 chosen?
> 

It is simply a constant used throughout the ufunc machinery (and
iteration) for cache friendliness.

However, that iteration should not always chunk to 8192 elements; it
should often just process the whole array at once.
And as far as I know the inner loop does not do any chunking itself, so
given a contiguous fast axis and no casting, you likely already get a
single outer iteration.
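To make the recursion/cutoff trade-off concrete, here is a toy
pure-Python sketch of pairwise summation with a base-case block size.
This is only an illustration of the strategy, not NumPy's actual
implementation (which is in C with an unrolled inner loop); the small
`block=8` cutoff is just to keep the demo readable, where NumPy uses
8192:

```python
def pairwise_sum(x, block=8):
    # Below the cutoff, sum sequentially: no more recursion overhead.
    n = len(x)
    if n <= block:
        total = 0.0
        for v in x:
            total += v
        return total
    # Above the cutoff, split in half and recurse; this keeps the
    # rounding error growth at O(log n) instead of O(n).
    mid = n // 2
    return pairwise_sum(x[:mid], block) + pairwise_sum(x[mid:], block)

data = [0.1] * 1000
print(pairwise_sum(data))  # close to 100.0
```

Raising the cutoff trades fewer recursive calls for a longer sequential
base case, which is the overhead-versus-accuracy question from the
original post.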

In any case, 8192 was chosen to be small enough to be cache friendly
and is exposed as `np.BUFSIZE`. You can actually resize the buffer that
is being used with `numpy.setbufsize(size)`, although I can't say I
have ever tried it.
Note that the buffer also has to fit larger datatypes and multiple
buffers.
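For reference, a minimal sketch of inspecting and changing the buffer
size (assuming a NumPy version where `np.getbufsize`/`np.setbufsize`
are in the top-level namespace, as they were at the time of writing):

```python
import numpy as np

# The size is in elements, not bytes; the default matches np.BUFSIZE.
old = np.getbufsize()      # typically 8192
np.setbufsize(16384)       # double the buffer used by the ufunc machinery
assert np.getbufsize() == 16384
np.setbufsize(old)         # restore the previous size
```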

- Sebastian


> Doubling to 16384 would result in a lot more function call overhead
> due to the recursion. Is it a speed issue? Memory? Or something else
> entirely?

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
