Nathaniel Smith <n...@pobox.com> wrote:

> The proposal in my initial email requires zero pthreads, and is
> substantially more effective. (Your proposal reduces only the alloc
> overhead for large arrays; mine reduces both alloc and memory access
> overhead for both large and small arrays.)

My suggestion prevents the kernel from zeroing pages in the middle of a
computation, which is the important part. It would also be an optimization
the Python interpreter could benefit from independently of NumPy, by allowing
reuse of allocated memory pages within CPU-bound portions of the Python
code. And no, the method I suggested does not only work for large arrays.
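
To make the idea concrete, here is a toy sketch (my own illustration, not
the actual patch I have in mind) of a free-list that recycles NumPy buffers
instead of releasing them. Because the pages stay mapped in the process,
the kernel never has to hand out freshly zeroed pages mid-computation:

    # Hypothetical illustration: a simple pool that reuses buffers
    # keyed by (shape, dtype) rather than freeing them.
    from collections import defaultdict
    import numpy as np

    class BufferPool:
        def __init__(self):
            self._free = defaultdict(list)   # (shape, dtype) -> arrays

        def take(self, shape, dtype=np.float64):
            key = (tuple(shape), np.dtype(dtype))
            free = self._free[key]
            # Reuse a released buffer if available; otherwise allocate
            # a new, uninitialised one.
            return free.pop() if free else np.empty(shape, dtype)

        def give_back(self, arr):
            # Keep the buffer alive for later reuse instead of freeing it,
            # so its pages are never unmapped and re-zeroed by the kernel.
            self._free[(arr.shape, arr.dtype)].append(arr)

    pool = BufferPool()
    tmp = pool.take((1000, 1000))   # scratch space for an intermediate
    tmp[...] = 0.0                  # caller initialises as needed
    pool.give_back(tmp)             # pages stay mapped in the process

The same pooling trick works for small buffers too, which is why it is not
limited to large arrays.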

If we really want to eliminate the memory-access overhead, we need to
consider lazy evaluation, e.g. a context manager that collects symbolic
expressions and triggers evaluation on exit:

with numpy.accelerate:
    x = <expression>
    y = <expression>
    z = <expression>
# evaluation of x,y,z happens here
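
Roughly what such a context manager might look like (the names Deferred,
lazy and accelerate are hypothetical, not an existing NumPy API; a real
implementation would hook into ndarray's operators so that fusion can
eliminate the temporaries):

    import numpy as np

    class Deferred:
        """An unevaluated expression node built by operator overloading."""
        def __init__(self, func, args):
            self.func, self.args = func, args
            self.value = None

        def _eval(self, x):
            return x.evaluate() if isinstance(x, Deferred) else x

        def evaluate(self):
            # Evaluate children first, then apply this node's ufunc; cache
            # the result so repeated evaluation is cheap.
            if self.value is None:
                self.value = self.func(*[self._eval(a) for a in self.args])
            return self.value

        def __add__(self, other):
            return Deferred(np.add, (self, other))

        def __mul__(self, other):
            return Deferred(np.multiply, (self, other))

    def lazy(array):
        # Wrap a concrete array so operations on it are recorded, not run.
        return Deferred(lambda a: a, (array,))

    class accelerate:
        def __init__(self):
            self.pending = []

        def defer(self, expr):
            self.pending.append(expr)
            return expr

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            # Evaluation of all collected expressions happens here; this is
            # where a smarter backend could fuse loops and reuse buffers.
            for expr in self.pending:
                expr.evaluate()
            return False

    a = lazy(np.arange(5.0))
    b = lazy(np.ones(5))
    with accelerate() as ctx:
        x = ctx.defer(a + b)
        y = ctx.defer(x * b)
    print(y.evaluate())   # already computed on exit; just returns the cache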

Sturla
