On Wed, Oct 5, 2016 at 5:36 PM, Robert McLeod <robbmcl...@gmail.com> wrote:

>
> It's certainly true that numexpr doesn't create a lot of OP_COPY
> operations; rather, it's optimized to minimize them, so it probably uses
> fewer ops than naive successive calls to numpy within python. But I'm
> unsure whether there's any difference in operation count between
> hand-optimized numpy with out= set and numexpr. Numexpr just does it for
> you.
>

That was my understanding as well. If it automatically does what one could
achieve by carrying the state along in the 'out' parameter, that's as good
as it can get in terms of removing unnecessary ops. There are other speedup
opportunities, of course, but that's a separate matter.
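
For concreteness, here's a small sketch (my own illustrative expression,
not anything from the thread) comparing the three approaches: naive numpy,
numpy with an out= buffer carried along by hand, and numexpr doing the same
elimination of temporaries for you:

import numpy as np
import numexpr as ne

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
c = np.random.rand(1_000_000)

# Naive numpy: each operator allocates a fresh temporary array.
result_naive = a * b + c

# Hand-optimized numpy: reuse one buffer via out=, so no extra
# intermediate temporaries are allocated.
out = np.empty_like(a)
np.multiply(a, b, out=out)
np.add(out, c, out=out)

# numexpr eliminates the temporaries automatically.
result_ne = ne.evaluate("a * b + c")

assert np.allclose(result_naive, out)
assert np.allclose(out, result_ne)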


> This blog post from Tim Hochberg is useful for understanding the
> performance advantages of blocking versus multithreading:
>
> http://www.bitsofbits.com/2014/09/21/numpy-micro-optimization-and-numexpr/
>

Hadn't come across that one before. Great link, thanks. Using caches and
vector registers well trumps threading, unless one has a lot of data, and
even then it helps to disable hyper-threading.
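
For anyone who hasn't read the post, the gist of the blocking idea can be
sketched in plain numpy (the block size and the expression here are
illustrative choices of mine, not what numexpr actually uses internally):
evaluate the expression one cache-sized chunk at a time, so the input
slices, the scratch buffer and the output slice all stay cache-resident.

import numpy as np

def blocked_eval(a, b, c, block_size=4096):
    # Evaluate a*b + c block by block so the working set fits in cache.
    # block_size is an illustrative value, not a tuned one.
    out = np.empty_like(a)
    scratch = np.empty(block_size, dtype=a.dtype)
    n = len(a)
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        buf = scratch[:stop - start]
        np.multiply(a[start:stop], b[start:stop], out=buf)
        np.add(buf, c[start:stop], out=out[start:stop])
    return out

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
c = np.random.rand(1_000_000)
assert np.allclose(blocked_eval(a, b, c), a * b + c)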
