On 6 Jun 2014 02:16, "Nikolaus Rath" <nikol...@rath.org> wrote:
>
> Nathaniel Smith <n...@pobox.com> writes:
> > Such optimizations are important enough that numpy operations always
> > give the option of explicitly specifying the output array (like
> > in-place operators but more general and with clumsier syntax). Here's
> > an example small-array benchmark that IIUC uses Jacobi iteration to
> > solve Laplace's equation. It's been written in both natural and
> > hand-optimized formats (compare "num_update" to "num_inplace"):
> >
> > https://yarikoptic.github.io/numpy-vbench/vb_vb_app.html#laplace-inplace
> >
> > num_inplace is totally unreadable, but because we've manually elided
> > temporaries, it's 10-15% faster than num_update.
>
> Does it really have to be that ugly? Shouldn't using
>
>     tmp += u[2:,1:-1]
>     tmp *= dy2
>
> instead of
>
>     np.add(tmp, u[2:,1:-1], out=tmp)
>     np.multiply(tmp, dy2, out=tmp)
>
> give the same performance? (yes, not as nice as what you're proposing,
> but I'm still curious).
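[For readers without the vbench page handy, here is a minimal sketch of the two styles being compared. The function names follow the benchmark, but the bodies below are a reconstruction of the general shape, not the vbench source: a Jacobi update for Laplace's equation written naturally, and the same update with temporaries manually elided via out= and preallocated buffers.]

```python
import numpy as np

def num_update(u, dx2, dy2):
    # "Natural" style: each binary operator on the right-hand side
    # allocates a fresh temporary array before the final assignment.
    u[1:-1, 1:-1] = ((u[2:, 1:-1] + u[:-2, 1:-1]) * dy2 +
                     (u[1:-1, 2:] + u[1:-1, :-2]) * dx2) / (2 * (dx2 + dy2))

def num_inplace(u, tmp, tmp2, dx2, dy2):
    # Hand-optimized style: reuse two preallocated buffers via out=,
    # so no temporaries are allocated inside the loop body.
    np.add(u[2:, 1:-1], u[:-2, 1:-1], out=tmp)
    np.multiply(tmp, dy2, out=tmp)
    np.add(u[1:-1, 2:], u[1:-1, :-2], out=tmp2)
    np.multiply(tmp2, dx2, out=tmp2)
    np.add(tmp, tmp2, out=tmp)
    np.multiply(tmp, 1.0 / (2 * (dx2 + dy2)), out=tmp)
    u[1:-1, 1:-1] = tmp

# Both styles compute the same update on a sample grid.
rng = np.random.default_rng(0)
u_a = rng.random((20, 20))
u_b = u_a.copy()
num_update(u_a, 0.01, 0.01)
tmp = np.empty((18, 18))
tmp2 = np.empty((18, 18))
num_inplace(u_b, tmp, tmp2, 0.01, 0.01)
```

The in-place version is the "8 statements and two named temporaries" shape discussed below: correct, allocation-free in the inner loop, and much harder to read.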
Yes, only the last line actually requires the out= syntax; everything else could use in-place operators instead (and automatic temporary elision wouldn't work for the last line anyway). I guess whoever wrote it did it that way for consistency (and perhaps in hopes of eking out a tiny bit more speed - in numpy currently the in-place operators are implemented by dispatching to function calls like those). Not sure how much difference it really makes in practice, though.

It'd still be 8 statements and two named temporaries to do the work of one infix expression, with order of operations implicit.

-n
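[A small illustration of the equivalence Nathaniel describes: `tmp += x` and `np.add(tmp, x, out=tmp)` both write into the existing buffer rather than allocating a new array, which is why the two spellings should perform about the same. The buffer-address check below is just a way to observe that no reallocation happened; the variable names are made up for the example.]

```python
import numpy as np

tmp = np.ones(5)
x = np.arange(5.0)

# Record the address of tmp's data buffer before either operation.
buf = tmp.__array_interface__['data'][0]

tmp += x                   # in-place operator spelling
np.add(tmp, x, out=tmp)    # explicit ufunc spelling with out=

# Both forms mutated the same buffer in place: no temporary
# array was allocated for the result of either statement.
assert tmp.__array_interface__['data'][0] == buf
```

Starting from ones, the two additions leave `tmp` equal to `1 + 2*x`, and the buffer address is unchanged throughout.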
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com