On Wednesday, 18 November 2020 at 13:01:42 UTC, Bastiaan Veelo wrote:
On Wednesday, 18 November 2020 at 10:05:06 UTC, Tobias Schmidt wrote:
Dear all,

to compare MIR and Numpy in the HPC context, we implemented a multigrid solver in Python using Numpy and in D using Mir, and performed some benchmarks with them.

You can find our code and results here:
https://github.com/typohnebild/numpy-vs-mir

Nice numbers. I’m not a Python guy, but I was under the impression that Numpy is actually written in C, so that when you benchmark Numpy you’re mostly benchmarking C, not Python. Therefore I had expected the Numpy performance to be much closer to D’s. An important factor, I think, which I’m not sure you have discussed (I didn’t look too closely), is the compiler backend used to compile D and Numpy. Then again, as a user one is mostly interested in the out-of-the-box performance, which this seems to be a good measure of.

— Bastiaan.

A lot of Numpy is written in C, C++, Fortran, asm, etc.

But when you chain a bunch of operations together, you are going via Python. The language boundary (and Python being slow) means that the inner iteration has to happen in native code for performance, which in turn forces eager allocation of intermediate arrays so that the pieces stay composable from Python, and that allocation hurts performance. Numpy makes a very good effort, but is always constrained by this. Clever schemes with laziness, where operations in Python merely compose an expression for execution later/on demand, can work as an alternative, but a) that's hard, and b) even if you completely avoid calling back into Python during iteration, you would still need a JIT to really unlock the performance.
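For illustration, here is a minimal Numpy sketch (the array names are made up) of how a chained expression allocates a temporary for every step, while the ufunc out= argument can at least reuse a buffer, though each operation is still a separate pass over memory:

    import numpy as np

    n = 1024
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    c = np.random.rand(n, n)

    # Chained expression: each binary op runs a fast C loop, but has to
    # materialise a full temporary array before the next op can start.
    r1 = a + b + c              # allocates (a + b), then the final result

    # Reusing a preallocated buffer with out= avoids the extra allocation,
    # but it is still two passes over memory instead of one fused loop.
    tmp = np.empty_like(a)
    np.add(a, b, out=tmp)       # tmp = a + b
    np.add(tmp, c, out=tmp)     # tmp = tmp + c

A single fused loop computing a[i] + b[i] + c[i] in one pass, with no temporaries, is exactly the kind of code a JIT or AOT templates can generate.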

Julia fixes this by having all/most of it in one language, which is JIT-compiled.

D can do the same ahead of time with templates, like C++/Eigen does, but with more flexible and less terrifying code. That's (one part of) what Mir provides.
