Greetings,

I'm a reasonably proficient user of MATLAB and Python/NumPy/SciPy doing 
computational physics. Since Julia appears to be well suited to many such 
applications, I was curious to test its performance before investing much 
time in converting any research code. To start out, I wrote up the classic 
2D regular finite-difference Laplace benchmark in Julia, Python and MATLAB, 
in both vectorized and loop versions, and tested them all. 

The results are shown in the following Google spreadsheet (all obtained on 
a 5000x5000 grid over 100 iterations for a reasonable sample, on a Haswell 
i7-4710HQ CPU under Windows 8.1, using Julia 0.3.1, Anaconda 2.0.1 and 
MATLAB R2014a):
https://docs.google.com/spreadsheets/d/1mJ8wNiyYVszkVapRVHvRJZG9j9XQhLLPvaUJwiWrJXY/pubhtml

The code itself is published as follows:
Julia: http://pastebin.com/AAdXXYZC
Python: http://pastebin.com/5hqi9xzf
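
For readers who don't want to click through, the vectorized Python version's 
inner update is along these lines (a sketch, not the exact pastebin code; the 
tiny grid and boundary condition here are illustrative assumptions -- the 
benchmark itself runs 5000x5000 for 100 iterations):

```python
import numpy as np

def laplace_vectorized(u, niter):
    """Jacobi relaxation for the 2D Laplace equation: each interior
    point becomes the average of its four neighbours. NumPy evaluates
    the right-hand side fully before assigning, so this is a proper
    Jacobi sweep (all updates use the previous iteration's values)."""
    for _ in range(niter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Small illustrative grid with a hypothetical fixed boundary on one edge
u = np.zeros((6, 6))
u[0, :] = 1.0
laplace_vectorized(u, 50)
```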

As can be clearly seen, Julia handily beats both MATLAB and basic 
Python/NumPy. It does, however, lose by a factor of 1.65 to a Numba-jitted 
version of the same Python code (obtained by simply adding a 
"@jit(target='cpu')" decorator on top of the appropriate function in the 
naive Python code), which compiles through the same LLVM stack Julia uses. 
I deliberately avoided more aggressive JIT techniques for Python (such as 
using Pythran to compile to OpenMP-enabled C code by specifying function 
signatures) in order to stick to single-core performance only.
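
For concreteness, the function Numba gets to work on is the explicit-loop 
kernel, roughly as below (again a sketch, not the exact benchmark code; the 
decorator is shown commented out so the snippet runs without Numba installed, 
and note that "@jit(target='cpu')" is the 2014-era spelling -- current Numba 
would use "@njit"):

```python
import numpy as np
# from numba import jit   # in the benchmark, the decorator below is active

# @jit(target='cpu')      # compiles this plain-Python loop nest via LLVM
def laplace_loops(u, niter):
    """Same Jacobi sweep as the vectorized version, written with explicit
    loops -- the form that Numba (like Julia) compiles to fast native code.
    A copy of u is taken each iteration so every update reads the previous
    iteration's values."""
    nx, ny = u.shape
    for _ in range(niter):
        uold = u.copy()
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                u[i, j] = 0.25 * (uold[i-1, j] + uold[i+1, j] +
                                  uold[i, j-1] + uold[i, j+1])
    return u
```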

Given these results, and the near-certainty that my Julia code is naive, 
non-idiomatic and just plain bad, I'd like to know if there's anything I 
could improve to match or (if possible) beat the JIT-compiled Python. 
