Yichao is right: contrary to a common misconception, declaring argument types
in Julia has nothing to do with performance. Functions are automatically
specialized at compile time for the argument types you pass, whether you
declare them or not. You declare
argument types for three reasons: (a) controlling dispatch (having
different versions of a function for different types), (b) ensuring
correctness (if your function would work but give an unexpected answer for
some types), and (c) clarity (to indicate "hey, I really want you to pass
some kind of integer here" etc.).
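As a minimal sketch of reason (a), argument-type declarations select which method of a function gets called; the function name `describe` here is hypothetical, invented just for illustration:

    # Hypothetical example: argument types control dispatch, not speed.
    describe(x::Integer) = "some kind of integer: $x"
    describe(x::AbstractFloat) = "a floating-point number: $x"
    describe(x) = "something else: $x"

    describe(3)     # dispatches to the ::Integer method
    describe(3.0)   # dispatches to the ::AbstractFloat method
    describe("hi")  # falls back to the untyped method

Each call runs the most specific matching method, and every method is still compiled to specialized code for the concrete types actually passed.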
You aren't going to get a big speedup for this particular function, because
it vectorizes well already, so the Python version is already spending most
of its time in fast code written in C. The big performance advantage of
Julia is for problems that don't vectorize well, where you are forced to
write your own inner loops.
However, I do get some speedup for your function in Julia by unrolling and
merging some of the loops. This also cuts down a lot on memory usage by
eliminating temporary allocations. The speedup is particularly noticeable
if you are interested in single-core performance: call
blas_set_num_threads(1) to use a single-threaded BLAS.
function mf_loop2(Ndt, V0, dt, W, J)
    V = copy(V0)
    n = length(V)
    Jsv = copy(V0)
    @inbounds for i = 1:Ndt
        # compute Jsv = J*sig(V), in-place
        fill!(Jsv, 0)
        for k = 1:n
            sv = sig(V[k])
            @simd for j = 1:n
                Jsv[j] += J[j,k] * sv
            end
        end
        # compute V += (-V + Jsv) * dt + W[:,i], in-place
        @simd for j = 1:n
            V[j] += (Jsv[j] - V[j]) * dt + W[j,i]
        end
    end
    return V
end
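For completeness, here is a sketch of how you might set up inputs and time it. The definition of `sig` is an assumption (your post presumably has its own; a logistic sigmoid is used as a stand-in), and the problem sizes are made up:

    sig(x) = 1 / (1 + exp(-x))   # assumed definition; substitute your own

    n, Ndt = 200, 1000           # made-up sizes for illustration
    dt = 0.01
    V0 = randn(n)
    J = randn(n, n) / sqrt(n)
    W = sqrt(dt) * randn(n, Ndt)

    blas_set_num_threads(1)      # single-threaded BLAS for a fair comparison
    mf_loop2(Ndt, V0, dt, W, J)  # warm up: first call includes compilation
    @time mf_loop2(Ndt, V0, dt, W, J)

Note that the first call to a Julia function includes compilation time, so time the second call when benchmarking.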