On Thu, Jan 21, 2016 at 3:33 AM, Dupont <[email protected]> wrote:
> Hi,
>
> I am trying to implement the Euler-Maruyama method in Julia for a neural
> network, and it is 4x slower than the equivalent Python code.
> Can someone give me a hint to close that gap?
>
> Thank you for your help,
>
> Best regards,
>
> ### Python code
> import numpy as np
> import time
>
> def sig(x):
>     return 1.0 / (1.0 + np.exp(-x))
>
> def mf_loop(Ndt,V0,V,dt,W,J):
>     sv = np.copy(V0)
>     V = np.copy(V0)
>
>     for i in range(Ndt):
>         sv = 1.0 / (1.0 + np.exp(-V))
>         V = V + ( -V+np.dot(J,sv) ) * dt + np.sqrt(dt) * W[:,i]
>
> N = 100
> # number of timesteps
> Ndt = 10000
>
> sigma  = 0.1
> W = (np.random.randn(N,Ndt)) * sigma
> J = (np.random.rand(N,N))/N
>
> V0 = np.random.rand(N)
> V = np.copy(V0)
>
> dt = 0.1
>
> t0 = time.time()
> mf_loop(Ndt, V0, V, dt, W, J)
> print(time.time() - t0)
>
> ## JULIA CODE ##############
> import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex
>
> function sig(x::Union{Float64,Vector})
>   return 1.0 ./ (1.0 + exp(-x))
> end
>
> function mf_loop(Ndt::Int64, V0::Vector{Float64}, V::Vector{Float64},
>                  dt::Float64, W::Matrix{Float64}, J::Matrix{Float64})

Remove the type constraints: they don't buy you any speed, because Julia
already compiles a specialized version of the function for the concrete
argument types at the call site. Int64 is also wrong, since on a 32-bit
system the default integer type is Int32 and the call would fail to match;
use Int if you really want an annotation.
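To illustrate (scale! is just a toy function, not part of your script): an
unannotated signature still gets a fully specialized method compiled for
each combination of concrete argument types, so it runs just as fast as the
annotated one.

function scale!(v, a)            # no annotations, still fast
    for k = 1:length(v)
        v[k] *= a                # specialized on the runtime types of v and a
    end
    return v
end

scale!(rand(5), 2.0)             # compiles a method for (Vector{Float64}, Float64)
scale!(Int32[1, 2, 3], Int32(2)) # and another one for 32-bit integers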

>   sv = copy(V0)
>   V  = copy(V0)
>
>   for i=1:Ndt
>     sv = sig(V)
>     V = V + ( -V + J*sv ) * dt + W[:,i]
>     #     BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
>     #     BLAS.axpy!(1., W[:,i], V)
>   end

This won't be faster than Python, since both versions end up calling the same
underlying C library and allocate a bunch of temporary arrays on every pass
through the loop. Even in the version you commented out, sig(V) and W[:, i]
still allocate.
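Here is a sketch of an allocation-free version (untested; mf_loop2 is just an
illustrative name). It hoists the buffers out of the loop, computes the
sigmoid in place, and uses the in-place matvec A_mul_B! from Base, so nothing
is allocated per iteration. It is the same idea as your commented-out
gemv!/axpy! lines, but it also removes the sig(V) and W[:, i] temporaries:

function mf_loop2(Ndt, V0, V, dt, W, J)
    sv  = copy(V0)
    V   = copy(V0)
    tmp = similar(V)             # reusable buffer for J*sv
    for i = 1:Ndt
        @inbounds for k = 1:length(V)
            sv[k] = 1.0 / (1.0 + exp(-V[k]))  # sigmoid in place, no temporary
        end
        A_mul_B!(tmp, J, sv)     # tmp = J*sv without allocating
        @inbounds for k = 1:length(V)
            # scalar indexing of W avoids copying the whole column
            V[k] += (-V[k] + tmp[k]) * dt + W[k, i]
        end
    end
    return V
end

With this, the only allocations are the three vectors created before the
loop, so the allocation count @time reports should stay flat as Ndt grows.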

>   nothing
> end
>
> N = 100
> dt = 0.1
> Ndt = 10000
>
> sigma  = 0.1
> W = (randn(N,Ndt)) * sigma * sqrt(dt)
> J = (rand(N,N))/N
>
> V0 = rand(N)
> V = copy(V0)
>
> mf_loop(1,V0,V,dt,W,J)
> @time mf_loop(Ndt,V0,V,dt,W,J)
