hey Josh,

that makes sense to me. I may not be understanding your benchmark, though; 
it looks as if Julia is slower for the larger matrices. is that true, or am I 
just going crazy and can no longer read graphs properly?

On Thursday, January 8, 2015 at 12:46:29 PM UTC-6, Joshua Adelman wrote:
>
> numpy.dot calls BLAS under the hood, so you're already timing fast compiled 
> code and I wouldn't expect Julia to shine against it. Try numpy methods that 
> aren't thin wrappers around C and you should see a bigger difference, or 
> implement a larger, more complex algorithm. Here's a simple micro-benchmark 
> I did recently of Julia vs np.vander:
>
> http://nbviewer.ipython.org/gist/synapticarbors/26910166ab775c04c47b
>
> Not large, but maybe a bit more illustrative.
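>
> A minimal sketch of that kind of comparison (illustrative only; vander_jl 
> and the sizes below are placeholders, not the exact code from the notebook):
>
> # pure-Julia Vandermonde matrix, columns in increasing-power order
> # (the reverse of np.vander's default column order)
> function vander_jl(x::Vector{Float64}, n::Int)
>   m = length(x)
>   V = ones(m, n)                  # first column is x.^0, i.e. all ones
>   for j in 2:n
>     for i in 1:m                  # column-major traversal, inner loop over rows
>       V[i, j] = V[i, j-1] * x[i]  # each column is the previous one times x
>     end
>   end
>   return V
> end
>
> x = rand(10000)
> @time vander_jl(x, 100)           # vs. timing np.vander(x, 100) on the Python side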
>
> Josh
>
>
> On Jan 8, 2015, at 1:27 PM, Dakota St. Laurent wrote:
>
> hi all, I've been running some simple benchmarks for my new job to decide 
> which language we should use, Python (NumPy/SciPy) or Julia. I like how 
> simple it seems for Julia to do things in parallel (we plan to be running 
> code on a supercomputer using lots and lots of cores), but I'm not getting 
> the benchmark results I expected. I'm sure I'm doing something wrong here.
>
> Python code:
>
> import time, numpy as np
> N = 25000
> A = np.random.rand(N,N)
> x = np.random.rand(N)
>
> t0 = time.clock()
> A.dot(x)
> print time.clock() - t0
>
> --------------------------------
>
> Julia code:
>
> function rand_mat_vec_mul(A::Array{Float64, 2}, x::Array{Float64,1})
>   tic()
>   A * x
>   toc()
> end
>
> # warmup
> rand_mat_vec_mul(rand(1000,1000), rand(1000))
> rand_mat_vec_mul(rand(1000,1000), rand(1000))
> rand_mat_vec_mul(rand(1000,1000), rand(1000))
>
> # timing
> rand_mat_vec_mul(rand(25000,25000), rand(25000))
>
> ---------------------------
>
> Python generally takes about 0.630-0.635 seconds, and Julia about 
> 0.640-0.650 seconds. as I said, I'm sure I'm doing something wrong; I'm 
> just not sure what. any help is appreciated :)
>
