First of all, MATLAB is pretty good at optimising this problem. Most of the
work is vectorised, so MATLAB can use efficient C code. When running your
MATLAB example, I can also see that both my processors are in use. Please try
this version

https://gist.github.com/andreasnoackjensen/9039685

by running
addprocs(n)  # n = number of processors
@everywhere include("callprice.jl")
@time CallPrice.timecallprice(200)
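
To illustrate Stefan's point about globals below, here is a minimal sketch (with hypothetical option parameters, not the ones from your post) of the same Monte Carlo call price wrapped in a function. Inside a function all variables are local, so the compiler can infer their types and the loop runs at full speed:

```julia
# Monte Carlo price of a European call, all state kept local to the function.
function callprice(S0, K, r, sigma, T, npaths)
    payoff_sum = 0.0
    for i in 1:npaths
        # Terminal price under geometric Brownian motion
        ST = S0 * exp((r - sigma^2 / 2) * T + sigma * sqrt(T) * randn())
        payoff_sum += max(ST - K, 0.0)
    end
    # Discounted mean payoff
    exp(-r * T) * payoff_sum / npaths
end

callprice(100.0, 100.0, 0.05, 0.2, 1.0, 1_000_000)
```

The same loop at top level, reading `S0`, `K`, etc. as globals, is dramatically slower, because the compiler cannot assume their types stay fixed.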


2014-02-16 18:38 GMT+01:00 Stefan Karpinski <[email protected]>:

> Looks like you're using global variables, which is the first no-no of
> Julia performance:
>
> http://julia.readthedocs.org/en/latest/manual/performance-tips/
>
>
> On Sun, Feb 16, 2014 at 3:02 AM, Pithawat Tan Vachiramon <
> [email protected]> wrote:
>
>> I've been testing out performance of simple Monte Carlo simulation for a
>> call option, basically generating random samples of outcome and taking a
>> mean. I did this with vectorization and also in a loop. What I found is
>> that Julia is *way* slower than Matlab or numpy:
>>
>> Julia
>> Vectorized: 50.16s
>> Loop: 358.6s
>>
>> Matlab
>> Vectorized: 6.6s
>>
>> Numpy
>> Vectorized: 10.31s
>>
>> The codes are in here:
>>
>> http://pithawat.com/post/monte-carlo-simulation
>> http://pithawat.com/post/first-brush-with-julia
>>
>> Anyone care to explain why?
>>
>
>


-- 
Kind regards

Andreas Noack Jensen
