That does make a difference, but not quite as much as running it twice does.

No precompile: Run 1 = 0.76 s, Run 2 = 0.077 s (ish)
With precompile: Run 1 = 0.56 s, Run 2 = 0.05 s (ish)

Good to know though, thanks!
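For anyone finding this thread later, the compile-then-reuse behavior is easy to see in isolation. A minimal sketch (the `f` below is just a stand-in for `bandit`, not the actual code from this thread):

```julia
# Stand-in function; any nontrivial function shows the same effect.
f(epsilon::Float64, plays::Int) = epsilon + sum(rand(plays))

# Compile f for the argument types it will be called with, without running it.
precompile(f, (Float64, Int))

# The first timed call now pays little or no JIT-compilation cost.
@time f(0.0, 1000)
```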

On Saturday, May 24, 2014 5:57:08 PM UTC+2, Patrick O'Leary wrote:
>
> Scratch that, I forgot about `precompile`: 
> http://docs.julialang.org/en/latest/stdlib/base/#Base.precompile
>
> That may work for your needs.
>
> On Saturday, May 24, 2014 10:21:16 AM UTC-5, Patrick O'Leary wrote:
>>
>> The first time you run a Julia function, it is lowered and type-inferred, 
>> then LLVM IR is generated, optimized, and compiled down to native code.
>>
>> The second time you run a Julia function, the native code already 
>> generated is called.
>>
>> (This is why the first run of a Julia function is typically thrown out 
>> when benchmarking: the assumption is that the compilation cost will be 
>> amortized away over many calls.)
>>
>> There isn't an easy way around this yet; in your case, though, you can do 
>> a trial run with plays == 1 to get everything compiled first.
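>> (A quick way to see the compile-then-cache behavior with any function; 
>> `g` here is just an illustrative example, not code from this thread:)
>>
>> ```julia
>> g(n) = sum(rand(n))   # any nontrivial function will do
>>
>> @time g(10^6)  # first call: includes compilation time
>> @time g(10^6)  # second call: reuses the cached native code, much faster
>> ```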
>>
>> On Saturday, May 24, 2014 9:39:36 AM UTC-5, St Elmo Wilken wrote:
>>>
>>> Hi,
>>>
>>> I'm using a parallel for loop to evaluate a function a few times; the 
>>> weird thing is that if I run the program more than once there is a 
>>> significant speed improvement.
>>>
>>> The speed improvement phenomenon is independent of how many processes I 
>>> have (the results discussed below are for 4 processes).
>>>
>>> If I run the code shown below the first time (by using 
>>> include("main.jl")) the timed part takes about 1 second on my machine 
>>> (about how long it takes with no parallelisation). 
>>> When I run it again (and for all subsequent runs) it takes about 0.35 
>>> seconds.
>>>
>>> Does anyone know why this happens/how to fix this?
>>>
>>> Thanks!
>>>
>>> I've copy/pasted the code below:
>>>
>>> ******* In the file main.jl *******
>>>
>>> # Main program
>>>
>>> using PyPlot
>>> @everywhere using Distributions
>>> require("bandit.jl")
>>>
>>> plays = 1000
>>> replicates = 2000
>>> ave_rewardp = zeros(plays, 1)
>>>
>>> tic()
>>> ave_rewardp = @parallel (+) for k = 1:replicates
>>>    bandit(0.0, plays)
>>> end
>>> ave_rewardp = ave_rewardp * (1.0/replicates)
>>> toc()
>>>
>>> # Plot the results
>>> figure()
>>> plot([1:plays], ave_rewardp, "b")
>>>
>>> ******* In the file bandit.jl *******
>>>
>>> function bandit(epsilon, plays)
>>>    Nbandit = 10                                     # arms of the bandit
>>>    Qtmeans = rand(Normal(), Nbandit)                # true means of each arm
>>>    Qemeans = rand(MvNormal(Qtmeans, eye(Nbandit)))  # estimated means of each arm
>>>    Qdraw(mean) = rand(Normal(mean))                 # unit-variance normal draw with the given mean
>>>    meanUpdate = ones(Nbandit, 2)                    # column 1: running reward sums, column 2: pull counts
>>>    meanUpdate[:, 1] = Qemeans                       # initialise with the estimated means
>>>
>>>    # Play the game
>>>    reward = zeros(plays, 1)
>>>    for p = 1:plays
>>>       # Epsilon-Greedy Player
>>>       if rand() < epsilon
>>>          gp_index = indmax(rand(Nbandit))  # random index (explore)
>>>       else
>>>          gp_index = indmax(Qemeans)        # index of the max estimated mean (exploit)
>>>       end
>>>
>>>       # Update Step
>>>       value = Qdraw(Qtmeans[gp_index])  # draw from the true mean of the chosen arm
>>>       reward[p] = value
>>>       meanUpdate[gp_index, 2] += 1.0
>>>       meanUpdate[gp_index, 1] += value
>>>       Qemeans[gp_index] = meanUpdate[gp_index, 1] / meanUpdate[gp_index, 2]
>>>    end
>>>
>>>    return reward
>>> end
>>>
