I highly suggest you read through the whole "Performance
Tips <http://julia.readthedocs.org/en/latest/manual/performance-tips/>"
page I linked to above; it has documentation on all these little features
and stuff.  I did get a small improvement (~5%) by enabling SIMD extensions
on the two inner for loops, but that requires a very recent build of Julia
and is a somewhat experimental feature.  Neat to have though.
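For the curious, the @simd annotation I'm talking about looks roughly like this.  This is just a sketch of the idea, not the exact code from my run; @simd only gives the compiler permission to vectorize the reductions, it's no guarantee:

```julia
# Sketch: @inbounds + @simd on the two inner loops of the SGD update.
# @simd lets the compiler reorder the floating-point reductions so it
# can emit vector instructions; results may differ in the last bits.
function sgd_inner!(w, x, y, rate, N, K)
    for n = 1:N
        y_hat = 0.0
        @inbounds @simd for k = 1:K
            y_hat += w[k] * x[k, n]
        end
        step = rate * (y[n] - y_hat)
        @inbounds @simd for k = 1:K
            w[k] += step * x[k, n]
        end
    end
    return w
end
```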
-E


On Sun, Apr 27, 2014 at 12:14 AM, Freddy Chua <[email protected]> wrote:

> wooh, this @inbounds thing is new to me... At least it does show that
> Julia is comparable to Java.
>
>
> On Sunday, April 27, 2014 3:04:26 PM UTC+8, Elliot Saba wrote:
>
>> Since we have made sure that our for loops have the right boundaries, we
>> can assure the compiler that we're not going to step out of the bounds of
>> an array, and surround our code in the @inbounds macro.  This is not
>> something you should do unless you're certain that you'll never try to
>> access memory out of bounds, but it does get the runtime down to 0.23
>> seconds, which is on the same order as Java.  Here's the full code
>> <https://gist.github.com/staticfloat/11339342> with all the modifications
>> made.
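Spelled out, the change is just wrapping the hot loops in the macro, something like this (a minimal sketch; the gist above has the actual modified code):

```julia
# Sketch: @inbounds applied to a block elides bounds checks for every
# array access inside it.  Only safe when the loop ranges are known to
# stay inside the arrays, as they are here.
function predict_all(w, x)
    K, N = size(x)
    out = zeros(N)
    @inbounds for n = 1:N
        s = 0.0
        for k = 1:K
            s += w[k] * x[k, n]
        end
        out[n] = s
    end
    return out
end
```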
>> -E
>>
>>
>> On Sat, Apr 26, 2014 at 11:55 PM, Freddy Chua <[email protected]> wrote:
>>
>>> Stochastic Gradient Descent is one of the most important optimisation
>>> algorithms in Machine Learning, so having it perform better than Java is
>>> important for more widespread adoption.
>>>
>>>
>>> On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote:
>>>
>>>> This code takes 60+ secs to execute on my machine. The Java equivalent
>>>> takes only 0.2 secs!!! Please tell me how to optimise the following
>>>> code.
>>>>
>>>> begin
>>>>   N = 10000
>>>>   K = 100
>>>>   rate = 1e-2
>>>>   ITERATIONS = 1
>>>>
>>>>   # generate y
>>>>   y = rand(N)
>>>>
>>>>   # generate x
>>>>   x = rand(K, N)
>>>>
>>>>   # generate w
>>>>   w = zeros(Float64, K)
>>>>
>>>>   tic()
>>>>   for i=1:ITERATIONS
>>>>     for n=1:N
>>>>       y_hat = 0.0
>>>>       for k=1:K
>>>>         y_hat += w[k] * x[k,n]
>>>>       end
>>>>
>>>>       for k=1:K
>>>>         w[k] += rate * (y[n] - y_hat) * x[k,n]
>>>>       end
>>>>     end
>>>>   end
>>>>   toc()
>>>> end
>>>>
>>>> Sorry for the repeated posting; I did so to properly indent the code.
>>>>
>>>
>>
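For reference, the biggest single win for the snippet above is the first item on the Performance Tips page: move the work out of global scope into a function, so the compiler can infer concrete types for every variable.  A sketch of the same loop as a function (with tic()/toc() swapped for the @time macro):

```julia
# Sketch: the same SGD loop wrapped in a function, per the "avoid
# global variables" advice on the Performance Tips page.  Inside a
# function every variable gets a concrete inferred type.
function sgd!(w, x, y, rate, iterations)
    K, N = size(x)
    for i = 1:iterations, n = 1:N
        y_hat = 0.0
        for k = 1:K
            y_hat += w[k] * x[k, n]
        end
        step = rate * (y[n] - y_hat)
        for k = 1:K
            w[k] += step * x[k, n]
        end
    end
    return w
end

N, K = 10000, 100
y = rand(N); x = rand(K, N); w = zeros(K)
@time sgd!(w, x, y, 1e-2, 1)
```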
