You are mistaken. The improvement is in the Julia implementation.

On Sunday, April 27, 2014 11:13:12 PM UTC+8, Iain Dunning wrote:
>
> I'm very surprised that Java is that much faster than the initial 
> implementation provided (after its been wrapped in a function). Feel like 
> there is something non-obvious going on...
>
> On Sunday, April 27, 2014 5:33:06 AM UTC-4, Carlos Becker wrote:
>>
>> I agree with Elliot, take a look at the performance tips.
>> Also, you may want to move the tic(), toc() out of the function, make 
>> sure you compile it first, and then use @time <function call> to time it.
>>
>> you may also get a considerable boost by using @simd in your for loops 
>> (together with @inbounds)
>> Let us know how it goes ;)
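[A minimal sketch of the pattern being suggested here, assuming the poster's original loop from further down the thread; the function name `sgd!` and the warm-up call are illustrative, not from the thread:]

```julia
# Sketch: wrap the hot loop in a function, call it once to compile,
# then time the second call with @time.
function sgd!(w, x, y, rate, iterations)
    N = length(y)
    K = length(w)
    for i = 1:iterations, n = 1:N
        y_hat = 0.0
        @inbounds @simd for k = 1:K
            y_hat += w[k] * x[k, n]
        end
        err = rate * (y[n] - y_hat)
        @inbounds @simd for k = 1:K
            w[k] += err * x[k, n]
        end
    end
    return w
end

w = zeros(100); x = rand(100, 10000); y = rand(10000)
sgd!(w, x, y, 1e-2, 1)        # warm-up call forces compilation
@time sgd!(w, x, y, 1e-2, 1)  # timed call measures only the run itself
```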
>>
>> cheers.
>>
>>
>> El domingo, 27 de abril de 2014 09:39:03 UTC+2, Freddy Chua escribió:
>>>
>>> Alright, thanks! All this is looking very positive for Julia.
>>>
>>> On Sunday, April 27, 2014 3:36:23 PM UTC+8, Elliot Saba wrote:
>>>>
>>>> I highly suggest you read through the whole "Performance Tips" page 
>>>> (http://julia.readthedocs.org/en/latest/manual/performance-tips/) that I 
>>>> linked to above; it documents all of these little features.  I did get a 
>>>> small improvement (~5%) by enabling SIMD extensions on the two inner for 
>>>> loops, but that requires a very recent build of Julia and is a somewhat 
>>>> experimental feature.  Neat to have, though.
>>>> -E
>>>>
>>>>
>>>> On Sun, Apr 27, 2014 at 12:14 AM, Freddy Chua <[email protected]> wrote:
>>>>
>>>>> wooh, this @inbounds thing is new to me... At least it does show that 
>>>>> Julia is comparable to Java.
>>>>>
>>>>>
>>>>> On Sunday, April 27, 2014 3:04:26 PM UTC+8, Elliot Saba wrote:
>>>>>
>>>>>> Since we have made sure that our for loops have the right boundaries, 
>>>>>> we can assure the compiler that we're not going to step out of the 
>>>>>> bounds of an array by surrounding our code with the @inbounds macro.  
>>>>>> This is not something you should do unless you're certain that you'll 
>>>>>> never try to access memory out of bounds, but it does get the runtime 
>>>>>> down to 0.23 seconds, which is on the same order as Java.  Here's the 
>>>>>> full code (https://gist.github.com/staticfloat/11339342) with all the 
>>>>>> modifications made.
>>>>>> -E
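[For concreteness, the change described above amounts to something like the following sketch; the actual modified code is in the linked gist:]

```julia
# Sketch: assert in-bounds indexing on the already-verified inner loops.
# This is safe only because k and n provably stay within the dimensions
# of w and x; @inbounds disables Julia's bounds checking in its scope.
for n = 1:N
    y_hat = 0.0
    @inbounds for k = 1:K
        y_hat += w[k] * x[k, n]
    end
    @inbounds for k = 1:K
        w[k] += rate * (y[n] - y_hat) * x[k, n]
    end
end
```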
>>>>>>
>>>>>>
>>>>>> On Sat, Apr 26, 2014 at 11:55 PM, Freddy Chua <[email protected]> wrote:
>>>>>>
>>>>>>> Stochastic Gradient Descent is one of the most important 
>>>>>>> optimisation algorithms in Machine Learning, so having it perform 
>>>>>>> better than Java is important for more widespread adoption.
>>>>>>>
>>>>>>>
>>>>>>> On Sunday, April 27, 2014 2:03:28 PM UTC+8, Freddy Chua wrote:
>>>>>>>
>>>>>>>> This code takes 60+ secs to execute on my machine. The Java 
>>>>>>>> equivalent takes only 0.2 secs!!! Please tell me how to optimise the 
>>>>>>>> following code.
>>>>>>>>
>>>>>>>> begin
>>>>>>>>   N = 10000
>>>>>>>>   K = 100
>>>>>>>>   rate = 1e-2
>>>>>>>>   ITERATIONS = 1
>>>>>>>>
>>>>>>>>   # generate y
>>>>>>>>   y = rand(N)
>>>>>>>>
>>>>>>>>   # generate x
>>>>>>>>   x = rand(K, N)
>>>>>>>>
>>>>>>>>   # generate w
>>>>>>>>   w = zeros(Float64, K)
>>>>>>>>
>>>>>>>>   tic()
>>>>>>>>   for i=1:ITERATIONS
>>>>>>>>     for n=1:N
>>>>>>>>       y_hat = 0.0
>>>>>>>>       for k=1:K
>>>>>>>>         y_hat += w[k] * x[k,n]
>>>>>>>>       end
>>>>>>>>
>>>>>>>>       for k=1:K
>>>>>>>>         w[k] += rate * (y[n] - y_hat) * x[k,n]       
>>>>>>>>       end
>>>>>>>>     end
>>>>>>>>   end
>>>>>>>>   toc()
>>>>>>>> end
>>>>>>>>
>>>>>>>> Sorry for repeated posting, I did so to properly indent the code..
>>>>>>>>
>>>>>>>
>>>>>>
>>>>
