Doing the math, that makes the optimized Julia version about 19% slower than C++ (2.26 s vs. 1.9 s), which is fast indeed.
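[Editor's note: the 2.26 s figure in Andreas's message below combines the system-libm log with @inbounds and a devectorised max(abs). A minimal sketch of that devectorised reduction, with names of my own choosing rather than from the thread, could look like:]

```julia
# Sketch (hypothetical names, not from the RBC code): max(abs(x)) written
# as a plain loop with bounds checks disabled, equivalent to Base.maxabs(x)
# but avoiding the temporary array that maximum(abs(x)) allocates.
function maxabs_devec(x::Vector{Float64})
    m = 0.0
    @inbounds for i in 1:length(x)
        a = abs(x[i])
        if a > m
            m = a
        end
    end
    return m
end
```

[For example, `maxabs_devec([-3.0, 2.0, 1.5])` returns `3.0`; the point is that the fused loop never materialises `abs(x)`.]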
On Mon, Jun 16, 2014 at 1:02 PM, Andreas Noack Jensen <andreasnoackjen...@gmail.com> wrote:

> I think that the log in openlibm is slower than most system logs. On my
> mac, if I use
>
>     mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
>
> the code runs 25 pct. faster. If I also use @inbounds and devectorise the
> max(abs), it runs in 2.26 seconds on my machine. The C++ version with the
> XCode compiler and -O3 runs in 1.9 seconds.
>
>
> 2014-06-16 18:21 GMT+02:00 Florian Oswald <florian.osw...@gmail.com>:
>
>> Hi guys,
>>
>> thanks for the comments. Notice that I'm not the author of this code [so
>> variable names are not on me :-)], I just tried to speed it up a bit. In
>> fact, declaring types before running the computation function and using
>> @inbounds made the code 24% faster than the benchmark version. Here's my
>> attempt:
>>
>> https://github.com/floswald/Comparison-Programming-Languages-Economics/tree/master/julia/floswald
>>
>> I should try Base.maxabs.
>>
>> In profiling this I found that a lot of time is spent here:
>>
>> https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L119
>>
>> which I'm not sure how to avoid.
>>
>>
>> On 16 June 2014 17:13, Dahua Lin <linda...@gmail.com> wrote:
>>
>>> First, I agree with John that you don't have to declare the types in
>>> general, as you would in a compiled language. It seems that Julia would
>>> be able to infer the types of most variables in your code.
>>>
>>> There are several ways that your code's efficiency may be improved:
>>>
>>> (1) You can use @inbounds to waive bounds checking in several places,
>>> such as lines 94 and 95 (in RBC_Julia.jl).
>>> (2) Lines 114 and 116 involve reallocating new arrays, which is probably
>>> unnecessary. Also note that Base.maxabs can compute the maximum absolute
>>> value more efficiently than maximum(abs( ...
>>> )).
>>>
>>> In terms of measurement, did you pre-compile the function before
>>> measuring the runtime?
>>>
>>> A side note about code style: it seems that it uses a lot of Java-ish
>>> descriptive names in camel case. Julia practice tends to encourage more
>>> concise naming.
>>>
>>> Dahua
>>>
>>>
>>> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>>>
>>>> Maybe it would be good to verify the claim made at
>>>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>>>>
>>>> I would think that specifying all those types wouldn’t matter much if
>>>> the code doesn’t have type-stability problems.
>>>>
>>>> — John
>>>>
>>>> On Jun 16, 2014, at 8:52 AM, Florian Oswald <florian...@gmail.com>
>>>> wrote:
>>>>
>>>> > Dear all,
>>>> >
>>>> > I thought you might find this paper interesting:
>>>> > http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>>>> >
>>>> > It takes a standard model from macroeconomics and computes its
>>>> > solution with an identical algorithm in several languages. Julia is
>>>> > roughly 2.6 times slower than the best C++ executable. I was a bit
>>>> > puzzled by the result, since in the benchmarks on http://julialang.org/,
>>>> > the slowest test is 1.66 times C. I realize that those benchmarks can't
>>>> > cover all possible situations. That said, I couldn't really find anything
>>>> > unusual in the Julia code; I did some profiling and removed type
>>>> > inference, but still that's as fast as I got it. That's not to say that
>>>> > I'm disappointed, I still think this is great. Did I miss something
>>>> > obvious here, or is there something specific to this algorithm?
>>>> >
>>>> > The codes are on github at
>>>> >
>>>> > https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
> --
> Kind regards
>
> Andreas Noack Jensen
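[Editor's note: on John's type-stability point above, explicit type annotations rarely help when inference already succeeds; what hurts is a function whose return type depends on runtime values. The toy functions below are my own illustration, not taken from RBC_Julia.jl:]

```julia
# Illustration only (not from the RBC code): `unstable` returns an Int or a
# Float64 depending on the *value* of its argument, so inference can only
# conclude Union{Float64, Int}; `stable` always returns Float64.
unstable(flag::Bool) = flag ? 1 : 1.0    # type depends on runtime value
stable(flag::Bool)   = flag ? 1.0 : 2.0  # always Float64

# In the REPL, @code_warntype highlights the difference:
#   @code_warntype unstable(true)   # body inferred as a Union type
#   @code_warntype stable(true)     # body inferred as Float64
```

[The practical upshot, consistent with John's remark: annotating variables in an already type-stable function buys little, whereas fixing a value-dependent return type can matter a lot.]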