Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Florian Oswald
Hi Dahua,
I cannot find Base.maxabs (i.e. Julia says Base.maxabs not defined)

I'm here:

julia> versioninfo()
Julia Version 0.3.0-prerelease+2703
Commit 942ae42* (2014-04-22 18:57 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin12.5.0)
  CPU: Intel(R) Core(TM) i5-2435M CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libgfortblas
  LAPACK: liblapack
  LIBM: libopenlibm

cheers

On Monday, 16 June 2014 17:13:44 UTC+1, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able to 
 infer the types of most variables in your code.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bounds checking in several places, such 
 as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is probably 
 unnecessary. Also note that Base.maxabs can compute the maximum absolute 
 value more efficiently than maximum(abs( ... ))
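
 As a minimal sketch of both suggestions (the array names here are 
 hypothetical, not the ones in RBC_Julia.jl):

 x = rand(100); y = rand(100)

 # (1) @inbounds drops the bounds checks inside the annotated loop.
 s = 0.0
 @inbounds for i = 1:length(x)
     s += x[i] * y[i]
 end

 # (2) maxabs fuses abs and maximum, so no temporary array is built for
 # abs(...); note that x - y itself still allocates, which a hand-written
 # loop would also avoid.
 d = Base.maxabs(x - y)    # same result as maximum(abs(x - y))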

 In terms of measurement, did you pre-compile the function before measuring 
 the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage more 
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  

 I would think that specifying all those types wouldn’t matter much if the 
 code doesn’t have type-stability problems. 

  — John 

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com 
 wrote: 

  Dear all, 
  
  I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
  It takes a standard model from macroeconomics and computes its 
 solution with an identical algorithm in several languages. Julia is roughly 
 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
 result, since in the benchmarks on http://julialang.org/, the slowest 
 test is 1.66 times C. I realize that those benchmarks can't cover all 
 possible situations. That said, I couldn't really find anything unusual in 
 the Julia code, did some profiling and removed type inference, but still 
 that's as fast as I got it. That's not to say that I'm disappointed; I 
 still think this is great. Did I miss something obvious here or is there 
 something specific to this algorithm? 
  
  The codes are on github at 
  
  https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
  
  



Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Tomas Lycken
It seems Base.maxabs was added (by Dahua) as late as May 30 
- 
https://github.com/JuliaLang/julia/commit/78bbf10c125a124bc8a1a25e8aaaea1cbc6e0ebc

If you update your Julia to the latest master, you'll have it =)

// T

On Tuesday, June 17, 2014 10:20:05 AM UTC+2, Florian Oswald wrote:

 Hi Dahua,
 I cannot find Base.maxabs (i.e. Julia says Base.maxabs not defined)

 I'm here:

 julia> versioninfo()
 Julia Version 0.3.0-prerelease+2703
 Commit 942ae42* (2014-04-22 18:57 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin12.5.0)
   CPU: Intel(R) Core(TM) i5-2435M CPU @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libgfortblas
   LAPACK: liblapack
   LIBM: libopenlibm

 cheers

 On Monday, 16 June 2014 17:13:44 UTC+1, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able to 
 infer the types of most variables in your code.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bounds checking in several places, such 
 as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is probably 
 unnecessary. Also note that Base.maxabs can compute the maximum absolute 
 value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before 
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage more 
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  

 I would think that specifying all those types wouldn’t matter much if 
 the code doesn’t have type-stability problems. 

  — John 

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com 
 wrote: 

  Dear all, 
  
  I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
  It takes a standard model from macroeconomics and computes its 
 solution with an identical algorithm in several languages. Julia is roughly 
 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
 result, since in the benchmarks on http://julialang.org/, the slowest 
 test is 1.66 times C. I realize that those benchmarks can't cover all 
 possible situations. That said, I couldn't really find anything unusual in 
 the Julia code, did some profiling and removed type inference, but still 
 that's as fast as I got it. That's not to say that I'm disappointed; I 
 still think this is great. Did I miss something obvious here or is there 
 something specific to this algorithm? 
  
  The codes are on github at 
  
  https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
  
  



Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Florian Oswald
Hi Tim - true!
(why on earth would I do that?)

Defining it outside reproduces the speed gain. Thanks!


On 16 June 2014 18:30, Tim Holy tim.h...@gmail.com wrote:

 From the sound of it, one possibility is that you made it a private
 function
 inside the computeTuned function. That creates the equivalent of an
 anonymous
 function, which is slow. You need to make it a generic function (define it
 outside computeTuned).
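
 A minimal sketch of the distinction (all names here are hypothetical; on
 Julia 0.3 an inner/anonymous function was not type-specialized, so calls
 through it could not be inlined, a penalty recent Julia largely removes):

 mylog(x::Float64) = log(x)      # generic function, defined at top level

 function computeSlow(xs)
     f = x -> log(x)             # anonymous function: slow on Julia 0.3
     s = 0.0
     for x in xs
         s += f(x)
     end
     s
 end

 function computeFast(xs)
     s = 0.0
     for x in xs
         s += mylog(x)           # generic call: specialized and inlinable
     end
     s
 end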

 --Tim

 On Monday, June 16, 2014 06:16:49 PM Florian Oswald wrote:
  interesting!
  just tried that - I defined mylog inside the computeTuned function
 
 
  https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L193
 
  but that actually slowed things down considerably. I'm on a mac as well,
  but it seems that's not enough to compare this? or where did you define
  this function?
 
 
  On 16 June 2014 18:02, Andreas Noack Jensen 
 andreasnoackjen...@gmail.com
 
  wrote:
   I think that the log in openlibm is slower than most system logs. On my
   mac, if I use
  
    mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
  
   the code runs 25 pct. faster. If I also use @inbounds and devectorise
 the
   max(abs) it runs in 2.26 seconds on my machine. The C++ version with
 the
   XCode compiler and -O3 runs in 1.9 seconds.
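
    As a rough way to reproduce the comparison (a sketch; the function
    names are illustrative, and results vary by platform, as reported
    later in this thread):

    mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

    function sumlog_base(xs)
        s = 0.0
        for x in xs
            s += log(x)       # Base.log, i.e. openlibm
        end
        s
    end

    function sumlog_system(xs)
        s = 0.0
        for x in xs
            s += mylog(x)     # the system libm via ccall
        end
        s
    end

    xs = rand(10^6) .+ 1.0    # keep log's argument positive
    @time sumlog_base(xs)
    @time sumlog_system(xs)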
  
  
   2014-06-16 18:21 GMT+02:00 Florian Oswald florian.osw...@gmail.com:
  
   Hi guys,
  
   thanks for the comments. Notice that I'm not the author of this code
 [so
   variable names are not on me :-) ] just tried to speed it up a bit. In
   fact, declaring types before running the computation function and
 using
   @inbounds made the code 24% faster than the benchmark version. here's
 my
   attempt
  
  
  
  https://github.com/floswald/Comparison-Programming-Languages-Economics/tree/master/julia/floswald
  
   should try the Base.maxabs.
  
    In profiling this I found that a lot of time is spent here:
  
  
  
  https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L119
  
    which I'm not sure how to avoid.
  
   On 16 June 2014 17:13, Dahua Lin linda...@gmail.com wrote:
    First, I agree with John that you don't have to declare the types in
    general, like in a compiled language. It seems that Julia would be able
    to infer the types of most variables in your code.
  
   There are several ways that your code's efficiency may be improved:
  
    (1) You can use @inbounds to waive bounds checking in several places,
    such as lines 94 and 95 (in RBC_Julia.jl)
    (2) Lines 114 and 116 involve reallocating new arrays, which is probably
    unnecessary. Also note that Base.maxabs can compute the maximum absolute
    value more efficiently than maximum(abs( ... ))
  
   In terms of measurement, did you pre-compile the function before
   measuring the runtime?
  
   A side note about code style. It seems that it uses a lot of Java-ish
   descriptive names with camel case. Julia practice tends to encourage
   more
   concise naming.
  
   Dahua
  
   On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
   Maybe it would be good to verify the claim made at
    https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  
   I would think that specifying all those types wouldn’t matter much
 if
   the code doesn’t have type-stability problems.
  
— John
  
   On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com
  
   wrote:
Dear all,
  
I thought you might find this paper interesting:
   http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
  
 It takes a standard model from macroeconomics and computes its
    solution with an identical algorithm in several languages. Julia is
    roughly 2.6 times slower than the best C++ executable. I was a bit
    puzzled by the result, since in the benchmarks on http://julialang.org/,
    the slowest test is 1.66 times C. I realize that those benchmarks can't
    cover all possible situations. That said, I couldn't really find
    anything unusual in the Julia code, did some profiling and removed type
    inference, but still that's as fast as I got it. That's not to say that
    I'm disappointed; I still think this is great. Did I miss something
    obvious here or is there something specific to this algorithm?
  
The codes are on github at
   
   
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics
  
   --
   Med venlig hilsen
  
   Andreas Noack Jensen




Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Milan Bouchet-Valat
On Monday, June 16, 2014 at 14:59 -0700, Jesus Villaverde wrote:
 Also, defining
 
 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
 
 made quite a bit of difference for me, from 1.92 to around 1.55. If I
 also add @inbounds, I go down to 1.45, making Julia only twice as
 slow as C++. Numba still beats Julia, which kind of bothers me a bit
Since Numba uses LLVM too, you should be able to compare the LLVM IR it
generates to that generated by Julia. Doing this at least for the tight
loop would be very interesting.
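
A sketch of how that comparison could be set up (the kernel here is a
stand-in for the tight loop, not the actual RBC code):

function kernel(v::Vector{Float64})
    s = 0.0
    for x in v
        s += log(x)
    end
    s
end

code_llvm(kernel, (Vector{Float64},))   # dumps Julia's LLVM IR

# On the Numba side, the compiled dispatcher exposes inspect_llvm()
# (in Python), which gives the counterpart IR to diff against.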


My two cents

 Thanks for the suggestions.
 
 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
 Hi
 
 
 1) Yes, we pre-compiled the function.
 
 
 2) As I mentioned before, we tried the code with and without
 type declaration, it makes a difference.
 
 
 3) The variable names turn out to be quite useful because
 this code will be eventually nested into a much larger project
 where it is convenient to have very explicit names.
 
 
 Thanks 
 
 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
 First, I agree with John that you don't have to
 declare the types in general, like in a compiled
 language. It seems that Julia would be able to infer
 the types of most variables in your code.
 
 
 There are several ways that your code's efficiency may
 be improved:
 
 
 (1) You can use @inbounds to waive bounds checking in
 several places, such as lines 94 and 95 (in
 RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays,
 which is probably unnecessary. Also note that
 Base.maxabs can compute the maximum absolute value
 more efficiently than maximum(abs( ... ))
 
 
 In terms of measurement, did you pre-compile the
 function before measuring the runtime?
 
 
 A side note about code style. It seems that it uses a
 lot of Java-ish descriptive names with camel case.
 Julia practice tends to encourage more concise naming.
 
 
 Dahua
 
 
 
 
 
 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles
 White wrote:
 Maybe it would be good to verify the claim
 made at
 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  
 
 I would think that specifying all those types
 wouldn’t matter much if the code doesn’t have
 type-stability problems. 
 
  — John 
 
 On Jun 16, 2014, at 8:52 AM, Florian Oswald
 florian...@gmail.com wrote: 
 
  Dear all, 
  
  I thought you might find this paper
 interesting:
 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
  It takes a standard model from macroeconomics
 and computes its solution with an identical
 algorithm in several languages. Julia is roughly
 2.6 times slower than the best C++ executable. I
 was a bit puzzled by the result, since in the
 benchmarks on http://julialang.org/, the slowest
 test is 1.66 times C. I realize that those
 benchmarks can't cover all possible situations.
 That said, I couldn't really find anything unusual
 in the Julia code, did some profiling and removed
 type inference, but still that's as fast as I got
 it. That's not to say that I'm disappointed; I
 still think this is great. Did I miss something
 obvious here or is there something specific to
 this algorithm? 
  
  The codes are on github at 
  
 
 
 

Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Bruno Rodrigues
Hi Prof. Villaverde, just wanted to say that it was your paper that made me 
try Julia. I must say that I am very happy with the switch! Will you 
continue using Julia for your research?


Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Jesus Villaverde
Ah Sorry, over 20 years of coding in Matlab :(

Yes, you are right, once I change that line, the type definition is 
irrelevant. We should change the paper and the code ASAP

On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:

 By a process of elimination, I determined that the only variable whose 
 declaration affected the run time was vGridCapital.  The variable is 
 declared to be of type Array{Float64,1}, but is initialized as


 vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

 which, unlike in Matlab, produces a Range object, rather than an array. 
  If the line above is modified to

 vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]

 then the type instability is eliminated, and all type declarations can be 
 removed with no effect on execution time.
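
 A quick way to see the difference at the REPL (the steady-state value
 below is made up purely for illustration):

 capitalSteadyState = 3.5   # hypothetical value

 r = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
 typeof(r)                  # a Range type, not Array{Float64,1}

 v = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
 typeof(v)                  # Array{Float64,1} on Julia 0.3; on later
                            # versions use collect(...) instead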

 --Peter


 On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:

 Also, defining

 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

 made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
 add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
 Numba still beats Julia, which kind of bothers me a bit


 Thanks for the suggestions.


 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:

 Hi

 1) Yes, we pre-compiled the function.

 2) As I mentioned before, we tried the code with and without type 
 declaration, it makes a difference.

 3) The variable names turn out to be quite useful because this code 
 will be eventually nested into a much larger project where it is convenient 
 to have very explicit names.

 Thanks 

 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able to 
 infer the types of most variables in your code.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bounds checking in several places, 
 such as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is 
 probably unnecessary. Also note that Base.maxabs can compute the maximum 
 absolute value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before 
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage more 
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  

 I would think that specifying all those types wouldn’t matter much if 
 the code doesn’t have type-stability problems. 

  — John 

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com 
 wrote: 

  Dear all, 
  
  I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
  It takes a standard model from macroeconomics and computes its 
 solution with an identical algorithm in several languages. Julia is 
 roughly 2.6 times slower than the best C++ executable. I was a bit 
 puzzled by the result, since in the benchmarks on http://julialang.org/, 
 the slowest test is 1.66 times C. I realize that those benchmarks can't 
 cover all possible situations. That said, I couldn't really find anything 
 unusual in the Julia code, did some profiling and removed type inference, 
 but still that's as fast as I got it. That's not to say that I'm 
 disappointed; I still think this is great. Did I miss something obvious 
 here or is there something specific to this algorithm? 
  
  The codes are on github at 
  
  
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
  
  



Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Stefan Karpinski
Not your fault at all. We need to make this kind of thing easier to discover, 
e.g. with

https://github.com/astrieanna/TypeCheck.jl
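
For instance, a package-free way to surface this kind of problem (a
sketch; the one-liner stands in for the vGridCapital construction):

f(s::Float64) = 0.5*s:0.1:1.5*s    # stand-in for the grid line

# code_typed prints the types inference assigned; a return type that is a
# Range where an Array was declared (or a Union/Any) flags the instability.
code_typed(f, (Float64,))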

 On Jun 17, 2014, at 8:35 AM, Jesus Villaverde vonbismarck1...@gmail.com 
 wrote:
 
 Ah Sorry, over 20 years of coding in Matlab :(
 
 Yes, you are right, once I change that line, the type definition is 
 irrelevant. We should change the paper and the code ASAP
 
 On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
 By a process of elimination, I determined that the only variable whose 
 declaration affected the run time was vGridCapital.  The variable is 
 declared to be of type Array{Float64,1}, but is initialized as
 
 
 vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
 
 which, unlike in Matlab, produces a Range object, rather than an array.  If 
 the line above is modified to
 
 vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
 
 then the type instability is eliminated, and all type declarations can be 
 removed with no effect on execution time.
 
 --Peter
 
 
 On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:
 Also, defining
 
 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
 
 made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
 add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
 Numba still beats Julia, which kind of bothers me a bit
 
 Thanks for the suggestions.
 
 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
 Hi
 
 1) Yes, we pre-compiled the function.
 
 2) As I mentioned before, we tried the code with and without type 
 declaration, it makes a difference.
 
 3) The variable names turn out to be quite useful because this code will 
 be eventually nested into a much larger project where it is convenient to 
 have very explicit names.
 
 Thanks 
 
 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able 
 to infer the types of most variables in your code.
 
 There are several ways that your code's efficiency may be improved:
 
 (1) You can use @inbounds to waive bounds checking in several places, such 
 as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is probably 
 unnecessary. Also note that Base.maxabs can compute the maximum 
 absolute value more efficiently than maximum(abs( ... ))
 
 In terms of measurement, did you pre-compile the function before 
 measuring the runtime?
 
 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage more 
 concise naming.
 
 Dahua
 
 
 
 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
 Maybe it would be good to verify the claim made at 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  
 
 I would think that specifying all those types wouldn’t matter much if 
 the code doesn’t have type-stability problems. 
 
  — John 
 
 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com 
 wrote: 
 
  Dear all, 
  
  I thought you might find this paper interesting: 
  http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
   It takes a standard model from macroeconomics and computes its 
  solution with an identical algorithm in several languages. Julia is 
  roughly 2.6 times slower than the best C++ executable. I was a bit 
  puzzled by the result, since in the benchmarks on 
  http://julialang.org/, the slowest test is 1.66 times C. I realize 
  that those benchmarks can't cover all possible situations. That said, 
  I couldn't really find anything unusual in the Julia code, did some 
  profiling and removed type inference, but still that's as fast as I 
  got it. That's not to say that I'm disappointed; I still think this is 
  great. Did I miss something obvious here or is there something 
  specific to this algorithm? 
  
  The codes are on github at 
  
  https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
  
  
 


Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Jesus Villaverde
I think so! Matlab is just too slow for many things and a bit old in some 
dimensions. I often use C++, but for a lot of stuff, it is just too 
cumbersome.

On Tuesday, June 17, 2014 8:50:02 AM UTC-4, Bruno Rodrigues wrote:

 Hi Prof. Villaverde, just wanted to say that it was your paper that made me 
 try Julia. I must say that I am very happy with the switch! Will you 
 continue using Julia for your research?



Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Jesus Villaverde
Thanks! I'll learn those tools. In any case, paper updated online, github 
page with new commit. This is really great. Nice example of aggregation of 
information. Economists love that :)

On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:

 Not your fault at all. We need to make this kind of thing easier to 
 discover, e.g. with

 https://github.com/astrieanna/TypeCheck.jl

 On Jun 17, 2014, at 8:35 AM, Jesus Villaverde vonbism...@gmail.com 
 wrote:

 Ah Sorry, over 20 years of coding in Matlab :(

 Yes, you are right, once I change that line, the type definition is 
 irrelevant. We should change the paper and the code ASAP

 On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:

 By a process of elimination, I determined that the only variable whose 
 declaration affected the run time was vGridCapital.  The variable is 
 declared to be of type Array{Float64,1}, but is initialized as


 vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

 which, unlike in Matlab, produces a Range object, rather than an array. 
  If the line above is modified to

 vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]

 then the type instability is eliminated, and all type declarations can be 
 removed with no effect on execution time.

 --Peter


 On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:

 Also, defining

 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

 made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
 add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
 Numba still beats Julia, which kind of bothers me a bit


 Thanks for the suggestions.


 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:

 Hi

 1) Yes, we pre-compiled the function.

 2) As I mentioned before, we tried the code with and without type 
 declaration, it makes a difference.

 3) The variable names turn out to be quite useful because this code 
 will be eventually nested into a much larger project where it is 
 convenient to have very explicit names.

 Thanks 

 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able 
 to infer the types of most variables in your code.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bounds checking in several places, 
 such as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is 
 probably unnecessary. Also note that Base.maxabs can compute the maximum 
 absolute value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before 
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage more 
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  

 I would think that specifying all those types wouldn’t matter much if 
 the code doesn’t have type-stability problems. 

  — John 

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com 
 wrote: 

  Dear all, 
  
  I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
   It takes a standard model from macroeconomics and computes its 
  solution with an identical algorithm in several languages. Julia is 
  roughly 2.6 times slower than the best C++ executable. I was a bit 
  puzzled by the result, since in the benchmarks on http://julialang.org/, 
  the slowest test is 1.66 times C. I realize that those benchmarks can't 
  cover all possible situations. That said, I couldn't really find anything 
  unusual in the Julia code, did some profiling and removed type inference, 
  but still that's as fast as I got it. That's not to say that I'm 
  disappointed; I still think this is great. Did I miss something obvious 
  here or is there something specific to this algorithm? 
  
  The codes are on github at 
  
  
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
  
  



Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Tony Kelman
Your matrices are kinda small so it might not make much difference, but it 
would be interesting to see whether using the Tridiagonal type could speed 
things up at all.
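
A sketch of what that could look like (sizes and values are arbitrary; the
point is that Tridiagonal stores only the three diagonals, so a solve like
T \ b runs in O(n) rather than O(n^3)):

# on Julia 1.0+ this needs: using LinearAlgebra
n  = 5
dl = rand(n-1)               # sub-diagonal
d  = rand(n)                 # main diagonal
du = rand(n-1)               # super-diagonal
T  = Tridiagonal(dl, d, du)

b = rand(n)
x = T \ b                    # banded solve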


On Tuesday, June 17, 2014 6:25:24 AM UTC-7, Jesus Villaverde wrote:

 Thanks! I'll learn those tools. In any case, paper updated online, github 
 page with new commit. This is really great. Nice example of aggregation of 
 information. Economists love that :)

 On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:

 Not your fault at all. We need to make this kind of thing easier to 
 discover, e.g. with

 https://github.com/astrieanna/TypeCheck.jl

 On Jun 17, 2014, at 8:35 AM, Jesus Villaverde vonbism...@gmail.com 
 wrote:

 Ah Sorry, over 20 years of coding in Matlab :(

 Yes, you are right, once I change that line, the type definition is 
 irrelevant. We should change the paper and the code ASAP

 On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:

 By a process of elimination, I determined that the only variable whose 
 declaration affected the run time was vGridCapital.  The variable is 
 declared to be of type Array{Float64,1}, but is initialized as


 vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

 which, unlike in Matlab, produces a Range object, rather than an array. 
  If the line above is modified to

 vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]

 then the type instability is eliminated, and all type declarations can 
 be removed with no effect on execution time.

 --Peter


 On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:

 Also, defining

 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

 made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
 add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
 Numba still beats Julia, which kind of bothers me a bit


 Thanks for the suggestions.


 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:

 Hi

 1) Yes, we pre-compiled the function.

 2) As I mentioned before, we tried the code with and without type 
 declaration, it makes a difference.

 3) The variable names turn out to be quite useful because this code 
 will be eventually nested into a much larger project where it is 
 convenient to have very explicit names.

 Thanks 

 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able 
 to infer the types of most variables in your code.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bounds checking in several places, 
 such as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is 
 probably unnecessary. Also note that Base.maxabs can compute the maximum 
 absolute value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before 
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage 
 more 
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
  

 I would think that specifying all those types wouldn’t matter much 
 if the code doesn’t have type-stability problems. 

  — John 

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com 
 wrote: 

  Dear all, 
  
  I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
  
  It takes a standard model from macroeconomics and computes its 
 solution with an identical algorithm in several languages. Julia is 
 roughly 2.6 times slower than the best C++ executable. I was a bit 
 puzzled by the result, since in the benchmarks on http://julialang.org/, 
 the slowest test is 1.66 times C. I realize that those benchmarks can't 
 cover all possible situations. That said, I couldn't really find anything 
 unusual in the Julia code, did some profiling and removed type inference, 
 but still that's as fast as I got it. That's not to say that I'm 
 disappointed; I still think this is great. Did I miss something obvious 
 here or is there something specific to this algorithm? 
  
  The codes are on github at 
  
  
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
  
  



Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Cameron McBride
Do any of the more initiated have an idea why Numba performs better for
this application, as both it and Julia use LLVM?  I'm just asking out of
pure curiosity.

Cameron


On Tue, Jun 17, 2014 at 10:11 AM, Tony Kelman t...@kelman.net wrote:

 Your matrices are kinda small so it might not make much difference, but it
 would be interesting to see whether using the Tridiagonal type could speed
 things up at all.


 On Tuesday, June 17, 2014 6:25:24 AM UTC-7, Jesus Villaverde wrote:

 Thanks! I'll learn those tools. In any case, paper updated online, github
 page with new commit. This is really great. Nice example of aggregation of
 information. Economists love that :)

 On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:

 Not your fault at all. We need to make this kind of thing easier to
 discover, e.g. with

 https://github.com/astrieanna/TypeCheck.jl

 On Jun 17, 2014, at 8:35 AM, Jesus Villaverde vonbism...@gmail.com
 wrote:

 Ah Sorry, over 20 years of coding in Matlab :(

 Yes, you are right, once I change that line, the type definition is
 irrelevant. We should change the paper and the code ASAP

 On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:

 By a process of elimination, I determined that the only variable whose
 declaration affected the run time was vGridCapital.  The variable is
 declared to be of type Array{Float64,1}, but is initialized as


 vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

 which, unlike in Matlab, produces a Range object, rather than an array.
  If the line above is modified to

 vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]

 then the type instability is eliminated, and all type declarations can
 be removed with no effect on execution time.

 --Peter


 On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:

 Also, defining

 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

 made quite a bit of difference for me, from 1.92 to around 1.55. If I 
 also add @inbounds, I go down to 1.45, making Julia only twice as slow 
 as C++. Numba still beats Julia, which kind of bothers me a bit


 Thanks for the suggestions.


 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:

 Hi

 1) Yes, we pre-compiled the function.

 2) As I mentioned before, we tried the code with and without type
 declaration, it makes a difference.

 3) The variable names turn out to be quite useful because this code
 will be eventually nested into a much larger project where it is
 convenient to have very explicit names.

 Thanks

 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in
 general, like in a compiled language. It seems that Julia would be able 
 to infer the types of most variables in your code.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bounds checking in several places,
 such as lines 94 and 95 (in RBC_Julia.jl)
 (2) Lines 114 and 116 involve reallocating new arrays, which is
 probably unnecessary. Also note that Base.maxabs can compute the 
 maximum absolute value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of
 Java-ish descriptive names with camel case. Julia practice tends to
 encourage more concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9

 I would think that specifying all those types wouldn’t matter much
 if the code doesn’t have type-stability problems.

  — John

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com
 wrote:

  Dear all,
 
  I thought you might find this paper interesting:
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
 
  It takes a standard model from macroeconomics and computes its
 solution with an identical algorithm in several languages. Julia is 
 roughly 2.6 times slower than the best C++ executable. I was a bit
 puzzled by the result, since in the benchmarks on http://julialang.org/,
 the slowest test is 1.66 times C. I realize that those benchmarks can't 
 cover all possible situations. That said, I couldn't really find anything 
 unusual in the Julia code, did some profiling and removed type inference, 
 but still that's as fast as I got it. That's not to say that I'm
 disappointed; I still think this is great. Did I miss something obvious
 here or is there something specific to this algorithm?
 
  The codes are on github at
 
  https://github.com/jesusfv/Comparison-Programming-Languages-Economics
 
 




Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Florian Oswald
Thanks Peter. I made that devectorizing change after Dahua suggested it. It
made a massive difference!

On Tuesday, 17 June 2014, Peter Simon psimon0...@gmail.com wrote:

 You're right.  Replacing the NumericExtensions function calls with a small
 loop

 maxDifference = 0.0
 for k = 1:length(mValueFunction)
     maxDifference = max(maxDifference, abs(mValueFunction[k] - mValueFunctionNew[k]))
 end


 makes no significant difference in execution time or memory allocation and
 eliminates the dependency.

 --Peter


 On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:

 ...but the Numba version doesn't use tricks like that.

 The uniform metric can also be calculated with a small loop. I think that
 requiring dependencies is against the purpose of the exercise.


 2014-06-17 18:56 GMT+02:00 Peter Simon psimo...@gmail.com:

 As pointed out by Dahua, there is a lot of unnecessary memory
 allocation.  This can be reduced significantly by replacing the lines

 maxDifference     = maximum(abs(mValueFunctionNew - mValueFunction))
 mValueFunction    = mValueFunctionNew
 mValueFunctionNew = zeros(nGridCapital, nGridProductivity)




 with

 maxDifference = maximum(abs!(subtract!(mValueFunction, mValueFunctionNew)))
 (mValueFunction, mValueFunctionNew) = (mValueFunctionNew, mValueFunction)
 fill!(mValueFunctionNew, 0.0)



 abs! and subtract! require adding the line

 using NumericExtensions



 prior to the function line.  I think the OP used Julia 0.2; I don't
 believe that NumericExtensions will work with that old version.  When I
 combine these changes with adding

 @inbounds begin
 ...
 end



 block around the while loop, I get about 25% reduction in execution
 time, and reduction of memory allocation from roughly 700 MByte to 180 MByte

 --Peter


 On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:

 Sounds like we need to rerun these benchmarks after the new GC branch
 gets updated.

  -- John

 On Jun 17, 2014, at 9:31 AM, Stefan Karpinski ste...@karpinski.org
 wrote:

 That definitely smells like a GC issue. Python doesn't have this
 particular problem since it uses reference counting.


 On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa 
 cri...@gmail.com wrote:

  I've just done measurements of algorithm inner-loop times on my
  machine by changing the code as shown in this commit
  https://github.com/cdsousa/Comparison-Programming-Languages-Economics/commit/4f6198ad24adc146c268a1c2eeac14d5ae0f300c
  .

 I've found out something... see for yourself:

 using Winston
 numba_times = readdlm("numba_times.dat")[10:end];
 plot(numba_times)


 https://lh6.googleusercontent.com/-m1c6SAbijVM/U6BpmBmFbqI/Ddc/wtxnKuGFDy0/s1600/numba_times.png
 julia_times = readdlm("julia_times.dat")[10:end];
 plot(julia_times)


 https://lh4.googleusercontent.com/-7iprMnjyZQY/U6Bp8gHVNJI/Ddk/yUgu8RyZ-Kw/s1600/julia_times.png
 println((median(numba_times), mean(numba_times), var(numba_times)))
 (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)

  println((median(julia_times), mean(julia_times), var(julia_times)))
 (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)

 So, while inner loop times have more or less the same median on both
 Julia and Numba tests, the mean and variance are higher in Julia.

 Can that be due to the garbage collector kicking in?
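
 One way to test that hypothesis (a sketch; the commented line stands in
 for the timed inner loop): disable collection around the loop and see
 whether the variance spikes disappear.

 gc()            # start from a clean heap
 gc_disable()    # Julia 0.3 API; on Julia 1.0+ use GC.enable(false)
 # ... run and time the inner loop here ...
 gc_enable()     # on Julia 1.0+: GC.enable(true)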


 On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:

 Dear all,

  I thought you might find this paper interesting: 
  http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf

  It takes a standard model from macroeconomics and computes its
  solution with an identical algorithm in several languages. Julia is 
  roughly 2.6 times slower than the best C++ executable. I was a bit
  puzzled by the result, since in the benchmarks on http://julialang.org/,
  the slowest test is 1.66 times C. I realize that those benchmarks can't
  cover all possible situations. That said, I couldn't really find anything
  unusual in the Julia code, did some profiling and removed type inference,
  but still that's as fast as I got it. That's not to say that I'm
  disappointed; I still think this is great. Did I miss something obvious
  here or is there something specific to this algorithm?

 The codes are on github at

 https://github.com/jesusfv/Comparison-Programming-Languages-Economics







 --
 Med venlig hilsen

 Andreas Noack Jensen




RE: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread David Anthoff
I submitted three pull requests to the original repo that get rid of three 
different array allocations in loops and that make things a fair bit faster 
altogether:

 

https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls

 

I think it would also make sense to run these benchmarks on julia 0.3.0 instead 
of 0.2.1, given that there have been a fair number of performance improvements.

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Florian Oswald
Sent: Tuesday, June 17, 2014 10:50 AM
To: julia-users@googlegroups.com
Subject: Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > 
Java > Matlab > the rest

 

Thanks Peter. I made that devectorizing change after Dahua suggested it. It 
made a massive difference!

On Tuesday, 17 June 2014, Peter Simon psimon0...@gmail.com wrote:

You're right.  Replacing the NumericExtensions function calls with a small loop

 

maxDifference = 0.0
for k = 1:length(mValueFunction)
    maxDifference = max(maxDifference, abs(mValueFunction[k] - mValueFunctionNew[k]))
end


makes no significant difference in execution time or memory allocation and 
eliminates the dependency.

 

--Peter



On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:

...but the Numba version doesn't use tricks like that. 

 

The uniform metric can also be calculated with a small loop. I think that 
requiring dependencies is against the purpose of the exercise.

 

2014-06-17 18:56 GMT+02:00 Peter Simon psimo...@gmail.com:

As pointed out by Dahua, there is a lot of unnecessary memory allocation.  This 
can be reduced significantly by replacing the lines

 

maxDifference     = maximum(abs(mValueFunctionNew - mValueFunction))
mValueFunction    = mValueFunctionNew
mValueFunctionNew = zeros(nGridCapital, nGridProductivity)

 

 

with

 

maxDifference = maximum(abs!(subtract!(mValueFunction, mValueFunctionNew)))
(mValueFunction, mValueFunctionNew) = (mValueFunctionNew, mValueFunction)
fill!(mValueFunctionNew, 0.0)

 

abs! and subtract! require adding the line

 

using NumericExtensions

 

prior to the function line.  I think the OP used Julia 0.2; I don't believe 
that NumericExtensions will work with that old version.  When I combine these 
changes with adding 

 

@inbounds begin
...
end

 

block around the while loop, I get about 25% reduction in execution time, and 
reduction of memory allocation from roughly 700 MByte to 180 MByte

 

--Peter



On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:

Sounds like we need to rerun these benchmarks after the new GC branch gets 
updated.

 

 -- John

 

On Jun 17, 2014, at 9:31 AM, Stefan Karpinski ste...@karpinski.org wrote:

 

That definitely smells like a GC issue. Python doesn't have this particular 
problem since it uses reference counting.

 

On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa cri...@gmail.com 
wrote:

I've just done measurements of algorithm inner-loop times on my machine by 
changing the code as shown in this commit 
https://github.com/cdsousa/Comparison-Programming-Languages-Economics/commit/4f6198ad24adc146c268a1c2eeac14d5ae0f300c
 .

 

I've found out something... see for yourself:

 

using Winston
numba_times = readdlm("numba_times.dat")[10:end];
plot(numba_times)

 
https://lh6.googleusercontent.com/-m1c6SAbijVM/U6BpmBmFbqI/Ddc/wtxnKuGFDy0/s1600/numba_times.png
 

julia_times = readdlm("julia_times.dat")[10:end];
plot(julia_times)

 

 
https://lh4.googleusercontent.com/-7iprMnjyZQY/U6Bp8gHVNJI/Ddk/yUgu8RyZ-Kw/s1600/julia_times.png
 

println((median(numba_times), mean(numba_times), var(numba_times)))

(0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)

 

println((median(julia_times), mean(julia_times), var(julia_times)))

(0.00282404404,0.0034863882123824454,1.7058255003790299e-6)

 

So, while inner loop times have more or less the same median on both Julia and 
Numba tests, the mean and variance are higher in Julia.

 

Can that be due to the garbage collector kicking in?



On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:

Dear all,

 

I thought you might find this paper interesting: 
http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf

 

It takes a standard model from macroeconomics and computes its solution with 
an identical algorithm in several languages. Julia is roughly 2.6 times slower 
than the best C++ executable. I was a bit puzzled by the result, since in the 
benchmarks on http://julialang.org/, the slowest test is 1.66 times C. I 
realize that those benchmarks can't cover all possible situations. That said, I 
couldn't really find anything unusual in the Julia code, did some profiling and 
removed type inference, but still

Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread Peter Simon
Sorry, Florian and David, for not seeing that you were way ahead of me.

On the subject of the log function:  I tried implementing mylog() as 
defined by Andreas on Julia running on CentOS and the result was a 
significant slowdown! (Yes, I defined the mylog function outside of main, 
at the module level).  Not sure if this is due to variation in the quality 
of the libm function on various systems or what.  If so, then it makes 
sense that Julia wants a uniformly accurate and fast implementation via 
openlibm.  But for fastest transcendental function performance, I assume 
that one must use the micro-coded versions built into the processor's 
FPU--Is that what the fast libm implementations do?  In that case, how 
could one hope to compete when using a C-coded version?

--Peter


On Tuesday, June 17, 2014 10:57:47 AM UTC-7, David Anthoff wrote:

 I submitted three pull requests to the original repo that get rid of three 
 different array allocations in loops and that make things a fair bit faster 
 altogether:

  

 https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls

  

 I think it would also make sense to run these benchmarks on julia 0.3.0 
 instead of 0.2.1, given that there have been a fair number of performance 
 improvements.

  

 *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
 Behalf Of *Florian Oswald
 *Sent:* Tuesday, June 17, 2014 10:50 AM
 *To:* julia...@googlegroups.com
 *Subject:* Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > 
 Julia > Java > Matlab > the rest

  

 Thanks Peter. I made that devectorizing change after Dahua suggested it. 
 It made a massive difference!

 On Tuesday, 17 June 2014, Peter Simon psimo...@gmail.com wrote:

 You're right.  Replacing the NumericExtensions function calls with a small 
 loop

  

 maxDifference = 0.0
 for k = 1:length(mValueFunction)
     maxDifference = max(maxDifference, abs(mValueFunction[k] - mValueFunctionNew[k]))
 end


 makes no significant difference in execution time or memory allocation and 
 eliminates the dependency.

  

 --Peter



 On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:

 ...but the Numba version doesn't use tricks like that. 

  

 The uniform metric can also be calculated with a small loop. I think that 
 requiring dependencies is against the purpose of the exercise.

  

 2014-06-17 18:56 GMT+02:00 Peter Simon psimo...@gmail.com:

 As pointed out by Dahua, there is a lot of unnecessary memory allocation. 
  This can be reduced significantly by replacing the lines

  

 maxDifference     = maximum(abs(mValueFunctionNew - mValueFunction))
 mValueFunction    = mValueFunctionNew
 mValueFunctionNew = zeros(nGridCapital, nGridProductivity)

  

  

 with

  

 maxDifference = maximum(abs!(subtract!(mValueFunction, mValueFunctionNew)))
 (mValueFunction, mValueFunctionNew) = (mValueFunctionNew, mValueFunction)
 fill!(mValueFunctionNew, 0.0)

  

 abs! and subtract! require adding the line

  

 using NumericExtensions

  

 prior to the function line.  I think the OP used Julia 0.2; I don't 
 believe that NumericExtensions will work with that old version.  When I 
 combine these changes with adding 

  

 @inbounds begin
 ...
 end

  

 block around the while loop, I get about 25% reduction in execution 
 time, and reduction of memory allocation from roughly 700 MByte to 180 MByte

  

 --Peter



 On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:

 Sounds like we need to rerun these benchmarks after the new GC branch gets 
 updated.

  

  -- John

  

 On Jun 17, 2014, at 9:31 AM, Stefan Karpinski ste...@karpinski.org 
 wrote:

  

 That definitely smells like a GC issue. Python doesn't have this 
 particular problem since it uses reference counting.

  

 On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa cri...@gmail.com 
 wrote:

 I've just done measurements of algorithm inner-loop times on my machine by 
 changing the code as shown in this commit 
 https://github.com/cdsousa/Comparison-Programming-Languages-Economics/commit/4f6198ad24adc146c268a1c2eeac14d5ae0f300c
 .

  

 I've found out something... see for yourself:

  

 using Winston
 numba_times = readdlm("numba_times.dat")[10:end];
 plot(numba_times)


 https://lh6.googleusercontent.com/-m1c6SAbijVM/U6BpmBmFbqI/Ddc/wtxnKuGFDy0/s1600/numba_times.png

 julia_times = readdlm("julia_times.dat")[10:end];
 plot(julia_times)

  


 https://lh4.googleusercontent.com/-7iprMnjyZQY/U6Bp8gHVNJI/Ddk/yUgu8RyZ-Kw/s1600/julia_times.png

 println((median(numba_times), mean(numba_times), var(numba_times)))

 (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)

  

 println((median(julia_times), mean(julia_times), var(julia_times)))

 (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)

  

 So, while

RE: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > Java > Matlab > the rest

2014-06-17 Thread David Anthoff
Another interesting result from the paper is how much faster Visual C++ 2010 
generated code is than gcc's, on Windows. For their example, the gcc runtime is 
2.29 times the runtime of the MS-compiled version. The difference might be even 
larger with Visual C++ 2013 because that is when MS added an auto-vectorizer 
that is on by default.

 

I vaguely remember a discussion about compiling julia itself with the MS 
compiler on Windows; is that working, and does it make a performance 
difference?

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Peter Simon
Sent: Tuesday, June 17, 2014 12:08 PM
To: julia-users@googlegroups.com
Subject: Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > 
Java > Matlab > the rest

 

Sorry, Florian and David, for not seeing that you were way ahead of me.

 

On the subject of the log function:  I tried implementing mylog() as defined by 
Andreas on Julia running on CentOS and the result was a significant slowdown! 
(Yes, I defined the mylog function outside of main, at the module level).  Not 
sure if this is due to variation in the quality of the libm function on various 
systems or what.  If so, then it makes sense that Julia wants a uniformly 
accurate and fast implementation via openlibm.  But for fastest transcendental 
function performance, I assume that one must use the micro-coded versions built 
into the processor's FPU--Is that what the fast libm implementations do?  In 
that case, how could one hope to compete when using a C-coded version?

 

--Peter



On Tuesday, June 17, 2014 10:57:47 AM UTC-7, David Anthoff wrote:

I submitted three pull requests to the original repo that get rid of three 
different array allocations in loops and that make things a fair bit faster 
altogether:

 

https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls

 

I think it would also make sense to run these benchmarks on julia 0.3.0 instead 
of 0.2.1, given that there have been a fair number of performance improvements.

 

From: julia...@googlegroups.com 
[mailto:julia...@googlegroups.com] On Behalf Of Florian Oswald
Sent: Tuesday, June 17, 2014 10:50 AM
To: julia...@googlegroups.com
Subject: Re: [julia-users] Benchmarking study: C++ > Fortran > Numba > Julia > 
Java > Matlab > the rest

 

Thanks Peter. I made that devectorizing change after Dahua suggested it. It 
made a massive difference!

On Tuesday, 17 June 2014, Peter Simon psimo...@gmail.com wrote:

You're right.  Replacing the NumericExtensions function calls with a small loop

 

maxDifference = 0.0
for k = 1:length(mValueFunction)
    maxDifference = max(maxDifference, abs(mValueFunction[k] - mValueFunctionNew[k]))
end


makes no significant difference in execution time or memory allocation and 
eliminates the dependency.

 

--Peter



On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:

...but the Numba version doesn't use tricks like that. 

 

The uniform metric can also be calculated with a small loop. I think that 
requiring dependencies is against the purpose of the exercise.

 

2014-06-17 18:56 GMT+02:00 Peter Simon psimo...@gmail.com:

As pointed out by Dahua, there is a lot of unnecessary memory allocation.  This 
can be reduced significantly by replacing the lines

 

maxDifference     = maximum(abs(mValueFunctionNew - mValueFunction))
mValueFunction    = mValueFunctionNew
mValueFunctionNew = zeros(nGridCapital, nGridProductivity)

 

 

with

 

maxDifference = maximum(abs!(subtract!(mValueFunction, mValueFunctionNew)))
(mValueFunction, mValueFunctionNew) = (mValueFunctionNew, mValueFunction)
fill!(mValueFunctionNew, 0.0)

 

abs! and subtract! require adding the line

 

using NumericExtensions

 

prior to the function line.  I think the OP used Julia 0.2; I don't believe 
that NumericExtensions will work with that old version.  When I combine these 
changes with adding 

 

@inbounds begin
...
end

 

block around the while loop, I get about 25% reduction in execution time, and 
reduction of memory allocation from roughly 700 MByte to 180 MByte

 

--Peter



On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:

Sounds like we need to rerun these benchmarks after the new GC branch gets 
updated.

 

 -- John

 

On Jun 17, 2014, at 9:31 AM, Stefan Karpinski ste...@karpinski.org wrote:

 

That definitely smells like a GC issue. Python doesn't have this particular 
problem since it uses reference counting.

 

On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa cri...@gmail.com 
wrote:

I've just done measurements of algorithm inner-loop times on my machine by 
changing the code as shown in this commit 
https://github.com/cdsousa/Comparison-Programming-Languages-Economics/commit

Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Tobias Knopp
There are some remaining issues but compilation with MSVC is almost 
possible. I did some initial work and Tony Kelman made lots of progress 
in https://github.com/JuliaLang/julia/pull/6230. But there have not been 
any speed comparisons as far as I know. Note that Julia uses JIT 
compilation, and thus I would not expect the source compiler to have a 
huge impact.


Am Dienstag, 17. Juni 2014 21:25:50 UTC+2 schrieb David Anthoff:

 Another interesting result from the paper is how much faster Visual C++ 
 2010 generated code is than gcc's, on Windows. For their example, the gcc 
 runtime is 2.29 times the runtime of the MS-compiled version. The difference 
 might be even larger with Visual C++ 2013, because that is when MS added an 
 auto-vectorizer that is on by default.

  

 I vaguely remember a discussion about compiling Julia itself with the MS 
 compiler on Windows; is that working, and does it make a performance 
 difference?

  

 *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On Behalf Of *Peter Simon
 *Sent:* Tuesday, June 17, 2014 12:08 PM
 *To:* julia...@googlegroups.com
 *Subject:* Re: [julia-users] Benchmarking study: C++  Fortran  Numba  
 Julia  Java  Matlab  the rest

  

 Sorry, Florian and David, for not seeing that you were way ahead of me.

  

 On the subject of the log function: I tried implementing mylog() as 
 defined by Andreas on Julia running on CentOS, and the result was a 
 significant slowdown! (Yes, I defined the mylog function outside of main, 
 at the module level.)  Not sure if this is due to variation in the quality 
 of the libm function on various systems or what.  If so, then it makes 
 sense that Julia wants a uniformly accurate and fast implementation via 
 openlibm.  But for the fastest transcendental function performance, I assume 
 that one must use the micro-coded versions built into the processor's 
 FPU -- is that what the fast libm implementations do?  In that case, how 
 could one hope to compete when using a C-coded version?

  

 --Peter




Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Tony Kelman
I got pretty far on that a few months ago; see 
https://github.com/JuliaLang/julia/pull/6230 
and https://github.com/JuliaLang/julia/issues/6349

A couple of tiny changes aren't in master at the moment, but I was able to 
get libjulia compiled and julia.exe starting the system image bootstrap. It hit 
a stack overflow at osutils.jl, which is right after inference.jl, so the 
problem is likely in compiling type inference. Apparently I was missing 
some flags that are used in the MinGW build to increase the default stack 
size. I haven't gotten back to giving it another try recently.


RE: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread David Anthoff
I was more thinking that this might make a difference for some of the 
dependencies, like OpenBLAS? But I’m not even sure that can be compiled at all 
using MS compilers…

 


Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Tony Kelman
We're diverging from the topic of the thread, but anyway...

No, MSVC OpenBLAS will probably never happen, you'd have to CMake-ify the 
whole thing and probably translate all of the assembly to Intel syntax. And 
skip the Fortran, or use Intel's compiler. I don't think they have the 
resources to do that.

There's a C99-only optimized BLAS implementation under development by the 
FLAME group at the University of Texas (https://github.com/flame/blis) that 
does aim to eventually support MSVC. It's nowhere near as mature as 
OpenBLAS in terms of automatically detecting architecture, cache sizes, 
etc., but their papers look very promising. They could use more people 
poking at it and submitting patches to get it to the usability level we'd 
need.

The rest of the dependencies vary significantly in how painful they would 
be to build with MSVC. GMP in particular was forked into a new project 
called MPIR, with MSVC compatibility being one of the major reasons.




Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Tobias Knopp
I think one has to distinguish between the Julia core dependencies and the 
runtime dependencies. The latter (like OpenBLAS) don't tell us much about how 
fast Julia is. The libm issue discussed in this thread is of that nature.


Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-17 Thread Jesus Villaverde
I ran the code on 0.3.0. It did not improve things (in fact, there was a 
3-5% deterioration).




 On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa cri...@gmail.com 
 wrote:

 I've just done measurements of the algorithm's inner loop times on my machine by 
 changing the code as shown in this commit: 
 https://github.com/cdsousa/Comparison-Programming-Languages-Economics/commit/4f6198ad24adc146c268a1c2eeac14d5ae0f300c

  

 I've found out something... see for yourself:

  

 using Winston
 numba_times = readdlm("numba_times.dat")[10:end];
 plot(numba_times)


 https://lh6.googleusercontent.com/-m1c6SAbijVM/U6BpmBmFbqI/Ddc/wtxnKuGFDy0/s1600/numba_times.png

 julia_times = readdlm("julia_times.dat")[10:end];
 plot(julia_times)

  


 https://lh4.googleusercontent.com/-7iprMnjyZQY/U6Bp8gHVNJI/Ddk/yUgu8RyZ-Kw/s1600/julia_times.png

 println((median(numba_times), mean(numba_times), var(numba_times)))

 (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)

  

 println((median(julia_times), mean(julia_times), var(julia_times)))

 (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)

  

 So, while the inner loop times have more or less the same median in both the 
 Julia and Numba tests, the mean and variance are higher in Julia.

 Can that be due to the garbage collector kicking in?
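
 For anyone who wants to reproduce this kind of per-iteration measurement, here 
 is a minimal sketch (work() is a hypothetical stand-in for the algorithm's 
 inner loop, not the RBC code):

    # Sketch: record per-iteration wall times and summarize them.
    # On Julia 0.3, median/mean/var are in Base; on 1.0+ add: using Statistics
    function work(n)
        s = 0.0
        for i = 1:n
            s += log(1.0 + i)
        end
        return s
    end

    work(10)                                   # warm up: exclude JIT compilation
    times = Float64[]
    for rep = 1:200
        t0 = time_ns()
        work(10^5)
        push!(times, (time_ns() - t0) / 1e9)   # elapsed seconds
    end
    times = times[10:end]                      # drop the first reps, as above
    println((median(times), mean(times), var(times)))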




Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread John Myles White
Maybe it would be good to verify the claim made at 
https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9

I would think that specifying all those types wouldn’t matter much if the code 
doesn’t have type-stability problems.

 — John

On Jun 16, 2014, at 8:52 AM, Florian Oswald florian.osw...@gmail.com wrote:

 Dear all,
 
 I thought you might find this paper interesting: 
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
 
 It takes a standard model from macroeconomics and computes its solution 
 with an identical algorithm in several languages. Julia is roughly 2.6 times 
 slower than the best C++ executable. I was a bit puzzled by the result, since 
 in the benchmarks on http://julialang.org/, the slowest test is 1.66 times C. 
 I realize that those benchmarks can't cover all possible situations. That 
 said, I couldn't really find anything unusual in the Julia code, did some 
 profiling and removed type inference, but still that's as fast as I got it. 
 That's not to say that I'm disappointed, I still think this is great. Did I 
 miss something obvious here or is there something specific to this algorithm? 
 
 The codes are on github at 
 
 https://github.com/jesusfv/Comparison-Programming-Languages-Economics
 
 



Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Dahua Lin
First, I agree with John that you don't have to declare the types in 
general, like in a compiled language. It seems that Julia would be able to 
infer the types of most variables in your code.

There are several ways that your code's efficiency may be improved:

(1) You can use @inbounds to waive bounds checking in several places, such 
as lines 94 and 95 (in RBC_Julia.jl).
(2) Lines 114 and 116 involve reallocating new arrays, which is probably 
unnecessary. Also note that Base.maxabs can compute the maximum of absolute 
value more efficiently than maximum(abs( ... )).

In terms of measurement, did you pre-compile the function before measuring 
the runtime?

A side note about code style. It seems that it uses a lot of Java-ish 
descriptive names with camel case. Julia practice tends to encourage more 
concise naming.

Dahua
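
A minimal sketch of both suggestions combined (the names and the update rule 
are hypothetical stand-ins, not the actual Bellman step):

    # (1) @inbounds skips bounds checks in the hot loop.
    # (2) maxabs(x) beats maximum(abs(x)) by skipping the abs(...) temporary.
    #     maxabs is in Base on Julia 0.3/0.4; later versions: maximum(abs, x).
    function sweep!(Vnew::Matrix{Float64}, V::Matrix{Float64})
        @inbounds for j = 1:size(V, 2), i = 1:size(V, 1)
            Vnew[i, j] = 0.9 * V[i, j] + 0.1   # stand-in update
        end
        return maxabs(Vnew - V)
    end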





Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Florian Oswald
Hi guys,

thanks for the comments. Notice that I'm not the author of this code [so
variable names are not on me :-) ]; I just tried to speed it up a bit. In
fact, declaring types before running the computation function and using
@inbounds made the code 24% faster than the benchmark version. Here's my
attempt:

https://github.com/floswald/Comparison-Programming-Languages-Economics/tree/master/julia/floswald

I should try Base.maxabs.

In profiling this I found that a lot of time is spent here:

https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L119

which I'm not sure how to avoid.





Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Stefan Karpinski
That's an interesting comparison. Being on par with Java is quite
respectable. There's nothing really obvious to change in that code, and it
definitely doesn't need so many type annotations – if the annotations do
improve the performance, it's possible that there's a type instability
somewhere without the annotation. The annotation would avoid the
instability by converting, but the conversion itself can be expensive.

On Mon, Jun 16, 2014 at 12:21 PM, Florian Oswald florian.osw...@gmail.com
wrote:

 Hi guys,

 thanks for the comments. Notice that I'm not the author of this code [so
 variable names are not on me :-) ] just tried to speed it up a bit. In
 fact, declaring types before running the computation function and using
 @inbounds made the code 24% faster than the benchmark version. here's my
 attempt


 https://github.com/floswald/Comparison-Programming-Languages-Economics/tree/master/julia/floswald

 should try the Base.maxabs.

 in profiling this i found that a lot of time is spent here:


 https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L119

 which i'm not sure how to avoid.


 On 16 June 2014 17:13, Dahua Lin linda...@gmail.com wrote:

 First, I agree with John that you don't have to declare the types in
 general, like in a compiled language. It seems that Julia would be able to
 infer the types of most variables in your codes.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bound checking in several places, such
 as line 94 and 95 (in RBC_Julia.jl)
 (2) Line 114 and 116 involves reallocating new arrays, which is probably
 unnecessary. Also note that Base.maxabs can compute the maximum of absolute
 value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish
 descriptive names with camel case. Julia practice tends to encourage more
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at
 https://github.com/jesusfv/Comparison-Programming-
 Languages-Economics/blob/master/RBC_Julia.jl#L9

 I would think that specifying all those types wouldn’t matter much if
 the code doesn’t have type-stability problems.

  — John

 On Jun 16, 2014, at 8:52 AM, Florian Oswald florian...@gmail.com
 wrote:

  Dear all,
 
  I thought you might find this paper interesting:
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
 
  It takes a standard model from macro economics and computes it's
 solution with an identical algorithm in several languages. Julia is roughly
 2.6 times slower than the best C++ executable. I was bit puzzled by the
 result, since in the benchmarks on http://julialang.org/, the slowest
 test is 1.66 times C. I realize that those benchmarks can't cover all
 possible situations. That said, I couldn't really find anything unusual in
 the Julia code, did some profiling and removed type inference, but still
 that's as fast as I got it. That's not to say that I'm disappointed, I
 still think this is great. Did I miss something obvious here or is there
 something specific to this algorithm?
 
  The codes are on github at
 
  https://github.com/jesusfv/Comparison-Programming-Languages-Economics
 
 





Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Andreas Noack Jensen
I think that the log in openlibm is slower than most system logs. On my
mac, if I use

mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

the code runs 25 pct. faster. If I also use @inbounds and devectorise the
max(abs), it runs in 2.26 seconds on my machine. The C++ version with the
XCode compiler and -O3 runs in 1.9 seconds.
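
A quick way to check this on any system is a micro-benchmark along these lines
(a sketch; sumlog is hypothetical and separate from the RBC code):

    # Compare openlibm's log (Julia's default) with the system libm.
    mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

    function sumlog(n)
        s = 0.0
        for i = 1:n
            s += log(1.0 + i)      # openlibm log
        end
        return s
    end

    function sumlog_sys(n)
        s = 0.0
        for i = 1:n
            s += mylog(1.0 + i)    # system libm via ccall
        end
        return s
    end

    sumlog(1); sumlog_sys(1)       # compile both first
    @time sumlog(10^7)
    @time sumlog_sys(10^7)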




-- 
Best regards

Andreas Noack Jensen


Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Stefan Karpinski
Doing the math, that makes the optimized Julia version about 18% slower than
C++ (2.26 s / 1.9 s ≈ 1.19), which is fast indeed.





Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Florian Oswald
Interesting! I just tried that - I defined mylog inside the computeTuned
function

https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L193

but that actually slowed things down considerably. I'm on a mac as well,
but it seems that's not enough to compare this? Or where did you define
this function?




Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Stefan Karpinski
Different systems have quite different libm implementations, both in terms
of speed and accuracy, which is why we have our own. It would be nice if we
could get our log to be faster.







Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Tim Holy
From the sound of it, one possibility is that you made it a private function 
inside the computeTuned function. That creates the equivalent of an anonymous 
function, which is slow. You need to make it a generic function (define it 
outside computeTuned).

--Tim
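
A sketch of the two placements (compute_fast/compute_slow are hypothetical; the 
penalty for the inner definition applies to Julia of this era):

    # Fast: a generic function defined at module (top) level.
    mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

    function compute_fast(n)
        s = 0.0
        for i = 1:n
            s += mylog(1.0 + i)
        end
        return s
    end

    # Slow on Julia 0.2/0.3: an inner definition acts like an anonymous
    # function, so calls to it are not specialized or inlined.
    function compute_slow(n)
        innerlog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
        s = 0.0
        for i = 1:n
            s += innerlog(1.0 + i)
        end
        return s
    end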

On Monday, June 16, 2014 06:16:49 PM Florian Oswald wrote:
 Interesting!
 I just tried that: I defined mylog inside the computeTuned function

 https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L193

 but that actually slowed things down considerably. I'm on a Mac as well,
 but it seems that's not enough to reproduce this? Or where did you define
 this function?
 
 
 On 16 June 2014 18:02, Andreas Noack Jensen andreasnoackjen...@gmail.com
 
 wrote:
 I think that the log in openlibm is slower than most system logs. On my
 Mac, if I use

 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

 the code runs 25 pct. faster. If I also use @inbounds and devectorise the
 max(abs), it runs in 2.26 seconds on my machine. The C++ version with the
 Xcode compiler and -O3 runs in 1.9 seconds.
  
  



Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Stefan Karpinski
Here's an economics blog post that links to this study:

http://juliaeconomics.com/2014/06/15/why-i-started-a-blog-about-programming-julia-for-economics/






Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Jesus Villaverde
Hi

I am one of the authors of the paper :)

Our first version of the code did not declare types. It was thanks to 
Florian's suggestion that we started doing it. We discovered, to our 
surprise, that it reduced execution time by around 25%. I may be mistaken, 
but I do not think there are type-stability problems: we have a version of 
the code that is nearly identical in C++, and we did not have any of those 
type problems there.
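One way to check that on 0.3 is code_typed, which prints the type-annotated AST; Union or Any annotations in the output flag an instability. A hedged sketch, where growth is a made-up example function rather than anything from the paper's code:

# Type-stable: s stays Float64 throughout, so inference succeeds.
function growth(n::Int)
    s = 0.0
    for i = 1:n
        s += 1.0 / i
    end
    return s
end

code_typed(growth, (Int,))   # look for concrete types (Float64, Int);
                             # Union(...) or Any would indicate instability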




Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Jesus Villaverde
Hi

1) Yes, we pre-compiled the function (see the timing sketch below).

2) As I mentioned before, we tried the code with and without type 
declarations; it makes a difference.

3) The variable names turn out to be quite useful because this code will 
eventually be nested into a much larger project where it is convenient to 
have very explicit names.

Thanks 
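On point 1, pre-compiling in the sense the thread uses it just means calling the function once before timing, so the JIT cost is excluded. A minimal sketch, with solveRBC as a hypothetical stand-in for the paper's entry point:

# Hypothetical stand-in for the model's main routine.
function solveRBC()
    s = 0.0
    for i = 1:10^6
        s += log(i)
    end
    return s
end

solveRBC()          # first call triggers JIT compilation
@time solveRBC()    # second call measures the steady-state runtime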




Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Jesus Villaverde
Also, defining

mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

made quite a bit of difference for me, from 1.92 to around 1.55 seconds. If I also add 
@inbounds, I go down to 1.45, making Julia only twice as slow as C++. Numba 
still beats Julia, which kind of bothers me a bit.
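The devectorised max(abs) with @inbounds that Andreas suggested might look like the following; maxabsdiff and the argument names are illustrative, standing in for a convergence check like maximum(abs(a - b)):

# maximum(abs(a - b)) allocates two temporary arrays per iteration; a
# hand-written loop avoids the allocations and, with @inbounds, the
# bounds checks as well.
function maxabsdiff(a::Vector{Float64}, b::Vector{Float64})
    m = 0.0
    @inbounds for i = 1:length(a)
        d = abs(a[i] - b[i])
        if d > m
            m = d
        end
    end
    return m
end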


Thanks for the suggestions.





Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Peter Simon
By a process of elimination, I determined that the only variable whose 
declaration affected the run time was vGridCapital.  The variable is 
declared to be of type Array{Float64,1}, but is initialized as


vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

which, unlike in Matlab, produces a Range object, rather than an array.  If 
the line above is modified to

vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]

then the type instability is eliminated, and all type declarations can be 
removed with no effect on execution time.
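The difference is easy to see at the REPL (0.3 syntax; on later Julia the bracketed form is written with collect instead). The steady-state value here is illustrative:

capitalSteadyState = 1.0   # illustrative value, not the paper's

r = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
typeof(r)    # FloatRange{Float64}: a lazy range, not an Array

a = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
typeof(a)    # Array{Float64,1}: matches the declaration, so the
             # variable stays type-stable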

--Peter





Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest

2014-06-16 Thread Stefan Karpinski
Ah! Excellent sleuthing. That's about the kind of thing I suspected was going on.
