Hi Dahua,
I cannot find Base.maxabs (i.e. Julia says Base.maxabs is not defined).
I'm here:
julia> versioninfo()
Julia Version 0.3.0-prerelease+2703
Commit 942ae42* (2014-04-22 18:57 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin12.5.0)
CPU: Intel(R) Core(TM) i5-2435M CPU @ 2.40GHz
It seems Base.maxabs was added (by Dahua) as late as May 30:
https://github.com/JuliaLang/julia/commit/78bbf10c125a124bc8a1a25e8aaaea1cbc6e0ebc
If you update your Julia to the latest master, you'll have it =)
// T
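For anyone stuck on an older build, a one-line stand-in with the same semantics is easy to define (a sketch only; the real Base.maxabs may use a dedicated non-allocating loop):

```julia
# Stand-in for Base.maxabs on Julia builds that predate it:
# the largest absolute value in a collection.
maxabs_fallback(A) = maximum(abs, A)
```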
On Tuesday, June 17, 2014 10:20:05 AM UTC+2, Florian Oswald wrote:
hi tim - True!
(why on earth would I do that?)
defining it outside reproduces the speed gain. thanks!
On 16 June 2014 18:30, Tim Holy tim.h...@gmail.com wrote:
From the sound of it, one possibility is that you made it a private function inside the computeTuned function.
On Monday, June 16, 2014 at 14:59 -0700, Jesus Villaverde wrote:
Also, defining
mylog(x::Float64) = ccall((:log, libm), Float64, (Float64,), x)
made quite a bit of difference for me, from 1.92 to around 1.55. If I
also add @inbounds, I go down to 1.45, making Julia only twice as
slow as
Hi Prof. Villaverde, just wanted to say that it was your paper that made me
try Julia. I must say that I am very happy with the switch! Will you
continue using Julia for your research?
Ah Sorry, over 20 years of coding in Matlab :(
Yes, you are right, once I change that line, the type definition is
irrelevant. We should change the paper and the code ASAP
On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
By a process of elimination, I determined that the only variable whose declaration affected the run time was vGridCapital.
Not your fault at all. We need to make this kind of thing easier to discover.
E.g. with
https://github.com/astrieanna/TypeCheck.jl
On Jun 17, 2014, at 8:35 AM, Jesus Villaverde vonbismarck1...@gmail.com
wrote:
I think so! Matlab is just too slow for many things and a bit old in some
dimensions. I often use C++, but for a lot of stuff it is just too
cumbersome.
On Tuesday, June 17, 2014 8:50:02 AM UTC-4, Bruno Rodrigues wrote:
Hi Prof. Villaverde, just wanted to say that it was your paper that made me try Julia.
Thanks! I'll learn those tools. In any case, paper updated online, github
page with new commit. This is really great. Nice example of aggregation of
information. Economists love that :)
On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:
Not your fault at all.
Your matrices are kinda small so it might not make much difference, but it
would be interesting to see whether using the Tridiagonal type could speed
things up at all.
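For reference, a minimal sketch of what Tony is suggesting (sizes and values here are made up, and on modern Julia the type lives in LinearAlgebra rather than Base):

```julia
using LinearAlgebra

# Build a small tridiagonal system A*x = b without ever forming the dense matrix.
dl = fill(-1.0, 4)         # sub-diagonal (length n-1)
d  = fill( 4.0, 5)         # main diagonal (length n)
du = fill(-1.0, 4)         # super-diagonal (length n-1)
A  = Tridiagonal(dl, d, du)

b = ones(5)
x = A \ b                  # specialized O(n) solve instead of a dense O(n^3) one
```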
On Tuesday, June 17, 2014 6:25:24 AM UTC-7, Jesus Villaverde wrote:
Thanks! I'll learn those tools.
Do any of the more initiated have an idea why Numba performs better for
this application, as both it and Julia use LLVM? I'm just asking out of
pure curiosity.
Cameron
On Tue, Jun 17, 2014 at 10:11 AM, Tony Kelman t...@kelman.net wrote:
Your matrices are kinda small so it might not make much difference.
Thanks Peter. I made that devectorizing change after Dahua suggested it. It
made a massive difference!
On Tuesday, 17 June 2014, Peter Simon psimon0...@gmail.com wrote:
You're right. Replacing the NumericExtensions function calls with a small
loop

maxDifference = 0.0
for k = 1:length(mValueFunctionNew)
    maxDifference = max(maxDifference, abs(mValueFunctionNew[k] - mValueFunction[k]))
end
Subject: Re: [julia-users] Benchmarking study: C++ Fortran Numba Julia Java Matlab the rest
Sorry, Florian and David, for not seeing that you were way ahead of me.
On the subject of the log function: I tried
There are some remaining issues but compilation with MSVC is almost possible. I
did some initial work and Tony Kelman made lots of progress
Dear all,
I thought you might find this paper
interesting: http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
It takes a standard model from macroeconomics and computes its solution
with an identical algorithm in several languages. Julia is roughly 2.6
times slower than the C++ version.
Maybe it would be good to verify the claim made at
https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
I would think that specifying all those types wouldn’t matter much if the code
doesn’t have type-stability problems.
— John
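To illustrate John's point with a made-up example: type annotations by themselves don't help much, but a variable whose type depends on a branch does hurt, because the compiler can no longer emit specialized code.

```julia
# Type-unstable: `x` is an Int on one branch and a Float64 on the other,
# so the compiler must handle both possibilities at every use of `x`.
function unstable(flag::Bool)
    x = flag ? 1 : 2.0
    return x * 2
end

# Type-stable: `x` is Float64 on every path; no annotation needed.
function stable(flag::Bool)
    x = flag ? 1.0 : 2.0
    return x * 2
end
```

On later Julia versions, @code_warntype makes instabilities like this easy to spot.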
First, I agree with John that you don't have to declare the types in
general, like in a compiled language. It seems that Julia would be able to
infer the types of most variables in your codes.
There are several ways that your code's efficiency may be improved:
(1) You can use @inbounds to turn off array bounds checking in performance-critical loops.
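A minimal sketch of point (1) (the function and variable names here are made up): once you know every index is in range, @inbounds skips the per-access bounds check inside the loop.

```julia
# Sum of squares with bounds checking disabled in the hot loop.
# Only safe because the loop range is derived from length(v) itself.
function sumsq(v::Vector{Float64})
    s = 0.0
    @inbounds for i = 1:length(v)
        s += v[i] * v[i]
    end
    return s
end
```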
Hi guys,
thanks for the comments. Notice that I'm not the author of this code [so
variable names are not on me :-) ], I just tried to speed it up a bit. In
fact, declaring types before running the computation function and using
@inbounds made the code 24% faster than the benchmark version. Here's my
That's an interesting comparison. Being on par with Java is quite
respectable. There's nothing really obvious to change with that code and it
definitely doesn't need so many type annotations – if the annotations do
improve the performance, it's possible that there's a type instability
somewhere
I think that the log in openlibm is slower than most system logs. On my
mac, if I use
mylog(x::Float64) = ccall((:log, libm), Float64, (Float64,), x)
the code runs 25% faster. If I also use @inbounds and devectorise the
max(abs), it runs in 2.26 seconds on my machine.
Doing the math, that makes that optimized Julia version 18% slower than
C++, which is fast indeed.
On Mon, Jun 16, 2014 at 1:02 PM, Andreas Noack Jensen
andreasnoackjen...@gmail.com wrote:
I think that the log in openlibm is slower than most system logs.
interesting!
just tried that - I defined mylog inside the computeTuned function
https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/master/julia/floswald/model.jl#L193
but that actually slowed things down considerably. I'm on a mac as well,
but it seems that's not enough.
Different systems have quite different libm implementations, both in terms
of speed and accuracy, which is why we have our own. It would be nice if we
could get our log to be faster.
On Mon, Jun 16, 2014 at 1:16 PM, Florian Oswald florian.osw...@gmail.com
wrote:
From the sound of it, one possibility is that you made it a private function
inside the computeTuned function. That creates the equivalent of an anonymous
function, which is slow. You need to make it a generic function (define it
outside computeTuned).
--Tim
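A sketch of the two placements Tim contrasts (names are illustrative; on Julia 0.3 the inner definition behaved like a slow anonymous function, though closures became much cheaper in later releases):

```julia
# Generic function defined at top level: calls to it can be dispatched
# statically and inlined into the hot loop.
mylog_outer(x::Float64) = log(x)

function sum_logs_fast(v::Vector{Float64})
    s = 0.0
    for x in v
        s += mylog_outer(x)
    end
    return s
end

# Helper bound inside the function: on Julia 0.3 this created an
# anonymous-function object that was expensive to call per iteration.
function sum_logs_slow(v::Vector{Float64})
    mylog_inner = x -> log(x)
    s = 0.0
    for x in v
        s += mylog_inner(x)
    end
    return s
end
```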
Here's an economics blog post that links to this study:
http://juliaeconomics.com/2014/06/15/why-i-started-a-blog-about-programming-julia-for-economics/
On Mon, Jun 16, 2014 at 1:30 PM, Tim Holy tim.h...@gmail.com wrote:
Hi
I am one of the authors of the paper :)
Our first version of the code did not declare types. It was thanks to
Florian's suggestion that we started doing it. We discovered, to our
surprise, that it reduced execution time by around 25%. I may be mistaken
but I do not think there are
Hi
1) Yes, we pre-compiled the function.
2) As I mentioned before, we tried the code with and without type
declaration, it makes a difference.
3) The variable names turn out to be quite useful because this code will
be eventually nested into a much larger project where it is convenient to
Also, defining
mylog(x::Float64) = ccall((:log, libm), Float64, (Float64,), x)
made quite a bit of difference for me, from 1.92 to around 1.55. If I also add
@inbounds, I go down to 1.45, making Julia only twice as slow as C++. Numba
still beats Julia, which kind of bothers me a bit.
Thanks
By a process of elimination, I determined that the only variable whose
declaration affected the run time was vGridCapital. The variable is
declared to be of type Array{Float64,1}, but is initialized as
vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
which, unlike in Matlab, produces a Range object rather than an
Array{Float64,1}, so the variable's type does not match its declaration.
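A sketch of the mismatch Peter found (the steady-state value below is made up): a colon expression is a lazy range, not a Vector{Float64}, so either drop the type annotation or materialize the range.

```julia
capitalSteadyState = 10.0   # illustrative value only, not the paper's

# The colon expression builds a range object, not an Array{Float64,1}:
g = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

# Materialize it when a genuine vector is required
# (on Julia 0.3 the idiom was [g] rather than collect):
vGridCapital = collect(g)
```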
Ah! Excellent sleuthing. That's about the kind of thing I suspected was going
on.
On Jun 17, 2014, at 12:03 AM, Peter Simon psimon0...@gmail.com wrote: