[julia-users] switch superior
I have a little suggestion: if Julia is going to have switch, could we make it a bit better? Basically the switch would take two parameters: a *function* and a *variable*. On each case it would call the function with those two parameters, and if the function returned true, it would evaluate the case block. Note: the function has to return a *Boolean*. Example:

    function divides(a, b)
        return a % b == 0
    end

    input = 119

    switch(divides, input)
        case 2   # this can be translated as: if divides(input, 2)
        case 3   # 3 divides input without remainder
        case 5   # 5 divides input without remainder
        case 7   # 7 divides input without remainder
    end

Of course you could achieve the default switch behavior like this:

    switch(==, input) ...

What do you think about it?
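For reference, the predicate-style dispatch proposed above can already be emulated with a plain higher-order function today, no new syntax needed. A minimal sketch (the name `pswitch` and the `value => action` argument shape are invented for illustration, not an existing or proposed API):

```julia
# pswitch(f, x, cases...) calls f(x, value) for each value => action pair
# and runs the first action whose test returns true, mirroring the
# proposed switch(f, x) / case semantics.
function pswitch(f, x, cases...)
    for (value, action) in cases
        if f(x, value)
            return action()
        end
    end
    return nothing   # no case matched
end

divides(a, b) = a % b == 0

result = pswitch(divides, 119,
    2 => () -> "divisible by 2",
    3 => () -> "divisible by 3",
    5 => () -> "divisible by 5",
    7 => () -> "divisible by 7")
# 119 = 7 * 17, so this returns "divisible by 7"
```

Passing `==` as the predicate recovers the classic switch-on-equality behavior, exactly as in the proposal.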
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
I have checked the link and read the article. Am I right that the ParallelAccelerator basically uses C code instead of Julia to do the computation? That would be kind of a shame, don't you think?

On Monday, May 9, 2016 at 7:00:38 AM UTC+2, Christian Peel wrote:
> > The usual solution is to devectorize your code and to use loops (except for matrix multiplication if you have large matrices).
>
> I am hopeful that ParallelAccelerator.jl [1][2] or similar projects can enable fast vectorized Julia code.
>
> [1] https://github.com/IntelLabs/ParallelAccelerator.jl
> [2] http://julialang.org/blog/2016/03/parallelaccelerator
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
> The usual solution is to devectorize your code and to use loops (except for matrix multiplication if you have large matrices).

I am hopeful that ParallelAccelerator.jl [1][2] or similar projects can enable fast vectorized Julia code.

[1] https://github.com/IntelLabs/ParallelAccelerator.jl
[2] http://julialang.org/blog/2016/03/parallelaccelerator

On Sun, May 8, 2016 at 3:37 PM, feza wrote:
> I mean the revised script runs just as fast if not a tad faster with the latest master as it does on 0.4.5 :)

--
chris.p...@ieee.org
[julia-users] Re: Calling a function when type field changes.
I've ended up creating this macro:

    macro log(ex)
        local field = eval(ex.args[1].args[2])
        local var = ex.args[2]
        if ex.head == :(+=)
            println("Item ", field, "+", var)
        elseif ex.head == :(-=)
            println("Item ", field, "-", var)
        end
        eval(ex)
    end
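For what it's worth, the same idea can be written without `eval` by having the macro return code that logs and then performs the update; that way it also works on local variables inside functions. A hedged sketch (the `@log` name is kept from the post above, the rest is illustrative):

```julia
# Returns an expression instead of calling eval. Note the right-hand side
# is spliced twice (once in the log line, once in the update), so it
# should be free of side effects.
macro log(ex)
    (ex.head == :(+=) || ex.head == :(-=)) ||
        error("@log only supports += and -= expressions")
    op = ex.head == :(+=) ? "+" : "-"
    target = string(ex.args[1])          # e.g. "counter" or "item.count"
    quote
        println("Item ", $target, " ", $op, " ", $(esc(ex.args[2])))
        $(esc(ex))                       # perform the update in caller scope
    end
end

counter = 10
@log counter += 5    # prints "Item counter + 5"; counter is now 15
```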
Re: [julia-users] External Fortran Library (ifort vs gfortran)
I figured it out: it has to do with the fact that gfortran places temporary arrays on the heap while ifort places them on the stack. Adding the option -heap-arrays to ifort fixed the problem.

Thanks for the help,
Derek

On Friday, May 6, 2016 at 6:17:57 PM UTC-6, Erik Schnetter wrote:
> Derek,
>
> The ccall looks correct. (I didn't count the arguments or check their types, though.)
>
> A wild guess: ifort has an option -i8 that uses 8-byte integers. If you're using it, you'd need to use Int64 instead. Alternatively, there could be an "implicit integer*8" statement somewhere.
>
> -erik
>
> On Fri, May 6, 2016 at 7:43 PM, Derek Tucker wrote:
> > Erik,
> >
> > I wondered that; the library is not mine, but the code is here, work in progress.
> >
> > The ccall is here: https://github.com/jdtuck/spatial_pp/blob/master/thomas_pp.jl#L556
> > The library is here: https://github.com/jdtuck/spatial_pp/blob/master/deps/src/nscluster/Simplex-Thomasf.f
> >
> > From what I understand, everything is a pointer from ccall for Fortran, correct?
> >
> > Thanks,
> > Derek
> >
> > On Friday, May 6, 2016 at 5:03:48 PM UTC-6, Erik Schnetter wrote:
> > > Derek,
> > >
> > > How are you interfacing it with Julia?
> > >
> > > It could be there is an error in the way you are interfacing it. This error could be undetected with gfortran, but be visible with ifort. The error message sounds as if you are writing to a memory location that should not be written to (e.g. to a constant or a string). It is easy to get confused with what is a pointer and what not when using Julia's `ccall` for Fortran code, and this can lead to that kind of error.
> > >
> > > If you point to your code, people might be able to give better advice.
> > >
> > > -erik
> > >
> > > On Fri, May 6, 2016 at 6:03 PM, Derek Tucker wrote:
> > > > I have an external library that I am interfacing with Julia. When I compile it with gfortran it runs without a problem. When I compile it with ifort I get this error:
> > > >
> > > >     ERROR: ReadOnlyMemoryError()
> > > >
> > > > Does anybody have any ideas why? I have tested the library with ifort using a test program and it works fine.
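Since the pointer question came up: a hedged sketch of what a Fortran ccall typically looks like. The library name `libdemo` and subroutine `addone` are invented for illustration (the real code is in the linked repo); Fortran passes everything by reference, and most Fortran compilers lowercase the symbol and append a trailing underscore:

```julia
# Hypothetical Fortran side, compiled with e.g.
#   gfortran -shared -fPIC -o libdemo.so demo.f
#     subroutine addone(n, x)
#       integer n
#       double precision x(n)
#       ...
#
# Scalars go through Ref{...}, arrays through Ptr{...}; the symbol
# addone becomes :addone_ in the compiled library. (On Julia 0.4/0.5
# the return type was written `Void` rather than `Cvoid`.)
n = Ref{Cint}(3)
x = [1.0, 2.0, 3.0]
ccall((:addone_, "libdemo"), Cvoid, (Ref{Cint}, Ptr{Float64}), n, x)
# x is modified in place by the Fortran routine
```

Getting one of these argument types wrong (e.g. passing a value where ifort expects a writable address) is exactly the kind of mistake that can surface as a ReadOnlyMemoryError with one compiler but go unnoticed with another.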
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
I mean the revised script runs just as fast, if not a tad faster, with the latest master as it does on 0.4.5 :)

On Sunday, May 8, 2016 at 5:20:08 PM UTC-4, Patrick Kofod Mogensen wrote:
> Same as v0.4, or same as before you changed the code?
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Same as v0.4, or same as before you changed the code?

On Sunday, May 8, 2016 at 8:55:00 PM UTC+2, feza wrote:
> roughly the same speed.
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Roughly the same speed.

On Sunday, May 8, 2016 at 2:44:19 PM UTC-4, Patrick Kofod Mogensen wrote:
> out of curiosity, what about v0.5?
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
out of curiosity, what about v0.5?
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
With all that done, the Julia code runs about the same as, if not better than, Matlab (using 4 threads).

On Sunday, May 8, 2016 at 2:21:42 PM UTC-4, feza wrote:
> Well, the first problem was that the vectorized version of my code was very slow.
> Then I devectorized and it was still slow, because of the index order clashing with the column-major storage:
> I had wrongly assumed that for i = 1:10, j = 1:10, k = 1:10 runs the index i first, then j, then k...
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Well, the first problem was that the vectorized version of my code was very slow. Then I devectorized and it was still slow, because of the index order clashing with the column-major storage: I had wrongly assumed that for i = 1:10, j = 1:10, k = 1:10 runs the index i first, then j, then k...

On Sunday, May 8, 2016 at 2:04:37 PM UTC-4, David Gold wrote:
> So, the issue here was the indexing clashing up against the column-major storage of multi-dimensional arrays?
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
So, the issue here was the indexing clashing up against the column-major storage of multi-dimensional arrays?

On Sunday, May 8, 2016 at 10:10:54 AM UTC-7, Tk wrote:
> Could you try replacing
>     for i in 1:nx, j in 1:ny, k in 1:nz
> with
>     for k in 1:nz, j in 1:ny, i in 1:nx
> because your arrays are defined like a[i,j,k]?
>
> Another question is, how many cores is your Matlab code using?
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Wow, thank you guys! I totally thought for i in 1:nx, j in 1:ny, k in 1:nz ran the i index first and then j and then k. This has been a great learning experience. Much appreciated; now the Julia code is about twice as fast!

On Sunday, May 8, 2016 at 1:12:30 PM UTC-4, Tk wrote:
> Also try:
>     julia -O --check-bounds=no yourcode.jl
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Also try:

    julia -O --check-bounds=no yourcode.jl

On Monday, May 9, 2016 at 2:03:58 AM UTC+9, feza wrote:
> Milan,
>
> Script is here: https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Try changing the order of your loops:

    for i in 1:nx, j in 1:ny, k in 1:nz

->

    @inbounds for k in 1:nz, j in 1:ny, i in 1:nx

(@inbounds disables bounds checking for arrays; it usually makes a small improvement.)
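To see the effect concretely, here is a small self-contained comparison (a sketch; the array size is arbitrary and the timings are illustrative). Julia arrays are column-major, so the first index should vary fastest, i.e. sit in the innermost loop:

```julia
function sum_rowmajor(a)
    s = 0.0
    for i in 1:size(a, 1), j in 1:size(a, 2), k in 1:size(a, 3)
        s += a[i, j, k]   # k varies fastest: each step jumps nx*ny elements
    end
    return s
end

function sum_colmajor(a)
    s = 0.0
    @inbounds for k in 1:size(a, 3), j in 1:size(a, 2), i in 1:size(a, 1)
        s += a[i, j, k]   # i varies fastest: contiguous memory access
    end
    return s
end

a = rand(200, 200, 200)
sum_rowmajor(a); sum_colmajor(a)   # warm up the JIT before timing
@time sum_rowmajor(a)
@time sum_colmajor(a)              # typically noticeably faster
```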
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Could you try replacing

    for i in 1:nx, j in 1:ny, k in 1:nz

with

    for k in 1:nz, j in 1:ny, i in 1:nx

because your arrays are defined like a[i,j,k]?

Another question is, how many cores is your Matlab code using?

On Monday, May 9, 2016 at 2:03:58 AM UTC+9, feza wrote:
> Milan,
>
> Script is here: https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Milan,

Script is here: https://gist.github.com/musmo/27436a340b41c01d51d557a655276783

On Sunday, May 8, 2016 at 12:40:44 PM UTC-4, feza wrote:
> Thanks for the tip (initially I just translated the Matlab verbatim).
>
> Now I have made all the changes: in-place operations and direct function calls. Despite these changes, Matlab is 3.6 seconds, new Julia 7.6 seconds. TBH the results of this experiment are frustrating; I was hoping Julia was going to provide a huge speedup (on the level of C).
>
> Am I still missing anything in the Julia code that is crucial to speed? @code_warntype looks OK, sans a few red Unions which I don't think are in my control.
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Thanks for the tip (initially I just translated the matlab verbatim). Now I have made all the changes: in-place operations and direct function calls. Despite these changes, Matlab is 3.6 seconds, new Julia 7.6 seconds. TBH the results of this experiment are frustrating; I was hoping Julia was going to provide a huge speedup (on the level of C). Am I still missing anything in the Julia code that is crucial to speed? @code_warntype looks ok sans a few red unions which I don't think are in my control.

On Sunday, May 8, 2016 at 8:15:25 AM UTC-4, Tim Holy wrote:
> One of the really cool features of julia is that functions are allowed to have
> more than 0 arguments. It's even considered good style, and I highly recommend
> making use of this awesome feature in your code! :-)
>
> In other words: try passing all variables as arguments to the functions. Even
> though you're wrapping everything in a function, performance-wise you're
> running up against an inference problem
> (https://github.com/JuliaLang/julia/issues/15276). In terms of coding style,
> you're still essentially using global variables. Honestly, these make your
> life harder in the end (http://c2.com/cgi/wiki?GlobalVariablesAreBad)---it's
> not a bad thing that julia provides gentle encouragement to avoid using them,
> and you're losing out on opportunities by trying to sidestep that
> encouragement.
> Best,
> --Tim
Re: [julia-users] Julia SQL
Also check out the SQLite.jl package. It provides methods for reading CSV files into an SQLite table and then running SQLite SQL commands on those tables. You can then export the SQLite table to a CSV or Data.Table/DataFrame. -Jacob

On May 8, 2016 4:32 AM, "Tero Frondelius" wrote:
> Maybe this thread is relevant:
> https://groups.google.com/forum/m/#!topic/julia-users/QjxiCO-Lv-0
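A minimal sketch of the workflow described above. The function names here reflect a later SQLite.jl API (together with the CSV.jl and DataFrames.jl packages) and have changed over the package's history, so treat them as illustrative rather than authoritative:

```julia
using SQLite, CSV, DataFrames

db = SQLite.DB()                              # in-memory database (pass a filename to persist)
SQLite.load!(CSV.File("data.csv"), db, "t")   # read the CSV into an SQLite table named "t"

# Run arbitrary SQL against the table and materialize the result as a DataFrame.
df = DBInterface.execute(db, "SELECT * FROM t WHERE x > 10") |> DataFrame

CSV.write("out.csv", df)                      # export the result back to CSV
```

The round trip (CSV in, SQL query, DataFrame/CSV out) matches what the post describes, though the exact entry points on a 2016-era SQLite.jl differed.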
[julia-users] Re: How should I store a map with objects?
On Saturday, May 7, 2016 at 2:59:08 AM UTC-4, Ford Ox wrote:
> I want to do a small simulation (similar to games) where I have multiple
> objects that can move and spawn on a map. What is the most efficient way to
> do that?

It depends on how many objects you have and the size of your grid. If you don't have much experience with this type of problem, I would strongly recommend taking whichever approach is easiest, then profiling to find the bottlenecks.

> I was thinking about something like this:
>
> @doc "I want to be able to check for collision in constant time"
> map = Array{Union{all objects...}}(100, 10)

This creates a 100GB array! If your map is that big, then you need another approach (storing your objects in a vector, for example).

> Is this the most efficient way?
> Also, why do I have to specify the type of the map array (Union...), when I
> will technically store there only pointers, which are all of the same size.

Yes, but arrays of chars (for example) can be packed much more efficiently; that's why you need to specify a type.

> If I use Array{Any}, will it be as fast as using Union{..} ?

My guess is "essentially, yes", but profiling is the only source of truth. See the Julia performance tips.

> Should I make map and moveable_objects_1-5 global variables, so I don't have to
> pass them as parameters every time?

Globals have very poor performance in Julia at the moment, unless you can make them const.

> (That will make my code unusable when imported by somebody else, right?)

It depends. But yes, in general, it's better practice to pass the world state as an argument than to keep it as a global.

> Or should I pack them inside one type (e.g. type Data)? I am used to
> [this.](..) so this is kinda new to me.

Yes, that's reasonable.
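One hedged alternative to the dense Union array discussed above is a sparse occupancy map keyed by grid position: collision checks stay constant time, but memory scales with the number of objects rather than the grid area. The names (GameObject, Ball) are illustrative, not from the original post, and `abstract type ... end` is later-Julia syntax (0.4/0.5 wrote `abstract GameObject`):

```julia
abstract type GameObject end

struct Ball <: GameObject
    x::Int
    y::Int
end

# Sparse occupancy map: average O(1) lookup by position,
# memory proportional to the object count, not the grid size.
occupied = Dict{Tuple{Int,Int}, GameObject}()

b = Ball(3, 4)
occupied[(b.x, b.y)] = b

haskey(occupied, (3, 4))   # constant-time collision check
```

Moving an object means deleting its old key and inserting the new one; packing the world state into one struct and passing it as an argument (as suggested above) keeps this fast without globals.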
Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips
One of the really cool features of julia is that functions are allowed to have more than 0 arguments. It's even considered good style, and I highly recommend making use of this awesome feature in your code! :-)

In other words: try passing all variables as arguments to the functions. Even though you're wrapping everything in a function, performance-wise you're running up against an inference problem (https://github.com/JuliaLang/julia/issues/15276). In terms of coding style, you're still essentially using global variables. Honestly, these make your life harder in the end (http://c2.com/cgi/wiki?GlobalVariablesAreBad)---it's not a bad thing that julia provides gentle encouragement to avoid using them, and you're losing out on opportunities by trying to sidestep that encouragement.

Best,
--Tim

On Sunday, May 08, 2016 01:38:41 AM feza wrote:
> That's no surprise your CPU is better :)
>
> Regarding devectorization
>
> for l in 1:q
>     for k in 1:nz
>         for j in 1:ny
>             for i in 1:nx
>                 u = ux[i,j,k]
>                 v = uy[i,j,k]
>                 w = uz[i,j,k]
>
>                 cu = c[k,1]*u + c[k,2]*v + c[k,3]*w
>                 u2 = u*u + v*v + w*w
>                 feq[i,j,k,l] = weights[k]*ρ[i,j,k]*(1 + 3*cu + 9/2*(cu*cu) - 3/2*u2)
>                 f[i,j,k,l] = f[i,j,k,l]*(1-ω) + ω*feq[i,j,k,l]
>             end
>         end
>     end
> end
>
> Actually makes the code a lot slower
>
> On Sunday, May 8, 2016 at 4:37:18 AM UTC-4, Patrick Kofod Mogensen wrote:
>> For what it's worth it runs in about 3-4 seconds on my computer on latest
>> v0.4.
>>
>> CPU : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
>>
>> On Sunday, May 8, 2016 at 10:33:14 AM UTC+2, Patrick Kofod Mogensen wrote:
>>> As for the v0.5 performance (which is horrible), I think it's the boxing
>>> issue with closures: https://github.com/JuliaLang/julia/issues/15276
>>> Right?
>>>
>>> On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote:
>>>> You are using a lot of vectorized operations and Julia isn't as good as
>>>> matlab is with those.
>>>>
>>>> The usual solution is to devectorize your code and use loops (except
>>>> for matrix multiplication if you have large matrices).
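Tim's advice above, sketched in miniature (the function names here are illustrative, and the size of the speed gap is machine- and version-dependent):

```julia
data = rand(10_000)            # non-const global: its type can change at any time

# Reads the global directly: the compiler cannot assume data's type,
# so every access goes through dynamic dispatch and boxing.
function sum_global()
    s = 0.0
    for v in data
        s += v
    end
    return s
end

# Takes the array as an argument: Julia compiles a specialized,
# type-stable method for Vector{Float64}.
function sum_arg(x)
    s = 0.0
    for v in x
        s += v
    end
    return s
end

sum_global() ≈ sum_arg(data)   # identical result, very different performance
```

The two functions compute the same thing; only where the data comes from differs, which is exactly the inference problem the linked issue describes.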
[julia-users] Julia SQL
Maybe this thread is relevant: https://groups.google.com/forum/m/#!topic/julia-users/QjxiCO-Lv-0
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
That's no surprise your CPU is better :)

Regarding devectorization:

for l in 1:q
    for k in 1:nz
        for j in 1:ny
            for i in 1:nx
                u = ux[i,j,k]
                v = uy[i,j,k]
                w = uz[i,j,k]

                cu = c[k,1]*u + c[k,2]*v + c[k,3]*w
                u2 = u*u + v*v + w*w
                feq[i,j,k,l] = weights[k]*ρ[i,j,k]*(1 + 3*cu + 9/2*(cu*cu) - 3/2*u2)
                f[i,j,k,l] = f[i,j,k,l]*(1-ω) + ω*feq[i,j,k,l]
            end
        end
    end
end

actually makes the code a lot slower.

On Sunday, May 8, 2016 at 4:37:18 AM UTC-4, Patrick Kofod Mogensen wrote:
> For what it's worth it runs in about 3-4 seconds on my computer on latest
> v0.4.
>
> CPU : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
>
> On Sunday, May 8, 2016 at 10:33:14 AM UTC+2, Patrick Kofod Mogensen wrote:
>> As for the v0.5 performance (which is horrible), I think it's the boxing
>> issue with closures: https://github.com/JuliaLang/julia/issues/15276
>> Right?
>>
>> On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote:
>>> You are using a lot of vectorized operations and Julia isn't as good as
>>> matlab is with those.
>>>
>>> The usual solution is to devectorize your code and use loops (except
>>> for matrix multiplication if you have large matrices).
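When a hand-devectorized loop like the one above comes out slower, bounds checking on every array access is one common culprit. A hedged, self-contained illustration of the pattern (`axpy!` is an example function, not code from this thread):

```julia
# Computes y[i] += a * x[i] as an explicit loop.
# @inbounds disables per-access bounds checks; eachindex(x, y)
# verifies up front that the two arrays share the same indices.
function axpy!(y, a, x)
    @inbounds for i in eachindex(x, y)
        y[i] += a * x[i]
    end
    return y
end

y = zeros(3)
axpy!(y, 2.0, [1.0, 2.0, 3.0])   # y is now [2.0, 4.0, 6.0]
```

Only add `@inbounds` once the indices are provably in range, as it trades safety for speed; the inference problem Tim Holy describes elsewhere in this thread can also swamp any loop-level optimization.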
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
For what it's worth, it runs in about 3-4 seconds on my computer on latest v0.4.

CPU: Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz

On Sunday, May 8, 2016 at 10:33:14 AM UTC+2, Patrick Kofod Mogensen wrote:
> As for the v0.5 performance (which is horrible), I think it's the boxing
> issue with closures: https://github.com/JuliaLang/julia/issues/15276
> Right?
>
> On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote:
>> You are using a lot of vectorized operations and Julia isn't as good as
>> matlab is with those.
>>
>> The usual solution is to devectorize your code and use loops (except
>> for matrix multiplication if you have large matrices).
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
As for the v0.5 performance (which is horrible), I think it's the boxing issue with closures: https://github.com/JuliaLang/julia/issues/15276 Right?

On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote:
> You are using a lot of vectorized operations and Julia isn't as good as
> matlab is with those.
>
> The usual solution is to devectorize your code and use loops (except
> for matrix multiplication if you have large matrices).
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
You are using a lot of vectorized operations, and Julia isn't as good as matlab is with those. The usual solution is to devectorize your code and use loops (except for matrix multiplication if you have large matrices).
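A minimal illustration of the devectorization being suggested (function names are illustrative; the dot-broadcast syntax is from later Julia versions, where 0.4 would write `a.*b + 2a`). The vectorized form allocates a temporary array per intermediate expression; the loop makes one pass into a preallocated output:

```julia
# Vectorized: a .* b and 2.0 .* a each materialize a temporary array.
f(a, b) = a .* b .+ 2.0 .* a

# Devectorized: a single pass over the data, no temporaries.
function f!(out, a, b)
    for i in eachindex(a)
        out[i] = a[i] * b[i] + 2.0 * a[i]
    end
    return out
end

a = [1.0, 2.0]; b = [3.0, 4.0]
f(a, b) == f!(similar(a), a, b)   # true: identical results
```

For large arrays inside hot loops, the allocation-free version usually wins; for matrix multiplication, the vectorized call into BLAS remains the right choice, as the post says.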
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
Good catch, although this still doesn't explain away the difference. @code_warntype shows me feq, f, ρ, ux, uy, uz are red for some reason, even though I have explicitly stated their types...

On Sunday, May 8, 2016 at 4:13:08 AM UTC-4, michae...@gmail.com wrote:
> I see that c is a constant array of Ints, and its elements multiply ux, uy
> and uz in a loop, where ux, uy and uz are arrays of floats, so there's a
> type stability problem.
>
> On Sunday, May 8, 2016 at 9:18:09 AM UTC+2, feza wrote:
>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>>
>> On Sunday, May 8, 2016 at 3:17:39 AM UTC-4, feza wrote:
>>> I have read the performance section and believe I have followed all the
>>> suggested guidelines.
>>>
>>> The same matlab script takes less than 3 seconds; julia 0.4.5 takes 9.7
>>> seconds (julia 0.5 is even worse...)
>>>
>>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
I see that c is a constant array of Ints, and its elements multiply ux, uy and uz in a loop, where ux, uy and uz are arrays of floats, so there's a type stability problem.

On Sunday, May 8, 2016 at 9:18:09 AM UTC+2, feza wrote:
> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>
> On Sunday, May 8, 2016 at 3:17:39 AM UTC-4, feza wrote:
>> I have read the performance section and believe I have followed all the
>> suggested guidelines.
>>
>> The same matlab script takes less than 3 seconds; julia 0.4.5 takes 9.7
>> seconds (julia 0.5 is even worse...)
>>
>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
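If the integer constant table really is contributing, one hedged way to rule it out is converting it to floats once, outside the hot loop, so every product in the loop body stays Float64 (the values of `c_int` below are illustrative, not the actual table from the gist; `map(Float64, c)` works on 0.4-era Julia, while later versions can write `Float64.(c)`):

```julia
c_int = [1 0 0; 0 1 0; 0 0 1]   # Matrix{Int}, standing in for the constant table c
c = map(Float64, c_int)          # convert once, up front

u, v, w = 0.5, 0.25, 0.125
# Pure Float64 arithmetic: no Int-to-Float64 promotion inside the loop body.
cu = c[1,1]*u + c[1,2]*v + c[1,3]*w
```

Note that Int-times-Float64 promotion is itself type-stable (the result is always Float64), so this is more about removing per-iteration conversion work than fixing the red `@code_warntype` entries, which the linked inference issue explains better.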
[julia-users] cannot make codespeed
Hello, I cloned Julia yesterday and could make it, except that I cannot 'make codespeed' in test/perf. The error message is the following:

[skipping a bunch of warnings]
ERROR: LoadError: LoadError: You must provide the JULIA_FLAVOR environment variable identifying this julia build!
 in error(::String) at ./error.jl:21
 in include_from_node1(::String) at ./loading.jl:426 (repeats 2 times)
 in process_options(::Base.JLOptions) at ./client.jl:263
 in _start() at ./client.jl:319
while loading /Users/didier/tmp/julia/test/perf/micro/../perfutil.jl, in expression starting on line 14
while loading /Users/didier/tmp/julia/test/perf/micro/perf.jl, in expression starting on line 5
make: *** [codespeed] Error 1
zsh: exit 2     make codespeed

What value is expected for JULIA_FLAVOR? Also, note that the file README.md in test/perf mentions a website which does not exist (speed.julialang.org). Thanks.

--
ELS'16, May 9-10, Krakow, Poland: http://www.european-lisp-symposium.org
Lisp, Jazz, Aïkido: http://www.didierverna.info
[julia-users] Julia SQL
I know Julia has DataFramesMeta, but is there a package for querying native Julia DataFrames using SQL?
[julia-users] Re: why's my julia code running slower than matlab, despite performance tips
https://gist.github.com/musmo/27436a340b41c01d51d557a655276783

On Sunday, May 8, 2016 at 3:17:39 AM UTC-4, feza wrote:
> I have read the performance section and believe I have followed all the
> suggested guidelines.
>
> The same matlab script takes less than 3 seconds; julia 0.4.5 takes 9.7
> seconds (julia 0.5 is even worse...)
>
> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
[julia-users] why's my julia code running slower than matlab, despite performance tips
I have read the performance section and believe I have followed all the suggested guidelines. The same matlab script takes less than 3 seconds; julia 0.4.5 takes 9.7 seconds (julia 0.5 is even worse...).

https://gist.github.com/musmo/27436a340b41c01d51d557a655276783