Re: [julia-users] Terminating loops early based on user input

2014-10-24 Thread Tony Fong
Right, looking into it...

How do you stop a task preemptively in Julia's built-in framework?

On Friday, October 24, 2014 8:07:19 AM UTC+7, Steven G. Johnson wrote:



 On Thursday, October 23, 2014 9:13:06 AM UTC-4, Tony Fong wrote:

 I also have a similar need. If I want to be more fancy, what may be the 
 strategy to address the following 2-way communication?
 * UI -> worker: send pause, abort
 * worker -> UI: progress report, summary state (presumably to allow the UI
 to do fancier presentation)

 I'm thinking about looking into TcpSocket. Is it doable? Overkill?


 Why not just use Julia's built-in parallel inter-process communication 
 functions?
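One shape the built-in approach could take (a hedged sketch, not from the
thread; assumes Julia 0.3-era API and a worker started with addprocs(1)):

addprocs(1)
ctrl = RemoteRef()      # UI -> worker: put!(ctrl, :abort) to stop the run
prog = RemoteRef()      # worker -> UI: holds the latest progress report

job = @spawnat 2 begin
    for step = 1:100
        isready(ctrl) && take!(ctrl) == :abort && break
        # ... one unit of work ...
        isready(prog) && take!(prog)   # a RemoteRef holds one value; drop stale reports
        put!(prog, step)
    end
end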



Re: [julia-users] Re: Modified Gram-Schmidt: getting the best out of Julia

2014-10-24 Thread Ján Dolinský
 

 The way you have written the rank one update in line 14 is very expensive. 
 Unfortunately, we are not making all of BLAS available through our generic 
 mat-vec interface right now, but the whole of BLAS is wrapped in the BLAS 
 module, so you can use the ger! function there, i.e.

 function mgs2(X)
     # mgs speed project
     nobs, nvars = size(X)
     R = eye(nvars)

     for l = 1:nvars-1
         v_i = view(X, :, l+1:nvars)  # --- line 10
         s = view(X, :, l)

         R[l, l+1:nvars] = (v_i'*s)/sumabs2(s)                # --- line 13
         BLAS.ger!(-1.0, s, vec(view(R, l, l+1:nvars)), v_i)  # --- line 14
     end

     X, R
 end

 which avoids a lot of allocation. I don't know how Octave handles such 
 expressions, so I don't know why they are faster when not calling into BLAS 
 directly.

 Med venlig hilsen

 Andreas Noack


Thanks for pointing out BLAS.ger!; I was not aware of this function 
because it is not listed in the Julia manual. Using ger! brought the 
computing time down to about 1 second, not to mention the huge savings in 
memory allocation.
I will also play further with line #13, as it can probably be turned 
into a direct BLAS call as well.

I assume some mat-vec (or mat-mat) operations like (v_i' * s) are directly 
translated into a BLAS call, e.g. BLAS.gemv('T', v_i, s), right? Is there a 
way to find out what is being called under the hood?

Thanks,
Jan


Re: [julia-users] Re: Modified Gram-Schmidt: getting the best out of Julia

2014-10-24 Thread Ján Dolinský


 Complete devectorisation can sometimes be useful, but I have come to 
 prefer writing linear algebra code close to the style of LAPACK where you 
 organise the calculations in BLAS calls. Not only for speed, but also for 
 making the code easier to read and infer the structure of the algorithm.

 Med venlig hilsen

 Andreas Noack


Organizing linear algebra code into BLAS calls makes sense to me too. It is 
like writing vectorized (easy to read) code but with excellent speed and 
efficient memory allocation.  

Jan


[julia-users] Re: Modified Gram-Schmidt: getting the best out of Julia

2014-10-24 Thread Ján Dolinský
Writing line #13 as a BLAS call reduced memory allocation further. Thanks 
for all the tips (BLAS, NumericExtensions, InplaceOps, etc.).

using ArrayViews

function mgs(X)
    # mgs speed project
    nobs, nvars = size(X)
    R = eye(nvars)

    for l = 1:nvars-1
        v_i = view(X, :, l+1:nvars)
        s = view(X, :, l)

        R[l, l+1:nvars] = BLAS.gemv('T', 1.0/sumabs2(s), v_i, s)  # --- line 13
        BLAS.ger!(-1.0, s, vec(view(R, l, l+1:nvars)), v_i)       # --- line 14
    end

    X, R
end


[julia-users] I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Daniel Carrera
Hello,

I am running Ubuntu with Julia 0.3.1 installed from PPA. I want to figure 
out where my Julia base directory is so I can create a userimg.jl file so I 
can pre-compile some modules that I use often. I got the idea from here:

https://github.com/JuliaLang/Gtk.jl/blob/master/doc/precompilation.md

The problem is that I cannot figure out where my julia/base directory is. It 
appears to be nowhere. I have two candidate directories:

/usr/share/julia
/usr/local/julia

My Julia version is 0.3.1. The file /usr/share/julia/VERSION says it 
is 0.3.0-prerelease. On the other hand, /var/lib/dpkg/info/julia.list seems 
to say that /usr/share/julia is the correct directory. So that doesn't make 
sense. The other directory --- /usr/local/julia --- does not have any 
VERSION file or anything else that would tell me what version of Julia it 
is for. I tried adding the userimg.jl file to both directories, but that 
didn't do anything. Even more strangely, neither directory seems to be 
required for Julia to run. Look:


$ sudo mv /usr/share/julia $HOME/usr-share-julia
$ sudo mv /usr/local/julia $HOME/usr-local-julia
$ 
$ julia --version
julia version 0.3.1
$ 
$ julia
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.1 (2014-09-21 21:30 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
|__/                   |  x86_64-linux-gnu

julia>


In other words, neither of those directories seems to be needed to run 
Julia. So, I am completely stuck. I have two base directories, neither of 
which seems to do anything.

Can anyone help me out?

Cheers,
Daniel.


[julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Till Ehrengruber
You could look where your julia base library is located, for example by

functionloc(push!)
("/usr/local/Cellar/julia/0.3.1/bin/../share/julia/base/array.jl",464)


I'm on OS X but it should work on Ubuntu as well

Am Freitag, 24. Oktober 2014 14:31:32 UTC+2 schrieb Daniel Carrera:

 [quoted message elided]

Re: [julia-users] I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Isaiah Norton
Try the JULIA_HOME variable (base should be JULIA_HOME/../../)
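For instance (a quick sketch; the paths are illustrative of a PPA install):

julia> JULIA_HOME                                      # e.g. "/usr/bin"
julia> joinpath(JULIA_HOME, "..", "share", "julia", "base")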

On Fri, Oct 24, 2014 at 8:31 AM, Daniel Carrera dcarr...@gmail.com wrote:

 [quoted message elided]

[julia-users] Re-using Julia process for multiple scripts? (Or other solutions?)

2014-10-24 Thread Magnus Lie Hetland
Though I'm sure it's a problem that'll eventually go away – at present, 
loading large/complex modules potentially takes a *lot* of time. I've got 
some Julia code that produces various files, and I'd like to structure this 
as several small Julia programs/scripts where each is responsible for 
generating a given output file, and a common module that they can all use 
for shared functionality. This would play nicely with build systems such as 
make, for example. (An alternative would be to have one monolithic program, 
perhaps, and just rebuild all these files all the time – or have the program 
itself check what needs to be rebuilt. That would still be more hassle to 
combine with make.)

However, if I have lots of these files, and each one takes a looong time to 
run … well, that's not so nice. I'm open to any solution to the underlying 
problem, but I'll still make the question a bit more concrete. My current 
idea is to try to somehow share a single Julia process across several 
scripts. (Not really a pretty solution, I guess.) I've currently tried 
doing this by using a named pipe for Julia commands:

$ mkfifo juliacmd
$ tail -f juliacmd | julia &
$ echo 'println("Hello, world!")' > juliacmd
$ Hello, world!

$ echo 'exit()' > juliacmd && echo > juliacmd

Well, there are several issues with this. For one thing, while it stops the 
Julia process, it doesn't stop tail. (Sadly, my tail doesn't support the 
--pid switch, so I guess I'll have to kill it more manually.) For another, 
running Julia in the background basically means I need to wait for 
arbitrary amounts of time to make sure the files are produced before I try 
to use them. (Cf. the oddly placed greeting above.)

So … I don't really have a functioning solution. And I guess dealing with 
startup time must be a common problem (given that some modules even have 
specific functionality for reloading, just to avoid it). I guess I *could* 
generate 
a single script and run that, though it might mess with the makefile 
structure a bit.

Suggestions? Thoughts? (The simplest solution would, I guess, be to just 
grit my teeth and bear the slow running time, as the files won't need to be 
rebuilt that often.)


Re: [julia-users] Re: Modified Gram-Schmidt: getting the best out of Julia

2014-10-24 Thread Andreas Noack
You can squeeze a tiny bit extra out of the implementation by using the
in-place gemv!, i.e.

function mgs2(X)
    # mgs speed project
    nobs, nvars = size(X)
    R = eye(nvars)

    for l = 1:nvars-1
        v_i = view(X, :, l+1:nvars)
        s = view(X, :, l)
        r = vec(view(R, l, l+1:nvars))

        BLAS.gemv!('T', 1.0/sumabs2(s), v_i, s, 0.0, r)  # --- line 13
        BLAS.ger!(-1.0, s, r, v_i)                       # --- line 14
    end

    X, R
end

Med venlig hilsen

Andreas Noack

2014-10-24 8:15 GMT-04:00 Ján Dolinský jan.dolin...@2bridgz.com:

 [quoted message elided]



Re: [julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Daniel Carrera
Yes, it seems to work. Apparently my Julia directory is
/usr/bin/../share/julia/base/, so that's one problem solved. Thanks!

Cheers,
Daniel.

On 24 October 2014 14:53, Till Ehrengruber darthma...@gmail.com wrote:

 [quoted messages elided]

-- 
When an engineer says that something can't be done, it's a code phrase that
means it's not fun to do.


Re: [julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Daniel Carrera
Oh, and I now figured out why I can't get Julia to pre-compile the modules 
I want. Apparently you have to build it from source...

Now I'll go and give that a try.

Cheers,
Daniel.


On Friday, 24 October 2014 15:47:58 UTC+2, Daniel Carrera wrote:

 [quoted messages elided]


Re: [julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Isaiah Norton
It is not strictly necessary to build from source - try searching this list
for "sysimg" and look for the --build incantation for PPA installs (I'm
pretty sure it has come up before).



On Fri, Oct 24, 2014 at 9:53 AM, Daniel Carrera dcarr...@gmail.com wrote:

 [quoted messages elided]


[julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread dextorious
So, I've been gradually porting some of my more performance sensitive 
MATLAB research code to Julia, with great results so far. One of my current 
projects spends about 80% of the computation time calculating the 
Rotne-Prager-Yamakawa tensor at each time step, so I'm quite motivated to 
optimize that particular step of the computation as much as possible. 
Rewriting the code in Julia alone resulted in an order of magnitude 
speedup. Writing my own implementation of the dyadic product of two vectors 
gave me another factor of 2 (which I found rather surprising, but still), 
as did heavy use of @simd and @inbounds annotations where appropriate. 
Despite these successes, I still feel like there is more performance to be 
gained here. For one, the code still spends a lot of the time on garbage 
collection. Also, running julia with --track-allocation=user gave me some 
perplexing results.

First things first, the relevant code:
function outer_product(a, b, n)
    c = zeros(n, n)
    @simd for i = 1 : n
        @simd for j = 1 : n
            @inbounds c[i, j] = a[i] * b[j]
        end
    end
    c
end

function rotne_prager(x, y, z, r)
    const p = length(x) - 1
    rpy = zeros(3*(p+1), 3*(p+1))
    rvec = zeros(3)
    @simd for i = 1 : p+1
        @simd for j = 1 : p+1
            @inbounds rvec[1] = x[j] - x[i]
            @inbounds rvec[2] = y[j] - y[i]
            @inbounds rvec[3] = z[j] - z[i]
            Rij = sqrt(rvec[1]*rvec[1] + rvec[2]*rvec[2] + rvec[3]*rvec[3])
            distance_ratio = r / Rij
            if (Rij > 2*r)
                @inbounds rpy[3*j-2:3*j, 3*i-2:3*i] = (0.75 * distance_ratio *
                    ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * eye(3) +
                     (1.0 - 2.0*distance_ratio*distance_ratio) * outer_product(rvec, rvec, 3)))
            else
                @inbounds rpy[3*j-2:3*j, 3*i-2:3*i] = (
                    (1.0 - 9.0/32.0 / distance_ratio) * eye(3) +
                    3.0/32.0 / distance_ratio * outer_product(rvec, rvec, 3))
            end
        end
    end
    rpy
end

function rotne_bench(n)
    x = rand(n)
    y = rand(n)
    z = rand(n)
    r = 0.01
    rotne_prager(x, y, z, r);
end

rotne_bench(1000);
@time (for i=1:10; rotne_bench(1000); end)

Running this yields the following output on one of my machines:
elapsed time: 13.440131479 seconds (8718927056 bytes allocated, 36.99% gc 
time)

I then tried to add proper type annotations everywhere, but this did not 
change anything in the results, so I'm assuming Julia's type inference 
system is doing a great job here. Next, I tried to look at the allocations, 
using the --track-allocation=user switch (by the way, this did not seem to 
produce a *jl.mem file anywhere on my Windows machines, but worked fine on 
a Linux box; is this a known behavior?). This yielded the following output:
        - function outer_product(a, b, n)
 70967296     c = zeros(n, n)
        0     @simd for i = 1 : n
        0         @simd for j = 1 : n
        0             @inbounds c[i, j] = a[i] * b[j]
        -         end
        -     end
        0     c
        - end
        -
        - function rotne_prager(x, y, z, r)
        0     const p = length(x) - 1
792000528     rpy = zeros(3*(p+1), 3*(p+1))
      880     rvec = zeros(3)
        0     @simd for i = 1 : p+1
        0         @simd for j = 1 : p+1
        0             @inbounds rvec[1] = x[j] - x[i]
        0             @inbounds rvec[2] = y[j] - y[i]
        0             @inbounds rvec[3] = z[j] - z[i]
        0             Rij = sqrt(rvec[1]*rvec[1] + rvec[2]*rvec[2] + rvec[3]*rvec[3])
        0             distance_ratio = r / Rij
        0             if (Rij > 2*r)
        0                 @inbounds rpy[3*j-2:3*j, 3*i-2:3*i] = (0.75 * distance_ratio *
        -                     ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * eye(3) +
        -                     (1.0 - 2.0*distance_ratio*distance_ratio) * outer_product(rvec, rvec, 3)))
        -             else
279582208                 @inbounds rpy[3*j-2:3*j, 3*i-2:3*i] = (
        -                     (1.0 - 9.0/32.0 / distance_ratio) * eye(3) +
        -                     3.0/32.0 / distance_ratio * outer_product(rvec, rvec, 3))
        -             end
        -         end
        -     end
        0     rpy
        0 end
        -
        0 function rotne_bench(n)
        0     x = rand(n)
        0     y = rand(n)
        0     z = rand(n)
        0     r = 0.01
        0     rotne_prager(x, y, z, r);
        - end
        -
        - rotne_bench(1000);
        0 @time (for i=1:10; rotne_bench(1000); end)

I find myself somewhat confused by these results, specifically line 27. Why 
would the second expression constructing the rpy matrix cause a huge number 
of allocations when the first did not? They seem virtually identical from a 
programming point of view, with minor differences in numerical 
coefficients. I feel like, in my limited understanding of a 

Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread Andreas Noack
Are you sure that Rij > 2*r in your calculations?

You should use I instead of eye(3), but probably the best solution is to
avoid the explicit calculation of the outer product. You can probably write
your problem as a rank-one update of a view of rpy.
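A minimal sketch of that idea (assuming 0.3-era sub views and BLAS.ger!;
the coefficient names a and b are illustrative, not from the thread):

function rpy_block!(rpy, i, j, rvec, a, b)
    # write a*I + b*(rvec*rvec') into one 3x3 block of rpy, no temporaries;
    # assumes the block is still zero, as it is right after rpy = zeros(...)
    block = sub(rpy, 3*j-2:3*j, 3*i-2:3*i)       # strided view, no copy
    block[1,1] = block[2,2] = block[3,3] = a
    Base.LinAlg.BLAS.ger!(b, rvec, rvec, block)  # block += b * rvec * rvec'
end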

Med venlig hilsen

Andreas Noack

2014-10-24 10:26 GMT-04:00 dextori...@gmail.com:

 [quoted message elided]

Re: [julia-users] Terminating loops early based on user input

2014-10-24 Thread Tony Fong
Answering my own question: there seems to be a function throwto(task, exc).
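A minimal sketch of how that might be used (illustrative and untested,
0.3-era API):

t = @async begin
    try
        while true
            # ... one unit of work ...
            yield()              # hand control back to the scheduler
        end
    catch err
        isa(err, InterruptException) && println("worker stopped")
    end
end
yield()                          # let the task start running
Base.throwto(t, InterruptException())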


On Friday, October 24, 2014 2:40:20 PM UTC+7, Tony Fong wrote:

 [quoted messages elided]



Re: [julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Daniel Carrera
Ok. I have had a really hard time finding a solution. I have found many
posts claiming that it is possible to pre-compile modules, but very few
posts that actually try to help you do it. That said, I think I have pieced
together some of the steps:


--

$ cd /usr/share/julia/base

$ sudo vi userimg.jl  # Insert lines like require("PyCall")

$ sudo julia --build ../usr/lib/julia/sys0 sysimg.jl
...
Cannot open system image file ../usr/lib/julia/sys0.ji for writing.
--

This obviously fails because ../usr does not exist. There is no file in my
system (Ubuntu) called sys0.ji, but there are files called

/usr/lib/x86_64-linux-gnu/julia/sys.so
/usr/lib/x86_64-linux-gnu/julia/sys.ji


After backing up /usr/lib/x86_64-linux-gnu/julia/, I tried the command

$ sudo julia --build /usr/lib/x86_64-linux-gnu/julia/sys sysimg.jl


This gave no compile-time errors, but now Julia dies with a seg fault:

$ julia
zsh: segmentation fault (core dumped)  julia


This happens even if userimg.jl has nothing but comments... I can recover
Julia from the backup, but if anyone can see an obvious error that I can
just fix, I would like to hear it.

Cheers,
Daniel.





On 24 October 2014 15:59, Isaiah Norton isaiah.nor...@gmail.com wrote:

 [quoted messages elided]


-- 
When an engineer says that something can't be done, it's a code phrase that
means it's not fun to do.


Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread dextorious
I'm absolutely sure that Rij > 2*r most of the time, yes. This is obvious 
if you consider that rand(n) returns values in the range [0, 1] and r = 
0.01. I have also verified this empirically. I'll try to explore the view 
update idea further, I've never used that approach before.


On Friday, October 24, 2014 5:46:37 PM UTC+3, Andreas Noack wrote:

 Are you sure that Rij > 2*r in your calculations? 

 You should use I instead of eye(3), but probably the best solution is to 
 avoid the explicit calculation of the outer product. You can probably write 
 your problem as a rank-one update of a view of rpy.

 Med venlig hilsen

 Andreas Noack



Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread Jiahao Chen
Are you sure that @simd and @inbounds give you any performance gains? On my
laptop removing all those annotations resulted in essentially no change in
performance.

Computing the outer product as a*b' is currently slow because it does not
use the specialized method for computing outer products. Instead, b' turns
the n-vector into a 1xn row matrix, then a*b' turns a from an n-vector into
an nx1 column matrix, and then does generic matrix-matrix multiplication (dgemm).
You can verify this yourself by running

@which a*b'

and looking at the method of * that is being dispatched on.
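Concretely, at the REPL (a minimal sketch; output elided):

a = rand(3); b = rand(3)
@which a*b'        # prints the * method this call dispatches to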

As Andreas said, you can try the rank-1 update; one way to use BLAS's
function for this is

outer_product(a, b, n) = Base.LinAlg.BLAS.ger!(1.0, a, b, zeros(n, n))

but for 3x3 outer products writing your own kernel may very well be faster
because you avoid the cost of making a library call.

For me the biggest speedup came from rewriting your inner kernel as

rpy[3*j-2:3*j, 3*i-2:3*i] = if (Rij > 2*r)
    (0.75 * distance_ratio *
        ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * I +
         (1.0 - 2.0*distance_ratio*distance_ratio) *
         outer_product(rvec, rvec, 3)))
else
    (1.0 - 9.0/32.0 / distance_ratio) * I +
        3.0/32.0 / distance_ratio * outer_product(rvec, rvec, 3)
end

- The I symbol is an abstract identity matrix that avoids allocating an
explicit matrix.
- The compiler seems happier when I move the assignment outside the if
statement.
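A small illustration of the I difference (sketch):

A = rand(3,3)
B1 = 2.5*eye(3) + A   # materializes a 3x3 identity matrix first
B2 = 2.5*I + A        # UniformScaling: no identity matrix is allocated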

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory

On Fri, Oct 24, 2014 at 11:24 AM, dextori...@gmail.com wrote:

 [quoted message elided]




[julia-users] Accessing values outside functions vs. passing them to the function

2014-10-24 Thread Nils Gudat
A general performance question: when writing functions that operate on a 
large number of variables stored in arrays, is it generally better to pass 
every array into the function or to let the function access the arrays in 
the global space?

Comparing

X = rand(1)

function f(a::Real)
    (some computation involving a and the values in an array X)
end

to

function f(a::Real, X::Array)
    (the same computation)
end

I found that passing the array was marginally faster on most occasions, 
but I'm wondering how general this result is and what it would depend on. 
Are there any rules/best practices for this?


Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread Jiahao Chen
Without knowing much more about your specific problem and use cases, you
may also want to consider:

- never computing the tensor directly, but provide an indirect
representation of it by providing a function that  computes its action on a
vector
- using a fast multipole algorithm to determine the distance cutoff
- reducing the number of explicit distance computations using neighbor lists

Thanks,

Jiahao Chen
Staff Research Scientist
MIT Computer Science and Artificial Intelligence Laboratory

On Fri, Oct 24, 2014 at 12:38 PM, Jiahao Chen jia...@mit.edu wrote:

 Are you sure that @simd and @inbounds give you any performance gains? On
 my laptop removing all those annotations resulted in essentially no change
 in performance.

 Computing the outer product as a*b' is currently slow because it does not
 use the specialized method for computing outer products. Instead, b' turns
 the n-vector into a 1xn-row matrix, then a*b' turns a from a n-vector to a
 nx1-column matrix, then does generic matrix-matrix multiplication (dgemm).
 You can verify this yourself by running

 @which a*b'

 and looking at the method of * that is being dispatched on.

 As Andreas said, you can try the rank-1 update; one way to use BLAS's
 function for this is

 outer_product(a, b, n) = Base.LinAlg.BLAS.ger!(1.0, a, b, zeros(n, n))

 but for 3x3 outer products writing your own kernel may very well be faster
 because you avoid the cost of making a library call.

 For me the biggest speedup came from rewriting your inner kernel as

 rpy[3*j-2:3*j, 3*i-2:3*i] = if (Rij  2*r)
 (0.75 * distance_ratio *
 ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * I +
 (1.0 - 2.0*distance_ratio*distance_ratio) *
 outer_product(rvec, rvec, 3)))
 else
 (1.0 - 9.0/32.0 / distance_ratio) * I +
 3.0/32.0 / distance_ratio * outer_product(rvec, rvec,
 3)
 end

 - The I symbol is an abstract identity matrix that avoids allocating an
 explicit matrix.
 - The compiler seems happier when I move the assignment outside the if
 statement.

 Thanks,

 Jiahao Chen
 Staff Research Scientist
 MIT Computer Science and Artificial Intelligence Laboratory

 On Fri, Oct 24, 2014 at 11:24 AM, dextori...@gmail.com wrote:

 I'm absolutely sure that Rij  2*r most of the time, yes. This is obvious
 if you consider that rand(n) returns values in the range [0, 1] and r =
 0.01. I have also verified this empirically. I'll try to explore the view
 update idea further, I've never used that approach before.


 On Friday, October 24, 2014 5:46:37 PM UTC+3, Andreas Noack wrote:

 Are you sure that Rij  2*r in your calculations?

 You should use I instead of eye(3), but probably the best solution is to
 avoid the explicit calculation of the outer product. You can probably write
 your problem as a rank one update of a view of pry

 Med venlig hilsen

 Andreas Noack





[julia-users] Re: Accessing values outside functions vs. passing them to the function

2014-10-24 Thread Johan Sigfrids
Operating on global variables in Julia is generally slower, so you should 
definitely pass the array to the function. 
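A quick illustration of why (a sketch; exact timings vary):

X = rand(10^6)

function sum_global()
    s = 0.0
    for x in X          # reads the global X
        s += x
    end
    s
end

function sum_arg(A)
    s = 0.0
    for x in A          # the array is an argument
        s += x
    end
    s
end

sum_global(); sum_arg(X)   # warm up the JIT
@time sum_global()         # slower: the type of the global X is unknown
@time sum_arg(X)           # faster: specialized on Array{Float64,1}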

On Friday, October 24, 2014 7:43:53 PM UTC+3, Nils Gudat wrote:

 [quoted message elided]



Re: [julia-users] Re: Accessing values outside functions vs. passing them to the function

2014-10-24 Thread John Myles White
Using globals is also potentially bad style, since it means that your functions 
aren't interpretable without additional context.

 -- John

 On Oct 24, 2014, at 10:36 AM, Johan Sigfrids johan.sigfr...@gmail.com wrote:
 
 [quoted messages elided]


[julia-users] PSA: Don't put `julia 0.3-` in your REQUIRES file

2014-10-24 Thread Iain Dunning
I still see people putting `julia 0.3-` in their REQUIRE file, I guess 
because people think it means `>= 0.3`. 

What it actually means is `this can be installed with any version >= any 
pre-release version of julia 0.3`, which isn't something that anyone should be 
doing at this point. 

Nothing bad will happen (probably), but it's probably best not to, all the same.

If for some reason you are making a package that only works with Julia 0.4, 
for example, `julia 0.4-` would make sense (until Julia 0.4 is 
released).

Here is the relevant section of the 
manual: 
http://docs.julialang.org/en/latest/manual/packages/?highlight=version#man-package-requirements
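For reference, a minimal REQUIRE that targets every released 0.3.x and
later is just the line below (without a trailing `-`, `julia 0.3` matches
0.3.0 final and anything newer):

julia 0.3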

Thanks,
Iain


Re: [julia-users] PSA: Don't put `julia 0.3-` in your REQUIRES file

2014-10-24 Thread Jacob Quinn
So if I want my package to work with any version starting with Julia 0.3 up
through master, what should I use?

On Fri, Oct 24, 2014 at 5:28 PM, Iain Dunning iaindunn...@gmail.com wrote:

 [quoted message elided]



[julia-users] Re: Re-using Julia process for multiple scripts? (Or other solutions?)

2014-10-24 Thread Ivar Nesje
A similar solution to the named pipes would be to use a telnet REPL. I think 
Keno demonstrated something like that in just a few lines of Julia code in a 
video some time ago. 
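In that spirit, a tiny eval server is only a few lines (a hedged sketch,
0.3-era API; no security or error recovery to speak of):

server = listen(2000)            # then e.g. `telnet localhost 2000`
while true
    sock = accept(server)
    @async while isopen(sock)
        line = readline(sock)
        isempty(line) && break
        try
            println(sock, eval(parse(line)))   # evaluate one command per line
        catch err
            println(sock, "error: ", err)
        end
    end
end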


Re: [julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Isaiah Norton
It is necessary to run --build twice to get a fully-inferred version. sys0
is just an intermediate stage and can go anywhere (unless there are still
some hard-coded paths). Something like this (from julia/base):

julia --build /tmp/sys0 sysimg.jl
julia --build /usr/lib/x86_64-linux-gnu/julia/sys.so -J /tmp/sys0 sysimg.jl  # cache everything

(The PPA doesn't distribute sys0 because it is unnecessary in a compiled
distribution)

On Fri, Oct 24, 2014 at 11:24 AM, Daniel Carrera dcarr...@gmail.com wrote:

 [quoted messages elided]



Re: [julia-users] Re: I can't figure out where my Julia is installed / How to pre-compile a module.

2014-10-24 Thread Jameson Nash
you can also use the existing sys.ji image to bootstrap the next one, no
need to run it twice:

cd base && julia --build <output file without extension> -J sys.ji sysimg.jl

then you can use the output file with julia -J <path to sys.ji>

this really should be better documented, since it isn't really that
complicated

On Fri, Oct 24, 2014 at 6:46 PM, Isaiah Norton isaiah.nor...@gmail.com
wrote:

 [quoted messages elided]

[julia-users] Gadfly: text placing

2014-10-24 Thread Keyan Ghazi-Zahedi
I would like to freely place some text into my plot by giving some 
coordinates and an orientation. Is that possible?

Cheers,
Keyan
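One possible route is Guide.annotation with a Compose text primitive (a
hedged sketch; the exact text/Rotation signatures may differ between
Compose versions):

using Gadfly, Compose
plot(x=rand(10), y=rand(10),
     Guide.annotation(compose(context(),
         text(0.5, 0.5, "my label", hcenter, vcenter, Rotation(-pi/4)))))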


Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread dextorious
First of all, thank you for a very comprehensive and informative reply. I 
re-checked the performance impact of @simd/@inbounds and it seems you're 
right. The annotations do provide a measurable performance boost on longer 
calculations, but it's slightly under 2% on average and so almost 
negligible. I suspect I must have made other optimizations in between 
modifying the annotations, leading to an incorrect conclusion.

I tried using the BLAS function for the outer product and it does get 
fairly close to my custom kernel, though not quite there. It's great to see 
how easy Julia makes it to follow method calls that would be quite 
difficult to track down in other languages. I did, however, find a way to 
make my outer product function even faster by passing a preallocated 3x3 
matrix and modifying the elements to avoid repeating the zeros(3,3) call 
every time. This also reduced allocation numbers accordingly.

Using I instead of eye(3) as you suggested gave me a very considerable 
performance boost, and I hadn't seen it mentioned anywhere in the Julia 
docs. I'll have to go through the source to see how it does its magic; the 
difference is quite remarkable to me. It's also interesting that putting 
the conditional inside the assignment makes the code run faster than the 
other way around, but I can't argue with the results. On average, these two 
changes gave me a steady 40-50% performance boost, which is great.
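
For reference, a minimal sketch of where the magic seems to come from: I is 
a UniformScaling object, essentially a scalar tagged with identity-matrix 
semantics, so expressions like a*I + B never build an explicit identity 
(the names below are made up for illustration):

A = rand(3, 3)
B = A + 2.0I       # 2 is added along the diagonal; no eye(3) is ever allocated
typeof(2.0I)       # UniformScaling{Float64} -- one number, not a 3x3 matrix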

The one thing I was still dissatisfied with was that almost a third of the 
total runtime was still spent on garbage collection, so I re-ran the 
allocation numbers and got the following results:
        - function outer_product!(a, b, n, c)
        0     for i=1:n, j=1:n
        0         c[j,i] = a[j] * b[i]
        -     end
        0     c
        - end
        - 
        - #outer_product(a, b, n) = Base.LinAlg.BLAS.ger!(1.0, a, b, zeros(n, n))
        - 
        - function rotne_prager(x, y, z, r)
        0     const p = length(x) - 1
792000528     rpy = zeros(3*(p+1), 3*(p+1))
      880     rvec = zeros(3)
     1408     c = zeros(3,3)
1406543616     for i=1:p+1, j=1:p+1
        0         rvec[1] = x[j] - x[i]
        0         rvec[2] = y[j] - y[i]
        0         rvec[3] = z[j] - z[i]
        0         Rij = sqrt(rvec[1]*rvec[1] + rvec[2]*rvec[2] + rvec[3]*rvec[3])
        0         distance_ratio = r / Rij
        0         rpy[3*j-2:3*j, 3*i-2:3*i] = if (Rij > 2*r)
        0             (0.75 * distance_ratio *
        -                 ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * I +
        -                  (1.0 - 2.0*distance_ratio*distance_ratio) * outer_product!(rvec, rvec, 3, c)))
        0         else ((1.0 - 9.0/32.0 / distance_ratio) * I +
        -                   3.0/32.0 / distance_ratio * outer_product!(rvec, rvec, 3, c))
        -         end
        -     end
        0     rpy
        - end
        - 
        - function rotne_bench(n)
        0     x = rand(n)
        0     y = rand(n)
        0     z = rand(n)
        0     r = 0.01
        0     rotne_prager(x, y, z, r);
        - end
        - 
        - rotne_bench(1000);
        - @time (for i=1:10; rotne_bench(1000); end)


The one thing that seems odd to me is that huge number next to the for 
loop. Surely the iterator variables allocated during the loop are tiny, and 
every other variable except for the two Float64s (Rij and distance_ratio) 
is pre-allocated. Does this mean that a large number of temporary variables 
is generated by the rpy expressions? Intuitively, it seems that fairly 
simple code such as this shouldn't have to spend 30% of its time on gc(), 
but perhaps I'm wrong.
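
(For anyone wanting to reproduce the per-line numbers above: they come from 
Julia's allocation tracker. The workflow below is the usual recipe, with 
the exact .mem file name being version-dependent:)

$ julia --track-allocation=user script.jl
$ less script.jl.mem    # per-line allocation in bytes, as in the listing above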

On Friday, October 24, 2014 7:38:36 PM UTC+3, Jiahao Chen wrote:

 Are you sure that @simd and @inbounds give you any performance gains? On 
 my laptop removing all those annotations resulted in essentially no change 
 in performance.

 Computing the outer product as a*b' is currently slow because it does not 
 use the specialized method for computing outer products. Instead, b' turns 
 the n-vector into a 1xn-row matrix, then a*b' turns a from a n-vector to a 
 nx1-column matrix, then does generic matrix-matrix multiplication (dgemm). 
 You can verify this yourself by running

 @which a*b'

 and looking at the method of * that is being dispatched on.

 As Andreas said, you can try the rank-1 update; one way to use BLAS's 
 function for this is

 outer_product(a, b, n) = Base.LinAlg.BLAS.ger!(1.0, a, b, zeros(n, n))

 but for 3x3 outer products writing your own kernel may very well be faster 
 because you avoid the cost of making a library call.

 For me the biggest speedup came from rewriting your inner kernel as

 rpy[3*j-2:3*j, 3*i-2:3*i] = if (Rij > 2*r)
 (0.75 * distance_ratio *
 ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * I +
 (1.0 - 2.0*distance_ratio*distance_ratio) * 
 outer_product(rvec, 

Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread dextorious
I'm currently avoiding the use of FMMs because the systems I run are fairly 
small (colloidal soft matter, typically involving a few hundreds of spheres 
in a small number of aggregates, so each time step is quite fast, but the 
dynamics take a long time to resolve) and I'd rather not sacrifice any 
precision if I don't have to. Neighbor lists might be an interesting option 
to consider, however. 

I'm not entirely sure what would be gained by providing a function that 
computes the tensor's action on a vector instead of computing it directly. 
In a Stokes flow, the velocity of an arbitrary assortment of spheres can be 
approximately expressed as v = rpy * force_density, which is basically what 
I'm doing in this case. Assuming the force_density vector isn't sparse, 
wouldn't you have to compute every element of the tensor anyway?
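
That said, here is a sketch of what I understand the suggestion to be, with 
made-up names: apply the tensor block by block while traversing the pairs, 
so v = rpy * force_density is computed without ever storing the 
3(p+1)-square matrix. You still touch every pair per application, but you 
trade the O(n^2) storage (and the big zeros() allocation above) for 
recomputation:

# hypothetical matrix-free application: out = RPY * f, RPY never formed
function rpy_apply!(out, x, y, z, r, f)
    n = length(x)
    fill!(out, 0.0)
    for i = 1:n, j = 1:n
        i == j && continue            # skip the self term in this sketch
        dx = x[j] - x[i]; dy = y[j] - y[i]; dz = z[j] - z[i]
        Rij = sqrt(dx*dx + dy*dy + dz*dz)
        dr = r / Rij
        s = dx*f[3j-2] + dy*f[3j-1] + dz*f[3j]   # rvec' * f_j
        if Rij > 2*r
            a = 0.75*dr*(1.0 + 2.0/3.0*dr*dr)    # coefficient of I
            b = 0.75*dr*(1.0 - 2.0*dr*dr)        # coefficient of rvec*rvec'
        else
            a = 1.0 - 9.0/32.0/dr
            b = 3.0/32.0/dr
        end
        out[3i-2] += a*f[3j-2] + b*s*dx
        out[3i-1] += a*f[3j-1] + b*s*dy
        out[3i]   += a*f[3j]   + b*s*dz
    end
    out
end

Whether that pays off depends on how many times the tensor is applied per 
time step, of course.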

On Friday, October 24, 2014 7:43:56 PM UTC+3, Jiahao Chen wrote:

 Without knowing much more about your specific problem and use cases, you 
 may also want to consider:

 - never computing the tensor directly, but providing an indirect 
 representation of it via a function that computes its action on a 
 vector
 - using a fast multipole algorithm to determine the distance cutoff
 - reducing the number of explicit distance computations using neighbor 
 lists

 Thanks,

 Jiahao Chen
 Staff Research Scientist
 MIT Computer Science and Artificial Intelligence Laboratory

 On Fri, Oct 24, 2014 at 12:38 PM, Jiahao Chen jia...@mit.edu wrote:

 Are you sure that @simd and @inbounds give you any performance gains? On 
 my laptop removing all those annotations resulted in essentially no change 
 in performance.

 Computing the outer product as a*b' is currently slow because it does not 
 use the specialized method for computing outer products. Instead, b' turns 
 the n-vector into a 1xn-row matrix, then a*b' turns a from a n-vector to a 
 nx1-column matrix, then does generic matrix-matrix multiplication (dgemm). 
 You can verify this yourself by running

 @which a*b'

 and looking at the method of * that is being dispatched on.

 As Andreas said, you can try the rank-1 update; one way to use BLAS's 
 function for this is

 outer_product(a, b, n) = Base.LinAlg.BLAS.ger!(1.0, a, b, zeros(n, n))

 but for 3x3 outer products writing your own kernel may very well be 
 faster because you avoid the cost of making a library call.

 For me the biggest speedup came from rewriting your inner kernel as

  rpy[3*j-2:3*j, 3*i-2:3*i] = if (Rij > 2*r)
      (0.75 * distance_ratio *
          ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * I +
           (1.0 - 2.0*distance_ratio*distance_ratio) * outer_product(rvec, rvec, 3)))
  else
      (1.0 - 9.0/32.0 / distance_ratio) * I +
          3.0/32.0 / distance_ratio * outer_product(rvec, rvec, 3)
  end

 - The I symbol is an abstract identity matrix that avoids allocating an 
 explicit matrix.
 - The compiler seems happier when I move the assignment outside the if 
 statement.

 Thanks,

 Jiahao Chen
 Staff Research Scientist
 MIT Computer Science and Artificial Intelligence Laboratory

 On Fri, Oct 24, 2014 at 11:24 AM, dexto...@gmail.com wrote:

 I'm absolutely sure that Rij > 2*r most of the time, yes. This is 
 obvious if you consider that rand(n) returns values in the range [0, 1] and 
 r = 0.01. I have also verified this empirically. I'll try to explore the 
 view update idea further, I've never used that approach before.


 On Friday, October 24, 2014 5:46:37 PM UTC+3, Andreas Noack wrote:

 Are you sure that Rij > 2*r in your calculations? 

 You should use I instead of eye(3), but probably the best solution is 
 to avoid the explicit calculation of the outer product. You can probably 
 write your problem as a rank-one update of a view of rpy

 Med venlig hilsen

 Andreas Noack





Re: [julia-users] Trying to optimize numerical code - peculiar allocation data

2014-10-24 Thread Andreas Noack
Allocating arrays in the inner loop in code like this will unavoidably make
the gc kick in and slow down the execution. Sometimes you can speed things
up a lot with ArrayViews, but in this case the allocation of the views will
also be expensive.

Hence, you can either devectorise your code completely or use pointers. The
following devectorised version is 10x faster on my machine because it
avoids the allocation in the loop.

function rotne_prager1(x, y, z, r)
    p = length(x) - 1
    rpy = zeros(3*(p+1), 3*(p+1))
    rvec = zeros(3)
    @inbounds begin
        for i = 1:p+1, j = 1:p+1
            rvec[1] = x[j] - x[i]
            rvec[2] = y[j] - y[i]
            rvec[3] = z[j] - z[i]
            Rij = sqrt(rvec[1]*rvec[1] + rvec[2]*rvec[2] + rvec[3]*rvec[3])
            distance_ratio = r / Rij
            for l = 1:3, k = 1:3
                if Rij > 2*r
                    tmp = 0.75 * distance_ratio *
                          (1.0 - 2.0*distance_ratio*distance_ratio) * rvec[k] * rvec[l]
                    if k == l
                        tmp += 0.75 * distance_ratio *
                               (1.0 + 2.0/3.0 * distance_ratio*distance_ratio)
                    end
                else
                    tmp = 3.0/32.0 / distance_ratio * rvec[k] * rvec[l]
                    if k == l
                        tmp += 1.0 - 9.0/32.0 / distance_ratio
                    end
                end
                rpy[3*(j - 1) + k, 3*(i - 1) + l] = tmp
            end
        end
    end
    rpy
end

but unfortunately it is less clear what is going on in this version.
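
For completeness, a quick way to sanity-check and time it (assuming both 
versions from this thread are defined; agreement is to roundoff rather than 
bit-for-bit, since the arithmetic is reassociated):

x = rand(1000); y = rand(1000); z = rand(1000)
maximum(abs(rotne_prager(x, y, z, 0.01) - rotne_prager1(x, y, z, 0.01)))  # ~eps-level
@time (for i = 1:10; rotne_prager1(x, y, z, 0.01); end)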

Med venlig hilsen

Andreas Noack

2014-10-24 20:12 GMT-04:00 dextori...@gmail.com:

 I'm currently avoiding the use of FMMs because the systems I run are
 fairly small (colloidal soft matter, typically involving a few hundreds of
 spheres in a small number of aggregates, so each time step is quite fast,
 but the dynamics take a long time to resolve) and I'd rather not sacrifice
 any precision if I don't have to. Neighbor lists might be an interesting
 option to consider, however.

 I'm not entirely sure what would be gained by providing a function that
 computes the tensor's action on a vector instead of computing it directly.
 In a Stokes flow, the velocity of an arbitrary assortment of spheres can be
 approximately expressed as v = rpy * force_density, which is basically what
 I'm doing in this case. Assuming the force_density vector isn't sparse,
 wouldn't you have to compute every element of the tensor anyway?

 On Friday, October 24, 2014 7:43:56 PM UTC+3, Jiahao Chen wrote:

 Without knowing much more about your specific problem and use cases, you
 may also want to consider:

 - never computing the tensor directly, but providing an indirect
 representation of it via a function that computes its action on a
 vector
 - using a fast multipole algorithm to determine the distance cutoff
 - reducing the number of explicit distance computations using neighbor
 lists

 Thanks,

 Jiahao Chen
 Staff Research Scientist
 MIT Computer Science and Artificial Intelligence Laboratory

 On Fri, Oct 24, 2014 at 12:38 PM, Jiahao Chen jia...@mit.edu wrote:

 Are you sure that @simd and @inbounds give you any performance gains? On
 my laptop removing all those annotations resulted in essentially no change
 in performance.

 Computing the outer product as a*b' is currently slow because it does
 not use the specialized method for computing outer products. Instead, b'
 turns the n-vector into a 1xn-row matrix, then a*b' turns a from a n-vector
 to a nx1-column matrix, then does generic matrix-matrix multiplication
 (dgemm). You can verify this yourself by running

 @which a*b'

 and looking at the method of * that is being dispatched on.

 As Andreas said, you can try the rank-1 update; one way to use BLAS's
 function for this is

 outer_product(a, b, n) = Base.LinAlg.BLAS.ger!(1.0, a, b, zeros(n, n))

 but for 3x3 outer products writing your own kernel may very well be
 faster because you avoid the cost of making a library call.

 For me the biggest speedup came from rewriting your inner kernel as

  rpy[3*j-2:3*j, 3*i-2:3*i] = if (Rij > 2*r)
      (0.75 * distance_ratio *
          ((1.0 + 2.0/3.0 * distance_ratio*distance_ratio) * I +
           (1.0 - 2.0*distance_ratio*distance_ratio) * outer_product(rvec, rvec, 3)))
  else
      (1.0 - 9.0/32.0 / distance_ratio) * I +
          3.0/32.0 / distance_ratio * outer_product(rvec, rvec, 3)
  end

 - The I symbol is an abstract identity matrix that avoids allocating an
 explicit matrix.
 - The compiler seems happier when I move the assignment outside the if
 statement.

 Thanks,

 Jiahao Chen
 Staff Research Scientist
 MIT Computer Science and Artificial Intelligence Laboratory

 On Fri, Oct 24, 2014 at 11:24 AM, dexto...@gmail.com wrote:

 I'm absolutely sure that Rij > 2*r most of the time, yes. This is
 obvious if you consider that rand(n) returns values in the range [0, 1] and
 r = 0.01. I have also verified this empirically. I'll try to explore the
 view update idea further, I've never used that