[julia-users] cross compiling julia to binaries

2014-02-12 Thread Ryan Gardner
A project I work on has a very high interest in cross-compiling Julia to embedded architectures, so that Julia code could be run on embedded systems. (I can update this with a more definitive list of target architectures later, but I think the main two are PowerPC and ARM.) I've been reading

[julia-users] Release process

2014-04-28 Thread Ryan Gardner
Can anyone point me to something that describes, or briefly describe, the process for determining/ensuring that a release is stable? A few sentences is fine. Is there essentially a large set of test cases that are run on the code before the release is made, while those test cases aren't run on

[julia-users] Re: Release process

2014-04-28 Thread Ryan Gardner
Similarly, is there any schedule for the releases (either one with rough objectives or a harder one)? On Monday, April 28, 2014 9:48:18 AM UTC-4, Ryan Gardner wrote: Can anyone point me to something that describes or briefly describe the process for determining/ensuring that a release

Re: [julia-users] Re: Release process

2014-04-29 Thread Ryan Gardner
about six months since 0.2 was released, and we are very close to 0.3 now... My rough guess for 0.4 would be August-ish, bumping to LLVM 3.5, but that's really just a guess. On Mon, Apr 28, 2014 at 10:31 AM, Ryan Gardner rwga...@gmail.com wrote: Similarly, is there any schedule

[julia-users] efficiency of sparse array creation

2014-04-29 Thread Ryan Gardner
Creating sparse arrays seems exceptionally slow. I can set up the non-zero data of the array relatively quickly. For example, the following code takes about 80 seconds on one machine.

vec_len = 70
row_ind = Uint64[]
col_ind = Uint64[]
value = Float64[]
for j = 1:70
  for k =
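
A minimal sketch of the triplet-then-sparse() pattern under discussion. The loop above is truncated, so the size and the fill rule below are made up, plain Int indices stand in for the Uint64 vectors of the original, and on Julia 0.7+ sparse() lives in the SparseArrays stdlib rather than Base:

using SparseArrays          # needed on Julia 0.7+; sparse() was in Base at the time of this thread

n = 70                      # hypothetical dimension; the real loop bounds are truncated above
row_ind = Int[]             # the original used Uint64[]; plain Int works as well
col_ind = Int[]
value   = Float64[]
for j = 1:n, k = 1:n
    if rand() < 0.01                       # keep roughly 1% of entries, purely for illustration
        push!(row_ind, j)
        push!(col_ind, k)
        push!(value, rand())
    end
end
S = sparse(row_ind, col_ind, value, n, n)  # the single sparse() call is where the time goes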

Re: [julia-users] efficiency of sparse array creation

2014-04-30 Thread Ryan Gardner
is correct) and possibly more if the asymptotic storage requirement is more than 2 Int64 + 1 Float64 per stored value. Ivar On Wednesday, April 30, 2014 at 01:46:22 UTC+2, Ryan Gardner wrote: Creating sparse arrays seems exceptionally slow. I can set up the non-zero data of the array

Re: [julia-users] efficiency of sparse array creation

2014-04-30 Thread Ryan Gardner
the sprand example, and it took 290 seconds on a machine with enough RAM. Given that it is creating a matrix with half a billion nonzeros, this doesn’t sound too bad. -viral On 30-Apr-2014, at 8:48 pm, Ryan Gardner rwgard...@gmail.com wrote: I've got 16GB of RAM on this machine. Largely, my
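
For reference, a hedged way to reproduce this kind of benchmark; the size and density below are hypothetical, not the thread's numbers, and sprand() moved to the SparseArrays stdlib on Julia 0.7+:

using SparseArrays                 # Julia 0.7+; sprand was in Base back then

n, p = 100_000, 0.0001             # hypothetical size and density
@time S = sprand(n, n, p)          # time grows with the expected number of nonzeros, n*n*p
println(nnz(S))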

[julia-users] suppress deprecation warnings on all workers? (julia 0.4.3)

2016-03-19 Thread Ryan Gardner
I was running with --depwarn=no to suppress deprecation warnings. Now I'm parallelizing, and the warnings still seem to be printed for all the workers (which is probably making everything super slow - they are printed every single time they are hit, not just at parsing or compilation). Anyone
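
Since the post describes the workers ignoring the master's --depwarn flag, one hedged workaround is to pass the flag to the worker processes themselves via the exeflags keyword of addprocs; whether a particular cluster manager forwards exeflags is an assumption here:

# Start the workers with deprecation warnings disabled, not just the master.
# (On Julia 0.7+ addprocs lives in the Distributed stdlib; on 0.4 it is in Base.)
addprocs(4; exeflags="--depwarn=no")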

[julia-users] a default module name?

2016-10-26 Thread Ryan Gardner
say I have code:

type foo
  a
end

module MyModule
  #how do I use foo here?
  #can I
  import .foo
  #??
end

There must be a way to use global types in modules. Is there a name for the "global module" (if you will)? Thanks.

[julia-users] Re: a default module name?

2016-10-26 Thread Ryan Gardner
Oh, Main. import Main.foo Thanks. On Wednesday, October 26, 2016 at 3:27:50 PM UTC-4, Ryan Gardner wrote: > say I have code: > type foo > a > end > module MyModule > #how do I use foo here? > #can I > import .foo > #?? > end
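
A small self-contained sketch of that answer, in the 0.5-era syntax of the thread (on Julia 1.0+ the type keyword becomes struct); make_one is just a placeholder to show the imported type being used:

type foo                      # defined at the top level, so it lives in the Main module
  a
end

module MyModule
import Main.foo               # name Main explicitly to pull in the top-level type
make_one(x) = foo(x)          # placeholder function using foo inside the module
end

MyModule.make_one(1)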

[julia-users] thread safe locks and Julia 0.4

2016-10-18 Thread Ryan Gardner
The documentation for Julia 0.5.0 says that the lock returned by ReentrantLock() "is NOT threadsafe" (http://docs.julialang.org/en/release-0.5/stdlib/parallel/, see ReentrantLock()). What does that mean? I interpret it to mean that I cannot safely call lock or unlock simultaneously with
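
A hedged sketch of the usage the 0.5-era docs seem to have in mind: ReentrantLock serializes tasks running on a single thread (two @async tasks touching shared state), as opposed to being safe for use from multiple threads. The counter below is only an illustration:

const l = ReentrantLock()
const counter = Ref(0)

@sync for i = 1:10
    @async begin
        lock(l)
        try
            counter[] += 1      # shared state touched only while holding the lock
        finally
            unlock(l)
        end
    end
end
println(counter[])              # 10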

[julia-users] Re: thread safe locks and Julia 0.4

2016-10-18 Thread Ryan Gardner
Thanks. Makes sense now. On Tuesday, October 18, 2016 at 3:53:00 PM UTC-4, Ryan Gardner wrote: > > The documentation for Julia 0.5.0 says that the lock returned by > ReentrantLock() "is NOT threadsafe" ( > http://docs.julialang.org/en/release-0.5/stdlib/parallel

[julia-users] request many processes (addprocs()) simultaneously such that each can be used when obtained?

2016-10-24 Thread Ryan Gardner
I'm trying to write code for Sun Grid Engine (SGE), although I think the general idea applies to any addprocs. I would like to be able to request a gazillion nodes, and start using each shortly after it becomes available. An example of what I want is roughly this code: for j=1:100
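
A hedged guess at the shape of the truncated example above: fire off many small requests and start using each worker as soon as its addprocs call returns. addprocs_sge is from the ClusterManagers.jl package; whether addprocs can safely be called concurrently like this is exactly the question discussed in the replies below.

using ClusterManagers                                # provides addprocs_sge; assumes an SGE cluster

@sync for j = 1:100
    @async begin
        new_ids = addprocs_sge(1)                    # ask SGE for one worker at a time
        for id in new_ids
            println("worker ", remotecall_fetch(myid, id), " is up and usable")
        end
    end
end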

[julia-users] Re: request many processes (addprocs()) simultaneously such that each can be used when obtained?

2016-10-24 Thread Ryan Gardner
...) finally unlock(worker_lock) end end guess that confirms what's going on... hah. On Monday, October 24, 2016 at 1:40:22 PM UTC-4, Ryan Gardner wrote: > > I'm trying to write code for sun grid engine (sge) although I think the > general idea applies to any addprocs. I w

[julia-users] Re: request many processes (addprocs()) simultaneously such that each can be used when obtained?

2016-10-24 Thread Ryan Gardner
Alright, well I hacked up a copy of ClusterManagers such that I added an obtain_procs() function that actually gets the available processes and generates and stores the relevant WorkerConfigs with the ClusterManager. This function does not require any locks to be obtained. Then addprocs()

[julia-users] kill asynchronous tasks

2016-10-19 Thread Ryan Gardner
I'm looking for a way to reliably kill asynchronous tasks. My code is roughly:

task = @async call_external_program_that_may_never_return
#do stuff of interest
exit(0) #please really exit now, no matter what

Currently, if the external program never returns, neither does my program,
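
A hedged sketch of one way around this: instead of burying the external program inside a task that cannot be killed, hold on to the Process object and kill that. The command below is a placeholder; spawn() is the 0.5-era call, and newer Julia spells it run(cmd; wait=false):

proc = spawn(`some_external_program --that-may-hang`)   # placeholder command

# do stuff of interest ...

if process_running(proc)
    kill(proc)           # SIGTERM the external program so nothing is left waiting on it
end
exit(0)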