A project I work on is very interested in cross-compiling Julia to
embedded architectures, so that Julia code can run on embedded
systems. (I can update this with a more definitive list of target
architectures later, but I think the main two are PowerPC and ARM.)
I've been reading
Can anyone point me to something that describes, or briefly describe, the
process for determining/ensuring that a release is stable? A few sentences
would be fine.
Is there essentially a large set of test cases that are run on the code
before the release is made, while those test cases aren't run on
Similarly, is there any schedule for the releases (either a rough one with
objectives, or a firmer one)?
On Monday, April 28, 2014 9:48:18 AM UTC-4, Ryan Gardner wrote:
> Can anyone point me to something that describes or briefly describe the
> process for determining/ensuring that a release
It's been about six months since 0.2
was released, and we are very close to 0.3 now... My rough guess for 0.4
would be August-ish, bumping to LLVM 3.5, but that's really just a guess.
On Mon, Apr 28, 2014 at 10:31 AM, Ryan Gardner rwga...@gmail.com
wrote:
> Similarly, is there any schedule
Creating sparse arrays seems exceptionally slow.
I can set up the non-zero data of the array relatively quickly. For
example, the following code takes about 80 seconds on one machine.
vec_len = 70
row_ind = Uint64[]
col_ind = Uint64[]
value = Float64[]
for j = 1:70
    for k =
is correct) and possibly more if the asymptotic storage
requirement is more than 2 Int64 + 1 Float64 per stored value.
Ivar
On Wednesday, 30 April 2014 at 01:46:22 UTC+2, Ryan Gardner wrote:
> Creating sparse arrays seems exceptionally slow.
> I can set up the non-zero data of the array
the sprand example, and it took 290 seconds on a machine with enough
RAM. Given that it is creating a matrix with half a billion nonzeros, this
doesn’t sound too bad.
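For reference, the usual fast path is to collect the triplets first and then make a single `sparse()` call, rather than inserting into the matrix incrementally. The following is a small sketch of that pattern (the sizes and `rand()` values are illustrative, not the actual workload from this thread):

```julia
using SparseArrays

# Collect (row, col, value) triplets, then build the matrix in one call.
# Sizes are small here for illustration; the thread uses far larger ones.
n = 70
I = Int[]; J = Int[]; V = Float64[]
for j = 1:n, k = 1:n
    push!(I, k)
    push!(J, j)
    push!(V, rand())
end
A = sparse(I, J, V, n, n)   # single construction pass; duplicate entries are summed
```

Plain `Int` index vectors are fine; there is no need for `Uint64` here.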
-viral
On 30-Apr-2014, at 8:48 pm, Ryan Gardner rwgard...@gmail.com wrote:
I've got 16GB of RAM on this machine. Largely, my
I was running with --depwarn=no to suppress deprecation warnings. Now I'm
parallelizing, and the warnings still seem to be printed for all the
workers (which is probably making everything super slow; they are printed
every single time they are hit, not just at parse or compile time).
Anyone
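One workaround, sketched below, is to pass the flag through to the worker processes when they are launched; `exeflags` is a documented keyword of `addprocs` for exactly this purpose:

```julia
using Distributed

# Launch workers with the same --depwarn=no flag, so deprecation warnings
# are suppressed on the workers and not just on the master process.
addprocs(2; exeflags = `--depwarn=no`)
```

Workers inherit only what is passed explicitly, so flags given to the master process do not propagate on their own.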
say I have code:
type foo
    a
end

module MyModule
# how do I use foo here?
# can I
import .foo
# ??
end
There must be a way to use global types in modules. Is there a name for
the "global module" (if you will)? Thanks.
Oh, Main
import Main.foo
Thanks.
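Putting the two pieces together, here is a minimal sketch of the pattern. It uses `struct`, which replaced `type` in later Julia versions, and `make_foo` is an invented helper name just to show the import works:

```julia
# A type defined at top level lives in the Main module.
struct foo          # the original thread used `type`, pre-0.6 syntax
    a
end

module MyModule
import Main.foo     # pull the top-level type into this module

make_foo(x) = foo(x)   # invented helper, only to exercise the import
end

MyModule.make_foo(1)   # constructs a Main.foo from inside MyModule
```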
On Wednesday, October 26, 2016 at 3:27:50 PM UTC-4, Ryan Gardner wrote:
>
> say I have code:
>
>
> type foo
>a
> end
>
> module MyModule
>#how do I use foo here?
>#can I
>import .foo
>#??
> end
The documentation for Julia 0.5.0 says that the lock returned by
ReentrantLock() "is NOT threadsafe" (
http://docs.julialang.org/en/release-0.5/stdlib/parallel/ see
ReentrantLock()). What does that mean? I interpret it to mean that I
cannot safely call lock or unlock simultaneously with
Thanks. Makes sense now.
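For context: the 0.5-era `ReentrantLock` was safe across cooperatively scheduled tasks but not across OS threads. The usual usage pattern looks like this (a generic sketch, not code from the thread):

```julia
# Serialize access to shared state across cooperatively scheduled tasks.
l = ReentrantLock()
results = Int[]

@sync for i in 1:4
    @async begin
        lock(l)
        try
            push!(results, i)    # critical section
        finally
            unlock(l)            # always release, even on error
        end
    end
end
```

The `try`/`finally` guarantees the lock is released even if the critical section throws.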
On Tuesday, October 18, 2016 at 3:53:00 PM UTC-4, Ryan Gardner wrote:
>
> The documentation for Julia 0.5.0 says that the lock returned by
> ReentrantLock() "is NOT threadsafe" (
> http://docs.julialang.org/en/release-0.5/stdlib/parallel
I'm trying to write code for Sun Grid Engine (SGE), although I think the
general idea applies to any addprocs call. I would like to be able to request a
gazillion nodes and start using each shortly after it becomes available.
An example of what I want is roughly this code:
for j = 1:100
    lock(worker_lock)
    try
        addprocs(...)
    finally
        unlock(worker_lock)
    end
end
guess that confirms what's going on... hah.
On Monday, October 24, 2016 at 1:40:22 PM UTC-4, Ryan Gardner wrote:
>
> I'm trying to write code for sun grid engine (sge) although I think the
> general idea applies to any addprocs. I w
Alright, well I hacked up a copy of ClusterManagers, adding an
obtain_procs() function that actually gets the available processes and
generates and stores the relevant WorkerConfigs with the ClusterManager.
This function does not require any locks to be obtained.
Then addprocs()
I'm looking for a way to reliably kill asynchronous tasks.
My code is roughly:
task = @async call_external_program_that_may_never_return
#do stuff of interest
exit(0) #please really exit now, no matter what
Currently, if the external program never returns, neither does my program,
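One reliable alternative is to launch the external program as a `Process` you keep a handle to, using Base's process API, so it can be terminated explicitly instead of being abandoned inside a task. A sketch, with `sleep 1000` standing in for the external program that may never return:

```julia
# Keep a handle to the child process, not just to a task wrapping it.
proc = run(`sleep 1000`; wait = false)

# ... do the work of interest ...

kill(proc)    # send SIGTERM to the child explicitly
wait(proc)    # reap it; the program can now exit cleanly
```

With `wait = false`, `run` returns immediately, and `kill(proc)` guarantees the child does not outlive the parent.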