[julia-users] Re: ClusterManagers timeout

2016-10-05 Thread Matthew Pearce
Thanks for the pointer to the offending line Ben!
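
For anyone else hitting this, the change Ben describes below amounts to relaxing a polling timeout. An illustrative sketch of the shape of that loop (hypothetical names, not the actual ClusterManagers source):

```julia
# Illustrative only: the Slurm manager polls for the worker's output file and
# gives up after a hard-coded number of seconds (the "60" on the linked line).
# Raising that constant - or making it a keyword argument - lets addprocs_slurm
# wait much longer for a queued allocation.
job_output_file = "job.out"     # hypothetical path to the worker's output file
job_output_timeout = 60 * 60    # e.g. wait up to an hour instead of 60 seconds
t0 = time()
while !isfile(job_output_file)
    sleep(1.0)
    time() - t0 > job_output_timeout && error("Timed out waiting for the Slurm job to start")
end
```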

On Wednesday, September 28, 2016 at 4:34:48 PM UTC+1, Ben Arthur wrote:
>
> you could custom modify your copy of ClusterManagers.jl to wait longer by 
> changing the hard-coded "60" on this line:
>
>
> https://github.com/JuliaParallel/ClusterManagers.jl/blob/master/src/slurm.jl#L52
>


[julia-users] ClusterManagers timeout

2016-09-28 Thread Matthew Pearce
Hello

If I request some nodes using addprocs_slurm, slurm of course enqueues the 
request. However, if the request is not fulfilled within a certain time, I 
get a timeout type message from ClusterManagers.

Is there a way of telling ClusterManagers I'm prepared to wait (a long 
time) for my nodes to become available? The use case is running an analysis 
script unchanged, but not interactively, so I can tolerate a big pause before 
work starts.

Best 

Matthew



[julia-users] Re: Performance tips for network data transfer?

2016-08-17 Thread Matthew Pearce
Thanks for the thoughts people, much appreciated - it gives me some ideas to 
work with.

I'm going to play around with pure Julia solutions first as my prior 
experience trying to get MPI.jl running on my cluster in a REPL was 
painful. This could be the wrong attitude and I may have to change it.

Workers will be in the low tens as I only need one per compute node.


[julia-users] Performance tips for network data transfer?

2016-08-12 Thread Matthew Pearce
Dear Julians

I'm trying to speed up the network data transfer in an MCMC algorithm. 
Currently I'm using `remotecall` based functions to do the network 
communication.

Essentially, for every node I include, the data transfer scales up by about 
50 MB per iteration. The topology is that various 1 MB vectors get computed 
on worker nodes and transferred back to a central node. The central node does 
some work on the vector and sends back a copy of the resulting vector (same 
size) to each worker node.

Now I'm doing the send and receive transfers asynchronously, but it's 
scaling quite badly because the network transfer complexity is 
O(nodes*vectors) and the constants are big. This makes me think that 
there's some redundant work going on, like the same vector being serialized 
on the central node for each transfer to another node. 
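
For reference, a minimal sketch (with stand-in functions, not my actual code) of the asynchronous gather/scatter pattern described above, using the 0.5-style argument order for the remote calls:

```julia
# Stand-ins for the real per-worker computation and the worker-side update.
addprocs(4)
@everywhere compute_vector() = rand(Float32, 250_000)   # roughly a 1 MB vector
@everywhere receive_vector(v) = sum(v)                  # do something with the master's result

results = Vector{Any}(nworkers())
@sync for (i, pid) in enumerate(workers())
    @async results[i] = remotecall_fetch(compute_vector, pid)   # gather: workers -> master
end

combined = sum(results)   # stand-in for the master-side work on the vectors

@sync for pid in workers()
    # `combined` is re-serialized for every remotecall, which is one suspected
    # source of the O(nodes * vectors) constant mentioned above.
    @async remotecall_wait(receive_vector, pid, combined)       # scatter: master -> workers
end
```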

   - Is there a way to only incur serialization/preparation costs once on 
   the central worker, when the same data is transferred to multiple workers?
   - Is it likely to help if I write branching code (1 sends to 2), (1 and 
   2 send to 3 and 4), (1,2,3,4 send to 5,6,7,8)? 
   - Alternately is there any way of using other, faster technologies from 
   within the REPL? My cluster supports MPI, and also I have GPUs with 
   infiniband connections. 
   
My appetite for messing around with this to achieve better performance is 
quite high.

Cheers in advance

Matthew 


[julia-users] Error involving LabelNode

2016-07-11 Thread Matthew Pearce
I've had a long-running job crash out a couple of times now with an error 
message I can't fathom:

```ERROR: On worker 2:
On worker 2:
MethodError: `convert` has no method matching 
convert(::Type{Type{LabelNode}}, ::Int64)
This may have arisen from a call to the constructor Type{LabelNode}(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert{T}(::Type{T}, ::T)
 in setindex! at array.jl:313
 in getindex at array.jl:167
 in accepter at /home/mcp50/.julia/v0.4/Icarus/src/gibbsfuncs.jl:504```

I'm on version 0.4.5 and the offending code segment is:

```s = t - minchunkindex + 1  # minchunkindex should be an Integer
aold = a_current_ssq[s]    # a_current_ssq should be a 1d array of floats (this is line 504)```

Now I'm going to go through and add strong type annotations to a bunch of 
functions and variables.

However, I'm not sure what a LabelNode is, never having come across it 
while reading the docs, so I'm not sure what kinds of things are likely to 
lead to this sort of error.

Any clues about how to avoid this gratefully received

Matthew



[julia-users] Re: Architecture for solving largish LASSO/elasticnet problem

2016-06-24 Thread Matthew Pearce
Thanks for the suggestions so far, I'll be investigating these options :)


[julia-users] Architecture for solving largish LASSO/elasticnet problem

2016-06-24 Thread Matthew Pearce
Hello

I'm trying to solve a largish elasticnet type problem (convex 
optimisation). 

   - The LARS.jl package produces out-of-memory errors for a test (1000, 
   262144) problem. /proc/meminfo suggests I have 17x this array size free, so 
   I'm not sure what's going on there. 
   - I have access to multiple GPUs and nodes.
   - I would potentially need to solve problems of the above sort of size 
   or bigger (10k, 200k) many, many times.

Looking for thoughts on the appropriate way to go about tackling this:

   - Rewrap an existing glmnet library for Julia (e.g. this CUDA enabled 
   one https://github.com/jeffwong/cudaglmnet or 
   http://www-hsc.usc.edu/~garykche/gpulasso.pdf)
   - Go back to basics and use an optimisation package on the objective 
   function (https://github.com/JuliaOpt), but which one? Would this be 
   inefficient compared to specific glmnet solvers, which do some kind of 
   coordinate descent? (See the sketch after this list.)
   - Rewrite some CUDA library from scratch (OK - probably a bad idea).
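
As a concrete version of the "go back to basics" option, here is a hedged sketch (untested at the sizes above) using Convex.jl from JuliaOpt with the SCS solver; the elastic-net weights and the small random problem are purely illustrative:

```julia
using Convex, SCS

# Small illustrative problem; the real one would be (1000, 262144) or larger.
n, p = 100, 500
A = randn(n, p)
b = randn(n)
lambda1, lambda2 = 0.1, 0.01   # illustrative elastic-net weights

x = Variable(p)
objective = 0.5 * sumsquares(A * x - b) + lambda1 * norm(x, 1) + lambda2 * sumsquares(x)
problem = minimize(objective)
solve!(problem, SCSSolver(verbose=false))

x_hat = x.value   # fitted coefficients
```

Whether a general conic solver like this can compete with dedicated coordinate-descent glmnet code at (10k, 200k) scale is exactly the open question, so treat it as a baseline rather than an answer.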

Thoughts on the back of a postcard would be gratefully received.


Cheers


Matthew


[julia-users] Re: Parallel computing: SharedArrays not updating on cluster

2016-06-24 Thread Matthew Pearce
As the others have said, it won't work like that. I found a few options:

   1. DistributedArrays. Message passing handled in the background. Some 
   limitations, but I've not used much.
   2. SharedArrays on each machine. You can share memory between all the 
   pids on a single machine, and then pass messages between one process on 
   each machine to keep the machines in sync.
   3. Regular Arrays on each machine. Swap messages between all processes.
   
Which one works for you will depend on how big your arrays are and the 
access patterns of the code you're trying to run on them.
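
A rough sketch of option 2 (illustrative names and sizes; group workers by host, then let one process per machine create the shared memory):

```julia
# Group worker pids by the machine they run on.
pids_by_host = Dict{AbstractString,Vector{Int}}()
for p in workers()
    host = remotecall_fetch(gethostname, p)   # 0.5-style argument order
    push!(get!(pids_by_host, host, Int[]), p)
end

# One SharedArray per machine, created on the first pid of that host and shared
# with the other pids there; cross-machine updates then go via message passing.
refs = Dict()
for (host, ps) in pids_by_host
    refs[host] = @spawnat ps[1] SharedArray(Float64, (1000,); pids=ps)
end
```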





[julia-users] Segfaults in SharedArrays

2016-06-16 Thread Matthew Pearce
I had a similar issue. In the end I started putting a yield(); at the start of 
code blocks run on remote machines, before the actual command.

On phone now so hard to look up, but Amit Murthy posted a patch to do with 
finalisation on remotes that also helped (I think the issue number might have 
been 14445).

Re: [julia-users] Abstract version of rng=1:end ?

2016-06-15 Thread Matthew Pearce
Thanks for the suggestions folks. 

Tim, that functionality looks pretty nice, looking forward to playing with 
it.

Yichao's suggestion solves my problem as described. However, what I was 
really interested in was multidimensional arrays, where the index produced 
by `endof` isn't as useful without complicating things with strides.

Matthew

On Tuesday, June 14, 2016 at 7:00:37 PM UTC+1, Tim Holy wrote:
>
> See https://github.com/JuliaLang/julia/pull/15750. What holds that up is 
> the 
> fact that people sometimes want to do math on `end`, e.g., 
>
> b = a[1:round(Int,sqrt(end))] 
>
> works just fine. 
>
> --Tim 
>
> On Monday, June 13, 2016 8:40:39 AM CDT Dan wrote: 
> > A reason such 'extended' ranges might be good, is the ability to 
> dispatch 
> > on them for efficiency. For example, `deleteat!(vec,5:end)`  could be 
> > implemented faster than just any `deleteat!` of any range. This would be 
> > applicable to other such structures with edge effects. Indeed, we can 
> learn 
> > from mathematics which often embraced infinity for ease of use and 
> > expression. 
> > Again, this complication might not be worth the benefits. 
> > 
> > So how about a half-bounded types of the form: 
> > typeof(5:end) == ExtendedUnitRange{BU}(5) # the BU stands for 
> > Bounded-Unbounded 
> > 
> > a UnboundedUnbounded Unit range could be like `:` meaning unbounded in 
> both 
> > directions. 
> > 
> > To summarize: there are algorithmic optimizations which are enabled by 
> > knowing the operated range spans to the end (or even just up close to 
> the 
> > end by  > It would be interesting to allow these optimization to be taken using 
> > dispatch for static optimization. 
> > 
> > On Monday, June 13, 2016 at 11:05:47 AM UTC-4, Yichao Yu wrote: 
> > > On Mon, Jun 13, 2016 at 10:47 AM, Matthew Pearce <mat...@refute.me.uk 
> > > 
> > > > wrote: 
> > > > Hello 
> > > > 
> > > > I find myself frequently wishing to define functions that accept a 
> range 
> > > > argument to operate on like: 
> > > > 
> > > > function foo{T}(M::Array{T,1}, rng::UnitRange) 
> > > > 
> > > > return sum(M[rng].^2) 
> > > > 
> > > > end 
> > > > 
> > > > It would be nice to be able to pass in `rng = 1:end`, but this isn't 
> > > 
> > > legal. 
> > > 
> > > > So I need a signature like, 
> > > 
> > > rng=1:endof(M) 
> > > 
> > > > function foo{T}(M::Array{T,1}) 
> > > > 
> > > > foo(M, 1:length(M)) 
> > > > 
> > > > end 
> > > > 
> > > > Is there a way to set `rng` to achieve my intended effect without 
> having 
> > > 
> > > to 
> > > 
> > > > resort to declaring additional functions? 
> > > > 
> > > > Cheers 
> > > > 
> > > > Matthew 
>
>
>

[julia-users] Abstract version of rng=1:end ?

2016-06-13 Thread Matthew Pearce
Hello

I find myself frequently wishing to define functions that accept a range 
argument to operate on like:

function foo{T}(M::Array{T,1}, rng::UnitRange)
    return sum(M[rng].^2)
end

It would be nice to be able to pass in `rng = 1:end`, but this isn't legal. 
So I need a signature like,

function foo{T}(M::Array{T,1})
    foo(M, 1:length(M))
end

Is there a way to set `rng` to achieve my intended effect without having to 
resort to declaring additional functions?
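
For reference, the default-argument form suggested in the replies above looks like this (a small sketch; as noted there, it still doesn't generalise nicely to multidimensional arrays):

```julia
# A default range built from endof(M) gives the "whole array" behaviour without
# declaring a second method.
function foo{T}(M::Array{T,1}, rng::UnitRange=1:endof(M))
    return sum(M[rng].^2)
end

foo([1.0, 2.0, 3.0])        # whole array
foo([1.0, 2.0, 3.0], 2:3)   # explicit sub-range
```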

Cheers

Matthew


[julia-users] Re: moving data to workers for distributed workloads

2016-05-20 Thread Matthew Pearce
Greg, interesting suggestion about const - I should use that more.

In that setup we'd still be transferring a lot of big coefficient vectors 
around (workarounds possible). 


[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Matthew Pearce
Michael

That's right. With `sow` the mod.eval of the third argument gets bound to 
the second, where mod defaults to Main. Maybe someone could think of a 
cleaner way to make values available for later work, but it seems to do the 
trick. It seems best to avoid calling `sow` too heavily, though.

`reap` returns the mod.eval of the second argument with no assignment on 
each pid.

Good luck!

Matthew

On Thursday, May 19, 2016 at 8:18:17 PM UTC+1, Michael Eastwood wrote:
>
> Hi Matthew,
>
> ClusterUtils.jl looks very useful. I will definitely try it out. Am I 
> correct in reading that the trick to moving input to the workers is here 
> <https://github.com/pearcemc/ClusterUtils.jl/blob/ac5eb73bd565b43d0b05b9d8af1c930cef4088b7/src/ClusterUtils.jl#L157-L160>
> ?
>
> You're also correct that write_results_to_disk does actually depend on 
> myidx. I might have somewhat oversimplified the example.
>
> Thanks,
> Michael
>
> On Thursday, May 19, 2016 at 7:41:49 AM UTC-7, Matthew Pearce wrote:
>>
>> Hi Michael 
>>
>> Your current code looks like will pull back the `coefficients` across the 
>> network (500 gb transfer) and as you point out transfer `input` each time.
>>
>> I wrote a package ClusterUtils.jl 
>> <https://github.com/pearcemc/ClusterUtils.jl> to handle my own problems 
>> (MCMC sampling) which were somewhat similar.
>>
>> Roughly - given the available info - if I was trying to do something 
>> similar I'd do:
>>
>> ```julia
>> using Compat
>> using ClusterUtils
>>
>> sow(pids, :input, input)
>>
>> @everywhere function dostuff(input, myidxs)
>> for myidx in myidxs
>> coefficients = spherical_harmonic_transforms(input[myidx])
>> write_results_to_disk(coefficients) #needs myidx as arg too probably
>>   end
>> end
>>
>> idxs = chunkit(limit, length(pids))
>> sow(pids, :work, :(Dict(zip($pids, $idxs
>>
>> reap(pids, :(dostuff(input, $work[myid()])))
>> ```
>>
>> This transfers `input` once, and writes something to disk from the remote 
>> process. 
>>
>>
>>
>>

[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Matthew Pearce
Also...

The above doesn't respect ordering of the `myidx` variable. Not sure how 
your problem domain operates, so it could get more complicated if things 
like order of execution matter. ;)



[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Matthew Pearce
Hi Michael 

Your current code looks like it will pull back the `coefficients` across the 
network (a 500 GB transfer) and, as you point out, transfer `input` each time.

I wrote a package ClusterUtils.jl 
<https://github.com/pearcemc/ClusterUtils.jl> to handle my own problems 
(MCMC sampling), which were somewhat similar.

Roughly - given the available info - if I was trying to do something 
similar I'd do:

```julia
using Compat
using ClusterUtils

sow(pids, :input, input)

@everywhere function dostuff(input, myidxs)
    for myidx in myidxs
        coefficients = spherical_harmonic_transforms(input[myidx])
        write_results_to_disk(coefficients)  # needs myidx as arg too, probably
    end
end

idxs = chunkit(limit, length(pids))
sow(pids, :work, :(Dict(zip($pids, $idxs))))

reap(pids, :(dostuff(input, $work[myid()])))
```

This transfers `input` once, and writes something to disk from the remote 
process. 





[julia-users] Re: GPU capabilities

2016-04-29 Thread Matthew Pearce
My university cluster uses Tesla M2090 cards. 

My experience (not comprehensive) so far is that the CUDArt.jl + CU.jl 
libraries work as one would expect. They're not all 100% complete and some 
further documentation in places would be nice, but they're pretty good. 

The only funny behaviour I've come across is not being able to transfer 
pointers to device memory across the network, but that's a weird thing to 
do anyway so I'm happy to work around it.

Matthew

On Thursday, April 28, 2016 at 9:13:56 PM UTC+1, feza wrote:
>
> Hi All, 
>
> Has anyone here had experience using Julia  programming using Nvidia's 
> Tesla K80 or K40  GPU? What was the experience, is it buggy or does Julia 
> have no problem.?
>


[julia-users] Re: Capturing Error messages as strings

2016-04-04 Thread Matthew Pearce
Anyone?

At the moment it is very hard to debug parallel work. With issues like 
#14456 <https://github.com/JuliaLang/julia/issues/14456> and related 
problems, it would be extraordinarily helpful to have access to the full 
error messages from remotes.

I care enough about this to actually try writing code implementing any 
hints / suggestions people might have.
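
In case it helps anyone else, a hedged sketch of one way to do this: catch the exception on the worker and render it to a string there with sprint/showerror/catch_backtrace (all standard Base functions; the wrapper name is made up for this example):

```julia
# Define everywhere so it can be the target of a remotecall.
@everywhere function describe_errors(f, args...)
    try
        return (:ok, f(args...))
    catch err
        msg = sprint(showerror, err)
        bt  = sprint(io -> Base.show_backtrace(io, catch_backtrace()))
        return (:error, string(msg, "\n", bt))
    end
end

# e.g. remotecall_fetch(describe_errors, pid, sqrt, -1.0) should hand back the
# worker's own DomainError text as a string rather than a truncated trace.
```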


 



On Wednesday, March 30, 2016 at 5:53:53 PM UTC+1, Matthew Pearce wrote:
>
>
> Anyone know how to capture error messages as strings? This is for 
> debugging of code run on remote machines, as the trace on the master node 
> appears to be incomplete.
>
> Note I am *not* asking about the atexit() behaviour. I am asking about 
> capturing non-fatal errors like sqrt(-1) rather than segfaults etc.
>
> Much appreciated
>
> Matthew
>


[julia-users] Capturing Error messages as strings

2016-03-30 Thread Matthew Pearce

Anyone know how to capture error messages as strings? This is for debugging 
of code run on remote machines, as the trace on the master node appears to 
be incomplete.

Note I am *not* asking about the atexit() behaviour. I am asking about 
capturing non-fatal errors like sqrt(-1) rather than segfaults etc.

Much appreciated

Matthew


Re: [julia-users] CUDArt.jl - segfaults after updating Julia

2016-03-24 Thread Matthew Pearce
Hi Stefan - thanks for the advice. Think I'm getting there with 0.4.5 - a 
few packages I relied on didn't work at first but it seems `Compat` sorted 
things out.

I'm not sure yet whether the issue that caused me to update to 0.5 in the 
first place is present in 0.4.5 though (#14445).

Cheers

Matthew


On Monday, March 21, 2016 at 7:35:40 PM UTC, Stefan Karpinski wrote:
>
> I'm guessing that CUDArt doesn't support the Julia dev – you're probably 
> safe using a stable release of Julia instead – i.e. the latest v0.4.x 
> release.
>
> On Mon, Mar 21, 2016 at 3:13 PM, Matthew Pearce <mat...@refute.me.uk 
> > wrote:
>
>> I recently updated Julia to the latest version:
>>
>> ```julia
>> julia> versioninfo()
>> Julia Version 0.5.0-dev+3220
>> Commit c18bc53 (2016-03-21 11:09 UTC)```
>>
>> After doing so I get errors when trying to use CUDArt.jl. 
>> I'm writing to ask whether my best bet is to fully clean and remake my 
>> julia build or whether this is likely to be something to do with my 
>> configuration of CUDArt.
>> Prior to this I had CUDArt working fine (previous was commit 83eac1e* (
>> 2015-10-13 16:00 UTC))
>>
>> The error message generated is like:
>>
>> ```julia
>> julia> Pkg.test("CUDArt")
>> INFO: Testing CUDArt
>>
>> signal (11): Segmentation fault
>> while loading /home/mcp50/.julia/v0.5/CUDArt/test/gc.jl, in expression 
>> starting on line 1
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1577
>> [inline] at /home/mcp50/soft/julia/src/dump.c:1169
>> jl_deserialize_datatype at /home/mcp50/soft/julia/src/dump.c:1568
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1448
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1438
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> [inline] at /home/mcp50/soft/julia/src/julia.h:573
>> jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1318
>> [inline] at /home/mcp50/soft/julia/src/julia.h:573
>> jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1436
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1448
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1438
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> [inline] at /home/mcp50/soft/julia/src/julia.h:573
>> jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1448
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
>> [inline] at /home/mcp50/soft/julia/src/julia.h:573
>> jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1499
>> [inline] at /home/mcp50/soft/julia/src/julia.h:573
>> jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1492
>> jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
>> _jl_restore_incremental at /home/mcp50/soft/julia/src/dump.c:2275
>> jl_restore_incremental at /home/mcp50/soft/julia/src/dump.c:2353
>> _require_from_serialized at ./loading.jl:165
>> unknown function (ip: 0x7f60febe4426)
>> [inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
>> jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
>> _require_from_serialized at ./loading.jl:193
>> require at ./loading.jl:323
>> unknown function (ip: 0x7f60f3bb71ac)
>> [inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
>> jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
>> eval_import_path_ at /home/mcp50/soft/julia/src/toplevel.c:379
>> jl_toplevel_eval_flex at /home/mcp50/soft/julia/src/toplevel.c:471
>> jl_parse_eval_all at /home/mcp50/soft/julia/src/ast.c:784
>> jl_load at /home/mcp50/soft/julia/src/topleve

[julia-users] CUDArt.jl - segfaults after updating Julia

2016-03-21 Thread Matthew Pearce
I recently updated Julia to the latest version:

```julia
julia> versioninfo()
Julia Version 0.5.0-dev+3220
Commit c18bc53 (2016-03-21 11:09 UTC)```

After doing so I get errors when trying to use CUDArt.jl. 
I'm writing to ask whether my best bet is to fully clean and remake my 
julia build or whether this is likely to be something to do with my 
configuration of CUDArt.
Prior to this I had CUDArt working fine (previous was commit 83eac1e* (2015-
10-13 16:00 UTC))

The error message generated is like:

```julia
julia> Pkg.test("CUDArt")
INFO: Testing CUDArt

signal (11): Segmentation fault
while loading /home/mcp50/.julia/v0.5/CUDArt/test/gc.jl, in expression 
starting on line 1
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1577
[inline] at /home/mcp50/soft/julia/src/dump.c:1169
jl_deserialize_datatype at /home/mcp50/soft/julia/src/dump.c:1568
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1448
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1438
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
[inline] at /home/mcp50/soft/julia/src/julia.h:573
jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1318
[inline] at /home/mcp50/soft/julia/src/julia.h:573
jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1436
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1448
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1438
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
[inline] at /home/mcp50/soft/julia/src/julia.h:573
jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1448
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1595
[inline] at /home/mcp50/soft/julia/src/julia.h:573
jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1568
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1499
[inline] at /home/mcp50/soft/julia/src/julia.h:573
jl_gc_wb at /home/mcp50/soft/julia/src/dump.c:1492
jl_deserialize_value_ at /home/mcp50/soft/julia/src/dump.c:1378
_jl_restore_incremental at /home/mcp50/soft/julia/src/dump.c:2275
jl_restore_incremental at /home/mcp50/soft/julia/src/dump.c:2353
_require_from_serialized at ./loading.jl:165
unknown function (ip: 0x7f60febe4426)
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
_require_from_serialized at ./loading.jl:193
require at ./loading.jl:323
unknown function (ip: 0x7f60f3bb71ac)
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
eval_import_path_ at /home/mcp50/soft/julia/src/toplevel.c:379
jl_toplevel_eval_flex at /home/mcp50/soft/julia/src/toplevel.c:471
jl_parse_eval_all at /home/mcp50/soft/julia/src/ast.c:784
jl_load at /home/mcp50/soft/julia/src/toplevel.c:580
include at ./boot.jl:240
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
include_from_node1 at ./loading.jl:417
unknown function (ip: 0x7f60f3b56785)
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
do_call at /home/mcp50/soft/julia/src/interpreter.c:66
eval at /home/mcp50/soft/julia/src/interpreter.c:185
jl_toplevel_eval_flex at /home/mcp50/soft/julia/src/toplevel.c:557
jl_parse_eval_all at /home/mcp50/soft/julia/src/ast.c:784
jl_load at /home/mcp50/soft/julia/src/toplevel.c:580
include at ./boot.jl:240
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
include_from_node1 at ./loading.jl:417
unknown function (ip: 0x7f60f3ba3755)
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
process_options at ./client.jl:266
_start at ./client.jl:318
unknown function (ip: 0x7f60f3b9d2a2)
[inline] at /home/mcp50/soft/julia/src/julia_internal.h:69
jl_call_method_internal at /home/mcp50/soft/julia/src/gf.c:1863
unknown function (ip: 0x401c1d)
unknown function (ip: 0x402e11)
__libc_start_main at /lib64/libc.so.6 (unknown line)
Allocations: 
```

Re: [julia-users] Using CUDArt on remote machines - 'illegal memory access'

2016-03-03 Thread Matthew Pearce
Thanks again.

Think the problem may have been with my kernel and getting confused about 
the row-major versus column-major ordering of the array layout. 
I thought I'd checked it was producing the correct norms yesterday, but I 
must have changed something...

To get it straight: if A is a matrix in main memory and the corresponding GPU 
memory object is d_A = CudaArray(A), then:

A[i, j] = d_A[ j * nrows + i]

Is that right? I guess I got confused by the discussion of transposition in 
the CUDArt docs.
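
For what it's worth, the column-major bookkeeping can be sanity-checked on the host side, since Julia arrays are column-major (1-based indexing shown; a 0-based kernel would use idx = i + j*nrows with 0-based i and j):

```julia
A = reshape(collect(1.0:12.0), 3, 4)   # 3x4 matrix, nrows = 3
i, j = 2, 3
A[i, j] == A[(j - 1) * 3 + i]          # true: (i, j) sits at linear index (j-1)*nrows + i
```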

Matthew


Re: [julia-users] Using CUDArt on remote machines - 'illegal memory access'

2016-03-03 Thread Matthew Pearce
Thanks Tim. 

For me `elty=Float32' so if I use `CudaArray(elty, ones(10))' or `CudaArray(
elty, ones(10)...)' I get a conversion error. [I am running Julia 
0.5.0-dev+749]
The result of my CudaArray creation above looks like:

julia> to_host(CudaArray(map(elty, ones(10)')))
1x10 Array{Float32,2}:
 1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0

I tried putting a `device_synchronize()' call in the `p2' block above like 
so, which was probably needed anyway, but doesn't fix the error:

julia> p2 = quote
           elty = eltype(d_M)
           n1, n2 = size(d_M)
           d_dots = CudaArray(map(elty, ones(n1)))
           dev = device(d_dots)
           dotf = cudakernels.ptxdict[(dev, "sqrownorms", elty)]
           numblox = Int(ceil(n1/cudakernels.maxBlock))
           CUDArt.launch(dotf, numblox, cudakernels.maxBlock, (d_M, n1, n2, d_dots))
           device_synchronize()
           dots = to_host(d_dots)
           free(d_dots)
           dots
       end

julia> sow(reps[3], :d_M, :(residual_shared(Y,A_init,S_init,1,sig)))
RemoteRef{Channel{Any}}(51,1,40341)

julia> reap(reps[3], :(string(d_M)))
Dict{Int64,Any} with 1 entry:
  51 => "CUDArt.CudaArray{Float32,2}(CUDArt.CudaPtr{Float32}(Ptr{Float32} 
@0x000b041e),(4000,2500),0)"

julia> reap(reps[3], p2)
ERROR: On worker 51:
"an illegal memory access was encountered"
 [inlined code] from essentials.jl:111
 in checkerror at /home/mcp50/.julia/v0.5/CUDArt/src/libcudart-6.5.jl:16
 [inlined code] from /home/mcp50/.julia/v0.5/CUDArt/src/../gen-6.5/
gen_libcudart.jl:16
 in device_synchronize at /home/mcp50/.julia/v0.5/CUDArt/src/device.jl:28
 in anonymous at multi.jl:892
 in run_work_thunk at multi.jl:645
 [inlined code] from multi.jl:892
 in anonymous at task.jl:59
 in remotecall_fetch at multi.jl:731
 [inlined code] from multi.jl:368
 in remotecall_fetch at multi.jl:734
 in anonymous at task.jl:443
 in sync_end at ./task.jl:409
 [inlined code] from task.jl:418
 in reap at /home/mcp50/.julia/v0.5/ClusterUtils/src/ClusterUtils.jl:203

One thing I have noted is that a remote process crashes if I ever attempt 
to move a `CudaArray' type/pointer from it to the host. 
That shouldn't be happening in the above, but I wonder if, inadvertently, 
something similar is happening.

If I try calling the kernel on another process on the same machine, I don't 
get the error:

julia> sow(62, :d_M, :(residual_shared($Y_init,$A_init,$S_init,1,$sig)))
RemoteRef{Channel{Any}}(62,1,40936)

julia> sum(reap(62, p2)[62])
5.149127f6

Hmm...






[julia-users] Re: Using CUDArt on remote machines - 'illegal memory access'

2016-03-03 Thread Matthew Pearce
I should add that I don't think the error lies in my `reap' function for 
remote calls, as I can correctly call the cudakernels.sqrownorms function on 
the host from the remote:

julia> reap(3, :(reap(1, :(sum(cudakernels.sqrownorms(d_M))))[1]))[3]
5.149127f6

(The above gets process three to call the kernel on process 1 and then 
returns the result from 3 to 1.)



[julia-users] Using CUDArt on remote machines - 'illegal memory access'

2016-03-03 Thread Matthew Pearce
Hello

I've come across a baffling error. I have a custom CUDA kernel to calculate 
squared row norms of a matrix. It works fine on the host computer:

julia> d_M = residual_shared(Y_init,A_init,S_init,k,sig)
CUDArt.CudaArray{Float32,2}(CUDArt.CudaPtr{Float32}(Ptr{Float32} @
0x000b037a),(4000,2500),0)

julia> sum(cudakernels.sqrownorms(d_M))
5.149127f6

However when I try to run the same code on a remote machine, the variable 
`d_M' gets calculated properly. The custom kernel launch code looks like:

function sqrownorms{T}(d_M::CUDArt.CudaArray{T,2})
    elty = eltype(d_M)
    n1, n2 = size(d_M)
    d_dots = CudaArray(map(elty, ones(n1)))
    dev = device(d_dots)
    dotf = ptxdict[(dev, "sqrownorms", elty)]
    numblox = Int(ceil(n1/maxBlock))
    CUDArt.launch(dotf, numblox, maxBlock, (d_M, n1, n2, d_dots))
    dots = to_host(d_dots)
    free(d_dots)
    return dots
end

Running the body of this on a remote causes the following crash message. 
(Calling the function itself just produces an unhelpful 'process exited' 
error.)

julia> sow(reps[5], :d_M, :(residual_shared(Y,A_init,S_init,1,sig)))

julia> p2 = quote
           elty = eltype(d_M)
           n1, n2 = size(d_M)
           d_dots = CudaArray(map(elty, ones(n1)))
           dev = device(d_dots)
           dotf = cudakernels.ptxdict[(dev, "sqrownorms", elty)]
           numblox = Int(ceil(n1/cudakernels.maxBlock))
           CUDArt.launch(dotf, numblox, cudakernels.maxBlock, (d_M, n1, n2, d_dots))
           dots = to_host(d_dots)
           free(d_dots)
           dots
       end;

julia> reap(reps[5], p2)  #this is a remote call fetch of the eval of the 
`p2' block in global scope
ERROR: On worker 38:
"an illegal memory access was encountered"
 [inlined code] from essentials.jl:111
 in checkerror at /home/mcp50/.julia/v0.5/CUDArt/src/libcudart-6.5.jl:16
 [inlined code] from /home/mcp50/.julia/v0.5/CUDArt/src/stream.jl:11
 in cudaMemcpyAsync at /home/mcp50/.julia/v0.5/CUDArt/src/../gen-6.5/
gen_libcudart.jl:396
 in copy! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:152
 in to_host at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:148
 in anonymous at multi.jl:892
 in run_work_thunk at multi.jl:645
 [inlined code] from multi.jl:892
 in anonymous at task.jl:59
 in remotecall_fetch at multi.jl:731
 [inlined code] from multi.jl:368
 in remotecall_fetch at multi.jl:734
 in anonymous at task.jl:443
 in sync_end at ./task.jl:409
 [inlined code] from task.jl:418
 in reap at /home/mcp50/.julia/v0.5/ClusterUtils/src/ClusterUtils.jl:203

Any thoughts much appreciated - I'm not sure where to go with this now.

Matthew




[julia-users] Re: Google Summer of Code 2016 - Ideas Page

2016-02-23 Thread Matthew Pearce
Whose GSoC idea is the 'Dynamic distributed execution for data parallel 
tasks in Julia'? I'd be keen to collaborate.
I'd be keen to collaborate.

I've been working on a package (ClusterUtils.jl 
<https://github.com/pearcemc/ClusterUtils.jl>) that works like scripting 
for HPC. At the moment it's written pretty much just to scratch my own 
itches.

It would be great to get involved with a GSoC project but I have PhD and 
other commitments so can't take full time out.

Matthew






[julia-users] What to do with my package ClusterUtils.jl ?

2016-02-02 Thread Matthew Pearce
As part of working on an MCMC algorithm, I have produced some tools to help 
leverage my academic cluster.
I've stuck them on GitHub as ClusterUtils.jl 
<https://github.com/pearcemc/ClusterUtils.jl>. The functionality builds on 
`remotecall' and on the ClusterManagers.jl package, with:

   - `describepids()' - network topology: which processes live on which 
   machines (useful for `SharedArray')
   - `SharedArray' display patch - Base produces #undefs if you try to view 
   one hosted on another machine.
   - `sow()' and `reap()' functions for scripting-style map-reduce, like 
   `sow(:X, :(myid()^2))' and then later `reap(:(X + pi))'

I know some of the functions, e.g. the SharedArray display patch, should get 
contributed elsewhere, but I don't know where.
Other functions such as `sow' are useful, but would probably benefit from 
knowledgeable feedback and development.
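
A small usage sketch pieced together from the examples in these posts (the exact API may have moved on since):

```julia
using ClusterUtils

pids = workers()

topo = describepids()            # network topology: which pids live on which machine
sow(pids, :X, :(myid()^2))       # bind X = myid()^2 in Main on every listed pid
results = reap(pids, :(X + pi))  # Dict of pid => value of (X + pi) evaluated remotely
```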

Any advice on where to go with this much appreciated.

Matthew





[julia-users] Advice on factorisation of sparse banded systems and solvers.

2016-01-25 Thread Matthew Pearce
I have a problem which involves sparse, banded, symmetric, PD systems. 

In particular, if A is such a matrix and A = LL', I need to solve L'x = z. The 
size of A prohibits using dense matrices.

So I am looking for advice on tactics for this problem, as there doesn't 
seem to be an out-of-the-box solution.

Is this likely to be something I can accomplish with calls to CHOLMOD? If 
so, will I have to translate data structures? Is there some other library? 
Would I be better off calculating L directly?
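
A hedged sketch (Julia 0.4-era API, not verified here) of one way this might look via CHOLMOD; note that cholfact uses a fill-reducing permutation, so the extracted L satisfies A[p,p] = L*L' rather than A = L*L', which may or may not match the factor the problem needs:

```julia
# Small random sparse SPD test system standing in for the real one.
n = 1000
B = sprandn(n, n, 5/n)
A = B*B' + speye(n)
z = randn(n)

F = cholfact(A)      # sparse Cholesky via CHOLMOD
L = F[:L]            # lower-triangular factor (of the permuted matrix)
p = F[:p]            # the permutation used
x = L' \ z           # sparse back-substitution for L' x = z
```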




[julia-users] Re: Cholmod Factor re-use

2015-12-01 Thread Matthew Pearce
Thanks, but I'm afraid not. I guess the inclusion of Cholmod into Julia 
must have been restructured since that part of the code was written.
E.g.:

```
julia> using Base.LinAlg.CHOLMOD.CholmodFactor
ERROR: UndefVarError: CHOLMOD not defined
```

Matthew

On Tuesday, November 24, 2015 at 3:43:21 PM UTC, Pieterjan Robbe wrote:
>
> is this of any help?
>
> https://groups.google.com/forum/#!msg/julia-users/tgO3hd238Ac/olgfSJLXvzoJ
>


[julia-users] Cholmod Factor re-use

2015-11-24 Thread Matthew Pearce
Hello

I was going to investigate whether re-using the structure of a sparse 
cholesky could save me some time. 

Trying it, I got this:

julia> C = cholfact(K)
Base.SparseMatrix.CHOLMOD.Factor{Float64}
type:  LLt
method: supernodal
maxnnz:  0
nnz:   4117190


julia> D = cholfact!(C, K)
ERROR: MethodError: `cholfact!` has no method matching cholfact!(::Base.
SparseMatrix.CHOLMOD.Factor{Float64}, ::SparseMatrixCSC{Float64,Int64})

help?> cholfact!
search: cholfact! cholfact

  ..  cholfact!(A [,LU=:U [,pivot=Val{false}]][;tol=-1.0]) -> Cholesky
  
  ``cholfact!`` is the same as :func:`cholfact`, but saves space by 
overwriting the input ``A``, instead of creating a copy. ``cholfact!`` can 
also reuse the symbolic factorization from a different matrix ``F`` with 
the same structure when used as: ``cholfact!(F::CholmodFactor, A)``.

julia> VERSION
v"0.5.0-dev+749"


So I have a `Factor` object, but it looks like `cholfact!` wants a 
`CholmodFactor`. Is this some kind of hangover from previous development, 
or have I missed something?

Cheers

Matthew


[julia-users] SharedArray - intialisation by filename

2015-11-02 Thread Matthew Pearce

So I was poking around the SharedArray code, and saw that there was a 
method for creating one given a file name.

After some further playing about I've realised the file has to be in some 
sort of binary format (e.g. created by write(somefileio, somearray)), because 
of the mmapping that happens in the background.

I can't see anything in the docs about this usage, and for me this is quite 
close to my use case.

My question is whether this feature is likely to disappear, given it isn't in 
the docs, or whether documenting it just hasn't been gotten round to yet.
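
For concreteness, a sketch of what using it looks like; the exact constructor signature here is an assumption and may differ between versions:

```julia
# Write a raw binary dump, which is what the mmap-backed constructor expects.
A = rand(100, 100)
open("/tmp/shdata.bin", "w") do io
    write(io, A)
end

# Hypothetical call shape: filename, element type, dimensions.
S = SharedArray("/tmp/shdata.bin", Float64, (100, 100))
S[1, 1] == A[1, 1]   # should be true
```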

Much appreciated

Matthew


[julia-users] Re: parallel and PyCall

2015-10-30 Thread Matthew Pearce

So I got something working for my pylab example. 

julia> import PyCall

julia> PyCall.@pyimport pylab

julia> @everywhere import PyCall

julia> @everywhere PyCall.@pyimport pylab

julia> @everywhere A = pylab.cumsum(collect(1:10))*1.

julia> fetch(@spawnat remotes[1] A)
10-element Array{Float64,1}:
  1.0
  3.0
  6.0
 10.0
 15.0
 21.0
 28.0
 36.0
 45.0
 55.0




No luck with the math module I'm afraid. Two different types of errors 
depending on style:

julia> @spawnat remotes[1] PyCall.@pyimport math as pymath
RemoteRef{Channel{Any}}(2,1,305)

julia> fetch(@spawnat remotes[1] (pymath.sin(pymath.pi / 4) - sin(pymath.pi 
/ 4)) )
ERROR: On worker 2:
UndefVarError: pymath not defined
 in anonymous at multi.jl:1330
 in anonymous at multi.jl:889
 in run_work_thunk at multi.jl:645
 in run_work_thunk at multi.jl:654
 in anonymous at task.jl:54
 in remotecall_fetch at multi.jl:731
 [inlined code] from multi.jl:368
 in call_on_owner at multi.jl:776
 in fetch at multi.jl:784

julia> @everywhere PyCall.@pyimport math as pymath

julia> fetch(@spawnat remotes[1] (pymath.sin(pymath.pi / 4) - sin(pymath.pi 
/ 4)) )
Worker 2 terminated.srun: error: mrc-bsu-tesla1: task 0: Exited with exit 
code 1
ERROR: ProcessExitedException()
 in yieldto at ./task.jl:67
 in wait at ./task.jl:367
 in wait at ./task.jl:282
 in wait at ./channels.jl:97
 in take! at ./channels.jl:84
 in take! at ./multi.jl:792
 in remotecall_fetch at multi.jl:729
 [inlined code] from multi.jl:368
 in call_on_owner at multi.jl:776
 in fetch at multi.jl:784


ERROR (unhandled task failure): EOFError: read end of file





On Friday, October 30, 2015 at 1:28:21 AM UTC, Yakir Gagnon wrote:
>
> @Matthew: did you find a solution? 
>  
> On Tuesday, October 27, 2015 at 8:44:53 AM UTC+10, Yakir Gagnon wrote:
>>
>> Yea, right? So what’s the answer? How can we if at all do any PyCalls 
>> parallely? 
>>
>> On Monday, October 26, 2015 at 11:49:35 PM UTC+10, Matthew Pearce wrote:
>>
>> Thought I had an idea about this, I was wrong:
>>>
>>> ```julia
>>>
>>> julia> @everywhere using PyCall
>>>
>>> julia> @everywhere @pyimport pylab
>>>
>>> julia> remotecall_fetch(pylab.cumsum, 5, collect(1:10))
>>> ERROR: cannot serialize a pointer
>>>  [inlined code] from error.jl:21
>>>  in serialize at serialize.jl:420
>>>  [inlined code] from dict.jl:372
>>>  in serialize at serialize.jl:428
>>>  in serialize at serialize.jl:310
>>>  in serialize at serialize.jl:420 (repeats 2 times)
>>>  in serialize at serialize.jl:302
>>>  in serialize at serialize.jl:420
>>>  [inlined code] from dict.jl:372
>>>  in serialize at serialize.jl:428
>>>  in serialize at serialize.jl:310
>>>  in serialize at serialize.jl:420 (repeats 2 times)
>>>  in serialize at serialize.jl:302
>>>  in serialize at serialize.jl:420
>>>  [inlined code] from dict.jl:372
>>>  in send_msg_ at multi.jl:222
>>>  [inlined code] from multi.jl:177
>>>  in remotecall_fetch at multi.jl:728
>>>  [inlined code] from multi.jl:368
>>>  in remotecall_fetch at multi.jl:734
>>>
>>> julia> pylab.cumsum(collect(1:10))
>>> 10-element Array{Int64,1}:
>>>   1
>>>   3
>>>   6
>>>  10
>>>  15
>>>  21
>>>  28
>>>  36
>>>  45
>>>  55
>>>
>>> ```
>>>
>> ​
>>
>

[julia-users] CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
I'm not having much luck filling a CUDArt.CudaArray matrix with a value.

julia> C = CUDArt.CudaArray(Float64, (10,10))
CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
0x000b034a0e00),(10,10),0)

julia> fill!(C, 2.0)
ERROR: KeyError: (0,"fill_contiguous",Float64) not found
 [inlined code] from essentials.jl:58
 in getindex at dict.jl:719
 in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158

The fill! code works when matrix C is created by copying data to the gpu. 
This suggested to me the problem was one of memory allocation. However, 
I've tried variations on this which haven't worked, such as taking some of 
the source code:

julia> function NewCudaArray(T::Type, dims::Dims)
           n = prod(dims)
           p = CUDArt.malloc(T, n)
           CudaArray{T,length(dims)}(p, dims, device())
       end
NewCudaArray (generic function with 1 method)

julia> C = NewCudaArray(Float64, (10,10))
CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
0x000b034a1200),(10,10),0)

julia> fill!(C, 2.0)
ERROR: KeyError: (0,"fill_contiguous",Float64) not found
 [inlined code] from essentials.jl:58
 in getindex at dict.jl:719
 in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158

Copying things across unnecessarily sounds slow, so thoughts appreciated.


[julia-users] Re: parallel and PyCall

2015-10-26 Thread Matthew Pearce
Thought I had an idea about this, I was wrong:

```julia

julia> @everywhere using PyCall

julia> @everywhere @pyimport pylab

julia> remotecall_fetch(pylab.cumsum, 5, collect(1:10))
ERROR: cannot serialize a pointer
 [inlined code] from error.jl:21
 in serialize at serialize.jl:420
 [inlined code] from dict.jl:372
 in serialize at serialize.jl:428
 in serialize at serialize.jl:310
 in serialize at serialize.jl:420 (repeats 2 times)
 in serialize at serialize.jl:302
 in serialize at serialize.jl:420
 [inlined code] from dict.jl:372
 in serialize at serialize.jl:428
 in serialize at serialize.jl:310
 in serialize at serialize.jl:420 (repeats 2 times)
 in serialize at serialize.jl:302
 in serialize at serialize.jl:420
 [inlined code] from dict.jl:372
 in send_msg_ at multi.jl:222
 [inlined code] from multi.jl:177
 in remotecall_fetch at multi.jl:728
 [inlined code] from multi.jl:368
 in remotecall_fetch at multi.jl:734

julia> pylab.cumsum(collect(1:10))
10-element Array{Int64,1}:
  1
  3
  6
 10
 15
 21
 28
 36
 45
 55

```


[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
Thanks also Tim.



[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
Thanks Kristoffer, indeed that works. 

I'm slightly baffled by the initialisation process. In my session I had 
already used fill! successfully on a matrix that was on the device, so I 
presumed everything had been initialised. Clearly that's not enough.




[julia-users] Re: Shared Array on remote machines?

2015-10-21 Thread Matthew Pearce
Indeed I am not trying to share memory across processes on different 
machines.

I don't understand either why the code above doesn't work. I was hoping I 
might get suggestions as to what such a workaround might be.

On Tuesday, October 20, 2015 at 2:10:27 PM UTC+1, Páll Haraldsson wrote:
>
> "Shared Array on remote machines" just seemed like an oxymoron..
>
> I didn't read though the code, if you have your machine and another remote 
> one you can't have shared memory between (then you would need something 
> else, non-shared/message passing). I hope that is not what you are trying 
> to do. I assume you are trying to have two (or more) processes on a remote 
> machine sharing memory. I don't know a reason it shouldn't work within any 
> one machine (just never with an outside machine), maybe there are 
> limitations or workarounds.
>
>
>

[julia-users] How to send dynamically generated code to worker processes?

2015-10-15 Thread Matthew Pearce
Hello

I was wondering if there was a way to send dynamically generated code over 
the network? 

This is the kind of thing I am imagining to get bar() loaded on worker 
processes:

workrs = addprocs(4)

function foo(x)
    function bar(y)
        x*y
    end
    bar
end

bar = foo(2)

[@spawnat w bar for w in workrs]  # raises error: bar() not available

If `bar` were a data object, then I understand (perhaps wrongly) from the 
docs that it would have been transferred.

Much appreciated

Matthew


Re: [julia-users] How to send dynamically generated code to worker processes?

2015-10-15 Thread Matthew Pearce
Thanks, works nicely.
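
For completeness, a sketch of the fix Stefan describes in the quoted message below (the printed result is what I'd expect rather than verified output):

```julia
workrs = addprocs(4)

# @everywhere makes the (dynamically generated) closure's definition known on
# the workers, so the returned closure can then be shipped with @spawnat.
@everywhere function foo(x)
    function bar(y)
        x*y
    end
    bar
end

bar = foo(2)
refs = [@spawnat w bar(3) for w in workrs]
map(fetch, refs)   # expect [6, 6, 6, 6]
```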

On Thursday, October 15, 2015 at 12:39:17 PM UTC+1, Stefan Karpinski wrote:
>
> Prefix the function definition with @everywhere and it should work.
>
> On Thu, Oct 15, 2015 at 5:06 PM, Matthew Pearce <mat...@refute.me.uk 
> > wrote:
>
>> Hello
>>
>> I was wondering if there was a way to send dynamically generated code 
>> over the network? 
>>
>> This is the kind of thing I am imagining to get bar() loaded on worker 
>> processes:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *workrs = addprocs(4)function foo(x)   function bar(y)   x*y   end   
>> barendbar = foo(2)[@spawnat w bar for w in workrs]  #raises error bar() not 
>> available*
>>
>> If *bar* were a data object then I understand (perhaps wrongly) from the 
>> docs it would have been transferred.
>>
>> Much appreciated
>>
>> Matthew
>>
>
>

[julia-users] Shared Array on remote machines?

2015-10-15 Thread Matthew Pearce
I was wondering if it was possible to spawn shared arrays on remote 
machines.


julia> S = SharedArray(Int, (3,4), init = S -> S[localindexes(S)] = myid(), pids=Int[1,2])
3x4 SharedArray{Int64,2}:
 1  1  2  2
 1  1  2  2
 1  1  2  2

julia> remotecall_fetch(readall, 6, `hostname`)
"mrc-bsu-tesla4\n"

julia> remotecall_fetch(readall, 7, `hostname`)
"mrc-bsu-tesla4\n"

julia> r = @spawnat 6 S = SharedArray(Int, (3,4), init = S -> S[localindexes(S)] = myid(), pids=Int[6,7])
RemoteRef{Channel{Any}}(6,1,526)

julia> fetch(r)
3x4 SharedArray{Int64,2}:
 #undef  #undef  #undef  #undef
 #undef  #undef  #undef  #undef
 #undef  #undef  #undef  #undef

Am I simply trying the wrong thing, or am I missing something deeper? 

I thought the error might be from trying to access memory on another 
process in some unusual way, but:

julia> r = @spawnat 6 S*eye(4)
RemoteRef{Channel{Any}}(6,1,542)

julia> fetch(r)
ERROR: On worker 6:
UndefRefError: access to undefined reference

To me this looks like the SharedArray being undefined even from the 
perspective of process 6.

Cheers,

Matthew


[julia-users] Re: What's the reason of the Success of Python?

2015-09-30 Thread Matthew Pearce

One big pull for python is the ecosystem. Almost any task has a python 
package available.

However, there are gaps. For all the strength of scipy + numpy, there are 
serious holes, for instance in methods for sparse matrices and CUDA bindings. 
It's those gaps that brought me here.

Also, pip is not consistently used across packages (e.g. PyQt). This is due 
to it having evolved from a less well structured system. Julia seems to have 
learned a bunch of lessons from this; generally, making package management 
_really_ easy on both the supply and consumption sides will be an advantage 
for it.

Matthew

On Tuesday, September 29, 2015 at 10:30:19 AM UTC+1, Sisyphuss wrote:
>
> While waiting Julia 0.4 stabilizing, let's do some brainstorming.
>
> What's the reason of the Success of Python? 
> If Julia had appeared 10 years earlier, will Python still have this 
> success?
>
>
>

[julia-users] Re: Pkg.[update()/install()/build()] woes on Windows 10 64 bit

2015-09-30 Thread Matthew Pearce
If it's any consolation, your line `git config --global 
url."https://".insteadOf git://` has helped me out of a bind!

On Friday, September 25, 2015 at 8:29:14 PM UTC+1, Evan Fields wrote:
>
> I've been encountering problems with packages. Here's what happened:
>
>- I installed Julia 0.3.11 via the 64 bit .exe on julialang.org
>- Changed the install path to C:\Julia-0.3.11 but otherwise all 
>default options
>- On Windows 10 x64, not using Cygwin or related
>- Right after install I opened a Julia terminal window; I had the 
>session below.
>
> The errors are shown in the session below. I've tried
> - Running as an administrator
> - Running git config --global url."https://".insteadOf git:// in shell 
> mode
> - Running Pkg.init() (already initialized)
> - Trying to clone a repository in Julia/Git using the git-bash there (it 
> worked over https)
>
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.11 (2015-07-27 06:18 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-w64-mingw32
>
> julia> Pkg.add("Images")
> INFO: Nothing to be done
> INFO: METADATA is out-of-date — you may not have the latest version of Images
> INFO: Use `Pkg.update()` to get the latest versions of your packages
>
> julia> Pkg.update()
> INFO: Updating METADATA...
> Checking out files: 100% (1627/1627), done.
> INFO: Updating cache of Hexagons...
> INFO: Updating cache of Gadfly...
> INFO: Updating cache of ArrayViews...
> INFO: Updating cache of Lazy...
> INFO: Updating cache of ImmutableArrays...
> INFO: Updating cache of Graphics...
> INFO: Updating cache of StatsBase...
> INFO: Updating cache of Requires...
> INFO: Updating cache of MacroTools...
> INFO: Updating cache of NaNMath...
> INFO: Updating cache of FactCheck...
> INFO: Updating cache of DataArrays...
> INFO: Updating cache of Grid...
> INFO: Updating cache of Loess...
> INFO: Updating cache of Compat...
> INFO: Updating cache of FixedPointNumbers...
> INFO: Updating cache of WoodburyMatrices...
> INFO: Updating cache of Compose...
> INFO: Updating cache of JuliaParser...
> INFO: Updating cache of Iterators...
> INFO: Updating cache of JSON...
> INFO: Updating cache of DataFrames...
> INFO: Updating cache of GZip...
> INFO: Updating cache of Reexport...
> INFO: Updating cache of Showoff...
> INFO: Updating cache of Distributions...
> INFO: Updating cache of Optim...
> INFO: Updating cache of Color...
> INFO: Updating cache of SortingAlgorithms...
> INFO: Updating cache of Docile...
> INFO: Updating cache of Calculus...
> INFO: Updating cache of PDMats...
> INFO: Updating cache of DualNumbers...
> INFO: Updating cache of DataStructures...
> INFO: Updating cache of Jewel...
> ERROR: couldn't update C:\Users\ejfie\.julia\v0.3\.cache\Hexagons using `git 
> remote update`
>  in wait at task.jl:284
>  in wait at task.jl:194
>  in wait at task.jl:48
>  in sync_end at task.jl:311
>  in update at pkg/entry.jl:319
>  in anonymous at pkg/dir.jl:28
>  in cd at file.jl:30
>  in cd at pkg/dir.jl:28
>  in update at pkg.jl:41
>
> julia> using Images
> ERROR: Images not properly installed. Please run Pkg.build("Images") then 
> restart Julia.
>  in error at error.jl:21 (repeats 2 times)
> while loading 
> C:\Users\ejfie\.julia\v0.3\Images\src\ioformats/libmagickwand.jl, in 
> expression starting on line 31
> while loading C:\Users\ejfie\.julia\v0.3\Images\src\Images.jl, in expression 
> starting on line 38
>
> julia> Pkg.build("Images")
> INFO: Building Images
> =[ ERROR: Images 
> ]=
>
>
> type Nothing has no field match
> while loading C:\Users\ejfie\.julia\v0.3\Images\deps\build.jl, in expression 
> starting on line 37
>
> ===
>
>
> =[ BUILD ERRORS 
> ]==
>
>
> WARNING: Images had build errors.
>
>  - packages with build errors remain installed in C:\Users\ejfie\.julia\v0.3
>  - build the package(s) and all dependencies with `Pkg.build("Images")`
>  - build a single package by running its `deps/build.jl` script
>
> ===
>
>
> julia>
>
>
> Hopefully I'm missing something simple here! Any suggestion?
>


[julia-users] Pkg dependence on git://

2015-09-30 Thread Matthew Pearce
Hi 

I'm just getting into Julia. It seems that, out of the box, Pkg is 
dependent on being able to call 'git clone git://blah' type addresses. 
For some reason my network doesn't support that. 

INFO: Cloning cache of ArrayViews from 
git://github.com/JuliaLang/ArrayViews.jl.git
fatal: unable to connect to github.com:
github.com[0: 192.30.252.130]: errno=No route to host

While waiting for support I thought I would dig around. It seems I can 
install packages by 'git clone https://blah' through 
Pkg.add("https://blah"). 
Of course, if the package has dependencies, it throws up errors for the same 
reason.

TL;DR: It would be nice if Pkg did a try/catch on the URL prefix, falling 
back to https://.

Cheers

Matthew Pearce