On Friday, December 5, 2014 12:43:20 PM UTC-5, Steven G. Johnson wrote:
>
> On Friday, December 5, 2014 9:57:42 AM UTC-5, Sebastian Nowozin wrote:
>>
>> I find Julia great, but for the technical computing goal, my biggest 
>> gripe with Julia (0.3.3 and 0.4.0-dev) at the moment is the lack of simple 
>> OpenMP-style parallelism.
>>
>
> See the discussion at:
>       https://github.com/JuliaLang/julia/issues/1790
> and the considerable work in progress on multithreading in Julia:
>       https://github.com/JuliaLang/julia/tree/threads
>
>> There is DArray and pmap, but they have large communication overheads for 
>> shared-memory parallelism,
>>
>
> This is a somewhat orthogonal issue.  You can have multiple processes and 
> still use a shared address space for data structures.  See:
>
>        
> http://julia.readthedocs.org/en/latest/manual/parallel-computing/#shared-arrays-experimental
>  
>
> The real difference is the programming model, not so much the 
> communications cost.
>

I think you're right that the interesting thing for a language is the model, but at the same time, for problems that are too big to reside on one machine, you can't ignore the communication costs.

I feel the holy grail here is to express map-reduce with the bare minimum of language elements; making hosts a first-class concept is too much.
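
To make this concrete, here is a minimal sketch of the kind of map-reduce I mean, written against the current addprocs/pmap primitives (partial_sum and the chunking are just illustrative, not a proposal):

    # map-reduce over worker processes with today's primitives
    addprocs(4)                              # spawn 4 local worker processes

    # "map" step: each worker reduces its own chunk to a partial sum
    @everywhere partial_sum(chunk) = sum(chunk)

    data   = rand(10^6)
    n      = nworkers()
    len    = div(length(data), n)
    chunks = [data[(i-1)*len+1 : i*len] for i in 1:n]

    # pmap ships each chunk to a worker; reduce combines the partial results
    total = reduce(+, pmap(partial_sum, chunks))

Shipping the chunks is exactly where the communication cost shows up, and nothing in this code says anything about which host the data should live on.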

To draw an analogy (admittedly one that is two levels removed): what language elements guarantee that @simd will vectorize anything that could possibly be vectorized, and what would it take to make @simd completely unnecessary? In the same way, what would it take to make a problem automatically decomposable across hosts in a reasonable way? Assuming everything can fit on one machine is too limiting, but it's so convenient.
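
For reference, the @simd half of the analogy looks roughly like this (a minimal sketch; axpy! is just an illustrative name, and the macro only asserts that the iterations may be reordered, it does not guarantee vectorization):

    # loop that LLVM *may* vectorize; @simd asserts the iterations are
    # independent and may be reordered, it is not a guarantee
    function axpy!(a, x, y)
        @inbounds @simd for i in 1:length(y)
            y[i] += a * x[i]
        end
        return y
    end

    axpy!(2.0, rand(10^6), rand(10^6))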

It seems like Julia as a language is, among other things, predicated on LLVM being able to figure out how to vectorize, and on introducing the minimum elements for LLVM to do its thing; in this case typing, immutability, and the JIT. Judging from Jacob's graph, that looks like it was a pretty good idea.
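
One way to see whether LLVM actually did its thing is to dump the IR for a concrete signature (a minimal sketch; whether this particular loop vectorizes depends on the Julia/LLVM version):

    # vectorized code shows up as <4 x double> (or similar) vector types in the IR
    function mydot(x::Vector{Float64}, y::Vector{Float64})
        s = 0.0
        @inbounds @simd for i in 1:length(x)
            s += x[i] * y[i]
        end
        return s
    end

    @code_llvm mydot(rand(100), rand(100))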

How about multi-host parallelism?
