Actually, if you listen to Guy Steele's talk, the whole point is to let the developer be even more ignorant than you are now.

You shouldn't write "do" loops that start at the beginning and go until the end. You should just say that you want the calculation done, and that your intent is to have visited everything. And then, of course, you might have dependencies, etc., but the general idea is: don't say how to do it, just declare what you want as a result.

Of course, it doesn't immediately match what we are used to doing, namely telling the computer how we want things done; but you can't really say that you have to tell the computer more about parallelization. It's exactly the opposite.
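To make that concrete, here is a rough sketch in Pharo-style Smalltalk (values is
just a made-up collection; the selectors are the usual Collection protocol):

  "imperative: spells out the visit order"
  sum := 0.
  1 to: values size do: [:i | sum := sum + (values at: i)].

  "declarative: states the intended result"
  sum := values inject: 0 into: [:acc :each | acc + each].

At least for an associative operation like +, the second form no longer depends on
any particular visit order, and that is exactly the freedom a runtime would need.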

On 04/29/2011 05:05 PM, Alexandre Bergel wrote:
Interesting result. But still, I feel that the computer should be able to do
some inference. This could happen at runtime, at compile time, or during
development. I think that expecting the user to know how to parallelize an
application is asking too much.

I will read Charlotte's work.

Alexandre


On 29 Apr 2011, at 09:58, Toon Verwaest wrote:

I have a hunch that Stefan is referring to the PhD thesis of Charlotte Herzeel 
without giving names. As far as I understood from my discussions with her (and 
her talks), it generally doesn't really pay off to automatically parallelize on 
branches. You get minimal speedups (1% at best).

I tend to agree with Stefan / Michael / Guy Steele... Maybe you don't need much, but the 
mapreduce style is the minimum requirement to give the language enough "wiggle 
room" to automatically parallelize stuff. But it DOES require you to restructure 
your application in a slightly more declarative fashion.
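Roughly the kind of restructuring I mean (just a sketch; docs is a made-up
collection of strings):

  "map: each element is handled independently, no shared state"
  counts := docs collect: [:each | each size].
  "reduce: combine the partial results with an associative operation"
  total := counts inject: 0 into: [:sum :n | sum + n].

Because the map step touches no shared state and the reduce step only needs + to
be associative, a runtime is at least allowed to split the work across cores.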

cheers,
Toon

On 04/29/2011 04:55 PM, Alexandre Bergel wrote:
Hi Stefan,

I thought about your email. I do not understand why automatic parallelization is
not the way to go. In my opinion, the computer knows much more than the
programmer about where side effects appear and where to cut or split a
computation.

Actually, if I want to be provocative, I would say that parallelization cannot
be effective without being automatic, for much the same reason that the compiler
will always know better than I do how to allocate registers properly.

I feel it would be cheaper for me to buy a faster computer than to learn how to 
program in a multi-core fashion.

Cheers,
Alexandre


However, as I understand it, it's entirely up to the user to write code that
exploits parallel Processes explicitly, right?
Sure, you have to do something like: n timesRepeat: [ [ 1 expensiveComputation ] fork ].
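And if you need to know when all of them are done, you also write the
synchronization by hand, something like this (a sketch; expensiveComputation is
a made-up selector, as above):

  done := Semaphore new.
  n timesRepeat: [ [ 1 expensiveComputation. done signal ] fork ].
  n timesRepeat: [ done wait ]. "blocks until every forked process has signalled"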

I don't believe in holy grails or silver bullets.
Automatic parallelization is something nice for the kids, like Santa Claus or
the Easter Bunny...



