Granularity and cache complexity are the reasons why not. If you find
anything and everything that can be done in parallel and parallelize it,
then you generally obtain only slowdowns. An essential trick is to exploit
locality via mutation but, of course, purely functional programming sucks at
that by design, not least because it strives to abstract that concept
away.
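
To put a number on the granularity point (a toy cost model with made-up
constants, not a measurement): every spawned task pays a fixed scheduling
overhead, so per-element tasks drown the useful work, while chunking
amortizes it.

  (* Toy cost model: invented constants, only meant to show why task
     granularity matters.  Each task pays a fixed overhead on top of its
     useful work. *)
  let work_per_element = 1.0        (* cost of one useful operation *)
  let overhead_per_task = 100.0     (* cost of creating/scheduling a task *)
  let n = 1_000_000

  let total_cost ~chunk_size =
    let tasks = float_of_int ((n + chunk_size - 1) / chunk_size) in
    tasks *. overhead_per_task +. float_of_int n *. work_per_element

  let () =
    Printf.printf "per-element tasks  : %.0f units\n" (total_cost ~chunk_size:1);
    Printf.printf "1000-element chunks: %.0f units\n" (total_cost ~chunk_size:1000)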


I share your dream but I doubt it will ever be realized.


Cheers,

Jon.


From: caml-list-boun...@yquem.inria.fr
[mailto:caml-list-boun...@yquem.inria.fr] On Behalf Of Eray Ozkural
Sent: 16 November 2010 23:05
To: Gerd Stolpmann
Cc: caml-list@yquem.inria.fr
Subject: Re: [Caml-list] SMP multithreading


On Wed, Nov 17, 2010 at 12:13 AM, Gerd Stolpmann <i...@gerd-stolpmann.de>
wrote:

On Tuesday, 16.11.2010 at 22:35 +0200, Eray Ozkural wrote:

>
>
> On Tue, Nov 16, 2010 at 7:04 PM, Gerd Stolpmann
> <i...@gerd-stolpmann.de> wrote:
>         On Monday, 15.11.2010 at 22:46 -0800, Edgar Friendly wrote:
>              * As somebody mentioned "implicit parallelization": Don't
>                expect anything from this. Even if a good compiler finds
>                ways to parallelize 20% of the code (which would be a
>                lot), the runtime effect would be marginal. 80% of the
>                code is run at normal speed (hopefully) and dominates the
>                runtime behavior. The point is that such compiler-driven
>                code improvements are only local optimizations. For
>                getting good parallelization results you need to
>                restructure the design of the program - well, maybe
>                compiler2.0 can do this at some time, but this is not in
>                sight.
>
> I think you are underestimating parallelizing compilers.

I was more citing Amdahl's law, and did not want to criticize any effort
in this area. It's more about the usefulness for the majority of problems.
How useful is the best parallelizing compiler if only a small part of the
program _can_ actually benefit from it? Think about it. If you are not
working in an area where many subroutines can be sped up, you will consider
this way of parallelizing a waste of time. And this is still true for the
majority of problems. Also, for many problems that can be tackled, the
scalability is very limited.
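
For concreteness, the arithmetic behind that remark (Amdahl's law), as a
tiny OCaml sketch; the 20% figure is only the illustrative number used
above:

  (* Amdahl's law: if a fraction p of the running time can be parallelized
     over n cores, the overall speedup is 1 / ((1 - p) + p / n). *)
  let speedup ~p ~n = 1.0 /. ((1.0 -. p) +. p /. float_of_int n)

  let () =
    (* with only 20% parallelizable, even unlimited cores cap the speedup
       just below 1.25x *)
    Printf.printf "p = 0.20,    8 cores: %.2fx\n" (speedup ~p:0.2 ~n:8);
    Printf.printf "p = 0.20, 1000 cores: %.2fx\n" (speedup ~p:0.2 ~n:1000);
    Printf.printf "p = 0.95,    8 cores: %.2fx\n" (speedup ~p:0.95 ~n:8)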


What makes you think only 20% of the code can be parallelized? I've worked
on such a compiler project, and there were plenty of opportunities for
parallelization in ordinary C code, let alone a functional language;
implicit parallelism would work wonders there. Of course you may think
otherwise, but I know that a high degree of parallelism can be achieved
through a functional language. I can't tell you much more than that, though.
If you think that only a very small portion of the code can be parallelized,
you probably do not appreciate the kinds of static and dynamic analysis
those compilers perform. Of course, if you are thinking of applying it to
some office or e-mail application, this might not be the case, but automatic
parallelization strategies work best when you apply them to a
computationally intensive program.
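
As a rough illustration of what that kind of dependence analysis looks at
(a sketch only, not output from any real compiler): in the first loop below
each iteration touches only its own index, so the iterations are
independent; in the second, each iteration reads the result of the previous
one, so the loop is a sequential chain.

  (* no cross-iteration dependence: a.(i) depends only on b.(i), so the
     iterations could run in any order, or in parallel *)
  let scale a b =
    for i = 0 to Array.length a - 1 do
      a.(i) <- 2.0 *. b.(i)
    done

  (* loop-carried dependence: iteration i reads a.(i-1), which iteration
     i-1 wrote, so this loop stays sequential as written (prefix sums do
     have well-known parallel reformulations, but that is a rewrite, not a
     local optimization) *)
  let prefix_sum a b =
    a.(0) <- b.(0);
    for i = 1 to Array.length a - 1 do
      a.(i) <- a.(i - 1) +. b.(i)
    done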

The really limiting factor for current functional languages would be their
reliance on inherently sequential primitives like list processing, which may
in some cases limit the compiler to pipeline parallelism only (something
non-trivial to do by hand, actually). Instead they would have to get their
basic forms from high-level parallel PLs (which might mean rewriting a lot
of things). The programs could look like something out of a category theory
textbook then. But I think even without modification you could get a lot of
speedup from OCaml code by applying state-of-the-art automatic
parallelization techniques. By the way, what matters is how much of the
serial work gets parallelized, not how many lines of code do. Even if the
parallelism is not obvious, the analysis can find out which iterations are
parallelizable, which variables have which kinds of dependencies, memory
dependencies, and so on. There is a public perception that automatic
parallelization does not work, which is wrong; what is true is that the
popular compiler vendors do not understand much in this regard - all I've
seen in popular products are lame attempts to generate vector instructions.
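
To make the point about sequential primitives concrete (a sketch; par_map
below is just a sequential stand-in for whatever mechanism would actually
supply the workers): a left fold threads an accumulator through every
element and is therefore a dependence chain, whereas the per-element
applications of a map are independent and the index space can simply be cut
into chunks.

  (* a fold is a chain: step i needs the accumulator from step i-1, so at
     best it admits pipeline parallelism *)
  let sum xs = List.fold_left (+.) 0.0 xs

  (* a map has no dependence between elements; par_map is a sequential
     placeholder -- the dependence structure, not this body, is the point *)
  let par_map f a = Array.map f a

  let normalize a =
    let s = Array.fold_left (+.) 0.0 a in
    par_map (fun x -> x /. s) a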

That is not to say that implicit parallelization of current code can
replace parallel algorithm design (which is AI-complete). Rather, I think
implicit parallelization is one of the things that will help parallel
computing people: it lets them work at a more abstract level, exposing more
concurrency through functional forms, yet avoid writing low-level comms
code. High-level explicit parallelism is also quite favorable, as there may
be many situations where, say, dynamic load-balancing approaches are
suitable. The approach of HPF might be relevant here, with the programmer
making annotations to guide the distribution of data structures, and the
rest inferred from the code.
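
Something in the spirit of that HPF remark could look like the hypothetical
OCaml signature below; nothing of the sort exists in the standard
distribution, it is only meant to show the kind of interface where the
programmer states the distribution and the rest is left to the
library/compiler.

  (* hypothetical interface: the programmer only annotates how the data is
     to be split across workers *)
  type distribution = Block of int | Cyclic of int

  module type DISTRIBUTED_ARRAY = sig
    type 'a t
    val distribute : dist:distribution -> 'a array -> 'a t
    val map : ('a -> 'b) -> 'a t -> 'b t   (* per-chunk, potentially parallel *)
    val reduce : ('a -> 'a -> 'a) -> 'a -> 'a t -> 'a
  end

  (* intended usage, given some implementation D of the signature:
       let d = D.distribute ~dist:(Block 4) data in
       D.reduce (+.) 0.0 (D.map kernel d)
  *)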

So whoever says that isn't possible probably hasn't read up much in the
computer architecture community. Even expert PL researchers may be misled
here, as they have made the quoted remark or similar remarks about
multi-core/SMP architectures. It was known for a very long time (20+ years)
that clock speeds would hit a wall and that we'd then have to spend more
chip area. This is true regardless of the underlying
architecture/technology, actually. Mother nature is parallel; it is the
sequence that is an abstraction. And what is more natural than expressing a
solution to a problem in terms of functions and sets? The kind of non-leaky
abstractions required can be provided by a type-safe functional language,
and then the compiler has a lot more to work on than with C code, although
analysis can reveal much even about C code. With the right heuristics, it
might decide on good data and task distribution strategies, translating,
say, a set-constructor notation into an efficient parallel algorithm - why
not?
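
The set-constructor remark made concrete with ordinary OCaml combinators
(again only a sketch of what a compiler would be handed; everything below
runs sequentially): { f x | x in s, p x } becomes a filter followed by a
map, and every application of p and f is independent of the others.

  (* { f x | x in s, p x } as filter + map; the per-element evaluations are
     mutually independent, which is exactly what a parallelizing compiler
     or library could exploit *)
  let select_map p f s = List.map f (List.filter p s)

  (* e.g. squares of the even numbers below ten *)
  let squares_of_evens =
    select_map (fun x -> x mod 2 = 0) (fun x -> x * x)
      [0; 1; 2; 3; 4; 5; 6; 7; 8; 9]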

What I say applies to parallelization based on analysis; for functional
languages, other parallelization strategies are of course also possible. I
suppose low-level task parallelization and dynamic load-balancing strategies
(where functions or closures are distributed) are the most popular, the
reasoning being that there can be many independent function evaluations
going on concurrently. I wonder if this has been attempted at all for
OCaml? The impurity of the language can be dealt with easily. And is there
anyone who might make suggestions to somebody who would like to work on
automatic parallelization of OCaml code? I actually think it might be fun
to try some of the simpler strategies and see what they yield on current
hardware platforms like multi-core and GPU. Ideally we wouldn't want just
good automatic parallelization of, say, linear algebra code, but also of
cool stuff like functional data structures. Recursive procedures can be
mapped to threads; it might match the best hand-written parallelizations,
actually, and you'd get platform independence as a bonus if the compiler is
well-designed :)
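
As a minimal sketch of "recursive procedures mapped to threads", using the
threads library that ships with OCaml (compile with -thread; the usual
caveat applies that the current runtime's global lock gives concurrency
rather than true multicore speedup, so this only shows the structure):

  (* divide-and-conquer sum in which one recursive call runs in its own
     thread; small ranges are summed directly to keep the granularity
     coarse *)
  let rec par_sum a lo hi =
    if hi - lo <= 4096 then begin
      let s = ref 0 in
      for i = lo to hi - 1 do s := !s + a.(i) done;
      !s
    end else begin
      let mid = (lo + hi) / 2 in
      let right = ref 0 in
      let t = Thread.create (fun () -> right := par_sum a mid hi) () in
      let left = par_sum a lo mid in
      Thread.join t;
      left + !right
    end

  let () =
    let a = Array.make 1_000_000 1 in
    Printf.printf "sum = %d\n" (par_sum a 0 (Array.length a))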

Best,


-- 
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct

_______________________________________________
Caml-list mailing list. Subscription management:
http://yquem.inria.fr/cgi-bin/mailman/listinfo/caml-list
Archives: http://caml.inria.fr
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners
Bug reports: http://caml.inria.fr/bin/caml-bugs
