Eugen Leitl wrote:

> It's remarkable how few are using MPI in practice. A lot of code 
> is being made multithread-proof, and for what? So that they'll have
> to rewrite it for message-passing, again?

Having seen a couple of applications that used MPI, it seems like a dead
end to me. The code is mangled to the point where it becomes really hard
to understand what it does (in one case I rewrote it with OpenMP and the
difference in clarity was amazing). Fortunately, message passing in
Smalltalk looks far nicer and doesn't get in the way. So that is what I
am working on (and yes, I know all about Peter Deutsch's opinion about
making local and remote messages look the same -
http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing).

In other messages in this thread there were comments about software
bloat hiding the effects of Moore's Law. There was a funny quote about
that (I was not able to track down who first said it): "What Andy
giveth, Bill taketh away!" (meaning Andrew Grove of Intel and Bill Gates
of Microsoft - this was a while ago). But we were talking about
selecting machines for research, and in that case the same software
would be used.

Compare running Squeak on a 40MHz 386 PC (my 1992 computer) with running
the exact same code on a 1GHz Pentium 4 PC (available to me in 2000).
Not even the old MVC interface is really usable on the first while the
second machine can handle Morphic just fine. The quantitative difference
becomes a qualitative one. I didn't feel the same between my 1 MHz Apple
II and the 6 MHz PC AT. But of course there was a difference - to show
off the AT at trade shows we used to run a Microsoft flight simulator called
Jet (later merged with MS Flight Simulator) on that machine side by side
with a 4.77MHz PC XT. It was a fun game on the AT, but looked more like
a slide show on the XT. I still felt I could get by with the Apple II,
however.

How can we spend money now to live in the future? Alan mentioned the
first way in his talk: put lots and lots of FPGA together. The BEE3
board isn't cheap (something like $5K without the FPGAs, which cost a
few thousand dollars each, or the memory), and a good RAMP machine
hooks several of these boards together. The advantage of this approach is that
each FPGA is large enough to do pretty much anything you can imagine. If
you know your processors will be rather small, it might be more cost
effective to have a larger number of cheaper FPGAs. That is what I am
working on.

A second way to live in the future is far less flexible, and so should
only be a second step after the above is no longer getting you the
results you need: use wafer scale integration to get, today, roughly the
same number of transistors that a typical chip will have in 2020. This
is pretty hard (just ask Clive Sinclair or Gene Amdahl how much they
lost on wafer scale integration back in the 1980s). But if you can get
it to work, then you could distribute hundreds (or more) of 2020's
computers to today's researchers.

-- Jecel


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc