On Fri, Dec 16, 2011 at 04:14:40PM -0300, Jecel Assumpcao Jr. wrote:
> Eugen Leitl wrote:
> 
> > It's remarkable how few are using MPI in practice. A lot of code 
> > is being made multithread-proof, and for what? So that they'll have
> > to rewrite it for message-passing, again?
> 
> Having seen a couple of applications which used MPI it seems like a dead
> end to me. The code is mangled to the point where it becomes really hard

Yes, you're running into the limitations of the human mind. Despite
being a massively parallel process underneath, the upper layers,
somewhat paradoxically, have big problems with utilizing parallelism.

I actually think that the problem is unsolvable at the human end
(just consider debugging millions to billions of fine-grained
asynchronous shared-nothing processes) and has to be routed around
the human by automatic code generation by stochastic means.
Growing your code a la Darwin might be the only thing that could
scale. Of course, we have to learn evolvability first. Current
stuff is way too brittle.
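To make "growing code a la Darwin" concrete: the minimal loop is mutate,
select against a fitness spec, repeat. The sketch below is a toy bitstring
evolver, nothing like evolving real programs; the target, population size,
and mutation rate are all illustrative assumptions.

```python
import random

random.seed(42)

TARGET = [1] * 32   # a stand-in "fitness spec" the evolved genome must satisfy

def fitness(genome):
    """Fraction of bits agreeing with the target spec."""
    return sum(g == t for g, t in zip(genome, TARGET)) / len(TARGET)

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    # Start from random genomes; each generation keeps the best fifth
    # unchanged (elitism) and fills the rest with mutated copies of them.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

The brittleness point shows up even here: a fitness function that only says
pass/fail gives evolution no gradient to climb, which is why evolvable
representations matter more than the search loop itself.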

> to understand what it does (in one case I rewrote it with OpenMP and the

OpenMP assumes shared memory, and shared memory does not exist in
this universe: it has to be expensively emulated. Cache coherency
will be distinctly dead well before we get to kilonode country.
We can already rack quite impressive numbers of ARM-based
SoCs on a mesh without the corium failure mode if cooling
fails briefly.

> difference in clarity was amazing). Fortunately, message passing in
> Smalltalk looks far nicer and doesn't get in the way. So that is what I

I must admit I've never done Smalltalk in anger, though I definitely
loved the concept when I studied its history in the early 1980s.

> am working on (and yes, I know all about Peter Deutsch's opinion about
> making local and remote messages look the same -
> http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing).

If you remove the cache and use cache-like embedded memory instead,
then accessing remote locations by message passing (routed via a
cut-through signalling mesh) is only slightly more expensive than
accessing local embedded memory. Some gate delays and relativistic
latency (think of a pingpong across a 300 mm wafer) do apply, of course.
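The relativistic part is easy to bound on the back of an envelope. Assuming
(purely as an illustration) that on-wafer signals propagate at roughly half
the vacuum speed of light:

```python
# Speed-of-light bound on a cross-wafer pingpong.
C = 3.0e8                 # m/s, speed of light in vacuum
WAFER = 0.3               # m, diameter of a 300 mm wafer
SIGNAL_SPEED = 0.5 * C    # assumed effective on-wafer propagation speed

one_way_ns = WAFER / SIGNAL_SPEED * 1e9
round_trip_ns = 2 * one_way_ns   # roughly 4 ns for the full pingpong
```

So the full-diameter round trip costs on the order of a few nanoseconds,
i.e. a handful of local embedded-memory accesses, which is why the penalty
is "only slightly more expensive" rather than the orders of magnitude a
miss to off-chip DRAM costs.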
 
> How can we spend money now to live in the future? Alan mentioned the
> first way in his talk: put lots and lots of FPGA together. The BEE3

FPGAs suffer from a lack of embedded memory. Consider GPGPU with a
quarter of a TByte/s of bandwidth across 2-3 GByte grains. You just
can't compete with the economies of scale that let you mesh hundreds
to thousands of such nodes with InfiniBand.

> board isn't cheap (something like $5K without the FPGAs, which are a few
> thousand dollars each themselves, nor memory) and a good RAMP machine
> hook a bunch of these together. The advantage of this approach is that
> each FPGA is large enough to do pretty much anything you can imagine. If
> you know your processors will be rather small, it might be more cost
> effective to have a larger number of cheaper FPGAs. That is what I am
> working on.
> 
> A second way to live in the future is far less flexible, and so should
> only be a second step after the above is no longer getting you the
> results you need: use wafer scale integration to have now roughly the
> same number of transistors you will have in 2020 on a typical chip. This
> is pretty hard (just ask Clive Sinclair or Gene Amdahl how much they
> lost on wafer scale integration back in the 1980s). But if you can get
> it to work, then you could distribute hundreds (or more) of 2020's
> computers to today's researchers.

But today's computers, like tomorrow's, are already large clusters.
The question is how many nodes you can afford, and what your
electricity bill is. If you know how your problem maps, you'll just
pick the best COTS of today and run it for 3-5 years, after which
it's cheaper to buy new hardware than to keep paying the electricity
bill.
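The 3-5 year figure falls out of a simple break-even: compare the cumulative
energy bill of a node against the price of replacing it. All the numbers
below (per-node draw including cooling, electricity price, replacement cost)
are illustrative assumptions, not measurements.

```python
# Rough break-even between running old hardware and buying new.
node_power_kw = 0.4       # assumed draw per node, incl. cooling overhead
price_per_kwh = 0.15      # assumed electricity price, $/kWh
new_node_cost = 3000.0    # assumed price of a replacement node, $

hours_per_year = 24 * 365
yearly_energy_cost = node_power_kw * hours_per_year * price_per_kwh
years_to_match_hw = new_node_cost / yearly_energy_cost
```

With these inputs a node burns roughly $500 of electricity per year, so its
own purchase price in energy after about five or six years; factor in that
the replacement does the same work on a fraction of the power, and the
crossover moves into the 3-5 year window.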

I'm not sure how well the Smalltalk model would fare here.


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
