Michael Smith wrote:
> I think you just compile whatever application with a cluster-optimized library
> and that's good. Of course, the app has to be the type with little i/o and
> lots of cpu time so that it's worthwhile to run it in a cluster, so I guess
> you're mostly right. I think we would build a cluster and then just stand
> around staring at it, saying "OK, we built this thing, now we need something to
> run on it".
John Mashey, a very smart guy whose job is something like
Supercomputing Evangelist at SGI, gives a presentation where he
describes supercomputing problems as having three orthogonal axes:
Computation, I/O and Communication. He gives examples of problems
that are heavy in each, e.g., brute-force cryptanalysis is 99.99%
compute, data mining is mostly I/O, and finite element analysis is
heavy on communication. (I think.) You can measure any
supercomputer's performance along each of those axes independently.
A Beowulf is strong in computation, especially integer computation
(Intel's FP sucks), okay in I/O (lots of disks), and really poor on
communication, unless you use something like multiple Myrinet
networks. Even then it won't compare to a shared-memory architecture
like SGI's Origin or HP/Convex's Exemplar.
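To make the three-axes idea concrete, here's a crude back-of-envelope
model in Python. All the machine and problem numbers are invented for
illustration (they aren't measured figures for any real Beowulf or SMP);
the point is just that a weak interconnect dominates total runtime for a
communication-heavy job, even when aggregate compute is similar.

```python
# Back-of-envelope model of Mashey's three axes. Every number below is
# a made-up assumption for illustration, not a benchmark result.

def runtime(flops, bytes_io, bytes_comm, flops_per_s, io_bw, comm_bw):
    """Crude additive model: total seconds spent on compute + I/O + comm."""
    return flops / flops_per_s + bytes_io / io_bw + bytes_comm / comm_bw

# Hypothetical 16-node Beowulf: decent aggregate compute, lots of local
# disks, but a commodity ~10 MB/s interconnect (the weak axis).
beowulf = dict(flops_per_s=16 * 200e6, io_bw=16 * 10e6, comm_bw=10e6)

# Hypothetical shared-memory machine: same compute and I/O, but
# "communication" goes through the memory system at ~1 GB/s.
smp = dict(flops_per_s=16 * 200e6, io_bw=16 * 10e6, comm_bw=1e9)

# A communication-heavy job (finite-element-like): 1 Tflop of work,
# 1 GB of I/O, 5 GB of data exchanged between processors.
job = dict(flops=1e12, bytes_io=1e9, bytes_comm=5e9)

print(runtime(**job, **beowulf))  # cluster: the interconnect term dominates
print(runtime(**job, **smp))      # SMP: the compute term dominates
```

Swap in a compute-dominated job (say, brute-force cryptanalysis with
almost no bytes_comm) and the two machines come out nearly identical,
which is exactly why the cluster's price/performance wins there.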
My fearless prediction is that, since Beowulf's price/performance
ratio for computation blows everything else away, a lot of research
will go into restructuring algorithms to fit Beowulf's particular
performance mix. (Okay, this isn't exactly fearless -- university
research groups have already built thousands of Beowulfs, so what
else are they gonna research?)
--
K<bob>
[EMAIL PROTECTED], http://www.jogger-egg.com/