On Thu, 1 Apr 1999, Walter B. Ligon III wrote:

> Being a "dyed-in-the-wool" Beowulf person, I tend to believe that the best
> approach is to use a larger number of the most cost effective devices you
> can get.  Cost effective including all aspects of hardware and software, not
> just CPU price and speed.  For better or for worse, this is currently Intel.
> Your mileage may vary.

Chris, I'd generally agree with this philosophy and add only one thing
-- if your task mix is IPC/network bound and synchronous (as it sounds
like it might be), your processors may well be idle a fair amount of
the time waiting for IPCs to complete.  In that case, you are better
off investing relatively more in the network and relatively less in
the processors to tune the two to maximum cost/benefit.  The ratio of
the time the task spends computing to the time it spends
communicating, and whether the communication can be overlapped with
the computation or the computation blocks until communication
completes, are important things to consider.
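
As a crude back-of-the-envelope illustration (the timing model and all
numbers below are made up, not measured -- benchmark your own task
before trusting anything like this), two quantities per step tell you
most of the story:

  # Rough per-node timing model for one step of a synchronous
  # parallel task.  t_comp and t_comm are hypothetical; measure
  # your own with a small test run.
  t_comp = 0.80   # seconds spent computing per step
  t_comm = 0.20   # seconds spent in IPC per step

  # Blocking communication: the CPU sits idle while messages move.
  t_blocking = t_comp + t_comm
  idle_frac = t_comm / t_blocking      # fraction of time CPU is idle

  # Fully overlapped communication: step time is whichever is longer.
  t_overlap = max(t_comp, t_comm)

  print("blocking step: %.2fs, CPU idle %.0f%%"
        % (t_blocking, 100 * idle_frac))
  print("overlapped step: %.2fs" % t_overlap)

If idle_frac comes out large, a faster network buys you more than
faster CPUs; if it's small, the reverse.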

The general idea is to form an approximate cost/benefit function for
your task mix, then try to optimize it by shifting investment over the
space of possible configurations.  This process will give you a pretty
good idea of whether you should shift money from CPU to network to
memory, from Alpha to PII or even to the Celeron (which is one of the
best deals on the market right now if your job isn't too nonlocal in
memory).  At the very least, you'll be able to answer questions like
"why did you select N nodes with Alpha/PII/Celeron, Myrinet/GbE,
64/128...MB SDRAM, ..." with definite reasons ;-)
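
A minimal sketch of what I mean, with entirely invented prices and
work rates (substitute real quotes and real benchmark numbers for your
own task mix):

  # Brute-force search over candidate node configurations.
  # All dollar figures and per-node rates are invented for
  # illustration only.
  budget = 50000.0   # total dollars available

  configs = [
      # (name, cost per node, work units/sec per node)
      ("PII/fast-ethernet",     1500.0, 1.0),
      ("PII/myrinet",           3000.0, 1.8),
      ("celeron/fast-ethernet",  900.0, 0.8),
      ("alpha/myrinet",         6000.0, 3.0),
  ]

  best = None
  for name, cost, rate in configs:
      nodes = int(budget // cost)    # how many nodes fit the budget
      throughput = nodes * rate      # assumes perfect scaling -- fold
                                     # in the communication model above
                                     # to do better
      if best is None or throughput > best[1]:
          best = (name, throughput, nodes)

  print("buy %d x %s for ~%.1f units/sec"
        % (best[2], best[0], best[1]))

The per-node rate should really be a function of node count (that's
where the compute/communicate ratio above comes back in), but even
this crude version forces you to write your assumptions down.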

   rgb

Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:[EMAIL PROTECTED]

