On Sat, 16 Sep 2000 01:33:37 Dariush Pietrzak wrote:
> > 1000s of boxen of course. :)
> it's been said that it can support up to 6500 boxes, but I think that it
> would be way cheaper and faster to buy a parallel machine from SGI
Well, no. :) COTS is the way to go, and Beowulf is getting hotter as days
pass. COTS = Commodity Off The Shelf components. Have a look at
www.beowulf.org to learn about the history of the project. Beowulf machines
are among the TOP500 fastest supercomputers. (That has a web page, too, but
I don't remember where.) Stay subscribed to this list, and I'll post some of
my bookmarks.

> Amoeba is quite a strange thing, although I think I like its filesystem
> very much, and I don't like its license.
> There is a Linux version built on the Mach microkernel; do you think this
> would be any better?

What I meant is that supporting some kind of message passing, OO RPC, etc.
is a nice thing to have. You're not done with just that, of course; you'll
probably also need higher-level software: languages, runtime systems, and
so on. This is one of the places where the distinction between distributed
and parallel research starts to disappear.

> > > for scientific computing. Linear algebra, mostly. blas, scalapack, etc.
> blacs. Yes, but I ain't going to do much scientific computing; I'm
> thinking about some more general solutions.

Well, go ahead, but first check what's been done before. A textbook on
parallel programming will be instrumental. I recommend "Introduction to
Parallel Computing: Design and Analysis of Algorithms" by Kumar, Grama,
Gupta and Karypis. Those are the guys we're playing catch-up with. :) For
newer work, look at the research projects you can find on the Internet; you
may be interested in POOMA, for instance.

> > People also have tried automatic parallelization of sequential code,
> > but that doesn't work. :)
> Hmm? What about modern processors? Don't they do this, and do it quite
> nicely?

It's basically because sequential code is too sequential. :) Traditional
languages don't have semantics that parallelize well; a language with side
effects is a nightmare for any parallelizing compiler.
> > be sure that it is [Fortran] going to be viable for at least a couple
> > of decades. :)
> So are assemblers, but I ain't going to do much coding with asm either.

You aren't making the right analogy here. Assembly is low-level and C++ is
high-level; in parallel programming, passing messages with C or C++ is
low-level, while using a data-parallel language is high-level. HPF is one
of the (or the) most advanced compilation systems, considering what's under
the hood. So, if you don't like low-level stuff, avoid anything that
exposes architecture-specific details.

> > Some physicists like Charm++ also, but I don't find that very
> > efficient.
> What is Charm++?

Charm++ is a set of extensions to C++ that let you do OO message passing:
invoking a remote member function looks like executing an ordinary member
function. It might be nice if you're writing coarse-grained code.

Regards,

-- 
Eray (exa) Ozkural
Comp. Sci. Dept., Bilkent University, Ankara
e-mail: [EMAIL PROTECTED]
www: http://www.cs.bilkent.edu.tr/~erayo

