--- On Thu, 5/7/09, Vivek Ayer <[email protected]> wrote:

> From: Vivek Ayer <[email protected]>
> Subject: Recommendation for Beowulf/Apache Setup
> To: "misc" <[email protected]>
> Received: Thursday, May 7, 2009, 12:36 PM
>
> Hey guys,
>
> This is a very general question, but I'm not exactly sure how to
> proceed. I'll be getting a lot of hardware soon to be clustered and I
> was wondering what your take on the setup was.
>
> My setup was going to be:
>
> 1 OpenBSD router running 4.5, routing to a subnet of 13 nodes running
> FreeBSD 7.2. Of the 13 nodes, 1 node is a master MySQL server and the
> other 12 will run Apache, providing LAMP-like services. The router
> will round-robin using hoststated for load-balancing.

hoststated? What is that? I think you mean relayd! ;) (hoststated was
renamed to relayd back in OpenBSD 4.3.)
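If you do go with relayd, the round-robin setup you describe is only a
few lines of relayd.conf. A minimal sketch; the external address, the
table name and the node addresses below are placeholders, not anything
from your mail:

  # /etc/relayd.conf on the OpenBSD router (addresses are placeholders)
  ext_ip="192.0.2.1"

  # the 12 Apache nodes (listing 3 here for brevity)
  table <webfarm> { 10.0.0.11, 10.0.0.12, 10.0.0.13 }

  relay "www" {
          # accept client connections on the outside interface
          listen on $ext_ip port 80
          # round-robin across the pool; the HTTP check pulls dead
          # nodes out of rotation automatically
          forward to <webfarm> port 80 mode roundrobin check http "/" code 200
  }

relayd puts a node back in rotation once its check passes again, which
will matter if MPI jobs end up wedging your web nodes.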
> However, they will serve an additional task: The master mysql server
> will be head node for MPI jobs delivered to the 12 nodes. Basically,
> this setup will double up as a beowulf and web server. Is this
> efficient? I imagine the MPI jobs won't be running all the time and
> while they're up, might as well do something.

I think you are going to be heading for a world of hurt here. I am the
HPC director at a university supporting 3 faculties. Once people begin
to use the resource they *will* crash nodes. Having any critical
services running on HPC compute nodes is *not advisable*.

> Firstly, would you recommend BSD or Linux for this. The router is a
> given to have OpenBSD of course, but what about the others?

OS doesn't matter! It's all about the tools. We use GNU/Linux (CentOS
5) for our HPC cluster because there are more tools available natively
for it. This is an unfortunate fact: more and more applications out
there are becoming GNU/Linux-specific and just don't work properly, or
at all, on other OSs. Evaluate your tools and make a decision.

AFAIK, Open-MPI, MPICH and MPICH2 compile and run fine on the BSDs.
Other tools and libs, well, YMMV.
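If you want a quick smoke test of whichever MPI you pick before
building anything real on it, the classic hello-world is enough.
Assuming mpicc and mpirun are on your PATH (the flags below are
Open-MPI's):

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
          int rank, size, len;
          char host[MPI_MAX_PROCESSOR_NAME];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
          MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
          MPI_Get_processor_name(host, &len);
          printf("rank %d of %d on %s\n", rank, size, host);
          MPI_Finalize();
          return 0;
  }

Build and run it across the cluster ("nodes" here is just a file
listing your 12 compute nodes, one per line):

  $ mpicc hello.c -o hello
  $ mpirun -np 12 --hostfile nodes ./hello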
> I figured it makes sense to parallelize as much as possible so that
> the HTTP/MPI load can be shared among as many computers as possible.
> Let me know your thoughts.

Unless you have hard memory and CPU provisioning limiting what the
cluster nodes can do, a la Xen/VMware, forget about it. Trust me. I've
rebooted enough deadlocked/crashed nodes due to user error to know
better.

If you have to... well... NO CARRIER...
