Re: [Beowulf] Purdue Supercomputer

2008-05-04 Thread Mark Hahn
Does anyone know what the detailed plan is for building that thing with 200 people in just 1 day? I'm guessing it's mainly just the monkey work. I've heard that Dell always delivers each server in a separate box, so the most annoying part of building a Dell cluster is unboxing, racking and

Re: [Beowulf] Purdue Supercomputer

2008-05-04 Thread Matt Lawrence
On Sat, 3 May 2008, Alex Younts wrote: The machine will be running Red Hat Enterprise Linux 4. That's getting to be a bit dated, but still very well supported and extremely stable. -- Matt It's not what I know that counts. It's what I can remember in time to use.

Re: [Beowulf] Purdue Supercomputer

2008-05-04 Thread Alex Younts
Joshua Mora Acosta wrote: Does anyone know what the detailed plan is for building that thing with 200 people in just 1 day? Yep: I am very curious to understand which things can be done in parallel and which are serialized, from the point of view of installation, testing and

Re: [Beowulf] Purdue Supercomputer

2008-05-04 Thread Kilian CAVALOTTI
On Saturday 03 May 2008 23:41:08 Mark Hahn wrote: I'm guessing it's mainly just the monkey work. I've heard that Dell always delivers each server in a separate box, Not necessarily. Our 288-node Dell cluster was delivered in fully populated and pre-cabled racks. All the racking and

Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-05-04 Thread Mikhail Kuzminsky
In message from Ricardo Reis [EMAIL PROTECTED] (Fri, 2 May 2008 14:05:25 +0100 (WEST)): Does anyone know if/when there will be double-precision floating point on those little toys from Nvidia? The next-generation Tesla, but I don't know when. Or use an AMD FireStream 9170 instead :-) Mikhail Kuzminsky
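
For reference, here is a minimal sketch of how one could probe a CUDA device for native double-precision support at runtime. It assumes only the standard CUDA runtime API (cudaGetDeviceCount, cudaGetDeviceProperties) and uses the usual rule of thumb that hardware double precision requires compute capability 1.3 or higher; the earlier G80/G92-based Tesla boards report 1.0/1.1 and are single precision only.

/* dp_check.cu: report per-device compute capability and whether the
   board can do doubles in hardware (compute capability >= 1.3).
   Build with: nvcc -o dp_check dp_check.cu */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        /* double precision arrived with compute capability 1.3 */
        int has_double = (prop.major > 1) ||
                         (prop.major == 1 && prop.minor >= 3);
        printf("device %d: %s, compute capability %d.%d, double precision: %s\n",
               dev, prop.name, prop.major, prop.minor,
               has_double ? "yes" : "no");
    }
    return 0;
}

Running this on each node is a quick way to see which boards, if any, will run double-precision kernels natively rather than falling back to single precision.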