I dug up this PDF from Nvidia: http://www.nvidia.com/docs/IO/43395/tesla_product_overview_dec.pdf

Since I can't imagine coding a graphics card while it serves my X :-) I suppose one might put the PCIe card in a box with a cheap SVGA card for the least-cost CUDA experiment (one GPU, 128 "thread processors" per GPU). The deskside unit is 2 GPUs with 3 GB, the rackable is 4 GPUs with 6 GB; they have PCIe adapter cards to talk to your workstation.
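Just to make "least-cost CUDA experiment" concrete, here is roughly what I'd expect the first program to look like: a plain vector add against the CUDA runtime API. This is only my own sketch (file name, array size, and block size are made up, nothing here comes from the Tesla overview):

// vecadd.cu -- minimal first CUDA experiment (my own sketch, not from the docs)
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // device buffers
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // The launch grid is independent of the 128 "thread processors" per GPU:
    // pick a block size and enough blocks to cover n elements.
    int block = 256;
    int grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[12345] = %f (expect %f)\n", hc[12345], 3.0f * 12345);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}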
I think one plan might be something like GRAPE (the Gravity Pipe): perhaps one of the rackable units alternating with a CPU motherboard acting as net host and fileserver, so each (two-board) node has 4 GPUs for array computing.

Peter

P.S. Incidentally, while I was browsing Nvidia, I spec'd out a fantasy gaming rig: $23K, 256 GB of solid-state "disk", 3x 1 GB video cards, 180 W just to liquid-cool the CPU :-) Maybe next year.

On 6/19/08, John Hearns <[EMAIL PROTECTED]> wrote:
>
> On Wed, 2008-06-18 at 16:31 -0700, Jon Forrest wrote:
> > Kilian CAVALOTTI wrote:
> >
> > I'm glad you mentioned this. I've read through much of the information
> > on their web site and I still don't understand the usage model for
> > CUDA. By that I mean, on a desktop machine, are you supposed to have
> > 2 graphics cards, 1 for running CUDA code and one for regular
> > graphics? If you only need 1 card for both, how do you avoid the
> > problem you mentioned, which was also mentioned in the documentation?
>
> Actually, I should imagine Kilian is referring to something else,
> not the inbuilt timeout which is in the documentation. But I can't speak
> for him.
>
> > Or, if you have a compute node that will sit in a dark room,
> > you aren't going to be running an X server at all, so you won't
> > have to worry about anything hanging?
>
> I don't work for Nvidia, so I can't say!
> But the usage model is as you say - you can prototype applications which
> will run for a short time on the desktop machine, but long runs are
> meant to be done on a dedicated back-end machine.
> If you want a totally desk-side solution, they sell a companion box
> which goes alongside and attaches via a ribbon cable. I guess the art
> here is finding a motherboard with the right number and type of
> PCI-Express slots to take both the companion box and a decent graphics
> card for X use.
>
> > I'm planning on starting a pilot program to get the
> > chemists in my department to use CUDA, but I'm waiting
> > for V2 of the SDK to come out.
>
> Why wait? The hardware will be the same, and you can dip your toe in the
> water now.
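Re the one-card-vs-two question quoted above: my (unverified) understanding is that you can enumerate the devices at startup and steer the compute work at whichever card isn't driving the display, so long kernels don't trip the X watchdog. A hypothetical sketch along those lines; the watchdog flag I query is my assumption about what the runtime exposes, so treat it as a sketch rather than gospel:

// pickdev.cu -- hypothetical sketch: pick a CUDA device that is not
// subject to a display watchdog timeout (my assumption, not from the docs)
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    int chosen = 0;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("  device %d: %s, %d multiprocessor(s)\n",
               d, prop.name, prop.multiProcessorCount);
        // kernelExecTimeoutEnabled is set when a run-time watchdog applies
        // (typically the card driving the display); prefer one where it isn't.
        if (!prop.kernelExecTimeoutEnabled)
            chosen = d;
    }

    cudaSetDevice(chosen);
    printf("using device %d for compute\n", chosen);
    return 0;
}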