Suman Chakrabarty wrote:
Dear all,

I apologise that this is not a GROMACS-related question in a direct
sense. We are planning to build a small-scale cluster (up to 32 nodes
for the time being) that can handle both serial and parallel codes.
GROMACS and Amber are the major programs that will be run in parallel.

We have come across the recent product NVIDIA® Tesla™ Personal
Supercomputer: http://www.nvidia.com/object/personal_supercomputing.html
and we would like an expert opinion on whether this system is suitable
for running GROMACS.

While it looks surprisingly efficient with "the revolutionary NVIDIA®
CUDA™ parallel computing architecture and powered by up to 960 parallel
processing cores" in a single workstation, I would like to know how it
fares in terms of heat management and scaling. Will it be more
cost-effective than a simple Beowulf cluster built out of individual
Intel quad-core processors connected through a Gigabit switch?

Gigabit Ethernet is unlikely to scale effectively past a handful of GROMACS processors - there have been quite a few posts on this in the last six months.

On the Tesla, my boss asked me for an opinion last month. I said that while there seemed to be no serious in-principle problems for MD simulations on GPUs (e.g. FFT libraries exist for PME, and precision limitations are going away soon), I would stick with conventional hardware for all kinds of computational chemistry until someone has actually done the hard work of making it run well on the new architecture. GROMACS gets its speed from processor-optimized inner loops. Even recasting the generic C loops to work in the new parallel environment looks like a serious project for an experienced GROMACS developer (say 3-6 months). If you were going to do *that*, then you should at least press NVIDIA for free access to hardware, since what you would be doing is creating a market for them to sell to.

Mark
_______________________________________________
gmx-users mailing list    [email protected]
http://www.gromacs.org/mailman/listinfo/gmx-users
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to [email protected].
Can't post? Read http://www.gromacs.org/mailing_lists/users.php
