Dear all,

I apologise that this is not a Gromacs-related question in a direct
sense. We are planning to build a small-scale cluster (up to 32 nodes
for the time being) that can handle both serial and parallel codes.
Gromacs and Amber are the main programs that will be run in parallel.

We have come across the recent NVIDIA® Tesla™ Personal
Supercomputer: http://www.nvidia.com/object/personal_supercomputing.html
and we would appreciate expert opinions on whether this system is
suitable for running Gromacs.

While it looks surprisingly powerful, with "the revolutionary NVIDIA®
CUDA™ parallel computing architecture and powered by up to 960 parallel
processing cores" in a single workstation, I would like to know how it
fares in terms of heat management and scaling. Will it be more
cost-effective than a simple Beowulf cluster built from individual Intel
quad-core processors connected through a Gigabit Ethernet switch?
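
For context, the kind of comparison we are trying to make is the simple
back-of-the-envelope calculation below (a rough Python sketch; every
number in it is a placeholder, not a real quote or a measured Gromacs
benchmark, and would be replaced by vendor prices and actual timings):

    # Back-of-the-envelope price/performance sketch.
    # All numbers are placeholders, NOT real quotes or measured
    # benchmarks; they stand in for vendor prices and Gromacs timings.

    def cost_per_throughput(purchase_price, power_watts, years,
                            kwh_price, ns_per_day):
        """Total cost of ownership per unit of simulation throughput."""
        energy_cost = power_watts / 1000.0 * 24 * 365 * years * kwh_price
        return (purchase_price + energy_cost) / ns_per_day

    # Option A (hypothetical): Tesla-based workstation
    tesla = cost_per_throughput(purchase_price=8000.0,   # placeholder
                                power_watts=1200.0,      # placeholder
                                years=3, kwh_price=0.10, # placeholder
                                ns_per_day=20.0)         # placeholder

    # Option B (hypothetical): quad-core Beowulf cluster + Gigabit switch
    beowulf = cost_per_throughput(purchase_price=15000.0, # placeholder
                                  power_watts=3000.0,     # placeholder
                                  years=3, kwh_price=0.10,
                                  ns_per_day=30.0)        # placeholder

    print("Cost per ns/day over 3 years: Tesla %.0f vs Beowulf %.0f"
          % (tesla, beowulf))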

If any of you have experience with or opinions on such systems, please
share them with me. It would be immensely helpful in making our
decision.


Thanks and regards,
Suman Chakrabarty.
