Andrei Neamtu wrote:
Hello gmx,

I have a question regarding the InfiniBand interconnect: is there
any difference in performance between integrated on-board InfiniBand
(e.g. Mellanox) and PCI-Express InfiniBand adapters (due to PCI-Express
limitations)? Which one is recommended? We are in the
process of buying a cluster on which GROMACS will be the main
computational engine.
Any help will be greatly appreciated!
Thank you,
Andrei

If you are buying a serious cluster, ask the vendor to let you run benchmarks. The PCI bus can be a bottleneck, although its throughput is still quite OK. It also depends on how many cores are sharing the connection.
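
For the benchmarking itself, a minimal sketch like the one below can help compare candidate machines: it reruns the same prepared benchmark .tpr over an increasing number of MPI ranks and pulls the final "Performance:" line out of each log. The launcher, binary and file names (mpirun, mdrun_mpi, bench.tpr) are assumptions; adjust them to your MPI setup and GROMACS build.

#!/usr/bin/env python
# Sketch only: run one GROMACS benchmark per rank count and report the
# performance summary from each log, so scaling over the interconnect
# can be compared between machines. Names below are assumptions.
import subprocess

def run_bench(nprocs):
    """Run the benchmark on nprocs MPI ranks; return the log file name."""
    deffnm = "bench_np%d" % nprocs
    cmd = ["mpirun", "-np", str(nprocs),
           "mdrun_mpi", "-s", "bench.tpr", "-deffnm", deffnm]
    subprocess.check_call(cmd)
    return deffnm + ".log"

def read_performance(logfile):
    """Return the 'Performance:' summary line from the mdrun log, if present."""
    with open(logfile) as fh:
        for line in fh:
            if line.startswith("Performance:"):
                return line.strip()
    return "no performance line found in " + logfile

if __name__ == "__main__":
    for n in (2, 4, 8, 16, 32):
        log = run_bench(n)
        print("%3d ranks: %s" % (n, read_performance(log)))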






--
David van der Spoel, Ph.D.
Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205. Fax: +4618511755.
[EMAIL PROTECTED]       [EMAIL PROTECTED]   http://folding.bmc.uu.se