Two things I forgot:
- Ryzen chipsets are limited in the number of PCIe lanes, so if you
plug in a second card (e.g. an IB adapter), both cards will drop to
x8, which means GPU transfers will be slower too. That said, this may
not be a big issue if you run multiple ranks per GPU, which will
provide some overlap between transfers and compute.
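As a rough sketch of such a launch (assuming the GROMACS 2018
-gputasks syntax and one 8-core node with a single GPU; "md" is just a
placeholder run name):

  # Two thread-MPI ranks both offloading nonbondeds to GPU 0, so their
  # host-device transfers and kernels can overlap and hide some of the
  # x8 penalty (older versions used -gpu_id 00 for this mapping):
  gmx mdrun -deffnm md -ntmpi 2 -ntomp 4 -nb gpu -gputasks 00
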
Hi,
Note that it matters a lot how far you want to parallelize and what
kind of runs you would do. 10 GbE with RoCE may well be enough to
scale across a couple of such nodes, especially if you can squeeze PME
into a single node and avoid the MPI collectives across the network.
You may not even need InfiniBand at all.
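As a sketch of what that could look like (assuming two 8-core nodes,
an MPI launcher that fills nodes in rank order, and a placeholder run
name):

  # 12 PP ranks + 4 PME ranks; -ddorder pp_pme numbers the PME ranks
  # last, so with block rank placement they land together on one node
  # and the PME collectives stay off the inter-node link:
  mpirun -np 16 gmx_mpi mdrun -deffnm md -npme 4 -ddorder pp_pme
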
Hi,
GROMACS does not care much about bandwidth, but rather about message
latency and message injection rate (which in some cases depend on what
else is sharing the network). For those, even high-quality gigabit
Ethernet *can* be good enough, so likely any InfiniBand product will
be just fine.
Hi everyone,
Our group is also interested in using a cloud GPU cluster. Amazon only
offers GPU clusters connected by 10 Gb/s links. I noticed this post,
but there has been no reply so far. It would be nice if someone could
give any clues.
Regards,
Simon
2018-03-06 1:31 GMT+08:00 Daniel Bauer
Hello,
In our group, we have multiple identical Ryzen 1700X / NVIDIA GeForce
GTX 1080 compute nodes and are thinking about interconnecting them via
InfiniBand.
Does anyone have information on what bandwidth is required by GROMACS
for communication via InfiniBand (MPI + trajectory writing) and how it
scales with the number of nodes?