found it.
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
GPUs are assigned to PP ranks within the same physical node in a sequential
order, that is GPU 0 to the (thread-)MPI rank 0, GPU 1 to rank 1. In order
to manually specify which GPU(s) to be used by mdrun, the
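The sequential assignment described on that page can be overridden with mdrun's -gpu_id string. A minimal sketch for a single node with two GPUs (the flag names -ntmpi, -ntomp, and -gpu_id are from the GROMACS 5.0 mdrun documentation; thread counts and the -deffnm file name are placeholders):

```shell
# Automatic mapping: GPU 0 goes to (thread-)MPI rank 0, GPU 1 to rank 1.
mdrun -ntmpi 2 -ntomp 4 -deffnm npt

# Manual mapping with -gpu_id: the digit string "01" assigns GPU 0 to the
# first PP rank and GPU 1 to the second.
mdrun -ntmpi 2 -ntomp 4 -gpu_id 01 -deffnm npt
```

Note -ntmpi only applies to thread-MPI builds; with a real-MPI build the rank count comes from mpirun instead.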
Thank you again for the reply.
-ntmpi is for thread-MPI, but I am using OpenMPI for MPI since I plan to
use multiple nodes.
As I pointed out in case 7 of my post, if I use -ntmpi I get a fatal
error that says: thread-MPI threads were requested, but GROMACS is not
compiled with thread-MPI.
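With a real-MPI (e.g. OpenMPI) build, ranks are launched by mpirun rather than requested with -ntmpi. A hedged sketch (the binary name mdrun_mpi and the node/rank counts are assumptions for illustration; -npernode is an Open MPI mpirun option):

```shell
# Sketch for an OpenMPI build of GROMACS 5.0.x; the MPI-enabled binary is
# commonly installed as mdrun_mpi (the name varies by site).
# 4 MPI ranks spread over 2 nodes, 2 ranks (one per GPU) on each node:
mpirun -np 4 -npernode 2 mdrun_mpi -ntomp 4 -gpu_id 01 -deffnm npt

# -ntmpi is not valid here: real-MPI builds are compiled without
# thread-MPI, which produces the fatal error quoted in this thread.
```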
On Wed, Sep 24, 2014 at 5:57 PM, Siva Dasetty sdas...@g.clemson.edu wrote:
Well... I think I read somewhere that thread-MPI is a drop-in
replacement for real MPI. OpenMPI is a real MPI, so the two shouldn't be
combined.
I think we choose that when we compile GROMACS (whether we use real MPI or
not). Thread-MPI is enabled by default if we didn't compile for real MPI.
What happened when you ran without a GPU? I installed 5.0.1 on a single
machine without a GPU. It used thread-MPI, no real MPI, and ran fine.
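The build-time choice described above can be sketched with GROMACS's CMake options (GMX_MPI and GMX_GPU are real GROMACS CMake variables; the rest of the configure line depends on your toolchain and is omitted here):

```shell
# Default configuration: thread-MPI is enabled, real MPI is off.
# Good for single-node runs; mdrun accepts -ntmpi.
cmake .. -DGMX_GPU=ON

# Real-MPI configuration for multi-node runs: thread-MPI is disabled,
# so mdrun's -ntmpi option no longer applies and mpirun sets the ranks.
cmake .. -DGMX_MPI=ON -DGMX_GPU=ON
```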
On Wed, Sep 24, 2014 at 12:21 PM, Johnny Lu johnny.lu...@gmail.com wrote:
Try -nt, -ntmpi, -ntomp, -np (one at a time)?
I forget what I tried now, but I just stopped the mdrun and then
read the log file.
You can also look at the mdrun page in the official manual (PDF) and try
this page:
http://www.gromacs.org/Documentation/Gromacs_Utilities/mdrun?highlight=mdrun
Thank you, Lu, for the reply.
As I mentioned in the post, I have already tried those options, but they
didn't work. Please let me know if you have any more suggestions.
Thank you,
On Tue, Sep 23, 2014 at 8:41 PM, Johnny Lu johnny.lu...@gmail.com wrote:
Dear All,
I am trying to run NPT simulations with GROMACS version 5.0.1 on a system
of about 140k atoms (protein + water) using 2 or more GPUs
(model K20), 8 or more cores, and 1 or more nodes. I am trying to
understand how to run simulations using multiple GPUs on more than one
node.