Yes, the normal MD works fine, and PLUMED also works in a CPU-only run.
It seems that PLUMED doesn't support GPU runs like this?
On 08/02/2017 01:02 PM, Mark Abraham wrote:
Hi,
My first guess is that the implementation of PLUMED doesn't support this.
Does a normal non-PLUMED simulation run correctly when called in this
manner?
Hi,
My first guess is that the implementation of PLUMED doesn't support this.
Does a normal non-PLUMED simulation run correctly when called in this
manner?
Mark
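A minimal way to run that check, assuming the same input files, is the
identical launch with only the -plumed flag removed:

    mpirun -np 4 gmx_mpi mdrun -v -g 7.log -s 7.tpr -x 7.xtc -c 7.gro \
        -e 7.edr -ntomp 2 -gpu_id 0123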
On Wed, Aug 2, 2017 at 9:55 AM Albert wrote:
> Hello,
>
> I am trying to run Gromacs with the following command line:
>
>
> mpirun -np 4 gmx_mpi mdrun -v -g 7.log -s 7.tpr -x 7.xtc -c 7.gro -e
> 7.edr -plumed plumed.dat -ntomp 2 -gpu_id 0123
Hello,
I am trying to run Gromacs with the following command line:
mpirun -np 4 gmx_mpi mdrun -v -g 7.log -s 7.tpr -x 7.xtc -c 7.gro -e
7.edr -plumed plumed.dat -ntomp 2 -gpu_id 0123
but it always failed with the following messages:
Running on 1 node with total 24 cores, 48 logical cores,
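For reference, mdrun maps the digits of -gpu_id to the PP ranks on each
node in order, so this launch assumes one node with four usable GPUs; a
minimal sketch of the intended mapping:

    # rank 0 -> GPU 0, rank 1 -> GPU 1, rank 2 -> GPU 2, rank 3 -> GPU 3
    mpirun -np 4 gmx_mpi mdrun -ntomp 2 -gpu_id 0123 -s 7.tpr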
I recompiled Gromacs-5.0.1 and it finally works now.
Probably I made some mistakes in the previous compilation.
Thanks a lot, guys.
regards
Albert
On 09/09/2014 09:16 AM, Carsten Kutzner wrote:
Hi,
from the double output it looks like two identical mdruns,
each with 1 PP process and 10 OpenMP threads, are started.
Thank you for the reply.
I compiled it with the command:
env CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90
CMAKE_PREFIX_PATH=/home/albert/install/intel-2013/mkl/include/fftw:/home/albert/install/intel-mpi/bin64
cmake .. -DBUILD_SHARED_LIB=OFF -DBUILD_TESTING=OFF
-DCMAKE_INSTALL_PREFIX=/home/
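As a sketch, a cleaner configure line for a real-MPI build would be
(the install prefix here is a placeholder; -DGMX_MPI=ON is what selects
real MPI over the built-in thread-MPI):

    CC=mpicc CXX=mpicxx cmake .. \
        -DGMX_MPI=ON \
        -DBUILD_SHARED_LIBS=OFF \
        -DBUILD_TESTING=OFF \
        -DCMAKE_INSTALL_PREFIX=$HOME/opt/gromacs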
Hi,
from the double output it looks like two identical mdruns,
each with 1 PP process and 10 OpenMP threads, are started.
Maybe there is something wrong with your MPI setup (did
you by mistake compile with thread-MPI instead of MPI?)
Carsten
On 09 Sep 2014, at 09:06, Albert wrote:
> Here is more information from the log file:
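One quick way to tell which flavor a given binary was built with,
assuming GROMACS 5.x, is the version header, which names the MPI
library:

    mdrun_mpi -version | grep -i 'MPI library'
    # 'MPI library: MPI' means real MPI; 'thread_mpi' means thread-MPI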
Here is more information from the log file:
mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g
npt2.log -gpu_id 01 -ntomp 0
Number of hardware threads detected (20) does not match the number
reported by OpenMP (10).
Consider setting the launch configuration manually!
Number of hardware threads detected (20) does not match the number
reported by OpenMP (10).
Consider setting the launch configuration manually!
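Setting the launch configuration explicitly, as the warning suggests,
could look like this sketch (two ranks with ten threads each to cover
the 20 cores, one GPU per rank):

    mpirun -np 2 mdrun_mpi -ntomp 10 -gpu_id 01 -pin on -s npt2.tpr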
Thanks a lot for the replies, both Yunlong and Szilard.
I didn't set up a PBS system or nodes on the workstation. The GPU
workstation contains one CPU with 20 cores and two GPUs, so it is
similar to one node with two GPUs.
But I don't know why 4.6.5 works but 5.0.1 doesn't ...
Thx again f
Same idea as Szilard.
How many nodes are you using?
On one node, how many MPI ranks do you have? The error is complaining that
you assigned two GPUs to only one MPI process on one node. If you spread your
two MPI ranks across two nodes, that means you only have one on each. Then you
can't assign two GPUs to a single rank.
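In other words, the -gpu_id string must supply one digit per PP rank on
a node; a sketch of the two cases (the second launch assumes Open MPI's
--map-by option):

    # one node, two ranks: one -gpu_id digit per rank
    mpirun -np 2 mdrun_mpi -gpu_id 01 -s npt2.tpr
    # two nodes, one rank each: each rank can only use one digit
    mpirun -np 2 --map-by node mdrun_mpi -gpu_id 0 -s npt2.tpr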
Hi,
It looks like you're starting two ranks and passing two GPU IDs, so it
should work. The only thing I can think of is that you are either
getting the two MPI ranks placed on different nodes or that for some
reason "mpirun -np 2" is only starting one rank (MPI installation
broken?).
Does the sam
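A quick sanity check, independent of GROMACS, that mpirun actually
starts two ranks:

    mpirun -np 2 hostname
    # two lines of output = two ranks started; one line = broken launcher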
Hello:
I am trying to use the following command in Gromacs-5.0.1:
mpirun -np 2 mdrun_mpi -v -s npt2.tpr -c npt2.gro -x npt2.xtc -g
npt2.log -gpu_id 01 -ntomp 10
but it always failed with the following messages:
2 GPUs detected on host cudaB:
#0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no,