Hello all,

I am running a polymer melt simulation (100,000 atoms, 2 fs time step, PME) on a
workstation with the following specifications:

2x Intel Xeon Gold 6128 CPUs (6 cores each, 3.4 GHz)
2x 16 GB DDR4-2666 RAM
2x NVIDIA RTX 2080 Ti (11 GB)

I have installed a GPU- and thread-MPI-enabled GROMACS 2019 build using:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_THREAD_MPI=ON -DGMX_GPU=ON

When running a single job with the command below, I get a performance of
65 ns/day.

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -gpu_id 0 -ntmpi 1 -ntomp 24

Q1. However, I want to run two different simulations at the same time, each
using half of the CPU cores and one GPU. Can somebody help me with the mdrun
command (what combination of -ntmpi and -ntomp) I should use to run the two
simulations with efficient utilization of the CPU cores and one GPU each?
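
For reference, this is the kind of split I have been experimenting with (only
a sketch: t2.tpr is a hypothetical second input, and the -pinoffset/-pinstride
values are my guess for keeping the two runs on separate cores, following the
pinning examples in the mdrun documentation):

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 12 -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
gmx_tmpi mdrun -v -s t2.tpr -c t2.pdb -ntmpi 1 -ntomp 12 -gpu_id 1 -pin on -pinoffset 12 -pinstride 1 &

Is this the right approach, or is there a better -ntmpi/-ntomp combination?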

Q2. I have also tried using the GPU for the PME calculation with -pme gpu,
as in the command

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -gputasks 01 -nb gpu -pme gpu

but I get the following error:

"Feature not implemented: The input simulation did not use PME in a way
that is supported on the GPU."

Why does this error occur? Do I need to pass extra options when compiling
GROMACS?
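
In case it is relevant: my understanding from the documentation is that PME
offload in GROMACS 2019 only supports a limited set of settings, so I have
been checking my .mdp against something like the following (my own summary,
please correct me if I have this wrong):

; PME settings I believe are required for -pme gpu in GROMACS 2019
coulombtype  = PME
pme-order    = 4        ; only order 4 is supported on the GPU
vdwtype      = cut-off  ; LJ-PME (vdwtype = PME) is not offloaded
free-energy  = no       ; perturbed charges are not supported with GPU PME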

Thanks