Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-19 Thread Mark Abraham
Hi,

Yes, that's expected. If you want to run two simulations in parallel, then you need to follow the advice in the user guide. Two plain calls to gmx mdrun cannot work usefully.

Mark
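The user guide's recipe amounts to dividing the cores and GPUs explicitly between the two runs. A minimal sketch, assuming a 12-core box with GPUs 0 and 1 (the core counts and file names here are assumptions, not taken from the thread):

  gmx mdrun -deffnm sim1 -ntmpi 1 -ntomp 6 -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
  gmx mdrun -deffnm sim2 -ntmpi 1 -ntomp 6 -gpu_id 1 -pin on -pinoffset 6 -pinstride 1 &
  wait

The distinct -pinoffset values keep the two runs on disjoint cores; without them, both jobs pin to the same cores and slow each other down.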

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-19 Thread Mark Abraham
Hi,

Those commands are listed in the user guide; please look there :-)

Mark
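For reference, the user guide's built-in multi-simulation machinery needs an MPI-enabled build (gmx_mpi rather than the thread-MPI gmx). A minimal sketch, with the directory names assumed:

  mpirun -np 2 gmx_mpi mdrun -multidir sim1 sim2

Each rank runs the tpr found in its own directory, and mdrun divides the visible GPUs between the ranks by default.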

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-13 Thread Nikhil Maroli
Initially, I tried to run 2+ jobs on my workstation with multiple GPUs; what I found is that running one simulation at a time is much faster than running them in parallel. You are not going to get equal, or even exactly half, the performance when running inputs in parallel.

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-13 Thread Pragati Sharma
Hi Paul,

The option -pme gpu works when I give pme-order = 4 in the mdp file instead of 3, but it gives me an increase of only 6-7 ns/day.

@Dave M: I am seeing the same thing: if I run just one simulation, I get almost double the performance compared to when I run two simulations on two GPUs.
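This matches a documented limitation: the GPU PME implementation supports only an interpolation order of 4, which is why order 3 fails. The relevant .mdp lines would look something like this (the fourierspacing value is an example, not taken from the thread):

  coulombtype    = PME
  pme-order      = 4      ; GPU PME supports only order 4
  fourierspacing = 0.12   ; example value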

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Dave M
Hi Paul,

I just jumped into this discussion, but I am wondering: is CUDA_VISIBLE_DEVICES equivalent to providing -gpu_id in mdrun?

Also, my multiple simulations run slower on the same node with multiple GPUs, e.g. on a node with 4 GPUs and 64 CPUs:

mpirun -np 1 mdrun -ntomp 24 -gpu_id 0 -pin on
mpirun
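The two mechanisms differ in one visible way: CUDA_VISIBLE_DEVICES hides devices before mdrun starts, so the remaining devices are renumbered from 0, while -gpu_id selects among the devices mdrun can already see. A sketch (file names hypothetical):

  CUDA_VISIBLE_DEVICES=1 gmx mdrun -deffnm sim    # only physical GPU 1 is visible; mdrun sees it as id 0
  gmx mdrun -deffnm sim -gpu_id 1                 # both GPUs visible; mdrun restricted to id 1

Note also that two concurrent runs each using -pin on without distinct -pinoffset values will pin their threads to the same cores, which by itself can explain a large slowdown.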

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Paul bauer
Hello,

The error you are getting at the end means that your simulation likely does not use PME, or uses it in a way that is not implemented to run on the GPU. You can still run the nonbonded calculations on the GPU; just remove the -pme gpu flag. For running different simulations on your
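In other words, a minimal sketch of the suggested fallback, keeping only the nonbondeds on the GPU (the -deffnm name is a placeholder):

  gmx mdrun -deffnm md -nb gpu -pme cpu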

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Pragati Sharma
Thanks Nikhil. About the second question: it is actually implemented, as you can see from the link below; however, I cannot run these commands without error.

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-July/126012.html

Re: [gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Nikhil Maroli
You can assign part of the cores and one GPU to one job, and another part with a separate command. For example:

1. gmx mdrun -ntmpi XX -ntomp YY -gpu_id K1
2. gmx mdrun -ntmpi XX2 -ntomp YY2 -gpu_id K2

The second part of the question is related to the implementation of such calculations on the GPU, which is
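As written, those two commands may still contend for the same cores; pinning each job to its own slice avoids that (the offsets here are placeholders tied to the YY thread count):

  gmx mdrun -ntmpi XX -ntomp YY -gpu_id K1 -pin on -pinoffset 0
  gmx mdrun -ntmpi XX2 -ntomp YY2 -gpu_id K2 -pin on -pinoffset YY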

[gmx-users] GROMACS 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Pragati Sharma
Hello all,

I am running a polymer melt with 10 atoms, 2 fs time step, PME, on a workstation with these specifications:

2 x Intel Xeon 6128 (3.4 GHz, 6-core)
2 x 16 GB DDR4-2666 RAM
2 x RTX 2080Ti 11 GB

I have installed GPU and thread-MPI enabled GROMACS 2019.0 using: cmake ..
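For reference, a typical GPU configure for the 2019 series would look something like this (thread-MPI is the default build; these are not necessarily the poster's exact flags, and the install prefix here is an assumption):

  cmake .. -DGMX_GPU=on -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-2019
  make -j 12
  make install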