Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-19 Thread Mark Abraham
Hi,

Yes, that's expected. If you want to run two simulations in parallel, then
you need to follow the advice in the user guide. Two plain calls to gmx
mdrun cannot work usefully.
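
For example, a minimal sketch of the kind of launch the guide describes (the
.tpr names and the split into 12 threads per run on 24 hardware threads are
my assumptions, not from this thread):

gmx mdrun -s run1.tpr -ntmpi 1 -ntomp 12 -nb gpu -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
gmx mdrun -s run2.tpr -ntmpi 1 -ntomp 12 -nb gpu -gpu_id 1 -pin on -pinoffset 12 -pinstride 1 &

The point is that each run gets its own GPU and, via -pin on / -pinoffset, a
non-overlapping set of cores; without the pinning options the two jobs end up
competing for the same cores, which is largely why two plain calls do not
work well.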

Mark

On Fri., 13 Dec. 2019, 11:22 Nikhil Maroli wrote:

> Initially, I tried to run two or more jobs on my workstation with multiple
> GPUs. What I found is that running one simulation at a time is much faster
> than running them in parallel.
> You are not going to get equal performance, or even exactly half, when the
> jobs run in parallel.


Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-19 Thread Mark Abraham
Hi,

Those commands are listed in the user guide; please look there :-)
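
For instance, one approach documented there is -multidir, which runs both
simulations under a single mdrun and shares out the GPUs between them; note
that it needs an MPI-enabled build (gmx_mpi), not the thread-MPI build
discussed below. A rough sketch, with run1 and run2 as hypothetical
directories that each contain a topol.tpr:

mpirun -np 2 gmx_mpi mdrun -multidir run1 run2 -ntomp 12 -nb gpu -pin on

The alternative is two separate mdrun invocations with -pin on, -pinoffset
and -gpu_id so that the runs share neither cores nor GPUs.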

Mark

On Fri., 13 Dec. 2019, 10:10 Pragati Sharma wrote:

> Hi Paul,
>
> The -pme gpu option works when I set pme-order = 4 in the .mdp file instead
> of 3, but it only gives an increase of 6-7 ns/day.
>
> @Dave M: I am seeing the same thing. If I run just one simulation, I get
> almost double the performance compared to when I run two simulations on two
> GPUs as follows:
>
> mdrun -ntomp 12 -gpu_id 0 -nb gpu
> mdrun -ntomp 12  -gpu_id 1  -nb gpu
>
> From the various benchmarks I have seen, performance should not drop to
> half when running two simulations.
>
> I need an mdrun command line that makes maximal use of the CPUs and GPUs.

Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-13 Thread Nikhil Maroli
Initially, I tried to run two or more jobs on my workstation with multiple
GPUs. What I found is that running one simulation at a time is much faster
than running them in parallel.
You are not going to get equal performance, or even exactly half, when the
jobs run in parallel.


Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-13 Thread Pragati Sharma
Hi Paul,

The -pme gpu option works when I set pme-order = 4 in the .mdp file instead
of 3, but it only gives an increase of 6-7 ns/day.
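
For reference, the relevant .mdp lines would then look something like this
(a sketch only; 4 is also the GROMACS default, and it is the only PME
interpolation order the GPU code supports):

coulombtype = PME
pme-order   = 4    ; GPU PME supports only 4th-order interpolation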

@Dave M: I am seeing the same thing. If I run just one simulation, I get
almost double the performance compared to when I run two simulations on two
GPUs as follows:

mdrun -ntomp 12 -gpu_id 0 -nb gpu
mdrun -ntomp 12  -gpu_id 1  -nb gpu

From the various benchmarks I have seen, performance should not drop to
half when running two simulations.

I need an mdrun command line that makes maximal use of the CPUs and GPUs.

On Fri, Dec 13, 2019 at 1:14 PM Dave M wrote:

> Hi Paul,
>
> I just jumped into this discussion, but I am wondering: is setting
> CUDA_VISIBLE_DEVICES equivalent to providing -gpu_id to mdrun?
> Also, my multiple simulations run slower on the same node with multiple
> GPUs, e.g. on a node with 4 GPUs and 64 CPU cores:
> mpirun -np 1 mdrun -ntomp 24 -gpu_id 0 -pin on
> mpirun -np 1 mdrun -ntomp 24 -gpu_id 2 -pin on
>
> If I run just one simulation I get almost double the performance (with the
> same command as above).
>
> Dave
>


Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Dave M
Hi Paul,

I just jumped into this discussion, but I am wondering: is setting
CUDA_VISIBLE_DEVICES equivalent to providing -gpu_id to mdrun?
Also, my multiple simulations run slower on the same node with multiple
GPUs, e.g. on a node with 4 GPUs and 64 CPU cores:
mpirun -np 1 mdrun -ntomp 24 -gpu_id 0 -pin on
mpirun -np 1 mdrun -ntomp 24 -gpu_id 2 -pin on

If I run just one simulation I get almost double the performance (with the
same command as above).

Dave

On Thu, Dec 12, 2019 at 11:22 PM Paul bauer wrote:

> Hello,
>
> the error you are getting in the end means that your simulation likely
> does not use PME, or uses it in a way that is not implemented to run on
> the GPU.
> You can still run the nonbonded calculations on the GPU, just remove the
> -pme gpu flag.
>
> For running different simulations on your GPUs, you need to set the
> environment variable CUDA_VISIBLE_DEVICES so that each simulation only
> sees one of the available GPUs.
>
> Cheers
>
> Paul
>


Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Paul bauer

Hello,

the error you are getting in the end means that your simulation likely 
does not use PME, or uses it in a way that is not implemented to run on 
the GPU.
You can still run the nonbonded calculations on the GPU, just remove the 
-pme gpu flag.
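
With the command from the original mail, that would look something like:

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -gpu_id 0 -nb gpu

(i.e. keep -nb gpu but drop -pme gpu, and also the -gputasks string, since
its length has to match the number of GPU tasks.)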


For running different simulations on your GPUs, you need to set the 
environment variable CUDA_VISIBLE_DEVICES so that each simulation only 
sees one of the available GPUs.
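
For two independent runs that could look something like this (run1.tpr and
run2.tpr are placeholders; note that each process then sees its assigned
card as GPU id 0):

CUDA_VISIBLE_DEVICES=0 gmx_tmpi mdrun -s run1.tpr -ntmpi 1 -ntomp 12 -nb gpu &
CUDA_VISIBLE_DEVICES=1 gmx_tmpi mdrun -s run2.tpr -ntmpi 1 -ntomp 12 -nb gpu &

For best performance each run should also be pinned to its own set of cores
(-pin on plus a suitable -pinoffset).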


Cheers

Paul




--
Paul Bauer, PhD
GROMACS Release Manager
KTH Stockholm, SciLifeLab
0046737308594



Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Pragati Sharma
Thanks, Nikhil.
About the second question: it is actually implemented, as you can see from
the link below; however, I cannot run these commands without error.

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-July/126012.html

On Fri, Dec 13, 2019 at 12:28 PM Nikhil Maroli wrote:

> You can assign part of the cores and one GPU to one job, and another part
> with a separate command. For example:
> 1. gmx mdrun -ntmpi XX -ntomp YY -gpu_id K1
> 2. gmx mdrun -ntmpi XX2 -ntomp YY2 -gpu_id K2
>
> The second part of the question is about the implementation of such
> calculations on the GPU, which is not yet implemented.


Re: [gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Nikhil Maroli
You can assign part of the cores and one GPU to one job, and another part
with a separate command. For example:
1. gmx mdrun -ntmpi XX -ntomp YY  -gpu_id K1
2. gmx mdrun -ntmpi XX2 -ntomp YY2 -gpu_id K2
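
On the 12-core, two-GPU workstation described in the original post, filling
in those placeholders could look like this (a sketch only; it assumes
hyper-threading, i.e. 24 hardware threads, and adds pinning so the two jobs
sit on separate cores):

1. gmx mdrun -ntmpi 1 -ntomp 12 -gpu_id 0 -pin on -pinoffset 0
2. gmx mdrun -ntmpi 1 -ntomp 12 -gpu_id 1 -pin on -pinoffset 12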

The second part of the question is about the implementation of such
calculations on the GPU, which is not yet implemented.


[gmx-users] Gromacs 2019.0, simulations using 2 GPUs: RTX 2080Ti

2019-12-12 Thread Pragati Sharma
Hello all,

I am running a polymer melt with 10 atoms, a 2 fs time step, and PME, on a
workstation with the following specifications:

2X Intel Xeon 6128 3.4 2666 MHz 6-core CPU
2X16 GB DDR4 RAM
2XRTX 2080Ti 11 GB

I have installed a GPU- and thread-MPI-enabled GROMACS 2019.0 build using:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_THREAD_MPI=ON -DGMX_GPU=ON

While running a single job with the command below, I am getting a
performance of 65 ns/day.

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -gpu_id 0 -ntmpi 1 -ntomp 24

Q. However, I want to run two different simulations at a time, using CPU
cores and one GPU for each. Can somebody help me with the mdrun command
(what combination of -ntmpi and -ntomp) I should use to run two simulations
with efficient utilization of the CPU cores and one GPU each?

Q. I have also tried utilising the GPU for PME calculations using -pme gpu,
as in the command

gmx_tmpi mdrun -v -s t1.tpr -c t1.pdb -ntmpi 1 -ntomp 24 -gputasks 01 -nb gpu -pme gpu

but I get the error below:

"Feature not implemented: The input simulation did not use PME in a way
that is supported on the GPU."

Why does this error occur? Should I add extra options when compiling
GROMACS?

Thanks