Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread Omkar Singh
Hi,
I tried the "gmx gangle ..." command, but I am not getting a good result
because I am unsure about the .ndx file. Can you help me make the .ndx file?
How should I select the atoms for the vectors?
Thanks
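
Not a definitive recipe, but a minimal sketch of one way to do this, assuming a 3-site water model whose atoms are named OW, HW1, HW2, and placeholder file names (traj.xtc, topol.tpr). Recent gmx gangle accepts selections directly, so a hand-written .ndx file may not be needed: consecutive pairs of selected positions define the vectors.

```shell
# OH vectors: each (OW, HW1) pair defines one vector; compare against +z (-g2 z).
gmx gangle -f traj.xtc -s topol.tpr -g1 vector -group1 'name OW HW1' -g2 z -oh oh_dist.xvg

# HH vectors: each (HW1, HW2) pair defines one vector.
gmx gangle -f traj.xtc -s topol.tpr -g1 vector -group1 'name HW1 HW2' -g2 z -oh hh_dist.xvg
```

If an index file is preferred instead, the group must list the same atoms in that pair order; -oav writes the time-averaged angle and -oh the angle histogram.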

On Mon, Jul 29, 2019, 22:50 David van der Spoel 
wrote:

> On 2019-07-29 at 18:26, Omkar Singh wrote:
> > Hi,
> > I mean that I want to calculate the angle between the OH, HH, and dipole
> > vectors and the positive z-axis. How can I make an index file for this? And
> > is it possible that the angle distribution of these vectors for bulk water
> > is approximately linear? I hope the question is clear now.
> Probably. Check gmx gangle -g2 z
> >
> > Thanks
> >
> > On Mon, Jul 29, 2019, 16:33 David van der Spoel 
> > wrote:
> >
> >> On 2019-07-29 at 12:24, Omkar Singh wrote:
> >>> Hello everyone,
> >>> Is it possible that the probability distribution of HH, OH vector for
> >> bulk
> >>> water is approximately linear?
> >>>
> >> What do you mean?
> >>
> >> --
> >> David van der Spoel, Ph.D., Professor of Biology
> >> Head of Department, Cell & Molecular Biology, Uppsala University.
> >> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
> >> http://www.icm.uu.se
> >> --
> >> Gromacs Users mailing list
> >>
> >> * Please search the archive at
> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> >> posting!
> >>
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >> * For (un)subscribe requests visit
> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> >> send a mail to gmx-users-requ...@gromacs.org.
> >>
>
>
>


Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread David van der Spoel

On 2019-07-29 at 18:26, Omkar Singh wrote:

Hi,
I mean that I want to calculate the angle between the OH, HH, and dipole vectors
and the positive z-axis. How can I make an index file for this? And is it
possible that the angle distribution of these vectors for bulk water is
approximately linear? I hope the question is clear now.

Probably. Check gmx gangle -g2 z
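
On the "approximately linear" part: for an isotropic sample such as bulk water far from any interface, cos(theta) between a molecular vector and the z-axis is uniformly distributed, so the angle distribution goes as sin(theta), which is roughly linear only at small angles. A quick numerical illustration with randomly oriented unit vectors (synthetic data, not GROMACS output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Isotropically oriented unit vectors: normalize 3D Gaussian samples.
v = rng.normal(size=(100_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Angle of each vector with the +z axis.
cos_theta = v[:, 2]
theta = np.arccos(cos_theta)

# For an isotropic distribution, cos(theta) is uniform on [-1, 1]:
# every equal-width bin has probability density close to 0.5.
hist, _ = np.histogram(cos_theta, bins=10, range=(-1.0, 1.0), density=True)
print(np.round(hist, 2))  # all entries close to 0.5

# Equivalently, P(theta) is proportional to sin(theta), so a histogram of
# theta itself peaks at 90 degrees rather than being flat or linear overall.
```

So a flat distribution in cos(theta) from gmx gangle -g2 z is the expected bulk-water signature; deviations from it indicate orientational ordering.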


Thanks

On Mon, Jul 29, 2019, 16:33 David van der Spoel 
wrote:


On 2019-07-29 at 12:24, Omkar Singh wrote:

Hello everyone,
Is it possible that the probability distribution of the HH and OH vectors for
bulk water is approximately linear?


What do you mean?







[gmx-users] cvff question

2019-07-29 Thread Yi Lu
Dear all,

I would like to use the CVFF force field, but GROMACS does not include
parameter files for it. Can I add it myself, and if so, how?



Thanks sincerely,
YiLu
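
GROMACS does not ship CVFF, but user-contributed force fields can be added as a directory next to the simulation input files (or under share/gromacs/top). A hedged sketch of the expected layout follows; the file names are the standard GROMACS force-field convention, while all the actual parameters would have to be transcribed from the CVFF literature and validated by you:

```text
cvff.ff/
  forcefield.itp    ; top-level file: [ defaults ] section + #include of the files below
  forcefield.doc    ; one-line description shown by pdb2gmx
  atomtypes.atp     ; atom type names and masses
  ffnonbonded.itp   ; [ atomtypes ] with nonbonded (LJ) parameters
  ffbonded.itp      ; bond, angle, and dihedral parameter tables
  aminoacids.rtp    ; residue building blocks (only needed if you use pdb2gmx)
  watermodels.dat   ; water models offered by pdb2gmx
```

Note that CVFF's functional forms and combination rules may not map one-to-one onto the GROMACS interaction types, so check each term carefully against the manual before trusting the result.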



Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread Omkar Singh
Hi,
I mean that I want to calculate the angle between the OH, HH, and dipole vectors
and the positive z-axis. How can I make an index file for this? And is it
possible that the angle distribution of these vectors for bulk water is
approximately linear? I hope the question is clear now.

Thanks

On Mon, Jul 29, 2019, 16:33 David van der Spoel 
wrote:

> On 2019-07-29 at 12:24, Omkar Singh wrote:
> > Hello everyone,
> > Is it possible that the probability distribution of HH, OH vector for
> bulk
> > water is approximately linear?
> >
> What do you mean?
>
>


Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Mark Abraham
Hi,

Yes, and the -nmpi flag I copied from Carlos's post is ineffective; use -ntmpi instead.

Mark
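
Putting Mark's and Justin's points together, a hedged sketch of how the job body could look. This assumes a thread-MPI build (plain gmx mdrun with -ntmpi, no srun/mpirun wrapper), the node described in the thread (40 physical cores, 4 GPUs), and placeholder $WORKDIR and .tpr names taken from the original script:

```shell
# Four concurrent single-GPU runs on one node. Each run gets 1 thread-MPI
# rank, 10 OpenMP threads pinned to its own core range, and its own GPU.
# All runs are backgrounded; wait blocks until every run has finished.
cd $WORKDIR1 && gmx mdrun -s eq6.tpr -ntmpi 1 -ntomp 10 -pin on -pinoffset 0  -gpu_id 0 &> log &
cd $WORKDIR2 && gmx mdrun -s eq6.tpr -ntmpi 1 -ntomp 10 -pin on -pinoffset 10 -gpu_id 1 &> log &
cd $WORKDIR3 && gmx mdrun -s eq6.tpr -ntmpi 1 -ntomp 10 -pin on -pinoffset 20 -gpu_id 2 &> log &
cd $WORKDIR4 && gmx mdrun -s eq6.tpr -ntmpi 1 -ntomp 10 -pin on -pinoffset 30 -gpu_id 3 &> log &
wait
```

With 40 physical cores shared by four runs, 10 threads per run keeps the pinoffset ranges non-overlapping; 20 threads each (as in the original script) would oversubscribe the node.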


On Mon., 29 Jul. 2019, 15:15 Justin Lemkul,  wrote:

>
>
> On 7/29/19 8:46 AM, Carlos Navarro wrote:
> > Hi Mark,
> > I tried that before, but unfortunately in that case (removing —gres=gpu:1
> > and including in each line the -gpu_id flag) for some reason the jobs are
> > run one at a time (one after the other), so I can’t use properly the
> whole
> > node.
> >
>
> You need to run all but the last mdrun process in the background (&).
>
> -Justin
>
> > ——
> > Carlos Navarro Retamal
> > Bioinformatic Engineering. PhD.
> > Postdoctoral Researcher in Center of Bioinformatics and Molecular
> > Simulations
> > Universidad de Talca
> > Av. Lircay S/N, Talca, Chile
> > E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> >
> > On July 29, 2019 at 11:48:21 AM, Mark Abraham (mark.j.abra...@gmail.com)
> > wrote:
> >
> > Hi,
> >
> > When you use
> >
> > DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "
> >
> > then the environment seems to make sure only one GPU is visible. (The log
> > files report only finding one GPU.) But it's probably the same GPU in
> each
> > case, with three remaining idle. I would suggest not using --gres unless
> > you can specify *which* of the four available GPUs each run can use.
> >
> > Otherwise, don't use --gres and use the facilities built into GROMACS,
> e.g.
> >
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
> > -ntomp 20 -gpu_id 0
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
> > -ntomp 20 -gpu_id 1
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 20
> > -ntomp 20 -gpu_id 2
> > etc.
> >
> > Mark
> >
> > On Mon, 29 Jul 2019 at 11:34, Carlos Navarro  >
> > wrote:
> >
> >> Hi Szilárd,
> >> To answer your questions:
> >> **are you trying to run multiple simulations concurrently on the same
> >> node or are you trying to strong-scale?
> >> I'm trying to run multiple simulations on the same node at the same
> time.
> >>
> >> ** what are you simulating?
> >> Regular and CompEl simulations
> >>
> >> ** can you provide log files of the runs?
> >> In the following link are some logs files:
> >> https://www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
> >> In short, alone.log -> single run in the node (using 1 gpu).
> >> multi1/2/3/4.log ->4 independent simulations ran at the same time in a
> >> single node. In all cases, 20 cpus are used.
> >> Best regards,
> >> Carlos
> >>
> >> On Thu, 25 Jul 2019 at 10:59, Szilárd Páll (<pall.szil...@gmail.com>)
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> It is not clear to me how are you trying to set up your runs, so
> >>> please provide some details:
> >>> - are you trying to run multiple simulations concurrently on the same
> >>> node or are you trying to strong-scale?
> >>> - what are you simulating?
> >>> - can you provide log files of the runs?
> >>>
> >>> Cheers,
> >>>
> >>> --
> >>> Szilárd
> >>>
> >>> On Tue, Jul 23, 2019 at 1:34 AM Carlos Navarro
> >>>  wrote:
>  No one can give me an idea of what can be happening? Or how I can
> > solve
> >>> it?
>  Best regards,
>  Carlos
> 
>  ——
>  Carlos Navarro Retamal
>  Bioinformatic Engineering. PhD.
>  Postdoctoral Researcher in Center of Bioinformatics and Molecular
>  Simulations
>  Universidad de Talca
>  Av. Lircay S/N, Talca, Chile
>  E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> 
>  On July 19, 2019 at 2:20:41 PM, Carlos Navarro (
> >>> carlos.navarr...@gmail.com)
>  wrote:
> 
>  Dear gmx-users,
>  I’m currently working in a server where each node posses 40 physical
> >>> cores
>  (40 threads) and 4 Nvidia-V100.
>  When I launch a single job (1 simulation using a single gpu card) I
> >> get a
>  performance of about ~35ns/day in a system of about 300k atoms.
> > Looking
>  into the usage of the video card during the simulation I notice that
> >> the
>  card is being used about and ~80%.
>  The problems arise when I increase the number of jobs running at the
> >> same
>  time. If for instance 2 jobs are running at the same time, the
> >>> performance
>  drops to ~25ns/day each and the usage of the video cards also drops
> >>> during
>  the simulation to about a ~30-40% (and sometimes dropping to less than
> >>> 5%).
>  Clearly there is a communication problem between the gpu cards and the
> >>> cpu
>  during the simulations, but I don’t know how to solve this.
>  Here is the script I use to run the simulations:
> 
>  #!/bin/bash -x
>  #SBATCH --job-name=testAtTPC1
>  #SBATCH --ntasks-per-node=4
>  #SBATCH --cpus-per-task=20
>  #SBATCH --account=hdd22
>  #SBATCH --nodes=1
>  #SBATCH --mem=0
>  #SBATCH --output=sout.%j
>  #SBATCH --error=s4err.%j
>  #SBATCH --time=00:10:00
>  #SBATCH --partition=develgpus
>  #SBATCH --gres=gpu:4
> 

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Szilárd Páll
Carlos,

You can accomplish the same using the multi-simulation feature of
mdrun and avoid having to manually manage the placement of runs, e.g.
instead of the above you just write
gmx mdrun_mpi -np N -multidir $WORKDIR1 $WORKDIR2 $WORKDIR3 ...
For more details see
http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html#running-multi-simulations
Note that if the different runs have different speed, just as with
your manual launch, your machine can end up partially utilized when
some of the runs finish.

Cheers,
--
Szilárd

On Mon, Jul 29, 2019 at 2:46 PM Carlos Navarro
 wrote:
>
> Hi Mark,
> I tried that before, but unfortunately in that case (removing —gres=gpu:1
> and including in each line the -gpu_id flag) for some reason the jobs are
> run one at a time (one after the other), so I can’t use properly the whole
> node.
>
>
> ——
> Carlos Navarro Retamal
> Bioinformatic Engineering. PhD.
> Postdoctoral Researcher in Center of Bioinformatics and Molecular
> Simulations
> Universidad de Talca
> Av. Lircay S/N, Talca, Chile
> E: carlos.navarr...@gmail.com or cnava...@utalca.cl
>
> On July 29, 2019 at 11:48:21 AM, Mark Abraham (mark.j.abra...@gmail.com)
> wrote:
>
> Hi,
>
> When you use
>
> DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "
>
> then the environment seems to make sure only one GPU is visible. (The log
> files report only finding one GPU.) But it's probably the same GPU in each
> case, with three remaining idle. I would suggest not using --gres unless
> you can specify *which* of the four available GPUs each run can use.
>
> Otherwise, don't use --gres and use the facilities built into GROMACS, e.g.
>
> $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
> -ntomp 20 -gpu_id 0
> $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
> -ntomp 20 -gpu_id 1
> $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 20
> -ntomp 20 -gpu_id 2
> etc.
>
> Mark
>
> On Mon, 29 Jul 2019 at 11:34, Carlos Navarro 
> wrote:
>
> > Hi Szilárd,
> > To answer your questions:
> > **are you trying to run multiple simulations concurrently on the same
> > node or are you trying to strong-scale?
> > I'm trying to run multiple simulations on the same node at the same time.
> >
> > ** what are you simulating?
> > Regular and CompEl simulations
> >
> > ** can you provide log files of the runs?
> > In the following link are some logs files:
> > https://www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
> > In short, alone.log -> single run in the node (using 1 gpu).
> > multi1/2/3/4.log ->4 independent simulations ran at the same time in a
> > single node. In all cases, 20 cpus are used.
> > Best regards,
> > Carlos
> >
> > On Thu, 25 Jul 2019 at 10:59, Szilárd Páll () wrote:
> >
> > > Hi,
> > >
> > > It is not clear to me how are you trying to set up your runs, so
> > > please provide some details:
> > > - are you trying to run multiple simulations concurrently on the same
> > > node or are you trying to strong-scale?
> > > - what are you simulating?
> > > - can you provide log files of the runs?
> > >
> > > Cheers,
> > >
> > > --
> > > Szilárd
> > >
> > > On Tue, Jul 23, 2019 at 1:34 AM Carlos Navarro
> > >  wrote:
> > > >
> > > > No one can give me an idea of what can be happening? Or how I can
> solve
> > > it?
> > > > Best regards,
> > > > Carlos
> > > >
> > > > ——
> > > > Carlos Navarro Retamal
> > > > Bioinformatic Engineering. PhD.
> > > > Postdoctoral Researcher in Center of Bioinformatics and Molecular
> > > > Simulations
> > > > Universidad de Talca
> > > > Av. Lircay S/N, Talca, Chile
> > > > E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> > > >
> > > > On July 19, 2019 at 2:20:41 PM, Carlos Navarro (
> > > carlos.navarr...@gmail.com)
> > > > wrote:
> > > >
> > > > Dear gmx-users,
> > > > I’m currently working in a server where each node posses 40 physical
> > > cores
> > > > (40 threads) and 4 Nvidia-V100.
> > > > When I launch a single job (1 simulation using a single gpu card) I
> > get a
> > > > performance of about ~35ns/day in a system of about 300k atoms.
> Looking
> > > > into the usage of the video card during the simulation I notice that
> > the
> > > > card is being used about and ~80%.
> > > > The problems arise when I increase the number of jobs running at the
> > same
> > > > time. If for instance 2 jobs are running at the same time, the
> > > performance
> > > > drops to ~25ns/day each and the usage of the video cards also drops
> > > during
> > > > the simulation to about a ~30-40% (and sometimes dropping to less than
> > > 5%).
> > > > Clearly there is a communication problem between the gpu cards and the
> > > cpu
> > > > during the simulations, but I don’t know how to solve this.
> > > > Here is the script I use to run the simulations:
> > > >
> > > > #!/bin/bash -x
> > > > #SBATCH --job-name=testAtTPC1
> > > > #SBATCH --ntasks-per-node=4
> > > > #SBATCH --cpus-per-task=20
> > > > #SBATCH 

Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Thank you

On Mon 29 Jul, 2019, 6:45 PM Justin Lemkul,  wrote:

>
>
> On 7/29/19 7:55 AM, Bratin Kumar Das wrote:
> > Hi Szilard,
> > Thank you for your reply. I corrected it as you said. For trial
> > purposes I took 8 or 16 nodes... (-np 8) to test whether it runs
> > or not. I gave the following command to run REMD:
> > *mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
> > After giving the command it is giving following error
> > Program: gmx mdrun, version 2018.4
> > Source file: src/gromacs/utility/futil.cpp (line 514)
> > MPI rank:0 (out of 32)
> >
> > File input/output error:
> > remd0.tpr
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> >   I am not able to understand why it is coming
>
> The error means the input file (remd0.tpr) does not exist in the working
> directory.
>
> -Justin
>
> >
> > On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll, 
> wrote:
> >
> >> This is an MPI / job scheduler error: you are requesting 2 nodes with
> >> 20 processes per node (=40 total), but starting 80 ranks.
> >> --
> >> Szilárd
> >>
> >> On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
> >> <177cy500.bra...@nitk.edu.in> wrote:
> >>> Hi,
> >>> I am running remd simulation in gromacs-2016.5. After generating
> the
> >>> multiple .tpr file in each directory by the following command
> >>> *for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro
> -p
> >>> topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
> >>> I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
> >>> -reseed 175320 -deffnm remd_equil*
> >>> It is giving the following error
> >>> There are not enough slots available in the system to satisfy the 40
> >> slots
> >>> that were requested by the application:
> >>>gmx_mpi
> >>>
> >>> Either request fewer slots for your application, or make more slots
> >>> available
> >>> for use.
> >>>
> >>
> --
> >>
> --
> >>> There are not enough slots available in the system to satisfy the 40
> >> slots
> >>> that were requested by the application:
> >>>gmx_mpi
> >>>
> >>> Either request fewer slots for your application, or make more slots
> >>> available
> >>> for use.
> >>>
> >>
> --
> >>> I do not understand the error. Any suggestion would be highly
> >>> appreciated. The .mdp file and the qsub.sh file are attached below.
> >>>
> >>> qsub.sh...
> >>> #! /bin/bash
> >>> #PBS -V
> >>> #PBS -l nodes=2:ppn=20
> >>> #PBS -l walltime=48:00:00
> >>> #PBS -N mdrun-serial
> >>> #PBS -j oe
> >>> #PBS -o output.log
> >>> #PBS -e error.log
> >>> #cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
> >>> cd $PBS_O_WORKDIR
> >>> module load openmpi3.0.0
> >>> module load gromacs-2016.5
> >>> NP=$(cat $PBS_NODEFILE | wc -l)
> >>> # mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun
> -v
> >>> -s nvt.tpr -deffnm nvt
> >>> #/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr
> >> -multi
> >>> 8 -replex 1000 -deffnm remd_out
> >>> for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro
> -r
> >>> em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
> >>>
> >>> for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
> >>> remd.tpr -multi 8 -replex 1000 -deffnm remd$i_out ; cd ..; done
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>

Re: [gmx-users] remd error

2019-07-29 Thread Justin Lemkul



On 7/29/19 7:55 AM, Bratin Kumar Das wrote:

Hi Szilard,
Thank you for your reply. I corrected it as you said. For trial purposes
I took 8 or 16 nodes... (-np 8) to test whether it runs or not. I gave the
following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the command, it produces the following error:
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
I am not able to understand why this happens.


The error means the input file (remd0.tpr) does not exist in the working 
directory.


-Justin
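
For reference, with -multi 8 and -deffnm remd, mdrun looks for remd0.tpr ... remd7.tpr in the current working directory, so the per-replica .tpr files have to end up there rather than in separate equil$i directories. A hedged sketch (the .mdp/.top/.gro names are placeholders taken from the original script):

```shell
# Build one .tpr per replica, numbered the way -multi expects, all in one directory.
for i in {0..7}; do
  gmx grompp -f equil${i}.mdp -c em.gro -p topol.top -o remd${i}.tpr -maxwarn 1
done

# One MPI rank per replica: -np must match -multi (8 here).
mpirun -np 8 gmx_mpi mdrun -v -multi 8 -replex 1000 -deffnm remd
```

Running more ranks than replicas (e.g. -np 80 with -multi 8) only works when the rank count is a multiple of the replica count, and the scheduler must actually provide that many slots.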



On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll,  wrote:


This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
--
Szilárd

On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
<177cy500.bra...@nitk.edu.in> wrote:

Hi,
I am running remd simulation in gromacs-2016.5. After generating the
multiple .tpr file in each directory by the following command
*for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
-reseed 175320 -deffnm remd_equil*
It is giving the following error
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
   gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.


--
--

There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
   gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.


--

I do not understand the error. Any suggestion would be highly
appreciated. The .mdp file and the qsub.sh file are attached below.

qsub.sh...
#! /bin/bash
#PBS -V
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
#PBS -N mdrun-serial
#PBS -j oe
#PBS -o output.log
#PBS -e error.log
#cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
NP=$(cat $PBS_NODEFILE | wc -l)
# mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
-s nvt.tpr -deffnm nvt
#/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
8 -replex 1000 -deffnm remd_out
for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
remd.tpr -multi 8 -replex 1000 -deffnm remd$i_out ; cd ..; done

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Justin Lemkul



On 7/29/19 8:46 AM, Carlos Navarro wrote:

Hi Mark,
I tried that before, but unfortunately in that case (removing —gres=gpu:1
and including in each line the -gpu_id flag) for some reason the jobs are
run one at a time (one after the other), so I can’t use properly the whole
node.



You need to run all but the last mdrun process in the background (&).

-Justin


——
Carlos Navarro Retamal
Bioinformatic Engineering. PhD.
Postdoctoral Researcher in Center of Bioinformatics and Molecular
Simulations
Universidad de Talca
Av. Lircay S/N, Talca, Chile
E: carlos.navarr...@gmail.com or cnava...@utalca.cl

On July 29, 2019 at 11:48:21 AM, Mark Abraham (mark.j.abra...@gmail.com)
wrote:

Hi,

When you use

DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "

then the environment seems to make sure only one GPU is visible. (The log
files report only finding one GPU.) But it's probably the same GPU in each
case, with three remaining idle. I would suggest not using --gres unless
you can specify *which* of the four available GPUs each run can use.

Otherwise, don't use --gres and use the facilities built into GROMACS, e.g.

$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
-ntomp 20 -gpu_id 0
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
-ntomp 20 -gpu_id 1
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 20
-ntomp 20 -gpu_id 2
etc.

Mark

On Mon, 29 Jul 2019 at 11:34, Carlos Navarro 
wrote:


Hi Szilárd,
To answer your questions:
**are you trying to run multiple simulations concurrently on the same
node or are you trying to strong-scale?
I'm trying to run multiple simulations on the same node at the same time.

** what are you simulating?
Regular and CompEl simulations

** can you provide log files of the runs?
In the following link are some logs files:
https://www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
In short, alone.log -> single run in the node (using 1 gpu).
multi1/2/3/4.log ->4 independent simulations ran at the same time in a
single node. In all cases, 20 cpus are used.
Best regards,
Carlos

On Thu, 25 Jul 2019 at 10:59, Szilárd Páll () wrote:


Hi,

It is not clear to me how are you trying to set up your runs, so
please provide some details:
- are you trying to run multiple simulations concurrently on the same
node or are you trying to strong-scale?
- what are you simulating?
- can you provide log files of the runs?

Cheers,

--
Szilárd

On Tue, Jul 23, 2019 at 1:34 AM Carlos Navarro
 wrote:

Can no one give me an idea of what might be happening, or how I can solve it?

Best regards,
Carlos

——
Carlos Navarro Retamal
Bioinformatic Engineering. PhD.
Postdoctoral Researcher in Center of Bioinformatics and Molecular
Simulations
Universidad de Talca
Av. Lircay S/N, Talca, Chile
E: carlos.navarr...@gmail.com or cnava...@utalca.cl

On July 19, 2019 at 2:20:41 PM, Carlos Navarro (carlos.navarr...@gmail.com)
wrote:

Dear gmx-users,
I'm currently working on a server where each node has 40 physical cores
(40 threads) and 4 NVIDIA V100 GPUs.
When I launch a single job (1 simulation using a single GPU card) I get a
performance of about ~35 ns/day on a system of about 300k atoms. Looking at
the usage of the video card during the simulation, I notice the card is being
used at about 80%.
The problems arise when I increase the number of jobs running at the same
time. If, for instance, 2 jobs are running at the same time, the performance
drops to ~25 ns/day each, and the usage of the video cards also drops during
the simulation to about 30-40% (sometimes to less than 5%).
Clearly there is a communication problem between the GPU cards and the CPU
during the simulations, but I don’t know how to solve this.
Here is the script I use to run the simulations:

#!/bin/bash -x
#SBATCH --job-name=testAtTPC1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=20
#SBATCH --account=hdd22
#SBATCH --nodes=1
#SBATCH --mem=0
#SBATCH --output=sout.%j
#SBATCH --error=s4err.%j
#SBATCH --time=00:10:00
#SBATCH --partition=develgpus
#SBATCH --gres=gpu:4

module use /gpfs/software/juwels/otherstages
module load Stages/2018b
module load Intel/2019.0.117-GCC-7.3.0
module load IntelMPI/2019.0.117
module load GROMACS/2018.3

WORKDIR1=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/1
WORKDIR2=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/2
WORKDIR3=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/3
WORKDIR4=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/4

DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "
EXE=" gmx mdrun "

cd $WORKDIR1
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
-ntomp 20 &>log &
cd $WORKDIR2
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
-ntomp 20 &>log &
cd $WORKDIR3
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 20
-ntomp 20 &>log &
cd $WORKDIR4
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 

Re: [gmx-users] maximum force does not converge

2019-07-29 Thread Justin Lemkul



On 7/29/19 3:24 AM, m g wrote:

Dear all,
I'm simulating a MOF with the UFF force field, but in the energy minimization
step I get the error "Steepest Descents converged to machine precision in 2115
steps, but did not reach the requested Fmax < 1000", although the potential
energy converged. I used SPC/E water for this system. Would you please help me?
Would it be better to use a water model based on UFF itself?
Thanks,
Ganj


What water model do people use when applying UFF? What happens if you 
minimize without water? Perhaps you have a more fundamental topology 
problem.


-Justin
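
If it helps while testing Justin's suggestion of minimizing without water, a typical steepest-descent minimization .mdp fragment looks like the following (the values are common defaults, not UFF-specific; "converged to machine precision" before reaching emtol usually points at a topology problem rather than these settings):

```text
integrator  = steep
emtol       = 1000.0    ; stop when Fmax < 1000 kJ mol^-1 nm^-1
emstep      = 0.01      ; initial step size (nm)
nsteps      = 50000     ; upper bound on minimization steps
```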


Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Carlos Navarro
Hi Mark,
I tried that before, but unfortunately in that case (removing --gres=gpu:1
and including the -gpu_id flag in each line) for some reason the jobs
run one at a time (one after the other), so I can’t properly use the whole
node.


——
Carlos Navarro Retamal
Bioinformatic Engineering. PhD.
Postdoctoral Researcher in Center of Bioinformatics and Molecular
Simulations
Universidad de Talca
Av. Lircay S/N, Talca, Chile
E: carlos.navarr...@gmail.com or cnava...@utalca.cl

On July 29, 2019 at 11:48:21 AM, Mark Abraham (mark.j.abra...@gmail.com)
wrote:

Hi,

When you use

DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "

then the environment seems to make sure only one GPU is visible. (The log
files report only finding one GPU.) But it's probably the same GPU in each
case, with three remaining idle. I would suggest not using --gres unless
you can specify *which* of the four available GPUs each run can use.

Otherwise, don't use --gres and use the facilities built into GROMACS, e.g.

$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
-ntomp 20 -gpu_id 0
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
-ntomp 20 -gpu_id 1
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 20
-ntomp 20 -gpu_id 2
etc.
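Completed for all four GPUs, the pattern above can be generated with a small loop. This is a dry-run sketch that only echoes the commands it would launch, nothing is executed; filenames (eq6.tpr, eq6-20) and the thread/offset numbers are taken from the script in this thread and should be adapted to the actual node layout:

```shell
# Dry-run sketch: print one mdrun command per GPU (nothing is executed).
for gpu in 0 1 2 3; do
  pinoffset=$((gpu * 10))   # same offset pattern as the examples above
  echo "gmx mdrun -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset $pinoffset -ntomp 20 -gpu_id $gpu"
done
```

Once the echoed commands look right, the `echo` can be dropped and each command backgrounded with `&>log &` as in the original script.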

Mark

On Mon, 29 Jul 2019 at 11:34, Carlos Navarro 
wrote:

> Hi Szilárd,
> To answer your questions:
> **are you trying to run multiple simulations concurrently on the same
> node or are you trying to strong-scale?
> I'm trying to run multiple simulations on the same node at the same time.
>
> ** what are you simulating?
> Regular and CompEl simulations
>
> ** can you provide log files of the runs?
> In the following link are some logs files:
> https://www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
> In short, alone.log -> single run in the node (using 1 gpu).
> multi1/2/3/4.log ->4 independent simulations ran at the same time in a
> single node. In all cases, 20 cpus are used.
> Best regards,
> Carlos
>
> El jue., 25 jul. 2019 a las 10:59, Szilárd Páll ()
> escribió:
>
> > Hi,
> >
> > It is not clear to me how are you trying to set up your runs, so
> > please provide some details:
> > - are you trying to run multiple simulations concurrently on the same
> > node or are you trying to strong-scale?
> > - what are you simulating?
> > - can you provide log files of the runs?
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> > On Tue, Jul 23, 2019 at 1:34 AM Carlos Navarro
> >  wrote:
> > >
> > > No one can give me an idea of what can be happening? Or how I can
solve
> > it?
> > > Best regards,
> > > Carlos
> > >
> > > ——
> > > Carlos Navarro Retamal
> > > Bioinformatic Engineering. PhD.
> > > Postdoctoral Researcher in Center of Bioinformatics and Molecular
> > > Simulations
> > > Universidad de Talca
> > > Av. Lircay S/N, Talca, Chile
> > > E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> > >
> > > On July 19, 2019 at 2:20:41 PM, Carlos Navarro (
> > carlos.navarr...@gmail.com)
> > > wrote:
> > >
> > > Dear gmx-users,
> > > I’m currently working on a server where each node possesses 40 physical
> > cores
> > > (40 threads) and 4 Nvidia V100 GPUs.
> > > When I launch a single job (1 simulation using a single gpu card) I
> get a
> > > performance of about ~35ns/day in a system of about 300k atoms.
Looking
> > > into the usage of the video card during the simulation I notice that
> the
> > > card is being used at about 80%.
> > > The problems arise when I increase the number of jobs running at the
> same
> > > time. If for instance 2 jobs are running at the same time, the
> > performance
> > > drops to ~25ns/day each and the usage of the video cards also drops
> > during
> > > the simulation to about a ~30-40% (and sometimes dropping to less than
> > 5%).
> > > Clearly there is a communication problem between the gpu cards and the
> > cpu
> > > during the simulations, but I don’t know how to solve this.
> > > Here is the script I use to run the simulations:
> > >
> > > #!/bin/bash -x
> > > #SBATCH --job-name=testAtTPC1
> > > #SBATCH --ntasks-per-node=4
> > > #SBATCH --cpus-per-task=20
> > > #SBATCH --account=hdd22
> > > #SBATCH --nodes=1
> > > #SBATCH --mem=0
> > > #SBATCH --output=sout.%j
> > > #SBATCH --error=s4err.%j
> > > #SBATCH --time=00:10:00
> > > #SBATCH --partition=develgpus
> > > #SBATCH --gres=gpu:4
> > >
> > > module use /gpfs/software/juwels/otherstages
> > > module load Stages/2018b
> > > module load Intel/2019.0.117-GCC-7.3.0
> > > module load IntelMPI/2019.0.117
> > > module load GROMACS/2018.3
> > >
> > > WORKDIR1=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/1
> > > WORKDIR2=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/2
> > > WORKDIR3=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/3
> > > WORKDIR4=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/4
> > >
> > > DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "
> > > EXE=" gmx mdrun "
> > >
> > > cd $WORKDIR1
> > > $DO_PARALLEL $EXE -s eq6.tpr 

Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Hi Szilard,
   Thank you for your reply. I corrected it as you said. For a trial
I took 8 or 16 nodes... (-np 8) to test whether it runs
or not. I gave the following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the command it is giving following error
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
 I am not able to understand why this error occurs.
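One consistency check worth making: with -multi N, the total rank count handed to mpirun must be a multiple of N, and mdrun then looks for remd0.tpr ... remd(N-1).tpr in the current working directory, which is consistent with the remd0.tpr file-I/O error above. A dry-run sketch, assuming one MPI rank per replica, that only echoes the command it would build:

```shell
# Dry-run sketch: derive -np from the replica count so the two always match.
replicas=8
ranks_per_replica=1                     # assumption: one MPI rank per replica
np=$((replicas * ranks_per_replica))
echo "mpirun -np $np gmx_mpi_d mdrun -v -multi $replicas -replex 1000 -deffnm remd"
```

Before launching for real, the remd0.tpr ... remd7.tpr files must all be present in the directory the command is run from.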

On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll,  wrote:

> This is an MPI / job scheduler error: you are requesting 2 nodes with
> 20 processes per node (=40 total), but starting 80 ranks.
> --
> Szilárd
>
> On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
> <177cy500.bra...@nitk.edu.in> wrote:
> >
> > Hi,
> >I am running remd simulation in gromacs-2016.5. After generating the
> > multiple .tpr file in each directory by the following command
> > *for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
> > topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
> > I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
> > -reseed 175320 -deffnm remd_equil*
> > It is giving the following error
> > There are not enough slots available in the system to satisfy the 40
> slots
> > that were requested by the application:
> >   gmx_mpi
> >
> > Either request fewer slots for your application, or make more slots
> > available
> > for use.
> >
> --
> >
> --
> > There are not enough slots available in the system to satisfy the 40
> slots
> > that were requested by the application:
> >   gmx_mpi
> >
> > Either request fewer slots for your application, or make more slots
> > available
> > for use.
> >
> --
> > I am not understanding the error. Any suggestion will be highly
> > appriciated. The mdp file and the qsub.sh file is attached below
> >
> > qsub.sh...
> > #! /bin/bash
> > #PBS -V
> > #PBS -l nodes=2:ppn=20
> > #PBS -l walltime=48:00:00
> > #PBS -N mdrun-serial
> > #PBS -j oe
> > #PBS -o output.log
> > #PBS -e error.log
> > #cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
> > cd $PBS_O_WORKDIR
> > module load openmpi3.0.0
> > module load gromacs-2016.5
> > NP=$(cat $PBS_NODEFILE | wc -l)
> > # mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
> > -s nvt.tpr -deffnm nvt
> > #/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr
> -multi
> > 8 -replex 1000 -deffnm remd_out
> > for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
> > em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done
> >
> > for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
> > remd.tpr -multi 8 -replex 1000 -deffnm remd${i}_out ; cd ..; done
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread David van der Spoel

Den 2019-07-29 kl. 12:24, skrev Omkar Singh:

Hello everyone,
Is it possible that the probability distribution of HH, OH vector for bulk
water is approximately linear?


What do you mean?

--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Regarding OH, HH vector distribution

2019-07-29 Thread Omkar Singh
Hello everyone,
Is it possible that the probability distribution of HH, OH vector for bulk
water is approximately linear?
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Mark Abraham
Hi,

When you use

DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "

then the environment seems to make sure only one GPU is visible. (The log
files report only finding one GPU.) But it's probably the same GPU in each
case, with three remaining idle. I would suggest not using --gres unless
you can specify *which* of the four available GPUs each run can use.

Otherwise, don't use --gres and use the facilities built into GROMACS, e.g.

$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
-ntomp 20 -gpu_id 0
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
-ntomp 20 -gpu_id 1
$DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20  -nmpi 1 -pin on -pinoffset 20
-ntomp 20 -gpu_id 2
etc.

Mark

On Mon, 29 Jul 2019 at 11:34, Carlos Navarro 
wrote:

> Hi Szilárd,
> To answer your questions:
> **are you trying to run multiple simulations concurrently on the same
> node or are you trying to strong-scale?
> I'm trying to run multiple simulations on the same node at the same time.
>
> ** what are you simulating?
> Regular and CompEl simulations
>
> ** can you provide log files of the runs?
> In the following link are some logs files:
> https://www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
> In short, alone.log -> single run in the node (using 1 gpu).
> multi1/2/3/4.log ->4 independent simulations ran at the same time in a
> single node. In all cases, 20 cpus are used.
> Best regards,
> Carlos
>
> El jue., 25 jul. 2019 a las 10:59, Szilárd Páll ()
> escribió:
>
> > Hi,
> >
> > It is not clear to me how are you trying to set up your runs, so
> > please provide some details:
> > - are you trying to run multiple simulations concurrently on the same
> > node or are you trying to strong-scale?
> > - what are you simulating?
> > - can you provide log files of the runs?
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> > On Tue, Jul 23, 2019 at 1:34 AM Carlos Navarro
> >  wrote:
> > >
> > > No one can give me an idea of what can be happening? Or how I can solve
> > it?
> > > Best regards,
> > > Carlos
> > >
> > > ——
> > > Carlos Navarro Retamal
> > > Bioinformatic Engineering. PhD.
> > > Postdoctoral Researcher in Center of Bioinformatics and Molecular
> > > Simulations
> > > Universidad de Talca
> > > Av. Lircay S/N, Talca, Chile
> > > E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> > >
> > > On July 19, 2019 at 2:20:41 PM, Carlos Navarro (
> > carlos.navarr...@gmail.com)
> > > wrote:
> > >
> > > Dear gmx-users,
> > > I’m currently working on a server where each node possesses 40 physical
> > cores
> > > (40 threads) and 4 Nvidia V100 GPUs.
> > > When I launch a single job (1 simulation using a single gpu card) I
> get a
> > > performance of about ~35ns/day in a system of about 300k atoms. Looking
> > > into the usage of the video card during the simulation I notice that
> the
> > > card is being used at about 80%.
> > > The problems arise when I increase the number of jobs running at the
> same
> > > time. If for instance 2 jobs are running at the same time, the
> > performance
> > > drops to ~25ns/day each and the usage of the video cards also drops
> > during
> > > the simulation to about a ~30-40% (and sometimes dropping to less than
> > 5%).
> > > Clearly there is a communication problem between the gpu cards and the
> > cpu
> > > during the simulations, but I don’t know how to solve this.
> > > Here is the script I use to run the simulations:
> > >
> > > #!/bin/bash -x
> > > #SBATCH --job-name=testAtTPC1
> > > #SBATCH --ntasks-per-node=4
> > > #SBATCH --cpus-per-task=20
> > > #SBATCH --account=hdd22
> > > #SBATCH --nodes=1
> > > #SBATCH --mem=0
> > > #SBATCH --output=sout.%j
> > > #SBATCH --error=s4err.%j
> > > #SBATCH --time=00:10:00
> > > #SBATCH --partition=develgpus
> > > #SBATCH --gres=gpu:4
> > >
> > > module use /gpfs/software/juwels/otherstages
> > > module load Stages/2018b
> > > module load Intel/2019.0.117-GCC-7.3.0
> > > module load IntelMPI/2019.0.117
> > > module load GROMACS/2018.3
> > >
> > > WORKDIR1=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/1
> > > WORKDIR2=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/2
> > > WORKDIR3=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/3
> > > WORKDIR4=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/4
> > >
> > > DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "
> > > EXE=" gmx mdrun "
> > >
> > > cd $WORKDIR1
> > > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset
> 0
> > > -ntomp 20 &>log &
> > > cd $WORKDIR2
> > > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset
> 10
> > > -ntomp 20 &>log &
> > > cd $WORKDIR3
> > > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20  -nmpi 1 -pin on -pinoffset
> > 20
> > > -ntomp 20 &>log &
> > > cd $WORKDIR4
> > > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset
> 30
> > > -ntomp 20 &>log &
> > >
> > >
> > > Regarding pinoffset, I first tried using 20 cores for each job but
> > then
> > > also tried with 

Re: [gmx-users] performance issues running gromacs with more than 1 gpu card in slurm

2019-07-29 Thread Carlos Navarro
Hi Szilárd,
To answer your questions:
**are you trying to run multiple simulations concurrently on the same
node or are you trying to strong-scale?
I'm trying to run multiple simulations on the same node at the same time.

** what are you simulating?
Regular and CompEl simulations

** can you provide log files of the runs?
> In the following link are some log files:
https://www.dropbox.com/s/7q249vbqqwf5r03/Archive.zip?dl=0.
In short, alone.log -> single run in the node (using 1 gpu).
multi1/2/3/4.log ->4 independent simulations ran at the same time in a
single node. In all cases, 20 cpus are used.
Best regards,
Carlos

El jue., 25 jul. 2019 a las 10:59, Szilárd Páll ()
escribió:

> Hi,
>
> It is not clear to me how are you trying to set up your runs, so
> please provide some details:
> - are you trying to run multiple simulations concurrently on the same
> node or are you trying to strong-scale?
> - what are you simulating?
> - can you provide log files of the runs?
>
> Cheers,
>
> --
> Szilárd
>
> On Tue, Jul 23, 2019 at 1:34 AM Carlos Navarro
>  wrote:
> >
> > No one can give me an idea of what can be happening? Or how I can solve
> it?
> > Best regards,
> > Carlos
> >
> > ——
> > Carlos Navarro Retamal
> > Bioinformatic Engineering. PhD.
> > Postdoctoral Researcher in Center of Bioinformatics and Molecular
> > Simulations
> > Universidad de Talca
> > Av. Lircay S/N, Talca, Chile
> > E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> >
> > On July 19, 2019 at 2:20:41 PM, Carlos Navarro (
> carlos.navarr...@gmail.com)
> > wrote:
> >
> > Dear gmx-users,
> > I’m currently working on a server where each node possesses 40 physical
> cores
> > (40 threads) and 4 Nvidia-V100.
> > When I launch a single job (1 simulation using a single gpu card) I get a
> > performance of about ~35ns/day in a system of about 300k atoms. Looking
> > into the usage of the video card during the simulation I notice that the
> > card is being used at about 80%.
> > The problems arise when I increase the number of jobs running at the same
> > time. If for instance 2 jobs are running at the same time, the
> performance
> > drops to ~25ns/day each and the usage of the video cards also drops
> during
> > the simulation to about a ~30-40% (and sometimes dropping to less than
> 5%).
> > Clearly there is a communication problem between the gpu cards and the
> cpu
> > during the simulations, but I don’t know how to solve this.
> > Here is the script I use to run the simulations:
> >
> > #!/bin/bash -x
> > #SBATCH --job-name=testAtTPC1
> > #SBATCH --ntasks-per-node=4
> > #SBATCH --cpus-per-task=20
> > #SBATCH --account=hdd22
> > #SBATCH --nodes=1
> > #SBATCH --mem=0
> > #SBATCH --output=sout.%j
> > #SBATCH --error=s4err.%j
> > #SBATCH --time=00:10:00
> > #SBATCH --partition=develgpus
> > #SBATCH --gres=gpu:4
> >
> > module use /gpfs/software/juwels/otherstages
> > module load Stages/2018b
> > module load Intel/2019.0.117-GCC-7.3.0
> > module load IntelMPI/2019.0.117
> > module load GROMACS/2018.3
> >
> > WORKDIR1=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/1
> > WORKDIR2=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/2
> > WORKDIR3=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/3
> > WORKDIR4=/p/project/chdd22/gromacs/benchmark/AtTPC1/singlegpu/4
> >
> > DO_PARALLEL=" srun --exclusive -n 1 --gres=gpu:1 "
> > EXE=" gmx mdrun "
> >
> > cd $WORKDIR1
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 0
> > -ntomp 20 &>log &
> > cd $WORKDIR2
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 10
> > -ntomp 20 &>log &
> > cd $WORKDIR3
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20  -nmpi 1 -pin on -pinoffset
> 20
> > -ntomp 20 &>log &
> > cd $WORKDIR4
> > $DO_PARALLEL $EXE -s eq6.tpr -deffnm eq6-20 -nmpi 1 -pin on -pinoffset 30
> > -ntomp 20 &>log &
> >
> >
> > Regarding pinoffset, I first tried using 20 cores for each job but
> then
> > also tried with 8 cores (so pinoffset 0 for job 1, pinoffset 4 for job 2,
> > pinoffset 8 for job 3 and pinoffset 12 for job 4) but at the end the
> problem
> > persist.
> >
> > Currently in this machine I’m not able to use more than 1 gpu per job, so
> > this is my only choice to use properly the whole node.
> > If you need more information please just let me know.
> > Best regards.
> > Carlos
> >
> > ——
> > Carlos Navarro Retamal
> > Bioinformatic Engineering. PhD.
> > Postdoctoral Researcher in Center of Bioinformatics and Molecular
> > Simulations
> > Universidad de Talca
> > Av. Lircay S/N, Talca, Chile
> > E: carlos.navarr...@gmail.com or cnava...@utalca.cl
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to 

[gmx-users] maximum force does not converge

2019-07-29 Thread m g
Dear all, I'm simulating a MOF with the UFF force field, but in the energy
minimization step I get the error "Steepest Descents converged to machine
precision in 2115 steps, but did not reach the requested Fmax < 1000",
although the potential energy converged. I used SPC/E water for this system.
Would you please help me? Would it be better to use a water model based on
UFF itself?
Thanks,
Ganj
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] space dependent electric field

2019-07-29 Thread David van der Spoel

Den 2019-07-29 kl. 04:24, skrev Maryam:

Dear all,

I want to apply a space-dependent, but not time-dependent, electric field to
my system. I reviewed the source code for the electric field, but it only
supports constant and time-dependent (pulsed) fields. Can anyone help me find
out how to change the source code to get a space-dependent field, without
changing the parameters defined in GROMACS, so that I won't face the problem
of changing all the related subroutines? In which routines should I make the
required changes if I want to add new parameters for the space-dependent
field?
Thank you

I assume you are looking at the routine calculateForces in
gromacs/src/gromacs/applied_forces/electricfield.cpp?

If you are not, please start by checking out the development version of GROMACS.

There you can extract the coordinates of the particle from the 
ForceProviderInput structure (see 
gromacs/src/gromacs/mdtypes/iforceprovider.h). To make things work 
quickly you can just hardcode the extra parameters you might need or 
abuse the existing one.


Cheers,
--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.