Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Thank you for your email, sir.


Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:

> Respected Mark Abraham,
>           The command line and the job submission script are given below.
>
> #!/bin/bash
> #SBATCH -n 130 # Number of cores
>

Per the docs, this is a guide to sbatch about how many (MPI) tasks you want
to run. It's not a core request.

> #SBATCH -N 5   # no of nodes
>

This requests a specific number of nodes. So to satisfy both of your
instructions, MPI has to start 26 tasks per node. That would make sense if
your nodes had a multiple of 26 cores, but judging by the error message my
guess is that they have a multiple of 16 cores. MPI saw that you asked to
place more tasks than there are cores available, so it did not set a number
of OpenMP threads per MPI task. GROMACS then fell back on a default, which
came out as 16 threads per rank, and GROMACS can see that doesn't make sense.

If you want to use -N and -n, then you need to make a choice that makes
sense for the number of cores per node. Easier might be to use -n 130 and
-c 2 to express what I assume is your intent to have 2 cores per MPI task.
Now slurm+MPI can pass that message along properly to OpenMP.
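
For example, a submission script along these lines would express that intent.
It is only an untested sketch: it keeps your binary and file names, shortens
the 65 equil directories with bash brace expansion, and assumes your MPI
stack picks up the Slurm settings (hence the explicit OMP_NUM_THREADS):

#!/bin/bash
#SBATCH -n 130        # 130 MPI ranks = 2 ranks per replica for 65 replicas
#SBATCH -c 2          # 2 cores (OpenMP threads) per MPI rank
#SBATCH -t 0-20:00:00
#SBATCH -p cpu

module load gromacs/2018.4
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # make the -c setting visible to OpenMP
mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr \
    -multidir equil{0..64} \
    -deffnm remd_nvt -cpi remd_nvt.cpt -append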

Your other error message can only have come from running gmx_mpi_d with
-ntmpi rather than -ntomp, so that was just a typo we don't need to worry
about further.

Mark

> #SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
> #SBATCH -p cpu # Partition to submit to
> #SBATCH -o hostname_%j.out # File to which STDOUT will be written
> #SBATCH -e hostname_%j.err # File to which STDERR will be written
> #loading gromacs
> module load gromacs/2018.4
> #specifying work_dir
> WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1
>
>
> mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
> equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
> equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
> equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
> equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
> equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
> equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
> equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
> -deffnm remd_nvt -cpi remd_nvt.cpt -append

Re: [gmx-users] REMD-error

2019-09-04 Thread Bratin Kumar Das
Respected Mark Abraham,
          The command line and the job submission script are given below.

#!/bin/bash
#SBATCH -n 130 # Number of cores
#SBATCH -N 5   # no of nodes
#SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
#SBATCH -p cpu # Partition to submit to
#SBATCH -o hostname_%j.out # File to which STDOUT will be written
#SBATCH -e hostname_%j.err # File to which STDERR will be written
#loading gromacs
module load gromacs/2018.4
#specifying work_dir
WORKDIR=/home/chm_bratin/GMX_Projects/REMD/4wbu-REMD-inst-clust_1/stage-1


mpirun -np 130 gmx_mpi_d mdrun -v -s remd_nvt_next2.tpr -multidir equil0
equil1 equil2 equil3 equil4 equil5 equil6 equil7 equil8 equil9 equil10
equil11 equil12 equil13 equil14 equil15 equil16 equil17 equil18 equil19
equil20 equil21 equil22 equil23 equil24 equil25 equil26 equil27 equil28
equil29 equil30 equil31 equil32 equil33 equil34 equil35 equil36 equil37
equil38 equil39 equil40 equil41 equil42 equil43 equil44 equil45 equil46
equil47 equil48 equil49 equil50 equil51 equil52 equil53 equil54 equil55
equil56 equil57 equil58 equil59 equil60 equil61 equil62 equil63 equil64
-deffnm remd_nvt -cpi remd_nvt.cpt -append

On Wed, Sep 4, 2019 at 2:13 PM Mark Abraham 
wrote:

> Hi,
>
> We need to see your command line in order to have a chance of helping.
>
> Mark


Re: [gmx-users] REMD-error

2019-09-04 Thread Mark Abraham
Hi,

We need to see your command line in order to have a chance of helping.

Mark



[gmx-users] REMD-error

2019-09-03 Thread Bratin Kumar Das
Dear all,
            I am running an REMD simulation with 65 replicas, using
130 cores for the simulation. I am getting the following error:

Fatal error:
Your choice of number of MPI ranks and amount of resources results in using
16
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 1 and 6 threads per rank. If you want to run with this
setup,
specify the -ntomp option. But we suggest to change the number of MPI ranks.

When I use the -ntomp option, it throws another error:

Fatal error:
Setting the number of thread-MPI ranks is only supported with thread-MPI and
GROMACS was compiled without thread-MPI


while GROMACS is compiled with thread-MPI...

Please help me in this regard.


Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Thank you


Re: [gmx-users] remd error

2019-07-29 Thread Justin Lemkul



On 7/29/19 7:55 AM, Bratin Kumar Das wrote:

Hi Szilárd,
Thank you for your reply. I rectified it as you said. For trial
purposes I took 8 nodes or 16 nodes... (-np 8) to test whether it is running
or not. I gave the following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the command it gives the following error:
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
  I am not able to understand why this is happening.


The error means the input file (remd0.tpr) does not exist in the working 
directory.
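
A quick way to check is to verify, from the directory where you launch
mpirun, that the files which -multi 8 and -deffnm remd will look for are
actually there. A small sketch, assuming that naming:

for i in {0..7}; do
    [ -f remd${i}.tpr ] || echo "remd${i}.tpr not found in $(pwd)"
done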


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] remd error

2019-07-29 Thread Bratin Kumar Das
Hi Szilárd,
   Thank you for your reply. I rectified it as you said. For trial
purposes I took 8 nodes or 16 nodes... (-np 8) to test whether it is running
or not. I gave the following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the command it gives the following error:
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/utility/futil.cpp (line 514)
MPI rank:0 (out of 32)

File input/output error:
remd0.tpr

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
 I am not able to understand why this is happening.

On Thu 25 Jul, 2019, 2:31 PM Szilárd Páll,  wrote:

> This is an MPI / job scheduler error: you are requesting 2 nodes with
> 20 processes per node (=40 total), but starting 80 ranks.
> --
> Szilárd

Re: [gmx-users] remd error

2019-07-25 Thread Szilárd Páll
This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
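
For the nodes=2:ppn=20 request in that script, a consistent launch would look
something like this sketch (40 ranks in total, i.e. 5 ranks per replica with
-multi 8; any -np that is a multiple of 8 and fits the allocation would do):

#PBS -l nodes=2:ppn=20        # 2 x 20 = 40 MPI slots granted

cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
mpirun -np 40 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000 -reseed 175320 -deffnm remd_equil
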
--
Szilárd


[gmx-users] remd error

2019-07-18 Thread Bratin Kumar Das
Hi,
   I am running an REMD simulation in GROMACS 2016.5. After generating the
.tpr files, one in each directory, with the following command
*for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
I run *mpirun -np 80 gmx_mpi mdrun -s remd.tpr -multi 8 -replex 1000
-reseed 175320 -deffnm remd_equil*
It is giving the following error
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
--
There are not enough slots available in the system to satisfy the 40 slots
that were requested by the application:
  gmx_mpi

Either request fewer slots for your application, or make more slots
available
for use.
--
I do not understand the error. Any suggestion will be highly
appreciated. The .mdp file and the qsub.sh file are attached below.

qsub.sh...
#! /bin/bash
#PBS -V
#PBS -l nodes=2:ppn=20
#PBS -l walltime=48:00:00
#PBS -N mdrun-serial
#PBS -j oe
#PBS -o output.log
#PBS -e error.log
#cd /home/bratin/Downloads/GROMACS/Gromacs_fibril
cd $PBS_O_WORKDIR
module load openmpi3.0.0
module load gromacs-2016.5
NP='cat $PBS_NODEFILE | wc -1'
# mpirun --machinefile $PBS_PBS_NODEFILE -np $NP 'which gmx_mpi' mdrun -v
-s nvt.tpr -deffnm nvt
#/apps/gromacs-2016.5/bin/mpirun -np 8 gmx_mpi mdrun -v -s remd.tpr -multi
8 -replex 1000 -deffnm remd_out
for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -r
em.gro -p topol.top -o remd$i.tpr -maxwarn 1; cd ..; done

for i in {0..7}; do cd equil${i}; mpirun -np 40 gmx_mpi mdrun -v -s
remd.tpr -multi 8 -replex 1000 -deffnm remd$i_out ; cd ..; done

Re: [gmx-users] REMD error

2016-05-13 Thread Mark Abraham
Hi,

If you've configured with GMX_MPI, then the resulting GROMACS binary is
called gmx_mpi, so mpirun -np X gmx_mpi mdrun -multi ...
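
With one rank per replica that would be, for example:

mpirun -np 46 gmx_mpi mdrun -s md_0_.tpr -multi 46 -replex 1000 -reseed -1

This is only a sketch: with -multi the replica index gets inserted into the
input file names, so it expects md_0_0.tpr ... md_0_45.tpr; -np can also be
any larger multiple of 46 if you want several ranks per replica.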

Mark

On Fri, May 13, 2016 at 10:09 AM YanhuaOuyang <15901283...@163.com> wrote:

> Hi,
> I have installed OpenMPI 1.10, and I can run mpirun. When I installed
> GROMACS 5.1, I configured -DGMX_MPI=on.
> And the error still happens.

Re: [gmx-users] REMD error

2016-05-13 Thread YanhuaOuyang
Hi,
I have installed OpenMPI 1.10, and I can run mpirun. When I installed
GROMACS 5.1, I configured -DGMX_MPI=on.
And the error still happens.
> On 13 May 2016, at 3:59 PM, Mark Abraham wrote:
> 
> Hi,
> 
> Yes. Exactly as the error message says, you need to compile GROMACS
> differently, with real MPI support. See
> http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
> 
> Mark
> 


Re: [gmx-users] REMD error

2016-05-13 Thread Mark Abraham
Hi,

Yes. Exactly as the error message says, you need to compile GROMACS
differently, with real MPI support. See
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
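
A minimal configure along these lines builds an MPI-enabled gmx_mpi binary
(only a sketch; the source version, compiler wrappers and install prefix
will differ on your machine):

tar xf gromacs-5.1.2.tar.gz && cd gromacs-5.1.2
mkdir build && cd build
cmake .. -DGMX_MPI=on -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx
make -j 8
make install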

Mark



[gmx-users] REMD error

2016-05-13 Thread YanhuaOuyang
Hi,
I am running an REMD simulation of a protein. When I submit "gmx mdrun -s md_0_${i}.tpr
-multi 46 -replex 1000 -reseed -1", it fails as below:
Fatal error:
mdrun -multi or -multidir are not supported with the thread-MPI library. Please
compile GROMACS with a proper external MPI library.
I have installed OpenMPI and GROMACS 5.1.
Does anyone know the problem?

Yours sincerely,
Ouyang