Thank you for your email, sir.
On Wed, Sep 4, 2019 at 2:42 PM Mark Abraham
wrote:
Hi,
On Wed, 4 Sep 2019 at 10:47, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:
Per the docs, this is
Respected Mark Abraham,
The command line and the job submission script are given below:
#!/bin/bash
#SBATCH -n 130 # Number of cores
#SBATCH -N 5 # no of nodes
#SBATCH -t 0-20:00:00 # Runtime in D-HH:MM
#SBATCH -p cpu # Partition to submit to
#SBATCH -o
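For 65 replicas on 130 cores, the scheduler request and the mpirun rank count have to agree. A minimal sketch, assuming one MPI rank per replica with two OpenMP threads each, and directories equil0..equil64 each holding an identically named remd.tpr (the directory and file names are assumptions, not taken from the original script):

```shell
#!/bin/bash
#SBATCH -n 65                 # one MPI rank per replica
#SBATCH --cpus-per-task=2     # 65 x 2 = 130 cores total
#SBATCH -t 0-20:00:00         # Runtime in D-HH:MM
#SBATCH -p cpu                # Partition to submit to

# start exactly as many ranks as replicas, two OpenMP threads per rank
mpirun -np 65 gmx_mpi mdrun -multidir equil{0..64} -ntomp 2 -replex 1000 -deffnm remd
```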
Hi,
We need to see your command line in order to have a chance of helping.
Mark
On Wed, 4 Sep 2019 at 05:46, Bratin Kumar Das <177cy500.bra...@nitk.edu.in>
wrote:
Dear all,
I am running one REMD simulation with 65 replicas. I am using
130 cores for the simulation. I am getting the following error.
Fatal error:
Your choice of number of MPI ranks and amount of resources results in using 16
OpenMP threads per rank, which is most likely
Thank you
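That error is simple arithmetic: mdrun divides the cores it sees by the MPI ranks that were actually started. A sketch of that division; the rank count of 8 is an assumption, inferred only because about 8 ranks on 130 cores would yield the reported 16 threads per rank:

```shell
# cores the allocation provides (from the job script above)
CORES=130
# MPI ranks mpirun actually started (assumption, not from the thread)
RANKS=8
# threads per rank mdrun infers by integer division
echo "$((CORES / RANKS)) OpenMP threads per rank"
```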
On Mon 29 Jul, 2019, 6:45 PM Justin Lemkul, wrote:
On 7/29/19 7:55 AM, Bratin Kumar Das wrote:
Hi Szilard,
Thank you for your reply. I rectified as you said. For trial
purposes I took 8 nodes or 16 nodes (-np 8) to test whether it is running
or not. I gave the following command to run REMD:
*mpirun -np 8 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd*
After giving the
This is an MPI / job scheduler error: you are requesting 2 nodes with
20 processes per node (=40 total), but starting 80 ranks.
--
Szilárd
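In scheduler terms, the fix is to make the allocation match the rank count. A sketch for 80 ranks at 20 tasks per node (the node count of 4 is an assumption; the mdrun command reuses the one from the thread):

```shell
#SBATCH -N 4                   # 4 nodes
#SBATCH --ntasks-per-node=20   # 4 x 20 = 80 MPI ranks available

# then start exactly that many ranks
mpirun -np 80 gmx_mpi_d mdrun -v -multi 8 -replex 1000 -deffnm remd
```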
On Thu, Jul 18, 2019 at 8:33 AM Bratin Kumar Das
<177cy500.bra...@nitk.edu.in> wrote:
Hi,
I am running an REMD simulation in GROMACS 2016.5. After generating the
multiple .tpr files, one in each directory, with the following command
*for i in {0..7}; do cd equil$i; gmx grompp -f equil${i}.mdp -c em.gro -p
topol.top -o remd$i.tpr -maxwarn 1; cd ..; done*
I run *mpirun -np 80 gmx_mpi mdrun -s
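For reference, a complete launch matching those directories might look like the following. This is a sketch, not the original (truncated) command: it assumes the .tpr in every directory is given the same name, which is what -multidir expects, and that 80 ranks (10 per replica) are intended:

```shell
# regenerate with an identical output name in each directory
for i in {0..7}; do
  (cd equil$i && gmx grompp -f equil${i}.mdp -c em.gro -p topol.top -o remd.tpr -maxwarn 1)
done
# rank count must be a multiple of the replica count: 80 = 8 x 10
mpirun -np 80 gmx_mpi mdrun -multidir equil{0..7} -deffnm remd -replex 1000
```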
Hi,
If you've configured with GMX_MPI, then the resulting GROMACS binary is
called gmx_mpi, so mpirun -np X gmx_mpi mdrun -multi ...
Mark
On Fri, May 13, 2016 at 10:09 AM YanhuaOuyang <15901283...@163.com> wrote:
Hi,
I have installed OpenMPI 1.10, and I can run mpirun. When I installed
GROMACS 5.1, I configured with -DGMX_MPI=on.
And the error still happens.
On May 13, 2016, at 3:59 PM, Mark Abraham wrote:
Hi,
Yes. Exactly as the error message says, you need to compile GROMACS
differently, with real MPI support. See
http://manual.gromacs.org/documentation/5.1.2/user-guide/mdrun-features.html#running-multi-simulations
Mark
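A minimal rebuild along those lines might look like this (a sketch; the source directory and install prefix are assumptions):

```shell
cd gromacs-5.1.2/build
cmake .. -DGMX_MPI=on -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-mpi
make -j 4 && make install
# the MPI-enabled binary is installed as gmx_mpi and launched via mpirun
```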
On Fri, May 13, 2016 at 9:47 AM YanhuaOuyang <15901283...@163.com> wrote:
Hi,
I am running a REMD of a protein. When I submit "gmx mdrun -s md_0_${i}.tpr
-multi 46 -replex 1000 -reseed -1", it fails as below:
Fatal error:
mdrun -multi or -multidir are not supported with the thread-MPI library. Please
compile GROMACS with a proper external MPI library.
I have