Re: [gmx-users] Problem with mpirun

2020-02-08 Thread Kevin Boyd
Hi,

Can you send us the output of gmx_mpi --version?

I typically see illegal-instruction errors when I compile GROMACS on one
architecture but accidentally run it on another.
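
A quick way to check (just a sketch, assuming a Linux node; the grep patterns
below are only illustrative) is to compare the SIMD set the binary was built
for with what the compute node's CPU actually reports:

$ gmx_mpi --version | grep -i "SIMD instructions"           # SIMD level the binary was built with
$ grep -o -E 'avx512f|avx2|sse4_1' /proc/cpuinfo | sort -u  # instruction sets this node's CPU supports

If the build targets something newer than the node supports, rebuilding on
that node (or with a lower -DGMX_SIMD setting at CMake time) should clear up
the illegal instruction.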

Kevin

On Thu, Feb 6, 2020 at 6:38 AM Seketoulie Keretsu wrote:

> Dear Sir/Madam,
>
> We just installed GROMACS 2019 today (MPI-compiled) and we're currently
> testing the commands with MPI. The installation went fine; however, we are
> having issues with the commands.
>
> $ echo $PATH
> /opt/vmd/1.9.3/bin:/opt/g_mmpbsa/bin:/opt/gromacs/2019.5/bin
>
> However, when we execute the commands we get the following response.
>
> mpirun -np 8 gmx_mpi mdrun -s md_0_10.tpr -o md_0_10.trr -cpi md_0_10.cpt
> -c md_0_10.gro -e md_0_10.edr -g md_0_10.log
>
>
> ===
> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> =   RANK 15 PID 23681 RUNNING AT biopo1
> =   KILLED BY SIGNAL: 4 (Illegal instruction)
>
> ===
>
> We get the same error for 'mpirun -np 16 gmx_mpi mdrun -h' or 'mpirun -np 8
> gmx_mpi mdrun -v -deffnm md_0_10'.
>
> What are we missing here? Please advise.
>
> Sincerely,
> Seket
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Problem with mpirun

2020-02-06 Thread Seketoulie Keretsu
Dear Sir/Madam,

We just installed GROMACS 2019 today (MPI-compiled) and we're currently
testing the commands with MPI. The installation went fine; however, we are
having issues with the commands.

$ echo $PATH
/opt/vmd/1.9.3/bin:/opt/g_mmpbsa/bin:/opt/gromacs/2019.5/bin

However, when we execute the commands we get the following response.

mpirun -np 8 gmx_mpi mdrun -s md_0_10.tpr -o md_0_10.trr -cpi md_0_10.cpt
-c md_0_10.gro -e md_0_10.edr -g md_0_10.log

===
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 15 PID 23681 RUNNING AT biopo1
=   KILLED BY SIGNAL: 4 (Illegal instruction)
===

We get the same error for 'mpirun -np 16 gmx_mpi mdrun -h' or 'mpirun -np 8
gmx_mpi mdrun -v -deffnm md_0_10'.

What are we missing here? Please advise.

Sincerely,
Seket


Re: [gmx-users] Problem with MPIRUN and Ewald

2017-02-23 Thread Matteo Busato
Actually, I have to make a correction: the .log file doesn't say anything.


From: Matteo Busato
Sent: Thursday, 23 February 2017, 10:34:52
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Problem with MPIRUN and Ewald


[gmx-users] Problem with MPIRUN and Ewald

2017-02-23 Thread Matteo Busato
Good morning to everyone,


I am trying to perform a simple classical dynamics simulation of a box of 500
TIP3P water molecules with an Ag+ ion at its center. However, when running the
dynamics with mpirun on 8 physical cores (no multithreading), it crashes and
the job output reports:


"The number of PME grid lines per rank along x is 3. But when using OpenMP 
threads, the number of grid lines per rank along x should be > = pme_order (6) 
or pmeorder-1. To resolve this issue, use fewer ranks along x (and possibly 
more along y and/or z) by specifyng -dd manually"


The md.log file says instead:


"Fatal error: the size of the domain decomposition grid (125) does not match 
the number of ranks (8). The total number of ranks is 8".


The strange thing is that it works on the CINECA supercomputer resources but
not on our local cluster.
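
For reference, the manual decomposition that the first message asks for would
look something like this (only a sketch; the grid values are illustrative and
'md' stands in for the actual file names):

mpirun -np 8 gmx_mpi mdrun -deffnm md -dd 2 2 2 -npme 0
mpirun -np 8 gmx_mpi mdrun -deffnm md -dd 2 2 1 -npme 4

The first line keeps all 8 ranks for the real-space domains (2 x 2 x 2, so only
2 along x); the second uses 4 domain-decomposition ranks plus 4 dedicated PME
ranks. Both -dd and -npme are standard mdrun options.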


This is the .mdp file I'm using:


; Run control
integrator   = sd   ; Langevin dynamics
tinit= 0
dt   = 0.002
nsteps   = 500000   ; 1 ns
nstcomm  = 100
; Output control
nstxout  = 500
nstvout  = 500
nstfout  = 0
nstlog   = 500
nstenergy= 500
nstxout-compressed   = 0
; Neighborsearching and short-range nonbonded interactions
cutoff-scheme= verlet
nstlist  = 20
ns_type  = grid
pbc  = xyz
rlist= 1.0
; Electrostatics
coulombtype  = PME
rcoulomb = 1.0
; van der Waals
vdwtype  = cutoff
vdw-modifier = potential-switch
rvdw-switch  = 0.9
rvdw = 1.0
; Apply long range dispersion corrections for Energy and Pressure
DispCorr  = EnerPres
; Spacing for the PME/PPPM FFT grid
fourierspacing   = 0.12
; EWALD/PME/PPPM parameters
pme_order= 6
ewald_rtol   = 1e-06
epsilon_surface  = 0
; Temperature coupling
; tcoupl is implicitly handled by the sd integrator
tc_grps  = system
tau_t= 1.0
ref_t= 300
; Pressure coupling is on for NPT
Pcoupl   = Parrinello-Rahman
tau_p= 1.0
compressibility  = 4.5e-05
ref_p= 1.0
; Free energy control stuff
free_energy  = yes
init_lambda_state    = 0
delta_lambda         = 0
calc_lambda_neighbors = 1       ; only immediate neighboring windows
; Vectors of lambda specified here
; Each combination is an index that is retrieved from init_lambda_state for each simulation
; init_lambda_state       0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16   17   18   19   20
vdw_lambdas          = 0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
coul_lambdas         = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90 1.00
; We are not transforming any bonded or restrained interactions
bonded_lambdas       = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
restraint_lambdas    = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
; Masses are not changing (particle identities are the same at lambda = 0 and lambda = 1)
mass_lambdas         = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
; Not doing simulated tempering here
temperature_lambdas  = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
; Options for the decoupling
sc-alpha             = 0.5
sc-coul              = no       ; linear interpolation of Coulomb (none in this case)
sc-power = 1.0
sc-sigma = 0.3
couple-moltype   = AG   ; name of moleculetype to decouple
couple-lambda0   = none ; no interactions at lambda=0
couple-lambda1   = vdw-q; turn on both vdW and Coulomb
couple-intramol  = no
nstdhdl  = 10
; Do not generate velocities
gen_vel  = no
; options for bonds
constraints  = h-bonds  ; we only have C-H bonds here
; Type of constraint algorithm
constraint-algorithm = lincs
; Constrain the starting configuration
; since we are continuing from NPT
continuation = yes
; Highest order in the expansion of the constraint coupling matrix
lincs-order  = 12


Thank you in advance for your suggestions.

Matteo Busato
