[gmx-users] mdrun failed

2017-05-23 Thread fatemeh ramezani
Hi dear gmx-users,

I want to simulate the interaction between a gold surface and a protein with the GolP-CHARMM force field. In the MD step, mdrun failed after 7 ps without any apparent reason. md.mdp contains:

title                    = gold
cpp                      = cpp
include                  =

; RUN CONTROL PARAMETERS
integrator               = md
;comm_mode               = Linear
;nstcomm                 = 1
;comm_grps               = Protein non-Protein
;nstcalcenergy           = 1

; ENERGY MINIMIZATION OPTIONS
; Force tolerance and initial step-size
emtol                    = 500.0
emstep                   = 0.001
tinit                    = 0.000
dt                       = 0.001
nsteps                   = 10

; OUTPUT CONTROL OPTIONS
nstxout                  = 3000
nstvout                  = 3000
nstfout                  = 0
nstlog                   = 1
nstenergy                = 3000
;nsttcouple              = 5
; Output frequency and precision for xtc file
nstxtcout                = 3000
xtc-precision            = 3000

; NEIGHBOR SEARCHING PARAMETERS
; Periodic boundary conditions: xyz (default), no (vacuum)
pbc                      = xyz
periodic_molecules       = yes
; nblist cut-off
rlist                    = 1.10

; OPTIONS FOR ELECTROSTATICS AND VDW
; Method for doing electrostatics
coulombtype              = PME
r_coulomb                = 1.1
;pme_order               = 6
;fourierspacing          = 0.10
ewald_rtol               = 1e-06
ewald_geometry           = 3d

; Method for doing Van der Waals
vdw-type                 = switch
; cut-off lengths
rvdw-switch              = 0.90
rvdw                     = 1.10

; OPTIONS FOR BONDS
constraints              = all-bonds
constraint-algorithm     = Lincs
lincs-order              = 8
lincs-iter               = 12
; LINCS will write a warning to stderr if in one step a bond
; rotates over more degrees than
lincs-warnangle          = 90

; OPTIONS FOR WEAK COUPLING ALGORITHMS
; Temperature coupling
Tcoupl                   = Nose-Hoover
nhchainlength            = 1
; Groups to couple separately
tc-grps                  = Protein Non-Protein
; Time constant (ps) and reference temperature (K)
tau-t                    = 0.2 0.2
ref-t                    = 310 310

; Non-equilibrium MD stuff
freezegrps               = slab
freezedim                = Y Y Y

 


The mdrun command is:

nohup mdrun -s md.tpr -o md.trr -c md.pdb -g md.log -e md.edr -nt 3 -dd 1 -pin on -pinoffset 1 &

 

The last lines of md.log are:

           Step           Time         Lambda
           7460        7.46000        0.0

   Energies (kJ/mol)
          U-B    Proper Dih.  Improper Dih.      CMAP Dih.          LJ-14
  5.39081e+04    1.47221e+04    3.15826e+03   -1.85050e+03    1.80904e+04
   Coulomb-14        LJ (SR)   Coulomb (SR)   Coul. recip.      Potential
  2.97824e+05    2.17254e+12   -6.91585e+06    3.01612e+04    2.17254e+12
  Kinetic En.   Total Energy  Conserved En.    Temperature Pressure (bar)
  7.71065e+14    7.73237e+14    9.41950e+16    2.61554e+11    2.41578e+12
   Constr. rmsd
  1.15453e-01

 

The last lines of nohup.out are:

Changing nstlist from 10 to 40, rlist from 1.1 to 1.1
Using 1 MPI thread
Using 3 OpenMP threads
Applying core pinning offset 1
Setting the maximum number of constraint warnings to -1

Back Off! I just backed up fws_md3.trr to ./#fws_md3.trr.20#
Back Off! I just backed up traj_comp.xtc to ./#traj_comp.xtc.22#
Back Off! I just backed up md3.edr to ./#md3.edr.20#

starting mdrun 'Protein in water'
10 steps, 100.0 ps.

 

Can you help me prevent mdrun from failing?

Thank you very much

Fatemeh Ramezani

[gmx-users] mdrun failed

2014-05-15 Thread Albert
Hello,

I tried to submit a GROMACS job with the command:

mpirun -np 2 mdrun_mpi -s md.tpr -g -v

but it failed with these messages:

--
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  -- Returned value -1 instead of OPAL_SUCCESS
--
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***and potentially your MPI job)
[cudaA:5978] Local abort before MPI_INIT completed successfully; not able
to aggregate error messages, and not able to guarantee that all other
processes were killed!

--


I first compiled Open MPI with this command:

./configure --prefix=/soft/openmpi-1.7.5_intel CC=icc F77=ifort FC=ifort
CXX=icpc

Then GROMACS 4.6.5 was compiled with this command:

env CC=icc F77=ifort CXX=icpc
CMAKE_PREFIX_PATH=/soft/intel/mkl/include/fftw:/soft/openmpi-1.7.5 cmake ..
-DBUILD_SHARED_LIB=OFF -DBUILD_TESTING=OFF
-DCMAKE_INSTALL_PREFIX=/soft/gromacs-4.6.5 -DGMX_FFT_LIBRARY=mkl
-DGMX_MPI=ON -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda



Thank you very much.

Albert


Re: [gmx-users] mdrun failed

2014-05-15 Thread Mark Abraham
First, check that your MPI installation works on a simple test program. Then check that the GROMACS cmake picked up the MPI you expected, and that it's really the one you're using at run time.
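
For example, something like this (a minimal sketch, not tested on your machine; the mpicc/mpirun paths are an assumption based on the --prefix from your Open MPI configure line):

cat > mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* the call that aborts in your log */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
/soft/openmpi-1.7.5_intel/bin/mpicc mpi_hello.c -o mpi_hello
/soft/openmpi-1.7.5_intel/bin/mpirun -np 2 ./mpi_hello

If even that aborts in opal_init / MPI_Init, the problem is in the Open MPI installation or its environment, not in GROMACS.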

Mark




Re: [gmx-users] mdrun failed

2014-05-15 Thread Albert

Hello Mark,

Thanks a lot for the reply.

The MPI works fine on my machine when I run another program.

How can I check whether the GROMACS cmake picked up the MPI that I expected? I've already specified it with this option:


CMAKE_PREFIX_PATH=/soft/intel/mkl/include/fftw:/soft/openmpi-1.7.5

thx a lot

Albert






Re: [gmx-users] mdrun failed

2014-05-15 Thread Mark Abraham
Try mdrun -version and have a look at the build information it reports. Do a make VERBOSE=1 and check that the compile line doesn't refer to some other MPI. Inspect CMakeCache.txt in the cmake build directory.
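
For example (a sketch of the kind of checks I mean; it assumes mdrun_mpi and mpirun are on your PATH and that the grep is run in the cmake build directory):

# what the binary reports it was built with
mdrun_mpi -version
# what cmake cached about MPI
grep -i mpi CMakeCache.txt
# which MPI library the binary is actually linked against
ldd $(which mdrun_mpi) | grep -i mpi
# which mpirun you launch with
which mpirun

The libmpi that ldd reports and the mpirun you launch with should come from the same Open MPI installation.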

Mark



-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.