I am trying to simulate a system with a phosphorylated threonine residue
using GROMACS 4.5.5. I took the parameters for the TPO residue from the
gromos43a1p force field and added them to the gromos43a1 force field,
following the steps provided on the GROMACS website.
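
In outline, the additions were along these lines (a sketch only: tpo.rtp,
tpo.hdb, and tpo.atp are hypothetical stand-ins for the entries copied out
of gromos43a1p, and the directory layout is that of a stock 4.5.x
installation):

# Append the TPO entries from gromos43a1p to the stock gromos43a1 files
# (tpo.* are hypothetical files holding the copied entries)
cat tpo.rtp >> gromos43a1.ff/aminoacids.rtp  # building block: atoms, charges, bonds
cat tpo.hdb >> gromos43a1.ff/aminoacids.hdb  # hydrogen database entry
cat tpo.atp >> gromos43a1.ff/atomtypes.atp   # new phosphate atom types
# plus the matching parameter entries in ffnonbonded.itp and ffbonded.itp

With these in place, the following steps all completed successfully: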

$bin/pdb2gmx -ignh -ff gromos43a1 -f odh.pdb -o odh.pdb -p odh.top -water spce
$bin/editconf -bt octahedron -f odh.pdb -o odh-b4sol.pdb -d 1.0
$bin/genbox -cp odh-b4sol.pdb -cs spc216.gro -o odh-b4ion.pdb -p odh.top
$bin/grompp -f em.mdp -c odh-b4ion.pdb -p odh.top -o ion.tpr -maxwarn 5
$bin/genion -s ion.tpr -o odh-b4em.pdb -neutral -conc 0.15 -p odh.top -g ion.log
$bin/grompp -f em.mdp -c odh-b4em.pdb -p odh.top -o em.tpr -maxwarn 5

I then tried to carry out the energy minimization with the command below
(the em.mdp file is attached).
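
For readers without the attachment, here is a sketch of what em.mdp appears
to contain, reconstructed from the Input Parameters dump in the log below
(an inference, not the attached file itself):

; em.mdp (reconstructed from the log; the attached file is authoritative)
integrator    = md         ; as reported in the log's Input Parameters
nsteps        = 400
dt            = 0.002
nstlist       = 10
ns_type       = grid
rlist         = 1.0
coulombtype   = PME
rcoulomb      = 1.0
vdwtype       = cut-off
rvdw          = 1.4
optimize_fft  = yes
emtol         = 1000       ; the log's em_tol; only used by the minimizers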

$bin/mdrun -v -deffnm em

This run failed with the following error:

Fatal error:
6 particles communicated to PME node 4 are more than 2/3 times the cut-off out of the domain decomposition cell of their charge group in dimension x.
This usually means that your system is not well equilibrated.

Repeating the command reports a different number of particles each time. I
have attached a log file from mdrun. I repeated all of the steps with the
same protein, except with a THR in place of the TPO, and found no errors.
The initial structure had been minimized in CHARMM, so I downloaded the
original PDB and tested it, only to find the same error. I then tried
simulating a single TPO with the same set of steps (the editconf option -d
was changed from 1.0 to 5.0) and again found the same error. Finally, I
tried using the gromos43a1p force field more directly: I used the
gromos43a1 .hdb (with an added entry for TPO; an illustrative sketch of
such an entry follows this paragraph) and .atp files in place of the ones
found in the gromos43a1p directory, because of previously discussed issues
with their format, and I also added ions.itp, spce.itp, and ff_dum.itp to
the directory. However, after running the same commands on the single TPO
residue, I received the same error.
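
For reference, a minimal TPO .hdb entry looks like the following (an
illustrative sketch based on the standard THR entry; the actual atom names
depend on the gromos43a1p naming, and it assumes the phosphorylated side
chain carries no hydroxyl hydrogen, so only the backbone amide hydrogen is
built):

; aminoacids.hdb addition (illustrative)
TPO     1
1       1       H       N       -C      CA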

Alex Cumberworth
Log file opened on Tue Jun  5 16:13:08 2012
Host: sophie.chibi.ubc.ca  pid: 28093  nodeid: 0  nnodes:  1
The Gromacs distribution was built Tue May 15 17:11:59 PDT 2012 by [email protected] (Linux 2.6.18-194.17.4.el5 x86_64)


                         :-)  G  R  O  M  A  C  S  (-:

                               Grunge ROck MAChoS

                            :-)  VERSION 4.5.5  (-:

        Written by Emile Apol, Rossen Apostolov, Herman J.C. Berendsen,
      Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra, 
        Gerrit Groenhof, Peter Kasson, Per Larsson, Pieter Meulenhoff, 
           Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz, 
                Michael Shirts, Alfons Sijbers, Peter Tieleman,

               Berk Hess, David van der Spoel, and Erik Lindahl.

       Copyright (c) 1991-2000, University of Groningen, The Netherlands.
            Copyright (c) 2001-2010, The GROMACS development team at
        Uppsala University & The Royal Institute of Technology, Sweden.
            check out http://www.gromacs.org for more information.

         This program is free software; you can redistribute it and/or
          modify it under the terms of the GNU General Public License
         as published by the Free Software Foundation; either version 2
             of the License, or (at your option) any later version.

                  :-)  /home/alexc/bin/gromacs/bin/mdrun  (-:


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------


++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------

Input Parameters:
   integrator           = md
   nsteps               = 400
   init_step            = 0
   ns_type              = Grid
   nstlist              = 10
   ndelta               = 2
   nstcomm              = 10
   comm_mode            = Linear
   nstlog               = 100
   nstxout              = 100
   nstvout              = 100
   nstfout              = 0
   nstcalcenergy        = 10
   nstenergy            = 100
   nstxtcout            = 0
   init_t               = 0
   delta_t              = 0.002
   xtcprec              = 1000
   nkx                  = 64
   nky                  = 64
   nkz                  = 64
   pme_order            = 4
   ewald_rtol           = 1e-05
   ewald_geometry       = 0
   epsilon_surface      = 0
   optimize_fft         = TRUE
   ePBC                 = xyz
   bPeriodicMols        = FALSE
   bContinuation        = FALSE
   bShakeSOR            = FALSE
   etc                  = No
   nsttcouple           = -1
   epc                  = No
   epctype              = Isotropic
   nstpcouple           = -1
   tau_p                = 1
   ref_p (3x3):
      ref_p[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      ref_p[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      ref_p[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   compress (3x3):
      compress[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      compress[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      compress[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   refcoord_scaling     = No
   posres_com (3):
      posres_com[0]= 0.00000e+00
      posres_com[1]= 0.00000e+00
      posres_com[2]= 0.00000e+00
   posres_comB (3):
      posres_comB[0]= 0.00000e+00
      posres_comB[1]= 0.00000e+00
      posres_comB[2]= 0.00000e+00
   andersen_seed        = 815131
   rlist                = 1
   rlistlong            = 1.4
   rtpi                 = 0.05
   coulombtype          = PME
   rcoulomb_switch      = 0
   rcoulomb             = 1
   vdwtype              = Cut-off
   rvdw_switch          = 0
   rvdw                 = 1.4
   epsilon_r            = 1
   epsilon_rf           = 1
   tabext               = 1
   implicit_solvent     = No
   gb_algorithm         = Still
   gb_epsilon_solvent   = 80
   nstgbradii           = 1
   rgbradii             = 1
   gb_saltconc          = 0
   gb_obc_alpha         = 1
   gb_obc_beta          = 0.8
   gb_obc_gamma         = 4.85
   gb_dielectric_offset = 0.009
   sa_algorithm         = Ace-approximation
   sa_surface_tension   = 2.05016
   DispCorr             = No
   free_energy          = no
   init_lambda          = 0
   delta_lambda         = 0
   n_foreign_lambda     = 0
   sc_alpha             = 0
   sc_power             = 0
   sc_sigma             = 0.3
   sc_sigma_min         = 0.3
   nstdhdl              = 10
   separate_dhdl_file   = yes
   dhdl_derivatives     = yes
   dh_hist_size         = 0
   dh_hist_spacing      = 0.1
   nwall                = 0
   wall_type            = 9-3
   wall_atomtype[0]     = -1
   wall_atomtype[1]     = -1
   wall_density[0]      = 0
   wall_density[1]      = 0
   wall_ewald_zfac      = 3
   pull                 = no
   disre                = No
   disre_weighting      = Conservative
   disre_mixed          = FALSE
   dr_fc                = 1000
   dr_tau               = 0
   nstdisreout          = 100
   orires_fc            = 0
   orires_tau           = 0
   nstorireout          = 100
   dihre-fc             = 1000
   em_stepsize          = 0.01
   em_tol               = 1000
   niter                = 20
   fc_stepsize          = 0
   nstcgsteep           = 1000
   nbfgscorr            = 10
   ConstAlg             = Lincs
   shake_tol            = 0.0001
   lincs_order          = 4
   lincs_warnangle      = 30
   lincs_iter           = 1
   bd_fric              = 0
   ld_seed              = 1993
   cos_accel            = 0
   deform (3x3):
      deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
      deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
   userint1             = 0
   userint2             = 0
   userint3             = 0
   userint4             = 0
   userreal1            = 0
   userreal2            = 0
   userreal3            = 0
   userreal4            = 0
grpopts:
   nrdf:       88230
   ref_t:           0
   tau_t:           0
anneal:          No
ann_npoints:           0
   acc:	           0           0           0
   nfreeze:           N           N           N
   energygrp_flags[  0]: 0
   efield-x:
      n = 0
   efield-xt:
      n = 0
   efield-y:
      n = 0
   efield-yt:
      n = 0
   efield-z:
      n = 0
   efield-zt:
      n = 0
   bQMMM                = FALSE
   QMconstraints        = 0
   QMMMscheme           = 0
   scalefactor          = 1
qm_opts:
   ngQM                 = 0

Initializing Domain Decomposition on 24 nodes
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
    two-body bonded interactions: 0.586 nm, LJ-14, atoms 1034 1052
  multi-body bonded interactions: 0.586 nm, Proper Dih., atoms 1034 1052
Minimum cell size due to bonded interactions: 0.645 nm
Guess for relative PME load: 0.33
Will use 16 particle-particle and 8 PME only nodes
This is a guess, check the performance at the end of the log file
Using 8 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 16 cells with a minimum initial size of 0.806 nm
The maximum allowed number of cells is: X 7 Y 7 Z 7
Domain decomposition grid 4 x 4 x 1, separate PME nodes 8
PME domain decomposition: 4 x 2 x 1
Interleaving PP and PME nodes
This is a particle-particle only node

Domain decomposition nodeid 0, coordinates 0 0 0

Table routines are used for coulomb: TRUE
Table routines are used for vdw:     FALSE
Will do PME sum in reciprocal space.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen 
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------

Will do ordinary reciprocal space Ewald sum.
Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
Cut-off's:   NS: 1   Coulomb: 1   LJ: 1.4
System total charge: 0.000
Generated table with 1200 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ12.
Tabscale = 500 points/nm
Generated table with 1200 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1200 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1200 data points for 1-4 LJ12.
Tabscale = 500 points/nm

Enabling SPC-like water optimization for 9325 molecules.

Configuring nonbonded kernels...
Configuring standard C nonbonded kernels...
Testing x86_64 SSE2 support... present.


Removing pbc first time

Linking all bonded interactions to atoms
There are 4710 inter charge-group exclusions,
will use an extra communication step for exclusion forces for PME

The initial number of communication pulses is: X 1 Y 1
The initial domain decomposition cell size is: X 1.55 nm Y 1.50 nm

The maximum allowed distance for charge groups involved in interactions is:
                 non-bonded interactions           1.400 nm
            two-body bonded interactions  (-rdd)   1.400 nm
          multi-body bonded interactions  (-rdd)   1.400 nm

When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 2 Y 2
The minimum size for domain decomposition cells is 1.045 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.68 Y 0.69
The maximum allowed distance for charge groups involved in interactions is:
                 non-bonded interactions           1.400 nm
            two-body bonded interactions  (-rdd)   1.400 nm
          multi-body bonded interactions  (-rdd)   1.045 nm


Making 2D domain decomposition grid 4 x 4 x 1, home cell index 0 0 0

Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest
There are: 29411 Atoms
Charge group distribution at step 0: 642 617 633 635 650 609 594 628 634 579 616 624 633 633 630 628
Grid: 10 x 10 x 9 cells
Initial temperature: 0 K

Started mdrun on node 0 Tue Jun  5 16:13:08 2012

           Step           Time         Lambda
              0        0.00000        0.00000

   Energies (kJ/mol)
           Bond        G96Bond          Angle       G96Angle    Proper Dih.
    5.32700e+02    8.46056e+02    6.09061e+01    1.06330e+03    1.02962e+03
  Improper Dih.          LJ-14     Coulomb-14        LJ (SR)        LJ (LR)
    6.00495e+01    5.77107e+02    1.50086e+04    3.00573e+05   -1.33813e+03
   Coulomb (SR)   Coul. recip.      Potential    Kinetic En.   Total Energy
   -4.42976e+05   -4.81722e+04   -1.72736e+05    1.00506e+07    9.87782e+06
    Temperature Pressure (bar)
    2.74010e+04    4.34738e+05


-------------------------------------------------------
Program mdrun, VERSION 4.5.5
Source code file: pme.c, line: 538

Fatal error:
4 particles communicated to PME node 4 are more than 2/3 times the cut-off out of the domain decomposition cell of their charge group in dimension x.
This usually means that your system is not well equilibrated.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

"These are Ideas, They are Not Lies" (Magnapop)

Attachment: em.mdp
Description: application/mdp
