Re: [gmx-users] the crashed run

2013-12-08 Thread jkrieger
The _prev.cpt file might be OK. I think mdrun writes those in case the current .cpt causes problems.
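
A quick way to check what you have before deciding, assuming the run used -deffnm md so the checkpoints are md.cpt and md_prev.cpt (adjust to your actual file names):

ls -lt md.cpt md_prev.cpt         # see which checkpoint files exist and when they were last written
gmxdump -cp md_prev.cpt | head    # gmxdump's -cp option reads checkpoint files; confirms the backup is intact and shows its step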

On 8 Dec 2013, at 09:57, Mahboobeh Eslami mahboobeh.esl...@yahoo.com wrote:

 hi GMX users
 I am using GROMACS 4.6.3 (double precision) for a 20 ns simulation of a protein-ligand complex. My run crashed, and I am not sure whether to restart the run from the .cpt file or start a new production run.
 Is the result of a restarted run as reliable as that of a run that did not crash?
 
 thanks for your help


[gmx-users] Fw: the crashed run

2013-12-08 Thread Mahboobeh Eslami
thanks for your reply
Please suggest the best command to restart a crashed run.
I used the following command:
mdrun -v -deffnm md -cpi md
Is this command correct?
thanks




On Sunday, December 8, 2013 2:18 PM, jkrie...@mrc-lmb.cam.ac.uk 
jkrie...@mrc-lmb.cam.ac.uk wrote:
 
The _prev.cpt file might be OK. I think mdrun writes those in case the current .cpt causes problems.




Re: [gmx-users] Fwd: How can i run my system successfully?

2013-12-08 Thread Justin Lemkul



On 12/8/13 2:38 PM, bahareh khanoom wrote:

Dear friend,
Many thanks for your answers.

There is one important thing I must mention: after the energy minimization I
ran my system for 500 ps of NVT equilibration, and that run finished without
any problem. I then used the output file as the input for a 15 ns NVT
equilibration, but during the first 1 ns the run exited:

Step 79777, time 159.554 (ps)  LINCS WARNING
relative constraint deviation after LINCS:
rms 0.346309, max 1.958716 (between atoms 13 and 15)
bonds that rotated more than 30 degrees:
  atom 1 atom 2  angle  previous, current, constraint length
  13 15   90.00.1000   0.2959  0.1000
Wrote pdb files with previous and current coordinates

So, what is your suggestion for solving this problem?



The same as I suggested in my previous message.  You need to (scientifically!) 
diagnose the possible sources of error.
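
A hypothetical starting point for such a diagnosis (drg.top, drg_box.gro and nvt.mdp below are placeholder names, not files from this thread):

ls step*.pdb                                   # coordinate dumps mdrun wrote around the failing step; look at atoms 13 and 15 there
grompp -f nvt.mdp -c drg_box.gro -p drg.top -o drg_test.tpr
mdrun -deffnm drg_test                         # short run of the suspect molecule alone, to test its topology in isolation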


-Justin


thanks in advance
bahar


On Sat, Dec 7, 2013 at 4:15 PM, Justin Lemkul jalem...@vt.edu wrote:




On 12/7/13 4:51 AM, bahareh khanoom wrote:


Dear friend,
Thanks for your answer.

First I generated DRG.itp for the adsorbed molecules with the PRODRG server.
Next I optimized the adsorbed molecules with Gaussian
(b3lyp/6-311++g(d) opt Pop=ChelpG),
and at the end I replaced the charges in DRG.itp with the charges produced by
Gaussian.



How do these charges compare with existing charges for similar functional
groups in the force field?  AFAIK, there is no hard evidence as to which QM
method will give the best results with Gromos96 force fields.


  So, what is the reason?




It is hard to say at this point, but one solution is to simulate each
component individually to verify that their topologies are correct and that
your .mdp settings are appropriate (though they look reasonable on first
glance).

See also http://www.gromacs.org/Documentation/Terminology/Blowing_Up#Diagnosing_an_Unstable_System.


-Justin




--
==

Justin A. Lemkul, Ph.D.
Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441

==


Re: [gmx-users] load imbalance in multiple GPU simulations

2013-12-08 Thread Szilárd Páll
Hi,

That's unfortunate, but not unexpected. You are getting a 3x1x1
decomposition where the middle cell has most of the protein, hence
most of the bonded forces to calculate, while the ones on the side
have little (or none).

Currently, the only thing you can do is to try using more domains,
perhaps with manual decomposition (such that the initial domains will
contain as much protein as possible). This may not help much, though.
In extreme cases (e.g. small system), even using only two of the three
GPUs could improve performance.
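
As a rough sketch of what "more domains / manual decomposition" could look like on 3 GPUs + 12 cores (the rank counts and GPU id string below are illustrative assumptions, not a recommendation):

mdrun -deffnm md -ntmpi 6 -ntomp 2 -dd 6 1 1 -gpu_id 001122   # 6 PP ranks, 2 per GPU, manual 6x1x1 grid
mdrun -deffnm md -ntmpi 2 -ntomp 6 -gpu_id 01                 # the "only two of the three GPUs" variant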

Cheers,
--
Szilárd


On Sun, Dec 8, 2013 at 8:10 PM, yunshi11 . yunsh...@gmail.com wrote:
 Hi all,

 My conventional MD run (an equilibration) of a protein in TIP3P water reported "Average load imbalance: 59.4 %" when running with 3 GPUs + 12 CPU cores.
 So I wonder how to tweak parameters to optimize the performance.

 End of the log file reads:

 ..
  M E G A - F L O P S   A C C O U N T I N G

   NB=Group-cutoff nonbonded kernels   NxN=N-by-N cluster Verlet kernels
   RF=Reaction-Field   VdW=Van der Waals   QSTab=quadratic-spline table
   W3=SPC/TIP3p   W4=TIP4p (single or pairs)
   V&F=Potential and force   V=Potential only   F=Force only

   Computing:                          M-Number          M-Flops  % Flops
  ------------------------------------------------------------------------
   Pair Search distance check        78483.330336       706349.973     0.1
   NxN QSTab Elec. + VdW [F]      11321254.234368    464171423.609    95.1
   NxN QSTab Elec. + VdW [V&F]      114522.922048      6756852.401     1.4
   1,4 nonbonded interactions         1645.932918       148133.963     0.0
   Calc Weights                      25454.159073       916349.727     0.2
   Spread Q Bspline                 543022.060224      1086044.120     0.2
   Gather F Bspline                 543022.060224      3258132.361     0.7
   3D-FFT                          1138719.444112      9109755.553     1.9
   Solve PME                           353.129616        22600.295     0.0
   Reset In Box                        424.227500         1272.682     0.0
   CG-CoM                              424.397191         1273.192     0.0
   Bonds                               330.706614        19511.690     0.0
   Angles                             1144.322886       192246.245     0.0
   Propers                            1718.934378       393635.973     0.1
   Impropers                           134.502690        27976.560     0.0
   Pos. Restr.                         321.706434        16085.322     0.0
   Virial                              424.734826         7645.227     0.0
   Stop-CM                              85.184882          851.849     0.0
   P-Coupling                         8484.719691        50908.318     0.0
   Calc-Ekin                           848.794382        22917.448     0.0
   Lincs                               313.720420        18823.225     0.0
   Lincs-Mat                          1564.146576         6256.586     0.0
   Constraint-V                       8651.865815        69214.927     0.0
   Constraint-Vir                      417.065668        10009.576     0.0
   Settle                             2674.808325       863963.089     0.2
  ------------------------------------------------------------------------
   Total                                            487878233.910   100.0
  ------------------------------------------------------------------------


  D O M A I N   D E C O M P O S I T I O N   S T A T I S T I C S

   av. #atoms communicated per step for force:  2 x 63413.7
   av. #atoms communicated per step for LINCS:  2 x 3922.5

   Average load imbalance: 59.4 %
   Part of the total run time spent waiting due to load imbalance: 5.0 %


   R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

   Computing:          Nodes  Th.   Count   Wall t (s)     G-Cycles      %
  ------------------------------------------------------------------------
   Domain decomp.          3    4    2500       42.792     1300.947    4.4
   DD comm. load           3    4      31        0.000        0.014    0.0
   Neighbor search         3    4    2501       33.076     1005.542    3.4
   Launch GPU ops.         3    4      12        6.537      198.739    0.7
   Comm. coord.            3    4   47500       20.349      618.652    2.1
   Force                   3    4   50001       75.093     2282.944    7.8
   Wait + Comm. F          3    4   50001       24.850      755.482    2.6
   PME mesh                3    4   50001      597.925    18177.760   62.0
   Wait GPU nonlocal       3    4   50001        9.862      299.813    1.0
   Wait GPU local          3    4   50001        0.262        7.968    0.0
   NB X/F buffer ops.      3    4  195002       33.578     1020.833    3.5
   Write traj.             3    4      12        0.506       15.385    0.1
   Update                  3    4   50001       23.243      706.611    2.4
   Constraints             3    4   50001

Re: [gmx-users] load imbalance in multiple GPU simulations

2013-12-08 Thread yunshi11 .
Hi Szilard,




On Sun, Dec 8, 2013 at 2:48 PM, Szilárd Páll pall.szil...@gmail.com wrote:

 Hi,

 That's unfortunate, but not unexpected. You are getting a 3x1x1
 decomposition where the middle cell has most of the protein, hence
 most of the bonded forces to calculate, while the ones on the side
 have little (or none).

 From which values can I tell this?


 Currently, the only thing you can do is to try using more domains,
 perhaps with manual decomposition (such that the initial domains will
 contain as much protein as possible). This may not help much, though.
 In extreme cases (e.g. small system), even using only two of the three
 GPUs could improve performance

Cheers,
 --
 Szilárd



Re: [gmx-users] load imbalance in multiple GPU simulations

2013-12-08 Thread Szilárd Páll
There is no value that tells you exactly that, but there are clues. You can
check in the log file the ratio of the smallest and average (starting) cell
size (the value is also printed on the terminal with -v), and that will tell
you how much the DD shrank the middle cell. What you can also see is that if
you run with -dlb no you will get high load imbalance but equal GPU
(non-bonded) load, whereas with -dlb auto (or yes) you will get a much smaller
load on the second GPU (use nvidia-smi).
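
For example, with an output prefix of md (assumed):

mdrun -deffnm md -dlb no     # equal-sized domains: equal GPU load, high CPU load imbalance
mdrun -deffnm md -dlb auto   # DLB shrinks the middle cell: less non-bonded load on the middle GPU
nvidia-smi -l 1              # watch per-GPU utilization while mdrun is running
grep 'Average load imbalance' md.log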

Cheers,
--
Szilárd

PS: You can somehow dump the PDBs corresponding to the individual domains,
but I don't know exactly how to do it (and that's rather low-level stuff
anyway).

On Mon, Dec 9, 2013 at 1:02 AM, yunshi11 . yunsh...@gmail.com wrote:
 Hi Szilard,




 On Sun, Dec 8, 2013 at 2:48 PM, Szilárd Páll pall.szil...@gmail.com wrote:

 Hi,

 That's unfortunate, but not unexpected. You are getting a 3x1x1
 decomposition where the middle cell has most of the protein, hence
 most of the bonded forces to calculate, while the ones on the side
 have little (or none).

 From which values can I tell this?


 Currently, the only thing you can do is to try using more domains,
 perhaps with manual decomposition (such that the initial domains will
 contain as much protein as possible). This may not help much, though.
 In extreme cases (e.g. small system), even using only two of the three
 GPUs could improve performance

 Cheers,
 --
 Szilárd



Re: [gmx-users] Compilation issue with F77_FUNC functions?

2013-12-08 Thread Roland Schulz
Hi,

what compiler is used by mpicc? What do mpicc -showme and mpicc
--version show? Does it help to uncomment the line containing F77_FUNC in
src/config.h.cmakein?
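
Side note: -showme is the Open MPI spelling; for an MPICH-based mpicc the equivalent is usually plain -show. A quick way to gather that information, assuming the build directory is ./build:

mpicc -show                          # MPICH: print the underlying compiler and flags
mpicc --version
grep F77_FUNC build/src/config.h     # inspect the generated config.h; the exact path may differ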

Roland


On Sun, Dec 8, 2013 at 7:44 PM, Michael Shirts mrshi...@gmail.com wrote:

 So, I'm trying to compile with MPI using mpich3.  Previous
 installations worked, and installations without MPI worked. I'm
 getting errors like:

 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:53:
 warning: parameter names (without types) in function declaration
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:53:
 error: function ‘F77_FUNC’ is initialized like a variable
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:56:
 warning: braces around scalar initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:56:
 warning: (near initialization for ‘F77_FUNC’)
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 error: invalid initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 error: (near initialization for ‘F77_FUNC’)
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: excess elements in scalar initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: (near initialization for ‘F77_FUNC’)
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: excess elements in scalar initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: (near initialization for ‘F77_FUNC’)

 And it keeps on like that for a long while.

 Any suggestions?  Perhaps something wrong in the way mpicc is handling
 Fortran code?







-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309


Re: [gmx-users] Compilation issue with F77_FUNC functions?

2013-12-08 Thread Michael Shirts
Apologies! I certainly didn't post enough information.  mpicc is using:

gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)

Which is exactly what the non-mpi versions used, which did not have
this problem.

I ran cmake with:

cmake ../gromacs -DGMX_MPI=ON -DGMX_GPU=OFF -DGMX_DOUBLE=ON
-DGMX_CPU_ACCELERATION=AVX_256
-DCMAKE_INSTALL_PREFIX=/h3/n1/shirtsgroup/gromacs_46/install
-DFFTW_INCLUDE_DIR=/h3/n1/shirtsgroup/software/fft3w/include
-DFFTW_LIBRARY='/h3/n1/shirtsgroup/software/fft3w/lib/libfftw3.a;/h3/n1/shirtsgroup/software/fft3w/lib/libfftw3.so;/usr/lib64/libm.so;/share/apps/mpich3/gnu/lib/libmpich.so;'

All the extra libraries for FFTWF appear to be necessary for some
reason, but I don't think that's it . . .

Commenting out the line containing F77_FUNC in src/config.h.cmakein and
rerunning cmake did not change anything.
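
One thing that is sometimes worth trying (a guess, not a known fix for this error): configure a clean, empty build directory with the MPI wrapper handed to CMake explicitly, so that compiler detection and the actual build agree:

CC=mpicc CXX=mpicxx cmake ../gromacs -DGMX_MPI=ON -DGMX_DOUBLE=ON   # plus the remaining options from the command above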

On Sun, Dec 8, 2013 at 10:29 PM, Roland Schulz rol...@utk.edu wrote:
 Hi,

 what compiler is used by mpicc? What do mpicc -showme and mpicc
 --version show? Does it help to uncomment the line containing F77_FUNC in
 src/config.h.cmakein?

 Roland


 On Sun, Dec 8, 2013 at 7:44 PM, Michael Shirts mrshi...@gmail.com wrote:

 So, I'm trying to compile with MPI using mpich3.  Previous
 installations worked, and installations without MPI worked. I'm
 getting errors like:

 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:53:
 warning: parameter names (without types) in function declaration
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:53:
 error: function ‘F77_FUNC’ is initialized like a variable
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:56:
 warning: braces around scalar initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:56:
 warning: (near initialization for ‘F77_FUNC’)
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 error: invalid initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 error: (near initialization for ‘F77_FUNC’)
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: excess elements in scalar initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: (near initialization for ‘F77_FUNC’)
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: excess elements in scalar initializer
 /h3/n1/shirtsgroup/gromacs_46/gromacs/src/gmxlib/cinvsqrtdata.c:57:
 warning: (near initialization for ‘F77_FUNC’)

 And it keeps on like that for a long while.

 Any suggestions?  Perhaps something wrong in the way mpicc is handling
 Fortran code?
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.







 --
 ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
 865-241-1537, ORNL PO BOX 2008 MS6309
 --
 Gromacs Users mailing list

 * Please search the archive at 
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
 mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Compilation issue with F77_FUNC functions?

2013-12-08 Thread Michael Shirts
And FWIW, it's being compiled on CentOS 6.4.



[gmx-users] Fwd: Normal Mode Analysis

2013-12-08 Thread Sathish Kumar
Dear users,

I am trying to do NMA.

First I did an energy minimization using an .mdp file with the conjugate
gradient method.

Next I calculated the Hessian matrix using integrator = nm.

Then I calculated eigenvectors 7 to 100 using g_nmeig.

To analyze the eigenvectors I used the command

g_anaeig_d -s nm.tpr -f em-c.gro -v eigenvec.trr -eig eigenval.xvg -proj proj-ev1.xvg -extr ev1.pdb -rmsf rmsf-ev1.xvg -first 7 -last 7 -nframes 30

When visualizing ev1.pdb, I did not find any motion in the protein.

What mistake have I made?

Is this the correct procedure for doing NM analysis?

How should I analyze the eigenvalues obtained from the Hessian matrix?
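
For reference, the usual 4.6 workflow looks roughly like the following; this is a sketch with assumed default file names, not necessarily what was run here:

grompp_d -f nm.mdp -c em.gro -o nm.tpr            # nm.mdp with integrator = nm, on a tightly minimized structure
mdrun_d -s nm.tpr -mtx nm.mtx -deffnm nm          # writes the Hessian to nm.mtx
g_nmeig_d -f nm.mtx -s nm.tpr -first 7 -last 100 -v eigenvec.trr
g_anaeig_d -s nm.tpr -f em-c.gro -v eigenvec.trr -first 7 -last 7 -extr ev1.pdb -nframes 30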


regards
M.SathishKumar


Re: [gmx-users] Fw: the crashed run

2013-12-08 Thread Mahboobeh Eslami
thanks for your reply
Have a Great Day




On Monday, December 9, 2013 1:35 AM, Justin Lemkul jalem...@vt.edu wrote:
 


On 12/8/13 6:59 AM, Mahboobeh Eslami wrote:
 thanks for your reply
 Please suggest the best command to restart a crashed run.
 I used the following command:
 mdrun -v -deffnm md -cpi md
 Is this command correct?

If the desired .cpt file is md.cpt, then mdrun -deffnm md -cpi suffices.  If you
need a previous .cpt file (i.e. md_prev.cpt), then you must specify its name
explicitly.  Depending on why the run crashed, simply restarting may not be
worthwhile.  If the crash was due to the system blowing up, it is a waste of
time and you should investigate the crash.  If it was a hardware failure, disk
error, etc. then continuing from a checkpoint is fine.
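
Concretely, following the md.cpt / md_prev.cpt naming above:

mdrun -deffnm md -cpi md.cpt        # continue from the most recent checkpoint
mdrun -deffnm md -cpi md_prev.cpt   # or name the backup checkpoint explicitly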

-Justin




Re: [gmx-users] charge correction in topology file from PRODRG

2013-12-08 Thread Mahboobeh Eslami
Dear Justin,
thanks for your help




On Monday, December 9, 2013 1:44 AM, Justin Lemkul jalem...@vt.edu wrote:
 


On 12/8/13 6:18 AM, XAvier Periole wrote:

 For Gromos force fields you can also use the ATB server, from Alan Mark in Brisbane.
 It combines topology and non-bonded parameters all at once.

 It is not perfect but pretty good.


shameless self-promotion

For those wondering about some potential implications of topology errors and 
how 
to start going about fixing them, as well as an overview of some common QM 
calculations one can do to try to calculate charges for new groups: 
http://pubs.acs.org/doi/abs/10.1021/ci100335w

/shameless self-promotion

-Justin

 On Dec 8, 2013, at 8:27, Mahboobeh
 Eslami mahboobeh.esl...@yahoo.com wrote:

 hi all my friends
 I use PRODRG and Antechamber to build topology and coordinate files for my
 ligand separately.
 I want to use a GROMOS force field, so I must use the topology from the PRODRG
 server. Can I use the Antechamber topology to correct the charges in the
 topology file from PRODRG?
 In general, are there special principles essential for charge correction in a
 topology file from PRODRG?
 thanks for your help


[gmx-users] simulation using tabulated dihedral potentials.

2013-12-08 Thread Chandan Choudhury
I am facing a weird problem while using tabulated potentials (especially
tabulated dihedral potentials) in GROMACS 4.6.3. I am trying to generate a
coarse-grained (CG) trajectory using tabulated potentials generated from the
atomistic simulations with VOTCA.

When I do not use the tabulated dihedral potentials, the simulation seems to
proceed smoothly. The problem arises when the dihedral potentials are
incorporated. I have added a link to all the files generated during the runs
described below (https://www.dropbox.com/s/yumzdufuys1ifdr/with-dih.tar); it
also includes the input files. Here I list the problems and how they occur:

1. $mdrun_463 -v -nice 0 -cpt 1 -cpi state.cpt -nt 2 -pin on  (1st run; output: ver1.txt)
It progresses smoothly for a while (up to step 536400) and then shows the
following error:

A list of missing interactions:
           Tab. Dih. of     720 missing      1

Molecule type 'POLCAR'
the first 10 missing interactions, except for exclusions:
           Tab. Dih. atoms    6    7    8    9  global   858   859   860   861

---
Program mdrun_463, VERSION 4.6.3
Source code file: /tmp/gromacs-4.6.3/src/mdlib/domdec_top.c, line: 393

Fatal error:
1 of the 2400 bonded interactions could not be calculated because some
atoms involved moved further apart than the multi-body cut-off distance
(1.5 nm) or the two-body cut-off distance (1.5 nm), see option -rdd, for
pairs and tabulated bonds also see option -ddcheck
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Sometimes Life is Obscene (Black Crowes)

2. Then I continued the run using the state.cpt file:
   $mdrun_463 -v -nice 0 -cpt 1 -cpi state.cpt -nt 2 -pin on  (output: ver2.txt)

It progressed for a few more steps (up to step 973100). The following error was
produced here:

A list of missing interactions:
         Tab. Angles of     800 missing      1
           Tab. Dih. of     720 missing      1

Molecule type 'POLCAR'
the first 10 missing interactions, except for exclusions:
           Tab. Dih. atoms    9   10   11   12  global   237   238   239   240
         Tab. Angles atoms   10   11   12       global   238   239   240

Back Off! I just backed up dd_dump_err_0_n1.pdb to
./#dd_dump_err_0_n1.pdb.1#

Back Off! I just backed up dd_dump_err_0_n0.pdb to
./#dd_dump_err_0_n0.pdb.1#

---
Program mdrun_463, VERSION 4.6.3
Source code file: /tmp/gromacs-4.6.3/src/mdlib/domdec_top.c, line: 393

Fatal error:
2 of the 2400 bonded interactions could not be calculated because some
atoms involved moved further apart than the multi-body cut-off distance
(1.5 nm) or the two-body cut-off distance (1.5 nm), see option -rdd, for
pairs and tabulated bonds also see option -ddcheck
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

She's a Good Sheila Bruce (Monty Python)

3. I again continued the simulation:
   $mdrun_463 -v -nice 0 -cpt 1 -cpi state.cpt -nt 2 -pin on  (output: ver3.txt)

The simulation proceeded further, up to step 3692700, and then crashed again
with the following output:

A list of missing interactions:
         Tab. Angles of     800 missing      1
           Tab. Dih. of     720 missing      2

Molecule type 'POLCAR'
the first 10 missing interactions, except for exclusions:
           Tab. Dih. atoms    6    7    8    9  global   306   307   308   309
         Tab. Angles atoms    7    8    9       global   307   308   309
           Tab. Dih. atoms    7    8    9   10  global   307   308   309   310

Back Off! I just backed up dd_dump_err_0_n0.pdb to
./#dd_dump_err_0_n0.pdb.2#

Back Off! I just backed up dd_dump_err_0_n1.pdb to
./#dd_dump_err_0_n1.pdb.2#

---
Program mdrun_463, VERSION 4.6.3
Source code file: /tmp/gromacs-4.6.3/src/mdlib/domdec_top.c, line: 393

Fatal error:
3 of the 2400 bonded interactions could not be calculated because some
atoms involved moved further apart than the multi-body cut-off distance
(1.5 nm) or the two-body cut-off distance (1.5 nm), see option -rdd, for
pairs and tabulated bonds also see option -ddcheck
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---

Though the path of the comet is sure, it's constitution is not (Peter
Hammill)

In this way the simulation proceeds. In the end it finally stops at step
71768380 and no further continuation is possible.
The output at this stage:


Started mdrun on node 0 Mon Dec  9 12:24:52 2013

   Step   Time Lambda
   71768380   358841.90.0


   Energies (kJ/mol)
     Tab. Bonds    Tab. Angles      Tab. Dih.        LJ (SR)