Re: [gmx-users] The maxwarn fatal errors

2020-01-06 Thread Justin Lemkul



On 1/6/20 9:40 PM, 변진영 wrote:

Dear everyone, Happy New year!
I have gone through the Justin Lemkul tutorial for Umbrella Sampling. During 
the tutorial, when I tried to run the command line:
   gmx grompp -f md_umbrella.mdp -c npt0.gro -t npt0.cpt -p topol.top -r 
npt0.gro -n index.ndx -o umbrella0.tpr

I got two warnings, which resulted in a fatal error:
   Fatal error:
   Too many warnings (2).
If you are sure all warnings are harmless, use the -maxwarn option.

And the warnings are:
  WARNING 1 [file topol.top, line 56]:
   The GROMOS force fields have been parametrized with a physically
   incorrect multiple-time-stepping scheme for a twin-range cut-off. When
   used with a single-range cut-off (or a correct Trotter
   multiple-time-stepping scheme), physical properties, such as the density,
   might differ from the intended values. Since there are researchers
   actively working on validating GROMOS with modern integrators we have not
   yet removed the GROMOS force fields, but you should be aware of these
   issues and check if molecules in your system are affected before
   proceeding. Further information is available at
   https://redmine.gromacs.org/issues/2884 , and a longer explanation of our
   decision to remove physically incorrect algorithms can be found at
   https://doi.org/10.26434/chemrxiv.11474583.v1 .


WARNING 2 [file md_umbrella.mdp]:
   With Nose-Hoover T-coupling and Parrinello-Rahman p-coupling, tau-p (1)
   should be at least twice as large as tau-t (1) to avoid resonances

I worked around this problem by using the -maxwarn option, but I am wondering 
whether these warnings can safely be ignored.
What do you think happened? Any idea on what caused this problem?


The first warning is very verbose and provides you with substantial 
justification and background reading.


As for the second, change tau-p to 2 as suggested. Note that I have not 
made any attempt to update the tutorials for the 2020 version, and they 
are only guaranteed to be compatible with GROMACS 2018.x versions.
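
Concretely, that is a one-value change in md_umbrella.mdp; something like the 
following (only tau-p changes, and the surrounding lines are shown purely for 
context based on the warning text above, not copied from the tutorial files):

   tcoupl   = Nose-Hoover
   tau-t    = 1.0  1.0          ; one value per tc-grps entry
   pcoupl   = Parrinello-Rahman
   tau-p    = 2.0               ; was 1.0; >= 2*tau-t avoids the resonance warning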


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] The maxwarn fatal errors

2020-01-06 Thread 변진영
Dear everyone, Happy New year!
I have gone through the Justin Lemkul tutorial for Umbrella Sampling. During 
the tutorial, when I tried to run the command line:
  gmx grompp -f md_umbrella.mdp -c npt0.gro -t npt0.cpt -p topol.top -r 
npt0.gro -n index.ndx -o umbrella0.tpr

I got two warnings, which resulted in a fatal error:
  Fatal error:
  Too many warnings (2).
   If you are sure all warnings are harmless, use the -maxwarn option.

And the warnings are:
 WARNING 1 [file topol.top, line 56]:
  The GROMOS force fields have been parametrized with a physically
  incorrect multiple-time-stepping scheme for a twin-range cut-off. When
  used with a single-range cut-off (or a correct Trotter
  multiple-time-stepping scheme), physical properties, such as the density,
  might differ from the intended values. Since there are researchers
  actively working on validating GROMOS with modern integrators we have not
  yet removed the GROMOS force fields, but you should be aware of these
  issues and check if molecules in your system are affected before
  proceeding. Further information is available at
  https://redmine.gromacs.org/issues/2884 , and a longer explanation of our
  decision to remove physically incorrect algorithms can be found at
  https://doi.org/10.26434/chemrxiv.11474583.v1 .


WARNING 2 [file md_umbrella.mdp]:
  With Nose-Hoover T-coupling and Parrinello-Rahman p-coupling, tau-p (1)
  should be at least twice as large as tau-t (1) to avoid resonances

I worked around this problem by using the -maxwarn option, but I am wondering 
whether these warnings can safely be ignored.
What do you think happened? Any idea on what caused this problem?
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] CMAP format on GROMACS

2020-01-06 Thread Justin Lemkul



On 1/6/20 4:46 PM, Marcelo Depólo wrote:

Hi everybody! Happy new year! =)


I would like to understand a little better the CMAP format of charmm27.ff 
within GROMACS. I've looked in the manual but found little information.

I understand that it is defined by 5 atoms (1-4 phi and 2-5 psi), an arbitrary 
identification number (e.g. 1) and the grid size (e.g. 24x24). But the grid 
values that follow are lines of just 10 values each, ending with a "\", 
instead of a 24-column by 24-row matrix.

How are those CMAP values formatted in GMX? Does GMX automatically transform 
those values into a matrix and, if so, can I consider that its "y-axis" would 
be PSI and "x-axis" would be PHI?


The \ are line continuation characters, so GROMACS is reading a 24x24 
matrix in a single array of values.


The values are written for all values of phi at a given value of psi, 
i.e. writing each row of the matrix, starting from phi = -180, psi = 
-180 until phi = 180, psi = 180.
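
Put another way (this is just my restatement of that ordering as an index 
formula, not something quoted from the GROMACS source), for a 24x24 grid with 
15-degree spacing:

   value(phi_i, psi_j) = array[ j*24 + i ]   ; i, j = 0..23, i.e. -180, -165, ..., +165

so psi is the slow ("row") index and phi is the fast ("column") index, which 
seems consistent with the phi = x-axis, psi = y-axis picture you describe.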


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] atom moved too far

2020-01-06 Thread Justin Lemkul




On 1/6/20 2:59 PM, Christos Deligkaris wrote:

Justin, thank you.

I have implemented the pull code but that also exhibits the same error when
I use 12 cores (failed at about 2ns) and the simulation goes on fine when I
use 6 cores (now at about 32 ns).

I tried using the v-rescale thermostat (instead of Nose-Hoover) and the
Parrinello-Rahman barostat, which failed. I also tried the v-rescale
thermostat and the Berendsen barostat, but that also failed. It seems to me
that this is not an equilibration issue.

So, to summarize, only decreasing the time step to 0.001 ps or decreasing the
number of cores allows the calculation to proceed.

In this email list, I read that someone else was trying to use different
arguments supplied to mdrun (-nt, -ntomp etc) to solve the same problem. Is
it possible that the problem arises due to my running gmx_mpi on a single
node? This is the command I use in my submission script:

mpirun --mca btl tcp,sm,self /opt/gromacs-2018.1/bin/gmx_mpi mdrun -ntomp
\$ntomp -v -deffnm "${inputfile%.tpr}"

If you think that this is not due to a physics issue I can continue doing
calculations with 6 cores and try to install gromacs 2020 (both gmx and
gmx_mpi) to see if my problem persists there or not...


If you're running on a single node, there's no need for an external MPI 
library. Perhaps you've got a buggy implementation? Have you tried using 
12 cores via the built-in thread MPI library?
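
For example, something along these lines in your submission script (the binary 
path here just mirrors your current command; point it at whatever non-MPI, 
thread-MPI-enabled gmx build you have, and drop the mpirun wrapper):

   /opt/gromacs-2018.1/bin/gmx mdrun -nt 12 -v -deffnm "${inputfile%.tpr}"

-nt sets the total number of threads and lets mdrun choose the thread-MPI 
rank / OpenMP split; you can also control the split explicitly with -ntmpi 
and -ntomp.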


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] [gmx-user] Autodock Vina Out use in gromacs MD

2020-01-06 Thread Justin Lemkul




On 1/4/20 1:17 PM, Quin K wrote:

Dear Professor Lemkul,

I have given below the MD RMSD for the complex and noted that the ligand is
getting detached again. This was confirmed when I viewed the trajectory in
VMD. The ligand did not get dislodged abruptly; instead it slowly came out of
the binding pocket and kept moving around afterwards.
https://drive.google.com/file/d/1IBMU-8SzSgXVr_h9zyeVwNV7kQ4at8-S/view?usp=sharing


It would be more useful to investigate which interactions are broken 
first or if there is a dihedral that rotates that leads to a 
conformational change in the ligand. RMSD tells you very little.
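
For example (file and group names below are only placeholders), you could 
follow a specific contact distance and a specific ligand dihedral over time 
with something like:

   gmx mindist -f md.xtc -s md.tpr -n index.ndx -od contact_dist.xvg
   gmx angle -f md.xtc -n lig_dihedral.ndx -type dihedral -ov lig_dihedral.xvg

where index.ndx contains groups for the ligand and the contacting residue(s), 
and lig_dihedral.ndx lists the four atoms of the dihedral of interest.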



I know that following the CGenFF tutorial, fixing the molecule, and
reparameterizing would be the correct thing to do, but I lack time at the
moment.
I read the paper and went through the tutorial, and it is a somewhat complex
method for fixing the parameters.
I was thinking of using a different force field instead, such as GROMOS with
parameterization from the ATB server.
I have already submitted the molecule below for optimization in ATB and the
parameterization is now complete.
The other option is to use an AMBER force field. Kindly let me know your
opinion on this.


The nice thing about CGenFF is that it tells you where it thinks problems 
might be and how they score in terms of quality. Neither of the other 
options does that. You always need to validate a ligand topology. Black 
boxes might work well or they might work poorly. You just don't know.
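
As a rough illustration only (the line below is invented, not copied from an 
actual CGenFF stream file; the atom name, type, charge, and penalty are 
placeholders), CGenFF annotates its guesses with penalty scores roughly like:

   ATOM C13   CG331   -0.270 ! penalty = 34.5

and the larger the penalty, the more that charge or parameter needs to be 
checked and, if necessary, refit before you trust the topology.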



Molecule
https://drive.google.com/file/d/1Ni8rUX4sH3aKaRhhXqCZr9w8VJVetCI7/view?usp=sharing

Interactions at the binding site using DS Visualizer
https://drive.google.com/file/d/16EkWYDPDyTfkERiQgaOYo4XxYxqslOD0/view?usp=sharing

Also, please let me know whether you think the given interactions are enough
to keep the molecule at the binding site for ~100 ns.


The Arg-ligand interaction should be pretty strong, but you have an 
identified "unfavorable" contact (which could be due to the rotameric 
state of Asn111 being suboptimal), and otherwise only nonpolar contacts. 
I wouldn't expect a very favorable binding free energy.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Need help with installation of Gromacs-2019.3 with Intel compilers

2020-01-06 Thread Schulz, Roland
Hi,

As a work-around it is possible to put "#pragma intel optimization_level 2" in 
front of pull_calc_coms in src/gromacs/pulling/pullutil.cpp, or use Intel 
compiler 2019u4, which doesn't have the problem.
Lowering the optimization for the one function shouldn't have a significant 
impact on performance even if you use pulling because this function isn't 
compute intensive. A full fix is in the works.
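
In other words, the change in src/gromacs/pulling/pullutil.cpp is one added 
line directly above the function definition; keep the return type and argument 
list exactly as they are in your source tree (they are only abbreviated here):

#pragma intel optimization_level 2          /* added line */
void pull_calc_coms(/* existing argument list, unchanged */)
{
    /* existing function body, unchanged */
}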

Roland

> -Original Message-
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
>  On Behalf Of Rajib
> Biswas
> Sent: Monday, December 30, 2019 6:13 AM
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Need help with installation of Gromacs-2019.3 with
> Intel compilers
> 
> Dear Gromacs-Users,
> 
> Is there any update on this issue? I have used the following flags for version
> 2019.3
> 
> /apps/codes/cmake/3.15.4/bin/cmake ..
> -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2019.3 -DGMX_FFT_LIBRARY=fftw3
> -DCMAKE_PREFIX_PATH=/apps/libs/fftw/3.3.8 -DBUILD_SHARED_LIBS=OFF
> -DGMX_DOUBLE=ON -DGMX_SIMD=AVX2_256 -DCMAKE_C_COMPILER=mpiicc
> -DCMAKE_CXX_COMPILER=mpiicpc -DGMX_MPI=ON -DGMX_OPENMP=ON
> -DGMX_BUILD_MDRUN_ONLY=ON
> 
> and getting compilation error which says:
> 
> [ 58%] Building CXX object
> src/gromacs/CMakeFiles/libgromacs.dir/awh/pointstate.cpp.o
> icpc: error #10105:
> /apps/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mcpcom:
> core dumped
> icpc: warning #10102: unknown signal(-497903120)
> icpc: error #10106: Fatal error in
> /apps/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mcpcom,
> terminated by unknown
> compilation aborted for
> /storage/rajib/opt/gromacs-2019.3/src/gromacs/pulling/pullutil.cpp (code 1)
> make[2]: *** [src/gromacs/CMakeFiles/libgromacs.dir/pulling/pullutil.cpp.o]
> Error 1
> make[2]: *** Waiting for unfinished jobs
> make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
> make: *** [all] Error 2
> 
> Your help will be highly appreciated.
> 
> Thanking you.
> 
> With regards,
> Rajib
> 
> 
> 
> On Wed, Oct 9, 2019 at 4:03 AM Lyudmyla Dorosh 
> wrote:
> 
> > I have tried this command line:
> > sudo cmake .. -DBUILD_SHARED_LIBS=OFF -DGMX_FFT_LIBRARY=mkl
> > -DCMAKE_INSTALL_PREFIX=$installDir -DGMX_MPI=ON -DGMX_OPENMP=ON
> > -DGMX_CYCLE_SUBCOUNTERS=ON -DGMX_GPU=OFF -DGMX_SIMD=SSE2
> > -DCMAKE_C_COMPILER="/home/doroshl/apps/intel/bin/icc"
> > -DCMAKE_CXX_COMPILER="/home/doroshl/apps/intel/bin/icpc"
> > -DREGRESSIONTEST_DOWNLOAD=ON
> > which had no errors for *cmake* or *make -j 4*, but *make check* gave
> > me an
> > error:
> > ...
> > [100%] Running all tests except physical validation Test project
> > /home/doroshl/gromacs-2019.3/build
> >   Start  1: TestUtilsUnitTests
> >  1/46 Test  #1: TestUtilsUnitTests ..***Failed0.00 sec
> > /home/doroshl/gromacs-2019.3/build/bin/testutils-test: error while
> > loading shared libraries: libmkl_intel_lp64.so: cannot open shared
> > object file: No such file or directory ...
> > 0% tests passed, 46 tests failed out of 46
> >
> > so I included libmkl_intel_lp64.so:
> > sudo cmake .. -DBUILD_SHARED_LIBS=OFF -DGMX_FFT_LIBRARY=mkl
> > -DCMAKE_INSTALL_PREFIX=$installDir
> > -DMKL_LIBRARIES="/home/doroshl/apps/intel/compilers_and_libraries_2019.5.281/linux/mkl/lib/intel64_lin/libmkl_intel_lp64.so"
> > -DMKL_INCLUDE_DIR="/home/doroshl/apps/intel/intelpython2/lib"
> > -DCMAKE_CXX_LINK_FLAGS="-Wl,-rpath,/usr/bin/gcc/lib64 -L/usr/bin/gcc/lib64"
> > -DGMX_MPI=ON -DGMX_OPENMP=ON -DGMX_CYCLE_SUBCOUNTERS=ON -DGMX_GPU=OFF
> > -DGMX_SIMD=SSE2 -DCMAKE_C_COMPILER="/home/doroshl/apps/intel/bin/icc"
> > -DCMAKE_CXX_COMPILER="/home/doroshl/apps/intel/bin/icpc"
> > -DREGRESSIONTEST_DOWNLOAD=ON &> cmake.out
> > which doesn't give any error messages for cmake, but then *sudo make -j 4* results in
> >
> > [ 46%] Building CXX object
> > src/gromacs/CMakeFiles/libgromacs.dir/pulling/pullutil.cpp.o
> > icpc: error #10105:
> > /home/doroshl/apps/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mcpcom:
> > core dumped
> > icpc: warning #10102: unknown signal(694380720)
> > icpc: error #10106: Fatal error in
> > /home/doroshl/apps/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/mcpcom,
> > terminated by unknown
> > compilation aborted for
> > /home/doroshl/gromacs-2019.3/src/gromacs/pulling/pullutil.cpp (code 1)
> > src/gromacs/CMakeFiles/libgromacs.dir/build.make:2136: recipe for
> > target 'src/gromacs/CMakeFiles/libgromacs.dir/pulling/pullutil.cpp.o'
> > failed
> > make[2]: ***
> > [src/gromacs/CMakeFiles/libgromacs.dir/pulling/pullutil.cpp.o]
> > Error 1
> > make[2]: *** Waiting for unfinished jobs
> > CMakeFiles/Makefile2:2499: recipe for target
> > 'src/gromacs/CMakeFiles/libgromacs.dir/all' failed
> > make[1]: *** [src/gromacs/CMakeFiles/libgromacs.dir/all] Error 2
> > Makefile:162: recipe for target 'all' failed
> > make: *** [all] Error 2
> > Thanks for any help
> >
> >
> > On Tue, Oct 8, 2019 at 

[gmx-users] CMAP format on GROMACS

2020-01-06 Thread Marcelo Depólo
Hi everybody! Happy new year! =)


I would like to understand a little better the CMAP format of charmm27.ff 
within GROMACS. I've looked in the manual but found little information.

I understand that it is defined by 5 atoms (1-4 phi and 2-5 psi), an arbitrary 
identification number (e.g. 1) and the grid size (e.g. 24x24). But the grid 
values that follow are lines of just 10 values each, ending with a "\", 
instead of a 24-column by 24-row matrix.

How are those CMAP values formatted in GMX? Does GMX automatically transform 
those values into a matrix and, if so, can I consider that its "y-axis" would 
be PSI and "x-axis" would be PHI?

Cheers!
--
Marcelo Depólo Polêto
Postdoctoral Researcher
BIOAGRO - Room T07
Department of General Biology - UFV
Contact: + 55 31 3612-2464
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] atom moved too far

2020-01-06 Thread Christos Deligkaris
Justin, thank you.

I have implemented the pull code but that also exhibits the same error when
I use 12 cores (failed at about 2ns) and the simulation goes on fine when I
use 6 cores (now at about 32 ns).

I tried using the v-rescale thermostat (instead of Nose-Hoover) and the
Parrinello-Rahman barostat, which failed. I also tried the v-rescale
thermostat and the Berendsen barostat, but that also failed. It seems to me
that this is not an equilibration issue.

So, to summarize, only decreasing the time step to 0.001 ps or decreasing the
number of cores allows the calculation to proceed.

In this email list, I read that someone else was trying to use different
arguments supplied to mdrun (-nt, -ntomp etc) to solve the same problem. Is
it possible that the problem arises due to my running gmx_mpi on a single
node? This is the command I use in my submission script:

mpirun --mca btl tcp,sm,self /opt/gromacs-2018.1/bin/gmx_mpi mdrun -ntomp
\$ntomp -v -deffnm "${inputfile%.tpr}"

If you think that this is not due to a physics issue I can continue doing
calculations with 6 cores and try to install gromacs 2020 (both gmx and
gmx_mpi) to see if my problem persists there or not...


Best wishes,

Christos Deligkaris, PhD


On Thu, Jan 2, 2020 at 7:48 PM Justin Lemkul  wrote:

>
>
> On 1/2/20 3:02 PM, Christos Deligkaris wrote:
> > thank you Justin.
> >
> > I saw on your umbrella sampling tutorial how to implement the restraints
> > using the pull code.
> >
> > The protocol I used is (if I understand correctly your question): The
> > energy minimization reached the cutoff for maximum force 1000 in 346
> steps.
> > My NVT equilibration was 50,000 steps of dt = 0.002 and I used 8 NPT
> > equilibration calculations, each with 31,250 steps of dt=0.002 (total NPT
> > equilibration time 500ps) where I slowly decreased the position
> restraints
> > on DNA and the small molecule, as well as the harmonic restraint between
> > the two.
> > What are the best temperature coupling groups to use when we are not
> > certain whether the small molecule will spend the entire simulation
> period
> > physically bound to the macromolecule or whether it will become fully
> > solvated at some point? Is Macromolecule and Non-macromolecule the best
> > option since the small molecule will always (to a small or large extent)
> > interact with water (versus the Macromolecule_and_small_molecule and
> > everything else grouping option)?
>
> I doubt there would be a measurable or provable difference between the
> behaviors of the two setups.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] gmx covar error in gromacs2019.x

2020-01-06 Thread 008
Hi gmx users!
I am using GROMACS 2019.4 on CentOS 7.6 (gcc version 4.8.5); the MD ran 
successfully. However, when I proceeded to the PCA analysis, there was an 
error in the output.


command:
echo 3 3|gmx covar -f md_res_center_mol_fit_protein.xtc -s ../md.tpr -o 
eigenvalues.xvg -v eigenvectors.trr -xpma covapic.xpm -n ../index.ndx


output:
Choose a group for the least squares fit
Group  0 (System) has 124748 elements
Group  1 (Protein) has 8851 elements
Group  2 (Protein-H) has 4419 elements
Group  3 (C-alpha) has 574 elements
Group  4 (Backbone) has 1722 elements
Group  5 (MainChain) has 2298 elements
Group  6 (MainChain+Cb) has 2824 elements
Group  7 (MainChain+H) has 2852 elements
Group  8 (SideChain) has 5999 elements
Group  9 (SideChain-H) has 2121 elements
Group 10 (Prot-Masses) has 8851 elements
Group 11 (non-Protein) has 115897 elements
Group 12 (Other) has 23 elements
Group 13 (MOL) has 23 elements
Group 14 (NA) has 8 elements
Group 15 (Water) has 115866 elements
Group 16 (SOL) has 115866 elements
Group 17 (non-Water) has 8882 elements
Group 18 (Ion) has 8 elements
Group 19 (MOL) has 23 elements
Group 20 (NA) has 8 elements
Group 21 (Water_and_ions) has 115874 elements
Group 22 (Protein_lig) has 8874 elements
Group 23 (center_Val108_A) has 16 elements
Group 24 (chain_A) has 4427 elements
Group 25 (chain_B) has 4424 elements
Group 26 (C-alpha__chain_A) has 287 elements
Group 27 (C-alpha__chain_B) has 287 elements
Group 28 (E53) has 15 elements
Group 29 (K135) has 22 elements
Group 30 (C169) has 11 elements
Group 31 (S) has 1 elements
Group 32 (C13) has 1 elements
Select a group: Selected 3: 'C-alpha'


Choose a group for the covariance analysis
Group  0 (System) has 124748 elements
Group  1 (Protein) has 8851 elements
Group  2 (Protein-H) has 4419 elements
Group  3 (C-alpha) has 574 elements
Group  4 (Backbone) has 1722 elements
Group  5 (MainChain) has 2298 elements
Group  6 (MainChain+Cb) has 2824 elements
Group  7 (MainChain+H) has 2852 elements
Group  8 (SideChain) has 5999 elements
Group  9 (SideChain-H) has 2121 elements
Group 10 (Prot-Masses) has 8851 elements
Group 11 (non-Protein) has 115897 elements
Group 12 (Other) has 23 elements
Group 13 (MOL) has 23 elements
Group 14 (NA) has 8 elements
Group 15 (Water) has 115866 elements
Group 16 (SOL) has 115866 elements
Group 17 (non-Water) has 8882 elements
Group 18 (Ion) has 8 elements
Group 19 (MOL) has 23 elements
Group 20 (NA) has 8 elements
Group 21 (Water_and_ions) has 115874 elements
Group 22 (Protein_lig) has 8874 elements
Group 23 (center_Val108_A) has 16 elements
Group 24 (chain_A) has 4427 elements
Group 25 (chain_B) has 4424 elements
Group 26 (C-alpha__chain_A) has 287 elements
Group 27 (C-alpha__chain_B) has 287 elements
Group 28 (E53) has 15 elements
Group 29 (K135) has 22 elements
Group 30 (C169) has 11 elements
Group 31 (S) has 1 elements
Group 32 (C13) has 1 elements
Select a group: Selected 3: 'C-alpha'
Calculating the average structure ...
Reading frame   0 time  0.000 
WARNING: number of atoms in tpx (574) and trajectory (8851) do not match
Last frame   1 time10.000 


Back Off! I just backed up average.pdb to ./#average.pdb.3#
Constructing covariance matrix (1722x1722) ...
Last frame   1 time10.000 
Read 10001 frames


Trace of the covariance matrix: 9.58344 (nm^2)


Back Off! I just backed up covapic.xpm to ./#covapic.xpm.3#
100%
Diagonalizing ...


---
Program:  gmx covar, version 2019.4
Source file: src/gromacs/linearalgebra/eigensolver.cpp (line 137)


Fatal error:
Internal errror in LAPACK diagonalization.


For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---




I tested the same files in a virtual machine with CentOS 7.6 and GROMACS 2019.4 
installed; the output was normal and gave the files I need. Then I used the 
same files on another server with GROMACS 4.5.3 (with md.tpr replaced by 
md.gro), and that was also normal. I also installed GROMACS 2019.2 and 2019.5, 
and they gave the same error.


GROMACS was installed as follows:


cmake(version 3.16.0)

tar xvf cmake-*.tar.gz && cd cmake-*
./bootstrap
gmake && gmake install
yum remove cmake -y
ln -s /usr/local/bin/cmake /usr/bin/


fftw-3.3.8
tar -zxvf fftw* && cd fftw*
./configure --prefix=/usr/local/fftw3.3.8 --enable-sse2 --enable-avx 
--enable-avx2 --enable-float --enable-shared
make install -j 16


gmx2019.4
  tar xfz gromacs-2019.4.tar.gz && cd gromacs-2019.4
  mkdir build && cd build
  export