[gmx-users] Regarding Beta-alanine structure

2018-02-07 Thread Dilip H N
Hello,
I want to simulate the beta-alanine amino acid, but in the CHARMM36 force field
there are four different residue names (three/four-letter codes) for ALA: ALA,
DALA, ALAI, and ALAO.
Which of these corresponds to the beta-alanine structure?

I tried the Avogadro software, but I could only build alanine, not beta-alanine.
How can I get a PDB file of beta-alanine? Is there any other way?
Can anybody help me with this?
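
For what it's worth, one route I am considering (assuming Open Babel is
installed; the SMILES string below is my own guess for beta-alanine, i.e.
3-aminopropanoic acid) is to generate 3D coordinates directly from SMILES:

obabel -:"NCCC(=O)O" --gen3d -O beta-alanine.pdb

but I am not sure whether the resulting PDB maps cleanly onto any of the
CHARMM36 residue names above.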

Thank you.

-- 
With Best Regards,

DILIP.H.N
Ph.D. Student






Re: [gmx-users] Gibbs' free energy in MD

2018-02-07 Thread Dallas Warren
You don't.  Free energy calculations are much more involved than that;
see http://www.alchemistry.org/wiki/Main_Page as a good resource for the
details. There are also plenty of journal articles on the topic, and a
good textbook will cover it as well.
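
To give a flavour of what is involved: an alchemical calculation is run as a
series of lambda windows, each needing free-energy-specific .mdp settings
roughly like the sketch below (the values are illustrative placeholders, not a
recommendation; see the alchemistry.org pages and the GROMACS manual for proper
choices), and the dhdl.xvg files from all windows are then combined with a tool
such as gmx bar:

; free-energy section of an .mdp file (illustrative sketch only)
free-energy          = yes
init-lambda-state    = 0        ; which lambda window this run samples
fep-lambdas          = 0.0 0.25 0.5 0.75 1.0
sc-alpha             = 0.5      ; soft-core potential to avoid end-point singularities
nstdhdl              = 100      ; how often dH/dlambda is written
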
Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a nail.


On 6 February 2018 at 19:49, Raag Saluja  wrote:
> Hi!
>
> How can I calculate Gibbs' free energy from the potential energy values
> obtained from the MD simulation?
>
> Regards,
> Raag


Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-07 Thread Alex
Update: we seem to have had a hiccup with an orphan CUDA install, and that
was causing the issues. After wiping everything off and rebuilding, the
errors from the initial post disappeared. However, two tests failed during
the regression run:

95% tests passed, 2 tests failed out of 39

Label Time Summary:
GTest  = 170.83 sec (33 tests)
IntegrationTest= 125.00 sec (3 tests)
MpiTest=   4.90 sec (3 tests)
UnitTest   =  45.83 sec (30 tests)

Total Test time (real) = 1225.65 sec

The following tests FAILED:
  9 - GpuUtilsUnitTests (Timeout)
32 - MdrunTests (Timeout)
Errors while running CTest
CMakeFiles/run-ctest-nophys.dir/build.make:57: recipe for target
'CMakeFiles/run-ctest-nophys' failed
make[3]: *** [CMakeFiles/run-ctest-nophys] Error 8
CMakeFiles/Makefile2:1160: recipe for target
'CMakeFiles/run-ctest-nophys.dir/all' failed
make[2]: *** [CMakeFiles/run-ctest-nophys.dir/all] Error 2
CMakeFiles/Makefile2:971: recipe for target 'CMakeFiles/check.dir/rule'
failed
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
Makefile:546: recipe for target 'check' failed
make: *** [check] Error 2

Any ideas? I can post the complete log, if needed.
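
If it helps, I can re-run just those two tests with more output and a longer
timeout from the build directory (assuming the standard CMake/CTest layout;
the timeout value is arbitrary):

ctest -R "GpuUtilsUnitTests|MdrunTests" --output-on-failure --timeout 600

or run one of the binaries directly, e.g. ./bin/mdrun-test, and post whatever
that prints.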

Thank you,

Alex


Re: [gmx-users] PCA mathematical information

2018-02-07 Thread Wes Barnett
On Tue, Feb 6, 2018 at 5:11 AM, edesantis  wrote:

> Dear gromacs users,
>
> I am a PhD student in biophysics,
> and I am trying to perform principal component analysis on my simulations,
> with the aim of understanding whether correlated motions are present during
> the dynamics.
>
> I am not an expert in this kind of analysis.
> While studying different tutorials, I saw that it is common to filter the
> trajectory to show only the motion along some eigenvectors (gmx anaeig
> -filt -first -last).
>
> I don't understand the mathematical operation behind this
> filtering.
>
> Can any of you help me?
>

Emiliano,

There is a paragraph in the GROMACS reference manual that cites a paper
using this method. I suggest starting with that paper, looking at the
background it gives, and following the references it cites as well.
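
As far as I understand it (the manual is the authoritative source), the
filtering is just a projection onto the selected eigenvectors followed by a
back-transformation about the average structure, i.e. in LaTeX notation:

x_{\mathrm{filt}}(t) = \langle x \rangle
  + \sum_{i=\mathrm{first}}^{\mathrm{last}}
    \left[ v_i \cdot \big( x(t) - \langle x \rangle \big) \right] v_i

where x(t) is the fitted configuration at time t, \langle x \rangle is the
average structure, and v_i are the eigenvectors of the covariance matrix
selected with -first/-last.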

-- 
James "Wes" Barnett
Postdoctoral Research Scientist
Department of Chemical Engineering
Kumar Research Group 
Columbia University
w.barn...@columbia.edu
http://wbarnett.us


Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-07 Thread Alex

Hi Mark,

Nothing has been installed yet, so the commands were issued from
/build/bin, and I am not sure how informative the output of that mdrun-test
is (let me know what exact command would make it more informative).


Thank you,

Alex

***

> ./gmx -version

GROMACS version:    2018
Precision:  single
Memory model:   64 bit
MPI library:    thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:    CUDA
SIMD instructions:  AVX2_256
FFT library:    fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage:   enabled
TNG support:    enabled
Hwloc support:  hwloc-1.11.0
Tracing support:    disabled
Built on:   2018-02-06 19:30:36
Built by:   smolyan@647trc-md1 [CMAKE]
Build OS/arch:  Linux 4.4.0-112-generic x86_64
Build CPU vendor:   Intel
Build CPU brand:    Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
Build CPU family:   6   Model: 79   Stepping: 1
Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma hle 
htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse 
rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic

C compiler: /usr/bin/cc GNU 5.4.0
C compiler flags:    -march=core-avx2 -O3 -DNDEBUG 
-funroll-all-loops -fexcess-precision=fast

C++ compiler:   /usr/bin/c++ GNU 5.4.0
C++ compiler flags:  -march=core-avx2    -std=c++11   -O3 -DNDEBUG 
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda 
compiler driver;Copyright (c) 2005-2017 NVIDIA Corporation;Built on 
Fri_Nov__3_21:07:56_CDT_2017;Cuda compilation tools, release 9.1, V9.1.85
CUDA compiler 
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;; 
;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;

CUDA driver:    9.10
CUDA runtime:   9.10

> ldd -r ./mdrun-test
    linux-vdso.so.1 =>  (0x7ffcfcc3e000)
    libgromacs.so.3 => 
/home/smolyan/scratch/gmx2018_install_temp/gromacs-2018/build/bin/./../lib/libgromacs.so.3 
(0x7faa58f8f000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7faa58d72000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7faa589f)

    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7faa586e7000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7faa584d1000)

    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7faa58107000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7faa57f03000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7faa57cfb000)
    libcufft.so.9.1 => /usr/local/cuda/lib64/libcufft.so.9.1 
(0x7faa5080e000)
    libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5 
(0x7faa505d4000)
    libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 
(0x7faa503b2000)

    /lib64/ld-linux-x86-64.so.2 (0x7faa5c1ad000)
    libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 
(0x7faa501a7000)
    libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 
(0x7faa4ff9d000)



On 2/7/2018 5:13 AM, Mark Abraham wrote:

Hi,

I checked back with the CUDA-facing GROMACS developers. They've run the
code with 9.1 and believe there's no intrinsic problem within GROMACS.

So I don't have much to suggest other than rebuilding everything cleanly,
as this is an internal, nondescript cuFFT/driver error that is not supposed
to happen, especially in mdrun-test with its single input system, and it
will prevent him from using -pme gpu.

The only thing PME could do better is to show more meaningful error
messages (which would have to be hardcoded anyway, as cuFFT doesn't even
have human-readable strings for its error codes).

If you could share the output of
* gmx -version
* ldd -r mdrun-test
then perhaps we can find an issue (or at least report to nvidia usefully).
Ensuring you are using the CUDA driver that came with the CUDA runtime is
most likely to work smoothly.

Mark

On Tue, Feb 6, 2018 at 9:24 PM Alex  wrote:


And this is with:

gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.6) 5.4.0 20160609



On Tue, Feb 6, 2018 at 1:18 PM, Alex  wrote:


Hi all,

I've just built the latest version and regression tests are running. Here
is one error:

"Program: mdrun-test, version 2018
Source file: src/gromacs/ewald/pme-3dfft.cu (line 56)

Fatal error:
cufftPlanMany R2C plan failure (error code 5)"

This is with CUDA 9.1.

Anything to worry about?

Thank you,

Alex



Re: [gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

2018-02-07 Thread Mark Abraham
Hi,

I checked back with the CUDA-facing GROMACS developers. They've run the
code with 9.1 and believe there's no intrinsic problem within GROMACS.

> So I don't have much to suggest other than rebuilding everything cleanly,
> as this is an internal, nondescript cuFFT/driver error that is not supposed
> to happen, especially in mdrun-test with its single input system, and it
> will prevent him from using -pme gpu.
> The only thing PME could do better is to show more meaningful error
> messages (which would have to be hardcoded anyway, as cuFFT doesn't even
> have human-readable strings for its error codes).

If you could share the output of
* gmx -version
* ldd -r mdrun-test
then perhaps we can find an issue (or at least report to nvidia usefully).
Ensuring you are using the CUDA driver that came with the CUDA runtime is
most likely to work smoothly.
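
A quick way to check that pairing (assuming a standard install; nvidia-smi
reports the driver, nvcc the toolkit the build used) is:

nvidia-smi | head -n 3    # driver version appears in the header lines
nvcc --version            # CUDA toolkit version used for the build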

Mark

On Tue, Feb 6, 2018 at 9:24 PM Alex  wrote:

> And this is with:
> > gcc --version
> > gcc (Ubuntu 5.4.0-6ubuntu1~16.04.6) 5.4.0 20160609
>
>
>
> On Tue, Feb 6, 2018 at 1:18 PM, Alex  wrote:
>
> > Hi all,
> >
> > I've just built the latest version and regression tests are running. Here
> > is one error:
> >
> > "Program: mdrun-test, version 2018
> > Source file: src/gromacs/ewald/pme-3dfft.cu (line 56)
> >
> > Fatal error:
> > cufftPlanMany R2C plan failure (error code 5)"
> >
> > This is with CUDA 9.1.
> >
> > Anything to worry about?
> >
> > Thank you,
> >
> > Alex
> >


[gmx-users] Octahedral minimization problem

2018-02-07 Thread Ahmed Mashaly
Hi,

Two systems solvated with Amber were converted to .gro format with acpype;
one is in a rectangular box, the other in an octahedral box.


The rectangular-box system can be minimized on the cluster with this em.mdp file:


integrator = steep 
emtol = 100.0 
emstep = 0.01 
nsteps = 5 

nstlist = 1 
cutoff-scheme 
ns_type = grid 
coulombtype = PME 
rcoulomb = 1.0
rvdw = 1.0 
pbc = xyz 
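
(For reference, grompp and mdrun are then called in the usual way; the file
names below are just placeholders for my actual inputs:)

gmx grompp -f em.mdp -c system.gro -p topol.top -o em.tpr
gmx mdrun -v -deffnm em     # on the cluster, launched through the usual mpirun wrapper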


The box system minimizes fine, but at the end I got:

Steepest Descents converged to Fmax < 100 in 2706 steps
Potential Energy = -2.7121270e+06
Maximum force = 9.3153786e+01 on atom 163971
Norm of force = 1.7101741e+00
Simulation ended prematurely, no performance report will be written.

As I understood from other messages in the archive, this premature ending is
not a problem, since the run finishes with a good potential energy and maximum force.

In the case of the octahedral box, however, I got this:

Steepest Descents:
Tolerance (Fmax) = 1.0e+02
Number of steps = 5
Step= 0, Dmax= 1.0e-03 nm, Epot= 8.27123e+15 Fmax= 1.65730e+17, atom= 103876
Step= 1, Dmax= 1.0e-03 nm, Epot= 4.22395e+15 Fmax= 6.54268e+16, atom= 103876
Step= 2, Dmax= 1.2e-03 nm, Epot= 1.96761e+15 Fmax= 2.32216e+16, atom= 103876
Step= 3, Dmax= 1.4e-03 nm, Epot= 8.64595e+14 Fmax= 7.37977e+15, atom= 103876
Step= 4, Dmax= 1.7e-03 nm, Epot= 3.77835e+14 Fmax= 2.09380e+15, atom= 103876
Step= 5, Dmax= 2.1e-03 nm, Epot= 1.49451e+14 Fmax= 5.29119e+14, atom= 103876
Step= 6, Dmax= 2.5e-03 nm, Epot= 5.37730e+13 Fmax= 1.41647e+14, atom= 34363
Step= 7, Dmax= 3.0e-03 nm, Epot= 1.82718e+13 Fmax= 3.59846e+13, atom= 199780
Step= 8, Dmax= 3.6e-03 nm, Epot= 6.05245e+12 Fmax= 7.92220e+12, atom= 134203
Step= 9, Dmax= 4.3e-03 nm, Epot= 1.72087e+12 Fmax= 1.62912e+12, atom= 72328
Step= 10, Dmax= 5.2e-03 nm, Epot= 4.91989e+11 Fmax= 3.31401e+11, atom= 106294
Step= 11, Dmax= 6.2e-03 nm, Epot= 1.59698e+11 Fmax= 7.11892e+10, atom= 136966
Step= 12, Dmax= 7.4e-03 nm, Epot= 5.16370e+10 Fmax= 1.55656e+10, atom= 14985
Step= 13, Dmax= 8.9e-03 nm, Epot= 1.66064e+10 Fmax= 6.35415e+09, atom= 14986
Step= 14, Dmax= 1.1e-02 nm, Epot= 8.82293e+09 Fmax= 3.16163e+09, atom= 14985
Step= 15, Dmax= 1.3e-02 nm, Epot= 4.84420e+09 Fmax= 8.10766e+08, atom= 14986
Step= 16, Dmax= 1.5e-02 nm, Epot= 1.68188e+09 Fmax= 2.50214e+08, atom= 14985
Step= 17, Dmax= 1.8e-02 nm, Epot= 7.22330e+08 Fmax= 3.72832e+07, atom= 14986
Step= 18, Dmax= 2.2e-02 nm, Epot= 1.55106e+08 Fmax= 8.66899e+06, atom= 17209
Step= 19, Dmax= 2.7e-02 nm, Epot= 7.04906e+07 Fmax= 4.10104e+06, atom= 152596
Step= 20, Dmax= 3.2e-02 nm, Epot= 4.15026e+07 Fmax= 1.23764e+06, atom= 152596
Step= 21, Dmax= 3.8e-02 nm, Epot= 1.95151e+07 Fmax= 8.10870e+06, atom= 164668
Step= 22, Dmax= 4.6e-02 nm, Epot= 1.78878e+07 Fmax= 9.02967e+05, atom= 164668
Step= 23, Dmax= 5.5e-02 nm, Epot= 1.14131e+07 Fmax= 5.10415e+06, atom= 164668

step 24: One or more water molecules can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.

step 24: One or more water molecules can not be settled.
Check for bad contacts and/or reduce the timestep if appropriate.
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Fatal error in MPI_Sendrecv: Message truncated, error stack:
MPI_Sendrecv(259).: MPI_Sendrecv(sbuf=0x7ffed5bfe230, scount=8, 
MPI_BYTE, dest=8, stag=0, rbuf=0x7ffed5bfe238, rcount=8, MPI_BYTE, src=6, 
rtag=0, MPI_COMM_WORLD, status=0x7ffed5bfdf90) failed
MPIDI_CH3U_Receive_data_found(131): Message from rank 6 and tag 0 truncated; 
17016 bytes received but buffer size is 8


From the archive I know that similar problems are usually related to topology
errors and atomic clashes, but I don't have any, and only some meaningless pdb
files were produced with the name of this step. When I tried on my laptop, the
same water error appeared, but the minimization continued, pdb file(s) were
created, and at the end I got a .gro file and everything, along with this log:


Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 100 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.

Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
You might need to increase your constraint accuracy, or turn
off constraints altogether (set constraints = none in mdp file)

Steepest Descents converged to machine precision in 13158 steps,
but did not reach the requested Fmax < 100.
Potential Energy = -3.8879410e+06
Maximum force = 1.8154966e+03 on atom 21160
Norm of force = 5.8444543e+00

Simulation ended prematurely, no performance report will be written.



Later I tried to run it on the cluster again, but with only one CPU instead of
many (48, 10, and 5 CPUs had all been tried and gave the same error), but with only

[gmx-users] MMPBSA

2018-02-07 Thread RAHUL SURESH
Dear all

I have carried out a protein-ligand simulation for 50 ns and performed an
MM-PBSA calculation on the 10-20 ns part of the trajectory. I get a positive
binding energy. How can I tackle this?

Thank you

-- 
*Regards,*
*Rahul Suresh*
*Research Scholar*
*Bharathiar University*
*Coimbatore*


Re: [gmx-users] Error in nvt equilibration, "Segmentation fault Core dumped"??

2018-02-07 Thread Mark Abraham
Hi,

Very likely the solution to your problem is found via the suggestions at
http://www.gromacs.org/Documentation/Terminology/Blowing_Up
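
Typical culprits there are insufficient minimization and too aggressive a
first equilibration. As a rough sketch only (illustrative values, not a
prescription), a gentler NVT start often looks like:

; fragments of an nvt.mdp for a gentler start (illustrative only)
define  = -DPOSRES   ; position-restrain the solute, if the topology provides a POSRES block
dt      = 0.001      ; 1 fs until the system has relaxed
nsteps  = 50000      ; 50 ps

together with checking that the preceding energy minimization actually converged.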

Mark

On Wed, Feb 7, 2018 at 9:43 AM anuraag boddapati 
wrote:

> Hello everyone,
>
> I am getting an error when executing the following command in GROMACS, as the
> core gets dumped after some time.
>
> gmx mdrun -deffnm nvt -nt16
>
> I have also attached the .mdp file below along with the topology file.
> For a complete understanding of my problem, I am also attaching a
> screenshot of the error. Can anyone please explain what the mistake is
> in the above command, or point out any other place where I could have gone
> wrong in the process?
>
> I am following the Bevan Lab tutorial instructions in this regard.
> Thank you in advance.
>
> error:
> https://drive.google.com/open?id=0B3jH-9b3k9D2dE9hdEZ1a0sxUGszcVNTX0M1NTgyS2dkSkVZ
>
> mdp file: 
> https://drive.google.com/open?id=0B3jH-9b3k9D2NklWeFpmT2Fmc2h0NWJOMWUtNGszYnBHTlJZ

[gmx-users] Error in nvt equilibration, "Segmentation fault Core dumped"??

2018-02-07 Thread anuraag boddapati
Hello everyone,

I am getting an error when executing the following command in GROMACS, as the
core gets dumped after some time.

gmx mdrun -deffnm nvt -nt16

I have also attached the .mdp file below along with the topology file.
For a complete understanding of my problem, I am also attaching a
screenshot of the error. Can anyone please explain what the mistake is
in the above command, or point out any other place where I could have gone
wrong in the process?

I am following the Bevan Lab tutorial instructions in this regard.
Thank you in advance.

error: 
https://drive.google.com/open?id=0B3jH-9b3k9D2dE9hdEZ1a0sxUGszcVNTX0M1NTgyS2dkSkVZ

mdp file: 
https://drive.google.com/open?id=0B3jH-9b3k9D2NklWeFpmT2Fmc2h0NWJOMWUtNGszYnBHTlJZ
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.