[gmx-users] Residue id of capped peptide getting changed when adding molecules

2018-11-14 Thread Dilip.H.N
Hi,
I have a capped peptide (ACE-ALA-NME), and when I add other molecules
with the command "gmx insert-molecules", the residue ID of the capped
peptide, which was 1 throughout:
    1ACE    CH3
    1ACE   HH31
    1ACE   HH32
    1ACE   HH33
    1ACE      C
    1ACE      O
    1ALA      N
    1ALA     HN
    1ALA     CA
    1ALA     HA
    1ALA     CB
    1ALA    HB1
    1ALA    HB2
    1ALA    HB3
    1ALA      C
    1ALA      O
    1NME      N
    1NME     HN
    1NME    CH3
    1NME   HH31
    1NME   HH32
    1NME   HH33
gets changed to 1, 2 and 3, as in:
    1ACE    CH3
    1ACE   HH31
    1ACE   HH32
    1ACE   HH33
    1ACE      C
    1ACE      O
    2ALA      N
    2ALA     HN
    2ALA     CA
    2ALA     HA
    2ALA     CB
    2ALA    HB1
    2ALA    HB2
    2ALA    HB3
    2ALA      C
    2ALA      O
    3NME      N
    3NME     HN
    3NME    CH3
    3NME   HH31
    3NME   HH32
    3NME   HH33
    4XYZ     N1
    4XYZ     N2
    4XYZ     C1 and so on...

1] How can I prevent the residue IDs from being changed, i.e. retain the
original residue IDs?

2] When the residue IDs change, the residues are treated as different
chains/molecules, and bonds between them are reported as
"Warning) Unusual bond between residues: 1 (none) and 2 (protein)
Warning) Unusual bond between residues: 2 (protein) and 3 (none)" when
viewed in VMD, which causes a problem...

Any suggestions are highly appreciated.
Thank you.

---
With Best Regards,

Dilip.H.N
Ph.D. Student.


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] xpm2ps

2018-11-14 Thread Mahdi Sobati Nezhad
Hello everyone,
When I use xpm2ps in GROMACS 2018.3 and open the .eps file, I see that a
black line covers the numbers on the x and y axes.
Can anyone help me?


Re: [gmx-users] Setting rcon according to system

2018-11-14 Thread Mark Abraham
Hi,

On Wed, Nov 14, 2018 at 3:18 AM Sergio Perez  wrote:

> Hello,
> First of all thanks for the help :)
> I don't necessarily need to run it with 100 processors, I just want to know
> how much I can reduce rcon taking into account the knowledge of my system
> without compromising the accuracy. Let me give some more details of my
> system. The system is a sodium montmorillonite clay with two solid
> alumino-silicate layers with two aqueous interlayers between them. The
>

I assume the silicate network has many bonds spanning a large region - these
adjacent bonds are the issue, not the uranyl. (You would have the same
problem with a clay-only system.)


> system has TIP4P waters, some OH bonds within the clay and the bonds of the
> uranyl hydrated ion described in my previous email as constraints. The
> system is orthorhombic 4.67070x4.49090x3.77930 and has 9046 atoms.
>
> This is the output of GROMACS:
>
> Initializing Domain Decomposition on 100 ranks
> Dynamic load balancing: locked
> Initial maximum inter charge-group distances:
>two-body bonded interactions: 0.470 nm, Tab. Bonds NC, atoms 10 13
> Minimum cell size due to bonded interactions: 0.000 nm
> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.842 nm
> Estimated maximum distance required for P-LINCS: 0.842 nm
> This distance will limit the DD cell size, you can override this with -rcon
> Guess for relative PME load: 0.04
> Will use 90 particle-particle and 10 PME only ranks
>

GROMACS has guessed to use 90 ranks in the real-space domain decomposition,
e.g. as an array of 6x5x3 ranks.


> This is a guess, check the performance at the end of the log file
> Using 10 separate PME ranks, as guessed by mdrun
> Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> Optimizing the DD grid for 90 cells with a minimum initial size of 1.052 nm
> The maximum allowed number of cells is: X 4 Y 4 Z 3
>

... but only 4x4x3 = 48 ranks can work with the connectivity of your input.
Thus you are simply using too many ranks for a small system. You would have
to relax the tolerances quite a lot to be able to use 90 ranks. Just follow
the first part of the message's advice and use fewer ranks :-)
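The 4x4x3 limit can be reproduced from the numbers in the quoted log. This is only a sketch of the check mdrun performs (the real domain-decomposition code applies further restrictions); the box lengths and minimum cell size are taken verbatim from the log above:

```python
import math

# Box lengths (nm) and the minimum initial cell size (nm) from the log:
# 0.842 nm (P-LINCS estimate) scaled by 1/0.8 (-dds) = 1.052 nm.
box = (4.67070, 4.49090, 3.77930)
min_cell = 1.052

# Each cell dimension must be at least min_cell, so the number of cells
# along an axis is at most floor(box_length / min_cell).
max_cells = [math.floor(length / min_cell) for length in box]
print("max cells per axis:", max_cells)        # -> [4, 4, 3]
print("max PP ranks:", math.prod(max_cells))   # -> 48, i.e. fewer than 90
```

This is why mdrun reports "The maximum allowed number of cells is: X 4 Y 4 Z 3" and then fails when asked for 90 particle-particle ranks.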

Mark

---
> Program: mdrun_mpi, version 2018.1
> Source file: src/gromacs/domdec/domdec.cpp (line 6571)
> MPI rank:0 (out of 100)
>
> Fatal error:
> There is no domain decomposition for 90 ranks that is compatible with the
> given box and a minimum cell size of 1.05193 nm
> Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> settings
> Look in the log file for details on the domain decomposition
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
>
> Thank you for your help!
>
> On Wed, Nov 14, 2018 at 5:28 AM Mark Abraham 
> wrote:
>
> > Hi,
> >
> > Possibly. It would be simpler to use fewer processors, such that the
> > domains can be larger.
> >
> > What does mdrun think it needs for -rcon?
> >
> > Mark
> >
> > On Tue, Nov 13, 2018 at 7:06 AM Sergio Perez 
> > wrote:
> >
> > > Dear gmx community,
> > >
> > > I have been running my system without any problems on 100 processors.
> > > But I decided to make some of the bonds of my main molecule
> > > constraints. My molecule is not an extended chain; it is a molecular
> > > hydrated ion, in particular the uranyl cation with 5 water molecules
> > > forming a pentagonal bipyramid. At this point I get a domain
> > > decomposition error, and I would like to reduce rcon in order to run
> > > with 100 processors. Since I know, from the shape of my molecule, that
> > > two atoms connected by several constraints will never be farther apart
> > > than 0.6 nm, can I use this safely for -rcon?
> > >
> > > Thank you very much!
> > > Best regards,
> > > Sergio Pérez-Conesa

Re: [gmx-users] Running GPU issue

2018-11-14 Thread Mark Abraham
Hi,

I expect that that warning is fine (for now).

The error you saw is clear evidence of a bug. There are a few things you
might try to help narrow things down:
* Is the error reproducible on each run of mdrun?
* 2018.3 was not designed for or tested with CUDA 10, which was released
later. If you have an earlier CUDA, please see if building with it
alleviates the error.
* The error message could be triggered by multiple parts of the code; what
do you get with mdrun -nb gpu -pme cpu?
* Do you get any more diagnostics from a build configured with cmake
-DCMAKE_BUILD_TYPE=Debug?

Mark

On Wed, Nov 14, 2018 at 1:09 PM Kovalskyy, Dmytro 
wrote:

> I forgot to add: while compiling GROMACS I got the following warning at
> the very beginning:
>
>
> [  3%] Built target gpu_utilstest_cuda
> /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu: In
> function 'int do_sanity_checks(int, cudaDeviceProp*)':
> /usr/local/tmp/gromacs-2018.3/src/gromacs/gpu_utils/gpu_utils.cu:258:28:
> warning: 'cudaError_t cudaThreadSynchronize()' is deprecated
> [-Wdeprecated-declarations]
>  if (cudaThreadSynchronize() != cudaSuccess)
> ^
> /usr/local/cuda/include/cuda_runtime_api.h:947:46: note: declared here
>  extern __CUDA_DEPRECATED __host__ cudaError_t CUDARTAPI
> cudaThreadSynchronize(void);
>
>
> But make completed its job without failing.
>
>
>
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Tuesday, November 13, 2018 10:29 PM
> To: gmx-us...@gromacs.org
> Cc: gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Running GPU issue
>
> Hi,
>
> It can share.
>
> Mark
>
> On Mon, Nov 12, 2018 at 10:19 PM Kovalskyy, Dmytro 
> wrote:
>
> > Hi,
> >
> >
> >
> > To perform GPU with Gromacs does it require exclusive  GPU card or
> Gromacs
> > can share the video card with X-server?
> >
> >
> > Thank you
> >
> >
> > Dmytro




Re: [gmx-users] Running GPU issue

2018-11-14 Thread Kovalskyy, Dmytro
Mark,

Thank you. Then I have an issue I cannot find a way to solve.

My MD run using the GPU fails at the very beginning, while a CPU-only run
of the same .tpr file has no problem.

I cannot find what "HtoD cudaMemcpyAsync failed: invalid argument" means.

Here are some diagnostics.

$ uname -a
Linux didesk 4.15.0-36-generic #39-Ubuntu SMP Mon Sep 24 16:19:09 UTC 2018 
x86_64 x86_64 x86_64 GNU/Linux

dikov@didesk ~ $ gcc --version
gcc (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

dikov@didesk ~ $ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
dikov@didesk ~ $ 


GPU setup:

sudo nvidia-smi -pm ENABLED -i 0
sudo nvidia-smi -ac 4513,1733 -i 0

MD.log

Log file opened on Tue Nov 13 16:16:22 2018
Host: didesk  pid: 45669  rank ID: 0  number of ranks:  1
  :-) GROMACS - gmx mdrun, 2018.3 (-:

GROMACS is written by:
 Emile Apol  Rossen Apostolov  Paul Bauer Herman J.C. Berendsen
Par BjelkmarAldert van Buuren   Rudi van Drunen Anton Feenstra  
  Gerrit GroenhofAleksei Iupinov   Christoph Junghans   Anca Hamuraru   
 Vincent Hindriksen Dimitrios KarkoulisPeter KassonJiri Kraus
  Carsten Kutzner  Per Larsson  Justin A. LemkulViveca Lindahl  
  Magnus Lundborg   Pieter MeulenhoffErik Marklund  Teemu Murtola   
Szilard Pall   Sander Pronk  Roland Schulz Alexey Shvetsov  
   Michael Shirts Alfons Sijbers Peter TielemanTeemu Virolainen 
 Christian WennbergMaarten Wolf   
   and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2018.3
Executable:   /usr/local/gromacs/bin/gmx
Data prefix:  /usr/local/gromacs
Working dir:  /home/dikov/Documents/Cients/DavidL/MD/GPU
Command line:
  gmx mdrun -deffnm md200ns -v

GROMACS version:2018.3
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  AVX_512
FFT library:fftw-3.3.7-sse2-avx
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  hwloc-1.11.6
Tracing support:disabled
Built on:   2018-11-13 21:31:10
Built by:   dikov@didesk [CMAKE]
Build OS/arch:  Linux 4.15.0-36-generic x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
Build CPU family:   6   Model: 85   Stepping: 4
Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh 
cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq 
pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt 
x2apic
C compiler: /usr/bin/cc GNU 7.3.0
C compiler flags:-mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast  
C++ compiler:   /usr/bin/c++ GNU 7.3.0
C++ compiler flags:  -mavx512f -mfma-std=c++11   -O3 -DNDEBUG 
-funroll-all-loops -fexcess-precision=fast  
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler 
driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on 
Sat_Aug_25_21:08:01_CDT_2018;Cuda compilation tools, release 10.0, V10.0.130
CUDA compiler 
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;
 
;-mavx512f;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:10.0
CUDA runtime:   10.0


Running on 1 node with total 36 cores, 72 logical cores, 1 compatible GPU
Hardware detected:
  CPU info:
Vendor: Intel
Brand:  Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
Family: 6   Model: 85   Stepping: 4
Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh cmov 
cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm 

Re: [gmx-users] pcoupl Berendsen

2018-11-14 Thread Justin Lemkul



On 11/14/18 5:28 AM, Gonzalez Fernandez, Cristina wrote:

Hi Justin,

I have taken a few days to answer because I was trying to reduce the
discrepancies between the pressure I obtain after the simulation and the one
I set in the .mdp file. However, I have not achieved very good results. As I
indicated in the previous email, in my simulations ref_p = 1 bar and
ref_t = 298 K. According to the output of gmx energy, the pressure average
after the simulation is 0.19 bar, Err.Est. = 0.59, RMSD = 204.98; for the
temperature, average = 298.003, Err.Est. = 0.0032, RMSD = 2.76.

 From these results and your previous email: as the error (Err.Est.) in the
pressure is of the same magnitude as the


That error estimate is not relevant here. Your reported pressure is 0.19 
± 205, which is indistinguishable from the target pressure of 1.
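The comparison can be made concrete with the numbers quoted above (a plain-Python illustration of the statistics, not GROMACS output):

```python
# Values reported by gmx energy in the message above (bar).
mean_p, target_p, rmsd_p = 0.19, 1.0, 204.98

# The instantaneous pressure of a small system fluctuates by hundreds of
# bar; a mean that sits ~0.8 bar from the target is deep inside that noise.
deviation = abs(mean_p - target_p)
print(f"|mean - target| = {deviation:.2f} bar vs RMSD = {rmsd_p:.2f} bar")
```

By contrast, the temperature deviation of about 0.003 K against an RMSD of 2.76 K tells the same story: both observables agree with their targets within their fluctuations.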


-Justin


pressure value after the simulation, the pressure significantly differs from
the reference value (1 bar), so, for example, more simulation time would be
required. This is also supported by the high RMSD, which is on the order of
hundreds. For the temperature, the error is 5 orders of magnitude lower than
the obtained value and the RMSD is very low. This suggests that the system
has reached the equilibrium temperature. Are these reasons correct?

Another thing that makes me think the pressure I obtain is not correct is
that the pressure I obtained after the simulation and the one I obtain after
analysis also differ significantly (0.19 and 3.9 bar, respectively).

I have used long NPT equilibration and simulation times (50 ns), and the
results are similar to those indicated above, which apparently means that
the system is stable.


 From these discrepancies, do you think the differences are not as important
as I am considering? What could I do to obtain more accurate pressure values?

Regarding Parrinello-Rahman being "not stable for low pressures": I
understood that when using low pressures, reaching the reference pressure is
sometimes difficult with Parrinello-Rahman. I was trying to use this article
to explain why my simulation pressures differ from ref_p, but, as you say, I
have also read papers that use Parrinello-Rahman for simulating systems at
1 bar.

Thank you very much for all your help,

C.


-Mensaje original-
De: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 En nombre de Justin Lemkul
Enviado el: jueves, 8 de noviembre de 2018 14:01
Para: gmx-us...@gromacs.org
Asunto: Re: [gmx-users] pcoupl Berendsen



On 11/8/18 7:50 AM, Gonzalez Fernandez, Cristina wrote:

Dear Gromacs users,

In my simulations, I have specified ref_p= 1bar but after MD
simulation I obtain pressures equal to 0.19 bar (even

A pressure without an error bar is a meaningless value. The fluctuations of 
pressure in most systems are on the order of tens or hundreds of bar, meaning 
your result is indistinguishable from the target value.


with long simulation times) when using pcoupl=Parrinello-Rahman. I
know that Parrinello-Rahman is recommend for production runs and
Berendsen for NPT equilibration. However, I have read in an article
that Parrinello-Rahman is not stable for low pressures, so in such
situations its better to use Berendsen. I have tried to use Berendsen
for

I would be interested to know how this "not stable for low pressures"
was determined, because it seems completely unlikely to be true. Most MD 
simulations nowadays use Parrinello-Rahman for pressure coupling at 1
bar/1 atm without any issue if the system is properly equilibrated (and if not, 
the problem is with preparation, not the barostat itself).


MD simulation but I obtain this Warning and I cannot remove it with the 
-maxwarn option.

"Using Berendsen pressure coupling invalidates the true ensemble for the 
thermostat"


How can I use Berendsen for MD simulation?

Simply, you can't, and you shouldn't. The Berendsen method produces an invalid 
statistical mechanical ensemble. It relaxes systems quickly and is therefore 
still useful for equilibration, but should never be employed during data 
collection. Full stop.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061


Re: [gmx-users] NPT not working

2018-11-14 Thread Justin Lemkul



On 11/14/18 6:07 AM, Hanin Omar wrote:

 Thank you, Justin, for your answer. I went back to my system and did all
sorts of tests, and the system works fine without the RDC restraints. Since
the RDC values were used to generate the original PDB file of the protein, I
think the strongest option is to say that I have an issue with the
implementation of the RDC restraints, so I'll post how I implemented them
and see if anyone can pinpoint my mistakes:

1) in the topology file:
; Include orientation restraint file
#ifdef POSRES
#include "RDC-restraint.itp"
#endif
2) the RDC-restraint.itp has lines like these for 4 types of RDCs (N-H,
C-N, Cα-Hα, H-C):
[ orientation_restraints ]
;  ai   aj   type   exp.   label   alpha   const.     obs.          weight
;                                          Hz nm^3    Hz            Hz^-2
; residue 1
    5    6      1     1       1       3    -15.098    -15.0821      1
    5    6      1     2       2       3    -15.098     10.8391      1
; residue 2
   18   20      1     1       3       3      1.53      -1.145444    1
   18   20      1     2       4       3      1.53      -1.128506    1
   20   21      1     1       5       3      6.083      7.72483     1
   20   21      1     2       6       3      6.083      7.27571     1
   18   21      1     1       7       3    -15.098      0.187741325 1
   18   21      1     2       8       3    -15.098      3.13646     1
   22   23      1     1       9       3    -15.098    -12.31345     1
   22   23      1     2      10       3    -15.098     15.355       1


I don't know anything about these types of restraints, but it seems you 
have two different restraints for each pair of atoms, which sounds fishy 
to me. Try turning them on one at a time.


-Justin


3) in the mdp file for the simulation:
define       = -DPOSRES     ; use orientation restraints
; Options for orientation restraints
orire        = yes
orire-fc     = 1            ; orientation restraints force constant
orire-tau    = 0            ; turn off time averaging
orire-fitgrp = backbone


Is the problem with my implementation that the RDC restraints are applied to
the water as well as the protein? If so, how can I specify that they should
be fitted to just the protein?

Thank you for the answer,
Hanin

 On Thursday, November 1, 2018, 4:11:10 PM EDT, Justin Lemkul 
 wrote:
  
  


On 11/1/18 7:33 AM, Hanin Omar wrote:

Dear all GROMACS users,
I am new to GROMACS, and I desperately need help. I want to use GROMACS to
calculate the internal dynamics trajectory within a protein.
For my simulation I followed this protocol:
1) Use pdb2gmx to generate the .gro file and topology (I chose AMBER99SB
and the TIP3P water model)
2) Energy minimization in vacuum
3) Set periodic boundary conditions
4) Solvate the system
5) Add ions to neutralize the system
6) Energy minimization of the solvated system
7) Position-restrained MD
8) Unrestrained MD (NVT equilibration)
9) Unrestrained MD (NPT equilibration)
10) Run the MD simulation with the addition of the RDC restraints in the
topology like this:
; Include Position restraint file
#ifdef POSRES
#include "RDC-restraint.itp"
#endif

The problem is I always get the following error when I run the last step:
(Fatal error:
Error: Too many iterations in routine JACOBI)
I researched the error, and some suggested that the system wasn't
equilibrated enough. After checking, it seems that the NPT part doesn't
converge. I tried extending the time, but that didn't solve it. I tried
doing it in two steps: a first NPT with thermostat = v-rescale and
barostat = Berendsen for, say, 6.5 ps, and then a second NPT with
thermostat = Nose-Hoover and barostat = Parrinello-Rahman for 6.5 ps. But
that also didn't help. Can someone explain why this is happening (and
whether it has to do with the RDC restraints)? And how can I solve it?

If the introduction of RDC restraints causes a failure, then either (1)
the implementation of the restraints is incorrect or (2) your structure
is incompatible with those restraints, leading to the buildup of forces
and a crash.

-Justin



--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] how to keep the ligand inside water box

2018-11-14 Thread Justin Lemkul




On 11/14/18 7:08 AM, Rahma Dahmani wrote:

Hi GMX users,

After running NVT equilibration, my ligand gets out of the box, and before
proceeding to NPT equilibration I want to know how to get (and keep, if
possible) the ligand in the *center* of the water box.

There is no reason to do this. In a periodic (infinite) system, there is 
no such thing as a "center." Your molecule will diffuse, cross periodic 
boundaries, etc. You can recenter after the fact with trjconv, but there 
is no physical need to enforce some visualization convenience.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] how to keep the ligand inside water box

2018-11-14 Thread Quyen Vu Van
Hi,
I don't know of a way to keep the ligand in the center of the box, but I
think you should apply a *position restraint* to the ligand during the
equilibration process if you don't want the ligand to move significantly.
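For the equilibration-time restraint suggested above, the usual GROMACS pattern is to generate a position-restraint file for the ligand with gmx genrestr and include it under its own define. The file and define names below (posre_lig.itp, -DPOSRES_LIG) are illustrative placeholders, not names from this thread:

```
; At the end of the ligand's [ moleculetype ] section in the topology;
; posre_lig.itp can be generated with gmx genrestr from the ligand coordinates.
#ifdef POSRES_LIG
#include "posre_lig.itp"
#endif

; In the equilibration .mdp file (remove for the production run):
define = -DPOSRES_LIG
```

Remember to drop the define from the production .mdp so the ligand is unrestrained during data collection.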

On Wed, Nov 14, 2018 at 1:08 PM Rahma Dahmani 
wrote:

> Hi GMX users,
>
> After running nvt equilibration, my ligand get out the box and before
> proceeding to npt equilibration i want to know how to get (and keep if is
> it possible) the ligand in water box* center* ?
>
> Thank you !
>
> --
>
>
>
>
>
>
> *Rahma Dahmani Doctorante en CHIMIE Unité de Recherche: Physico-Chimie des
> Matériaux à l'état condensé, Laboratoire de Chimie Théorique et
> Spectroscopie MoléculaireUniversité de Tunis El Manar, Faculté des Sciences
> de Tunis Campus Universitaire Farhat Hached - BP n ° 94 - Rommana 1068,
> Tunisie Tél: (+216) 28151042*


Re: [gmx-users] NPT not working

2018-11-14 Thread Hanin Omar
 Thank you Justin for your answer, i went back to my system and did all sort of 
tests, and the system works fine without the RDC restraint. Since the RDC 
values where used to generate the original pdb file of the protein i think the 
strongest option would be to say that i have an issue with the implementation 
of RDC constraints, so ill post how i implemented them and see if anyone cane 
pinpoint my mistakes:1) in the toplogy file: 
; Include orientation restraint file
ifdef POSRES
#include "RDC-restraint.itp"
endif
2) the RDC-restraint.itp had lines like these for 4 types of rdc (n-h , c-n, 
calpha-halpha, h-c):
[ orientation_restraints ]
;  ai    aj    type   exp.    label   alpha   const.   obs.   weight
;    Hz nm^3 Hz    Hz^-2
; residue 1
 5   6    1   1    1   3   -15.098   -15.0821   1
 5   6    1   2    2   3   -15.098   10.8391   1
; residue 2
 18   20    1   1    3   3   1.53   -1.145444   1
 18   20    1   2    4   3   1.53   -1.128506   1
 20   21    1   1    5   3   6.083   7.72483   1
 20   21    1   2    6   3   6.083   7.27571   1
 18   21    1   1    7   3   -15.098   0.187741325   1
 18   21    1   2    8   3   -15.098   3.13646   1
 22   23    1   1    9   3   -15.098   -12.31345   1
 22   23    1   2    10   3   -15.098   15.355   1 

3) In the mdp file for the simulation:
define       = -DPOSRES    ; use orientation restraint
; Options for orientation restraints
orire        = yes
orire-fc     = 1           ; orientation restraints force constant
orire-tau    = 0           ; turn off time averaging
orire-fitgrp = backbone


Is the problem with my implementation that the RDC restraints are applied to
the water as well as the protein, and if so, how can I specify that they
should be fitted to just the protein?

Thank you for the answer,
Hanin

On Thursday, November 1, 2018, 4:11:10 PM EDT, Justin Lemkul wrote:

On 11/1/18 7:33 AM, Hanin Omar wrote:
> Dear all GROMACS users,
> I am new to GROMACS, and I desperately need help. I want to use GROMACS to
> calculate the internal dynamics trajectory within a protein.
> For my simulation I followed this protocol:
> 1) Use pdb2gmx to generate the gro file and topology (I chose amber99SB and
> the TIP3P water model)
> 2) Energy minimization in vacuum
> 3) Set periodic boundary conditions
> 4) Solvate the system
> 5) Add ions to neutralize the system
> 6) Energy minimization of the solvated system
> 7) Position-restrained MD
> 8) Unrestrained MD (NVT equilibration)
> 9) Unrestrained MD (NPT equilibration)
> 10) Run MD simulation with the addition of the RDC restraints in the topology,
> like this:
> ; Include position restraint file
> #ifdef POSRES
> #include "RDC-restraint.itp"
> #endif
>
> The problem is I always get the following error when I run the last step:
> (Fatal error:
> Error: Too many iterations in routine JACOBI)
> I researched the error and some suggested that the system wasn't equilibrated
> enough. After checking, it seems that the NPT part doesn't converge. I tried
> extending the time, but that didn't solve it. I tried doing it in two steps:
> a 1st NPT with thermostat = v-rescale and barostat = Berendsen for, say, 6.5 ps,
> and then a 2nd NPT with thermostat = Nose-Hoover and barostat =
> Parrinello-Rahman for 6.5 ps. But that also didn't help. Can someone explain
> why this is happening (and if it has to do with the RDC restraints)? And how
> can I solve it?

If the introduction of RDC restraints causes a failure, then either (1) 
the implementation of the restraints is incorrect or (2) your structure 
is incompatible with those restraints, leading to the buildup of forces 
and a crash.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

  

Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 175, Issue 40

2018-11-14 Thread Farial Tavakoli
Dear Justin

Thanks for your reply

Yes I edited the topology and resolved the problem

Best
Farial

On Tue, Nov 13, 2018 at 3:34 AM <
gromacs.org_gmx-users-requ...@maillist.sys.kth.se> wrote:

> Send gromacs.org_gmx-users mailing list submissions to
> gromacs.org_gmx-users@maillist.sys.kth.se
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or, via email, send a message with subject or body 'help' to
> gromacs.org_gmx-users-requ...@maillist.sys.kth.se
>
> You can reach the person managing the list at
> gromacs.org_gmx-users-ow...@maillist.sys.kth.se
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gromacs.org_gmx-users digest..."
>
>
> Today's Topics:
>
>1. Re: Expansion system after NVT Equilibrium in Gromacs
>   (Justin Lemkul)
>2. Re: Input file for energy minimization for solvated system:
>   Error (Justin Lemkul)
>3. Re: pdb2gmx fatal error (Ali Khodayari)
>4. Re: Group WAT referenced in the .mdp file was not found in
>   the index file (Justin Lemkul)
>5. Re: atomtype OV not found (Justin Lemkul)
>
>
> --
>
> Message: 1
> Date: Mon, 12 Nov 2018 18:59:01 -0500
> From: Justin Lemkul 
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Expansion system after NVT Equilibrium in
> Gromacs
> Message-ID: 
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
>
> On 11/11/18 12:16 PM, yasmineso...@students.itb.ac.id wrote:
> > I'm currently running a simulation of a protein in water using GROMACS with
> the force field GROMOS96 54a7. The protein structure that I used is from
> MODELLER. There is no problem until the minimization step, but the system
> expands after NVT equilibration, and when I try to do NPT
> equilibration after that, the system shrinks (the box volume is
> smaller than before).
> >
> > I'm using the parameters from the GROMACS tutorial "lysozyme in water" with
> h-bonds constraints for NVT equilibration, and the temperature is 298 K.
> Please help me solve this problem.
>
> Don't use input files from my tutorial if you're using GROMOS. The
> nonbonded settings and required constraints are different, as hopefully
> I make very clear...
>
> In any case, if you're running NPT, naturally the system will expand and
> contract. There's no reason that it won't, as that is precisely what a
> barostat does. If your system is changing, that just means your initial
> conditions are not compatible with the specified NPT ensemble and the
> properties of the system have to adjust.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
>
>
> --
>
> Message: 2
> Date: Mon, 12 Nov 2018 19:00:23 -0500
> From: Justin Lemkul 
> To: gmx-us...@gromacs.org
> Subject: Re: [gmx-users] Input file for energy minimization for
> solvated system: Error
> Message-ID: 
> Content-Type: text/plain; charset=utf-8; format=flowed
>
>
>
> On 11/11/18 3:04 PM, Neena Susan Eappen wrote:
> > Hello GROMACS users,
> >
> >
> > I was trying to create an input file for energy minimization of solvated
> system, using the following command:
> >
> > grompp -v -f minim.mdp -c protein-solvated.gro -p protein.top -o
> protein-EM-solvated.tpr
> >
> > Got the following error:
> > number of coordinates in coordinate file (1y6l-solvated.gro, 181577)
> does not match topology (1y6l.top, 182265).
> >
> > I think the following error occurred because I skipped the following
> step:
> > Edit the topology file and decrease the number of solvent molecules.
> Also add a line specifying the number of NA ions and a line specifying the
> amount of CL.
> >
> > My question:
> >
> >1.  How to open the topology file?
>
> A topology is a plain text file. Use a plain-text editor.
>
> >2.  How do I determine number of NA and CL ions added? I just saw a
> massive list of these counterions being added, but not the total number.
>
> genion tells you this.
>
> >3.  My net charge on the protein was 6+, why do I need to add Na+
> ions?
>
> We don't know what your genion command was, so we can't say.
>
> Let solvate and genion do the work for you. Use the -p flag to have
> those programs update your topology for you, especially if you are not
> familiar with their contents or how to edit them. See e.g.
> http://www.mdtutorials.com/gmx/lysozyme/index.html
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.

Re: [gmx-users] Lincs warning

2018-11-14 Thread Farial Tavakoli
Dear Mark

Thank you for your reply

I resolved my problem by refining the output of the tool.

Best


Re: [gmx-users] pcoupl Berendsen

2018-11-14 Thread Gonzalez Fernandez, Cristina
Hi Justin,

I have taken a few days to answer because I was trying to reduce the
discrepancies between the pressure I obtain after the simulation and the one I set
in the .mdp file. However, I have not achieved very good results. As I indicated
in the previous email, in my simulations ref_p = 1 bar and ref_t = 298 K.
According to the output of gmx energy, the pressure average after the
simulation is 0.19 bar, Err.est. = 0.59 and RMSD = 204.98; and for the
temperature, average = 298.003, Err.est. = 0.0032, RMSD = 2.76.

From these results and your previous email, as the error (Err.est.) in the
pressure is of the same magnitude as the pressure value after the simulation, the
pressure significantly differs from the reference value (1 bar), so, for
example, more simulation time will be required. This is also supported by the
high RMSD, which is on the order of hundreds. For the temperature, the error
is 5 orders of magnitude lower than the obtained value and the RMSD is very
low. This suggests that the system has reached the equilibrium temperature. Are
these reasons correct?

Another thing that makes me think that the pressure I obtain is not correct is
that the pressure I obtained after the simulation and the one I obtain after
analysis also differ significantly (0.19 and 3.9 bar, respectively).

I have used long NPT equilibration and simulation times (50 ns each) and the
results are similar to the ones I have indicated above, which apparently means
that the system is stable.


From these discrepancies, do you think the differences are not as important as
I am considering? What could I do to obtain more accurate pressure values?

Regarding Parrinello-Rahman being "not stable for low pressures", I
understood that when using low pressures, obtaining the reference pressure is
sometimes difficult with Parrinello-Rahman. I was trying to use this
article to explain why my simulation pressures differ from ref_p,
but as you say, I have also read papers that use Parrinello-Rahman for
simulating 1 bar systems.

Thank you very much for all your help,

C.


-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
 On behalf of Justin Lemkul
Sent: Thursday, 8 November 2018 14:01
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] pcoupl Berendsen



On 11/8/18 7:50 AM, Gonzalez Fernandez, Cristina wrote:
> Dear Gromacs users,
>
> In my simulations, I have specified ref_p= 1bar but after MD 
> simulation I obtain pressures equal to 0.19 bar (even

A pressure without an error bar is a meaningless value. The fluctuations of 
pressure in most systems are on the order of tens or hundreds of bar, meaning 
your result is indistinguishable from the target value.
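A toy time series makes this concrete (synthetic numbers, not data from this thread; in practice the trace would come from `gmx energy`): with instantaneous fluctuations of roughly 200 bar RMSD, it is the standard error of the mean, not the RMSD, that decides whether an average of 0.19 bar is distinguishable from a target of 1 bar.

```python
import random
import statistics

def block_average_error(series, n_blocks=5):
    """Crude block-averaging estimate of the standard error of the mean:
    split the series into contiguous blocks and take the standard error
    of the block means."""
    size = len(series) // n_blocks
    means = [statistics.mean(series[i * size:(i + 1) * size])
             for i in range(n_blocks)]
    return statistics.stdev(means) / n_blocks ** 0.5

random.seed(0)
# Synthetic pressure trace: true mean 1 bar, ~200 bar instantaneous RMSD,
# typical magnitudes for a small condensed-phase box.
trace = [random.gauss(1.0, 200.0) for _ in range(50_000)]

mean_p = statistics.mean(trace)
rmsd_p = statistics.pstdev(trace)
err_p = block_average_error(trace)
print(f"mean = {mean_p:.2f} bar, RMSD = {rmsd_p:.1f} bar, "
      f"err.est. = {err_p:.2f} bar")
```

An average of 0.19 bar with an error estimate of the same order is statistically consistent with 1 bar; only an error estimate well below ~0.8 bar could reveal a real deviation.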

> with long simulation times) when using pcoupl=Parrinello-Rahman. I 
> know that Parrinello-Rahman is recommend for production runs and 
> Berendsen for NPT equilibration. However, I have read in an article 
> that Parrinello-Rahman is not stable for low pressures, so in such 
> situations its better to use Berendsen. I have tried to use Berendsen 
> for

I would be interested to know how this "not stable for low pressures" 
was determined, because it seems completely unlikely to be true. Most MD 
simulations nowadays use Parrinello-Rahman for pressure coupling at 1
bar/1 atm without any issue if the system is properly equilibrated (and if not, 
the problem is with preparation, not the barostat itself).

> MD simulation but I obtain this Warning and I cannot remove it with the 
> -maxwarn option.
>
> "Using Berendsen pressure coupling invalidates the true ensemble for the 
> thermostat"
>
>
> How can I use Berendsen for MD simulation?

Simply, you can't, and you shouldn't. The Berendsen method produces an invalid 
statistical mechanical ensemble. It relaxes systems quickly and is therefore 
still useful for equilibration, but should never be employed during data 
collection. Full stop.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] Setting rcon according to system

2018-11-14 Thread Sergio Perez
Hello,
First of all, thanks for the help :)
I don't necessarily need to run it with 100 processors; I just want to know
how much I can reduce rcon, taking into account my knowledge of the system,
without compromising the accuracy. Let me give some more details of my
system. The system is a sodium montmorillonite clay with two solid
aluminosilicate layers and two aqueous interlayers between them. The
system has TIP4P waters, some OH bonds within the clay, and the bonds of the
uranyl hydrated ion described in my previous email as constraints. The
system is orthorhombic, 4.67070 x 4.49090 x 3.77930 nm, and has 9046 atoms.

This is the output of GROMACS:

Initializing Domain Decomposition on 100 ranks
Dynamic load balancing: locked
Initial maximum inter charge-group distances:
   two-body bonded interactions: 0.470 nm, Tab. Bonds NC, atoms 10 13
Minimum cell size due to bonded interactions: 0.000 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.842 nm
Estimated maximum distance required for P-LINCS: 0.842 nm
This distance will limit the DD cell size, you can override this with -rcon
Guess for relative PME load: 0.04
Will use 90 particle-particle and 10 PME only ranks
This is a guess, check the performance at the end of the log file
Using 10 separate PME ranks, as guessed by mdrun
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 90 cells with a minimum initial size of 1.052 nm
The maximum allowed number of cells is: X 4 Y 4 Z 3

---
Program: mdrun_mpi, version 2018.1
Source file: src/gromacs/domdec/domdec.cpp (line 6571)
MPI rank:0 (out of 100)

Fatal error:
There is no domain decomposition for 90 ranks that is compatible with the
given box and a minimum cell size of 1.05193 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS
settings
Look in the log file for details on the domain decomposition

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---


Thank you for your help!
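The arithmetic in that log can be reproduced directly (a sketch; `decomposition_exists` is an illustrative helper, not mdrun's actual grid search): with a minimum cell size of 1.05193 nm, the box admits at most a 4 x 4 x 3 cell grid, i.e. 48 cells, so no grid for 90 particle-particle ranks exists.

```python
from itertools import product

def max_dd_cells(box, min_cell):
    """Maximum number of domain-decomposition cells per dimension:
    each cell must be at least min_cell wide."""
    return [int(length // min_cell) for length in box]

def decomposition_exists(n_ranks, cell_limits):
    """Check whether n_ranks can be factored into an nx*ny*nz grid
    respecting the per-dimension cell limits (illustrative only)."""
    lx, ly, lz = cell_limits
    return any(n_ranks % (nx * ny) == 0 and n_ranks // (nx * ny) <= lz
               for nx, ny in product(range(1, lx + 1), range(1, ly + 1)))

box = (4.67070, 4.49090, 3.77930)   # nm, from the log above
min_cell = 1.05193                  # nm, P-LINCS estimate scaled by 1/0.8
limits = max_dd_cells(box, min_cell)
print(limits)                            # [4, 4, 3], matching the log
print(decomposition_exists(90, limits))  # False -> the fatal error
print(decomposition_exists(48, limits))  # True: a 4x4x3 grid fits
```

Lowering -rcon toward the true ~0.6 nm constraint extent would raise the per-dimension limits accordingly, but as Mark suggests below, simply using fewer ranks sidesteps the question.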

On Wed, Nov 14, 2018 at 5:28 AM Mark Abraham wrote:

> Hi,
>
> Possibly. It would be simpler to use fewer processors, such that the
> domains can be larger.
>
> What does mdrun think it needs for -rcon?
>
> Mark
>
> On Tue, Nov 13, 2018 at 7:06 AM Sergio Perez 
> wrote:
>
> > Dear gmx community,
> >
> > I have been running my system without any problems on 100 processors.
> > But I decided to make some of the bonds of my main molecule constraints.
> > My molecule is not an extended chain; it is a molecular hydrated ion, in
> > particular the uranyl cation with 5 water molecules forming a pentagonal
> > bipyramid. At this point I get a domain decomposition error and I would
> > like to reduce rcon in order to run with 100 processors. Since I know,
> > by the shape of my molecule, that two atoms connected by several
> > constraints will never be farther apart than 0.6 nm, can I use this
> > safely for -rcon?
> >
> > Thank you very much!
> > Best regards,
> > Sergio Pérez-Conesa
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.