Re: [gmx-users] gromacs 5.1.2 MPI performance

2016-09-09 Thread yunshi11 .
Can you tell everyone your system size? 112 cores could be 7 X 16 or 14 X 8, which is indeed weird. Have you tried 4 X 8, 6 X 8, or 12 X 8? These look more natural to me. On Thu, Sep 8, 2016 at 9:08 PM, Stephen Chan wrote: > Hello, > > I am compiling an MPI version
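For reference, a rank count that factors evenly is usually easier for the domain decomposition than 112; a minimal sketch, assuming an MPI build named gmx_mpi and placeholder file names:

  mpirun -np 96 gmx_mpi mdrun -deffnm md            # 96 = 4 x 24 factors cleanly for DD
  mpirun -np 96 gmx_mpi mdrun -deffnm md -npme 24   # optionally dedicate ranks to PME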

[gmx-users] Simulated tempering in gromacs

2016-07-31 Thread yunshi11 .
Dear List, I'd like to use "simulated tempering" to increase my sampling efficiency as I am trying to fold a polymer from its extended conformation. I understand that GROMACS 5 can easily handle replica-exchange MD, but my system is too large (>100K atoms) and there would be too many replicas...
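For reference, simulated tempering in GROMACS is driven through the expanded-ensemble options in the mdp file; the fragment below is an illustrative sketch only, with placeholder temperatures and lambda spacing, not a validated protocol (see the mdp options documentation for the full set of expanded-ensemble keywords):

  free-energy                 = expanded
  nstexpanded                 = 500
  simulated-tempering         = yes
  simulated-tempering-scaling = linear
  sim-temp-low                = 300
  sim-temp-high               = 450
  temperature-lambdas         = 0.0 0.25 0.5 0.75 1.0
  lmc-stats                   = wang-landau
  lmc-move                    = metropolis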

[gmx-users] vdw-type and DispCorr

2016-07-21 Thread yunshi11 .
Hello everyone, Should DispCorr be turned off if I am using vdw-type = PME (since PME then already accounts for long-range vdW interactions)? Also, I am curious whether this vdW-PME option is designed to suit specific force fields. Thanks for any thoughts. Yun
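For reference, a minimal mdp fragment for the combination being asked about (values are placeholders; whether to keep DispCorr with LJ-PME is exactly the open question here, so treat the last line as one common choice rather than a rule):

  cutoff-scheme    = Verlet
  vdwtype          = PME        ; LJ-PME treats long-range dispersion explicitly
  lj-pme-comb-rule = Geometric
  DispCorr         = no         ; often omitted with LJ-PME to avoid double counting; check the manual for your force field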

[gmx-users] Empty output files from gmx helix?

2016-06-08 Thread yunshi11 .
Hi everyone, I have a 21-mer peptide A for helix analysis using gmx helix in GROMACS 5. When I select residues 3-19 while running gmx helix, everything looks fine. But when I select another range, e.g. 8-19, the output len-ahx.xvg file ends after: # This file was created Thu Jun 9
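For reference, one way to build the residue selection is with an index group; file names below are placeholders and the exact flags should be checked against gmx helix -h:

  gmx make_ndx -f md.tpr -o index.ndx        # e.g. enter "r 8-19" to create the residue group
  gmx helix -s md.tpr -n index.ndx -f md.xtc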

Re: [gmx-users] updates about ACPYPE

2015-10-21 Thread yunshi11 .
Hi Alan, Is this Rev: 403? And is the --gmx45 option still compatible with GROMACS 5 (although GROMACS says version 5 is mostly backward compatible with version 4)? On Sat, Aug 30, 2014 at 8:40 PM, Alan wrote: > Dear community, > > Many thanks to all of you who use ACPYPE

Re: [gmx-users] First frame already out of box, getting very large RMSD

2015-03-01 Thread yunshi11 .
On Sun, Mar 1, 2015 at 5:42 PM, Justin Lemkul jalem...@vt.edu wrote: On 3/1/15 8:26 PM, yunshi11 . wrote: On Sun, Mar 1, 2015 at 4:36 PM, Justin Lemkul jalem...@vt.edu wrote: On 3/1/15 3:47 PM, yunshi11 . wrote: On Sun, Mar 1, 2015 at 11:43 AM, Justin Lemkul jalem...@vt.edu wrote

Re: [gmx-users] First frame already out of box, getting very large RMSD

2015-03-01 Thread yunshi11 .
On Sun, Mar 1, 2015 at 4:36 PM, Justin Lemkul jalem...@vt.edu wrote: On 3/1/15 3:47 PM, yunshi11 . wrote: On Sun, Mar 1, 2015 at 11:43 AM, Justin Lemkul jalem...@vt.edu wrote: On 3/1/15 1:21 PM, yunshi11 . wrote: On Sat, Feb 28, 2015 at 4:40 PM, Justin Lemkul jalem...@vt.edu wrote

Re: [gmx-users] First frame already out of box, getting very large RMSD

2015-03-01 Thread yunshi11 .
On Sat, Feb 28, 2015 at 4:40 PM, Justin Lemkul jalem...@vt.edu wrote: On 2/28/15 7:38 PM, yunshi11 . wrote: On Sat, Feb 28, 2015 at 4:21 PM, Justin Lemkul jalem...@vt.edu wrote: On 2/28/15 7:17 PM, yunshi11 . wrote: On Sat, Feb 28, 2015 at 3:03 PM, Justin Lemkul jalem...@vt.edu wrote

Re: [gmx-users] First frame already out of box, getting very large RMSD

2015-03-01 Thread yunshi11 .
On Sun, Mar 1, 2015 at 11:43 AM, Justin Lemkul jalem...@vt.edu wrote: On 3/1/15 1:21 PM, yunshi11 . wrote: On Sat, Feb 28, 2015 at 4:40 PM, Justin Lemkul jalem...@vt.edu wrote: On 2/28/15 7:38 PM, yunshi11 . wrote: On Sat, Feb 28, 2015 at 4:21 PM, Justin Lemkul jalem...@vt.edu wrote

Re: [gmx-users] First frame already out of box, getting very large RMSD

2015-02-28 Thread yunshi11 .
On Sat, Feb 28, 2015 at 3:03 PM, Justin Lemkul jalem...@vt.edu wrote: On 2/28/15 6:00 PM, yunshi11 . wrote: Dear all, I am running MD for a protein-ligand complex in a dodecahedron box and followed the Suggested trjconv workflow from http://www.gromacs.org/Documentation/Terminology

[gmx-users] First frame already out of box, getting very large RMSD

2015-02-28 Thread yunshi11 .
Dear all, I am running MD for a protein-ligand complex in a dodecahedron box and followed the Suggested trjconv workflow from http://www.gromacs.org/Documentation/Terminology/Periodic_Boundary_Conditions . Now I wonder how to remove jumps (across periodic boxes) when the first frame (actually
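For reference, a minimal sketch of the commonly suggested trjconv sequence from that page, with placeholder file names (the order and exact options depend on the system, so adapt rather than copy):

  gmx trjconv -s md.tpr -f md.xtc -pbc whole -o whole.xtc                        # make broken molecules whole
  gmx trjconv -s md.tpr -f whole.xtc -pbc nojump -o nojump.xtc                   # remove jumps across the box
  gmx trjconv -s md.tpr -f nojump.xtc -pbc mol -center -ur compact -o view.xtc   # center and re-wrap for viewing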

Re: [gmx-users] First frame already out of box, getting very large RMSD

2015-02-28 Thread yunshi11 .
On Sat, Feb 28, 2015 at 4:21 PM, Justin Lemkul jalem...@vt.edu wrote: On 2/28/15 7:17 PM, yunshi11 . wrote: On Sat, Feb 28, 2015 at 3:03 PM, Justin Lemkul jalem...@vt.edu wrote: On 2/28/15 6:00 PM, yunshi11 . wrote: Dear all, I am running MD for a protein-ligand complex

[gmx-users] Broken molecule across periodic boundary: The sum of the two largest charge group radii (xxx) is larger than rlist (xxx)

2014-06-11 Thread yunshi11 .
Hi there, I understand this is an old issue, but no one seems to have a solution? So I want to take a snapshot from the middle of an MD trajectory (e.g. the 67 ns point of a 100 ns trajectory), preferably with all solvent molecules (waters and ions). However, after using trjconv -pbc to make
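For reference, a single frame can be pulled out with whole molecules using -dump (time is given in ps; file names are placeholders):

  gmx trjconv -s md.tpr -f md.xtc -dump 67000 -pbc mol -o frame_67ns.gro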

Re: [gmx-users] Shifting in Verlet cut-off schemes?

2014-02-06 Thread yunshi11 .
On Mon, Mar 4, 2013 at 3:22 PM, Mark Abraham mark.j.abra...@gmail.com wrote: On Sat, Mar 2, 2013 at 6:40 PM, Yun Shi yunsh...@gmail.com wrote: Hi all, I have read http://www.gromacs.org/Documentation/Cut-off_schemes, but am still unsure about how Verlet works. The group cut-off scheme
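For reference, a minimal mdp fragment for the Verlet scheme (placeholder cutoffs); with this scheme the potentials are shifted to zero at the cutoff and the pair-list buffer is set automatically:

  cutoff-scheme           = Verlet
  verlet-buffer-tolerance = 0.005   ; kJ/mol/ps per atom (called verlet-buffer-drift in 4.6)
  rcoulomb                = 1.0
  rvdw                    = 1.0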

[gmx-users] Different optimal pme grid ... coulomb cutoff values from identical input files

2014-02-05 Thread yunshi11 .
Hello all, I am doing a production MD run of a protein-ligand complex in explicit water with GROMACS 4.6.5. However, I got different Coulomb cutoff values, as shown in the output log files. 1st one:
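For reference, differing cutoffs in the log usually come from mdrun's PME load balancing, which rescales rcoulomb and the grid at run time; it can be disabled if strictly identical settings are wanted (placeholder file names):

  mdrun -deffnm md -notunepme        # gmx mdrun -notunepme in 5.x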

[gmx-users] Since nstlist has no effect on the accuracy

2014-02-05 Thread yunshi11 .
In an MD run with the Verlet cutoff scheme, can I set nstlist as large as possible? Like 100 or 1000? Thanks, Yun
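For reference, with the Verlet scheme nstlist only affects performance: a larger value means a larger buffer and more pairs to evaluate per step, so very large values eventually cost speed rather than gain it. A sketch (mdrun may raise nstlist itself at run time, especially with GPUs):

  cutoff-scheme = Verlet
  nstlist       = 40        ; placeholder; treat as a starting guess, not a tuned value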

[gmx-users] z always small for domain decomposition grid?

2014-02-05 Thread yunshi11 .
Hi everyone, For my MD simulations on different numbers of CPUs (sometimes with GPUs), the domain decomposition grid I get from automatic domain decomposition (i.e. not setting -dd) is always like: .. Domain decomposition grid 8 x 6 x 2, separate PME nodes 48 PME domain decomposition: 8 x 6 x
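For reference, the grid can be forced by hand if needed; the product of the three numbers must equal the number of PP ranks (the values below are placeholders, not a recommendation):

  gmx mdrun -deffnm md -dd 4 4 3 -npme 48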

Re: [gmx-users] How to find the mutation effect?

2013-12-15 Thread yunshi11 .
What exactly is the mutation? What is the length of the MD run? Have you tried clustering and comparing the clustered structures? On Sun, Dec 15, 2013 at 10:42 AM, xiao helitr...@126.com wrote: Dear all, I am doing MD on a mutant protein which is unstable in experiment. However, I found no difference
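For reference, a minimal clustering sketch that could be run on both the wild-type and mutant trajectories for comparison (file names and cutoff are placeholders):

  gmx cluster -s md.tpr -f md.xtc -method gromos -cutoff 0.15 -cl clusters.pdb -g cluster.log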

[gmx-users] 3 GPUs much faster than 2 GPUs with GROMACS-4.6.2 ???

2013-12-09 Thread yunshi11 .
Hi all, I have a physical compute node with 2x 6-core Intel E5649 processors + three NVIDIA Tesla M2070 GPUs. First I tried using all 12 CPU cores + 3 GPUs for an equilibration run (of a protein in TIP3P water), which gave me 8.964 ns/day. But I noticed the PME mesh calculation,
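For reference, with 12 cores and 3 GPUs one common mapping is three thread-MPI ranks with four OpenMP threads each, one GPU per rank (4.6-era syntax; placeholder file names):

  mdrun -deffnm md -ntmpi 3 -ntomp 4 -gpu_id 012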

Re: [gmx-users] load imbalance in multiple GPU simulations

2013-12-08 Thread yunshi11 .
. small system), even using only two of the three GPUs could improve performance. Cheers, -- Szilárd On Sun, Dec 8, 2013 at 8:10 PM, yunshi11 . yunsh...@gmail.com wrote: Hi all, My conventional MD run (equilibration) of a protein in TIP3P water had an "Average load imbalance: 59.4 %" when
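For reference, two knobs that are often tried for load-imbalance problems like this one (placeholder file names; the second line follows Szilárd's suggestion of using only two of the three GPUs):

  mdrun -deffnm md -dlb yes                          # force dynamic load balancing on
  mdrun -deffnm md -ntmpi 2 -ntomp 6 -gpu_id 01      # two ranks, two GPUs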