[gmx-users] Enthalpy calculation in NPT ensemble

2018-08-07 Thread Pelin S Bulutoglu
Dear GROMACS users,

I am simulating a glycine crystal using the GAFF force field with CNDO point 
charges in an NPT ensemble. I want to calculate the enthalpy and Cp of the 
crystal. However, the list of terms offered by the gmx energy command does 
not include the Enthalpy or pV options, as it normally would for an NPT 
simulation. Does this mean that the simulation does not exhibit NPT behavior? 
I believe this is not the case, since upon inspecting the temperature and 
pressure I found that both are constant (within statistical error) throughout 
the simulation. What might be the cause of this? Some of my input parameters 
are listed below:

   cutoff-scheme  = Verlet
   nstlist= 40
   ns-type= Grid
   pbc= xyz
   verlet-buffer-tolerance= 0.005
   rlist  = 1.4
   coulombtype= PME
   coulomb-modifier   = Potential-shift
   rcoulomb-switch= 0
   rcoulomb   = 1.4
   rvdw   = 1.4
   DispCorr   = EnerPres
   table-extension= 1
   fourierspacing = 0.12
   pme-order  = 4
   tcoupl = V-rescale
   nsttcouple = 10
   pcoupl = Berendsen
   pcoupltype = Anisotropic
   nstpcouple = 10
   tau-p  = 2
   compressibility (3x3):
  compressibility[0]={ 4.5e-05,  4.5e-05,  4.5e-05}
  compressibility[1]={ 4.5e-05,  4.5e-05,  4.5e-05}
  compressibility[2]={ 4.5e-05,  4.5e-05,  4.5e-05}
   ref-p (3x3):
  ref-p[0]={ 1.0e+00,  1.0e+00,  1.0e+00}
  ref-p[1]={ 1.0e+00,  1.0e+00,  1.0e+00}
  ref-p[2]={ 1.0e+00,  1.0e+00,  1.0e+00}
   refcoord-scaling   = COM
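
In the meantime, the manual workaround I am considering is to compute
H(t) = Etot(t) + p_ref*V(t) myself. This is only a sketch: it assumes the
standard gmx energy term names, my reference pressure of 1 bar, and the
conversion 1 bar nm^3 = 0.0602214 kJ/mol; the output file names are
placeholders.

   echo "Total-Energy Volume" | gmx energy -f ener.edr -o etot_vol.xvg
   awk '!/^[@#]/ {print $1, $2 + 1.0*0.0602214*$3}' etot_vol.xvg > enthalpy.xvg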

Any input would be greatly appreciated. Thanks in advance.

Pelin Su Bulutoglu


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Cannot compute velocity of COM of a group of atoms

2018-08-07 Thread ARNAB MUKHERJEE
Dear all,

I have simulated a system of DNA and protein, and I want to calculate the
velocity of the center of mass of the protein as a function of time. So I used
the following command:

gmx traj -f traj_comp.trr -s md_run-E-Field.tpr -n index.ndx -ov
test-vel.xvg -com

I have pasted below the file that it generates:

# GROMACS:  gmx traj, VERSION 5.0.6
# Executable:   /applic/applications/GROMACS/5.0.6/bin/gmx
# Library dir:  /applic/applications/GROMACS/5.0.6/share/gromacs/top
# Command line:
#   gmx traj -f traj_comp.trr -s md_run-E-Field.tpr -n index.ndx -ov
test-vel.xvg -com
# gmx is part of G R O M A C S:
#
# Georgetown Riga Oslo Madrid Amsterdam Chisinau Stockholm
#
@title "Center of mass velocity"
@xaxis  label "Time (ps)"
@yaxis  label "Velocity (nm/ps)"
@TYPE xy
@ view 0.15, 0.15, 0.75, 0.85
@ legend on
@ legend box on
@ legend loctype view
@ legend 0.78, 0.8
@ legend length 2
@ s0 legend "Protein X"
@ s1 legend "Protein Y"
@ s2 legend "Protein Z"
"test-vel.xvg" 24L, 731C  24,1
Bot

So it doesn't contain the velocities. Is this because I had set nstvout = 0 in
my .mdp file? I have pasted the initial part of my .mdp file below:

title   = NVT equilibration with position restraint on all solute
(topology modified)
; Run parameters
integrator  = md; leap-frog integrator
nsteps  = 500   ; 1 * 50 = 500 ps
;nsteps  = 5000
dt  = 0.02  ; 1 fs
; Output control
nstxout = 0 ; save coordinates every 10 ps
nstvout = 0 ; save velocities every 10 ps
nstcalcenergy   = 50
nstenergy   = 1000  ; save energies every 1 ps
nstxtcout   = 2500
;nstxout-compressed  = 5000   ; save compressed coordinates every 1.0 ps
 ; nstxout-compressed replaces nstxtcout
;compressed-x-grps  = System  ; replaces xtc-grps
nstlog  = 1000  ; update log file every 1 ps
; Bond parameters
continuation= no   ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = none  ; all bonds (even heavy atom-H bonds)
constrained
;lincs_iter = 2 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy

Is there a way I can still compute the velocities, or do I need to run the
simulation again, setting nstvout = nsteps?
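
One workaround I am considering (only a sketch; the output file names are
placeholders) is to dump the COM position from the trajectory and take finite
differences in time:

gmx traj -f traj_comp.trr -s md_run-E-Field.tpr -n index.ndx -ox com-pos.xvg -com
awk '!/^[@#]/ {if (t!="") print $1, ($2-x)/($1-t), ($3-y)/($1-t), ($4-z)/($1-t); t=$1; x=$2; y=$3; z=$4}' com-pos.xvg > com-vel.xvg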

I would highly appreciate any help.

Thank you very much,

Regards,

Arnab
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] NVIDIA CUDA Alanine Scanning

2018-08-07 Thread Szilárd Páll
Hi,

Yes, you can use CUDA acceleration, and FEP does work; we try to keep
feature parity between the GPU-accelerated and non-accelerated modes of
GROMACS. I can't comment in depth on GMXPBSA; from a brief look at their
mailing list, they may not fully support newer GROMACS releases.
Regarding FEP, the caveat is that not all tasks can be offloaded to the
GPU: the free-energy short-range computation itself, as well as the entire
PME computation, has to run on the CPU, so such runs may not perform as
efficiently on some hardware as they would without FEP. That is
especially true if you have very few CPU cores per GPU.

Note, however, that the GROMACS CPU code is rather well tuned, so using
the CPU for some (or all) of the work is not cripplingly slow, as you might
have seen with other codes.
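
As a rough sketch of launching such a run (the file name and thread counts
here are placeholders, and I'm assuming a thread-MPI build), you would keep
the regular non-bonded offload on the GPU while the perturbed interactions
and PME stay on the CPU:

gmx mdrun -deffnm fep -nb gpu -ntmpi 1 -ntomp 8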

--
Szilárd


On Thu, Jul 26, 2018 at 5:28 PM Matthew Kenney <
matthew.ken...@cellsignal.com> wrote:

> Hi everyone,
>
> I am attempting to find a means to perform Alanine scanning mutagenesis
> with CUDA acceleration, but have yet to find a free/open-source option.
> AMBER and Schrödinger appear to be the only options available for these
> calculations on CUDA, and I'd like to avoid the hefty industry licences for
> these programs. Hopefully, GROMACS is the solution.
>
> Does anyone know if the tool GMXPBSA, or Free Energy Perturbation (FEP)
> calculations happen to support CUDA? I ask because many programs I've
> looked at claim overall CUDA support (ex: NAMD/VMD) but do not support FEP
> and certain other types of calculations on CUDA. Any help is much
> appreciated. Thanks!
>
> Best,
> Matt
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] periodic-molecules = yes and movie

2018-08-07 Thread Alex
Thanks.

On Tue, Aug 7, 2018 at 8:30 AM Justin Lemkul  wrote:

>
>
> On 8/7/18 8:23 AM, Alex wrote:
> > Dear all,
> > In the simulation of a system in which there is slab, I did not use
> > "periodic-molecules
> > = yes" unfortunately, hence I can not make a good movie out of the
>
> This is not some magic setting that will make nice movies. Note that a
> "periodic molecule" is different from just normal PBC. Periodic
> molecules are "infinite" molecules like sheets, nanotubes, etc. that
> have bonds between central image atoms and periodic copies of atoms. If
> you needed such a setting and did not use it, likely your simulation
> would have collapsed.

No, the simulation did not blow up and worked nicely. As far as I
understand, periodic-molecules = yes should be used when a surface is bonded
to its image in the neighbouring cell, as my silica slab is; if the slab is
held together only by non-bonded interactions, as most metallic slabs are,
then periodic-molecules = no should work fine even though the slab is
effectively infinite.


> > trajectory using VMD, the slab move during the movie. The simulation is
> > around 300 ns and I do not want to just waste the CPU time, so, I wonder
> if
> > there is still any solution for that to make a good movie out of that
> > trajectory?
>
> Many of the normal trjconv routines could likely help here. Accounting
> for PBC effects like these is a standard first step in post-processing
> any trajectory.
Actually, what I see is the result of the two commands below applied to the
initial trajectory:

gmx_mpi trjconv -f prd.xtc -s eql1.gro -o nojump-skip-30-prd.xtc -center
-pbc nojump -n input/index.ndx -tu ns -skip 30   < choices : slab, System
gmx_mpi trjconv -f nojump-skip-30-prd.xtc -s prd.tpr -o nopbc-nojump-skip-30-prd.xtc
-center -pbc whole -ur compact -n input/index.ndx -tu ns   < choices : slab, System
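
An alternative I have not tried yet, only as a sketch (it assumes an index
group named slab and a .tpr matching the trajectory), would be a single pass
that keeps molecules whole and centers the slab:

gmx_mpi trjconv -f prd.xtc -s prd.tpr -n input/index.ndx -o mol-center-prd.xtc
-pbc mol -center -ur compact   < choices : slab, System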

Regards,
Alex

> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Virginia Tech Department of Biochemistry
>
> 303 Engel Hall
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GMXRC removes trailing colon from existing MANPATH

2018-08-07 Thread Szilárd Páll
Hi,

Can you please submit your change to gerrit.gromacs.org -- and perhaps it's
best if you also file an issue on redmine.gromacs.org with the brief
description you posted here?
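
In the meantime, a minimal workaround (just a sketch, assuming a bash-like
shell) is to re-append the trailing colon after sourcing GMXRC:

source /usr/local/gromacs-2018.1/bin/GMXRC
export MANPATH="${MANPATH}:"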

Thanks,
--
Szilárd


On Fri, Jul 27, 2018 at 2:28 PM Peter Kroon  wrote:

> Hi all,
>
> I noticed that sourcing GMXRC removes any trailing colons from a set
> MANPATH environment variable. This colon *is* syntactically significant,
> and removing it causes `mandb` to ignore /etc/manpath.config instead of
> appending that file:
>
>
> > unset MANPATH
> > export MANPATH=/opt/puppetlabs/puppet/share/man:
> > # Note the trailing colon
> > echo $MANPATH
> /opt/puppetlabs/puppet/share/man:
> > mandb
> mandb: warning: $MANPATH set, appending /etc/manpath.config
> ...
> > source /usr/local/gromacs-2018.1/bin/GMXRC
> > # No more trailing colon
> > echo $MANPATH
> > /usr/local/gromacs-2018.1/share/man:/opt/puppetlabs/puppet/share/man
> mandb: warning: $MANPATH set, ignoring /etc/manpath.config
> ...
>
> Should I also report this on redmine, or is this sufficient?
>
>
> Peter
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gromacs with mps

2018-08-07 Thread Szilárd Páll
Hi,

It does sound like a CUDA/MPS setup issue. GROMACS uses a relatively small
amount of GPU memory, so unless you are using a very skinny GPU or a very
large input, it's most likely not a GROMACS issue.

BTW, have you made sure that your GPUs are not in process-exclusive mode?
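
For reference, a quick way to check the compute mode and, if needed, reset it
to the default (just a sketch; changing the mode requires root privileges):

nvidia-smi -q -d COMPUTE | grep "Compute Mode"
sudo nvidia-smi -c DEFAULT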

Cheers,
--
Szilárd


On Fri, Jul 27, 2018 at 9:21 PM Mahmood Naderan 
wrote:

> Hi
> Has anyone run gmx_mpi with MPS? Even with small input files (which are
> working fine when MPS is turned off), I get out of memory error from the
> GPU device.
> Don't know if there is a bug inside cuda or gromacs. I see some other
> related topics for other programs. So, it sound like a cuda problem.
> If you have worked with MPS, please let me know.
>
> Regards,
> Mahmood
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Issue with regression test.

2018-08-07 Thread Szilárd Páll
Hi,

Can you share the directory of the failed test, i.e.
regressiontests/complex/nbnxn_vsite?
Can you also try running the regression tests manually with 1/2/4 ranks, e.g.
perl gmxtest.pl complex -nt 1

--
Szilárd


On Wed, Aug 1, 2018 at 12:49 PM Raymond Arter 
wrote:

> Dear All,
>
> I'm building Gromacs 2018.2 and I've run in to a small issue that I would
> like feed back on.
>
> One of the regression tests fails with the following entered in to the log
> file.
>
> 
> Mdrun cannot use the requested (or automatic) number of ranks, retrying
> with 8.
>
> Abnormal return value for ' gmx mdrun -nb cpu   -notunepme >mdrun.out
> 2>&1' was 1
> Retrying mdrun with better settings...
>
> Abnormal return value for ' gmx mdrun -ntmpi 12  -notunepme >mdrun.out
> 2>&1' was 1
> Retrying mdrun with better settings...
>
> Abnormal return value for ' gmx mdrun -ntmpi 6  -notunepme >mdrun.out
> 2>&1' was -1
> FAILED. Check mdrun.out, md.log file(s) in nbnxn_vsite for nbnxn_vsite
> Re-running orientation-restraints using CPU-based PME
> Re-running pull_geometry_angle using CPU-based PME
> Re-running pull_geometry_angle-axis using CPU-based PME
> Re-running pull_geometry_dihedral using CPU-based PME
> 
>
> However, on the advice of a colleague I run the following command in the
> directory:
>
> gmx mdrun -ntmpi 4 -notunepme
>
> And got the following result:
>
> 
> Reading file topol.tpr, VERSION 2018.2 (single precision)
> Non-default thread affinity set, disabling internal thread affinity
> Can not increase nstlist because verlet-buffer-tolerance is not set or used
>
> Using 4 MPI threads
> Using 3 OpenMP threads per tMPI thread
>
> On host ** 4 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
>   PP:0,PP:1,PP:2,PP:3
>
> Back Off! I just backed up traj.trr to ./#traj.trr.1#
>
> Back Off! I just backed up ener.edr to ./#ener.edr.1#
> starting mdrun 'Protein'
> 20 steps,  0.1 ps.
>
> step 20 Turning on dynamic load balancing, because the performance loss due
> to load imbalance is 3.3 %.
>
> Writing final coordinates.
>
> Back Off! I just backed up confout.gro to ./#confout.gro.1#
>
>  Dynamic load balancing report:
>  DLB was turned on during the run due to measured imbalance.
>  Average load imbalance: 13.0%.
>  The balanceable part of the MD step is 25%, load imbalance is computed
> from this.
>  Part of the total run time spent waiting due to load imbalance: 3.3%.
>  Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0
> % Y 0 %
>
> NOTE: 6 % of the run time was spent in domain decomposition,
>   17 % of the run time was spent in pair search,
>   you might want to increase nstlist (this has no effect on accuracy)
>
>Core t (s)   Wall t (s)(%)
>Time:1.9320.161 1200.0
>  (ns/day)(hour/ns)
> Performance:   56.3370.426
> 
>
> Is this just a problem with the regression test (and the build of GROMACS is
> fine), or is there a problem with the build I have done?
>
> Thanks in advance.
>
> R.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] periodic-molecules = yes and movie

2018-08-07 Thread Justin Lemkul




On 8/7/18 8:23 AM, Alex wrote:

Dear all,
In the simulation of a system in which there is slab, I did not use
"periodic-molecules
= yes" unfortunately, hence I can not make a good movie out of the


This is not some magic setting that will make nice movies. Note that a 
"periodic molecule" is different from just normal PBC. Periodic 
molecules are "infinite" molecules like sheets, nanotubes, etc. that 
have bonds between central image atoms and periodic copies of atoms. If 
you needed such a setting and did not use it, likely your simulation 
would have collapsed.



trajectory using VMD, the slab move during the movie. The simulation is
around 300 ns and I do not want to just waste the CPU time, so, I wonder if
there is still any solution for that to make a good movie out of that
trajectory?


Many of the normal trjconv routines could likely help here. Accounting 
for PBC effects like these is a standard first step in post-processing 
any trajectory.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Too few cells to run on multiple cores

2018-08-07 Thread Szilárd Páll
Hi,

The domain decomposition has certain algorithmic limits that you can relax,
but as you notice that comes at the cost of deteriorating load balance --
and at a certain point it might come at the cost of simulations aborting
mid-run (if you make -rdd too large). More load imbalance does not
necessarily mean less performance, so if your only way of using more cores
is to "squeeze out" more domains of your system, as long as you get more
performance, that may be fine.

However, instead of trying to squeeze out more domains, you can actually
use multiple CPU cores per domain; see the -ntomp option and examples here:
http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
For instance, in your case you could use 4 threads per MPI rank; with a
3x2x2 decomposition you'd get 12 domains x 4 threads = 48 threads total.

A few more tips:
- your original cell-size limit was due to bonded interactions, so tweaking
LINCS would not help with that;
- you can also try separate PME ranks: some cores are then reserved for PME
work and the domain decomposition is stretched a bit less, e.g. -ntmpi 24
-npme 6 -ntomp 2 gives a 3x3x2 decomposition (see the command sketch below).
The success of this split will of course depend on the PME load in the
system, which is estimated to be very high -- are you using some
non-default settings?
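
As a concrete command-line sketch (assuming a thread-MPI build and a
placeholder topol.tpr; with a real-MPI build the rank count would come from
mpirun -np instead of -ntmpi):

gmx mdrun -s topol.tpr -ntmpi 12 -ntomp 4
gmx mdrun -s topol.tpr -ntmpi 24 -npme 6 -ntomp 2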

Cheers,

--
Szilárd


On Tue, Aug 7, 2018 at 1:52 PM Adrian Devitt-Lee 
wrote:

> Hi,
>
> I'm having an issue using mdrun in parallel on 48 cores. I'm trying to
> figure out which options I can include in the .mdp file to increase the
> number of cells in my system. The full error message is:
>
> Initializing Domain Decomposition on 48 ranks
> Dynamic load balancing: auto
> Will sort the charge groups at every domain (re)decomposition
> Initial maximum inter charge-group distances:
> two-body bonded interactions: 1.368 nm, LJC Pairs NB, atoms 2123 2152
>   multi-body bonded interactions: 0.428 nm, Proper Dih., atoms 1105 1113
> *Minimum cell size due to bonded interactions: 1.505 nm*
> Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 0.219 nm
> Estimated maximum distance required for P-LINCS: 0.219 nm
> Guess for relative PME load: 0.65
> Using 0 separate PME ranks, as guessed by mdrun
> Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
>
> *Optimizing the DD grid for 48 cells with a minimum initial size of 1.881 nm*
> *The maximum allowed number of cells is: X 3 Y 3 Z 3*
>
> ---
> Program mdrun_mpi, VERSION 5.1
> Source code file:
> /work/y07/y07/gmx/5.1-phase2/source/src/gromacs/domdec/domdec.cpp, line:
> 6969
>
> Fatal error:
> There is no domain decomposition for 48 ranks that is compatible with the
> given box and a minimum cell size of 1.88133 nm
>
>
> I understand the problem -- the system wants to assign one core to each
> grid cell, but there are only 3x3x3 = 27 cells. I don't know what I can do
> to fix this problem. The system is a solvated protein bound to a ligand in
> a ~15 nm box, and it has > 40,000 atoms. I have tried changing the
> lincs-order and fourier-spacing to no avail.
>
> I was able to get the system to run by adding the following flags to mdrun:
>   -rdd 1.2 -dds 0.9
>
> But when I did this, the force imbalance went to > 140%, and 10-30% of the
> cpu time was lost due to load imbalance.
>
> Can someone suggest how I could edit my .mdp file to increase the number of
> allowed cells?
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] periodic-molecules = yes and movie

2018-08-07 Thread Alex
Dear all,
In the simulation of a system containing a slab, I unfortunately did not use
"periodic-molecules = yes", and hence I cannot make a good movie out of the
trajectory using VMD: the slab moves during the movie. The simulation is
around 300 ns and I do not want to just waste the CPU time, so I wonder if
there is still any way to make a good movie out of that trajectory?

Thank you.
Regards,
Alex
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Too few cells to run on multiple cores

2018-08-07 Thread Adrian Devitt-Lee
Hi,

I'm having an issue using mdrun in parallel on 48 cores. I'm trying to
figure out which options I can include in the .mdp file to increase the
number of cells in my system. The full error message is:

Initializing Domain Decomposition on 48 ranks
Dynamic load balancing: auto
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 1.368 nm, LJC Pairs NB, atoms 2123 2152
  multi-body bonded interactions: 0.428 nm, Proper Dih., atoms 1105 1113
*Minimum cell size due to bonded interactions: 1.505 nm*
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 0.219 nm
Estimated maximum distance required for P-LINCS: 0.219 nm
Guess for relative PME load: 0.65
Using 0 separate PME ranks, as guessed by mdrun
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25

*Optimizing the DD grid for 48 cells with a minimum initial size of 1.881 nm*
*The maximum allowed number of cells is: X 3 Y 3 Z 3*

---
Program mdrun_mpi, VERSION 5.1
Source code file:
/work/y07/y07/gmx/5.1-phase2/source/src/gromacs/domdec/domdec.cpp, line:
6969

Fatal error:
There is no domain decomposition for 48 ranks that is compatible with the
given box and a minimum cell size of 1.88133 nm


I understand the problem -- the system wants to assign one core to each
grid cell, but there are only 3x3x3 = 27 cells. I don't know what I can do
to fix this problem. The system is a solvated protein bound to a ligand in
a ~15 nm box, and it has > 40,000 atoms. I have tried changing the
lincs-order and fourier-spacing to no avail.

I was able to get the system to run by adding the following flags to mdrun:
  -rdd 1.2 -dds 0.9

But when I did this, the force imbalance went to > 140%, and 10-30% of the
cpu time was lost due to load imbalance.

Can someone suggest how I could edit my .mdp file to increase the number of
allowed cells?
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.