Re: [gmx-users] the aggregation of a peptide using gromacs

2018-12-09 Thread Mark Abraham
Hi,

There is likely no specific tutorial for that, so you will probably need
to search the literature for such studies, and perhaps try to replicate
an older study, which should be easier to run these days on modern hardware
and software.

Mark

On Sun, Dec 9, 2018 at 5:13 AM marzieh dehghan 
wrote:

> Dear all
> I want to study the aggregation of a peptide using molecular dynamics
> simulation. Please let me know whether there is a tutorial for assessing the
> aggregation of such a peptide.
>
> I'm looking forward to getting your answer.
> Thanks a lot.
> Best wishes
> --
>
>
>
>
> *Marzieh Dehghan, PhD of Biochemistry, Institute of Biochemistry and Biophysics
> (IBB), University of Tehran, Tehran, Iran.*


Re: [gmx-users] mdrun-adjusted cutoffs?!

2018-12-09 Thread Mark Abraham
Hi,

There are two ways to specify the long-range grid requirements (either
fourierspacing, or fourier-nx and friends; see
http://manual.gromacs.org/documentation/current/user-guide/mdp-options.html#ewald).
The tuning will override fourierspacing in the same way that it does
rcoulomb. I assume it does not override a manual specification of the grid
dimensions, but I haven't tried it. I have noted to the dev team that we
should check and document that.
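For illustration, the two ways look like this in the mdp file (the values here
are only examples, not recommendations):

; option 1: let grompp derive the grid dimensions from a maximum spacing
fourierspacing = 0.12     ; nm

; option 2: fix the grid dimensions explicitly
fourier-nx = 64
fourier-ny = 64
fourier-nz = 64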

Mark

On Sun, Dec 9, 2018 at 7:46 AM Alex  wrote:

> That's very valuable info, thank you.
>
> By the way, all of our production mdp files have something like
> fourierspacing = 0.135, the origin of which is long gone from my memory.
> Does this imply that despite PME tuning our simulations use a fixed
> Fourier grid that ends up in suboptimal performance, or does the tuning
> override it?
>
> Alex
>
> On 12/8/2018 1:34 PM, Mark Abraham wrote:
> > Hi,
> >
> > Note that that will compare runs of differently accurate electrostatic
> > approximation. For iso-accurate comparisons, one must also scale the
> > Fourier grid by the same factor (per the manual section on PME
> autotuning).
> > Of course, if you start from the smallest rcoulomb and use a fixed grid,
> > then the comparisons will be of increasing accuracy, which might be
> enough
> > for the desired conclusion.
> >
> > Mark
> >
> > On Sat, 8 Dec 2018 at 02:05, Szilárd Páll wrote:
> >> BTW if you have doubts and still want to make sure that the mdrun PME
> >> tuning does not affect your observables, you can always do a few runs
> >> with a fixed rcoulomb > rvdw set in the mdp file (with -notunepme
> >> passed on the command line for consistency) and compare what you get
> >> with the rcoulomb = rvdw case. As Mark said, you should not observe a
> >> difference.
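A minimal sketch of that comparison (the cutoff values and run names below are
only illustrative):

; run A: rcoulomb equal to rvdw
rcoulomb = 1.0
rvdw     = 1.0

; run B: fixed, larger rcoulomb
rcoulomb = 1.2
rvdw     = 1.0

gmx mdrun -deffnm runA -notunepme
gmx mdrun -deffnm runB -notunepme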
> >>
> >> --
> >> Szilárd
> >> On Fri, Dec 7, 2018 at 7:10 AM Alex  wrote:
> >>> I think that answers my question, thanks. :)
> >>>
> >>> On 12/6/2018 9:38 PM, Mark Abraham wrote:
>  Hi,
> 
>  Zero, because we are shifting between equivalent ways to compute the
> >> total
>  electrostatic interaction.
> 
>  You can turn off the PME tuning with mdrun -notunepme, but unless
> >> there's a
>  bug, all that will do is force it to run slower than optimal.
> >> Obviously you
>  could try it and see that the FE of hydration does not change with the
>  model, so long as you have a reproducible protocol.
> 
>  Mark
> 
> 
> On Fri, 7 Dec 2018 at 06:39, Alex wrote:
> > I'm not ignoring the long-range contribution, but yes, most of the
> > effects I am talking about are short-range. What I am asking is how
> >> much
> > the free energy of ionic hydration for K+ changes in, say, a system
> >> that
> > contains KCl in bulk water -- with and without autotuning. Hence also
> > the earlier question about being able to turn it off at least
> >> temporarily.
> > Alex
> >
> > On 12/6/2018 5:42 AM, Mark Abraham wrote:
> >> Hi,
> >>
> >> It sounds like you are only looking at the short-ranged component of
> >> the
> >> electrostatic interaction, and thus ignoring the way the long range
> >> component also changes. Is the validity of the PME auto tuning the
> > question
> >> at hand?
> >>
> >> Mark
> >>
> >> On Thu, 6 Dec 2018 at 21:09, Alex wrote:
> >>> More specifically, electrostatics. For the stuff I'm talking about,
> >> the
> >>> LJ portion contributes ~20% at the most. When the change in
> >> energetics
> >>> is a statistically persistent value of order kT (of which about 20%
> >>> comes from LJ), the quantity of interest (~exp(E/kT)) changes by a
> >>> factor of 2.72. Again, this is a fairly special case, but I can
> >> easily
> >>> envision someone doing ion permeation across KcsA and the currents
> >> would
> >>> be similarly affected. For instance, when I set all cutoffs at 1.0
> >> nm,
> >>> mdrun ends up using something like 1.1 nm for electrostatics, at
> >> least
> >>> that's what I see at the top of the log.
> >>>
> >>> I agree with what you said about vdW and it can be totally
> >> arbitrary and
> >>> then often requires crutches elsewhere, but my question was whether
> >> for
> >>> very sensitive quantities mdrun ends up utilizing the forcefield as
> >> it
> >>> was designed and not in a "slightly off" regime. Basically, you
> >> asked me
> >>> to describe our case and why I think there may be a slight issue,
> so
> >>> there it is.
> >>>
> >>> Alex
> >>>
> >>> On 12/5/2018 10:34 PM, Mark Abraham wrote:
>  Hi,
> 
>  One needs to be more specific than NB. There is evidence that VDW
> > cutoffs
>  of traditional lengths cause approximation errors that cause
> > compensating
>  parameterization errors elsewhere; those effects get worse if the
> > system
> >>> is
>  inhomogeneous. 

Re: [gmx-users] User Specified non-bonded potentials- error

2018-12-09 Thread Mark Abraham
Hi,

There's nothing wrong with using a deprecated feature. We're just letting
you know that you shouldn't expect the group scheme to be available in the
future. But when we have that functionality available in the Verlet scheme,
you'll be able to do the work in an equivalent way (and probably better).
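For reference, a minimal sketch of the mdp settings the group scheme accepts
for tabulated non-bonded interactions (option names as I recall them from the
user guide; the cutoffs are only illustrative):

cutoff-scheme = group   ; deprecated, but still accepted in the 2018.x series
coulombtype   = user
vdwtype       = user
rcoulomb      = 1.0
rvdw          = 1.0
; table.xvg supplies the tabulated functions; if you need different tables for
; particular pairs of groups, look at energygrps and energygrp-table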

Mark

On Mon, Dec 10, 2018 at 3:43 PM Raj kamal  wrote:

> Dear GROMACS experts,
> I am working on non-bonded potentials for atoms using the Buckingham form. I
> prepared an .mdp file where I specify
> vdwtype = user, coulombtype = user,
> cutoff-scheme = Verlet.
> In the topology file:
> [ defaults ]
> nbfunc  comb-rule
> 2   2
> [ atomtypes ]
> ..
> [ nonbond_params ]
> ...
> table.xvg
> which has 7 columns of values.
> When I run
> gmx grompp -f md.mdp ..
> it shows the errors:
> 1. With Verlet lists only cut-off and PME LJ interactions are supported
> 2. With Verlet lists only cut-off, reaction-field, PME and Ewald
> electrostatics are supported
>
> When I use cutoff-scheme = group
> it shows the following error:
> The group cutoff scheme is deprecated in GROMACS 5.0 and will be removed in
> a future release. ...and so on.
>
>
> Please advise. Thanks in advance.
>
> --
> Best regards,
> A. Rajkamal


Re: [gmx-users] damping coefficient in the Langevin thermostat ?

2018-12-09 Thread Mark Abraham
Hi,

The defaults are all listed in the mdp options section of the user guide;
please look there for what you want!
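From memory of that page (so do double-check): with the sd integrator there is
no single default damping coefficient; the inverse friction constant is set per
temperature-coupling group via tau-t, e.g.

integrator = sd
tc-grps    = System
tau-t      = 1.0      ; ps, inverse friction constant (illustrative value)
ref-t      = 300      ; K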

Mark

On Mon, Dec 10, 2018 at 2:58 AM Nikhil Maroli  wrote:

> What is the default value of the damping coefficient in the Langevin
> thermostat?
>
> --
> Regards,
> Nikhil Maroli


[gmx-users] User Specified non-bonded potentials- error

2018-12-09 Thread Raj kamal
Dear GROMACS experts,
I am working on non-bonded potentials for atoms using the Buckingham form. I
prepared an .mdp file where I specify
vdwtype = user, coulombtype = user,
cutoff-scheme = Verlet.
In the topology file:
[ defaults ]
nbfunc  comb-rule
2   2
[ atomtypes ]
..
[ nonbond_params ]
...
table.xvg
which has 7 columns of values.
When I run
gmx grompp -f md.mdp ..
it shows the errors:
1. With Verlet lists only cut-off and PME LJ interactions are supported
2. With Verlet lists only cut-off, reaction-field, PME and Ewald
electrostatics are supported

When I use cutoff-scheme = group
it shows the following error:
The group cutoff scheme is deprecated in GROMACS 5.0 and will be removed in
a future release. ...and so on.


Please advise. Thanks in advance.

-- 
Best regards,
A. Rajkamal


Re: [gmx-users] using dual CPU's

2018-12-09 Thread Mark Abraham
Hi,

Your CPUs are pretty old and few, and your system is rather small, so I
would not expect to get a useful speedup from adding a second GPU to a
setup that may have already been limited by the CPU. Run 5000 steps with
one GPU and look at the reporting at the end of the log file (or upload it
to a file-sharing service and share the link here) - it may already be
telling you that the single GPU is not well utilized.
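Something like the following would do for that test (a sketch using your file
names; adjust the thread count to your machine):

gmx mdrun -deffnm SR.sys.nvt -nsteps 5000 -ntmpi 1 -ntomp 12 -gpu_id 0 -pin on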

Mark

On Mon, Dec 10, 2018 at 9:32 AM paul buscemi  wrote:

> Dear Users,
>
> I have good luck using a single GPU with the basic setup. However, in
> going from one GTX 1060 to a system with two, for a 50,000-atom system, the
> rate decreases from 10 ns/day to 5 or worse. The system models a ligand,
> solvent (water) and a lipid membrane.
> The CPU is a 6-core Intel i7 970 (12 threads), with a 750 W PSU and 16 GB of RAM.
> With the basic command "mdrun" I get:
> Back Off! I just backed up sys.nvt.log to ./#.sys.nvt.log.10#
> Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> Changing nstlist from 10 to 100, rlist from 1 to 1
>
> Using 2 MPI threads
> Using 6 OpenMP threads per tMPI thread
>
> On host I7 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> PP:0,PP:1
>
> Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.10#
> Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.10#
> NOTE: DLB will not turn on during the first phase of PME tuning
> starting mdrun 'SR-TA'
> 10 steps, 100.0 ps.
> and ending with ^C
>
> Received the INT signal, stopping within 200 steps
>
> Dynamic load balancing report:
> DLB was locked at the end of the run due to unfinished PP-PME balancing.
> Average load imbalance: 0.7%.
> The balanceable part of the MD step is 46%, load imbalance is computed
> from this.
> Part of the total run time spent waiting due to load imbalance: 0.3%.
>
> Core t (s) Wall t (s) (%)
> Time: 543.475 45.290 1200.0
> (ns/day) (hour/ns)
> Performance: 1.719 13.963 before DLB is turned on
>
> Very poor performance. I have been following - or trying to follow -
> "Performance Tuning and Optimization of GROMACS" by M. Abraham and R. Apostolov
> (2016), but have not yet cracked it.
> 
> gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 3 -gpu_id 01 -pin on.
>
> Back Off! I just backed up SR.sys.nvt.log to ./#SR.sys.nvt.log.13#
> Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> Changing nstlist from 10 to 100, rlist from 1 to 1
>
> Using 2 MPI threads
> Using 3 OpenMP threads per tMPI thread
>
> On host I7 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> PP:0,PP:1
>
> Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.13#
> Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.13#
> NOTE: DLB will not turn on during the first phase of PME tuning
> starting mdrun 'SR-TA'
> 10 steps, 100.0 ps.
>
> NOTE: DLB can now turn on, when beneficial
> ^C
>
> Received the INT signal, stopping within 200 steps
>
> Dynamic load balancing report:
> DLB was off during the run due to low measured imbalance.
> Average load imbalance: 0.7%.
> The balanceable part of the MD step is 46%, load imbalance is computed
> from this.
> Part of the total run time spent waiting due to load imbalance: 0.3%.
>
> Core t (s) Wall t (s) (%)
> Time: 953.837 158.973 600.0
> (ns/day) (hour/ns)
> Performance: 2.935 8.176
>
> 
> the beginning of the log file is
> GROMACS version: 2018.3
> Precision: single
> Memory model: 64 bit
> MPI library: thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support: CUDA
> SIMD instructions: SSE4.1
> FFT library: fftw-3.3.8-sse2
> RDTSCP usage: enabled
> TNG support: enabled
> Hwloc support: disabled
> Tracing support: disabled
> Built on: 2018-10-19 21:26:38
> Built by: pb@Q4 [CMAKE]
> Build OS/arch: Linux 4.15.0-20-generic x86_64
> Build CPU vendor: Intel
> Build CPU brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
> Build CPU family: 6 Model: 44 Stepping: 2
> Build CPU features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr
> nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
> sse4.2 ssse3
> C compiler: /usr/bin/gcc-6 GNU 6.4.0
> C compiler flags: -msse4.1 -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler: /usr/bin/g++-6 GNU 6.4.0
> C++ compiler flags: -msse4.1 -std=c++11 -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> CUDA compiler: /usr/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
> driver;Copyright (c) 2005-2017 NVIDIA Corporation;Built on
> Fri_Nov__3_21:07:56_CDT_2017;Cuda compilation tools, release 9.1, V9.1.85
> CUDA compiler
> 

Re: [gmx-users] gmx select with coordinates

2018-12-09 Thread Shan Jayasinghe
Hi Prof. Dan,

Thank you very much for the e-mail. How do we specify the coordinates we
want in the VMD Graphical Representations menu?

Thank you.

On Wed, Oct 10, 2018 at 6:53 AM Dan Gil  wrote:

> Hi Shan,
>
> Sorry it's been a while - I don't check this email too often. Did you
> figure the problem out?
>
> I use the Graphical Representations menu of VMD, although I am sure there
> is a way to do it in the console too.
>
> On Thu, Sep 27, 2018 at 9:35 PM Shan Jayasinghe <
> shanjayasinghe2...@gmail.com> wrote:
>
> > Dear Prof. Dan,
> >
> > Thank you very much for the suggestion. However, when I gave the command
> > gmx select -f run05.xtc -s run05.tpr -b 39 -e 40 -on index.ndx in the
> > VMD TkConsole, I didn't get any output.
> >
> > I didn't get a cursor on the next line to type x > 45 and x < 90 and y > 90
> > and y. What could be the reason for this? I would appreciate it if you can
> reply to me.
> >
> > Thank you.
> >
> > On Fri, Sep 28, 2018 at 12:00 AM Dan Gil  wrote:
> >
> > > Hi,
> > >
> > > I believe you are trying to select atoms/particles that are in an
> > > infinitely tall box with the vertices (45, 90), (45, 125), (90, 90) and
> > > (90, 125)?
> > >
> > > Gmx select uses commands that are similar in syntax to a software
> called
> > > VMD. So I like to use VMD to figure out what I need to give to gmx
> select
> > > in order to get the selections I want.
> > >
> > > For example, I think for what you want, I would do:
> > >
> > > gmx select -f run05.xtc -s run05.tpr -b 39 -e 40 -on index.ndx
> > > [Press Enter]
> > > x > 45 and x < 90 and y > 90 and y < 125 [Press Enter]
> > >
> > > If you want to include it in a script without manual user input:
> > >
> > > echo 'x > 45 and x < 90 and y > 90 and y < 125' | gmx select -f run05.xtc
> > > -s
> > > run05.tpr -b 39 -e 40 -on index.ndx
> > >
> > > VMD is great! It's also free software if you haven't tried using it
> yet.
> > > The only thing you gotta watch out for is that VMD uses Angstroms while
> > GMX
> > > uses nm.
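For example, if the gmx select box above were given in nm, the equivalent text
to type into the VMD Graphical Representations selection box (or to use with
atomselect in the Tk console) would be in Angstroms - a sketch, assuming a
plain 10x nm-to-Angstrom conversion:

x > 450 and x < 900 and y > 900 and y < 1250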
> > >
> > > Dan
> > >
> > > On Thu, Sep 27, 2018 at 7:51 AM Shan Jayasinghe <
> > > shanjayasinghe2...@gmail.com> wrote:
> > >
> > > > Dear Gromacs users,
> > > >
> > > > I want to make an index file with molecules in a particular area. I
> > have
> > > > four x,y coordinates [(45, 90), (45, 125), (90, 90) and (90, 125)]
> > which
> > > > define an area of the system. How can I do it with gmx select?
> > > >
> > > > Can anyone help me?
> > > >
> > > > Thank you.
> > > >
> > > > On Thu, Sep 27, 2018 at 4:14 PM Shan Jayasinghe <
> > > > shanjayasinghe2...@gmail.com> wrote:
> > > >
> > > > >
> > > > > Dear Gromacs users,
> > > > >
> > > > > I want to make an index file with particular x and y coordinates.
> > > There
> > > > > is no restriction on the z coordinates. How can I do it with
> > gmx select? I
> > > > > already tried the following command. However, it seems I don't
> > get
> > > > the
> > > > > result I want.
> > > > >
> > > > > gmx select '[45, 90] and [45, 125] and [90, 90] and [90, 125]' -f
> > > > > run05.xtc -s run05.tpr -b 39 -e 40 -on index.ndx
> > > > >
> > > > > Can anyone help me?
> > > > >
> > > > > Thank you.
> > > > >
> > > >
> > > >
> > > > --
> > > > Best Regards
> > > > Shan Jayasinghe
> >
> >
> > --
> > Best Regards
> > Shan Jayasinghe


-- 
Best Regards

[gmx-users] using dual CPU's

2018-12-09 Thread paul buscemi
Dear Users,

I have good luck using a single GPU with the basic setup. However, in going
from one GTX 1060 to a system with two, for a 50,000-atom system, the rate
decreases from 10 ns/day to 5 or worse. The system models a ligand, solvent
(water) and a lipid membrane.
The CPU is a 6-core Intel i7 970 (12 threads), with a 750 W PSU and 16 GB of RAM.
With the basic command "mdrun" I get:
Back Off! I just backed up sys.nvt.log to ./#.sys.nvt.log.10#
Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1

Using 2 MPI threads
Using 6 OpenMP threads per tMPI thread

On host I7 2 GPUs auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:1

Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.10#
Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.10#
NOTE: DLB will not turn on during the first phase of PME tuning
starting mdrun 'SR-TA'
10 steps, 100.0 ps.
and ending with ^C

Received the INT signal, stopping within 200 steps

Dynamic load balancing report:
DLB was locked at the end of the run due to unfinished PP-PME balancing.
Average load imbalance: 0.7%.
The balanceable part of the MD step is 46%, load imbalance is computed from 
this.
Part of the total run time spent waiting due to load imbalance: 0.3%.

Core t (s) Wall t (s) (%)
Time: 543.475 45.290 1200.0
(ns/day) (hour/ns)
Performance: 1.719 13.963 before DLB is turned on

Very poor performance. I have been following - or trying to follow -
"Performance Tuning and Optimization of GROMACS" by M. Abraham and R. Apostolov
(2016), but have not yet cracked it.

gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 3 -gpu_id 01 -pin on.

Back Off! I just backed up SR.sys.nvt.log to ./#SR.sys.nvt.log.13#
Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1

Using 2 MPI threads
Using 3 OpenMP threads per tMPI thread

On host I7 2 GPUs auto-selected for this run.
Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
PP:0,PP:1

Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.13#
Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.13#
NOTE: DLB will not turn on during the first phase of PME tuning
starting mdrun 'SR-TA'
10 steps, 100.0 ps.

NOTE: DLB can now turn on, when beneficial
^C

Received the INT signal, stopping within 200 steps

Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 0.7%.
The balanceable part of the MD step is 46%, load imbalance is computed from 
this.
Part of the total run time spent waiting due to load imbalance: 0.3%.

Core t (s) Wall t (s) (%)
Time: 953.837 158.973 600.0
(ns/day) (hour/ns)
Performance: 2.935 8.176


the beginning of the log file is
GROMACS version: 2018.3
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: SSE4.1
FFT library: fftw-3.3.8-sse2
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
Built on: 2018-10-19 21:26:38
Built by: pb@Q4 [CMAKE]
Build OS/arch: Linux 4.15.0-20-generic x86_64
Build CPU vendor: Intel
Build CPU brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
Build CPU family: 6 Model: 44 Stepping: 2
Build CPU features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr 
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 
sse4.2 ssse3
C compiler: /usr/bin/gcc-6 GNU 6.4.0
C compiler flags: -msse4.1 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast
C++ compiler: /usr/bin/g++-6 GNU 6.4.0
C++ compiler flags: -msse4.1 -std=c++11 -O3 -DNDEBUG -funroll-all-loops 
-fexcess-precision=fast
CUDA compiler: /usr/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright 
(c) 2005-2017 NVIDIA Corporation;Built on Fri_Nov__3_21:07:56_CDT_2017;Cuda 
compilation tools, release 9.1, V9.1.85
CUDA compiler 
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;-D_FORCE_INLINES;;
 ;-msse4.1;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver: 9.10
CUDA runtime: 9.10

Running on 1 node with total 12 cores, 12 logical cores, 2 compatible GPUs
Hardware detected:
CPU info:
Vendor: Intel
Brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
Family: 6 Model: 44 Stepping: 2
Features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr nonstop_tsc pcid 
pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
Hardware topology: Only logical processor count
GPU info:
Number of GPUs detected: 2
#0: NVIDIA GeForce GTX 1060 6GB, compute cap.: 6.1, ECC: 

[gmx-users] damping coefficient in the Langevin thermostat ?

2018-12-09 Thread Nikhil Maroli
What is the default value of the damping coefficient in the Langevin
thermostat?

-- 
Regards,
Nikhil Maroli