Re: [gmx-users] using dual CPU's

2018-12-10 Thread paul buscemi


> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
> 
> 
> Mark, attached are the tail ends of three log files for
> the same system, run on an AMD 8-core/16-thread 2700X, 16 GB RAM.
> In summary:
> for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) the rates are 12.0, 8.8, and
> 6.0 ns/day.
> Clearly, I do not have a handle on using 2 GPU's
> 
> Thank you again, and I'll keep probing the web for more understanding.
> I've probably sent too much of the log, let me know if this is the case.
Better way to share files - where is that friend ?
> 
> Paul
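
For reference, the launch lines behind those ntmpi:ntomp ratios look
roughly like this (a sketch only, reusing the -deffnm from later in this
thread; the file name and -pin on are incidental):

  gmx mdrun -deffnm SR.sys.nvt -ntmpi 1 -ntomp 16 -pin on  # 1 rank x 16 threads (12.0 ns/day)
  gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 8 -pin on   # 2 ranks x 8 threads  (8.8 ns/day)
  gmx mdrun -deffnm SR.sys.nvt                             # automatic selection  (6.0 ns/day here)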

Re: [gmx-users] Does adding artificial mass cause problems for friction studies?

2018-12-10 Thread Dallas Warren
Look into heavy hydrogens (4x mass), which is one way to increase the time
step, since the vibrational frequency of bonds involving hydrogen is
decreased by the increase in mass. There should be a number of references
out there looking directly at the effect the mass increase has.
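
For example, a minimal sketch of that route (file names are placeholders;
-heavyh is the pdb2gmx option that makes hydrogens 4x heavier):

  gmx pdb2gmx -f protein.pdb -o conf.gro -p topol.top -heavyh
  # then, in the .mdp, constrain bonds to hydrogen and, after testing,
  # raise the time step, e.g.
  #   constraints = h-bonds
  #   dt          = 0.004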

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Tue, 11 Dec 2018 at 09:04, James  wrote:

> Hi,
>
> When studying friction it can be inconvenient for a structure to have low
> mass. If you push on the structure hard enough to overcome static friction,
> it then accelerates so rapidly that the speeds are unrealistic.
>
> One way to overcome this problem is to increase the structure's mass. But,
> adding many atoms increases simulation time. A work-around might be to
> artificially increase the masses of certain atoms. For example, tell
> GROMACS that H weighs 1,000 instead of 1.
>
> But, artificial masses raise a variety of questions about the accuracy of
> the results. I can't find anything on this in the literature. Does anyone
> know about this, or know of relevant studies?
>
> Sincerely,
> James Ryley


[gmx-users] Gmx gangle

2018-12-10 Thread rose rahmani
Hi,

I don't really understand how gmx gangle works.

I want to calculate the angle between an amino acid ring and a surface
during the simulation. I made an index group for the 6 ring atoms
(a_CD1_CD2_CE1_CE2_CZ_CG) and one for two atoms of the surface. The
surface is in the xy plane and the amino acid sits at different Z
distances.


I assumed the 6 ring atoms define a plane and the two surface atoms define
a vector (along Y), and I expected gmx gangle to calculate the average
angle between this plane and that vector over the simulation.

 gmx gangle -f umbrella36_3.xtc -s umbrella36_3.tpr -n index.ndx -oav
angz.xvg -g1 plane -g2 vector -group1 -group2

Available static index groups:
 Group  0 "System" (4331 atoms)
 Group  1 "Other" (760 atoms)
 Group  2 "ZnS" (560 atoms)
 Group  3 "WAL" (200 atoms)
 Group  4 "NA" (5 atoms)
 Group  5 "CL" (5 atoms)
 Group  6 "Protein" (33 atoms)
 Group  7 "Protein-H" (17 atoms)
 Group  8 "C-alpha" (1 atoms)
 Group  9 "Backbone" (5 atoms)
 Group 10 "MainChain" (7 atoms)
 Group 11 "MainChain+Cb" (8 atoms)
 Group 12 "MainChain+H" (9 atoms)
 Group 13 "SideChain" (24 atoms)
 Group 14 "SideChain-H" (10 atoms)
 Group 15 "Prot-Masses" (33 atoms)
 Group 16 "non-Protein" (4298 atoms)
 Group 17 "Water" (3528 atoms)
 Group 18 "SOL" (3528 atoms)
 Group 19 "non-Water" (803 atoms)
 Group 20 "Ion" (10 atoms)
 Group 21 "ZnS" (560 atoms)
 Group 22 "WAL" (200 atoms)
 Group 23 "NA" (5 atoms)
 Group 24 "CL" (5 atoms)
 Group 25 "Water_and_ions" (3538 atoms)
 Group 26 "OW" (1176 atoms)
 Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
 Group 28 "a_320_302_319_301_318_311" (6 atoms)
 Group 29 "a_301_302" (2 atoms)
Specify any number of selections for option 'group1'
(First analysis/vector selection):
(one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> 27
Selection '27' parsed
> 27
Selection '27' parsed
> Available static index groups:
 Group  0 "System" (4331 atoms)
 Group  1 "Other" (760 atoms)
 Group  2 "ZnS" (560 atoms)
 Group  3 "WAL" (200 atoms)
 Group  4 "NA" (5 atoms)
 Group  5 "CL" (5 atoms)
 Group  6 "Protein" (33 atoms)
 Group  7 "Protein-H" (17 atoms)
 Group  8 "C-alpha" (1 atoms)
 Group  9 "Backbone" (5 atoms)
 Group 10 "MainChain" (7 atoms)
 Group 11 "MainChain+Cb" (8 atoms)
 Group 12 "MainChain+H" (9 atoms)
 Group 13 "SideChain" (24 atoms)
 Group 14 "SideChain-H" (10 atoms)
 Group 15 "Prot-Masses" (33 atoms)
 Group 16 "non-Protein" (4298 atoms)
 Group 17 "Water" (3528 atoms)
 Group 18 "SOL" (3528 atoms)
 Group 19 "non-Water" (803 atoms)
 Group 20 "Ion" (10 atoms)
 Group 21 "ZnS" (560 atoms)
 Group 22 "WAL" (200 atoms)
 Group 23 "NA" (5 atoms)
 Group 24 "CL" (5 atoms)
 Group 25 "Water_and_ions" (3538 atoms)
 Group 26 "OW" (1176 atoms)
 Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
 Group 28 "a_320_302_319_301_318_311" (6 atoms)
 Group 29 "a_301_302" (2 atoms)
Specify any number of selections for option 'group2'
(Second analysis/vector selection):
(one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> 29
Selection '29' parsed
> 29
Selection '29' parsed
> Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
Reading frame   0 time0.000
Back Off! I just backed up angz.xvg to ./#angz.xvg.1#
Last frame  4 time 4000.000
Analyzed 40001 frames, last time 4000.000

Am I right? I don't think so. :(

Would you please help me?
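
For what it's worth, a minimal sketch of a selection-based invocation
(assuming GROMACS selection syntax, that gmx gangle builds each plane from
three positions and represents it by its normal, and that the surface
normal lies along z; the group name is the one from the index listing
above):

  gmx gangle -f umbrella36_3.xtc -s umbrella36_3.tpr -n index.ndx \
             -g1 plane -group1 'group "CE1_CZ_CD1_CG_CE2_CD2" and name CG CE1 CE2' \
             -g2 z \
             -oav angz.xvg

The same idea with -g2 vector and -group2 'group "a_301_302"' would give
the angle against the surface vector instead of the z axis.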

Re: [gmx-users] problem with introducing a new structure in gromacs, please help if anybody knows

2018-12-10 Thread Dallas Warren
The overall charge of an entire molecule has to be an integer value;
anything other than that is physically impossible. So you have to fix your
charges so that is the case - something must have gone wrong when you
pulled/calculated the charges.
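
In practice that usually means spreading the rounding error of the DFT
charges over the ligand atoms so that the charge column of the .rtp entry
sums exactly to the formal charge. A schematic entry (names, types and
values are placeholders, not taken from the poster's system):

  [ LIG ]                 ; one of the -1 ligands
   [ atoms ]
  ;  name   type   charge    cgnr
     C1     CR    -0.0750    1
     O1     OC    -0.4625    1
     O2     OC    -0.4625    1
     ...                        ; remaining atoms, adjusted so the column sums to exactly -1.000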

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Tue, 11 Dec 2018 at 05:05, banijamali_fs  wrote:

> Hi there,
>
> I'm working on MOFs (metal organic frameworks) with gromacs. Before
> starting the simulation I have to introduce my molecule in the
> aminoacids.rtp file, which I did. My problem is that I have 4 repetitive
> structures (ligands) and 2 metals: the whole charge of the molecule is
> zero, but each ligand has a charge of -1 and each of the two metals has a
> charge of +2. When I define one of the repetitive ligands in the
> aminoacids.rtp file its total charge should be -1 to get proper results,
> but when I put in the partial charges of every atom from the DFT
> calculations, the sum is not -1. So I want to know: first, is this a
> problem - will it give errors in the later steps of the simulation - or
> is it not a problem? If anybody knows what I should do, please guide me.


Re: [gmx-users] using dual CPU's

2018-12-10 Thread paul buscemi
Mark,

I may have misread the ppt on optimization, but I did experiment with
variations of ntomp and ntmpi, so the run that used fewer than six threads
was a 2 x 3 combination. Tonight I will put both

This is the last part of the log from a 2 GPU setup,
using gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 6 -gpu_id 1 -pin on, run
on the i7-970 CPU.

NOTE: DLB can now turn on, when beneficial

<== ### ==>
< A V E R A G E S >
<== ### ==>

Statistics over 2401 steps using 25 frames
Energies (kJ/mol)
Angle G96Angle Proper Dih. Improper Dih. LJ-14
9.21440e+05 1.96052e+04 6.53857e+04 2.23128e+02 8.65164e+04
Coulomb-14 LJ (SR) Disper. corr. Coulomb (SR) Coul. recip.
-2.84582e+07 -1.44895e+05 -2.04658e+03 1.34455e+07 5.03949e+04
Position Rest. Potential Kinetic En. Total Energy Temperature
3.44645e+01 -1.40160e+07 1.91196e+05 -1.38249e+07 3.04725e+02
Pres. DC (bar) Pressure (bar) Constr. rmsd
-2.88685e+00 3.64550e+02 0.0e+00

Total Virial (kJ/mol)
-8.80572e+04 -5.06693e+03 6.90580e+02
-5.06777e+03 -6.31180e+04 -5.32400e+03
6.90136e+02 -5.32396e+03 -5.27950e+04

Pressure (bar)
4.14166e+02 1.39915e+01 -1.79346e+00
1.39938e+01 3.54006e+02 1.44453e+01
-1.79223e+00 1.44452e+01 3.25476e+02

T-PDMS T-VMOS
2.98272e+02 6.83205e+02
P P - P M E L O A D B A L A N C I N G

NOTE: The PP/PME load balancing was limited by the maximum allowed grid scaling,
you might not have reached a good load balance.

PP/PME load balancing changed the cut-off and PME settings:
particle-particle PME
rcoulomb rlist grid spacing 1/beta
initial 1.000 nm 1.000 nm 160 160 128 0.156 nm 0.320 nm
final 1.628 nm 1.628 nm 96 96 80 0.260 nm 0.521 nm
cost-ratio 4.31 0.23
(note that these numbers concern only part of the total PP and PME load)
M E G A - F L O P S A C C O U N T I N G
NB=Group-cutoff nonbonded kernels NxN=N-by-N cluster Verlet kernels
RF=Reaction-Field VdW=Van der Waals QSTab=quadratic-spline table
W3=SPC/TIP3p W4=TIP4p (single or pairs)
V=Potential and force V=Potential only F=Force only

Computing: M-Number M-Flops % Flops
-
Pair Search distance check 225.527520 2029.748 0.0
NxN Ewald Elec. + LJ [F] 255071.893824 16834744.992 91.2
NxN Ewald Elec. + LJ [V] 2710.128064 289983.703 1.6
1,4 nonbonded interactions 432.540150 38928.613 0.2
Calc Weights 543.250260 19557.009 0.1
Spread Q Bspline 11589.338880 23178.678 0.1
Gather F Bspline 11589.338880 69536.033 0.4
3D-FFT 129115.579906 1032924.639 5.6
Solve PME 31.785216 2034.254 0.0
Reset In Box 1.885500 5.656 0.0
CG-CoM 1.960920 5.883 0.0
Angles 342.430620 57528.344 0.3
Propers 72.102030 16511.365 0.1
Impropers 0.432180 89.893 0.0
Pos. Restr. 3.457440 172.872 0.0
Virial 1.887750 33.979 0.0
Update 181.083420 5613.586 0.0
Stop-CM 1.960920 19.609 0.0
Calc-Ekin 3.771000 101.817 0.0
Lincs 375.988360 22559.302 0.1
Lincs-Mat 8530.590144 34122.361 0.2
Constraint-V 751.820250 6014.562 0.0
Constraint-Vir 1.956622 46.959 0.0
-
Total 18455743.858 100.0
-
D O M A I N D E C O M P O S I T I O N S T A T I S T I C S
av. #atoms communicated per step for force: 2 x 6018.1
av. #atoms communicated per step for LINCS: 2 x 3015.7
Dynamic load balancing report:
DLB was off during the run due to low measured imbalance.
Average load imbalance: 0.9%.
The balanceable part of the MD step is 47%, load imbalance is computed from 
this.
Part of the total run time spent waiting due to load imbalance: 0.4%.

R E A L C Y C L E A N D T I M E A C C O U N T I N G
On 2 MPI ranks, each using 6 OpenMP threads
Computing: Num Num Call Wall time Giga-Cycles
Ranks Threads Count (s) total sum %
-
Domain decomp. 2 6 25 0.627 24.367 0.8
DD comm. load 2 6 2 0.000 0.004 0.0
Neighbor search 2 6 25 0.160 6.206 0.2
Launch GPU ops. 2 6 4802 0.516 20.048 0.7
Comm. coord. 2 6 2376 0.272 10.563 0.4
Force 2 6 2401 3.714 144.331 4.9
Wait + Comm. F 2 6 2401 0.210 8.173 0.3
PME mesh 2 6 2401 49.851 1937.315 66.2
Wait GPU NB nonloc. 2 6 2401 0.056 2.157 0.1
Wait GPU NB local 2 6 2401 0.033 1.285 0.0
NB X/F buffer ops. 2 6 9554 0.641 24.920 0.9
Write traj. 2 6 2 0.040 1.559 0.1
Update 2 6 4802 1.690 65.662 2.2
Constraints 2 6 4802 10.001 388.661 13.3
Comm. energies 2 6 25 0.003 0.107 0.0
Rest 7.511 291.885 10.0
-
Total 75.323 2927.243 100.0
-
Breakdown of PME mesh computation
-
PME redist. X/F 2 6 4802 2.694 104.683 3.6
PME spread 2 6 2401 10.619 412.680 14.1
PME gather 2 6 2401 9.157 355.857 12.2
PME 3D-FFT 2 6 4802 21.805 847.398 28.9
PME 3D-FFT Comm. 2 6 4802 

[gmx-users] Does adding artificial mass cause problems for friction studies?

2018-12-10 Thread James
Hi,

When studying friction it can be inconvenient for a structure to have low
mass. If you push on the structure hard enough to overcome static friction,
it then accelerates so rapidly that the speeds are unrealistic.

One way to overcome this problem is to increase the structure's mass. But,
adding many atoms increases simulation time. A work-around might be to
artificially increase the masses of certain atoms. For example, tell
GROMACS that H weighs 1,000 instead of 1.

But, artificial masses raise a variety of questions about the accuracy of
the results. I can't find anything on this in the literature. Does anyone
know about this, or know of relevant studies?

Sincerely,
James Ryley


Re: [gmx-users] using dual CPU's

2018-12-10 Thread Mark Abraham
Hi,

One of your reported runs only used six threads, by the way.

Something sensible can be said when the performance report at the end of
the log file can be seen.
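
For reference, that report is the cycle and time accounting table printed
at the very end of the .log, so something like the following shows it
(file name as used earlier in the thread):

  tail -n 100 SR.sys.nvt.log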

Mark

On Tue., 11 Dec. 2018, 01:25 p buscemi,  wrote:

> Thank you, Mark, for the prompt response. I realize the limitations of the
> system (it's over 8 years old), but I did not expect the speed to decrease by 50%
> with 12 available threads! No combination of ntomp, ntmpi could raise
> ns/day above 4 with two GPUs, vs 6 with one GPU.
>
> This is actually a learning/practice run for a new build - an AMD 4.2 Ghz
> 32 core TR, 64G ram. In this case I am trying to decide upon either a RTX
> 2080 ti or two GTX 1080 TI. I'd prefer the two 1080's for the 7000 cores vs
> the 4500 cores of the 2080. The model systems will have ~ million particles
> and need the speed. But this is a major expense so I need to get it right.
> I'll do as you suggest and report the results for both systems and I
> really appreciate the assist.
> Paul
> UMN, BICB
>
> On Dec 9 2018, at 4:32 pm, paul buscemi  wrote:
> >
> > Dear Users,
> > I have good luck using a single GPU with the basic setup. However, in
> going from one GTX 1060 to a system with two - 50,000 atoms - the rate
> decreases from 10 ns/day to 5 or worse. The system models a ligand, solvent
> (water) and a lipid membrane.
> > The cpu is a 6-core Intel i7 970 (12 threads), 750 W PS, 16 GB RAM.
> > With the basic command "mdrun" I get:
> > Back Off! I just backed up sys.nvt.log to ./#.sys.nvt.log.10#
> > Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> > Changing nstlist from 10 to 100, rlist from 1 to 1
> >
> > Using 2 MPI threads
> > Using 6 OpenMP threads per tMPI thread
> >
> > On host I7 2 GPUs auto-selected for this run.
> > Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> > PP:0,PP:1
> >
> > Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.10#
> > Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.10#
> > NOTE: DLB will not turn on during the first phase of PME tuning
> > starting mdrun 'SR-TA'
> > 10 steps, 100.0 ps.
> > and ending with ^C
> >
> > Received the INT signal, stopping within 200 steps
> >
> > Dynamic load balancing report:
> > DLB was locked at the end of the run due to unfinished PP-PME balancing.
> > Average load imbalance: 0.7%.
> > The balanceable part of the MD step is 46%, load imbalance is computed
> from this.
> > Part of the total run time spent waiting due to load imbalance: 0.3%.
> >
> >
> > Core t (s) Wall t (s) (%)
> > Time: 543.475 45.290 1200.0
> > (ns/day) (hour/ns)
> > Performance: 1.719 13.963 before DBL is turned on
> >
> > Very poor performance. I have been following - or trying to follow -
> "Performance Tuning and Optimization of GROMACS", M. Abraham and R. Apostolov,
> 2016, but have not yet broken the code.
> > 
> > gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 3 -gpu_id 01 -pin on.
> >
> >
> > Back Off! I just backed up SR.sys.nvt.log to ./#SR.sys.nvt.log.13#
> > Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> > Changing nstlist from 10 to 100, rlist from 1 to 1
> >
> > Using 2 MPI threads
> > Using 3 OpenMP threads per tMPI thread
> >
> > On host I7 2 GPUs auto-selected for this run.
> > Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> > PP:0,PP:1
> >
> > Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.13#
> > Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.13#
> > NOTE: DLB will not turn on during the first phase of PME tuning
> > starting mdrun 'SR-TA'
> > 10 steps, 100.0 ps.
> >
> > NOTE: DLB can now turn on, when beneficial
> > ^C
> >
> > Received the INT signal, stopping within 200 steps
> >
> > Dynamic load balancing report:
> > DLB was off during the run due to low measured imbalance.
> > Average load imbalance: 0.7%.
> > The balanceable part of the MD step is 46%, load imbalance is computed
> from this.
> > Part of the total run time spent waiting due to load imbalance: 0.3%.
> >
> >
> > Core t (s) Wall t (s) (%)
> > Time: 953.837 158.973 600.0
> > (ns/day) (hour/ns)
> > Performance: 2.935 8.176
> >
> > 
> > the beginning of the log file is
> > GROMACS version: 2018.3
> > Precision: single
> > Memory model: 64 bit
> > MPI library: thread_mpi
> > OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> > GPU support: CUDA
> > SIMD instructions: SSE4.1
> > FFT library: fftw-3.3.8-sse2
> > RDTSCP usage: enabled
> > TNG support: enabled
> > Hwloc support: disabled
> > Tracing support: disabled
> > Built on: 2018-10-19 21:26:38
> > Built by: pb@Q4 [CMAKE]
> > Build OS/arch: Linux 4.15.0-20-generic x86_64
> > Build CPU vendor: Intel
> > Build CPU brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
> > Build CPU family: 6 Model: 44 Stepping: 2
> > Build CPU features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr
> nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
> sse4.2 ssse3
> 

Re: [gmx-users] Mean square displacement on Log-Log plot?

2018-12-10 Thread David van der Spoel

On 2018-12-10 at 21:57, Kevin Boyd wrote:

Hi,

If you're reporting a diffusion coefficient, they're probably looking for
you to justify that you're out of the short-time subdiffusive regime. My
experience is in bilayer simulations, where the MSD hits that regime
typically in the time lag range of ~10-20 ns.

For a qualitative estimate of whether you've reached the long timescale
limit, you don't need a log-log plot, you can just eyeball when the MSD
goes linear, and (again in my experience) that's generally sufficient. A
log-log plot may make it easier to see, or catch some subtler trends.

Maybe take a look at "Non-brownian diffusion in lipid membranes:
experiments and simulations", by Metzler, Jeon, and Cherstvy, particularly
some of the later figures look at the extent of subdiffusion with log-log
plots.
https://www.sciencedirect.com/science/article/pii/S0005273616300219


Thanks!


Kevin

On Mon, Dec 10, 2018 at 3:34 PM David van der Spoel 
wrote:


Hi,

unusual request, but here goes. I am dealing with a referee of one of my
papers who is asking for a mean square displacement plot:

"a log-log plot of the MSD vs. time, from which one could judge whether
the long-time limit subject to multiple collisions and obstructions is
actually reached."

Any clue what the referee is looking for? References?

Cheers,
--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.

http://www.icm.uu.se




--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se


Re: [gmx-users] Mean square displacement on Log-Log plot?

2018-12-10 Thread Kevin Boyd
Hi,

If you're reporting a diffusion coefficient, they're probably looking for
you to justify that you're out of the short-time subdiffusive regime. My
experience is in bilayer simulations, where the MSD hits that regime
typically in the time lag range of ~10-20 ns.

For a qualitative estimate of whether you've reached the long timescale
limit, you don't need a log-log plot, you can just eyeball when the MSD
goes linear, and (again in my experience) that's generally sufficient. A
log-log plot may make it easier to see, or catch some subtler trends.

Maybe take a look at "Non-brownian diffusion in lipid membranes:
experiments and simulations", by Metzler, Jeon, and Cherstvy, particularly
some of the later figures look at the extent of subdiffusion with log-log
plots.
https://www.sciencedirect.com/science/article/pii/S0005273616300219
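
For a quick qualitative check, the MSD from gmx msd can be put straight
onto log-log axes, e.g. (a sketch; -lateral z selects the in-plane MSD for
a bilayer whose normal is z, and xmgrace's -log option is assumed):

  gmx msd -f traj.xtc -s topol.tpr -n index.ndx -o msd.xvg -lateral z
  xmgrace -log xy msd.xvg   # a log-log slope of ~1 marks the linear (Fickian) regime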

Kevin

On Mon, Dec 10, 2018 at 3:34 PM David van der Spoel 
wrote:

> Hi,
>
> unusual request, but here goes. I am dealing with a referee of one of my
> papers who is asking for a mean square displacement plot:
>
> "a log-log plot of the MSD vs. time, from which one could judge whether
> the long-time limit subject to multiple collisions and obstructions is
> actually reached."
>
> Any clue what the referee is looking for? References?
>
> Cheers,
> --
> David van der Spoel, Ph.D., Professor of Biology
> Head of Department, Cell & Molecular Biology, Uppsala University.
> Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
>
> http://www.icm.uu.se


[gmx-users] Mean square displacement on Log-Log plot?

2018-12-10 Thread David van der Spoel

Hi,

unusual request, but here goes. I am dealing with a referee of one of my
papers who is asking for a mean square displacement plot:


"a log-log plot of the MSD vs. time, from which one could judge whether 
the long-time limit subject to multiple collisions and obstructions is 
actually reached."


Any clue what the referee is looking for? References?

Cheers,
--
David van der Spoel, Ph.D., Professor of Biology
Head of Department, Cell & Molecular Biology, Uppsala University.
Box 596, SE-75124 Uppsala, Sweden. Phone: +46184714205.
http://www.icm.uu.se


[gmx-users] Buckingham potential

2018-12-10 Thread Mohammadiarani, Hossein
Hi,


I want to use both the Buckingham potential and LJ in the same simulation,
since using tabulated potentials is slow. Do you have any suggestions on how
to incorporate both potentials in the simulation?


Best,


Hosein


[gmx-users] problem with introducing a new structure in gromacs, please help if anybody knows

2018-12-10 Thread banijamali_fs
Hi there, 

I'm working on MOFs (metal organic frameworks) with gromacs. Before
starting the simulation I have to introduce my molecule in the
aminoacids.rtp file, which I did. My problem is that I have 4 repetitive
structures (ligands) and 2 metals: the whole charge of the molecule is
zero, but each ligand has a charge of -1 and each of the two metals has a
charge of +2. When I define one of the repetitive ligands in the
aminoacids.rtp file its total charge should be -1 to get proper results,
but when I put in the partial charges of every atom from the DFT
calculations, the sum is not -1. So I want to know: first, is this a
problem - will it give errors in the later steps of the simulation - or
is it not a problem? If anybody knows what I should do, please guide me.


Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-10 Thread Szilárd Páll
On Sat, Dec 8, 2018 at 10:00 PM Gmail  wrote:
>
> My mistake! It was a typo. Anyway, this is the result before executing
> the chrpath command:
>
> chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> $APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64
>
> I'm suspicious that GROMACS 2018 is not being compiled using shared
> libraries, at least, for CUDA.

First of all, what is the goal, why are you trying to manually rewrite
the binary RPATH?

Well, if the binary is not linked against libcudart.so then it clearly
isn't (and the ldd output is a better way to confirm that -- gmx can be
linked against a library even without an RPATH being set).
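
For that ldd check, something like (using the install path quoted above):

  ldd $APPS/GROMACS/2018/CUDA/8.0/bin/gmx | grep -i cudart
  # a "libcudart.so.8.0 => ..." line means the CUDA runtime is linked
  # dynamically; no output means it was linked in statically (nvcc's
  # default is --cudart=static)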

I have a vague memory that this may have been the default in CMake or
perhaps it changed at some point. What's your CMake version, perhaps
you're using an old CMake?

>
> Jaime.
>
>
> On 8/12/18 21:39, Mark Abraham wrote:
> > Hi,
> >
> > Your final line doesn't match your CMAKE_INSTALL_PREFIX
> >
> > Mark
> >
> > On Sun., 9 Dec. 2018, 07:00 Jaime Sierra  >
> >> Hi pall,
> >>
> >> thanks for your answer,
> >> I have my own "HOW_TO_INSTALL" guide like:
> >>
> >> $ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
> >> $ tar xzf gromacs-5.1.4.tar.gz
> >> $ cd gromacs-5.1.4
> >> $ mkdir build
> >> $ cd build
> >> $ export EXTRA_NVCCFLAGS=--cudart=shared
> >> $ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
> >> $ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
> >> -DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
> >> -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=$LIBS/CUDA/8.0/
> >> $ make -j $(nproc)
> >> $ make install
> >> $ chrpath -r '$ORIGIN/../lib64' $APPS/GROMACS/5.1.4/GPU/bin/gmx
> >>
> >> that works until GROMACS 2016, I couldn't make it work for GROMACS 2018.
> >>
> >> Regards,
> >>
> >> Jaime.
> >>
> >> On Fri., 7 Dec. 2018 at 15:49, Szilárd Páll ()
> >> wrote:
> >>
> >>> Hi Jaime,
> >>>
> >>> Have you tried passing that variable to nvcc? Does it not work?
> >>>
> >>> Note that GROMACS makes up to a dozen of CUDA runtime calls (kernels
> >>> and transfers) per iteration with iteration times in the range of
> >>> milliseconds at longest and peak in the hundreds of nanoseconds and
> >>> the CPU needs to sync up every iteration with the GPU. Hence, I
> >>> suspect GROMACS may be a challenging use-case for rCUDA, but I'm very
> >>> interested in your observations and benchmarks results when you have
> >>> some.
> >>>
> >>> Cheers,
> >>> On Fri, Dec 7, 2018 at 10:39 AM Jaime Sierra  wrote:
>  Hi,
> 
>  my name is Jaime Sierra, a researcher from Polytechnic University of
>  Valencia, Spain. I would like to know how to compile & install GROMACS
>  2018 with CUDA features with the "--cudart=shared" compilation option
> >> to
>  use it with our rCUDA software.
> 
> 
>  We haven't had this problem in previous releases of GROMACS like 2016,
>  5.1.4 and so on.
> 
> 
>  Regards,
> 
>  Jaime.

Re: [gmx-users] using dual CPU's

2018-12-10 Thread p buscemi
Thank you, Mark, for the prompt response. I realize the limitations of the
system (it's over 8 years old), but I did not expect the speed to decrease by
50% with 12 available threads! No combination of ntomp, ntmpi could raise
ns/day above 4 with two GPUs, vs 6 with one GPU.

This is actually a learning/practice run for a new build - an AMD 4.2 Ghz 32 
core TR, 64G ram. In this case I am trying to decide upon either a RTX 2080 ti 
or two GTX 1080 TI. I'd prefer the two 1080's for the 7000 cores vs the 4500 
cores of the 2080. The model systems will have ~ million particles and need the 
speed. But this is a major expense so I need to get it right.
I'll do as you suggest and report the results for both systems and I really 
appreciate the assist.
Paul
UMN, BICB

On Dec 9 2018, at 4:32 pm, paul buscemi  wrote:
>
> Dear Users,
> I have good luck using a single GPU with the basic setup. However, in going
> from one GTX 1060 to a system with two - 50,000 atoms - the rate decreases
> from 10 ns/day to 5 or worse. The system models a ligand, solvent (water)
> and a lipid membrane.
> The cpu is a 6-core Intel i7 970 (12 threads), 750 W PS, 16 GB RAM.
> With the basic command "mdrun" I get:
> Back Off! I just backed up sys.nvt.log to ./#.sys.nvt.log.10#
> Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> Changing nstlist from 10 to 100, rlist from 1 to 1
>
> Using 2 MPI threads
> Using 6 OpenMP threads per tMPI thread
>
> On host I7 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> PP:0,PP:1
>
> Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.10#
> Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.10#
> NOTE: DLB will not turn on during the first phase of PME tuning
> starting mdrun 'SR-TA'
> 10 steps, 100.0 ps.
> and ending with ^C
>
> Received the INT signal, stopping within 200 steps
>
> Dynamic load balancing report:
> DLB was locked at the end of the run due to unfinished PP-PME balancing.
> Average load imbalance: 0.7%.
> The balanceable part of the MD step is 46%, load imbalance is computed from 
> this.
> Part of the total run time spent waiting due to load imbalance: 0.3%.
>
>
> Core t (s) Wall t (s) (%)
> Time: 543.475 45.290 1200.0
> (ns/day) (hour/ns)
> Performance: 1.719 13.963 before DBL is turned on
>
> Very poor performance. I have been following - or trying to follow -
> "Performance Tuning and Optimization of GROMACS", M. Abraham and R. Apostolov,
> 2016, but have not yet broken the code.
> 
> gmx mdrun -deffnm SR.sys.nvt -ntmpi 2 -ntomp 3 -gpu_id 01 -pin on.
>
>
> Back Off! I just backed up SR.sys.nvt.log to ./#SR.sys.nvt.log.13#
> Reading file SR.sys.nvt.tpr, VERSION 2018.3 (single precision)
> Changing nstlist from 10 to 100, rlist from 1 to 1
>
> Using 2 MPI threads
> Using 3 OpenMP threads per tMPI thread
>
> On host I7 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
> PP:0,PP:1
>
> Back Off! I just backed up SR.sys.nvt.trr to ./#SR.sys.nvt.trr.13#
> Back Off! I just backed up SR.sys.nvt.edr to ./#SR.sys.nvt.edr.13#
> NOTE: DLB will not turn on during the first phase of PME tuning
> starting mdrun 'SR-TA'
> 10 steps, 100.0 ps.
>
> NOTE: DLB can now turn on, when beneficial
> ^C
>
> Received the INT signal, stopping within 200 steps
>
> Dynamic load balancing report:
> DLB was off during the run due to low measured imbalance.
> Average load imbalance: 0.7%.
> The balanceable part of the MD step is 46%, load imbalance is computed from 
> this.
> Part of the total run time spent waiting due to load imbalance: 0.3%.
>
>
> Core t (s) Wall t (s) (%)
> Time: 953.837 158.973 600.0
> (ns/day) (hour/ns)
> Performance: 2.935 8.176
>
> 
> the beginning of the log file is
> GROMACS version: 2018.3
> Precision: single
> Memory model: 64 bit
> MPI library: thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support: CUDA
> SIMD instructions: SSE4.1
> FFT library: fftw-3.3.8-sse2
> RDTSCP usage: enabled
> TNG support: enabled
> Hwloc support: disabled
> Tracing support: disabled
> Built on: 2018-10-19 21:26:38
> Built by: pb@Q4 [CMAKE]
> Build OS/arch: Linux 4.15.0-20-generic x86_64
> Build CPU vendor: Intel
> Build CPU brand: Intel(R) Core(TM) i7 CPU 970 @ 3.20GHz
> Build CPU family: 6 Model: 44 Stepping: 2
> Build CPU features: aes apic clfsh cmov cx8 cx16 htt intel lahf mmx msr 
> nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 
> sse4.2 ssse3
> C compiler: /usr/bin/gcc-6 GNU 6.4.0
> C compiler flags: -msse4.1 -O3 -DNDEBUG -funroll-all-loops 
> -fexcess-precision=fast
> C++ compiler: /usr/bin/g++-6 GNU 6.4.0
> C++ compiler flags: -msse4.1 -std=c++11 -O3 -DNDEBUG -funroll-all-loops 
> -fexcess-precision=fast
> CUDA compiler: /usr/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright 
> (c) 2005-2017 NVIDIA Corporation;Built on Fri_Nov__3_21:07:56_CDT_2017;Cuda 
> 

Re: [gmx-users] mdrun-adjusted cutoffs?!

2018-12-10 Thread Szilárd Páll
On Mon, Dec 10, 2018 at 6:57 AM Mark Abraham  wrote:
>
> Hi,
>
> There are two ways to specify the long-range grid requirements (either
> fourierspacing, or fourier-nx and friends; see
> http://manual.gromacs.org/documentation//current/user-guide/mdp-options.html#ewald).
> The tuning will override fourierspacing in the same way that it does
> rcoulomb. I assume it does not override a manual specification of the grid
> dimensions, but I haven't tried it. I have noted to the dev team that we
> should check and document that.

If PME tuning is active, it will try to shift work from PME to PP
regardless of the means used to specify the PME settings (spacing or
explicit grid). I think that's consistent with the docs, isn't it?
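
For the control runs suggested further down in this thread (fixed
cut-offs, tuning disabled), the relevant pieces look roughly like this
(values are placeholders):

  ; in the .mdp, pin the short-range and PME settings explicitly
  rcoulomb        = 1.2      ; deliberately larger than rvdw
  rvdw            = 1.0
  fourierspacing  = 0.135    ; or fourier-nx/-ny/-nz for a fixed grid

  # and keep mdrun from shifting PP/PME work at run time
  gmx mdrun -deffnm run -notunepme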

>
> Mark
>
> On Sun, Dec 9, 2018 at 7:46 AM Alex  wrote:
>
> > That's very valuable info, thank you.
> >
> > By the way, all of our production mdp files have something like
> > fourierspacing = 0.135, the origin of which is long gone from my memory.
> > Does this imply that despite PME tuning our simulations use a fixed
> > Fourier grid that ends up in suboptimal performance, or does the tuning
> > override it?
> >
> > Alex
> >
> > On 12/8/2018 1:34 PM, Mark Abraham wrote:
> > > Hi,
> > >
> > > Note that that will compare runs of differently accurate electrostatic
> > > approximation. For iso-accurate comparisons, one must also scale the
> > > Fourier grid by the same factor (per the manual section on PME
> > autotuning).
> > > Of course, if you start from the smallest rcoulomb and use a fixed grid,
> > > then the comparisons will be of increasing accuracy, which might be
> > enough
> > > for the desired conclusion.
> > >
> > > Mark
> > >
> > > On Sat., 8 Dec. 2018, 02:05 Szilárd Páll  > >
> > >> BTW if you have doubts and still want to make sure that the mdrun PME
> > >> tuning does not affect your observables, you can always do a few runs
> > >> with a fixed rcoulomb > rvdw set in the mdp file (with -notunepme
> > >> passed on the command line for consistency) and compare what you get
> > >> with the rcoulomb = rvdw case. As Mark said, you should not observe a
> > >> difference.
> > >>
> > >> --
> > >> Szilárd
> > >> On Fri, Dec 7, 2018 at 7:10 AM Alex  wrote:
> > >>> I think that answers my question, thanks. :)
> > >>>
> > >>> On 12/6/2018 9:38 PM, Mark Abraham wrote:
> >  Hi,
> > 
> >  Zero, because we are shifting between equivalent ways to compute the
> > >> total
> >  electrostatic interaction.
> > 
> >  You can turn off the PME tuning with mdrun -notunepme, but unless
> > >> there's a
> >  bug, all that will do is force it to run slower than optimal.
> > >> Obviously you
> >  could try it and see that the FE of hydration does not change with the
> >  model, so long as you have a reproducible protocol.
> > 
> >  Mark
> > 
> > 
> >  On Fri., 7 Dec. 2018, 06:39 Alex  > 
> > > I'm not ignoring the long-range contribution, but yes, most of the
> > > effects I am talking about are short-range. What I am asking is how
> > >> much
> > > the free energy of ionic hydration for K+ changes in, say, a system
> > >> that
> > > contains KCl in bulk water -- with and without autotuning. Hence also
> > > the earlier question about being able to turn it off at least
> > >> temporarily.
> > > Alex
> > >
> > > On 12/6/2018 5:42 AM, Mark Abraham wrote:
> > >> Hi,
> > >>
> > >> It sounds like you are only looking at the short-ranged component of
> > >> the
> > >> electrostatic interaction, and thus ignoring the way the long range
> > >> component also changes. Is the validity of the PME auto tuning the
> > > question
> > >> at hand?
> > >>
> > >> Mark
> > >>
> > >> On Thu., 6 Dec. 2018, 21:09 Alex  > >>
> > >>> More specifically, electrostatics. For the stuff I'm talking about,
> > >> the
> > >>> LJ portion contributes ~20% at the most. When the change in
> > >> energetics
> > >>> is a statistically persistent value of order kT (of which about 20%
> > >>> comes from LJ), the quantity of interest (~exp(E/kT)) changes by a
> > >>> factor of 2.72. Again, this is a fairly special case, but I can
> > >> easily
> > >>> envision someone doing ion permeation across KcsA and the currents
> > >> would
> > >>> be similarly affected. For instance, when I set all cutoffs at 1.0
> > >> nm,
> > >>> mdrun ends up using something like 1.1 nm for electrostatics, at
> > >> least
> > >>> that's what I see at the top of the log.
> > >>>
> > >>> I agree with what you said about vdW and it can be totally
> > >>> arbitrary and
> > >>> then often requires crutches elsewhere, but my question was whether
> > >> for
> > >>> very sensitive quantities mdrun ends up utilizing the forcefield as
> > >> it
> > >>> was designed and not in a "slightly off" regime. Basically, you
> > >> asked me
> > >>> to describe our case and why I think