Hi all, I'm new to GROMACS and seeking some input on my .mdp file for the
production run. I want to run a simulation to check protein stability over
time; after that, the stable protein will be used for protein-protein
docking/interaction studies. The protein sits in a cubic box (1 nm from the
protein to the periodic boundary) with water and Cl- ions to neutralize the
system, running on a GPU-accelerated machine.
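For context, the system was prepared with the usual workflow, roughly along
these lines (file names are placeholders and the water model is only an
example, not necessarily what matters for the question):

    # ff99SB = amber99sb in GROMACS; water model shown here is just an example
    pdb2gmx  -f protein.pdb -o protein.gro -p topol.top -ff amber99sb -water tip3p
    editconf -f protein.gro -o boxed.gro -c -d 1.0 -bt cubic      # cubic box, 1 nm to the edge
    genbox   -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top
    grompp   -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr
    genion   -s ions.tpr -o solvated_ions.gro -p topol.top -nname CL -neutral   # neutralizing Cl- ions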
My .mdp file is as follows:

title                   = Protein in water
; Run parameters
integrator              = md            ; leap-frog integrator
nsteps                  = 2000          ; 2000 * 2 fs = 4 ps (short test run)
dt                      = 0.002         ; 2 fs
cutoff-scheme           = Verlet        ; required for GPU acceleration
verlet-buffer-drift     = -1            ; no automatic buffer; use rlist/nstlist as given below
; Output control
nstxout                 = 1000          ; save coordinates every 2 ps
nstvout                 = 1000          ; save velocities every 2 ps
nstxtcout               = 1000          ; xtc compressed trajectory output every 2 ps
nstenergy               = 1000          ; save energies every 2 ps
nstlog                  = 1000          ; update log file every 2 ps
; Bond parameters
continuation            = yes           ; restarting after NPT
constraint_algorithm    = lincs         ; holonomic constraints
constraints             = all-bonds     ; all bonds (even heavy atom-H bonds) constrained
lincs_iter              = 1             ; accuracy of LINCS
lincs_order             = 4             ; also related to accuracy
; Neighbor searching
ns_type                 = grid          ; search neighboring grid cells
nstlist                 = 30            ; update neighbor list every 30 steps (60 fs)
rlist                   = 0.6           ; short-range neighborlist cutoff (in nm)
rcoulomb                = 0.6           ; short-range electrostatic cutoff (in nm)
rvdw                    = 0.6           ; short-range van der Waals cutoff (in nm)
; Electrostatics
coulombtype             = PME           ; Particle Mesh Ewald for long-range electrostatics
pme_order               = 4             ; cubic interpolation
fourierspacing          = 0.12          ; grid spacing for FFT
; Temperature coupling is on
tcoupl                  = V-rescale     ; modified Berendsen thermostat
tc-grps                 = Protein Non-Protein   ; two coupling groups - more accurate
tau_t                   = 0.1   0.1     ; time constant, in ps
ref_t                   = 300   300     ; reference temperature, one for each group, in K
; Pressure coupling is on
pcoupl                  = Parrinello-Rahman     ; pressure coupling on in NPT
pcoupltype              = isotropic     ; uniform scaling of box vectors
tau_p                   = 2.0           ; time constant, in ps
ref_p                   = 1.0           ; reference pressure, in bar
compressibility         = 4.5e-5        ; isothermal compressibility of water, bar^-1
; Periodic boundary conditions
pbc                     = xyz           ; 3-D PBC
; Dispersion correction
DispCorr                = EnerPres      ; account for cut-off vdW scheme
; Velocity generation
gen_vel                 = no            ; velocity generation is off (continuing from NPT)
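For reference, a run like the ones below can be built and started along these
lines with 4.6.1 on a single node with one GPU (file names are placeholders):

    grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o test.tpr   # continue from the NPT run
    mdrun  -deffnm test -ntomp 4 -nb gpu                              # -nb gpu optional; mdrun auto-detects the GPU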
By performing a test run with the ff99SB force field, I got the following
performance:

Reading file test11.tpr, VERSION 4.6.1 (single precision)
Using 1 MPI thread
Using 4 OpenMP threads
1 GPU detected:
  #0: NVIDIA GeForce GT 630, compute cap.: 3.0, ECC: no, stat: compatible
1 GPU auto-selected for this run: #0

starting mdrun 'Protein in water'
2000 steps,      4.0 ps.
step   60: timed with pme grid 104 104 104, coulomb cutoff 0.600: 6760.6 M-cycles
step  120: timed with pme grid 96 96 96, coulomb cutoff 0.643: 7826.0 M-cycles
step  180: timed with pme grid 104 104 104, coulomb cutoff 0.600: 6716.3 M-cycles
step  240: timed with pme grid 100 100 100, coulomb cutoff 0.617: 7248.5 M-cycles
              optimal pme grid 104 104 104, coulomb cutoff 0.600
step 1900, remaining runtime:     7 s
Writing final coordinates.
step 2000, remaining runtime:     0 s

NOTE: The GPU has >20% more load than the CPU. This imbalance causes
      performance loss, consider using a shorter cut-off and a finer PME grid.

               Core t (s)   Wall t (s)        (%)
       Time:      427.620      144.021      296.9
                 (ns/day)    (hour/ns)
Performance:        2.401        9.996

By changing just pme_order = 6, I got this:

Reading file test7.tpr, VERSION 4.6.1 (single precision)
Using 1 MPI thread
Using 4 OpenMP threads
1 GPU detected:
  #0: NVIDIA GeForce GT 630, compute cap.: 3.0, ECC: no, stat: compatible
1 GPU auto-selected for this run: #0

starting mdrun 'Protein in water'
2000 steps,      4.0 ps.
step   60: timed with pme grid 104 104 104, coulomb cutoff 0.600: 6818.0 M-cycles
step  120: timed with pme grid 96 96 96, coulomb cutoff 0.643: 7821.2 M-cycles
step  180: timed with pme grid 104 104 104, coulomb cutoff 0.600: 6718.4 M-cycles
step  240: timed with pme grid 100 100 100, coulomb cutoff 0.617: 7257.1 M-cycles
              optimal pme grid 104 104 104, coulomb cutoff 0.600
step 1900, remaining runtime:     7 s
Writing final coordinates.
step 2000, remaining runtime:     0 s

               Core t (s)   Wall t (s)        (%)
       Time:      550.020      144.580      380.4
                 (ns/day)    (hour/ns)
Performance:        2.392       10.035
I have run many test simulations, varying rlist, rcoulomb, rvdw, pme_order and
fourierspacing (values I settled on after reading papers and the gmx-users
list), but most of the time I got a performance loss. I am also a bit worried
about using such short rlist, rvdw and rcoulomb values: most of the papers I
have read use 1.0 nm for each, across a range of different proteins.
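To be explicit about what I mean, the cut-offs used in those papers would
correspond to something like the fragment below (shown only for comparison; I
have not tested these exact settings on my system):

    cutoff-scheme        = Verlet
    verlet-buffer-drift  = 0.005   ; drift tolerance, so grompp sets rlist automatically
    rcoulomb             = 1.0     ; 1.0 nm short-range electrostatic cut-off
    rvdw                 = 1.0     ; 1.0 nm van der Waals cut-off
    fourierspacing       = 0.12    ; PME grid spacing kept as before
    pme_order            = 4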
If I increase these cut-offs there is a performance loss, but otherwise the
current cut-offs seem too short (I guess). Can anybody guide me on where I
should improve the .mdp file, or whether the current one is reasonable?

Thanks in advance.
mayaz
 