[gmx-users] Fwd: Query regarding Preferential Interaction Coefficient Calculation

2019-01-18 Thread ISHRAT JAHAN
Dear all,
I want to calculate the number of water and osmolyte molecules at a
particular distance from the protein surface, in order to calculate the
preferential interaction coefficient. I have calculated it using the gmx
select command. Is it right to calculate it with this command? Please help
me in this regard.
Thanks in advance
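For reference, given per-frame counts of osmolyte and water molecules within a chosen cutoff of the protein (e.g. from gmx select), the two-domain preferential interaction coefficient can be averaged over frames. A minimal sketch; the counts and totals below are made-up illustration values, not from any real system:

```python
def pref_interaction_coeff(n_osm_local, n_wat_local, n_osm_total, n_wat_total):
    """Gamma_23 for one frame: local osmolyte count minus the local water
    count scaled by the bulk osmolyte/water ratio (two-domain formula)."""
    n_osm_bulk = n_osm_total - n_osm_local
    n_wat_bulk = n_wat_total - n_wat_local
    return n_osm_local - n_wat_local * (n_osm_bulk / n_wat_bulk)

# made-up per-frame (osmolyte, water) counts within the cutoff
frames = [(12, 480), (10, 470), (14, 490)]
N_OSM_TOTAL, N_WAT_TOTAL = 100, 5000

# time-average over frames gives the preferential interaction coefficient
gamma = sum(pref_interaction_coeff(no, nw, N_OSM_TOTAL, N_WAT_TOTAL)
            for no, nw in frames) / len(frames)
```

A positive average indicates preferential accumulation of the osmolyte near the protein; a negative one indicates preferential exclusion.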

-- Forwarded message -
From: ISHRAT JAHAN 
Date: Wed, Jan 16, 2019, 11:23 AM
Subject: Query regarding Preferential Interaction Coefficient Calculation
To: 


Dear all,
I have done an MD simulation of a protein in osmolytes; now I want to
calculate the preferential interaction coefficient of the osmolytes from
the protein surface. Will anyone please guide me through the proper steps
of the calculation?
Thanks and regards
-- 
Ishrat Jahan
Research Scholar
Department Of Chemistry
A.M.U Aligarh
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gmx 2019 performance issues

2019-01-18 Thread Szilárd Páll
On Tue, Jan 15, 2019 at 1:30 PM Tamas Hegedus  wrote:

> Hi,
>
> I do not really see increased performance with gmx 2019 using -bonded
> gpu. I do not see what I am missing or misunderstanding.
> The only thing I see is that all CPUs run at ~100% with gmx 2018, while
> some of them run at only ~60% with gmx 2019.
>

GPUs aren't necessarily faster than CPUs, in particular as GROMACS has fast
CPU code for all compute-intensive algorithms. When your CPU is
sufficiently fast to complete the bonded computation before the GPU is done
with the rest (and that is generally the case if there are at least 3-4
cores/GPU), it is normal that you do not see improvement from offloading
that task to the GPU. Consequently, you could even see a slight performance
degradation in such cases with -bonded gpu vs -bonded cpu.

One benefit can of course be the lower CPU requirement, which means you
might be able to run two simulations side by side, each using half of the
CPU cores and 1-2 GPUs.


> There are: 196382 Atoms
> Speeds come from 500 ps runs.
>
>  From one of the log files:
> Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
>PP:0,PP:0,PP:2,PME:2
> PP tasks will do (non-perturbed) short-ranged and most bonded
> interactions on the GPU
> PME tasks will do all aspects on the GPU
>
> --
> 16 cores 4 GPUs
> gmx 2018 48ns/day
> gmx 2019 54ns/day
>
> gmx mdrun -nt 16 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -pme gpu
> -npme 1 -gputasks 0123
>
> gmx mdrun -nt 16 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -bonded gpu
> -pme gpu -npme 1 -gputasks 0123
>
> Since the GPUs are not utilized well (some of them are below 50%), my
> objective is to run 2 jobs/node, each with 8 cores and 2 GPUs, at higher
> usage.
>
> --
> 8 cores 2 GPUs
> gmx 2018 33 ns/day
> gmx 2019 35 ns/day
>
> gmx mdrun -nt 8 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -pme gpu
> -npme 1 -gputasks 0033
>
> gmx mdrun -nt 8 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -bonded gpu
> -pme gpu -npme 1 -gputasks 0022
>
> gmx mdrun -ntomp 2 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -bonded
> gpu -pme gpu -npme 1 -gputasks 0022
> Changing -nt to -ntomp did not help to increase performance.
>
> And the GPUs are not utilized much better. A 1080Ti runs at 60-75% at most.
>
> --
> The main question:
> * I use 16 core AMD 2950X with 4 high end GPUs (1080Ti, 2080Ti).
> * The GPUs do not run at 100%, so I would like to load them more and
> possibly run 2 gmx jobs on the same node.
>
> I see two options:
> * cheaper: decrease the cores from 16 to 8 and push bonded calculations
> to gpu using gmx 2019
> * expensive: replace the 16core 2950X to 32core 2990WX
>
> 2950X 16 cores 2 GPUs
> gmx 2018 43 ns/day
> gmx 2019 43 ns/day
>
> 33 ns/day (8core/2GPUs) << 54 ns/day (16core/4GPUS)
> 43 ns/day << 54 (16core/4GPUS)
>
> So this could be a compromise, if 16/32 cores work similarly to 16/16
> cores. E.g. the 2990 has slower memory access than the 2950; I do not
> expect this to influence gmx runs too much. However, if it decreases
> performance by 10-15 percent, then it is most likely not worth investing
> in the 32-core processor.
>
> Thanks for your feedback.
> Tamas
>
> --
> Tamas Hegedus, PhD
> Senior Research Fellow
> Department of Biophysics and Radiation Biology
> Semmelweis University | phone: (36) 1-459 1500/60233
> Tuzolto utca 37-47| mailto:ta...@hegelab.org
> Budapest, 1094, Hungary   | http://www.hegelab.org


Re: [gmx-users] different nvidia-smi/gmx GPU_IDs

2019-01-18 Thread paul buscemi
Szilard,

Is the environment variable set at build time?

thanks
Paul

> On Jan 18, 2019, at 12:36 PM, Szilárd Páll  wrote:
> 
> Hi,
> 
> The CUDA runtime tries (and AFAIK has always tried) to be smart about
> device order, which is what GROMACS will see in its detection. The
> nvidia-smi monitoring tool, however, uses a different mechanism for
> enumeration that always respects the PCI identifiers of the devices (~
> the order of cards/slots in the box).
> 
> This can of course cause some headache in mixed setups, but you can set the
> CUDA_DEVICE_ORDER=PCI_BUS_ID environment variable to tell the runtime to
> avoid reordering the GPUs and expose them ordered by bus ID.
> 
> Cheers,
> --
> Szilárd
> 
> 
> On Sun, Jan 13, 2019 at 2:27 PM Tamas Hegedus  wrote:
> 
>> Hi,
>> 
>> I have a node with 4 nvidia GPUs.
>> From nvidia-smi output:
>>  0  Quadro P6000
>>  1  GeForce RTX 208
>>  2  GeForce GTX 108
>>  3  GeForce RTX 208
>> 
>> However, the order of GPUs is differently detected by gmx 2018.3
>> Number of GPUs detected: 4
>> #0: NVIDIA GeForce RTX 2080 Ti
>> #1: NVIDIA GeForce RTX 2080 Ti
>> #2: NVIDIA Quadro P6000
>> #3: NVIDIA GeForce GTX 1080 Ti
>> 
>> Why is this? This makes it difficult to plan/check the running of two gmx
>> jobs on the same node.
>> 
>> Thanks for your suggestion.
>> 
>> Tamas
>> 
>> --
>> Tamas Hegedus, PhD
>> Senior Research Fellow
>> MTA-SE Molecular Biophysics Research Group
>> Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
>> Semmelweis University  | fax:   (36) 1-266 6656
>> Tuzolto utca 37-47 | mailto:ta...@hegelab.org
>> Budapest, 1094, Hungary| http://www.hegelab.org
>> 


Re: [gmx-users] different nvidia-smi/gmx GPU_IDs

2019-01-18 Thread Szilárd Páll
Hi,

The CUDA runtime tries (and AFAIK has always tried) to be smart about
device order, which is what GROMACS will see in its detection. The
nvidia-smi monitoring tool, however, uses a different mechanism for
enumeration that always respects the PCI identifiers of the devices (~
the order of cards/slots in the box).

This can of course cause some headache in mixed setups, but you can set the
CUDA_DEVICE_ORDER=PCI_BUS_ID environment variable to tell the runtime to
avoid reordering the GPUs and expose them ordered by bus ID.
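A minimal sketch of doing this in the shell before launching mdrun (the variable is interpreted by the CUDA runtime, not by GROMACS itself):

```shell
# Make the CUDA runtime enumerate devices in PCI bus order so that
# GROMACS's GPU ids match nvidia-smi's ordering; set before mdrun starts.
export CUDA_DEVICE_ORDER=PCI_BUS_ID
```

It must be set in the environment of the mdrun process (e.g. in the job script), not at build time.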

Cheers,
--
Szilárd


On Sun, Jan 13, 2019 at 2:27 PM Tamas Hegedus  wrote:

> Hi,
>
> I have a node with 4 nvidia GPUs.
>  From nvidia-smi output:
>   0  Quadro P6000
>   1  GeForce RTX 208
>   2  GeForce GTX 108
>   3  GeForce RTX 208
>
> However, the order of GPUs is differently detected by gmx 2018.3
>  Number of GPUs detected: 4
>  #0: NVIDIA GeForce RTX 2080 Ti
>  #1: NVIDIA GeForce RTX 2080 Ti
>  #2: NVIDIA Quadro P6000
>  #3: NVIDIA GeForce GTX 1080 Ti
>
> Why is this? This makes it difficult to plan/check the running of two gmx
> jobs on the same node.
>
> Thanks for your suggestion.
>
> Tamas
>
> --
> Tamas Hegedus, PhD
> Senior Research Fellow
> MTA-SE Molecular Biophysics Research Group
> Hungarian Academy of Sciences  | phone: (36) 1-459 1500/60233
> Semmelweis University  | fax:   (36) 1-266 6656
> Tuzolto utca 37-47 | mailto:ta...@hegelab.org
> Budapest, 1094, Hungary| http://www.hegelab.org
>

[gmx-users] RMSD plots protein-protein complex

2019-01-18 Thread marzieh gharouni
Hello
I ran a simulation of a protein-protein interaction with GROMACS. In
this simulation, each protein has about 230 amino acids.
The production run was 300 ns. After analyzing the trajectory, I found
that the RMSD of the protein-protein complex fluctuates near 1.8 nm, but
each individual protein has an RMSD around 0.35 nm. Other plots (Rg,
RMSF) of the protein-protein complex show that my complex is stable. But
I have a problem with the RMSD value (not the behavior) of the combined
proteins. Is this normal?
Please see share files
Rmsd of protein A-Protein B complex:
https://drive.google.com/file/d/16BoOdfUev2Bh7zgUCj4pOBAyPQLS_EJY/view?usp=sharing
Rmsd of Protein A:
https://drive.google.com/file/d/1QYt2rQQuNK4qqwp9O-k40uglSZaEYjAe/view?usp=sharing
Rmsd of protein B:
https://drive.google.com/file/d/1DYJ_jIUf-E9I8M38MrBxdh1tAtaIsBod/view?usp=sharing
Thanks in advance.
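A large complex RMSD combined with small per-chain RMSDs usually means the two proteins have moved relative to each other (rotation/translation of one chain) while each chain kept its internal fold. A toy illustration of that effect (translation-only fit, no rotational superposition as gmx rms would do; all coordinates are made up):

```python
import math

def rmsd_after_com_removal(ref, cur):
    """Translation-only RMSD: remove each structure's centroid, then take
    the root-mean-square deviation (no rotational fit, for illustration)."""
    n = len(ref)
    cref = [sum(p[i] for p in ref) / n for i in range(3)]
    ccur = [sum(p[i] for p in cur) / n for i in range(3)]
    sq = 0.0
    for p, q in zip(ref, cur):
        sq += sum((q[i] - ccur[i] - (p[i] - cref[i])) ** 2 for i in range(3))
    return math.sqrt(sq / n)

# chain A near the origin, chain B offset 3 nm along x (4 atoms each)
A  = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
B  = [(x + 3, y, z) for x, y, z in A]
# "frame 2": both chains internally unchanged, but B slides 2 nm along x
A2 = list(A)
B2 = [(x + 2, y, z) for x, y, z in B]

per_chain    = (rmsd_after_com_removal(A, A2), rmsd_after_com_removal(B, B2))
complex_rmsd = rmsd_after_com_removal(A + B, A2 + B2)
```

Here each chain's internal RMSD is exactly 0, yet the complex RMSD is 1 nm, purely from the relative displacement. Monitoring the inter-protein distance or orientation over time would confirm whether something similar happens in your trajectory.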


Re: [gmx-users] perl water_deletor.pl

2019-01-18 Thread Salman Zarrini
Hi
On Fri, Jan 18, 2019 at 3:30 AM rabee khorram 
wrote:

> *Hello everyone,*
> *I am running a liposome structure with GROMACS 5.*
> *This liposome was created with the Packmol software (without water
> molecules).*
> *After solvating the liposome with water in GROMACS, I need to remove the
> waters from the hydrophobic region.*
>
>
I guess gmx select can do the job for you, something like:
~> gmx select -f conf.gro -s conf.tpr -selrpos mol_cog -on water.ndx
with a selection such as
resname SOL and not within 1 of [x0, y0, z0]
(this is just an example region; you should define the region of your
interest here). This gives you water.ndx, which contains the index numbers
of the water molecules of your interest.
Then something like:
~> gmx trjconv -f conf.gro -s conf.tpr -n water.ndx -o water.gro
gives you the water molecules of your interest.
For further examples please have a look at
http://manual.gromacs.org/documentation/5.1/onlinehelp/selections.html

Cheers,
Salman


> *Is there any "perl water_deletor.pl" script
> for this structure?*


[gmx-users] Problem in velocity plot while doing MD simulation of DNA protein with E field

2019-01-18 Thread ARNAB MUKHERJEE
Hi,

I am trying to study the motion of a protein along DNA under an applied
electric field. I have position-restrained the DNA, the DNA axis is
along Z, and I am applying an electric field also along +Z. Since the
protein is strongly positively charged, it moves along the DNA in the
direction of the field (along +Z).

Now when I extract the z coordinate of the central C_alpha atom of the
protein, I see, as expected, that the z coordinate increases with time in
each cycle (since my box is periodic in all directions). But when I
extract and plot the z component of the velocity of the same atom, it
fluctuates about 0. I don't understand how the z component of the velocity
can even be negative when the z coordinate shows a positive slope all the
time.
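This is actually expected: the field adds a tiny systematic drift on top of thermal velocities that are much larger, so the instantaneous velocity looks like noise around ~0 even though the position drifts steadily. A toy numerical illustration (all numbers are made up, not taken from the simulation):

```python
import random

random.seed(7)
dt      = 0.02   # ps
drift   = 0.05   # nm/ps: small systematic drift from the field
thermal = 2.0    # nm/ps: scale of the thermal velocity fluctuations

z, velocities = 0.0, []
for _ in range(100_000):
    v = drift + random.gauss(0.0, thermal)  # instantaneous velocity
    z += v * dt                             # position integrates the drift
    velocities.append(v)

negative_fraction = sum(v < 0 for v in velocities) / len(velocities)
# z ends up clearly positive, yet roughly half of the instantaneous
# velocity samples are negative
```

Only the long-time average of the velocity (or the slope of z(t)) reveals the drift; individual velocity samples are dominated by thermal motion.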

In order to extract the position and velocities, I used the following
commands :

gmx_mpi traj -f traj_comp.xtc -s md_run.tpr -n index1.ndx -ox coor-CB.xv

gmx_mpi traj -f traj.trr -s md_run.tpr -n index1.ndx -ov vel-CB.xvg

I hope that the commands are correct.

Below I have pasted my .mdp file

title   = NVT equilibration with position restraint on all solute
(topology modified)
; Run parameters
integrator  = sd; leap-frog integrator
nsteps  = 250   ; 1 * 50 = 500 ps
;nsteps  = 5000
dt  = 0.02  ; 20 fs
; Output control
nstxout = 1000  ; save coordinates every 20 ps
nstvout = 1000  ; save velocities every 20 ps
nstcalcenergy   = 50
nstenergy   = 1000  ; save energies every 20 ps
nstxtcout   = 2500
;nstxout-compressed  = 5000   ; save compressed coordinates every 1.0 ps
 ; nstxout-compressed replaces nstxtcout
;compressed-x-grps  = System  ; replaces xtc-grps
nstlog  = 1000  ; update log file every 20 ps
; Bond parameters
continuation= no   ; first dynamics run
constraint_algorithm = lincs ; holonomic constraints
constraints = none  ; no bonds constrained
;lincs_iter = 2 ; accuracy of LINCS
lincs_order = 4 ; also related to accuracy
epsilon_r   = 15
; Neighborsearching
cutoff-scheme   = Verlet
ns_type = grid  ; search neighboring grid cells
nstlist = 10; 0.2 ps
rvdw_switch = 1.0
rlist   = 1.2   ; short-range neighborlist cutoff (in nm)
rcoulomb= 1.2   ; short-range electrostatic cutoff (in nm)
rvdw= 1.2   ; short-range van der Waals cutoff (in nm)
vdwtype = Cut-off   ; Twin range cut-offs rvdw >= rlist
;vdw-modifier= Force-switch
;Electrostatics
coulombtype = PME   ; Particle Mesh Ewald for long-range
electrostatics
pme_order   = 4 ; cubic interpolation
fourierspacing  = 0.12  ; grid spacing for FFT

ld-seed = -1

; Temperature coupling is on
tcoupl  = v-rescale
tc_grps = System
tau_t   = 2.0
ref_t   = 300

;energygrps = DNA Protein W ION

;freezegrps = Frozen-DNA-atoms
;freezedim = Y Y Y

E-x = 1 0 1 ;
E-y = 1 0 1 ;
E-z = 1 0.28 1 ;

; Pressure coupling is off
pcoupl  = no; no pressure coupling in NVT
; Periodic boundary conditions
pbc = xyz   ; 3-D PBC
; Dispersion correction
DispCorr= no; account for cut-off vdW scheme
; Velocity generation
gen_vel = yes   ; assign velocities from Maxwell
distribution
gen_temp= 300   ; temperature for Maxwell distribution
gen_seed= -1; generate a random seed
; COM motion removal
; These options remove motion of the protein/bilayer relative to the
solvent/ions
nstcomm = 50
comm-mode   = Linear
comm-grps   = DNA_W
;
refcoord_scaling = all

Earlier I had "comm-grps = System". So I was thinking that, because it was
removing the center-of-mass velocity of the whole system, the protein
velocity I was getting was only due to thermal fluctuations, since the COM
removal cancels the velocity due to the electric field. Hence I now use
"comm-grps = DNA_W", where "DNA_W" is the group containing all the DNA and
water atoms, but when I plot the z component of the velocity of the central
C_alpha atom of the protein, it still fluctuates about 0, and I don't
understand why.

Can anyone please help me understand the problem here?

Thank you in advance,

Regards,

Arnab Mukherjee


Re: [gmx-users] How the "Fmax" is determined without "emtol" in the mdp file?

2019-01-18 Thread ZHANG Cheng
Thank you so much, Justin and Mark!




-- Original --
From:  "ZHANG Cheng"<272699...@qq.com>;
Date:  Fri, Jan 18, 2019 09:55 PM
To:  "gromacs.org_gmx-users";

Subject:  How the "Fmax" is determined without "emtol" in the mdp file?



I am doing an energy minimization in vacuum. There is no "emtol" in the mdp
file. The energy converges in the end, and the log tells me "Fmax < 10", as
shown below. So how is this "< 10" determined?


Steepest Descents converged to Fmax < 10 in 4063 steps
Potential Energy  = -2.3973977e+04
Maximum force =  9.8130465e+00 on atom 405
Norm of force =  1.5696382e+00







My mdp file is:


integrator   = steep
nsteps   = 1 
nstxout  = 0
nstfout  = 0
nstlog   = 100 


cutoff-scheme= Verlet
nstlist  = 20
ns_type  = grid
pbc  = xyz
verlet-buffer-tolerance  = 0.005


coulombtype  = reaction-field 
rcoulomb = 1.1
epsilon_r= 15; 2.5 (with polarizable water)
epsilon_rf   = 0
vdw_type = cutoff  
vdw-modifier = Potential-shift-verlet
rvdw = 1.1


Re: [gmx-users] How the "Fmax" is determined without "emtol" in the mdp file?

2019-01-18 Thread Mark Abraham
Hi,

There are defaults, and they are documented online, e.g.
http://manual.gromacs.org/documentation/current/user-guide/mdp-options.html#energy-minimization
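The minimizer settings can also be written out explicitly; a sketch of the relevant .mdp lines using the documented default values:

```
integrator = steep
emtol      = 10.0   ; stop when Fmax < 10.0 kJ mol^-1 nm^-1 (the default)
emstep     = 0.01   ; initial step size in nm (the default)
```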

Mark

On Fri, Jan 18, 2019 at 2:56 PM ZHANG Cheng <272699...@qq.com> wrote:

> I am doing an energy minimization in vacuum. There is no "emtol" in the
> mdp file. The energy converges in the end, and the log tells me "Fmax
> < 10", as shown below. So how is this "< 10" determined?
>
>
> Steepest Descents converged to Fmax < 10 in 4063 steps
> Potential Energy  = -2.3973977e+04
> Maximum force =  9.8130465e+00 on atom 405
> Norm of force =  1.5696382e+00
>
>
>
>
>
>
>
> My mdp file is:
>
>
> integrator   = steep
> nsteps   = 1
> nstxout  = 0
> nstfout  = 0
> nstlog   = 100
>
>
> cutoff-scheme= Verlet
> nstlist  = 20
> ns_type  = grid
> pbc  = xyz
> verlet-buffer-tolerance  = 0.005
>
>
> coulombtype  = reaction-field
> rcoulomb = 1.1
> epsilon_r= 15; 2.5 (with polarizable water)
> epsilon_rf   = 0
> vdw_type = cutoff
> vdw-modifier = Potential-shift-verlet
> rvdw = 1.1


Re: [gmx-users] How the "Fmax" is determined without "emtol" in the mdp file?

2019-01-18 Thread Justin Lemkul




On 1/18/19 8:55 AM, ZHANG Cheng wrote:

I am doing an energy minimization in vacuum. There is no "emtol" in the mdp
file. The energy converges in the end, and the log tells me "Fmax < 10", as
shown below. So how is this "< 10" determined?


Every keyword that requires a numerical setting has a default value. See 
the manual. Not every default is appropriate in all cases.


-Justin



Steepest Descents converged to Fmax < 10 in 4063 steps
Potential Energy  = -2.3973977e+04
Maximum force =  9.8130465e+00 on atom 405
Norm of force =  1.5696382e+00







My mdp file is:


integrator   = steep
nsteps   = 1
nstxout  = 0
nstfout  = 0
nstlog   = 100


cutoff-scheme= Verlet
nstlist  = 20
ns_type  = grid
pbc  = xyz
verlet-buffer-tolerance  = 0.005


coulombtype  = reaction-field
rcoulomb = 1.1
epsilon_r= 15; 2.5 (with polarizable water)
epsilon_rf   = 0
vdw_type = cutoff
vdw-modifier = Potential-shift-verlet
rvdw = 1.1


--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] How the "Fmax" is determined without "emtol" in the mdp file?

2019-01-18 Thread ZHANG Cheng
I am doing an energy minimization in vacuum. There is no "emtol" in the mdp
file. The energy converges in the end, and the log tells me "Fmax < 10", as
shown below. So how is this "< 10" determined?


Steepest Descents converged to Fmax < 10 in 4063 steps
Potential Energy  = -2.3973977e+04
Maximum force =  9.8130465e+00 on atom 405
Norm of force =  1.5696382e+00







My mdp file is:


integrator   = steep
nsteps   = 1 
nstxout  = 0
nstfout  = 0
nstlog   = 100 


cutoff-scheme= Verlet
nstlist  = 20
ns_type  = grid
pbc  = xyz
verlet-buffer-tolerance  = 0.005


coulombtype  = reaction-field 
rcoulomb = 1.1
epsilon_r= 15; 2.5 (with polarizable water)
epsilon_rf   = 0
vdw_type = cutoff  
vdw-modifier = Potential-shift-verlet
rvdw = 1.1


Re: [gmx-users] perl water_deletor.pl

2019-01-18 Thread Quyen Vu Van
Hi,
I think you can write your own script to delete them, by defining the center
of the liposome and searching for water molecules within the hydrophobic
radius of that center.
I suggest the awk scripting language.
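As a rough sketch of that approach (shown in Python rather than awk; the atom records are hypothetical, and in practice you would parse the .gro coordinates):

```python
import math

def waters_to_keep(atoms, center, r_hydrophobic):
    """Residue ids of SOL molecules whose oxygen lies outside the
    hydrophobic radius measured from the liposome center."""
    keep = set()
    for resid, name, xyz in atoms:
        if name == "OW" and math.dist(xyz, center) > r_hydrophobic:
            keep.add(resid)
    return keep

# made-up records: (resid, atom name, (x, y, z) in nm)
atoms = [
    (1, "OW", (0.2, 0.0, 0.0)),  # inside the hydrophobic core -> delete
    (2, "OW", (3.0, 0.0, 0.0)),  # outside -> keep
    (3, "OW", (0.0, 2.5, 0.0)),  # outside -> keep
]
keep = waters_to_keep(atoms, (0.0, 0.0, 0.0), 2.0)
```

The kept residue ids can then be written to an index file and passed to gmx trjconv to produce the stripped structure.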
Best,
Quyen

On Fri, Jan 18, 2019 at 9:30 AM rabee khorram 
wrote:

> *Hello everyone,*
> *I am running a liposome structure with GROMACS 5.*
> *This liposome was created with the Packmol software (without water
> molecules).*
> *After solvating the liposome with water in GROMACS, I need to remove the
> waters from the hydrophobic region.*
>
> *Is there any "perl water_deletor.pl" script
> for this structure?*


[gmx-users] How to visualise the dodecahedron in Pymol or VMD?

2019-01-18 Thread ZHANG Cheng
I use


gmx editconf -f protein.pdb -d 5 -bt dodecahedron -o protein.gro


to put the protein in a dodecahedron.


However, when I open protein.gro in PyMOL and type "show cell", only a
triclinic box is shown.


So how can I visualise the dodecahedron in PyMOL or VMD?
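For what it's worth, the triclinic cell is the dodecahedron: GROMACS stores every box as a triclinic matrix, and the rhombic dodecahedron corresponds to a particular set of box vectors, so PyMOL's "show cell" is drawing an equivalent representation of the same periodic cell. A sketch of the box vectors given in the GROMACS manual for image distance d:

```python
import math

d = 1.0  # distance between periodic images, nm
# rhombic dodecahedron (square in xy) expressed as a triclinic box:
a = (d, 0.0, 0.0)
b = (0.0, d, 0.0)
c = (0.5 * d, 0.5 * d, math.sqrt(2.0) / 2.0 * d)

# the box matrix is lower-triangular, so the volume is the product
# of the diagonal elements
volume = a[0] * b[1] * c[2]
ratio = volume / d ** 3  # ~0.71 of the volume of a cubic box with edge d
```

To see the dodecahedral shape of the solvent itself, one common trick is to run gmx trjconv with -ur compact -pbc mol before visualization, which puts all atoms into the compact (dodecahedral) unit cell.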


[gmx-users] perl water_deletor.pl

2019-01-18 Thread rabee khorram
*Hello everyone,*
*I am running a liposome structure with GROMACS 5.*
*This liposome was created with the Packmol software (without water
molecules).*
*After solvating the liposome with water in GROMACS, I need to remove the
waters from the hydrophobic region.*

*Is there any "perl water_deletor.pl" script
for this structure?*