[gmx-users] Protein atomic charges modeling question

2018-04-10 Thread Thanh Le
Is there a program or algorithm for calculating the atomic charges of amino acid 
side chains (R groups) in a protein under various conditions (i.e., ligand 
binding partners, solutions of varying pH, hydrophobic pockets, etc.)?
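Not a full answer, but for the pH part of the question one common workflow is to predict per-residue pKa values first (e.g. with PROPKA or the H++ server) and then encode the resulting protonation states when building the GROMACS topology. A minimal sketch, assuming a hypothetical input file protein.pdb; the interactive flags make pdb2gmx prompt for the state of each titratable residue:

    # protein.pdb and the output names are placeholders
    gmx pdb2gmx -f protein.pdb -o processed.gro -p topol.top -water spce \
                -asp -glu -his -lys -arg

Ligand partial charges are a separate problem and are usually derived outside GROMACS (e.g. RESP or AM1-BCC fits) before the topology is assembled.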


[gmx-users] Question about gmx bar

2018-02-27 Thread Thanh Le

Hi Justin,
I assume that, in order to get Coulomb energy for the first 10 stages, I should 
change the coul-lambdas row to match the bonded-lambdas row (0.0 0.01 0.025 
0.05 0.075 0.1 0.2 0.35 0.5 0.75 1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.0 1.0 1.0 1.0 
1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0).
Is there another way to do it? Is there a paper I can read for more details 
on the subject?
Thanks,
Thanh Le



On 2/27/18 4:13 PM, Thanh Le wrote:
> Hi everyone,
> I just finished running 2 sets of simulations (ligand in water and RNA+ligand 
> in water) to learn about the binding energy of my system. Using the 
> parameters for the BAR method, I ran 20 simulations for ligand in water and 30 
> simulations for RNA+ligand in water with different lambda stages. What is 
> interesting/confusing to me is the zero energy from stages 0-10. Based on what I 
> have been reading and the setup of my prod.mdp, these stages should give 
> Coulomb energy. If you can tell me why, or how to fix it, I would greatly 
> appreciate your help.

You've defined only a bonded transformation in those stages:

bonded-lambdas = 0.0 0.01 0.025 0.05 0.075 0.1 0.2 0.35 0.5 0.75 1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
coul-lambdas   = 0.0 0.00 0.000 0.00 0.000 0.0 0.0 0.00 0.0 0.00 0.0 0.25 0.5 0.75 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.00 1.0 1.00 1.0 1.00 1.0 1.00 1.0
vdw-lambdas    = 0.0 0.00 0.000 0.00 0.000 0.0 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.00 0.0 0.05 0.1 0.2 0.3 0.4 0.5 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1.0

Your electrostatic terms remain in state A (lambda = 0) and there is no 
difference in bonded energies, so the first windows are essentially doing 
nothing.

-Justin
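A minimal sketch, not taken from this thread, of what a schedule that actually transforms the electrostatics in the early windows could look like. The values are illustrative only (here 21 windows: charges switched over the first 10 intervals, van der Waals over the rest, with soft-core assumed for the vdW part):

    coul-lambdas = 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
    vdw-lambdas  = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
    sc-alpha     = 0.5   ; soft-core, usually needed once vdW sites start to disappear
    sc-power     = 1

With a schedule like this every adjacent pair of windows differs in at least one lambda component, so gmx bar has a non-zero difference to evaluate in each interval.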


[gmx-users] Question about gmx bar

2018-02-27 Thread Thanh Le
Hi everyone, 
I just finished running 2 sets of simulations (ligand in water and RNA+ligand 
in water) to learn about the binding energy of my system. Using the parameters 
for the BAR method, I ran 20 simulations for ligand in water and 30 simulations for 
RNA+ligand in water with different lambda stages. What is interesting/confusing 
to me is the zero energy from stages 0-10. Based on what I have been reading and 
the setup of my prod.mdp, these stages should give Coulomb energy. If you can 
tell me why, or how to fix it, I would greatly appreciate your help.
point  0 -  1,   DG  0.00 +/-  0.00
point  1 -  2,   DG  0.00 +/-  0.00
point  2 -  3,   DG  0.00 +/-  0.00
point  3 -  4,   DG  0.00 +/-  0.00
point  4 -  5,   DG  0.00 +/-  0.00
point  5 -  6,   DG  0.00 +/-  0.00
point  6 -  7,   DG  0.00 +/-  0.00
point  7 -  8,   DG  0.00 +/-  0.00
point  8 -  9,   DG  0.00 +/-  0.00
point  9 - 10,   DG  0.00 +/-  0.00
point 10 - 11,   DG 1198.03 +/- 11.41
point 11 - 12,   DG 711.32 +/-  1.58
point 12 - 13,   DG 258.19 +/-  3.80
point 13 - 14,   DG -116.21 +/-  3.81
point 14 - 15,   DG 24.24 +/-  0.17
point 15 - 16,   DG 25.40 +/-  0.18
point 16 - 17,   DG 51.57 +/-  0.42
point 17 - 18,   DG 51.20 +/-  0.57
point 18 - 19,   DG 48.42 +/-  0.97
point 19 - 20,   DG 46.55 +/-  0.56
point 20 - 21,   DG 44.53 +/-  0.23
point 21 - 22,   DG 20.56 +/-  0.12
point 22 - 23,   DG 18.23 +/-  0.11
point 23 - 24,   DG 13.64 +/-  0.16
point 24 - 25,   DG -3.58 +/-  0.84
point 25 - 26,   DG -44.60 +/-  0.29
point 26 - 27,   DG -33.45 +/-  0.07
point 27 - 28,   DG -15.22 +/-  0.02
point 28 - 29,   DG  0.81 +/-  0.12

total  0 - 29,   DG 2299.63 +/-  5.17

Here is the prod.mdp for complex:

;
; Production simulation
;

;
; RUN CONTROL
;
integrator   = sd        ; stochastic leap-frog integrator
nsteps       = 500000    ; 500,000 steps * 2 fs = 1,000 ps = 1 ns
dt           = 0.002     ; 2 fs
comm-mode    = Linear    ; remove center of mass translation
nstcomm      = 100       ; frequency for center of mass motion removal

;
; OUTPUT CONTROL
;
nstxout                = 0      ; don't save coordinates to .trr
nstvout                = 0      ; don't save velocities to .trr
nstfout                = 0      ; don't save forces to .trr
nstxout-compressed     = 1000   ; compressed (.xtc) trajectory output every 1000 steps (2 ps)
compressed-x-precision = 1000   ; precision with which to write to the compressed trajectory file
nstlog                 = 1000   ; update log file every 2 ps
nstenergy              = 1000   ; save energies every 2 ps
nstcalcenergy          = 100    ; calculate energies every 100 steps

;
; BONDS
;
constraint_algorithm   = lincs      ; holonomic constraints
constraints            = all-bonds  ; all bonds (not only those to hydrogens) are constrained
lincs_iter             = 1          ; accuracy of LINCS (1 is default)
lincs_order            = 4          ; also related to accuracy (4 is default)
lincs-warnangle        = 30         ; maximum angle that a bond can rotate before LINCS will complain (30 is default)
continuation           = yes        ; formerly known as 'unconstrained-start' - useful for exact continuations and reruns

;
; NEIGHBOR SEARCHING
;
cutoff-scheme   = Verlet
ns-type = grid   ; search neighboring grid cells
nstlist = 10 ; 20 fs (default is 10)
rlist   = 1.0; short-range neighborlist cutoff (in nm)
pbc = xyz; 3D PBC

;
; ELECTROSTATICS
;
coulombtype  = PME  ; Particle Mesh Ewald for long-range electrostatics
rcoulomb = 1.0  ; short-range electrostatic cutoff (in nm)
ewald_geometry   = 3d   ; Ewald sum is performed in all three dimensions
pme-order        = 6    ; interpolation order for PME (default is 4)
fourierspacing   = 0.10 ; grid spacing for FFT
ewald-rtol       = 1e-6 ; relative strength of the Ewald-shifted direct potential at rcoulomb

;
; VDW
;
vdw-type     = PME
rvdw         = 1.0
vdw-modifier = 

[gmx-users] MBAR bootstrap

2018-01-09 Thread Thanh Le
Hi everyone, 
I have a question regarding using bootstrap for MBAR.
After running 30 windows for the complex and 20 windows for the ligand, 100 ns 
each, I am having trouble doing the bootstrap (based on what I have read, 
bootstrap sampling is recommended for analyzing MBAR results).
Can anyone point me in the right direction to implement the bootstrap? Or give any 
advice on how to analyze MBAR results?
Thanks,
Thanh Le


[gmx-users] Problems running on multiple nodes

2017-10-23 Thread Thanh Le
Hi everyone,

This is my first time running GROMACS on multiple nodes. Currently, I don't 
quite understand the output generated by my run. Could you please take a look at 
the script and the output and tell me how to improve them?

The HPC I am currently using has 72 nodes; each node has 28 CPUs. 

The script is: 

#!/bin/bash
#SBATCH --job-name=Gromacs78
#SBATCH -o Gromacs_result.out
#SBATCH -n 140 -N 5
#SBATCH --tasks-per-node=28

module purge
module load gromacs-mvapich2-2.2 mvapich2-2.2/gnu-4.8.5
source /opt/gromacs/bin/GMXRC

dm=/home/blustig/perl5/simulation/78
dmdp=${dm}/mdpfiles
vt=rna-protein
dw=${dm}/${vt}
mkdir ${dw}
cd ${dw}

### produce 100ns mdrun: 1st trajectory
echo "0" > inputall
trj=1
let tm=trj*20
vp=md_npt

gmx trjconv -s md_npt.tpr -f md_npt.xtc -pbc mol -ur compact -o md_npt_trj20ps.gro < inputall

gmx grompp -f ${dmdp}/md.mdp -c md_npt.gro -t md_npt.cpt -p rna-protein.top -n rna-protein.ndx -o md1micros.tpr -maxwarn 1

gmx mdrun -ntmpi 140 -pin on -s md1micros.tpr -o md1micros.trr -e md1micros.edr -g md1micros.log -c md1micros.gro -x md1micros.xtc -cpo md1micros.cpt

 

The output is: 

Back Off! I just backed up md1micros.log to ./#md1micros.log.14#

NOTE: Error occurred during GPU detection:
  CUDA driver version is insufficient for CUDA runtime version
  Can not use GPU acceleration, will fall back to CPU kernels.

Running on 1 node with total 28 cores, 28 logical cores, 0 compatible GPUs
Hardware detected:
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
    SIMD instructions most likely to fit this hardware: AVX2_256
    SIMD instructions selected at GROMACS compile time: AVX2_256
  Hardware topology: Basic

Reading file md1micros.tpr, VERSION 2016.3 (single precision)
Changing nstlist from 10 to 25, rlist from 1.4 to 1.435
Will use 120 particle-particle and 20 PME only ranks
This is a guess, check the performance at the end of the log file
Using 140 MPI threads
Using 1 OpenMP thread per tMPI thread

NOTE: Oversubscribing a CPU, will not pin threads.
NOTE: Thread affinity setting failed. This can cause performance degradation.
  If you think your settings are correct, ask on the gmx-users list.

Back Off! I just backed up md1micros.xtc to ./#md1micros.xtc.12#
Back Off! I just backed up md1micros.edr to ./#md1micros.edr.12#

WARNING: This run will generate roughly 12227 Mb of data

starting mdrun 'Protein in water'
5 steps, 100.0 ps.

step 87500 Turning on dynamic load balancing, because the performance loss due 
to load imbalance is 2.2 %.

 

I don’t understand why it is taking quite a long time to run.

Any advice is greatly appreciated.

Thanks,

Thanh Le
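For reference, a hedged sketch of a launch that could actually spread the job across the 5 requested nodes: -ntmpi starts thread-MPI ranks, which cannot leave a single node (hence the log above reporting 1 node, 140 MPI threads, and oversubscription), so a real-MPI binary started by the scheduler is needed instead. The binary name gmx_mpi and the use of srun are assumptions that depend on how GROMACS was built and on the site's MPI setup:

    #SBATCH -N 5
    #SBATCH --ntasks-per-node=28

    module load gromacs-mvapich2-2.2 mvapich2-2.2/gnu-4.8.5
    srun -n 140 gmx_mpi mdrun -s md1micros.tpr -deffnm md1micros -cpo md1micros.cpt

Whether 140 ranks is actually the best decomposition for this system is a separate question; the performance table at the end of the .log file is the place to check.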

Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 158, Issue 186

2017-06-29 Thread Thanh Le
Hi Mr. Abraham.
My system is quite small, only about 8000 atoms. I have run this system for 100 
ns, which took roughly 2 days; hence, a 1-microsecond run would take about 20 
days. I am trying to shorten that to about 2 days by using more than one node.
Thanks,
Thanh Le
> On Jun 29, 2017, at 3:10 PM, 
> gromacs.org_gmx-users-requ...@maillist.sys.kth.se wrote:
> 
>> http://www.gromacs.org/Support/Mailing_Lists


[gmx-users] Running MD jobs using slurm on multiple nodes

2017-06-29 Thread Thanh Le
Hi all, 
I am quite new to running MD jobs with Slurm on multiple nodes. What confuses 
me is the creation of the Slurm script: I don't quite understand what inputs I 
should use to run efficiently.
Please teach me how to create a Slurm script and the mdrun command.
Here is the info on the HPC:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                28
On-line CPU(s) list:   0-27
Thread(s) per core:    1
Core(s) per socket:    14
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping:              1
CPU MHz:               1200.000
BogoMIPS:              4795.21
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-13
NUMA node1 CPU(s):     14-27
This HPC has about 40 nodes. I have my PI’s permission to use all nodes to run 
as many jobs as I want.
The GROMACS version is 2016.3.
I am looking forward to hearing from you guys.
Thanks,
Thanh Le
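A minimal sketch of one way such a script could look; the module names are copied from another thread in this archive, and the binary name gmx_mpi, the file names, and the node count are assumptions to adjust for the actual site:

    #!/bin/bash
    #SBATCH --job-name=md_run
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=28
    #SBATCH --time=48:00:00
    #SBATCH -o md_run.%j.out

    module purge
    module load gromacs-mvapich2-2.2 mvapich2-2.2/gnu-4.8.5

    # one MPI rank per core across both nodes
    srun -n 56 gmx_mpi mdrun -s topol.tpr -deffnm md_run -maxh 47.5

The -maxh flag makes mdrun stop cleanly just before the wall-clock limit, so the run can be continued from the checkpoint file in a follow-up job.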


[gmx-users] Running Gromacs in parallel

2016-09-20 Thread Thanh Le
Hi everyone,
I have a question concerning running GROMACS in parallel. I have read over 
http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html 
but I still don't quite understand how to run it efficiently.
My GROMACS version is 4.5.4.
The cluster I am using has 108 CPUs in total across 4 hosts that are up.
The node I am using:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             AuthenticAMD
CPU family:            21
Model:                 2
Stepping:              0
CPU MHz:               1400.000
BogoMIPS:              5200.57
Virtualization:        AMD-V
L1d cache:             16K
L1i cache:             64K
L2 cache:              2048K
L3 cache:              6144K
NUMA node0 CPU(s):     0-11
MPI is already installed. I also have permission to use the cluster as much as 
I can.
My question is: how should I write my mdrun command to utilize all the 
available cores and nodes?
Thanks,
Thanh Le
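A minimal sketch for a 4.5-era installation, where grompp and mdrun are separate executables and the MPI build is often installed under a name like mdrun_mpi (the binary name, file names, and rank count here are assumptions to adapt to the actual install):

    grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
    mpirun -np 12 mdrun_mpi -deffnm md

On a single 12-CPU node the non-MPI mdrun can also simply be run with -nt 12 (or with no -nt at all, in which case it uses every core it can see); spanning several of the 4 hosts requires the MPI build plus whatever hostfile or queueing mechanism the cluster uses for mpirun.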


[gmx-users] Run time for energy minimization, nvt, and npt equilibration

2016-08-01 Thread Thanh Le
> Hi guys, I have a question regarding the run time for the energy minimization,
> nvt and npt equilibration steps. Currently, I am doing an RNA-protein
> simulation (using ff amber14 and the spce water model) in a 10 A octahedron box
> containing almost 400k solute atoms.
> For the energy minimization step, I did it for 2 ns and received a

>There is no time in EM, only number of steps.
I used 1,000,000 steps for EM (I have always assumed 1,000,000 steps to be
2 ns).
> potential E of nearly -2.25e-7.

>Check your exponent :)  Surely your system does not have an energy of
>(effectively) zero.
You are correct, hahaha - I didn't even notice the error. My potential E is
actually -2.25e+7. Is that a reasonable number?

> I am also doing 2 ns for the nvt and npt equilibration steps.
> My question is: is 2 ns too long for these steps?

>You need to run long enough for the observable(s) of interest to converge.
>Have they?
The observables of interest? Do you mean temperature and density?
I am still running it, and it has been running for over 4 days (nvt run) on my
MacBook Pro.
> Another question regarding running simulations on a cluster: do I need
> to install GROMACS with MPI to run it in parallel? Is it necessary, since I read
> in the literature that GROMACS will use all the available cores if I
> don't specify the number of cores.

>Depends on the configuration of the cluster, how many cores you intend to use,
>etc.  Everything is here:
>http://www.gromacs.org/Documentation/Acceleration_and_parallelization
>Consult with your sysadmin for best performance on whatever hardware you have.
I will talk to my sysadmin about it.
Thanks,
Thanh Le
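Since EM is controlled by a step count and a force tolerance rather than a simulation time, a minimal sketch of the relevant minim.mdp lines (values are illustrative, not the poster's):

    integrator = steep    ; steepest-descent energy minimization
    emtol      = 1000.0   ; stop when the maximum force is below 1000 kJ mol^-1 nm^-1
    emstep     = 0.01     ; initial step size, nm
    nsteps     = 50000    ; upper bound on minimization steps; EM stops earlier once emtol is reached

Minimization normally converges (or stalls) long before a limit like 1,000,000 steps, so the step count mainly acts as a safety cap.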


Re: [gmx-users] md with multiple ligands

2016-06-30 Thread Thanh Le

> On Jun 30, 2016, at 3:48 PM, Thanh Le <thanh_le_q...@yahoo.com> wrote:
> 
> My name is Thanh Le, a graduate student in chemistry. Currently, I am using 
> GROMACS to run a dynamics simulation of my RNA-protein complex. I saw you 
> posted a question titled “Atoms in the .top are not numbered consecutively 
> from 1” on the GROMACS forum. I know it has been 3 years since you asked the 
> question, but I would like to know whether you solved the problem and, if so, 
> how you fixed this error.
> Hope to hear from you soon,
> Thanks,
> Thanh Le


[gmx-users] Any advice on finding the binding energy from RNA Protein interaction?

2016-04-06 Thread Thanh Le
Hi everyone,

I would like to calculate the binding energy / Gibbs free energy of my system 
(protein-RNA). I made an amino acid substitution in my protein chain, and I 
would like to calculate the new binding energy to see whether the complex is 
more or less stable.

I used SCWRL4 to make the amino acid substitution in the peptide.

If there are other ways to do this, please advise me.

Thanks,

Thanh Le