Re: [gmx-users] question on ffG43a1p force field

2019-08-23 Thread Justin Lemkul




On 8/23/19 2:27 AM, Lei Qian wrote:

Thank you Dr. Lemkul,
I continued using the GROMOS 43a1p force field for my simulations.

I ran simulations for two proteins separately: one is the WT, the other is
its single-residue mutant.
I finished EM, NVT, NPT, and 1 ns production (four steps) for both proteins.

However, for each of these four steps the wall time was much longer for the
mutant than for the WT protein.
I used the same set of parameters for both proteins, e.g. the same .mdp
files for both proteins in each step.
Both proteins gave acceptable results after the 100-step NVT, 100-step NPT,
etc., but the wall time for the mutant was much longer than for the WT.
Could I ask the reason for this?
Sorry for the inconvenience. Thank you for your time and all your help!


There is no way to know what is going on without seeing the .log files 
from the runs and knowing the commands you gave. If you want to share 
files, upload them to a file-sharing server and provide a link. The 
mailing list does not accept attachments.
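
As a first check on your side, the timing summary printed at the end of
each .log file (the "Time:" and "Performance:" lines) already shows where
the extra wall time goes. A minimal way to pull those lines out, assuming
the logs are kept as wt/*.log and mutant/*.log (the paths are only an
illustration):

for f in wt/*.log mutant/*.log; do
    echo "== $f"
    tail -n 30 "$f" | grep -E "Time:|Performance:"
done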


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Performance, gpu

2019-08-23 Thread Alex
Dear Gromacs user,
Using the machine configuration and the command below, I tried to simulate
a system of 479K atoms (mainly water) on CPU+GPU; the performance is around
1 ns per hour.
Given this information and the log file shared below, I would appreciate
any comments on the submission command to improve performance by making
better use of the GPU and CPU.

%
#PBS -l select=4:ncpus=22:mpiprocs=22:ngpus=1
export OMP_NUM_THREADS=4

aprun -n 88 gmx_mpi mdrun -deffnm out -s out.tpr -g out.log -v -dlb yes
-gcom 1 -nb gpu -npme 44 -ntomp 4 -ntomp_pme 6 -tunepme yes

Running on 4 nodes with total 88 cores, 176 logical cores, 4 compatible GPUs
  Cores per node:   22
  Logical cores per node:   44
  Compatible GPUs per node:  1
  All nodes have identical type(s) of GPUs

%
GROMACS version:2018.1
Precision:  single
Memory model:   64 bit
MPI library:MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  AVX2_256
FFT library:commercial-fftw-3.3.6-pl1-fma-sse2-avx-avx2-avx2_128
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  hwloc-1.11.0
Tracing support:disabled
Built on:   2018-09-12 20:34:33
Built by:   
Build OS/arch:  Linux 3.12.61-52.111-default x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
Build CPU family:   6   Model: 79   Stepping: 1
Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma hle htt
intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd
rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler: /opt/cray/pe/craype/2.5.13/bin/cc GNU 5.3.0
C compiler flags:-march=core-avx2 -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /opt/cray/pe/craype/2.5.13/bin/CC GNU 5.3.0
C++ compiler flags:  -march=core-avx2-std=c++11   -O3 -DNDEBUG
-funroll-all-loops -fexcess-precision=fast
CUDA compiler:
/opt/nvidia/cudatoolkit8.0/8.0.61_2.3.13_g32c34f9-2.1/bin/nvcc nvcc: NVIDIA
(R) Cuda compiler driver;Copyright (c) 2005-2016 NVIDIA Corporation;Built
on Tue_Jan_10_13:22:03_CST_2017;Cuda compilation tools, release 8.0, V8.0.61
CUDA compiler
flags:-gencode;arch=compute_60,code=sm_60;-use_fast_math;-Wno-deprecated-gpu-targets;;;
;-march=core-avx2;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:9.20
CUDA runtime:   8.0
%-
Log file:
https://drive.google.com/open?id=1-myQ5rP85UWKb1262QDPa6kYhuzHPzLu
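
For comparison, one launch layout that avoids oversubscribing the cores (in
the command above, 22 ranks x 4 OpenMP threads per node is more threads than
the 44 logical cores available per node) is sketched below. The rank and
thread counts and the flag choices are assumptions for this machine, not
benchmarked settings; the performance notes mdrun prints at the end of the
log are the better guide for this system.

#PBS -l select=4:ncpus=22:mpiprocs=22:ngpus=1
export OMP_NUM_THREADS=2

# 22 ranks/node x 2 threads = 44 logical cores/node; -npme is omitted so
# mdrun chooses the PME rank split instead of dedicating half the ranks
aprun -n 88 -N 22 -d 2 gmx_mpi mdrun -deffnm out -s out.tpr -g out.log \
      -dlb yes -nb gpu -ntomp 2 -pin on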

Thank you,
Alex


Re: [gmx-users] question on ffG43a1p force field

2019-08-23 Thread Lei Qian
Thank you Dr. Lemkul,
I continued using the GROMOS 43a1p force field for my simulations.

I ran simulations for two proteins separately: one is the WT, the other is
its single-residue mutant.
I finished EM, NVT, NPT, and 1 ns production (four steps) for both proteins.

However, for each of these four steps the wall time was much longer for the
mutant than for the WT protein.
I used the same set of parameters for both proteins, e.g. the same .mdp
files for both proteins in each step.
Both proteins gave acceptable results after the 100-step NVT, 100-step NPT,
etc., but the wall time for the mutant was much longer than for the WT.
Could I ask the reason for this?
Sorry for the inconvenience. Thank you for your time and all your help!
Lei

On Tue, Aug 20, 2019 at 8:23 AM Justin Lemkul  wrote:

>
>
> On 8/20/19 2:11 AM, Lei Qian wrote:
> > Thank you Dr. Lemkul,
> > Could I ask one more question? Thank you!
> >
> > When I did the ion-adding step and the minimization and equilibration
> > steps, one warning always showed up.
> > So I had to add -maxwarn 2 after the command gmx grompp.
> > This warning is as follows:
> >
> > WARNING 1 [file topol.top, line 48]:
> >   The GROMOS force fields have been parametrized with a physically
> >   incorrect multiple-time-stepping scheme for a twin-range cut-off. When
> >   used with a single-range cut-off (or a correct Trotter
> >   multiple-time-stepping scheme), physical properties, such as the
> >   density, might differ from the intended values. Check if molecules in
> >   your system are affected by such issues before proceeding. Further
> >   information may be available at
> >   https://redmine.gromacs.org/issues/2884.
> >
> > It seems this warning is related to the GROMOS force field (for
> > phosphorylation) you sent to me last week.
> > Could I disregard this warning?
>
> There are significant concerns about the reproducibility of GROMOS force
> fields. The authors of a recent study (
> https://pubs.acs.org/doi/10.1021/acs.jctc.8b00425) allege that GROMACS
> has bugs that affect results, but the GROMACS developers maintain that
> the GROMOS force fields were developed with incorrect algorithms in the
> GROMOS software (hence the warning, and see the related Redmine issue
> linked in the message).
>
> The issue is not specifically related to 43a1p (which is in any case an
> extremely old force field), but to all of the GROMOS parameter sets.
>
> Proceed with caution. There are other force field options available that
> have been confirmed to work as expected across different software.
>
> -Justin
>
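
For reference, the "single-range cut-off" mentioned in the warning quoted
above corresponds to the standard single-cutoff Verlet scheme in recent
GROMACS versions; a minimal illustrative .mdp fragment (the cut-off values
here are generic assumptions, not recommendations for any particular
system) would be:

cutoff-scheme           = Verlet
verlet-buffer-tolerance = 0.005
rcoulomb                = 1.4
rvdw                    = 1.4
coulombtype             = PME
vdwtype                 = Cut-off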