Re: [gmx-users] Excessive and gradually increasing memory usage with OpenCL

2018-03-28 Thread Mark Abraham
Hi,

Our own installation guide does advise against OpenCL on NVIDIA hardware,
and also hints that compiler compatibility depends on the CUDA version, but
I think we could make the latter clearer.

Last time we looked at the performance of OpenCL on NVIDIA, the GPU kernels
seemed to always run synchronously, providing no overlap with CPU tasks, so
the advice Szilárd gave applies mainly to the CUDA case. By far the best
tuning opportunity is to arrange to build with CUDA.
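
For what it's worth, the usual workaround for a "too new" system compiler is
to point the build at an older GCC explicitly. A rough, untested sketch with
placeholder paths (check the install guide for the exact CMake variables
your GROMACS and CUDA versions support):

  cmake .. -DGMX_GPU=on \
           -DCMAKE_C_COMPILER=/path/to/gcc-5/bin/gcc \
           -DCMAKE_CXX_COMPILER=/path/to/gcc-5/bin/g++ \
           -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda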

Mark

On Thu, Mar 29, 2018, 00:17 Albert Mao  wrote:

> Thank you for this workaround!
>
> Just setting the GMX_DISABLE_GPU_TIMING environment variable has
> allowed mdrun to progress for several million steps. The memory usage
> is still high at about 1 GB memory and 26 GB swap, but it does not
> appear to increase as the simulation progresses.
>
> I tried 6 ranks x 2 threads as well, but performance was unchanged. I
> think it's because the CPUs are spending time waiting for the GPUs;
> Mark's suggestion to switch to native CUDA would probably make a
> significant difference here. If this is an important recommendation,
> the Gromacs installation guide should probably link to
> http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html,
> which clarifies that even the latest release of CUDA does not come
> close to being compatible with the latest version of GCC.
>
> -Albert Mao
>
> On Tue, Mar 27, 2018 at 4:43 PM, Szilárd Páll 
> wrote:
> > Hi,
> >
> > This is an issue I noticed recently, but I thought it was only
> > affecting some use cases (or some runtimes). However, it seems it's a
> > broader problem. It is under investigation, but for now it seems that
> > you can eliminate it (or strongly diminish its effects) by turning off
> > GPU-side task timing. You can do that by setting the
> > GMX_DISABLE_GPU_TIMING environment variable.
> >
> > Note that this is a workaround that may turn out not to be a complete
> > solution, so please report back once you've done longer runs.
> >
> > Regarding the thread count, the MPI and CUDA runtimes can spawn
> > threads of their own; GROMACS itself certainly used 3 x 4 threads in
> > your case. Note that you will likely get better performance with 6
> > ranks x 2 threads (both because this avoids ranks spanning sockets and
> > because it allows GPU task/transfer overlap).
> >
> > Cheers,
> > --
> > Szilárd
> >
> >
> > On Tue, Mar 27, 2018 at 4:09 PM, Albert Mao 
> wrote:
> >> Hello!
> >>
> >> I'm trying to run molecular dynamics on a fairly large system
> >> containing approximately 25 atoms. The simulation runs well for
> >> about 10 steps and then gets killed by the queueing engine due to
> >> exceeding the swap space usage limit. The compute node I'm using has
> >> 12 cores in two sockets, three GPUs, and 8 GB of memory. I'm using
> >> GROMACS 2018 and allowing mdrun to delegate the workload
> >> automatically, resulting in three thread-MPI ranks each with one GPU
> >> and four OpenMP threads. The queueing engine reports the following
> >> usage:
> >>
> >> TERM_SWAP: job killed after reaching LSF swap usage limit.
> >> Exited with exit code 131.
> >> Resource usage summary:
> >> CPU time   :  50123.00 sec.
> >> Max Memory :  4671 MB
> >> Max Swap   : 30020 MB
> >> Max Processes  : 5
> >> Max Threads:35
> >>
> >> Even though it's a large system, by my rough estimate, the simulation
> >> should not need much more than 0.5 gigabytes of memory; 4.6 GB seems
> >> like too much and 30 GB is completely ridiculous.
> >> Indeed, running the system on a similar node without GPUs is working
> >> well (but slowly), consuming about 0.65 GB and 2 GB of swap.
> >>
> >> I also don't understand why 35 threads got created.
> >>
> >> Could there be a memory leak somewhere in the OpenCL code? Any
> >> suggestions on preventing this memory usage expansion would be greatly
> >> appreciated.
> >>
> >> I've included relevant output from mdrun with system and configuration
> >> information at the end of this message. I'm using OpenCL despite
> >> having Nvidia GPUs because of a sad problem where building with CUDA
> >> support fails due to the C compiler being "too new".
> >>
> >> Thanks!
> >> -Albert Mao
> >>
> >> GROMACS:  gmx mdrun, version 2018
> >> Executable:   /data/albertmaolab/software/gromacs/bin/gmx
> >> Data prefix:  /data/albertmaolab/software/gromacs
> >> Command line:
> >>
> >>   gmx mdrun -v -pforce 1 -s blah.tpr -deffnm blah -cpi blah.cpt
> >>
> >> GROMACS version:2018
> >> Precision:  single
> >> Memory model:   64 bit
> >> MPI library:thread_mpi
> >> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> >> GPU support:OpenCL
> >> SIMD instructions:  SSE4.1
> >> FFT library:fftw-3.2.1
> >> RDTSCP usage:   disabled
> >> TNG support:enabled
> >> Hwloc support:  hwloc-1.5.0
> >> Tracing support:disabled
> >> Built on:   2018-02-22 

Re: [gmx-users] Excessive and gradually increasing memory usage with OpenCL

2018-03-28 Thread Albert Mao
Thank you for this workaround!

Just setting the GMX_DISABLE_GPU_TIMING environment variable has
allowed mdrun to progress for several million steps. The memory usage
is still high, at about 1 GB of memory and 26 GB of swap, but it does not
appear to increase as the simulation progresses.
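
For the archive, the change amounted to nothing more than setting the
variable in the job script before launching mdrun, roughly (bash syntax
assumed; the mdrun options are just the ones from my earlier log):

  export GMX_DISABLE_GPU_TIMING=1
  gmx mdrun -v -s blah.tpr -deffnm blah -cpi blah.cpt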

I tried 6 ranks x 2 threads as well, but performance was unchanged. I
think that is because the CPUs are spending their time waiting for the GPUs;
Mark's suggestion to switch to native CUDA would probably make a
significant difference here.
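
(What I ran for that test was, roughly,

  gmx mdrun -ntmpi 6 -ntomp 2 -v -deffnm blah

letting mdrun assign GPUs to ranks itself; I believe GROMACS 2018 also has a
-gputasks option for mapping ranks to GPUs explicitly, but I have not tried
it.)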

If switching to CUDA is an important recommendation, the GROMACS
installation guide should probably link to
http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html,
which clarifies that even the latest release of CUDA does not come
close to being compatible with the latest version of GCC.

-Albert Mao

On Tue, Mar 27, 2018 at 4:43 PM, Szilárd Páll  wrote:
> Hi,
>
> This is an issue I noticed recently, but I thought it was only
> affecting some use cases (or some runtimes). However, it seems it's a
> broader problem. It is under investigation, but for now it seems that
> you can eliminate it (or strongly diminish its effects) by turning off
> GPU-side task timing. You can do that by setting the
> GMX_DISABLE_GPU_TIMING environment variable.
>
> Note that this is a workaround that may turn out not to be a complete
> solution, so please report back once you've done longer runs.
>
> Regarding the thread count, the MPI and CUDA runtimes can spawn
> threads of their own; GROMACS itself certainly used 3 x 4 threads in your
> case. Note that you will likely get better performance with 6 ranks x 2
> threads (both because this avoids ranks spanning sockets and because it
> allows GPU task/transfer overlap).
>
> Cheers,
> --
> Szilárd
>
>
> On Tue, Mar 27, 2018 at 4:09 PM, Albert Mao  wrote:
>> Hello!
>>
>> I'm trying to run molecular dynamics on a fairly large system
>> containing approximately 25 atoms. The simulation runs well for
>> about 10 steps and then gets killed by the queueing engine due to
>> exceeding the swap space usage limit. The compute node I'm using has
>> 12 cores in two sockets, three GPUs, and 8 GB of memory. I'm using
>> GROMACS 2018 and allowing mdrun to delegate the workload
>> automatically, resulting in three thread-MPI ranks each with one GPU
>> and four OpenMP threads. The queueing engine reports the following
>> usage:
>>
>> TERM_SWAP: job killed after reaching LSF swap usage limit.
>> Exited with exit code 131.
>> Resource usage summary:
>> CPU time   :  50123.00 sec.
>> Max Memory :  4671 MB
>> Max Swap   : 30020 MB
>> Max Processes  : 5
>> Max Threads:35
>>
>> Even though it's a large system, by my rough estimate, the simulation
>> should not need much more than 0.5 gigabytes of memory; 4.6 GB seems
>> like too much and 30 GB is completely ridiculous.
>> Indeed, running the system on a similar node without GPUs is working
>> well (but slowly), consuming about 0.65 GB and 2 GB of swap.
>>
>> I also don't understand why 35 threads got created.
>>
>> Could there be a memory leak somewhere in the OpenCL code? Any
>> suggestions on preventing this memory usage expansion would be greatly
>> appreciated.
>>
>> I've included relevant output from mdrun with system and configuration
>> information at the end of this message. I'm using OpenCL despite
>> having Nvidia GPUs because of a sad problem where building with CUDA
>> support fails due to the C compiler being "too new".
>>
>> Thanks!
>> -Albert Mao
>>
>> GROMACS:  gmx mdrun, version 2018
>> Executable:   /data/albertmaolab/software/gromacs/bin/gmx
>> Data prefix:  /data/albertmaolab/software/gromacs
>> Command line:
>>
>>   gmx mdrun -v -pforce 1 -s blah.tpr -deffnm blah -cpi blah.cpt
>>
>> GROMACS version:2018
>> Precision:  single
>> Memory model:   64 bit
>> MPI library:thread_mpi
>> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
>> GPU support:OpenCL
>> SIMD instructions:  SSE4.1
>> FFT library:fftw-3.2.1
>> RDTSCP usage:   disabled
>> TNG support:enabled
>> Hwloc support:  hwloc-1.5.0
>> Tracing support:disabled
>> Built on:   2018-02-22 07:25:43
>> Built by:   ah...@eris1pm01.research.partners.org [CMAKE]
>> Build OS/arch:  Linux 2.6.32-431.29.2.el6.x86_64 x86_64
>> Build CPU vendor:   Intel
>> Build CPU brand:Common KVM processor
>> Build CPU family:   15   Model: 6   Stepping: 1
>> Build CPU features: aes apic clfsh cmov cx8 cx16 intel lahf mmx msr
>> nonstop_tsc pcid pclmuldq pdpe1gb popcnt pse sse2 sse3 sse4.1 sse4.2
>> ssse3
>> C compiler: /data/albertmaolab/software/gcc/bin/gcc GNU 7.3.0
>> C compiler flags:-msse4.1 -O3 -DNDEBUG -funroll-all-loops
>> -fexcess-precision=fast
>> C++ compiler:   /data/albertmaolab/software/gcc/bin/g++ GNU 7.3.0
>> C++ compiler flags:  -msse4.1-std=c++11   -O3 -DNDEBUG
>> -funroll-all-loops 

[gmx-users] RDF beads

2018-03-28 Thread Alex
Dear all,

To obtain the coarse-grained (CG) parameters of a molecule using VOTCA, the
beads considered are listed below (each bead is weighted by atomic weights):
Bead A contains atoms a1, ..., a11
Bead B contains atoms b1, ..., b9
Bead C contains atoms c1, ..., c13

Now, to calculate the RDFs (A-A, A-B, A-C, B-C, B-B and C-C) in the
all-atom GROMACS system, would you please confirm that I have to group the
atoms a1, ..., a11 in index.ndx (gmx make_ndx) in a group called A (and
likewise for the other beads), and that with gmx rdf -ref A -sel B ... I can
then calculate an RDF comparable to what I get in CG?
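
For concreteness, what I have in mind is roughly the following (interactive
make_ndx input sketched after the ">" prompts; the atom names and the group
number are placeholders for my real ones):

  gmx make_ndx -f conf.gro -o index.ndx
    > a a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11
    > name 10 A
    > q
  gmx rdf -f traj.xtc -s topol.tpr -n index.ndx -ref A -sel B -o rdf_AB.xvg

(If a centre-of-mass based RDF is needed to match the CG beads more closely,
I think the -selrpos/-seltype options of gmx rdf are the place to look, but
I have not checked.)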

One more question:

Since the molecule I described above is a long polymer that is hard to
move, and my all-atom MD system contains 300 of these molecules, I did not
run one long simulation (25 ns); instead I ran several short MD simulations
(5 * 5 ns), each starting from a different initial conf.gro (but all
containing the 300 molecules). How can I calculate the physical quantities
of the system from those simulations? For example, for the RDF, can I
average the RDFs of the 5 short simulations to get the final RDF?
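
For the RDF specifically, I suppose an alternative would be to concatenate
the trajectories and let gmx rdf average over all frames in one go, along
these lines (file names are placeholders; if I read the help text correctly,
-cat keeps all frames even though each run restarts at t = 0):

  gmx trjcat -f run1.xtc run2.xtc run3.xtc run4.xtc run5.xtc -cat -o all.xtc
  gmx rdf -f all.xtc -s topol.tpr -n index.ndx -ref A -sel A -o rdf_AA.xvg

Would that be equivalent to averaging the five separate RDFs?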

Thank you.
Best regards,
Alex
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Performance

2018-03-28 Thread Myunggi Yi
Does it work?

https://drive.google.com/open?id=1n5m1tNGbnV7oZnuAEgZ7gSP6qA6HluNl

How about this?


Myunggi Yi

On Wed, Mar 28, 2018 at 12:20 PM, Mark Abraham 
wrote:

> Hi,
>
> Attachments can't be accepted on the list - please upload to a file sharing
> service and share links to those.
>
> Mark
>
> On Wed, Mar 28, 2018 at 6:16 PM Myunggi Yi  wrote:
>
> > I am attaching the file.
> >
> > Thank you.
> >
> > Myunggi Yi
> >
> > On Wed, Mar 28, 2018 at 11:40 AM, Szilárd Páll 
> > wrote:
> >
> > > Again, please share the exact log files / description of inputs. What
> > > does "bad performance" mean?
> > > --
> > > Szilárd
> > >
> > >
> > > On Wed, Mar 28, 2018 at 5:31 PM, Myunggi Yi 
> > wrote:
> > > > Dear users,
> > > >
> > > > I have two questions.
> > > >
> > > >
> > > > 1. I used to run typical simulations with the following command.
> > > >
> > > > gmx mdrun -deffnm md
> > > >
> > > > I had no problem.
> > > >
> > > >
> > > > Now I am running a simulation with "Dry_Martini" FF with the
> following
> > > > input.
> > > >
> > > >
> > > > integrator   = sd
> > > > tinit= 0.0
> > > > dt   = 0.040
> > > > nsteps   = 100
> > > >
> > > > nstlog   = 5000
> > > > nstenergy= 5000
> > > > nstxout-compressed   = 5000
> > > > compressed-x-precision   = 100
> > > >
> > > > cutoff-scheme= Verlet
> > > > nstlist  = 10
> > > > ns_type  = grid
> > > > pbc  = xyz
> > > > verlet-buffer-tolerance  = 0.005
> > > >
> > > > epsilon_r= 15
> > > > coulombtype  = reaction-field
> > > > rcoulomb = 1.1
> > > > vdw_type = cutoff
> > > > vdw-modifier = Potential-shift-verlet
> > > > rvdw = 1.1
> > > >
> > > > tc-grps  = system
> > > > tau_t= 4.0
> > > > ref_t= 310
> > > >
> > > > ; Pressure coupling:
> > > > Pcoupl   = no
> > > >
> > > > ; GENERATE VELOCITIES FOR STARTUP RUN:
> > > > gen_vel  = yes
> > > > gen_temp = 310
> > > > gen_seed = 1521731368
> > > >
> > > >
> > > >
> > > > If I use the same command to submit the job.
> > > > I got the following error. I don't know why.
> > > >
> > > > ---
> > > > Program: gmx mdrun, version 2018.1
> > > > Source file: src/gromacs/taskassignment/resourcedivision.cpp (line
> 224)
> > > >
> > > > Fatal error:
> > > > When using GPUs, setting the number of OpenMP threads without
> > specifying
> > > the
> > > > number of ranks can lead to conflicting demands. Please specify the
> > > number
> > > > of
> > > > thread-MPI ranks as well (option -ntmpi).
> > > >
> > > > For more information and tips for troubleshooting, please check the
> > > GROMACS
> > > > website at http://www.gromacs.org/Documentation/Errors
> > > > ---
> > > >
> > > >
> > > > So I did run simulation with the following command.
> > > >
> > > >
> > > > gmx mdrun -deffnm md -ntmpi 1
> > > >
> > > >
> > > > Now the performance is extremely bad.
> > > > Since yesterday, the log file still reporting the first step's
> energy.
> > > >
> > > > 2. This is the second question. Why?
> > > >
> > > > Can anyone help?
> > > >
> > > >
> > > > Myunggi Yi

[gmx-users] Problems in putting restraints during free energy perturbation calculations (cont)

2018-03-28 Thread Searle Duay
 Hi,

I would like to follow-up on this:
https://www.mail-archive.com/gromacs.org_gmx-users@maillist.sys.kth.se/msg31701.html.
I already fixed the problem with the Coulombic and vdW interactions with
Mark's help (thank you!). However, my problem now is with turning off the
restraints from states 41 to 60, and I am not sure I am using the
restraint-lambdas parameters correctly. From states 0 to 40 the restraint
lambdas are all 1.00, and then I start turning the restraint off in
decrements of 0.05. The restraints I applied are pull restraints that keep
a zinc ion close to two residues in my peptide. However, when I analyze the
data with gmx bar I expect a change in free energy, but none is reported. I
suspect something is wrong with my parameters, because when I visualize the
trajectory in VMD at the final state (where the restraints should be
completely off), the zinc ion is still close to the two residues.
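
One thing I am not sure about: as far as I understand, restraint-lambdas
only changes a pull restraint if a state-B force constant is defined for the
pull coordinate, i.e. something along the lines of (force-constant values
are placeholders):

pull-coord1-k    = 1000  ; state A force constant
pull-coord1-kB   = 0     ; state B force constant, so the restraint vanishes at restraint-lambda = 1

(and likewise for the second coordinate). If kB is left at its default
(equal to pull-coord1-k), I assume the force constant simply never changes
with lambda. Is that the piece I am missing?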

Here is the copy of my MDP file:

; Run control
integrator   = sd   ; Langevin dynamics
tinit= 0
dt   = 0.002
nsteps   = 500   ; 10 ns
nstcomm  = 100
; Output control
nstxout  = 500
nstvout  = 500
nstfout  = 0
nstlog   = 500
nstenergy= 500
tc-grps  = PROT   SOL_ION
tau-t= 1.01.0
ref-t= 300   300
; Pressure coupling is on for NPT
pcoupl   = Parrinello-Rahman
pcoupltype   = isotropic
tau-p= 5.0
compressibility  = 4.5e-5   4.5e-5
ref-p= 1.0  1.0
; Free energy control stuff
free-energy  = yes
init-lambda-state= 0
delta-lambda = 0
calc-lambda-neighbors= 1; only immediate neighboring windows
; Vectors of lambda specified here
; Each combination is an index that is retrieved from init_lambda_state for each simulation
; init_lambda_state indexes the columns below, from 0 (leftmost) to 60 (rightmost)
coul-lambdas         = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
vdw-lambdas          = 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
; We are not transforming any bonded or restrained interactions
bonded-lambdas       = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
restraint-lambdas    = 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 0.95 0.90 0.85 0.80 0.75 0.70 0.65 0.60 0.55 0.50 0.45 0.40 0.35 0.30 0.25 0.20 0.15 0.10 0.05 0.00
; Masses are not changing (particle identities are the same at lambda = 0 and lambda = 1)
mass-lambdas         = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
; Not doing simulated tempering here
temperature-lambdas  = 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
; Options for the decoupling
sc-alpha         = 0.5
sc-coul          = no    ; linear interpolation of Coulomb (none in this case)
sc-power         = 1
sc-sigma         = 0.3
couple-moltype   = HETA  ; name of moleculetype to decouple
couple-lambda0   = none  ; only van der Waals interactions
couple-lambda1   = vdw-q ; turn off everything, in this case only vdW
couple-intramol

Re: [gmx-users] Performance

2018-03-28 Thread Myunggi Yi
I see.

I am trying again.

Myunggi Yi  (이명기, Ph. D.), Professor

Department of Biomedical Engineering (의공학과 bme.pknu.ac.kr), College of
Engineering
Interdisciplinary Program of Biomedical Mechanical & Electrical Engineering
Center for Marine-Integrated Biomedical Technology (BK21+)
College of Engineering
Pukyong National University (부경대학교 www.pknu.ac.kr)
45 Yongso-ro, Nam-gu (남구 용소로 45)
Busan, 48513, South Korea
Phone: +82 51 629 5773
Fax: +82 51 629 5779

On Wed, Mar 28, 2018 at 12:20 PM, Mark Abraham 
wrote:

> Hi,
>
> Attachments can't be accepted on the list - please upload to a file sharing
> service and share links to those.
>
> Mark
>
> On Wed, Mar 28, 2018 at 6:16 PM Myunggi Yi  wrote:
>
> > I am attaching the file.
> >
> > Thank you.
> >
> > Myunggi Yi
> >
> > On Wed, Mar 28, 2018 at 11:40 AM, Szilárd Páll 
> > wrote:
> >
> > > Again, please share the exact log files / description of inputs. What
> > > does "bad performance" mean?
> > > --
> > > Szilárd
> > >
> > >
> > > On Wed, Mar 28, 2018 at 5:31 PM, Myunggi Yi 
> > wrote:
> > > > Dear users,
> > > >
> > > > I have two questions.
> > > >
> > > >
> > > > 1. I used to run typical simulations with the following command.
> > > >
> > > > gmx mdrun -deffnm md
> > > >
> > > > I had no problem.
> > > >
> > > >
> > > > Now I am running a simulation with "Dry_Martini" FF with the
> following
> > > > input.
> > > >
> > > >
> > > > integrator   = sd
> > > > tinit= 0.0
> > > > dt   = 0.040
> > > > nsteps   = 100
> > > >
> > > > nstlog   = 5000
> > > > nstenergy= 5000
> > > > nstxout-compressed   = 5000
> > > > compressed-x-precision   = 100
> > > >
> > > > cutoff-scheme= Verlet
> > > > nstlist  = 10
> > > > ns_type  = grid
> > > > pbc  = xyz
> > > > verlet-buffer-tolerance  = 0.005
> > > >
> > > > epsilon_r= 15
> > > > coulombtype  = reaction-field
> > > > rcoulomb = 1.1
> > > > vdw_type = cutoff
> > > > vdw-modifier = Potential-shift-verlet
> > > > rvdw = 1.1
> > > >
> > > > tc-grps  = system
> > > > tau_t= 4.0
> > > > ref_t= 310
> > > >
> > > > ; Pressure coupling:
> > > > Pcoupl   = no
> > > >
> > > > ; GENERATE VELOCITIES FOR STARTUP RUN:
> > > > gen_vel  = yes
> > > > gen_temp = 310
> > > > gen_seed = 1521731368
> > > >
> > > >
> > > >
> > > > If I use the same command to submit the job.
> > > > I got the following error. I don't know why.
> > > >
> > > > ---
> > > > Program: gmx mdrun, version 2018.1
> > > > Source file: src/gromacs/taskassignment/resourcedivision.cpp (line
> 224)
> > > >
> > > > Fatal error:
> > > > When using GPUs, setting the number of OpenMP threads without
> > specifying
> > > the
> > > > number of ranks can lead to conflicting demands. Please specify the
> > > number
> > > > of
> > > > thread-MPI ranks as well (option -ntmpi).
> > > >
> > > > For more information and tips for troubleshooting, please check the
> > > GROMACS
> > > > website at http://www.gromacs.org/Documentation/Errors
> > > > ---
> > > >
> > > >
> > > > So I did run simulation with the following command.
> > > >
> > > >
> > > > gmx mdrun -deffnm md -ntmpi 1
> > > >
> > > >
> > > > Now the performance is extremely bad.
> > > > Since yesterday, the log file still reporting the first step's
> energy.
> > > >
> > > > 2. This is the second question. Why?
> > > >
> > > > Can anyone help?
> > > >
> > > >
> > > > Myunggi Yi

Re: [gmx-users] Performance

2018-03-28 Thread Mark Abraham
Hi,

Attachments can't be accepted on the list - please upload to a file sharing
service and share links to those.

Mark

On Wed, Mar 28, 2018 at 6:16 PM Myunggi Yi  wrote:

> I am attaching the file.
>
> Thank you.
>
> Myunggi Yi
>
> On Wed, Mar 28, 2018 at 11:40 AM, Szilárd Páll 
> wrote:
>
> > Again, please share the exact log files / description of inputs. What
> > does "bad performance" mean?
> > --
> > Szilárd
> >
> >
> > On Wed, Mar 28, 2018 at 5:31 PM, Myunggi Yi 
> wrote:
> > > Dear users,
> > >
> > > I have two questions.
> > >
> > >
> > > 1. I used to run typical simulations with the following command.
> > >
> > > gmx mdrun -deffnm md
> > >
> > > I had no problem.
> > >
> > >
> > > Now I am running a simulation with "Dry_Martini" FF with the following
> > > input.
> > >
> > >
> > > integrator   = sd
> > > tinit= 0.0
> > > dt   = 0.040
> > > nsteps   = 100
> > >
> > > nstlog   = 5000
> > > nstenergy= 5000
> > > nstxout-compressed   = 5000
> > > compressed-x-precision   = 100
> > >
> > > cutoff-scheme= Verlet
> > > nstlist  = 10
> > > ns_type  = grid
> > > pbc  = xyz
> > > verlet-buffer-tolerance  = 0.005
> > >
> > > epsilon_r= 15
> > > coulombtype  = reaction-field
> > > rcoulomb = 1.1
> > > vdw_type = cutoff
> > > vdw-modifier = Potential-shift-verlet
> > > rvdw = 1.1
> > >
> > > tc-grps  = system
> > > tau_t= 4.0
> > > ref_t= 310
> > >
> > > ; Pressure coupling:
> > > Pcoupl   = no
> > >
> > > ; GENERATE VELOCITIES FOR STARTUP RUN:
> > > gen_vel  = yes
> > > gen_temp = 310
> > > gen_seed = 1521731368
> > >
> > >
> > >
> > > If I use the same command to submit the job.
> > > I got the following error. I don't know why.
> > >
> > > ---
> > > Program: gmx mdrun, version 2018.1
> > > Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 224)
> > >
> > > Fatal error:
> > > When using GPUs, setting the number of OpenMP threads without
> specifying
> > the
> > > number of ranks can lead to conflicting demands. Please specify the
> > number
> > > of
> > > thread-MPI ranks as well (option -ntmpi).
> > >
> > > For more information and tips for troubleshooting, please check the
> > GROMACS
> > > website at http://www.gromacs.org/Documentation/Errors
> > > ---
> > >
> > >
> > > So I did run simulation with the following command.
> > >
> > >
> > > gmx mdrun -deffnm md -ntmpi 1
> > >
> > >
> > > Now the performance is extremely bad.
> > > Since yesterday, the log file still reporting the first step's energy.
> > >
> > > 2. This is the second question. Why?
> > >
> > > Can anyone help?
> > >
> > >
> > > Myunggi Yi

Re: [gmx-users] Performance

2018-03-28 Thread Myunggi Yi
I am attaching the file.

Thank you.

Myunggi Yi

On Wed, Mar 28, 2018 at 11:40 AM, Szilárd Páll 
wrote:

> Again, please share the exact log files / description of inputs. What
> does "bad performance" mean?
> --
> Szilárd
>
>
> On Wed, Mar 28, 2018 at 5:31 PM, Myunggi Yi  wrote:
> > Dear users,
> >
> > I have two questions.
> >
> >
> > 1. I used to run typical simulations with the following command.
> >
> > gmx mdrun -deffnm md
> >
> > I had no problem.
> >
> >
> > Now I am running a simulation with "Dry_Martini" FF with the following
> > input.
> >
> >
> > integrator   = sd
> > tinit= 0.0
> > dt   = 0.040
> > nsteps   = 100
> >
> > nstlog   = 5000
> > nstenergy= 5000
> > nstxout-compressed   = 5000
> > compressed-x-precision   = 100
> >
> > cutoff-scheme= Verlet
> > nstlist  = 10
> > ns_type  = grid
> > pbc  = xyz
> > verlet-buffer-tolerance  = 0.005
> >
> > epsilon_r= 15
> > coulombtype  = reaction-field
> > rcoulomb = 1.1
> > vdw_type = cutoff
> > vdw-modifier = Potential-shift-verlet
> > rvdw = 1.1
> >
> > tc-grps  = system
> > tau_t= 4.0
> > ref_t= 310
> >
> > ; Pressure coupling:
> > Pcoupl   = no
> >
> > ; GENERATE VELOCITIES FOR STARTUP RUN:
> > gen_vel  = yes
> > gen_temp = 310
> > gen_seed = 1521731368
> >
> >
> >
> > If I use the same command to submit the job.
> > I got the following error. I don't know why.
> >
> > ---
> > Program: gmx mdrun, version 2018.1
> > Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 224)
> >
> > Fatal error:
> > When using GPUs, setting the number of OpenMP threads without specifying
> the
> > number of ranks can lead to conflicting demands. Please specify the
> number
> > of
> > thread-MPI ranks as well (option -ntmpi).
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > ---
> >
> >
> > So I did run simulation with the following command.
> >
> >
> > gmx mdrun -deffnm md -ntmpi 1
> >
> >
> > Now the performance is extremely bad.
> > Since yesterday, the log file still reporting the first step's energy.
> >
> > 2. This is the second question. Why?
> >
> > Can anyone help?
> >
> >
> > Myunggi Yi

Re: [gmx-users] Performance

2018-03-28 Thread Szilárd Páll
Again, please share the exact log files / description of inputs. What
does "bad performance" mean?
--
Szilárd


On Wed, Mar 28, 2018 at 5:31 PM, Myunggi Yi  wrote:
> Dear users,
>
> I have two questions.
>
>
> 1. I used to run typical simulations with the following command.
>
> gmx mdrun -deffnm md
>
> I had no problem.
>
>
> Now I am running a simulation with "Dry_Martini" FF with the following
> input.
>
>
> integrator   = sd
> tinit= 0.0
> dt   = 0.040
> nsteps   = 100
>
> nstlog   = 5000
> nstenergy= 5000
> nstxout-compressed   = 5000
> compressed-x-precision   = 100
>
> cutoff-scheme= Verlet
> nstlist  = 10
> ns_type  = grid
> pbc  = xyz
> verlet-buffer-tolerance  = 0.005
>
> epsilon_r= 15
> coulombtype  = reaction-field
> rcoulomb = 1.1
> vdw_type = cutoff
> vdw-modifier = Potential-shift-verlet
> rvdw = 1.1
>
> tc-grps  = system
> tau_t= 4.0
> ref_t= 310
>
> ; Pressure coupling:
> Pcoupl   = no
>
> ; GENERATE VELOCITIES FOR STARTUP RUN:
> gen_vel  = yes
> gen_temp = 310
> gen_seed = 1521731368
>
>
>
> If I use the same command to submit the job.
> I got the following error. I don't know why.
>
> ---
> Program: gmx mdrun, version 2018.1
> Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 224)
>
> Fatal error:
> When using GPUs, setting the number of OpenMP threads without specifying the
> number of ranks can lead to conflicting demands. Please specify the number
> of
> thread-MPI ranks as well (option -ntmpi).
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
>
> So I did run simulation with the following command.
>
>
> gmx mdrun -deffnm md -ntmpi 1
>
>
> Now the performance is extremely bad.
> Since yesterday, the log file still reporting the first step's energy.
>
> 2. This is the second question. Why?
>
> Can anyone help?
>
>
> Myunggi Yi

Re: [gmx-users] mdrun on single node with GPU

2018-03-28 Thread Szilárd Páll
Hi,

I can't reproduce your issue; can you please share a full log file?

Cheers,
--
Szilárd


On Wed, Mar 28, 2018 at 5:26 AM, Myunggi Yi  wrote:
> Dear users,
>
> I am running simulation with gromacs 2018.1 version
> on a computer with quad core and 1 gpu.
>
> I used to use the following command to run simulations.
>
> gmx mdrun -deffnm md
>
>
> However, this time I've got the following message.
>
> ---
> Program: gmx mdrun, version 2018.1
> Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 224)
>
> Fatal error:
> When using GPUs, setting the number of OpenMP threads without specifying the
> number of ranks can lead to conflicting demands. Please specify the number
> of
> thread-MPI ranks as well (option -ntmpi).
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> ---
>
>
> Can anyone help?
>
>
> Thank you for any help in advance.
>
>
> Myunggi Yi

[gmx-users] Performance

2018-03-28 Thread Myunggi Yi
Dear users,

I have two questions.


1. I used to run typical simulations with the following command.

gmx mdrun -deffnm md

I had no problem.


Now I am running a simulation with "Dry_Martini" FF with the following
input.


integrator   = sd
tinit= 0.0
dt   = 0.040
nsteps   = 100

nstlog   = 5000
nstenergy= 5000
nstxout-compressed   = 5000
compressed-x-precision   = 100

cutoff-scheme= Verlet
nstlist  = 10
ns_type  = grid
pbc  = xyz
verlet-buffer-tolerance  = 0.005

epsilon_r= 15
coulombtype  = reaction-field
rcoulomb = 1.1
vdw_type = cutoff
vdw-modifier = Potential-shift-verlet
rvdw = 1.1

tc-grps  = system
tau_t= 4.0
ref_t= 310

; Pressure coupling:
Pcoupl   = no

; GENERATE VELOCITIES FOR STARTUP RUN:
gen_vel  = yes
gen_temp = 310
gen_seed = 1521731368



If I use the same command to submit the job, I get the following error,
and I don't know why.

---
Program: gmx mdrun, version 2018.1
Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 224)

Fatal error:
When using GPUs, setting the number of OpenMP threads without specifying the
number of ranks can lead to conflicting demands. Please specify the number
of
thread-MPI ranks as well (option -ntmpi).

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
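
From the error text I assume mdrun now wants both numbers given explicitly,
i.e. something like the following (the OpenMP thread count is only my guess,
not something the error message states):

gmx mdrun -deffnm md -ntmpi 1 -ntomp 4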


So I ran the simulation with the following command.


gmx mdrun -deffnm md -ntmpi 1


Now the performance is extremely bad.
Since yesterday, the log file is still reporting the first step's energy.

2. This is my second question: why is the performance so bad?

Can anyone help?


Myunggi Yi


[gmx-users] ss.xpm file: first residue showing first, or last residue showing first?

2018-03-28 Thread ZHANG Cheng
Dear Gromacs,
In the ss.xpm file for secondary structure, does the first residue appear
first, or does the last residue appear first? Is this information already
written in the file?


I also have a chain separator. Does it appear at the beginning, or between
the two chains?
(Sorry, I asked about the chain separator before, but if the last residue
appears first, it is possible that the separator sits between the two
chains, and I really want to confirm that. Alternatively, how can I get in
contact with the author of gmx do_dssp?)
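
One way I thought of to check this myself (I am not sure it is the intended
way) is to look at the axis annotation comments that gmx do_dssp writes into
the xpm file, assuming my version writes them:

grep -m 1 "y-axis" ss.xpm

If that line is present, the order of the residue numbers listed there,
compared with the order of the matrix rows, should answer which end comes
first.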


Thank you.


Yours sincerely
Cheng






First a few lines:
---
/* XPM */
/* Created by: */
/*:-) GROMACS - gmx do_dssp, VERSION 5.1.1 (-: */
/*  */
/* Executable:   /shared/ucl/apps/gromacs/5.1.1/intel-2015-update2/bin//gmx */
/* Data prefix:  /shared/ucl/apps/gromacs/5.1.1/intel-2015-update2 */
/* Command line: */
/*   gmx do_dssp -f md_0_1_noPBC.xtc -s md_0_1.tpr -ssdump ssdump.dat -map 
ss.map -o ss.xpm -sc scount.xvg -a area.xpm -ta totarea.xvg -aa averarea.xvg 
-tu ns */
/* This file can be converted to EPS by the GROMACS program xpm2ps */
/* title:   "Secondary structure" */
/* legend:  "" */
/* x-label: "Time (ns)" */
/* y-label: "Residue" */
/* type:"Discrete" */
static char *gromacs_xpm[] = {
"1064 443   8 1",
"~  c #FF " /* "Coil" */,
"E  c #FF " /* "B-Sheet" */,
"B  c #00 " /* "B-Bridge" */,
"S  c #00 " /* "Bend" */,
"T  c #00 " /* "Turn" */,
"H  c #FF " /* "A-Helix" */,
"G  c #FF00FF " /* "3-Helix" */,
"I  c #FF9900 " /* "5-Helix" */,


Re: [gmx-users] Umbrella sampling: window distance - harmonic force constant

2018-03-28 Thread Hermann, Johannes

Okay. Thanks Justin!


On 27.03.2018 21:12, Justin Lemkul wrote:



On 3/27/18 4:44 AM, Hermann, Johannes wrote:

Dear All, dear Justin,

I am playing around with my umbrella sampling setup and I was looking 
at your paper which you linked in your umbrella sampling tutorial 
("Assessing the Stability of Alzheimer’s Amyloid Protofibrils Using 
Molecular Dynamics").
Up to a distance of 2 nm you use a 0.1 nm spacing, and beyond that a 0.2 nm 
spacing. Which harmonic force constant pull_coord1_k do you use for the 
0.1 nm spacing, compared with the 0.2 nm spacing where pull_coord1_k = 1000?
Is there a general rule of thumb relating window spacing and force 
constant, or is it always trial and error while checking the histograms?


You can set the value of k based on experimental methods or somewhat 
ad hoc, but then yes, you have to check overlap. I don't know of any 
useful way of trying to predict how the intermolecular forces in the 
system will respond in such a way that you can exactly say a priori 
how to set up the windows.
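
The overlap check itself is straightforward: build the histograms with gmx 
wham over all windows and look for gaps, e.g. something along these lines 
(the two .dat files simply list the per-window .tpr and pull-force .xvg 
files):

gmx wham -it tpr-files.dat -if pullf-files.dat -o profile.xvg -hist histo.xvg

Poorly overlapping or isolated histograms indicate where extra windows or a 
different k are needed.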


-Justin



--
__
*Technische Universität München*
*Johannes Hermann, M.Sc.*
Lehrstuhl für Bioverfahrenstechnik
Boltzmannstr. 15
D-85748 Garching
Tel: +49 8928915730
Fax: +49 8928915714

Email: j.herm...@lrz.tum.de
http://www.biovt.mw.tum.de/


Re: [gmx-users] RMSD values consideration

2018-03-28 Thread Ahmed Mashaly
Try using gmx trjconv to remove PBC artifacts before the analysis.
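
Something along these lines, for example (the output name and group choices
depend on your system; -pbc nojump or -pbc cluster may suit a multi-chain
complex better):

gmx trjconv -f md_0_1.trr -s md_0_1.tpr -pbc mol -center -o md_0_1_noPBC.xtc
gmx rms -f md_0_1_noPBC.xtc -s md_0_1.tpr -tu ns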

Regards,
Ahmed
 
On Wed, 28 Mar 2018 at 8:43 am, SHYANTANI MAITI wrote:

Dear all,
After using this command to compute the RMSD of the backbone of a
protein-protein complex consisting of three proteins:
/home/locuz/apps/gromacs/5.1/bin/gmx_mpi rms -f md_0_1.trr -s md_0_1.tpr
the RMSD increases drastically from 1 to 6 nm and after that decreases again
to 1 nm. Can I use this result for my analysis? Is the RMSD correctly
obtained?
-- 
Best regards,
*Shyantani Maiti*


[gmx-users] RMSD values consideration

2018-03-28 Thread SHYANTANI MAITI
Dear all,
After using this command to compute the RMSD of the backbone of a
protein-protein complex consisting of three proteins:
/home/locuz/apps/gromacs/5.1/bin/gmx_mpi rms -f md_0_1.trr -s md_0_1.tpr
the RMSD increases drastically from 1 to 6 nm and after that decreases again
to 1 nm. Can I use this result for my analysis? Is the RMSD correctly
obtained?
-- 
Best regards,
*Shyantani Maiti*