Re: [gmx-users] 2019.2 not using all available cores

2019-08-21 Thread Dallas Warren
I've discovered an option that caused 2019.2 to use all of the cores
correctly.

Use "-pin on" and it works as expected, using all 12 cores, CPU load being
show as appropriate (gets up to 68% total CPU utilisation)

Use "-pin auto", which is the default, or "-pin off" and it will only use a
single core (maximum is 8% total CPU utilisation).
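
For anyone hitting the same symptom, the working invocation was simply along
these lines (the -deffnm name is just a placeholder):

gmx mdrun -deffnm md -pin on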

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.war...@monash.edu
-
When the only tool you own is a hammer, every problem begins to resemble a
nail.


On Thu, 9 May 2019 at 07:54, Dallas Warren  wrote:

> gmx 2019.2 compiled using threads only uses a single core; mdrun_mpi
> compiled using MPI only uses a single core; gmx 2016.3 using threads
> uses all 12 cores.
>
> For compiling thread version of 2019.2 used:
> cmake .. -DGMX_GPU=ON
> -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs/gromacs-2019.2
>
> For compiling MPI version of 2019.2 used:
> cmake .. -DGMX_MPI=ON -DBUILD_SHARED_LIBS=OFF -DGMX_GPU=ON
> -DCMAKE_CXX_COMPILER=/usr/lib64/mpi/gcc/openmpi/bin/mpiCC
> -DCMAKE_C_COMPILER=/usr/lib64/mpi/gcc/openmpi/bin/mpicc
> -DGMX_BUILD_MDRUN_ONLY=ON
> -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs/gromacs-2019.2
>
> Between building both of those, I deleted the build directory.
>
> 
> GROMACS:  gmx, version 2019.2
> Executable:   /usr/local/gromacs/gromacs-2019.2/bin/gmx
> Data prefix:  /usr/local/gromacs/gromacs-2019.2
> Working dir:  /home/dallas/experiments/current/19-064/P6DLO
> Command line:
>   gmx -version
>
> GROMACS version:2019.2
> Precision:  single
> Memory model:   64 bit
> MPI library:thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support:CUDA
> SIMD instructions:  AVX_256
> FFT library:fftw-3.3.8-sse2
> RDTSCP usage:   enabled
> TNG support:enabled
> Hwloc support:  disabled
> Tracing support:disabled
> C compiler: /usr/bin/cc GNU 7.4.0
> C compiler flags:-mavx -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler:   /usr/bin/c++ GNU 7.4.0
> C++ compiler flags:  -mavx -std=c++11   -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
> CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda
> compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
> Fri_Feb__8_19:08:17_PST_2019;Cuda compilation tools, release 10.1,
> V10.1.105
> CUDA compiler
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;-D_FORCE_INLINES;;
> ;-mavx;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> CUDA driver:10.10
> CUDA runtime:   10.10
>
> 
> GROMACS:  mdrun_mpi, version 2019.2
> Executable:   /usr/local/gromacs/gromacs-2019.2/bin/mdrun_mpi
> Data prefix:  /usr/local/gromacs/gromacs-2019.2
> Working dir:  /home/dallas/experiments/current/19-064/P6DLO
> Command line:
>   mdrun_mpi -version
>
> GROMACS version:2019.2
> Precision:  single
> Memory model:   64 bit
> MPI library:MPI
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
> GPU support:CUDA
> SIMD instructions:  AVX_256
> FFT library:fftw-3.3.8-sse2
> RDTSCP usage:   enabled
> TNG support:enabled
> Hwloc support:  disabled
> Tracing support:disabled
> C compiler: /usr/lib64/mpi/gcc/openmpi/bin/mpicc GNU 7.4.0
> C compiler flags:-mavx -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler:   /usr/lib64/mpi/gcc/openmpi/bin/mpiCC GNU 7.4.0
> C++ compiler flags:  -mavx -std=c++11   -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
> CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda
> compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
> Fri_Feb__8_19:08:17_PST_2019;Cuda compilation tools, release 10.1,
> V10.1.105
> CUDA compiler
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;-D_FORCE_INLINES;;
> ;-mavx;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> CUDA driver:10.10
> CUDA runtime:   10.10
>
> 
> /usr/local/gromacs/gromacs-2016.3/bin/gmx -version
>
> 

Re: [gmx-users] "comm-mode = Angular" gives error

2019-08-21 Thread Billy Williams-Noonan
50 Angstroms between the solute and its periodic image is a bit extreme.
Why is it rotating so quickly? The rotational diffusion should happen
slowly.

On Thu., 22 Aug. 2019, 12:52 pm Jorden Cabal,  wrote:

> Hi Justin,
> Thank you Justin for letting me know. I will definitely consider increasing
> the simulation box.
> Although, I was wondering what would happen if I use
> comm-mode="Angular" and comm-grps= "molecule_AB !molecule_AB". This setting
> gives a warning, which was ignored using the -maxwarn option. After running 10
> ns of simulation, the system seems to be stable, and much smaller fluctuations
> are observed upon visualization than when I use comm-mode="Linear"
> or nothing.
> What would you suggest? Should I proceed with the first setting?
> Thank you in advance.
>
> On Tue, Aug 20, 2019 at 9:25 PM Justin Lemkul  wrote:
>
> >
> >
> > On 8/20/19 5:21 AM, Jorden Cabal wrote:
> > > Hi Justin,
> > > Thank you for your response. In my case the cost of increasing the
> > > simulation box is very large. I have already tried it, keeping the distance
> > > between the periodic images of the macromolecule up to 50 Angstrom. Could
> > > you suggest any other option to do this? In GROMACS, rotation around a
> > > pivot can be enforced; is it possible to use this method somehow to counter
> > > the rotation? Even if it's possible, how and to what extent do you think it
> > > will bias the independent behavior of the macromolecule? Any
> > > suggestions or comments in this regard will help me.
> >
> > You can apply biasing potentials to avoid rotation, but then you're
> > seriously perturbing the dynamics in a way that might completely bias
> > your results (I don't have nearly enough context to say for sure, so
> > I'll be a bit circumspect). If you have a large molecule that rotates,
> > you need a large box that reflects that intrinsic symmetry. It's the
> > same reason you can't build a long, rectangular box around a DNA duplex.
> > If it rotates orthogonal to the longest axis, it sees its image and
> > the forces are invalid.
> >
> > There are no real "tricks" here. If you have a big molecule, you need a
> > suitably large box.
> >
> > -Justin
> >
> > --
> > ==
> >
> > Justin A. Lemkul, Ph.D.
> > Assistant Professor
> > Office: 301 Fralin Hall
> > Lab: 303 Engel Hall
> >
> > Virginia Tech Department of Biochemistry
> > 340 West Campus Dr.
> > Blacksburg, VA 24061
> >
> > jalem...@vt.edu | (540) 231-3129
> > http://www.thelemkullab.com
> >
> > ==
> >


Re: [gmx-users] "comm-mode = Angular" gives error

2019-08-21 Thread Jorden Cabal
Hi Justin,
Thank you Justin for letting me know. I will definitely consider increasing
the simulation box.
Although, I was wondering what would happen if I use
comm-mode="Angular" and comm-grps= "molecule_AB !molecule_AB". This setting
gives a warning, which was ignored using the -maxwarn option. After running 10
ns of simulation, the system seems to be stable, and much smaller fluctuations
are observed upon visualization than when I use comm-mode="Linear"
or nothing.
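For reference, the setting in question looks roughly like this in the .mdp file,
where "molecule_AB" and its complement "!molecule_AB" are groups defined in my
index file:

comm-mode = Angular
comm-grps = molecule_AB !molecule_AB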
What would you suggest? Should I proceed with the first setting?
Thank you in advance.

On Tue, Aug 20, 2019 at 9:25 PM Justin Lemkul  wrote:

>
>
> On 8/20/19 5:21 AM, Jorden Cabal wrote:
> > Hi Justin,
> > Thank you for your response. In my case the cost of increasing the simulation
> > box is very large. I have already tried it, keeping the distance between
> > the periodic images of the macromolecule up to 50 Angstrom. Could you suggest
> > any other option to do this? In GROMACS, rotation around a pivot can be
> > enforced; is it possible to use this method somehow to counter the
> > rotation? Even if it's possible, how and to what extent do you think it
> > will bias the independent behavior of the macromolecule? Any
> > suggestions or comments in this regard will help me.
>
> You can apply biasing potentials to avoid rotation, but then you're
> seriously perturbing the dynamics in a way that might completely bias
> your results (I don't have nearly enough context to say for sure, so
> I'll be a bit circumspect). If you have a large molecule that rotates,
> you need a large box that reflects that intrinsic symmetry. It's the
> same reason you can't build a long, rectangular box around a DNA duplex.
> If it rotates orthogonal to the longest axis, it sees its image and
> the forces are invalid.
>
> There are no real "tricks" here. If you have a big molecule, you need a
> suitably large box.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] SASA calculation

2019-08-21 Thread Bratin Kumar Das
You can put the residues of your interest inside a .ndx file, then use that .ndx
file when you are running the sasa command.
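
A minimal sketch of what that might look like (file names and the residue
selection are just placeholders):

gmx make_ndx -f md.tpr -o index.ndx      (e.g. enter "r 10-25" to create the group)
gmx sasa -f md.xtc -s md.tpr -n index.ndx -o area.xvg -or resarea.xvg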

On Wed 21 Aug, 2019, 11:25 PM Pandya, Akash, 
wrote:

> Hi all,
>
> I calculated the SASA for my protein and I got the average area per
> residue. I was wondering if there was a criteria to determine whether a
> residue is exposed or buried based on an individual's SASA? Or is it
> arbitrary? Apologies if I'm being naive, it's just that I've never actually
> used this calculation before. Thank you for your help.
>
> Best wishes,
>
> Akash


Re: [gmx-users] Error in calculating center of mass using distance command

2019-08-21 Thread Jorden Cabal
Hi Sumedha,
I think you are using the default value of -len (i.e. the mean distance for
histogramming) in the command, which is 0.1. If your data points contain
distance values with a larger mean than this, those values will not be
histogrammed. For confirmation, check the output file; you will see that it
ends at 0.2. I hope this might help you.
Thanks
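
A rough sketch of the kind of command that avoids this, with -len raised to
cover the expected distances (file names and group numbers are placeholders):

gmx distance -f md.xtc -s md.tpr -n index.ndx -select 'com of group 34 plus com of group 35' -oh hist.xvg -len 2.0 -binw 0.02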

On Wed, Aug 21, 2019 at 10:54 PM Justin Lemkul  wrote:

>
>
> On 8/21/19 9:52 AM, Sumedha Bhosale wrote:
> > Hello,
> >
> > I am trying to calculate the center of mass of residues 513-515 and
> 588-590
> > of chain A and chain B using distance command. I want output in histogram
> > format. But it seems that the distance command is calculating the distance
> of
> > my first group with itself. It's not taking the second group. I am getting
> > all values as zero in histogram plot.
> > I have made index files for two separate groups of chain A and chain B. I
> > am using the command - com of group 34 plus com of group 35.
>
> Please provide the exact command you gave, directly copied and pasted
> from the terminal.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


[gmx-users] SASA calculation

2019-08-21 Thread Pandya, Akash
Hi all,

I calculated the SASA for my protein and I got the average area per residue. I 
was wondering if there is a criterion to determine whether a residue is exposed
or buried based on an individual residue's SASA? Or is it arbitrary? Apologies if I'm
being naive, it's just that I've never actually used this calculation before. 
Thank you for your help.

Best wishes,

Akash


Re: [gmx-users] gromacs.org_gmx-users@maillist.sys.kth.se; jalem...@vt.edu

2019-08-21 Thread Justin Lemkul




On 8/21/19 1:25 PM, Maryam Sadeghi wrote:

  Hi All,

I have created a crystal structure of 2 polymer chains (PEG) and I need to
calculate the cohesive energy for my system using CHARMM36 FF. In this case
my fix_mol2 file includes 2 ligands; when I convert to a str file I get the
following error:

readmol2 warning: non-unique atoms were renamed. Now processing molecule
LIG ... attype warning: carbon radical, carbocation or carbanion not
supported; skipped molecule.

How can I fix this problem? Changing the ligand IDs and names does not
help. I even tried to make 2 separate mol2 files for each chain in the
crystal structure, but still get the same error while converting to str
file...


As Dallas mentioned, this is not a GROMACS problem so it is not 
appropriate for this mailing list.


I replied to your post on the same topic on ResearchGate.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] gromacs.org_gmx-users@maillist.sys.kth.se; jalem...@vt.edu

2019-08-21 Thread Maryam Sadeghi
 Hi All,

I have created a crystal structure of 2 polymer chains (PEG) and I need to
calculate the cohesive energy for my system using CHARMM36 FF. In this case
my fix_mol2 file includes 2 ligands; when I convert to a str file I get the
following error:

readmol2 warning: non-unique atoms were renamed. Now processing molecule
LIG ... attype warning: carbon radical, carbocation or carbanion not
supported; skipped molecule.

How can I fix this problem? Changing the ligand IDs and names does not
help. I even tried to make 2 separate mol2 files for each chain in the
crystal structure, but still get the same error while converting to str
file...


-- 

Maryam S. Sadeghi
*Please think before printing*

Re: [gmx-users] domain decomposition

2019-08-21 Thread Dhrubajyoti Maji
Thank you sir. The problem is sorted out. Decreasing the number of processors
did the trick. Thanks again.
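
For the archives: with a 3.40 nm box and a 0.5924 nm minimum cell size, the
domain decomposition can use at most about 5 cells per dimension, so any rank
count that factors into at most 5 per dimension should work, e.g. 64 (4x4x4)
instead of 72. A sketch of the kind of command (file names are placeholders):

mpirun -np 64 gmx_mpi mdrun -deffnm md

or, for a thread-MPI build, gmx mdrun -deffnm md -ntmpi 64.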

On Wed, 21 Aug 2019 at 22:02, Justin Lemkul  wrote:

>
>
> On 8/21/19 12:30 PM, Dhrubajyoti Maji wrote:
> > > Many thanks Dr. Lemkul for your kind reply. I have checked the link. I
> > > have done the equilibration step successfully, but the error appears at the
> > > production run. The only change is that now I am writing the output
> > > trajectory. So, if I had any problem in the topology or mdp file, then I
> > > think my equilibration should have failed. I am a newbie and I can't
> > > understand what exactly is going wrong. Any kind of suggestion will be
> > > highly appreciated.
>
> Use fewer processors. You can't arbitrarily split any system over a
> given number of processors. Prior runs may have worked if, for instance,
> box dimensions were different, but now you have to adjust.
>
> -Justin
>
> > Thanks and regards.
> > Dhrubajyoti Maji
> >
> >
> > On Wed, 21 Aug 2019 at 16:21, Justin Lemkul  wrote:
> >
> >>
> >> On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:
> >>> Dear all,
> >>> I am simulating a system consisting of urea molecules. After successfully
> >>> generating the tpr file, when I try to run mdrun the following error
> >>> appears.
> >>> Fatal error:
> >>> There is no domain decomposition for 72 ranks that is compatible with the
> >>> given box and a minimum cell size of 0.5924 nm
> >>> Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> >>> settings.
> >>> All bonds are constrained by the LINCS algorithm in my system and the
> >>> dimension of my box is 3.40146 nm. I have checked the GROMACS site as well
> >>> as the mailing list but couldn't understand what to do. Please help me
> >>> with the issue.
> >>
> >>
> http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm
> >>
> >> -Justin
> >>
> >> --
> >> ==
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Assistant Professor
> >> Office: 301 Fralin Hall
> >> Lab: 303 Engel Hall
> >>
> >> Virginia Tech Department of Biochemistry
> >> 340 West Campus Dr.
> >> Blacksburg, VA 24061
> >>
> >> jalem...@vt.edu | (540) 231-3129
> >> http://www.thelemkullab.com
> >>
> >> ==
> >>
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] domain decomposition

2019-08-21 Thread Justin Lemkul




On 8/21/19 12:30 PM, Dhrubajyoti Maji wrote:

Many thanks Dr. Lemkul for your kind reply. I have checked the link. I have
done the equilibration step successfully, but the error appears at the production
run. The only change is that now I am writing the output trajectory. So, if
I had any problem in the topology or mdp file, then I think my equilibration
should have failed. I am a newbie and I can't understand what exactly
is going wrong. Any kind of suggestion will be highly appreciated.


Use fewer processors. You can't arbitrarily split any system over a 
given number of processors. Prior runs may have worked if, for instance, 
box dimensions were different, but now you have to adjust.


-Justin


Thanks and regards.
Dhrubajyoti Maji


On Wed, 21 Aug 2019 at 16:21, Justin Lemkul  wrote:



On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:

Dear all,
  I am simulating a system consisting of urea molecules. After successfully
generating the tpr file, when I try to run mdrun the following error appears.
Fatal error:
There is no domain decomposition for 72 ranks that is compatible with the
given box and a minimum cell size of 0.5924 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS
settings.
All bonds are constrained by the LINCS algorithm in my system and the dimension
of my box is 3.40146 nm. I have checked the GROMACS site as well as the mailing
list but couldn't understand what to do. Please help me with the issue.


http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==




--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] domain decomposition

2019-08-21 Thread Dhrubajyoti Maji
Many thanks Dr. Lemkul for your kind reply. I have checked the link. I have
done the equilibration step successfully, but the error appears at the production
run. The only change is that now I am writing the output trajectory. So, if
I had any problem in the topology or mdp file, then I think my equilibration
should have failed. I am a newbie and I can't understand what exactly
is going wrong. Any kind of suggestion will be highly appreciated.
Thanks and regards.
Dhrubajyoti Maji


On Wed, 21 Aug 2019 at 16:21, Justin Lemkul  wrote:

>
>
> On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:
> >  I am simulating a system consisting of urea molecules. After successfully
> > generating the tpr file, when I try to run mdrun the following error appears.
> > Fatal error:
> > There is no domain decomposition for 72 ranks that is compatible with the
> > given box and a minimum cell size of 0.5924 nm
> > Change the number of ranks or mdrun option -rcon or -dds or your LINCS
> > settings.
> > All bonds are constrained by the LINCS algorithm in my system and the
> > dimension of my box is 3.40146 nm. I have checked the GROMACS site as well
> > as the mailing list but couldn't understand what to do. Please help me with
> > the issue.
> > list but couldn't understand what to do. Please help me with the issue.
>
>
> http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


Re: [gmx-users] Error in calculating center of mass using distance command

2019-08-21 Thread Justin Lemkul




On 8/21/19 9:52 AM, Sumedha Bhosale wrote:

Hello,

I am trying to calculate the center of mass of residues 513-515 and 588-590
of chain A and chain B using the distance command. I want output in histogram
format. But it seems that the distance command is calculating the distance of
my first group with itself; it's not taking the second group. I am getting
all values as zero in the histogram plot.
I have made index files for two separate groups of chain A and chain B. I
am using the command - com of group 34 plus com of group 35.


Please provide the exact command you gave, directly copied and pasted 
from the terminal.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] Error in calculating center of mass using distance command

2019-08-21 Thread Sumedha Bhosale
Hello,

I am trying to calculate the center of mass of residues 513-515 and 588-590
of chain A and chain B using the distance command. I want output in histogram
format. But it seems that the distance command is calculating the distance of
my first group with itself; it's not taking the second group. I am getting
all values as zero in the histogram plot.
I have made index files for two separate groups of chain A and chain B. I
am using the command - com of group 34 plus com of group 35.
I am attaching output file.


 WT1_513-515_588-590_Hist.xvg

Thanks,
Sumedha


Re: [gmx-users] Incorrect SPC water density using oplsaa force field

2019-08-21 Thread Justin Lemkul



On 8/21/19 3:18 AM, atb files wrote:
 

 
Hello Users, I tried simulating a water box (SPC) at 298 K and 1 bar pressure.
I am not getting the correct value for water density. The literature value is
997 kg/m^3 whereas I am getting the value of 977 kg/m^3.


SPC does not reproduce the experimental density of water; 977 kg/m^3 is correct
for SPC. Other water models perform better with respect to this
property. Very few get it exactly correct.
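
If you want to double-check the average, the density is stored in the energy
file; a minimal sketch (the .edr name is a placeholder):

echo Density | gmx energy -f npt.edr -o density.xvg

gmx energy also prints the average and an error estimate to the terminal.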


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==


Re: [gmx-users] simulation on 2 gpus

2019-08-21 Thread Szilárd Páll
Hi Stefano,


On Tue, Aug 20, 2019 at 3:29 PM Stefano Guglielmo
 wrote:
>
> Dear Szilard,
>
> thanks for the very clear answer.
> Following your suggestion I tried to run without DD; for the same system I
> run two simulations on two gpus:
>
> gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0
> -gputasks 00 -pin on -pinoffset 0 -pinstride 1
>
> gmx mdrun -deffnm run2 -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0
> -gputasks 11 -pin on -pinoffset 28 -pinstride 1
>
> but again the system crashed; by this I mean that after a few minutes the
> machine goes off (power off) without any error message, even without using
> all the threads.

That is not normal and I strongly recommend investigating it as it
could be a sign of an underlying system/hardware instability or fault
which could ultimately lead to incorrect simulation results.

Are you sure that:
- your machine is stable and reliable at high loads; is the PSU sufficient?
- your hardware has been thoroughly stress-tested and it does not show
instabilities?

Does the crash also happen with GROMACS running on the CPU only (using
all cores)?
I'd recommend running some stress-tests that fully load the machine
for a few hours to see if the error persists.

> I then tried running the two simulations on the same gpu without DD:
>
> gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0
> -gputasks 00 -pin on -pinoffset 0 -pinstride 1
>
> gmx mdrun -deffnm run2 -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0
> -gputasks 00 -pin on -pinoffset 28 -pinstride 1
>
> and I obtained better performance (about 70 ns/day) with a massive use of
> the gpu (around 90%), compared to the two runs on two gpus I reported in
> the previous post
> (gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 4 -ntmpi 7 -npme 1 -gputasks
> 000 -pin on -pinoffset 0 -pinstride 1
>  gmx mdrun -deffnm run2 -nb gpu -pme gpu -ntomp 4 -ntmpi 7 -npme 1
> -gputasks 111 -pin on -pinoffset 28 -pinstride 1).

That is expected; domain-decomposition on a single GPU is unnecessary
and introduces overheads that limit performance.

> As for pinning, cpu topology according to log file is:
> hardware topology: Basic
> Sockets, cores, and logical processors:
>   Socket  0: [   0  32] [   1  33] [   2  34] [   3  35] [   4  36] [
> 5  37] [   6  38] [   7  39] [  16  48] [  17  49] [  18  50] [  19  51] [
>  20  52] [  21  53] [  22  54] [  23  55] [   8  40] [   9  41] [  10  42]
> [  11  43] [  12  44] [  13  45] [  14  46] [  15  47] [  24  56] [  25
>  57] [  26  58] [  27  59] [  28  60] [  29  61] [  30  62] [  31  63]
> If I understand well (absolutely not sure) it should not be that convenient
> to pin to consecutive threads,

On the contrary, pinning to consecutive threads is the recommended
behavior. More generally, application threads are expected to be
pinned to consecutive cores (as threading parallelization will benefit
from the resulting cache access patterns); now, CPU cores can have
multiple hardware threads, and whether using one or
multiple makes sense (performance-wise) will determine whether a
stride of 1 or 2 is best. Typically, when most work is offloaded to a
GPU and many CPU cores are available, 1 thread/core is best.

Note that the above topology mapping simply means that the indexed
entities that the operating system calls "CPU" grouped in "[]"
correspond to hardware threads of the same core, i.e. core 0 is [0
32], core 1 [1 33], etc. Pinning with a stride happens into this map:
- with a -pinstride 1 thread mapping will be (app thread->hardware
thread): 0->0, 1->32, 2->1, 3->33,...
- with a -pinstride 2 thread mapping will be (-||-): 0->0, 1->1, 2->2, 3->3, ...

> and indeed I found a subtle degradation of
> performance for a single simulation, switching from:
> gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0 -gputasks
> 00 -pin on
> to
> gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 28 -ntmpi 1 -npme 0 -gputasks
> 00 -pin on -pinoffset 0 -pinstride 1.

If you compare the log files of the two, you should notice that the
former used a pinstride of 2, resulting in the use of 28 cores, while the
latter used only 14 cores; the likely reason for only a small
difference is that there is not enough CPU work to scale to 28 cores
and additionally, these specific TR CPUs are tricky to scale across
using wide multi-threaded parallelization.

Cheers,
--
Szilárd


>
> Thanks again
> Stefano
>
>
>
>
> Il giorno ven 16 ago 2019 alle ore 17:48 Szilárd Páll <
> pall.szil...@gmail.com> ha scritto:
>
> > On Mon, Aug 5, 2019 at 5:00 PM Stefano Guglielmo
> >  wrote:
> > >
> > > Dear Paul,
> > > thanks for suggestions. Following them I managed to run 91 ns/day for the
> > > system I referred to in my previous post with the configuration:
> > > gmx mdrun -deffnm run -nb gpu -pme gpu -ntomp 4 -ntmpi 7 -npme 1
> > -gputasks
> > > 111 -pin on (still 28 threads seems to be the best choice)
> > >
> > > and 56 ns/day for two independent runs:
> > > gmx 

Re: [gmx-users] gpu usage

2019-08-21 Thread Szilárd Páll
Hi Paul,

Please post log files, otherwise we can only guess what is limiting
the GPU utilization. Otherwise, you should be seeing considerably
higher utilization in single-GPU no-decomposition runs.
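
(While collecting those, a quick, rough way to watch utilization over time is
something like

nvidia-smi --query-gpu=utilization.gpu,power.draw --format=csv -l 1

though the cycle and time accounting table at the end of the mdrun log is more
informative about where the time actually goes.)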

Cheers,
--
Szilárd

On Tue, Aug 20, 2019 at 7:01 PM p buscemi  wrote:
>
>
> Dear Users,
> I am getting reasonable performance from two RTX 2080 Ti's (AMD 32 core) and,
> on another node, two GTX 1080 Ti's (AMD 16 core), i.e. 20-30 ns/day with 30
> atoms. But in all my runs the % usage of the GPUs is typically 40% to 60%.
> Given that it is specialized software, I notice that Schrodinger will run a
> single GPU at 98%, so the cards are apparently not defective.
> The CPU runs at 2.9 GHz and the power supply is 1500 watts.
> A typical run command might be "gmx mdrun -deffnm sys.npt -nb gpu -pme gpu
> -ntmpi 8 -ntomp 8 -npme 1".
> I have gone over
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html
> and tried to incorporate what I could.
>
> The installation was basically that given in the manual for build 2019.1:
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on 
> -DCMAKE_CXX_COMPILER=/usr/bin/g++-6 -DCMAKE_C_COMPILER=/usr/bin/gcc-6
> Both 2019.1 and 2019.3 run well but with the same "reduced" % workload.
> I am curious to learn why the GPUs are not pushed a little harder. Or is
> this a typical result? Or are there improvements to make in my setup?
> Paul

Re: [gmx-users] domain decomposition

2019-08-21 Thread Justin Lemkul




On 8/21/19 1:00 AM, Dhrubajyoti Maji wrote:

Dear all,
 I am simulating a system consisting of urea molecules. After successfully
generating the tpr file, when I try to run mdrun the following error
appears.
Fatal error:
There is no domain decomposition for 72 ranks that is compatible with the
given box and a minimum cell size of 0.5924 nm
Change the number of ranks or mdrun option -rcon or -dds or your LINCS
settings.
All bonds are constrained by the LINCS algorithm in my system and the dimension
of my box is 3.40146 nm. I have checked the GROMACS site as well as the mailing
list but couldn't understand what to do. Please help me with the issue.


http://manual.gromacs.org/current/user-guide/run-time-errors.html#there-is-no-domain-decomposition-for-n-ranks-that-is-compatible-with-the-given-box-and-a-minimum-cell-size-of-x-nm

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



[gmx-users] Incorrect SPC water density using oplsaa force field

2019-08-21 Thread atb files




Hello Users, I tried simulating a water box (SPC) at 298 K and 1 bar
pressure. I am not getting the correct value for water density. The literature
value is 997 kg/m^3 whereas I am getting the value of 977 kg/m^3. I am using
GROMACS 2018.4. The force field I used is OPLS-AA from that GROMACS version. I
first did energy minimisation, then NVT and NPT simulation with the Berendsen
thermostat and barostat for 5 nanoseconds each. Then I did NPT with the
following MDP file:






integrator  = md
nsteps  = 50    
dt  = 0.002 
nstlog  = 5000 
nstxout-compressed = 500
nstenergy = 500
continuation= yes   
constraint_algorithm = SHAKE
constraints = all-bonds     
nstlist         = 10             
ns_type         = grid          
rlist           = 1           
rcoulomb        = 1           
rvdw            = 1           
; Electrostatics
coulombtype     = PME           
pme_order       = 4                 
fourierspacing  = 0.16          
pbc = xyz   
tcoupl  = v-rescale 
tc-grps         = System    
tau_t   = 1.0   
ref_t   = 298.15    
pcoupl  = parrinello-rahman 
pcoupltype  = isotropic 
tau_p   = 5.0   
ref_p   = 1.0 1.0           
compressibility = 4.6e-5 4.6e-5       
DispCorr= EnerPres  
gen_vel = no
gen-temp= 298.15
gen-seed= 121545
refcoord_scaling = com

Is there something wrong with my MDP file? Cutoffs and all? Please help.

- Yogesh









[gmx-users] Inconsistent shifts over periodic boundaries error.

2019-08-21 Thread Artem Shekhovtsov
Hello!
I performed simulations of small molecules in water and found
that in about 5% of cases, the simulation fails at the last
step with an error:

---
Program: gmx mdrun, version 2018.4
Source file: src/gromacs/pbcutil/mshift.cpp (line 906)

Fatal error:
There are inconsistent shifts over periodic boundaries in a molecule type
consisting of 47 atoms. The longest distance involved in such interactions
is
1.298 nm which is close to half the box length. This molecule type consists
of
multiple parts, e.g. monomers, that are connected by interactions that are
not
chemical bonds, e.g. restraints. Such systems can not be treated. The only
solution is increasing the box size.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
---
   The problem occurs when simulating in a rhombic dodecahedron cell with
the standard 1 nm distance between the solute and the box. There is no such
error in a cubic cell with the same 1 nm distance.
Trying to establish the reason for this behavior, I found that it is
associated with the presence of virtual atoms in the small molecule. When I
exchange a virtual atom for an atom linked by a chemical bond to the rest
of the molecule, the error does not appear.

I want to remove the error while preserving the virtual atoms as well as the
size and shape of the cell. Maybe someone has already encountered this behavior
and knows what my mistake is. I will be glad of any help.

The link
https://drive.google.com/file/d/0B5NzD-LVrUalbDdSblhfTkowUDN1aWxER3hlLWhaRFROcjhj/view?usp=sharing
contains files for 1-step molecular dynamics that failed with my system of
interest. Gromacs version - 2018.4.

Thank you,
Artem Shekhovtsov