[gmx-users] gromacs binaries for windows (Cygwin 64)

2019-09-05 Thread Tatsuro MATSUOKA
I have prepared GROMACS binaries for Windows (Cygwin 64) on my own web site,
for testing purposes:

http://tmacchant3.starfree.jp/gromacs/win/

Tatsuro



[gmx-users] question about tabulated force field

2019-09-05 Thread Liu, Y.
Hi there,
   I have a question about tabulated force fields: what is the difference
between using a tabulated potential and a built-in potential in GROMACS
simulations, and why do they give different results?
   I simulate a DPPC membrane with the GROMOS force field. I use the standard
GROMOS input parameters in the simulation, but the area per lipid (0.630
(0.008) nm2) differs from what I obtain with the tabulated force field
(0.613 (0.007) nm2). The same difference also appears when I use the MARTINI
force field.
   I have used gmx dump -s *.tpr > comparison.xvg in both cases (built-in
and tabulated potential) to make sure that the only difference between the
two scenarios is the tabulated potential. Here is the difference:
coulombtype= User
40,41c40,41
epsilon-rf = 1
>vdw-type   = User
164c164
tau-t:0.01 0.1   1
169c169
energygrp-flags[  0]: 2 1
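A minimal sketch of this kind of comparison (the file names below are just
placeholders):

   gmx dump -s builtin.tpr   > builtin.txt
   gmx dump -s tabulated.tpr > tabulated.txt
   diff builtin.txt tabulated.txt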
   You might then think that my tabulated potential is problematic. However,
I used GROMACS 4.* with the -debug flag to output the tabulated potential,
so the tabulated potential should be correct.
   In the output .edr file, I also found that the tabulated-potential
simulation does not give the correct temperature for the lipids, as
indicated by the thermostat output. With the same parameters, the
built-in-potential simulation works well.
   I do not understand where the problem is, or what the difference between
the tabulated and built-in potentials in GROMACS is. Does GROMACS perhaps
use a different engine to compute the forces for tabulated potentials?
Looking forward to your reply. Thank you.
regards
Liuyang


[gmx-users] Spatial Distribution Function

2019-09-05 Thread Pandya, Akash
Hi all,

I'm trying to generate a spatial distribution function (SDF) for my ligands
around my protein. I read in the manual that I can bypass the gmx trjconv
steps if I want to calculate the SDF for arbitrary Cartesian coordinates.
The command I used is shown below:

gmx spatial -f protein_LIG.gro -s protein.tpr -n LIG.ndx -b 2 -e 6 -w 
yes -nab 10

I selected the LIG group for the SDF calculation, and the protein and LIG
groups for the output coordinates. Is this correct? Any help would be much
appreciated.
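For comparison, the conventional trjconv-based route I am trying to bypass
would look roughly like this (file names are placeholders and the index
groups are chosen interactively):

   gmx trjconv -s protein.tpr -f traj.xtc -center -ur compact -pbc none -o centered.xtc
   gmx trjconv -s protein.tpr -f centered.xtc -fit rot+trans -o fitted.xtc
   gmx spatial -s protein.tpr -f fitted.xtc -n LIG.ndx -nab 10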


Best wishes,

Akash


Re: [gmx-users] The problem of utilizing multiple GPU

2019-09-05 Thread Szilárd Páll
Hi,

You have 2x Xeon Gold 6150 which is 2x 18 = 36 cores; Intel CPUs
support 2 threads/core (HyperThreading), hence the 72.
https://ark.intel.com/content/www/us/en/ark/products/120490/intel-xeon-gold-6150-processor-24-75m-cache-2-70-ghz.html

You will not be able to scale efficiently over 8 GPUs in a single
simulation with the current code. Performance will likely improve in the
next release, but due to PCI bus and PME scaling limitations it is unlikely
you will see much benefit beyond 4 GPUs even with GROMACS 2020.

Try running on 3-4 GPUs with at least two ranks on each, plus one separate
PME rank. You might also want to use every second GPU rather than the first
four, to avoid overloading the PCI bus; e.g.
gmx mdrun -ntmpi 7 -npme 1 -nb gpu -pme gpu -bonded gpu -gpu_id 0,2,4,6
-gputasks 001122334
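
For example, one possible 4-GPU layout along these lines (the rank and
thread counts here are assumptions for a 36-core node, not tested settings)
would be:

   gmx mdrun -ntmpi 8 -ntomp 9 -npme 1 -nb gpu -pme gpu -bonded gpu -gputasks 00224466

which puts two GPU tasks on each of GPUs 0, 2, 4 and 6, with the dedicated
PME rank ending up on GPU 6.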

Cheers,
--
Szilárd

On Thu, Sep 5, 2019 at 1:12 AM 孙业平  wrote:
>
> Hello Mark Abraham,
>
> Thank you very much for your reply. I will definitely check the webinar and 
> the GROMACS documentation. But for now I am confused and hoping for a direct 
> solution. The workstation should have 18 cores, each with 4 hyperthreads. The 
> output of "lscpu" reads:
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                72
> On-line CPU(s) list:   0-71
> Thread(s) per core:    2
> Core(s) per socket:    18
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 85
> Model name:            Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz
> Stepping:              4
> CPU MHz:               2701.000
> CPU max MHz:           2701.
> CPU min MHz:           1200.
> BogoMIPS:              5400.00
> Virtualization:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              1024K
> L3 cache:              25344K
> NUMA node0 CPU(s):     0-17,36-53
> NUMA node1 CPU(s):     18-35,54-71
>
> Now I don't want to run multiple simulations; I just want to run a single 
> simulation. When I assign the simulation to only one GPU (gmx mdrun -v 
> -gpu_id 0 -deffnm md), the performance is 90 ns/day. However, when I don't 
> assign a GPU and instead let all GPUs work, with:
>    gmx mdrun -v -deffnm md
> the performance is only 2 ns/day.
>
> So what is the correct command to make full use of all GPUs and achieve the 
> best performance (which I expect should be much higher than the 90 ns/day with 
> only one GPU)? Could you give me further suggestions and help?
>
> Best regards,
> Yeping
>
> --
> From: Mark Abraham 
> Sent At: 2019 Sep. 4 (Wed.) 19:10
> To: gromacs ; 孙业平 
> Cc: gromacs.org_gmx-users 
> Subject: Re: [gmx-users] The problem of utilizing multiple GPU
>
> Hi,
>
>
> On Wed, 4 Sep 2019 at 12:54, sunyeping  wrote:
> Dear everyone,
>
>  I am trying to run a simulation on a workstation with 72 cores and 8 GeForce 
> 1080 GPUs.
>
> 72 cores, or just 36 cores each with two hyperthreads? (it matters because 
> you might not want to share cores between simulations, which is what you'd 
> get if you just assigned 9 hyperthreads per GPU and 1 GPU per simulation).
>
>  When I do not assign a particular GPU, with the command:
>    gmx mdrun -v -deffnm md
>  all GPUs are used, but the utilization of each GPU is extremely low (only 
> 1-2 %), and the simulation would take several months to finish.
>
> Yep. Too many workers for not enough work means everyone spends more time 
> coordinating than working. This is likely to improve in GROMACS 2020 
> (beta out shortly).
>
>  In contrast, when I assign the simulation task to only one GPU:
>  gmx mdrun -v -gpu_id 0 -deffnm md
>  the GPU utilization can reach 60-70%, and the simulation can be finished 
> within a week. Even when I use only two GPUs:
>
> Utilization is only a proxy - what you actually want to measure is the rate 
> of simulation, i.e. ns/day.
>
>   gmx mdrun -v -gpu_id 0,2 -deffnm md
>
>  the GPU utilizations are very low and the simulation is very slow.
>
> That could be for a variety of reasons, which you could diagnose by looking 
> at the performance report at the end of the log file, and comparing different 
> runs.
>  I think I may be misusing the GPUs for GROMACS simulations. Could you tell me 
> the correct way to use multiple GPUs?
>
> If you're happy running multiple simulations, then the easiest thing to do is 
> to use the existing multi-simulation support to do
>
> mpirun -np 8 gmx_mpi mdrun -multidir dir0 dir1 dir2 ... dir7
>
> and let mdrun handle the details. Otherwise you have to get involved in 
> assigning a subset of the CPU cores and GPUs to each job so that each runs 
> fast and does not conflict with the others. See the GROMACS documentation for 
> the version you're running, e.g. 
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#running-mdrun-within-a-single-node.
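>
> A rough sketch of that kind of manual split, for the first two of eight jobs 
> (the thread counts, pin offsets and file names are assumptions, not tested 
> values):
>
>    gmx mdrun -deffnm md0 -ntmpi 1 -ntomp 9 -nb gpu -pme gpu -gpu_id 0 -pin on -pinoffset 0 -pinstride 1 &
>    gmx mdrun -deffnm md1 -ntmpi 1 -ntomp 9 -nb gpu -pme gpu -gpu_id 1 -pin on -pinoffset 9 -pinstride 1 &
>
> and so on for the remaining GPUs, so that the jobs do not share cores.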
>
> You probably want to 

Re: [gmx-users] Forcefield parameter for transition metal

2019-09-05 Thread Srijan Chatterjee
Hi Justin,
"The force fields in GROMACS are (by design) biomolecular in nature. None
support tungsten."
Thanks for clarifying. I happened to come across a paper with similar
force-field values (https://doi.org/10.1016/j.jct.2019.01.016).

There is another problem with this molecule because of its octahedral
geometry: it contains several 180-degree bond angles, and from my
understanding of the CO2 tutorial, GROMACS has difficulty dealing with
180-degree angles.
So, my questions are:
1) Do I have to use virtual sites to constrain the molecular geometry? If
so, what should I do for an octahedral geometry?
2) Is there any other way of doing it other than virtual sites?
3) If I apply position restraints of 1 to each atom, I can still see some
movement of the atoms, but the molecule does not break apart (a minimal
sketch of what I mean is below). Is this a bad or a reasonable approach
during MD?
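
The sketch I refer to in 3), with the file names and the force constant as
placeholders only:

   gmx genrestr -f whc6.gro -o posre_whc6.itp -fc 1000 1000 1000

and then, inside the molecule's entry in the topology:

   ; position restraints for the W(CO)6 molecule
   #ifdef POSRES
   #include "posre_whc6.itp"
   #endif

with "define = -DPOSRES" set in the .mdp file.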

Srijan


...

On Tue, 3 Sep 2019 at 00:28, Justin Lemkul  wrote:

>
>
> On 9/2/19 5:01 AM, Srijan Chatterjee wrote:
> > Hi,
> > I want to study tungsten hexacarbonyl in a solvent (acetonitrile).
> > I find that neither antechamber nor PRODRG nor the OPLS automatic server
> > supports transition metals when creating the input.
> > Can anyone guide me in creating GROMACS-compatible parameters for a
> > transition metal?
>
> You'll have to start from scratch, likely. The force fields in GROMACS
> are (by design) biomolecular in nature. None support tungsten.
>
> -Justin
>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalem...@vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==
>


-- 
SRIJAN CHATTERJEE