Re: [gmx-users] Simulate only one unit of the virus capsid while fixing its surrounding units

2020-04-08 Thread Benson Muite
Dear Cheng,
The paper you mentioned (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5524983/) 
uses NAMD (http://www.ks.uiuc.edu/Research/namd/), which has somewhat different 
scalability properties than GROMACS.
Regards,
Benson

On Wed, Apr 8, 2020, at 7:14 AM, ZHANG Cheng wrote:
> Dear Andre,
> 
> 
> Thank you. We are trying to use an adenovirus as a vaccine. As it is 
> not stable, we want to simulate it to identify the unstable (e.g. 
> flexible) regions, so as to either engineer it (e.g. by mutation) or 
> add excipients.
> 
> 
> Simulating only one protein of the capsid is of course doable. But do 
> you think simulating one protein without its neighbours could reflect 
> its dynamics? Would its boundary residues behave very differently 
> compared to when the neighbours are present?
> -- Original --
> From: "ZHANG Cheng" <272699...@qq.com>
> Date: Wed, Apr 8, 2020 11:02 AM
> To: "gromacs.org_gmx-users"
> Subject: Re: Simulate only one unit of the virus capsid while 
> fixing its surrounding units
> 
> 
> 
> Dear Justin and Andre,
> 
> 
> Thank you for the advice. So can I ask how commonly such very large 
> virus capsids are simulated? A recent paper, "Physical properties of the 
> HIV-1 capsid from all-atom molecular dynamics simulations", used 
> 3880 GPU-accelerated Cray XK nodes, which is impossible for our 
> university to provide.
> 
> 
> 
> 
> -- Original --
> From: "ZHANG Cheng" <272699...@qq.com>
> Date: Tue, Apr 7, 2020 10:10 PM
> To: "ZHANG Cheng" <272699...@qq.com>; "gromacs.org_gmx-users"
> Subject: Re: Simulate only one unit of the virus capsid while 
> fixing its surrounding units
> 
> 
> 
> Dear Andre, Thank you for the advice. Can I ask,
> 
> 
> 1) Could you please clarify the concepts? I know "constraint" and 
> "restraint" are two different things in GROMACS. Is "fix" another 
> term? And how about "freezegrps"?
> 
> 
> 2) It is okay if the computational time is not reduced, as only 
> several proteins are now simulated. If I simulate all of these proteins 
> without any fixing, I worry they will lose their conformation. So 
> fixing the neighbours and focusing only on the protein in the centre 
> could be the solution.
> 
> 
> 
> 
> 
> -- Original --
> From: "ZHANG Cheng" <272699...@qq.com>
> Date: Tue, Apr 7, 2020 09:41 PM
> To: "gromacs.org_gmx-users"; Cc: "ZHANG Cheng" <272699...@qq.com>
> Subject: Simulate only one unit of the virus capsid while fixing 
> its surrounding units
> 
> 
> 
> It is a challenge to simulate the entire virus as it is too big and I 
> do not have such computational resources. So I was thinking of 
> simulating only one coat protein and its surrounding neighbours, while 
> keeping the neighbours relatively fixed.
> 
> 
> Can I ask
> 
> 
> 1) Is this a sensible idea to proceed?
> 
> 
> 2) To fix the neighbours, should I use "constraints" or "restraints"?
> 
> 
> 3) At which step should I start to introduce the fixation?
> 
> 
> 4) If possible, is there a tutorial for this? I find the information 
> here still not straightforward to follow: 
> http://www.gromacs.org/Documentation/How-tos/Position_Restraints
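For reference, a minimal sketch of the two approaches raised in this thread, i.e. restraining or freezing the neighbouring chains. The define name POSRES_NEIGHBOURS and the file posre_neighbours.itp are hypothetical; such a restraint file could be generated with gmx genrestr from an index group containing the neighbour atoms.

; in the .mdp file: switch the position restraints on
define = -DPOSRES_NEIGHBOURS

; at the end of the moleculetype of a neighbouring chain in the topology
#ifdef POSRES_NEIGHBOURS
#include "posre_neighbours.itp"
#endif

An alternative is to freeze the neighbours outright in the .mdp file; frozen atoms do not move at all, whereas position-restrained atoms still fluctuate around their reference positions:

freezegrps = Neighbours   ; hypothetical index group with the neighbour chains
freezedim  = Y Y Y        ; freeze in all three dimensions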
> 
> 
> Thank you!
> 
> 
> Yours sincerely
> Cheng


Re: [gmx-users] replica exchange simulations performance issues.

2020-03-29 Thread Benson Muite


On Sun, Mar 29, 2020, at 4:55 AM, Miro Astore wrote:
> Hi everybody. I've been experimenting with REMD for my system running
> on 48 cores with 4 GPUs (I will need to scale up to 73 replicas
> because this is a complicated system with many DOF; I'm open to being
> told this is all a silly idea).
> 
> My run configuration is
> mpirun -np 4 --map-by numa gmx_mpi mdrun -cpi memb_prod1.cpt -ntomp 11
> -v -deffnm memb_prod1 -multidir 1 2 3 4 -replex 1000
> 
> the best I can squeeze out of this is 9ns/day. In a non-replica
> simulation I can hit 50ns/day with a single GPU and 12 cores.

What happens for a small number of replicas?
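For example, timing just two replicas first (a sketch based on your own command line; file and directory names are yours):

mpirun -np 2 --map-by numa gmx_mpi mdrun -ntomp 11 -v -deffnm memb_prod1 -multidir 1 2 -replex 1000

Comparing 1, 2, and 4 replicas should show how much of the slowdown comes from replica-exchange coordination as opposed to the replicas merely sharing the node's CPUs and GPUs.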

> 
> Looking at my accounting, for a single replica 52% of time is being
> spent on the "Force" category with 92% of my Mflops going into NxN
> Ewald Elec. + LJ [F]
> 
> I'm wondering what I could do to reduce this bottle neck if anything.

Do you have access to more hardware? There are a number of HPC centers in 
Australia.

> 
> Thank you.
> -- 
> Miro A. Astore   (he/him)
> PhD Candidate | Computational Biophysics
> Office 434 A28 School of Physics
> University of Sydney


Re: [gmx-users] Various questions related to Gromacs performance tuning

2020-03-28 Thread Benson Muite


On Sat, Mar 28, 2020, at 9:32 PM, Kutzner, Carsten wrote:
> 
> 
> > On 26.03.2020 at 17:00, Tobias Klöffel wrote:
> > 
> > Hi Carsten,
> > 
> > 
> > On 3/24/20 9:02 PM, Kutzner, Carsten wrote:
> >> Hi,
> >> 
> >>> On 24.03.2020 at 16:28, Tobias Klöffel wrote:
> >>> 
> >>> Dear all,
> >>> I am very new to Gromacs so maybe some of my problems are very easy to 
> >>> fix:)
> >>> Currently I am trying to compile and benchmark gromacs on AMD rome cpus, 
> >>> the benchmarks are taken from:
> >>> https://www.mpibpc.mpg.de/grubmueller/bench
> >>> 
> >>> 1) OpenMP parallelization: Is it done via OpenMP tasks?
> >> Yes, all over the code, loops are parallelized with OpenMP via
> >> #pragma omp parallel for and similar directives.
> > Ok but that's not OpenMP tasking:)
> >> 
> >>> If the Intel toolchain is detected and -DGMX_FFT_LIBRARY=mkl is
> >>> set, -mkl=serial is used, even though -DGMX_OPENMP=on is set.
> >> GROMACS uses only the serial transposes - allowing mkl to open up its own 
> >> OpenMP threads
> >> would lead to oversubscription of cores and performance degradation.
> > Ah, I see. But then it should be noted somewhere in the documentation
> > that all FFTW/MKL calls are inside a parallel region. Is there a
> > specific reason for this? Normally you can achieve much better
> > performance if you call a threaded library outside of a parallel
> > region and let the library use its own threads.

Creating and destroying threads, which is what threaded libraries do upon 
entry and exit, can sometimes be slow. Thus, if a program is already using 
threads, it can be faster to have the threads call thread-safe serial 
versions of the library routines, if the library supports this - likely the 
case for FFTW, whose plan execution is thread-safe.

> >>> 2) I am trying to use gmx_mpi tune_pme but I never got it to run. I do 
> >>> not really understand what I have to specify for -mdrun. I
> >> Normally you need a serial (read: non-mpi enabled) 'gmx' so that you can 
> >> call
> >> gmx tune_pme. Most queueing systems don't like it if one parallel program 
> >> calls
> >> another parallel program.
> >> 
> >>> tried -mdrun 'gmx_mpi mdrun' and export MPIRUN="mpirun -use-hwthread-cpus 
> >>> -np $tmpi -map-by ppr:$tnode:node:pe=$OMP_NUM_THREADS --report-bindings" 
> >>> But it just complains that mdrun is not working.
> >> There should be an output somewhere with the exact command line that
> >> tune_pme invoked to test whether mdrun works. That should shed some light
> >> on the issue.
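As a sketch, a typical invocation looks like the following (the .tpr name is an assumption; the non-MPI gmx drives the tuning and launches the MPI-enabled mdrun through the MPIRUN environment variable):

export MPIRUN="mpirun"
gmx tune_pme -np 48 -s topol.tpr -mdrun "gmx_mpi mdrun"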
> >> 
> >> Side note: Tuning is normally only useful on CPU-nodes. If your nodes also
> >> have GPUs, you will probably not want to do this kind of PME tuning.
> > Yes, it's CPU only... I will tune the PP:PME ranks manually. However,
> > most of the time it fails with 'too large prime number'; what is
> > considered to be 'too large'?
> I think 2, 3, 5, 7, 11, and 13, and multiples of these, are ok, but not 
> larger prime numbers.
> So for a fixed number of procs only some of the PP:PME combinations 
> will actually work.
> The ones that don't work would not be wise to choose from a performance 
> point of view.
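A quick way to inspect the prime factors of candidate rank counts is the coreutils factor command (the numbers here are only examples):

$ factor 8 13 17
8: 2 2 2
13: 13
17: 17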
> 
> Best,
>  Carsten
> 

Re: [gmx-users] How to find and open Gromacs after installation

2020-03-18 Thread Benson Muite

It may be helpful to indicate what Linux distribution you are using.

It may also be helpful to look at the user guide:
http://manual.gromacs.org/documentation/current/user-guide/index.html

On the Linux command line, try typing 

gmx -version

Usually the binaries are on the path.
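If gmx is not on the path, a couple of ways to locate it (a sketch; these paths are typical for a distribution package and for a default self-compiled install, but not guaranteed):

which gmx
ls /usr/bin/gmx* /usr/local/gromacs/bin/ 2>/dev/null

For a self-compiled installation, sourcing the GMXRC script in the installation's bin directory sets up the environment:

source /usr/local/gromacs/bin/GMXRC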

You may find it convenient to compile Gromacs yourself for efficient use of any 
GPUs you have available. Some documentation here:

http://manual.gromacs.org/documentation/current/install-guide/index.html


On Wed, Mar 18, 2020, at 5:25 PM, Sutanu L'Étranger wrote:
> -- Forwarded message -
> From: Sutanu L'Étranger 
> Date: Wed, Mar 18, 2020, 19:31
> Subject: [gmx-users] (no subject)
> To: 
> 
> 
> Hi,
> 
> I've just installed GROMACS on Linux in VirtualBox. But now I can't find
> my GROMACS; please tell me how to find it.
> Thank you.
> 
> I've used these commands while installing gromacs:-
> 
> sudo apt update
> sudo apt upgrade
> sudo apt install gcc
> sudo apt install cmake
> sudo apt install build-essential
> sudo apt install libfftw3-dev
> sudo apt install gromacs

Re: [gmx-users] Fwd: Problems with Non-equilibrium MD

2020-03-16 Thread Benson Muite
Hi Jun,
Can you add a link to your image online rather than attaching it?
Benson



On Mon, Mar 16, 2020, at 10:25 AM, Jun Zhou wrote:
> Hi all,
> 
> I am using non-equilibrium MD to calculate the viscosity of a liquid by
> applying a shear in the xy direction. My problem is that when I observe the
> trajectory of the simulation, I find that only the box deforms and the
> particles do not move correspondingly, as shown in the attached image.
> 
> I also attached my mdp file; can anyone give me some suggestions? Thanks.
> 
> Regards
> 
> -- 
> *Jun ZHOU*
> Postgraduate Student ,
> Room 117, Building 36
> Department of Civil Engineering,
> Monash University,
> Victoria 3800, Australia.
> 


Re: [gmx-users] generating Doxygen documentation

2020-02-12 Thread Benson Muite


Hi,
What platform are you building on? Last time I tried this, there were a few 
extra dependencies I needed to install.
Benson

On Thu, Feb 13, 2020, at 12:34 AM, Mark Abraham wrote:
> Hi,
> 
> When they succeed, the doxygen targets produce files like
> 
> $builddir/docs/html/doxygen/html-*/index.xhtml
> 
> which you can open in your browser. The doxygen for the released versions
> is on the web however, so it's much easier to just use or refer to that.
> 
> Mark
> 
> On Wed, 12 Feb 2020 at 16:30, Michele Pellegrino  wrote:
> 
> > Hi,
> >
> >
> > I am trying to generate the Doxygen documentation for GROMACS from the
> > files in /docs/doxygen.
> >
> > Following the 2020 manual, I ran 'make doxygen-all' in the build directory
> > successfully; I don't know what to do next: I tried to run cmake in the
> > doxygen directory to generate the Doxyfiles, but I get the following errors:
> >
> > """
> >
> > CMake Error at CMakeLists.txt:36 (include):
> >   include could not find load file:
> >
> > gmxCustomCommandUtilities
> >
> >
> > CMake Error at CMakeLists.txt:37 (include):
> >   include could not find load file:
> >
> > gmxOptionUtilities
> >
> >
> > CMake Error at CMakeLists.txt:53 (gmx_dependent_option):
> >   Unknown CMake command "gmx_dependent_option".
> >
> > """
> >
> > What should I do?
> >
> >
> > Thanks for your attention,
> >
> > Michele
> >
> >
> > p.s. I am referring to the github master branch current version


Re: [gmx-users] Solid surface simulation and PBC issues

2019-10-12 Thread Benson Muite



On 10/11/19 6:41 PM, Samuel Asante Afari wrote:

Dear Users,
I am trying to simulate kaolinite in explicit water. After building my 
structure, I try to minimize it but I tend to get a very high positive potential 
energy. After doing some research, one possible explanation is the broken bonds 
across the periodic boundary, which I notice in my surface model when I view it in 
VMD. I have tried all that I can to fix it, i.e. using trjconv (whole, nojump and 
mol in sequence) and also adjusting the box size. My questions are: is the high 
potential energy caused by the broken bonds? Is this problem fixable by trjconv, 
or is it a matter of topology? Which part of the topology causes this issue? Any 
suggestions to help me fix it?
Picture not attached. You may want to post it online somewhere and put a 
link to it. It is not clear why you have broken bonds across the 
periodic boundary. What is your initial setup? Should kaolinite be at a 
low concentration? Are you able to use a bigger periodic box?
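As a quick check of whether the high energy comes from the starting coordinates rather than from the topology, re-imaging the input before minimization may help (a sketch; the file names are assumed):

gmx trjconv -s em.tpr -f system.gro -o system_whole.gro -pbc whole

Note, though, that with periodic_molecules = yes (as in your em.mdp below) bonds crossing the box boundary are expected and intentional, so "broken" bonds in VMD may be a visualization artifact rather than an error.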


Thank you in advance

Sam

Please see attached my em.mdp and energy output.  Also attached is a picture of 
the structure with broken bonds.

integrator  = steep     ; Algorithm (steep = steepest descent minimization)
emtol       = 1000.0    ; Stop minimization when the maximum force < 1000.0 kJ/mol/nm
emstep      = 0.01      ; Minimization step size
nsteps      = 5         ; Maximum number of (minimization) steps to perform

; Parameters describing how to find the neighbors of each atom and how to calculate the interactions
nstlist         = 1     ; Frequency to update the neighbor list and long range forces
cutoff-scheme   = Verlet; Buffered neighbor searching
ns_type         = grid  ; Method to determine neighbor list (simple, grid)
coulombtype     = PME   ; Treatment of long range electrostatic interactions
rcoulomb        = 1.2   ; Short-range electrostatic cut-off
rvdw            = 1.2   ; Short-range Van der Waals cut-off
pbc             = xyz   ; Periodic Boundary Conditions in all 3 dimensions
periodic_molecules = yes



Steepest Descents:
Tolerance (Fmax)   =  1.0e+03
Number of steps=5

writing lowest energy coordinates.

Steepest Descents converged to Fmax < 1000 in 19 steps
Potential Energy  =  7.0229306e+05
Maximum force =  9.4779437e+02 on atom 522
Norm of force =  5.7396594e+02




Re: [gmx-users] limit to the number of QM atoms in a QM/MM simulation?

2019-09-03 Thread Benson Muite



Hi Kristina,

Cannot answer your question directly, but in addition to Orca, there is 
some effort in this area that may allow using software other than Orca 
for the QM part, for example:
Zalevsky A.O., Reshetnikov R.V., Golovin A.V. (2019) New QM/MM 
Implementation of the MOPAC2012 in the GROMACS

https://doi.org/10.1007/978-3-030-05807-4_24

Benediktsson B. and Bjornsson R. (2017)
QM/MM Study of the Nitrogenase MoFe Protein Resting State: 
Broken-Symmetry States, Protonation States, and QM Region Convergence in 
the FeMoco Active Site

https://github.com/RagnarB83/chemshell-QMMM-protein-setup


Olsen, Bolnykh, Meloni, Ippoliti, Bircher, Carloni and Rothlisberger (2019)
MiMiC: A Novel Framework for Multiscale Modeling in Computational Chemistry
https://doi.org/10.1021/acs.jctc.9b00093
http://manual.gromacs.org/2019/reference-manual/special/mimic-qmmm.html 
(I believe this is still in progress.)



Regards,
Benson
On 9/3/19 10:42 AM, Kristina Woods wrote:

Hello:

I have a very basic question about the QM/MM implementation in gromacs (and
yes - I know that it is no longer supported).  I am using ORCA as the
quantum chemistry package for a QM/MM simulation of a photo-protein in
gromacs.  I have successfully used the combination of ORCA and gromacs with
other photo-protein QM/MM simulations without problem but in the previous
cases I have had a smaller number of QM atoms (below 200 atoms).  In my
current protein, I would like to consider close to 700 atoms but I notice
that every time I set up a simulation in GROMACS, it fragments the
geometry of the system so that only a limited number of the QM atoms are
contained within the simulation box.  The rest of the atoms are translated
out of the simulation box (sorry if this is not a good description).  I
have tried compiling different combinations of gromacs and orca and this
doesn't seem to change anything.  I have also studied the input geometry of
my system and that seems to be fine (the initial geometry comes from the
output of a 100 ns all-atom MM simulation of the entire system).  I then
tried to simulate only of a subset of the QM atoms (up to 200 atoms) of
interest and with a smaller system everything seems to work beautifully.
So my question is whether my problems are due to a limit on the number of
QM atoms that can be considered in a QM/MM simulation?


Thank you,

Kristina



Re: [gmx-users] Gromacs-GPU help

2019-08-17 Thread Benson Muite

Hi,

This may also be helpful:

http://www.hecbiosim.ac.uk/jade-benchmarks

https://github.com/hpc-uk/archer-benchmarks/blob/master/reports/single_node/index.md#tab16

Regards,

Benson
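On the command-line part of the question, a minimal sketch for a thread-MPI build (the rank and thread counts are assumptions to tune for a 40-core, 2-GPU machine):

# one GPU
gmx mdrun -deffnm md -ntmpi 1 -ntomp 20 -gpu_id 0
# two GPUs
gmx mdrun -deffnm md -ntmpi 2 -ntomp 20 -gpu_id 01

-gpu_id restricts which GPU devices are eligible; without it, mdrun will use all detected GPUs automatically.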

On 8/16/19 5:45 AM, Benson Muite wrote:

Hi,

You may wish to search the list archives as indicated at:

Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before 
posting!


See also:

https://arxiv.org/abs/1903.05918

Benson

On 8/16/19 5:37 AM, tarzan p wrote:
Hi all, I have started using GROMACS recently on my workstation (2 x 
Intel 6148, 20 cores each, and 2 x Tesla V100). I have compiled it as 
per the instructions at "Run GROMACS 3X Faster on NVIDIA GPUs".

I would like to request a proper benchmark for the GPU version and 
would like to know how to run the GPU version, i.e. the command to use 
one GPU and two GPUs.

With best wishes


Re: [gmx-users] Gromacs-GPU help

2019-08-15 Thread Benson Muite

Hi,

You may wish to search the list archives as indicated at:

Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

See also:

https://arxiv.org/abs/1903.05918

Benson

On 8/16/19 5:37 AM, tarzan p wrote:

Hi all, I have started using GROMACS recently on my workstation (2 x 
Intel 6148, 20 cores each, and 2 x Tesla V100). I have compiled it as 
per the instructions at "Run GROMACS 3X Faster on NVIDIA GPUs".

I would like to request a proper benchmark for the GPU version and 
would like to know how to run the GPU version, i.e. the command to use 
one GPU and two GPUs.

With best wishes



Re: [gmx-users] SIMD: SSE2 or AVX2_256?

2019-08-06 Thread Benson Muite

Hi,

Not a necessity, but might speed up your simulations. If simulation time 
is significant, it may help to measure it yourself. However, if most of 
the work is done on a GPU, it will not make too much difference.


Benson
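If you do decide to rebuild, a sketch of the CMake flags the note refers to (both are standard GROMACS build options; combine them with whatever else your original build used):

cmake .. -DGMX_SIMD=AVX2_256 -DGMX_USE_RDTSCP=ON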

On 8/6/19 5:57 PM, Neena Susan Eappen wrote:

Hello gromacs users,

I got the following message when I was running grompp command:

Compiled SIMD: SSE2, but for this host/run AVX2_256 might be better (see 
log).
The current CPU can measure timings more accurately than the code in gmx mdrun 
was configured to use. This might affect your simulation speed as accurate 
timings are needed for load-balancing.
Please consider rebuilding gmx mdrun with the GMX_USE_RDTSCP=ON CMake 
option.

I was wondering is this a necessity?

Thank you,
Neena



Re: [gmx-users] missing prepare-qmmm.py

2019-07-10 Thread Benson Muite

Thanks.

On 7/10/19 2:38 PM, Dhr. D.W. Sjoerdsma (d.w.sjoerdsma) wrote:


Dear Benson,
Thank you for all the time you put in!
I'll try one of these guys and share the updates!
Sincerely,
Derk Sjoerdsma

On July 10, 2019 at 12:54 PM Benson Muite  
wrote:


Dear Derk,

Ok. Got the same result. Perhaps contact one of the main authors of 
the following papers:


Extreme Scalability of DFT-Based QM/MM MD Simulations Using MiMiC by 
Viacheslav Bolnykh et al


https://doi.org/10.26434/chemrxiv.8067527.v1

MiMiC: A Novel Framework for Multiscale Modeling in Computational 
Chemistry by Jógvan Magnus et al


https://doi.org/10.26434/chemrxiv.7635986.v1

If you make progress, please update the list so that the 
documentation can be updated.


Regards,

Benson

On 7/10/19 1:25 PM, Dhr. D.W. Sjoerdsma (d.w.sjoerdsma) wrote:

Dear Benson,

Thank you for your reply :)
I did set the -DCMAKE_PREFIX_PATH flag correctly. I also looked at what the
syntax of -DCMAKE_INSTALL_PREFIX was.
However, after trying the correct syntax, I was still not able to get
prepare-qmmm.py.

It should however be generated as is said in the text: " After preparing the
input for GROMACS and having obtained the preprocessed topology file, simply run
the Python preprocessor script provided within the MiMiC distribution to obtain
MiMiC-related part of the CPMD input file. "
-http://manual.gromacs.org/documentation/2019-rc1/reference-manual/special/mimic-qmmm.html

I also tried checking the sources, but I didn't get any luckier there.

Sincerely,
Derk


On July 1, 2019 at 7:10 PM Benson Muite wrote:

Hi,

Was -DCMAKE_PREFIX_PATH set to the location at which CommLib library was
installed?

Not sure if

-DCMAKE_INSTALL_PREFIX:PATH=/home/derk/Prog/gmx19

is a typo and should be

-DCMAKE_INSTALL_PREFIX_PATH=/home/derk/Prog/gmx19

prepare-qmmm.py

does not seem to be in the sources at
https://gitlab.com/MiMiC-projects/CommLib  and
https://github.com/gromacs/gromacs, though maybe automatically generated.

Benson

On 7/1/19 3:52 PM, Dhr. D.W. Sjoerdsma (d.w.sjoerdsma) wrote:

Hello,

After compiling GROMACS with the MiMiC option, I was not able to locate the
prepare-qmmm.py file. I do not know whether this is a result of a faulty
installation or if it is something else

I used the following flags:

-DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_DOUBLE=ON
-DGMX_MIMIC=ON -DCMAKE_INSTALL_PREFIX:PATH=/home/derk/Prog/gmx19 -DGMX_MPI=ON
-DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx

Sincerely,
Derk Sjoerdsma






Re: [gmx-users] missing prepare-qmmm.py

2019-07-10 Thread Benson Muite

Dear Derk,

Ok. Got the same result. Perhaps contact one of the main authors of the 
following papers:


Extreme Scalability of DFT-Based QM/MM MD Simulations Using MiMiC by 
Viacheslav Bolnykh et al


https://doi.org/10.26434/chemrxiv.8067527.v1

MiMiC: A Novel Framework for Multiscale Modeling in Computational 
Chemistry by Jógvan Magnus et al


https://doi.org/10.26434/chemrxiv.7635986.v1

If you make progress, please update the list so that the documentation 
can be updated.


Regards,

Benson

On 7/10/19 1:25 PM, Dhr. D.W. Sjoerdsma (d.w.sjoerdsma) wrote:

Dear Benson,

Thank you for your reply :)
I did set the -DCMAKE_PREFIX_PATH flag correctly. I also looked at what the
syntax of -DCMAKE_INSTALL_PREFIX was.
However, after trying the correct syntax, I was still not able to get
prepare-qmmm.py.

It should however be generated as is said in the text: " After preparing the
input for GROMACS and having obtained the preprocessed topology file, simply run
the Python preprocessor script provided within the MiMiC distribution to obtain
MiMiC-related part of the CPMD input file. "
-http://manual.gromacs.org/documentation/2019-rc1/reference-manual/special/mimic-qmmm.html

I also tried checking the sources, but I didn't get any luckier there.

Sincerely,
Derk


On July 1, 2019 at 7:10 PM Benson Muite  wrote:

Hi,

Was -DCMAKE_PREFIX_PATH set to the location at which CommLib library was
installed?

Not sure if

-DCMAKE_INSTALL_PREFIX:PATH=/home/derk/Prog/gmx19

is a typo and should be

-DCMAKE_INSTALL_PREFIX_PATH=/home/derk/Prog/gmx19

prepare-qmmm.py

does not seem to be in the sources at
https://gitlab.com/MiMiC-projects/CommLib and
https://github.com/gromacs/gromacs, though maybe automatically generated.

Benson

On 7/1/19 3:52 PM, Dhr. D.W. Sjoerdsma (d.w.sjoerdsma) wrote:

Hello,

After compiling GROMACS with the MiMiC option, I was not able to locate the
prepare-qmmm.py file. I do not know whether this is a result of a faulty
installation or if it is something else

I used the following flags:

-DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_DOUBLE=ON
-DGMX_MIMIC=ON -DCMAKE_INSTALL_PREFIX:PATH=/home/derk/Prog/gmx19 -DGMX_MPI=ON
-DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx

Sincerely,
Derk Sjoerdsma




Re: [gmx-users] rtx 2080 gpu

2019-07-10 Thread Benson Muite

Hi Stefano,

What was your compilation command? (it may be helpful to add SIMD 
support appropriate to your processor 
http://manual.gromacs.org/documentation/current/install-guide/index.html#simd-support)


Did you run make test after compiling?

Benson
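As a sketch of a rebuild addressing both points visible in the log below, i.e. SIMD disabled and CUDA detection failing (the SIMD value and toolkit path are assumptions to adapt to your machine):

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=ON -DGMX_SIMD=AVX2_256 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda

Since the log reports 'CUDA driver: 10.20' but 'CUDA runtime: N/A', it may also be worth checking that nvidia-smi runs correctly for the user that starts mdrun.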

On 7/10/19 1:18 AM, Stefano Guglielmo wrote:

Dear all,
I have a centOS machine equipped with two RTX 2080 cards, with nvidia
drivers 430.2; I installed cuda toolkit 10-1. when executing mdrun the log
reported the following message:

GROMACS version:2019.2
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:CUDA
SIMD instructions:  NONE
FFT library:fftw-3.3.8
RDTSCP usage:   disabled
TNG support:enabled
Hwloc support:  disabled
Tracing support:disabled
C compiler: /usr/bin/cc GNU 4.8.5
C compiler flags:-O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler:   /usr/bin/c++ GNU 4.8.5
C++ compiler flags: -std=c++11   -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
CUDA compiler:  /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
Wed_Apr_24_19:10:27_PDT_2019;Cuda compilation tools, release 10.1, V10.1.168
CUDA compiler
flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;;;
;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:10.20
CUDA runtime:   N/A

NOTE: Detection of GPUs failed. The API reported:
   unknown error
   GROMACS cannot run tasks on a GPU.

Does anyone have any suggestions?
Thanks in advance
Stefano






Re: [gmx-users] make test failed during installing gromacs

2019-07-10 Thread Benson Muite

Hi Yeping,

a) I would first check that a CPU-only build passes the tests.

b) You may then want to check that the compute capability of the GPU matches the 
requirements for the Gromacs version you are installing. I assume you are using 
GeForce GTX 1080 GPUs.


It may be helpful to indicate your compilation commands and environment 
(compilers, cuda toolkit).


Benson
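To get more detail on the failures, the individual tests can be re-run from the build directory with standard CTest options (a sketch):

ctest -R "GpuUtilsUnitTests|MdrunNonIntegratorTests" --output-on-failure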

On 7/10/19 4:02 AM, sunyeping wrote:

Hello, everyone,

When I run make test while installing gromacs-2019.3 on CentOS 7 on a 
workstation with 4 Tesla 1080 GPUs, I get the following failure information:

The following tests FAILED:
   10 - GpuUtilsUnitTests (Timeout)
   38 - MdrunNonIntegratorTests (Timeout)
   42 - regressiontests/complex (Failed)
Errors while running CTest
make[3]: *** [CMakeFiles/run-ctest-nophys] Error 8
make[2]: *** [CMakeFiles/run-ctest-nophys.dir/all] Error 2
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
make: *** [check] Error 2

Will the installation ultimately fail? Should I continue and run "make install"?

Best regards



Re: [gmx-users] About 2019 GMX manual

2019-07-07 Thread Benson Muite

Hi Alan,

This requires making a small change in the git repository at 
https://gerrit.gromacs.org


The relevant line is also mirrored at:

https://github.com/gromacs/gromacs/blob/master/docs/user-guide/system-preparation.rst

but changes typically go through the git repository at 
gerrit.gromacs.org as indicated at:


http://manual.gromacs.org/documentation/current/dev-manual/overview.html#documentation-organization

I can make the change on your behalf if it would be helpful.

Benson

On 7/7/19 8:35 PM, Alan wrote:


Is there a better way to report issues with the current manual?

It's a very minor one, to update a link about ACPYPE on page 25.

To use: https://github.com/alanwilter/acpype

Thanks,

Alan



Re: [gmx-users] missing prepare-qmmm.py

2019-07-01 Thread Benson Muite

Hi,

Was -DCMAKE_PREFIX_PATH set to the location at which CommLib library was 
installed?


Not sure if

-DCMAKE_INSTALL_PREFIX:PATH=/home/derk/Prog/gmx19

is a typo and should be

-DCMAKE_INSTALL_PREFIX_PATH=/home/derk/Prog/gmx19

prepare-qmmm.py

does not seem to be in the sources at 
https://gitlab.com/MiMiC-projects/CommLib and 
https://github.com/gromacs/gromacs, though maybe automatically generated.


Benson

On 7/1/19 3:52 PM, Dhr. D.W. Sjoerdsma (d.w.sjoerdsma) wrote:

Hello,

After compiling GROMACS with the MiMiC option, I was not able to locate the
prepare-qmmm.py file. I do not know whether this is a result of a faulty
installation or if it is something else

I used the following flags:

-DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_DOUBLE=ON
-DGMX_MIMIC=ON -DCMAKE_INSTALL_PREFIX:PATH=/home/derk/Prog/gmx19 -DGMX_MPI=ON
-DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx

Sincerely,
Derk Sjoerdsma




Re: [gmx-users] How to install gromacs on cpu cluster

2019-07-01 Thread Benson Muite

Hi Yeping,

A full build may be helpful to run the initial correctness tests; once 
you have done this, you can then use -DGMX_BUILD_MDRUN_ONLY=on for production 
runs. Having a full version of GROMACS on a node with access to the same 
file system where you do runs is also helpful, since there may be 
features required for pre- and post-processing. If you have built your 
own MPI, you probably also want to use the flags:


-DMPI_C_COMPILER=/path/to/your/mpicc

-DMPI_CXX_COMPILER=/path/to/your/mpicxx

It may be helpful to name the MPI and shared-memory executables 
differently or install them in different places. The size of the system 
you are simulating will likely determine whether the MPI build will be 
useful for you. In most cases, the default


-DGMX_DOUBLE=off

is fine. The flag

-DGMX_SIMD=xxx

is also helpful; you will need to replace xxx with the appropriate choice 
for your platform, as indicated at:


http://manual.gromacs.org/documentation/current/install-guide/index.html#simd-support

Finally if you use ccmake rather than cmake, some of the other possible 
compilation options should appear.
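Putting these pieces together, a sketch of a complete MPI build invocation (the paths and the SIMD value are placeholders to adapt):

cmake .. -DGMX_MPI=ON -DGMX_BUILD_MDRUN_ONLY=on -DGMX_SIMD=xxx \
    -DMPI_C_COMPILER=/path/to/your/mpicc -DMPI_CXX_COMPILER=/path/to/your/mpicxx \
    -DCMAKE_INSTALL_PREFIX=/path/to/install/gromacs-mpi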


Regards,

Benson

On 7/1/19 2:21 PM, sunyeping wrote:

Hi Benson,

I feel I may need to add the following options to cmake?
-DGMX_MPI=on
-DGMX_SIMD=xxx
-DGMX_BUILD_MDRUN_ONLY=on

Should I?

--
From:孙业平 
Sent At:2019 Jun. 30 (Sun.) 08:36
To:gromacs ; Benson Muite

Subject:Re: [gmx-users] How to install gromacs on cpu cluster

Hi Benson,

I can install gcc-4.9 for compiling the latest version of gromacs
(gromacs_2019.3) in my own account directory
(/data/home/sunyp/software/GCC). For proper submission of tasks with
the PBS system, which options of cmake should I use?
According to the "Quick and dirty cluster installation" section of
the gromacs installation guide, it seems that a quick and dirty
installation should be done, and then another installation with
MPI should be done to the same location as the non-MPI
installation. I am not very clear on how these should be done
exactly. Could you give the exact commands?
Best regards
Yeping



Subject:Re: [gmx-users] How to install gromacs on cpu cluster

Hi Yeping,

Minimum required compiler version for the latest release is GCC
4.8.1 :

http://manual.gromacs.org/documentation/current/install-guide/index.html

GROMACS 4.5 seems to indicate support for GCC 4.5
(http://www.gromacs.org/Documentation/Installation_Instructions_4.5)

Is CMAKE on your cluster? If so what version?

Regards,

Benson

On 6/29/19 12:08 PM, sunyeping wrote:
Hello Benson,

Thank you for respond to my question. There is no GPU on my cluster.

Best regards,

Yeping
--
From:Benson Muite 
Sent At:2019 Jun. 29 (Sat.) 16:56
To:gromacs ; 孙业平 
Subject:Re: [gmx-users] How to install gromacs on cpu cluster

Hi Yeping,

It may be easier to install a newer version of GCC. Are there any GPUs

on your cluster?

Benson

On 6/29/19 11:27 AM, sunyeping wrote:
>
> Dear everyone,
>
> I would like to install gromacs on a cpu cluster of 12 nodes, with each 
node containing 32 cores. The gcc version on the cluster is 4.4.7. Which version 
of gromacs can be properly compiled with this gcc version?
>
> The cluster support PBS job submission system, then what is the correct 
options for cmake (or maybe configure) when compiling gromacs?
>
> Thank you in advance.
>
> Best regards.
> Yeping





Re: [gmx-users] How to install gromacs on cpu cluster

2019-06-30 Thread Benson Muite

Hi Yeping,

The basic steps are the same as in the documentation. The steps below 
should get GROMACS running on a Linux computer where CMake can be 
installed. You may need to change $HOME to an absolute path and give the 
exact location of the C and C++ compilers you will use to compile 
GROMACS. You need not use the same C and C++ compilers to compile CMake, 
though recent versions of CMake also need a recent compiler. It may be 
the case that you will need to update your PATH variable so that your 
installation of GCC is picked up as the first compiler that is tried. 
Typical Linux systems (but not all) should have most prerequisites for 
CMake:


# download, build and install CMake from source
wget https://github.com/Kitware/CMake/releases/download/v3.15.0-rc3/cmake-3.15.0-rc3.tar.gz
tar -xvf cmake-3.15.0-rc3.tar.gz
mkdir cmake-3.15.0-rc3-build
cd cmake-3.15.0-rc3-build/
../cmake-3.15.0-rc3/bootstrap --prefix=$HOME/InstallGromacs/cmake-3.15.0-rc3-install
gmake
make install
cd ..
# download GROMACS and configure it with the CMake just installed
wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2019.3.tar.gz
tar -xvf gromacs-2019.3.tar.gz
cd gromacs-2019.3/
mkdir build
cd build/
$HOME/InstallGromacs/cmake-3.15.0-rc3-install/bin/cmake .. \
    -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON \
    -DCMAKE_INSTALL_PREFIX=$HOME/InstallGromacs/gromacs-2019.3-install/ \
    -DCMAKE_CXX_COMPILER=/usr/bin/c++ -DCMAKE_C_COMPILER=/usr/bin/gcc
# build, test and install GROMACS
make
make check
make install

On 6/30/19 7:41 AM, Benson Muite wrote:

Hi Yeping,

May want to use GCC 7 or GCC 8

https://gcc.gnu.org/

Follow instructions for how to get GCC working, see

https://gcc.gnu.org/install/

Once have a new version of GCC in your own directory, you will 
probably want to then compile an MPI library (eg. OpenMPI - 
https://www.open-mpi.org/or MPICH - https://www.mpich.org) in your 
home directory as well, the MPI library should be compiled with the 
GCC version that you use. I have not used PBS in a while, but the MPI 
library should pick this up. For programs on a single node, PBS should 
be able to just run the compiled executable.


What version of cmake do you have on your system?

Will try to write a short script later today. You may also want to 
look at Spack which  offers an automated GROMACS installation (but it 
can be helpful to set it up yourself for good performance):


https://github.com/spack/spack

Regards,

Benson

On 6/30/19 3:36 AM, sunyeping wrote:

Hi Benson,

I can install gcc-4.9 for compiling the latest version of gromacs 
(gromacs_2019.3) in my own account directory 
(/data/home/sunyp/software/GCC). For proper submission of tasks with 
the PBS system, which options of cmake should I use?
According to the "Quick and dirty cluster installation" section of 
the gromacs installation guide, it seems that a quick and dirty 
installation should be done, and then another installation with MPI 
should be done to the same location with the non-MPI installation. I 
am not very clear how these should be done exactly. Could you give 
the exact commands?

Best regards
Yeping



    Subject:Re: [gmx-users] How to install gromacs on cpu cluster

    Hi Yeping,

    Minimum required compiler version for the latest release is GCC
    4.8.1 :

http://manual.gromacs.org/documentation/current/install-guide/index.html

    GROMACS 4.5 seems to indicate support for GCC 4.5
(http://www.gromacs.org/Documentation/Installation_Instructions_4.5)

    Is CMAKE on your cluster? If so what version?

    Regards,

    Benson

    On 6/29/19 12:08 PM, sunyeping wrote:
    Hello Benson,

    Thank you for respond to my question. There is no GPU on my cluster.

    Best regards,

    Yeping
--
    From:Benson Muite 
    Sent At:2019 Jun. 29 (Sat.) 16:56
    To:gromacs ; 孙业平 
    Subject:Re: [gmx-users] How to install gromacs on cpu cluster

    Hi Yeping,

It may be easier to install a newer version of GCC. Are there any GPUs

    on your cluster?

    Benson

    On 6/29/19 11:27 AM, sunyeping wrote:
    >
    > Dear everyone,
    >
> I would like to install gromacs on a cpu cluster of 12 nodes, with each node 
containing 32 cores. The gcc version on the cluster is 4.4.7. Which version of 
gromacs can be properly compiled with this gcc version?
    >
> The cluster support PBS job submission system, then what is the correct 
options for cmake (or maybe configure) when compiling gromacs?
    >
    > Thank you in advance.
    >
    > Best regards.
    > Yeping




Re: [gmx-users] How to install gromacs on cpu cluster

2019-06-29 Thread Benson Muite

Hi Yeping,

May want to use GCC 7 or GCC 8

https://gcc.gnu.org/

Follow instructions for how to get GCC working, see

https://gcc.gnu.org/install/

Once you have a new version of GCC in your own directory, you will probably 
want to compile an MPI library (e.g. OpenMPI - 
https://www.open-mpi.org/ or MPICH - https://www.mpich.org) in your home 
directory as well; the MPI library should be compiled with the GCC 
version that you use. I have not used PBS in a while, but the MPI 
library should pick this up. For programs on a single node, PBS should 
be able to just run the compiled executable.


What version of cmake do you have on your system?

I will try to write a short script later today. You may also want to look 
at Spack, which offers an automated GROMACS installation (but it can be 
helpful to set it up yourself for good performance):


https://github.com/spack/spack
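As a sketch, the basic Spack workflow is (standard Spack usage; the GROMACS package options may need tuning for your cluster):

git clone https://github.com/spack/spack
. spack/share/spack/setup-env.sh
spack install gromacs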

Regards,

Benson

On 6/30/19 3:36 AM, sunyeping wrote:

Hi Benson,

I can install gcc-4.9 for compiling the latest version of gromacs 
(gromacs_2019.3) in my own account directory 
(/data/home/sunyp/software/GCC). For proper submission of tasks with the PBS 
system, which options of cmake should I use?
According to the "Quick and dirty cluster installation" section of the 
gromacs installation guide, it seems that a quick and dirty 
installation should be done, and then another installation with MPI 
should be done to the same location with the non-MPI installation. I 
am not very clear how these should be done exactly. Could you give the 
exact commands?

Best regards
Yeping



Subject:Re: [gmx-users] How to install gromacs on cpu cluster

Hi Yeping,

Minimum required compiler version for the latest release is GCC
4.8.1 :

http://manual.gromacs.org/documentation/current/install-guide/index.html

GROMACS 4.5 seems to indicate support for GCC 4.5
(http://www.gromacs.org/Documentation/Installation_Instructions_4.5)

Is CMAKE on your cluster? If so what version?

Regards,

Benson

On 6/29/19 12:08 PM, sunyeping wrote:
Hello Benson,

Thank you for respond to my question. There is no GPU on my cluster.

Best regards,

Yeping
--
From:Benson Muite 
Sent At:2019 Jun. 29 (Sat.) 16:56
To:gromacs ; 孙业平 
Subject:Re: [gmx-users] How to install gromacs on cpu cluster

Hi Yeping,

It may be easier to install a newer version of GCC. Are there any GPUs

on your cluster?

Benson

On 6/29/19 11:27 AM, sunyeping wrote:
>
> Dear everyone,
>
> I would like to install gromacs on a cpu cluster of 12 nodes, with each 
node containing 32 cores. The gcc version on the cluster is 4.4.7. Which version 
of gromacs can be properly compiled with this gcc version?
>
> The cluster support PBS job submission system, then what is the correct 
options for cmake (or maybe configure) when compiling gromacs?
>
> Thank you in advance.
>
> Best regards.
> Yeping




Re: [gmx-users] How to install gromacs on cpu cluster

2019-06-29 Thread Benson Muite

Hi Yeping,

Minimum required compiler version for the latest release is GCC 4.8.1:

http://manual.gromacs.org/documentation/current/install-guide/index.html

GROMACS 4.5 seems to indicate support for GCC 4.5 
(http://www.gromacs.org/Documentation/Installation_Instructions_4.5)


Is CMake on your cluster? If so, what version?

Regards,

Benson

On 6/29/19 12:08 PM, sunyeping wrote:

Hello Benson,

Thank you for respond to my question. There is no GPU on my cluster.

Best regards,

Yeping

--
From:Benson Muite 
Sent At:2019 Jun. 29 (Sat.) 16:56
To:gromacs ; 孙业平 
Subject:Re: [gmx-users] How to install gromacs on cpu cluster

Hi Yeping,

It may be easier to install a newer version of GCC. Are there any GPUs

on your cluster?

Benson

On 6/29/19 11:27 AM, sunyeping wrote:
>
> Dear everyone,
>
> I would like to install gromacs on a cpu cluster of 12 nodes, with each 
node containing 32 cores. The gcc version on the cluster is 4.4.7. Which version 
of gromacs can be properly compiled with this gcc version?
>
> The cluster support PBS job submission system, then what is the correct 
options for cmake (or maybe configure) when compiling gromacs?
>
> Thank you in advance.
>
> Best regards.
> Yeping




Re: [gmx-users] How to install gromacs on cpu cluster

2019-06-29 Thread Benson Muite

Hi Yeping,

It may be easier to install a newer version of GCC. Are there any GPUs 
on your cluster?


Benson

On 6/29/19 11:27 AM, sunyeping wrote:


Dear everyone,

I would like to install gromacs on a cpu cluster of 12 nodes, with each node 
containing 32 cores. The gcc version on the cluster is 4.4.7. Which version of 
gromacs can be properly compiled with this gcc version?

The cluster supports the PBS job submission system; what are the correct options 
for cmake (or maybe configure) when compiling gromacs?

Thank you in advance.

Best regards.
Yeping



Re: [gmx-users] Explanation of some terminology

2019-05-29 Thread Benson Muite


On 5/29/19 4:07 AM, Anh Vo wrote:

Hi all,

I'm new to GROMACS and there are some terms that I'm not clear on when
reading GROMACS materials. I have looked them up and read more but I'm
still confused. Please help me to clarify them.



  - What is an improper vs. a proper dihedral?



  - It is said that “If a dynamical system is ergodic, the ensemble
average becomes equal to the time average”.

What does it mean if a system is ergodic? Why does ensemble
sampling become similar to time sampling if a system is ergodic?
Ensemble sampling becomes similar to time sampling because, over a long 
enough time period, an ergodic system explores all points in phase space. 
Thus, instead of doing multiple simulations with the same global average 
quantities (such as total system energy), one can do a single long 
simulation with the same global average quantity. There is a slightly more 
involved explanation at https://en.wikipedia.org/wiki/Ergodicity
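In symbols, for an observable A, a trajectory x(t), and the equilibrium phase-space density \rho, the ergodic hypothesis states

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T A(x(t))\, \mathrm{d}t = \int A(x)\, \rho(x)\, \mathrm{d}x,$$

i.e. the time average along one sufficiently long trajectory equals the ensemble average over phase space.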



Thank you very much.


Best,

Anh Vo


Re: [gmx-users] help OpenCL build failure

2019-05-12 Thread Benson Muite

Hi,

Have you tried using the cmake curses interface:

ccmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc 
-DCMAKE_CXX_COMPILER=g++ -DGMX_GPU=ON -DGMX_USE_OPENCL=ON


What version of OS X are you using? The manual indicates you may need 10.10.4 or 
higher if using an AMD GPU 
(http://manual.gromacs.org/current/install-guide/index.html)
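If CMake is finding the OpenCL framework but not its headers (which is what the failed CL_VERSION checks below suggest), pointing the standard FindOpenCL variables at the framework explicitly sometimes helps. A sketch, with the SDK path taken from your error output and possibly different on your machine:

cmake .. -DGMX_GPU=ON -DGMX_USE_OPENCL=ON \
    -DOpenCL_INCLUDE_DIR=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OpenCL.framework/Headers \
    -DOpenCL_LIBRARY=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OpenCL.framework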


Benson

On 5/12/19 6:05 PM, Jacob Farag wrote:

Hi, I have successfully installed Gromacs version 2018.6 (non-GPU version)
with the following values:
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc
-DCMAKE_CXX_COMPILER=g++ -DBUILD_SHARED_LIBS=OFF
-DCMAKE_INSTALL_PREFIX='usr/local/gromacs/2018.6/'

I am not able to generate a makefile for the GPU version with the following
values:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=gcc
-DCMAKE_CXX_COMPILER=g++ -DGMX_GPU=ON -DGMX_USE_OPENCL=ON

-- Looking for CL_VERSION_2_0

-- Looking for CL_VERSION_2_0 - not found

-- Looking for CL_VERSION_1_2

-- Looking for CL_VERSION_1_2 - not found

-- Looking for CL_VERSION_1_1

-- Looking for CL_VERSION_1_1 - not found

-- Looking for CL_VERSION_1_0

-- Looking for CL_VERSION_1_0 - not found

-- Found OPENCL:
/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OPENCL.framework

CMake Error at cmake/gmxManageOpenCL.cmake:47 (message):

   OpenCL is not supported.  OpenCL version 1.1 or newer is required.

Call Stack (most recent call first):

   CMakeLists.txt:232 (include)



Please let me know how to configure the GPU version. Thank you



Re: [gmx-users] Gromacs Benchmarks for NVIDIA GeForce RTX 2080

2019-04-18 Thread Benson Muite

Dear Jason,

This is nice. How did you choose what to run and what hardware to use?

Regards,

Benson

On 4/18/19 7:20 PM, Jason Hogrefe wrote:

Dear Gromacs Users,

Exxact Corporation has conducted benchmarks for Gromacs using NVIDIA RTX 2080 
GPUs. We ran them a few months back, but thought the community would be 
interested in such numbers.

System: Exxact TensorEX Gromacs Certified 
Workstation
CPU: Intel Xeon Scalable Family Silver 4114 (Skylake) x2
GPU: NVIDIA GeForce RTX 2080 x4
CUDA: 9.2
Gromacs Version: Gromacs 2018.3

==
   # Running ADH Benchmarks #

   - ADH cubic PME -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 60.385  ns/day
40 CPUs + [1] 1 x GPU: 70.547  ns/day
40 CPUs + [2] 1 x GPU: 60.444  ns/day
40 CPUs + [3] 1 x GPU: 70.753  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU: 53.474  ns/day
10 CPUs + [1] 1 x GPU: 44.991  ns/day
10 CPUs + [2] 1 x GPU: 45.034  ns/day
10 CPUs + [3] 1 x GPU: 45.853  ns/day
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 38.128  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 39.226  ns/day

- ADH cubic RF -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 74.364  ns/day
40 CPUs + [1] 1 x GPU: 73.903  ns/day
40 CPUs + [2] 1 x GPU: 74.022  ns/day
40 CPUs + [3] 1 x GPU: 74.105  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU:
10 CPUs + [1] 1 x GPU:
10 CPUs + [2] 1 x GPU:
10 CPUs + [3] 1 x GPU:
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 96.189  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 102.489  ns/day

- ADH cubic vsites PME -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 132.120  ns/day
40 CPUs + [1] 1 x GPU: 129.414  ns/day
40 CPUs + [2] 1 x GPU: 129.661  ns/day
40 CPUs + [3] 1 x GPU: 133.058  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU: 108.044  ns/day
10 CPUs + [1] 1 x GPU: 90.935  ns/day
10 CPUs + [2] 1 x GPU: 103.922  ns/day
10 CPUs + [3] 1 x GPU: 95.532  ns/day
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 75.409  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 86.649  ns/day

 - ADH cubic vsites RF -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 156.230  ns/day
40 CPUs + [1] 1 x GPU: 155.725  ns/day
40 CPUs + [2] 1 x GPU: 155.798  ns/day
40 CPUs + [3] 1 x GPU: 156.289  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU:
10 CPUs + [1] 1 x GPU:
10 CPUs + [2] 1 x GPU:
10 CPUs + [3] 1 x GPU:
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 194.495  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 203.785  ns/day

 - ADH dodec PME -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 85.505  ns/day
40 CPUs + [1] 1 x GPU: 84.418  ns/day
40 CPUs + [2] 1 x GPU: 84.560  ns/day
40 CPUs + [3] 1 x GPU: 85.463  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU: 55.158  ns/day
10 CPUs + [1] 1 x GPU: 54.666  ns/day
10 CPUs + [2] 1 x GPU: 49.706  ns/day
10 CPUs + [3] 1 x GPU: 52.324  ns/day
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 44.456  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 39.953  ns/day

  - ADH dodec RF -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 77.585  ns/day
40 CPUs + [1] 1 x GPU: 77.924  ns/day
40 CPUs + [2] 1 x GPU: 78.122  ns/day
40 CPUs + [3] 1 x GPU: 78.215  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU:
10 CPUs + [1] 1 x GPU:
10 CPUs + [2] 1 x GPU:
10 CPUs + [3] 1 x GPU:
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 102.690  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 112.896  ns/day

  - ADH dodec vsites PME -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 149.222  ns/day
40 CPUs + [1] 1 x GPU: 148.763  ns/day
40 CPUs + [2] 1 x GPU: 150.029  ns/day
40 CPUs + [3] 1 x GPU: 149.848  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU: 124.922  ns/day
10 CPUs + [1] 1 x GPU: 108.062  ns/day
10 CPUs + [2] 1 x GPU: 108.633  ns/day
10 CPUs + [3] 1 x GPU: 110.386  ns/day
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 83.872  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 98.177  ns/day

- ADH dodec vsites RF -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 165.883  ns/day
40 CPUs + [1] 1 x GPU: 165.753  ns/day
40 CPUs + [2] 1 x GPU: 165.282  ns/day
40 CPUs + [3] 1 x GPU: 165.279  ns/day
Multiple Single GPU Run Performance
10 CPUs + [0] 1 x GPU:
10 CPUs + [1] 1 x GPU:
10 CPUs + [2] 1 x GPU:
10 CPUs + [3] 1 x GPU:
Sequential Multi GPU Run Performance
40 CPUs + [0,1] 2 x GPU: 209.722  ns/day
40 CPUs + [0,1,2,3] 4 x GPU: 227.262  ns/day

   # Running RNASE Benchmarks #

 - RNASE cubic PME -
Sequential Single GPU Run Performance
40 CPUs + [0] 1 x GPU: 254.480  ns/day
40 CPUs + [1] 1 x GPU: 263.490  ns/day
40 CPUs + [2] 1 x GPU: 

Re: [gmx-users] Coarse-grained Protein-ligand simulations

2019-04-02 Thread Benson Muite

Hi Mac Kevin E. Braza,

Would it be possible to use a GPU? If you can manage with single precision 
and need less than 8 GB of memory, a gaming GPU might give some 
performance improvement.


Regards,

Benson

On 4/2/19 11:46 AM, Peter Kroon wrote:

@Joao: I didn't mean to imply in any way that everyone has (or should
have) a couple hundred nodes at their beck and call. Although it would
be nice. I underestimated the size of the GPCR complex, as well as how
slow atomistic simulations are :)

Now that I've tried to pull my foot out of my mouth again, back on
topic: although Martini can do (almost) anything, I am rather skeptical
of CG docking/binding studies because of the reasons I outlined earlier.
In addition, in Martini the error in the entropic term (due to a lack of
conformational freedom) in the free energy equation is compensated in
the enthalpic term. However, in a confined environment (protein pocket!)
this compensation may have to be different --- and the ligand was
parametrised in solution.


Peter


On 02-04-19 05:18, Billy Williams-Noonan wrote:

Have you considered accelerated MD?  Like metadynamics.  Plumed has a lot
of options there

Cheers,
Billy

On Tue, 2 Apr 2019 at 09:18, Mac Kevin Braza  wrote:


Hello Sir Benson,

We are using a Supermicro SYS-1028R-WC1R server with 2 x 2.2 GHz 12-core
Intel processors (4 x 8 GB DDR4), a single node only. Ideally, to reach a
microsecond of all-atom GPCR-membrane simulation, we would need a computer
cluster with at least 200 parallel nodes. Even with 50-100 parallel nodes
it would take us a month to reach that simulation time, and we know that
this is challenging for us here in the Philippines.

The specialized supercomputer Anton is an example of hardware that has
reached more than 100 microseconds of all-atom GPCR-membrane simulation in
a month of total CPU time. It has 512 processing nodes.

Best regards,
Mac Kevin E. Braza

On Tue, Apr 2, 2019, 12:40 AM Benson Muite 
wrote:


Hi Mac Kevin E. Braza,

What hardware are you using? What kind of hardware would be needed to do
a full simulation instead of a coarse-grained one?

Regards,

Benson

On 4/1/19 6:49 PM, João Henriques wrote:

GPCR + membrane systems are notoriously big systems to work with for most
research groups, regardless of your location on the map. Even in
"privileged Europe" many research groups would struggle to produce
microsecond long atomistic simulations of this system within a short
period of time. Moreover, "privileged Europe" is also home to significant
computer resource discrepancies among its member countries. This is
actually one of the main reasons why your group's CG model is so popular :)

On Mon, Apr 1, 2019 at 5:09 PM P C Kroon  wrote:


Hi,

I work in privileged Europe, so it’s good for me to get a reality check
once every while. Thanks.

Coarse graining molecules for Martini is not too hard. There should be
some tutorials on cgmartini.nl that should help you get underway. You
will, however, run into the problems I mentioned, and you will need to do
extensive validation on the topologies of your ligands. Again, it depends
on your exact research question: if you’re doing high-throughput like
screening, qualitative models might be good enough. Also see T Bereau’s
automartini.

Peter

From: Mac Kevin Braza
Sent: 01 April 2019 16:06
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Coarse-grained Protein-ligand simulations

Dear Sir Peter Kroon,

We are currently maximizing our computer capabilities, but reaching 1
microsecond in our lab would take me at least 6 months. We do not have
that high-level capacity here in the Philippines. Membrane protein
systems are typically larger, with all the lipid bilayers, solvent, and
ions present on top of the protein. We will need more powerful computers
for this.

I found a few works in the literature on protein-ligand representation in
coarse-grained models. We found several papers, but they either describe
the ligand coarse-graining method vaguely and/or do not address the same
research problem we want to explore.

All in all, we will push the all-atom simulation as far as we can, and
still be hopeful about the coarse-graining method. What we have explored
so far is the CHARMM-GUI Martini Maker, yet it does not include drug
ligands in its coarse-grained representations. I still have to search for
other means to do this. Thank you very much!

Best regards,
Mac Kevin E. Braza

On Mon, Apr 1, 2019 at 5:59 PM Peter Kroon  wrote:


Hi,

that's probably a tough cookie. My first instinct would be to just apply
more hardware, and do it all atomistically. A microsecond should be
within reach. Whether it's enough is a separate matter. The problem is
that most CG representations

Re: [gmx-users] Coarse-grained Protein-ligand simulations

2019-04-01 Thread Benson Muite

Hi Mac Kevin E. Braza,

What hardware are you using? What kind of hardware would be needed to do 
a full simulation instead of a coarse-grained one?


Regards,

Benson

On 4/1/19 6:49 PM, João Henriques wrote:

GPCR + membrane systems are notoriously big systems to work with for most
research groups, regardless of your location on the map. Even in
"privileged Europe" many research groups would struggle to produce
microsecond long atomistic simulations of this system within a short period
of time. Moreover, "privileged Europe" is also home to significant computer
resource discrepancies among its member countries. This is actually one of
the main reasons why your group's CG model is so popular :)

On Mon, Apr 1, 2019 at 5:09 PM P C Kroon  wrote:


Hi,

I work in privileged Europe, so it’s good for me to get a reality check
once every while. Thanks.

Coarse graining molecules for Martini is not too hard. There should be
some tutorials on cgmartini.nl that should help you get underway. You
will, however, run into the problems I mentioned, and you will need to do
extensive validation on the topologies of your ligands. Again, it depends
on your exact research question: if you’re doing high-throughput like
screening, qualitative models might be good enough. Also see T Bereau’s
automartini.

Peter

From: Mac Kevin Braza
Sent: 01 April 2019 16:06
To: gmx-us...@gromacs.org
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Coarse-grained Protein-ligand simulations

Dear Sir Peter Kroon,

We are currently maximizing our computer capabilities, but reaching 1
microsecond in our lab would take me at least 6 months. We do not have
that high-level capacity here in the Philippines. Membrane protein
systems are typically larger, with all the lipid bilayers, solvent, and
ions present on top of the protein. We will need more powerful computers
for this.

I found a few works in the literature on protein-ligand representation in
coarse-grained models. We found several papers, but they either describe
the ligand coarse-graining method vaguely and/or do not address the same
research problem we want to explore.

All in all, we will push the all-atom simulation as far as we can, and
still be hopeful about the coarse-graining method. What we have explored
so far is the CHARMM-GUI Martini Maker, yet it does not include drug
ligands in its coarse-grained representations. I still have to search for
other means to do this. Thank you very much!

Best regards,
Mac Kevin E. Braza

On Mon, Apr 1, 2019 at 5:59 PM Peter Kroon  wrote:


Hi,

that's probably a tough cookie. My first instinct would be to just apply
more hardware, and do it all atomistically. A microsecond should be
within reach. Whether it's enough is a separate matter. The problem is
that most CG representations don't get the shape of both your pocket and
ligand exactly right, producing unreliable answers. In addition, in most
CG FFs hydrogen bonds are isotropic and not specific enough for this
kind of problem.

If "more hardware" is not an option you'll need to dive into literature
to see if people did CG protein-ligand binding/docking/unbinding
(depening on research question). I would also be very skeptical of any
(absolute) kinetics produced by CG simulations.

As a last ditch effort you could look into multiscaling, but that's a
research topic in its own.


Peter


On 01-04-19 11:49, Mac Kevin Braza wrote:

Thank you Prof. Lemkul,

I appreciate your comment on this part.

Sir Peter Kroon,

We want to do coarse-grained MD simulation to access long-timescale
events of the effect of ligand binding to the GPCR, at least a
microsecond. For now, the most accessible means for us is to do CGMD. But
we are currently cornered in choosing which set-up will best suit, and
whether it will allow us to see these events. We are also looking into
the possibility of coarse-graining the ligand, and if you can share your
expertise in coarse-graining the ligand that would be great.
I appreciate this Sir Kroon, thank you very much!

Best regards,
Mac Kevin E. Braza

On Mon, Apr 1, 2019 at 5:07 PM Peter Kroon  wrote:


If I may chip in: It really depends on what you're studying, and what
forcefield you're using to do it. Unfortunately there is no FF that
reproduces all behaviour accurately. The art is in picking one that (at
least) reproduces what you're interested in.


Peter

On 29-03-19 17:26, Justin Lemkul wrote:

On 3/29/19 9:17 AM, Mac Kevin Braza wrote:

Thank you Professor Lemkul,

But would you suggest how I can coarse-grain the ligand I am using? I
have been searching online resources but they do not work for us.

I don't work with CG simulations, so I'm not much help. I would think
that a CG parametrization of a ligand would remove all the detail
you'd normally want to see in terms of ligand-protein interactions.

-Justin



Re: [gmx-users] same initial velocities vs. -reprod

2019-03-22 Thread Benson Muite

Hi,

Yes, it is a finite-precision issue, and the order of computations may not 
be preserved by the compiler or by a parallel run. In most cases the 
results should be close; however, much analytical work remains in this 
area. A relevant paper (though there are many others) is:


Collange, Defour, Graillat and Iakymchuk

Numerical Reproducibility for the Parallel Reduction on Multi- and 
Many-Core Architectures


https://hal.archives-ouvertes.fr/hal-00949355v4/document

It is unclear whether it would be too expensive to implement such methods 
in GROMACS, though.
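As a quick illustration of why (a+b)+c != a+(b+c) in floating point (awk 
works with IEEE double precision):

$ awk 'BEGIN { a = 1e16; b = -1e16; c = 1.0;
               print "(a+b)+c =", (a+b)+c;
               print "a+(b+c) =", a+(b+c) }'
(a+b)+c = 1
a+(b+c) = 0

Here b+c rounds to -1e16, because 1.0 is smaller than the spacing between 
representable doubles near 1e16. A parallel reduction is free to group the 
terms differently on every run, so long sums can differ in the last bits.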


Benson

On 3/22/19 10:11 PM, Mala L Radhakrishnan wrote:

Hi Mark,

Thanks so much -- good to know that it's basically equivalent to different
starting velocities and I should expect them to be different.

I found this page that sort of explains it:
http://www.gromacs.org/Documentation/Terminology/Reproducibility

Out of curiosity, I was wondering if someone can point me to something that
explains why (a+b)+c != a+(b+c) sometimes for computations across multiple
processors.  Is it a finite precision issue?

thanks again,
M

On Fri, Mar 22, 2019 at 1:21 PM Mark Abraham 
wrote:


Hi,

The dynamic load balancing on by default for domain decomposition means
divergence happens by default. After a few ps it's logically equivalent to
starting from different velocities. See the GROMACS user guide for more
details on reproducibility questions!

Mark

On Fri., 22 Mar. 2019, 18:15 Mala L Radhakrishnan, 
Hi all,

We set up replicate simulations (same starting mdp files and structures)
that ran across GPUs WITHOUT using the -reprod flag, but we set gen-vel to
no and used the same ld-seed value for both with the v-rescale thermostat.

They also ran on the same machine -- so from a deterministic point of
view, I would expect them to be "exactly" the same.

The simulations, while having similar average energetics throughout,
sample different conformations, and they start to differ pretty much right
after the simulation starts.

I understand that I could have gotten results to be more reproducible by
using the -reprod flag, but in the case I describe (and I don't think I
have any other stochastic things going on unless I'm not understanding
ld-seed or gen-vel = no, or am forgetting something?), what is causing the
difference? Online, I see something about domain decomposition and
optimization, but I'd like to understand that better.

My major question, though, is -- are the differences due to domain
decomposition optimization enough to basically equal what you might get
from "replicates" starting with different starting velocities, especially
once equilibration (as measured by RMSD) is reached? That's what I'm
seeing, so I wanted to make sure that these differences can actually be
this big. Or is there some other source of stochasticity I'm forgetting?

Thanks so much, and I hope my question makes sense.

M

--
Mala L. Radhakrishnan
Whitehead Associate Professor of Critical Thought
Associate Professor of Chemistry
Wellesley College
106 Central Street
Wellesley, MA 02481
(781)283-2981

Re: [gmx-users] Simulation is very slow

2019-03-14 Thread Benson Muite

Dear Yeongkyu,

In what sense is it slow? What are you comparing to and for what input data?

Are you using both GPUs or only 1 at a time?

Do you have any idea of the relative performance differences between the 
GPUs you have? A recent inquiry on the list asked for a comparison between 
the two GPUs you mention.
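If both cards are visible, it may also help to time the same input pinned
to each GPU separately (a small sketch; md.tpr is a hypothetical input
name and the flags are those of GROMACS 2019 mdrun):

gmx mdrun -deffnm md -nb gpu -pin on -gpu_id 0
gmx mdrun -deffnm md -nb gpu -pin on -gpu_id 1

and then compare the ns/day reported at the end of each md.log.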


Benson

On 3/14/19 5:56 PM, 이영규 wrote:

Dear gromacs users,

I installed gromacs 2019 today. When I run gromacs, it is really slow. I
don't know the reason. I am using GTX 1080 TI and TITAN XP for GPU and I
have 8 cores. Please help me.

Sincerely



Re: [gmx-users] Videocard selection

2019-03-12 Thread Benson Muite

Hi!

For most applications single precision performance is most important - 
you may want to check whether this is fine for your workflow.


An older study is at:

https://arxiv.org/pdf/1507.00898.pdf

Regards,

Benson

On 3/12/19 2:18 PM, Никита Шалин wrote:

Dear Gromacs users,

I would like to buy a videocard for GPU calculations. I am choosing between 
RTX 2080Ti and Titan Xp. Please tell me which videocard to choose? And which 
characteristics of the videocard are important for the calculations?

I'm doing modeling of a copolymer system with an applied electric field.

Thank you in advance!



Re: [gmx-users] Video cards

2019-03-12 Thread Benson Muite

Hi,

Please add a subject to enable list readers to know what your message is 
about. Have you checked the list archives? What will you be simulating?


Regards,
Benson
On 3/12/19 2:04 PM, Никита Шалин wrote:

Dear Gromacs users,

I would like to buy a videocard for GPU calculations. I am choosing between 
RTX 2080Ti and Titan Xp. Please tell me which videocard to choose?




Re: [gmx-users] gromacs performance

2019-03-08 Thread Benson Muite
You seem to be using a relatively large number of GPUs. You may want to 
check your input data (many cases will not scale well, though ensemble 
runs are quite common). Perhaps check the speedup in going from 1 to 2 to 
4 GPUs on one node.
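For example, on one node you could time the same input with 1, 2 and 4
GPUs (a minimal sketch assuming a thread-MPI, non-MPI build of GROMACS and
a hypothetical bench.tpr; with your OpenMPI build you would vary mpirun
-np instead):

gmx mdrun -s bench.tpr -ntmpi 1 -ntomp 8 -gpu_id 0
gmx mdrun -s bench.tpr -ntmpi 2 -ntomp 8 -gpu_id 01
gmx mdrun -s bench.tpr -ntmpi 4 -ntomp 8 -gpu_id 0123

If ns/day barely improves from 1 to 4 GPUs, spreading the same system over
32 GPUs will not help.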


On 3/9/19 12:11 AM, Carlos Rivas wrote:

Hey guys,
Anybody running GROMACS on AWS?

I have a strong IT background, but zero understanding of GROMACS or 
OpenMPI (even less using SGE on AWS). Just trying to help some PhD folks 
with their work.

When I run GROMACS using thread-MPI on a single, very large node on AWS, 
things work fairly fast. However, when I switch from thread-MPI to 
OpenMPI, even though everything is detected properly, the performance is 
horrible.

This is what I am submitting to sge:

ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat sge.sh
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#$ -e out.err
#$ -o out.out
#$ -pe mpi 256

cd /shared/charmm-gui/gromacs
touch start.txt
/bin/bash /shared/charmm-gui/gromacs/run_eq.bash
touch end.txt

and this is my test script , provided by one of the Doctors:

ubuntu@ip-10-10-5-81:/shared/charmm-gui/gromacs$ cat run_eq.bash
#!/bin/bash
export GMXMPI="/usr/bin/mpirun --mca btl ^openib 
/shared/gromacs/5.1.5/bin/gmx_mpi"

export MDRUN="mdrun -ntomp 2 -npme 32"

export GMX="/shared/gromacs/5.1.5/bin/gmx_mpi"

for comm in min eq; do
    if [ $comm == min ]; then
        # energy minimization
        echo ${comm}
        $GMX grompp -f step6.0_minimization.mdp -o step6.0_minimization.tpr \
            -c step5_charmm2gmx.pdb -p topol.top
        $GMXMPI $MDRUN -deffnm step6.0_minimization
    fi

    if [ $comm == eq ]; then
        # six equilibration stages, each restarting from the previous one
        for step in `seq 1 6`; do
            echo $step
            if [ $step -eq 1 ]; then
                echo ${step}
                $GMX grompp -f step6.${step}_equilibration.mdp \
                    -o step6.${step}_equilibration.tpr \
                    -c step6.0_minimization.gro -r step5_charmm2gmx.pdb \
                    -n index.ndx -p topol.top
                $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
            fi
            if [ $step -gt 1 ]; then
                old=`expr $step - 1`
                echo $old
                $GMX grompp -f step6.${step}_equilibration.mdp \
                    -o step6.${step}_equilibration.tpr \
                    -c step6.${old}_equilibration.gro -r step5_charmm2gmx.pdb \
                    -n index.ndx -p topol.top
                $GMXMPI $MDRUN -deffnm step6.${step}_equilibration
            fi
        done
    fi
done




During the output I see this, and I get really excited expecting blazing 
speeds, and yet it is much worse than a single node:

Command line:
   gmx_mpi mdrun -ntomp 2 -npme 32 -deffnm step6.0_minimization


Back Off! I just backed up step6.0_minimization.log to 
./#step6.0_minimization.log.6#

Running on 4 nodes with total 128 cores, 256 logical cores, 32 compatible GPUs
   Cores per node:   32
   Logical cores per node:   64
   Compatible GPUs per node:  8
   All nodes have identical type(s) of GPUs
Hardware detected on host ip-10-10-5-89 (the node of MPI rank 0):
   CPU info:
 Vendor: GenuineIntel
 Brand:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
 SIMD instructions most likely to fit this hardware: AVX2_256
 SIMD instructions selected at GROMACS compile time: AVX2_256
   GPU info:
 Number of GPUs detected: 8
 #0: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #1: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #2: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #3: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #4: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #5: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #6: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible
 #7: NVIDIA Tesla V100-SXM2-16GB, compute cap.: 7.0, ECC: yes, stat: 
compatible

Reading file step6.0_minimization.tpr, VERSION 5.1.5 (single precision)
Using 256 MPI processes
Using 2 OpenMP threads per MPI process

On host ip-10-10-5-89 8 compatible GPUs are present, with IDs 0,1,2,3,4,5,6,7
On host ip-10-10-5-89 8 GPUs auto-selected for this run.
Mapping of GPU IDs to the 56 PP ranks in this node: 
0,0,0,0,0,0,0,1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4,5,5,5,5,5,5,5,6,6,6,6,6,6,6,7,7,7,7,7,7,7



Any suggestions? Greatly appreciate the help.


Carlos J. Rivas
Senior AWS Solutions Architect - Migration Specialist




Re: [gmx-users] script

2019-03-07 Thread Benson Muite
This is a welcoming and friendly community, with somewhat busy but still 
nice people. The manual is available at:


http://manual.gromacs.org/documentation/

One can search for temperature information here:

http://manual.gromacs.org/documentation/current/reference-manual/index.html

to find:

http://manual.gromacs.org/documentation/current/user-guide/mdp-options.html?highlight=temperature
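For example, to vary the reference temperature from a bash script, a sed
substitution over the .mdp may be all that is needed (a sketch, assuming
your file uses the key ref_t with two coupling groups; grompp treats
dashes and underscores in option names the same, so match whichever
spelling your file uses):

for T in 290 300 310; do
    sed "s/^ref_t.*/ref_t = $T $T/" NPT.mdp > NPT_${T}K.mdp
done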

On 3/7/19 4:56 PM, Quyen Vu wrote:

my question for you: have you ever read gromacs manual/documentation?

On Thu, Mar 7, 2019 at 1:14 PM Amin Rouy  wrote:


Hi

I would like to change the temperature of the simulation (NPT.mdp) inside
my bash script, does anyone have an idea?
thanks


Re: [gmx-users] gmx tcaf directions

2019-02-08 Thread Benson Muite
Which part of the manual does this refer to?

On 2/8/19 12:42 PM, Amin Rouy wrote:
> Hi,
>
> I am sorry, but the manual is not clear to me. Does gmx tcaf calculate the
> viscosity in all x,y,z directions?
> the manual says:
> Transverse currents are calculated using the k-vectors (1,0,0) and (2,0,0)
> each also in the *y*- and *z*-direction


Re: [gmx-users] gmx trjconv -center not working?

2019-02-06 Thread Benson Muite
Hi,

There are a few issues related to gmx trjconv -center:

a) searching for "-center" in the GROMACS issue tracker at
http://redmine.gromacs.org turns up several related reports

b) http://redmine.gromacs.org/issues/2579

but I am not sure if they exactly match what you have.
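In case it is useful while those issues are open, the commonly suggested
centering recipe is something like (a sketch using your file names; note
that -pbc mol keeps molecules whole, which you said you want to avoid, so
it may not fit your case):

gmx trjconv -f md.xtc -s md.tpr -n INDEX.ndx -o md_201000ps.pdb \
            -b 201000 -e 201000 -center -pbc mol -ur compact

choosing your group for centering and System for output.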

Benson

On 2/6/19 7:39 PM, Mala L Radhakrishnan wrote:
> ...and as a follow up, it doesn't seem to be centering at either the
> geometric center OR the center of mass.  Any insight would be appreciated.
>
> Mala
>
> On Wed, Feb 6, 2019 at 10:15 AM Mala L Radhakrishnan 
> wrote:
>
>> Hi,
>>
>> It would be helpful to know if this is a known bug/issue.  I'd rather not
>> have to write code to recenter in this case -- would be easy if I just had
>> one molecule far from the edges of the box but I am simulating a crowded
>> system and need to get it centered on one complex within the system while
>> still maintaining the cubic shape of the box -- I suppose I could write a
>> script to pass atoms from one side of the box to the other as a result of
>> the translation, but the box size slightly changes at each step (it's an
>> NPT simulation) and I fear I'd be messing things up -- so I'd really rather
>> not have to do that if there is actually a way with -center.  I can't get
>> anything to work the way the documentation says it's supposed to.
>>
>> Does anyone out there know if this is a known bug?  Any of the developers,
>> perhaps?
>>
>> Thanks,
>>
>> Mala
>>
>> On Wed, Feb 6, 2019 at 9:30 AM Gyorgy Hantal 
>> wrote:
>>
>>> Hi,
>>>
>>> I also had that experience with trjconv with various versions. I ended up
>>> doing the centering by a script.
>>> I know on the other hand that centering works well in the case of, for
>>> example, gmx density. Yet I've had no luck with trjconv...
>>> Maybe this is a known issue..?
>>>
>>> Regards,
>>> Gyorgy
>>>
>>> On Tue, 5 Feb 2019 at 20:27, Mala L Radhakrishnan wrote:

 Hi again,

 As a follow up to my earlier email -- I added -pbc atom to the flags,
 for a total command line of:

 gmx trjconv -f md.xtc -o md_201000ps.pdb -s md.tpr -b 201000 -e 201000
 -n INDEX.ndx -pbc atom -center

 This brought it "closer" to the group I chose being centered, but it is
 still not centered at the geometric center of the group I chose. I'm
 not sure why including -pbc atom would make things different from not
 including it in this case, either. Any help would be appreciated.

 Mala

 On Tue, Feb 5, 2019 at 11:37 AM Mala L Radhakrishnan
 <mradh...@wellesley.edu> wrote:

> Hi,
>
> I am trying to extract snapshots and center on a particular group, but
> the center of the box does not correspond to the geometric center of the
> group as expected. It is centered on the outer edge of the group. Here
> is the command I am using:
>
> gmx trjconv -f md.xtc -o md_201000ps.pdb -s md.tpr -b 201000 -e 201000
> -n INDEX.ndx -center
>
> ...and I then choose the group in the index file that corresponds to
> what I want right in the center; I have verified the correctness of the
> index file. I ran my simulation with pbc but in this case I specifically
> want to maintain the same cubic box used in the simulations, so I don't
> want to keep molecules whole. So all I want to do is extract the cubic
> box that is centered on the group I want. But the resulting snapshot has
> the molecule slightly (about 10%) off center, mainly in the y direction
> in this case. I presume that center will place the geometric center of
> the group you choose at the geometric center of the box. Am I incorrect?
> Is there another way to accomplish what I want?
>
> Thank you so much,
>
> Mala
>
> --
> Mala L. Radhakrishnan
> Whitehead Associate Professor of Critical Thought
> Associate Professor of Chemistry
> Wellesley College
> 106 Central Street
> Wellesley, MA 02481
> (781)283-2981

 --
 Mala L. Radhakrishnan
 Whitehead Associate Professor of Critical Thought
 Associate Professor of Chemistry
 Wellesley College
 106 Central Street
 Wellesley, MA 02481
 (781)283-2981

>>>
>>> --
>>> ---
>>> Gyorgy Hantal, PhD
>>> Postdoctoral Fellow
>>> Dept. of Computational Physics, University of Vienna
>>> Sensengasse 8/9, 1090 Wien, Austria
>>> gyorgy.han...@univie.ac.at 
>>> Tel. 

Re: [gmx-users] Gromacs Tutorials

2019-01-31 Thread Benson Muite
Satya,

It is helpful to have a subject in your messages. A possible start:

http://www.mdtutorials.com/gmx/index.html

Benson

On 1/31/19 7:18 AM, Satya Ranjan Sahoo wrote:
> Sir,
> I am a beginner to GROMACS. I was unable to understand how to create all
> the ions.mdp, md.mdp, mout.mdp, minim.mdp, newbox.mdp, npt.mdp,
> nvt.mdp, posre.itp, topol.top input files for molecular simulation of my
> molecule. Please teach me how I can generate or create all the above
> mentioned input files for my molecule.




Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Benson Muite
The redmine ticket you mention gives output obtained from an execution with 
verbose output:

gmx mdrun -deffnm md200ns -v

and

gmx mdrun -deffnm md200ns -v -nb gpu -pme cpu

The first option fails, but gives a source code line number indicating where 
to check things. The second one runs. The build uses 
-DCMAKE_BUILD_TYPE=RelWithDebug. Is it possible to get more information 
using these run options?

On 1/30/19 7:29 PM, Tafelmeier, Stefanie wrote:

To your question:
For the trials with newer Gromacs (>2016) versions we simply use (as I 
understood, it is not necessary to target the 6.1 architecture with these 
versions)
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on

for the trials with older gromacs (<2018) versions we used:
cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on GMX_CUDA_TARGET_SM=6.1 
GMX_CUDA_TARGET_COMPUTE=6.1

Could it be that the combination of CUDA 9.2 (or 10), gromacs 2019 and gcc 
7.3.0 is causing some trouble?
Does anyone have experience with this?

Many thanks in advance for the answer,
Steffi



-----Original Message-----
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se On Behalf Of 
Benson Muite
Sent: Wednesday, 30 January 2019 18:13
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] WG: Issue with CUDA and gromacs

What is your cmake build command?

Have you tried to specify compute capabilities?

http://manual.gromacs.org/documentation/2019/install-guide/index.html#cuda-gpu-acceleration

GMX_CUDA_TARGET_SM=6.1

GMX_CUDA_TARGET_COMPUTE=6.1

References:

https://developer.nvidia.com/cuda-gpus

https://www.myzhar.com/blog/tutorials/tutorial-nvidia-gpu-cuda-compute-capability/

On 1/30/19 6:20 PM, Tafelmeier, Stefanie wrote:

Please excuse, the tables didn't work, I hope this is better:



Dear all,



We are facing an issue with the CUDA toolkit.

We tried several combinations of gromacs versions and CUDA Toolkits. No 
toolkit older than 9.2 was possible to try, as there are no NVIDIA drivers 
available for a Quadro P6000 with the older toolkits.





Gromacs  +  CUDA  =>  Error message

2019     +  10.0  =>  gmx mdrun:
                      Assertion failed:
                      Condition: stat == cudaSuccess
                      Asynchronous H2D copy failed

2019     +  9.2   =>  gmx mdrun:
                      Assertion failed:
                      Condition: stat == cudaSuccess
                      Asynchronous H2D copy failed

2018.5   +  9.2   =>  gmx mdrun: Fatal error:
                      HtoD cudaMemcpyAsync failed: invalid argument

5.1.5    +  9.2   =>  Installation make: nvcc fatal: Unsupported gpu
                      architecture 'compute_20'*

2016.2   +  9.2   =>  Installation make: nvcc fatal: Unsupported gpu
                      architecture 'compute_20'*





*We also tried to set the target CUDA architectures as described in the 
installation guide 
(manual.gromacs.org/documentation/2019/install-guide/index.html). Unfortunately 
it didn't work.

Performing simulations on CPU only always works, yet of course they are 
slower than they could be when additionally using the GPU.

The issue #2761 (https://redmine.gromacs.org/issues/2762) seems similar to our 
problem.

Even though this issue is still open, we wanted to ask if you can give us any 
information about how to solve this problem?



Many thanks in advance.

Best regards,

Stefanie Tafelmeier





Further details if necessary:

The workstation:

2 x Xeon Gold 6152 @ 3,7Ghz (22 K, 44Th, AVX512) Nvidia Quadro P6000 with 3840 
Cuda-Cores



The simulations system:

Long chain alkanes (previously used with gromacs 5.1.5 and CUDA 7.5 - worked 
perfectly)








ZAE Bayern
Stefanie Tafelmeier
Bereich Energiespeicherung/Division Energy Storage
Thermische Energiespeicher/Thermal Energy Storage
Walther-Meißner-Str. 6
85748 Garching

Tel.: +49 89 329442-75
Fax: +49 89 329442-12
stefanie.tafelme...@zae-bayern.de
http://www.zae-bayern.de



Re: [gmx-users] WG: Issue with CUDA and gromacs

2019-01-30 Thread Benson Muite
What is your cmake build command?

Have you tried to specify compute capabilities?

http://manual.gromacs.org/documentation/2019/install-guide/index.html#cuda-gpu-acceleration

GMX_CUDA_TARGET_SM=6.1

GMX_CUDA_TARGET_COMPUTE=6.1
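
For example (a sketch: these are CMake cache variables, so they need a -D 
prefix, and compute capability 6.1 is written without the dot):

cmake .. -DGMX_BUILD_OWN_FFTW=on -DGMX_GPU=on \
  -DGMX_CUDA_TARGET_SM=61 -DGMX_CUDA_TARGET_COMPUTE=61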

References:

https://developer.nvidia.com/cuda-gpus

https://www.myzhar.com/blog/tutorials/tutorial-nvidia-gpu-cuda-compute-capability/

On 1/30/19 6:20 PM, Tafelmeier, Stefanie wrote:

Please excuse, the tables didn't work, I hope this is better:



Dear all,



We are facing an issue with the CUDA toolkit.

We tried several combinations of gromacs versions and CUDA Toolkits. No 
toolkit older than 9.2 was possible to try, as there are no NVIDIA drivers 
available for a Quadro P6000 with the older toolkits.





Gromacs  +  CUDA  =>  Error message

2019     +  10.0  =>  gmx mdrun:
                      Assertion failed:
                      Condition: stat == cudaSuccess
                      Asynchronous H2D copy failed

2019     +  9.2   =>  gmx mdrun:
                      Assertion failed:
                      Condition: stat == cudaSuccess
                      Asynchronous H2D copy failed

2018.5   +  9.2   =>  gmx mdrun: Fatal error:
                      HtoD cudaMemcpyAsync failed: invalid argument

5.1.5    +  9.2   =>  Installation make: nvcc fatal: Unsupported gpu
                      architecture 'compute_20'*

2016.2   +  9.2   =>  Installation make: nvcc fatal: Unsupported gpu
                      architecture 'compute_20'*





*We also tried to set the target CUDA architectures as described in the 
installation guide 
(manual.gromacs.org/documentation/2019/install-guide/index.html). Unfortunately 
it didn't work.

Performing simulations on CPU only always works, yet of course they are 
slower than they could be when additionally using the GPU.

The issue #2761 (https://redmine.gromacs.org/issues/2762) seems similar to our 
problem.

Even though this issue is still open, we wanted to ask if you can give us any 
information about how to solve this problem?



Many thanks in advance.

Best regards,

Stefanie Tafelmeier





Further details if necessary:

The workstation:

2 x Xeon Gold 6152 @ 3,7Ghz (22 K, 44Th, AVX512) Nvidia Quadro P6000 with 3840 
Cuda-Cores



The simulations system:

Long chain alkanes (previously used with gromacs 5.1.5 and CUDA 7.5 - worked 
perfectly)








ZAE Bayern
Stefanie Tafelmeier
Bereich Energiespeicherung/Division Energy Storage
Thermische Energiespeicher/Thermal Energy Storage
Walther-Meißner-Str. 6
85748 Garching

Tel.: +49 89 329442-75
Fax: +49 89 329442-12
stefanie.tafelme...@zae-bayern.de
http://www.zae-bayern.de







Re: [gmx-users] Gromacs 2018.5 with CUDA

2019-01-30 Thread Benson Muite
Hi,

Do you get the same build errors with Gromacs 2019?

What operating system are you using?

What GPU do you have?

Do you have a newer version of GCC?

Benson

On 1/30/19 5:56 PM, Владимир Богданов wrote:
Hi,

Yes, I think so, because it seems to be working with NAMD-CUDA right now:

Wed Jan 30 10:39:34 2019
+-+
| NVIDIA-SMI 390.77 Driver Version: 390.77|
|---+--+--+
| GPU  NamePersistence-M| Bus-IdDisp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage | GPU-Util  Compute M. |
|===+==+==|
|   0  TITAN XpOff  | :65:00.0  On |  N/A |
| 53%   83CP2   175W / 250W |   2411MiB / 12194MiB | 47%  Default |
+---+--+--+

+-+
| Processes:   GPU Memory |
|  GPU   PID   Type   Process name Usage  |
|=|
|0  1258  G   /usr/lib/xorg/Xorg40MiB |
|0  1378  G   /usr/bin/gnome-shell  15MiB |
|0  7315  G   /usr/lib/xorg/Xorg   403MiB |
|0  7416  G   /usr/bin/gnome-shell 284MiB |
|0 12510  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12651  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12696  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12737  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12810  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 12868  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   235MiB |
|0 20688  C   ..._2.11_Linux-x86_64-multicore-CUDA/namd2   251MiB |
+-+

After the unsuccessful GROMACS run, I ran NAMD.

Best,

Vlad


30.01.2019, 10:59, "Mark Abraham" wrote:

Hi,

Does nvidia-smi report that your GPUs are available to use?

Mark

On Wed, 30 Jan 2019 at 07:37 Владимир Богданов
<bogdanov-vladi...@yandex.ru> wrote:


 Hey everyone!

 I need help, please. When I try to run MD with GPU I get the following error:

 Command line:

 gmx_mpi mdrun -deffnm md -nb auto



 Back Off! I just backed up md.log to ./#md.log.4#

 NOTE: Detection of GPUs failed. The API reported:

 GROMACS cannot run tasks on a GPU.

 Reading file md.tpr, VERSION 2018.2 (single precision)

 Changing nstlist from 20 to 80, rlist from 1.224 to 1.32



 Using 1 MPI process

 Using 16 OpenMP threads



 Back Off! I just backed up md.xtc to ./#md.xtc.2#



 Back Off! I just backed up md.trr to ./#md.trr.2#



 Back Off! I just backed up md.edr to ./#md.edr.2#

 starting mdrun 'Protein in water'

 3000 steps, 6.0 ps.

 I built gromacs with MPI=on and CUDA=on and the compilation process looked
 good. I ran gromacs 2018.2 with CUDA 5 months ago and it worked, but now it
 doesn't work.

 Information from *.log file:

 GROMACS version: 2018.2

 Precision: single

 Memory model: 64 bit

 MPI library: MPI

 OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)

 GPU support: CUDA

 SIMD instructions: AVX_512

 FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512

 RDTSCP usage: enabled

 TNG support: enabled

 Hwloc support: disabled

 Tracing support: disabled

 Built on: 2018-06-24 02:55:16

 Built by: vlad@vlad [CMAKE]

 Build OS/arch: Linux 4.13.0-45-generic x86_64

 Build CPU vendor: Intel

 Build CPU brand: Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz

 Build CPU family: 6 Model: 85 Stepping: 4

 Build CPU features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl
 clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid
 pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2
 ssse3 tdt x2apic

 C compiler: /usr/bin/cc GNU 5.4.0

 C compiler flags: -mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops
 -fexcess-precision=fast

 C++ compiler: /usr/bin/c++ GNU 5.4.0

 C++ compiler flags: -mavx512f -mfma -std=c++11 -O3 -DNDEBUG
 -funroll-all-loops -fexcess-precision=fast

 CUDA compiler: /usr/local/cuda-9.2/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
 driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on
 Wed_Apr_11_23:16:29_CDT_2018;Cuda compilation tools, release 9.2, V9.2.88

 CUDA compiler
 

Re: [gmx-users] Please explain why this article has been accepted

2018-12-26 Thread Benson Muite
What are the main concerns/inaccuracies? Is it worth repeating?

On 12/26/18 1:31 PM, milad bagheri wrote:
> Please explain why this article has been accepted
> Please refer to the RMSD chart
>
> https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0064364



[gmx-users] numpy related problem in GROMACS protein-ligand file preperation

2018-12-06 Thread Benson Muite
Dear Seketoulie,

Great. If replying to a digest, please change the subject so readers
know what is useful.

Regards,

Benson

On 12/7/18 7:12 AM, Seketoulie Keretsu wrote:
> Dear
> Thank you.
>
> "yum install numpy" worked for me. Surprised why i haven't tried that long 
> back.
> On Fri, Dec 7, 2018 at 1:41 PM
>  wrote:
>> Today's Topics:
>>
>>1. Re: numpy related problem in GROMACS protein-ligand file
>>   preperation (Benson Muite)
>>2. How restrain the end-to-end distance in simulation?
>>   (Mehdi Bagherpour)
>>3. Re: mdrun-adjusted cutoffs?! (Alex)
>>4. Re: mdrun-adjusted cutoffs?! (Mark Abraham)
>>
>>
>> --
>>
>> Message: 1
>> Date: Thu, 6 Dec 2018 12:51:35 +
>> From: Benson Muite 
>> To: "gromacs.org_gmx-users@maillist.sys.kth.se"
>> 
>> Subject: Re: [gmx-users] numpy related problem in GROMACS
>> protein-ligand file preperation
>> Message-ID: <2709ee01-b64b-acd2-0595-298bd7cad...@ut.ee>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hi Seketoulie,
>>
>> If you have administrator rights on a CentOS system
>>
>> sudo yum search numpy
>>
>> will let you know what numpy versions have already been packaged.
>>
>> You can also use
>>
>> pip install --user numpy
>>
>> or build from source:
>>
>> https://docs.scipy.org/doc/numpy-1.10.1/user/install.html
>>
>> Regards,
>>
>> Benson
>>
>> On 12/6/18 1:57 PM, Seketoulie Keretsu wrote:
>>> Dear Experts,
>>>
>>> I am fairly new to gromacs (and linux CENTOS). I have recently
>>> installed the Gromacs18 successfully. However while doing the
>>> Protein-Lig tutorial I came across this problem while running the
>>> python script:
>>>
>>> Traceback (most recent call last):
>>>   File "cgenff_charmm2gmx.py", line 46, in 
>>> import numpy as np
>>> ImportError: No module named numpy
>>>
>>>
>>> I have python 2.7.5 installed on my system. I am unable to find
>>> solutions related to this. Kindly advise how to correct this?  A hint
>>> on the possible cause will be awesome too.
>>>
>>> Note: I also have Amber18 installed on my the same system which
>>> apparently installs numpy.
>>>
>>> Thanking you.
>>>
>>> Sincerely,
>>> Seketoulie
>>
>> --
>>
>> Message: 2
>> Date: Thu, 6 Dec 2018 15:58:04 +0100
>> From: Mehdi Bagherpour 
>> To: gromacs.org_gmx-users@maillist.sys.kth.se
>> Subject: [gmx-users] How restrain the end-to-end distance in
>> simulation?
>> Message-ID:
>> 
>> Content-Type: text/plain; charset="UTF-8"
>>
>> Dear all,
>>
>> I am new in Gromacs and would like to restrain the the end-to-end distance
>> of a bend DNA. I mean I want to restraint the distance between COM of end
>> base-pairs in simulation.
>>
>> I would appreciate if you could let me know how to do that.
>>
>> Cheer,
>> Mahdi
>>
>>
>> --
>>
>> Message: 3
>> Date: Thu, 6 Dec 2018 12:39:03 -0700
>> From: Alex 
>> To: gmx-us...@gromacs.org
>> Subject: Re: [gmx-users] mdrun-adjusted cutoffs?!
>> Message-ID: <8ffb83a0-4298-4dae-449a-65edad72b...@gmail.com>
>> Content-Type: text/plain; charset=utf-8; format=flowed
>>
>> I'm not ignoring the long-range contribution, but yes, most of the
>> effects I am talking about are short-range. What I am asking is how much
>> the free energy of ionic hydration for K+ changes in, say, a system that
>> contains KCl in bulk water -- with and without autotuning. Hence also
>>

Re: [gmx-users] numpy related problem in GROMACS protein-ligand file preperation

2018-12-06 Thread Benson Muite
Hi Seketoulie,

If you have administrator rights on a CentOS system

sudo yum search numpy

will let you know what numpy versions have already been packaged.

You can also use

pip install --user numpy

or build from source:

https://docs.scipy.org/doc/numpy-1.10.1/user/install.html

Regards,

Benson

On 12/6/18 1:57 PM, Seketoulie Keretsu wrote:
> Dear Experts,
>
> I am fairly new to gromacs (and Linux CentOS). I have recently
> installed Gromacs 2018 successfully. However, while doing the
> protein-ligand tutorial I came across this problem while running the
> python script:
>
> Traceback (most recent call last):
>   File "cgenff_charmm2gmx.py", line 46, in 
> import numpy as np
> ImportError: No module named numpy
>
>
> I have python 2.7.5 installed on my system. I am unable to find
> solutions related to this. Kindly advise how to correct this?  A hint
> on the possible cause will be awesome too.
>
> Note: I also have Amber18 installed on the same system, which
> apparently installs numpy.
>
> Thanking you.
>
> Sincerely,
> Seketoulie



Re: [gmx-users] free binding energy calculation

2018-11-23 Thread Benson Muite
Probably not so helpful, but have you tried the latest version of the
tutorial, for Gromacs 2018:

http://www.mdtutorials.com/gmx/free_energy/03_workflow.html
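The error message itself also points at a fix: sc-power must be an
integer, so changing the soft-core setting in your .mdp from

sc-power = 1.0

to

sc-power = 1

should get grompp past this point (assuming the rest of the soft-core
settings follow the tutorial).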

On 11/23/18 8:57 PM, marzieh dehghan wrote:
> Dear all
>
> I want to run a free energy calculation under gromacs 5.1.4 and used
> the following link
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/free_energy/04_EM.html
>
> When I run ./job.sh, I encounter the following error:
> Right hand side '1.0' for parameter 'sc-power' in parameter file is not
> an integer value
>
> please let me know how to solve this problem.
> Thanks a lot
> Marzieh



Re: [gmx-users] Error: Cannot set thread affinities on the current platform

2018-11-23 Thread Benson Muite
Generally, thread affinities are how software threads are mapped to
hardware cores:
https://en.wikipedia.org/wiki/Processor_affinity
https://computing.llnl.gov/tutorials/openMP/ProcessThreadAffinity.pdf

This may have some impact on speed (depending on the processor, the
program being run, and the data being processed); see for example:
https://software.intel.com/en-us/node/522691
http://developer.amd.com/wp-content/resources/56263-Performance-Tuning-Guidelines-PUB.pdf

It usually should not change results significantly - expect only changes
in rounding errors.
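In GROMACS the pinning can also be requested explicitly on the mdrun
command line (a small sketch; the -deffnm name is hypothetical):

gmx mdrun -deffnm md -pin on

-pin on asks mdrun to set the affinities itself; the error you saw means
it could not do so on your platform, and the run then simply proceeds
with the operating system's default thread placement.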

On 11/23/18 8:03 PM, Neena Susan Eappen wrote:
> What do these thread affinities refer to?
> Does that error have an impact on simulations? My simulations were still 
> completed without any halt in between.
>
> Thank you,
> Neena




Re: [gmx-users] IMAC enable GPU problem

2018-11-21 Thread Benson Muite
Hi,

It is unlikely that a homebrew build has OpenCL enabled. You will likely need 
to build it from source. A number of people have built it for Mac, but you 
might have to figure out how to get OpenCL support:

https://support.apple.com/en-us/HT202823

As a first step, check that the quick and dirty build instructions work:

http://manual.gromacs.org/documentation/current/install-guide/index.html#quick-and-dirty-installation

Then, to get a GPU enabled build, perhaps try:

tar xfz gromacs-2018.4.tar.gz
cd gromacs-2018.4
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on 
-DGMX_USE_OPENCL=on
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC

Benson

On 11/21/18 12:53 PM, Jiyong Su wrote:

Dear Gromacs,
I installed gromacs 2018.4 on my iMac via homebrew.
However, I could not enable the GPU. There is a Radeon Pro 570 4096 MB 
graphics card in this Mac. Please tell me how to enable this GPU.


GROMACS version:2018.3
Precision:  single
Memory model:   64 bit
MPI library:thread_mpi
OpenMP support: disabled
GPU support:disabled
SIMD instructions:  SSE4.1
FFT library:fftw-3.3.8-sse2
RDTSCP usage:   enabled
TNG support:enabled
Hwloc support:  disabled
Tracing support:disabled
Built on:   2018-08-25 18:45:39
Built by:   brew@Sierra.local [CMAKE]
Build OS/arch:  Darwin 16.7.0 x86_64
Build CPU vendor:   Intel
Build CPU brand:Intel(R) Xeon(R) CPU   X5570  @ 2.93GHz
Build CPU family:   6   Model: 26   Stepping: 4
Build CPU features: apic clfsh cmov cx8 cx16 intel lahf mmx msr nonstop_tsc 
popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler: /usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang 
AppleClang 9.0.0.939
C compiler flags:-msse4.1-Wno-unknown-pragmas  -DNDEBUG
C++ compiler:   
/usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang++ AppleClang 
9.0.0.939
C++ compiler flags:  -msse4.1-std=c++11  -Wno-unknown-pragmas  -DNDEBUG



Best regards,


Jiyong Su

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] VMD visualization of clusters

2018-11-19 Thread Benson Muite
Hi Rahma,

Have you tried any other visualization programs? Is there an example
file one could try for visualization?

Benson

On 11/19/18 2:57 PM, Rahma Dahmani wrote:
> Hi GMX users,
>
> After visualizing one of the clusters generated by the gmx cluster command in
> GROMACS, I couldn't change the representation type in VMD from Lines to
> NewCartoon or a secondary-structure representation,
> so I am wondering if this is related to the structure of the cluster? ... Why
> can I visualize the cluster only in Lines?
>
> Thank you!
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] generic hardware assembling for gromacs simulation

2018-11-19 Thread Benson Muite
This should probably work. What operating system are you using? Are you
using a recent build of GROMACS, such as 2018.4? If so, have you tried
the instructions here:

http://manual.gromacs.org/documentation/current/install-guide/index.html

Have you tried building any of the CUDA example programs?
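
A quick way to check (a sketch, assuming a default CUDA install on the PATH):

nvidia-smi       # does the driver see the GTX 1050 Ti?
nvcc --version   # is the CUDA toolkit installed?
cd gromacs-2018.4/build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on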

On 11/19/18 2:25 PM, Seketoulie Keretsu wrote:
> Dear Users.
>
> I apologise that this is not exactly a GROMACS simulation question.
>
> I am a student and currently I am trying to build a Linux system for
> GROMACS simulation. I have seen some materials about utilizing GPUs
> and multiprocessors but I can't fully understand some problems. I have
> a system available with the configuration below:
>
> GPU: Zotac 1050 Ti 4 GB GPU
>
> Processor: i5 quad core 3.10 GHz
> RAM: 8 GB DDR4 Corsair RAM
> Storage: 250 GB HDD
> [also Gigabyte motherboard, 650 W power supply, 500 GB external]
>
> Would it be possible to utilize this GPU to enhance MD simulation
> performance? If possible, would you suggest how to go about this?
> Would it be possible to maximise the use of the resources if the OS is
> installed with proper configurations?
>
> Thanking you.
>
> Sincerely,
> Seke
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2018.3 ansible playbook for wget, extract, build, install

2018-11-10 Thread Benson Muite
Hi Darin,

Easybuild (https://easybuilders.github.io/easybuild/):
https://github.com/easybuilders/easybuild-easyblocks/blob/master/easybuild/easyblocks/g/gromacs.py
Spack (https://github.com/spack/spack):
https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gromacs/package.py

A bit of searching found the following, which can likely be updated to
the latest versions:
Ansible playbook:
https://github.com/LIP-Computing/ansible-role-gromacs
Docker file:
https://github.com/soellman/gromacs/blob/master/Dockerfile
Nvidia provided docker/singularity image:
https://ngc.nvidia.com/registry/hpc-gromacs
Forked from Intel singularity repository:
https://github.com/luco2018/Intel-HPC-Container/tree/master/containers/gromacs

It would be useful to know what you choose to use.

Regards,
Benson


On 11/9/18 7:00 PM, Benson Muite wrote:
> There is also an rpm:
>
> https://centos.pkgs.org/7/epel-x86_64/gromacs-2018.2-1.el7.x86_64.rpm.html
>
> for other versions see:
>
> https://pkgs.org/download/gromacs
>
> On 11/9/18 6:54 PM, Benson Muite wrote:
>> The installation instructions 
>> (http://manual.gromacs.org/documentation/2018/install-guide/index.html) make 
>> it possible to write a simple shell script - though optimizing may require a 
>> few more tests. For example, to install a basic configuration in 
>> $HOME/gromacs use
>>
>> wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2018.3.tar.gz
>> tar xfz gromacs-2018.3.tar.gz
>> cd gromacs-2018.3
>> mkdir build
>> cd build
>> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON 
>> -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
>> make
>> make check
>> sudo make install
>> source $HOME/gromacs/bin/GMXRC
>>
>> On 11/9/18 5:59 PM, Darin Lory-External wrote:
>> To distribution,
>>
>> Does anybody have a GROMACS 2018.3 (or any version) ansible playbook for 
>> wget, extract, build, install or even a nice shell script for Linux (RHEL, 
>> but I can convert other linuxes)
>>
>> Darin S Lory
>> Email: darin.l...@regeneron.com
>>
>>

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2018.3 ansible playbook for wget, extract, build, install

2018-11-09 Thread Benson Muite
There is also an rpm:

https://centos.pkgs.org/7/epel-x86_64/gromacs-2018.2-1.el7.x86_64.rpm.html

for other versions see:

https://pkgs.org/download/gromacs

On 11/9/18 6:54 PM, Benson Muite wrote:
> The installation instructions 
> (http://manual.gromacs.org/documentation/2018/install-guide/index.html) make 
> it possible to write a simple shell script - though optimizing may require a 
> few more tests. For example, to install a basic configuration in $HOME/gromacs 
> use
>
> wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2018.3.tar.gz
> tar xfz gromacs-2018.3.tar.gz
> cd gromacs-2018.3
> mkdir build
> cd build
> cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON 
> -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
> make
> make check
> sudo make install
> source $HOME/gromacs/bin/GMXRC
>
> On 11/9/18 5:59 PM, Darin Lory-External wrote:
> To distribution,
>
> Does anybody have a GROMACS 2018.3 (or any version) ansible playbook for 
> wget, extract, build, install or even a nice shell script for Linux (RHEL, 
> but I can convert other linuxes)
>
> Darin S Lory
> Email: darin.l...@regeneron.com
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS 2018.3 ansible playbook for wget, extract, build, install

2018-11-09 Thread Benson Muite
The installation instructions 
(http://manual.gromacs.org/documentation/2018/install-guide/index.html) make it 
possible to write a simple shell script - though optimizing may require a few 
more tests. For example, to install a basic configuration in $HOME/gromacs use

wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2018.3.tar.gz
tar xfz gromacs-2018.3.tar.gz
cd gromacs-2018.3
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON 
-DCMAKE_INSTALL_PREFIX=$HOME/gromacs
make
make check
sudo make install
source $HOME/gromacs/bin/GMXRC

On 11/9/18 5:59 PM, Darin Lory-External wrote:
To distribution,

Does anybody have a GROMACS 2018.3 (or any version) ansible playbook for wget, 
extract, build, install or even a nice shell script for Linux (RHEL, but I can 
convert other linuxes)

Darin S Lory
Email: darin.l...@regeneron.com


-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Opening PDB files in GROMACS on Windows using Cygwin

2018-11-02 Thread Benson Muite
What version of GROMACS are you using?
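
That error usually means the file is not in the shell's current working 
directory. A sketch of how to check under Cygwin (the Windows path below is a 
hypothetical example):

pwd                                     # where is the shell now?
ls *.pdb                                # is 1AKI_clean.pdb here?
cd /cygdrive/c/Users/<name>/Downloads   # hypothetical download location
gmx pdb2gmx -f 1AKI_clean.pdb -o 1AKI_processed.gro -water spce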

On 11/2/18 5:45 PM, Neena Susan Eappen wrote:
> Hello GROMACS users,
>
>
> I am a first time user of GROMACS. According to this excellent tutorial 
> (http://www.mdtutorials.com/gmx/lysozyme/01_pdb2gmx.html), I downloaded the 
> PDB file for lysozyme, then removed water molecules, and typed the following 
> command into the Cygwin shell (with GROMACS activated).
>
> $ gmx pdb2gmx -f 1AKI_clean.pdb -o 1AKI_processed.gro -water spce
>
> However, I get an error saying:
>
> In command line option -f, file 1AKI_clean.pdb does not exist or is not 
> accessible.
>
> Please advise where I am going wrong. Any insight would be appreciated.
>
> Many thanks,
>
> Neena Eappen
> Graduate Student
> Jockusch Lab, U of T
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Installation double precision with GPU parallelization mode

2018-10-27 Thread Benson Muite
Please see manual at:

http://manual.gromacs.org/documentation/current/install-guide/index.html#cuda-gpu-acceleration
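
Note that GPU acceleration in GROMACS 2018 is only available in mixed 
(single) precision; configuring with both -DGMX_DOUBLE=on and -DGMX_GPU=on 
will fail. A minimal sketch of a CUDA-enabled build, assuming a default CUDA 
toolkit install:

tar xfz gromacs-2018.3.tar.gz
cd gromacs-2018.3
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=on
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC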

On 10/27/18 5:05 PM, FRANCESCO PETTINI wrote:
> Which is the list of commands to install gromacs 2018.3 on a Linux system
> with a Nvidia GPU 1070?

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] phosphorylated residues simulation usinh gromacs

2018-10-23 Thread Benson Muite
Hi,

Following may be helpful:

http://manual.gromacs.org/documentation/2019-beta1/user-guide/force-fields.html#gmx-amber-ff
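
One possible route (a sketch only - the file names below are placeholders for 
the Bryce group downloads, the leaprc name varies with the AmberTools version, 
and acpype is just one of several converters): build the complex in AmberTools 
tleap with the .OFF/.FRCMOD files, then convert the resulting prmtop/inpcrd 
pair to a GROMACS topology:

cat > tleap.in <<'EOF'
source oldff/leaprc.ff99SB
loadOff phosaa.off              # placeholder name for the downloaded .OFF file
loadAmberParams frcmod.phosaa   # placeholder name for the downloaded .FRCMOD file
mol = loadPdb complex.pdb
saveAmberParm mol complex.prmtop complex.inpcrd
quit
EOF
tleap -f tleap.in
acpype -p complex.prmtop -x complex.inpcrd   # writes GROMACS .top/.gro files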

Regards,

Benson

On 10/23/18 11:52 AM, farial tavakoli wrote:
> Dear GMX users,
> I need to simulate a complex composed of a protein and a peptide that has 
> phosphotyrosine, using the AMBER99SB force field in GROMACS. I went to 
> https://personalpages.manchester.ac.uk/staff/Richard.Bryce/amber/index.html 
> to download the .OFF and .FRCMOD files for phosphotyrosine, but now I don't know 
> how to use these files in GROMACS. There is no AMBER tutorial in GROMACS. I 
> searched Google but couldn't find a step-by-step guide. Is there 
> anyone who can help me with how I should use these files in GROMACS? I am sorry 
> about my English.
> Thanks in advance, Farial

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] gmx mdrun -rerun issue

2018-10-12 Thread Benson Muite
Hi Andreas,

I tried it on my laptop:

gmx mdrun -rerun old_PROD.trr -deffnm new_PROD

and got the following output:

  gmx mdrun -rerun old_PROD.trr -deffnm new_PROD
   :-) GROMACS - gmx mdrun, 2018.3 (-:

     GROMACS is written by:
  Emile Apol      Rossen Apostolov     Paul Bauer        Herman J.C. Berendsen
  Par Bjelkmar    Aldert van Buuren    Rudi van Drunen   Anton Feenstra
  Gerrit Groenhof Aleksei Iupinov      Christoph Junghans Anca Hamuraru
  Vincent Hindriksen Dimitrios Karkoulis Peter Kasson    Jiri Kraus
  Carsten Kutzner Per Larsson          Justin A. Lemkul  Viveca Lindahl
  Magnus Lundborg Pieter Meulenhoff    Erik Marklund     Teemu Murtola
  Szilard Pall    Sander Pronk         Roland Schulz     Alexey Shvetsov
  Michael Shirts  Alfons Sijbers       Peter Tieleman    Teemu Virolainen
  Christian Wennberg Maarten Wolf
    and the project leaders:
     Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2018.3
Executable: /home/benson/Projects/GromacsTest/gromacsinstall/bin/gmx
Data prefix:  /home/benson/Projects/GromacsTest/gromacsinstall
Working dir: /home/benson/Projects/GromacsTest/small_rerun_example
Command line:
   gmx mdrun -rerun old_PROD.trr -deffnm new_PROD


Back Off! I just backed up new_PROD.log to ./#new_PROD.log.1#
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
X server found. dri2 connection failed!
Reading file new_PROD.tpr, VERSION 2016.1 (single precision)
Note: file tpx version 110, software tpx version 112
Changing nstlist from 10 to 100, rlist from 1.2 to 1.201

Using 1 MPI thread
Using 4 OpenMP threads


Back Off! I just backed up new_PROD.trr to ./#new_PROD.trr.1#

Back Off! I just backed up new_PROD.edr to ./#new_PROD.edr.1#
starting md rerun 'Ethanol_Ethanol', reading coordinates from input 
trajectory 'old_PROD.trr'

trr version: GMX_trn_file (single precision)
Reading frame   0 time    0.000
WARNING: Some frames do not contain velocities.
  Ekin, temperature and pressure are incorrect,
  the virial will be incorrect when constraints are present.

Reading frame   1 time   10.000
step -1: resetting all time and cycle counters
Last frame    200 time 2000.000

NOTE: 39 % of the run time was spent in pair search,
   you might want to increase nstlist (this has no effect on accuracy)

    Core t (s)   Wall t (s)    (%)
    Time:    5.795    1.449  400.0
  (ns/day)    (hour/ns)
Performance:    0.030  804.864

GROMACS reminds you: "Molecular biology is essentially the practice of 
biochemistry without a license." (Edwin Chargaff)


Perhaps try a newer version of GROMACS? Rather than using a provided 
module, you can install it in your home directory on your cluster.

Regards,

Benson

On 10/11/18 1:51 PM, Andreas Mecklenfeld wrote:
> Dear Benson,
>
> thanks for the offer. I've used gmx traj to generate a smaller *.trr 
> file, though the issue still occurs.
> I've uploaded my files to http://ge.tt/1HNpO8s2
>
>
> Kind regards,
> Andreas
>
>
>
>> On 09.10.2018 at 10:48, Benson Muite wrote:
>> Current version 2018.3 seems to have re-run feature:
>>
>> http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html
>>  
>>
>>
>> Is your input data reasonable? Might a small version be available where
>> one could try this in 2018.3 to see if the same error is obtained?
>>
>> Benson
>>
>> On 10/9/18 11:40 AM, Andreas Mecklenfeld wrote:
>>> Hey,
>>>
>>> thanks for the quick response. Unfortunately, there isn't (at least
>>> not in the short-term). Which one would be suitable though?
>>>
>>> Best regards,
>>> Andreas
>>>
>>>
>>>
>>> On 09.10.2018 at 10:31, Benson 

Re: [gmx-users] gmx mdrun -rerun issue

2018-10-09 Thread Benson Muite
Current version 2018.3 seems to have re-run feature:

http://manual.gromacs.org/documentation/current/user-guide/mdrun-features.html

Is your input data reasonable? Might a small version be available where 
one could try this in 2018.3 to see if the same error is obtained?

Benson

On 10/9/18 11:40 AM, Andreas Mecklenfeld wrote:
> Hey,
>
> thanks for the quick response. Unfortunately, there isn't (at least 
> not in the short-term). Which one would be suitable though?
>
> Best regards,
> Andreas
>
>
>
> On 09.10.2018 at 10:31, Benson Muite wrote:
>> Hi,
>>
>> Is it possible to use a newer version of Gromacs?
>>
>> Benson
>>
>> On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:
>>> Dear Gromacs users,
>>>
>>>
>>> I've a question regarding the rerun option of the mdrun command in
>>> Gromacs 2016.1. It seems as if the calculation is repeatedly performed
>>> for the last frame (until killed by the work station). The output is
>>>
>>> "Last frame    1000 time 2000.000
>>>
>>> WARNING: Incomplete header: nr 1001 time 2000"
>>>
>>>
>>> My goal is to alter the .top-file (new) and calculate energies with
>>> previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c
>>> old_PROD.gro -p new_topol.top -o new_PROD.tpr"
>>>
>>> The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"
>>>
>>>
>>> Is there a way to fix this?
>>>
>>>
>>> Thanks,
>>>
>>> Andreas
>>>
>>>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] gmx mdrun -rerun issue

2018-10-09 Thread Benson Muite
Hi,

Is it possible to use a newer version of Gromacs?

Benson

On 10/9/18 11:15 AM, Andreas Mecklenfeld wrote:
> Dear Gromacs users,
>
>
> I've a question regarding the rerun option of the mdrun command in 
> Gromacs 2016.1. It seems as if the calculation is repeatedly performed 
> for the last frame (until killed by the work station). The output is
>
> "Last frame    1000 time 2000.000
>
> WARNING: Incomplete header: nr 1001 time 2000"
>
>
> My goal is to alter the .top-file (new) and calculate energies with 
> previously recorded coordinates (old): "gmx grompp -f old_PROD.mdp -c 
> old_PROD.gro -p new_topol.top -o new_PROD.tpr"
>
> The mdrun looks like "gmx mdrun -rerun old_PROD.trr -deffnm new_PROD"
>
>
> Is there a way to fix this?
>
>
> Thanks,
>
> Andreas
>
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS user workshop

2018-09-21 Thread Benson Muite


1) There is a possibility to apply for funding support to attend the 
workshop - the earlier you do this, the better. Workshop attendance is 
free, though registration is required. Note that the cost of 
accommodation in Riga is significantly lower than in many other European 
cities. Latvia is part of the Schengen zone.


2) Language of instruction at the workshop will be English. Some 
workshop participants may also use other languages in informal discussion.


3) The workshop will be suitable for those with limited GROMACS 
experience (motivated undergraduate students with an introductory science 
and programming background should be able to learn how to use GROMACS). 
The hands-on sessions will be more attendee-driven than the lectures, 
with a mix of introductory topics and more specialized material.


On 09/21/2018 07:27 AM, Benson Muite wrote:

Hi,

Thanks for the suggestion. This depends on speaker willingness to be 
recorded.


Benson

On 09/21/2018 05:52 AM, Tùng Hoàng wrote:

I hope there will be a YouTube link so that people around the world can learn from it.

On Wed, 19 Sep 2018 at 21:13, Benson Muite wrote:


Hi,

There will be a GROMACS user workshop in Riga, Latvia on Friday 26 and
Saturday 27 October. Some more information is available at:
https://baltichpc.org/

Best wishes,
Benson




--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] GROMACS user workshop

2018-09-20 Thread Benson Muite

Hi,

Thanks for the suggestion. This depends on speaker willingness to be 
recorded.


Benson

On 09/21/2018 05:52 AM, Tùng Hoàng wrote:

I hope there will be a YouTube link so that people around the world can learn from it.

On Wed, 19 Sep 2018 at 21:13, Benson Muite wrote:


Hi,

There will be a GROMACS user workshop in Riga, Latvia on Friday 26 and
Saturday 27 October. Some more information is available at:
https://baltichpc.org/

Best wishes,
Benson


--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] GROMACS user workshop

2018-09-19 Thread Benson Muite

Hi,

There will be a GROMACS user workshop in Riga, Latvia on Friday 26 and 
Saturday 27 October. Some more information is available at:

https://baltichpc.org/

Best wishes,
Benson
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Workstation choice

2018-09-10 Thread Benson Muite


d.poly-ch2 from gmxbench-3.0.tar.gz available at:
http://www.gromacs.org/About_Gromacs/Benchmarks
6000 atoms
timestep 0.001
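
To reproduce (a sketch; each benchmark subdirectory is assumed to contain the 
usual grompp input files):

wget ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz
tar xfz gmxbench-3.0.tar.gz
cd d.poly-ch2
gmx grompp   # reads grompp.mdp, conf.gro, topol.top in the directory
gmx mdrun
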
On 09/10/2018 11:37 PM, Albert wrote:
May I ask how many atoms in the system? Which forcefield did you use? 
And what's the time step?


regards


On 09/10/2018 09:14 PM, Olga Selyutina wrote:


(50 ts instead of 5000 ts, otherwise it's too fast)


That would be a factor of about 227!


On an available machine (not used for simulations),
self-compiled 2018.3, CUDA 9.1,
the result is similar; the factor is about 136:

Working dir:  /home/user/dev/gromacs/gmxbench-3.0/d.poly-ch2
Command line:
   gmx mdrun -ntmpi 1 -nt 6

Running on 1 node with total 6 cores, 6 logical cores, 1 compatible GPU
Hardware detected:
   CPU info:
 Vendor: Intel
 Brand:  Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz

   GPU info:
 Number of GPUs detected: 1
 #0: NVIDIA GeForce GTX 1050 Ti, compute cap.: 6.1, ECC:  no, stat:
compatible

On 1 MPI rank, each using 6 OpenMP threads

    Core t (s)   Wall t (s)    (%)
    Time:   1102.853    183.809  600.0
  (ns/day)    (hour/ns)
Performance:  235.027    0.102
Finished mdrun on rank 0 Tue Sep 11 01:53:36 2018




--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Workstation choice

2018-09-10 Thread Benson Muite
Some results (probably suboptimal) for d.poly-ch2 on a desktop running 
Fedora 28 and using Gromacs-Opencl from Fedora repositories:


Log file opened on Mon Sep 10 21:00:25 2018
Host: mikihir  pid: 32669  rank ID: 0  number of ranks:  1
  :-) GROMACS - gmx mdrun, 2018.2 (-:

    GROMACS is written by:
  Emile Apol      Rossen Apostolov     Paul Bauer        Herman J.C. Berendsen
  Par Bjelkmar    Aldert van Buuren    Rudi van Drunen   Anton Feenstra
  Gerrit Groenhof Aleksei Iupinov      Christoph Junghans Anca Hamuraru
  Vincent Hindriksen Dimitrios Karkoulis Peter Kasson    Jiri Kraus
  Carsten Kutzner Per Larsson          Justin A. Lemkul  Viveca Lindahl
  Magnus Lundborg Pieter Meulenhoff    Erik Marklund     Teemu Murtola
  Szilard Pall    Sander Pronk         Roland Schulz     Alexey Shvetsov
  Michael Shirts  Alfons Sijbers       Peter Tieleman    Teemu Virolainen
  Christian Wennberg Maarten Wolf
   and the project leaders:
    Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS:  gmx mdrun, version 2018.2
Executable:   /usr/bin/gmx
Data prefix:  /usr
Working dir:  /home/benson/Projects/GromacsBench/d.poly-ch2
Command line:
  gmx mdrun

GROMACS version:    2018.2
Precision:  single
Memory model:   64 bit
MPI library:    thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:    OpenCL
SIMD instructions:  SSE2
FFT library:    fftw-3.3.5-sse2-avx
RDTSCP usage:   disabled
TNG support:    enabled
Hwloc support:  hwloc-1.11.6
Tracing support:    disabled
Built on:   2018-07-19 19:45:21
Built by:   mockbuild@ [CMAKE]
Build OS/arch:  Linux 4.17.3-200.fc28.x86_64 x86_64
Build CPU vendor:   Intel
Build CPU brand:    Intel Core Processor (Haswell, no TSX)
Build CPU family:   6   Model: 60   Stepping: 1
Build CPU features: aes apic avx avx2 clfsh cmov cx8 cx16 f16c fma intel 
lahf mmx msr pcid pclmuldq popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 
sse4.2 ssse3 tdt x2apic

C compiler: /usr/bin/cc GNU 8.1.1
C compiler flags:    -msse2   -O2 -g -pipe -Wall -Werror=format-security 
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions 
-fstack-protector-strong -grecord-gcc-switches 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection 
-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 
-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong 
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection  
-DNDEBUG -funroll-all-loops -fexcess-precision=fast

C++ compiler:   /usr/bin/c++ GNU 8.1.1
C++ compiler flags:  -msse2   -O2 -g -pipe -Wall -Werror=format-security 
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions 
-fstack-protector-strong -grecord-gcc-switches 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic 
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection 
-std=c++11   -DNDEBUG -funroll-all-loops -fexcess-precision=fast

OpenCL include dir: /usr/include
OpenCL library: /usr/lib64/libOpenCL.so
OpenCL version: 2.0


Running on 1 node with total 8 cores, 8 logical cores, 1 compatible GPU
Hardware detected:
  CPU info:
    Vendor: AMD
    Brand:  AMD FX(tm)-8350 Eight-Core Processor
    Family: 21   Model: 2   Stepping: 0
    Features: aes amd apic avx clfsh cmov cx8 cx16 f16c fma fma4 htt 
lahf misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp 
sse2 sse3 sse4a sse4.1 sse4.2 ssse3 xop

  Hardware topology: Full, with devices
    Sockets, cores, and logical processors:
  Socket  0: [   0] [   1] [   2] [   3] [   4] [   5] [   6] [   7]
    Numa nodes:
  Node  0 (16714620928 bytes mem):   0   1   2   3   4   5   6 7
  Latency:
   0
 0  1.00
    Caches:
  L1: 16384 bytes, linesize 64 bytes, assoc. 4, shared 1 ways
  L2: 2097152 bytes, linesize 64 bytes, assoc. 16, shared 2 ways
  L3: 8388608 bytes, linesize 64 bytes, assoc. 64, shared 8 ways
    PCI devices:
  :01:00.0  Id: 1002:67ef  Class: 0x0300  Numa: 0
  

Re: [gmx-users] Workstation choice

2018-09-09 Thread Benson Muite
It takes a bit of time to benchmark, and probably more maintenance is 
needed. You may want to see what kind of performance you get with the laptop 
processor - it may require some OpenCL development to make better use of 
integrated GPUs. Are you able to provide a log file for a typical run? 
These indicate the time spent in each part of the computation.



On 09/09/2018 01:32 PM, Olga Selyutina wrote:

Thank you, the idea of building one's own low-price cluster is very interesting.
Seven Ryzen 3 2200s have a hardware cost of ~$2000, and their total performance
for some purposes could be higher than the performance of one high-performance
CPU with a few GPUs (the CPU+GPU configurations mentioned in my previous
letter). But I’m not sure that they provide higher performance for GROMACS
simulations.



--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Workstation choice

2018-09-09 Thread Benson Muite
049e+04   -2.29507e+04 0.0e+00
  Potential    Kinetic En.   Total Energy  Conserved En. Temperature
    7.53714e+03    2.29835e+04    3.05206e+04    5.29261e+04 3.07193e+02
 Pressure (bar)
    4.62342e+01

   Total Virial (kJ/mol)
    7.64724e+03    1.85579e+02    9.92485e+01
    1.85576e+02    7.22973e+03   -1.43850e+02
    9.92489e+01   -1.43849e+02    7.35308e+03

   Pressure (bar)
    5.66092e+00   -3.86826e+01   -1.72257e+01
   -3.86822e+01    7.39537e+01    2.80894e+01
   -1.72257e+01    2.80893e+01    5.90880e+01


    M E G A - F L O P S   A C C O U N T I N G

 NB=Group-cutoff nonbonded kernels    NxN=N-by-N cluster Verlet kernels
 RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
 W3=SPC/TIP3p  W4=TIP4p (single or pairs)
 V=Potential and force  V=Potential only  F=Force only

 Computing:   M-Number M-Flops  % Flops
-
 Pair Search distance check 144.857066 1303.714 0.3
 NxN LJ [F]   14451.085440 476885.820    95.0
 NxN LJ [V]   148.897120 6402.576 1.3
 Shift-X  0.612000 3.672 0.0
 Bonds   30.000999 1770.059 0.4
 Angles  29.995998 5039.328 1.0
 RB-Dihedrals    29.990997 7407.776 1.5
 Virial   0.614295 11.057 0.0
 Stop-CM  0.624000 6.240 0.0
 Calc-Ekin    6.024000 162.648 0.0
 Virtual Site 3fd    29.995998 2849.620 0.6
 Virtual Site 3fad    0.010002 1.760 0.0
-
 Total 501844.270   100.0
-


 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 1 MPI rank, each using 4 OpenMP threads

 Computing:  Num   Num  Call    Wall time Giga-Cycles
 Ranks Threads  Count  (s) total sum    %
-
 Vsite constr.  1    4   5001   0.623 5.985   5.7
 Neighbor search    1    4 51   0.293 2.814   2.7
 Force  1    4   5001   8.681 83.340  78.9
 NB X/F buffer ops. 1    4   9951   0.277 2.660   2.5
 Vsite spread   1    4   5001   0.874 8.392   7.9
 Write traj.    1    4  1   0.051 0.489   0.5
 Update 1    4   5001   0.152 1.462   1.4
 Rest   0.047 0.450   0.4
-
 Total 10.999 105.592 100.0
-

   Core t (s)   Wall t (s)    (%)
   Time:   43.996   10.999  400.0
 (ns/day)    (hour/ns)
Performance:   39.285    0.611
Finished mdrun on rank 0 Sun Sep  9 09:04:00 2018



On 09/09/2018 08:59 AM, Benson Muite wrote:

This is old, but seems to indicate Beowulf clusters work quite well:

https://docs.uabgrid.uab.edu/wiki/Gromacs_Benchmark

Szilárd had helped create a benchmark data set available at:
http://www.gromacs.org/About_Gromacs/Benchmarks
http://www.gromacs.org/@api/deki/files/240/=gromacs-5.0-benchmarks.pdf
ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz

Does your use case involve a large number of ensemble simulations 
which can be done in single precision without error correction? If so, 
might you be better off building a small Beowulf cluster with lower-spec 
processors that have integrated GPUs? For example, a Ryzen 3 with 
integrated graphics is about $100. Motherboard, RAM, and power supply 
would probably get you to about $300. An Intel Core i3 bundle would be 
about $350. Setup could be done using the OpenHPC stack:

http://www.openhpc.community/

This would get you a personal 5-7 node in-house cluster. However, the 
ability to do maintenance and have local support for repair may also be 
important in considering system lifetime cost, not just initial 
purchase price. GROMACS's current and future support for OpenCL is 
likely also important here.


At least one computer store in my region has allowed benchmarking.

On 09/07/2018 09:40 PM, Olga Selyutina wrote:

Hi,
A lot of thanks for valuable information.
If it isn’t difficult for you, could you say how the performance gain
from running a single simulation on a second GPU has changed
in GROMACS 2018 vs older versions (2016, 5.1, where it was 20-30% higher)?


2018-09-07 23:25 GMT+07:00 Szilárd Páll :

Are you intending to use it mostly/only for running simulations or 
also as

a desktop computer?

Yes, 

Re: [gmx-users] Workstation choice

2018-09-09 Thread Benson Muite

This is old, but seems to indicate Beowulf clusters work quite well:

https://docs.uabgrid.uab.edu/wiki/Gromacs_Benchmark

Szilárd had helped create a benchmark data set available at:
http://www.gromacs.org/About_Gromacs/Benchmarks
http://www.gromacs.org/@api/deki/files/240/=gromacs-5.0-benchmarks.pdf
ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz

Does your use case involve a large number of ensemble simulations which 
can be done in single precision without error correction? If so, might 
you be better off building a small Beowulf cluster with lower-spec 
processors that have integrated GPUs? For example, a Ryzen 3 with 
integrated graphics is about $100. Motherboard, RAM, and power supply would 
probably get you to about $300. An Intel Core i3 bundle would be about 
$350. Setup could be done using the OpenHPC stack:

http://www.openhpc.community/

This would get you a personal 5-7 node in-house cluster. However, the 
ability to do maintenance and have local support for repair may also be 
important in considering system lifetime cost, not just initial purchase 
price. GROMACS's current and future support for OpenCL is likely also 
important here.


At least one computer store in my region has allowed benchmarking.

On 09/07/2018 09:40 PM, Olga Selyutina wrote:

Hi,
A lot of thanks for valuable information.
If it isn’t difficult for you, could you say how the performance gain
from running a single simulation on a second GPU has changed
in GROMACS 2018 vs older versions (2016, 5.1, where it was 20-30% higher)?


2018-09-07 23:25 GMT+07:00 Szilárd Páll :


Are you intending to use it mostly/only for running simulations or also as
a desktop computer?

Yes, it will be mostly used for simulations.



I'm not on the top of pricing details so you should probably look at some
configs and get back with concrete CPU + GPU (+price) combinations and we
might be able to guesstimate what's best.



These sets of CPU and GPU are suitable for price (in our region):
*GPU*
GTX 1070 ~1700MHz, cuda 1920 - $514
GTX 1080 ~1700MHz, cuda 2560 - $615
GTX 1070Ti ~1700MHz, cuda 2432 - $615
GTX 1080Ti ~1600MHz, cuda 3584 - $930

*CPU*
Ryzen 7 2700X - $357
4200MHz, 8/16 cores/threads, cache L1/L2/L3 768KB/4MB/16MB, 105W, max.T 85C

Threadripper 1950X - $930
4000MHz, 16/32 cores/threads, cache  L1/L2/L3 1.5/8/32MB, 180W, max.T 68C

i7 8086K - $515
4800MHz, 6/12 cores/threads, cache L2/L3 1.5/12MB, 95W, max.T 100C

i7 8700K - $442
4600MHz, 6/12 cores/threads, cache L2/L3 1.5/12MB, 95W, max.T 100C

The most suitable combinations CPU+GPU are as follows:
1) Ryzen 7 2700X + two GTX 1080 - $1587
1.1) Ryzen 7 2700X + one GTX 1080 + one GTX 1080*Ti* - $1900 (maybe?)
2) Threadripper 1950X + one GTX 1080Ti - $1860
3) i7 8700K + two GTX 1080 - $1672
4) Ryzen 7 2700X + three GTX 1070 - $1900
My suggestions:
Variant 1 seems to be the most suitable.
Variant 2 seems to be suitable only if a single simulation is running on the
workstation.
It’s a bit confusing that in synthetic tests/games the performance of the i7 8700
is higher than that of the Ryzen 7 2700.
Thanks a lot again for your advice, it has already clarified a lot!





--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Workstation choice

2018-09-07 Thread Benson Muite
Check if the routines you will use have been ported to use GPUs. Time 
and profile a typical run you will perform on your current hardware to 
determine the bottlenecks, and then choose hardware that will perform 
best on these bottlenecks and is within your budget.


Note that benchmarking can be quite time consuming - even without 
performance optimization. If you have an idea of the simulations you 
will run and have profiled your code, and will primarily do these 
simulations, then it is probably easiest to go to a computer store where 
they have several available workstations, run your benchmark yourself 
and choose the best value system.



On 09/06/2018 08:09 PM, Olga Selyutina wrote:

  Thank you,

using these tests isn’t a problem, but the available workstations are too old,
so test results won’t be helpful in choosing actual components. I have
access to a high-performance cluster, but it also couldn’t be the basis
because of its high cost. If it is hard to give a certain recommendation,
I would like to know at least how performance correlates with the number of
cores/threads and core frequency. What is more important, the core frequency
or the number of cores? What is preferable, Intel or AMD? How should
GPU and CPU performance be matched?

In previous letters on the mailing list I found recommendations to purchase a
Threadripper 1950X+GeForce GTX-1080Ti; I think a cheaper and similarly
effective choice is the TR 1950X+GeForce GTX-*1080*. Will there be significant
differences with 1) TR 19*20*X+GeForce GTX-1080 or 2) Intel 7900X+GeForce
GTX-1080?

2018-09-05 20:53 GMT+07:00 Benson Muite :


Hi Olga,

The authors of:

https://github.com/bio-phys/MDBenchmark/tree/version-1.3.2

https://zenodo.org/record/1318123

May be helpful. If your data is not confidential, you may consider running
Gromacs remotely on a cloud high performance computing resource, or
benchmarking remotely and then purchasing a suitable configuration.

Some other information:
http://manual.gromacs.org/documentation/current/user-guide/
mdrun-performance.html#
http://www.gromacs.org/GPU_acceleration
https://extras.csc.fi/chem/courses/gmx2007/Erik_Talks/buildi
ng_clusters.pdf
http://www.gromacs.org/Documentation/Performance_checklist

On 09/05/2018 12:02 PM, Olga Selyutina wrote:


   Hello,

I need help choosing a workstation for MD simulations using GROMACS.
It is supposed to study systems consisting of 30-50k atoms, in
particular, lipid bilayer models. Since the last generation of Intel and
Ryzen CPUs has made a big leap in performance, the available workstation can’t
be taken as a basis. Please help to choose the components of the workstation,
particularly the GPU and CPU, with a total cost of about $2000. What is better:
to buy two GPUs (SLI) or one more effective GPU?



--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support
/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
send a mail to gmx-users-requ...@gromacs.org.






--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Workstation choice

2018-09-05 Thread Benson Muite

Hi Olga,

The authors of:

https://github.com/bio-phys/MDBenchmark/tree/version-1.3.2

https://zenodo.org/record/1318123

May be helpful. If your data is not confidential, you may consider 
running Gromacs remotely on a cloud high performance computing resource, 
or benchmarking remotely and then purchasing a suitable configuration.


Some other information:
http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#
http://www.gromacs.org/GPU_acceleration
https://extras.csc.fi/chem/courses/gmx2007/Erik_Talks/building_clusters.pdf
http://www.gromacs.org/Documentation/Performance_checklist

On 09/05/2018 12:02 PM, Olga Selyutina wrote:

  Hello,

I need help choosing a workstation for MD simulations using GROMACS.
It is supposed to study systems consisting of 30-50k atoms, in
particular, lipid bilayer models. Since the last generation of Intel and
Ryzen CPUs has made a big leap in performance, the available workstation can’t
be taken as a basis. Please help to choose the components of the workstation,
particularly the GPU and CPU, with a total cost of about $2000. What is better:
to buy two GPUs (SLI) or one more effective GPU?



--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] force/time application for pull

2018-08-31 Thread Benson Muite
It can take time to read through papers. A title, link, and a short 
summary would be helpful to understand what the benefit would be - in 
particular, whether you just need information to add the functionality 
for your use case, or whether enough other people would use the extra 
functionality you require.
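
For what it is worth, in the standard constant-velocity (umbrella) pull the 
force written to the pullf.xvg file is the spring force f = k(vt - x). A 
minimal sketch of the relevant .mdp options (GROMACS 2018 option names; the 
group names are placeholders for index groups):

pull                 = yes
pull-ngroups         = 2
pull-ncoords         = 1
pull-group1-name     = strand_fixed    ; placeholder index group
pull-group2-name     = strand_pulled   ; placeholder index group
pull-coord1-type     = umbrella
pull-coord1-geometry = distance
pull-coord1-groups   = 1 2
pull-coord1-rate     = 0.001           ; v in nm/ps
pull-coord1-k        = 1000            ; k in kJ mol^-1 nm^-2
pull-coord1-start    = yes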



On 08/31/2018 12:42 PM, Rakesh Mishra wrote:

Dear Dr. Justin,

Ultimately Mark has not replied.
Can you please tell me what this peak is revealing?
Here the 3' end of one strand is fixed and the 3' end of the other strand is
pulled along the helical direction of the dsDNA (12 bp), using constant-velocity
pulling with your umbrella sampling protocol (output f/t .xvg file). (My motive
is to see rupture in the presence and absence of a drug.)

In my view the sudden drop of force corresponds to the breaking of all the
H-bonds between the two strands.
What's your view?

On Mon, Aug 27, 2018 at 12:24 PM, Rakesh Mishra  wrote:


Dear Mark,

Several experiments on protein and DNA unfolding have been done;
they became pioneering in this field. That is why a force should be applied
and the corresponding reaction coordinates measured. The GROMACS list does not
allow uploading larger data,
but I can mention some paper references. If you get time, please go through
these papers.

1- NATURE | VOL 421 | 23 JANUARY 2003 | www.nature.com/nature (Carlos
Bustamante*†, Zev Bryant* & Steven B. Smith†)

2-  SCIENCE 1⁄7 VOL. 275 1⁄7 28 FEBRUARY 1997 (Matthias Rief, Filipp
Oesterhelt, Berthold Heymann,Hermann E. Gaub )

That is why I asked about the force-extension protocol. In umbrella
sampling the distance increases linearly with time, and the corresponding
force experienced by the system is written out.

We still don't know whether the force/time written in the .xvg file corresponds
to a formula like f = k(vt - x) or something else.
There are other experiments regarding measurement of the stability of
DNA and proteins via f/x curves.
One can also see some theory papers on f/x, like:

1- THE JOURNAL OF CHEMICAL PHYSICS 148, 215105 (2018)




Note - According to Justin, one can write that protocol. Suppose we
write that protocol; how do we then apply it in GROMACS?

There are only input .mdp files. We don't know where to build the force
subroutine - in the .ff (force field) directory, or somewhere else.


Hoping for  response.











On Mon, Aug 13, 2018 at 6:14 PM, Mark Abraham 
wrote:


Hi,

Can you please share a link to something that indicates why this would be
a
good tool for modeling such experimental pulling scenarios? Making the
case
for implementing such a feature would benefit from that.

Mark

On Mon, Aug 13, 2018, 13:42 Rakesh Mishra  wrote:


Hello Mark,

Thanks for your clarification. GROMACS pulling has a simple protocol for
pulling using umbrella sampling, where one can only get f/t and x/t; here t
increases linearly in both the force and distance cases. That does not
fully fill the need of experimental pulling.


On Mon, Aug 13, 2018 at 3:26 PM Mark Abraham 
wrote:


Hi,

It's possible, but there is no code written for it.

Mark

On Mon, Aug 13, 2018, 12:47 Rakesh Mishra 

wrote:

Dear Justin.

Thanks for your kind advice.
But why is it not possible to apply a force that
increases linearly (at each instant of time delta),
like at
t1 - f1
t2 - f2
.
.
tn - fn

and corresponding to that force we get the extension for each increment of
time in GROMACS? This is more relevant in order to map
experimental work on unzipping or axial pulling of nucleic acids like
DNA/RNA or proteins. Is it possible to build this protocol in GROMACS?

On Thu, Aug 9, 2018 at 5:30 PM Justin Lemkul 

wrote:


On 8/9/18 6:37 AM, Rakesh Mishra wrote:

Dear all,

Can anyone shed some light: is it possible to apply force/time for
pulling in GROMACS? That is, I want to pull my system stepwise, e.g.

0 - t1: force applied f1
t1+dt to t2: force f2
t2+dt to t3: force f3
.
.
.
tn-1+dt to tn: force fn
Is it possible?

Not in a single run. You would have to run a simulation for interval
0 - dt, generate a new .tpr file for the time period of (t1+dt) - t2,
etc.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry

303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==

--
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List

before

posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx

-users

or

send a mail to gmx-users-requ...@gromacs.org.



--

*With Best-Rakesh Kumar Mishra*
*  (RA)CSD  SINP Kolkata, India*

*E-mail - 

Re: [gmx-users] Qm/mm

2018-08-24 Thread Benson Muite
I believe for the non-free codes, you would need to obtain a license. 
Depending on the calculation you want to do, there are free alternatives 
to the non-free codes.



On 08/25/2018 12:54 AM, rose rahmani wrote:

Hi,

I want to use QM/MM calculations (the DFTB3 code & the ONIOM method) in GROMACS.
Since some of these codes are commercial (like ONIOM, specifically Gaussian), I
wanted to ask: are these all free in GROMACS? Or should we first register
and buy these codes separately and then use them in GROMACS?

Best regards

Rose


--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GROMACS Installation error on domdec.cpp

2018-08-03 Thread Benson Muite
What version of MPI do you have? It may be helpful to indicate the exact 
location of MPI if the paths are not found automatically.
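
One common fix (a sketch; the module name is hypothetical and depends on the 
cluster) is to hand the MPI compiler wrappers to CMake directly so that 
consistent MPI headers and libraries are found:

module load openmpi   # or however MPI is provided on the cluster
cd gromacs-2016.4/build
cmake .. -DGMX_MPI=on -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx
make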



On 08/03/2018 02:43 PM, 郭聪 wrote:

Hi All,



I wanted to build GROMACS 2016.4 on an HPC cluster but I got an error in 
domdec.cpp. Could anyone help me with that? Thanks!


PS: I succeeded in building it on my lab PC. I am wondering if something is wrong 
with the GROMACS code or the compiler. I used GNU compilers and OpenMPI.



tar -zxvf gromacs-2016.4.tar.gz
cd gromacs-2016.4/
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=on -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on 
-DCMAKE_INSTALL_PREFIX=~/
gromacs-2016.4/
make

*error message
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘int ddcoord2ddnodeid(gmx_domdec_t*, int*)’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:233:53: 
error: ‘MPI_Cart_rank’ was not declared in this scope
  MPI_Cart_rank(dd->mpi_comm_all, c, );
  ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘void dd_collect_vec_sendrecv(gmx_domdec_t*, real (*)[3], real 
(*)[3])’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:1185:47: 
error: ‘MPI_STATUS_IGNORE’ was not declared in this scope
   n, dd->mpi_comm_all, MPI_STATUS_IGNORE);
^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘void dd_distribute_vec_sendrecv(gmx_domdec_t*, t_block*, real (*)[3], 
real (*)[3])’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:1472:49: 
error: ‘MPI_STATUS_IGNORE’ was not declared in this scope
   MPI_ANY_TAG, dd->mpi_comm_all, MPI_STATUS_IGNORE);
  ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘int ddcoord2simnodeid(t_commrec*, int, int, int)’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:1937:58: 
error: ‘MPI_Cart_rank’ was not declared in this scope
  MPI_Cart_rank(cr->mpi_comm_mysim, coords, );
   ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘int dd_simnode2pmenode(const gmx_domdec_t*, const t_commrec*, int)’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:1976:67: 
error: ‘MPI_Cart_coords’ was not declared in this scope
  MPI_Cart_coords(cr->mpi_comm_mysim, sim_nodeid, DIM, coord);
^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:1981:66: 
error: ‘MPI_Cart_rank’ was not declared in this scope
  MPI_Cart_rank(cr->mpi_comm_mysim, coord_pme, );
   ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘gmx_bool receive_vir_ener(const gmx_domdec_t*, const t_commrec*)’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:2106:76: 
error: ‘MPI_Cart_coords’ was not declared in this scope
  MPI_Cart_coords(cr->mpi_comm_mysim, cr->sim_nodeid, DIM, coords);
 ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:2111:64: 
error: ‘MPI_Cart_rank’ was not declared in this scope
  MPI_Cart_rank(cr->mpi_comm_mysim, coords, );
 ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘void make_pp_communicator(FILE*, gmx_domdec_t*, t_commrec*, int)’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:5642:35: 
error: ‘MPI_Cart_create’ was not declared in this scope
  _cart);
^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:5682:64: 
error: ‘MPI_Cart_coords’ was not declared in this scope
  MPI_Cart_coords(dd->mpi_comm_all, dd->rank, DIM, dd->ci);
 ^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:5706:79: 
error: ‘MPI_Cart_rank’ was not declared in this scope
  MPI_Cart_rank(dd->mpi_comm_all, dd->master_ci, 
>masterrank);

^
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp: In 
function ‘void split_communicator(FILE*, t_commrec*, gmx_domdec_t*, int, int)’:
/gpfs/home/XXX/software/gromacs-2016.4/src/gromacs/domdec/domdec.cpp:5862:35: 
error: ‘MPI_Cart_create’ was not declared in this scope
  _cart);
^

Re: [gmx-users] building GROMACS 2018.2

2018-07-07 Thread Benson Muite

You may wish to use ccmake to check that the links to CUDA are set correctly.
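
A sketch of inspecting the CUDA configuration interactively:

cd build-gromacs
ccmake ..   # check CUDA_TOOLKIT_ROOT_DIR and GMX_GPU, then configure [c] and generate [g]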

On 07/07/2018 08:11 PM, Sevahn Kayaneh Vorperian wrote:

Dear All,

I am new to GROMACS and wanted to build the 2018.2 version so that I can do 
some MD work.

I am following the instructions under "Quick and Dirty Installation" verbatim: 
http://manual.gromacs.org/documentation/2018/install-guide/index.html

I got the latest version of CUDA and CMAKE. I didn't bother with MPI support 
since I'm not running on multiple networks. For FFT, I did the following during 
the cmake step to acquire it
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON

this went fine, as well as the make command

however, during make check, I got an error at 97%

(where terminal said: Linking CXX shared library 
../../../../lib/libgpu_utilstest_cuda.dylib); attaching the output I got here.

I ran the commands in the folder I made called build-gromacs not build; that 
was the only difference from the documentation in the manual, but the correct 
folders were referenced, so it shouldn’t be a problem.

I'm unsure how to proceed from here, and what adjustments I need to make so 
that I can finish building GROMACS.

Would really appreciate guidance…
I saw previous problems posted on the site where people got stuck at 2 or 3% 
during the build with similar terminal output when it crashed, but the 
resolution for how to get past this was unclear.

Thanks!
Sevahn


--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] opnempi

2018-06-20 Thread Benson Muite

Hi,

You might try using ccmake to get a user interface and check that the 
variables are set up correctly.
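
One common cause (a guess from the symptoms): with -DGMX_MPI=on the installed 
binary is named gmx_mpi, so a plain gmx found on the PATH may be an earlier 
thread-MPI build. E.g.:

source /usr/local/gromacs/bin/GMXRC   # adjust to your install prefix
which gmx_mpi
mpirun -np 4 gmx_mpi mdrun -deffnm md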


Regards,
Benson

On 06/21/2018 02:03 AM, Stefano Guglielmo wrote:

Dear gromacs users,

I am trying to compile gromacs 2016.5 with openmpi compilers installed on
my machine; here is the configuration command:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=on
-DGMX_MPI=on -DMPI_C_COMPILER=/usr/lib64/openmpi/bin/mpicc
-DMPI_CXX_COMPILER=/usr/lib64/openmpi/bin/mpicxx

compilation and installation complete correctly, but when trying to run
mdrun, GROMACS still uses its own thread-MPI; how can I avoid thread-MPI and
"force" it to use MPI?

Thanks in advance
Stefano


--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Problem in installing GROMACS 2018

2018-05-26 Thread Benson Muite



On 05/27/2018 01:53 AM, Ali Ahmed wrote:

Hello GROMACS users,

I'm trying to install GROMACS 2018 on my laptop. I followed the
instructions on the GROMACS website and everything looks fine.
But when I check the version, it says: :-) GROMACS - gmx, VERSION 5.1.2
(-:
Please, can anyone tell me how to install the 2018 version? I need to
use the electric field options.

Thank you
Ali



Did you download a release version from:
http://manual.gromacs.org/documentation/
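
A PATH pointing at an older installation is a common cause; a sketch of 
checking (install prefix assumed to be the default):

source /usr/local/gromacs/bin/GMXRC
which gmx
gmx --version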
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.