Re: [gmx-users] Gmx gangle

2018-12-11 Thread rose rahmani
On Wed, 12 Dec 2018, 02:03 Mark Abraham wrote:
> Hi,
>
> I would check the documentation of gmx gangle for how it works,
> particularly for how to define a plane.

Thank you so much Mark. As a GROMACS lover, I'd like to make a suggestion: I
think it would be REALLY helpful if you provided some proper examples for the
GROMACS tools, with results. You have straightforward tutorials and a useful
manual, but some instructions for using the tools are a little ambiguous and
misleading. This would also reduce the number of questions.

> Also, 4.5.4 is prehistoric, please
> do yourself a favor

I would like to, but I remember that I wanted to use cutoff-scheme=group and
kept running into errors in gmx. These calculations are from more than a
year ago (and now I want some more analysis); I was a freshman,

> and use a version with the seven years of improvements
>
I was a freshman, dear smiling man :-)

> since then :-)
>
I am waiting for you.

Best

>
> Mark
>
> On Tue., 11 Dec. 2018, 10:14 rose rahmani,  wrote:
>
> > Hi,
> >
> > I don't really understand how gmx gangle works!!!
> >
> > I want to calculate angle between amino acid ring and surface during
> > simulation.
> >  I made an index for the 6 ring atoms (a_CD1_CD2_CE1_CE2_CZ_CG) and two
> atoms
> > of the surface. The surface is in the xy plane and the amino acid is at
> > different Z distances.
> >
> >
> > I assumed the 6 ring atoms define a plane and the two atoms of the surface
> > define a vector (along Y). And I expected that the average angle
> between
> > this plane and the vector during the simulation is calculated by gmx gangle
> > analysis.
> >
> >  gmx gangle -f umbrella36_3.xtc -s umbrella36_3.tpr -n index.ndx -oav
> > angz.xvg -g1 plane -g2 vector -group1 -group2
> >
> > Available static index groups:
> >  Group  0 "System" (4331 atoms)
> >  Group  1 "Other" (760 atoms)
> >  Group  2 "ZnS" (560 atoms)
> >  Group  3 "WAL" (200 atoms)
> >  Group  4 "NA" (5 atoms)
> >  Group  5 "CL" (5 atoms)
> >  Group  6 "Protein" (33 atoms)
> >  Group  7 "Protein-H" (17 atoms)
> >  Group  8 "C-alpha" (1 atoms)
> >  Group  9 "Backbone" (5 atoms)
> >  Group 10 "MainChain" (7 atoms)
> >  Group 11 "MainChain+Cb" (8 atoms)
> >  Group 12 "MainChain+H" (9 atoms)
> >  Group 13 "SideChain" (24 atoms)
> >  Group 14 "SideChain-H" (10 atoms)
> >  Group 15 "Prot-Masses" (33 atoms)
> >  Group 16 "non-Protein" (4298 atoms)
> >  Group 17 "Water" (3528 atoms)
> >  Group 18 "SOL" (3528 atoms)
> >  Group 19 "non-Water" (803 atoms)
> >  Group 20 "Ion" (10 atoms)
> >  Group 21 "ZnS" (560 atoms)
> >  Group 22 "WAL" (200 atoms)
> >  Group 23 "NA" (5 atoms)
> >  Group 24 "CL" (5 atoms)
> >  Group 25 "Water_and_ions" (3538 atoms)
> >  Group 26 "OW" (1176 atoms)
> >  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
> >  Group 28 "a_320_302_319_301_318_311" (6 atoms)
> >  Group 29 "a_301_302" (2 atoms)
> > Specify any number of selections for option 'group1'
> > (First analysis/vector selection):
> > (one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> > > 27
> > Selection '27' parsed
> > > 27
> > Selection '27' parsed
> > > Available static index groups:
> >  Group  0 "System" (4331 atoms)
> >  Group  1 "Other" (760 atoms)
> >  Group  2 "ZnS" (560 atoms)
> >  Group  3 "WAL" (200 atoms)
> >  Group  4 "NA" (5 atoms)
> >  Group  5 "CL" (5 atoms)
> >  Group  6 "Protein" (33 atoms)
> >  Group  7 "Protein-H" (17 atoms)
> >  Group  8 "C-alpha" (1 atoms)
> >  Group  9 "Backbone" (5 atoms)
> >  Group 10 "MainChain" (7 atoms)
> >  Group 11 "MainChain+Cb" (8 atoms)
> >  Group 12 "MainChain+H" (9 atoms)
> >  Group 13 "SideChain" (24 atoms)
> >  Group 14 "SideChain-H" (10 atoms)
> >  Group 15 "Prot-Masses" (33 atoms)
> >  Group 16 "non-Protein" (4298 atoms)
> >  Group 17 "Water" (3528 atoms)
> >  Group 18 "SOL" (3528 atoms)
> >  Group 19 "non-Water" (803 atoms)
> >  Group 20 "Ion" (10 atoms)
> >  Group 21 "ZnS" (560 atoms)
> >  Group 22 "WAL" (200 atoms)
> >  Group 23 "NA" (5 atoms)
> >  Group 24 "CL" (5 atoms)
> >  Group 25 "Water_and_ions" (3538 atoms)
> >  Group 26 "OW" (1176 atoms)
> >  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
> >  Group 28 "a_320_302_319_301_318_311" (6 atoms)
> >  Group 29 "a_301_302" (2 atoms)
> > Specify any number of selections for option 'group2'
> > (Second analysis/vector selection):
> > (one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> > > 29
> > Selection '29' parsed
> > > 29
> > Selection '29' parsed
> > > Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
> > Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
> > Reading frame   0 time0.000
> > Back Off! I just backed up angz.xvg to ./#angz.xvg.1#
> > Last frame  4 time 4000.000
> > Analyzed 40001 frames, last time 4000.000
> >
> > Am I right? I don't think so. :(
> >
> > Would you please help me?

Re: [gmx-users] using dual CPU's

2018-12-11 Thread Mark Abraham
Hi,

In your case the slowdown was in part because with a single GPU the PME
work by default went to that GPU. But with two GPUs the default is to leave
the PME work on the CPU (which for your test was very weak), because the
alternative is often not a good idea. You can try it out with the command
Szilard suggested. You won't learn much that will apply to your real case,
because the system size and GPU/CPU balance is critical.
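
For illustration, a minimal sketch of that kind of test with two GPUs
(option names per GROMACS 2018 mdrun; the -deffnm name and rank counts are
placeholders to tune for your hardware):

gmx mdrun -deffnm md -ntmpi 2 -nb gpu -pme gpu -npme 1 -gputasks 01

which puts the short-range work on GPU 0 and the dedicated PME rank on GPU 1,
instead of leaving PME on the (weak) CPU.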

Mark

On Wed., 12 Dec. 2018, 10:56 paul buscemi,  wrote:

> Szilard,
>
> Thank you very much for the information, and I apologize for how the text
> appeared - internet demons at work.
>
> The computer described in the log files is a basic test rig which we use
> to iron out models. The workhorse is a many-core AMD with one 2080 Ti now,
> and hopefully soon two. It will have to handle several hundred thousand
> particles, and at the moment I do not think the simulation could be divided.
> These are essentially multi-component ligand adsorptions from solution
> onto a substrate, including evaporation of the solvent.
>
> I saw from a 2015 paper from your group, "Best bang for your buck: GPU
> nodes for GROMACS biomolecular simulations", that I should expect maybe a
> 50% improvement for 90k atoms (with 2x GTX 970). What bothered me in my
> initial attempts was that my simulations became slower when I added the
> second GPU - it was frustrating to say the least.
>
> I’ll give your suggestions a good workout, and report on the results when
> I hack it out..
>
> Best
> Paul
>
> > On Dec 11, 2018, at 12:14 PM, Szilárd Páll 
> wrote:
> >
> > Without having read all details (partly due to the hard to read log
> > files), what I can certainly recommend is: unless you really need to,
> > avoid running single simulations with only a few 10s of thousands of
> > atoms across multiple GPUs. You'll be _much_ better off using your
> > limited resources by running a few independent runs concurrently. If
> > you really need to get maximum single-run throughput, please check
> > previous discussions on the list on my recommendations.
> >
> > Briefly, what you can try for 2 GPUs is (do compare against the
> > single-GPU runs to see if it's worth it):
> > mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gputasks TASKSTRING
> > where typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
> > TASKSTRING should have N digits with either N-1 zeros and the last 1
> > or N-2 zeros and the last two 1, i.e..
> >
> > I suggest to share files using a cloud storage service like google
> > drive, dropbox, etc. or a dedicated text sharing service like
> > paste.ee, pastebin.com, or termbin.com -- especially the latter is
> > very handy for those who don't want to leave the command line just to
> > upload a/several files for sharing (i.e. try "echo "foobar" | nc
> > termbin.com )
> >
> > --
> > Szilárd
> > On Tue, Dec 11, 2018 at 2:44 AM paul buscemi  wrote:
> >>
> >>
> >>
> >>> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
> >>>
> >>>
> >>> Mark, attached are the tail ends of three  log files for
> >>> the same system but run on an AMD 8  Core/16 Thread 2700x, 16G ram
> >>> In summary:
> >>> for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) are 12.0, 8.8
> , and 6.0 ns/day.
> >>> Clearly, I do not have a handle on using 2 GPU's
> >>>
> >>> Thank you again, and I'll keep probing the web for more understanding.
> >>> I've probably sent too much of the log, let me know if this is the
> case
> >> Better way to share files - where is that friend ?
> >>>
> >>> Paul

Re: [gmx-users] using dual CPU's

2018-12-11 Thread paul buscemi
Szilard,

Thank you very much for the information, and I apologize for how the text appeared - 
internet demons at work.

The computer described in the log files is a basic test rig which we use to 
iron out models. The workhorse is a many-core AMD with one 2080 Ti now, and 
hopefully soon two. It will have to handle several hundred thousand particles, 
and at the moment I do not think the simulation could be divided. These are 
essentially multi-component ligand adsorptions from solution onto a substrate, 
including evaporation of the solvent.

I saw from a 2015 paper from your group, "Best bang for your buck: GPU nodes 
for GROMACS biomolecular simulations", that I should expect maybe a 50% 
improvement for 90k atoms (with 2x GTX 970). What bothered me in my initial 
attempts was that my simulations became slower when I added the second GPU - it 
was frustrating to say the least.

I’ll give your suggestions a good workout, and report on the results when I 
hack it out..

Best
Paul

> On Dec 11, 2018, at 12:14 PM, Szilárd Páll  wrote:
> 
> Without having read all details (partly due to the hard to read log
> files), what I can certainly recommend is: unless you really need to,
> avoid running single simulations with only a few 10s of thousands of
> atoms across multiple GPUs. You'll be _much_ better off using your
> limited resources by running a few independent runs concurrently. If
> you really need to get maximum single-run throughput, please check
> previous discussions on the list on my recommendations.
> 
> Briefly, what you can try for 2 GPUs is (do compare against the
> single-GPU runs to see if it's worth it):
> mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gputasks TASKSTRING
> where typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
> TASKSTRING should have N digits with either N-1 zeros and the last 1
> or N-2 zeros and the last two 1, i.e..
> 
> I suggest to share files using a cloud storage service like google
> drive, dropbox, etc. or a dedicated text sharing service like
> paste.ee, pastebin.com, or termbin.com -- especially the latter is
> very handy for those who don't want to leave the command line just to
> upload a/several files for sharing (i.e. try "echo "foobar" | nc
> termbin.com )
> 
> --
> Szilárd
> On Tue, Dec 11, 2018 at 2:44 AM paul buscemi  wrote:
>> 
>> 
>> 
>>> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
>>> 
>>> 
>>> Mark, attached are the tail ends of three  log files for
>>> the same system but run on an AMD 8  Core/16 Thread 2700x, 16G ram
>>> In summary:
>>> for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) are 12.0, 8.8, and 
>>> 6.0 ns/day.
>>> Clearly, I do not have a handle on using 2 GPU's
>>> 
>>> Thank you again, and I'll keep probing the web for more understanding.
>>> I've probably sent too much of the log, let me know if this is the case
>> Better way to share files - where is that friend ?
>>> 
>>> Paul

Re: [gmx-users] Gmx gangle

2018-12-11 Thread Mark Abraham
Hi,

I would check the documentation of gmx gangle for how it works,
particularly for how to define a plane. Also, 4.5.4 is prehistoric, please
do yourself a favor and use a version with the seven years of improvements
since then :-)
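
For reference, a sketch of one way such a setup can look (group names here
are illustrative; note that with -g1 plane every three positions in the
selection define one plane, which enters the calculation through its normal,
and with -g2 vector every two positions define one vector, so a 6-atom ring
selection gives two planes rather than one):

 gmx gangle -f traj.xtc -s topol.tpr -n index.ndx \
     -g1 plane -group1 'group "ring_CD1_CE1_CZ"' \
     -g2 vector -group2 'group "a_301_302"' \
     -oav ring_angle.xvg

If the reference direction is simply the box z axis, -g2 z needs no second
group at all.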

Mark

On Tue., 11 Dec. 2018, 10:14 rose rahmani,  wrote:

> Hi,
>
> I don't really understand how gmx gangle works!!!
>
> I want to calculate angle between amino acid ring and surface during
> simulation.
>  I made an index for the 6 ring atoms (a_CD1_CD2_CE1_CE2_CZ_CG) and two atoms
> of the surface. The surface is in the xy plane and the amino acid is at
> different Z distances.
>
>
> I assumed the 6 ring atoms define a plane and the two atoms of the surface
> define a vector (along Y). And I expected that the average angle between
> this plane and the vector during the simulation is calculated by gmx gangle
> analysis.
>
>  gmx gangle -f umbrella36_3.xtc -s umbrella36_3.tpr -n index.ndx -oav
> angz.xvg -g1 plane -g2 vector -group1 -group2
>
> Available static index groups:
>  Group  0 "System" (4331 atoms)
>  Group  1 "Other" (760 atoms)
>  Group  2 "ZnS" (560 atoms)
>  Group  3 "WAL" (200 atoms)
>  Group  4 "NA" (5 atoms)
>  Group  5 "CL" (5 atoms)
>  Group  6 "Protein" (33 atoms)
>  Group  7 "Protein-H" (17 atoms)
>  Group  8 "C-alpha" (1 atoms)
>  Group  9 "Backbone" (5 atoms)
>  Group 10 "MainChain" (7 atoms)
>  Group 11 "MainChain+Cb" (8 atoms)
>  Group 12 "MainChain+H" (9 atoms)
>  Group 13 "SideChain" (24 atoms)
>  Group 14 "SideChain-H" (10 atoms)
>  Group 15 "Prot-Masses" (33 atoms)
>  Group 16 "non-Protein" (4298 atoms)
>  Group 17 "Water" (3528 atoms)
>  Group 18 "SOL" (3528 atoms)
>  Group 19 "non-Water" (803 atoms)
>  Group 20 "Ion" (10 atoms)
>  Group 21 "ZnS" (560 atoms)
>  Group 22 "WAL" (200 atoms)
>  Group 23 "NA" (5 atoms)
>  Group 24 "CL" (5 atoms)
>  Group 25 "Water_and_ions" (3538 atoms)
>  Group 26 "OW" (1176 atoms)
>  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
>  Group 28 "a_320_302_319_301_318_311" (6 atoms)
>  Group 29 "a_301_302" (2 atoms)
> Specify any number of selections for option 'group1'
> (First analysis/vector selection):
> (one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> > 27
> Selection '27' parsed
> > 27
> Selection '27' parsed
> > Available static index groups:
>  Group  0 "System" (4331 atoms)
>  Group  1 "Other" (760 atoms)
>  Group  2 "ZnS" (560 atoms)
>  Group  3 "WAL" (200 atoms)
>  Group  4 "NA" (5 atoms)
>  Group  5 "CL" (5 atoms)
>  Group  6 "Protein" (33 atoms)
>  Group  7 "Protein-H" (17 atoms)
>  Group  8 "C-alpha" (1 atoms)
>  Group  9 "Backbone" (5 atoms)
>  Group 10 "MainChain" (7 atoms)
>  Group 11 "MainChain+Cb" (8 atoms)
>  Group 12 "MainChain+H" (9 atoms)
>  Group 13 "SideChain" (24 atoms)
>  Group 14 "SideChain-H" (10 atoms)
>  Group 15 "Prot-Masses" (33 atoms)
>  Group 16 "non-Protein" (4298 atoms)
>  Group 17 "Water" (3528 atoms)
>  Group 18 "SOL" (3528 atoms)
>  Group 19 "non-Water" (803 atoms)
>  Group 20 "Ion" (10 atoms)
>  Group 21 "ZnS" (560 atoms)
>  Group 22 "WAL" (200 atoms)
>  Group 23 "NA" (5 atoms)
>  Group 24 "CL" (5 atoms)
>  Group 25 "Water_and_ions" (3538 atoms)
>  Group 26 "OW" (1176 atoms)
>  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
>  Group 28 "a_320_302_319_301_318_311" (6 atoms)
>  Group 29 "a_301_302" (2 atoms)
> Specify any number of selections for option 'group2'
> (Second analysis/vector selection):
> (one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> > 29
> Selection '29' parsed
> > 29
> Selection '29' parsed
> > Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
> Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
> Reading frame   0 time0.000
> Back Off! I just backed up angz.xvg to ./#angz.xvg.1#
> Last frame  4 time 4000.000
> Analyzed 40001 frames, last time 4000.000
>
> Am I right? I don't think so. :(
>
> Would you please help me?

Re: [gmx-users] Domain decomposition and large molecules

2018-12-11 Thread Mark Abraham
Hi,

Unfortunately, you can't attach files to the mailing list. Please use a
file sharing service and share the link.

Mark

On Wed., 12 Dec. 2018, 02:20 Tommaso D'Agostino, 
wrote:

> Dear all,
>
> I have a system of 27000 atoms, that I am simulating on both local and
> Marconi-KNL (cineca) clusters. In this system, I simulate a small molecule
> that has a graphene sheet attached to it, surrounded by water. I have
> already simulated with success this molecule in a system of 6500 atoms,
> using a timestep of 2fs and LINCS algorithm. These simulations have run
> flawlessly when executed with 8 mpi ranks.
>
> Now I have increased the length of the graphene part and the number of
> waters surrounding my molecule, arriving to a total of 27000 atoms;
> however, every simulation that I try to launch on more than 2 cpus or with
> a timestep greater than 0.5fs seems to crash sooner or later (strangely,
> during multiple attempts with 8 cpus, I was able to run up to 5 ns of
> > simulations prior to getting the crashes; sometimes, however, the crashes
> happen as soon as after 100ps). When I obtain an error prior to the crash
> (sometimes the simulation just hangs without providing any error) I get a
> series of lincs warning, followed by a message like:
>
> Fatal error:
> An atom moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
>
> The crashes are relative to a part of the molecule that I have not changed
> when increasing the graphene part, and I already checked twice that there
> are no missing/wrong terms in the molecule topology. Again, I have not
> modified at all the part of the molecule that crashes.
>
> I have already tried to increase lincs-order or lincs-iter up to 8,
> decrease nlist to 1, increase rlist to 5.0, without any success. I have
> also tried (without success) to use a unique charge group for the whole
> molecule, but I would like to avoid this, as point-charges may affect my
> analysis.
>
> One note: I am using a V-rescale thermostat with a tau_t of 40 picoseconds,
> and every 50ps the simulation is stopped and started again from the last
> frame (preserving the velocities). I want to leave these options as they
> > are, for consistency with the other systems used for this work.
>
> Do you have any suggestions on things I may try to launch these simulations
> with a decent performance? even with these few atoms, if I do not use a
> timestep greater than 0.5fs or if I do not use more than 2 cpus, I cannot
> > get more than 4ns/day. I think it may be connected with domain
> > decomposition, but option -pd was removed in recent versions of gromacs (I
> am using gromacs 2016.1) and I cannot check that.
>
> Attached to this mail, you may find the input .mdp file used for the
> simulation.
>
> Thanks in advance for the help,
>
>Tommaso D'Agostino
>Postdoctoral Researcher
>
>   Scuola Normale Superiore,
>
> Palazzo della Carovana, Ufficio 99
>   Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy


Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-11 Thread Szilárd Páll
AFAIK the right way to control RPATH using cmake is:
https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
no need to poke the binary.

If you still need to turn off static cudart linking, the way to do that
is also via a CMake feature:
https://cmake.org/cmake/help/latest/module/FindCUDA.html
The default is static.
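
For example, a configure line along those lines might look like the
following (paths are placeholders; the rest mirrors a typical GROMACS GPU
build):

cmake .. -DGMX_GPU=ON \
  -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda-8.0 \
  -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF \
  -DCMAKE_SKIP_RPATH=ON \
  -DCMAKE_INSTALL_PREFIX=/path/to/gromacs-2018

With the RPATH skipped, the runtime linker then finds libcudart.so and
friends via LD_LIBRARY_PATH (or an environment module).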

--
Szilárd
On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra  wrote:
>
> I'm trying to rewrite the RPATH because the shared library paths used by
> GROMACS are hardcoded in the binary.
>
> ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> linux-vdso.so.1 =>  (0x7ffddf1d3000)
> libgromacs.so.2 =>
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> (0x7f0094b25000)
> libcudart.so.8.0 => not found
> libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> (0x7f0092c5)
> /lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)
>
> ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> linux-vdso.so.1 =>  (0x7fff27b8d000)
> libgromacs.so.3 =>
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
> (0x7fcb4aa3e000)
> libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
> libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
> libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
> libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
> libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
> libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> (0x7fcb4979c000)
> libcufft.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> (0x7fcb4094e000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7fcb40749000)
> librt.so.1 => /lib64/librt.so.1 (0x7fcb40541000)
> libfftw3f.so.3 =>
> /nfs2/LIBS/x86_64/LIBS/FFTW/3.3.3/SINGLE/lib/libfftw3f.so.3
> (0x7fcb401c8000)
> libmkl_intel_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> (0x7fcb3faa4000)
> libmkl_intel_thread.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> (0x7fcb3ea0a000)
> libmkl_core.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> (0x7fcb3d4dc000)
> libiomp5.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> (0x7fcb3d1c2000)
> libmkl_gf_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> (0x7fcb3caa)
> /lib64/ld-linux-x86-64.so.2 (0x7fcb4d785000)
>
> See the differences between the 2016 & 2018 version.
>
> I'm using Cmake 3.13.1.
>
> ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> -DCUDA_NVCC_FLAGS=--cudart=shared -DGMX_PREFER_STATIC_LIBS=OFF
> -DEXTRA_NVCCFLAGS=--cudart=shared
>
> I think I've tried almost everything.
>
> Regards.
>
> El lun., 10 dic. 2018 a las 16:09, Szilárd Páll ()
> escribió:
>
> > On Sat, Dec 8, 2018 at 10:00 PM Gmail  wrote:
> > >
> > > My mistake! It was a typo. Anyway, this is the result before executing
> > > the chrpath command:
> > >
> > > chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > > $APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64
> > >
> > > I'm suspicious that GROMACS 2018 is not being compiled using shared
> > > libraries, at least, for CUDA.
> >
> > First of all, what is the goal, why are you trying to manually rewrite
> > the binary RPATH?
> >
> > Well, if the binary is not linked against libcudart.so then it clearly
> > isn't (and the ldd output is a better way to confirm that -- a library
> > can be linked against gmx even without an RPATH being set).
> >
> > I have a vague memory that this may have been the default in CMake or
> > perhaps it changed at some point. What's your CMake version, perhaps
> > you're using an old CMake?
> >
> > >
> > > Jaime.
> > >
> > >
> > > On 8/12/18 21:39, Mark Abraham wrote:
> > > > Hi,
> > > >
> > > > Your final line doesn't match your CMAKE_INSTALL_PREFIX
> > > >
> > > > Mark
> > > 

Re: [gmx-users] using dual CPU's

2018-12-11 Thread Szilárd Páll
Without having read all details (partly due to the hard to read log
files), what I can certainly recommend is: unless you really need to,
avoid running single simulations with only a few 10s of thousands of
atoms across multiple GPUs. You'll be _much_ better off using your
limited resources by running a few independent runs concurrently. If
you really need to get maximum single-run throughput, please check
previous discussions on the list on my recommendations.

Briefly, what you can try for 2 GPUs is (do compare against the
single-GPU runs to see if it's worth it):
mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gputasks TASKSTRING
where typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
TASKSTRING should have N digits with either N-1 zeros and the last 1
or N-2 zeros and the last two 1, i.e..
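
As a concrete (hedged) example, for N = 4 on two GPUs that would be
something like:

mdrun -ntmpi 4 -npme 1 -nb gpu -pme gpu -gputasks 0001

i.e. three PP ranks sharing GPU 0 and the separate PME rank on GPU 1;
"0011" would instead put one PP rank and the PME rank on GPU 1. As always,
compare against the single-GPU baseline.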

I suggest to share files using a cloud storage service like google
drive, dropbox, etc. or a dedicated text sharing service like
paste.ee, pastebin.com, or termbin.com -- especially the latter is
very handy for those who don't want to leave the command line just to
upload a/several files for sharing (i.e. try "echo "foobar" | nc
termbin.com )

--
Szilárd
On Tue, Dec 11, 2018 at 2:44 AM paul buscemi  wrote:
>
>
>
> > On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
> >
> >
> > Mark, attached are the tail ends of three  log files for
> >  the same system but run on an AMD 8  Core/16 Thread 2700x, 16G ram
> > In summary:
> > for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) are 12.0, 8.8, and 
> > 6.0 ns/day.
> > Clearly, I do not have a handle on using 2 GPU's
> >
> > Thank you again, and I'll keep probing the web for more understanding.
> > I've probably sent too much of the log, let me know if this is the case
> Better way to share files - where is that friend ?
> >
> > Paul

[gmx-users] Area compressibility modulus GMX

2018-12-11 Thread John Whittaker
Hi all,

I have a weird, probably very basic question to ask and I hope it is
appropriate for the mailing list.

I am trying to reproduce the pure DPPC bilayer data found in J. Chem.
Theory Comput., 2016, 12 (1), pp 405–413 (10.1021/acs.jctc.5b00935) using
the recommended protocol given in the paper.

I have calculated the area per lipid for my system and have an average
value and am now attempting to calculate the area compressibility modulus,
K, using the formula given in the paper in the subsection "Analysis"
(which itself is taken from https://doi.org/10.1063/1.479313).

I am a bit confused by the wording when the authors describe the value in
the denominator, <(dA)^2>. The paper calls this value "the average of
the squared fluctuation of the area/lipid". I'm probably being silly, but
am I right to assume that this is the variance of the area/lipid?

As in, to get this value I can:

1) Use gmx analyze to find the standard deviation of the area/lipid over
the course of my trajectory

2) Square the standard deviation to find the variance of the area/lipid

Then, it's a straightforward process of plugging in and making sure
everything comes out in dyn/cm.
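
In case it helps to be concrete, the arithmetic I have in mind (assuming the
usual fluctuation formula - please correct me if the paper defines it
differently) is:

 K_A = kB * T * <A> / ( N * <(dA)^2> )

with <A> the average area per lipid, <(dA)^2> its variance (the squared
standard deviation from gmx analyze), and N the number of lipids per
leaflet. With kB*T in J (about 4.46e-21 J at, e.g., 323 K) and areas in m^2
(1 nm^2 = 1e-18 m^2), K_A comes out in N/m; multiplying by 1000 gives mN/m,
which is numerically the same as dyn/cm.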

Could anyone tell me if my process is correct? Thanks a lot and my
apologies if this is too specific of a question for the mailing list!

John


[gmx-users] Domain decomposition and large molecules

2018-12-11 Thread Tommaso D'Agostino
Dear all,

I have a system of 27000 atoms, that I am simulating on both local and
Marconi-KNL (cineca) clusters. In this system, I simulate a small molecule
that has a graphene sheet attached to it, surrounded by water. I have
already simulated with success this molecule in a system of 6500 atoms,
using a timestep of 2fs and LINCS algorithm. These simulations have run
flawlessly when executed with 8 mpi ranks.

Now I have increased the length of the graphene part and the number of
waters surrounding my molecule, arriving to a total of 27000 atoms;
however, every simulation that I try to launch on more than 2 cpus or with
a timestep greater than 0.5fs seems to crash sooner or later (strangely,
during multiple attempts with 8 cpus, I was able to run up to 5 ns of
simulations prior to getting the crashes; sometimes, however, the crashes
happen as soon as after 100ps). When I obtain an error prior to the crash
(sometimes the simulation just hangs without providing any error) I get a
series of lincs warning, followed by a message like:

Fatal error:
An atom moved too far between two domain decomposition steps
This usually means that your system is not well equilibrated

The crashes are relative to a part of the molecule that I have not changed
when increasing the graphene part, and I already checked twice that there
are no missing/wrong terms in the molecule topology. Again, I have not
modified at all the part of the molecule that crashes.

I have already tried to increase lincs-order or lincs-iter up to 8,
decrease nlist to 1, increase rlist to 5.0, without any success. I have
also tried (without success) to use a unique charge group for the whole
molecule, but I would like to avoid this, as point-charges may affect my
analysis.

One note: I am using a V-rescale thermostat with a tau_t of 40 picoseconds,
and every 50ps the simulation is stopped and started again from the last
frame (preserving the velocities). I want to leave these options as they
are, for consistency with the other systems used for this work.

Do you have any suggestions on things I may try to launch these simulations
with a decent performance? even with these few atoms, if I do not use a
timestep greater than 0.5fs or if I do not use more than 2 cpus, I cannot
get more than 4ns/day. I think it may be connected with domain
decomposition, but option -pd was removed in recent versions of gromacs (I
am using gromacs 2016.1) and I cannot check that.
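
For example, would it be a reasonable direction to limit the number of
domain-decomposition cells and rely on OpenMP for the remaining cores,
along the lines of (rank/thread counts here are only placeholders for my
nodes):

gmx mdrun -deffnm md -ntmpi 2 -ntomp 8 -dlb yes

or would that just hide the underlying problem?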

Attached to this mail, you may find the input .mdp file used for the
simulation.

Thanks in advance for the help,

   Tommaso D'Agostino
   Postdoctoral Researcher

  Scuola Normale Superiore,

Palazzo della Carovana, Ufficio 99
  Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy

Re: [gmx-users] download GROMOS54a6oxy

2018-12-11 Thread Patrick Fuchs

Hi Tushar,
all parameters from GROMOS 53A6_OXY along with other improvements have 
been merged into a new parameter set called 2016H66 (see 
https://pubs.acs.org/doi/abs/10.1021/acs.jctc.6b00187).
Philippe Hünenberger put some files for GROMACS on his website: 
http://www.csms.ethz.ch/files_and_links/GROMOS/2016H66.html.
For the time being, the parameters within these files are only available 
for 62 organic molecules. Some tests are currently being conducted for 
proteins, but the parameters are not ready yet.
Note also that if you want to use the reaction field (as used in the 
paper cited above), you'll have to use GROMACS 4.0.7 (or lower). If you 
want to use GROMACS 4.6.6 (or higher), you'll have to use nstlist = 2 
(not more than 2!); see https://redmine.gromacs.org/issues/1400. We are 
actually testing some new PME parameters with this set, and we should be 
able to come up with new recommendations pretty soon.
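
For illustration, a minimal sketch of the corresponding .mdp lines under
those constraints (nstlist is the hard requirement; treat the other values
as placeholders to verify against the 2016H66 paper):

cutoff-scheme   = group
nstlist         = 2               ; do not go above 2 (see the redmine issue above)
coulombtype     = Reaction-Field
rcoulomb        = 1.4             ; placeholder - take the cut-offs and
epsilon-rf      = 61              ; epsilon-rf from the 2016H66 paper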

Best,

Patrick

On 08/12/2018 at 11:55, Dr Tushar Ranjan Moharana wrote:

Hi all,

I wish to use the GROMOS54a6oxy force field parameterized by Horta et al.
However, I am unable to find any link to download it. It would be a
great help if anyone could send me the link or the force field.

Thanks a lot.
Tushar



Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-11 Thread Jaime Sierra
I'm trying to rewrite the RPATH because the shared library paths used by
GROMACS are hardcoded in the binary.

ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
linux-vdso.so.1 =>  (0x7ffddf1d3000)
libgromacs.so.2 =>
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
(0x7f0094b25000)
libcudart.so.8.0 => not found
libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
(0x7f0092c5)
/lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)

ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
linux-vdso.so.1 =>  (0x7fff27b8d000)
libgromacs.so.3 =>
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
(0x7fcb4aa3e000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
(0x7fcb4979c000)
libcufft.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
(0x7fcb4094e000)
libdl.so.2 => /lib64/libdl.so.2 (0x7fcb40749000)
librt.so.1 => /lib64/librt.so.1 (0x7fcb40541000)
libfftw3f.so.3 =>
/nfs2/LIBS/x86_64/LIBS/FFTW/3.3.3/SINGLE/lib/libfftw3f.so.3
(0x7fcb401c8000)
libmkl_intel_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
(0x7fcb3faa4000)
libmkl_intel_thread.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
(0x7fcb3ea0a000)
libmkl_core.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
(0x7fcb3d4dc000)
libiomp5.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
(0x7fcb3d1c2000)
libmkl_gf_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
(0x7fcb3caa)
/lib64/ld-linux-x86-64.so.2 (0x7fcb4d785000)

See the differences between the 2016 & 2018 version.

I'm using Cmake 3.13.1.

~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
-DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
-DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
-DCUDA_NVCC_FLAGS=--cudart=shared -DGMX_PREFER_STATIC_LIBS=OFF
-DEXTRA_NVCCFLAGS=--cudart=shared

I think I've tried almost everything.

Regards.

El lun., 10 dic. 2018 a las 16:09, Szilárd Páll ()
escribió:

> On Sat, Dec 8, 2018 at 10:00 PM Gmail  wrote:
> >
> > My mistake! It was a typo. Anyway, this is the result before executing
> > the chrpath command:
> >
> > chrpath -l $APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > $APPS/GROMACS/2018/CUDA/8.0/bin/gmx: RPATH=$ORIGIN/../lib64
> >
> > I'm suspicious that GROMACS 2018 is not being compiled using shared
> > libraries, at least, for CUDA.
>
> First of all, what is the goal, why are you trying to manually rewrite
> the binary RPATH?
>
> Well, if the binary is not linked against libcudart.so then it clearly
> isn't (and the ldd output is a better way to confirm that -- a library
> can be linked against gmx even without an RPATH being set).
>
> I have a vague memory that this may have been the default in CMake or
> perhaps it changed at some point. What's your CMake version, perhaps
> you're using an old CMake?
>
> >
> > Jaime.
> >
> >
> > On 8/12/18 21:39, Mark Abraham wrote:
> > > Hi,
> > >
> > > Your final line doesn't match your CMAKE_INSTALL_PREFIX
> > >
> > > Mark
> > >
> > > On Sun., 9 Dec. 2018, 07:00 Jaime Sierra wrote:
> > >> Hi pall,
> > >>
> > >> thanks for your answer,
> > >> I have my own "HOW_TO_INSTALL" guide like:
> > >>
> > >> $ wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-5.1.4.tar.gz
> > >> $ tar xzf gromacs-5.1.4.tar.gz
> > >> $ cd gromacs-5.1.4.tar.gz
> > >> $ mkdir build
> > >> $ cd build
> > >> $ export EXTRA_NVCCFLAGS=--cudart=shared
> > >> $ export PATH=$APPS/CMAKE/2.8.12.2/bin/:$PATH
> > >> $ cmake .. -DCMAKE_INSTALL_PREFIX=$APPS/GROMACS/5.1.4/CUDA8.0/GPU
> > >> -DGMX_FFT_LIBRARY=fftw3 -DCMAKE_PREFIX_PATH=$LIBS/FFTW/3.3.3/SINGLE/
> > >>