[gmx-users] Alchemical Free energy of growing a cavity in water

2018-12-13 Thread Braden Kelly
Hello.


I am wishing to calculate the free energy of growing a cavity in water.


It has been done by Li et al. using LAMMPS (Computational methodology for 
solubility prediction: Application to the sparingly soluble solutes, Lunna Li, 
Tim Totton, Daan Frenkel, The Journal of Chemical Physics, 146, 2017)


I cannot do other calculations I want in LAMMPS, but can in gromacs. Hence, I 
would like to do this in gromacs. (in case the answer was going to be: just do 
it in LAMMPS :) )


To do this I need to use a user-defined potential energy function (strictly 
repulsive): U = A*exp(-rij/B + Lambda)

U is the potential energy of the cavity interacting with any atom in the system. 
A and B are given constants, rij is the distance between the cavity and the atom, 
and Lambda is the coupling parameter that controls the size of the cavity. Lambda 
varies between -10 and 5; at Lambda = -10 the cavity is essentially a point 
source and effectively gone.


I have read up on making user defined tables.
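For a given Lambda window, generating the table itself is straightforward. A 
minimal sketch in Python (assuming the standard seven-column table layout, with 
the r-dependent part of the potential placed in the h(r)/C12 columns so that A 
would be supplied as the C12 coefficient in the topology; the file name and the 
constants below are placeholders):

import numpy as np

A = 1.0      # kJ/mol, placeholder
B = 0.1      # nm, placeholder
lam = -3.0   # Lambda of this window
r_max = 3.0  # table length in nm (>= rvdw + table-extension)
dr = 0.002   # table spacing in nm

r = np.arange(0.0, r_max + dr, dr)
h = np.exp(-r / B + lam)      # h(r) = exp(-r/B + Lambda)
dh = -(1.0 / B) * h           # h'(r)

with open("table_CAV_SOL.xvg", "w") as out:
    for ri, hi, dhi in zip(r, h, dh):
        # columns: r, f(r), -f'(r), g(r), -g'(r), h(r), -h'(r)
        # the Coulomb (f) and dispersion (g) parts are zeroed out here
        out.write("%12.6e %12.6e %12.6e %12.6e %12.6e %12.6e %12.6e\n"
                  % (ri, 0.0, 0.0, 0.0, 0.0, hi, -dhi))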


However, how do I do a free energy calculation in GROMACS when using a 
user-defined function? For each window I will manually make a table with the 
necessary Lambda built in and run an individual simulation. Once this is done, 
GROMACS only has access to the cubic-spline fit of that table and cannot 
evaluate the potential at a neighbouring Lambda value, which is exactly what 
TI/BAR needs: for a given configuration I have to be able to sample the 
potential energy at neighbouring Lambda values from the same simulation.


The only thing I can think of is to sample phase space for each Lambda manually 
(just a normal NVT or NPT simulation), save the coordinates, and re-evaluate in 
post-processing the potential energy of those configurations with the 
user-defined potential at Lambda+1 and Lambda-1 (or at all other Lambdas if I 
were going to do MBAR). Does this sound like the best option, or even the only 
option? I should then be able to build the standard dhdl.xvg file myself and 
submit it to gmx bar for evaluation?
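One observation that may simplify the post-processing: with 
U = A*exp(-rij/B + Lambda) = exp(Lambda) * A*exp(-rij/B), re-evaluating at a 
neighbouring Lambda only rescales the cavity-solvent energy, 
U(Lambda') = U(Lambda) * exp(Lambda' - Lambda), and dU/dLambda = U. A minimal 
sketch of that reweighting step (assuming the per-frame cavity-solvent energies 
have already been extracted to a two-column file, e.g. with gmx energy; the 
exact xvg header/legend lines that gmx bar expects are easiest to copy from a 
dhdl.xvg written by a normal free-energy run):

import numpy as np

lam = -3.0   # Lambda of this window

# time and cavity-solvent energy U(lam) for each saved frame
t, u = np.loadtxt("cav_energy.xvg", comments=("@", "#"), unpack=True)

du_up = u * (np.exp(+1.0) - 1.0)    # U(lam+1) - U(lam)
du_down = u * (np.exp(-1.0) - 1.0)  # U(lam-1) - U(lam)

# plain columns: time, dU to the lower window, dU to the upper window;
# prepend the appropriate xvg legends before feeding this to gmx bar
np.savetxt("dhdl_window_%g.xvg" % lam,
           np.column_stack([t, du_down, du_up]))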


The next part of the problem is to grow a molecule inside the fully grown 
cavity. This molecule will have normal LJ and Coulomb interactions, but it will 
need to be anchored to the COM of the cavity while the fully grown cavity 
explores phase space. To keep the molecule in the cavity I should use a 
restraint... is the pull code the thing to use?


Thanks,


Braden Kelly

PhD. Candidate, E.I.T

University of Guelph

Biophysics Interdisciplinary Group (BIG)

"I feel more like I do now than I did when I first got here."



Re: [gmx-users] using dual CPU's

2018-12-13 Thread pbuscemi
Carsten,

A possible issue...

I compiled GROMACS 2018.3 with gcc-5 (CUDA 9 seems to run normally). Should I 
recompile with gcc-6.4?

Paul

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of p buscemi
Sent: Thursday, December 13, 2018 1:38 PM
To: gmx-us...@gromacs.org
Cc: gmx-us...@gromacs.org
Subject: Re: [gmx-users] using dual CPU's

Carsten

thanks for the suggestion.
Is it necessary to use the MPI version of GROMACS when using -multidir? I 
currently have the single-node version loaded.

I'm hammering out the first 2080 Ti with the 32-core AMD. The results are not 
stellar: slower than an Intel i7-7000. But I'll beat on it some more before 
throwing in the hammer.
Paul

On Dec 13 2018, at 4:33 am, Kutzner, Carsten wrote:
> Hi,
>
> > On 13. Dec 2018, at 01:11, paul buscemi  wrote:
> > Carsten,THanks for the response.
> > my mistake - it was the GTX 980 from fig 3. … I was recalling from 
> > memory….. I assume that similar
> There we measured a 19 percent performance increase for the 80k atom system.
>
> > results would be achieved with the 1060’s
> If you want to run a small system very fast, it is probably better to 
> put in one strong GPU instead of two weaker ones. What you could do 
> with your two 1060, though, is to maximize your aggregate performance 
> by running two (or even 4) simulations at the same time using the 
> -multidir argument to mdrun. For the science, probably several independent 
> trajectories are needed anyway.
> >
> > No I did not reset ,
> I would at least use the -resethway mdrun command line switch, this 
> way your measured performances will be more reliable also for shorter runs.
>
> Carsten
> > my results were a compilation of 4-5 runs each under slightly different 
> > conditions on two computers. All with the same outcome - that is ugh!. Mark 
> > had asked for the log outputs indicating some useful conclusions could be 
> > drawn from them.
> > Paul
> > > On Dec 12, 2018, at 9:02 AM, Kutzner, Carsten  wrote:
> > > Hi Paul,
> > > > On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
> > > > Dear users ( one more try )
> > > > I am trying to use 2 GPU cards to improve modeling speed. The 
> > > > computer described in the log files is used to iron out models and am 
> > > > using to learn how to use two GPU cards before purchasing two new RTX 
> > > > 2080 ti's. The CPU is a 8 core 16 thread AMD and the GPU's are two GTX 
> > > > 1060; there are 5 atoms in the model Using ntmpi and ntomp settings 
> > > > of 1: 16, auto ( 4:4) and 2: 8 ( and any other combination factoring to 
> > > > 16) the rating for ns/day are approx. 12-16 and for any other setting 
> > > > ~6-8 i.e adding a card cuts efficiency by half. The average load 
> > > > imbalance is less than 3.4% for the multicard setup .
> > > > I am not at this point trying to maximize efficiency, but only 
> > > > to show some improvement going from one to two cards. According 
> > > > to a 2015 paper form the Gromacs group “ Best bang for your 
> > > > buck: GPU nodes for GROMACS biomolecular simulations “ I should 
> > > > expect maybe (at best ) 50% improvement for 90k atoms ( with 2x 
> > > > GTX 970 )
> > > We did not benchmark GTX 970 in that publication.
> > >
> > > But from Table 6 you can see that we also had quite a few cases 
> > > with out 80k benchmark where going from 1 to 2 GPUs, simulation 
> > > speed did not increase much: E.g. for the
> > > E5-2670v2 going from one to 2 GTX 980 GPUs led to an increase of 10 
> > > percent.
> > >
> > > Did you use counter resetting for the benchmarks?
> > > Carsten
> > >
> > > > What bothers me in my initial attempts is that my simulations became 
> > > > slower by adding the second GPU - it is frustrating to say the least. 
> > > > It's like swimming backwards.
> > > > I know am missing - as a minimum - the correct setup for mdrun 
> > > > and suggestions would be welcome The output from the last section of 
> > > > the log files is included below.
> > > > === ntmpi 1 ntomp:16 
> > > > == <== ### ==> < 
> > > > A V E R A G E S > <== ### ==>
> > > >
> > > > Statistics over 29301 steps using 294 frames Energies (kJ/mol) 
> > > > Angle G96Angle Proper Dih. Improper Dih. LJ-14
> > > > 9.17533e+05 2.27874e+04 6.64128e+04 2.31214e+02 8.34971e+04
> > > > Coulomb-14 LJ (SR) Disper. corr. Coulomb (SR) Coul. recip.
> > > > -2.84567e+07 -1.43385e+05 -2.04658e+03 1.33320e+07 1.59914e+05 
> > > > Position Rest. Potential Kinetic En. Total Energy Temperature
> > > > 7.79893e+01 -1.40196e+07 1.88467e+05 -1.38312e+07 3.00376e+02 
> > > > Pres. DC (bar) Pressure (bar) Constr. rmsd
> > > > -2.88685e+00 3.75436e+01 0.0e+00
> 

Re: [gmx-users] using dual CPU's

2018-12-13 Thread pbuscemi
Szilard,

I get an "unknown command " gpustasks  in :

'mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING 

where > typically N = 4, 6, 8 are worth a try (but N <= #cores) and the > 
TASKSTRING should have N digits with either N-1 zeros and the last 1 
> or N-2 zeros and the last two 1, i.e..

Would you please complete the i.e...

Thanks again,
Paul



-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 On Behalf Of paul buscemi
Sent: Tuesday, December 11, 2018 5:56 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] using dual CPU's

Szilard,

Thank you very much for the information, and I apologize for how the text 
appeared; internet demons at work.

The computer described in the log files is a basic test rig which we use to 
iron out models. The workhorse is a many-core AMD with one, and hopefully soon 
two, 2080 Ti's. It will have to handle several hundred thousand particles, and 
at the moment I do not think the simulation could be divided. These are 
essentially multi-component ligand adsorptions from solution onto a substrate, 
including evaporation of the solvent.

I saw from a 2015 paper from your group, "Best bang for your buck: GPU nodes 
for GROMACS biomolecular simulations", that I should expect maybe a 50% 
improvement for 90k atoms (with 2x GTX 970). What bothered me in my initial 
attempts was that my simulations became slower by adding the second GPU, which 
was frustrating to say the least.

I’ll give your suggestions a good workout, and report on the results when I 
hack it out..

Best,
Paul

> On Dec 11, 2018, at 12:14 PM, Szilárd Páll  wrote:
> 
> Without having read all details (partly due to the hard to read log 
> files), what I can certainly recommend is: unless you really need to, 
> avoid running single simulations with only a few 10s of thousands of 
> atoms across multiple GPUs. You'll be _much_ better off using your 
> limited resources by running a few independent runs concurrently. If 
> you really need to get maximum single-run throughput, please check 
> previous discussions on the list on my recommendations.
> 
> Briefly, what you can try for 2 GPUs is (do compare against the 
> single-GPU runs to see if it's worth it):
> mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gpustasks TASKSTRING where 
> typically N = 4, 6, 8 are worth a try (but N <= #cores) and the 
> TASKSTRING should have N digits with either N-1 zeros and the last 1 
> or N-2 zeros and the last two 1, i.e..
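> (for example, an untested sketch for N = 4 with two GPUs, noting that the
> 2018 option is spelled -gputasks:
> gmx mdrun -ntmpi 4 -ntomp 4 -npme 1 -nb gpu -pme gpu -gputasks 0001
> i.e. the three PP ranks use GPU 0 and the PME rank uses GPU 1)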
> 
> I suggest to share files using a cloud storage service like google 
> drive, dropbox, etc. or a dedicated text sharing service like 
> paste.ee, pastebin.com, or termbin.com -- especially the latter is 
> very handy for those who don't want to leave the command line just to 
> upload a/several files for sharing (i.e. try "echo "foobar" | nc 
> termbin.com )
> 
> --
> Szilárd
> On Tue, Dec 11, 2018 at 2:44 AM paul buscemi  wrote:
>> 
>> 
>> 
>>> On Dec 10, 2018, at 7:33 PM, paul buscemi  wrote:
>>> 
>>> 
>>> Mark, attached are the tail ends of three  log files for the same 
>>> system but run on an AMD 8  Core/16 Thread 2700x, 16G ram In 
>>> summary:
>>> for ntmpi:ntomp of 1:16, 2:8, and auto selection (4:4) the rates are 12.0, 
>>> 8.8, and 6.0 ns/day.
>>> Clearly, I do not have a handle on using 2 GPU's
>>> 
>>> Thank you again, and I'll keep probing the web for more understanding.
>>> I've probably sent too much of the log; let me know if this is the 
>>> case
>> Better way to share files - where is that friend ?
>>> 
>>> Paul


Re: [gmx-users] Fixing the molecule in the centre of the micelle

2018-12-13 Thread Jochen Hub

Hi,

I would use

pull-geometry = distance

with pull-dim = Y Y Y

between the COMs of the micelle and the drug. You can use this already 
during the energy minimization.
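In mdp terms that could look something like the sketch below (option names as 
in recent GROMACS versions; the group names must match your index groups, and 
the force constant and reference distance are only placeholders):

pull                 = yes
pull-ngroups         = 2
pull-ncoords         = 1
pull-group1-name     = DLiPC
pull-group2-name     = API
pull-coord1-type     = umbrella
pull-coord1-geometry = distance
pull-coord1-groups   = 1 2
pull-coord1-dim      = Y Y Y
pull-coord1-init     = 0.5      ; nm, placeholder: target COM-COM distance for this window
pull-coord1-rate     = 0.0
pull-coord1-k        = 1000     ; kJ mol^-1 nm^-2, placeholder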


I would not use comm-mode = Angular; that is meant for other applications.

Cheers,
Jochen

On 21.11.18 at 19:29, Alexey Kaa wrote:

Dear Gromacs users,

I am wondering if you could help with advice. In my simulation I have a drug
that is initially placed at the centre of a micelle. It tends to drift away
towards the micelle-water interface. I would like to run an umbrella sampling
simulation in order to get a potential of mean force from the centre of the
micelle (let's assume it is spherical) towards the bulk. If I run energy
minimisation and NPT equilibration, the drug molecule (or the micelle) already
drifts away to the energetically more favourable position, but obviously these
steps must take place, as otherwise the system is not equilibrated. I tried to
let the molecule equilibrate first and then pull it through the centre towards
the opposite side of the micelle, but then the whole micelle rotates (even if I
apply comm-mode = Angular to the micelle-building type of molecule) rather than
the drug passing through the centre. Is it possible to fix the centres of mass
of both the drug molecule and the micelle during the minimisation/equilibration
steps, before applying the pull code, in such a way that the micelle-forming
molecules can still equilibrate within the micelle and the water pressure can
become uniform outside? Or am I restraining the rotation of the micelle
incorrectly?

API = drug, DLiPC = phospholipids making a micelle.
; mode for center of mass motion removal
comm-mode= Angular
; number of steps for center of mass motion removal
nstcomm  = 1
comm-grps= API DLiPC

Thanks,
Aleksei



--
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---

Re: [gmx-users] using dual CPU's

2018-12-13 Thread p buscemi
Carsten

thanks for the suggestion.
Is it necessary to use the MPI version of GROMACS when using -multidir? I 
currently have the single-node version loaded.

I'm hammering out the first 2080 Ti with the 32-core AMD. The results are not 
stellar: slower than an Intel i7-7000. But I'll beat on it some more before 
throwing in the hammer.
Paul

On Dec 13 2018, at 4:33 am, Kutzner, Carsten  wrote:
> Hi,
>
> > On 13. Dec 2018, at 01:11, paul buscemi  wrote:
> > Carsten,THanks for the response.
> > my mistake - it was the GTX 980 from fig 3. … I was recalling from 
> > memory….. I assume that similar
> There we measured a 19 percent performance increase for the 80k atom system.
>
> > results would be achieved with the 1060’s
> If you want to run a small system very fast, it is probably better to put in 
> one
> strong GPU instead of two weaker ones. What you could do with your two 1060, 
> though,
> is to maximize your aggregate performance by running two (or even 4) 
> simulations
> at the same time using the -multidir argument to mdrun. For the science, 
> probably
> several independent trajectories are needed anyway.
> >
> > No I did not reset ,
> I would at least use the -resethway mdrun command line switch,
> this way your measured performances will be more reliable also for shorter 
> runs.
>
> Carsten
> > my results were a compilation of 4-5 runs each under slightly different 
> > conditions on two computers. All with the same outcome - that is ugh!. Mark 
> > had asked for the log outputs indicating some useful conclusions could be 
> > drawn from them.
> > Paul
> > > On Dec 12, 2018, at 9:02 AM, Kutzner, Carsten  wrote:
> > > Hi Paul,
> > > > On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
> > > > Dear users ( one more try )
> > > > I am trying to use 2 GPU cards to improve modeling speed. The computer 
> > > > described in the log files is used to iron out models and am using to 
> > > > learn how to use two GPU cards before purchasing two new RTX 2080 ti's. 
> > > > The CPU is a 8 core 16 thread AMD and the GPU's are two GTX 1060; there 
> > > > are 5 atoms in the model
> > > > Using ntmpi and ntomp settings of 1: 16, auto ( 4:4) and 2: 8 ( and any 
> > > > other combination factoring to 16) the rating for ns/day are approx. 
> > > > 12-16 and for any other setting ~6-8 i.e adding a card cuts efficiency 
> > > > by half. The average load imbalance is less than 3.4% for the multicard 
> > > > setup .
> > > > I am not at this point trying to maximize efficiency, but only to show 
> > > > some improvement going from one to two cards. According to a 2015 paper 
> > > > form the Gromacs group “ Best bang for your buck: GPU nodes for GROMACS 
> > > > biomolecular simulations “ I should expect maybe (at best ) 50% 
> > > > improvement for 90k atoms ( with 2x GTX 970 )
> > > We did not benchmark GTX 970 in that publication.
> > >
> > > But from Table 6 you can see that we also had quite a few cases with out 
> > > 80k benchmark
> > > where going from 1 to 2 GPUs, simulation speed did not increase much: 
> > > E.g. for the
> > > E5-2670v2 going from one to 2 GTX 980 GPUs led to an increase of 10 
> > > percent.
> > >
> > > Did you use counter resetting for the benchmarks?
> > > Carsten
> > >
> > > > What bothers me in my initial attempts is that my simulations became 
> > > > slower by adding the second GPU - it is frustrating to say the least. 
> > > > It's like swimming backwards.
> > > > I know am missing - as a minimum - the correct setup for mdrun and 
> > > > suggestions would be welcome
> > > > The output from the last section of the log files is included below.
> > > > === ntmpi 1 ntomp:16 
> > > > ==
> > > > <== ### ==>
> > > > < A V E R A G E S >
> > > > <== ### ==>
> > > >
> > > > Statistics over 29301 steps using 294 frames
> > > > Energies (kJ/mol)
> > > > Angle G96Angle Proper Dih. Improper Dih. LJ-14
> > > > 9.17533e+05 2.27874e+04 6.64128e+04 2.31214e+02 8.34971e+04
> > > > Coulomb-14 LJ (SR) Disper. corr. Coulomb (SR) Coul. recip.
> > > > -2.84567e+07 -1.43385e+05 -2.04658e+03 1.33320e+07 1.59914e+05
> > > > Position Rest. Potential Kinetic En. Total Energy Temperature
> > > > 7.79893e+01 -1.40196e+07 1.88467e+05 -1.38312e+07 3.00376e+02
> > > > Pres. DC (bar) Pressure (bar) Constr. rmsd
> > > > -2.88685e+00 3.75436e+01 0.0e+00
> > > >
> > > > Total Virial (kJ/mol)
> > > > 5.27555e+04 -4.87626e+02 1.86144e+02
> > > > -4.87648e+02 4.04479e+04 -1.91959e+02
> > > > 1.86177e+02 -1.91957e+02 5.45671e+04
> > > >
> > > > Pressure (bar)
> > > > 2.22202e+01 1.27887e+00 -4.71738e-01
> > > > 1.27893e+00 6.48135e+01 5.12638e-01
> > > > -4.71830e-01 5.12632e-01 2.55971e+01
> > > >
> > > 

Re: [gmx-users] Gmx gangle

2018-12-13 Thread Justin Lemkul



On 12/13/18 1:05 PM, rose rahmani wrote:

Would you please answer my question? Did you check it?


We can't check your work for you because we don't know what output you 
got or why you are suspicious about it. The selections look sensible but 
if you ever question if something is working, you need to have a test 
case where you know the outcome. If the program doesn't produce the 
output you know to be right, likely your approach is wrong. If the 
output confirms your suspicions, then you're right.


-Justin


On Wed, 12 Dec 2018, 02:03 Mark Abraham 
Hi,

I would check the documentation of gmx gangle for how it works,
particularly for how to define a plane. Also, 4.5.4 is prehistoric, please
do yourself a favor and use a version with the seven years of improvements
since then :-)

Mark

On Tue., 11 Dec. 2018, 10:14 rose rahmani,  wrote:


Hi,

I don't really understand how gmx gangle works!

I want to calculate angle between amino acid ring and surface during
simulation.
I made an index for the 6 atoms of the ring (a_CD1_CD2_CE1_CE2_CZ_CG) and two
atoms of the surface. The surface is in the xy plane and the amino acid is at
different Z distances.


I assumed the 6 ring atoms define a plane and the two surface atoms define a
vector (along Y), and I expected the average angle between this plane and that
vector over the simulation to be calculated by the gmx gangle analysis.

  gmx gangle -f umbrella36_3.xtc -s umbrella36_3.tpr -n index.ndx -oav
angz.xvg -g1 plane -g2 vector -group1 -group2

Available static index groups:
  Group  0 "System" (4331 atoms)
  Group  1 "Other" (760 atoms)
  Group  2 "ZnS" (560 atoms)
  Group  3 "WAL" (200 atoms)
  Group  4 "NA" (5 atoms)
  Group  5 "CL" (5 atoms)
  Group  6 "Protein" (33 atoms)
  Group  7 "Protein-H" (17 atoms)
  Group  8 "C-alpha" (1 atoms)
  Group  9 "Backbone" (5 atoms)
  Group 10 "MainChain" (7 atoms)
  Group 11 "MainChain+Cb" (8 atoms)
  Group 12 "MainChain+H" (9 atoms)
  Group 13 "SideChain" (24 atoms)
  Group 14 "SideChain-H" (10 atoms)
  Group 15 "Prot-Masses" (33 atoms)
  Group 16 "non-Protein" (4298 atoms)
  Group 17 "Water" (3528 atoms)
  Group 18 "SOL" (3528 atoms)
  Group 19 "non-Water" (803 atoms)
  Group 20 "Ion" (10 atoms)
  Group 21 "ZnS" (560 atoms)
  Group 22 "WAL" (200 atoms)
  Group 23 "NA" (5 atoms)
  Group 24 "CL" (5 atoms)
  Group 25 "Water_and_ions" (3538 atoms)
  Group 26 "OW" (1176 atoms)
  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
  Group 28 "a_320_302_319_301_318_311" (6 atoms)
  Group 29 "a_301_302" (2 atoms)
Specify any number of selections for option 'group1'
(First analysis/vector selection):
(one per line,  for status/groups, 'help' for help, Ctrl-D to end)

27

Selection '27' parsed

27

Selection '27' parsed

Available static index groups:

  Group  0 "System" (4331 atoms)
  Group  1 "Other" (760 atoms)
  Group  2 "ZnS" (560 atoms)
  Group  3 "WAL" (200 atoms)
  Group  4 "NA" (5 atoms)
  Group  5 "CL" (5 atoms)
  Group  6 "Protein" (33 atoms)
  Group  7 "Protein-H" (17 atoms)
  Group  8 "C-alpha" (1 atoms)
  Group  9 "Backbone" (5 atoms)
  Group 10 "MainChain" (7 atoms)
  Group 11 "MainChain+Cb" (8 atoms)
  Group 12 "MainChain+H" (9 atoms)
  Group 13 "SideChain" (24 atoms)
  Group 14 "SideChain-H" (10 atoms)
  Group 15 "Prot-Masses" (33 atoms)
  Group 16 "non-Protein" (4298 atoms)
  Group 17 "Water" (3528 atoms)
  Group 18 "SOL" (3528 atoms)
  Group 19 "non-Water" (803 atoms)
  Group 20 "Ion" (10 atoms)
  Group 21 "ZnS" (560 atoms)
  Group 22 "WAL" (200 atoms)
  Group 23 "NA" (5 atoms)
  Group 24 "CL" (5 atoms)
  Group 25 "Water_and_ions" (3538 atoms)
  Group 26 "OW" (1176 atoms)
  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
  Group 28 "a_320_302_319_301_318_311" (6 atoms)
  Group 29 "a_301_302" (2 atoms)
Specify any number of selections for option 'group2'
(Second analysis/vector selection):
(one per line,  for status/groups, 'help' for help, Ctrl-D to end)

29

Selection '29' parsed

29

Selection '29' parsed

Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)

Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
Reading frame   0 time0.000
Back Off! I just backed up angz.xvg to ./#angz.xvg.1#
Last frame  4 time 4000.000
Analyzed 40001 frames, last time 4000.000

Am I right? I don't think so. :(

Would you please help me?

[gmx-users] RDF in a droplet

2018-12-13 Thread Alex
Dear all,
I have a system containing dispersed molecules of A (#29) and B (#14) in
water. A droplet is formed by the molecules of A and B after 100 ns of MD
simulation, where B is supposed to form, more or less, a shell around an A
core. I want to know the relative positions of the molecules of A and B (or at
least some of their atoms) with respect to each other, or with respect to the
center of mass of the formed droplet if possible. I guess gmx rdf can do the
job, so I am using the commands below; would you please help me improve the
rdf commands, or let me know what is wrong with them?

gmx rdf -f md.xtc -s md.tpe -n index.ndx -o rdfB.xvg -b 99000 -selrpos
mol_cog -ref A -sel B

gmx rdf -f md.xtc -s md.tpe -n index.ndx -o rdfA.xvg -b 99000 -selrpos
mol_cog -ref A -sel A

Thank you
Alex


Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Jaime Sierra
I suspect CUDA is not linked dynamically. I'm almost 100% sure.

function cuGetExportTable not supported. Please, report this error to <
supp...@rcuda.net> so that it is supported in future versions of rCUDA.

this function is called when CUDA Runtime is compiled statically.

The ldd command is telling me that:
libcudart.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0

and my environment variables are unset.

Regards,
Jaime.


On Thu, 13 Dec 2018 at 18:27, Szilárd Páll ()
wrote:

> On Thu, Dec 13, 2018 at 6:07 PM Jaime Sierra  wrote:
> >
> > My cmake config:
> >
> > ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> > -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> > -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> > -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> > -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> > -DCUDA_NVCC_FLAGS=--cudart=shared
>
> Why pass that flag when the above cache variable should do the same?
>
> > -DGMX_PREFER_STATIC_LIBS=OFF
> >
> > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
> > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
> > linux-vdso.so.1 =>  (0x7ffc6f6f4000)
> > libgromacs.so.3 =>
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
> > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
> > libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
> > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
> > libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
> > libcudart.so.8.0 =>
> > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> (0x7fb587c37000)
> > libcufft.so.8.0 =>
> > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> (0x7fb57ede9000)
> > libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
> > librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
> > libmkl_intel_lp64.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> > (0x7fb57e2b9000)
> > libmkl_intel_thread.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> > (0x7fb57d21e000)
> > libmkl_core.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> > (0x7fb57bcf)
> > libiomp5.so =>
> > /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> > (0x7fb57b9d7000)
> > libmkl_gf_lp64.so =>
> >
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> > (0x7fb57b2b4000)
> > /lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000
> >
> >
> >
> > I don't know what I'm doing wrong.
>
> You asked for dynamic linking against the CUDA runtime and you got
> that. Please be more specific what the problem is.
>
> --
> Szilárd
>
> >
> > Regards,
> > Jaime.
> >
> > On Tue, 11 Dec 2018 at 22:14, Szilárd Páll ( >)
> > wrote:
> >
> > > AFAIK the right way to control RPATH using cmake is:
> > > https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> > > no need to poke the binary.
> > >
> > > If you still need to turn off static cudart linking the way to do that
> > > is also via a CMake feature:
> > > https://cmake.org/cmake/help/latest/module/FindCUDA.html
> > > The default is static.
> > >
> > > --
> > > Szilárd
> > > On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra 
> wrote:
> > > >
> > > > I'm trying to rewrite the RPATH because shared libraries paths used
> by
> > > > GROMACS are hardcoded in the binary.
> > > >
> > > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > > > linux-vdso.so.1 =>  (0x7ffddf1d3000)
> > > > libgromacs.so.2 =>
> > > >
> > >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> > > > (0x7f0094b25000)
> > > > libcudart.so.8.0 => not found
> > > > libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> > > > libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> > > > libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> > > > librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> > > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> > > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> > > > libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> > > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> > > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> > > > libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> > > > libcudart.so.8.0 =>
> > > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > > > (0x7f0092c5)
> > > > 

Re: [gmx-users] Gmx gangle

2018-12-13 Thread rose rahmani
Would you please answer my question? Did you check it?

On Wed, 12 Dec 2018, 02:03 Mark Abraham wrote:
> Hi,
>
> I would check the documentation of gmx gangle for how it works,
> particularly for how to define a plane. Also, 4.5.4 is prehistoric, please
> do yourself a favor and use a version with the seven years of improvements
> since then :-)
>
> Mark
>
> On Tue., 11 Dec. 2018, 10:14 rose rahmani,  wrote:
>
> > Hi,
> >
> > I don't really understand how gmx gangle works!
> >
> > I want to calculate angle between amino acid ring and surface during
> > simulation.
> >  I made an index for the 6 atoms of the ring (a_CD1_CD2_CE1_CE2_CZ_CG) and
> > two atoms of the surface. The surface is in the xy plane and the amino acid
> > is at different Z distances.
> >
> >
> > I assumed the 6 ring atoms define a plane and the two surface atoms define
> > a vector (along Y), and I expected the average angle between this plane and
> > that vector over the simulation to be calculated by the gmx gangle
> > analysis.
> >
> >  gmx gangle -f umbrella36_3.xtc -s umbrella36_3.tpr -n index.ndx -oav
> > angz.xvg -g1 plane -g2 vector -group1 -group2
> >
> > Available static index groups:
> >  Group  0 "System" (4331 atoms)
> >  Group  1 "Other" (760 atoms)
> >  Group  2 "ZnS" (560 atoms)
> >  Group  3 "WAL" (200 atoms)
> >  Group  4 "NA" (5 atoms)
> >  Group  5 "CL" (5 atoms)
> >  Group  6 "Protein" (33 atoms)
> >  Group  7 "Protein-H" (17 atoms)
> >  Group  8 "C-alpha" (1 atoms)
> >  Group  9 "Backbone" (5 atoms)
> >  Group 10 "MainChain" (7 atoms)
> >  Group 11 "MainChain+Cb" (8 atoms)
> >  Group 12 "MainChain+H" (9 atoms)
> >  Group 13 "SideChain" (24 atoms)
> >  Group 14 "SideChain-H" (10 atoms)
> >  Group 15 "Prot-Masses" (33 atoms)
> >  Group 16 "non-Protein" (4298 atoms)
> >  Group 17 "Water" (3528 atoms)
> >  Group 18 "SOL" (3528 atoms)
> >  Group 19 "non-Water" (803 atoms)
> >  Group 20 "Ion" (10 atoms)
> >  Group 21 "ZnS" (560 atoms)
> >  Group 22 "WAL" (200 atoms)
> >  Group 23 "NA" (5 atoms)
> >  Group 24 "CL" (5 atoms)
> >  Group 25 "Water_and_ions" (3538 atoms)
> >  Group 26 "OW" (1176 atoms)
> >  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
> >  Group 28 "a_320_302_319_301_318_311" (6 atoms)
> >  Group 29 "a_301_302" (2 atoms)
> > Specify any number of selections for option 'group1'
> > (First analysis/vector selection):
> > (one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> > > 27
> > Selection '27' parsed
> > > 27
> > Selection '27' parsed
> > > Available static index groups:
> >  Group  0 "System" (4331 atoms)
> >  Group  1 "Other" (760 atoms)
> >  Group  2 "ZnS" (560 atoms)
> >  Group  3 "WAL" (200 atoms)
> >  Group  4 "NA" (5 atoms)
> >  Group  5 "CL" (5 atoms)
> >  Group  6 "Protein" (33 atoms)
> >  Group  7 "Protein-H" (17 atoms)
> >  Group  8 "C-alpha" (1 atoms)
> >  Group  9 "Backbone" (5 atoms)
> >  Group 10 "MainChain" (7 atoms)
> >  Group 11 "MainChain+Cb" (8 atoms)
> >  Group 12 "MainChain+H" (9 atoms)
> >  Group 13 "SideChain" (24 atoms)
> >  Group 14 "SideChain-H" (10 atoms)
> >  Group 15 "Prot-Masses" (33 atoms)
> >  Group 16 "non-Protein" (4298 atoms)
> >  Group 17 "Water" (3528 atoms)
> >  Group 18 "SOL" (3528 atoms)
> >  Group 19 "non-Water" (803 atoms)
> >  Group 20 "Ion" (10 atoms)
> >  Group 21 "ZnS" (560 atoms)
> >  Group 22 "WAL" (200 atoms)
> >  Group 23 "NA" (5 atoms)
> >  Group 24 "CL" (5 atoms)
> >  Group 25 "Water_and_ions" (3538 atoms)
> >  Group 26 "OW" (1176 atoms)
> >  Group 27 "CE1_CZ_CD1_CG_CE2_CD2" (6 atoms)
> >  Group 28 "a_320_302_319_301_318_311" (6 atoms)
> >  Group 29 "a_301_302" (2 atoms)
> > Specify any number of selections for option 'group2'
> > (Second analysis/vector selection):
> > (one per line,  for status/groups, 'help' for help, Ctrl-D to end)
> > > 29
> > Selection '29' parsed
> > > 29
> > Selection '29' parsed
> > > Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
> > Reading file umbrella36_3.tpr, VERSION 4.5.4 (single precision)
> > Reading frame   0 time0.000
> > Back Off! I just backed up angz.xvg to ./#angz.xvg.1#
> > Last frame  4 time 4000.000
> > Analyzed 40001 frames, last time 4000.000
> >
> > Am I right? I don't think so. :(
> >
> > Would you please help me?

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Szilárd Páll
On Thu, Dec 13, 2018 at 6:07 PM Jaime Sierra  wrote:
>
> My cmake config:
>
> ~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
> -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
> -DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
> -DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
> -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
> -DCUDA_NVCC_FLAGS=--cudart=shared

Why pass that flag when the above cache variable should do the same?

> -DGMX_PREFER_STATIC_LIBS=OFF
>
> ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
> linux-vdso.so.1 =>  (0x7ffc6f6f4000)
> libgromacs.so.3 =>
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
> libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
> libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
> libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
> libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
> libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
> libcudart.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0 (0x7fb587c37000)
> libcufft.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0 (0x7fb57ede9000)
> libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
> librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
> libmkl_intel_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
> (0x7fb57e2b9000)
> libmkl_intel_thread.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
> (0x7fb57d21e000)
> libmkl_core.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
> (0x7fb57bcf)
> libiomp5.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
> (0x7fb57b9d7000)
> libmkl_gf_lp64.so =>
> /nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
> (0x7fb57b2b4000)
> /lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000
>
>
>
> I don't know what I'm doing wrong.

You asked for dynamic linking against the CUDA runtime and you got
that. Please be more specific what the problem is.

--
Szilárd

>
> Regards,
> Jaime.
>
> On Tue, 11 Dec 2018 at 22:14, Szilárd Páll ()
> wrote:
>
> > AFAIK the right way to control RPATH using cmake is:
> > https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> > no need to poke the binary.
> >
> > If you still need to turn off static cudart linking the way to do that
> > is also via a CMake feature:
> > https://cmake.org/cmake/help/latest/module/FindCUDA.html
> > The default is static.
> >
> > --
> > Szilárd
> > On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra  wrote:
> > >
> > > I'm trying to rewrite the RPATH because shared libraries paths used by
> > > GROMACS are hardcoded in the binary.
> > >
> > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > > linux-vdso.so.1 =>  (0x7ffddf1d3000)
> > > libgromacs.so.2 =>
> > >
> > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> > > (0x7f0094b25000)
> > > libcudart.so.8.0 => not found
> > > libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> > > libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> > > libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> > > librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> > > libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> > > libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> > > libcudart.so.8.0 =>
> > /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > > (0x7f0092c5)
> > > /lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)
> > >
> > > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > > linux-vdso.so.1 =>  (0x7fff27b8d000)
> > > libgromacs.so.3 =>
> > >
> > /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
> > > (0x7fcb4aa3e000)
> > > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
> > > libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
> > > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
> > > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
> > > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
> > > libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
> > > libcudart.so.8.0 =>
> > 

Re: [gmx-users] Install GROMACS 2018 with CUDA features in dynamic linking way

2018-12-13 Thread Jaime Sierra
My cmake config:

~/cmake-3.13.1-Linux-x86_64/bin/cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DREGRESSIONTEST_DOWNLOAD=ON -DGMX_GPU=ON
-DCUDA_TOOLKIT_ROOT_DIR=/nfs2/LIBS/x86_64/LIBS/CUDA/8.0
-DCMAKE_INSTALL_PREFIX=/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0
-DCUDA_USE_STATIC_CUDA_RUNTIME=OFF -DBUILD_SHARED_LIBS=ON
-DCUDA_NVCC_FLAGS=--cudart=shared -DGMX_PREFER_STATIC_LIBS=OFF

ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx mdrun
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx:
linux-vdso.so.1 =>  (0x7ffc6f6f4000)
libgromacs.so.3 =>
/nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3(0x7fb588ed9000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fb588bba000)
libm.so.6 => /lib64/libm.so.6 (0x7fb5888b8000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x7fb588692000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fb58847b000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7fb58825f000)
libc.so.6 => /lib64/libc.so.6 (0x7fb587e9e000)
libcudart.so.8.0 =>
/nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0 (0x7fb587c37000)
libcufft.so.8.0 =>
/nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0 (0x7fb57ede9000)
libdl.so.2 => /lib64/libdl.so.2 (0x7fb57ebe4000)
librt.so.1 => /lib64/librt.so.1 (0x7fb57e9dc000)
libmkl_intel_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_lp64.so
(0x7fb57e2b9000)
libmkl_intel_thread.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_intel_thread.so
(0x7fb57d21e000)
libmkl_core.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_core.so
(0x7fb57bcf)
libiomp5.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/lib/intel64/libiomp5.so
(0x7fb57b9d7000)
libmkl_gf_lp64.so =>
/nfs2/LIBS/x86_64/LIBS/mkl/l_mkl_11.1.0.080/composer_xe_2013_sp1.0.080/mkl/lib/intel64/libmkl_gf_lp64.so
(0x7fb57b2b4000)
/lib64/ld-linux-x86-64.so.2 (0x7fb58bf2d000



I don't know what I'm doing wrong.

Regards,
Jaime.

On Tue, 11 Dec 2018 at 22:14, Szilárd Páll ()
wrote:

> AFAIK the right way to control RPATH using cmake is:
> https://cmake.org/cmake/help/v3.12/variable/CMAKE_SKIP_RPATH.html
> no need to poke the binary.
>
> If you still need to turn off static cudart linking the way to do that
> is also via a CMake feature:
> https://cmake.org/cmake/help/latest/module/FindCUDA.html
> The default is static.
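> (for example, untested: cmake .. -DCMAKE_SKIP_RPATH=ON
> -DCUDA_USE_STATIC_CUDA_RUNTIME=OFF)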
>
> --
> Szilárd
> On Tue, Dec 11, 2018 at 10:45 AM Jaime Sierra  wrote:
> >
> > I'm trying to rewrite the RPATH because shared libraries paths used by
> > GROMACS are hardcoded in the binary.
> >
> > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/gmx
> > linux-vdso.so.1 =>  (0x7ffddf1d3000)
> > libgromacs.so.2 =>
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2016/CUDA/8.0/bin/../lib64/libgromacs.so.2
> > (0x7f0094b25000)
> > libcudart.so.8.0 => not found
> > libnvidia-ml.so.1 => /lib64/libnvidia-ml.so.1 (0x7f009450)
> > libz.so.1 => /lib64/libz.so.1 (0x7f00942ea000)
> > libdl.so.2 => /lib64/libdl.so.2 (0x7f00940e5000)
> > librt.so.1 => /lib64/librt.so.1 (0x7f0093edd000)
> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7f0093cc1000)
> > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7f00939b7000)
> > libm.so.6 => /lib64/libm.so.6 (0x7f00936b5000)
> > libgomp.so.1 => /lib64/libgomp.so.1 (0x7f009348f000)
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f0093278000)
> > libc.so.6 => /lib64/libc.so.6 (0x7f0092eb7000)
> > libcudart.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > (0x7f0092c5)
> > /lib64/ld-linux-x86-64.so.2 (0x7f0097ad2000)
> >
> > ldd /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/gmx
> > linux-vdso.so.1 =>  (0x7fff27b8d000)
> > libgromacs.so.3 =>
> >
> /nfs2/opt/APPS/x86_64/APPS/GROMACS/2018/CUDA/8.0/bin/../lib64/libgromacs.so.3
> > (0x7fcb4aa3e000)
> > libstdc++.so.6 => /lib64/libstdc++.so.6 (0x7fcb4a71f000)
> > libm.so.6 => /lib64/libm.so.6 (0x7fcb4a41d000)
> > libgomp.so.1 => /lib64/libgomp.so.1 (0x7fcb4a1f7000)
> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7fcb49fe)
> > libpthread.so.0 => /lib64/libpthread.so.0 (0x7fcb49dc4000)
> > libc.so.6 => /lib64/libc.so.6 (0x7fcb49a03000)
> > libcudart.so.8.0 =>
> /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcudart.so.8.0
> > (0x7fcb4979c000)
> > libcufft.so.8.0 => /nfs2/LIBS/x86_64/LIBS/CUDA/8.0/lib64/libcufft.so.8.0
> > (0x7fcb4094e000)
> > libdl.so.2 => /lib64/libdl.so.2 (0x7fcb40749000)
> > librt.so.1 => /lib64/librt.so.1 (0x7fcb40541000)
> > libfftw3f.so.3 =>
> > /nfs2/LIBS/x86_64/LIBS/FFTW/3.3.3/SINGLE/lib/libfftw3f.so.3
> > (0x7fcb401c8000)
> > libmkl_intel_lp64.so =>
> >
> 

Re: [gmx-users] Domain decomposition and large molecules

2018-12-13 Thread Tommaso D'Agostino
>
> Dear all,
>
> I have a system of 27000 atoms, that I am simulating on both local and
> Marconi-KNL (cineca) clusters. In this system, I simulate a small molecule
> that has a graphene sheet attached to it, surrounded by water. I have
> already simulated with success this molecule in a system of 6500 atoms,
> using a timestep of 2fs and LINCS algorithm. These simulations have run
> flawlessly when executed with 8 mpi ranks.
>
> Now I have increased the length of the graphene part and the number of
> waters surrounding my molecule, arriving to a total of 27000 atoms;
> however, every simulation that I try to launch on more than 2 cpus or with
> a timestep greater than 0.5fs seems to crash sooner or later (strangely,
> during multiple attempts with 8 cpus, I was able to run up to 5 ns of
> simulations prior to get the crashes; sometimes, however, the crashes
> happen as soon as after 100ps). When I obtain an error prior to the crash
> (sometimes the simulation just hangs without providing any error) I get a
> series of lincs warning, followed by a message like:
>
> Fatal error:
> An atom moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
>
> The crashes are relative to a part of the molecule that I have not changed
> when increasing the graphene part, and I already checked twice that there
> are no missing/wrong terms in the molecule topology. Again, I have not
> modified at all the part of the molecule that crashes.
>
> I have already tried to increase lincs-order or lincs-iter up to 8, to
> decrease nstlist to 1, and to increase rlist to 5.0, without any success. I
> have also tried (without success) to use a single charge group for the whole
> molecule, but I would like to avoid this, as point charges may affect my
> analysis.
>
> One note: I am using a V-rescale thermostat with a tau_t of 40
> picoseconds, and every 50ps the simulation is stopped and started again
> from the last frame (preserving the velocities). I want to leave these
> options as they are, for consistency with other system used for this work.
>
> Do you have any suggestions on how I might run these simulations with decent
> performance? Even with this few atoms, if I restrict myself to a timestep of
> 0.5 fs and no more than 2 CPUs, I cannot get more than 4 ns/day. I think it
> may be connected with domain decomposition, but the -pd (particle
> decomposition) option was removed from recent versions of GROMACS (I am
> using 2016.1), so I cannot check that.
>
> This is the input mdp file used for the simulation:
> https://drive.google.com/file/d/14SeZbjNy1RyU-sGfohvtVLM9tky__GJA/view?usp=sharing
>
> Thanks in advance for the help,
>
>Tommaso D'Agostino
>Postdoctoral Researcher
>
>   Scuola Normale Superiore,
>
> Palazzo della Carovana, Ufficio 99
>   Piazza dei Cavalieri 7, 56126 Pisa (PI), Italy
>
>


Re: [gmx-users] Interaction energy, vdW and electrostatic

2018-12-13 Thread Justin Lemkul




On 12/13/18 3:27 AM, daniel madulu shadrack wrote:

Hi all,
I am simulating a polymer-drug complex and I want to plot the interaction
energy. When I use gmx energy -f xx.edr, I get a long list of energy terms,
e.g.
LJ-SR:polymer-polymer
LJ-14:polymer-polymer
LJ-SR:polymer-drug
LJ-14:polymer-drug etc

also I have
Coul-SR:polymer-polymer
Coul-14:polymer-polymer
Coul-SR:polymer-drug
Coul-14:polymer-drug etc

So, does Coul-SR:polymer-drug stand for van der Waals, and LJ-SR:polymer-drug
stand for electrostatics?


You have it backwards. LJ = Lennard-Jones (van der Waals) and Coul = 
Coulombic (electrostatics).



But when I use for example, LJ-14:polymer-drug I get zero, but when I use
LJ-SR:polymer-drug I get -88.8 kJ/mol.


A 1-4 interaction is purely intramolecular, so all intermolecular 1-4 
terms are zero by definition.



Is this -88 the total interaction energy? how do I get the vdW and
electrostatic to get the total interaction energy?


No, that is the short-range LJ contribution to the interaction energy 
(hence LJ-SR). The total would be LJ-SR + Coul-SR. Whether this quantity 
has any physical meaning depends on the force field, but in most cases 
it is not a real, physically relevant quantity.
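For example, something along these lines should extract both terms (assuming 
the term names match the legends in your .edr file):

echo "Coul-SR:polymer-drug LJ-SR:polymer-drug" | gmx energy -f xx.edr -o inter.xvg

and then add the two averages (or the two columns of inter.xvg) to get the 
short-range interaction energy.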


-Justin

--
==

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalem...@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==



Re: [gmx-users] using dual CPU's

2018-12-13 Thread Kutzner, Carsten
Hi,

> On 13. Dec 2018, at 01:11, paul buscemi  wrote:
> 
> Carsten,THanks for the response.
> 
>  my mistake - it was the GTX 980 from fig 3. … I was recalling from memory….. 
>  I assume that similar 
There we measured a 19 percent performance increase for the 80k atom system.

> results would be achieved with the 1060’s
If you want to run a small system very fast, it is probably better to put in one
strong GPU instead of two weaker ones. What you could do with your two 1060, 
though,
is to maximize your aggregate performance by running two (or even 4) simulations
at the same time using the -multidir argument to mdrun. For the science, 
probably
several independent trajectories are needed anyway.
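A sketch of what that could look like (assuming an MPI build of GROMACS, i.e.
gmx_mpi, and four run directories run1 ... run4 that each contain a tpr file):

mpirun -np 4 gmx_mpi mdrun -multidir run1 run2 run3 run4 -ntomp 4

which runs four simulations side by side, one rank and four OpenMP threads each.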
> 
> No I did not reset ,
I would at least use the -resethway mdrun command line switch,
this way your measured performances will be more reliable also for shorter runs.
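For example (just a sketch with a placeholder tpr name; adjust the step count to
your benchmark length):

gmx mdrun -s topol.tpr -resethway -nsteps 20000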

Carsten

> my results were a compilation of 4-5 runs each under slightly different 
> conditions on two computers. All with the same outcome - that is ugh!. Mark 
> had asked for the log outputs indicating some useful conclusions could be 
> drawn from them.
> 
> Paul
> 
>> On Dec 12, 2018, at 9:02 AM, Kutzner, Carsten  wrote:
>> 
>> Hi Paul,
>> 
>>> On 12. Dec 2018, at 15:36, pbusc...@q.com wrote:
>>> 
>>> Dear users  ( one more try ) 
>>> 
>>> I am trying to use 2 GPU cards to improve modeling speed.  The computer 
>>> described in the log files is used  to iron out models and am using to 
>>> learn how to use two GPU cards before purchasing two new RTX 2080 ti's.  
>>> The CPU is a 8 core 16 thread AMD and the GPU's are two GTX 1060; there are 
>>> 5 atoms in the model
>>> 
>>> Using ntmpi and ntomp  settings of 1: 16,  auto  ( 4:4) and  2: 8 ( and any 
>>> other combination factoring to 16)  the rating for ns/day are approx.   
>>> 12-16  and  for any other setting ~6-8  i.e adding a card cuts efficiency 
>>> by half.  The average load imbalance is less than 3.4% for the multicard 
>>> setup .
>>> 
>>> I am not at this point trying to maximize efficiency, but only to show some 
>>> improvement going from one to two cards.   According to a 2015 paper from 
>>> the Gromacs group  “ Best bang for your buck: GPU nodes for GROMACS 
>>> biomolecular simulations “  I should expect maybe (at best )  50% 
>>> improvement for 90k atoms ( with  2x  GTX 970 )
>> We did not benchmark GTX 970 in that publication.
>> 
>> But from Table 6 you can see that we also had quite a few cases with out 80k 
>> benchmark
>> where going from 1 to 2 GPUs, simulation speed did not increase much: E.g. 
>> for the
>> E5-2670v2 going from one to 2 GTX 980 GPUs led to an increase of 10 percent.
>> 
>> Did you use counter resetting for the benchmarks?
>> 
>> Carsten
>> 
>> 
>>> What bothers me in my initial attempts is that my simulations became slower 
>>> by adding the second GPU - it is frustrating to say the least. It's like 
>>> swimming backwards.
>>> 
>>> I know am missing - as a minimum -  the correct setup for mdrun and 
>>> suggestions would be welcome
>>> 
>>> The output from the last section of the log files is included below.
>>> 
>>> === ntmpi  1  ntomp:16 
>>> ==
>>> 
>>> <==  ###  ==>
>>> <  A V E R A G E S  >
>>> <==  ###  ==>
>>> 
>>> Statistics over 29301 steps using 294 frames
>>> 
>>> Energies (kJ/mol)
>>>Angle   G96AngleProper Dih.  Improper Dih.  LJ-14
>>>  9.17533e+052.27874e+046.64128e+042.31214e+028.34971e+04
>>>   Coulomb-14LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip.
>>> -2.84567e+07   -1.43385e+05   -2.04658e+031.33320e+071.59914e+05
>>> Position Rest.  PotentialKinetic En.   Total EnergyTemperature
>>>  7.79893e+01   -1.40196e+071.88467e+05   -1.38312e+073.00376e+02
>>> Pres. DC (bar) Pressure (bar)   Constr. rmsd
>>> -2.88685e+003.75436e+010.0e+00
>>> 
>>> Total Virial (kJ/mol)
>>>  5.27555e+04   -4.87626e+021.86144e+02
>>> -4.87648e+024.04479e+04   -1.91959e+02
>>>  1.86177e+02   -1.91957e+025.45671e+04
>>> 
>>> Pressure (bar)
>>>  2.22202e+011.27887e+00   -4.71738e-01
>>>  1.27893e+006.48135e+015.12638e-01
>>> -4.71830e-015.12632e-012.55971e+01
>>> 
>>>   T-PDMS T-VMOS
>>>  2.99822e+023.32834e+02
>>> 
>>> 
>>> M E G A - F L O P S   A C C O U N T I N G
>>> 
>>> NB=Group-cutoff nonbonded kernelsNxN=N-by-N cluster Verlet kernels
>>> RF=Reaction-Field  VdW=Van der Waals  QSTab=quadratic-spline table
>>> W3=SPC/TIP3p  W4=TIP4p (single or pairs)
>>> V=Potential and force  V=Potential only  F=Force only
>>> 
>>> Computing:   M-Number M-Flops  % Flops
>>> -
>>> Pair Search distance check2349.753264   21147.779 0.0
>>> NxN Ewald Elec. + LJ [F]   

[gmx-users] Interaction energy, vdW and electrostatic

2018-12-13 Thread daniel madulu shadrack
Hi all,
I am simulating a polymer-drug complex and I want to plot the interaction
energy. When I use gmx energy -f xx.edr, I get a long list of energy terms,
e.g.
LJ-SR:polymer-polymer
LJ-14:polymer-polymer
LJ-SR:polymer-drug
LJ-14:polymer-drug etc

also I have
Coul-SR:polymer-polymer
Coul-14:polymer-polymer
Coul-SR:polymer-drug
Coul-14:polymer-drug etc

So, does Coul-SR:polymer-drug stand for van der Waals, and LJ-SR:polymer-drug
stand for electrostatics?

When I use, for example, LJ-14:polymer-drug I get zero, but when I use
LJ-SR:polymer-drug I get -88.8 kJ/mol.

Is this -88.8 the total interaction energy? How do I combine the vdW and
electrostatic terms to get the total interaction energy?

Help

-- 


*Regards,   *
Daniel Madulu Shadrack., (M.Sc. Chem).
PhD Research Scholar
(Nanomedicine & Comp. Aided Drug Design)

  dmss...@gmail.com
-
*FOR GOD LET US DO MUCH, **QUICK AND **WELL*..
  *St. Gaspar Del Bufalo*