[gmx-users] adding hydrogen atoms into a POPC membrane

2014-12-12 Thread Carlos Navarro Retamal
Dear gromacs users,  
I just ran a CG simulation of a system consisting of a protein embedded in a
POPC membrane solvated in water.
After that I ran the initram script to obtain an AA representation of my
system (gromos54a7), but sadly I couldn't find a *.itp file with the
description of the hydrogen atoms of the membrane.
So, is there a way to add the missing atoms after this step? And if it's not
possible, could someone provide me with the 'correct' *.itp file (ideally for
a GROMOS or CHARMM force field)?
Thanks in advance.
Have a nice weekend,
Carlos


--  
Carlos Navarro Retamal
Bioinformatic engineer
Ph.D(c) in Applied Science, Universidad de Talca, Chile
Center of Bioinformatics and Molecular Simulations (CBSM)
Universidad de Talca
2 Norte 685, Casilla 721, Talca - Chile   
Teléfono: 56-71-201 798,  
Fax: 56-71-201 561
Email: carlos.navarr...@gmail.com or cnava...@utalca.cl

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] output dispersion correction and coul. recip. for energy groups

2014-12-12 Thread Mark Abraham
Yes, but before you can compute a quantity you need to define an expression
for it... For example, the derivation of the dispersion correction (see
manual) starts with "assuming a homogeneous distribution of particles." You
can maybe come up with some multiple-rerun zeroing-parameters additive
scheme (as has been suggested on this list for group-wise long-range
electrostatics)...
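
A very rough sketch of what such a rerun scheme could look like (the .mdp,
topology and trajectory names here are placeholders, and the "zeroed"
topologies have to be prepared by hand so that only one group's nonbonded
parameters survive):

for t in AB A B; do
    gmx grompp -f rerun.mdp -p topol_${t}.top -c conf.gro -o rerun_${t}.tpr
    gmx mdrun -deffnm rerun_${t} -rerun traj.trr
done
gmx energy -f rerun_AB.edr    # repeat for A and B, then e.g.
                              # E_interaction(AB) ~ E(AB) - E(A) - E(B) term by term

Whether that decomposition means anything physically is still something you
have to define, as above.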

Mark

On Fri, Dec 12, 2014 at 11:26 PM, Yongchul Chung 
wrote:
>
> Hi
>
> On Fri, Dec 12, 2014 at 4:17 PM, Mark Abraham 
> wrote:
> >
> > On Fri, Dec 12, 2014 at 10:42 PM, Yongchul Chung  >
> > wrote:
> > >
> > > Hi gmx-users,
> > >
> > > I am using grommacs-5.0.2 with gpu acceleration. I am simulating two
> > energy
> > > groups.
> > >
> > > Is there a way to output reciprocal coulomb and dispersion correction
> > > between each energy groups? It is odd to see GROMACS reports LJ and
> > > Coulombic interaction energies between two groups without dispersion
> > > correction and reciprocal sum.
> > >
> >
> > What definitions would you use?
> >
> > Mark
>
> Well, I was hoping for GROMACS to output LJ:GROUP1-GROUP2 instead of
> LJ-SR:GROUP1-GROUP2, and for Coul:GROUP1-GROUP2 instead of
> Coul-SR:GROUP1-GROUP2
>
> Greg
>
>
> >
> > > Greg
> > >
> > > --
> > >
> > > Yongchul G. Chung
> > > Postdoctoral Fellow
> > > Snurr Research Group
> > > Department of Chemical and Biological Engineering
> > > Northwestern University
> > > Evanston, IL 60208
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-requ...@gromacs.org.
> > >
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
>
>
> --
>
> Yongchul G. Chung
> Postdoctoral Fellow
> Snurr Research Group
> Department of Chemical and Biological Engineering
> Northwestern University
> Evanston, IL 60208
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] output dispersion correction and coul. recip. for energy groups

2014-12-12 Thread Yongchul Chung
Hi

On Fri, Dec 12, 2014 at 4:17 PM, Mark Abraham 
wrote:
>
> On Fri, Dec 12, 2014 at 10:42 PM, Yongchul Chung 
> wrote:
> >
> > Hi gmx-users,
> >
> > I am using grommacs-5.0.2 with gpu acceleration. I am simulating two
> energy
> > groups.
> >
> > Is there a way to output reciprocal coulomb and dispersion correction
> > between each energy groups? It is odd to see GROMACS reports LJ and
> > Coulombic interaction energies between two groups without dispersion
> > correction and reciprocal sum.
> >
>
> What definitions would you use?
>
> Mark

Well, I was hoping for GROMACS to output LJ:GROUP1-GROUP2 instead of
LJ-SR:GROUP1-GROUP2, and for Coul:GROUP1-GROUP2 instead of
Coul-SR:GROUP1-GROUP2

Greg


>
> > Greg
> >
> > --
> >
> > Yongchul G. Chung
> > Postdoctoral Fellow
> > Snurr Research Group
> > Department of Chemical and Biological Engineering
> > Northwestern University
> > Evanston, IL 60208
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.
> >
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


-- 

Yongchul G. Chung
Postdoctoral Fellow
Snurr Research Group
Department of Chemical and Biological Engineering
Northwestern University
Evanston, IL 60208


Re: [gmx-users] output dispersion correction and coul. recip. for energy groups

2014-12-12 Thread Mark Abraham
On Fri, Dec 12, 2014 at 10:42 PM, Yongchul Chung 
wrote:
>
> Hi gmx-users,
>
> I am using grommacs-5.0.2 with gpu acceleration. I am simulating two energy
> groups.
>
> Is there a way to output reciprocal coulomb and dispersion correction
> between each energy groups? It is odd to see GROMACS reports LJ and
> Coulombic interaction energies between two groups without dispersion
> correction and reciprocal sum.
>

What definitions would you use?

Mark


> Greg
>
> --
>
> Yongchul G. Chung
> Postdoctoral Fellow
> Snurr Research Group
> Department of Chemical and Biological Engineering
> Northwestern University
> Evanston, IL 60208
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


[gmx-users] output dispersion correction and coul. recip. for energy groups

2014-12-12 Thread Yongchul Chung
Hi gmx-users,

I am using GROMACS 5.0.2 with GPU acceleration. I am simulating two energy
groups.

Is there a way to output the reciprocal-space Coulomb and the dispersion
correction between each pair of energy groups? It is odd that GROMACS reports
the LJ and Coulombic interaction energies between two groups without the
dispersion correction and the reciprocal sum.

Greg

-- 

Yongchul G. Chung
Postdoctoral Fellow
Snurr Research Group
Department of Chemical and Biological Engineering
Northwestern University
Evanston, IL 60208


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub


On 12/12/14 19:40, Johnny Lu wrote:
> Oh, and what version of cmake are you using?
> 
> If that is too old, you can compile a newer version of cmake and then use
> that.

Hey, that was a good hint! I used 2.8.12.2 before. I just gave it a try
with the latest 3.0.2 - and it worked! Many thanks.

Mark, maybe it would be good to suggest a more recent CMake version for
GROMACS in general?

Best,
Jochen



> 
> On Fri, Dec 12, 2014 at 1:35 PM, Johnny Lu  wrote:
>>
>> maybe ... compile it on the head node of the cluster, and hope it has a
>> local storage?
>>
>> fftpack is slow, and let gromacs build its own fftw3 library is better. I
>> don't know if the fftpack code of gromacs is old.
>>
>> May be
>>
>> Location of where you run cmake/CMakeFiles/CMakeError.log
>>
>> will tell a bit more.
>>
>>
>> On Fri, Dec 12, 2014 at 1:24 PM, Jochen Hub  wrote:
>>>
>>>
>>>
>>> On 12/12/14 19:07, Mark Abraham wrote:
 Hi,

 I have seen similar behaviour on "interesting" setups, e.g. where the
>>> same
 physical file system has different logical locations, but I don't know
 where the issue is. $(pwd) should be expanded by the shell before cmake
 sees it, so how a wrong path could get into cmake_install.cmake is a
 mystery to me.
>>>
>>> This may in fact be the issue. Our computing center has some extra-fancy
>>> distributed file system. And the webserver we are running is a on a
>>> virtual machine, so also some kind of distributed thingie...
>>>
>>> Hmpf - so is there no solution for that?
>>>
>>> Jochen
>>>

 Mark

 On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu 
>>> wrote:
>
> I'm not sure what happened, but so far when i install, i use full path
> instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
>>>
>>> --
>>> ---
>>> Dr. Jochen Hub
>>> Computational Molecular Biophysics Group
>>> Institute for Microbiology and Genetics
>>> Georg-August-University of Göttingen
>>> Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
>>> Phone: +49-551-39-14189
>>> http://cmb.bio.uni-goettingen.de/
>>> ---
>>> --
>>> Gromacs Users mailing list
>>>
>>> * Please search the archive at
>>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>>> posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>>> send a mail to gmx-users-requ...@gromacs.org.
>>>
>>

-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---


Re: [gmx-users] Segfault on energy minimization

2014-12-12 Thread Diego Muñoz G .
I found the problem: I had packed my silver atoms too tightly inside the box,
causing them to collide at the boundaries. Many thanks, Mr. Abraham.

On Friday 12 December 2014 20:30:29 Mark Abraham wrote:
> Hi,
> 
> The "Fmax= 2.35818e+10, atom= 214" in the output says you are
> http://www.gromacs.org/Documentation/Terminology/Blowing_Up, likely because
> you've constructed initial conditions that don't make sense, leading to
> forces many orders of magnitude higher than you probably intend. Start with
> the simplest system you can think of, to debug your topology, force field
> and initial coordinates.
> 
> Mark
> 
> On Fri, Dec 12, 2014 at 8:05 PM, Diego Muñoz G. 
> wrote:
> > Dear Gromacs users,
> > 
> >   I'm trying to launch a MD calculation involving silver atoms, but when I
> > 
> > run the
> > energy minimization gromacs crashes into a Segmentation Fault. This is:
> > when I type:
> > mpirun -H nodo1 -np 8 gmx_mpi mdrun -deffnm solvent_test
> > y get this output: http://pastebin.com/JeNqVFgc[1]
> > 
> > I've tried rebuilding the topology file, reinstalling gromacs, and testing
> > the same run on
> > another computer obtaining the same result.
> > Does anyone knows what could I have done wrong?
> > 
> > Im using Gromacs 5.0.2 with MPI support on Archlinux.
> > --
> > Diego Muñoz G
> > Departmento de Química
> > Laboratorio Fisicoquímica Molecular
> > Facultad de Ciencias
> > Universidad de Chile
> > Fax: 562-22713888
> > Tel: 562-29787342
> > 
> > 
> > [1] http://pastebin.com/JeNqVFgc
> > --
> > Gromacs Users mailing list
> > 
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> > 
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > 
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.

-- 
Diego Muñoz G
Departmento de Química
Laboratorio Fisicoquímica Molecular
Facultad de Ciencias
Universidad de Chile
Fax: 562-22713888
Tel: 562-29787342


Re: [gmx-users] pdb2gmx atom not found

2014-12-12 Thread Justin Lemkul



On 12/12/14 12:52 PM, xy21hb wrote:

Dear all,


I am introducing a residue named ABC into AMBER03 force field. I built up the 
aminoacids.rtp and aminoacids.hdb file,
but when I pdb2gmx, it gives,
"
Atom HB1 not found in rtp database in residue ABC, it looks a bit like HB
"
I am pretty sure I use HB instead of HB1, and there is no atom called HB1 in 
any of the above-mentioned files.(including .pdb for pdb2gmx)
then I changed my HB to HB1, it gives similar error,


"
Atom HB11 not found in rtp database in residue ABC, it looks a bit like HB1
"


It seems that pdb2gmx is appending the name with "1".


Anyone knows why that is?



Probably due to the .hdb entry - if multiple H are to be built on a single heavy 
atom, numbers are appended so they have unique names.  But to take the guesswork 
out, you'll need to provide the exact text of the .rtp and .hdb files.
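
As an illustration only (the residue name is taken from your post, but the
control atoms are hypothetical and depend on your actual topology), an .hdb
entry that builds two tetrahedral hydrogens on CB could look like

ABC	1
	2	6	HB	CB	CA	CG

and pdb2gmx would then generate HB1 and HB2 from it, even though the entry
only names them "HB".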


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] protein-ligand complex fatal error

2014-12-12 Thread Justin Lemkul



On 12/12/14 9:19 AM, Yaser Hosseini wrote:

Hi,

I just want to run

mdrun -v -deffnm nvt

I ran every command in this tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/complex_old/index.html
but at the end I got two errors:

There are: 247500 Atoms
Charge group distribution at step 0: 20986 20508 20732 20838
Grid: 7 x 20 x 20 cells



Be careful in trying to use exactly what the tutorial does with other systems; 
specifically, if you're using a different force field, you shouldn't be using 
the tutorial's .mdp files.



Constraining the starting coordinates (step 0)

Constraining the coordinates at t0-dt (step 0)
RMS relative constraint deviation after constraining: 0.00e+00
Initial temperature: 300.014 K

Started mdrun on node 0 Tue Dec  9 18:34:03 2014

           Step           Time         Lambda
              0        0.00000        0.00000


Energies (kJ/mol)
G96AngleProper Dih.  Improper Dih.  LJ-14 Coulomb-14
 3.82672e+034.61912e+031.17647e+032.38416e+036.14760e+04
 LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip. Position Rest.
 5.52391e+05   -1.62520e+04   -4.11231e+06   -3.79310e+051.63515e+00
   PotentialKinetic En.   Total Energy  Conserved En.Temperature
-3.88200e+066.17756e+05   -3.26424e+06   -3.26424e+063.00269e+02
  Pres. DC (bar) Pressure (bar)   Constr. rmsd
-2.16174e+02   -4.15136e+031.57324e-05

DD  step 4 load imb.: force  3.5%


---
Program mdrun, VERSION 4.6.5
Source code file: /build/buildd/gromacs-4.6.5/src/mdlib/clincs.c, line: 1404

Fatal error:
Bond length not finite.

and this :


Fatal error:
4 particles communicated to PME node 2 are more than 2/3 times the
cut-off out of the domain decomposition cell of their charge group in
dimension x.
This usually means that your system is not well equilibrated.


if you want more information i can attach topol.top and em and nvt.log files .



The list does not accept attachments.  If you want to share files, please upload 
them to a file-sharing server and provide the relevant URL(s).  Include .mdp 
files.  A full description of what you have done thus far, as well as the output 
of energy minimization, is needed.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Fw: pdb2gmx atom not found

2014-12-12 Thread xy21hb
 Forwarding messages 
From: "xy21hb" 
Date: 2014-12-13 01:52:03
To:  "gmx-us...@gromacs.org" 
Subject: [gmx-users] pdb2gmx atom not found
Dear all,


I am introducing a residue named ABC into AMBER03 force field. I built up the 
aminoacids.rtp and aminoacids.hdb file,
but when I pdb2gmx, it gives,
"
Atom HB1 not found in rtp database in residue ABC, it looks a bit like HB
"
I am pretty sure I use HB instead of HB1, and there is no atom called HB1 in 
any of the above-mentioned files.(including .pdb for pdb2gmx)
then I changed my HB to HB1, it gives similar error, 


"
Atom HB11 not found in rtp database in residue ABC, it looks a bit like HB1
"


It seems that pdb2gmx is appending the name with "1".


Anyone knows why that is?


Thanks,


Yao


Re: [gmx-users] Segfault on energy minimization

2014-12-12 Thread Mark Abraham
Hi,

The "Fmax= 2.35818e+10, atom= 214" in the output says you are
http://www.gromacs.org/Documentation/Terminology/Blowing_Up, likely because
you've constructed initial conditions that don't make sense, leading to
forces many orders of magnitude higher than you probably intend. Start with
the simplest system you can think of, to debug your topology, force field
and initial coordinates.
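
As a starting point, you can look directly at the atom the minimizer is
complaining about; in a .gro file, atom N sits on line N+2 (a sketch, assuming
your starting structure is a .gro file named conf.gro):

awk 'NR == 214 + 2' conf.gro    # print the coordinate line for atom 214

That often helps you see which molecule and residue is involved.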

Mark

On Fri, Dec 12, 2014 at 8:05 PM, Diego Muñoz G. 
wrote:
>
> Dear Gromacs users,
>   I'm trying to launch a MD calculation involving silver atoms, but when I
> run the
> energy minimization gromacs crashes into a Segmentation Fault. This is:
> when I type:
> mpirun -H nodo1 -np 8 gmx_mpi mdrun -deffnm solvent_test
> y get this output: http://pastebin.com/JeNqVFgc[1]
>
> I've tried rebuilding the topology file, reinstalling gromacs, and testing
> the same run on
> another computer obtaining the same result.
> Does anyone knows what could I have done wrong?
>
> Im using Gromacs 5.0.2 with MPI support on Archlinux.
> --
> Diego Muñoz G
> Departmento de Química
> Laboratorio Fisicoquímica Molecular
> Facultad de Ciencias
> Universidad de Chile
> Fax: 562-22713888
> Tel: 562-29787342
>
> 
> [1] http://pastebin.com/JeNqVFgc
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


[gmx-users] Segfault on energy minimization

2014-12-12 Thread Diego Muñoz G .
Dear Gromacs users, 
  I'm trying to launch an MD calculation involving silver atoms, but when I run
the energy minimization GROMACS crashes with a segmentation fault. That is,
when I type:
mpirun -H nodo1 -np 8 gmx_mpi mdrun -deffnm solvent_test
I get this output: http://pastebin.com/JeNqVFgc[1]

I've tried rebuilding the topology file, reinstalling GROMACS, and testing the
same run on another computer, obtaining the same result.
Does anyone know what I could have done wrong?

I'm using GROMACS 5.0.2 with MPI support on Arch Linux.
-- 
Diego Muñoz G
Departmento de Química
Laboratorio Fisicoquímica Molecular
Facultad de Ciencias
Universidad de Chile
Fax: 562-22713888
Tel: 562-29787342


[1] http://pastebin.com/JeNqVFgc


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Johnny Lu
Oh, and what version of cmake are you using?

If that is too old, you can compile a newer version of cmake and then use
that.
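
For example (a sketch; the version and install prefix are illustrative, and it
assumes you have already downloaded and unpacked cmake-3.0.2 from cmake.org):

cd cmake-3.0.2
./bootstrap --prefix=$HOME/opt/cmake-3.0.2 && make && make install
export PATH=$HOME/opt/cmake-3.0.2/bin:$PATH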

On Fri, Dec 12, 2014 at 1:35 PM, Johnny Lu  wrote:
>
> maybe ... compile it on the head node of the cluster, and hope it has a
> local storage?
>
> fftpack is slow, and let gromacs build its own fftw3 library is better. I
> don't know if the fftpack code of gromacs is old.
>
> May be
>
> Location of where you run cmake/CMakeFiles/CMakeError.log
>
> will tell a bit more.
>
>
> On Fri, Dec 12, 2014 at 1:24 PM, Jochen Hub  wrote:
>>
>>
>>
>> On 12/12/14 19:07, Mark Abraham wrote:
>> > Hi,
>> >
>> > I have seen similar behaviour on "interesting" setups, e.g. where the
>> same
>> > physical file system has different logical locations, but I don't know
>> > where the issue is. $(pwd) should be expanded by the shell before cmake
>> > sees it, so how a wrong path could get into cmake_install.cmake is a
>> > mystery to me.
>>
>> This may in fact be the issue. Our computing center has some extra-fancy
>> distributed file system. And the webserver we are running is a on a
>> virtual machine, so also some kind of distributed thingie...
>>
>> Hmpf - so is there no solution for that?
>>
>> Jochen
>>
>> >
>> > Mark
>> >
>> > On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu 
>> wrote:
>> >>
>> >> I'm not sure what happened, but so far when i install, i use full path
>> >> instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
>> >> --
>> >> Gromacs Users mailing list
>> >>
>> >> * Please search the archive at
>> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> >> posting!
>> >>
>> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >>
>> >> * For (un)subscribe requests visit
>> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> >> send a mail to gmx-users-requ...@gromacs.org.
>> >>
>>
>> --
>> ---
>> Dr. Jochen Hub
>> Computational Molecular Biophysics Group
>> Institute for Microbiology and Genetics
>> Georg-August-University of Göttingen
>> Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
>> Phone: +49-551-39-14189
>> http://cmb.bio.uni-goettingen.de/
>> ---
>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>>
>


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Johnny Lu
Maybe ... compile it on the head node of the cluster, and hope it has
local storage?

fftpack is slow; letting GROMACS build its own FFTW3 library is better. I
don't know whether the fftpack code in GROMACS is old.

Maybe

<directory where you ran cmake>/CMakeFiles/CMakeError.log

will tell a bit more.


On Fri, Dec 12, 2014 at 1:24 PM, Jochen Hub  wrote:
>
>
>
> On 12/12/14 19:07, Mark Abraham wrote:
> > Hi,
> >
> > I have seen similar behaviour on "interesting" setups, e.g. where the
> same
> > physical file system has different logical locations, but I don't know
> > where the issue is. $(pwd) should be expanded by the shell before cmake
> > sees it, so how a wrong path could get into cmake_install.cmake is a
> > mystery to me.
>
> This may in fact be the issue. Our computing center has some extra-fancy
> distributed file system. And the webserver we are running is a on a
> virtual machine, so also some kind of distributed thingie...
>
> Hmpf - so is there no solution for that?
>
> Jochen
>
> >
> > Mark
> >
> > On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu 
> wrote:
> >>
> >> I'm not sure what happened, but so far when i install, i use full path
> >> instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
> >> --
> >> Gromacs Users mailing list
> >>
> >> * Please search the archive at
> >> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> >> posting!
> >>
> >> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >>
> >> * For (un)subscribe requests visit
> >> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> >> send a mail to gmx-users-requ...@gromacs.org.
> >>
>
> --
> ---
> Dr. Jochen Hub
> Computational Molecular Biophysics Group
> Institute for Microbiology and Genetics
> Georg-August-University of Göttingen
> Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
> Phone: +49-551-39-14189
> http://cmb.bio.uni-goettingen.de/
> ---
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub


On 12/12/14 19:07, Mark Abraham wrote:
> Hi,
> 
> I have seen similar behaviour on "interesting" setups, e.g. where the same
> physical file system has different logical locations, but I don't know
> where the issue is. $(pwd) should be expanded by the shell before cmake
> sees it, so how a wrong path could get into cmake_install.cmake is a
> mystery to me.

This may in fact be the issue. Our computing center has some extra-fancy
distributed file system. And the webserver we are running is on a
virtual machine, so also some kind of distributed thingie...

Hmpf - so is there no solution for that?

Jochen

> 
> Mark
> 
> On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu  wrote:
>>
>> I'm not sure what happened, but so far when i install, i use full path
>> instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>>

-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub


On 12/12/14 19:07, Mark Abraham wrote:
> Hi,
> 
> I have seen similar behaviour on "interesting" setups, e.g. where the same
> physical file system has different logical locations, but I don't know
> where the issue is. $(pwd) should be expanded by the shell before cmake
> sees it, so how a wrong path could get into cmake_install.cmake is a
> mystery to me.
> 
> Mark
> 
> On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu  wrote:
>>
>> I'm not sure what happened, but so far when i install, i use full path
>> instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.

Thanks, but this is not the issue. As Mark says, the shell expands the
$(pwd). But I have also expanded it myself before - same error.

Jochen

>> --
>> Gromacs Users mailing list
>>
>> * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>>
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>> * For (un)subscribe requests visit
>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-requ...@gromacs.org.
>>

-- 
---
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Mark Abraham
Hi,

I have seen similar behaviour on "interesting" setups, e.g. where the same
physical file system has different logical locations, but I don't know
where the issue is. $(pwd) should be expanded by the shell before cmake
sees it, so how a wrong path could get into cmake_install.cmake is a
mystery to me.

Mark

On Fri, Dec 12, 2014 at 6:33 PM, Johnny Lu  wrote:
>
> I'm not sure what happened, but so far when i install, i use full path
> instead of $(pwd) and it was fine for gromacs 4.6 and 5.0, 5.0.2.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


[gmx-users] pdb2gmx atom not found

2014-12-12 Thread xy21hb
Dear all,


I am introducing a residue named ABC into the AMBER03 force field. I built up
the aminoacids.rtp and aminoacids.hdb files, but when I run pdb2gmx, it gives:
"
Atom HB1 not found in rtp database in residue ABC, it looks a bit like HB
"
I am pretty sure I use HB rather than HB1, and there is no atom called HB1 in
any of the above-mentioned files (including the .pdb given to pdb2gmx).
Then I changed my HB to HB1, and it gives a similar error:


"
Atom HB11 not found in rtp database in residue ABC, it looks a bit like HB1
"


It seems that pdb2gmx is appending "1" to the name.


Does anyone know why that is?


Thanks,


Yao


Re: [gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Johnny Lu
I'm not sure what happened, but so far when I install, I use the full path
instead of $(pwd), and it was fine for GROMACS 4.6, 5.0, and 5.0.2.
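
For reference, a fully-expanded version of the reported call (a sketch only,
reusing the paths quoted in the original report) would be something like:

mkdir -p /home/waxs/opt/gmx/5.03-rotmax && cd /home/waxs/opt/gmx/5.03-rotmax
cmake /home/waxs/src/gmx/gromacs-5.0.3 -DGMX_FFT_LIBRARY=fftpack \
      -DCMAKE_INSTALL_PREFIX=/home/waxs/opt/gmx/5.03-rotmax
make -j12 && make install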


[gmx-users] with 5.0: file INSTALL cannot find gmx

2014-12-12 Thread Jochen Hub
Hi all,

I am having trouble getting 'make install' to work for GROMACS 5.0.x under
Linux. make install fails with:

CMake Error at src/programs/cmake_install.cmake:42 (FILE):
  file INSTALL cannot find "/home/waxs/opt/gmx/5.03-rotmax/bin/gmx".
Call Stack (most recent call first):
  src/cmake_install.cmake:40 (INCLUDE)
  cmake_install.cmake:44 (INCLUDE)

I have this trouble on different machines (AMD, Intel), on our
computing cluster, and on a webserver that we are running, with both icc and
gcc. The install directory is empty before running cmake. My cmake (version
2.8.12) call is, e.g.:

cmake /home/waxs/src/gmx/gromacs-5.0.3 /path/to/gmxsrc
-DGMX_FFT_LIBRARY=fftpack -DCMAKE_INSTALL_PREFIX=$(pwd) && make -j12 &&
make install

(so totally standard) but this happens with other cmake calls as well
(using FFTW or so).

I am kind of stuck. Did anyone else have similar trouble?

Below I pasted the cmake output (generated on an Ubuntu), but that looks
all very standard to me.

Many thanks for any hints,
Jochen

-- The C compiler identification is GNU 4.8.2
-- The CXX compiler identification is GNU 4.8.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Looking for NVIDIA GPUs present in the system
-- Could not detect NVIDIA GPUs
-- No compatible CUDA toolkit found (v4.0+), disabling native GPU
acceleration
-- Checking for GCC x86 inline asm
-- Checking for GCC x86 inline asm - supported
-- Detecting best SIMD instructions for this CPU
-- Detected best SIMD instructions for this CPU - AVX_128_FMA
-- Try OpenMP C flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Try OpenMP CXX flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Found OpenMP: -fopenmp
-- Performing Test CFLAGS_WARN
-- Performing Test CFLAGS_WARN - Success
-- Performing Test CFLAGS_WARN_EXTRA
-- Performing Test CFLAGS_WARN_EXTRA - Success
-- Performing Test CFLAGS_WARN_REL
-- Performing Test CFLAGS_WARN_REL - Success
-- Performing Test CFLAGS_WARN_UNINIT
-- Performing Test CFLAGS_WARN_UNINIT - Success
-- Performing Test CFLAGS_EXCESS_PREC
-- Performing Test CFLAGS_EXCESS_PREC - Success
-- Performing Test CFLAGS_COPT
-- Performing Test CFLAGS_COPT - Success
-- Performing Test CFLAGS_NOINLINE
-- Performing Test CFLAGS_NOINLINE - Success
-- Performing Test CXXFLAGS_WARN
-- Performing Test CXXFLAGS_WARN - Success
-- Performing Test CXXFLAGS_WARN_EXTRA
-- Performing Test CXXFLAGS_WARN_EXTRA - Success
-- Performing Test CXXFLAGS_WARN_REL
-- Performing Test CXXFLAGS_WARN_REL - Success
-- Performing Test CXXFLAGS_EXCESS_PREC
-- Performing Test CXXFLAGS_EXCESS_PREC - Success
-- Performing Test CXXFLAGS_COPT
-- Performing Test CXXFLAGS_COPT - Success
-- Performing Test CXXFLAGS_NOINLINE
-- Performing Test CXXFLAGS_NOINLINE - Success
-- Looking for include file unistd.h
-- Looking for include file unistd.h - found
-- Looking for include file pwd.h
-- Looking for include file pwd.h - found
-- Looking for include file dirent.h
-- Looking for include file dirent.h - found
-- Looking for include file time.h
-- Looking for include file time.h - found
-- Looking for include file sys/time.h
-- Looking for include file sys/time.h - found
-- Looking for include file io.h
-- Looking for include file io.h - not found
-- Looking for include file sched.h
-- Looking for include file sched.h - found
-- Looking for include file regex.h
-- Looking for include file regex.h - found
-- Looking for C++ include regex
-- Looking for C++ include regex - not found
-- Looking for posix_memalign
-- Looking for posix_memalign - found
-- Looking for memalign
-- Looking for memalign - found
-- Looking for _aligned_malloc
-- Looking for _aligned_malloc - not found
-- Looking for gettimeofday
-- Looking for gettimeofday - found
-- Looking for fsync
-- Looking for fsync - found
-- Looking for _fileno
-- Looking for _fileno - not found
-- Looking for fileno
-- Looking for fileno - found
-- Looking for _commit
-- Looking for _commit - not found
-- Looking for sigaction
-- Looking for sigaction - found
-- Looking for sysconf
-- Looking for sysconf - found
-- Looking for rsqrt
-- Looking for rsqrt - not found
-- Looking for rsqrtf
-- Looking for rsqrtf - not found
-- Looking for sqrtf
-- Looking for sqrtf - not found
-- Looking for sqrt in m
-- Looking for sqrt in m - found
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Checking for sched.h GNU affinity API
-- Performing Test sched_affinity_compile
-- Performing Test sched_affinity_compile - Success
-- Check if the system is big endian
-- Searching 16 bit integer
-- Looking for sys/types.h
-- Looking for sys/types.h -

Re: [gmx-users] pmetune and restarting

2014-12-12 Thread Mark Abraham
On Fri, Dec 12, 2014 at 5:23 PM, Johnny Lu  wrote:
>
> Hi.
>
> I notice that after I stop a simulation by specifying the number of hours
> that it runs with -maxh,
> and then restarting the simulation, at the beginning of each restart, the
> pme is tuned again.
>
> Will that cause any error if I periodically restart the simulation? How big
> is the error?
>

If the tuning is designed to implement PME by varying the parameters in a
conservative and intended-as-iso-accurate way (which it is), then what
problem could arise from repeated tuning? See brief discussion in 3.17.5 of
manual. Assessing the impact of that tuning would require that you first
have assessed how

Is it possible to save and reuse pme tune result by a mdrun option?
>

You can observe the result and construct your input .mdp file to use that
fourier grid and cutoff values and then use mdrun -notunepme. But now you
are hard-coded for that machine running on that many nodes under that
external network load.

Mark

Thank you.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


[gmx-users] pmetune and restarting

2014-12-12 Thread Johnny Lu
Hi.

I notice that after I stop a simulation by specifying the number of hours it
runs with -maxh and then restart it, the PME is tuned again at the beginning
of each restart.

Will that cause any error if I periodically restart the simulation? How big
is the error?

Is it possible to save and reuse the PME tuning result via an mdrun option?

Thank you.


Re: [gmx-users] Question about GPU acceleration in GROMACS 5

2014-12-12 Thread Mark Abraham
On Fri, Dec 12, 2014 at 3:47 PM, Tomy van Batis 
wrote:
>
> Hi Mark
>
> Thanks for your detailed response.
>
> I still don't see why the GPU load is only around 50%, but also why this
> number increases with an increasing number of CPU cores.
>
> For example, when using 1 CPU core (-ntomp 1 in the mdrun), the GPU load is
> only about 25-30%, although with 4 CPU cores the GPU load is 55%.
>

Your system runs like this

1. Do forces; so short-range on the GPU (~50% of the time) and angles on
the CPU (~5% of the time, then ~45% idle)
2. Do constraints, updates, neighbour search and house keeping on the CPU
(~50% of the time, including data transfer costs) with the GPU idle (~50%)
3. Repeat

so adding more CPU cores makes 2 take less time. You can see this by doing
a diff on the tables at the ends of the log files. A PME simulation looks
rather different.
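
A quick way to do that comparison (log file names are placeholders; the header
string is what GROMACS log files of this era use, adjust if yours differs):

grep -A 25 "R E A L   C Y C L E" md_1core.log  > timing_1.txt
grep -A 25 "R E A L   C Y C L E" md_4cores.log > timing_4.txt
diff -y timing_1.txt timing_4.txt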


> Considering that the work done on the GPU takes a lot longer than the work
> done on the CPU, I believe the GPU load should not change when changing the
> number of OpenMP threads. Is this correct, or am I missing something here?
>

True for 1, but not for 2.


> Additionally, I don't really see why the GPU is not loaded to 100%.
> Is this because of the system size?
>

As Carsten said, we optimize for throughput, not utilization. On a single
node, you could do everything on the GPU (as e.g. AMBER 14 does) and now
utilization would approach peak (and throughput would go up in that case,
if someone wrote a big pile of code to make it happen). But that
implementation would struggle scale to more nodes with current hardware
technology, and is tough to make work well with multiple GPUs per node
(some WIP, but focused on 1).

Mark


> Tommy
>
>
>
> > Hi,
> >
> > Only the short-ranged non-bonded work is offloaded to the GPU, but that's
> > almost all the force-based work you are doing. So it is entirely
> > unsurprising that the work done on the GPU takes a lot longer than it does
> > on the CPU. That warning is aimed at the more typical PME-based simulation
> > where the long-ranged part is done on the CPU, and now there is load to
> > balance. Running constraints+update happens only on the CPU, which is
> > always a bottleneck, and worse in your case.
> >
> > Ideally, we'd share some load that your simulations are doing solely on the
> > GPU with the CPU, and/or do the update on the GPU, but none of the
> > infrastructure is there for that.
> >
> > Mark
>
>
> On Fri, Dec 12, 2014 at 2:00 PM, Tomy van Batis 
> wrote:
> >
> > Dear all
> >
> > I am working with a system of about 200.000 particles. All the non-bonded
> > interactions on the system are Lennard-Jones type (no Coulomb). I
> constrain
> > the bond-length with Lincs. No torsion or bending interactions are taken
> > into account.
> >
> >
> > I am running the simulations on a 4-core Xeon® E5-1620 vs @ 3.70GHz
> > together with an NVIDIA Tesla K20Xm. I observe a strange behavior when
> > looking to performance of the simulations:
> >
> >
> > 1. Running in 4 cores+gpu
> >
> > GPU/CPU force evaluation time=9.5 and GPU usage=58% (I see that with the
> > command nvidia-smi)
> >
> >
> > [image: Inline image 1]
> >
> >
> >
> > 2. Running in 2 cores+gpu
> >
> > GPU/CPU force evaluation time=9.9 and GPU usage=45-50% (Image is not
> > included due to size restrictions)
> >
> >
> >
> > The situation doesn't change if I include the option -nd gpu (or gpu_cpu)
> > in the mdrun.
> >
> >
> > I can see in the mailing list that the force evaluation time should be
> > about 1, that means that I am far away from the optimal performance.
> >
> >
> > Does anybody have any suggestions about how to improve the computational
> > speed?
> >
> >
> > Thanks in advance,
> >
> > Tommy
> >
> >
> >
> >
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
>


Re: [gmx-users] trjconv precision lost from double to single conversion

2014-12-12 Thread Mark Abraham
On Fri, Dec 12, 2014 at 4:31 PM, Johnny Lu  wrote:
>
> Hi. I'm using gromacs 4.6.7
>
> If the trr trajectory file is from double precision calculation, and I use
> single precision trjconv to make a single precision trr trajectory, how
> much precision will I lose in position and in force?
>

You'll go from practically infinite precision for any such quantity to
about 6 significant figures, ie. double floating-point precision to single.
Their representations are kind of like scientific notation. Read up on
wikipedia about those :-)
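
If you want to see the actual differences for your own data, a quick sketch
with the 4.6.x tools (file names are placeholders) is:

trjconv -f traj_double.trr -o traj_single.trr     # single-precision trjconv writes a single-precision trr
gmxcheck -f traj_double.trr -f2 traj_single.trr   # reports the differences between matching frames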

1, 0.1, 0.01, or 0.001 angstrom and Newton?
>

That depends on the magnitude of the numbers. You're getting a change in
relative precision (for a coordinate of a few nm, single precision is good to
roughly 1e-6 nm).

Is the lost of precision acceptable?
>

Depends what you're using them for.


> I noticed that single precision trr seems to take about half the hard disk
> space of double precision trr.
>

Yep.

Mark


>
> Thank you.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>


Re: [gmx-users] gromacs on GPU

2014-12-12 Thread Shaohao Chen

Hi, Szilárd,

Thank you for your reply.

I tried solution i) and still got the same errors. I will try to do
solution ii).


Where can I get sample input files that give better performance on the GPU
than on the CPU?


Best,
Shaohao


On 12/09/2014 03:05 PM, Szilárd Páll wrote:

Hi Shaohao,

This is caused by a boost bug that affects the nvcc CUDA compiler, for
details see: https://svn.boost.org/trac/boost/ticket/8048

Either of the following should work:

i) use this workaround:
append to src/external/boost/boost/config/compiler/nvcc.hpp:
#if defined(BOOST_HAS_INT128) && defined(__CUDACC__)
#undef BOOST_HAS_INT128
#endif
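
For example, from the top of the GROMACS source tree (a sketch; adjust the
path if your tree is laid out differently):

cat >> src/external/boost/boost/config/compiler/nvcc.hpp << 'EOF'
#if defined(BOOST_HAS_INT128) && defined(__CUDACC__)
#undef BOOST_HAS_INT128
#endif
EOF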

ii) install boost from a fairly fresh git version (the bug was fixed
on Oct 10 for 1.57:
https://github.com/boostorg/config/commit/441311c950a40b9bea824016e9e43d7af5e3d4b0)

Cheers,
--
Szilárd


On Tue, Dec 9, 2014 at 5:50 PM, Shaohao Chen  wrote:

Dear users and developers,

I have installed gromacs 4.6.7 with GPU enabled (with CUDA 6.5). I want to
do some testing calculations to see the performance on GPU. Could someone
provide some input files that are good for testing GPU performance?

Has anyone successfully installed gromacs 5.0 with GPU enabled? I got errors
from the self-included boost tool (see below). But these errors disappear if
I installed CPU-only gromacs 5.0.

===
Error message:
---
..
... gromacs-5.0/src/external/boost/boost/config/suffix.hpp(496): error:
identifier "__int128" is undefined
... gromacs-5.0/src/external/boost/boost/config/suffix.hpp(497): error:
expected a ";"
..
==

Thank you!

Best,
Shaohao


Re: [gmx-users] trjconv precision lost from double to single conversion

2014-12-12 Thread Johnny Lu
Never mind the 2nd question; it does reduce to about 1/10 of the size.
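
For reference, the size ratio expected from the atom counts alone (both
positions and forces scale with the number of atoms) is:

echo "scale=3; 2598/29859" | bc    # ~0.087, i.e. roughly 1/11, close to the ~1/10 seen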

On Fri, Dec 12, 2014 at 10:41 AM, Johnny Lu  wrote:
>
> When I test the disk space that I would save by getting rid the water in
> the trajectory, I found that only make the trajectory 1/4 as large as the
> original.
> The trajectory has both position and force.
>
> Yet, the proteins only have about 1/10 the number of atoms of the whole
> system.
>
> Group 0 ( System) has 29859 elements
> Group 1 (Protein) has  2598 elements
>
> Should the trajectory file size reduced to 1/4 instead of 1/10?
>
> The command that I used was:
> echo -e "1\n1\n" | $GMXHOME/trjconv_d -f $In_Traj -o $Out_Traj -s $In_ptr
> -pbc mol -ur compact -center -force
>
> Thanks again.
>
> On Fri, Dec 12, 2014 at 10:31 AM, Johnny Lu 
> wrote:
>>
>> Hi. I'm using gromacs 4.6.7
>>
>> If the trr trajectory file is from double precision calculation, and I
>> use single precision trjconv to make a single precision trr trajectory, how
>> much precision will I lose in position and in force?
>>
>> 1, 0.1, 0.01, or 0.001 angstrom and Newton?
>>
>> Is the lost of precision acceptable?
>>
>> I noticed that single precision trr seems to take about half the hard
>> disk space of double precision trr.
>>
>> Thank you.
>>
>


Re: [gmx-users] trjconv precision lost from double to single conversion

2014-12-12 Thread Johnny Lu
When I tested how much disk space I would save by removing the water from the
trajectory, I found that doing so only makes the trajectory 1/4 as large as the
original. The trajectory has both positions and forces.

Yet, the protein only has about 1/10 of the number of atoms of the whole
system.

Group 0 ( System) has 29859 elements
Group 1 (Protein) has  2598 elements

Shouldn't the trajectory file size be reduced to about 1/10 instead of 1/4?

The command that I used was:
echo -e "1\n1\n" | $GMXHOME/trjconv_d -f $In_Traj -o $Out_Traj -s $In_ptr
-pbc mol -ur compact -center -force

Thanks again.

On Fri, Dec 12, 2014 at 10:31 AM, Johnny Lu  wrote:
>
> Hi. I'm using gromacs 4.6.7
>
> If the trr trajectory file is from double precision calculation, and I use
> single precision trjconv to make a single precision trr trajectory, how
> much precision will I lose in position and in force?
>
> 1, 0.1, 0.01, or 0.001 angstrom and Newton?
>
> Is the lost of precision acceptable?
>
> I noticed that single precision trr seems to take about half the hard disk
> space of double precision trr.
>
> Thank you.
>


[gmx-users] trjconv precision lost from double to single conversion

2014-12-12 Thread Johnny Lu
Hi. I'm using GROMACS 4.6.7.

If the trr trajectory file comes from a double-precision calculation and I
use single-precision trjconv to make a single-precision trr trajectory, how
much precision will I lose in the positions and forces?

1, 0.1, 0.01, or 0.001 angstrom (and the equivalent in newtons)?

Is the loss of precision acceptable?

I noticed that a single-precision trr seems to take about half the hard
disk space of a double-precision trr.

Thank you.
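
As a back-of-the-envelope estimate: a single-precision trr stores coordinates
as 32-bit floats with roughly seven significant decimal digits, so for
nanometre-scale coordinates the rounding is on the order of 1e-6 to 1e-5 nm,
well below 0.001 angstrom, and the relative rounding of the forces is similar.
One way to check this empirically is to let gmxcheck compare the two files
frame by frame; a minimal sketch, assuming the 4.6 tool names and hypothetical
file names:

# Write a single-precision copy with the single-precision trjconv build,
# selecting the whole System (group 0), then compare the two trajectories.
echo 0 | trjconv -f traj_double.trr -o traj_single.trr -s topol.tpr
gmxcheck -f traj_double.trr -f2 traj_single.trr

gmxcheck only reports differences above its tolerance, so tighten -tol if
nothing shows up.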


Re: [gmx-users] Question about GPU acceleration in GROMACS 5

2014-12-12 Thread Carsten Kutzner
Hi,

On 12 Dec 2014, at 15:47, Tomy van Batis  wrote:

> Hi Mark
> 
> Thanks for your detailed response.
>
> I still don't see why the GPU load is only around 50%, nor why this number
> increases with the number of CPU cores.
>
> For example, when using 1 CPU core (-ntomp 1 in the mdrun), the GPU load is
> only about 25-30%, while with 4 CPU cores it is 55%.
>
> Considering that the work done on the GPU takes a lot longer than the work
> done on the CPU, I believe the GPU load should not change when changing the
> number of OpenMP threads. Is this correct, or am I missing something here?
>
> Additionally, I don't really see why the GPU is not loaded to 100%.
> Is this because of the system size?
The GPU is idle for part of each time step while it waits for new positions
for which to calculate forces. The time-step integration is done on the CPU.
Additionally, there is a load balancing between the real- and reciprocal-space
parts of the electrostatics calculation that optimizes for the shortest
possible time step, not for the highest possible GPU load.

Carsten
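
If it helps to quantify that, one can benchmark a few short runs at different
thread counts and compare the GPU/CPU force evaluation time reported near the
end of each log, together with nvidia-smi; a minimal sketch, assuming a
GROMACS 5.0 mdrun with the -ntomp, -nb, -pin, -resethway and -nsteps options
and a hypothetical bench.tpr:

# -resethway resets the timers halfway so startup/load-balancing is excluded.
for n in 1 2 4; do
    mdrun -s bench.tpr -deffnm bench_ntomp$n -ntomp $n -nb gpu -pin on \
          -resethway -nsteps 20000
done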

> 
> Tommy
> 
> 
> 
>> Hi,
>>
>> Only the short-ranged non-bonded work is offloaded to the GPU, but that's
>> almost all the force-based work you are doing. So it is entirely
>> unsurprising that the work done on the GPU takes a lot longer than it does
>> on the CPU. That warning is aimed at the more typical PME-based simulation
>> where the long-ranged part is done on the CPU, and now there is load to
>> balance. Running constraints+update happens only on the CPU, which is always
>> a bottleneck, and worse in your case.
>>
>> Ideally, we'd share some load that your simulations are doing solely on the
>> GPU with the CPU, and/or do the update on the GPU, but none of the
>> infrastructure is there for that.
>>
>> Mark
> 
> 
> On Fri, Dec 12, 2014 at 2:00 PM, Tomy van Batis 
> wrote:
>> 
>> Dear all
>> 
>> I am working with a system of about 200.000 particles. All the non-bonded
>> interactions in the system are Lennard-Jones type (no Coulomb). I constrain
>> the bond lengths with LINCS. No torsion or bending interactions are taken
>> into account.
>>
>>
>> I am running the simulations on a 4-core Xeon® E5-1620 vs @ 3.70GHz
>> together with an NVIDIA Tesla K20Xm. I observe strange behavior when
>> looking at the performance of the simulations:
>> 
>> 
>> 1. Running in 4 cores+gpu
>> 
>> GPU/CPU force evaluation time=9.5 and GPU usage=58% (I see that with the
>> command nvidia-smi)
>> 
>> 
>> [image: Inline image 1]
>> 
>> 
>> 
>> 2. Running in 2 cores+gpu
>> 
>> GPU/CPU force evaluation time=9.9 and GPU usage=45-50% (Image is not
>> included due to size restrictions)
>> 
>> 
>> 
>> The situation doesn't change if I include the option -nb gpu (or gpu_cpu)
>> in the mdrun.
>> 
>> 
>> I can see in the mailing list that the force evaluation time should be
>> about 1, which means I am far from optimal performance.
>> 
>> 
>> Does anybody have any suggestions about how to improve the computational
>> speed?
>> 
>> 
>> Thanks in advance,
>> 
>> Tommy
>> 
>> 
>> 
>> 


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Question about GPU acceleration in GROMACS 5

2014-12-12 Thread Tomy van Batis
Hi Mark

Thanks for your detailed response.

I still don't see why the GPU load is only around 50%, nor why this number
increases with the number of CPU cores.

For example, when using 1 CPU core (-ntomp 1 in the mdrun), the GPU load is
only about 25-30%, while with 4 CPU cores it is 55%.

Considering that the work done on the GPU takes a lot longer than the work
done on the CPU, I believe the GPU load should not change when changing the
number of OpenMP threads. Is this correct, or am I missing something here?

Additionally, I don't really see why the GPU is not loaded to 100%.
Is this because of the system size?

Tommy



> Hi,
>
> Only the short-ranged non-bonded work is offloaded to the GPU, but that's
> almost all the force-based work you are doing. So it is entirely
> unsurprising that the work done on the GPU takes a lot longer than it does
> on the CPU. That warning is aimed at the more typical PME-based simulation
> where the long-ranged part is done on the CPU, and now there is load to
> balance. Running constraints+update happens only on the CPU, which is always
> a bottleneck, and worse in your case.
>
> Ideally, we'd share some load that your simulations are doing solely on the
> GPU with the CPU, and/or do the update on the GPU, but none of the
> infrastructure is there for that.
>
> Mark


On Fri, Dec 12, 2014 at 2:00 PM, Tomy van Batis 
wrote:
>
> Dear all
>
> I am working with a system of about 200.000 particles. All the non-bonded
> interactions in the system are Lennard-Jones type (no Coulomb). I constrain
> the bond lengths with LINCS. No torsion or bending interactions are taken
> into account.
>
>
> I am running the simulations on a 4-core Xeon® E5-1620 vs @ 3.70GHz
> together with an NVIDIA Tesla K20Xm. I observe strange behavior when
> looking at the performance of the simulations:
>
>
> 1. Running in 4 cores+gpu
>
> GPU/CPU force evaluation time=9.5 and GPU usage=58% (I see that with the
> command nvidia-smi)
>
>
> [image: Inline image 1]
>
>
>
> 2. Running in 2 cores+gpu
>
> GPU/CPU force evaluation time=9.9 and GPU usage=45-50% (Image is not
> included due to size restrictions)
>
>
>
> The situation doesn't change if I include the option -nb gpu (or gpu_cpu)
> in the mdrun.
>
>
> I can see in the mailing list that the force evaluation time should be
> about 1, which means I am far from optimal performance.
>
>
> Does anybody have any suggestions about how to improve the computational
> speed?
>
>
> Thanks in advance,
>
> Tommy
>
>
>
>


[gmx-users] protein-ligand complex fatal error

2014-12-12 Thread Yaser Hosseini
Hi,

I just want to run:

mdrun -v -deffnm nvt

I ran every command in this tutorial:

http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/complex_old/index.html
but at the end I got two errors:

There are: 247500 Atoms
Charge group distribution at step 0: 20986 20508 20732 20838
Grid: 7 x 20 x 20 cells

Constraining the starting coordinates (step 0)

Constraining the coordinates at t0-dt (step 0)
RMS relative constraint deviation after constraining: 0.00e+00
Initial temperature: 300.014 K

Started mdrun on node 0 Tue Dec  9 18:34:03 2014

           Step           Time         Lambda
              0        0.00000        0.00000


   Energies (kJ/mol)
        G96Angle    Proper Dih.  Improper Dih.          LJ-14     Coulomb-14
     3.82672e+03    4.61912e+03    1.17647e+03    2.38416e+03    6.14760e+04
         LJ (SR)  Disper. corr.   Coulomb (SR)   Coul. recip. Position Rest.
     5.52391e+05   -1.62520e+04   -4.11231e+06   -3.79310e+05    1.63515e+00
       Potential    Kinetic En.   Total Energy  Conserved En.    Temperature
    -3.88200e+06    6.17756e+05   -3.26424e+06   -3.26424e+06    3.00269e+02
  Pres. DC (bar) Pressure (bar)   Constr. rmsd
    -2.16174e+02   -4.15136e+03    1.57324e-05

DD  step 4 load imb.: force  3.5%


---
Program mdrun, VERSION 4.6.5
Source code file: /build/buildd/gromacs-4.6.5/src/mdlib/clincs.c, line: 1404

Fatal error:
Bond length not finite.

and this:

> Fatal error:
> 4 particles communicated to PME node 2 are more than 2/3 times the
> cut-off out of the domain decomposition cell of their charge group in
> dimension x.
> This usually means that your system is not well equilibrated.

If you want more information, I can attach the topol.top and the em and nvt
log files.

Thank you.
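
Both errors are common symptoms of the system blowing up early in NVT rather
than two separate problems. A quick first check, sketched here with the 4.6
tool names and assuming the em.edr and nvt.log files produced by the tutorial,
is to see how far minimization actually converged and which energy term
diverges first:

# Potential energy during minimization: did it level off at a sane value?
echo Potential | g_energy -f em.edr -o em_potential.xvg
# The last energy blocks written before the crash usually show what exploded.
tail -n 60 nvt.log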


Re: [gmx-users] Fwd: Question about GPU acceleration in GROMACS 5

2014-12-12 Thread Mark Abraham
Hi,

Only the short-ranged non-bonded work is offloaded to the GPU, but that's
almost all the force-based work you are doing. So it is entirely
unsurprising that the work done on the GPU takes a lot longer than it does
on the CPU. That warning is aimed at the more typical PME-based simulation
where the long-ranged part is done on the CPU, and now there is load to
balance. Running constraints+update happens only on the CPU, which is
always a bottleneck, and worse in your case.

Ideally, we'd share some load that your simulations are doing solely on the
GPU with the CPU, and/or do the update on the GPU, but none of the
infrastructure is there for that.

Mark

On Fri, Dec 12, 2014 at 2:00 PM, Tomy van Batis 
wrote:
>
> Dear all
>
> I am working with a system of about 200.000 particles. All the non-bonded
> interactions in the system are Lennard-Jones type (no Coulomb). I constrain
> the bond lengths with LINCS. No torsion or bending interactions are taken
> into account.
>
>
> I am running the simulations on a 4-core Xeon® E5-1620 vs @ 3.70GHz
> together with an NVIDIA Tesla K20Xm. I observe strange behavior when
> looking at the performance of the simulations:
>
>
> 1. Running in 4 cores+gpu
>
> GPU/CPU force evaluation time=9.5 and GPU usage=58% (I see that with the
> command nvidia-smi)
>
>
> [image: Inline image 1]
>
>
>
> 2. Running in 2 cores+gpu
>
> GPU/CPU force evaluation time=9.9 and GPU usage=45-50% (Image is not
> included due to size restrictions)
>
>
>
> The situation doesn't change if I include the option -nb gpu (or gpu_cpu)
> in the mdrun.
>
>
> I can see in the mailing list that the force evaluation time should be
> about 1, which means I am far from optimal performance.
>
>
> Does anybody have any suggestions about how to improve the computational
> speed?
>
>
> Thanks in advance,
>
> Tommy
>


[gmx-users] Fwd: Question about GPU acceleration in GROMACS 5

2014-12-12 Thread Tomy van Batis
Dear all

I am working with a system of about 200.000 particles. All the non-bonded
interactions in the system are Lennard-Jones type (no Coulomb). I constrain
the bond lengths with LINCS. No torsion or bending interactions are taken
into account.


I am running the simulations on a 4-core Xeon® E5-1620 vs @ 3.70GHz
together with an NVIDIA Tesla K20Xm. I observe strange behavior when
looking at the performance of the simulations:


1. Running in 4 cores+gpu

GPU/CPU force evaluation time=9.5 and GPU usage=58% (I see that with the
command nvidia-smi)


[image: Inline image 1]



2. Running in 2 cores+gpu

GPU/CPU force evaluation time=9.9 and GPU usage=45-50% (Image is not
included due to size restrictions)



The situation doesn't change if I include the option -nb gpu (or gpu_cpu)
in the mdrun.


I can see in the mailing list that the force evaluation time should be
about 1, which means I am far from optimal performance.


Does anybody have any suggestions about how to improve the computational
speed?


Thanks in advance,

Tommy


[gmx-users] How to define the covalent bonds between ligand and protein

2014-12-12 Thread Bikash Ranjan Sahoo
Dear All,
   I am trying to perform a protein-ligand MD simulation where the ligand is
covalently bonded to the receptor. I have seen a few earlier posts from other
users who are trying to set up such complexes. A few days ago some GROMACS
users suggested defining a new residue in the force field. However, it is very
difficult to derive the necessary parameters from PRODRG/Acpype/ATB. If anyone
has set up such a system using the output of these servers, kindly help me
define my ligand parameters, for which I'll remain ever grateful.


Thanking You
In anticipation of your reply
Bikash
Osaka Univ. ; Japan
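
For the covalent link itself, as opposed to the ligand parameters, one route
that is often suggested is pdb2gmx's specbond.dat mechanism; the entry below is
only an illustrative sketch, with the residue name LIG, the atom names and the
0.18 nm distance being hypothetical placeholders:

1
CYS     SG     1     LIG     C1     1     0.18     CYS2     LIG

Each entry names the two residues and atoms, how many such bonds each atom may
form, the reference distance (in nm) used to decide whether to create the bond,
and the residue names to use afterwards. The ligand still has to be known to
pdb2gmx via an .rtp entry, and the bonded parameters across the new link have
to exist in the force field (or be added by hand), which is the part the
PRODRG/Acpype/ATB output does not give you.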


Re: [gmx-users] Segmentation fault error

2014-12-12 Thread Mark Abraham
Yes, seems like blowing up, but I would run it on another machine to be
sure.

Mark

On Fri, Dec 12, 2014 at 9:03 AM, Seyed Mojtaba Rezaei Sani <
s.m.rezaeis...@gmail.com> wrote:
>
> Hi Mark,
>
> Thanks for your response. I use a rather old version of GROMACS, 4.5.4.
> Please find the md.log file in the following link:
> https://www.dropbox.com/s/bbmqln6oqjyus0b/md.log?dl=0
> I also have to mention that, due to reaching machine precision, I could
> not minimize the energy of the system well.
>
> On Fri, Dec 12, 2014 at 7:08 AM, Mark Abraham 
> wrote:
> >
> > Hi,
> >
> > This is a generic MPI error message. Nobody can tell from it what caused
> > it. You need to look at the whole stdout and the mdrun .log file for
> > diagnostics. You should also report your Gromacs version. Probably you
> are
> > just http://www.gromacs.org/Documentation/Terminology/Blowing_Up, but
> > there
> > is a known problem with 5.0.3 which we will fix ASAP.
> >
> > Mark
> >
> > On Fri, Dec 12, 2014 at 3:55 AM, Seyed Mojtaba Rezaei Sani <
> > s.m.rezaeis...@gmail.com> wrote:
> > >
> > > Dear all,
> > > I am trying to simulate a drug-carrier system consisting of HSPC/CHOL
> > > in the form of a vesicle. The code works well for the system when the
> > > CHOL molecules are not inserted. As soon as I include CHOL molecules, I
> > > face this error:
> > >
> > > starting mdrun 'Chol/HSPC  VESICLE'
> > > 900000 steps,  27000.0 ps.
> > > step 0
> > > [compute-0-3:30916] *** Process received signal ***
> > > [compute-0-3:30916] Signal: Segmentation fault (11)
> > > [compute-0-3:30916] Signal code: Address not mapped (1)
> > > [compute-0-3:30916] Failing at address: 0x9a200b0
> > > [compute-0-3:30916] [ 0] /lib64/libpthread.so.0 [0x316940eb10]
> > > [compute-0-3:30916] [ 1] /opt/bio/gromacs/lib/libgmx_mpi.so.6 [0x2b291ac0ee2c]
> > > [compute-0-3:30916] *** End of error message ***
> > > --------------------------------------------------------------------------
> > > mpirun noticed that process rank 10 with PID 30916 on node compute-0-3.local
> > > exited on signal 11 (Segmentation fault).
> > > --------------------------------------------------------------------------
> > >
> > > Here is the mdp file:
> > >
> > > title                = Martini
> > > integrator           = md
> > > dt                   = 0.03
> > > nsteps               = 900000
> > > nstcomm              = 100
> > > comm-grps            =
> > > nstxout              = 1000
> > > nstvout              = 1000
> > > nstfout              = 1000
> > > nstlog               = 1000   ; Output frequency for energies to log file
> > > nstenergy            = 100    ; Output frequency for energies to energy file
> > > nstxtcout            = 1000   ; Output frequency for .xtc file
> > > xtc_precision        = 100
> > > xtc-grps             =
> > > energygrps           = HSPC CHOL W
> > > nstlist              = 10
> > > ns_type              = grid
> > > pbc                  = xyz
> > > rlist                = 1.4
> > > coulombtype          = Shift      ; Reaction_field (for use with Verlet-pairlist) ; PME (especially with polarizable water)
> > > rcoulomb_switch      = 0.0
> > > rcoulomb             = 1.2
> > > epsilon_r            = 15         ; 2.5 (with polarizable water)
> > > vdw_type             = Shift      ; cutoff (for use with Verlet-pairlist)
> > > rvdw_switch          = 0.9
> > > rvdw                 = 1.2        ; 1.1 (for use with Verlet-pairlist)
> > > ;cutoff-scheme       = verlet
> > > ;coulomb-modifier    = Potential-shift
> > > ;vdw-modifier        = Potential-shift
> > > ;epsilon_rf          = 0          ; epsilon_rf = 0 really means epsilon_rf = infinity
> > > ;verlet-buffer-drift = 0.005
> > > tcoupl               = v-rescale
> > > tc-grps              = HSPC CHOL W
> > > tau_t                = 1.0 1.0 1.0
> > > ref_t                = 323 323 323
> > > Pcoupl               = berendsen  ; parrinello-rahman
> > > Pcoupltype           = isotropic  ; semiisotropic
> > > tau_p                = 3.0        ; 12.0 12.0 ; parrinello-rahman is more stable with larger tau-p, DdJ, 20130422
> > > compressibility      = 3e-4
> > > ref_p                = 1.0        ; 1.0 1.0
> > > gen_vel              = yes
> > > gen_temp             = 320
> > > gen_seed             = 473529
> > > constraints          = none
> > > constraint_algorithm = Lincs
> > > continuation         = no
> > > lincs_order          = 4
> > > lincs_warnangle      = 30
> > >
> > > I appreciate any help in advance.
> > >
> > > --
> > > Seyed Mojtaba Rezaei Sani
> > >
> > > Institute for Research in Funda

[gmx-users] Using g_densmap

2014-12-12 Thread soumadwip ghosh
Dear all,
 I am studying the effect of different ions binding to the sites
of a double-stranded DNA. I am using GROMACS 4.5.6 and the CHARMM force field
for my MD simulations. I have three different simulation results, with
sodium, TMA and choline ions. What I want to do is make number-density
maps using g_densmap, in order to show that the preferential occupancies of
these three ions in the different regions of the DNA differ, at least in
number. I have defined the major and minor grooves of the DNA in my
index.ndx file, in addition to the usual DNA, ions, and SOL groups. I want
to calculate the number-density maps for the ions in the minor groove. So I
am thinking of writing the command:


g_densmap -f traj.xtc -s topol.tpr -n index.ndx -dt 1000 -amax 6 -rmax 6 -mirror


Now my question is: from the .ndx file, which three groups am I supposed to
choose if I want to plot the density map of the ions surrounding the minor
groove?


It is written that three groups should be supplied: the centers of mass of
the first two groups define the axis, and the third defines the analysis
group. I am a bit confused about which groups I should use as groups 1, 2
and 3.


Regards,
Soumadwip Ghosh
Research Fellow
Indian Institute of Technology, Bombay
India
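
For what it is worth, a hedged sketch of how the three prompts are often
answered in the axial/radial (-amax/-rmax) mode; the index-group names here
are purely hypothetical placeholders, not a recommendation for this system:

# Groups 1 and 2: their centers of mass define the axis (for example one group
# per DNA strand, or per DNA end); group 3: the ions whose density is mapped.
echo "DNA_end1 DNA_end2 Ions" | g_densmap -f traj.xtc -s topol.tpr -n index.ndx \
    -dt 1000 -amax 6 -rmax 6 -mirror -o densmap_minor.xpm

Whether the minor-groove group itself or some other pair of groups defines a
sensible axis depends on how the groove groups were built in index.ndx.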


Re: [gmx-users] Segmentation fault error

2014-12-12 Thread Seyed Mojtaba Rezaei Sani
Hi Mark,

Thanks for your response. I use a rather old version of GROMACS, 4.5.4.
Please find the md.log file in the following link:
https://www.dropbox.com/s/bbmqln6oqjyus0b/md.log?dl=0
I also have to mention that, due to reaching machine precision, I could
not minimize the energy of the system well.

On Fri, Dec 12, 2014 at 7:08 AM, Mark Abraham 
wrote:
>
> Hi,
>
> This is a generic MPI error message. Nobody can tell from it what caused
> it. You need to look at the whole stdout and the mdrun .log file for
> diagnostics. You should also report your Gromacs version. Probably you are
> just http://www.gromacs.org/Documentation/Terminology/Blowing_Up, but
> there
> is a known problem with 5.0.3 which we will fix ASAP.
>
> Mark
>
> On Fri, Dec 12, 2014 at 3:55 AM, Seyed Mojtaba Rezaei Sani <
> s.m.rezaeis...@gmail.com> wrote:
> >
> > Dear all,
> > I am trying to simulate a drug-carrier system consisting of HSPC/CHOL
> > in the form of a vesicle. The code works well for the system when the
> > CHOL molecules are not inserted. As soon as I include CHOL molecules, I
> > face this error:
> >
> > starting mdrun 'Chol/HSPC  VESICLE'
> > 900000 steps,  27000.0 ps.
> > step 0
> > [compute-0-3:30916] *** Process received signal ***
> > [compute-0-3:30916] Signal: Segmentation fault (11)
> > [compute-0-3:30916] Signal code: Address not mapped (1)
> > [compute-0-3:30916] Failing at address: 0x9a200b0
> > [compute-0-3:30916] [ 0] /lib64/libpthread.so.0 [0x316940eb10]
> > [compute-0-3:30916] [ 1] /opt/bio/gromacs/lib/libgmx_mpi.so.6 [0x2b291ac0ee2c]
> > [compute-0-3:30916] *** End of error message ***
> > --------------------------------------------------------------------------
> > mpirun noticed that process rank 10 with PID 30916 on node compute-0-3.local
> > exited on signal 11 (Segmentation fault).
> > --------------------------------------------------------------------------
> >
> > Here is the mdp file:
> >
> > title                = Martini
> > integrator           = md
> > dt                   = 0.03
> > nsteps               = 900000
> > nstcomm              = 100
> > comm-grps            =
> > nstxout              = 1000
> > nstvout              = 1000
> > nstfout              = 1000
> > nstlog               = 1000   ; Output frequency for energies to log file
> > nstenergy            = 100    ; Output frequency for energies to energy file
> > nstxtcout            = 1000   ; Output frequency for .xtc file
> > xtc_precision        = 100
> > xtc-grps             =
> > energygrps           = HSPC CHOL W
> > nstlist              = 10
> > ns_type              = grid
> > pbc                  = xyz
> > rlist                = 1.4
> > coulombtype          = Shift      ; Reaction_field (for use with Verlet-pairlist) ; PME (especially with polarizable water)
> > rcoulomb_switch      = 0.0
> > rcoulomb             = 1.2
> > epsilon_r            = 15         ; 2.5 (with polarizable water)
> > vdw_type             = Shift      ; cutoff (for use with Verlet-pairlist)
> > rvdw_switch          = 0.9
> > rvdw                 = 1.2        ; 1.1 (for use with Verlet-pairlist)
> > ;cutoff-scheme       = verlet
> > ;coulomb-modifier    = Potential-shift
> > ;vdw-modifier        = Potential-shift
> > ;epsilon_rf          = 0          ; epsilon_rf = 0 really means epsilon_rf = infinity
> > ;verlet-buffer-drift = 0.005
> > tcoupl               = v-rescale
> > tc-grps              = HSPC CHOL W
> > tau_t                = 1.0 1.0 1.0
> > ref_t                = 323 323 323
> > Pcoupl               = berendsen  ; parrinello-rahman
> > Pcoupltype           = isotropic  ; semiisotropic
> > tau_p                = 3.0        ; 12.0 12.0 ; parrinello-rahman is more stable with larger tau-p, DdJ, 20130422
> > compressibility      = 3e-4
> > ref_p                = 1.0        ; 1.0 1.0
> > gen_vel              = yes
> > gen_temp             = 320
> > gen_seed             = 473529
> > constraints          = none
> > constraint_algorithm = Lincs
> > continuation         = no
> > lincs_order          = 4
> > lincs_warnangle      = 30
> >
> > I appreciate any help in advance.
> >
> > --
> > Seyed Mojtaba Rezaei Sani
> >
> > Institute for Research in Fundamental Sciences (IPM)
> > School of Nano-Science
> > Shahid Farbin Alley
> > Shahid Lavasani st
> > P.O. Box 19395-5531
> > Tehran, Iran
> > Tel: +98 21 2310  (3069)