Re: [gmx-users] g_energy definition

2014-07-24 Thread Justin Lemkul



On 7/23/14, 11:05 PM, Andy Chao wrote:

Dear GROMACS USERS:

Thanks a lot for your support and help!  I would like to ask a few more
questions related to the output (*.xvg) of the g_energy command.

1. GROMACS computes the potential energy of an ionic liquid to be negative
beyond a specific time.  What does negative energy of an ionic liquid
electrolyte mean?



Negative energy means net attraction.


2. What does the calculation of the energy for Bond, Angle, Proper
Dih, LJ-14, Coulomb-14, Vir-XY, Pres-YY, etc. represent? Where can
I find a reference that explains each term?



Most of the terms should be obvious.  Bond is the energy of bonds, Angle for 
angles, etc.  The 14 interactions are intramolecular 1-4 interactions.  Vir 
and Pres terms are related to virial and pressure tensors, respectively.



3. I would like to estimate the total free energy of an ionic liquid.  How
should the total free energy be calculated based on the available selection?



You don't.  You can get an internal energy and ultimately an enthalpy from the 
.edr terms, but there is no energy term for entropy; this is true for any MD 
simulation.  There are various ways of calculating entropy in MD simulations, 
but not from the .edr file.
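
For reference, any of those terms can be pulled from the .edr file with
g_energy; a minimal sketch (file names hypothetical):

  echo "Potential" | g_energy -f ener.edr -o potential.xvg

The term name(s) are read from standard input and the selected time series is
written to the .xvg file.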


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Fwd: Obtaining Positive Energy Values for Both Potential and Total Energies

2014-07-24 Thread Mark Abraham
Hi,

Please leave the discussion on the list. Others may wish to contribute or
learn from it! :-)

-- Forwarded message --
From: Yip Yew Mun yipy0...@gmail.com
Date: Thu, Jul 24, 2014 at 5:16 AM
Subject: Re: [gmx-users] Obtaining Positive Energy Values for Both
Potential and Total Energies
To: mark.j.abra...@gmail.com


Hi Mark,

Thanks for the prompt reply. I have been trying to obtain the water/octanol
partition coefficient of a certain small molecule from TI simulations, so that
the result can be compared to the experimental value. That’s the reason why
I’m attempting simulations with octanol as the solvent. I have tried
topologies from the user-contributed GROMACS topologies (
http://www.gromacs.org/index.php?title=Download_%26_Installation/User_contributions/Molecule_topologies)
as well as from VirtualChemistry. However, since they are described as already
equilibrated, I simply tried to re-run the equilibration to verify it myself.
But when I did, the potential energy values were positive. Therefore, I wish
to ask whether you know of any resources on, or have an opinion about, how I
should equilibrate a non-water solvent?


It's not fundamentally any different (but see
http://www.gromacs.org/Documentation/How-tos/Non-Water_Solvation for some
clues). If the energies are positive, then either your methodology was
wrong (does it work for a water box? for DMSO or something?), or the model
is wrong (which is why I suggested the things I already suggested).
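
One hedged way to check whether an octanol box is equilibrated (file names
hypothetical) is to monitor the bulk properties over the run and compare the
plateau density with the experimental value for 1-octanol (roughly 820-830
kg/m^3 in the units g_energy reports):

  echo "Potential Density Temperature" | g_energy -f npt.edr -o equil.xvg

A potential energy that is still drifting, or a density far from experiment,
points to the run not being equilibrated or to a problem with the model, as
suggested above.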

Mark


Re: [gmx-users] Angle group

2014-07-24 Thread Cyrus Djahedi
I tried using g_angle with an index file defining the three atoms that form
the angle, e.g. Group12 (O1_C1_C4), which has 960 elements. I get:

Group 0 ( System) has 27396 elements
Group 1 (  Other) has  6768 elements
Group 2 (   GL4b) has   352 elements
Group 3 (   G14b) has  6048 elements
Group 4 (   GL1b) has   368 elements
Group 5 (  Water) has 20628 elements
Group 6 (SOL) has 20628 elements
Group 7 (  non-Water) has  6768 elements
Group 8 ( O1) has   320 elements
Group 9 ( O4) has    16 elements
Group10 ( C1) has   320 elements
Group11 ( C4) has   320 elements
Group12 (   O1_C1_C4) has   960 elements
Group13 (   C1_O1_C4) has   960 elements
Group14 (   C4_C1_O1) has   960 elements
Group15 (   C1_C4_O1) has   960 elements
Group16 (   O1_C1_C4) has   960 elements
Select a group: 12
Selected 12: 'O1_C1_C4'
Last frame  1 time 1.000   
Found points in the range from 5 to 43 (max 180)
  angle   = 23.1856
 angle^2  = 537.601
Std. Dev.   = 0.170041

I don't know exactly which angle it is referring to. The angle I'm looking for
is formed at the O1 atom, flanked by the carbon atoms, and is around 118-120
degrees. As you can see from the index options, I tried defining the triplets
in different orders; however, this made no difference. Any suggestions?


From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
[gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of
gromacs.org_gmx-users-requ...@maillist.sys.kth.se
[gromacs.org_gmx-users-requ...@maillist.sys.kth.se]
Sent: 23 July 2014 20:40
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: gromacs.org_gmx-users Digest, Vol 123, Issue 127



Today's Topics:

   1. Re: Diffusion coefficient of metal complex (Justin Lemkul)
   2. Re: Angle group (Justin Lemkul)
   3. Re: Error in system_inflate.gro coordinates does not  match
  (RINU KHATTRI)
   4. coulomb interactions with zero charge atoms (Sikandar Mashayak)
   5. Re: Error in system_inflate.gro coordinates does not match
  (Justin Lemkul)
   6. Lennard-Jones potential not matching with published data
  (ibrahim khalil)


--

Message: 1
Date: Wed, 23 Jul 2014 07:31:13 -0400
From: Justin Lemkul jalem...@vt.edu
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Diffusion coefficient of metal complex
Message-ID: 53cf9d01.60...@vt.edu
Content-Type: text/plain; charset=ISO-8859-1; format=flowed



On 7/23/14, 7:13 AM, Meena Singh wrote:
 Dear GROMACS users,

 I'm working on the diffusivity of a metal ion with its ligand in an organic
 phase.

 I want to calculate the diffusion coefficient of the metal-ligand complex as a
 group, but when I run the g_msd command the options only allow the
 diffusivity of individual molecules to be calculated.

 Can I calculate the diffusion coefficient of a specific complex from a box
 which contains metal ions and ligand molecules?

 Does anyone have a suggestion to help me with this problem?


Create an index group for whatever subset of atoms you like and use it for the
calculation.
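
A hedged sketch of that workflow (group and file names hypothetical): merge
the metal ion and ligand selections into one index group with make_ndx, then
pass it to g_msd:

  make_ndx -f topol.tpr -o complex.ndx
     (at the prompt:  "MN" | "LIG"   then  q)
  g_msd -f traj.xtc -s topol.tpr -n complex.ndx -o msd_complex.xvg

and pick the merged group when g_msd asks for a selection.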

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


--

Message: 2
Date: Wed, 23 Jul 2014 07:30:55 -0400
From: Justin Lemkul jalem...@vt.edu
To: gmx-us...@gromacs.org, vvcha...@gmail.com
Subject: Re: [gmx-users] Angle group
Message-ID: 53cf9cef.3070...@vt.edu
Content-Type: text/plain; charset=ISO-8859-1; format=flowed



On 7/23/14, 5:58 AM, Dr. Vitaly Chaban wrote:
 Use g_angle and your index file must contain triples of the involved atoms.

 If I remember correctly, this route provides a Gaussian-type
 probability distribution, i.e. not evolution vs. time.


The default behavior is to produce a distribution, but g_angle -ov -all will
produce individual time series of all the angles in the 

[gmx-users] Fw:Normal Mode Analysis

2014-07-24 Thread xy21hb

Dear all,


I wonder if there is anywhere I can find the details of the mdp files used for
normal mode analysis.
I understand from the manual that it needs the steepest descent, conjugate
gradient, l-bfgs, and nm integrator options consecutively,
but I am not sure about the other parameters set in these different stages.
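
For illustration, a hedged sketch of the stage-specific settings (all values
hypothetical; only the integrator and the convergence criteria really change
between stages, and the nm stage needs a very tightly minimized structure):

  ; minimization stages: re-run the same .mdp with integrator = steep, then cg, then l-bfgs
  integrator  = l-bfgs
  emtol       = 0.001      ; much tighter than for a routine minimization
  emstep      = 0.001
  nsteps      = 50000

  ; final stage: the normal-mode analysis itself
  integrator  = nm

The cut-off and electrostatics settings are typically kept identical across
the stages.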


Many thanks,


OAY 



Re: [gmx-users] Angle group

2014-07-24 Thread Justin Lemkul



On 7/24/14, 8:55 AM, Cyrus Djahedi wrote:

I tried using g_angle with an index file defining the three atoms that form
the angle, e.g. Group12 (O1_C1_C4), which has 960 elements. I get:

Group 0 ( System) has 27396 elements
Group 1 (  Other) has  6768 elements
Group 2 (   GL4b) has   352 elements
Group 3 (   G14b) has  6048 elements
Group 4 (   GL1b) has   368 elements
Group 5 (  Water) has 20628 elements
Group 6 (SOL) has 20628 elements
Group 7 (  non-Water) has  6768 elements
Group 8 ( O1) has   320 elements
Group 9 ( O4) has    16 elements
Group10 ( C1) has   320 elements
Group11 ( C4) has   320 elements
Group12 (   O1_C1_C4) has   960 elements
Group13 (   C1_O1_C4) has   960 elements
Group14 (   C4_C1_O1) has   960 elements
Group15 (   C1_C4_O1) has   960 elements
Group16 (   O1_C1_C4) has   960 elements
Select a group: 12
Selected 12: 'O1_C1_C4'
Last frame  1 time 1.000
Found points in the range from 5 to 43 (max 180)
   angle   = 23.1856
 angle^2  = 537.601
Std. Dev.   = 0.170041

I don't know exactly which angle it is referring to. The angle I'm looking for
is formed at the O1 atom, flanked by the carbon atoms, and is around 118-120
degrees. As you can see from the index options, I tried defining the triplets
in different orders; however, this made no difference. Any suggestions?



The values printed are an average over all triplets in the chosen index group. 
I would think that order would absolutely matter here; check carefully how you 
have created the groups.  The angle formed by C1-O1-C4 must be different than 
the angle of O1-C1-C4.
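
A hedged illustration of this (atom numbers entirely hypothetical): the index
group must list the atoms as consecutive, ordered triplets with the vertex
atom in the middle, e.g.

  [ C1_O1_C4 ]
    12   10   45    ; C1  O1  C4' of the first linkage
    36   34   69    ; C1  O1  C4' of the next linkage

  g_angle -f traj.xtc -n angles.ndx -type angle -od angdist.xvg -ov angaver.xvg

With O1 as the middle atom of every triplet, the average should come out near
the expected 118-120 degrees rather than ~23.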


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] continuation run segmentation fault

2014-07-24 Thread David de Sancho
Dear all
I am having some trouble continuing some runs with Gromacs 4.5.5 on our
local cluster. Surprisingly, the simulations previously ran smoothly on the
same number of nodes and cores for the same system. And even more
surprisingly, if I reduce the number of nodes to 1, with its 12 processors,
then it runs again.

And the script I am using to run the simulations looks something like this:

# Set some Torque options: class name and max time for the job. Torque
 developed from a program called
 # OpenPBS, hence all the PBS references in this file
 #PBS -l nodes=4:ppn=12,walltime=24:00:00

source /home/dd363/src/gromacs-4.5.5/bin/GMXRC.bash
 application=/home/user/src/gromacs-4.5.5/bin/mdrun_openmpi_intel
 options="-s data/tpr/filename.tpr -deffnm data/filename -cpi data/filename"

 #! change the working directory (default is home directory)
 cd $PBS_O_WORKDIR
 echo "Running on host `hostname`"
 echo "Time is `date`"
 echo "Directory is `pwd`"
 echo "PBS job ID is $PBS_JOBID"
 echo "This job runs on the following machines:"
 echo "`cat $PBS_NODEFILE | uniq`"
 #! Run the parallel MPI executable
 #!export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib64:/usr/lib64
 echo "Running mpiexec $application $options"
 mpiexec $application $options


And the error messages I am getting look something like this:

 [compute-0-11:09645] *** Process received signal ***
 [compute-0-11:09645] Signal: Segmentation fault (11)
 [compute-0-11:09645] Signal code: Address not mapped (1)
 [compute-0-11:09645] Failing at address: 0x10
 [compute-0-11:09643] *** Process received signal ***
 [compute-0-11:09643] Signal: Segmentation fault (11)
 [compute-0-11:09643] Signal code: Address not mapped (1)
 [compute-0-11:09643] Failing at address: 0xd0
 [compute-0-11:09645] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
 [compute-0-11:09645] [ 1]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2af2091443f9]
 [compute-0-11:09645] [ 2]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2af209142963]
 [compute-0-11:09645] [ 3]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
 [0x2af20996e33c]
 [compute-0-11:09645] [ 4]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
 [0x2af20572cfa7]
 [compute-0-11:09645] [ 5]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
 [0x2af205219636]
 [compute-0-11:09645] [ 6]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa2259b]
 [compute-0-11:09645] [ 7]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa2a04b]
 [compute-0-11:09645] [ 8]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa22da9]
 [compute-0-11:09645] [ 9]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
 [0x2af205204dcc]
 [compute-0-11:09645] [10]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
 [0x2af205236f0c]
 [compute-0-11:09645] [11]
 /home/dd363/src/gromacs-4.5.5/lib/libgmx_mpi.so.6(gmx_setup_nodecomm+0x14b)
 [0x2af204b8ba6b]
 [compute-0-11:09645] [12]
 /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(mdrunner+0x86c)
 [0x415aac]
 [compute-0-11:09645] [13]
 /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(main+0x1928)
 [0x41d968]
 [compute-0-11:09645] [14] /lib64/libc.so.6(__libc_start_main+0xf4)
 [0x38d281d994]
 [compute-0-11:09643] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
 [compute-0-11:09643] [ 1]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2b56aca403f9]
 [compute-0-11:09643] [ 2]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2b56aca3e963]
 [compute-0-11:09643] [ 3]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
 [0x2b56ad26a33c]
 [compute-0-11:09643] [ 4]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
 [0x2b56a9028fa7]
 [compute-0-11:09643] [ 5]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
 [0x2b56a8b15636]
 [compute-0-11:09643] [ 6]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae31e59b]
 [compute-0-11:09643] [ 7]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae32604b]
 [compute-0-11:09643] [ 8]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae31eda9]
 [compute-0-11:09643] [ 9]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
 [0x2b56a8b00dcc]
 [compute-0-11:09643] [10]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
 [0x2b56a8b32f0c]
 [compute-0-11:09643] [11]
 

Re: [gmx-users] Error in system_inflate.gro coordinates does not match

2014-07-24 Thread RINU KHATTRI
Hello everyone,
Thank you Justin, I did the same.
Up to minimization without the ligand, it is in the lipid and centered, but I
edited the box size arbitrarily: I used the x and y axes as present in the
POPC file, but used 10.0 for the z axis, so there is overlapping of protein
and lipid. I think this can create a problem.
Help?


On Wed, Jul 23, 2014 at 10:48 PM, Justin Lemkul jalem...@vt.edu wrote:



 On 7/23/14, 12:12 PM, RINU KHATTRI wrote:

 hello everyone

 thank you justin, but how can I increase the box size? I am using the box
 vectors present in popc_whole.gro.
 How can I edit them?


 editconf


  and one more problem: when I view it in VMD, my ligand is outside the
 protein.


 Position the protein-ligand complex like you want before packing the
 lipids around the protein, remove the ligand, then assemble the membrane
 protein system.  With strong restraints, the protein should not move, so
 you can just paste in the ligand coordinates afterwards.  Then adjust the
 box and solvate.
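
A hedged sketch of the last two steps with the 4.5-era tools (box vectors and
file names hypothetical):

  editconf -f system_assembled.gro -o system_box.gro -box 6.4 6.4 10.0
  genbox -cp system_box.gro -cs spc216.gro -o system_solv.gro -p topol.top

editconf with only -box should leave the coordinates untouched and change just
the box vectors; genbox then fills the enlarged box with water (in membrane
systems, water placed inside the bilayer usually has to be prevented or
removed afterwards).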


 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==
 --
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 123, Issue 124: reply to message 1

2014-07-24 Thread Guilherme Duarte Ramos Matos
Thanks for the reply!

I actually managed to solve this issue. I was building the supercell
with Mercury, the Cambridge Crystallographic Database software, but I
was not aware of the connectivity issues that appeared when I built the
crystal with fragments of molecules. It was solved easily with a
different option in the packing/slicing utility.

Thanks!

~ Guilherme

*
Guilherme D. R. Matos
Graduate Student at UC Irvine
Mobley Group

*


On Wed, Jul 23, 2014 at 2:48 AM,
gromacs.org_gmx-users-requ...@maillist.sys.kth.se wrote:



 Today's Topics:

1. Re: Molecular Solid PBC problem (Justin Lemkul)
2. about cos-accelation (Hyunjin Kim)
3. g_energy questions (Andy Chao)
4. Re: Error in system_inflate.gro coordinates does not  match
   (RINU KHATTRI)
5. Angle group (Cyrus Djahedi)
6. Re: about cos-accelation (Dr. Vitaly Chaban)


 --

 Message: 1
 Date: Tue, 22 Jul 2014 20:15:23 -0400
 From: Justin Lemkul jalem...@vt.edu
 To: gmx-us...@gromacs.org
 Subject: Re: [gmx-users] Molecular Solid PBC problem
 Message-ID: 53cefe9b.4010...@vt.edu
 Content-Type: text/plain; charset=ISO-8859-1; format=flowed



 On 7/22/14, 7:53 PM, Guilherme Duarte Ramos Matos wrote:
  Dear GROMACS user community,
 
  I'm working with molecular dynamics of molecular solids and I am having
  trouble setting up the calculations.
 
  I got the crystal structure's pdb file from the Cambridge Database and used
  editconf to generate the coordinate file. The topology file is really
  simple: it just carries the hamiltonian of an Einstein crystal, that is,
  harmonic potentials binding each atom of the molecule to its lattice
  position. The relevant part of the mdp file is:
 
  ; NEIGHBORSEARCHING PARAMETERS
  ; nblist update frequency
  nstlist  = 1
  ; ns algorithm (simple or grid)
  ns_type  = grid
  ; Periodic boundary conditions: xyz (default), no (vacuum)
  ; or full (infinite systems only)
  pbc  = xyz
  ; nblist cut-off
  rlist= 1.0
 
  Unfortunately, after running grompp, I get the following warning:
 
WARNING 1 [file molecule_ideal.top, line 351]:
 10116 non-matching atom names
 atom names from molecule_ideal.top will be used
 atom names from input.gro will be ignored
 
  The funny and worrying part of this problem is that all the atom types were
  changed in the output of mdrun. The simulation just didn't crash because of

 As it should; grompp warned you that a huge number of atoms were out of order
 with respect to the topology, so the topology is used, and the identity and/or
 types of the atoms are changed accordingly.

  the hamiltonian used. I investigated a little bit and it seemed that
  GROMACS was not able to connect the fragments in the wall to their
  neighboring periodic copies. That happened because fragments were numbered
  as distinct molecules. Check this small portion of the coordinate file:
 

 How did you generate the original topology?  The mismatch between coordinates
 and topology could also be causing issues with bonded geometry, because
 everything is likely to get scrambled.

  35RES C1  211   0.017   5.561   4.241
  35RES N1  212   0.033   5.362   4.363
  35RES O1  213   0.145   5.367   4.163
  35RES C2  214   0.074   5.421   4.245
  35RES H1  215   0.057   5.283   4.386
  35RES H3  216   0.087   5.628   4.238
  36RES C1  217   0.017   5.561   5.526
  36RES N1  218   0.033   5.362   5.648
  36RES O1  219   0.145   5.367   5.448
  36RES C2  220   0.074   5.421   5.530
  36RES H1  221   0.057   5.283   5.671
  36RES H3  222   0.087   5.628   5.523
  37RES C1  223   0.017   5.561   6.811
  37RES N1  224   0.033   5.362   6.933
  37RES O1  225   0.145   5.367   6.733
  37RES C2  226   0.074   5.421   6.815
  37RES H1  227   0.057   5.283   6.956
  37RES H3  228   0.087   5.628   6.808
  38RES C1  229   0.753   0.786   1.671
  38RES N1  230   0.770   0.587   1.793
  38RES O1  231   0.882   0.592   1.593
  38RES C2  232   0.811   0.646   1.675
  38RES O2  233   

Re: [gmx-users] Issues using tabulated potentials for coarse-grained simulation

2014-07-24 Thread Brian Yoo
Thanks for the response.

I have looked at two particles in NVT with v-rescale/large box and there
was nothing wrong with the intra/inter molecular interaction energies. I
also switched rvdw-switch to rvdw without observing any difference.

I ran into a similar issue when using this force field in a system
containing a mixture. I was able to resolve the issue again, though this
time with annealing and varying the temperature coupling groups.

Perhaps this only happens when there are more degrees of freedom and a
greater likelihood for metastability (?).

I will just be a bit more cautious from this point on when running these CG
simulations.

Thanks again,
Brian
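
For the two-particle check Mark suggested, a quick hand value of the 9-6 Mie
energy at a chosen separation can be computed and compared with the pair
energy GROMACS reports; a sketch with awk (epsilon, sigma and r hypothetical,
and assuming the standard (9,6) Mie prefactor of 27/4, which makes the well
depth equal to epsilon):

  awk -v eps=1.0 -v sig=0.4 -v r=0.45 \
      'BEGIN { c = 27.0/4.0; sr = sig/r; print c*eps*(sr^9 - sr^6) }'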



On Wed, Jul 23, 2014 at 4:09 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 Hi,

 All sounds very weird. I would be suspicious of the fact that you haven't
 set rvdw. I have no idea what rvdw-switch might do in this context, but I
 definitely think you should verify that systems with just two particles
 have the interaction strength you can compute manually.

 Mark


 On Mon, Jul 21, 2014 at 8:57 PM, Brian Yoo brian.s.yoo...@nd.edu wrote:

  Dear gmx-users,
 
  I am running into an issue regarding the use of tabulated potentials for
  coarse-grained simulations. My system consists of 256 ion pairs (ionic
  liquid) and the simulation is run in the NPT ensemble.
 
  When I use a temperature coupling on the system as a whole, my system
 ends
  up freezing such that the ions vibrate in position. The temperatures and
  pressures are correct, but the density is much higher than what it should
  be. Also, the system is stable and the simulation runs indefinitely.
  However, if I set temperature coupling of anions and cations separately,
 my
  molecules no longer freeze and I obtain the targeted properties almost
  exactly.
 
  This occurrence is insensitive to varying tau-t's (0.5 to 5.0) or
  thermostat (Berendsen or Nose-Hoover), and annealing. It also occurs for
  other systems using a similar ionic liquid force field, although I was
 able
  to resolve the issue again by changing the temperature coupling to anions
  and cations separately.
 
  I have not run into this type of issue for all atom simulations of ionic
  liquids.
 
  The force field is based on a 9-6 Mie cutoff potential and PME long-range
  electrostatics.
 
  Has anyone run into a similar issue using tabulated potentials?
 
  Thank you,
 
  Brian Yoo
 
 
  The mdp parameters are as follows:
 
  integrator  = md
  dt  = 0.004
  nsteps  = 500
  comm-mode   = linear
  nstcomm = 1
 
  ; Output control
  nstxout = 5000
  nstvout = 5000
  nstlog  = 5000
  nstenergy   = 5000
  nstxtcout   = 5000
 
  ; Neighbor searching
  nstlist = 10
  ns_type = grid
  pbc = xyz
  rlist   = 1.5
 
  ;Electrostatics
  coulombtype = PME
  rcoulomb= 1.5
  fourierspacing  = 0.10
  optimize_fft= yes
 
  ; VdW
  vdwtype = user
  rvdw-switch  = 1.5
 
  ; Temperature coupling
  tcoupl  = Berendsen
  tc-grps = C4M PF; System
  tau_t   = 0.5 0.5
  ref_t   = 300 300
 
  ; Pressure coupling
  pcoupl  = Berendsen
  pcoupltype  = isotropic
  ref_p   = 1.0
  tau_p   = 3.0
  compressibility = 4.5e-5
 
  ; Velocity generation
  gen_vel = yes
  continuation= no
  --
  Gromacs Users mailing list
 
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
  posting!
 
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
  * For (un)subscribe requests visit
  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
  send a mail to gmx-users-requ...@gromacs.org.
 
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



[gmx-users] Gromacs performance on virtual servers

2014-07-24 Thread Elton Carvalho
Dear Gromacs Users,

My former university is focusing on cloud computing instead of
physical servers, so research groups are now expected to buy virtual
servers from the university cloud instead of buying their own
clusters.

The current setup employs Xeon E7-2870 servers, and there is a
university-wide virtual cluster with 50 virtual servers, each with 10
CPUs.

Does anyone here have information on gromacs performance on this kind
of infrastructure? Should I expect big issues?

One thing that comes to mind is that the CPUs may not necessarily be
in the same physical server, rack, or even datacenter (their plan is
to decentralize the colocation), so network latency may be higher than
the traditional setup, which may affect scaling. Does this argument
make sense or am I missing something on cloud management 101?

Cheers.
-- 
Elton Carvalho
Departamento de Física
Universidade Federal do Paraná


[gmx-users] how to compile and make a C analyzing code under GROMACS 5.0

2014-07-24 Thread qiaobf

Dear all,

Please help me out. Thanks in advance.

I have written some analysis codes, which could easily be compiled under
Gromacs 4.5.5. Since I switched to GMX 5.0 a few weeks ago, I want to
re-compile them under GMX 5.0, but GMX 5.0 seems quite different from
GMX 4.5.5. Can anyone help me? Thanks a lot!


I have tried the following methods:
1)  (a) save gmx_density2.c under
$HOME/programmes/backup/gromacs-5.0/src/gromacs/gmxana, which is the
folder holding all the gmx_XXX.c analysis codes in the distribution
tree;
 (b) re-install the whole GMX 5.0 package (cmake, then make, then make
install). I got no error message. All the regular analysis programs are
correctly installed, but not gmx_density2!

2) (a) run source $HOME/programmes/GROMACS-5.0/bin/GMXRC;
(b) save the gmx_density2.c under 
$HOME/programmes/GROMACS-5.0/share/gromacs/template, which is under the 
executable folder;
(c) modify the content of CMakeLists.txt to change "template" to
"gmx_density2", and "template.cpp" to "gmx_density2.c";
(d) run cmake .. No error message, and the Makefile and the 
folder CMakeFiles are created;

(e) run make. Then I get the error message

gmx_density2.c:42:22: fatal error: sysstuff.h: No such file or directory
 #include "sysstuff.h"
          ^
compilation terminated.
make[2]: *** [CMakeFiles/gmx_density2.dir/gmx_density2.c.o] Error 1
make[1]: *** [CMakeFiles/gmx_density2.dir/all] Error 2
make: *** [all] Error 2


best wishes,
Baofu


Re: [gmx-users] Issues using tabulated potentials for coarse-grained simulation

2014-07-24 Thread Mark Abraham
Sounds to me like the model is broken, if you can only observe sensible
behaviour when heat is allowed to flow between cations and anions via the
thermal reservoir. Does NVE work?
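
A minimal NVE continuation sketch for such a test (the remaining settings
taken from the existing .mdp, so everything here is hypothetical):

  integrator   = md
  tcoupl       = no
  pcoupl       = no
  gen_vel      = no
  continuation = yes

If the total energy drifts badly or the system still collapses without any
thermostat, that points at the tabulated model rather than at the coupling
scheme.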

Mark


On Thu, Jul 24, 2014 at 10:27 PM, Brian Yoo brian.s.yoo...@nd.edu wrote:

 Thanks for the response.

 I have looked at two particles in NVT with v-rescale/large box and there
 was nothing wrong with the intra/inter molecular interaction energies. I
 also switched rvdw-switch to rvdw without observing any difference.

 I ran into a similar issue when using this force field in a system
 containing a mixture. I was able to resolve the issue again, though this
 time with annealing and varying the temperature coupling groups.

 Perhaps this only happens when there are more degrees of freedom and a
 greater likelihood for metastability (?).

 I will just be a bit more cautious from this point on when running these CG
 simulations.

 Thanks again,
 Brian



 On Wed, Jul 23, 2014 at 4:09 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

  Hi,
 
  All sounds very weird. I would be suspicious of the fact that you haven't
  set rvdw. I have no idea what rvdw-switch might do in this context, but I
  definitely think you should verify that systems with just two particles
  have the interaction strength you can compute manually.
 
  Mark
 
 
  On Mon, Jul 21, 2014 at 8:57 PM, Brian Yoo brian.s.yoo...@nd.edu
 wrote:
 
   Dear gmx-users,
  
   I am running into an issue regarding the use of tabulated potentials
 for
   coarse-grained simulations. My system consists of 256 ion pairs (ionic
   liquid) and the simulation is run in the NPT ensemble.
  
   When I use a temperature coupling on the system as a whole, my system
  ends
   up freezing such that the ions vibrate in position. The temperatures
 and
   pressures are correct, but the density is much higher than what it
 should
   be. Also, the system is stable and the simulation runs indefinitely.
   However, if I set temperature coupling of anions and cations
 separately,
  my
   molecules no longer freeze and I obtain the targeted properties almost
   exactly.
  
   This occurrence is insensitive to varying tau-t's (0.5 to 5.0) or
   thermostat (Berendsen or Nose-Hoover), and annealing. It also occurs
 for
   other systems using a similar ionic liquid force field, although I was
  able
   to resolve the issue again by changing the temperature coupling to
 anions
   and cations separately.
  
   I have not run into this type of issue for all atom simulations of
 ionic
   liquids.
  
   The force field is based on a 9-6 Mie cutoff potential and PME
 long-range
   electrostatics.
  
   Has anyone run into a similar issue using tabulated potentials?
  
   Thank you,
  
   Brian Yoo
  
  
   The mdp parameters are as follows:
  
   integrator  = md
   dt  = 0.004
   nsteps  = 500
   comm-mode   = linear
   nstcomm = 1
  
   ; Output control
   nstxout = 5000
   nstvout = 5000
   nstlog  = 5000
   nstenergy   = 5000
   nstxtcout   = 5000
  
   ; Neighbor searching
   nstlist = 10
   ns_type = grid
   pbc = xyz
   rlist   = 1.5
  
   ;Electrostatics
   coulombtype = PME
   rcoulomb= 1.5
   fourierspacing  = 0.10
   optimize_fft= yes
  
   ; VdW
   vdwtype = user
   rvdw-switch  = 1.5
  
   ; Temperature coupling
   tcoupl  = Berendsen
   tc-grps = C4M PF; System
   tau_t   = 0.5 0.5
   ref_t   = 300 300
  
   ; Pressure coupling
   pcoupl  = Berendsen
   pcoupltype  = isotropic
   ref_p   = 1.0
   tau_p   = 3.0
   compressibility = 4.5e-5
  
   ; Velocity generation
   gen_vel = yes
   continuation= no
   --
   Gromacs Users mailing list
  
   * Please search the archive at
   http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
   posting!
  
   * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
  
   * For (un)subscribe requests visit
   https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
   send a mail to gmx-users-requ...@gromacs.org.
  
  --
  Gromacs Users mailing list
 
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
  posting!
 
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
  * For (un)subscribe requests visit
  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
  send a mail to gmx-users-requ...@gromacs.org.
 
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.


[gmx-users] time accounting in log file with GPU

2014-07-24 Thread Sikandar Mashayak
Hi

I am running a benchmark test with the GPU. The system consists of simple
LJ atoms, and I am running only a very basic simulation in the NVE ensemble,
without writing any trajectories or energy values. My grompp.mdp file is
attached below.

However, in the time accounting table in md.log, I observe that the "Write
traj." and "Comm. energies" operations each take 40% of the time. So my
question is: if I have specified not to write trajectories and energies, why
is 80% of the time being spent on those operations?

Thanks,
Sikandar

 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

On 2 MPI ranks

 Computing:  Num   Num  CallWall time Giga-Cycles
 Ranks Threads  Count  (s) total sum%
-
 Domain decomp. 21 11   0.006  0.030   2.1
 DD comm. load  21  2   0.000  0.000   0.0
 Neighbor search21 11   0.007  0.039   2.7
 Launch GPU ops.21202   0.007  0.036   2.5
 Comm. coord.   21 90   0.002  0.013   0.9
 Force  21101   0.001  0.003   0.2
 Wait + Comm. F 21101   0.004  0.020   1.4
 Wait GPU nonlocal  21101   0.004  0.020   1.4
 Wait GPU local 21101   0.000  0.002   0.2
 NB X/F buffer ops. 21382   0.001  0.008   0.6
 Write traj.21  1   0.108  0.586  40.2
 Update 21101   0.005  0.025   1.7
 Comm. energies 21 22   0.108  0.588  40.3
 Rest   0.016  0.087   5.9
-
 Total  0.269  1.459 100.0
-


grompp.mdp file:

integrator   = md-vv
dt   = 0.001
nsteps   = 100
nstlog   = 0
nstcalcenergy= 0
cutoff-scheme= verlet
ns_type  = grid
nstlist  = 10
pbc  = xyz
rlist= 0.7925
vdwtype  = Cut-off
rvdw = 0.7925
rcoulomb = 0.7925
gen_vel  = yes
gen_temp = 296.0


Re: [gmx-users] how to compile and make a C analyzing code under GROMACS 5.0

2014-07-24 Thread Mark Abraham
On Thu, Jul 24, 2014 at 11:47 PM, qiaobf qia...@gmail.com wrote:

 Dear all,

 Please help me out. Thanks in advance.

 I had written some analyzing codes, which can be easily compiled under
 Gromacs 4.5.5. Since I switched to GMX 5.0 weeks ago, I want to re-compile
 them under GMX5.0. But the GMX5.0 seems quite different from GMX4.5.5.
 Anyone can help me? Thanks a lot!

 I have tried the following methods:
 1)  (a) save the gmx_density2.c under $HOME/programmes/backup/
 gromacs-5.0/src/gromacs/gmxana, which is the folder to save all the
 gmx_XXX.c analyzing codes under the distribution folder;
  (b) re-install the whole package of GMX5.0 (cmake--make---make
 install). I got no error message. All regular analyzing programs are
 correctly installed, but not the gmx_density2!


That's not too surprising. You had to do more than dump a file into
src/tools to get gmx_density2 to build in 4.5.5 ;-) You can probably do
something like the above if you register your module with the new gmx
binary - see
http://jenkins.gromacs.org/job/Doxygen_Gerrit_5_0/javadoc/html-lib/page_wrapperbinary.xhtml


 2) (a) run source $HOME/programmes/GROMACS-5.0/bin/GMXRC;
 (b) save the gmx_density2.c under 
 $HOME/programmes/GROMACS-5.0/share/gromacs/template,
 which is under the executable folder;
 (c) modify the content of CMakeList.txt to change template to
 gmx_density2, and template.cpp to gmx_density2.c;
 (d) run cmake .. No error message, and the Makefile and the folder
 CMakeFiles are created;
 (e) run make. Then I get the error message

 gmx_density2.c:42:22: fatal error: sysstuff.h: No such file or directory
  #include "sysstuff.h"
           ^
 compilation terminated.
 make[2]: *** [CMakeFiles/gmx_density2.dir/gmx_density2.c.o] Error 1
 make[1]: *** [CMakeFiles/gmx_density2.dir/all] Error 2
 make: *** [all] Error 2


Things change. You'll need to comment out that #include, see what breaks,
and work out how to include the right header to get the right symbols
defined.

Mark


 best wishes,
 Baofu
 --
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/
 Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] continuation run segmentation fault

2014-07-24 Thread Szilárd Páll
Hi,

There is a certain version of MPI that caused a lot of headaches until
we realized that it is buggy. I'm not entirely sure which version it
was, but I suspect it was the 1.4.3 shipped as the default on Ubuntu 12.04
server.

I suggest that you try:
- using a different MPI version;
- using a single rank/no MPI to continue;
- using thread-MPI to continue;
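
A hedged sketch of the last two options (paths and file names taken from the
script below, single node): run the thread-MPI or serial build directly, e.g.

  mdrun -nt 12 -s data/tpr/filename.tpr -deffnm data/filename -cpi data/filename

where -nt 12 starts 12 thread-MPI ranks on one node with no MPI library
involved.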

Cheers,
--
Szilárd


On Thu, Jul 24, 2014 at 5:29 PM, David de Sancho
daviddesan...@gmail.com wrote:
 Dear all
 I am having some trouble continuing some runs with Gromacs 4.5.5 in our
 local cluster. Surprisingly, the simulations run smoothly in the same
 number of nodes and cores before in the same system. And even more
 surprisingly if I reduce the number of nodes to 1 with its 12 processors,
 then it runs again.

 And the script I am using to run the simulations looks something like this:

 # Set some Torque options: class name and max time for the job. Torque
 developed from a program called
 # OpenPBS, hence all the PBS references in this file
 #PBS -l nodes=4:ppn=12,walltime=24:00:00

 source /home/dd363/src/gromacs-4.5.5/bin/GMXRC.bash
 application=/home/user/src/gromacs-4.5.5/bin/mdrun_openmpi_intel
 options="-s data/tpr/filename.tpr -deffnm data/filename -cpi data/filename"

 #! change the working directory (default is home directory)
 cd $PBS_O_WORKDIR
 echo Running on host `hostname`
 echo Time is `date`
 echo Directory is `pwd`
 echo PBS job ID is $PBS_JOBID
 echo This jobs runs on the following machines:
 echo `cat $PBS_NODEFILE | uniq`
 #! Run the parallel MPI executable
 #!export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib64:/usr/lib64
 echo Running mpiexec $application $options
 mpiexec $application $options


 And the error messages I am getting look something like this

 [compute-0-11:09645] *** Process received signal ***
 [compute-0-11:09645] Signal: Segmentation fault (11)
 [compute-0-11:09645] Signal code: Address not mapped (1)
 [compute-0-11:09645] Failing at address: 0x10
 [compute-0-11:09643] *** Process received signal ***
 [compute-0-11:09643] Signal: Segmentation fault (11)
 [compute-0-11:09643] Signal code: Address not mapped (1)
 [compute-0-11:09643] Failing at address: 0xd0
 [compute-0-11:09645] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
 [compute-0-11:09645] [ 1]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2af2091443f9]
 [compute-0-11:09645] [ 2]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2af209142963]
 [compute-0-11:09645] [ 3]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
 [0x2af20996e33c]
 [compute-0-11:09645] [ 4]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
 [0x2af20572cfa7]
 [compute-0-11:09645] [ 5]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
 [0x2af205219636]
 [compute-0-11:09645] [ 6]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa2259b]
 [compute-0-11:09645] [ 7]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa2a04b]
 [compute-0-11:09645] [ 8]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2af20aa22da9]
 [compute-0-11:09645] [ 9]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(ompi_comm_split+0xcc)
 [0x2af205204dcc]
 [compute-0-11:09645] [10]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0(MPI_Comm_split+0x3c)
 [0x2af205236f0c]
 [compute-0-11:09645] [11]
 /home/dd363/src/gromacs-4.5.5/lib/libgmx_mpi.so.6(gmx_setup_nodecomm+0x14b)
 [0x2af204b8ba6b]
 [compute-0-11:09645] [12]
 /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(mdrunner+0x86c)
 [0x415aac]
 [compute-0-11:09645] [13]
 /home/dd363/src/gromacs-4.5.5/bin/mdrun_openmpi_intel(main+0x1928)
 [0x41d968]
 [compute-0-11:09645] [14] /lib64/libc.so.6(__libc_start_main+0xf4)
 [0x38d281d994]
 [compute-0-11:09643] [ 0] /lib64/libpthread.so.0 [0x38d300e7c0]
 [compute-0-11:09643] [ 1]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2b56aca403f9]
 [compute-0-11:09643] [ 2]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_pml_ob1.so
 [0x2b56aca3e963]
 [compute-0-11:09643] [ 3]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_btl_sm.so
 [0x2b56ad26a33c]
 [compute-0-11:09643] [ 4]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libopen-pal.so.0(opal_progress+0x87)
 [0x2b56a9028fa7]
 [compute-0-11:09643] [ 5]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/libmpi.so.0
 [0x2b56a8b15636]
 [compute-0-11:09643] [ 6]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae31e59b]
 [compute-0-11:09643] [ 7]
 /usr/local/shared/redhat-5.4/x86_64/openmpi-1.4.3-intel/lib/openmpi/mca_coll_tuned.so
 [0x2b56ae32604b]
 [compute-0-11:09643] [ 8]

Re: [gmx-users] time accounting in log file with GPU

2014-07-24 Thread Mark Abraham
On Fri, Jul 25, 2014 at 12:12 AM, Sikandar Mashayak symasha...@gmail.com
wrote:

 Hi

 I am running a benchmark test with the GPU. The system consists of simple
 LJ atoms.
 And I am running only very basic simulation with NVE ensemble and not
 writing any
 trajectories or energy values. My grompp.mdp file is attached below.

 However, in the time accounting table in the md.log, I observe that write
 traj. and comm energies
 operations take 40% of time each. So, my question is that even if I have
 specified not to write
 trajectories and energies, why is 80% of time being spent on those
 operations?


Because you're writing a checkpoint file (hint, use mdrun -noconfout), and
that load is imbalanced so the other cores wait for it in the global
communication stage in Comm. energies (fairly clear, since they have the
same Wall time). Hint - make benchmarks run for about a minute, so you
are not dominated by setup and load-balancing time. Your compute time was
about 1/20 of a second...
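
A hedged example of such a benchmark invocation (file name and step count
hypothetical), long enough that the one-off costs are amortized:

  mdrun -s bench.tpr -noconfout -nsteps 50000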

Mark


 Thanks,
 Sikandar

  R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 On 2 MPI ranks

  Computing:  Num   Num  CallWall time Giga-Cycles
  Ranks Threads  Count  (s) total sum%

 -
  Domain decomp. 21 11   0.006  0.030   2.1
  DD comm. load  21  2   0.000  0.000   0.0
  Neighbor search21 11   0.007  0.039   2.7
  Launch GPU ops.21202   0.007  0.036   2.5
  Comm. coord.   21 90   0.002  0.013   0.9
  Force  21101   0.001  0.003   0.2
  Wait + Comm. F 21101   0.004  0.020   1.4
  Wait GPU nonlocal  21101   0.004  0.020   1.4
  Wait GPU local 21101   0.000  0.002   0.2
  NB X/F buffer ops. 21382   0.001  0.008   0.6
  Write traj.21  1   0.108  0.586  40.2
  Update 21101   0.005  0.025   1.7
  Comm. energies 21 22   0.108  0.588  40.3
  Rest   0.016  0.087   5.9

 -
  Total  0.269  1.459 100.0

 -


 grompp.mdp file:

 integrator   = md-vv
 dt   = 0.001
 nsteps   = 100
 nstlog   = 0
 nstcalcenergy= 0
 cutoff-scheme= verlet
 ns_type  = grid
 nstlist  = 10
 pbc  = xyz
 rlist= 0.7925
 vdwtype  = Cut-off
 rvdw = 0.7925
 rcoulomb = 0.7925
 gen_vel  = yes
 gen_temp = 296.0
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] Gromacs performance on virtual servers

2014-07-24 Thread Mark Abraham
Hi,

Except for huge simulation systems, GROMACS performance past a single node
is dominated by network latency, so unless you can extract a promise that
any multi-node runs will have Infiniband-quality latency (because the nodes
are physically in the same room, and on Infiniband) you can forget about
doing multi-node MD on such a system.

Mark


On Thu, Jul 24, 2014 at 10:54 PM, Elton Carvalho elto...@if.usp.br wrote:

 Dear Gromacs Users,

  My former university is focusing on cloud computing instead of
  physical servers, so research groups are now expected to buy virtual
  servers from the university cloud instead of buying their own
  clusters.

  The current setup employs Xeon E7-2870 servers, and there is a
  university-wide virtual cluster with 50 virtual servers, each with 10
  CPUs.

 Does anyone here have information on gromacs performance on this kind
 of infrastructure? Should I expect big issues?

 One thing that comes to mind is that the CPUs may not necessarily be
 in the same physical server, rack, or even datacenter (their plan is
 to decentralize the colocation), so network latency may be higher than
 the traditional setup, which may affect scaling. Does this argument
 make sense or am I missing something on cloud management 101?

 Cheers.
 --
 Elton Carvalho
 Departamento de Física
 Universidade Federal do Paraná
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] time accounting in log file with GPU

2014-07-24 Thread Sikandar Mashayak
Thanks Mark. -noconfout option helps.

--
Sikandar


On Thu, Jul 24, 2014 at 3:25 PM, Mark Abraham mark.j.abra...@gmail.com
wrote:

 On Fri, Jul 25, 2014 at 12:12 AM, Sikandar Mashayak symasha...@gmail.com
 wrote:

  Hi
 
  I am running a benchmark test with the GPU. The system consists of simple
  LJ atoms.
  And I am running only very basic simulation with NVE ensemble and not
  writing any
  trajectories or energy values. My grompp.mdp file is attached below.
 
  However, in the time accounting table in the md.log, I observe that write
  traj. and comm energies
  operations take 40% of time each. So, my question is that even if I have
  specified not to write
  trajectories and energies, why is 80% of time being spent on those
  operations?
 

 Because you're writing a checkpoint file (hint, use mdrun -noconfout), and
 that load is imbalanced so the other cores wait for it in the global
 communication stage in Comm. energies (fairly clear, since they have the
 same Wall time). Hint - make benchmarks run for about a minute, so you
 are not dominated by setup and load-balancing time. Your compute time was
 about 1/20 of a second...

 Mark


  Thanks,
  Sikandar
 
   R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
 
  On 2 MPI ranks
 
   Computing:  Num   Num  CallWall time Giga-Cycles
   Ranks Threads  Count  (s) total sum%
 
 
 -
   Domain decomp. 21 11   0.006  0.030
 2.1
   DD comm. load  21  2   0.000  0.000
 0.0
   Neighbor search21 11   0.007  0.039
 2.7
   Launch GPU ops.21202   0.007  0.036
 2.5
   Comm. coord.   21 90   0.002  0.013
 0.9
   Force  21101   0.001  0.003
 0.2
   Wait + Comm. F 21101   0.004  0.020
 1.4
   Wait GPU nonlocal  21101   0.004  0.020
 1.4
   Wait GPU local 21101   0.000  0.002
 0.2
   NB X/F buffer ops. 21382   0.001  0.008
 0.6
   Write traj.21  1   0.108  0.586
  40.2
   Update 21101   0.005  0.025
 1.7
   Comm. energies 21 22   0.108  0.588
  40.3
   Rest   0.016  0.087
 5.9
 
 
 -
   Total  0.269  1.459
 100.0
 
 
 -
 
 
  grompp.mdp file:
 
  integrator   = md-vv
  dt   = 0.001
  nsteps   = 100
  nstlog   = 0
  nstcalcenergy= 0
  cutoff-scheme= verlet
  ns_type  = grid
  nstlist  = 10
  pbc  = xyz
  rlist= 0.7925
  vdwtype  = Cut-off
  rvdw = 0.7925
  rcoulomb = 0.7925
  gen_vel  = yes
  gen_temp = 296.0
  --
  Gromacs Users mailing list
 
  * Please search the archive at
  http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
  posting!
 
  * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
 
  * For (un)subscribe requests visit
  https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
  send a mail to gmx-users-requ...@gromacs.org.
 
 --
 Gromacs Users mailing list

 * Please search the archive at
 http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
 posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
 https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
 send a mail to gmx-users-requ...@gromacs.org.



Re: [gmx-users] time accounting in log file with GPU

2014-07-24 Thread Szilárd Páll
On Fri, Jul 25, 2014 at 12:48 AM, Sikandar Mashayak
symasha...@gmail.com wrote:
 Thanks Mark. -noconfout option helps.

For benchmarking purposes, in addition to -noconfout I suggest also using:
* -resethway or -resetstep: to exclude initialization and
load-balancing at the beginning of the run to get a more realistic
performance measurement from a short run
* -nsteps N or -maxh: the former is useful if you want to directly
compare (e.g. two-sided diff) the timings from the end of the log
between multiple runs
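
Putting those together, a hedged benchmark command line (file name and step
count hypothetical) might be

  mdrun -s bench.tpr -noconfout -resethway -nsteps 50000 -g bench.log

so that the counters only cover the second half of the run and the logs of
different runs can be compared directly.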

Cheers,
--
Szilárd


 --
 Sikandar


 On Thu, Jul 24, 2014 at 3:25 PM, Mark Abraham mark.j.abra...@gmail.com
 wrote:

 On Fri, Jul 25, 2014 at 12:12 AM, Sikandar Mashayak symasha...@gmail.com
 wrote:

  Hi
 
  I am running a benchmark test with the GPU. The system consists of simple
  LJ atoms.
  And I am running only very basic simulation with NVE ensemble and not
  writing any
  trajectories or energy values. My grompp.mdp file is attached below.
 
  However, in the time accounting table in the md.log, I observe that write
  traj. and comm energies
  operations take 40% of time each. So, my question is that even if I have
  specified not to write
  trajectories and energies, why is 80% of time being spent on those
  operations?
 

 Because you're writing a checkpoint file (hint, use mdrun -noconfout), and
 that load is imbalanced so the other cores wait for it in the global
 communication stage in Comm. energies (fairly clear, since they have the
 same Wall time). Hint - make benchmarks run for about a minute, so you
 are not dominated by setup and load-balancing time. Your compute time was
 about 1/20 of a second...

 Mark


  Thanks,
  Sikandar
 
   R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

   On 2 MPI ranks

   Computing:            Ranks  Threads   Count   Wall time (s)   Giga-Cycles      %
   ---------------------------------------------------------------------------------
   Domain decomp.            2        1      11           0.006         0.030    2.1
   DD comm. load             2        1       2           0.000         0.000    0.0
   Neighbor search           2        1      11           0.007         0.039    2.7
   Launch GPU ops.           2        1     202           0.007         0.036    2.5
   Comm. coord.              2        1      90           0.002         0.013    0.9
   Force                     2        1     101           0.001         0.003    0.2
   Wait + Comm. F            2        1     101           0.004         0.020    1.4
   Wait GPU nonlocal         2        1     101           0.004         0.020    1.4
   Wait GPU local            2        1     101           0.000         0.002    0.2
   NB X/F buffer ops.        2        1     382           0.001         0.008    0.6
   Write traj.               2        1       1           0.108         0.586   40.2
   Update                    2        1     101           0.005         0.025    1.7
   Comm. energies            2        1      22           0.108         0.588   40.3
   Rest                                                   0.016         0.087    5.9
   ---------------------------------------------------------------------------------
   Total                                                  0.269         1.459  100.0
   ---------------------------------------------------------------------------------
 
 
  grompp.mdp file:
 
  integrator   = md-vv
  dt   = 0.001
  nsteps   = 100
  nstlog   = 0
  nstcalcenergy= 0
  cutoff-scheme= verlet
  ns_type  = grid
  nstlist  = 10
  pbc  = xyz
  rlist= 0.7925
  vdwtype  = Cut-off
  rvdw = 0.7925
  rcoulomb = 0.7925
  gen_vel  = yes
  gen_temp = 296.0


Re: [gmx-users] Gromacs performance on virtual servers

2014-07-24 Thread Szilárd Páll
On Fri, Jul 25, 2014 at 1:51 AM, Szilárd Páll pall.szil...@gmail.com wrote:
 Hi

 In general, virtualization will always have an overhead, but if done
 well, the performance should be close to that of bare metal. However,
 for GROMACS the ideal scenario is exclusive host access (including the
 hypervisor) and thread affinities, both of which depend on the
 hypervisor configuration. Hence, if you can, you should try to get
 access to virtual hosts that fully utilize a compute node and do not
 share it with others.

 On Fri, Jul 25, 2014 at 12:31 AM, Mark Abraham mark.j.abra...@gmail.com 
 wrote:
 Hi,

 Except for huge simulation systems, GROMACS performance past a single node
 is dominated by network latency, so unless you can extract a promise that
 any multi-node runs will have Infiniband-quality latency (because the nodes
 are physically in the same room, and on Infiniband) you can forget about
 doing multi-node MD on such a system.

 Two remarks:

 * With a slow network the only parallelization you can potentially

*inter-node parallelization

 make use of is multi-sim, unless your environment is so cloud-y that
 some nodes can have tens to hundreds of ms latency, which can kill even
 your multi-sim performance (depending on how fast each simulation is
 and how often they sync).

 * I've seen several claims that *good* 10/40G Ethernet can get close
 to IB even in latency, even for MD, and even for GROMACS, e.g:
 http://goo.gl/JrNxKf, http://goo.gl/t0z15f


 Cheers,
 --
 Szilárd

 Mark


 On Thu, Jul 24, 2014 at 10:54 PM, Elton Carvalho elto...@if.usp.br wrote:

 Dear Gromacs Users,

 My former university is focusing on cloud computing instead of
 physical servers, so research groups are now expected to buy virtual
 servers from the university cloud instead of buying their own
 clusters.

 The current setup employs Xeon E7-2870 servers, and there is a
 university-wide virtual cluster with 50 virtual servers, each with 10
 CPUs.

 Does anyone here have information on gromacs performance on this kind
 of infrastructure? Should I expect big issues?

 One thing that comes to mind is that the CPUs may not necessarily be
 in the same physical server, rack, or even datacenter (their plan is
 to decentralize the colocation), so network latency may be higher than
 in the traditional setup, which may affect scaling. Does this argument
 make sense or am I missing something on cloud management 101?

 Cheers.
 --
 Elton Carvalho
 Departamento de Física
 Universidade Federal do Paraná
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Gromacs performance on virtual servers

2014-07-24 Thread Szilárd Páll
Hi

In general, virtualization will always have an overhead, but if done
well, the performance should be close to that of bare metal. However,
for GROMACS the ideal scenario is exclusive host access (including the
hypervisor) and thread affinities, both of which depend on the
hypervisor configuration. Hence, if you can, you should try to get
access to virtual hosts that fully utilize a compute node and do not
share it with others.
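
For the single-node case, a minimal sketch of what that can look like
(assuming a 10-vCPU virtual host used exclusively and a thread-MPI build;
the 2x5 rank/thread split and the file prefix are just examples):

  mdrun -ntmpi 2 -ntomp 5 -pin on -deffnm md

-pin on sets the thread affinities explicitly, which only pays off if the
hypervisor actually gives the guest stable, dedicated cores.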

On Fri, Jul 25, 2014 at 12:31 AM, Mark Abraham mark.j.abra...@gmail.com wrote:
 Hi,

 Except for huge simulation systems, GROMACS performance past a single node
 is dominated by network latency, so unless you can extract a promise that
 any multi-node runs will have Infiniband-quality latency (because the nodes
 are physically in the same room, and on Infiniband) you can forget about
 doing multi-node MD on such a system.

Two remarks:

* With a slow network the only parallelization you can potentially
make use of is multi-sim (a minimal command sketch follows after these
remarks), unless your environment is so cloud-y that some nodes can
have tens to hundreds of ms latency, which can kill even your multi-sim
performance (depending on how fast each simulation is and how often
they sync).

* I've seen several claims that *good* 10/40G Ethernet can get close
to IB even in latency, even for MD, and even for GROMACS, e.g:
http://goo.gl/JrNxKf, http://goo.gl/t0z15f
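
As an illustration of the multi-sim route (a minimal sketch, assuming an
MPI-enabled build conventionally named mdrun_mpi and four independent runs
prepared in directories sim1..sim4, each containing its own topol.tpr; all
names are placeholders):

  mpirun -np 8 mdrun_mpi -multidir sim1 sim2 sim3 sim4 -pin on

Each of the four simulations then runs on its own pair of ranks and only
synchronizes with the others occasionally, which matches the loose coupling
described in the first remark.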


Cheers,
--
Szilárd

 Mark


 On Thu, Jul 24, 2014 at 10:54 PM, Elton Carvalho elto...@if.usp.br wrote:

 Dear Gromacs Users,

 My former university is focusing on cloud computing instead of
 physical servers, so research groups are now expected to buy virtual
 servers from the university cloud instead of buying their own
 clusters.

 The current setup employs Xeon E7-2870 servers, and there is a
 university-wide virtual cluster with 50 virtual servers, each with 10
 CPUs.

 Does anyone here have information on gromacs performance on this kind
 of infrastructure? Should I expect big issues?

 One thing that comes to mind is that the CPUs may not necessarily be
 in the same physical server, rack, or even datacenter (their plan is
 to decentralize the colocation), so network latency may be higher than
 in the traditional setup, which may affect scaling. Does this argument
 make sense or am I missing something on cloud management 101?

 Cheers.
 --
 Elton Carvalho
 Departamento de Física
 Universidade Federal do Paraná
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Error in system_inflate.gro coordinates does not match

2014-07-24 Thread Justin Lemkul



On 7/24/14, 11:57 AM, RINU KHATTRI wrote:

Hello everyone,
thank you Justin, I did the same.
Up to the minimization without the ligand, it is in the lipid and centered,
but I set the box size arbitrarily: I used the x and y dimensions as present
in the POPC patch but 10.0 for the z axis, so the protein and lipid overlap.
I think this can create a problem.
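
For reference, box dimensions can be set explicitly with editconf; a minimal
sketch, where the x and y values are placeholders that should come from the
POPC patch, 10.0 is the z dimension (in nm) mentioned above, and the file
names are hypothetical:

  editconf -f system.gro -o system_newbox.gro -box 6.4 6.4 10.0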


I don't understand if there is a question or problem here.  If something is 
wrong, provide the exact command(s) used and provide images of the undesirable 
output.  Without that information, there's nothing that I or anyone else can do 
to help you.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Lennard-Jones potential not matching with published data

2014-07-24 Thread Justin Lemkul



On 7/24/14, 4:30 PM, Elton Carvalho wrote:

On Wed, Jul 23, 2014 at 3:40 PM, ibrahim khalil
ibrahim.khalil.c...@gmail.com wrote:

In my simulation, my results are about half (not exactly half but around
half) of the published data. I am stuck here for like a month and cannot
find my mistakes.

Can anyone help me where to look for my mistake? I am using a modified
oplsaa forcefield.

Should I check my forcefield parameters? Or my mdp parameters?


I've been there before.

Check which Lennard-Jones function was used to publish the data to
which you are comparing. There are some nomenclature differences
regarding sigma. GROMACS defines sigma as the distance where the LJ
potential is zero. Some forcefields define sigma as the bottom of the
well.



The definition of sigma is always the same; the issue is whether or not force 
fields specify sigma directly or Rmin/2, etc.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Lennard-Jones potential not matching with published data

2014-07-24 Thread Elton Carvalho
On Thu, Jul 24, 2014 at 10:47 PM, Justin Lemkul jalem...@vt.edu wrote:

 On 7/24/14, 4:30 PM, Elton Carvalho wrote:

 I've been there before.

 Check which Lennard-Jones function was used to publish the data to
 which you are comparing. There are some nomenclature differences
 regarding sigma. GROMACS defines sigma as the distance where the LJ
 potential is zero. Some forcefields define sigma as the bottom of the
 well.


 The definition of sigma is always the same; the issue is whether or not
 force fields specify sigma directly or Rmin/2, etc.


I stand corrected: sigma is well defined. The point is whether the
distance parameter in the LJ formula is sigma, or the minimum, or half
the minimum, etc. As an example, Accelrys' Cerius2 (and Materials
Studio too, AFAIK) defines Lennard-Jones as (according to its
documentation):

 LJ 12 6:   E = Do { ( Ro/R )^12 - 2 * ( Ro/R )^6 }

Do - Well depth in kcal/mol
Ro - Equilibrium distance in Angstroms

This was exactly the scenario I had trouble with: trying to implement
a Cerius2 forcefield in gromacs.
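
For reference, the two conventions are related by the usual substitution (a
short check of the algebra, not tied to any particular force-field file):

  4\epsilon\left[(\sigma/R)^{12}-(\sigma/R)^{6}\right]
    = D_0\left[(R_0/R)^{12}-2(R_0/R)^{6}\right]
  \quad\Longleftrightarrow\quad
  \epsilon = D_0, \qquad R_0 = 2^{1/6}\,\sigma \approx 1.122\,\sigma

So a D_0/R_0 pair converts to epsilon = D_0 (with kcal/mol converted to
kJ/mol) and sigma = R_0 / 2^{1/6} (with Angstroms converted to nm); forgetting
the 2^{1/6} factor shifts the position of the minimum by about 12%.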

Either way, checking what the distance parameter means in the
forcefield the user is trying to reproduce is a nice place to start
when experiencing this kind of discrepancy.

Cheers from a windy, chilly Curitiba.

-- 
Elton Carvalho
Departamento de Física
Universidade Federal do Paraná

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] Re grompp and mdrun output files

2014-07-24 Thread Melsa Rose Ducut
Hi GROMACS users,

I typed the command
grompp -f md300.mdp -c equi_new.gro -n dex.ndx -p topol.top -maxwarn 1 -o 
md300.tpr

then this command

mdrun -v -deffnm md300


So I was expecting the md300.gro output file to be the same as the last
frame when I load md300.xtc onto the equi_new.gro file in VMD. However,
that is not the case. Can anyone please enlighten me about this? Thanks.

regards,
Melsa
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.