Re: [gmx-users] Re grompp and mdrun output files

2014-07-25 Thread Giuseppina La Sala
Hi Melsa,
I think the issue is that the .xtc file does not contain every frame, only every n-th 
step (whatever output interval you set in the .mdp). The last frame in the .xtc is 
therefore the last multiple of that interval, which is not necessarily the final step 
of the run; the final coordinates are, for sure, what md300.gro contains.
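
You can check this by looking at the .xtc output interval (nstxtcout) in your .mdp and 
by dumping the last frame that was actually written to the .xtc; a minimal sketch with 
your file names (passing a time beyond the end of the run makes trjconv return the last 
frame it finds):

gmxcheck -f md300.xtc
trjconv -s md300.tpr -f md300.xtc -dump 999999 -o last_xtc_frame.gro

Comparing last_xtc_frame.gro with md300.gro should show that they differ only because 
the .xtc stops at the last multiple of nstxtcout.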


Cheers 

Josephine
 

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[gromacs.org_gmx-users-boun...@maillist.sys.kth.se] on behalf of Melsa Rose 
Ducut [ducut_melsar...@yahoo.com]
Sent: Friday, 25 July 2014 6:50
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] Re grompp and mdrun output files

Hi GROMACS users,

I typed the command
grompp -f md300.mdp -c equi_new.gro -n dex.ndx -p topol.top -maxwarn 1 -o 
md300.tpr

then this command

mdrun -v -deffnm md300


I was expecting the md300.gro output file to be the same as the last frame I see when I 
load md300.xtc onto equi_new.gro in VMD. However, that is not the case. Can anyone 
please enlighten me about this? Thanks.

regards,
Melsa


[gmx-users] g_cluster analysis for structure refinement

2014-07-25 Thread James Starlight
Dear Gromacs users!

I'd like to use the g_cluster utility to cluster a large set of models produced
by Modeller as the result of refining some parts of my protein. In this case all
structures differ only in the conformation of the longest loop (~30 amino acids,
including 2 disulphide bridges), so I need to cluster all models based on the RMSD
in this region. As a result I'd like to obtain a projection of the whole set of
conformers onto a plane of coordinates corresponding to selected structural criteria
(for instance the percentage occurrence of secondary-structure elements in the refined
(loop) region, and/or geometrical criteria such as the distance between a pair of
residues, the occurrence of salt bridges in this region, etc.). Finally I'd like to
visualize the data and choose the most representative structure from each cluster for
further MD simulation. Could such processing be performed by g_cluster, given that I
have a GROMACS-style .trr consisting of my conformers? Which clustering algorithms and
parameters should I pay particular attention to?

Many thanks for any suggestions,

James


Re: [gmx-users] g_cluster analysis for structure refinement

2014-07-25 Thread Tsjerk Wassenaar
Hi James,

The first part is just conformational clustering, for which you can use g_cluster. The
easiest approach is then to collect the structures belonging to the different clusters
into separate files (which g_cluster can do) and process each of those separately to
extract the properties you're interested in. Then you can easily plot the selected
properties per cluster.
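
For example, a minimal sequence (assuming an index group called Loop for the flexible
region, created with make_ndx, and one of the models used as reference; all names here
are only illustrative) could be:

make_ndx -f model_ref.pdb -o loop.ndx
g_cluster -f models.trr -s model_ref.pdb -n loop.ndx -method gromos -cutoff 0.15 -g cluster.log -cl clusters.pdb

When prompted, select the Loop group for the least-squares fit and RMSD calculation;
-cl then writes the middle structure of each cluster, which you can take as the
representative conformers.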

Hope it helps,

Tsjerk


On Fri, Jul 25, 2014 at 9:53 AM, James Starlight jmsstarli...@gmail.com
wrote:

 Dear Gromacs users!

 I'd like to use g_cluster utility to cluster a big set of models produced
 by Modeller as the result of the refirement of some parts of my protein. In
 this case all structures differs only in the conformation of 1 longest loop
 (~ 30 amino acids including 2 disulphide bridges) so I need to cluster all
 models based on the RMSD in this region.   As the result I'd like to obtain
 projection of all set of conformers onto the plane of some coordinates
  correspond to some selected structural criteriums (for instance percent of
 the occurence of secondary structure elements  in the refined (loop)
 region; and/or some geometrical criteriums like distance between pair of
 residues, occurence of the salt-bridges in this region etc. As the result
 I'd like to visualize data and chose most representative structures from
 each cluster for further MD simulation. Could such processing be performed
 by g_cluster taking into account that I have gromacs-like trr consisted of
 my conformers ? On what cluster algoritms and parameters  should I pay a
 lot of attention?

 Many thanks for suggestion,

 James




-- 
Tsjerk A. Wassenaar, Ph.D.


Re: [gmx-users] Gromacs performance on virtual servers

2014-07-25 Thread Mark Abraham
On Fri, Jul 25, 2014 at 1:51 AM, Szilárd Páll pall.szil...@gmail.com
wrote:

 Hi

 In general, virtualization will always have an overhead, but if done
 well, the performance should be close to that of bare metal. However,
 for GROMACS the ideal scenario is exclusive host access (including
 hypervisor) and thread affinities which will both depend on the
 hypervisor configuration. Hence, if you can, you should try to get
 access to virtual hosts that fully utilize a compute node and do not
 share it with others.


Definitely.


 On Fri, Jul 25, 2014 at 12:31 AM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
  Hi,
 
  Except for huge simulation systems, GROMACS performance past a single
 node
  is dominated by network latency, so unless you can extract a promise that
  any multi-node runs will have Infiniband-quality latency (because the
 nodes
  are physically in the same room, and on Infiniband) you can forget about
  doing multi-node MD on such a system.

 Two remarks:

 * With a slow network the only parallelization you can potentially
 make use of is multi-sim, unless your environment is so cloud-y that
 some nodes can have tens to hundreds of ms latency, which can kill even
 your multi-sim performance (depending on how fast each simulation is
 and how often they sync).


I would not encourage multi-sim on such a setup, unless you actually want
replica exchange. The multi-sim implementation unnecessarily syncs
simulations every min(nstlist,nstcalcenergy,nstreplex) step, so that might
be ~tens of times per second. Unnecessary multi-sim is good for pretending
you are doing a big parallel calculation to get access to a large chunk of
a machine, but this is not really the case here.
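
(If you do want replica exchange, a typical launch, assuming four prepared directories
sim0..sim3 each containing its own topol.tpr, would be something like

mpirun -np 16 mdrun_mpi -multidir sim0 sim1 sim2 sim3 -replex 500

with the exchange interval chosen to suit your system.)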

* I've seen several claims that *good* 10/40G Ethernet can get close
 to IB even in latency, even for MD, and even for GROMACS, e.g:
 http://goo.gl/JrNxKf, http://goo.gl/t0z15f


Interesting, thanks.

Mark




 Cheers,
 --
 Szilárd

  Mark
 
 
  On Thu, Jul 24, 2014 at 10:54 PM, Elton Carvalho elto...@if.usp.br
 wrote:
 
  Dear Gromacs Users,
 
  My former university is focusing on cloud computing instead of
  physical servers, so research groups are now expected to buy virtual
  servers from the university coloud instead of buying their own
  clusters.
 
  The current setup employs Xeon E7- 2870 servers and there is an
  university-wide virtual cluster with 50 virtual servers each with 10
  CPUs.
 
  Does anyone here have information on gromacs performance on this kind
  of infrastructure? Should I expect big issues?
 
  One thing that comes to mind is that the CPUs may not necessarily be
  in the same physical server, rack, or even datacenter (their plan is
  to decentralize the colocation), so network latency may be higher than
  the traditional setup, which may affect scaling. Does this argument
  make sense or am I missing something on cloud management 101?
 
  Cheers.
  --
  Elton Carvalho
  Departamento de Física
  Universidade Federal do Paraná


[gmx-users] Groups in index.ndx

2014-07-25 Thread INPE (Ingrid Viveka Pettersson)
Dear Group,

I have defined different specific groups in the index.ndx file. My problem is 
that if I try to add a new group, the old ones are disappearing. Should it be 
like this?

Yours sincerely,

Ingrid Pettersson


_

Ingrid Pettersson, PhD
Principal Scientist
Diabetes Structural Biology

Novo Nordisk A/S
Novo Nordisk Park
DK-2760 Måløv
Denmark
+4530754506 (direct)
i...@novonordisk.com

Facebookhttp://www.facebook.com/novonordisk | 
Twitterhttp://www.twitter.com/novonordisk | 
LinkedInhttp://www.linkedin.com/company/novo-nordisk | 
Youtubehttp://www.Youtube.com/novonordisk | 
Pinteresthttp://www.pinterest.com/novonordisk



Re: [gmx-users] Angle group

2014-07-25 Thread Justin Lemkul



On 7/25/14, 7:35 AM, Cyrus Djahedi wrote:

There is something else. I also tried using mk_angndx to generate an index file prior to 
using g_angle, with: mk_angndx -s C0.tpr -type angle
It generates a lot of different angle groups that I cannot identify. Each line in the 
index file has 9 atoms in it; are the atoms divided three-by-three to form triplets? Is 
there any way of knowing which group represents which triplet?

g_angle:

Group 0 (Theta=110.0_502.42) has  7680 elements
Group 1 (Theta=111.0_376.81) has 10560 elements
Group 2 (Theta=112.0_837.36) has   960 elements
Group 3 (Theta=107.5_586.15) has  4944 elements
Group 4 (Theta=108.5_586.15) has  5616 elements
Group 5 (Theta=109.5_460.55) has  2976 elements
Group 6 (Theta=113.5_376.81) has  3840 elements
Group 7 (Theta=111.6_418.68) has  1872 elements
Group 8 (Theta=109.5_376.81) has   960 elements
Select a group: 2
Selected 2: 'Theta=112.0_837.36'
Last frame  1 time 1.000
Found points in the range from 93 to 124 (max 180)
< angle >   = 108.363
< angle^2 > = 11742.5
Std. Dev.   = 0.214073



mk_angndx divides the groups based on their parameters, not necessarily by the 
same chemical definition (i.e. the same bonded parameters may apply to different 
groups in your structure).  You can always identify what the groups contain by 
opening the .ndx file in a text editor; all an .ndx file has is a list of atom 
numbers, so it's easy to tell what is what.
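
For example, a group written by mk_angndx looks like this (the atom numbers here are 
just an illustration):

[ Theta=112.0_837.36 ]
   5    6    7   21   22   23   37   38   39

Each line is read three numbers at a time, so this group contains the angle triplets 
5-6-7, 21-22-23 and 37-38-39; looking those atoms up in your coordinate file tells you 
which chemical angle the group corresponds to.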


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] hints for core/shell optimization?

2014-07-25 Thread Tamas Karpati
Dear all,

I have two questions about geometry optimization of a crystal
with polarization via the core/shell model. I'm creating the *.gro and
*.top files by hand and compiling them, together with an *.mdp, into a *.tpr via
GROMPP. My FF is also made by hand (simply because I need
to learn GROMACS). I have learnt on this list that with Buckingham
potentials I need to use the group rather than the Verlet scheme.


(1/2) Letting some of the atoms be polarizable by attaching
shell particles makes MDRUN segfault like this:

#  ...
#  Reading file AAA_opt.tpr, VERSION 4.6.3 (single precision)
#  Using 2 MPI threads
#
#  Steepest Descents:
# Tolerance (Fmax)   =  1.0e+01
#Number of steps=   10
#  Step=0, Dmax= 1.0e-02 nm, Epot= -nan Fmax= 3.76506e+03, atom= 1357
#  Segmentation fault

I suspected an unhandled division-by-zero and put some random noise on the shell
particles' positions so that they no longer start exactly at the atomic sites
(i.e. nonzero core-shell distances). That seemed to work, at least there were no
further crashes. However, the energies and forces seem very high:

#  Steepest Descents:
# Tolerance (Fmax)   =  1.0e+01
# Number of steps=   10
#  Step=0, Dmax= 1.0e-02 nm, Epot= -1.36425e+07 Fmax= 2.99600e+05, atom= 160
#  Step=1, Dmax= 1.0e-02 nm, Epot= -1.62080e+07 Fmax= 1.25769e+06, atom= 160
#  Step=2, Dmax= 1.2e-02 nm, Epot= -1.95965e+07 Fmax= 6.87820e+08,
atom= 2759
#  Step=3, Dmax= 1.4e-02 nm, Epot= -2.02902e+07 Fmax= 1.30719e+09, atom= 468
#  Step=8, Dmax= 1.1e-03 nm, Epot= -2.18970e+07 Fmax= 5.77722e+09,
atom= 1095
#  Step=   10, Dmax= 6.5e-04 nm, Epot= -1.96952e+07 Fmax= 3.92889e+08,
atom= 1096
#  Energy minimization reached the maximum number of steps before the forces
#  reached the requested precision Fmax < 10.

My question is the following.
   Is randomizing the shell positions a correct procedure with GROMACS?


(2/2) Changing from randomized x/y/z offsets to a fixed distance in
a random direction for the shell particles led to another unexpected result.
I scanned the range from 1e-4 to 0.1 nm and noticed that

the final core-to-shell distance is a function of the starting one.

I used niter = 1 (note: the default is 20), as I noticed in an MD-type
job that 20, 100 or 1000 steps were insufficient for the shells
to relax within the default tolerance. The cell size was ca. 3x3x3 nm.

My question is the following.
   What would be the appropriate core-to-shell distance to apply?

I appreciate any help so thanks in advance.

With regards,
  toma


[gmx-users] Lennard jones parameters for metal ions

2014-07-25 Thread #SUKRITI GUPTA#
Dear Gromacs users,


I want to simulate metal ions in GROMACS. For Fe+2 I already have LJ potential 
parameters defined in the OPLS-AA force field, so I wanted to know: if I add Fe+3 to my 
simulation, will its LJ parameters be different from those of Fe+2, or the same?


I thought they would be the same, because the van der Waals radius is defined for an 
element rather than an ion, and hence would be the same for Fe, Fe+2 or Fe+3.


Regards

Sukriti






Sukriti Gupta (Ms) | PhD Student | Energy Research Institute @ NTU (ERI@N) | 
Nanyang Technological University
N1.3-B4-14, 50 Nanyang Avenue, Singapore 639798
Tel: (65) 81164191 GMT+8h | Email:sukriti...@e.ntu.edu.sg | 
Web:erian.ntu.edu.sghttp://www.ntu.edu.sg/home/xuzc/People.html




Re: [gmx-users] Lennard jones parameters for ions

2014-07-25 Thread Justin Lemkul



On 7/25/14, 7:39 AM, #SUKRITI GUPTA# wrote:

Dear Gromacs users,


I wanted to simulate metal ions in gromacs. For Fe+2, I have lj potential 
parameters already defined in OPLSAA forcefield, so I wanted to know if I want 
to add FE+3 in my simulation, then the LJ parameters for it will be different 
from that of Fe+2 or same?


I thought it will be same because vander waals radii is defined for a element 
and not ion, hence it will be same for Fe or Fe+2 or Fe+3.



In any force field, LJ and electrostatic terms are parametrized to balance in 
some way that produces some physically sensible behavior.  Given the extremely 
high charge density of Fe2+ and Fe3+, I see no reason to think that the LJ 
parameters for one would be the same as for the other.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Justin Lemkul



On 7/25/14, 7:43 AM, Tamas Karpati wrote:

Dear all,

I have two questions about geometry optimization of a crystal
with polarization via the core/shell model. I'm creating *.gro and
*.top files by hand and compile them with *.mdp to *.tpr via
GROMPP. My FF is also made by hand (simply because i need
to learn GROMACS). I have learnt on this list that with Buckingham
potentials I need to use the group rather than the Verlet scheme.


(1/2) Letting some of all atoms be polarizable through applying
shell particles made MDRUN segfault like this:

#  ...
#  Reading file AAA_opt.tpr, VERSION 4.6.3 (single precision)
#  Using 2 MPI threads
#
#  Steepest Descents:
# Tolerance (Fmax)   =  1.0e+01
#Number of steps=   10
#  Segmentation fault1.0e-02 nm, Epot= -nan Fmax= 3.76506e+03,
atom= 1357

I imagined some divison by zero situation not handled and have put
some random noise on the shell particles' position so they do not
anymore start exactly at the atomic sites (meaning nonzero distances).
Seemed to work, at least no further crashes. Only energies and forces
seem very high:

#  Steepest Descents:
# Tolerance (Fmax)   =  1.0e+01
# Number of steps=   10
#  Step=0, Dmax= 1.0e-02 nm, Epot= -1.36425e+07 Fmax= 2.99600e+05, atom= 160
#  Step=1, Dmax= 1.0e-02 nm, Epot= -1.62080e+07 Fmax= 1.25769e+06, atom= 160
#  Step=2, Dmax= 1.2e-02 nm, Epot= -1.95965e+07 Fmax= 6.87820e+08,
atom= 2759
#  Step=3, Dmax= 1.4e-02 nm, Epot= -2.02902e+07 Fmax= 1.30719e+09, atom= 468
#  Step=8, Dmax= 1.1e-03 nm, Epot= -2.18970e+07 Fmax= 5.77722e+09,
atom= 1095
#  Step=   10, Dmax= 6.5e-04 nm, Epot= -1.96952e+07 Fmax= 3.92889e+08,
atom= 1096
#  Energy minimization reached the maximum numberof steps before the forces
#  reached the requestedprecision Fmax  10.

My question is the following.
Is it (randomized shell positions) a correct procedure with GROMACS?


(2/2) Changing from a randomized x/y/z set to a fixed distance at
a random direction for the shell particles led to another unexpected result.
I scanned a range between 1e-4 to 0.1 nm and noticed that

 the final core-to-shell distance is a function of the starting one.

I used niter = 1 (note: the default is 20) as i noticed in an MD
type of job that 20, 100 or 1000 steps were insufficient for the shells
to relax within default tolerance. The cell size was ca. 3x3x3 nm.



Please provide a full .mdp file; other settings are very relevant.


My question is the following.
What would be the appropriate core-to-shell distance to apply?



The equilibrium distance for core-shell bonds should be zero; deviations from 
this non-polarized state account for the polarization energy.
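
For reference, a shell is attached to its core in the topology with an entry along 
these lines (the atom numbers and polarizability are only illustrative; alpha is in 
nm^3):

[ polarization ]
; core   shell   funct   alpha
  1      2       1       0.00108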



I appreciate any help so thanks in advance.


Upgrade to 4.6.6; there have been issues with shells that have been fixed since 
4.6.3.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Groups in index.ndx

2014-07-25 Thread INPE (Ingrid Viveka Pettersson)
Thank you for the help. It works fine.

BR Ingrid


-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of Justin 
Lemkul
Sent: 25. juli 2014 13:36
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Groups in index.ndx



On 7/25/14, 7:23 AM, INPE (Ingrid Viveka Pettersson) wrote:
 Dear Group,

 I have defined different specific groups in the index.ndx file. My problem is 
 that if I try to add a new group, the old ones are disappearing. Should it be 
 like this?


Depends on the way you're issuing make_ndx.  You can preserve the existing 
custom groups when writing a new index file by supplying the old index file as 
input to make_ndx.  If you're using default file names (hint: always provide 
your exact command!) then it will otherwise be overwritten.

make_ndx -f conf.gro -n old_index.ndx -o new_index.ndx

will preserve your custom groups and allow you to add new ones.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441 
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Lennard jones parameters for ions

2014-07-25 Thread #SUKRITI GUPTA#
Dear Gromacs users,


I want to simulate metal ions in GROMACS. For Fe+2 I already have LJ potential 
parameters defined in the OPLS-AA force field, so I wanted to know: if I add Fe+3 to my 
simulation, will its LJ parameters be different from those of Fe+2, or the same?

I thought they would be the same, because the van der Waals radius is defined for an 
element rather than an ion, and hence would be the same for Fe, Fe+2 or Fe+3.


Regards

Sukriti



Sukriti Gupta (Ms) | PhD Student | Energy Research Institute @ NTU (ERI@N) | 
Nanyang Technological University
N1.3-B4-14, 50 Nanyang Avenue, Singapore 639798
Tel: (65) 81164191 GMT+8h | Email:sukriti...@e.ntu.edu.sg | 
Web:erian.ntu.edu.sghttp://www.ntu.edu.sg/home/xuzc/People.html




Re: [gmx-users] Groups in index.ndx

2014-07-25 Thread Nidhi Katyal
Use make_ndx -f *.gro -n old_index.ndx -o old_index.ndx


On Fri, Jul 25, 2014 at 4:53 PM, INPE (Ingrid Viveka Pettersson) 
i...@novonordisk.com wrote:

 Dear Group,

 I have defined different specific groups in the index.ndx file. My problem
 is that if I try to add a new group, the old ones are disappearing. Should
 it be like this?

 Yours sincerely,

 Ingrid Pettersson


 _

 Ingrid Pettersson, PhD
 Principal Scientist
 Diabetes Structural Biology

 Novo Nordisk A/S
 Novo Nordisk Park
 DK-2760 Måløv
 Denmark
 +4530754506 (direct)
 i...@novonordisk.com

 Facebookhttp://www.facebook.com/novonordisk | Twitter
 http://www.twitter.com/novonordisk | LinkedIn
 http://www.linkedin.com/company/novo-nordisk | Youtube
 http://www.Youtube.com/novonordisk | Pinterest
 http://www.pinterest.com/novonordisk





Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Tamas Karpati
Dear Justin,

Thank you for your quick answer.

Same with GROMACS 4.6.6: the core-to-shell distance must be > 0 to avoid the crash.
My crystal is expected to be polarized (the metal and oxygen sites are the
polarizable ones in this model).
The *.mdp file being used is:

#   nstcalcenergy = 1 ; 4.6.6 claims this necessary, 4.6.3 didn't need it
#   integrator  = steep
#   vdw-type= cut-off
#   coulombtype = pme
#   nsteps  = 10
#   periodic-molecules = no
#   cutoff-scheme = group
#   ns-type = grid
#   emtol = 10 ; default=10 kJ/mol/nm
#   niter = 1 ; default=20
#   ;fcstep = ; default=0 ps^2 ; not quite clear what it is
#   rlist   = 1.0
#   rcoulomb= 1.0
#   rvdw= 1.0
#   pbc = xyz
#   nstxout = 1  ; wouldn't emit pos for each, though
#   nstfout = 1 ; --
#   nstlist = 1000 ; avoid it, no bonds at all
#   nstlog  = 1
#   ;pcoupl = no ; used to switch cell-optimization on/off

Best regards,
  toma

On Fri, Jul 25, 2014 at 1:48 PM, Justin Lemkul jalem...@vt.edu wrote:


 On 7/25/14, 7:43 AM, Tamas Karpati wrote:

 Dear all,

 I have two questions about geometry optimization of a crystal
 with polarization via the core/shell model. I'm creating *.gro and
 *.top files by hand and compile them with *.mdp to *.tpr via
 GROMPP. My FF is also made by hand (simply because i need
 to learn GROMACS). I have learnt on this list that with Buckingham
 potentials I need to use the group rather than the Verlet scheme.


 (1/2) Letting some of all atoms be polarizable through applying
 shell particles made MDRUN segfault like this:

 #  ...
 #  Reading file AAA_opt.tpr, VERSION 4.6.3 (single precision)
 #  Using 2 MPI threads
 #
 #  Steepest Descents:
 # Tolerance (Fmax)   =  1.0e+01
 #Number of steps=   10
 #  Segmentation fault1.0e-02 nm, Epot= -nan Fmax= 3.76506e+03,
 atom= 1357

 I imagined some divison by zero situation not handled and have put
 some random noise on the shell particles' position so they do not
 anymore start exactly at the atomic sites (meaning nonzero distances).
 Seemed to work, at least no further crashes. Only energies and forces
 seem very high:

 #  Steepest Descents:
 # Tolerance (Fmax)   =  1.0e+01
 # Number of steps=   10
 #  Step=0, Dmax= 1.0e-02 nm, Epot= -1.36425e+07 Fmax= 2.99600e+05,
 atom= 160
 #  Step=1, Dmax= 1.0e-02 nm, Epot= -1.62080e+07 Fmax= 1.25769e+06,
 atom= 160
 #  Step=2, Dmax= 1.2e-02 nm, Epot= -1.95965e+07 Fmax= 6.87820e+08,
 atom= 2759
 #  Step=3, Dmax= 1.4e-02 nm, Epot= -2.02902e+07 Fmax= 1.30719e+09,
 atom= 468
 #  Step=8, Dmax= 1.1e-03 nm, Epot= -2.18970e+07 Fmax= 5.77722e+09,
 atom= 1095
 #  Step=   10, Dmax= 6.5e-04 nm, Epot= -1.96952e+07 Fmax= 3.92889e+08,
 atom= 1096
 #  Energy minimization reached the maximum numberof steps before the
 forces
 #  reached the requestedprecision Fmax  10.

 My question is the following.
 Is it (randomized shell positions) a correct procedure with GROMACS?


 (2/2) Changing from a randomized x/y/z set to a fixed distance at
 a random direction for the shell particles led to another unexpected
 result.
 I scanned a range between 1e-4 to 0.1 nm and noticed that

  the final core-to-shell distance is a function of the starting one.

 I used niter = 1 (note: the default is 20) as i noticed in an MD
 type of job that 20, 100 or 1000 steps were insufficient for the shells
 to relax within default tolerance. The cell size was ca. 3x3x3 nm.


 Please provide a full .mdp file; other settings are very relevant.


 My question is the following.
 What would be the appropriate core-to-shell distance to apply?


 The equilibrium distance for core-shell bonds should be zero, deviations
 from this non-polarized state account for the polarization energy.


 I appreciate any help so thanks in advance.


 Upgrade to 4.6.6; there have been issues with shells that have been fixed
 since 4.6.3.

 -Justin

 --
 ==

 Justin A. Lemkul, Ph.D.
 Ruth L. Kirschstein NRSA Postdoctoral Fellow

 Department of Pharmaceutical Sciences
 School of Pharmacy
 Health Sciences Facility II, Room 601
 University of Maryland, Baltimore
 20 Penn St.
 Baltimore, MD 21201

 jalem...@outerbanks.umaryland.edu | (410) 706-7441
 http://mackerell.umaryland.edu/~jalemkul

 ==

Re: [gmx-users] Lennard jones parameters for ions

2014-07-25 Thread #SUKRITI GUPTA#
Dear Justin,

Thanks for your reply. Is there then any relation between charge density (for the same 
element) and the LJ parameters, e.g. for a higher charge the van der Waals radius is 
smaller? Or are the LJ parameters selected just to reproduce the physical behavior of 
the ions?

Regards
Sukriti


Sukriti Gupta (Ms) | PhD Student | Energy Research Institute @ NTU (ERI@N) | 
Nanyang Technological University
N1.3-B4-14, 50 Nanyang Avenue, Singapore 639798
Tel: (65) 81164191 GMT+8h | Email:sukriti...@e.ntu.edu.sg | Web:erian.ntu.edu.sg




From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
gromacs.org_gmx-users-boun...@maillist.sys.kth.se on behalf of Justin Lemkul 
jalem...@vt.edu
Sent: Friday, July 25, 2014 7:46 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Lennard jones parameters for ions

On 7/25/14, 7:39 AM, #SUKRITI GUPTA# wrote:
 Dear Gromacs users,


 I wanted to simulate metal ions in gromacs. For Fe+2, I have lj potential 
 parameters already defined in OPLSAA forcefield, so I wanted to know if I 
 want to add FE+3 in my simulation, then the LJ parameters for it will be 
 different from that of Fe+2 or same?


 I thought it will be same because vander waals radii is defined for a element 
 and not ion, hence it will be same for Fe or Fe+2 or Fe+3.


In any force field, LJ and electrostatic terms are parametrized to balance in
some way that produces some physically sensible behavior.  Given the extremely
high charge density of Fe2+ and Fe3+, I see no reason to think that the LJ
parameters for one would be the same as for the other.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Lennard jones parameters for ions

2014-07-25 Thread Justin Lemkul



On 7/25/14, 8:21 AM, #SUKRITI GUPTA# wrote:

Dear Justin,

Thanks for your reply. Then it there any relation between charge density (for 
same element) and lj parameters e.g. for higher charge vander waals radius is 
smaller? Or are lj parameters randomly selected just to satisfy physical 
behavior of the ions?



There's nothing random about it.  You need some sort of target data - QM 
interaction energies, crystal geometries, hydration free energies, etc. to 
parametrize the model.  The parameters are tuned to agree with whatever data are 
deemed suitable.  Additive models of transition metals are generally very poor, 
though.  There are many effects that simply can't be captured by a point charge 
with a 12-6 LJ potential.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Justin Lemkul



On 7/25/14, 8:10 AM, Tamas Karpati wrote:

Dear Justin,

Thank you for your quick answer.

Same with GROMACS-4.6.6: core-to-shell distance
must be 0 to not crash. My crystal is expected to be polarized


Does your topology specify the proper intramolecular exclusions?  What is(are) 
the molecule(s)?



(metallic and oxygen sites are the victims of this model).
The *.mpd file being used is:

#   nstcalcenergy = 1 ; 4.6.6 claims this necessary, 4.6.3 didn't need it


Definitely true.


#   integrator  = steep
#   vdw-type= cut-off
#   coulombtype = pme
#   nsteps  = 10
#   periodic-molecules = no
#   cutoff-scheme = group
#   ns-type = grid
#   emtol = 10 ; default=10 kJ/mol/nm
#   niter = 1 ; default=20
#   ;fcstep = ; default=0 ps^2 ; not quite clear what it is
#   rlist   = 1.0
#   rcoulomb= 1.0
#   rvdw= 1.0
#   pbc = xyz
#   nstxout = 1  ; wouldn't emit pos for each, though
#   nstfout = 1 ; --
#   nstlist = 1000 ; avoid it, no bonds at all


Try nstlist = 1.  The shell positions are solved via SCF (EM), so you need to 
update the neighbor list very frequently.
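
Relative to the .mdp you posted, that would mean something like the following (niter 
raised back toward its default so the shells can actually relax at every step; these 
values are just a starting point):

nstlist = 1
niter   = 20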


-Justin


#   nstlog  = 1
#   ;pcoupl = no ; used to switch cell-optimization on/off

Best regards,
   toma

On Fri, Jul 25, 2014 at 1:48 PM, Justin Lemkul jalem...@vt.edu wrote:



On 7/25/14, 7:43 AM, Tamas Karpati wrote:


Dear all,

I have two questions about geometry optimization of a crystal
with polarization via the core/shell model. I'm creating *.gro and
*.top files by hand and compile them with *.mdp to *.tpr via
GROMPP. My FF is also made by hand (simply because i need
to learn GROMACS). I have learnt on this list that with Buckingham
potentials I need to use the group rather than the Verlet scheme.


(1/2) Letting some of all atoms be polarizable through applying
shell particles made MDRUN segfault like this:

#  ...
#  Reading file AAA_opt.tpr, VERSION 4.6.3 (single precision)
#  Using 2 MPI threads
#
#  Steepest Descents:
# Tolerance (Fmax)   =  1.0e+01
#Number of steps=   10
#  Segmentation fault1.0e-02 nm, Epot= -nan Fmax= 3.76506e+03,
atom= 1357

I imagined some divison by zero situation not handled and have put
some random noise on the shell particles' position so they do not
anymore start exactly at the atomic sites (meaning nonzero distances).
Seemed to work, at least no further crashes. Only energies and forces
seem very high:

#  Steepest Descents:
# Tolerance (Fmax)   =  1.0e+01
# Number of steps=   10
#  Step=0, Dmax= 1.0e-02 nm, Epot= -1.36425e+07 Fmax= 2.99600e+05,
atom= 160
#  Step=1, Dmax= 1.0e-02 nm, Epot= -1.62080e+07 Fmax= 1.25769e+06,
atom= 160
#  Step=2, Dmax= 1.2e-02 nm, Epot= -1.95965e+07 Fmax= 6.87820e+08,
atom= 2759
#  Step=3, Dmax= 1.4e-02 nm, Epot= -2.02902e+07 Fmax= 1.30719e+09,
atom= 468
#  Step=8, Dmax= 1.1e-03 nm, Epot= -2.18970e+07 Fmax= 5.77722e+09,
atom= 1095
#  Step=   10, Dmax= 6.5e-04 nm, Epot= -1.96952e+07 Fmax= 3.92889e+08,
atom= 1096
#  Energy minimization reached the maximum numberof steps before the
forces
#  reached the requestedprecision Fmax  10.

My question is the following.
 Is it (randomized shell positions) a correct procedure with GROMACS?


(2/2) Changing from a randomized x/y/z set to a fixed distance at
a random direction for the shell particles led to another unexpected
result.
I scanned a range between 1e-4 to 0.1 nm and noticed that

  the final core-to-shell distance is a function of the starting one.

I used niter = 1 (note: the default is 20) as i noticed in an MD
type of job that 20, 100 or 1000 steps were insufficient for the shells
to relax within default tolerance. The cell size was ca. 3x3x3 nm.



Please provide a full .mdp file; other settings are very relevant.



My question is the following.
 What would be the appropriate core-to-shell distance to apply?



The equilibrium distance for core-shell bonds should be zero, deviations
from this non-polarized state account for the polarization energy.



I appreciate any help so thanks in advance.



Upgrade to 4.6.6; there have been issues with shells that have been fixed
since 4.6.3.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==

[gmx-users] genbox_mpi and Error: Invalid number of threads defined

2014-07-25 Thread lswierczewski .
Dear GROMACS Users,

I have a problem when running genbox_mpi.

Below is information from the console.



[lswiercz@nostromo ~/gromacs]$ srun --nodes 1 genbox_mpi -cp
1AKI_newbox.gro -cs spc216.gro -p topol.top -o solvated.gro
srun: job 5026 queued and waiting for resources
srun: job 5026 has been allocated resources
 :-)  G  R  O  M  A  C  S  (-:

Georgetown Riga Oslo Madrid Amsterdam Chisinau Stockholm

:-)  VERSION 4.6.3  (-:

Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
   Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
 Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
   Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
Michael Shirts, Alfons Sijbers, Peter Tieleman,

   Berk Hess, David van der Spoel, and Erik Lindahl.

   Copyright (c) 1991-2000, University of Groningen, The Netherlands.
 Copyright (c) 2001-2012,2013, The GROMACS development team at
Uppsala University  The Royal Institute of Technology, Sweden.
check out http://www.gromacs.org for more information.

 This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
 of the License, or (at your option) any later version.

  :-)  /opt/gromacs/4.6.3/bin/genbox_mpi  (-:

Option Filename  Type Description

 -cp 1AKI_newbox.gro  Input, Opt!  Structure file: gro g96 pdb tpr etc.
 -cs spc216.gro  Input, Opt!, Lib. Structure file: gro g96 pdb tpr etc.
 -ci insert.gro  Input, Opt.  Structure file: gro g96 pdb tpr etc.
  -o   solvated.gro  Output   Structure file: gro g96 pdb etc.
  -p  topol.top  In/Out, Opt! Topology file

Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[no]version bool   no  Print version info and quit
-niceint19  Set the nicelevel
-box vector 0 0 0   Box size
-nmolint0   Number of extra molecules to insert
-try int10  Try inserting -nmol times -try times
-seedint1997Random generator seed
-vdwdreal   0.105   Default van der Waals distance
-shell   real   0   Thickness of optional water layer around solute
-maxsol  int0   Maximum number of solvent molecules to add if
they fit in the box. If zero (default) this is
ignored
-[no]vel bool   no  Keep velocities from input solute and solvent

Reading solute configuration
LYSOZYME
Containing 2194 atoms in 207 residues
Initialising van der waals distances...

WARNING: Masses and atomic (Van der Waals) radii will be guessed
 based on residue and atom names, since they could not be
 definitively assigned from the information in your input
 files. These guessed numbers might deviate from the mass
 and radius of the atom type. Please check the output
 files if necessary.

Reading solvent configuration
216H2O,WATJP01,SPC216,SPC-MODEL,300K,BOX(M)=1.86206NM,WFVG,MAR. 1984
solvent configuration contains 648 atoms in 216 residues

Initialising van der waals distances...
Will generate new solvent configuration of 4x4x4 boxes
Generating configuration
Sorting configuration
Found 1 molecule type:
SOL (   3 atoms): 13824 residues
Calculating Overlap...
box_margin = 0.315
Removed 0 atoms that were outside the box
1587-119 Error: Invalid number of threads defined.


Why does the system return the error Invalid number of threads defined?

Best Regards
Lukasz


Re: [gmx-users] genbox_mpi and Error: Invalid number of threads defined

2014-07-25 Thread Justin Lemkul



On 7/25/14, 11:20 AM, lswierczewski . wrote:

Dear GROMACS Users,

I have a problem when running genbox_mpi.

Below is information from the console.



[lswiercz@nostromo ~/gromacs]$ srun --nodes 1 genbox_mpi -cp
1AKI_newbox.gro -cs spc216.gro -p topol.top -o solvated.gro
srun: job 5026 queued and waiting for resources
srun: job 5026 has been allocated resources
  :-)  G  R  O  M  A  C  S  (-:

 Georgetown Riga Oslo Madrid Amsterdam Chisinau Stockholm

 :-)  VERSION 4.6.3  (-:

 Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
  Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
 Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
 Michael Shirts, Alfons Sijbers, Peter Tieleman,

Berk Hess, David van der Spoel, and Erik Lindahl.

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
  Copyright (c) 2001-2012,2013, The GROMACS development team at
 Uppsala University  The Royal Institute of Technology, Sweden.
 check out http://www.gromacs.org for more information.

  This program is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public License
 as published by the Free Software Foundation; either version 2.1
  of the License, or (at your option) any later version.

   :-)  /opt/gromacs/4.6.3/bin/genbox_mpi  (-:

Option Filename  Type Description

  -cp 1AKI_newbox.gro  Input, Opt!  Structure file: gro g96 pdb tpr etc.
  -cs spc216.gro  Input, Opt!, Lib. Structure file: gro g96 pdb tpr etc.
  -ci insert.gro  Input, Opt.  Structure file: gro g96 pdb tpr etc.
   -o   solvated.gro  Output   Structure file: gro g96 pdb etc.
   -p  topol.top  In/Out, Opt! Topology file

Option   Type   Value   Description
--
-[no]h   bool   no  Print help info and quit
-[no]version bool   no  Print version info and quit
-niceint19  Set the nicelevel
-box vector 0 0 0   Box size
-nmolint0   Number of extra molecules to insert
-try int10  Try inserting -nmol times -try times
-seedint1997Random generator seed
-vdwdreal   0.105   Default van der Waals distance
-shell   real   0   Thickness of optional water layer around solute
-maxsol  int0   Maximum number of solvent molecules to add if
 they fit in the box. If zero (default) this is
 ignored
-[no]vel bool   no  Keep velocities from input solute and solvent

Reading solute configuration
LYSOZYME
Containing 2194 atoms in 207 residues
Initialising van der waals distances...

WARNING: Masses and atomic (Van der Waals) radii will be guessed
  based on residue and atom names, since they could not be
  definitively assigned from the information in your input
  files. These guessed numbers might deviate from the mass
  and radius of the atom type. Please check the output
  files if necessary.

Reading solvent configuration
216H2O,WATJP01,SPC216,SPC-MODEL,300K,BOX(M)=1.86206NM,WFVG,MAR. 1984
solvent configuration contains 648 atoms in 216 residues

Initialising van der waals distances...
Will generate new solvent configuration of 4x4x4 boxes
Generating configuration
Sorting configuration
Found 1 molecule type:
 SOL (   3 atoms): 13824 residues
Calculating Overlap...
box_margin = 0.315
Removed 0 atoms that were outside the box
1587-119 Error: Invalid number of threads defined.


Why does the system returns the Error: Invalid number of threads defined?



None of the tools are MPI-aware.  The error is probably a result of incorrect 
compilation.
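
For solvation the serial tool is all you need; assuming a non-MPI build is also 
installed, something like:

genbox -cp 1AKI_newbox.gro -cs spc216.gro -p topol.top -o solvated.gro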


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Justin Lemkul



On 7/25/14, 11:22 AM, Tamas Karpati wrote:

Dear Justin,


Does your topology specify the proper intramolecular exclusions?  What
is(are) the molecule(s)?


No bonds, no exclusions. The whole crystal is modelled by ions
interacting via forces of the Coulomb and Buckingham types.
In fact, there is an X-Y-X angle force type which does have an
effect on forces and energy. Shell particles, on the other hand,
are defined via the [polarization] section as quasi-bonds. That's all.



Coulombic interactions fail at short distances; you probably need to apply Thole 
screening to avoid polarization catastrophe.  Ions are particularly problematic 
in this regard.
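
If you go that route, the topology gets a [ thole_polarization ] entry for each pair of 
interacting core/shell pairs, roughly along these lines (the atom numbers, screening 
factor and polarizabilities are purely illustrative, and the exact column layout should 
be checked against the topology-file table in the manual):

[ thole_polarization ]
; ai   aj   ak   al   funct   a     alpha1    alpha2
  1    2    3    4    1       2.6   0.00108   0.00108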



Try nstlist = 1.  The shell positions are solved via SCF (EM), so you need
to update the neighbor list very frequently.


Thanks for the trick. Tried but to no avail (very same results).
Although shells should have their weight in the model, I expect
them not to frequently change partner -each is ultimately connected
to its core atomic site.
In addition, there are no more bonds in the system i, thus, don't
see the point in regenerating pairs. Is there a process -I should
get to know about- which autogenerates bonds?



If your model doesn't use them, then there's nothing to be done here.  I was 
asking about bonded structure and exclusions and such because of the Thole issue 
I noted above.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Tamas Karpati
Dear Justin,

 Does your topology specify the proper intramolecular exclusions?  What
 is(are) the molecule(s)?

No bonds, no exclusions. The whole crystal is modelled by ions
interacting via forces of the Coulomb and Buckingham types.
In fact, there is an X-Y-X angle force type which does have an
effect on forces and energy. Shell particles, on the other hand,
are defined via the [polarization] section as quasi-bonds. That's all.

 Try nstlist = 1.  The shell positions are solved via SCF (EM), so you need
 to update the neighbor list very frequently.

Thanks for the tip. I tried it, but to no avail (the very same results).
Although the shells should have their weight in the model, I expect
them not to change partners frequently, since each is ultimately connected
to its core atomic site.
In addition, there are no other bonds in the system, so I don't
see the point in regenerating pairs. Is there a process I should
know about which autogenerates bonds?

Thanks a lot for your help.

Best regards,
  toma


Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Tamas Karpati
Dear Justin,

Thanks for your educational answers.

 Coulombic interactions fail at short distances; you probably need to apply

I was afraid of that... Somehow displacing the shells from the cores in the
initial structure has let it work functionally (though not yet with reasonable
results).

 Thole screening to avoid polarization catastrophe.  Ions are particularly
 problematic in this regard.

I've seen this mentioned in the manual, but I have never found GROMACS-specific
details on how to apply polarization in the input files. The only source I could
locate is within the GROMACS package, under the name sw.itp, and as far as I can
tell it implements only the so-called water polarization model.

Can you please direct me to a source from which I could learn
how to use polarization in GROMACS? I had no luck on the Internet, nor even at the
GROMACS site or in the manual (neither application examples nor file-format
descriptions).

I appreciate your help so much.

With regards,
  toma


Re: [gmx-users] Gromacs performance on virtual servers

2014-07-25 Thread Szilárd Páll
On Fri, Jul 25, 2014 at 12:33 PM, Mark Abraham mark.j.abra...@gmail.com wrote:
 On Fri, Jul 25, 2014 at 1:51 AM, Szilárd Páll pall.szil...@gmail.com
 wrote:

 Hi

 In general, virtualization will always have an overhead, but if done
 well, the performance should be close to that of bare metal. However,
 for GROMACS the ideal scenario is exclusive host access (including
 hypervisor) and thread affinities which will both depend on the
 hypervisor configuration. Hence, if you can, you should try to get
 access to virtual hosts that fully utilize a compute node and do not
 share it with others.


 Definitely.


 On Fri, Jul 25, 2014 at 12:31 AM, Mark Abraham mark.j.abra...@gmail.com
 wrote:
  Hi,
 
  Except for huge simulation systems, GROMACS performance past a single
 node
  is dominated by network latency, so unless you can extract a promise that
  any multi-node runs will have Infiniband-quality latency (because the
 nodes
  are physically in the same room, and on Infiniband) you can forget about
  doing multi-node MD on such a system.

 Two remarks:

 * With a slow network the only parallelization you can potentially
 make use of is multi-sim, unless your environment is so cloud-y that
 some nodes can have tens to hundreds of ms latency, which can kill even
 your multi-sim performance (depending on how fast each simulation is
 and how often they sync).


 I would not encourage multi-sim on such a setup, unless you actually want
 replica exchange. The multi-sim implementation unnecessarily syncs
 simulations every min(nstlist,nstcalcenergy,nstreplex) step, so that might
 be ~tens of times per second. Unnecessary multi-sim is good for pretending
 you are doing a big parallel calculation to get access to a large chunk of
 a machine, but this is not really the case here.

Good point, I fully agree!

What I meant was that one can make quite efficient use of even cheap
Ethernet networks for multi-sim-type workloads where there is a genuine
need for communication between simulations (but not too often). For
instance, replica-exchange-type runs work quite OK.

--
Szilárd

 * I've seen several claims that *good* 10/40G Ethernet can get close
 to IB even in latency, even for MD, and even for GROMACS, e.g.:
 http://goo.gl/JrNxKf, http://goo.gl/t0z15f


 Interesting, thanks.

 Mark




 Cheers,
 --
 Szilárd

  Mark
 
 
  On Thu, Jul 24, 2014 at 10:54 PM, Elton Carvalho elto...@if.usp.br
 wrote:
 
  Dear Gromacs Users,
 
  My former university is focusing on cloud computing instead of
  physical servers, so research groups are now expected to buy virtual
  servers from the university cloud instead of buying their own
  clusters.
 
  The current setup employs Xeon E7-2870 servers and there is a
  university-wide virtual cluster with 50 virtual servers, each with 10
  CPUs.
 
  Does anyone here have information on gromacs performance on this kind
  of infrastructure? Should I expect big issues?
 
  One thing that comes to mind is that the CPUs may not necessarily be
  in the same physical server, rack, or even datacenter (their plan is
  to decentralize the colocation), so network latency may be higher than
  the traditional setup, which may affect scaling. Does this argument
  make sense or am I missing something on cloud management 101?
 
  Cheers.
  --
  Elton Carvalho
  Departamento de Física
  Universidade Federal do Paraná

Re: [gmx-users] time accounting in log file with GPU

2014-07-25 Thread Mark Abraham
They report the time since the step that the timers were reset. The log
file will note this event. Whether load is balanced by then/ever depends on
the load.

Mark
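
Pulling the flags from this thread together, a benchmark invocation might look
roughly like

  mdrun -deffnm bench -noconfout -resethway -nsteps 20000

where the -deffnm name and the step count are placeholders, and -resetstep or
-maxh can be used instead, as suggested below.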
On Jul 25, 2014 7:31 PM, Sikandar Mashayak symasha...@gmail.com wrote:

 Thanks Szilárd.

  I am a bit confused about the -resethway and -resetstep options. Do they
  exclude the time spent on initialization and load balancing from the total
  time reported in the log file, i.e., is the reported time only the time
  spent in the loop/iterations over time steps?

 Thanks,
 Sikandar


 On Thu, Jul 24, 2014 at 4:30 PM, Szilárd Páll pall.szil...@gmail.com
 wrote:

  On Fri, Jul 25, 2014 at 12:48 AM, Sikandar Mashayak
  symasha...@gmail.com wrote:
   Thanks Mark. -noconfout option helps.
 
   For benchmarking purposes, in addition to -noconfout I suggest also using:
  * -resethway or -resetstep: to exclude initialization and
  load-balancing at the beginning of the run to get a more realistic
  performance measurement from a short run
  * -nsteps N or -maxh: the former is useful if you want to directly
  compare (e.g. two-sided diff) the timings from the end of the log
  between multiple runs
 
  Cheers,
  --
  Szilárd
 
  
   --
   Sikandar
  
  
   On Thu, Jul 24, 2014 at 3:25 PM, Mark Abraham 
 mark.j.abra...@gmail.com
   wrote:
  
   On Fri, Jul 25, 2014 at 12:12 AM, Sikandar Mashayak 
  symasha...@gmail.com
   wrote:
  
Hi
   
 I am running a benchmark test with the GPU. The system consists of simple
 LJ atoms, and I am running only a very basic simulation in the NVE ensemble,
 not writing any trajectories or energy values. My grompp.mdp file is
 attached below.

 However, in the time accounting table in md.log, I observe that the write
 traj. and comm. energies operations each take 40% of the time. So my
 question is: even though I have specified not to write trajectories and
 energies, why is 80% of the time being spent on those operations?
   
  
    Because you're writing a checkpoint file (hint: use mdrun -noconfout), and
    that load is imbalanced, so the other cores wait for it in the global
    communication stage in Comm. energies (fairly clear, since they have the
    same Wall time). Hint: make benchmarks run for about a minute, so you are
    not dominated by setup and load-balancing time. Your compute time was
    about 1/20 of a second...
  
   Mark
  
  
Thanks,
Sikandar
   
 R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G

 On 2 MPI ranks

 Computing:             Num   Num     Call    Wall time   Giga-Cycles
                        Ranks Threads Count   (s)         total sum     %
 ----------------------------------------------------------------------------
 Domain decomp.            2     1      11       0.006        0.030     2.1
 DD comm. load             2     1       2       0.000        0.000     0.0
 Neighbor search           2     1      11       0.007        0.039     2.7
 Launch GPU ops.           2     1     202       0.007        0.036     2.5
 Comm. coord.              2     1      90       0.002        0.013     0.9
 Force                     2     1     101       0.001        0.003     0.2
 Wait + Comm. F            2     1     101       0.004        0.020     1.4
 Wait GPU nonlocal         2     1     101       0.004        0.020     1.4
 Wait GPU local            2     1     101       0.000        0.002     0.2
 NB X/F buffer ops.        2     1     382       0.001        0.008     0.6
 Write traj.               2     1       1       0.108        0.586    40.2
 Update                    2     1     101       0.005        0.025     1.7
 Comm. energies            2     1      22       0.108        0.588    40.3
 Rest                                            0.016        0.087     5.9
 ----------------------------------------------------------------------------
 Total                                           0.269        1.459   100.0
 ----------------------------------------------------------------------------
   
   
grompp.mdp file:
   
integrator   = md-vv
dt   = 0.001
nsteps   = 100
nstlog   = 0
nstcalcenergy= 0
cutoff-scheme= verlet
ns_type  = grid
nstlist  = 10
pbc  = xyz
rlist= 0.7925
vdwtype  = Cut-off
rvdw = 0.7925
rcoulomb = 0.7925
gen_vel  = yes
gen_temp = 296.0

Re: [gmx-users] time accounting in log file with GPU

2014-07-25 Thread Sikandar Mashayak
Got it! Thanks Mark.



Re: [gmx-users] hints for core/shell optimization?

2014-07-25 Thread Justin Lemkul



On 7/25/14, 12:40 PM, Tamas Karpati wrote:

Dear Justin,

Thanks for your educational answers.


Coulombic interactions fail at short distances; you probably need to apply


I was afraid of that... somehow separating the shells from the cores in the
initial structure has let it work functionally (though not yet with
reasonable results).


Thole screening to avoid polarization catastrophe.  Ions are particularly
problematic in this regard.


I've seen this mentioned in the Manual, but have never come across
GROMACS-specific details on how to apply polarization in the input files.
The only source I could locate is within the GROMACS package,
under the name sw.itp. It exclusively implements the
so-called water polarization model, at least I think so.



The water polarization function is a water-specific anisotropy function. 
Don't try to use it for anything else; the interpretation of the atom numbers 
for local axis construction is very specific.



Can you please direct me to a source from which I could learn
how to set up polarization in GROMACS? I wasn't lucky on the Internet,
nor indeed at the GROMACS site or in the Manual (neither application
examples nor file format descriptions).



The Thole screening function is (in the released version) not used by anything, 
so it's not documented.  In its present incarnation, you need a 
[thole_polarization] directive that lists atom-shell/Drude pairs as follows:


atom_i shell_i atom_j shell_j 2 a alpha_i alpha_j

The 2 is a required function type.  My implementation of the CHARMM Drude FF 
is nearly done, and there will be changes to the way the Thole directive is laid 
out in the future, but at the moment (up through version 5.0), this is the way it 
works.  The code is in src/gromacs/gmxlib/bondfree.c.
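
For example, a hypothetical entry for two atom-shell pairs could read (the
indices, the screening factor a and the polarizabilities are made up purely
for illustration):

[ thole_polarization ]
;  ai  si  aj  sj  funct    a     alpha_i   alpha_j
    1   2   3   4    2     2.6    0.00088   0.00026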


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Can't run v. 4.6.6 on worker nodes if they are compiled with SIMD

2014-07-25 Thread Justin Lemkul



On 7/25/14, 4:17 PM, Seyyed Mohtadin Hashemi wrote:

On Fri, Jul 25, 2014 at 3:00 PM, Seyyed Mohtadin Hashemi haa...@gmail.com
wrote:


Hi everyone,

I'm having a very weird problem with GROMACS 4.6.6:

I am currently testing out GPU capabilities and was trying to compile
GROMACS with CUDA (v6.0). I cannot make this work if I compile GROMACS
with SIMD, no matter what kernel I choose; I have tried everything from
SSE2 to AVX_256.

The log-in node, where I compile, has AMD Interlagos CPUs (worker nodes
use Xeon E5-2630 and are equipped with Tesla K20), but I do not think this
is the problem - I have compiled GROMACS, using the log-in node, without
CUDA but with AVX_256 SIMD and everything works. As soon as CUDA is added
to the mix, I get an Illegal Instruction error every time I try to run on the
worker nodes.

Compiling on the worker nodes gives the same result. However, as soon as I set
SIMD=None everything works and I am able to run simulations using GPUs; this
is regardless of whether I use the log-in node or a worker node to compile.


The cmake string used to configure is:
ccmake .. -DCMAKE_INSTALL_PREFIX=/work/gromacs4gpu -DGMX_DOUBLE=OFF
-DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_4gpu -DGMX_LIBS_SUFFIX=_4gpu
-DGMX_GPU=ON -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON
-GMX_MPI=OFF -DGMX_CPU_ACCELERATION=AVX_256

CUDA v6.0 and FFTW v3.3.4 (single precision) libs are set globally and
correctly identified by GROMACS. To rule out OpenMPI as the problem I am
compiling without it (compiling with OpenMPI produced the same behavior as
without); once I have found the error I will compile with OpenMPI v1.6.5.

I get these warnings during the configuration, nothing important:

  A BLAS library was not found by CMake in the paths available to it.
Falling back on the GROMACS internal version of the BLAS library instead.
This is fine for normal usage.

  A LAPACK library was not found by CMake in the paths available to it.
Falling back on the GROMACS internal version of the LAPACK library instead.
This is fine for normal usage.

I am currently trying to compile and test GROMACS 5.0 to see if it also
exhibits the same behavior.

I hope that someone can point me in the direction of a possible solution;
if not, I will file a bug report.

Regards,
Mohtadin



Forgot to attach the md.log. Please find attached the md.log for both v4.6.6
and v5.0.



The list does not accept attachments.  Please post the files somewhere for 
download.

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] Can't run v. 4.6.6 on worker nodes if they are compiled with SIMD

2014-07-25 Thread Seyyed Mohtadin Hashemi

 Unfortunately, v5.0 seems to exhibit the same behavior as v4.6.6.


The logs:
4.6.6: http://pastebin.com/vxagEEZC
5.0: http://pastebin.com/eMemFq1J


[gmx-users] electrostatics forces and van der waals forces calculation

2014-07-25 Thread Andy Chao
Dear GROMACS Users:

Would you please let me know how to calculate/extract the electrostatic
forces and van der Waals forces of an ionic liquid structure in GROMACS?
Which GROMACS command should I use?  g_enemat?  g_potential?

Thanks a lot!

Andy