[gmx-users] H-bond calculation

2015-09-16 Thread RJ
Dear gmx,


I would like to calculate the H-bond occupancy between two residues within the same
molecule: Thr183 (OG1) - Tyr162 (N).


I made an .ndx file selecting both atoms:
Group 40 ( r_162_&_N ) has 1 element
Group 41 ( r_183_&_OG1 ) has 1 element


and ran:
gmx hbond -f prd_noPBC.xtc -s em.tpr -n index.ndx -dist hbdist.xvg -hbn hbond.ndx -hbm hbmap.xpm -tu ns


but it ends with the following output:
Found 0 donors and 2 acceptors
Making hbmap structure...done.
No Donors found.




How can I calculate this intramolecular residue-residue hydrogen bond over time, as
well as its occupancy as a percentage? Thanks.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] H-bond calculation

2015-09-16 Thread Erik Marklund
Dear RJ,

I don’t remember exactly how the groups are treated internally, but I think you 
might need to include the hydrogen in the donor group.
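For example, an index file along these lines might work (a sketch only: the atom
numbers are placeholders, and the name of the threonine hydroxyl hydrogen, HG1 here,
depends on the force field, so check your topology):

[ donor_Thr183 ]
; OG1 of Thr183 and its hydroxyl hydrogen (replace with the real atom numbers)
2841 2842
[ acceptor_Tyr162_N ]
; backbone N of Tyr162 (replace with the real atom number)
2530

gmx hbond -f prd_noPBC.xtc -s em.tpr -n index.ndx -num hbnum.xvg -dist hbdist.xvg -hbn hbond.ndx -hbm hbmap.xpm -tu ns

and then pick the donor group and the acceptor group when prompted.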

Kind regards,
Erik


Re: [gmx-users] problem implementing REST2 in GROMACS 4.6.5

2015-09-16 Thread Mark Abraham
Hi,

That does look like a code problem, but I can't imagine how. Can you please
open an issue at http://redmine.gromacs.org, and upload .tpr and .log files
so we can see what is going on?

Thanks!

Mark

On Mon, Sep 14, 2015 at 1:22 PM Elio Fiorentini 
wrote:

> I report an example :
>
> file md0.log
> Step           Time         Lambda
>  200         2000.2            0.0
>
>    Potential  Total Energy
>  -3.15120e+05  -2.53064e+05
>
> Replica exchange at step 200 time 2000.2
> Repl 0 <-> 1  dE_term =  0.000e+00 (kT)
> Repl ex  0 x  1    2 x  3    4 x  5    6 x  7    8 x  9   10 x 11
> Repl pr   1.0   1.0   1.0   1.0   1.0   1.0
>
>
> ---
>
> file md1.log
> Step           Time         Lambda
>  200         2000.2        0.14100
>
>    Potential  Total Energy
>  -3.15895e+05  -2.53743e+05
>
> Replica exchange at step 200 time 2000.2
> Repl 0 <-> 1  dE_term =  0.000e+00 (kT)
> Repl ex  0 x  1    2 x  3    4 x  5    6 x  7    8 x  9   10 x 11
> Repl pr   1.0   1.0   1.0   1.0   1.0   1.0
>
>
> 
>
> file md2.log
> Step           Time         Lambda
>  200         2000.2        0.26840
>
>    Potential  Total Energy
>  -3.16544e+05  -2.54358e+05
>
> Replica exchange at step 200 time 2000.2
> Repl 2 <-> 3  dE_term =  0.000e+00 (kT)
> Repl ex  0 x  1    2 x  3    4 x  5    6 x  7    8 x  9   10 x 11
> Repl pr   1.0   1.0   1.0   1.0   1.0   1.0
>
>
> --
>
> file md3.log
> Step           Time         Lambda
>  200         2000.2         0.3836
>
>    Potential  Total Energy
>  -3.17757e+05  -2.55540e+05
>
> Replica exchange at step 200 time 2000.2
> Repl 2 <-> 3  dE_term =  0.000e+00 (kT)
> Repl ex  0 x  1    2 x  3    4 x  5    6 x  7    8 x  9   10 x 11
> Repl pr   1.0   1.0   1.0   1.0   1.0   1.0
>
>
> ---
>
> and so on


[gmx-users] gromacs-5.1 installation error

2015-09-16 Thread Brett
Dear All,

When I tried to install the gromacs-5.1 by

"tar xfz gromacs-5.1.tar.gz
cd gromacs-5.1
mkdir build
cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
make
make check
sudo make install
source /usr/local/gromacs/bin/GMXRC".

at the "cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON" step I 
met the following errors thus the gromacs-5.1 cannot be installed. Will you 
please explain to me on how to have the issue settled so that I can have the 
gromacs-5.1 installed?

I am looking forward to getting a reply from you.

Best regards.

Brett



Apache/2.2.22 (Ubuntu) Server at gerrit.gromacs.org Port 80


  Connection #0 to host gerrit.gromacs.org left intact

  Issue another request to this URL:
  'https://kth.box.com/shared/static/348ua1yqu0rh8r2gpcf6m4zvpikaarxw.gz'

  libcurl was built with SSL disabled, https: not supported!

  unsupported protocol

  Closing connection #0



-- Configuring incomplete, errors occurred!


Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Mark Abraham
Hi,

I'm confused by your description of the cluster as having 8 GPUs and 16
CPUs. The relevant parameters are the number of GPUs and CPU cores per
node. See the examples at
http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-features.html#running-multi-simulations
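
For example (just a sketch, assuming a single node with 8 GPUs and 16 cores; adjust
the counts to your hardware and keep your other mdrun options), each of the 8 members
can be given its own GPU and two cores with something like:

mpirun -np 8 mdrun_mpi -multi 8 -ntomp 2 -gpu_id 01234567 -pin on -s md -o md -c after_md -x frame

Here -gpu_id lists one GPU id per PP rank on the node, so rank 0 gets GPU 0, rank 1
gets GPU 1, and so on.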

Mark

On Tue, Sep 15, 2015 at 11:38 PM Zimmerman, Maxwell 
wrote:

> Hello,
>
>
> I am having some troubles efficiently running simulations in parallel on a
> gpu-cluster. The cluster has 8 GPUs and 16 CPUs. Currently, the command
> that I am using is:
>
>
> mpirun -np 8 mdrun_mpi -multi 8 -nice 4 -s md -o md -c after_md -v -x
> frame -pin on
>
>
> Per-simulation, the performance I am getting with this command is
> significantly lower than running 1 simulation that uses 1 GPU and 2 CPUs
> alone. This command seems to use all 8 GPUs and 16 CPUs on the 8 parallel
> simulations, although I think this would be faster if I could pin each
> simulation to a specific GPU and pair of CPUs. The -gpu_id option does not
> seem to change anything when I am using the mpirun. Is there a way that I
> can efficiently run the 8 simulations on the cluster by specifying the GPU
> and CPUs to run with each simulation?
>
>
> Thank you in advance!
>
>
> Regards,
>
> -Maxwell


Re: [gmx-users] Atomic charges

2015-09-16 Thread Pallavi Banerjee
The forcefield is OPLS-AA. I put in the modified charges for the residue in
the aminoacids.rtp file. The topology generated also shows the same charges
that I put. But I was wondering whether the charges defined for the atom types in
ffnonbonded.itp override these.

Thanks and regards,

-Pallavi Banerjee


Re: [gmx-users] gromacs-5.1 installation error

2015-09-16 Thread Mark Abraham
Hi,

https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2015-July/098940.html
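
The error output above shows that the libcurl used by cmake was built without SSL, so
the https redirect for the regression test download fails. One possible workaround (a
sketch based on that error output, not a quote from the linked post) is to download
the regressiontests-5.1 tarball by hand with an SSL-capable tool or a browser, unpack
it, and point cmake at it instead of using -DREGRESSIONTEST_DOWNLOAD=ON:

tar xfz regressiontests-5.1.tar.gz
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_PATH=/path/to/regressiontests-5.1

Alternatively, a curl/libcurl with SSL support lets -DREGRESSIONTEST_DOWNLOAD=ON work
as intended.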

Mark



Re: [gmx-users] gmx dipoles with dynamic indices (gromacs 5.0.x)

2015-09-16 Thread Justin Lemkul



On 9/15/15 6:13 PM, Daskalakis Vangelis wrote:

Hello.
I want to use a dynamic selection scheme (select different atoms based on
the same selection criteria for each frame of a trajectory) and pass this
selection/ info to the gmx dipoles tool. It seems that the -select option
cannot be used.
I tried to use dynamic selection through gmx select. The gmx select output
(using the -on option, e.g an index.ndx file) was indeed a correct
selection of the molecules/ atoms within the system for each frame,
resulting in a large index.ndx file for the whole trajectory.
When I use the -n index.ndx option for the gmx dipoles, there is of course
this huge list of groups of atoms (one group per frame) showing up. I
cannot choose just one frame selection group, as I want the gmx dipoles
tool to process each frame based on the selected group of atoms for each
frame written in index.ndx by gmx select.
Any ideas or suggestions? Thank you.



You'll have to do the analysis frame by frame in a shell script.  Use the index 
group as a counter, and then calculate what the time is based on the interval 
you've saved and pass that value to gmx dipoles -b -e.  See also


http://www.gromacs.org/Documentation/How-tos/Using_Commands_in_Scripts

So your first call (for the first frame, t=0) is:

echo 0 | gmx dipoles -b 0 -e 0 -s *.tpr -f *.xtc/trr
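
A loop over all frames might then look like this (a sketch only; the frame spacing,
frame count, and file names below are placeholders for whatever your run actually
used, and group i in index.ndx is assumed to be the selection gmx select wrote for
frame i):

dt=10        # time between saved frames, in ps (placeholder)
nframes=100  # number of saved frames (placeholder)
for i in $(seq 0 $((nframes - 1))); do
    t=$((i * dt))
    # group $i in index.ndx is the per-frame selection written by gmx select
    echo $i | gmx dipoles -n index.ndx -s topol.tpr -f traj.xtc -b $t -e $t -o dipole_frame$i.xvg
done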

-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] miscelle formation using ligands only

2015-09-16 Thread Justin Lemkul



On 9/15/15 11:01 PM, Chetan Puri wrote:

  so what is the best option for topologies.



A force field you trust in concert with molecules that are individually 
parametrized using suitable target data.  There is a reason why parametrization 
is considered an expert topic.  While there are numerous web servers that will 
give you topologies for GROMOS, AMBER, and CHARMM, it is incumbent on the user 
to verify the topologies against any available target data and 
refine/reparametrize as needed based on the findings of those assessments. 
Nothing should be trusted blindly, from any source.


-Justin


On Wed, Sep 16, 2015 at 2:19 AM, Justin Lemkul  wrote:




On 9/15/15 11:35 AM, Chetan Puri wrote:


thanks for you help and today i was able to solve the problem,
actually my pdb file was made using packmole for 5 molecules and molecules
were prepared using prodrug , so my pdb file contained names of DRG A,
B,C,D,E



PRODRG produces notoriously bad parameters.  Don't use those topologies
directly and expect good results.

and i tried to change it to  SC3,CA1,D3G,. in the last packed PDB

file and also i made changes in the itp files. So in my topolgy prepared
file also i used the same names under molecules part. As a result of this
the grompp was not able to read my .itp files and showed error as too few
parameters on line. But then again i kept the  names as such and included
DRG_chain_A,B,C ... names in my topology file molecules part and
also.itp molecule name . There after grompp was able to read all the
files.
I received two notes :
note1 was for verlet scheme >10 and for gpu >20 .
note2 was for some PME load distribution
I have used ions.mdp file of the tutorial just to make .tpr file.
i hope this is of not great concern and i would also like to know that why
was grompp not able to read itp files even though i have placed the same
name in every file.



Your description is too confusing to be able to provide useful help.  There's
nothing anyone can actually do to help you without actual commands, files,
and real error messages, rather than what you're filtering through your
thoughts.

Sounds like you've hacked together a solution, though, so hopefully
everything matches up.  But like I said above, PRODRG topologies are not
reliable and reviewers should criticize their use heavily.  The problems
are well known.

-Justin




On Mon, Sep 14, 2015 at 10:13 PM, Justin Lemkul  wrote:




On 9/14/15 11:18 AM, Chetan Puri wrote:

i tried to prepare a topolgy file for my ligands and it contained

following
things,
#include "gromos43a1.ff/forcefield.itp"
#include "drg1.itp"
#include"drg2.itp"
#include"drg3.itp"
#include "gromos43a1.ff/spc.itp"

[system]
miscelle
[molecules]
drg1 8
drg2 5
drg3 7
sol   363408


But since i have packed the system using PACKMOL intially there were
some
error that no. of coordintaes of gro and top file are not matching since
intially i took no. of molecules as one for each type but later upon
changing to the no. as in my packmol input that error had gone and new
error is showing up
i.e. Too few parameters in line 1 for drg2.itp
  Too few parameters in line 1 for drg3.itp

and if i override it with maxwarn than i saw that all the ligands were
stuck together at one place and also with some different representation
.


Don't blow past error messages with -maxwarn.  It is extremely dangerous

and unless you have specific knowledge that the problem is not important,
don't use it.

The error messages indicate that the contents of drg2.itp and drg3.itp
are
incorrect or misformatted (or perhaps that the lack of a space between
#include and the file name is causing a problem, but that's just
speculation).

so can you please help me out with this thing and also is there any other


way by using gromacs and packing a system of different ligands (gromacs
version 5.0.4)


gmx insert-molecules can be used to add small molecules into a system,

but
if it's already built, why bother?

If you want to rebuild the system for any reason, see:

http://www.gromacs.org/Documentation/How-tos/Non-Water_Solvation
http://www.gromacs.org/Documentation/How-tos/Mixed_Solvents

If you want commentary on the contents of your .itp files, you will need
to upload them to a file-sharing service and provide the URL.  Otherwise,
we're working blind and that is not productive for anyone.


-Justin


Re: [gmx-users] Atomic charges

2015-09-16 Thread Justin Lemkul



On 9/16/15 12:27 AM, Pallavi Banerjee wrote:

Hello users,

I have this confusion regarding the atomic charges that gromacs sees. I
have defined my residues in aminoacids.rtp file with my own charges, but
the atom types for the same in the ffnonbonded.itp have different charges
on them. Which of the two charges would Gromacs pick?



The .rtp is always used.  The charges listed in ffnonbonded.itp do nothing; they 
are a relic of an ancient format or desire to implement something else.
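
For example, in the .rtp it is the charge column that ends up in the processed
topology (a generic sketch only, not copied from any particular force field file; the
residue name, atom names, types, and charge-group numbers are placeholders):

[ XXX ]
 [ atoms ]
;  name   type       charge   chargegroup
    N     opls_238   -0.500    1
    H     opls_241    0.300    1

Whatever charge appears next to those atom types in ffnonbonded.itp is simply ignored.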


-Justin



[gmx-users] Problem in running CPMD with gmx-3.3.1

2015-09-16 Thread Padmani Sandhu
Hello all,

I am running CPMD v4.1 in hybrid with gmx-3.3.1_qmmm-1.3.2. While running
the qmmm_example ethane_em, mdrun invoked CPMD, but the process stopped with an
error message without completing the energy minimization:


MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 999.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--

 RESTART INFORMATION WRITTEN ON FILE  ./RESTART.1
 ***RWFOPT| SIZE OF THE PROGRAM IS NOT AVAILABLE  ***

 
 *  *
 *FINAL RESULTS *
 *  *
 

 
 *  ATOMIC COORDINATES  *
 
   1   H   9.953343   9.349599   8.858271
   2   H   7.204760   7.459873   8.574812
   3   H   7.204760  10.048798  10.634613
   4   H   7.204760  10.540127   7.365387
   5   C   7.885061   9.349599   8.858271
 


 


 ELECTRONIC GRADIENT:
MAX. COMPONENT =8.50745E-06 NORM =6.64506E-07

 TOTAL INTEGRATED ELECTRONIC DENSITY
IN G-SPACE = 8.00
IN R-SPACE = 8.00

 (K+E1+L+N+X)   TOTAL ENERGY =   -8.04332535 A.U.
 (K)  KINETIC ENERGY =6.74712546 A.U.
 (E1=A-S+R) ELECTROSTATIC ENERGY =   -5.85407209 A.U.
 (S)   ESELF =6.64903801 A.U.
 (R) ESR =0.71345062 A.U.
 (L)LOCAL PSEUDOPOTENTIAL ENERGY =   -6.33194050 A.U.
 (N)  N-L PSEUDOPOTENTIAL ENERGY =0.56632617 A.U.
 (X) EXCHANGE-CORRELATION ENERGY =   -3.17076439 A.U.
  GRADIENT CORRECTION ENERGY =   -0.20084053 A.U.

 

2. The local error log shows this error:

 process id's: 0, 0, 0
 process stops in file: /home/shalu/CPMD/src/egointer_utils.mod.F90
   at line: 177
   in procedure: INTERFACE
 error message: allocation problem
 call stack:
1  cpmd


3. I have tried to run the CPMD_inp.run file (generated by the standard ./rgmx
script in the example) directly with CPMD in a terminal; it terminated with the
same error.

4. In the next step I replaced the lines

 INTERFACE GMX
 MOLECULE CENTER OFF


with the corresponding section from the cpmd_inp.run file of the CPMD test examples,
which contains:


  OPTIMIZE WAVEFUNCTION geometry
  molecular dynamics
  restart accumulators wavefunction coordinates velocities cell
  restart nosec nosep nosee
  restart latest
  quench bo
  ODIIS
   5
  MAXSTEP
   1000
  STORE
   5000
  TIMESTEP
   7.0
  EMASS
   500.0
  COMPRESS WRITE32





and after that CPMD worked and exited successfully.
Can someone help me figure out the problem with the gmx/CPMD interface?







*With regards,*
*Padmani*


Re: [gmx-users] H-bond calculation

2015-09-16 Thread Justin Lemkul



On 9/16/15 4:06 AM, Erik Marklund wrote:

Dear RJ,

I don’t remember exactly how the groups are treated internally, but I think you 
might need to include the hydrogen in the donor group.



Correct, otherwise there's no way to calculate the H-D-A angle.
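
Once a proper donor group is used, the occupancy over the trajectory can be estimated
from the -num output, for example with something like this (a sketch; it assumes the
default hbnum.xvg layout, with time in column 1 and the number of hydrogen bonds in
column 2):

awk '$1 ~ /^[@#]/ {next} {n++; if ($2 > 0) f++} END {printf "%.1f%%\n", 100*f/n}' hbnum.xvg

i.e. the percentage of analysed frames in which at least one hydrogen bond between
the two groups is present.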

-Justin


Re: [gmx-users] Problem in running CPMD with gmx-3.3.1

2015-09-16 Thread Mark Abraham
Hi,

There's nothing here that points to a GROMACS-related issue. I think you
should take up the question with the CPMD community.

Mark



Re: [gmx-users] Fixing periodicity effects on trajectory file

2015-09-16 Thread Justin Lemkul



On 9/16/15 1:23 AM, Homa rooz wrote:

Dear Justin,
If you are asking about verifying the docking process: I worked with a single-chain
protein, and the best-ranked binding energy from AutoDock 4.2 was -7.41 kcal/mol. The
binding site is located on a turn structure at the edge of the protein, which seems
unstable. I didn't check the result with any other software.



Then what Michael said is likely true: your ligand dissociated and what you are 
observing is a real effect and not simply a PBC artifact.


-Justin


Re: [gmx-users] problem implementing REST2 in GROMACS 4.6.5

2015-09-16 Thread Elio Fiorentini
Hi, Mark.
Thanks a lot for your kind reply. I am filing the issue on the website you
mentioned. Please let me know at your earliest convenience whether the
problem can be sorted out or it is better to resort to a previous gromacs
version in the meanwhile (it seems that REST2 can be implemented easily on
4.5.5, although I have not tried that, since it is not installed on our
cluster), so as to get some results on this system in a reasonable time.
Thanks a lot for your help!
Elio



Re: [gmx-users] Problem with gmx-distance

2015-09-16 Thread Teemu Murtola
No, the atoms can be in a single group for this simple case of an atom-atom
distance.

The next line (after the prompt that you quote) should say something like
"(one per line, ..., Ctrl-D to end)". Following those instructions should
help (i.e., entering Ctrl-D).
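
For example, a non-interactive call might look like this (a sketch; it assumes the
two-atom group is the first group in ind2.ndx):

gmx distance -n ind2.ndx -f trj1_WT_md_40.xtc -s wt_md.gro -oav distav.xvg -oall dist.xvg -select 'group 0'

With exactly two positions in the selection, gmx distance reports that single
atom-atom distance for every frame.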

Best regards,
Teemu

On Wed, Sep 16, 2015, 17:01 Justin Lemkul  wrote:

>
>
> On 9/16/15 9:06 AM, Timofey Tyugashev wrote:
> > I want to get the distance between atoms 3770 and 5182 in my trajectory
> > After consulting with the Manual I write command:
> > gmx distance -n ind2.ndx -f trj1_WT_md_40.xtc -s wt_md.gro -oav
> distav.xvg -oall
> > dist.xvg
> > and index file ind2.ndx:
> > [ LYS_241_ODG_C1' ]
> > 3770 5182
> >
> > The program responds like this:
> >
> > Available static index groups:
> >   Group  0 "LYS_241_ODG_C1'" (2 atoms)
> > Specify any number of selections for option 'select'
> > (Position pairs to calculate distances for):
> >
> > If I pick 0, it responds 'Selection '0' parsed' and then nothing happens.
> > What to do next? The manual is incredibly murky and there are no
> examples in the
> > net.
>
> Your atoms can't be in the same group.  That was the same as with g_dist.
>
> [ atom1 ]
> 3770
> [ atom2 ]
> 5182
>
> gmx distance -n -f -s -oav -oall -select 'com of group "atom1" plus com of
> group
> "atom2"'
>
> See "gmx help selections" and specifically "gmx help selections examples"
> for
> syntax of selections and a few simple examples.
>
> -Justin
>


[gmx-users] g_traj + g_analyze and core dumped error

2015-09-16 Thread gozde ergin
Dear gromacs users;

I am trying to estimate the force-force autocorrelation function.
To do that, I first run the command:

 'g_traj -f tra.trr -s topol.tpr -of force.xvg'

My force.xvg file contains the forces on all atoms in the simulation box, which has
9566 atoms.
Then I run the command:

'g_analyze -f force.xvg -ac autocorr.xvg'

However, I get a core dumped error. Here is a snapshot from the screen:










File force.xvg does not end with a newline, ignoring the last line
File force.xvg does not end with a newline, ignoring the last line
File force.xvg does not end with a newline, ignoring the last line
File force.xvg does not end with a newline, ignoring the last line
Invalid line in force.xvg:
Using zeros for the last 7819 sets.


What could be the reason for this error?
I have enough free space and I am using gromacs 4.6.6.

bests


Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Zimmerman, Maxwell
Hi Mark,

Sorry for the confusion, what I meant to say was that each node on the cluster 
has 8 GPUs and 16 CPUs.

When I attempt to specify the GPU IDs for running 8 simulations on a node using 
the "-multi" and "-gpu_id", each .log file has the following:

"8 GPUs user-selected for this run.
Mapping of GPUs to the 8 PP ranks in this node: #0, #1, #2, #3, #4, #5, #6, #7"

This makes me think that each simulation is competing for each of the GPUs, 
explaining my performance loss per simulation compared to running 1 simulation 
on 1 GPU and 2 CPUs. If this interpretation is correct, is there a better way 
to pin each simulation to a single GPU and 2 CPUs? If my interpretation is 
incorrect, is there a more efficient way to use the "-multi" option to match 
the performance I see of running a single simulation * 8?

Regards,
-Maxwell





Re: [gmx-users] miscelle formation using ligands only

2015-09-16 Thread Chetan Puri
Thanks for the suggestion.
I used a semi-empirical NDDO method to determine the charges of each molecule and
found a huge difference between the charges provided by PRODRG and those calculated
with the semi-empirical NDDO method. So can I modify the charges in the .itp files
of the various molecules and use them?


Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Mark Abraham
Hi,


On Wed, Sep 16, 2015 at 4:41 PM Zimmerman, Maxwell 
wrote:

> Hi Mark,
>
> Sorry for the confusion, what I meant to say was that each node on the
> cluster has 8 GPUs and 16 CPUs.
>

OK. Please note that "CPU" is ambiguous, so you should prefer not to use it
without clarification.

Unless the GPUs are weak and the CPU is strong, 2 CPU cores per GPU will
likely be under-powered for PME simulations in GROMACS.

When I attempt to specify the GPU IDs for running 8 simulations on a node
> using the "-multi" and "-gpu_id", each .log file has the following:
>
> "8 GPUs user-selected for this run.
> Mapping of GPUs to the 8 PP ranks in this node: #0, #1, #2, #3, #4, #5,
> #6, #7"
>
> This makes me think that each simulation is competing for each of the GPUs


You are running 8 simulations, each of which has a single domain, each of
which is mapped to a single PP rank, each of which is mapped to a different
single GPU. Perfect.

explaining my performance loss per simulation compared to running 1
> simulation on 1 GPU and 2 CPUs.


Very likely you are not comparing with what you think you are, e.g. you
need to compare with an otherwise empty node running something like

mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on

so that you actually have a single process running on two pinned CPU cores
and a single GPU. This should be fairly comparable with the mdrun -multi
setup.

A side-by-side diff of that log file and the log file of the 0th member of
the multi-sim should show very few differences until the simulation starts,
and comparable performance. If not, please share your .log files on a
file-sharing service.

If this interpretation is correct, is there a better way to pin each
> simulation to a single GPU and 2 CPUs? If my interpretation is incorrect,
> is there a more efficient way to use the "-multi" option to match the
> performance I see of running a single simulation * 8?
>

mdrun will handle all of that correctly if it hasn't been crippled by how
the MPI library has organized life. You want it to assign ranks to cores
that are close to each other and their matching GPU. That tends to be the
default behaviour, but clusters intended for node sharing can do weird
things. (It is not yet clear that any of this is a problem.)

Mark



Re: [gmx-users] gmx dipoles with dynamic indices (gromacs 5.0.x)

2015-09-16 Thread Daskalakis Vangelis
Thank you Justin,
Yes, that will do the job. I thought there might be another way, but it seems I can't
avoid doing it frame by frame.
Thank you again.



Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Zimmerman, Maxwell
Hi Mark,

Thank you for the feedback.

To ensure that I am making a proper comparison, I tried running:
mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on
and I still see the same pattern; running a single simulation with 1 GPU and 2 
CPUs performs nearly twice as well as running 8 simulations with "-multi" using 
8 GPUs and 16 CPUs.

Just to clarify, when I use "-multi" all 8 of the .log files show that 8 GPUs 
are selected for the run. If a single GPU were being used, wouldn't it only 
show mapping to one GPU ID per .log file?

Regards,
-Maxwell



Re: [gmx-users] g_traj + g_analyze and core dumped error

2015-09-16 Thread gozde ergin
Also I did the same calculation for only one atom and did not get any error.


On Wed, Sep 16, 2015 at 4:39 PM, gozde ergin  wrote:

> Dear gromacs users;
>
> I am trying to estimate the force-force auto correlation function.
> To do that first I run the command of :
>
>  'g_traj -f tra.trr -s topol.tpr -of force.xvg'
>
> My force.xvg file covers for all atoms in simulation box which I have 9566
> atoms.
> Than I run the command of :
>
> 'g_analyze -f force.xvg -ac autocorr.xvg'
>
> however I get the error of core dumped. Here is some snapshot from screen.
>
>
>
>
>
>
>
>
>
>
> File force.xvg does not end with a newline, ignoring the last line
> File force.xvg does not end with a newline, ignoring the last line
> File force.xvg does not end with a newline, ignoring the last line
> File force.xvg does not end with a newline, ignoring the last line
> Invalid line in force.xvg:
> Using zeros for the last 7819 sets.
>
>
> What could be the reason of this error?
> I have enough free space and I am using gromacs 4.6.6
>
> bests
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] miscelle formation using ligands only

2015-09-16 Thread Justin Lemkul



On 9/16/15 11:01 AM, Chetan Puri wrote:

Thanks for the suggestion.
I used a semi-empirical NDDO method to determine the charges of each
molecule and found a large difference between the charges provided by
PRODRG and those calculated with the semi-empirical NDDO method.
So can I modify the charges in the .itp files of the various molecules and
use them?



You should probably do some more reading about the GROMOS force fields before 
diving into this.  There is a minimal, at best, connection between QM and 
GROMOS, unlike AMBER and CHARMM, which are closely linked to QM.  GROMOS 
parametrization is highly empirical and targets condensed phase data without any 
real reliance on QM (aside from possibly only a very basic approximation of 
charge distribution).


-Justin


On Wed, Sep 16, 2015 at 5:20 PM, Justin Lemkul  wrote:




On 9/15/15 11:01 PM, Chetan Puri wrote:


   so what is the best option for topologies.



A force field you trust in concert with molecules that are individually
parametrized using suitable target data.  There is a reason why
parametrization is considered an expert topic.  While there are numerous
web servers that will give you topologies for GROMOS, AMBER, and CHARMM, it
is incumbent on the user to verify the topologies against any available
target data and refine/reparametrize as needed based on the findings of
those assessments. Nothing should be trusted blindly, from any source.

-Justin


On Wed, Sep 16, 2015 at 2:19 AM, Justin Lemkul  wrote:





On 9/15/15 11:35 AM, Chetan Puri wrote:

thanks for you help and today i was able to solve the problem,

actually my pdb file was made using packmole for 5 molecules and
molecules
were prepared using prodrug , so my pdb file contained names of DRG A,
B,C,D,E



PRODRG produces notoriously bad parameters.  Don't use those topologies
directly and expect good results.

and i tried to change it to  SC3,CA1,D3G,. in the last packed PDB


file and also I made changes in the .itp files. In the topology file I
prepared, I also used the same names under the [molecules] section. As a
result of this
the grompp was not able to read my .itp files and showed error as too
few
parameters on line. But then again i kept the  names as such and
included
DRG_chain_A,B,C ... names in my topology file molecules part and
also.itp molecule name . There after grompp was able to read all the
files.
I received two notes :
note1 was for verlet scheme >10 and for gpu >20 .
note2 was for some PME load distribution
I have used ions.mdp file of the tutorial just to make .tpr file.
i hope this is of not great concern and i would also like to know that
why
was grompp not able to read itp files even though i have placed the same
name in every file.


Your description is too confusing to be able to provide useful help.  There's

nothing anyone can actually do to help you without actual commands,
files,
and real error messages, rather than what you're filtering through your
thoughts.

Sounds like you've hacked together a solution, though, so hopefully
everything matches up.  But like I said above, PRODRG topologies are not
reliable and reviewers should criticize their use heavily.  The problems
are well known.

-Justin



On Mon, Sep 14, 2015 at 10:13 PM, Justin Lemkul  wrote:





On 9/14/15 11:18 AM, Chetan Puri wrote:

I tried to prepare a topology file for my ligands and it contained the
following things,
#include "gromos43a1.ff/forcefield.itp"
#include "drg1.itp"
#include"drg2.itp"
#include"drg3.itp"
#include "gromos43a1.ff/spc.itp"

[system]
miscelle
[molecules]
drg18
drg2 5
drg3 7
sol   363408


But since I packed the system using PACKMOL, initially there was an error
that the numbers of coordinates in the .gro and .top files did not match,
because I had initially set the number of molecules to one for each type;
after changing it to the numbers in my PACKMOL input, that error went away
and a new error showed up,
i.e. Too few parameters in line 1 for drg2.itp
   Too few parameters in line 1 for drg3.itp

and if I overrode it with -maxwarn, then I saw that all the ligands were
stuck together in one place, and also shown with a somewhat different
representation.


Don't blow past error messages with -maxwarn.  It is extremely dangerous,
and unless you have specific knowledge that the problem is not important,
don't use it.

The error messages indicate that the contents of drg2.itp and drg3.itp
are
incorrect or misformatted (or perhaps that the lack of a space between
#include and the file name is causing a problem, but that's just
speculation).

so can you please help me out with this, and also is there any other way,
using GROMACS, of packing a system of different ligands (GROMACS version
5.0.4)?


gmx insert-molecules can be used to add small molecules into a system, but
if it's already built, why bother?
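
(For illustration only; this is not part of the original reply, and the file
names below are hypothetical. A typical invocation to drop copies of a small
molecule into an existing box looks like:

   gmx insert-molecules -f system.gro -ci drg1.gro -nmol 8 -o system_drg1.gro

where -f is the existing configuration, -ci the molecule to insert, -nmol the
number of copies and -o the output; the count in the [molecules] section of
the topology then has to be updated by hand.)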

If you want to rebuild the system for any reason, see:


Re: [gmx-users] g_traj + g_analyze and core dumped error

2015-09-16 Thread gozde ergin
OK, thanks Justin.
Then how can I get the time series of the total force on the box?
g_traj with -af gives all_force.xvg, but that is per atom rather than a time
series: the y coordinate is the total force and the x coordinate is the atom
index.

On Wed, Sep 16, 2015 at 5:51 PM, Justin Lemkul  wrote:

>
>
> On 9/16/15 11:48 AM, gozde ergin wrote:
>
>> Also I did the same calculation for only one atom and did not get any
>> error.
>>
>>
>>
> Probably because your original command will have 3*9566 floating-point
> entries per line, which likely can't even be read by GROMACS tools.  So
> analyzing one atom is fine, analyzing all of them at the same time likely
> isn't possible.
>
> -Justin
>
> On Wed, Sep 16, 2015 at 4:39 PM, gozde ergin 
>> wrote:
>>
>> Dear gromacs users;
>>>
>>> I am trying to estimate the force-force auto correlation function.
>>> To do that first I run the command of :
>>>
>>>   'g_traj -f tra.trr -s topol.tpr -of force.xvg'
>>>
>>> My force.xvg file covers for all atoms in simulation box which I have
>>> 9566
>>> atoms.
>>> Than I run the command of :
>>>
>>> 'g_analyze -f force.xvg -ac autocorr.xvg'
>>>
>>> however I get the error of core dumped. Here is some snapshot from
>>> screen.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> File force.xvg does not end with a newline, ignoring the last line
>>> File force.xvg does not end with a newline, ignoring the last line
>>> File force.xvg does not end with a newline, ignoring the last line
>>> File force.xvg does not end with a newline, ignoring the last line
>>> Invalid line in force.xvg:
>>> Using zeros for the last 7819 sets.
>>>
>>>
>>> What could be the reason of this error?
>>> I have enough free space and I am using gromacs 4.6.6
>>>
>>> bests
>>>
>>>
> --
> ==
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalem...@outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] GMX 5.0.6 on nodes with gpus

2015-09-16 Thread devapriyachem
Hello,

I am running GMX 5.0.6 on 2 nodes of a cluster. Each node has 1 GPU (Tesla
K20). In the log file, gmx reports that the program is running on 2 ranks and
reports that it detects 1 GPU on one node.

However, it says nothing about the GPU on the second node. My guess is that
this is not normal.  The MPI launch is handled by ibrun.

ibrun -np 2 mdrun_mpi_gpu -gpu_id 0 -s md-01.tpr -x md-01.xtc -e md-01.edr -g 
md-01.log

I am not sure how to fix the input, any pointers are greatly appreciated. 

Thanks,
Deva. 
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GMX 5.0.6 on nodes with gpus

2015-09-16 Thread Mark Abraham
Hi,

That's normal - we just didn't write the code to collect and compare the
information. 5.1 is better at this. Anyway, 5.0 mdrun will assume the
second node is the same as the first and proceed.

Mark

On Wed, Sep 16, 2015 at 5:12 PM  wrote:

> Hello,
>
> I am running GMX 5.0.6 on 2 nodes of a cluster. Each node has 1 gpu (Tesla
> k20). In the log file, gmx reports that the program is running on 2 ranks
> and
> reports that it detects 1 gpu on one node.
>
> However, it says nothing about the gpu on second node. My guess is that
> this
> is not normal.  The mpi is handled by ibrun.
>
> ibrun -np 2 mdrun_mpi_gpu -gpu_id 0 -s md-01.tpr -x md-01.xtc -e md-01.edr
> -g
> md-01.log
>
> I am not sure how to fix the input, any pointers are greatly appreciated.
>
> Thanks,
> Deva.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] g_traj + g_analyze and core dumped error

2015-09-16 Thread Justin Lemkul



On 9/16/15 11:48 AM, gozde ergin wrote:

Also I did the same calculation for only one atom and did not get any error.




Probably because your original command will have 3*9566 floating-point entries 
per line, which likely can't even be read by GROMACS tools.  So analyzing one 
atom is fine, analyzing all of them at the same time likely isn't possible.
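
(Editor's sketch, not from the original reply; the index and output file names
are hypothetical. The practical workaround this implies is to restrict g_traj
to a small index group and autocorrelate that output instead of all 9566
atoms:

   g_traj -f tra.trr -s topol.tpr -n index.ndx -of force_sel.xvg
   g_analyze -f force_sel.xvg -ac autocorr_sel.xvg

choosing the small group when prompted.)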


-Justin


On Wed, Sep 16, 2015 at 4:39 PM, gozde ergin  wrote:


Dear gromacs users;

I am trying to estimate the force-force auto correlation function.
To do that first I run the command of :

  'g_traj -f tra.trr -s topol.tpr -of force.xvg'

My force.xvg file covers for all atoms in simulation box which I have 9566
atoms.
Than I run the command of :

'g_analyze -f force.xvg -ac autocorr.xvg'

however I get the error of core dumped. Here is some snapshot from screen.










File force.xvg does not end with a newline, ignoring the last line
File force.xvg does not end with a newline, ignoring the last line
File force.xvg does not end with a newline, ignoring the last line
File force.xvg does not end with a newline, ignoring the last line
Invalid line in force.xvg:
Using zeros for the last 7819 sets.


What could be the reason of this error?
I have enough free space and I am using gromacs 4.6.6

bests



--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] GMX 5.0.6 on nodes with gpus

2015-09-16 Thread Deva P.
On Wednesday, September 16, 2015 03:20:31 PM Mark Abraham wrote:
> Hi,
> 
> That's normal - we just didn't write the code to collect and compare the
> information. 5.1 is better at this. Anyway, 5.0 mdrun will assume the
> second node is the same as the first and proceed.
> 
> Mark
> 
> On Wed, Sep 16, 2015 at 5:12 PM  wrote:
> > Hello,
> > 
> > I am running GMX 5.0.6 on 2 nodes of a cluster. Each node has 1 gpu (Tesla
> > k20). In the log file, gmx reports that the program is running on 2 ranks
> > and
> > reports that it detects 1 gpu on one node.
> > 
> > However, it says nothing about the gpu on second node. My guess is that
> > this
> > is not normal.  The mpi is handled by ibrun.
> > 
> > ibrun -np 2 mdrun_mpi_gpu -gpu_id 0 -s md-01.tpr -x md-01.xtc -e 
md-01.edr
> > -g
> > md-01.log
> > 
> > I am not sure how to fix the input, any pointers are greatly appreciated.
> > 
> > Thanks,
> > Deva.
> > --
> > Gromacs Users mailing list
> > 
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> > 
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > 
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-requ...@gromacs.org.

Thanks a lot, Mark.

-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Mark Abraham
Hi,

On Wed, Sep 16, 2015 at 5:46 PM Zimmerman, Maxwell 
wrote:

> Hi Mark,
>
> Thank you for the feedback.
>
> To ensure that I am making a proper comparison, I tried running:
> mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on
> and I still see the same pattern; running a single simulation with 1 GPU
> and 2 CPUs performs nearly twice as well as running 8 simulations with
> "-multi" using 8 GPUs and 16 CPUs.
>

OK. In that case, please share some links to .log files on a file-sharing
service, so we might be able to see where the issue arises. The list does
not accept attachments.

Just to clarify, when I use "-multi" all 8 of the .log files show that 8
> GPUs are selected for the run. If a single GPU were being used, wouldn't it
> only show mapping to one GPU ID per .log file?
>

I forget the details here, but organizing the mapping has to be done on a
per-node basis. It would not surprise me if the reporting was not strictly
valid on a per-simulation basis, but it ought to mention that the 8 GPUs
are assets of the node, and not necessarily of the simulation.

There is absolutely no way that any simulation with a single domain can
share 8 GPUs.

Mark


> Regards,
> -Maxwell
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Wednesday, September 16, 2015 10:08 AM
> To: gmx-us...@gromacs.org; gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Efficiently running multiple simulations
>
> Hi,
>
>
> On Wed, Sep 16, 2015 at 4:41 PM Zimmerman, Maxwell 
> wrote:
>
> > Hi Mark,
> >
> > Sorry for the confusion, what I meant to say was that each node on the
> > cluster has 8 GPUs and 16 CPUs.
> >
>
> OK. Please note that "CPU" is ambiguous, so you should prefer not to use it
> without clarification.
>
> Unless the GPUs are weak and the CPU is strong, 2 CPU cores per GPU will
> likely be under-powered for PME simulations in GROMACS.
>
> When I attempt to specify the GPU IDs for running 8 simulations on a node
> > using the "-multi" and "-gpu_id", each .log file has the following:
> >
> > "8 GPUs user-selected for this run.
> > Mapping of GPUs to the 8 PP ranks in this node: #0, #1, #2, #3, #4, #5,
> > #6, #7"
> >
> > This makes me think that each simulation is competing for each of the
> GPUs
>
>
> You are running 8 simulations, each of which has a single domain, each of
> which is mapped to a single PP rank, each of which is mapped to a different
> single GPU. Perfect.
>
> explaining my performance loss per simulation compared to running 1
> > simulation on 1 GPU and 2 CPUs.
>
>
> Very likely you are not comparing with what you think you are, e.g. you
> need to compare with an otherwise empty node running something like
>
> mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on
>
> so that you actually have a single process running on two pinned CPU cores
> and a single GPU. This should be fairly comparable with the mdrun -multi
> setup
>
> A side-by-side diff of that log file and the log file of the 0th member of
> the multi-sim should show very few differences until the simulation starts,
> and comparable performance. If not, please share your .log files on a
> file-sharing service.
>
> If this interpretation is correct, is there a better way to pin each
> > simulation to a single GPU and 2 CPUs? If my interpretation is incorrect,
> > is there a more efficient way to use the "-multi" option to match the
> > performance I see of running a single simulation * 8?
> >
>
> mdrun will handle all of that correctly if it hasn't been crippled by how
> the MPI library has organized life. You want it to assign ranks to cores
> that are close to each other and their matching GPU. That tends to be the
> default behaviour, but clusters intended for node sharing can do weird
> things. (It is not yet clear that any of this is a problem.)
>
> Mark
>
>
> > Regards,
> > -Maxwell
> >
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> > Abraham 
> > Sent: Wednesday, September 16, 2015 3:52 AM
> > To: gmx-us...@gromacs.org; gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] Efficiently running multiple simulations
> >
> > Hi,
> >
> > I'm confused by your description of the cluster as having 8 GPUs and 16
> > CPUs. The relevant parameters are the number of GPUs and CPU cores per
> > node. See the examples at
> >
> >
> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-features.html#running-multi-simulations
> >
> > Mark
> >
> > On Tue, Sep 15, 2015 at 11:38 PM Zimmerman, Maxwell 
> > wrote:
> >
> > > Hello,
> > >
> > >
> > > I am having some troubles efficiently running simulations in parallel
> on
> > a
> > 

Re: [gmx-users] Electric double layer, how to add charge

2015-09-16 Thread André Farias de Moura
you should edit your topology file (either .top or .itp) to include explicit
charges for each atom.

regarding the proper choice of charges, it depends on the surface potential
you think your system should have (mind that OPLSAA will typically produce
an overestimated potential as compared to experiment, if available - from
my experience it may be 2-3 times larger)

adding charges with opposite signs in different layers works as well; you
just need to keep track of the layer in which each atom is located and then
edit the topology files accordingly (in this case, you'll have a neutral,
polar interface, as compared to the electrically charged interface obtained
using only charges of the same sign)
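
(Editor's illustration, not from the original reply; the atom type, residue
name and charge values below are placeholders only. The charge is simply the
charge column of the [ atoms ] section in the graphene .itp, e.g.

   [ atoms ]
   ;  nr   type  resnr  resid  atom  cgnr  charge    mass
       1  C_GRA      1    GRA    C1     1   0.010  12.011
       2  C_GRA      1    GRA    C2     2   0.010  12.011

and the sum of these charges over one layer sets the total charge on that
electrode, so distribute it evenly unless you have a reason not to.)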

best

André

On Wed, Sep 16, 2015 at 5:45 PM, Andreas  wrote:

> Hello users,
>
> I want to simulate a graphene electric double layer (edl). So far i have
> used the cnt tutorials and the tutorial for lysozyme in water.
> All modification to the OPLSAA field parameter (atomname2type.n2t,
> atomtypes.atp, ffbonded.itp, ffnonbonded.itp) which i have done were with
> zero charge and i could already simulate a graphite layer in a box filled
> with water and ions, similar to the lysozyme tutorial.
>
> How and then to add a charge to the graphene layer, or if two graphene
> layers are added, how to add opposite charges to them?
>
> Or is there a more simple way to MD simulate an edl, in gromacs,
> altogether?
>
> I would be very thankful for any help
>
> Best regards
>
> Andreas
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.
>



-- 
_

Prof. Dr. André Farias de Moura
Department of Chemistry
Federal University of São Carlos
São Carlos - Brazil
phone: +55-16-3351-8090
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] Electric double layer, how to add charge

2015-09-16 Thread Andreas

Hello users,

I want to simulate a graphene electric double layer (EDL). So far I have
used the CNT tutorials and the tutorial for lysozyme in water.
All modifications to the OPLS-AA force field files (atomname2type.n2t,
atomtypes.atp, ffbonded.itp, ffnonbonded.itp) that I have made were with
zero charge, and I could already simulate a graphite layer in a box filled
with water and ions, similar to the lysozyme tutorial.


How can I then add a charge to the graphene layer, or, if two graphene
layers are added, add opposite charges to them?


Or is there a simpler way to simulate an EDL with MD in GROMACS altogether?

I would be very thankful for any help

Best regards

Andreas
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Mark Abraham
Hi,

The log files tell you that you should compile for AVX2_256 SIMD for the
Haswell CPUs you have. Do that. Your runs are wasting a fair chunk of the
value of the CPU hardware, and your setup absolutely needs to extract every
last drop from the CPUs. That means you need to follow the instructions in
the GROMACS install guide, which suggest you use a recent compiler. Your
GROMACS was compiled with gcc 4.4.7, which was about two years old before a
Haswell was sold! Why HPC clusters buy the latest hardware and continue to
default to the "stable" 5-year old compiler suite shipped with the
"enterprise" distribution remains a total mystery to me. :-)

The log file also says that your MPI system is starting four OpenMP threads
per rank in the multi-simulation case, so the comparison is not valid.
Starting 8*4 OpenMP threads on your node oversubscribes the actual cores,
and this is terrible for GROMACS. You need to find out how many actual
cores you have (each of which can have two hyperthreads, which is usually
worth using on such Haswell machines). You want either one thread per core,
or two threads per core (try both). If you don't know how many actual cores
there are, consult your local docs/admins.
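
(A hedged example of what that implies, assuming 16 real cores, 8 GPUs and one
simulation per GPU as in this thread; check the counts against your node:

   mpirun -np 8 mdrun_mpi -multi 8 -ntomp 2 -gpu_id 01234567 -pin on

i.e. 8 ranks x 2 OpenMP threads = 16 threads, one GPU per rank, rather than
letting the MPI environment start 4 threads per rank and oversubscribe the
cores.)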

"Mapping of GPUs to the 8 PP ranks in this node: #0, #1, #2, #3, #4, #5,
#6, #7" is actually correct and unambiguous. There's 8 simulations, each
with 1 domain, so 8 PP ranks, and each is mapped to one of 8 GPUs *in this
node*. You've been reading "node" and thinking "simulation."

Mark


On Wed, Sep 16, 2015 at 9:23 PM Zimmerman, Maxwell 
wrote:

> Hi Mark,
>
> Here are two links to .log files for running 1 simulation on 1 GPU and 2
> CPUs and 8 simulations across all 8 GPUs and 16 CPUs respectively:
>
> https://www.dropbox.com/s/ko2l0qlr4kdpt51/md_1GPU.log?dl=0
> https://www.dropbox.com/s/chtcv4nqxof64p8/md_8GPUs.log?dl=0
>
> Regards,
> -Maxwell
>
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Wednesday, September 16, 2015 1:39 PM
> To: gmx-us...@gromacs.org; gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Efficiently running multiple simulations
>
> Hi,
>
> On Wed, Sep 16, 2015 at 5:46 PM Zimmerman, Maxwell 
> wrote:
>
> > Hi Mark,
> >
> > Thank you for the feedback.
> >
> > To ensure that I am making a proper comparison, I tried running:
> > mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on
> > and I still see the same pattern; running a single simulation with 1 GPU
> > and 2 CPUs performs nearly twice as well as running 8 simulations with
> > "-multi" using 8 GPUs and 16 CPUs.
> >
>
> OK. In that case, please share some links to .log files on a file-sharing
> service, so we might be able to see where the issue arises. The list does
> not accept attachments.
>
> Just to clarify, when I use "-multi" all 8 of the .log files show that 8
> > GPUs are selected for the run. If a single GPU were being used, wouldn't
> it
> > only show mapping to one GPU ID per .log file?
> >
>
> I forget the details here, but organizing the mapping has to be done on a
> per-node basis. It would not surprise me if the reporting was not strictly
> valid on a per-simulation basis, but it ought to mention that the 8 GPUs
> are asserts of the node, and not necessarily of the simulation.
>
> There is absolutely no way that any simulation with a single domain can
> share 8 GPUs.
>
> Mark
>
>
> > Regards,
> > -Maxwell
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> > Abraham 
> > Sent: Wednesday, September 16, 2015 10:08 AM
> > To: gmx-us...@gromacs.org; gromacs.org_gmx-users@maillist.sys.kth.se
> > Subject: Re: [gmx-users] Efficiently running multiple simulations
> >
> > Hi,
> >
> >
> > On Wed, Sep 16, 2015 at 4:41 PM Zimmerman, Maxwell 
> > wrote:
> >
> > > Hi Mark,
> > >
> > > Sorry for the confusion, what I meant to say was that each node on the
> > > cluster has 8 GPUs and 16 CPUs.
> > >
> >
> > OK. Please note that "CPU" is ambiguous, so you should prefer not to use
> it
> > without clarification.
> >
> > Unless the GPUs are weak and the CPU is strong, 2 CPU cores per GPU will
> > likely be under-powered for PME simulations in GROMACS.
> >
> > When I attempt to specify the GPU IDs for running 8 simulations on a node
> > > using the "-multi" and "-gpu_id", each .log file has the following:
> > >
> > > "8 GPUs user-selected for this run.
> > > Mapping of GPUs to the 8 PP ranks in this node: #0, #1, #2, #3, #4, #5,
> > > #6, #7"
> > >
> > > This makes me think that each simulation is competing for each of the
> > GPUs
> >
> >
> > You are running 8 simulations, each of which has a single domain, each of
> > which is mapped to 

[gmx-users] GPU-accelerated desktop PC for MD simulations

2015-09-16 Thread Gustavo Avelar Molina
Hi everyone,

I want to build a new GPU-accelerated desktop PC for MD simulations of
relatively simple protein/carbohydrate systems. No QM-MM simulations is
intended for now. For instance, I have been working with a protein of
approximately 1500 atoms in the presence of small carbohydrates (<100
atoms). For the protein alone, my home PC (i5-4570 CPU 3.20GHz, not
GPU-accelerated) does approximately 5 ns/day, so I want something
considerably faster than that.

Which hardware should I choose? Could you suggest low to high price
configurations with good power considering the current technology available?

Thank you very much for your time.

Best regards,

Gustavo

==
Gustavo Avelar Molina, B.Sc. Chem.
M.Sc. Chem. Student

Department of Chemistry
Faculty of Philosophy, Sciences and Literature of Ribeirão Preto
Protein Biochemistry and Biophysics Laboratory
University of São Paulo, Ribeirão Preto, São Paulo, Brazil

+55 16 994311221 | +55 11 949874141

avelarmolinagust...@gmail.com | gustavoavelarmol...@usp.br

https://lbbpusp.wordpress.com/ |





==
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

[gmx-users] on CHARMM-GUI produced lipid bilayer for gromacs use

2015-09-16 Thread Brett
Dear All,


After I got the membrane protein PDB with the membrane part packed in a lipid
bilayer by the lipid bilayer builder of CHARMM-GUI, will you please let me know
how to get the inputs for the next GROMACS MD step? For example, if I want to go
directly to the NVT equilibration step with the CHARMM-GUI-produced PDB, the
command in the lysozyme tutorial is "gmx grompp -f nvt.mdp -c em.gro
-p topol.top -o nvt.tpr". Will you please let me know how to get the input
*.gro and *.top files needed to produce the *.tpr file? Or, for the CHARMM-GUI
produced PDB, should I delete all the H2O and then start from the pdb2gmx step
with a PDB file containing only the membrane protein and the lipid bilayer? But
in that case it seems the sides of the lipid bilayer would also be packed in
H2O.


I am looking forward to getting a reply from you.


Best regards.


Brett






-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Atomic charges

2015-09-16 Thread Pallavi Banerjee
Thanks Justin. Things are clearer now.

-Pallavi Banerjee
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] g_mmpbsa time

2015-09-16 Thread elham tazikeh
Dear users
I simulated a system for 30 nanoseconds (dt = 2 fs) with frames written every
6 ps. For binding free energy computations with the g_mmpbsa method in
GROMACS, my calculations took a very long time.
Is that expected?
Can I change the simulation time in the .mdp file used to produce the .tpr
for the g_mmpbsa calculations, or does it have to equal the time of my
simulation?
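
(Editor's note, not part of the original question: the usual way to shorten a
g_mmpbsa run is not to change the .mdp, but to analyse fewer frames, e.g. by
thinning the trajectory first; the file names and the 100 ps spacing are only
examples:

   trjconv -f prd.xtc -s prd.tpr -dt 100 -o prd_100ps.xtc

and then feeding the thinned trajectory to g_mmpbsa.)
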
regards
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


[gmx-users] -rdd and -ddcheck

2015-09-16 Thread Sana Saeed
Hi gmx users,
I got a problem while running an implicit-solvent simulation (10 ns). The
problem is in the bonded interactions, and the error says that I should see
the options -rdd and -ddcheck. Does anyone know how to use these options?
Thanks in advance.
Regards,
Sana Saeed
Soongsil University
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] on CHARMM-GUI produced lipid bilayer for gromacs use

2015-09-16 Thread Tsjerk Wassenaar
Hi Brett,

The CHARMM-GUI also provides the .top file. You can use that and the PDB
(it doesn't need to be .gro) to run grompp.
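
(A minimal sketch, not from the original reply, assuming the default
CHARMM-GUI output names, which may differ in your download:

   gmx grompp -f nvt.mdp -c step5_assembly.pdb -p topol.top -o nvt.tpr

i.e. the CHARMM-GUI coordinates go in via -c and its topology via -p, with no
pdb2gmx step needed; add -n index.ndx if your .mdp refers to custom groups.)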

Cheers,

Tsjerk

On Thu, Sep 17, 2015 at 5:06 AM, Brett  wrote:

> Dear All,
>
>
> After I got the membrane protein PDB with the membrane part packed in
> lipid bilayer by the lipid bilayer builder of CHARMM-GUI, will you please
> let me know how to get the inputs for the gromacs next step MD process? For
> example, if I wand to go directly to the nvt equilibration step with the
> CHARMM-GUI produced PDB, the command in the lysozyme tutorial is "gmx
> grompp -f nvt.mdp -c em.gro -p topol.top -o nvt.tpr". Then will you please
> let me know how to get the input *.gro and *.top file in order to get the
> *.tpr file? Or for the  CHARMM-GUI produced PDB, I delete all the H2O, and
> then start from the pdb2gmx step with the PDB file containing only the
> membrane protein and the lipid bilayer? But in this way it seems the sides
> of the lipid bilayer will also be packed in H2O.
>
>
> I am looking forward to getting a reply from you.
>
>
> Best regards.
>
>
> Brett
>
>
>
>
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-requ...@gromacs.org.




-- 
Tsjerk A. Wassenaar, Ph.D.
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.

Re: [gmx-users] Efficiently running multiple simulations

2015-09-16 Thread Zimmerman, Maxwell
Hi Mark,

Here are two links to .log files for running 1 simulation on 1 GPU and 2 CPUs 
and 8 simulations across all 8 GPUs and 16 CPUs respectively:

https://www.dropbox.com/s/ko2l0qlr4kdpt51/md_1GPU.log?dl=0
https://www.dropbox.com/s/chtcv4nqxof64p8/md_8GPUs.log?dl=0

Regards,
-Maxwell



From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 on behalf of Mark Abraham 

Sent: Wednesday, September 16, 2015 1:39 PM
To: gmx-us...@gromacs.org; gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] Efficiently running multiple simulations

Hi,

On Wed, Sep 16, 2015 at 5:46 PM Zimmerman, Maxwell 
wrote:

> Hi Mark,
>
> Thank you for the feedback.
>
> To ensure that I am making a proper comparison, I tried running:
> mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on
> and I still see the same pattern; running a single simulation with 1 GPU
> and 2 CPUs performs nearly twice as well as running 8 simulations with
> "-multi" using 8 GPUs and 16 CPUs.
>

OK. In that case, please share some links to .log files on a file-sharing
service, so we might be able to see where the issue arises. The list does
not accept attachments.

Just to clarify, when I use "-multi" all 8 of the .log files show that 8
> GPUs are selected for the run. If a single GPU were being used, wouldn't it
> only show mapping to one GPU ID per .log file?
>

I forget the details here, but organizing the mapping has to be done on a
per-node basis. It would not surprise me if the reporting was not strictly
valid on a per-simulation basis, but it ought to mention that the 8 GPUs
are asserts of the node, and not necessarily of the simulation.

There is absolutely no way that any simulation with a single domain can
share 8 GPUs.

Mark


> Regards,
> -Maxwell
>
> 
> From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> Abraham 
> Sent: Wednesday, September 16, 2015 10:08 AM
> To: gmx-us...@gromacs.org; gromacs.org_gmx-users@maillist.sys.kth.se
> Subject: Re: [gmx-users] Efficiently running multiple simulations
>
> Hi,
>
>
> On Wed, Sep 16, 2015 at 4:41 PM Zimmerman, Maxwell 
> wrote:
>
> > Hi Mark,
> >
> > Sorry for the confusion, what I meant to say was that each node on the
> > cluster has 8 GPUs and 16 CPUs.
> >
>
> OK. Please note that "CPU" is ambiguous, so you should prefer not to use it
> without clarification.
>
> Unless the GPUs are weak and the CPU is strong, 2 CPU cores per GPU will
> likely be under-powered for PME simulations in GROMACS.
>
> When I attempt to specify the GPU IDs for running 8 simulations on a node
> > using the "-multi" and "-gpu_id", each .log file has the following:
> >
> > "8 GPUs user-selected for this run.
> > Mapping of GPUs to the 8 PP ranks in this node: #0, #1, #2, #3, #4, #5,
> > #6, #7"
> >
> > This makes me think that each simulation is competing for each of the
> GPUs
>
>
> You are running 8 simulations, each of which has a single domain, each of
> which is mapped to a single PP rank, each of which is mapped to a different
> single GPU. Perfect.
>
> explaining my performance loss per simulation compared to running 1
> > simulation on 1 GPU and 2 CPUs.
>
>
> Very likely you are not comparing with what you think you are, e.g. you
> need to compare with an otherwise empty node running something like
>
> mpirun -np 1 mdrun_mpi -ntomp 2 -gpu_id 0 -pin on
>
> so that you actually have a single process running on two pinned CPU cores
> and a single GPU. This should be fairly comparable with the mdrun -multi
> setup
>
> A side-by-side diff of that log file and the log file of the 0th member of
> the multi-sim should show very few differences until the simulation starts,
> and comparable performance. If not, please share your .log files on a
> file-sharing service.
>
> If this interpretation is correct, is there a better way to pin each
> > simulation to a single GPU and 2 CPUs? If my interpretation is incorrect,
> > is there a more efficient way to use the "-multi" option to match the
> > performance I see of running a single simulation * 8?
> >
>
> mdrun will handle all of that correctly if it hasn't been crippled by how
> the MPI library has organized life. You want it to assign ranks to cores
> that are close to each other and their matching GPU. That tends to be the
> default behaviour, but clusters intended for node sharing can do weird
> things. (It is not yet clear that any of this is a problem.)
>
> Mark
>
>
> > Regards,
> > -Maxwell
> >
> >
> > 
> > From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se <
> > gromacs.org_gmx-users-boun...@maillist.sys.kth.se> on behalf of Mark
> > Abraham 
> > Sent: Wednesday, September 16, 2015 3:52 AM