Re: [gmx-users] compiling issue

2015-01-08 Thread Éric Germaneau

So sorry, I forgot to mention I use *GMX 5.0.4*.

On 01/09/2015 08:21 AM, Éric Germaneau wrote:

Dear all,

I'm trying to build GMX on an Intel CentOS release 6.6 machine using
icc 14.0 and CUDA 6.5.

Here is the error I get:

   [  1%] Built target mdrun_objlib
   In file included from 
/usr/local/cuda/include/crt/device_runtime.h(251),

 from
   /usr/lib/gcc/x86_64-redhat-linux/4.4.7/include/stddef.h(212):
   /usr/local/cuda/include/crt/storage_class.h(61): remark #7:
   unrecognized token
  #define __storage_auto__device__ @@@ COMPILER @@@ ERROR @@@

   ...//

   /usr/local/cuda/include/crt/host_runtime.h(121): remark #82: storage
   class is not first
  static void nv_dummy_param_ref(void *param) { volatile static
   void * *__ref __attribute__((unused)); __ref = (volatile void *
   *)param; }

   ...

   Scanning dependencies of target cuda_tools
   Linking CXX static library ../../../../lib/libcuda_tools.a
   [  1%] Built target cuda_tools
   [  2%] [  2%] Building NVCC (Device) object
src/gromacs/mdlib/nbnxn_cuda/CMakeFiles/nbnxn_cuda.dir/nbnxn_cuda_generated_nbnxn_cuda_data_mgmt.cu.o
   Building NVCC (Device) object
src/gromacs/mdlib/nbnxn_cuda/CMakeFiles/nbnxn_cuda.dir/nbnxn_cuda_generated_nbnxn_cuda.cu.o
   /usr/local/cuda/include/crt/host_runtime.h(121): remark #82: storage
   class is not first
  static void nv_dummy_param_ref(void *param) { volatile static
   void * *__ref __attribute__((unused)); __ref = (volatile void *
   *)param;

   ...

   /tmp/iccZVwEChas_.s: Assembler messages:
   /tmp/iccZVwEChas_.s:375: Error: suffix or operands invalid for 
`vpaddd'

   /tmp/iccZVwEChas_.s:467: Error: no such instruction: `vpbroadcastd
   %xmm0,%ymm0'
   /tmp/iccZVwEChas_.s:628: Error: suffix or operands invalid for `vpxor'
   /tmp/iccZVwEChas_.s:629: Error: suffix or operands invalid for
   `vpcmpeqd'
   /tmp/iccZVwEChas_.s:630: Error: no such instruction: `vpbroadcastd
   %xmm0,%ymm0'
   /tmp/iccZVwEChas_.s:709: Error: suffix or operands invalid for
   `vpcmpeqd'
   /tmp/iccZVwEChas_.s:711: Error: suffix or operands invalid for `vpxor'
   /tmp/iccZVwEChas_.s:712: Error: suffix or operands invalid for 
`vpsubd'
   /tmp/iccZVwEChas_.s:713: Error: suffix or operands invalid for 
`vpaddd'

   /tmp/iccZVwEChas_.s:1620: Error: no such instruction: `shlx
   %r8d,%eax,%r11d'
   /tmp/iccZVwEChas_.s:2000: Error: no such instruction: `shlx
   %r8d,%eax,%r10d'
   /tmp/iccZVwEChas_.s:2107: Error: no such instruction: `shlx
   %r9d,%eax,%eax'
   /tmp/iccZVwEChas_.s:2485: Error: suffix or operands invalid for 
`vpaddd'
   /tmp/iccZVwEChas_.s:3255: Error: suffix or operands invalid for 
`vpaddd'
   /tmp/iccZVwEChas_.s:3650: Error: suffix or operands invalid for 
`vpaddd'
   /tmp/iccZVwEChas_.s:4154: Error: suffix or operands invalid for 
`vpaddd'

   CMake Error at gpu_utils_generated_memtestG80_core.cu.o.cmake:264
   (message):
  Error generating file
/home/eric/soft/science/opensource/gromacs/build-5.0.4/src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_memtestG80_core.cu.o


   make[2]: ***
[src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/gpu_utils_generated_memtestG80_core.cu.o]
   Error 1
   make[1]: ***
   [src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/all] Error 2
   make: *** [all] Error 2

The CPU-only version compiles smoothly.
Any hint here?
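
The failing instructions (vpaddd on 256-bit registers, vpbroadcastd, shlx) are AVX2/BMI2 instructions; the GNU assembler in stock CentOS 6 binutils predates AVX2, so it rejects the assembly icc emits for a Haswell-class host. A hedged workaround sketch, assuming that is the root cause (the devtoolset path and SIMD level are illustrative, not from this thread):

```shell
# Option 1: put a newer GNU assembler first in PATH (e.g. from a devtoolset)
export PATH=/opt/rh/devtoolset-2/root/usr/bin:$PATH

# Option 2: configure GROMACS with a SIMD level the old assembler accepts
cmake .. -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc \
         -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
         -DGMX_SIMD=AVX_256
```

Either way, reconfigure in a clean build directory afterwards.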

   Éric.



--
Éric Germaneau (艾海克), Specialist
Center for High Performance Computing
Shanghai Jiao Tong University
Room 205 Network Center, 800 Dongchuan Road, Shanghai 200240 China
M:german...@sjtu.edu.cn P:+86-136-4161-6480 W:http://hpc.sjtu.edu.cn
--
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [gmx-users] RDF plot with large g(r) values

2015-01-08 Thread Kester Wong
Hi Andre,

Thank you for your input.

From: "André Farias de Moura" mo...@ufscar.br
To: Discussion list for GROMACS users gmx-us...@gromacs.org
Date: Thursday, 8 January 2015, 22:05:39
Subject: Re: [gmx-users] RDF plot with large g(r) values

 RDF values are sensitive to the volume of the system, so if you put the
 same solutes inside a larger/smaller box, RDF values change accordingly
 (check basic definitions of RDF in simulation handbooks to make sure you
 understand this relation).

Ah right, that explains why my RDF values dropped when I put a larger
amount of water in the same box.

 and even if you have the same size and composition, RDF may become really
 large if molecules aggregate.

Yes, I have a droplet in which the Na+ and OH- ions tend to aggregate in
solution; the high g(r) peak in the RDF is almost double that of the other
systems with no aggregation. In this case, can I still use my RDF plots?

Regards,
Kester
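
The volume dependence described here can be made concrete with a toy calculation (a sketch; the bulk density and box volume below are illustrative assumptions, not values from this thread):

```python
# g(r) is the local pair density divided by the mean density N/V of the
# whole box. For a droplet whose interior density is essentially fixed,
# removing water (smaller N) or adding vacuum (larger V) therefore
# inflates every peak by the same factor.
BULK_WATER = 33.4   # molecules/nm^3, approximate droplet interior density

def gr_peak(local_density, n_molecules, box_volume_nm3):
    mean_density = n_molecules / box_volume_nm3  # normalization density
    return local_density / mean_density

V = 1000.0  # nm^3, an illustrative box volume
sparse = gr_peak(BULK_WATER, 2000, V)    # few molecules, lots of vacuum
dense = gr_peak(BULK_WATER, 10000, V)    # same box, 5x the water
print(round(sparse / dense, 2))          # -> 5.0
```

This mirrors the 2000-vs-10,000-water trend reported later in the thread (peak heights of roughly 400 vs 100), up to the droplet's changing size.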
On Thu, Jan 8, 2015 at 7:48 AM, Kester Wong  wrote:

 Dear all,



 My apologies if this question sounds too basic or if it has been covered.

 I did some RDF calculations, and when I plotted the figures, the g(r)
 values were in the hundreds, whereas the papers that I have seen are all in
 the range of 0-12.

 The x-axis (nm), however, seemed to be correct.



 Regards,

 Kester






-- 
_

Prof. Dr. André Farias de Moura
Department of Chemistry
Federal University of São Carlos
São Carlos - Brazil
phone: +55-16-3351-8090



[gmx-users] compiling issue

2015-01-08 Thread Éric Germaneau

Dear all,

I'm trying to build GMX on an Intel CentOS release 6.6 machine using icc
14.0 and CUDA 6.5.

Here is the error I get:

   [  1%] Built target mdrun_objlib
   In file included from /usr/local/cuda/include/crt/device_runtime.h(251),
 from
   /usr/lib/gcc/x86_64-redhat-linux/4.4.7/include/stddef.h(212):
   /usr/local/cuda/include/crt/storage_class.h(61): remark #7:
   unrecognized token
  #define __storage_auto__device__ @@@ COMPILER @@@ ERROR @@@

   ...//

   /usr/local/cuda/include/crt/host_runtime.h(121): remark #82: storage
   class is not first
  static void nv_dummy_param_ref(void *param) { volatile static
   void * *__ref __attribute__((unused)); __ref = (volatile void *
   *)param; }

   ...

   Scanning dependencies of target cuda_tools
   Linking CXX static library ../../../../lib/libcuda_tools.a
   [  1%] Built target cuda_tools
   [  2%] [  2%] Building NVCC (Device) object
   
src/gromacs/mdlib/nbnxn_cuda/CMakeFiles/nbnxn_cuda.dir/nbnxn_cuda_generated_nbnxn_cuda_data_mgmt.cu.o
   Building NVCC (Device) object
   
src/gromacs/mdlib/nbnxn_cuda/CMakeFiles/nbnxn_cuda.dir/nbnxn_cuda_generated_nbnxn_cuda.cu.o
   /usr/local/cuda/include/crt/host_runtime.h(121): remark #82: storage
   class is not first
  static void nv_dummy_param_ref(void *param) { volatile static
   void * *__ref __attribute__((unused)); __ref = (volatile void *
   *)param;

   ...

   /tmp/iccZVwEChas_.s: Assembler messages:
   /tmp/iccZVwEChas_.s:375: Error: suffix or operands invalid for `vpaddd'
   /tmp/iccZVwEChas_.s:467: Error: no such instruction: `vpbroadcastd
   %xmm0,%ymm0'
   /tmp/iccZVwEChas_.s:628: Error: suffix or operands invalid for `vpxor'
   /tmp/iccZVwEChas_.s:629: Error: suffix or operands invalid for
   `vpcmpeqd'
   /tmp/iccZVwEChas_.s:630: Error: no such instruction: `vpbroadcastd
   %xmm0,%ymm0'
   /tmp/iccZVwEChas_.s:709: Error: suffix or operands invalid for
   `vpcmpeqd'
   /tmp/iccZVwEChas_.s:711: Error: suffix or operands invalid for `vpxor'
   /tmp/iccZVwEChas_.s:712: Error: suffix or operands invalid for `vpsubd'
   /tmp/iccZVwEChas_.s:713: Error: suffix or operands invalid for `vpaddd'
   /tmp/iccZVwEChas_.s:1620: Error: no such instruction: `shlx
   %r8d,%eax,%r11d'
   /tmp/iccZVwEChas_.s:2000: Error: no such instruction: `shlx
   %r8d,%eax,%r10d'
   /tmp/iccZVwEChas_.s:2107: Error: no such instruction: `shlx
   %r9d,%eax,%eax'
   /tmp/iccZVwEChas_.s:2485: Error: suffix or operands invalid for `vpaddd'
   /tmp/iccZVwEChas_.s:3255: Error: suffix or operands invalid for `vpaddd'
   /tmp/iccZVwEChas_.s:3650: Error: suffix or operands invalid for `vpaddd'
   /tmp/iccZVwEChas_.s:4154: Error: suffix or operands invalid for `vpaddd'
   CMake Error at gpu_utils_generated_memtestG80_core.cu.o.cmake:264
   (message):
  Error generating file
   
/home/eric/soft/science/opensource/gromacs/build-5.0.4/src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_memtestG80_core.cu.o


   make[2]: ***
   
[src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/gpu_utils_generated_memtestG80_core.cu.o]
   Error 1
   make[1]: ***
   [src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir/all] Error 2
   make: *** [all] Error 2

The CPU-only version compiles smoothly.
Any hint here?

   Éric.

--
Éric Germaneau (艾海克), Specialist
Center for High Performance Computing
Shanghai Jiao Tong University
Room 205 Network Center, 800 Dongchuan Road, Shanghai 200240 China
M:german...@sjtu.edu.cn P:+86-136-4161-6480 W:http://hpc.sjtu.edu.cn


Re: [gmx-users] RDF plot with large g(r) values

2015-01-08 Thread Kester Wong
Dear Justin,

No, I am not comparing equivalent systems, nor reproducing previous
findings. My simulation box contains a droplet of 2000 water molecules on
graphene. The Ow-Ow g(r) shows a first peak at r = 0.3 nm with a peak
height of 400. For 6000 and 10,000 water molecules, the peak heights are
200 and 100, respectively. Is this due to the large vacuum in my
simulation box?

Regards,
Kester

On 1/8/15 4:48 AM, Kester Wong wrote:
 Dear all,



 My apologies if this question sounds too basic or if it has been covered.

 I did some RDF calculations, and when I plotted the figures, the g(r) values
 were in the hundreds, whereas the papers that I have seen are all in the
 range of 0-12.


Are you comparing equivalent systems, e.g. trying to reproduce previous 
findings?  If not, it's just apples and oranges.  Your result is not necessarily 
unusual.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==



Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 129, Issue 32

2015-01-08 Thread asasa qsqs
Dear Justin,

I want to use the GridMAT-MD program for the last 70 ns of my simulation.
What must I do?

Many thanks,
Mrs. Mahdavi
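
One hedged first step: GridMAT-MD operates on structure files rather than .xtc trajectories, so the last 70 ns would first be extracted and converted, e.g. with trjconv (a sketch; the -b/-e values assume a 100 ns run and the file names are placeholders — adjust them to your trajectory):

```shell
# Keep only the final 70 ns (here 30000-100000 ps) and write frames
# in a format GridMAT-MD can read
trjconv -f md.xtc -s md.tpr -b 30000 -e 100000 -o last70ns.pdb
```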

 On Friday, January 9, 2015 5:21 AM,
gromacs.org_gmx-users-requ...@maillist.sys.kth.se wrote:



Today's Topics:

  1. Re: Error : Atomtype not found (Justin Lemkul)
  2. rotating triclinic box (felipe zapata)
  3. Re: rotating triclinic box (Tsjerk Wassenaar)
  4. compiling issue (Éric Germaneau)
  5. Re: compiling issue (Éric Germaneau)
  6. Re: RDF plot with large g(r) values (Kester Wong)


--

Message: 1
Date: Thu, 08 Jan 2015 10:27:26 -0500
From: Justin Lemkul jalem...@vt.edu
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Error : Atomtype not found
Message-ID: 54aea1de.6020...@vt.edu
Content-Type: text/plain; charset=windows-1252; format=flowed



On 1/8/15 10:14 AM, protim chakraborti wrote:
 Respected Dr. Lemkul
 Thanks for the suggestion. I have checked the ffnonbonded.itp and found
 that copper is entered out there in the following form and format

 ; Ions and noble gases (useful for tutorials)
 Cu2+    29      63.54600        2.00    A      2.08470e-01    4.76976e+00
 Ar      18      39.94800        0.00    A      3.41000e-01    2.74580e-02

 Would this not suffice, or do I need to add Cu separately? Or maybe I
 have to start over from pdb2gmx itself?


Those parameters are not CHARMM parameters.  They appear to have been copied
over from OPLS-AA, which had several ions removed due to unknown origins (as
was Ar).  Do not use these parameters for a CHARMM simulation.  I recommend
that they be removed entirely, as we did with our CHARMM36 port.

-Justin

-- 
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


--

Message: 2
Date: Thu, 8 Jan 2015 16:25:03 -0500
From: felipe zapata tifonza...@gmail.com
To: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: [gmx-users] rotating triclinic box
Message-ID:
    CA+AeLgSe5Pj8iRcgaFU9-E4zx7830PK9p=oatuudnfa9zrm...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1

Hi all,
I have a triclinic box containing several chains of a biopolymer with the
following crystal information:

CRYST1  115.000  75.000  75.000  80.37 118.08 114.80 P 1          1

I want to apply semi-isotropic pressure coupling orthogonal to the chains
(XY plane), but unfortunately the strands of the polymer are oriented along
the x-axis instead of the z-axis. How can I change the orientation of the
box so that the strands are oriented along the z-axis? That is, I want to
rotate the triclinic box, swapping the x and z axes.

Best,

Felipe


--

Message: 3
Date: Thu, 8 Jan 2015 22:39:03 +0100
From: Tsjerk Wassenaar tsje...@gmail.com
To: Discussion list for GROMACS users gmx-us...@gromacs.org
Subject: Re: [gmx-users] rotating triclinic box
Message-ID:
    cabze1sjrmvh-hgwdrtnk_gkrs3_f5mopcl1k3dce+qrt79s...@mail.gmail.com
Content-Type: text/plain; charset=UTF-8

Hi Felipe,

Rotate 90 degrees around y:

editconf -rotate 0 90 0

Cheers,

Tsjerk
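
For anyone wanting to verify what this rotation does, here is a small plain-Python sketch (no GROMACS needed; the helper function is illustrative):

```python
import math

# A 90-degree rotation about the y axis, as applied by `editconf -rotate 0 90 0`,
# maps the +x direction onto -z, so strands lying along x end up along z.
def rotate_y(v, degrees):
    t = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))

strand_axis = (1.0, 0.0, 0.0)            # polymer strands currently along x
rotated = rotate_y(strand_axis, 90.0)
print([round(c, 6) for c in rotated])    # -> [0.0, 0.0, -1.0], i.e. along z
```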
On Jan 8, 2015 10:32 PM, felipe zapata tifonza...@gmail.com wrote:

 Hi all,
 I have a triclinic box containing several chains of a biopolymer with the
 following crystal information:

 CRYST1  115.000  75.000  75.000  80.37 118.08 114.80 P 1          1

 I want to apply semi-isotropic pressure coupling orthogonal to the chains
 (XY plane), but unfortunately the strands of the polymer are oriented along
 the x-axis instead of the z-axis. How can I change the orientation of the
 box so that the strands are oriented along the z-axis? That is, I want to
 rotate the triclinic box, swapping the x and z axes.

 Best,

 Felipe

[gmx-users] GridMAT-MD

2015-01-08 Thread asasa qsqs
Dear Justin,

I want to use the GridMAT-MD program for the last 70 ns of my simulation.
What must I do?

Many thanks,
Mrs. Mahdavi



[gmx-users] Changing number of processors after a job restart

2015-01-08 Thread Nash, Anthony
Hi all,

This is probably quite a fundamental bit of knowledge I am missing (and
struggling to find). In an effort to get a system running rather than
waiting in a queue, I am considering taking my job, which has already run
for 48 hours, and reducing the requested number of nodes. I would use the
usual -cpi .cpt -noappend notation in the job script to resubmit.

I have a feeling, though, that all manner of parallel calculation settings
were preserved in the checkpoint file and are loaded upon restart. Would my
job reload and recalculate all the relevant domain decomposition, etc.,
without throwing an error?
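
For reference, the resubmission described above might look like this in the job script (a sketch; file names and the new rank count are placeholders, and as far as I understand the domain decomposition is set up afresh at startup rather than read from the checkpoint, so the rank count may differ between runs):

```shell
# Continue from the checkpoint on fewer ranks, writing to new output files
mpirun -np 16 mdrun_mpi -s topol.tpr -cpi state.cpt -noappend
```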

Many thanks
Anthony 


Re: [gmx-users] rotating triclinic box

2015-01-08 Thread Tsjerk Wassenaar
Hi Felipe,

Rotate 90 degrees around y:

editconf -rotate 0 90 0

Cheers,

Tsjerk
On Jan 8, 2015 10:32 PM, felipe zapata tifonza...@gmail.com wrote:

 Hi all,
 I have a triclinic box containing several chains of a biopolymer with the
 following crystal information:

 CRYST1  115.000   75.000   75.000  80.37 118.08 114.80 P 1   1

 I want to apply semi-isotropic pressure coupling orthogonal to the chains
 (XY plane), but unfortunately the strands of the polymer are oriented along
 the x-axis instead of the z-axis. How can I change the orientation of the
 box so that the strands are oriented along the z-axis? That is, I want to
 rotate the triclinic box, swapping the x and z axes.

 Best,

 Felipe



Re: [gmx-users] invalid order for directive atomtypes, but only one ligand

2015-01-08 Thread Justin Lemkul



On 1/7/15 4:19 PM, Jonathan Saboury wrote:

Hello all,

I'm getting an error when running grompp: "invalid order for directive
atomtypes".

I used to get this error whenever there were two or more additional .itp's
for ligands. It could be fixed by merging the [atomtypes] sections and
deleting duplicates.

In this case, however, there is only one ligand, so I have no idea where I
should concatenate the [atomtypes].

Getting error:
---
Fatal error: Syntax error - File biotin_GMX.itp, line 3
Last line read: '[ atomtypes ]'
Invalid order for directive atomtypes
---

Commands: http://pastebin.com/raw.php?i=FWA2W2Dd

Zip of all files (2.7MB): http://ge.tt/35Gex882/v/0



You haven't provided system.top, which is where the problem would become 
evident.  There's nothing wrong with biotin_GMX.itp itself, but however it is 
being #included in system.top must be wrong.  A ligand that introduces new 
[atomtypes] must be #included after the parent force field, and prior to any 
[moleculetype] definition.  Force field-level directives must all appear before 
any molecule-level directives.
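
A sketch of a system.top layout that satisfies this ordering (the force-field and solvent includes and the molecule names are illustrative assumptions; only biotin_GMX.itp is from this thread):

```
#include "oplsaa.ff/forcefield.itp"   ; parent force field first
#include "biotin_GMX.itp"             ; ligand [atomtypes] before any
                                      ; [moleculetype] is defined
#include "oplsaa.ff/spc.itp"          ; solvent topology afterwards

[ system ]
Protein-biotin complex in water

[ molecules ]
Protein           1
BIOTIN            1
SOL            5000
```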


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] RDF plot with large g(r) values

2015-01-08 Thread Justin Lemkul



On 1/8/15 4:48 AM, Kester Wong wrote:

Dear all,



My apologies if this question sounds too basic or if it has been covered.

I did some RDF calculations, and when I plotted the figures, the g(r) values
were in the hundreds, whereas the papers that I have seen are all in the
range of 0-12.



Are you comparing equivalent systems, e.g. trying to reproduce previous 
findings?  If not, it's just apples and oranges.  Your result is not necessarily 
unusual.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


Re: [gmx-users] RDF plot with large g(r) values

2015-01-08 Thread André Farias de Moura
RDF values are sensitive to the volume of the system, so if you put the
same solutes inside a larger/smaller box, RDF values change accordingly
(check basic definitions of RDF in simulation handbooks to make sure you
understand this relation).

and even if you have the same size and composition, RDF may become really
large if molecules aggregate.

On Thu, Jan 8, 2015 at 7:48 AM, Kester Wong kester2...@ibs.re.kr wrote:

 Dear all,



 My apologies if this question sounds too basic or if it has been covered.

 I did some RDF calculations, and when I plotted the figures, the g(r)
 values were in the hundreds, whereas the papers that I have seen are all in
 the range of 0-12.

 The x-axis (nm), however, seemed to be correct.



 Regards,

 Kester






-- 
_

Prof. Dr. André Farias de Moura
Department of Chemistry
Federal University of São Carlos
São Carlos - Brazil
phone: +55-16-3351-8090


[gmx-users] Log output of GPU accelerated GROMACS

2015-01-08 Thread Ebert Maximilian
Dear list,

I was wondering why the log file does not always contain the same
information. In one of my configurations I got the following:

GPU timings
-----------------------------------------------------------------------
 Computing:                     Count  Wall t (s)    ms/step       %
-----------------------------------------------------------------------
 Pair list H2D                   1251       0.563      0.450     0.2
 X / q H2D                      50001       6.998      0.140     2.9
 Nonbonded F kernel             47500     212.118      4.466    86.7
 Nonbonded F+ene k.              1250       9.371      7.497     3.8
 Nonbonded F+ene+prune k.        1251       9.759      7.801     4.0
 F D2H                          50001       5.874      0.117     2.4
-----------------------------------------------------------------------
 Total                                    244.683      4.894   100.0
-----------------------------------------------------------------------

Force evaluation time GPU/CPU: 4.894 ms/4.012 ms = 1.220
For optimal performance this ratio should be close to 1!

But I never got this section in any other configuration using GPUs. Is it
only part of the output when there is a problem, such as too much work for
the GPU?

Thank you very much,

Max


Re: [gmx-users] Error : Atomtype not found

2015-01-08 Thread protim chakraborti
Respected Dr. Lemkul
Thanks for the suggestion. I have checked the ffnonbonded.itp and found
that copper is entered out there in the following form and format

; Ions and noble gases (useful for tutorials)
Cu2+    29      63.54600        2.00    A      2.08470e-01    4.76976e+00
Ar      18      39.94800        0.00    A      3.41000e-01    2.74580e-02

Would this not suffice, or do I need to add Cu separately? Or maybe I
have to start over from pdb2gmx itself?

Regards

-- 
Pratim Chakraborti
+919831004707


Re: [gmx-users] Error : Atomtype not found

2015-01-08 Thread Justin Lemkul



On 1/8/15 10:14 AM, protim chakraborti wrote:

Respected Dr. Lemkul
Thanks for the suggestion. I have checked the ffnonbonded.itp and found
that copper is entered out there in the following form and format

; Ions and noble gases (useful for tutorials)
Cu2+    29      63.54600        2.00    A      2.08470e-01    4.76976e+00
Ar      18      39.94800        0.00    A      3.41000e-01    2.74580e-02

Would this not suffice, or do I need to add Cu separately? Or maybe I
have to start over from pdb2gmx itself?



Those parameters are not CHARMM parameters.  They appear to have been copied 
over from OPLS-AA, which had several ions removed due to unknown origins (as was 
Ar).  Do not use these parameters for a CHARMM simulation.  I recommend that 
they be removed entirely, as we did with our CHARMM36 port.


-Justin

--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Performance difference between MPI ranks and OpenMP

2015-01-08 Thread Ebert Maximilian
Hi list,

I have another question regarding performance. Is there any performance
difference if I start a process on an 8-CPU machine with 8 MPI ranks and 1
OpenMP thread, or 4 MPI ranks and 2 OpenMP threads? Both should use all 8
CPUs, right?

Thank you very much,

Max


Re: [gmx-users] Performance difference between MPI ranks and OpenMP

2015-01-08 Thread Carsten Kutzner

On 08 Jan 2015, at 15:38, Ebert Maximilian m.eb...@umontreal.ca wrote:

 Hi list,
 
 I have another question regarding performance. Is there any performance
 difference if I start a process on an 8-CPU machine with 8 MPI ranks and 1
 OpenMP thread, or 4 MPI ranks and 2 OpenMP threads? Both should use all 8
 CPUs, right?
Right, but there is a performance difference (try it out! :)
Normally using just one layer of parallelization is fastest (less overhead), 
i.e.
using just OpenMP threads or just MPI ranks. However, more than 8 or so OpenMP 
threads per rank seldom yield optimal performance, so that a hybrid 
parallelization
approach makes sense in such situations.
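
The two layouts from the question, written out as launch lines (a sketch; the binary and file names assume an MPI build of GROMACS 5.x and are placeholders):

```shell
mpirun -np 8 mdrun_mpi -ntomp 1 -s topol.tpr   # 8 MPI ranks x 1 OpenMP thread
mpirun -np 4 mdrun_mpi -ntomp 2 -s topol.tpr   # 4 MPI ranks x 2 OpenMP threads
```

Both occupy all 8 cores; short benchmark runs of each layout show which is faster for a given system.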

Carsten


 
 Thank you very much,
 
 Max


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Performance difference between MPI ranks and OpenMP

2015-01-08 Thread Ebert Maximilian
The reason I am asking is that I want to use two GPUs and 8 CPUs. For now
I have 2 MPI ranks and 4 OpenMP threads. Is there a way to have 8 MPI ranks
but only use 2 GPUs? I also tried 8 MPI ranks with -gpu_id  but it was
about the same as 2 MPI ranks with 4 OpenMP threads.
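
For reference, one way to express an 8-ranks-on-2-GPUs mapping is a per-rank GPU id string (a sketch; the id string and file names are illustrative assumptions, with each digit assigning one PP rank to a GPU):

```shell
mpirun -np 8 mdrun_mpi -ntomp 1 -gpu_id 00001111 -s topol.tpr
```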

Max

-----Original Message-----
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On behalf of
Carsten Kutzner
Sent: Thursday, 8 January 2015 15:48
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Performance difference between MPI ranks and OpenMP


On 08 Jan 2015, at 15:38, Ebert Maximilian m.eb...@umontreal.ca wrote:

 Hi list,
 
 I have another question regarding performance. Is there any performance 
 difference if I start a process on an 8-CPU machine with 8 MPI ranks and 1 
 OpenMP thread, or 4 MPI ranks and 2 OpenMP threads? Both should use the 8 CPUs, right?
Right, but there is a performance difference (try it out! :) Normally, using 
just one layer of parallelization is fastest (less overhead), i.e. using just 
OpenMP threads or just MPI ranks. However, more than 8 or so OpenMP threads per 
rank seldom yield optimal performance, so a hybrid parallelization approach 
makes sense in such situations.

Carsten


 
 Thank you very much,
 
 Max


Re: [gmx-users] g_tune_pme_mpi on GPU cluster fails

2015-01-08 Thread Ebert Maximilian
I have always read this "few hundred atoms per core" guideline. But how does it 
work in the context of a GPU? For instance, we use the GTX 580 with 512 cores. 
Do they count as cores? The system I am working on has 13,000 atoms. With 8 CPU 
cores and 2 GPUs (1,024 GPU cores in total), how do I count this? Do I have 
1,032 cores for 13,000 atoms? Or do I count them separately, 13,000/8 and 
13,000/1024? And if I had more CPUs per node, could I achieve better performance?

Max

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On behalf of 
Carsten Kutzner
Sent: Thursday, 8 January 2015 15:51
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] g_tune_pme_mpi on GPU cluster fails


On 08 Jan 2015, at 15:32, Ebert Maximilian m.eb...@umontreal.ca wrote:

 Hi Carsten,
 
 I was benchmarking my first system and I do not see any improvement in using 
 more than one GPU node.
Whether you see an improvement or not depends on the size of your system and 
the latency of your interconnect. If it is Gigabit Ethernet, it might be the 
bottleneck.

 In the end I think having a dedicated PME node would make sense after 
 all, since our GPU cluster only consists of 9 nodes. What do you mean 
 by high parallelization? What do you consider high?
If you have so many cores that you end up with just a few hundred atoms per 
core, I would say.

Carsten

 
 Max
 
 -Original Message-
 From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On behalf 
 of Carsten Kutzner
 Sent: Wednesday, 7 January 2015 17:04
 To: gmx-us...@gromacs.org
 Subject: Re: [gmx-users] g_tune_pme_mpi on GPU cluster fails
 
 Hi,
 
 there are two issues here:
 
 a) you must not start multiple copies of g_tune_pme, but a single one.
 g_tune_pme will by itself launch MPI-parallel mdrun processes (the path to 
 the mdrun executable needs to be specified in the MDRUN environment variable; 
 you might need to set others as well, depending on your queueing system - 
 read g_tune_pme -h).
 
 b) g_tune_pme cannot (yet) automatically deal with GPU nodes. With GPUs, 
 separate PME nodes will also only make sense at very high parallelization - 
 how many of these nodes do you want to use in parallel?
 
 Carsten
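Point a) might look like the following in a job script (a sketch only; the paths and rank count are hypothetical, and which environment variables g_tune_pme honours is described in g_tune_pme -h):

```shell
# Run a SINGLE g_tune_pme process; it launches the MPI-parallel mdrun
# benchmarks by itself (paths below are hypothetical).
export MDRUN=/usr/local/gromacs/bin/mdrun_mpi
export MPIRUN=/usr/bin/mpirun
#   g_tune_pme -np 32 -s topol.tpr -launch    # NOT: mpirun -np 32 g_tune_pme
echo "$MDRUN"
```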
 
 
 On 07 Jan 2015, at 15:11, Ebert Maximilian m.eb...@umontreal.ca wrote:
 
 Hi there,
 
 I have again a question regarding our GPU cluster. I tried to g_tune_pme_mpi 
 on the cluster. After starting it across 3 nodes I get the following errors:
 
 
 Command line:
 g_tune_pme_mpi -v -x -deffnm 1G68_run1ns -s ../run100ns.tpr
 
 Reading file ../run100ns.tpr, VERSION 5.0.1 (single precision) 
 Reading file ../run100ns.tpr, VERSION 5.0.1 (single precision) 
 Reading file ../run100ns.tpr, VERSION 5.0.1 (single precision) 
 Reading file ../run100ns.tpr, VERSION 5.0.1 (single precision) Will 
 test 1 tpr file.
 Will test 1 tpr file.
 Will test 1 tpr file.
 Will test 1 tpr file.
 [ngpu-a4-06:13382] [[25869,1],0] ORTE_ERROR_LOG: A message is 
 attempting to be sent to a process whose contact information is 
 unknown in file rml_oob_send.c at line 104 [ngpu-a4-06:13382] 
 [[25869,1],0] could not get route to [[INVALID],INVALID] 
 [ngpu-a4-06:13382] [[25869,1],0] ORTE_ERROR_LOG: A message is 
 attempting to be sent to a process whose contact information is 
 unknown in file base/plm_base_proxy.c at line 81 [ngpu-a4-06:13385] 
 [[25869,1],1] ORTE_ERROR_LOG: A message is attempting to be sent to a 
 process whose contact information is unknown in file rml_oob_send.c 
 at line 104 [ngpu-a4-06:13385] [[25869,1],1] could not get route to 
 [[INVALID],INVALID] [ngpu-a4-06:13385] [[25869,1],1] ORTE_ERROR_LOG: A 
 message is attempting to be sent to a process whose contact information is 
 unknown in file base/plm_base_proxy.c at line 81 [ngpu-a4-06:13384] 
 [[25869,1],2] ORTE_ERROR_LOG: A message is attempting to be sent to a 
 process whose contact information is unknown in file rml_oob_send.c at line 
 104 [ngpu-a4-06:13384] [[25869,1],2] could not get route to 
 [[INVALID],INVALID] [ngpu-a4-06:13384] [[25869,1],2] ORTE_ERROR_LOG: A 
 message is attempting to be sent to a process whose contact information is 
 unknown in file base/plm_base_proxy.c at line 81 .
 = PBS: job killed: walltime 3641 exceeded limit 3600
 mpirun: killing job...
 
 [ngpu-a4-06:13356] [[25869,0],0]-[[25869,1],3] mca_oob_tcp_msg_recv: 
 readv failed: Connection reset by peer (104) [ngpu-a4-06:13356] 
 [[25869,0],0]-[[25869,1],2] mca_oob_tcp_msg_recv: readv failed:
 Connection reset by peer (104) [ngpu-a4-06:13356] 
 [[25869,0],0]-[[25869,1],0] mca_oob_tcp_msg_recv: readv failed:
 Connection reset by peer (104) [ngpu-a4-06:13356] 
 [[25869,0],0]-[[25869,1],1] mca_oob_tcp_msg_recv: readv failed:
 Connection reset by peer (104)
 
 
 Any idea what is wrong?
 
 Thank you very much!
 
 Max
 

Re: [gmx-users] g_tune_pme_mpi on GPU cluster fails

2015-01-08 Thread Carsten Kutzner

On 08 Jan 2015, at 15:56, Ebert Maximilian m.eb...@umontreal.ca wrote:

 I always read this few hundred atoms per core.
I mean CPU cores.

 But how is this in the context of a GPU? For instance we use the GTX580 with 
 512 cores. Do they count as cores? Because the system I am working on has 
 13,000 atoms. With 8 CPU cores and 2 GPUs (with a total of 1024 cores) how do 
 I count this? Do I have 1032 cores for 13,000 atoms? Do I count them 
 individually? 13,000/8 and 13,000/1024? So that if I would have more CPUs per 
 node I could achieve better performance?
Your system is quite small, so I would expect that you will get the best 
performance using a single node only, possibly with 1 or 2 (not sure, though) 
GPUs.

Carsten


 

Re: [gmx-users] g_tune_pme_mpi on GPU cluster fails

2015-01-08 Thread Ebert Maximilian
Hi Carsten,

I was benchmarking my first system and I do not see any improvement when using 
more than one GPU node. In the end I think having a dedicated PME node would 
make sense after all, since our GPU cluster only consists of 9 nodes. What 
do you mean by high parallelization? What do you consider high?

Max

 
 -Original Message-
 From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
 [mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On behalf 
 of Ebert Maximilian
 Sent: Wednesday, 7 January 2015 14:43
 To: gmx-us...@gromacs.org
 Subject: Re: [gmx-users] Working on a GPU cluster with GROMACS 5
 
 Hi Carsten,
 
 thanks again for your reply. The way our cluster is set up is that you ask for 
 GPUs using the ppn option, not CPUs. Therefore, I put 4 there. But to 
 rule out the possibility that someone is actually using the node, I asked for 
 7 GPUs (so the entire node) but with -gpu_id assigned just the first 4 to 
 GROMACS. I still get the same error. I also tried -gpu_id 00 or -gpu_id  
 to change the mapping and use only a single GPU, but I always get:
 
 NOTE: You assigned GPUs to multiple MPI processes.
 
 ---
 Program gmx_mpi, VERSION 5.0.1
 Source code file: 
 /RQusagers/rqchpbib/stubbsda/gromacs-5.0.1/src/gromacs/gmxlib/cuda_too
 ls/pmalloc_cuda.cu, line: 61
 
 Fatal error:
 cudaMallocHost of size 4 bytes failed: all CUDA-capable devices are 
 busy or unavailable
 
 For more information and tips 

Re: [gmx-users] Performance difference between MPI ranks and OpenMP

2015-01-08 Thread Carsten Kutzner

On 08 Jan 2015, at 15:51, Ebert Maximilian m.eb...@umontreal.ca wrote:

 The reason I am asking is because I want to use two GPUs and 8 CPUs. So for 
 now I have 2 MPI ranks and 4 OpenMP threads. Is there a way to have 8 MPI 
 ranks but only use 2 GPUs? I also tried 8 MPI ranks with -gpu_id  but 
 it was about the same as 2 MPI ranks with 4 OpenMP.
You did it correctly, and it may very well be that the performance difference 
is not that large.

Carsten


 
 Max
 




[gmx-users] rotating triclinic box

2015-01-08 Thread felipe zapata
Hi all,
I have a triclinic box containing several chains of a biopolymer with the
following crystal information:

CRYST1  115.000   75.000   75.000  80.37 118.08 114.80 P 1   1

I want to apply semi-isotropic pressure coupling orthogonal to the chains
(the xy plane), but unfortunately the strands of the polymer are oriented along
the x-axis instead of the z-axis. How can I change the orientation of the
box so that the strands are oriented along the z-axis? That is, I want to
rotate the triclinic box, swapping the x and z axes.

Best,

Felipe
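One possible approach (an untested sketch: gmx editconf's -rotate takes rotation angles in degrees about the x, y, and z axes; the file names are hypothetical, and the rotated coordinates may need a freshly defined box afterwards to satisfy GROMACS's triclinic box conventions):

```shell
# Rotate 90 degrees about the y-axis so strands formerly along x point along z
# (file names hypothetical):
#   gmx editconf -f chains.gro -o rotated.gro -rotate 0 90 0
# Then define a new, GROMACS-legal triclinic box around the rotated system:
#   gmx editconf -f rotated.gro -o boxed.gro -bt triclinic -d 1.0
echo "rotate 0 90 0"
```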