Re: [gmx-users] using_gromacs_with_openmpi

2015-04-24 Thread Mark Abraham
Hi,

Asking "are you done yet?" and assembling the message while it is coming
off the network is work for a CPU... CPU utilization (e.g. as seen by the htop
utility) is necessary for maximal productivity, but it isn't a sensitive
indicator of whether the work being done is useful.

Mark

On Wed, Apr 22, 2015 at 11:02 PM, Hossein H haji...@gmail.com wrote:

 But I wonder: if the performance degrades due to network latency, why does
 the mdrun process use 100% CPU on each node?

 On Thu, Apr 23, 2015 at 1:05 AM, Justin Lemkul jalem...@vt.edu wrote:

 
 
  On 4/22/15 4:24 PM, Hossein H wrote:
 
  the performance degrades even when I only use two nodes!!! and the mdrun
  process uses almost 100% of CPU on each node
 
 
  Like I said, any benefit you might theoretically get from multiple CPUs is
  being undermined by latency in the gigabit ethernet connection.  It's
  generally not adequate for high-performance MD simulations.
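 
  (A quick way to see this, sketched from the command already used in this
  thread, with the same host and file names: benchmark the same .tpr on one
  node and then on two, and compare the ns/day figures reported at the end
  of each mdrun .log file:
 
      mpirun -np 4 --host node01 mdrun -deffnm input -v
      mpirun -np 8 --host node01,node02 mdrun -deffnm input -v
 
  If the two-node run is slower, the interconnect is the bottleneck.)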
 
 
  -Justin
 
   On Thu, Apr 23, 2015 at 12:47 AM, Justin Lemkul jalem...@vt.edu
 wrote:
 
 
 
  On 4/22/15 4:14 PM, Hossein H wrote:
 
    isn't gigabit Ethernet adequate even for 2 nodes?
 
 
   Well, what does your benchmarking show you?  Likely the performance
  degrades because the interconnect is simply too slow to benefit from
  additional CPU horsepower.
 
  -Justin
 
 
On Thu, Apr 23, 2015 at 12:36 AM, Justin Lemkul jalem...@vt.edu
  wrote:
 
 
 
 
  On 4/22/15 3:59 PM, Hossein H wrote:
 
Dear GROMACS users and developers
 
 
  I've compiled GROMACS 4.5.5 with gcc and the *enable-mpi* option for a
  small cluster, e.g. 4 nodes on a 1GbE network. The code compiled without
  problems and I can use it in sequential and parallel mode (using the
  mpirun command), but the performance over the network is very poor;
  indeed, I can run jobs on one node faster than on 4 nodes.
 
  Because each node has 4 cores, I run jobs over the network using the
  following command:
 
     mpirun -np 16 --host node01,node02,node03,node04 mdrun -deffnm input -v
 
  Does anyone have any idea why the performance is so poor?
 
 
  Because you're using gigabit ethernet as your connection.  That's not
  adequate for parallelization across machines.
 
  -Justin
 

Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

2015-04-24 Thread Kutzner, Carsten
Hi JJ,

 On 24 Apr 2015, at 03:02, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg wrote:
 
 Hi Carsten,
 
 In this case of 2 x GTX 980, as far as I can tell, the clock speed and GPU RAM
 are significantly lower. For simulations of more
The GTX 980s have a clock rate of about 1200 MHz, whereas the clock rate of
the Tesla K40 is 732 MHz. The latter has more CUDA cores, however.
A rough estimate of how well the GROMACS short-ranged kernels perform on a
card is the product of clock rate and CUDA cores. By this metric, the GTX 980
is slightly better than the K40. In addition, the GTX 980 has the newer
Maxwell-generation chip, yielding somewhat higher performance due to better
instruction scheduling.
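
(As a worked example; the core counts below are from NVIDIA's published
specifications, not from this thread: the GTX 980 has 2048 CUDA cores and the
K40 has 2880. With the clock rates quoted above, the metric gives
2048 x 1200 = 2.46 million for the GTX 980 versus 2880 x 732 = 2.11 million
for the K40, i.e. roughly 17% in favor of the GTX 980, consistent with the
~14% measured difference reported below.)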

The amount of GPU memory is almost never an issue with GROMACS unless you
want to run enormously large MD systems. The largest system that we
benchmarked had 12 million atoms and used 1.2 GB of GPU memory, so you could
even run a couple of these on a 980.

In our tests with a 2 million atom system on a node with 2x E5-2680v2
processors, using two 980s resulted in 14% higher GROMACS performance
compared to using two K40s.

Note that with the money you save by buying GTX instead of Tesla cards you
can get another node to run another simulation on :)

Carsten


  than a million atoms, would it be advisable to go for more cores or more
 clock speed, or is having both the best-case scenario?
 
 JJ
 
 -Original Message-
 Date: Thu, 23 Apr 2015 08:03:46 +
 From: Kutzner, Carsten ckut...@gwdg.de
 To: gmx-us...@gromacs.org gmx-us...@gromacs.org
 Subject: Re: [gmx-users] Optimal GPU setup for workstation with
   Gromacs 5
 
 Hi,
 
 On 23 Apr 2015, at 08:03, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg 
 wrote:
 
 Dear all,
 
 My workstation specs are 2 x Intel Xeon E5-2695v2 2.40 GHz, 12 Cores. I 
 would like to combine this with an optimal GPU setup for Gromacs 5 running 
 simulations with millions of atoms. May I know what are the recommended 
 setups? My vendor proposed doing a dual K40 Tesla GPU setup, would that be 
 optimal?
 I would propose to put two GTX 980 into that machine instead.
 This will give you the same GROMACS performance at a fraction of the price.
 
 Carsten
 
 
 Best Regards,
 JJ

--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa



Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

2015-04-24 Thread Jingjie Yeo (IHPC)
Hi Carsten,

Thank you so much for the information! I checked with my vendor for his opinion
on the GTX, and his reply was that GTX cards will not minimize calculation
errors the way Tesla and Quadro cards will. Do you think this is an issue?

JJ



[gmx-users] assign disulfide bond

2015-04-24 Thread HongTham
Hi all gmx user,
I want to ask if there is a way to specify the residue index to assign a
disulfide bond between two distant cysteines.
Because my protein is a homology model, the two residues are far apart in the
initial structure; as a result, when I run the pdb2gmx command, this disulfide
bond cannot be formed (the disulfide bond between these two cysteines is
formed in the wild-type protein). I also took a look at specbond.dat, but no
residue index is specified there; there is only a description of the residue
names and atom names.

CYS SG  1   CYS SG  1   0.2 CYS2CYS2
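
(For reference, the fields on a specbond.dat line are: first residue name,
atom name, and number of bonds; the same three for the partner; the bond
length in nm; and the residue names to use after the bond is made. pdb2gmx
only creates the special bond when the two atoms are near the listed length
(0.2 nm here), so one possible workaround, sketched with an arbitrary larger
value, is to enlarge it to span the gap in the homology model and
energy-minimize afterwards:

CYS SG  1   CYS SG  1   0.5 CYS2CYS2

Whether the resulting starting structure is acceptable is a modeling
decision.)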

Could anybody help me?

Thank you so much.

Hongtham


Re: [gmx-users] on the regression tests

2015-04-24 Thread Mark Abraham
On Thu, Apr 23, 2015 at 6:29 AM, Brett brettliu...@163.com wrote:

 Dear All,

 When I install gromacs-5.0.3, after the make check step, it indicates:
 Regression tests have not been run. If you want to run them from the build
 system, get the correct version of the regression tests package and set
 REGRESSIONTEST_PATH in CMake to point to it, or set REGRESSIONTEST_DOWNLOAD=ON.

 Without doing the regression tests, I completed the sudo make install and
 source /usr/local/gromacs/bin/GMXRC steps, thus completing the gromacs
 installation.

 Will you please let me know whether there is any side-effect on my gromacs
 installation from skipping the regression tests?


None, except that you can have lower confidence that GROMACS will work as
intended. Problems are fairly unlikely, however (depending on how weird your
distro and compiler are).
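
For reference, the tests can still be run later by re-configuring the build
with the download option and repeating make check (this assumes network
access from the build host):

    cmake .. -DREGRESSIONTEST_DOWNLOAD=ON
    make check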

Mark


Re: [gmx-users] on regression tests

2015-04-24 Thread Mark Abraham
Hi,

Yes, this is confusing. You can ignore the advice to check mdp differences.

Mark

On Fri, Apr 24, 2015 at 7:51 AM, Brett brettliu...@163.com wrote:

 Dear All,

 For my installed gromacs, I just ran the regression tests. It indicates
 "PASSED but check mdp file differences", "All 12 rotation tests PASSED",
 "All 0 extra tests PASSED", "All 42 pdb2gmx tests PASSED".

 Will you please advise whether I can use my installed gromacs without
 concern? What does "PASSED but check mdp file differences" mean?

 I am looking forward to getting your reply.

 Brett



Re: [gmx-users] Gromacs 5.0.4 Installation error

2015-04-24 Thread Mark Abraham
Hi,

This can't happen with a correct toolchain and unmodified source files. I
think you should unpack the tarball fresh and try again.

Mark

On Wed, Apr 22, 2015 at 2:46 PM, Sarath Kumar Baskaran 
bskumar.t...@gmail.com wrote:

 Hi all,

 While trying to install GROMACS 5.0.4 with the following configuration
 arguments, I am getting the following error during make. I don't know how
 to solve it; please help me.

 # cmake .. -DGMX_THREAD_MPI=ON
 -DCMAKE_INSTALL_PREFIX=/usr/local/gromacs-5.0.4-gpu -DGMX_GPU=ON
 -DGMX_BUILD_OWN_FFTW=ON -DGMX_DEFAULT_SUFFIX=OFF
 -DGMX_BINARY_SUFFIX=-5.0.4-gpu -DGMX_LIBS_SUFFIX=-5.0.4-gpu
 -DGMX_PREFER_STATIC_LIBS=ON -DBUILD_SHARED_LIBS=OFF

 -- The C compiler identification is GNU 4.8.3
 -- The CXX compiler identification is GNU 4.8.3
 -- Check for working C compiler: /usr/bin/cc
 -- Check for working C compiler: /usr/bin/cc -- works
 -- Detecting C compiler ABI info
 -- Detecting C compiler ABI info - done
 -- Check for working CXX compiler: /usr/bin/c++
 -- Check for working CXX compiler: /usr/bin/c++ -- works
 -- Detecting CXX compiler ABI info
 -- Detecting CXX compiler ABI info - done
 -- Looking for NVIDIA GPUs present in the system
 -- Number of NVIDIA GPUs detected: 1
 -- Found CUDA: /usr/local/cuda (found suitable version 7.0, minimum
 required is 4.0)
 -- Checking for GCC x86 inline asm
 -- Checking for GCC x86 inline asm - supported
 -- Detecting best SIMD instructions for this CPU
 -- Detected best SIMD instructions for this CPU - SSE4.1
 -- Try OpenMP C flag = [-fopenmp]
 -- Performing Test OpenMP_FLAG_DETECTED
 -- Performing Test OpenMP_FLAG_DETECTED - Success
 -- Try OpenMP CXX flag = [-fopenmp]
 -- Performing Test OpenMP_FLAG_DETECTED
 -- Performing Test OpenMP_FLAG_DETECTED - Success
 -- Found OpenMP: -fopenmp
 -- Performing Test CFLAGS_WARN
 -- Performing Test CFLAGS_WARN - Success
 -- Performing Test CFLAGS_WARN_EXTRA
 -- Performing Test CFLAGS_WARN_EXTRA - Success
 -- Performing Test CFLAGS_WARN_REL
 -- Performing Test CFLAGS_WARN_REL - Success
 -- Performing Test CFLAGS_WARN_UNINIT
 -- Performing Test CFLAGS_WARN_UNINIT - Success
 -- Performing Test CFLAGS_EXCESS_PREC
 -- Performing Test CFLAGS_EXCESS_PREC - Success
 -- Performing Test CFLAGS_COPT
 -- Performing Test CFLAGS_COPT - Success
 -- Performing Test CFLAGS_NOINLINE
 -- Performing Test CFLAGS_NOINLINE - Success
 -- Performing Test CXXFLAGS_WARN
 -- Performing Test CXXFLAGS_WARN - Success
 -- Performing Test CXXFLAGS_WARN_EXTRA
 -- Performing Test CXXFLAGS_WARN_EXTRA - Success
 -- Performing Test CXXFLAGS_WARN_REL
 -- Performing Test CXXFLAGS_WARN_REL - Success
 -- Performing Test CXXFLAGS_EXCESS_PREC
 -- Performing Test CXXFLAGS_EXCESS_PREC - Success
 -- Performing Test CXXFLAGS_COPT
 -- Performing Test CXXFLAGS_COPT - Success
 -- Performing Test CXXFLAGS_NOINLINE
 -- Performing Test CXXFLAGS_NOINLINE - Success
 -- Looking for include file unistd.h
 -- Looking for include file unistd.h - found
 -- Looking for include file pwd.h
 -- Looking for include file pwd.h - found
 -- Looking for include file dirent.h
 -- Looking for include file dirent.h - found
 -- Looking for include file time.h
 -- Looking for include file time.h - found
 -- Looking for include file sys/time.h
 -- Looking for include file sys/time.h - found
 -- Looking for include file io.h
 -- Looking for include file io.h - not found
 -- Looking for include file sched.h
 -- Looking for include file sched.h - found
 -- Looking for include file regex.h
 -- Looking for include file regex.h - found
 -- Looking for C++ include regex
 -- Looking for C++ include regex - not found
 -- Looking for posix_memalign
 -- Looking for posix_memalign - found
 -- Looking for memalign
 -- Looking for memalign - found
 -- Looking for _aligned_malloc
 -- Looking for _aligned_malloc - not found
 -- Looking for gettimeofday
 -- Looking for gettimeofday - found
 -- Looking for fsync
 -- Looking for fsync - found
 -- Looking for _fileno
 -- Looking for _fileno - not found
 -- Looking for fileno
 -- Looking for fileno - found
 -- Looking for _commit
 -- Looking for _commit - not found
 -- Looking for sigaction
 -- Looking for sigaction - found
 -- Looking for sysconf
 -- Looking for sysconf - found
 -- Looking for rsqrt
 -- Looking for rsqrt - not found
 -- Looking for rsqrtf
 -- Looking for rsqrtf - not found
 -- Looking for sqrtf
 -- Looking for sqrtf - not found
 -- Looking for sqrt in m
 -- Looking for sqrt in m - found
 -- Looking for clock_gettime in rt
 -- Looking for clock_gettime in rt - found
 -- Checking for sched.h GNU affinity API
 -- Performing Test sched_affinity_compile
 -- Performing Test sched_affinity_compile - Success
 -- Check if the system is big endian
 -- Searching 16 bit integer
 -- Looking for sys/types.h
 -- Looking for sys/types.h - found
 -- Looking for stdint.h
 -- Looking for stdint.h - found
 -- Looking for stddef.h
 -- Looking for stddef.h - found
 -- Check size of unsigned short
 -- Check size of unsigned short - done
 -- Using 

Re: [gmx-users] Regarding Topolbuild1_3

2015-04-24 Thread Ray, Bruce D
 Date: Fri, 24 Apr 2015 07:10:00 +
 From: Deepali Sharma deepa...@dut.ac.za
 To: gromacs.org_gmx-users@maillist.sys.kth.se
   gromacs.org_gmx-users@maillist.sys.kth.se
 Subject: [gmx-users] Regarding Topolbuild1_3
 
 Hi,
 
 I installed topolbuild1_3, ran the following command:
 
 /Desktop/np/topolbuild1_3/src$ ./topolbuild -dir 
 /home/deepali/Desktop/np/topolbuild1_3/src -ff oplsaa -n ZnO -charge
 
 Fatal error.
 Source code file: atom_types.c, line: 87
 Cannot open file 
 /home/deepali/Desktop/np/topolbuild1_3/src/ATOMTYPE_OPLSAA1.DEF
 
 Unable to find the origin of the error.
 
 Deepali Sharma
 Postdoctoral Fellow
 Department of Chemistry
 Durban University of Technology (DUT)
 Durban, SA


The error message means that the directory containing the force field
definition files is not the directory you gave with the -dir specification.
Judging from the directory that you gave, those files are probably located in
/home/deepali/Desktop/np/topolbuild1_3/dat
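
If so, a corrected invocation (a sketch based on the original command, with
only the -dir argument changed) would be:

    ./topolbuild -dir /home/deepali/Desktop/np/topolbuild1_3/dat -ff oplsaa -n ZnO -charge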

I hope that helps.


-- 
Bruce D. Ray, Ph.D.
Associate Scientist
IUPUI
Physics Dept.
402 N. Blackford St., Rm. LD-061
Indianapolis, IN  46202




[gmx-users] gmx_simd_check_and_reset_overflow undefined for sparc64 hpc ace

2015-04-24 Thread Matthew Harrigan
Hi all, 
I'm having trouble compiling GROMACS (from git master) for sparc64 SIMD. It
fails with:

gromacs/src/gromacs/mdlib/tpi.cpp, line 470: error: identifier
gmx_simd_check_and_reset_overflow is undefined
gmx_simd_check_and_reset_overflow();

It looks like this should be defined in simd/impl_sparc64_hpc_ace.h but isn't.

Has anyone run into this? I noticed that for some of the SIMD implementations
(IBM QPX, VMX, VSX), that function just returns 0. Is that appropriate here?
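
(A sketch of the stub I have in mind, mirroring those implementations;
whether silently reporting "no overflow" is semantically acceptable for
HPC-ACE is something the developers would have to confirm:

    /* No hardware overflow tracking available in this implementation,
     * so always report that no overflow has occurred. */
    static gmx_inline int
    gmx_simd_check_and_reset_overflow(void)
    {
        return 0;
    }
)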

Thanks, 
Matt 


Re: [gmx-users] membrane protein simulation

2015-04-24 Thread Mostafa Javaheri
Hi Tsjerk,

Yes, that would be very helpful. Thanks for your help.

Best regards,

Mostafa

On Thu, Apr 23, 2015 at 9:40 AM, Tsjerk Wassenaar tsje...@gmail.com wrote:

 Hi Mostafa,

 We have a complete simulation system of bacteriorhodopsin in the purple
 membrane, which you can use as basis for your simulations if you want.

 Best,

 Tsjerk
 On Apr 22, 2015 10:08 PM, Mostafa Javaheri javaheri.grom...@gmail.com
 wrote:

  Dear Justin
   I am going to simulate a homotrimeric transmembrane protein. Based on the
   crystallographic structures, there are 7 phosphatidylglycerol phosphate
   (PGP) lipids per monomer, and also 3 glycolipid molecules (S-TGA-1, or
   3-HSO3-Galpβ1-6ManpR1-2GlcpR-1-archaeol) located inside the trimer on the
   extracellular side of the membrane. In the GROMACS membrane protein
   tutorial, 1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC) is
   introduced as the standard lipid, so should I treat the PGPs and
   glycolipids as ligands and go through the protein-ligand complex tutorial?
   Or treat the glycolipids as ligands, continue the membrane protein
   tutorial, and use DPPC instead of PGP?
   Is it OK to replace PGP with DPPC from the standpoint of the simulation?
   Sincerely

Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

2015-04-24 Thread Jingjie Yeo (IHPC)
Hi Carsten,

Thank you so much for the information! It seems the GTX is the way to go. One
last question: regarding the Maxwell architecture, would there be any kind of
backward-compatibility issue, say if I need to use both GROMACS 4.6 and 5.0,
since it is the newest architecture?

JJ

-Original Message-
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se 
[mailto:gromacs.org_gmx-users-boun...@maillist.sys.kth.se] On Behalf Of 
Kutzner, Carsten
Sent: Friday, April 24, 2015 5:51 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] Optimal GPU setup for workstation with Gromacs 5

Hi JJ,

 On 24 Apr 2015, at 10:53, Jingjie Yeo (IHPC) ye...@ihpc.a-star.edu.sg wrote:
 
 Hi Carsten,
 
 Thank you so much for the information! I checked with my vendor on his 
 opinions on the GTX and his reply was that GTX cards will not minimize 
 calculation errors the way Tesla and Quadro cards will. Do you think this is 
 an issue?
The GeForce cards do not offer ECC memory, so they cannot pick up memory
errors occurring on the GPU (which are, however, very unlikely). Run an
extensive memory check (e.g. memtestCL) before first using the cards. We have
tested nearly 300 GeForce cards, and only 7 of them had memory problems; we
replaced those.

Carsten


 
 JJ
 

[gmx-users] pdb2gmx and periodic molecule

2015-04-24 Thread Alex
Ahoy,

What I have here is a 100% precisely set up periodic ssDNA chain of six
residues without termini. The goal is to get the 'polymerization' across the
box, so I boxed the chain and set up the periodicity pretty much perfectly.

To test: translating the structure in the direction of periodicity by the
periodicity constant results in bonds perfectly recognized by tools like VMD
or PyMOL (in the concatenated structure).

When running pdb2gmx on the boxed structure, the periodicity seems to be
ignored, and then, expectedly, I get a fatal error with a custom FF that
works fine on finite chains. x2top sees PBC, but we're not using it here.
So, no topology in sight.

Any ideas?

Thanks,

Alex

p.s. I can post the test structure if there's any doubt about periodicity
of the structure itself.




[gmx-users] About Analysis tool

2015-04-24 Thread vidhya sankar
Dear Justin, thank you for your previous reply.
I am running dynamics for a CNT wrapped by cyclic peptides. I am stacking the
cyclic peptides beyond hydrogen-bond distance. At the end of the simulation,
the assembly has collapsed.
Is there any tool to list and analyze the interaction energy (perhaps
decomposed over the simulation) between the cyclic peptides, and between the
cyclic peptides and the CNT?

Thanks in advance,
S. Vidhyasankar
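
(For reference, the standard GROMACS route for this kind of question is
energy groups; the group names below are hypothetical index groups and the
file name is the default, so adjust to your setup. Declare the groups in
the .mdp:

    energygrps = Peptides CNT

then re-run mdrun, or post-process an existing trajectory with mdrun -rerun,
and extract the pairwise short-range terms, e.g. Coul-SR:Peptides-CNT and
LJ-SR:Peptides-CNT, with:

    g_energy -f ener.edr -o interaction.xvg

Note this decomposes only the short-range nonbonded terms, not the PME
long-range contribution.)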

[gmx-users] help on Graphene Nano Sheets

2015-04-24 Thread Marcello Cammarata

Hi people,
I was able to run the graphene layer sheet by following your advice.
I had a problem with the dihedral angles; I solved it by changing, in the
.top file, the funct parameter from 1 to 3, as shown below:

.
[ dihedrals ]
;  ai   aj   ak   al   funct   c0   c1   c2   c3   c4   c5
    6    1    2    3   3
    2    1    6    5   3
    2   18887         3
    1    2    3    4   3
...

and also by changing, in the .rtp file, all_dihedrals from 1 to 0:

..
[ bondedtypes ]
; bonds  angles  dihedrals  impropers  all_dihedrals  nrexcl  HH14  RemoveDih
    1      1        3           1            0           3      1       0
..

But when I run the mdrun step, after the minimization the model is
destroyed: all the graphene atoms are spread around the monitor and are no
longer connected as in the original geometry.
I would like some advice. What is wrong? How do I have to set the dihedral
angles?

Thanks.


On 23/04/2015 20:04, abhijit Kayal wrote:

   Hi Marcello,

 Copy the oplsaa.ff directory to your working directory. Then in the
 ffnonbonded.itp file add the following lines:

   opls_995   C   6   12.01100   0.000   A   3.4e-01   3.61200e-01
   opls_996   C   6   12.01100   0.000   A   3.4e-01   3.61200e-01
   opls_997   C   6   12.01100   0.000   A   3.4e-01   3.61200e-01

 Then in the atomname2type.n2t file add the following lines:

   C   opls_995   0   12.011   2   C 0.142   C 0.142
   C   opls_996   0   12.011   3   C 0.142   C 0.142   C 0.142
   C   opls_997   0   12.011   1   C 0.142

 Then use g_x2top. This will work.

On Thu, Apr 23, 2015 at 10:54 PM, Alex nedoma...@gmail.com wrote:


 I think we've covered this when I was having the same issue. If you're
 trying to simulate a multi-molecule system, then just copy the entire
 force field .ff folder to your local directory and modify the following
 files:

 ffbonded.itp
 ffnonbonded.itp
 atomname2type.n2t

No need to create rtp entries for graphene/nanotubes, so you can
ignore that part of Andrea's tutorial.

 Instead of just C as in his tutorial, come up with a unique label to avoid
 conflicts, which should then also be used in your PDB. Then what I would do
 is create a topology entry for just graphene and convert it to a
 stand-alone itp by stripping off the system definitions. This is basically
 opening the top file, removing those definitions, and resaving as *.itp.

 After that, you can just use pdb2gmx on the remaining parts of the system
 and complete the system topology by hand. I know, this sounds like a lot
 of manual tweaking, but this is the paradigm.

Good luck!

Alex


 MC I ran a graphene nanolayer using the description below (from a previous
 mail).
 MC Now I get another error. It looks like there is a conflict in the force
 MC field setup; can someone suggest how I should modify it?
 MC The force field file came from Minoia's advice, reported on the web
 site!
 MC
 MC Program grompp, VERSION 4.6.7
 MC Source code file:
 MC /home/marcello/DATI/software/gromacs-4.6.7/src/kernel/topio.c, line: 752
 MC
 MC Fatal error:
 MC Syntax error - File forcefield.itp, line 31
 MC Last line read:
 MC '1   3   yes 0.5 0.5'
 MC Found a second defaults directive.









--
Marcello Cammarata, Ph.D.
3208790796



Re: [gmx-users] help on Graphene Nano Sheets

2015-04-24 Thread Alex
Andrea Minoia's tutorial describes the setup of angles, which in the case of
graphene is extremely simple from the geometry standpoint. There is no need
for guesswork: just add the proper entries under [ dihedraltypes ] in
ffbonded.itp, and x2top will take care of the rest.
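
(A sketch of such an entry, assuming your n2t file assigns carbons with bond
type C as in the earlier post; the funct 3 Ryckaert-Bellemans coefficients
here are borrowed from the standard OPLS aromatic X-CA-CA-X torsion, and
whether they are appropriate for graphene is a modeling decision:

    [ dihedraltypes ]
    ;  i   j   k   l   func     c0        c1         c2        c3       c4       c5
       X   C   C   X    3    30.33400   0.00000  -30.33400   0.00000  0.00000  0.00000
)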
It's also a pretty good idea to actually read the manual on the definition of 
angles and dihedrals.

Energy minimization destroys the sheet, because there's a mess in your
setup.

Alex



Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 132, Issue 117

2015-04-24 Thread Simon Longela
Hi

I am trying to launch a simulation of ameloblastin (421 amino acids).

This will take a long time.

Can you help me estimate how much time I should request as wall time in the
script below?

I used 500 hours; the job ran, then the simulation stopped after that time.
When I increased it to 1700 hours, the simulation never started!

I have 68,000 CPU hours remaining on my account.

Please advise.

Thanks.

SCRIPT:

#!/bin/bash
#PBS -lnodes=2:ppn=16
#PBS -lwalltime=500:00:00
#PBS -lpmem=5500MB

# Set up a scratch working directory and copy the inputs there
workdir=/global/work/$USER/$PBS_JOBID
mkdir -p $workdir
cd $PBS_O_WORKDIR
cp md.mdp nvt.gro nvt.cpt topol.top $workdir
cd $workdir

module load gromacs/4.5.5
grompp -f md.mdp -c nvt.gro -t nvt.cpt -p topol.top -o md_0_1.tpr -maxwarn 10 -zero
mdrun -deffnm md_0_1 -nt 8

echo "Info from the system"
echo "PWD";           ls -la $PWD
echo "Workdir";       ls -la $workdir
echo "DIR";           ls -la
echo "PBS_O_WORKDIR"; ls -la $PBS_O_WORKDIR

echo "Start copy data ..."
cp * $PBS_O_WORKDIR
echo "Check your data and do not forget to remove $workdir"
#rm -rf $workdir



Simon


Re: [gmx-users] gromacs.org_gmx-users Digest, Vol 132, Issue 117

2015-04-24 Thread Justin Lemkul


Please use a real subject line rather than generically replying to the digest.

On 4/24/15 1:32 PM, Simon Longela wrote:

Hi

I am trying to launch a simulation of ameloblastin (421 amino acids).

This will take a long time.

Can you help me estimate how much time I should request as wall time in
the script below?



Run a short benchmark.  Even a few thousand steps would be enough.
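
For example (the numbers here are purely illustrative): set nsteps to a few
thousand in md.mdp, run it, and read the ns/day figure at the end of the
mdrun log. If the benchmark reports 10 ns/day and you need 100 ns, that is
100 / 10 = 10 days, i.e. 240 h of wall time, so you can size the request
(and your remaining CPU-hour budget) accordingly.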



I used 500 hours; the job ran, then the simulation stopped after that time [...]


This is what checkpointing is for.  It doesn't matter if the job stops because
of a wallclock limit; mdrun -cpi will pick up from where it left off.


-Justin





--
==

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalem...@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==


[gmx-users] Regarding Topolbuild1_3

2015-04-24 Thread Deepali Sharma
Hi,

I installed topolbuild1_3, ran the following command:

/Desktop/np/topolbuild1_3/src$ ./topolbuild -dir 
/home/deepali/Desktop/np/topolbuild1_3/src -ff oplsaa -n ZnO -charge

Fatal error.
Source code file: atom_types.c, line: 87
Cannot open file /home/deepali/Desktop/np/topolbuild1_3/src/ATOMTYPE_OPLSAA1.DEF

Unable to find the origin of the error.

Deepali Sharma
Postdoctoral Fellow
Department of Chemistry
Durban University of Technology (DUT)
Durban, SA







[gmx-users] Command-Line Options for Cray XK7 Nodes

2015-04-24 Thread Mitchell Dorrell
Hi all,
How can I find the command-line options used for the benchmarks at the end
of the http://www.gromacs.org/GPU_acceleration page?

I'm running a similarly-sized simulation (200k atoms) on Titan, which is
also XK7/K20X. After many attempts that did not run at all (I'm new to
Gromacs), I've settled on this so far, using 64 nodes, which gets me 57.581
ns/day:

aprun -n 512 -S 4 -j 1 -d 1 mdrun_mpi -v -dlb yes -gpu_id 00 -npme 128

For reference: Each node has a 16-core Bulldozer processor and an nVidia
GPU. The processor is structured as shown here:
https://www.olcf.ornl.gov/kb_articles/xk7-cpu-description/

Thank you for your help!
Mitchell Dorrell