Re: [gmx-users] Gromacs on Stampede

2013-10-12 Thread Arun Sharma
Hello,

I have a question about running gromacs utilities on Stampede, and hopefully 
someone can point me in the right direction. I compiled gromacs using the 
instructions in this thread, and mdrun works fine. Some utilities like 
g_energy and g_analyze (single-core utilities, I believe) also seem to be 
working fine. 

I am interested in computing the lifetime of hydrogen bonds, and this calculation 
is quite expensive. Is there a way to submit it as a job using 32 or more 
cores? When I run g_hbond on my workstation (16 cores), it runs on 16 threads by 
default. However, I am not sure it is a good idea to run it on Stampede 
without submitting it as a job. 

I noticed that g_hbond uses OpenMP, while gromacs was compiled for MPI 
according to these instructions. Just curious whether that is the reason, and 
whether there is a suitable workaround for this problem.
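
A minimal sketch of a single-node job script for this (not from this thread; the queue, walltime, and file names are placeholders, and PATH/GMXLIB should point at your own install, as in the mdrun job script later in the thread). Since g_hbond parallelizes with OpenMP, its threads cannot span nodes, so one Stampede node (16 cores) is the practical limit for a single run:

#!/bin/bash
#SBATCH -J hbond                # Job name
#SBATCH -o hbond.%j.out         # Name of stdout output file
#SBATCH -p normal               # Queue name
#SBATCH -N 1                    # g_hbond's OpenMP threads stay on one node
#SBATCH -n 1                    # one task; parallelism comes from the threads
#SBATCH -t 04:00:00             # adjust to the size of the analysis
#SBATCH -A TG-XX                # allocation to charge

export PATH=/path/to/gromacs-4.6.1/exec/bin:$PATH             # placeholder path
export GMXLIB=/path/to/gromacs-4.6.1/exec/share/gromacs/top   # placeholder path
export OMP_NUM_THREADS=16       # use all 16 cores of the node

# input/output names are placeholders; the echoed group names answer
# g_hbond's interactive group selection
echo "Protein Protein" | g_hbond -f md3.xtc -s md3.tpr -n index.ndx -num hbnum.xvg -ac hbac.xvg -life hblife.xvg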

As always, help is greatly appreciated. 
Thanks,





Re: [gmx-users] Gromacs on Stampede

2013-10-11 Thread Arun Sharma
Dear Chris,

Thank you so much for providing the scripts and such detailed instructions. I 
was trying to load the gromacs module that is already available and was unable 
to get it to run. 

Thanks to you, I now have a working gromacs installation.




On Thursday, October 10, 2013 2:59 PM, Christopher Neale 
chris.ne...@mail.utoronto.ca wrote:
 
Dear Arun:

Here is how I compile fftw and gromacs on Stampede. 
I have also included a job script and a script to submit a chain of jobs.
As Szilárd notes, this does not use the MICs, but it is still a rather fast machine.

# Compilation for single precision gromacs plus mdrun_mpi
#

# Compile fftw on stampede:
cd fftw-3.3.3
mkdir exec
export FFTW_LOCATION=$(pwd)/exec
module purge
module load intel/13.0.2.146
export CC=icc
export CXX=icpc
./configure --enable-float --enable-threads --prefix=${FFTW_LOCATION} --enable-sse2
make -j4
make -j4 install
cd ../
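
# quick check (optional, not part of the original recipe): the single-precision
# fftw library should now be installed
ls ${FFTW_LOCATION}/lib/libfftw3f*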


# Compile gromacs 4.6.1 on stampede:

cd gromacs-4.6.1
mkdir source
mv * source
mkdir exec
cd exec

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec   # fftw-3.3.3 sits beside gromacs-4.6.1, two levels up from exec/
export CXX=icpc
export CC=icc
cmake ../source/ \
      -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
      -DCMAKE_INSTALL_PREFIX=$(pwd) \
      -DGMX_X11=OFF \
      -DCMAKE_CXX_COMPILER=${CXX} \
      -DCMAKE_C_COMPILER=${CC} \
      -DGMX_PREFER_STATIC_LIBS=ON \
      -DGMX_MPI=OFF
make -j4
make -j4 install
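
# optional sanity check (not in the original recipe): the freshly installed
# serial binaries should report version 4.6.1
./bin/mdrun -version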

cd ../
mkdir exec2
cd exec2
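
# For reference: this second pass builds only the MPI-enabled mdrun (mdrun_mpi)
# against mvapich2; it is copied into exec/bin below so the serial tools and
# mdrun_mpi end up in the same directory.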

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
module load mvapich2/1.9a2
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec   # two levels up again, since we are now in gromacs-4.6.1/exec2/
export CXX=mpicxx
export CC=mpicc
cmake ../source/ \
      -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
      -DCMAKE_INSTALL_PREFIX=$(pwd) \
      -DGMX_X11=OFF \
      -DCMAKE_CXX_COMPILER=${CXX} \
      -DCMAKE_C_COMPILER=${CC} \
      -DGMX_PREFER_STATIC_LIBS=ON \
      -DGMX_MPI=ON
make -j4 mdrun
make -j4 install-mdrun

cp bin/mdrun_mpi ../exec/bin
cd ../






# Here is a script that you can submit to run gromacs on stampede:
# Set SBATCH -A according to your allocation
# Set SBATCH -N to number of nodes
# Set SBATCH -n to number of nodes x 16 (= number of CPU cores)
# Set PATH and GMXLIB according to your compilation of gromacs
# Remove -notunepme option if you don't mind some of the new optimizations

#!/bin/bash
#SBATCH -J test                     # Job name
#SBATCH -o myjob.%j.out      # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal               # Queue name
#SBATCH -N 7                       # Total number of nodes requested (16 cores/node)
#SBATCH -n 112                    # Total number of mpi tasks requested
#SBATCH -t 48:00:00             # Run time (hh:mm:ss) 

#SBATCH -A TG-XX      # -- Allocation name to charge job against

export PATH=/home1/02417/cneale/exe/gromacs-4.6.1/exec/bin:$PATH
export GMXLIB=/home1/02417/cneale/exe/gromacs-4.6.1/exec/share/gromacs/top   # export so the setting reaches mdrun_mpi


# grompp -f md.mdp -p new.top -c crashframe.gro -o md3.tpr -r restr.gro

ibrun mdrun_mpi -notunepme -deffnm md3 -dlb yes -npme 16 -cpt 60 -cpi md3.cpt -nsteps 50 -maxh 47.9 -noappend

cp md3.cpt backup_md3_$(date | sed 's/ /_/g').cpt


# submit the above script like this:

sbatch script.sh
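
# optionally check on the job after submitting:
squeue -u $USER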


# or create a chain of jobs like this:

N=8
script=stamp.sh
if [ ! -e last_job_in_chain ]; then
  id=$(sbatch ${script}|tail -n 1 |awk '{print $NF}')
  echo $id > last_job_in_chain
  let N--
fi
id=$(cat last_job_in_chain)
for((i=1;i<=N;i++)); do
  id=$(sbatch -d afterany:${id} ${script}|tail -n 1 |awk '{print $NF}')
  echo $id > last_job_in_chain
done
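
# For reference: each job in the chain is held with an afterany dependency, so it
# starts only when the previous job finishes; mdrun then continues from md3.cpt
# (-cpi), and -noappend writes each continuation to new .partNNNN output files.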



[gmx-users] Gromacs on Stampede

2013-10-10 Thread Arun Sharma

Hello,

Does anyone have experience running gromacs and data-analysis tools on Stampede 
or a similar supercomputer? Is there a set of best practices or recommended 
approaches for this situation?

Any input is highly appreciated.
Thanks



[gmx-users] Minimum distance periodic images, protein simulation

2013-09-20 Thread Arun Sharma
Hello,
I ran a 100-ns long simulation of a small protein (trp-cage) at an elevated 
temperature. I analysed the distance between periodic images using

g_mindist -f md-run-1-noPBC.xtc -s md-run-1.tpr -n index.ndx -od mindist.xvg -pi

The output shows that there are frames in which the closest distance between 
certain atoms in periodic images is much less than 1 nm. Conventional wisdom 
says that if this happens, the simulation results are questionable. Is this 
always true? If it is, how would I ensure that this does not happen again? 

I have posted the output of g_mindist at http://postimg.org/image/bnc0ej3nb/
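
Not addressed in this thread, but for reference: one common way to guard against close periodic-image contacts in a future setup is to build the starting box with a larger minimum solute-box distance; the distance and box type here are only illustrative:

editconf -f protein.gro -o boxed.gro -c -d 1.4 -bt dodecahedron

Whether a given minimum image distance is acceptable still depends on the nonbonded cutoffs and on how far the protein extends at the elevated temperature.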

Any comments and clarifications are highly appreciated.

Thanks,