[gmx-users] Gromacs on Stampede

2013-10-13 Thread Christopher Neale
Why not put it in a Slurm script and submit that script as a (probably 
single-node) job? It is generally not acceptable to use a large fraction of 
the head node of a shared resource for a substantial amount of time.
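
For example, a single-node analysis job could look something like the script 
below. This is only a sketch: the .tpr/.xtc file names, the group numbers, and 
the allocation are placeholders, and I am assuming the OpenMP-threaded g_hbond 
from the 4.6 series:

#!/bin/bash
#SBATCH -J hbond         # Job name
#SBATCH -o hbond.%j.out  # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal        # Queue name
#SBATCH -N 1             # one node is enough: g_hbond is threaded, not MPI
#SBATCH -n 16            # all 16 CPU cores of the node
#SBATCH -t 04:00:00      # Run time (hh:mm:ss)
#SBATCH -A TG-XX         # your allocation

# set PATH and GMXLIB to your own installation, as in my md job script below
export OMP_NUM_THREADS=16   # let g_hbond use the whole node
# the two interacting groups depend on your system; 1 1 is just an example
echo 1 1 | g_hbond -s md.tpr -f md.xtc -ac hbac.xvg -life hblife.xvg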

If your problem is different and of a gromacs nature, you may need to describe 
it better (e.g., if you're really just saying that you can't use MPI with 
g_hbond, then show us what you did and what happened, and somebody will likely 
be able to answer you). Personally, I don't think any of the analysis tools 
are MPI-enabled, but I could be wrong.

If your problem is really more about using Stampede, you can get help directly 
by submitting an XSEDE help ticket (portal.xsede.org).

Chris.

-- original message --

Hello,

I have a question about running gromacs utilities on Stampede, and hopefully 
someone can point me in the right direction. I compiled gromacs using the 
instructions in this thread, and mdrun works fine. Also, some utilities like 
g_energy and g_analyze (single-core utilities, I believe) seem to be working 
fine. 

I am interested in computing the lifetime of hydrogen bonds, and this 
calculation is quite expensive. Is there a way to submit this as a job using 
32 or more cores? When I run g_hbond on my workstation (16 cores), it runs on 
16 threads by default. However, I am not sure it is a good idea to run it on 
Stampede without submitting it as a job. 

I noticed that g_hbond uses OpenMP, while gromacs was compiled for MPI 
according to these instructions. Just curious whether that is the reason, and 
whether there is a suitable workaround for this problem.

As always, help is greatly appreciated. 
Thanks,


Re: [gmx-users] Gromacs on Stampede

2013-10-12 Thread Arun Sharma
Hello,

I have a question about running gromacs utilities on Stampede, and hopefully 
someone can point me in the right direction. I compiled gromacs using the 
instructions in this thread, and mdrun works fine. Also, some utilities like 
g_energy and g_analyze (single-core utilities, I believe) seem to be working 
fine. 

I am interested in computing the lifetime of hydrogen bonds, and this 
calculation is quite expensive. Is there a way to submit this as a job using 
32 or more cores? When I run g_hbond on my workstation (16 cores), it runs on 
16 threads by default. However, I am not sure it is a good idea to run it on 
Stampede without submitting it as a job. 

I noticed that g_hbond uses OpenMP, while gromacs was compiled for MPI 
according to these instructions. Just curious whether that is the reason, and 
whether there is a suitable workaround for this problem.

As always, help is greatly appreciated. 
Thanks,

Re: [gmx-users] Gromacs on Stampede

2013-10-11 Thread Arun Sharma
Dear Chris,

Thank you so much for providing the scripts and such detailed instructions. I 
had been trying to load the gromacs module that is already available, but was 
unable to get it to run. 

Thanks to you, I now have a working gromacs installation.


[gmx-users] Gromacs on Stampede

2013-10-10 Thread Arun Sharma

Hello,

Does anyone have experience running gromacs and its data-analysis tools on 
Stampede or a similar supercomputer? Is there a set of best practices or 
approaches for this situation?

Any input is highly appreciated.
Thanks



Re: [gmx-users] Gromacs on Stampede

2013-10-10 Thread Szilárd Páll
Hi,

GROMACS does not have Xeon Phi support, so you'll be better off using
only the CPUs in Stampede. Porting and optimization are in progress,
but it will probably be a few months before you can test a
Phi-optimized mdrun.

Running (most) analyses on the Phi is not really feasible. There are a
few analysis tools that support OpenMP, but even with those, I/O would
be a severe bottleneck if you were considering using the Phis for
analysis.

So for now, I would stick to using only the CPUs in the system.

Cheers,
--
Szilárd Páll




[gmx-users] Gromacs on Stampede

2013-10-10 Thread Christopher Neale
Dear Arun:

Here is how I compile fftw and gromacs on Stampede. 
I have also included a job script and a script to submit a chain of jobs.
As Szilárd notes, this does not use the MICs, but it is still a rather fast 
machine.

# Compilation for single precision gromacs plus mdrun_mpi
#

# Compile fftw on stampede:
cd fftw-3.3.3
mkdir exec
export FFTW_LOCATION=$(pwd)/exec
module purge
module load intel/13.0.2.146
export CC=icc
export CXX=icpc
./configure --enable-float --enable-threads --enable-sse2 --prefix=${FFTW_LOCATION}
make -j4
make -j4 install
cd ../
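
# optional sanity check: --enable-float should have installed the
# single-precision library
ls ${FFTW_LOCATION}/lib/libfftw3f*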


# Compile gromacs 4.6.1 on stampede:

cd gromacs-4.6.1
mkdir source
mv * source
mkdir exec
cd exec

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
# fftw-3.3.3 is assumed to sit alongside gromacs-4.6.1, i.e. two levels up from exec
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec
export CXX=icpc
export CC=icc
cmake ../source/ \
  -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
  -DCMAKE_INSTALL_PREFIX=$(pwd) \
  -DGMX_X11=OFF \
  -DCMAKE_CXX_COMPILER=${CXX} \
  -DCMAKE_C_COMPILER=${CC} \
  -DGMX_PREFER_STATIC_LIBS=ON \
  -DGMX_MPI=OFF
make -j4
make -j4 install

cd ../
mkdir exec2
cd exec2

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
module load mvapich2/1.9a2
# again assuming fftw-3.3.3 sits alongside gromacs-4.6.1
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec
export CXX=mpicxx
export CC=mpicc
cmake ../source/ \
  -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
  -DCMAKE_INSTALL_PREFIX=$(pwd) \
  -DGMX_X11=OFF \
  -DCMAKE_CXX_COMPILER=${CXX} \
  -DCMAKE_C_COMPILER=${CC} \
  -DGMX_PREFER_STATIC_LIBS=ON \
  -DGMX_MPI=ON
make -j4 mdrun
make -j4 install-mdrun

cp bin/mdrun_mpi ../exec/bin
cd ../
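
# optional sanity check (a sketch, assuming the layout above): both
# binaries should now be in exec/bin and report version 4.6.1
export PATH=$(pwd)/exec/bin:$PATH
mdrun -version
ls exec/bin/mdrun_mpi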






# Here is a script that you can submit to run gromacs on stampede:
# Set SBATCH -A according to your allocation
# Set SBATCH -N to number of nodes
# Set SBATCH -n to number of nodes x 16 (= number of CPU cores)
# Set PATH and GMXLIB according to your compilation of gromacs
# Remove -notunepme option if you don't mind some of the new optimizations

#!/bin/bash
#SBATCH -J test          # Job name
#SBATCH -o myjob.%j.out  # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal        # Queue name
#SBATCH -N 7             # Total number of nodes requested (16 cores/node)
#SBATCH -n 112           # Total number of MPI tasks requested
#SBATCH -t 48:00:00      # Run time (hh:mm:ss)

#SBATCH -A TG-XX         # Allocation name to charge job against

export PATH=/home1/02417/cneale/exe/gromacs-4.6.1/exec/bin:$PATH
export GMXLIB=/home1/02417/cneale/exe/gromacs-4.6.1/exec/share/gromacs/top


# grompp -f md.mdp -p new.top -c crashframe.gro -o md3.tpr -r restr.gro

ibrun mdrun_mpi -notunepme -deffnm md3 -dlb yes -npme 16 -cpt 60 \
      -cpi md3.cpt -nsteps 50 -maxh 47.9 -noappend

cp md3.cpt backup_md3_$(date | sed 's/ /_/g').cpt


# submit the above script like this:

sbatch script.sh


# or create a chain of jobs like this:

N=8
script=stamp.sh
if [ ! -e last_job_in_chain ]; then
  # no chain yet: submit the first job and record its id
  id=$(sbatch ${script} | tail -n 1 | awk '{print $NF}')
  echo $id > last_job_in_chain
  let N--
fi
id=$(cat last_job_in_chain)
for((i=1;i<=N;i++)); do
  # each job waits for the previous one via the afterany dependency
  id=$(sbatch -d afterany:${id} ${script} | tail -n 1 | awk '{print $NF}')
  echo $id > last_job_in_chain
done
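
# Each job in the chain starts only after its predecessor terminates
# (afterany fires whether the previous job completed or failed). You can
# check the queued chain with:

squeue -u $USER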
