Dear all,
This is a working method for building parallel (MPI-based) Siesta-3.2.
The system is Rocks-6 with a frontend and some compute nodes, and I
have tested the procedure, which uses the GNU compilers, several times.
The following libraries are needed:
  - gcc/g++/gfortran: Here we use 4.4.7, which is available in Rocks-6.x.
  - OpenMPI: New versions use wrapper compilers, but we need a version
that produces libmpi.a when compiled from source. Here we use 1.6.5.
  - OpenBLAS: Although I have tested the standard BLAS-3.6.0 along with
lapack-3.2, Robert (RCP) told me that OpenBLAS has better performance,
and I verified that the runtime is indeed better than with the standard
BLAS. Here we use 0.2.18.
  - LAPACK: Here we use 3.2.
  - SCALAPACK: This library essentially contains LAPACK/BLAS/BLACS, but
Siesta needs LAPACK and BLAS separately, as mentioned above, so don't
confuse these libraries. Although SCALAPACK is the big one, you
shouldn't rely on its LAPACK. What is useful about this library is that
we can use the BLACS embedded in SCALAPACK, so you don't need a
separate BLACS (e.g. MPIBLACS); see the quick check after this list.
Here, we use 2.0.0.
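
If you want to confirm that BLACS really is embedded, a quick check on
the built archive works (this assumes the scalapack tree from step 1.d
below; blacs_gridinit is just one representative BLACS routine):

nm /export/apps/computer/scalapack/libscalapack.a | grep -i blacs_gridinit

If grep prints matching symbols, the BLACS layer is inside
libscalapack.a and no separate BLACS build is required.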

Please note that it is possible to use NetCDF for better performance
(as stated by Robert); however, it has some dependencies, so I did not
use it for Siesta. It is an optional library.


All we have to do is:
  1. Build the libraries. The root folder that contains them is
/export/apps/computer/
    1.a. Build OpenMPI.
    1.b. Build OpenBLAS. Rename the created library if needed.
    1.c. Build LAPACK. Rename the created library.
    1.d. Build SCALAPACK.
  2. Build Siesta. The root folder is /export/apps/chemistry/


I will now explain the steps in detail.

(1.a) The commands are:
wget --no-check-certificate \
  https://www.open-mpi.org/software/ompi/v1.6/downloads/openmpi-1.6.5.tar.bz2
tar xjf openmpi-1.6.5.tar.bz2
cd openmpi-1.6.5
./configure --prefix=/export/apps/computer/openmpi-1.6.5 --enable-static
make
make install
cd ..
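
Before moving on, it is worth checking that the static library and the
Fortran wrapper were actually installed (the paths simply follow the
--prefix used above):

ls /export/apps/computer/openmpi-1.6.5/lib/libmpi*.a
/export/apps/computer/openmpi-1.6.5/bin/mpif90 --version

The first command should list libmpi.a (the reason we picked 1.6.5) as
well as libmpi_f90.a, which we will copy into the Siesta tree in step 2.
The second should report gfortran 4.4.7.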


(1.b)
wget http://github.com/xianyi/OpenBLAS/archive/v0.2.18.tar.gz
mv v0.2.18.tar.gz openblas-0.2.18.tar.gz
tar zxf openblas-0.2.18.tar.gz
cd OpenBLAS-0.2.18/
make libs
cd ..
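
Step 1.c below expects the archive under the generic name
OpenBLAS-0.2.18/libopenblas.a. On my build, make also leaves a
libopenblas.a link in place, but if your tree only contains the
architecture-suffixed archive (named after the detected CPU core, e.g.
libopenblas_nehalem-r0.2.18.a), create the generic name yourself:

cd OpenBLAS-0.2.18
[ -e libopenblas.a ] || ln -s libopenblas_*.a libopenblas.a
cd ..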


(1.c)
wget http://www.netlib.org/lapack/lapack-3.2.tgz
tar zxf lapack-3.2.tgz
cd lapack-3.2
cp make.inc.example make.inc
vim make.inc
     modify the following lines:
     OPTS     = -Os
     BLASLIB      = ../../../OpenBLAS-0.2.18/libopenblas.a
make lib
mv lapack_LINUX.a liblapack.a
cd ..
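
Note that the relative BLASLIB path is resolved from LAPACK's testing
subdirectories (two levels below the source root), so ../../../ lands
in the directory that contains both lapack-3.2 and OpenBLAS-0.2.18;
keep the two trees side by side. A quick sanity check on the renamed
library:

nm lapack-3.2/liblapack.a | grep -c dgesv_

A non-zero count means the LAPACK driver routines made it into the
archive.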


(1.d)
wget http://www.netlib.org/scalapack/scalapack-2.0.0.tgz
tar zxf scalapack-2.0.0.tgz
cd scalapack
cp SLmake.inc.example SLmake.inc
make lib
cd ..
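
Two notes here. First, SLmake.inc uses the mpif90/mpicc wrappers it
finds on the PATH, so put our OpenMPI first before running "make lib":

export PATH=/export/apps/computer/openmpi-1.6.5/bin:$PATH

Second, if your tarball unpacks to a versioned directory (e.g.
scalapack-2.0.0) instead of scalapack, adjust the cd above and the -L
path in step 2 accordingly. "make lib" produces libscalapack.a
directly, so no rename is needed here.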




(2)
wget https://dl.dropbox.com/u/20267285/SIESTA-DOWNLOADS/siesta-3.2-pl-5.tgz
tar zxf siesta-3.2-pl-5.tgz
cd siesta-3.2-pl-5
mkdir spar
cd spar
sh ../Src/obj_setup.sh
../Src/configure --enable-mpi
cp /export/apps/computer/openmpi-1.6.5/lib/libmpi_f90.a MPI/
vim arch.make
    modify the following lines:
    FC=/export/apps/computer/openmpi-1.6.5/bin/mpif90
    LDFLAGS=-L/export/apps/computer/scalapack -L/export/apps/computer/OpenBLAS-0.2.18 -L/export/apps/computer/lapack-3.2
    BLAS_LIBS=-lopenblas
    LAPACK_LIBS= -llapack
    SCALAPACK_LIBS=-lscalapack
    LIBS=$(SCALAPACK_LIBS) $(LAPACK_LIBS) $(BLAS_LIBS)
    MPI_INTERFACE=libmpi_f90.a
    MPI_INCLUDE=.
make


You should now see the siesta binary. To verify that it was built in
parallel, run "./siesta"; you should see output like this:

# ./siesta
Siesta Version:                                        siesta-3.2-pl-5
Architecture  : x86_64-unknown-linux-gnu--unknown
Compiler flags: /export/apps/computer/openmpi-1.6.5/bin/mpif90 -g -Os
PARALLEL version

* Running in serial mode with MPI
>> Start of run:  13-JUL-2016  11:05:50





How to run siesta in parallel?
Now that we have built siesta in parallel, let us try a sample run. All
we have to do is create a host file listing the hosts that are allowed
to run siesta, then tell mpirun to use those hosts and launch siesta on
a given number of cores with an input file (*.fdf).

In your home, run the following commands:

$ echo "compute-0-2" >> hosts
$ echo "compute-0-3" >> hosts
$ /share/apps/computer/openmpi-1.6.5/bin/mpirun -hostfile hosts -np 4
/share/apps/chemistry/siesta-3.2-pl-5/spar/siesta < file.fdf
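
By default mpirun spreads the four processes over the listed hosts. If
you want to be explicit about how many processes land on each node,
OpenMPI's hostfile syntax accepts a slots count per line:

$ echo "compute-0-2 slots=2" > hosts
$ echo "compute-0-3 slots=2" >> hosts

With -np 4 this pins exactly two siesta processes on each node, which
is what top should show below.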


In another terminal, ssh to compute-0-3 and compute-0-2 and run the
"top" command to verify that each host is running two siesta processes
(4 processes in total, as specified by the -np switch).
Please note that in Rocks, compute nodes use /share/apps and cannot see
/export/apps. Fortunately, on the frontend /export/ is the same as
/share/, so there it doesn't matter which one you use. However, when
you launch a job with mpirun, you have to use /share/ because the
compute nodes only see /share/.
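
A quick way to confirm this from the frontend is to ask a compute node
whether it sees the binary under /share/:

$ ssh compute-0-2 ls /share/apps/chemistry/siesta-3.2-pl-5/spar/siesta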




How to build transiesta in parallel?
It is similar to parallel siesta: you can borrow the arch.make from
spar/arch.make. Run the following commands:

mkdir tpar
cd tpar
sh ../Src/obj_setup.sh
../Src/configure --enable-mpi
cp ../spar/arch.make .
make transiesta

Verify it by running "./transiesta".






Hope this guide helps others.



Regards,
Mahmood
