Michael,

I'm pretty sure this issue has already been discussed before on the list,
but here goes the short version of the story.

SIESTA needs InfiniBand or Myrinet (in general, a low-latency internode
connection) to work properly in parallel; otherwise you should run it in
parallel only on a single node with many cores, using MPI. If you have to use
many nodes, a Gigabit Ethernet interconnect won't do the job. Even with MPI
on a single node with many cores, if your system is too small, parallelism
will not be efficient.

No modifications to the fdf file are needed to run SIESTA in parallel,
except for choosing between the parallel-over-orbitals and parallel-over-k
modes. Check the manual and the list archives for details.
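
For reference, switching to the parallel-over-k mode is a one-line change in
the fdf file. The keyword below is how it is spelled in the SIESTA 3.x manual;
check it against your version before using it:

```
# Distribute k-points, rather than orbitals, over MPI processes.
# Often faster for small cells sampled with many k-points.
Diag.ParallelOverK   .true.
```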

Marcos

On Tue, Mar 1, 2011 at 6:32 PM, Michael Shin <[email protected]> wrote:

> Hello Huan,
> It works, and I successfully compiled parallel SIESTA. However, when I run
> the Fe example, I find that the computational time increases with the number
> of nodes: with 1 node it takes about 3 minutes, with 2 nodes about 4
> minutes, but with 4 nodes it took about 30 minutes.
> See the attached arch.make file and the two output files.
> Do I have to make any special changes in the FDF file when I want to run
> parallel calculations?
> Any help from anyone is welcome.
>
>
>
> --- On *Sat, 2/26/11, Huan Tran <[email protected]>* wrote:
>
>
> From: Huan Tran <[email protected]>
> Subject: Re: [SIESTA-L] Parallel_version
> To: [email protected]
> Date: Saturday, February 26, 2011, 11:10 AM
>
>
> For those who need to install parallel siesta, here is something I found on
> the web. For me, it works.
>
> Installation of SIESTA-3.0-b (parallel version):
>
> 1. Installation of MPICH2:
>
>    a) to get mpich2
>
>     http://www-unix.mcs.anl.gov/mpi/mpich/
>
>
>    b) installation instruction:
>
>     http://hydra.nac.uci.edu/~sev/docs/mpich2-readme.txt
>
>
>
>     $ tar xfz mpich2.tar.gz
>
>     $ cd mpich2-1.0.1
>     $ ./configure   # default installation directory (/usr/local/bin/)
>     $ make
>     $ make install  # (need to be root)
>
>
>     $ which mpd      # (prints /usr/local/bin/mpd)
>     $ which mpiexec  # (prints /usr/local/bin/mpiexec)
>
>     $ which mpirun   # (prints /usr/local/bin/mpirun)
>
>     $ cd   #(go to home directory)
>     $ vi .mpd.conf  # (add a line "secretword=<whatever you want>" and save)
>
>
>     $ chmod 600 .mpd.conf
>
>     $ mpd &
>     $ mpdtrace  # (prints the local host name)
>
>     $ mpdallexit
>
>     $ vi hostfile  # (type "localhost N", where N = number of cores on that machine)
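>
> The hostfile step above can also be scripted so the core count is detected
> automatically. This is a small sketch, assuming a Linux machine with `nproc`
> (coreutils); it falls back to a guess of 4 cores otherwise:

```shell
# Write a hostfile of the form "localhost N", where N is the number of
# cores on this machine; falls back to 4 if nproc is unavailable.
cores=$(nproc 2>/dev/null || echo 4)
echo "localhost $cores" > hostfile
cat hostfile
```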
>
>
> 2. BLAS installation:
>
>
>    to get BLAS
>
>    http://www.netlib.org/blas/blas.tgz
>
>
>    for installation
>
>    http://wiki.ifca.es/e-ciencia/index.php/BLAS
>
>
> or
>    $ tar -zxvf blas.tgz
>    $ cd BLAS
>    $ vi make.inc (need to edit ---see below)
>
>    $ make
>
>   edit
>
>    FORTRAN = /usr/local/bin/mpif90
>    LOADER  = /usr/local/bin/mpif90
>
>
>
>   $ make clean (for cleaning)
>
> 3. lapack installation:
>
>    to get
>    http://www.netlib.org/lapack/lapack.tgz
>
>    for installation
>    go to
>    http://wiki.ifca.es/e-ciencia/index.php/LAPACK
>   or
>   $ tar -zxvf lapack.tgz
>   $ cd lapack-3.2/
>
>
>   $ cp make.inc.example make.inc
>   $ vi make.inc (need to edit ...see below)
>   $ make all
>
>   edit
>
>   FORTRAN  = /usr/local/bin/mpif90
>   LOADER   = /usr/local/bin/mpif90
>   TIMER    = EXT_ETIME
>
>
>   BLASLIB  = /home/../../BLAS/blas$(PLAT).a  (give the full path)
>
>
>    for cleaning
>
>    $ rm *.a  #(make clean is not going to work)
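>
> A quick way to confirm that a static library actually built is to list its
> members with `ar t`; for blas_LINUX.a or lapack_LINUX.a you should see the
> .o files of the compiled routines. The check is demonstrated here on a
> throwaway archive, since the real paths depend on where you unpacked the
> sources:

```shell
# Demonstration of the check on a dummy archive; for the real libraries
# run "ar t blas_LINUX.a" (or lapack_LINUX.a) in the build directory.
echo 'placeholder' > dummy.o
ar rcs libdemo.a dummy.o
ar t libdemo.a    # lists the archive members
```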
>
>
> 4. BLACS installation:
>
>    to get
>
>    http://www.netlib.org/blacs/mpiblacs.tgz
>    http://www.netlib.org/blacs/mpiblacs-patch03.tgz
>
>
>
>    for installation
>
>    http://wiki.ifca.es/e-ciencia/index.php/BLACS
>   or
>
>   $ tar -zxvf mpiblacs.tgz
>
>
>   $ tar -zxvf mpiblacs-patch03.tgz
>
>   $ cd BLACS/
>   $ cp BMAKES/Bmake.MPI-LINUX ./Bmake.inc
>   $ vi Bmake.inc (need to edit .....see below)
>   $ make mpi
>
>   edit
>
>   BTOPdir   = $(HOME)/Software/BLACS  # directory we're compiling BLACS at
>
>
>   MPIdir    = /usr/local/            # MPI (MPICH2) installation directory
>   MPILIBdir =
>   MPIINCdir = /usr/local/include/
>   MPILIB    = /usr/local/lib/libmpich.a
>   SYSINC    =
>   INTFACE   = -DAdd_
>   TRANSCOMM = -DUseMpich
>
>
>   F77       = /usr/local/bin/mpif90 # MPI wrapper for Fortran compiler
>   CC        = /usr/local/bin/mpicc  # MPI wrapper for C compiler
>   CCFLAGS   = -O3
>
>
>   for cleaning
>
>   $ find ./* -name '*.o' | xargs rm
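>
> An equivalent, slightly safer form of the cleaning command above is `find`
> with `-delete`: it copes with an empty match list and unusual file names
> without needing xargs. A self-contained demonstration on a throwaway tree:

```shell
# Create a throwaway tree, then remove only the .o files from it.
mkdir -p BLACS-demo/SRC
touch BLACS-demo/a.o BLACS-demo/SRC/b.o BLACS-demo/keep.f
find BLACS-demo -name '*.o' -delete
ls BLACS-demo BLACS-demo/SRC   # keep.f survives; the .o files are gone
```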
>
> 5. Scalapack installation:
>
>    to get
>
>    http://www.netlib.org/scalapack/scalapack.tgz
>
>    for installation
>
>
>
>    http://wiki.ifca.es/e-ciencia/index.php/ScaLAPACK
>   or
>
>   $ tar -zxvf TARs/scalapack.tgz
>   $ cd scalapack-1.8.0/
>
>
>   $ cp SLmake.inc.example SLmake.inc
>
>   $ vi SLmake.inc  (need to edit ...see below)
>   $ make
>
>   for cleaning
>
>   $ make clean
>   $ rm libscalapack.a
>
>
>
>   edit
>
>
>   home       = $(HOME)/Software/scalapack-1.8.0
>
>
>   BLACSdir   = $(HOME)/Software/BLACS/LIB
>
>   SMPLIB     = /usr/local/lib/libmpich.a
>   BLACSFINIT = $(BLACSdir)/blacsF77init_MPI-$(PLAT)-$(BLACSDBGLVL).a
>   BLACSCINIT = $(BLACSdir)/blacsCinit_MPI-$(PLAT)-$(BLACSDBGLVL).a
>
>
>   BLACSLIB   = $(BLACSdir)/blacs_MPI-$(PLAT)-$(BLACSDBGLVL).a
>
>   F77        = /usr/local/bin/mpif90
>   CC         = /usr/local/bin/mpicc
>   F77FLAGS   =  -O2 $(NOOPT)
>
>
>   CCFLAGS    =
>
>   CDEFS      = -DAdd_ -DNO_IEEE $(USEMPI)
>   BLASLIB    = $(HOME)/Software/BLAS/blas_LINUX.a
>   LAPACKLIB  = $(HOME)/Software/lapack-3.2/lapack_LINUX.a
>
>
> 6. Siesta installation:
>
>
>    go to
>   http://wiki.ifca.es/e-ciencia/index.php/SIESTA#Serial_version
>
>
>
>   1. $ tar -zxvf siesta-3.0-b.tgz
>   2. $ cd siesta-3.0-b/Obj/
>
>   3. $ sh ../Src/obj_setup.sh
>   4. $ ../Src/configure
>   5. $ vi arch.make (need to edit this file ... see the web page, as
>        it depends on where you installed the other libraries)
>   6. $ make (this builds siesta; to build transiesta, type "make transiesta"
>        instead)
>   7. $ make clean (to remove the *.o files before recompiling)
>
>
>  edit
>    FC= /usr/local/bin/mpif90
>    FFLAGS=-g -O2
>    FPPFLAGS= -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
>
>    BLAS_LIBS=$(HOME)/../BLAS/blas_LINUX.a
>
>
>    LAPACK_LIBS=$(HOME)/../lapack-3.2/lapack_LINUX.a
>    BLACS_LIBS=$(HOME)/../BLACS/LIB/blacsF77init_MPI-LINUX-0.a \
>            $(HOME)/../BLACS/LIB/blacs_MPI-LINUX-0.a \
>
>            $(HOME)/../BLACS/LIB/blacsCinit_MPI-LINUX-0.a
>
>
>    SCALAPACK_LIBS=$(HOME)/../scalapack-1.8.0/libscalapack.a
>
>    COMP_LIBS=dc_lapack.a
>    MPI_INTERFACE=libmpi_f90.a
>    MPI_INCLUDE=.
>
> 7. To run siesta from any directory
>
>
>     $ cd yourdirectory
>     $ ln -s /path/to/siesta/Obj/siesta .  # (link the siesta executable)
>     $ mpdboot -n 1 -f hostfile
>     $ mpiexec -n <np> ./siesta < xyz.fdf | tee xyz.out &
>
>
>     **** make sure to kill the mpd after every run ****
>     $ mpdallexit
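>
> The launch sequence in step 7 can be wrapped in a small helper so that the
> core count and input file become arguments. This is a hypothetical sketch
> that only prints the commands (a dry run); replace the echoes with the real
> invocations once the paths are set up:

```shell
# Hypothetical wrapper around the step-7 launch sequence.
# Dry run: it prints the commands instead of executing them.
run_siesta() {
  np=$1                     # number of MPI processes
  fdf=$2                    # input file, e.g. fe.fdf
  out=${fdf%.fdf}.out       # fe.fdf -> fe.out
  echo "mpdboot -n 1 -f hostfile"
  echo "mpiexec -n $np ./siesta < $fdf | tee $out"
  echo "mpdallexit"         # always shut the mpd ring down afterwards
}
run_siesta 4 fe.fdf
```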
>
>
> On Sat, Feb 26, 2011 at 9:47 AM, Michael Shin <[email protected]> wrote:
>
>   Dear All,
> Can anyone guide me through the steps for installing the parallel version
> of SIESTA-3.0-rc2?
> I would be grateful if someone who has successfully installed the parallel
> version of SIESTA-3.0-rc2 could send me their arch.make file.
> Please also tell me which Fortran compiler supports the parallel version
> of SIESTA-3.0-rc2.
> Regards
>
>
>
>
