Dear All,

Thanks for your help.

How can I solve the following WARNING?
Siesta Version:                                        siesta-3.0-rc2
Architecture  : x86_64-unknown-linux-gnu--unknown
Compiler flags: mpif90 -g -O3
PARALLEL version

* Running on    4 nodes in parallel
>> Start of run:   2-MAR-2011  10:58:02

                           ***********************       
                           *  WELCOME TO SIESTA  *       
                           ***********************       

reinit: Reading from standard input
************************** Dump of input data file ****************************
# $Id: Fe.fdf,v 1.1 1999/04/20 12:52:43 emilio Exp $
# -----------------------------------------------------------------------------
# FDF for bcc iron
#
# GGA, Ferromagnetic.
# Scalar-relativistic pseudopotential with non-linear partial-core correction
#
# E. Artacho, April 1999
# -----------------------------------------------------------------------------
SystemName       bcc Fe ferro GGA   # Descriptive name of the system
SystemLabel            Fe           # Short name for naming files
# Output options
WriteCoorStep
WriteMullikenPop       1
WriteXML               F
# Species and atoms
NumberOfSpecies        1
NumberOfAtoms          1
%block ChemicalSpeciesLabel
  1  26  Fe
%endblock ChemicalSpeciesLabel
# Basis
PAO.EnergyShift       50 meV
PAO.BasisSize         DZP
%block PAO.Basis
  Fe  2
  0  2  P
  6. 0.
  2  2
  0. 0.
%endblock PAO.Basis
LatticeConstant       2.87 Ang
%block LatticeVectors
 0.50000   0.500000  0.500000
 0.50000  -0.500000  0.500000
 0.50000   0.500000 -0.500000
%endblock LatticeVectors
KgridCutoff          15. Ang
%block BandLines
  1  0.00000   0.000000  0.000000  \Gamma
 40  2.00000   0.000000  0.000000  H
 28  1.00000   1.000000  0.000000  N
 28  0.00000   0.000000  0.000000  \Gamma
 34  1.00000   1.000000  1.000000  P
%endblock BandLines
xc.functional         GGA           # Exchange-correlation functional
xc.authors            PBE           # Exchange-correlation version
SpinPolarized         true          # Logical parameters are: yes or no
MeshCutoff           150. Ry        # Mesh cutoff. real space mesh
# SCF options
MaxSCFIterations       40           # Maximum number of SCF iter
DM.MixingWeight       0.1           # New DM amount for next SCF cycle
DM.Tolerance          1.d-3         # Tolerance in maximum difference
                                    # between input and output DM
DM.UseSaveDM          true          # to use continuation files
DM.NumberPulay         3
SolutionMethod        diagon        # OrderN or Diagon
ElectronicTemperature  25 meV       # Temp. for Fermi smearing
# MD options
MD.TypeOfRun           cg           # Type of dynamics:
MD.NumCGsteps           0           # Number of CG steps for
                                    #   coordinate optimization
MD.MaxCGDispl          0.1 Ang      # Maximum atomic displacement
                                    #   in one CG step (Bohr)
MD.MaxForceTol         0.04 eV/Ang  # Tolerance in the maximum
                                    #   atomic force (Ry/Bohr)
# Atomic coordinates
AtomicCoordinatesFormat     Fractional
%block AtomicCoordinatesAndAtomicSpecies
  0.000000000000    0.000000000000    0.000000000000  1
%endblock AtomicCoordinatesAndAtomicSpecies
************************** End of input data file *****************************

reinit: -----------------------------------------------------------------------
reinit: System Name: bcc Fe ferro GGA
reinit: -----------------------------------------------------------------------
reinit: System Label: 
Fe                                                          
reinit: -----------------------------------------------------------------------

initatom: Reading input for the pseudopotentials and atomic orbitals ----------
 Species number:            1  Label: Fe Atomic number:          26
Ground state valence configuration:   4s02  3d06
Reading pseudopotential information in formatted form from Fe.psf

Pseudopotential generated from a relativistic atomic calculation
There are spin-orbit pseudopotentials available
Spin-orbit interaction is not included in this calculation

Valence configuration for pseudopotential generation:
4s( 2.00) rc: 2.00
4p( 0.00) rc: 2.00
3d( 6.00) rc: 2.00
4f( 0.00) rc: 2.00
For Fe, standard SIESTA heuristics set lmxkb to 3
 (one more than the basis l, including polarization orbitals).
Use PS.lmax or PS.KBprojectors blocks to override.
 Warning: Empty PAO shell. l =           1
 Will have a KB projector anyway...

<basis_specs>
===============================================================================
Fe                   Z=  26    Mass=  55.850        Charge= 0.17977+309
Lmxo=2 Lmxkb=3     BasisType=split      Semic=F
L=0  Nsemic=0  Cnfigmx=4
          n=1  nzeta=2  polorb=1
            splnorm:   0.15000    
               vcte:    0.0000    
               rinn:    0.0000    
                rcs:    6.0000      0.0000    
            lambdas:    1.0000      1.0000    
L=1  Nsemic=0  Cnfigmx=4
L=2  Nsemic=0  Cnfigmx=3
          n=1  nzeta=2  polorb=0
            splnorm:   0.15000    
               vcte:    0.0000    
               rinn:    0.0000    
                rcs:    0.0000      0.0000    
            lambdas:    1.0000      1.0000    
-------------------------------------------------------------------------------
L=0  Nkbl=1  erefs: 0.17977+309
L=1  Nkbl=1  erefs: 0.17977+309
L=2  Nkbl=1  erefs: 0.17977+309
L=3  Nkbl=1  erefs: 0.17977+309
===============================================================================
</basis_specs>

atom: Called for Fe                    (Z =  26)

read_vps: Pseudopotential generation method:
read_vps: ATM3      Troullier-Martins                       
Total valence charge:    8.00000

read_vps: Pseudopotential includes a core correction:
read_vps: Pseudo-core for xc-correction

xc_check: Exchange-correlation functional:
xc_check: GGA Perdew, Burke & Ernzerhof 1996
V l=0 = -2*Zval/r beyond r=  2.7645
V l=1 = -2*Zval/r beyond r=  2.7645
V l=2 = -2*Zval/r beyond r=  2.7645
V l=3 = -2*Zval/r beyond r=  2.7645
All V_l potentials equal beyond r=  1.9726
This should be close to max(r_c) in ps generation
All pots = -2*Zval/r beyond r=  2.7645
Using large-core scheme for Vlocal

atom: Estimated core radius    2.76453
atom: Maximum radius for 4*pi*r*r*local-pseudopot. charge    3.05528
atom: Maximum radius for r*vlocal+2*Zval:    2.79930
GHOST: No ghost state for L =  0
GHOST: No ghost state for L =  1
GHOST: No ghost state for L =  2
GHOST: No ghost state for L =  3

KBgen: Kleinman-Bylander projectors: 
   l= 0   rc=  2.047986   el= -0.388305   Ekb=  4.259322   kbcos=  0.262992
   l= 1   rc=  2.047986   el= -0.097543   Ekb=  2.850785   kbcos=  0.194191
   l= 2   rc=  2.022544   el= -0.553241   Ekb=-12.567334   kbcos= -0.683368
   l= 3   rc=  2.047986   el=  0.003178   Ekb= -1.649997   kbcos= -0.006611

KBgen: Total number of  Kleinman-Bylander projectors:   16
atom: -------------------------------------------------------------------------

atom: SANKEY-TYPE ORBITALS:
atom: Selected multiple-zeta basis: split     

SPLIT: Orbitals with angular momentum L= 0

SPLIT: Basis orbitals for state 4s

   izeta = 1
                 lambda =    1.000000
                     rc =    6.000769
                 energy =   -0.359899
                kinetic =    0.368794
    potential(screened) =   -0.728693
       potential(ionic) =   -6.200046
WARNING: Minimum split_norm parameter:  0.52689. Will not be able to generate 
orbital with split_norm =  0.15000
See manual for new split options
ERROR STOP from Node:    0
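Going by the warning text itself, one possible fix (untested, and to be checked against the split options described in the SIESTA 3 manual) is to raise the split norm above the reported minimum, e.g. in the fdf file:

    PAO.SplitNorm        0.53    # above the reported minimum of 0.52689

or to switch to the newer split criterion the warning alludes to with `PAO.SplitTailNorm T`.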

--- On Tue, 3/1/11, Marcos Veríssimo Alves <[email protected]> 
wrote:


From: Marcos Veríssimo Alves <[email protected]>
Subject: Re: [SIESTA-L] Parallel_version
To: [email protected]
Date: Tuesday, March 1, 2011, 3:50 PM


Michael,


I'm pretty sure this issue has already been discussed on the list before, but 
here is the short version of the story.


Siesta needs InfiniBand or Myrinet (in general, a low-latency internode 
connection) to work properly in parallel; otherwise you should run it in 
parallel only on one node with many cores, using MPI. If you have to use many 
nodes, a Gigabit interconnect won't do the job. Even with MPI on a single node 
with many cores, if your system is too small, parallelism will not be 
efficient.


No modifications to the fdf file are needed to run siesta in parallel, except 
for the parallel-over-orbitals and parallel-over-k modes. Check the manual and 
the list archives for details.
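For a small metallic cell with many k-points, like this bcc Fe example, the parallel-over-k mode is usually the relevant one. As a sketch (check the exact option name against the manual for your version), it is switched on in the fdf with:

    Diag.ParallelOverK    true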


Marcos


On Tue, Mar 1, 2011 at 6:32 PM, Michael Shin <[email protected]> wrote:

Hello Huan,

It works and I successfully compiled parallel SIESTA. However, when I run the 
Fe example, I find that the computational time increases with the number of 
nodes: with 1 node it takes about 3 minutes, with 2 nodes about 4 minutes, but 
with 4 nodes it took about 30 minutes.
See the attached arch.make file and the two output files.
Do I have to make any special changes in the FDF file when I want to run 
parallel calculations?
Any help from anyone is welcome.
 


--- On Sat, 2/26/11, Huan Tran <[email protected]> wrote:


From: Huan Tran <[email protected]>
Subject: Re: [SIESTA-L] Parallel_version
To: [email protected]
Date: Saturday, February 26, 2011, 11:10 AM

For those who need to install parallel siesta, here is something I found on the 
web. For me, it works.

Installation of SIESTA-3.0-b (parallel version):

1. Installation of MPICH2:


   a) to get mpich2:

    http://www-unix.mcs.anl.gov/mpi/mpich/

   b) installation instructions:

    http://hydra.nac.uci.edu/~sev/docs/mpich2-readme.txt

    $ tar xfz mpich2.tar.gz
    $ cd mpich2-1.0.1
    $ ./configure   # default installation directory (/usr/local/bin/)
    $ make
    $ make install  # needs to be run as root

    $ which mpd     # prints /usr/local/bin/mpd
    $ which mpiexec # prints /usr/local/bin/mpiexec
    $ which mpirun  # prints /usr/local/bin/mpirun

    $ cd            # go to the home directory
    $ vi .mpd.conf  # type secretword=<whatever you want> and save
    $ chmod 600 .mpd.conf

    $ mpd &
    $ mpdtrace      # prints the local host name
    $ mpdallexit

    $ vi hostfile   # type localhost:X, X = number of cores of that machine
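Before building the libraries, it may be worth verifying that MPICH2 itself works across all cores. The file name and transcript below are only a sketch, assuming the default /usr/local install above:

    $ cat > hello.c <<'EOF'
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }
    EOF
    $ mpicc hello.c -o hello
    $ mpd &
    $ mpiexec -n 4 ./hello   # should print four "rank i of 4" lines
    $ mpdallexit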


2. BLAS installation:

   to get BLAS:

   http://www.netlib.org/blas/blas.tgz

   for installation:

   http://wiki.ifca.es/e-ciencia/index.php/BLAS

   or

   $ tar -zxvf blas.tgz
   $ cd BLAS
   $ vi make.inc   # needs editing -- see below
   $ make

   edit:

   FORTRAN = /usr/local/bin/mpif90
   LOADER  = /usr/local/bin/mpif90

   $ make clean    # for cleaning

3. LAPACK installation:

   to get:

   http://www.netlib.org/lapack/lapack.tgz

   for installation go to:

   http://wiki.ifca.es/e-ciencia/index.php/LAPACK

   or

   $ tar -zxvf lapack.tgz
   $ cd lapack-3.2/
   $ cp make.inc.example make.inc
   $ vi make.inc   # needs editing -- see below
   $ make all

   edit:

   FORTRAN  = /usr/local/bin/mpif90
   LOADER   = /usr/local/bin/mpif90
   TIMER    = EXT_ETIME

   BLASLIB  = /home/../../BLAS/blas$(PLAT).a   # give the full path

   for cleaning:

   $ rm *.a   # make clean will not remove the library


4. BLACS installation:

   to get:

   http://www.netlib.org/blacs/mpiblacs.tgz
   http://www.netlib.org/blacs/mpiblacs-patch03.tgz

   for installation:

   http://wiki.ifca.es/e-ciencia/index.php/BLACS

   or

   $ tar -zxvf mpiblacs.tgz
   $ tar -zxvf mpiblacs-patch03.tgz
   $ cd BLACS/
   $ cp BMAKES/Bmake.MPI-LINUX ./Bmake.inc
   $ vi Bmake.inc   # needs editing -- see below
   $ make mpi

   edit:

   BTOPdir   = $(HOME)/Software/BLACS  # directory where BLACS is compiled
   MPIdir    = /usr/local/             # MPICH directory
   MPILIBdir =
   MPIINCdir = /usr/local/include/
   MPILIB    = /usr/local/lib/libmpich.a
   SYSINC    =
   INTFACE   = -DAdd_
   TRANSCOMM = -DUseMpich
   F77       = /usr/local/bin/mpif90   # MPI wrapper for the Fortran compiler
   CC        = /usr/local/bin/mpicc    # MPI wrapper for the C compiler
   CCFLAGS   = -O3

   for cleaning:

   $ find ./* -name '*.o' | xargs rm
 
5. ScaLAPACK installation:

   to get:

   http://www.netlib.org/scalapack/scalapack.tgz

   for installation:

   http://wiki.ifca.es/e-ciencia/index.php/ScaLAPACK

   or

   $ tar -zxvf TARs/scalapack.tgz
   $ cd scalapack-1.8.0/
   $ cp SLmake.inc.example SLmake.inc
   $ vi SLmake.inc   # needs editing -- see below
   $ make

   for cleaning:

   $ make clean
   $ rm libscalapack.a

   edit:

   home       = $(HOME)/Software/scalapack-1.8.0
   BLACSdir   = $(HOME)/Software/BLACS/LIB
   SMPLIB     = /usr/local/lib/libmpich.a
   BLACSFINIT = $(BLACSdir)/blacsF77init_MPI-$(PLAT)-$(BLACSDBGLVL).a
   BLACSCINIT = $(BLACSdir)/blacsCinit_MPI-$(PLAT)-$(BLACSDBGLVL).a
   BLACSLIB   = $(BLACSdir)/blacs_MPI-$(PLAT)-$(BLACSDBGLVL).a
   F77        = /usr/local/bin/mpif90
   CC         = /usr/local/bin/mpicc
   F77FLAGS   = -O2 $(NOOPT)
   CCFLAGS    =
   CDEFS      = -DAdd_ -DNO_IEEE $(USEMPI)
   BLASLIB    = $(HOME)/Software/BLAS/blas_LINUX.a
   LAPACKLIB  = $(HOME)/Software/lapack-3.2/lapack_LINUX.a

6. Siesta installation:

   go to:

   http://wiki.ifca.es/e-ciencia/index.php/SIESTA#Serial_version

   1. $ tar -zxvf siesta-3.0-b.tgz
   2. $ cd siesta-3.0-b/Obj/
   3. $ sh ../Src/obj_setup.sh
   4. $ ../Src/configure
   5. $ vi arch.make   # needs editing -- see the web page; it depends on
                       # where you installed the other libraries
   6. $ make           # builds siesta; to build transiesta, type
                       # "make transiesta" instead
   7. $ make clean     # cleans the *.o files so you can recompile

   edit:

   FC= /usr/local/bin/mpif90
   FFLAGS=-g -O2
   FPPFLAGS= -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT

   BLAS_LIBS=$(HOME)/../BLAS/blas_LINUX.a
   LAPACK_LIBS=$(HOME)/../lapack-3.2/lapack_LINUX.a
   BLACS_LIBS=$(HOME)/../BLACS/LIB/blacsF77init_MPI-LINUX-0.a \
           $(HOME)/../BLACS/LIB/blacs_MPI-LINUX-0.a \
           $(HOME)/../BLACS/LIB/blacsCinit_MPI-LINUX-0.a
   SCALAPACK_LIBS=$(HOME)/../scalapack-1.8.0/libscalapack.a

   COMP_LIBS=dc_lapack.a
   MPI_INTERFACE=libmpi_f90.a
   MPI_INCLUDE=.

7. To run siesta from any directory:

    $ cd yourdirectory
    $ ln -s /path/to/siesta/Obj/siesta .   # link the siesta binary
    $ mpdboot -n 1 -f hostfile
    $ mpiexec -n <np> ./siesta < xyz.fdf | tee xyz.out &

    **** make sure to kill the mpd after every run ****
    $ mpdallexit
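To compare timings across core counts (one way to check the scaling problem discussed earlier in this thread), a loop like the one below can be used. The `grep` pattern is only a guess at the timer line printed near the end of the .out file; it may differ between Siesta versions:

    $ for n in 1 2 4; do
        mpiexec -n $n ./siesta < Fe.fdf > Fe.$n.out
        grep 'timer: siesta' Fe.$n.out
      done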

On Sat, Feb 26, 2011 at 9:47 AM, Michael Shin <[email protected]> wrote:

Dear All,
Can anyone guide me and send me the steps for installing the parallel version 
of SIESTA-3.0-rc2?
I will be grateful if someone who has successfully installed the parallel 
version of SIESTA-3.0-rc2 can send me an arch.make file.
Also, please tell me the name of a Fortran compiler that supports the parallel 
version of SIESTA-3.0-rc2.
Regards