Hi,

We use VASP 4.6 in parallel with Open MPI 1.1.2 without any problems on
x86_64 under openSUSE, compiled with gcc and Intel Fortran, and we use
Torque/PBS.

I used a standard configure to build Open MPI, something like:

./configure --prefix=/usr/local --enable-static --with-threads
--with-tm=/usr/local --with-libnuma
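
If you want to double-check that the Torque (TM) support actually made
it into the build, something like this should show the tm components:

ompi_info | grep ": tm"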

I used the ACML math libraries for BLAS/LAPACK and built BLACS and
ScaLAPACK with them too.
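
For what it's worth, here are the settings that mattered most when I
built BLACS and ScaLAPACK (a sketch from memory -- treat the values as
assumptions and adjust the paths to your tree; the variable names come
from the stock Bmake.inc and SLmake.inc):

# BLACS Bmake.inc: build against the MPI install, and keep the
# single-underscore naming that ifort/mpif90 expect
MPIdir  = /usr/local
F77     = mpif90
INTFACE = -DAdd_

# ScaLAPACK SLmake.inc: take BLAS/LAPACK from ACML
F77     = mpif90
BLASLIB = /opt/acml3.6.0/ifort64/lib/libacml.a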

I attached my vasp makefile. I might have added

mpi.o : mpi.F
        $(CPP)
        $(FC) -FR -lowercase -O0 -c $*$(SUFFIX)

to the end of the makefile. It doesn't look like it is in the example
makefiles they ship, but I compiled this a while ago.
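
One more sanity check that's worth doing: make sure mpif90 really
drives the ifort you built Open MPI with, so that mpi.F gets
preprocessed and compiled against the matching mpif.h.  Open MPI's
wrapper can show you its underlying command line (the path assumes the
/usr/local prefix above):

/usr/local/bin/mpif90 --showme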

Hope this helps. 

Cheers,
Kevin 





On Tue, 2007-05-08 at 19:18 -0700, Steven Truong wrote:
> Hi, all.  I am new to Open MPI, and after the initial setup I tried to
> run my app but got the following errors:
> 
> [node07.my.com:16673] *** An error occurred in MPI_Comm_rank
> [node07.my.com:16673] *** on communicator MPI_COMM_WORLD
> [node07.my.com:16673] *** MPI_ERR_COMM: invalid communicator
> [node07.my.com:16673] *** MPI_ERRORS_ARE_FATAL (goodbye)
> [node07.my.com:16674] *** An error occurred in MPI_Comm_rank
> [node07.my.com:16674] *** on communicator MPI_COMM_WORLD
> [node07.my.com:16674] *** MPI_ERR_COMM: invalid communicator
> [node07.my.com:16674] *** MPI_ERRORS_ARE_FATAL (goodbye)
> [node07.my.com:16675] *** An error occurred in MPI_Comm_rank
> [node07.my.com:16675] *** on communicator MPI_COMM_WORLD
> [node07.my.com:16675] *** MPI_ERR_COMM: invalid communicator
> [node07.my.com:16675] *** MPI_ERRORS_ARE_FATAL (goodbye)
> [node07.my.com:16676] *** An error occurred in MPI_Comm_rank
> [node07.my.com:16676] *** on communicator MPI_COMM_WORLD
> [node07.my.com:16676] *** MPI_ERR_COMM: invalid communicator
> [node07.my.com:16676] *** MPI_ERRORS_ARE_FATAL (goodbye)
> mpiexec noticed that job rank 2 with PID 16675 on node node07 exited
> on signal 60 (Real-time signal 26).
> 
>  /usr/local/openmpi-1.2.1/bin/ompi_info
>                 Open MPI: 1.2.1
>    Open MPI SVN revision: r14481
>                 Open RTE: 1.2.1
>    Open RTE SVN revision: r14481
>                     OPAL: 1.2.1
>        OPAL SVN revision: r14481
>                   Prefix: /usr/local/openmpi-1.2.1
>  Configured architecture: x86_64-unknown-linux-gnu
>            Configured by: root
>            Configured on: Mon May  7 18:32:56 PDT 2007
>           Configure host: neptune.nanostellar.com
>                 Built by: root
>                 Built on: Mon May  7 18:40:28 PDT 2007
>               Built host: neptune.my.com
>               C bindings: yes
>             C++ bindings: yes
>       Fortran77 bindings: yes (all)
>       Fortran90 bindings: yes
>  Fortran90 bindings size: small
>               C compiler: gcc
>      C compiler absolute: /usr/bin/gcc
>             C++ compiler: g++
>    C++ compiler absolute: /usr/bin/g++
>       Fortran77 compiler: /opt/intel/fce/9.1.043/bin/ifort
>   Fortran77 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
>       Fortran90 compiler: /opt/intel/fce/9.1.043/bin/ifort
>   Fortran90 compiler abs: /opt/intel/fce/9.1.043/bin/ifort
>              C profiling: yes
>            C++ profiling: yes
>      Fortran77 profiling: yes
>      Fortran90 profiling: yes
>           C++ exceptions: no
>           Thread support: posix (mpi: no, progress: no)
>   Internal debug support: no
>      MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>          libltdl support: yes
>    Heterogeneous support: yes
>  mpirun default --prefix: yes
>            MCA backtrace: execinfo (MCA v1.0, API v1.0, Component v1.2.1)
>               MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.2.1)
>            MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.2.1)
>            MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.2.1)
>            MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.2.1)
>                MCA timer: linux (MCA v1.0, API v1.0, Component v1.2.1)
>          MCA installdirs: env (MCA v1.0, API v1.0, Component v1.2.1)
>          MCA installdirs: config (MCA v1.0, API v1.0, Component v1.2.1)
>            MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
>            MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
>                 MCA coll: basic (MCA v1.0, API v1.0, Component v1.2.1)
>                 MCA coll: self (MCA v1.0, API v1.0, Component v1.2.1)
>                 MCA coll: sm (MCA v1.0, API v1.0, Component v1.2.1)
>                 MCA coll: tuned (MCA v1.0, API v1.0, Component v1.2.1)
>                   MCA io: romio (MCA v1.0, API v1.0, Component v1.2.1)
>                MCA mpool: rdma (MCA v1.0, API v1.0, Component v1.2.1)
>                MCA mpool: sm (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA pml: cm (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA bml: r2 (MCA v1.0, API v1.0, Component v1.2.1)
>               MCA rcache: vma (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA btl: self (MCA v1.0, API v1.0.1, Component v1.2.1)
>                  MCA btl: sm (MCA v1.0, API v1.0.1, Component v1.2.1)
>                  MCA btl: tcp (MCA v1.0, API v1.0.1, Component v1.0)
>                 MCA topo: unity (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.2.1)
>               MCA errmgr: hnp (MCA v1.0, API v1.3, Component v1.2.1)
>               MCA errmgr: orted (MCA v1.0, API v1.3, Component v1.2.1)
>               MCA errmgr: proxy (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA gpr: null (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA gpr: replica (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA iof: proxy (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA iof: svc (MCA v1.0, API v1.0, Component v1.2.1)
>                   MCA ns: proxy (MCA v1.0, API v2.0, Component v1.2.1)
>                   MCA ns: replica (MCA v1.0, API v2.0, Component v1.2.1)
>                  MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
>                  MCA ras: dash_host (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA ras: localhost (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA ras: slurm (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA ras: tm (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA rds: hostfile (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA rds: proxy (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA rds: resfile (MCA v1.0, API v1.3, Component v1.2.1)
>                MCA rmaps: round_robin (MCA v1.0, API v1.3, Component v1.2.1)
>                 MCA rmgr: proxy (MCA v1.0, API v2.0, Component v1.2.1)
>                 MCA rmgr: urm (MCA v1.0, API v2.0, Component v1.2.1)
>                  MCA rml: oob (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA pls: gridengine (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA pls: proxy (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA pls: rsh (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA pls: slurm (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA pls: tm (MCA v1.0, API v1.3, Component v1.2.1)
>                  MCA sds: env (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA sds: pipe (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA sds: seed (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA sds: singleton (MCA v1.0, API v1.0, Component v1.2.1)
>                  MCA sds: slurm (MCA v1.0, API v1.0, Component v1.2.1)
> 
> As you can see, I used GNU gcc and g++ with the Intel Fortran compiler
> to build Open MPI, and I am not sure whether there are any special
> flags I need to have.
> ./configure --prefix=/usr/local/openmpi-1.2.1 --disable-ipv6
> --with-tm=/usr/local/pbs  --enable-mpirun-prefix-by-default
> --enable-mpi-f90
> 
> After getting mpif90, I compiled my application (VASP) with this new
> parallel compiler, but then I could not run it through PBS.
> 
> #PBS -N Pt.CO.bridge.25ML
> ### Set the number of nodes that will be used. Ensure
> ### that the number "nodes" matches with the need of your job
> ### DO NOT MODIFY THE FOLLOWING LINE FOR SINGLE-PROCESSOR JOBS!
> #PBS -l nodes=node07:ppn=4
> #PBS -l walltime=96:00:00
> ##PBS -M a...@my.com
> #PBS -m abe
> export NPROCS=`wc -l $PBS_NODEFILE |gawk '//{print $1}'`
> echo $NPROCS
> echo The master node of this job is `hostname`
> echo The working directory is `echo $PBS_O_WORKDIR`
> echo The node file is $PBS_NODEFILE
> echo This job runs on the following $NPROCS nodes:
> echo `cat $PBS_NODEFILE`
> echo "=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-"
> echo
> echo command to EXE:
> echo
> echo
> cd $PBS_O_WORKDIR
> 
> echo "cachesize=4000 mpiblock=500 npar=4 procgroup=4 mkl ompi"
> 
> date
> /usr/local/openmpi-1.2.1/bin/mpiexec -mca mpi_paffinity_alone 1 -np
> $NPROCS /home/struong/bin/vaspmpi_mkl_ompi >"$PBS_JOBID".out
> date
> ------------
> 
> My environment is CentOS 4.4 x86_64, Intel Xeon, Torque, Maui.
> 
> Could somebody here tell me what I missed or did incorrectly?
> 
> Thank you very much.
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for the Intel Fortran compiler on Opteron systems
#
# The makefile was tested only under Linux on Intel platforms
# (SuSE 5.3 - SuSE 9.2)
# the following compiler versions have been tested:
# 5.0, 6.0, 7.0 and 7.1 (some 8.0 versions seem to fail compiling the code,
#  8.1 is slower than 8.0)
# presently we recommend version 7.1 or 7.0, since these
# releases have been used to compile the present code versions
#
# it might be necessary to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
#   Kazushige Goto's optimized BLAS is recommended
#   http://www.cs.utexas.edu/users/kgoto/signup_first.html
# (see below libgoto comments)
#
# FFT:
#   the fftw.3.0.1 library must be available and installed, since
#   the ifc compiler generates poor code if the built-in fft routines are used
# (see below fftw comments)
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=ifort
# fortran linker
FCL=$(FC)


#-----------------------------------------------------------------------
# where is CPP ?? (I need CPP; I can't use gcc with the proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_   =  /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C 
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_   =  /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE X.X, maybe some Red Hat distributions:

CPP_ =  ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf             charge density   reduced in X direction
# wNGXhalf            gamma point only reduced in X direction
# avoidalloc          avoid ALLOCATE if possible
# IFC                 work around some IFC bugs
# CACHE_SIZE          1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV        use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV        use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------

CPP     = $(CPP_)  -DHOST=\"LinuxIFC\" \
          -Dkind8 -DNGXhalf -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc \
          -Duse_cray_ptr -DIFC
#          -DRPROMU_DGEMV  -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags  (there must be a trailing blank on this line)
#-----------------------------------------------------------------------

FFLAGS =  -FR -lowercase -assume byterecl 

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generates code executable on all machines
#       -xK improves performance somewhat on Athlon XP, and the "a" (in -axK)
#       is required in order to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generates code executable on all machines
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

OFLAG=-O3 -xW -tpp7

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH = 

OBJ_NOOPT = 
DEBUG  = -FR -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on Opteron, you really need the libgoto library
#-----------------------------------------------------------------------

#BLAS= $(HOME)/OPT/src/vasp.4.6/libgoto_opteron-32-r0.99.so -lpthread
#BLAS= $(HOME)/OPT/src/vasp.4.6/libgoto_opt32-r0.96.so -lpthread
#BLAS=  /opt/libs/libgoto/libgoto_p4_512-r0.6.so
BLAS=	/opt/acml3.6.0/ifort64/lib/libacml.a

# LAPACK: the simplest option is vasp.4.lib/lapack_double
#LAPACK= ../vasp.4.lib/lapack_double.o
LAPACK= /opt/acml3.6.0/ifort64/lib/libacml.a

#-----------------------------------------------------------------------

LIB  = -L../vasp.4.lib -ldmy \
     ../vasp.4.lib/linpack_double.o $(LAPACK) \
     $(BLAS)

# options for linking (for compiler versions 6.X and 7.1 nothing is required)
LINK    = 
# compiler version 7.0 generates some vector statements which are located
# in the svml library; add the LIBPATH and the library (just in case)
#LINK    =  -L/opt/intel/compiler70/ia32/lib/ -lsvml 

#-----------------------------------------------------------------------
# fft libraries:
# On Opteron you really have to use fftw.3.0.X (http://www.fftw.org);
# the ifc compiler gives suboptimal performance on the Opteron with
# the built-in fft routines
#
# fftw.3.0.1 was compiled using the following command lines:
# > export CC="gcc -m32"
# > export F77="f77 -m32"
# > ./configure  --enable-sse2 --prefix=/home/kresse/ifc_opt/fftw-3.0.1/
# > make 
# > make install
# PLEASE do not send queries related to fftw to the vasp site
#-----------------------------------------------------------------------

FFT3D   = fft3dfurth.o fft3dlib.o
#FFT3D   = fftw3d.o fft3dlib.o   /usr/local/lib/libfftw3.a


#=======================================================================
# MPI section, uncomment the following lines
# 
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that already contain an
# underscore (i.e. MPI_SEND becomes mpi_send__).  The pgf90/ifc
# compilers however append only one underscore.
# Precompiled mpi versions will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with 
#  ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000"  \
# -f90="pgf90 " \
# --without-romio --without-mpe -opt=-O \
# 
# lam was configured with the line
#  ./configure  -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
# --with-f77flags=-O --without-romio
# 
# please note that you might be able to use a lam or mpich version 
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
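# A quick way to check which convention your MPI's Fortran library uses
# (the library path below is an assumption -- adjust it to your install):
#   nm -D /usr/local/lib/libmpi_f77.so | grep -i mpi_send
# one trailing underscore (mpi_send_) matches ifort/pgf90; two
# (mpi_send__) means the library was built with f77/g77.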
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above,  you can use the following line
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf               charge density   reduced in Z direction
# wNGZhalf              gamma point only reduced in Z direction
# scaLAPACK             use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

CPP    = $(CPP_) -DMPI  -DHOST=\"LinuxIFC\" -DIFC \
     -Dkind8 -DNGZhalf -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
     -DMPI_BLOCK=2000  \
     -Duse_cray_ptr	-DscaLAPACK
    #-DRPROMU_DGEMV  -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use scaLAPACK, simply uncomment the SCA= line below
#-----------------------------------------------------------------------

BLACS=/home/vasp/Desktop/ACML/BLACS
SCA_=/home/vasp/Desktop/ACML/scalapack

SCA= $(SCA_)/libscalapack.a  \
 $(BLACS)/LIB/libblacsF77init.a $(BLACS)/LIB/libblacs.a $(BLACS)/LIB/libblacsF77init.a
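# (libblacsF77init.a is listed twice on purpose: it and libblacs.a call
#  into each other, so a single linker pass can leave unresolved symbols)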

#SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

LIB     = -L../vasp.4.lib -ldmy  \
      ../vasp.4.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D   = fftmpi.o fftmpi_map.o fft3dlib.o 

#fftw.3.0.1 is much faster on Opteron
#FFT3D   = fftmpiw.o fftmpi_map.o fft3dlib.o /usr/local/lib/libfftw3.a  

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC=   symmetry.o symlib.o   lattlib.o  random.o   

SOURCE=  base.o     mpi.o      smart_allocate.o      xml.o  \
         constant.o jacobi.o   main_mpi.o  scala.o   \
         asa.o      lattice.o  poscar.o   ini.o      setex.o     radial.o  \
         pseudo.o   mgrid.o    mkpoints.o wave.o      wave_mpi.o  $(BASIC) \
         nonl.o     nonlr.o    dfast.o    choleski2.o    \
         mix.o      charge.o   xcgrad.o   xcspin.o    potex1.o   potex2.o  \
         metagga.o  constrmag.o pot.o      cl_shift.o force.o    dos.o      elf.o      \
         tet.o      hamil.o    steep.o    \
         chain.o    dyna.o     relativistic.o LDApU.o sphpro.o  paw.o   us.o \
         ebs.o      wavpre.o   wavpre_noio.o broyden.o \
         dynbr.o    rmm-diis.o reader.o   writer.o   tutor.o xml_writer.o \
         brent.o    stufak.o   fileio.o   opergrid.o stepver.o  \
         dipol.o    xclib.o    chgloc.o   subrot.o   optreal.o   davidson.o \
         edtest.o   electron.o shm.o      pardens.o  paircorrection.o \
         optics.o   constr_cell_relax.o   stm.o    finite_diff.o \
         elpol.o    setlocalpp.o 

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o 
	rm -f vasp
	$(FCL) -o vasp $(LINK) main.o  $(SOURCE)   $(FFT3D) $(LIB) 
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam  $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB) 
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:	
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG)  $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE)  $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE)  $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG)  $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F 
#
# MIND: I do not have a full dependency list for the include
# files and MODULES; here are only the minimal basic dependencies.
# If one structure is changed, then touch_dep must be called
# with the corresponding name of the structure.
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
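
# (in the suffix rules above, each .F file is first run through CPP into
#  a $(SUFFIX) file, which is then compiled; $* expands to the stem of
#  the target, so $*$(SUFFIX) names the preprocessed source)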

# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is, once a file has failed
#   with one compiler version, it stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used
#-----------------------------------------------------------------------

fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -tpp7 -xW -prefetch- -prec_div -unroll0 -e95 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

radial.o : radial.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symlib.o : symlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

symmetry.o : symmetry.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

dynbr.o : dynbr.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

broyden.o : broyden.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)

LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
mpi.o : mpi.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX) 
