Hi everyone,

I managed to compile siesta 2.0.2 in parallel on a machine with the following specifications:

Model: SGI Altix 3700 Bx2
Architecture: 128 Itanium2 Processors 1.6GHz
Memory size: 256 GB  Cache Size: 6 MB
ProPack Version: SGI ProPack 5 for Linux, Build 500r1-0607180902
Operating System: SUSE Linux Enterprise Server 10.1 (ia64)
Kernel Release: Linux silky 2.6.16.27-0.6-default

I used these libraries: the SGI MPI Library, the SGI Scientific Computing Software Library (SCSL) (for BLAS and LAPACK), and the SGI Scientific Library for Distributed Shared Memory (SDSM) (for ScaLAPACK).

This is the arch.make I used:

#
# This file is part of the SIESTA package.
#
# Copyright (c) Fundacion General Universidad Autonoma de Madrid:
# E.Artacho, J.Gale, A.Garcia, J.Junquera, P.Ordejon, D.Sanchez-Portal
# and J.M.Soler, 1996-2006.
#
# Use of this software constitutes agreement with the full conditions
# given in the SIESTA license, as signed by all legitimate users.
#
.SUFFIXES:
.SUFFIXES: .f .F .o .a .f90 .F90

SIESTA_ARCH=ia64-unknown-linux-gnu--unknown

FPP=
FPP_OUTPUT=
#FC=mpif90 -intel
FC=mpif90
RANLIB=ranlib

SYS=nag

SP_KIND=4
DP_KIND=8
KINDS=$(SP_KIND) $(DP_KIND)

FFLAGS=-g -O3
FPPFLAGS= -DMPI -DFC_HAVE_FLUSH -DFC_HAVE_ABORT
LDFLAGS=

ARFLAGS_EXTRA=

FCFLAGS_fixed_f=-FI
FCFLAGS_free_f90=
FPPFLAGS_fixed_F=-FI -cpp
FPPFLAGS_free_F90=

BLAS_LIBS=-lscs
LAPACK_LIBS=
BLACS_LIBS=
SCALAPACK_LIBS= -lsdsm


COMP_LIBS=dc_lapack.a

NETCDF_LIBS=
NETCDF_INTERFACE=

LIBS=$(SCALAPACK_LIBS) $(BLACS_LIBS) $(LAPACK_LIBS) $(BLAS_LIBS) $(NETCDF_LIBS)

#SIESTA needs an F90 interface to MPI
#This will give you SIESTA's own implementation
#If your compiler vendor offers an alternative, you may change
#to it here.
MPI_INTERFACE=libmpi_f90.a
MPI_INCLUDE=/usr/include

#Dependency rules are created by autoconf according to whether
#discrete preprocessing is necessary or not.
.F.o:
        $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_fixed_F)  $<
.F90.o:
        $(FC) -c $(FFLAGS) $(INCFLAGS) $(FPPFLAGS) $(FPPFLAGS_free_F90) $<
.f.o:
        $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_fixed_f)  $<
.f90.o:
        $(FC) -c $(FFLAGS) $(INCFLAGS) $(FCFLAGS_free_f90)  $<


I had to modify MPI/mpi.F so that the first lines look like:

      USE MPI__INCLUDE, ONLY : DAT_single => MPI_real,
     & DAT_2single => MPI_2real,
     & DAT_double => MPI_double_precision,
     & DAT_2double => MPI_2double_precision,
     & DAT_complex => MPI_complex,
     & DAT_dcomplex => MPI_double_complex

        USE MPI__INCLUDE

I also removed the trailing "&" at the end of the first line of every multi-line statement. For instance, where a line like

      SUBROUTINE MPI_WAITANY( &
     &        COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR)

appears, I now have

          SUBROUTINE MPI_WAITANY(
     &        COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR)
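This edit can also be scripted rather than done by hand. A minimal sketch with sed, assuming the only trailing "&" characters in MPI/mpi.F are these free-form continuation markers (check the diff before keeping the result):

```shell
# Strip a trailing '&' (free-form continuation marker) from each line,
# keeping the leading '&' in column 6 (fixed-form continuation).
# Applied in place it would be something like:
#   sed -i.orig 's/[[:space:]]*&[[:space:]]*$//' MPI/mpi.F
# Demonstrated here on the MPI_WAITANY example from above:
printf '      SUBROUTINE MPI_WAITANY( &\n     &        COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR)\n' |
  sed 's/[[:space:]]*&[[:space:]]*$//'
```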

After that, siesta compiles fine (though only with low optimization, -O0). Nevertheless, when I try to run the h2o example, this is the error I get:

MPI: On host silky, Program /data/cacarden/siesta-2.0.2/Src/siesta, Rank 1, Process 18789 received signal SIGFPE(8)


MPI: --------stack traceback-------
line: 2 Unable to parse input as legal command or C expression.
line: 2 Unable to parse input as legal command or C expression.
MPI: Intel(R) Debugger for applications running on IA-64, Version 11.0, Build [1.1510.2.109]

MPI: -----stack traceback ends-----
MPI: On host silky, Program /data/cacarden/siesta-2.0.2/Src/siesta, Rank 1, Process 18789: Dumping core on signal SIGFPE(8) into directory /data/cacarden/siesta-2.0.2/Src/h2o
MPI: MPI_COMM_WORLD rank 1 has terminated without calling MPI_Finalize()
MPI: aborting job
MPI: Received signal 8

It seems like a problem with the INTEGER types.

Any ideas what's going on?

Thanks,

Carlos Cardenas
