I'm not an MPI expert, but I strongly encourage you to use

use mpi
implicit none

This can save a LOT of time when debugging.
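
For example, your program could look roughly like the sketch below (untested; same logic as your code, only with explicit declarations and the mpi module instead of mpif.h, and with the unused ista/iend left out):

program reduce_sample
  use mpi            ! the module gives the compiler explicit interfaces for the MPI routines
  implicit none      ! any undeclared or misspelled variable becomes a compile-time error

  integer, parameter :: nmax = 12
  integer :: n(nmax)
  integer :: i, ierr, irank, isize, isum, itmp

  call mpi_init(ierr)
  call mpi_comm_size(MPI_COMM_WORLD, isize, ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)

  ! local sum of 1..nmax on every rank
  isum = 0
  do i = 1, nmax
     n(i) = i
     isum = isum + n(i)
  end do

  ! with "use mpi" a wrongly typed argument here is caught at compile time
  call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM, &
                  0, MPI_COMM_WORLD, ierr)

  if (irank == 0) then
     isum = itmp
     write(*,*) isum
  end if

  call mpi_finalize(ierr)
end program reduce_sample

Compiled with mpif90 and run under mpirun it should behave the same as your version; the difference is only in how much the compiler can check for you.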


On 14 May 2013 18:00,  <users-requ...@open-mpi.org> wrote:
> Message: 1
> Date: Wed, 15 May 2013 00:39:06 +0900
> From: Hayato KUNIIE <kuni...@oita.email.ne.jp>
> Subject: [OMPI users] MPI_SUM is not defined on the MPI_INTEGER
>         datatype
> To: us...@open-mpi.org
> Message-ID: <51925a9a.50...@oita.email.ne.jp>
> Content-Type: text/plain; charset=ISO-2022-JP
>
> Hello, I'm kuni255.
>
> I built a Beowulf-type PC cluster (CentOS release 6.4) and I am studying
> MPI (Open MPI ver. 1.6.4). I tried the following sample, which uses
> MPI_REDUCE.
>
> Then an error occurred.
>
> The cluster consists of one head node and two slave nodes. The home
> directory on the head node is shared via NFS, and Open MPI is installed
> on each node.
>
> When I run this program on the head node only, it runs correctly and
> prints the result. But when I run it on a slave node only, the error
> below occurs.
>
> Please tell me a good idea : )
>
> Error message:
> [bwslv01:30793] *** An error occurred in MPI_Reduce: the reduction
> operation MPI_SUM is not defined on the MPI_INTEGER datatype
> [bwslv01:30793] *** on communicator MPI_COMM_WORLD
> [bwslv01:30793] *** MPI_ERR_OP: invalid reduce operation
> [bwslv01:30793] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 1 with PID 30793 on
> node bwslv01 exiting improperly. There are two reasons this could occur:
>
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
>
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [bwhead.clnet:02147] 1 more process has sent help message
> help-mpi-errors.txt / mpi_errors_are_fatal
> [bwhead.clnet:02147] Set MCA parameter "orte_base_help_aggregate" to 0
> to see all help / error messages
>
> Fortran 90 source code:
> include 'mpif.h'
> parameter (nmax=12)
> integer n(nmax)
>
> call mpi_init(ierr)
> call mpi_comm_size(MPI_COMM_WORLD, isize, ierr)
> call mpi_comm_rank(MPI_COMM_WORLD, irank, ierr)
> ista = irank*(nmax/isize) + 1
> iend = ista + (nmax/isize - 1)
> isum = 0
> do i = 1, nmax
>    n(i) = i
>    isum = isum + n(i)
> end do
> call mpi_reduce(isum, itmp, 1, MPI_INTEGER, MPI_SUM, &
>      0, MPI_COMM_WORLD, ierr)
>
> if (irank == 0) then
>    isum = itmp
>    WRITE(*,*) isum
> endif
> call mpi_finalize(ierr)
> end
