Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
Please also add IMPLICIT NONE to your SUBROUTINE, and replace INCLUDE 'mpif.h' with USE mpi. This comes with the benefit of interface checking: the compiler will then flag calls with wrong or missing arguments at compile time. It is even better style to write:

   PROGRAM main
      USE mpi
      ...
   CONTAINS
      SUBROUTINE ...
      ...
      END SUBROUTINE ...
   END PROGRAM main

For more complex programs, please use modules instead.
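A fuller sketch of this layout, applied to the program from this thread (the bodies are illustrative, not the poster's exact code). Because sendrecv is an internal procedure and the program uses USE mpi, the compiler checks both the call to sendrecv and the MPI calls against explicit interfaces:

```fortran
! Sketch only: USE mpi plus an internal subroutine after CONTAINS,
! so argument mismatches are caught at compile time.
      program main
         use mpi
         implicit none
         integer :: me, np, ierror

         call MPI_Init(ierror)
         call MPI_Comm_rank(MPI_COMM_WORLD, me, ierror)
         call MPI_Comm_size(MPI_COMM_WORLD, np, ierror)

         call sendrecv(me, np)

         call MPI_Finalize(ierror)

      contains

         subroutine sendrecv(me, np)
            integer, intent(in) :: me, np
            ! ... MPI_Send / MPI_Recv calls as discussed in the thread ...
         end subroutine sendrecv

      end program main
```

With this layout there is no separate sendrecv.f file to keep in sync; the subroutine inherits USE mpi and IMPLICIT NONE from the host program.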
Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
oh! it works now. Thanks a lot, and sorry about my negligence.

2013/3/1 Ake Sandgren:
> On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote:
> > Sorry for those mistakes. I addressed all the three problems:
> > - I put "implicit none" at the top of the main program
> > - I initialized tag.
> > - changed MPI_INT to MPI_INTEGER
> > - "send_length" should be just "send"; it was a typo.
> >
> > But the code is still hanging in sendrecv. The present form is below:
>
> "tag" isn't initialized to anything, so it may very well be totally
> different in all the processes.
> ALWAYS initialize variables before using them.
>
> [quoted code listing elided; it appears in full in the messages below]
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
On Fri, 2013-03-01 at 01:24 +0900, Pradeep Jha wrote:
> Sorry for those mistakes. I addressed all the three problems:
> - I put "implicit none" at the top of the main program
> - I initialized tag.
> - changed MPI_INT to MPI_INTEGER
> - "send_length" should be just "send"; it was a typo.
>
> But the code is still hanging in sendrecv. The present form is below:

"tag" isn't initialized to anything, so it may very well be totally different in all the processes. ALWAYS initialize variables before using them.

> main.f
>
>       program main
>
>       implicit none
>
>       include 'mpif.h'
>
>       integer me, np, ierror
>
>       call MPI_init( ierror )
>       call MPI_comm_rank( mpi_comm_world, me, ierror )
>       call MPI_comm_size( mpi_comm_world, np, ierror )
>
>       call sendrecv(me, np)
>
>       call mpi_finalize( ierror )
>
>       stop
>       end
>
> sendrecv.f
>
>       subroutine sendrecv(me, np)
>
>       include 'mpif.h'
>
>       integer np, me, sender, tag
>       integer, dimension(mpi_status_size) :: status
>
>       integer, dimension(1) :: recv, send
>
>       if (me.eq.0) then
>
>          do sender = 1, np-1
>             call mpi_recv(recv, 1, mpi_integer, sender, tag,
>      &                    mpi_comm_world, status, ierror)
>          end do
>       end if
>
>       if ((me.ge.1).and.(me.lt.np)) then
>          send(1) = me*12
>
>          call mpi_send(send, 1, mpi_integer, 0, tag,
>      &                 mpi_comm_world, ierror)
>       end if
>
>       return
>       end
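For reference, a sketch of the fix being pointed out here: give tag a definite value that is the same on the sending and the receiving side. The value 0 below is illustrative; any tag between 0 and MPI_TAG_UB works as long as MPI_Send and MPI_Recv agree. Declaring ierror and keeping implicit none in the subroutine makes such mistakes visible earlier:

```fortran
! Sketch of the corrected subroutine: "tag" now has a definite value
! (a parameter, so it cannot be left uninitialized), and ierror is
! declared so implicit none compiles cleanly.
      subroutine sendrecv(me, np)

      implicit none
      include 'mpif.h'

      integer np, me, sender, ierror
      integer, parameter :: tag = 0
      integer, dimension(mpi_status_size) :: status

      integer, dimension(1) :: recv, send

      if (me.eq.0) then
         do sender = 1, np-1
            call mpi_recv(recv, 1, mpi_integer, sender, tag,
     &                    mpi_comm_world, status, ierror)
         end do
      end if

      if ((me.ge.1).and.(me.lt.np)) then
         send(1) = me*12

         call mpi_send(send, 1, mpi_integer, 0, tag,
     &                 mpi_comm_world, ierror)
      end if

      return
      end
```

Without this, each rank's tag holds whatever garbage is on its stack, so the receive on rank 0 and the sends on the other ranks almost never match, and the program hangs exactly as described.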
Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
I don't see tag being set to any value.

On Feb 28, 2013, at 8:24 AM, Pradeep Jha wrote:
> Sorry for those mistakes. I addressed all the three problems:
> - I put "implicit none" at the top of the main program
> - I initialized tag.
> - changed MPI_INT to MPI_INTEGER
> - "send_length" should be just "send"; it was a typo.
>
> But the code is still hanging in sendrecv. The present form is below:
>
> [quoted code listing and earlier reply elided; the same listing appears in full in the message below]
Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
Sorry for those mistakes. I addressed all the three problems:
- I put "implicit none" at the top of the main program
- I initialized tag.
- changed MPI_INT to MPI_INTEGER
- "send_length" should be just "send"; it was a typo.

But the code is still hanging in sendrecv. The present form is below:

main.f

      program main

      implicit none

      include 'mpif.h'

      integer me, np, ierror

      call MPI_init( ierror )
      call MPI_comm_rank( mpi_comm_world, me, ierror )
      call MPI_comm_size( mpi_comm_world, np, ierror )

      call sendrecv(me, np)

      call mpi_finalize( ierror )

      stop
      end

sendrecv.f

      subroutine sendrecv(me, np)

      include 'mpif.h'

      integer np, me, sender, tag
      integer, dimension(mpi_status_size) :: status

      integer, dimension(1) :: recv, send

      if (me.eq.0) then

         do sender = 1, np-1
            call mpi_recv(recv, 1, mpi_integer, sender, tag,
     &                    mpi_comm_world, status, ierror)
         end do
      end if

      if ((me.ge.1).and.(me.lt.np)) then
         send(1) = me*12

         call mpi_send(send, 1, mpi_integer, 0, tag,
     &                 mpi_comm_world, ierror)
      end if

      return
      end

2013/3/1 Jeff Squyres (jsquyres):
> On Feb 28, 2013, at 9:59 AM, Pradeep Jha wrote:
> > Is it possible to call the MPI_send and MPI_recv commands inside a
> > subroutine and not the main program?
>
> Yes.
>
> > I have written a minimal program for what I am trying to do. It is
> > compiling fine but it is not working. The program just hangs in the
> > "sendrecv" subroutine. Any ideas how can I do it?
>
> You seem to have several errors in the sendrecv subroutine. I would
> strongly encourage you to use "implicit none" to avoid many of these
> errors. Here's a few errors I see offhand:
>
> - tag is not initialized
> - what's send_length(1)?
> - use MPI_INTEGER, not MPI_INT (MPI_INT = C int, MPI_INTEGER = Fortran INTEGER)
>
> [original code listing elided; it appears in full in the message below]
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
Re: [OMPI users] Calling MPI_send MPI_recv from a fortran subroutine
On Feb 28, 2013, at 9:59 AM, Pradeep Jha wrote:
> Is it possible to call the MPI_send and MPI_recv commands inside a
> subroutine and not the main program?

Yes.

> I have written a minimal program for what I am trying to do. It is
> compiling fine but it is not working. The program just hangs in the
> "sendrecv" subroutine. Any ideas how can I do it?

You seem to have several errors in the sendrecv subroutine. I would strongly encourage you to use "implicit none" to avoid many of these errors. Here's a few errors I see offhand:

- tag is not initialized
- what's send_length(1)?
- use MPI_INTEGER, not MPI_INT (MPI_INT = C int, MPI_INTEGER = Fortran INTEGER)

> main.f
>
>       program main
>
>       include 'mpif.h'
>
>       integer me, np, ierror
>
>       call MPI_init( ierror )
>       call MPI_comm_rank( mpi_comm_world, me, ierror )
>       call MPI_comm_size( mpi_comm_world, np, ierror )
>
>       call sendrecv(me, np)
>
>       call mpi_finalize( ierror )
>
>       stop
>       end
>
> sendrecv.f
>
>       subroutine sendrecv(me, np)
>
>       include 'mpif.h'
>
>       integer np, me, sender
>       integer, dimension(mpi_status_size) :: status
>
>       integer, dimension(1) :: recv, send
>
>       if (me.eq.0) then
>
>          do sender = 1, np-1
>             call mpi_recv(recv, 1, mpi_int, sender, tag,
>      &                    mpi_comm_world, status, ierror)
>          end do
>       end if
>
>       if ((me.ge.1).and.(me.lt.np)) then
>          send_length(1) = me*12
>
>          call mpi_send(send, 1, mpi_int, 0, tag,
>      &                 mpi_comm_world, ierror)
>       end if
>
>       return
>       end

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/
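A side note on the receive loop discussed in this thread: once tag is properly initialized, rank 0 does not have to receive in rank order (sender = 1, 2, ..., np-1). MPI_ANY_SOURCE lets it accept the np-1 messages in whatever order they arrive, and the actual sender can be read back from the status array. A sketch under those assumptions (the subroutine name and tag value are illustrative, not from the thread):

```fortran
! Sketch: receive np-1 messages in arrival order rather than rank order.
! The true sender of each message is recovered from status(mpi_source).
      subroutine recv_all(np)

      implicit none
      include 'mpif.h'

      integer, intent(in) :: np
      integer i, ierror
      integer, parameter :: tag = 0
      integer, dimension(mpi_status_size) :: status
      integer, dimension(1) :: recv

      do i = 1, np-1
         call mpi_recv(recv, 1, mpi_integer, mpi_any_source, tag,
     &                 mpi_comm_world, status, ierror)
         ! status(mpi_source) holds the rank that sent this message,
         ! status(mpi_tag) the tag it was sent with.
      end do

      return
      end
```

This avoids a subtle serialization in the original loop: with a fixed sender order, rank 0 waits for rank 1 even if ranks 2..np-1 finished first.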