Hi Pradeep

Just curious:

Did you also change "mpi_int" to "MPI_INTEGER"?

Presumably the two types are the same size,
but I wonder whether the two type names are somehow interchangeable
in Open MPI (I would guess they're not),
even though they are declared in different header files, etc.

Thank you,
Gus Correa


On 02/18/2013 08:32 PM, Pradeep Jha wrote:
That was careless of me. Thanks for pointing it out. Declaring "status",
"ierr" and putting "implicit none" solved the problem.

Thanks again.
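For anyone finding this thread later, a corrected version of the program
might look like this (a sketch only: "implicit none" added, "status" and
"ierr" declared, and "MPI_INTEGER" used as the Fortran-binding datatype):

```fortran
      program mpi_test

      implicit none
      include 'mpif.h'

      integer, dimension(3) :: recv, send
      integer :: sender, np, rank, ierror
!     status must be an integer array of size MPI_STATUS_SIZE
      integer :: status(MPI_STATUS_SIZE)

      call mpi_init( mpi_comm_world, ierror )
      call mpi_comm_rank( mpi_comm_world, rank, ierror )
      call mpi_comm_size( mpi_comm_world, np, ierror )

!     receive the data from the other processors
      if (rank.eq.0) then
         do sender = 1, np-1
            print *, "Sender: ", sender
            call mpi_recv(recv, 3, MPI_INTEGER, sender, 1,
     &                    mpi_comm_world, status, ierror)
            print *, "Data received from ", sender
         end do
      end if

!     send the data to the main processor
      if (rank.ne.0) then
         send(1) = 3
         send(2) = 4
         send(3) = 4
         call mpi_send(send, 3, MPI_INTEGER, 0, 1,
     &                 mpi_comm_world, ierror)
      end if

      call mpi_finalize(ierror)

      end program mpi_test
```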


2013/2/19 Jeff Squyres (jsquyres) <jsquy...@cisco.com>

    +1.  The problem is that you didn't declare status or ierr.  Since
    you didn't declare status, you're buffer overflowing, and random Bad
    Things happen from there.

    You should *always* use "implicit none" to catch these kinds of errors.
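    For instance, the missing declarations would look like this (a minimal
    sketch; MPI_STATUS_SIZE is defined by mpif.h):

    ```fortran
          implicit none
          include 'mpif.h'
    !     status must be large enough for the MPI library to fill in
          integer :: status(MPI_STATUS_SIZE)
    !     error code returned by each MPI call
          integer :: ierr
    ```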


    On Feb 18, 2013, at 2:02 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:

     > Hi Pradeep
     >
     > For what it is worth, in the MPI Fortran bindings/calls the
     > datatype to use is "MPI_INTEGER", not "mpi_int" (which you used;
     > MPI_INT is in the MPI C bindings):
     >
     > http://linux.die.net/man/3/mpi_integer
     >
     > Also, just to prevent variables from inadvertently taking
     > the wrong type, you could add:
     >
     > implicit none
     >
     > to the top of your code.
     > You already have an undeclared "ierr" in "call mpi_send".
     > (You declared "ierror" as an integer, but not "ierr".)
     > This particular one may not cause any harm, though:
     > under old Fortran's implicit typing, names starting with "i" are integers by default.
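     > For instance, with "implicit none" in place and "MPI_INTEGER" as the
     > datatype, the receive call would look like this (a sketch, reusing
     > your variable names):
     >
     > ```fortran
     >       call mpi_recv(recv, 3, MPI_INTEGER, sender, 1,
     >      &              mpi_comm_world, status, ierror)
     > ```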
     >
     > I hope this helps,
     > Gus Correa
     >
     >
     > On 02/18/2013 01:26 PM, jody wrote:
     >> Hi Pradeep
     >>
     >> I am not sure if this is the reason, but usually it is a bad idea to
     >> force an order of receives (as you do in your receive loop:
     >> first from sender 1, then from sender 2, then from sender 3).
     >> Unless you implement it so, there is no guarantee the sends are
     >> performed in this order.
     >>
     >> It is better if you accept messages from all senders
    (MPI_ANY_SOURCE)
     >> instead of particular ranks and then check where the
     >> message came from by examining the status fields
     >> (http://www.mpi-forum.org/docs/mpi22-report/node47.htm)
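     >> For example, the receive loop could look like this (a sketch,
     >> assuming your "recv" buffer, an extra integer loop index "i", and a
     >> "status" array of size MPI_STATUS_SIZE):
     >>
     >> ```fortran
     >>       do i = 1, np-1
     >>          call mpi_recv(recv, 3, MPI_INTEGER, MPI_ANY_SOURCE, 1,
     >>      &                 mpi_comm_world, status, ierror)
     >> !        the rank the message actually came from
     >>          sender = status(MPI_SOURCE)
     >>          print *, "Data received from ", sender
     >>       end do
     >> ```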
     >>
     >> Hope this helps
     >>   Jody
     >>
     >>
     >> On Mon, Feb 18, 2013 at 5:06 PM, Pradeep Jha
     >> <prad...@ccs.engg.nagoya-u.ac.jp> wrote:
     >>> I have attached a sample of the MPI program I am trying to
    write. When I run
     >>> this program using "mpirun -np 4 a.out", my output is:
     >>>
     >>>  Sender:            1
     >>>  Data received from            1
     >>>  Sender:            2
     >>>  Data received from            1
     >>>  Sender:            2
     >>>
     >>> And the run hangs there. I don't understand why the "sender"
     >>> variable changes its value after MPI_recv. Any ideas?
     >>>
     >>> Thank you,
     >>>
     >>> Pradeep
     >>>
     >>>
     >>>  program mpi_test
     >>>
     >>>   include 'mpif.h'
     >>>
     >>> !----------------( Initialize variables )--------------------
     >>>   integer, dimension(3) :: recv, send
     >>>
     >>>   integer :: sender, np, rank, ierror
     >>>
     >>>   call  mpi_init( ierror )
     >>>   call  mpi_comm_rank( mpi_comm_world, rank, ierror )
     >>>   call  mpi_comm_size( mpi_comm_world, np, ierror )
     >>>
     >>> !----------------( Main program )--------------------
     >>>
     >>> !     receive the data from the other processors
     >>>   if (rank.eq.0) then
     >>>      do sender = 1, np-1
     >>>         print *, "Sender: ", sender
     >>>         call mpi_recv(recv, 3, mpi_int, sender, 1,
     >>> &        mpi_comm_world, status, ierror)
     >>>         print *, "Data received from ",sender
     >>>      end do
     >>>   end if
     >>>
     >>> !   send the data to the main processor
     >>>   if (rank.ne.0) then
     >>>      send(1) = 3
     >>>      send(2) = 4
     >>>      send(3) = 4
     >>>      call mpi_send(send, 3, mpi_int, 0, 1, mpi_comm_world, ierr)
     >>>   end if
     >>>
     >>>
     >>> !----------------( clean up )--------------------
     >>>   call mpi_finalize(ierror)
     >>>
     >>>   return
     >>>   end program mpi_test
     >>>
     >>>
     >>> _______________________________________________
     >>> users mailing list
     >>> us...@open-mpi.org
     >>> http://www.open-mpi.org/mailman/listinfo.cgi/users
     >


    --
    Jeff Squyres
    jsquy...@cisco.com
    For corporate legal information go to:
    http://www.cisco.com/web/about/doing_business/legal/cri/





