OK, so from what you say, from an "execution system" point of view, the ring
communication is carried out correctly (i.e. in the expected order, with
rank 0, which receives from rank 6, coming last), but the stdout doesn't
reflect what really happened, does it?

Is there a way to make stdout respect the expected order?
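
One approach that does give a deterministic order is to print from a single
rank only: for example, gather the received values on rank 0 and let rank 0
print all the lines itself. Below is a minimal sketch of that idea (a
hypothetical ring_ordered variant, not from this thread; it reuses the ring
exchange of the program quoted further down and only changes the output step):

--------------------------------------------------------------------------------------------------------------------------
 program ring_ordered

 implicit none
 include 'mpif.h'

 integer, dimension( MPI_STATUS_SIZE ) :: status
 integer, parameter                    :: tag=100
 integer :: nb_procs, rank, value, r, i, &
            num_proc_previous, num_proc_next, code
 integer, allocatable :: all_values(:)

 call MPI_INIT (code)
 call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
 call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)

 num_proc_next=mod(rank+1,nb_procs)
 num_proc_previous=mod(nb_procs+rank-1,nb_procs)

 ! Same ring exchange as in the original program.
 if (rank == 0) then
    call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
                   MPI_COMM_WORLD ,code)
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
                   MPI_COMM_WORLD ,status,code)
 else
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
                   MPI_COMM_WORLD ,status,code)
    call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
                   MPI_COMM_WORLD ,code)
 end if

 ! Collect one received value per rank on rank 0.
 allocate(all_values(nb_procs))
 call MPI_GATHER (value,1, MPI_INTEGER ,all_values,1, MPI_INTEGER ,0, &
                  MPI_COMM_WORLD ,code)

 ! Only rank 0 prints, so the lines appear in a fixed order:
 ! ranks 1, 2, ..., nb_procs-1 first, then rank 0, as in the expected output.
 if (rank == 0) then
    do i = 1, nb_procs
       r = mod(i,nb_procs)
       print *,'Me, process ',r,', I have received ',all_values(r+1), &
               ' from process ',mod(nb_procs+r-1,nb_procs)
    end do
 end if

 deallocate(all_values)
 call MPI_FINALIZE (code)
 end program ring_ordered
--------------------------------------------------------------------------------------------------------------------------

Since all the ordering then happens inside a single process, the output no
longer depends on how mpirun interleaves the stdout of the different ranks.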

Thanks


2014-03-16 0:42 GMT+01:00 Ralph Castain <r...@open-mpi.org>:

> The explanation is simple: there is no rule about ordering of stdout. So
> even though your rank 0 may receive its MPI message last, its stdout may
> well be printed before output generated on a remote node. The reason is
> that rank 0 may well be local to mpirun, and thus its stdout can be handled
> immediately, whereas your rank 6 may well be on a remote node, whose
> daemon has to forward the stdout to mpirun for printing.
>
> Like I said - no guarantee about ordering of stdout.
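
To illustrate this point: even a hypothetical variant of the print step that
serializes the print calls with barriers, one rank per iteration, only orders
when the statements are executed, not when mpirun writes the forwarded lines.
A sketch, assuming the variables of the program quoted below plus an extra
integer r:

--------------------------------------------------------------------------------------------------------------------------
 ! Hypothetical replacement for the single print statement in the program
 ! below: rank r prints only in iteration r, after a barrier.  This orders
 ! the print calls in time, but output forwarded from remote daemons can
 ! still reach mpirun out of order, as explained above.
 do r = 0, nb_procs-1
    call MPI_BARRIER ( MPI_COMM_WORLD ,code)
    if (rank == r) then
       print *,'Me, process ',rank,', I have received ',value, &
               ' from process ',num_proc_previous
    end if
 end do
--------------------------------------------------------------------------------------------------------------------------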
>
>
> On Mar 15, 2014, at 2:43 PM, christophe petit <
> christophe.peti...@gmail.com> wrote:
>
> Hello,
>
> I followed a simple MPI example to do a ring communication.
>
> Here's the figure that illustrates this example with 7 processes:
>
> http://i.imgur.com/Wrd6acv.png
>
> Here is the code:
>
>
> --------------------------------------------------------------------------------------------------------------------------
>  program ring
>
>  implicit none
>  include 'mpif.h'
>
>  integer, dimension( MPI_STATUS_SIZE ) :: status
>  integer, parameter                    :: tag=100
>  integer :: nb_procs, rank, value, &
>             num_proc_previous,num_proc_next,code
>
>  call MPI_INIT (code)
>  call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
>  call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)
>
>  num_proc_next=mod(rank+1,nb_procs)
>  num_proc_previous=mod(nb_procs+rank-1,nb_procs)
>
>  if (rank == 0) then
>     call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
>                    MPI_COMM_WORLD ,code)
>     call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>                    MPI_COMM_WORLD ,status,code)
>  else
>     call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>                    MPI_COMM_WORLD ,status,code)
>     call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
>                    MPI_COMM_WORLD ,code)
>  end if
>  print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous
>
>  call MPI_FINALIZE (code)
> end program ring
>
>
> --------------------------------------------------------------------------------------------------------------------------
>
> At execution, I expect to always get:
>
>  Me, process            1 , I have received         1000  from process            0
>  Me, process            2 , I have received         1001  from process            1
>  Me, process            3 , I have received         1002  from process            2
>  Me, process            4 , I have received         1003  from process            3
>  Me, process            5 , I have received         1004  from process            4
>  Me, process            6 , I have received         1005  from process            5
>  Me, process            0 , I have received         1006  from process            6
>
> But sometimes the reception of process 0 from process 6 is not the last
> one, like this:
>
>  Me, process            1 , I have received         1000  from process            0
>  Me, process            2 , I have received         1001  from process            1
>  Me, process            3 , I have received         1002  from process            2
>  Me, process            4 , I have received         1003  from process            3
>  Me, process            5 , I have received         1004  from process            4
>  Me, process            0 , I have received         1006  from process            6
>  Me, process            6 , I have received         1005  from process            5
>
> where the reception of process 0 from process 6 appears before the
> reception of process 6 from process 5,
>
> or like in this result:
>
>  Me, process            1 , I have received         1000  from process            0
>  Me, process            2 , I have received         1001  from process            1
>  Me, process            3 , I have received         1002  from process            2
>  Me, process            4 , I have received         1003  from process            3
>  Me, process            0 , I have received         1006  from process            6
>  Me, process            5 , I have received         1004  from process            4
>  Me, process            6 , I have received         1005  from process            5
>
> where the reception of process 0 appears between those of processes 4 and 5.
>
> How can we explain this strange result? I thought that the standard
> MPI_SEND and MPI_RECV were blocking by default, but with this result they
> seem not to be blocking.
>
> I tested this example on Debian 7.0 with the open-mpi package.
>
> Thanks for your help
>
>
>
>