Re: [OMPI users] random problems with a ring communication example

2014-03-15 Thread Ralph Castain

On Mar 15, 2014, at 6:21 PM, christophe petit  
wrote:

> Ok, so from what you say, from the execution system's point of view the ring 
> communication completes in the right order (with rank 0 receiving from rank 6 
> last), but the stdout doesn't reflect what really happened, does it?

Well, it reflects what you printed, but not the order in which things happened.

> 
> Is there a way to make stdout respect the expected order?

In your program, have each rank!=0 proc recv the message from the previous 
rank, print the message, sleep(1), and then send.
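
Something like this (an untested sketch that replaces the if/else and the print
in your program; note that sleep() is a compiler extension, e.g. gfortran's, not
standard Fortran):

 if (rank == 0) then
    call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, MPI_COMM_WORLD ,code)
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, MPI_COMM_WORLD ,status,code)
    print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous
 else
    ! receive first, print, pause briefly so the output can reach mpirun, then send
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, MPI_COMM_WORLD ,status,code)
    print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous
    call sleep(1)
    call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, MPI_COMM_WORLD ,code)
 end if

Even with the sleep there is still no hard guarantee, but in practice it gives
each forwarded line time to reach mpirun before the next rank prints.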

> 
> Thanks
> 
> 
> 2014-03-16 0:42 GMT+01:00 Ralph Castain :
> The explanation is simple: there is no rule about ordering of stdout. So even 
> though your rank0 may receive its MPI message last, its stdout may well be 
> printed before one generated on a remote node. Reason is that rank 0 may well 
> be local to mpirun, and thus the stdout can be handled immediately. However, 
> your rank6 may well be on a remote node, and that daemon has to forward the 
> stdout to mpirun for printing.
> 
> Like I said - no guarantee about ordering of stdout.
> 
> 
> On Mar 15, 2014, at 2:43 PM, christophe petit  
> wrote:
> 
>> Hello,
>> 
>> I followed a simple MPI example to do a ring communication.
>> 
>> Here's the figure that illustrates this example with 7 processes :
>> 
>> http://i.imgur.com/Wrd6acv.png
>> 
>> Here is the code:
>> 
>> --
>>  program ring
>> 
>>  implicit none
>>  include 'mpif.h'
>> 
>>  integer, dimension( MPI_STATUS_SIZE ) :: status
>>  integer, parameter:: tag=100
>>  integer :: nb_procs, rank, value, &
>> num_proc_previous,num_proc_next,code
>> 
>>  call MPI_INIT (code)
>>  call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
>>  call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)
>>  
>>  num_proc_next=mod(rank+1,nb_procs) 
>>  num_proc_previous=mod(nb_procs+rank-1,nb_procs)
>>  
>>  if (rank == 0) then
>> call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
>>MPI_COMM_WORLD ,code)
>> call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>>MPI_COMM_WORLD ,status,code)
>>  else
>> call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>>MPI_COMM_WORLD ,status,code)
>> call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
>>MPI_COMM_WORLD ,code)
>>  end if
>>  print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous
>>  
>>  call MPI_FINALIZE (code)
>> end program ring
>> 
>> --
>> 
>> At the execution, I expect to always have :
>> 
>>  Me, process 1, I have received 1000 from process 0
>>  Me, process 2, I have received 1001 from process 1
>>  Me, process 3, I have received 1002 from process 2
>>  Me, process 4, I have received 1003 from process 3
>>  Me, process 5, I have received 1004 from process 4
>>  Me, process 6, I have received 1005 from process 5
>>  Me, process 0, I have received 1006 from process 6
>> 
>> But sometimes the reception of process 0 from process 6 is not the last one 
>> printed, like this:
>> 
>>  Me, process 1, I have received 1000 from process 0
>>  Me, process 2, I have received 1001 from process 1
>>  Me, process 3, I have received 1002 from process 2
>>  Me, process 4, I have received 1003 from process 3
>>  Me, process 5, I have received 1004 from process 4
>>  Me, process 0, I have received 1006 from process 6
>>  Me, process 6, I have received 1005 from process 5
>> 
>> where the reception of process 0 from process 6 happens before the reception of 
>> process 6 from process 5,
>> 
>> or like this result:
>> 
>>  Me, process 1, I have received 1000 from process 0
>>  Me, process 2, I have received 1001 from process 1
>>  Me, process 3, I have received 1002 from process 2
>>  Me, process 4, I have received 1003 from process 3
>>  Me, process 0, I have received 1006 from process 6

Re: [OMPI users] random problems with a ring communication example

2014-03-15 Thread christophe petit
Ok, so from what you say, from the execution system's point of view the ring
communication completes in the right order (with rank 0 receiving from rank 6
last), but the stdout doesn't reflect what really happened, does it?

Is there a way to make stdout respect the expected order?

Thanks


2014-03-16 0:42 GMT+01:00 Ralph Castain :

> The explanation is simple: there is no rule about ordering of stdout. So
> even though your rank0 may receive its MPI message last, its stdout may
> well be printed before one generated on a remote node. Reason is that rank
> 0 may well be local to mpirun, and thus the stdout can be handled
> immediately. However, your rank6 may well be on a remote node, and that
> daemon has to forward the stdout to mpirun for printing.
>
> Like I said - no guarantee about ordering of stdout.
>
>
> On Mar 15, 2014, at 2:43 PM, christophe petit <
> christophe.peti...@gmail.com> wrote:
>
> Hello,
>
> I followed a simple MPI example to do a ring communication.
>
> Here's the figure that illustrates this example with 7 processes :
>
> http://i.imgur.com/Wrd6acv.png
>
> Here is the code:
>
>
> --
>  program ring
>
>  implicit none
>  include 'mpif.h'
>
>  integer, dimension( MPI_STATUS_SIZE ) :: status
>  integer, parameter:: tag=100
>  integer :: nb_procs, rank, value, &
> num_proc_previous,num_proc_next,code
>
>  call MPI_INIT (code)
>  call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
>  call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)
>
>  num_proc_next=mod(rank+1,nb_procs)
>  num_proc_previous=mod(nb_procs+rank-1,nb_procs)
>
>  if (rank == 0) then
> call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
>MPI_COMM_WORLD ,code)
> call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>MPI_COMM_WORLD ,status,code)
>  else
> call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>MPI_COMM_WORLD ,status,code)
> call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
>MPI_COMM_WORLD ,code)
>  end if
>  print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous
>
>  call MPI_FINALIZE (code)
> end program ring
>
>
> --
>
> At the execution, I expect to always have :
>
> Me, process 1, I have received 1000 from process 0
> Me, process 2, I have received 1001 from process 1
> Me, process 3, I have received 1002 from process 2
> Me, process 4, I have received 1003 from process 3
> Me, process 5, I have received 1004 from process 4
> Me, process 6, I have received 1005 from process 5
> Me, process 0, I have received 1006 from process 6
>
> But sometimes the reception of process 0 from process 6 is not the last one
> printed, like this:
>
> Me, process 1, I have received 1000 from process 0
> Me, process 2, I have received 1001 from process 1
> Me, process 3, I have received 1002 from process 2
> Me, process 4, I have received 1003 from process 3
> Me, process 5, I have received 1004 from process 4
> Me, process 0, I have received 1006 from process 6
> Me, process 6, I have received 1005 from process 5
>
> where the reception of process 0 from process 6 happens before the reception
> of process 6 from process 5,
>
> or like this result:
>
> Me, process 1, I have received 1000 from process 0
> Me, process 2, I have received 1001 from process 1
> Me, process 3, I have received 1002 from process 2
> Me, process 4, I have received 1003 from process 3
> Me, process 0, I have received 1006 from process 6
> Me, process 5, I have received 1004 from process 4
> Me, process 6, I have received 1005 from process 5
>
> where process 0 receives between the receptions of processes 4 and 5.
>
> How can we explain this strange result? I thought that MPI_SEND and MPI_RECV
> were blocking by default, but from this result they seem not to be.
>
> I tested this example on Debian 7.0 with the open-mpi package.

Re: [OMPI users] random problems with a ring communication example

2014-03-15 Thread Ralph Castain
The explanation is simple: there is no rule about ordering of stdout. So even 
though your rank0 may receive its MPI message last, its stdout may well be 
printed before one generated on a remote node. Reason is that rank 0 may well 
be local to mpirun, and thus the stdout can be handled immediately. However, 
your rank6 may well be on a remote node, and that daemon has to forward the 
stdout to mpirun for printing.

Like I said - no guarantee about ordering of stdout.
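
If you need the printed output in a guaranteed order, the usual approach is to
route all of the printing through a single rank - for example, gather each
rank's received value onto rank 0 and print from there. A rough, untested
sketch that reuses the variables from your program and adds i, j and all_values:

 ! add to the declarations:
 integer :: i, j
 integer, dimension(:), allocatable :: all_values

 ! after the ring send/recv, instead of each rank printing itself:
 allocate(all_values(nb_procs))   ! only actually used on rank 0
 call MPI_GATHER (value,1, MPI_INTEGER ,all_values,1, MPI_INTEGER , &
0, MPI_COMM_WORLD ,code)

 if (rank == 0) then
    do i = 1, nb_procs
       j = mod(i,nb_procs)   ! print in ring order: 1, 2, ..., nb_procs-1, 0
       print *,'Me, process ',j,', I have received ',all_values(j+1), &
      ' from process ',mod(nb_procs+j-1,nb_procs)
    end do
 end if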


On Mar 15, 2014, at 2:43 PM, christophe petit  
wrote:

> Hello,
> 
> I followed a simple MPI example to do a ring communication.
> 
> Here's the figure that illustrates this example with 7 processes :
> 
> http://i.imgur.com/Wrd6acv.png
> 
> Here is the code:
> 
> --
>  program ring
> 
>  implicit none
>  include 'mpif.h'
> 
>  integer, dimension( MPI_STATUS_SIZE ) :: status
>  integer, parameter:: tag=100
>  integer :: nb_procs, rank, value, &
> num_proc_previous,num_proc_next,code
> 
>  call MPI_INIT (code)
>  call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
>  call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)
>  
>  num_proc_next=mod(rank+1,nb_procs) 
>  num_proc_previous=mod(nb_procs+rank-1,nb_procs)
>  
>  if (rank == 0) then
> call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
>MPI_COMM_WORLD ,code)
> call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>MPI_COMM_WORLD ,status,code)
>  else
> call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
>MPI_COMM_WORLD ,status,code)
> call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
>MPI_COMM_WORLD ,code)
>  end if
>  print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous
>  
>  call MPI_FINALIZE (code)
> end program ring
> 
> --
> 
> At the execution, I expect to always have :
> 
> Me, process 1, I have received 1000 from process 0
> Me, process 2, I have received 1001 from process 1
> Me, process 3, I have received 1002 from process 2
> Me, process 4, I have received 1003 from process 3
> Me, process 5, I have received 1004 from process 4
> Me, process 6, I have received 1005 from process 5
> Me, process 0, I have received 1006 from process 6
> 
> But sometimes the reception of process 0 from process 6 is not the last one 
> printed, like this:
> 
> Me, process 1, I have received 1000 from process 0
> Me, process 2, I have received 1001 from process 1
> Me, process 3, I have received 1002 from process 2
> Me, process 4, I have received 1003 from process 3
> Me, process 5, I have received 1004 from process 4
> Me, process 0, I have received 1006 from process 6
> Me, process 6, I have received 1005 from process 5
> 
> where the reception of process 0 from process 6 happens before the reception of 
> process 6 from process 5,
> 
> or like this result:
> 
> Me, process 1, I have received 1000 from process 0
> Me, process 2, I have received 1001 from process 1
> Me, process 3, I have received 1002 from process 2
> Me, process 4, I have received 1003 from process 3
> Me, process 0, I have received 1006 from process 6
> Me, process 5, I have received 1004 from process 4
> Me, process 6, I have received 1005 from process 5
> 
> where process 0 receives between the receptions of processes 4 and 5.
> 
> How can we explain this strange result? I thought that MPI_SEND and MPI_RECV 
> were blocking by default, but from this result they seem not to be.
> 
> I tested this example on Debian 7.0 with the open-mpi package.
> 
> Thanks for your help
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



[OMPI users] random problems with a ring communication example

2014-03-15 Thread christophe petit
Hello,

I followed a simple MPI example to do a ring communication.

Here's the figure that illustrates this example with 7 processes :

http://i.imgur.com/Wrd6acv.png

Here is the code:

--
 program ring

 implicit none
 include 'mpif.h'

 integer, dimension( MPI_STATUS_SIZE ) :: status
 integer, parameter:: tag=100
 integer :: nb_procs, rank, value, &
num_proc_previous,num_proc_next,code

 call MPI_INIT (code)
 call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
 call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)

 num_proc_next=mod(rank+1,nb_procs)
 num_proc_previous=mod(nb_procs+rank-1,nb_procs)

 if (rank == 0) then
call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
   MPI_COMM_WORLD ,code)
call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
   MPI_COMM_WORLD ,status,code)
 else
call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
   MPI_COMM_WORLD ,status,code)
call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
   MPI_COMM_WORLD ,code)
 end if
 print *,'Me, process ',rank,', I have received ',value,' from process ',num_proc_previous

 call MPI_FINALIZE (code)
end program ring

--

At the execution, I expect to always have :

 Me, process 1, I have received 1000 from process 0
 Me, process 2, I have received 1001 from process 1
 Me, process 3, I have received 1002 from process 2
 Me, process 4, I have received 1003 from process 3
 Me, process 5, I have received 1004 from process 4
 Me, process 6, I have received 1005 from process 5
 Me, process 0, I have received 1006 from process 6

But sometimes the reception of process 0 from process 6 is not the last one
printed, like this:

 Me, process 1, I have received 1000 from process 0
 Me, process 2, I have received 1001 from process 1
 Me, process 3, I have received 1002 from process 2
 Me, process 4, I have received 1003 from process 3
 Me, process 5, I have received 1004 from process 4
 Me, process 0, I have received 1006 from process 6
 Me, process 6, I have received 1005 from process 5

where the reception of process 0 from process 6 happens before the reception of
process 6 from process 5,

or like this result:

 Me, process 1, I have received 1000 from process 0
 Me, process 2, I have received 1001 from process 1
 Me, process 3, I have received 1002 from process 2
 Me, process 4, I have received 1003 from process 3
 Me, process 0, I have received 1006 from process 6
 Me, process 5, I have received 1004 from process 4
 Me, process 6, I have received 1005 from process 5

where process 0 receives between the receptions of processes 4 and 5.

How can we explain this strange result? I thought that MPI_SEND and MPI_RECV
were blocking by default, but from this result they seem not to be.

I tested this example on Debian 7.0 with the open-mpi package.

Thanks for your help