I'm confused by this statement. The examples pointed to handle blocking
sends and receives, while this example is purely based on non-blocking
communications. In this particular case I see no harm in waiting on the
requests in any random order, as long as all of them are posted before
the first wait.
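To make that concrete, here is a minimal sketch (a toy ring exchange with
made-up names like sendbuf/recvbuf, nothing taken from Diego's attachment):
both requests are posted before any wait, so completing the send before the
receive, or the other way around, cannot deadlock.

program post_all_then_wait
  use mpi
  implicit none
  integer, parameter :: n = 4
  integer :: sendbuf(n), recvbuf(n)
  integer :: send_req, recv_req
  integer :: rank, nprocs, left, right, ierr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  right = mod(rank + 1, nprocs)           ! send to the right neighbour ...
  left  = mod(rank - 1 + nprocs, nprocs)  ! ... receive from the left one
  sendbuf = rank

  ! post *everything* first
  call MPI_Irecv(recvbuf, n, MPI_INTEGER, left,  0, MPI_COMM_WORLD, recv_req, ierr)
  call MPI_Isend(sendbuf, n, MPI_INTEGER, right, 0, MPI_COMM_WORLD, send_req, ierr)

  ! ... then wait in whatever order: both requests are already in flight,
  ! so waiting on the send before the receive is harmless here
  call MPI_Wait(send_req, MPI_STATUS_IGNORE, ierr)
  call MPI_Wait(recv_req, MPI_STATUS_IGNORE, ierr)

  call MPI_Finalize(ierr)
end program post_all_then_wait

The same argument covers two MPI_Waitall calls (one over the sends, one over
the receives), as long as every request was posted before the first of them.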

  George.


On Thu, Jan 8, 2015 at 5:24 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:

> Also, you are calling WAITALL on all your sends and then WAITALL on all
> your receives.  This is also incorrect and may deadlock.
>
> Instead, call WAITALL on *all* your pending requests (sends and receives --
> put them all in a single array).
>
> Look at examples 3.8 and 3.9 in the MPI-3.0 document.
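A minimal sketch of that shape, in the spirit of those examples (the ranks,
counts, and buffer names below are placeholders, not Diego's variables): every
Irecv and Isend handle goes into one request array, and a single MPI_Waitall
completes them all.

program single_waitall
  use mpi
  implicit none
  integer :: rank, nprocs, ierr, p, ireq
  integer, allocatable :: sendbuf(:), recvbuf(:), requests(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(sendbuf(nprocs), recvbuf(nprocs), requests(2*(nprocs-1)))
  sendbuf = rank
  ireq = 0

  ! one receive and one send per peer; every handle lands in the
  ! *same* request array
  do p = 0, nprocs - 1
     if (p == rank) cycle
     ireq = ireq + 1
     call MPI_Irecv(recvbuf(p+1), 1, MPI_INTEGER, p, 0, &
                    MPI_COMM_WORLD, requests(ireq), ierr)
     ireq = ireq + 1
     call MPI_Isend(sendbuf(p+1), 1, MPI_INTEGER, p, 0, &
                    MPI_COMM_WORLD, requests(ireq), ierr)
  end do

  ! one completion call over sends and receives together
  call MPI_Waitall(ireq, requests, MPI_STATUSES_IGNORE, ierr)

  deallocate(sendbuf, recvbuf, requests)
  call MPI_Finalize(ierr)
end program single_waitall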
>
>
>
> On Jan 8, 2015, at 5:15 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
>
> > Diego,
> >
> > Non-blocking communications only indicate that a communication will
> > happen; they do not force it to happen. They only complete in the
> > corresponding MPI_Wait, which also marks the moment from which the data
> > can safely be altered or accessed (in the case of MPI_Irecv). Thus,
> > deallocating your buffers right after the MPI_Isend and MPI_Irecv,
> > before the corresponding wait, is incorrect. Printing the supposedly
> > received values (line 127) is also incorrect, as there is no reason for
> > the non-blocking receive to have completed at that moment.
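Put differently, here is a rough sketch of where the wait has to sit relative
to any access or DEALLOCATE (the buffer names and the ring pattern are
invented for illustration, not taken from Diego's program):

program wait_before_touching
  use mpi
  implicit none
  integer, parameter :: n = 10
  integer :: rank, nprocs, ierr, right, left
  integer :: reqs(2)
  integer, allocatable :: sendbuf(:), recvbuf(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  allocate(sendbuf(n), recvbuf(n))
  sendbuf = rank
  right = mod(rank + 1, nprocs)
  left  = mod(rank - 1 + nprocs, nprocs)

  call MPI_Isend(sendbuf, n, MPI_INTEGER, right, 0, MPI_COMM_WORLD, reqs(1), ierr)
  call MPI_Irecv(recvbuf, n, MPI_INTEGER, left,  0, MPI_COMM_WORLD, reqs(2), ierr)

  ! NOT safe here: recvbuf may still be empty and sendbuf is still in use
  ! by MPI, so neither printing recvbuf nor a DEALLOCATE is allowed yet

  call MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE, ierr)

  ! only now may the buffers be read, reused, or deallocated
  print *, 'rank', rank, 'got', recvbuf(1), 'from rank', left
  deallocate(sendbuf, recvbuf)

  call MPI_Finalize(ierr)
end program wait_before_touching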
> >
> >   George.
> >
> >
> > On Thu, Jan 8, 2015 at 5:06 PM, Diego Avesani <diego.aves...@gmail.com>
> > wrote:
> > Dear Tom, Dear Jeff, Dear all,
> > Thanks again
> >
> > for Tom:
> > you are right, I fixed it.
> >
> > for Jeff:
> > if I do not insert the CALL MPI_BARRIER(MPI_COMM_WORLD, MPIdata%iErr)
> > at line 112, the program does not stop.
> >
> > Am I right?
> > Here is the new version.
> >
> >
> >
> > Diego
> >
> >
> > On 8 January 2015 at 21:12, Tom Rosmond <rosm...@reachone.com> wrote:
> > With array bounds checking your program returns an out-of-bounds error
> > in the mpi_isend call at line 104.  Looks like 'send_request' should be
> > indexed with 'sendcount', not 'icount'.
> >
> > T. Rosmond
> >
> >
> >
> > On Thu, 2015-01-08 at 20:28 +0100, Diego Avesani wrote:
> > > the attachment
> > >
> > > Diego
> > >
> > >
> > >
> > > On 8 January 2015 at 19:44, Diego Avesani <diego.aves...@gmail.com>
> > > wrote:
> > >         Dear all,
> > >         I found the error.
> > >         There was a Ndata2send(iCPU) where it should have been Ndata2recv(iCPU).
> > >         In the attachment there is the correct version of the program.
> > >
> > >
> > >         Only one thing: could you check whether the use of MPI_WAITALL
> > >         and MPI_BARRIER is correct?
> > >
> > >
> > >         Thanks again
> > >
> > >
> > >
> > >
> > >
> > >         Diego
> > >
> > >
> > >
> > >         On 8 January 2015 at 18:48, Diego Avesani
> > >         <diego.aves...@gmail.com> wrote:
> > >                 Dear all,
> > >                 thanks a lot, I am learning a lot.
> > >
> > >
> > >
> > >                 I have written a simple program that sends vectors of
> > >                 integers from one CPU to another.
> > >
> > >
> > >                 The program is written (at least for now) for 4 CPUs.
> > >
> > >
> > >                 The program is quite simple:
> > >                 Each CPU knows how much data it has to send to the other
> > >                 CPUs. This info is then sent to the other CPUs. In
> > >                 this way each CPU knows how much data it has to receive
> > >                 from the other CPUs.
> > >
> > >
> > >                 This part of the program works.
> > >
> > >
> > >                 The problem is in the second part.
> > >
> > >
> > >                 In the second part, each processor sends a vector of
> > >                 integers to the other processors. The size is given, and
> > >                 each CPU knows the size of the incoming vector from
> > >                 the first part of the program.
> > >
> > >
> > >                 In this second part the program fails and I do not
> > >                 know why.
> > >
> > >
> > >                 In the attachment you can find the program. Could you
> > >                 please help me? Probably I didn't properly understand
> > >                 the ISEND and IRECV subroutines.
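Since the attachment itself is not reproduced here, the following is only a
rough sketch of the two-phase pattern described above; every name (n2send,
n2recv, the buffers) is invented, and MPI_Alltoall merely stands in for
whatever point-to-point exchange the first phase of the real program uses.
The data phase posts one Irecv and one Isend per peer, advances the request
index with a running counter, and finishes with a single MPI_Waitall.

program two_phase_exchange
  use mpi
  implicit none
  integer :: rank, nprocs, ierr, p, ireq, maxn
  integer, allocatable :: n2send(:), n2recv(:)         ! counts per peer
  integer, allocatable :: sendbuf(:,:), recvbuf(:,:)
  integer, allocatable :: requests(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! phase 1: decide how much goes to each peer and exchange the counts
  allocate(n2send(nprocs), n2recv(nprocs))
  do p = 1, nprocs
     n2send(p) = mod(rank + p, 3) + 1                  ! arbitrary demo sizes
  end do
  call MPI_Alltoall(n2send, 1, MPI_INTEGER, n2recv, 1, MPI_INTEGER, &
                    MPI_COMM_WORLD, ierr)

  ! phase 2: one Irecv and one Isend per peer, all handles in one array,
  ! the request index advancing with the number of calls actually posted
  maxn = max(maxval(n2send), maxval(n2recv))
  allocate(sendbuf(maxn, nprocs), recvbuf(maxn, nprocs), requests(2*nprocs))
  sendbuf = rank
  ireq = 0
  do p = 1, nprocs
     if (p - 1 == rank) cycle
     ireq = ireq + 1
     call MPI_Irecv(recvbuf(1, p), n2recv(p), MPI_INTEGER, p - 1, 0, &
                    MPI_COMM_WORLD, requests(ireq), ierr)
     ireq = ireq + 1
     call MPI_Isend(sendbuf(1, p), n2send(p), MPI_INTEGER, p - 1, 0, &
                    MPI_COMM_WORLD, requests(ireq), ierr)
  end do
  call MPI_Waitall(ireq, requests, MPI_STATUSES_IGNORE, ierr)

  ! recvbuf is only valid, and the buffers only safe to free, after the wait
  deallocate(n2send, n2recv, sendbuf, recvbuf, requests)
  call MPI_Finalize(ierr)
end program two_phase_exchange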
> > >
> > >
> > >                 Thanks again
> > >
> > >
> > >
> > >                 Diego
> > >
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> >
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
