Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
Dear all, thanks a lot, really, thanks a lot. Diego On 9 January 2015 at 19:56, Jeff Squyres (jsquyres) wrote: > On Jan 9, 2015, at 1:54 PM, Diego Avesani wrote: > > > What does "YMMV" mean? > >

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Jeff Squyres (jsquyres)
On Jan 9, 2015, at 1:54 PM, Diego Avesani wrote: > What does "YMMV" mean? http://netforbeginners.about.com/od/xyz/f/What-Is-YMMV.htm :-) -- Jeff Squyres jsquy...@cisco.com For corporate legal information go to:

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
What does "YMMV" mean? On 9 January 2015 at 19:44, Jeff Squyres (jsquyres) wrote: > YMMV Diego

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Jeff Squyres (jsquyres)
On Jan 9, 2015, at 12:39 PM, George Bosilca wrote: > I totally agree with Dave here. Moreover, based on the logic exposed by Jeff, > there is no right solution because if one chooses to first wait on the receive > requests, this also leads to a deadlock, as the send requests

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
Dear Jeff, Dear George, Dear Dave, Dear all, so, is it correct to use *MPI_Waitall*? Is my program ok now? Do you see other problems? Thanks again Diego On 9 January 2015 at 18:39, George Bosilca wrote: > I totally agree with Dave here. Moreover, based on the logic

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread George Bosilca
I totally agree with Dave here. Moreover, based on the logic exposed by Jeff, there is no right solution because if one chooses to first wait on the receive requests, this also leads to a deadlock, as the send requests might not be progressed. As a side note, posting the receive requests first
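(As a sketch of that side note: pre-posting the receives simply means issuing every MPI_IRECV before any MPI_ISEND, and completing everything afterwards. The fragment below is illustrative only; nCPU, the buffers, the tag, and the requests array are assumptions, not taken from Diego's attachment, although Ndata2recv does appear in his program.)

    ! Sketch: post all receives before any send, then one wait over everything.
    nreq = 0
    DO iCPU = 0, nCPU-1
       IF (Ndata2recv(iCPU) > 0) THEN
          nreq = nreq + 1
          CALL MPI_IRECV(recv_buf(1,iCPU), Ndata2recv(iCPU), MPI_DOUBLE_PRECISION, &
                         iCPU, 0, MPI_COMM_WORLD, requests(nreq), MPIdata%iErr)
       END IF
    END DO
    ! ... the MPI_ISEND loop appends its requests to the same array, followed by a
    ! single CALL MPI_WAITALL(nreq, requests, MPI_STATUSES_IGNORE, MPIdata%iErr).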

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Dave Goodell (dgoodell)
On Jan 9, 2015, at 7:46 AM, Jeff Squyres (jsquyres) wrote: > Yes, I know examples 3.8/3.9 are blocking examples. > > But it's morally the same as: > > MPI_WAITALL(send_requests...) > MPI_WAITALL(recv_requests...) > > Strictly speaking, that can deadlock, too. > > It

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Jeff Squyres (jsquyres)
Yes, I know examples 3.8/3.9 are blocking examples. But it's morally the same as: MPI_WAITALL(send_requests...) MPI_WAITALL(recv_requests...) Strictly speaking, that can deadlock, too. In reality, it has far less chance of deadlocking than examples 3.8 and 3.9 (because you're likely within
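(For clarity, the pattern Jeff is cautioning about looks roughly like the fragment below; send_request, recv_request and the two counters are assumed names based on the rest of the thread.)

    ! Discouraged: completing all sends before touching any receive request.
    ! Strictly speaking this can deadlock, because the sends may be unable to
    ! complete until the matching receives are progressed; in practice it often
    ! works because short messages tend to complete eagerly.
    CALL MPI_WAITALL(sendcount, send_request, MPI_STATUSES_IGNORE, MPIdata%iErr)
    CALL MPI_WAITALL(recvcount, recv_request, MPI_STATUSES_IGNORE, MPIdata%iErr)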

Re: [OMPI users] send and receive vectors + variable length

2015-01-09 Thread Diego Avesani
Dear George, Dear Jeff, Dear All, Thanks, thanks a lot. Here is the new version of the program. Now there is only one barrier. There are no more allocate/deallocate calls in the receive part. What do you think? Is it all right? Did I miss something, or do I need to improve something else? I have not complete

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread George Bosilca
I'm confused by this statement. The examples pointed to are handling blocking sends and receives, while this example is purely based on non-blocking communications. In this particular case I see no harm in waiting on the requests in any random order as long as all of them are posted before the

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear Jeff, Dear George, Dear all, Isn't send_request a vector? Are you suggesting to use CALL MPI_WAIT(REQUEST(:), MPI_STATUS_IGNORE, MPIdata%iErr)? I will try tomorrow morning, and also fix the allocate/deallocate in the sending and receiving parts. Probably I will have to think again about the program. I
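(For reference: MPI_WAIT completes a single request, while MPI_WAITALL takes the whole array of requests plus a count, which is what the thread is recommending here. A minimal sketch, with nreq and the combined REQUEST array as assumed names:)

    ! One request, one status (or MPI_STATUS_IGNORE):
    CALL MPI_WAIT(REQUEST(1), MPI_STATUS_IGNORE, MPIdata%iErr)

    ! A whole array of requests; note the plural MPI_STATUSES_IGNORE:
    CALL MPI_WAITALL(nreq, REQUEST(1:nreq), MPI_STATUSES_IGNORE, MPIdata%iErr)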

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Jeff Squyres (jsquyres)
Also, you are calling WAITALL on all your sends and then WAITALL on all your receives. This is also incorrect and may deadlock. WAITALL on *all* your pending requests (sends and receives -- put them all in a single array). Look at examples 3.8 and 3.9 in the MPI-3.0 document. On Jan 8,
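(A self-contained sketch of the single-WAITALL pattern described above: a two-rank exchange with different message lengths, with the receive and the send request collected in one array. All names are illustrative, not taken from Diego's attachment.)

    PROGRAM exchange
       USE mpi
       IMPLICIT NONE
       INTEGER :: rank, nprocs, other, iErr, nreq
       INTEGER :: requests(2)
       INTEGER :: nsend, nrecv
       DOUBLE PRECISION, ALLOCATABLE :: sendbuf(:), recvbuf(:)

       CALL MPI_INIT(iErr)
       CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, iErr)
       CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, iErr)
       IF (nprocs /= 2) CALL MPI_ABORT(MPI_COMM_WORLD, 1, iErr)
       other = 1 - rank

       nsend = 10 + rank                 ! variable-length payloads
       nrecv = 10 + other
       ALLOCATE(sendbuf(nsend), recvbuf(nrecv))
       sendbuf = DBLE(rank)

       ! Post the receive and the send, collecting BOTH requests in one array.
       nreq = 0
       nreq = nreq + 1
       CALL MPI_IRECV(recvbuf, nrecv, MPI_DOUBLE_PRECISION, other, 0, &
                      MPI_COMM_WORLD, requests(nreq), iErr)
       nreq = nreq + 1
       CALL MPI_ISEND(sendbuf, nsend, MPI_DOUBLE_PRECISION, other, 0, &
                      MPI_COMM_WORLD, requests(nreq), iErr)

       ! One MPI_WAITALL over sends and receives together; once it returns the
       ! received data is usable and no MPI_BARRIER is needed for correctness.
       CALL MPI_WAITALL(nreq, requests, MPI_STATUSES_IGNORE, iErr)

       DEALLOCATE(sendbuf, recvbuf)
       CALL MPI_FINALIZE(iErr)
    END PROGRAM exchange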

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread George Bosilca
Diego, Non-blocking communications only indicate that a communication will happen; they do not force it to happen. They will only complete on the corresponding MPI_Wait, which also marks the moment from which the data can be safely altered or accessed (in the case of MPI_Irecv). Thus
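(In code terms: neither buffer may be read or modified between posting the non-blocking call and completing it with a wait. Illustrative fragment; buffer, count, source/destination and request names are assumptions.)

    CALL MPI_IRECV(recvbuf, nrecv, MPI_DOUBLE_PRECISION, src, 0, MPI_COMM_WORLD, req(1), MPIdata%iErr)
    CALL MPI_ISEND(sendbuf, nsend, MPI_DOUBLE_PRECISION, dst, 0, MPI_COMM_WORLD, req(2), MPIdata%iErr)

    ! Not yet safe: recvbuf may still be incomplete and sendbuf is still owned by MPI.
    ! value = recvbuf(1)      ! wrong before the wait
    ! sendbuf(1) = 0.0d0      ! wrong before the wait

    CALL MPI_WAITALL(2, req, MPI_STATUSES_IGNORE, MPIdata%iErr)

    ! Only now is the received data valid and the send buffer free to be reused
    ! or deallocated.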

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear Tom, Dear Jeff, Dear all, Thanks again. For Tom: you are right, I fixed it. For Jeff: if I do not insert the CALL MPI_BARRIER(MPI_COMM_WORLD, MPIdata%iErr) at line 112, the program does not stop. Am I right? Here is the new version. Diego On 8 January 2015 at 21:12, Tom Rosmond

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Tom Rosmond
With array bounds checking your program returns an out-of-bounds error in the mpi_isend call at line 104. Looks like 'send_request' should be indexed with 'sendcount', not 'icount'. T. Rosmond On Thu, 2015-01-08 at 20:28 +0100, Diego Avesani wrote: > the attachment > > Diego > > > > On 8
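(Without the attachment the surrounding loop can only be guessed at, but the fix presumably looks something like the fragment below, with the request array indexed by the running send counter rather than the loop index; all names besides send_request, sendcount and Ndata2send are assumptions.)

    sendcount = 0
    DO iCPU = 0, nCPU-1
       IF (Ndata2send(iCPU) > 0) THEN
          sendcount = sendcount + 1
          ! Index with the running counter, not the loop index, so that
          ! send_request(1:sendcount) stays inside its allocated bounds.
          CALL MPI_ISEND(send_buf(1,iCPU), Ndata2send(iCPU), MPI_DOUBLE_PRECISION, &
                         iCPU, 0, MPI_COMM_WORLD, send_request(sendcount), MPIdata%iErr)
       END IF
    END DO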

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
the attachment Diego On 8 January 2015 at 19:44, Diego Avesani wrote: > Dear all, > I found the error. > There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU). > In the attachment there is the correct version of the program. > > Only one thing, could you check

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Jeff Squyres (jsquyres)
What do you need the barriers for? On Jan 8, 2015, at 1:44 PM, Diego Avesani wrote: > Dear all, > I found the error. > There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU). > In the attachment there is the correct version of the program. > > Only one thing, could

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Diego Avesani
Dear all, I found the error. There is a Ndata2send(iCPU) instead of Ndata2recv(iCPU). In the attachment there is the correct version of the program. Only one thing, could you check if the use of MPI_WAITALL and MPI_BARRIER is correct? Thanks again Diego On 8 January 2015 at 18:48, Diego