I think the temporary array returned by np.ascontiguousarray is being garbage-collected (and its memory reused) by Python before the data is actually transferred by the Isend. The input buffer of a nonblocking send must remain valid and unmodified for the entire duration of the Isend+Wait window, so you need to keep a reference to each contiguous copy until Waitall returns.
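A minimal numpy-only sketch of the problem (the 4x4 matrix and the column slice here are hypothetical stand-ins for the Ahv blocks): ascontiguousarray returns a brand-new buffer whenever its input is non-contiguous, and that buffer must stay referenced while the send is in flight.

```python
import numpy as np

# Stand-in for one Ahv[i][j//k][j%k] block: a column slice of a matrix
# is not C-contiguous, so ascontiguousarray must make a copy.
A = np.arange(16).reshape(4, 4)
block = A[:, 0:2]
buf = np.ascontiguousarray(block)

assert not block.flags['C_CONTIGUOUS']   # the slice itself is strided
assert buf.flags['C_CONTIGUOUS']         # the copy is contiguous
assert buf.base is None                  # buf owns its memory: it is a new object

# If this copy is created inline in the Isend call and no reference is
# kept, Python is free to reclaim it before the send completes.  One way
# to fix the sender loop (mpi4py sketch, names as in the quoted code):
#
#     bufs = [np.ascontiguousarray(Ahv[i][j//k][j%k])
#             for i in range(J) for j in range(N)]
#     reqA = [comm.Isend([bufs[i*N + j], MPI.INT], dest=j+1, tag=15)
#             for i in range(J) for j in range(N)]
#     MPI.Request.Waitall(reqA)   # bufs stays alive until here
```

Holding the copies in a list (bufs) keeps them alive until Waitall, which is exactly the lifetime the MPI standard requires of a nonblocking send buffer.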


On Fri, Feb 1, 2019, 12:27 Konstantinos Konstantinidis <kostas1...@gmail.com> wrote:

> Hi, consider a setup of one MPI sender process unicasting J messages to
> each of N MPI receiver processes, i.e., the total number of transmissions
> is J*N. Each transmission is a block of a matrix, which has been split both
> horizontally and vertically. Each block has been stored as an element of a
> 2D list, i.e. a list of J lists, each of which has N elements. I won't go
> into the details of the Python code (please see attached) but I am pasting
> the problematic part here:
> On the sender's side:
>
>     reqA = [None] * J * N
>     for i in range(J):
>         for j in range(N):
>             reqA[i*N+j] = comm.Isend([np.ascontiguousarray(Ahv[i][j//k][j%k]),
>                                       MPI.INT], dest=j+1, tag=15)
>     MPI.Request.Waitall(reqA)
> On the receiver's side:
>
>     A = []
>     rA = [None] * J
>     for i in range(J):
>         A.append(np.empty_like(np.matrix([[0]*(n//k) for j in range(m//q)])))
>         rA[i] = comm.Irecv(A[i], source=0, tag=15)
>     MPI.Request.Waitall(rA)
>
>     # test
>     for i in range(J):
>         print "For job %d, worker %d received splits of A, b" % (i, comm.Get_rank()-1)
>         print(A[i])
> There are some other transmissions/requests in between, which can be found
> in the attached file; I do not think they interfere with these, though.
> When I print all of the received blocks, they look like memory garbage,
> and none of them matches the blocks that were sent. I am using
> Python 2.7.14 and Open MPI 2.1.2 (I have also attached the output of
> ompi_info).
> Thanks,
> Kostas Konstantinidis
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users