For further information, the test also fails with Open MPI 1.8.4rc1.

2014-11-20 16:51 GMT+01:00 Ghislain Viguier <ghislain.vigu...@gmail.com>:

> Dear support,
>
> I'm encountering an issue with the MPI_Neighbor_alltoallw routine in
> Open MPI 1.8.3.
> I have enclosed a test case along with information about my workstation.
>
> In this test, I define a weighted topology for 5 processes, where the
> weights represent the number of buffers to send/receive:
>     rank
>       0 : | x |
>       1 : | 2 | x |
>       2 : | 1 | 1 | x |
>       3 : | 3 | 2 | 3 | x |
>       4 : | 5 | 2 | 2 | 2 | x |
>
> In this topology, rank 1 will send/receive:
>    2 buffers to/from rank 0,
>    1 buffer to/from rank 2,
>    2 buffers to/from rank 3,
>    2 buffers to/from rank 4.
>
> The send buffers are defined with MPI_Type_create_hindexed_block. This
> allows the same buffer to be reused for several communications without
> duplicating it (read only).
> Here, rank 1 will have 2 send buffers (the max of 2, 1, 2, 2).
> The receive buffer is a contiguous buffer defined with
> MPI_Type_contiguous.
> Here, the receive buffer of rank 1 has size 7 (2+1+2+2).
>
> This test case is successful for 2 or 3 processes. For 4 processes, the
> test fails about 1 time in 3. For 5 processes, the test fails every time.
>
> The error code is : *** MPI_ERR_IN_STATUS: error code in status
>
> I don't understand what I am doing wrong.
>
> Could you please have a look at it?
>
> Thank you very much.
>
> Best regards,
> Ghislain Viguier
>
> --
> Ghislain Viguier
> Tél. 06 31 95 03 17
>



