Any update on this? Can it be used in the RMA part?
George.
On Wed, Apr 23, 2014 at 1:58 AM, Gilles Gouaillardet wrote:
my bad :-(
this has just been fixed
Gilles
On 2014/04/23 14:55, Nathan Hjelm wrote:
The ompi_datatype_flatten.c file appears to be missing. Let me know once
it is committed and I will take a look. I will see if I can write the
RMA code using it over the next week or so.
-Nathan
On Wed, Apr 23, 2014 at 02:43:12PM +0900, Gilles Gouaillardet wrote:
George,
i am sorry, but i cannot see how the flatten datatype can be helpful here :-(
in this example, the master must broadcast a long vector, and this datatype
is contiguous, so the flattened datatype *is* the type provided by the MPI
application.
how would pipelining happen in this case (e.g. who has to
Nathan,
i uploaded this part to GitHub:
https://github.com/ggouaillardet/ompi-svn-mirror/tree/flatten-datatype
you really need to check the last commit:
https://github.com/ggouaillardet/ompi-svn-mirror/commit/a8d014c6f144fa5732bdd25f8b6b05b07ea8
please consider this as experimental and
21, 2014 9:04 PM
To: Open MPI Developers
Subject: Re: [OMPI devel] coll/tuned MPI_Bcast can crash or silently fail when using distinct datatypes across tasks
Indeed there are many potential solutions, but all require too much
intervention in the code to be generic enough. As we discussed
privately mid last year, the "flatten datatype" approach seems to me
to be the most profitable. It is simple to implement and it is also
generic; a simple change will
Dear OpenMPI developers,
i just created #4531 in order to track this issue:
https://svn.open-mpi.org/trac/ompi/ticket/4531
Basically, the coll/tuned implementation of MPI_Bcast does not work when
two tasks use datatypes of different sizes.
for example, if the root sends two large vectors of