I replied to the patch issue in another post.
Regarding the todo list:

On 15/07/2017 00:01, MM wrote:
[..]
Dream wishes for Boost.MPI are extensions to support MPI 3.1, C++11/14
where applicable, clearer documentation on optimizing MPI datatypes, and
usage of the serialization lib.

rds,
- As for the missing features (including MPI 3.x):
- Async collectives: would be great and probably not too difficult (see the first sketch after this list).
- Remote memory access: would be trickier, and I don't know whether implementations are catching up. I suspect that with more and more people doing hybrid parallelism on one hand, and async collectives on the other, their usage is quite limited. Any input?
- MPI-2 IO: still not sure what a good C++ API would be; these are like (possibly async) global operations (gather?) except the data is not received but stored (see the second sketch after this list).
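
A minimal sketch of the async collective direction, built directly on
MPI-3's standard MPI_Iallreduce. The wrapper names (icollective_request,
all_reduce_async) are invented here for illustration, not an existing
Boost.MPI API:

    #include <mpi.h>

    struct icollective_request {       // hypothetical handle, modelled
        MPI_Request req;               // on boost::mpi::request
        void wait() { MPI_Wait(&req, MPI_STATUS_IGNORE); }
    };

    icollective_request all_reduce_async(MPI_Comm comm, const int* in,
                                         int* out, int n) {
        icollective_request r;
        MPI_Iallreduce(in, out, n, MPI_INT, MPI_SUM, comm, &r.req);
        return r;
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int local = 1, total = 0;
        icollective_request r =
            all_reduce_async(MPI_COMM_WORLD, &local, &total, 1);
        // ... overlap independent computation here ...
        r.wait();                      // total now holds the global sum
        MPI_Finalize();
        return 0;
    }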
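
And a rough sketch of the "gather, but stored" reading of MPI IO: every
rank writes its contribution at a rank-dependent offset in one shared
file, as a collective operation. Plain MPI IO calls; the file name and
layout are arbitrary:

    #include <mpi.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = rank;              // one int per rank
        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.bin",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        // Collective write: like a gather whose target is the file.
        MPI_File_write_at_all(fh, (MPI_Offset)(rank * sizeof(int)),
                              &value, 1, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }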

More general:
- Documentation: yes, that's missing. I'll admit that Boost having decided to use yet another in-house tool does not help; I've never been able to generate the doc locally, which makes things tricky.
- C++11/14: I'd love to go that way. The thing is that HPC is quite conservative: people need to build on platforms they don't control, where the default settings are set to... default. Intel's C++ compiler, for one, does not enable C++11 by default. Any thoughts?
- Overall interface: I guess I'd like to obsolete some stuff, especially wrt global operations. The dispatch between the <global op> and <global op>v versions does not make that much sense with serialization, where the data footprint is not always uniform even when the number of slots is (see the gather example below). And I'm not sure the skeleton approach is the most natural one. I've started thinking about this.
- Archive: my priority is to move our binary archive (not sure the portable one is of any use; I never saw an MPI implementation claiming to support heterogeneous clusters anyway) over to Boost.Serialization. They certainly have a use for a fast unversioned binary archive, and that would save us from looking too deep into their implementation. It would also protect us from the kind of issue we just had. (A starting point is sketched below.)
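
To make the <global op>v point concrete: with serialization, the plain
gather we already ship accepts per-rank payloads of different sizes,
which is precisely the case gatherv exists for in the C API. Stock
Boost.MPI calls, nothing new here:

    #include <boost/mpi.hpp>
    #include <boost/serialization/string.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    namespace mpi = boost::mpi;

    int main(int argc, char** argv) {
        mpi::environment env(argc, argv);
        mpi::communicator world;

        // Each rank contributes a string of a different length.
        std::string mine(world.rank() + 1, 'x');

        std::vector<std::string> all;
        mpi::gather(world, mine, all, 0);   // no "gatherv" needed

        if (world.rank() == 0)
            for (std::size_t i = 0; i < all.size(); ++i)
                std::cout << i << ": " << all[i] << '\n';
        return 0;
    }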
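
As a starting point for the archive question: stock Boost.Serialization
can already drop the archive preamble via the no_header flag. Whether
that alone gets us the speed of a dedicated MPI archive is an open
question; this is just where I would start looking:

    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/archive/binary_oarchive.hpp>
    #include <cassert>
    #include <sstream>

    int main() {
        std::stringstream ss;
        {
            boost::archive::binary_oarchive oa(ss,
                boost::archive::no_header);    // no version preamble
            int x = 42;
            oa << x;
        }
        {
            boost::archive::binary_iarchive ia(ss,
                boost::archive::no_header);
            int y = 0;
            ia >> y;
            assert(y == 42);
        }
        return 0;
    }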

Now, there is the time issue :-)

Alain
