Many thanks!

Note that it's a holiday week here in the US -- I'm only on for a short time 
here this morning; I'll likely disappear again shortly until next week.  :-)



On Nov 27, 2014, at 8:12 AM, Nick Papior Andersen <nickpap...@gmail.com> wrote:

> Sure, I will make the changes and commit them to make the constants OMPI-specific.
> 
> I will forward my problems to the devel list.
> 
> I will keep you posted. :)
> 
> 2014-11-27 13:58 GMT+01:00 Jeff Squyres (jsquyres) <jsquy...@cisco.com>:
> On Nov 26, 2014, at 2:08 PM, Nick Papior Andersen <nickpap...@gmail.com> 
> wrote:
> 
> > Here is my commit-msg:
> > "
> > We can now split communicators based on hwloc full capabilities up to BOARD.
> > I.e.:
> > HWTHREAD,CORE,L1CACHE,L2CACHE,L3CACHE,SOCKET,NUMA,NODE,BOARD
> > where NODE is the same as SHARED.
> > "
> >
> > Maybe what I did could be useful somehow?
> > I mean, to achieve the effect one could do:
> > MPI_Comm_split_type(comm, MPI_COMM_TYPE_NODE, ...)
> > and then create a new group from the ranks not belonging to this group.
> > This can even be fine-tuned further if one wishes to create a group of 
> > "master" ranks, one per node, as in the sketch below.
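> > 
> > A minimal sketch of that (untested; MPI_COMM_TYPE_NODE is the constant 
> > from my branch, with standard MPI one would use MPI_COMM_TYPE_SHARED 
> > for the first split):
> > 
> >   MPI_Comm nodecomm, mastercomm;
> >   int noderank;
> > 
> >   /* One communicator per node. */
> >   MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_NODE,
> >                       0, MPI_INFO_NULL, &nodecomm);
> >   MPI_Comm_rank(nodecomm, &noderank);
> > 
> >   /* Rank 0 on each node joins the "masters" communicator; all other 
> >      ranks pass MPI_UNDEFINED and get MPI_COMM_NULL back. */
> >   MPI_Comm_split(MPI_COMM_WORLD,
> >                  noderank == 0 ? 0 : MPI_UNDEFINED,
> >                  0, &mastercomm);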
> 
> I will say that there was a lot of debate about this kind of functionality at 
> the MPI Forum.  The problem is that although x86-based clusters are quite 
> common these days, they are not the only kind of machines used for HPC out 
> there, and the exact definitions of these kinds of concepts (hwthread, core, 
> lXcache, socket, NUMA, etc.) can vary between architectures.
> 
> Hence, the compromise was to just have MPI_COMM_TYPE_SHARED, where the 
> resulting communicator contains processes that share a single memory space.
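> 
> For reference, a minimal sketch of the standard call:
> 
>   MPI_Comm shmcomm;
> 
>   /* All processes that share a single memory space end up together. */
>   MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
>                       0, MPI_INFO_NULL, &shmcomm);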
> 
> That being said, since OMPI uses hwloc for all of its supported 
> architectures, it might be worthwhile to have an OMPI extension for 
> OMPI_COMM_TYPE_<foo> for the various different types.  One could/should only 
> use these new constants if the OPEN_MPI macro is defined and is 1.
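> 
> E.g., something like this (just a sketch; OMPI_COMM_TYPE_SOCKET stands in 
> for whatever the extension constants end up being named):
> 
>   MPI_Comm newcomm;
> 
>   #if defined(OPEN_MPI) && OPEN_MPI
>     /* Open MPI extension: split by socket. */
>     MPI_Comm_split_type(MPI_COMM_WORLD, OMPI_COMM_TYPE_SOCKET,
>                         0, MPI_INFO_NULL, &newcomm);
>   #else
>     /* Portable fallback: split by shared memory. */
>     MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
>                         0, MPI_INFO_NULL, &newcomm);
>   #endif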
> 
> And *that* being said, one of the goals of MPI is portability, so anyone 
> using these constants would inherently be writing non-portable code.  :-)
> 
> > I have not been able to compile it due to my autogen.pl giving me some 
> > errors.
> 
> What kind of errors?  (we might want to move this discussion to the devel 
> list...)
> 
> >  However, I think it should compile just fine.
> >
> > Do you think it could be useful?
> >
> > If interested, you can find my single-commit branch at: 
> > https://github.com/zerothi/ompi
> 
> This looks interesting.
> 
> Can you file a pull request against the ompi master, and send something to 
> the devel list about this functionality?
> 
> I'd still strongly suggest renaming these constants with an "OMPI_" prefix 
> to differentiate them from standard MPI constants / functionality.
> 
> 
> 
> 
> 
> -- 
> Kind regards Nick


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/
