It really depends on what you are trying to achieve. If the question is
rhetorical: "can I write code that does parallel broadcasts on
independent groups of processes?" then the answer is yes, this is
certainly possible. If, however, you add a hint of practicality to your
question, "can I write an efficient parallel broadcast between independent
groups of processes?", then I'm afraid the answer will be a negative one.

Let's not look at how you can write the multiple bcast code, as the answer
on Stack Overflow is correct, but instead look at what resources these
collective operations are using. In general you can assume that nodes are
connected by a network able to move data at a rate B in both directions
(full duplex). Assuming the implementation of the bcast algorithm is not
entirely moronic, the bcast can saturate the network with a single process
per node. Now, if you have multiple processes per node (P of them), then
either you schedule them sequentially (so that each one has the full
bandwidth B) or you let them progress in parallel, in which case each
participating process can claim only a lower bandwidth B/P (as it is shared
between all processes on the node).

So even if you are able to expose enough parallelism, physical resources
will impose the real hard limit.

That being said, I have the impression you are trying to implement an
MPI_Allgather(v) using a series of MPI_Bcast calls. Is that true?

  George.

PS: A few other constraints: the cost of creating the [q^(k-1)]*(q-1)
communicators might be prohibitive, and the MPI library might support only
a limited number of communicators.


On Tue, Oct 31, 2017 at 11:42 PM, Konstantinos Konstantinidis <
kostas1...@gmail.com> wrote:

> Assume that we have K=q*k nodes (slaves) where q,k are positive integers
> >= 2.
>
> Based on the scheme that I am currently using I create [q^(k-1)]*(q-1)
> groups (along with their communicators). Each group consists of k nodes and
> within each group exactly k broadcasts take place (each node broadcasts
> something to the rest of them). So in total [q^(k-1)]*(q-1)*k MPI
> broadcasts take place. Let me skip the details of the above scheme.
>
> Now, theoretically, I figured out that there are q-1 groups that can
> communicate in parallel at the same time, i.e. groups that have no common
> nodes, and I would like to utilize that to speed up the shuffling. I have
> seen here
> https://stackoverflow.com/questions/11372012/mpi-several-broadcast-at-the-same-time
> that this is possible in MPI.
>
> In my case it's more complicated since q,k are parameters of the problem
> and change between different experiments. If I follow the 2nd method that
> is proposed there and assume that we have only 3 groups within which some
> communication takes place, one can simply do:
>
> if my rank belongs to group 1 {
>     comm1.Bcast(..., ..., ..., rootId);
> } else if my rank belongs to group 2 {
>     comm2.Bcast(..., ..., ..., rootId);
> } else if my rank belongs to group 3 {
>     comm3.Bcast(..., ..., ..., rootId);
> }
>
> where comm1, comm2, comm3 are the corresponding sub-communicators that
> contain only the members of each group.
>
> But how can I generalize the above idea to an arbitrary number of groups,
> or perhaps do something else?
>
> The code is in C++ and the MPI installed is described in the attached file.
>
> Regards,
> Kostas
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
