Adrian Croucher <a.crouc...@auckland.ac.nz> writes:

> hi
>
> This question is not strictly PETSc-related but I figure you guys are
> probably most likely to know the answer.
>
> I need to do some simple computations involving only a small subset of
> the parallel processes in my simulation.
>
> From reading the MPI documentation and a few tutorials it looks like
> mpi_comm_create_group() is probably the thing to use for creating the
> MPI communicator in this case. Since there are only a few processes in
> the group, from what I can gather this function should be more efficient
> than mpi_comm_create(), as it's only collective on the group, not the
> whole parent communicator (MPI_COMM_WORLD).
>
> 1) Is it correct that mpi_comm_create_group() should be a better option
> in this case than mpi_comm_create()? I had a grep through the PETSc
> source code and there are some calls to mpi_comm_create() but none to
> mpi_comm_create_group(). But maybe the use case is different.
Sure, but this is probably premature optimization. Many find
MPI_Comm_split() to be a more convenient interface.

> 2) If mpi_comm_create_group() is the better option, is it necessary to
> call it on all processes, or only the ones in the group? The tutorial at
> https://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/
> calls it on all processes, but other stuff I've read suggests you only
> need to call it on processes in the group. It seems to work either way,
> but you have to use the communicator a little differently.

I think the man page is fairly clear.

| MPI_Comm_create_group is similar to MPI_Comm_create; however,
| MPI_Comm_create must be called by all processes in the group of comm,
| whereas MPI_Comm_create_group must be called by all processes in group,
| which is a subgroup of the group of comm.
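
For what it's worth, here is a minimal sketch of the pattern the man page
describes (untested; n_small and the rank selection are just illustrative):
MPI_Comm_create_group is called only by the ranks that belong to the
subgroup, and everyone else keeps MPI_COMM_NULL.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, n_small = 4;                        /* size of the small subset */
  MPI_Comm subcomm = MPI_COMM_NULL;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank < n_small) {                         /* only subgroup ranks enter */
    MPI_Group world_group, sub_group;
    int range[1][3] = {{0, n_small - 1, 1}};    /* ranks 0..n_small-1, stride 1 */

    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_range_incl(world_group, 1, range, &sub_group);
    /* Collective only over the processes in sub_group */
    MPI_Comm_create_group(MPI_COMM_WORLD, sub_group, /* tag = */ 0, &subcomm);
    MPI_Group_free(&sub_group);
    MPI_Group_free(&world_group);
  }

  if (subcomm != MPI_COMM_NULL) {
    int subrank;
    MPI_Comm_rank(subcomm, &subrank);
    printf("world rank %d -> sub rank %d\n", rank, subrank);
    MPI_Comm_free(&subcomm);
  }

  MPI_Finalize();
  return 0;
}

The MPI_Comm_split route is collective over the parent communicator, but it
is a one-liner: ranks outside the subset pass MPI_UNDEFINED as the color and
get back MPI_COMM_NULL, e.g.

  MPI_Comm_split(MPI_COMM_WORLD, rank < n_small ? 0 : MPI_UNDEFINED, rank, &subcomm);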