abc def,

When the parent does a spawn call, it presumably blocks until the child
tasks have called MPI_Init.  The standard allows some flexibility here,
but at a minimum, once spawn returns, the spawning side must be able to
issue communication calls involving the children and expect them to work.

What you seem to be missing is that when a parent has spawned a set of
children, the parent tasks and child tasks are connected. If you want the
children to do an MPI_Finalize and actually finish before the parent calls
MPI_Finalize, you must use MPI_Comm_disconnect on the intercommunicator
between the spawn side and the children.

The MPI standard makes MPI_Finalize collective across all currently
connected processes, so you cannot assume the children will return from
MPI_Finalize until the parent process has also entered MPI_Finalize.

MPI_Comm_disconnect makes the parent and children independent so an
MPI_Finalize by the children can return and the processes end, even though
the parent continues on.

In your example, perhaps the best approach is to have the children call
MPI_Barrier after the file is written and have the parent call MPI_Barrier
before the file is read. Have both parent and children call
MPI_Comm_disconnect before the parent does another spawn so the children
can finalize and go away.
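An untested sketch of that pattern, in Fortran (the executable name
"./child", the child count of 4, and the file I/O placeholders are
assumptions; error handling is omitted):

    program parent
      use mpi
      implicit none
      integer :: ierr, intercomm
      integer :: errcodes(4)

      call MPI_INIT(ierr)
      ! spawn 4 children; "./child" is a placeholder executable name
      call MPI_COMM_SPAWN('./child', MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0, &
                          MPI_COMM_WORLD, intercomm, errcodes, ierr)

      ! returns once every child has entered the barrier,
      ! i.e. once the output files are written
      call MPI_BARRIER(intercomm, ierr)

      ! ... read the children's files here ...

      ! sever the connection so the children can finalize and exit
      call MPI_COMM_DISCONNECT(intercomm, ierr)

      ! safe to do the next MPI_COMM_SPAWN here

      call MPI_FINALIZE(ierr)
    end program parent

    program child
      use mpi
      implicit none
      integer :: ierr, parent

      call MPI_INIT(ierr)
      call MPI_COMM_GET_PARENT(parent, ierr)

      ! ... write the output file here ...

      ! signal the parent that the file is complete
      call MPI_BARRIER(parent, ierr)

      ! become independent of the parent so MPI_FINALIZE can return
      call MPI_COMM_DISCONNECT(parent, ierr)
      call MPI_FINALIZE(ierr)
    end program child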


Dick Treumann  -  MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363



From:     Jeff Squyres <jsquy...@cisco.com>
To:       "Open MPI Users" <us...@open-mpi.org>
Date:     03/17/2010 12:21 PM
Subject:  Re: [OMPI users] running external program on same processor (Fortran)
Sent by:  users-boun...@open-mpi.org

On Mar 16, 2010, at 5:12 AM, abc def wrote:

> 1. Since Spawn is non-blocking, but I need the parent to wait until the
> child completes, I am thinking there must be a way to pass a variable from
> the child to the parent just prior to the FINALIZE command in the child, to
> signal that the parent can pick up the output files from the child. Am I
> right in assuming that the message from the child to the parent will go to
> the correct parent process? The value of "parent" in "CALL
> MPI_COMM_GET_PARENT(parent, ierr)" is the same in all spawned processes,
> which is why I ask this question.

Yes, you can MPI_SEND (etc.) between the parents and children, just like
you would expect.  Just be aware that the communicator between the parents
and children is an *inter*communicator -- so you need to express the
source/destination in terms of the "other" group.  Check out the MPI spec
for a description of intercommunicators.
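For example, a child could signal completion to the first parent rank
like this (tag 99, the variable names, and which ranks talk to each other
are arbitrary choices; "intercomm" is the intercommunicator that
MPI_COMM_SPAWN returned on the parent side):

    integer :: parent, done, ierr
    integer :: status(MPI_STATUS_SIZE)

    ! child side: dest 0 means rank 0 of the *remote* (parent) group
    call MPI_COMM_GET_PARENT(parent, ierr)
    done = 1
    call MPI_SEND(done, 1, MPI_INTEGER, 0, 99, parent, ierr)

    ! parent side, on the receiving rank: source 0 means rank 0 of the
    ! *remote* (child) group, i.e. the first child
    call MPI_RECV(done, 1, MPI_INTEGER, 0, 99, intercomm, status, ierr)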

> 2. By launching the parent with the "--mca mpi_yield_when_idle 1" option,
> the child should be able to take CPU power from any blocked parent process,
> thus avoiding the busy-poll problem mentioned below.

Somewhat.  Note that the parents aren't blocked -- they *are* busy polling,
but they call yield() in every polling loop.
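For reference, the option goes on the parent job's command line, e.g.
(the executable name and process count are placeholders):

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./parent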

> If each host has 4 processors and I'm running on 2 hosts (ie, 8
> processors in total), then I also assume that the spawned child will launch
> on the same host as the associated parent?

If you have told Open MPI about 8 process slots and are using all of them,
then spawned processes will start overlaying the original process slots --
effectively in the same order.

--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

