Attached are some simple examples (in C) that collectively do most of what you are trying to do.

You have some args wrong in your call. See slave_spawn.c for how to use info_keys.
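
In the meantime, here is a rough Fortran sketch of the calling sequence (untested, and not the attached C code verbatim): the info keys go on an MPI_Info object set up with MPI_INFO_CREATE/MPI_INFO_SET rather than being passed as a string, and MPI_COMM_SPAWN takes more arguments than you were giving it:

-------------------------------------
PROGRAM parent
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, irank, info, intercomm
  CHARACTER(LEN=8)  :: crank
  CHARACTER(LEN=16) :: dir

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, irank, ierr)

  ! one working directory per original rank, e.g. "test-0"
  WRITE(crank,'(I0)') irank
  dir = "test-" // TRIM(crank)
  CALL SYSTEM("mkdir -p " // TRIM(dir))

  ! info keys live on an MPI_Info object, not in a string argument;
  ! "wdir" sets the child's working directory (see the MPI_Comm_spawn
  ! man page for the full list of keys OMPI understands)
  CALL MPI_INFO_CREATE(info, ierr)
  CALL MPI_INFO_SET(info, "wdir", "./" // TRIM(dir), ierr)

  ! full argument list: command, argv, maxprocs, info, root,
  ! comm, intercomm, array_of_errcodes, ierror
  ! (root is 0 because the spawning communicator is MPI_COMM_SELF)
  CALL MPI_COMM_SPAWN("mpitest-2.ex", MPI_ARGV_NULL, 1, info, 0, &
                      MPI_COMM_SELF, intercomm, MPI_ERRCODES_IGNORE, ierr)

  CALL MPI_INFO_FREE(info, ierr)
  ! ... talk to the child over intercomm here ...
  CALL MPI_FINALIZE(ierr)
END PROGRAM parent
---------------------------------------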

HTH
Ralph

Attachment: simple_spawn.c
Attachment: slave_spawn.c
Attachment: slave.c


On Mar 6, 2010, at 9:35 AM, abc def wrote:

Thanks for your kind help so far.

Following your suggestions I've been trying to figure out MPI_COMM_SPAWN, but it's at the edge of my understanding so it's not easy.

I read that changing directories can be achieved using info keys; however, these are very cryptic: I can't seem to find any precise information anywhere about how to use them. I tried the following:

-------------------------------------
WRITE(crank,'(I1)') irank
dir="test-" // crank

CALL SYSTEM("mkdir " // dir)

CALL MPI_COMM_SPAWN("mpitest-2.ex",MPI_ARGV_NULL,1,"wdir ./" // dir,irank,MPI_COMM_SELF,ierr)
---------------------------------------

MPI_ARGV_NULL: there are no arguments to mpitest-2.ex

1: I want to spawn one process per original process (the original process sitting idle, maybe waiting for a return parameter from its child; I have not figured out how to achieve communication between the two processes yet, but that's the next step; a rough, untested sketch of what I have in mind follows these notes)

"wdir ./test-1" (for example): the directory in which the new process should run. I don't think this is correct, but as I say, I can't find precise information about the info keys (at least, in a way that I can understand it) - can anyone help me here?

irank: the current rank of the process, so every process spawns its own process

MPI_COMM_SELF: I assume this is the new name for the child processes, like MPI_COMM_WORLD.

ierr: error value
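
For the communication step, my rough (and untested) plan is that the child reaches its parent through MPI_COMM_GET_PARENT and sends a result back over that intercommunicator, something like:

-------------------------------------
! child program (mpitest-2.ex) - untested sketch
PROGRAM child
  USE mpi
  IMPLICIT NONE
  INTEGER :: ierr, parentcomm
  DOUBLE PRECISION :: result

  CALL MPI_INIT(ierr)
  ! intercommunicator back to the process that spawned me
  CALL MPI_COMM_GET_PARENT(parentcomm, ierr)

  result = 42.0d0    ! placeholder for the real work

  ! the spawning parent is remote rank 0 of this intercommunicator
  CALL MPI_SEND(result, 1, MPI_DOUBLE_PRECISION, 0, 0, parentcomm, ierr)
  CALL MPI_COMM_DISCONNECT(parentcomm, ierr)
  CALL MPI_FINALIZE(ierr)
END PROGRAM child
---------------------------------------

On the parent side I would then MPI_RECV from the intercommunicator returned by MPI_COMM_SPAWN and MPI_COMM_DISCONNECT it. Does that sound right?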

So it compiles, but crashes once I reach the spawn command.

Can you help?
Thank you very much.

> From: jsquy...@cisco.com
> Date: Fri, 5 Mar 2010 15:02:57 -0500
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] running external program on same processor (Fortran)
> 
> On Mar 5, 2010, at 2:38 PM, Ralph Castain wrote:
> 
> >> CALL SYSTEM("cd " // TRIM(dir) // " ; mpirun -machinefile ./machinefile -np 1 /home01/group/Execute/DLPOLY.X > job.out 2> job.err ; cd - > /dev/null")
> > 
> > That is guaranteed not to work. The problem is that mpirun sets environmental variables for the original launch. Your system call carries over those envars, causing mpirun to become confused.
> 
> You should be able to use MPI_COMM_SPAWN to launch this MPI job. Check the man page for MPI_COMM_SPAWN; I believe we have info keys to specify things like what hosts to launch on, etc.
> 
> >> Do you think MPI_COMM_SPAWN can help?
> > 
> It's the only method supported by the MPI standard. If you need it to block until this new executable completes, you could use a barrier or other MPI method to detect when it finishes.
> 
> I believe the user said they wanted the new job to run on the same cores that their original MPI job occupies -- they basically want the old job to block until the new job completes. Keep in mind that OMPI busy-polls while waiting for progress, so you might actually get hosed here (two procs competing for time on the same core).
> 
> I'm not immediately thinking of a good way to avoid this issue -- perhaps you could kludge something up such that the parent job polls on sleep() and checks whether a message has arrived from the child (i.e., the last thing the child does before it calls MPI_FINALIZE is to send a message to its parents and then MPI_COMM_DISCONNECT from them). If the parent finds that it has a message from the child(ren), it can MPI_COMM_DISCONNECT and continue processing.
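> 
> Something like this, perhaps (untested, in Fortran since that's what you're using; it assumes the parent kept the intercomm returned by MPI_COMM_SPAWN and that the child's final message is a single double precision value):
> 
>    ! untested sketch of the parent-side polling loop
>    ! (SLEEP is a common compiler extension, e.g. in gfortran,
>    ! not standard Fortran)
>    LOGICAL :: flag
>    DOUBLE PRECISION :: result
>    flag = .FALSE.
>    DO WHILE (.NOT. flag)
>       CALL SLEEP(1)     ! give up the core instead of busy-polling
>       CALL MPI_IPROBE(0, 0, intercomm, flag, MPI_STATUS_IGNORE, ierr)
>    END DO
>    ! the child's "done" message has arrived; pick it up and disconnect
>    CALL MPI_RECV(result, 1, MPI_DOUBLE_PRECISION, 0, 0, intercomm, &
>                  MPI_STATUS_IGNORE, ierr)
>    CALL MPI_COMM_DISCONNECT(intercomm, ierr)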
> 
> Kinda hackey, but it might work...?
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 
> 


_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
