Re: [OMPI users] MPI inside MPI (still)

2014-12-18 Thread Reuti
On 18.12.2014 at 04:24, Alex A. Schmidt wrote: > > I have tested the system("env -i ...") option earlier and it does > work. There is doubt, though, that it would work alongside a job scheduler. > I will reserve it as a last-resort solution. You could also redirect the stdin
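
A minimal sketch of the system("env -i ...") approach discussed above, using the standard Fortran 2008 execute_command_line intrinsic (the non-standard call system(...) mentioned in the thread would work the same way); the program and file names siesta, infile and outfile are placeholders taken from the thread:

    program launch_via_system
       implicit none
       integer :: stat
       ! "env -i" scrubs the Open MPI environment inherited from the parent job;
       ! the shell, not MPI, performs the redirection. With a fully scrubbed
       ! environment the full path to mpirun may be needed.
       call execute_command_line('env -i mpirun -n 2 siesta < infile > outfile', &
                                 exitstat=stat)
       print *, 'nested mpirun exited with status', stat
    end program launch_via_system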

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Ralph Castain
We can certainly add an MPI_Info key to redirect stdin, stdout, and stderr. However, that won't happen in the immediate future, nor would it come into the 1.8 series. In the meantime, wrapping these codes in scripts sounds like the way to go. You would call mpirun to start the job in the

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
I have tested the system("env -i ...") option earlier and it does work. There is doubt, though, that it would work alongside a job scheduler. I will reserve it as a last-resort solution. mpi_comm_spawn("/bin/sh","-c","siesta < infile",..) definitely does not work. Patching siesta to start

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Gilles Gouaillardet
Alex, You do not want to spawn mpirun. Or if that is really what you want, then just use system("env -i ..."). I think what you need is to spawn a shell that does the redirection and then invokes your app. That is something like MPI_Comm_spawn("/bin/sh", "-c", "siesta < infile"). That being said, i
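
In Fortran, the construction Gilles describes would look roughly like the sketch below (note that the argv array is terminated by a blank entry); as Alex reports above, this particular construction did not work for him, so it is shown only to illustrate the idea:

    program spawn_shell_redirect
       use mpi
       implicit none
       integer :: ierr, intercomm
       character(len=64) :: args(3)

       call MPI_INIT(ierr)
       args(1) = '-c'
       args(2) = 'siesta < infile'   ! the shell, not MPI, would do the redirection
       args(3) = ' '                 ! a blank entry terminates the Fortran argv list
       call MPI_COMM_SPAWN('/bin/sh', args, 1, MPI_INFO_NULL, 0, MPI_COMM_SELF, &
                           intercomm, MPI_ERRCODES_IGNORE, ierr)
       call MPI_COMM_DISCONNECT(intercomm, ierr)
       call MPI_FINALIZE(ierr)
    end program spawn_shell_redirect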

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
Let me rephrase the previous message: putting "/bin/sh" in the command argument of mpi_comm_spawn, with the info key "ompi_non_mpi" set to ".true." (if command is empty, mpi_comm_spawn tries to execute ' '), and "-c" "mpirun -n 1 myapp" in args, results in this message:

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
Putting "/bin/sh" in command with info key "ompi_non_mpi" set to ".true." (if command is empty, mpi_comm_spawn tries to execute ' ') of mpi_comm_spawn and "-c" "mpirun -n 1 myapp" in args results in this message: /usr/bin/sh: -c: option requires an argument Putting a single string in args as

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread George Bosilca
I don't think this has any chance of working. The redirection is something interpreted by the shell, and when Open MPI "fork-execs" a process it does not behave like a shell. Thus a potentially non-portable solution would be, instead of launching mpirun directly, to launch it through a shell.

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
Ralph, Sorry, "<" as an element of argv to mpi_comm_spawn is interpreted just the same, as another parameter by the spawnee process. But I am confused: wouldn't it be redundant to put "mpirun" "-n" "1" "myapp" as elements of argv, considering role of the other parameters of mpi_comm_spawn like

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Ralph Castain
Have you tried putting the "<" as a separate parameter? In other words, since you are specifying the argv, you have to specify each of them separately. So it looks more like: "mpirun", "-n", "1", "myapp", "<", "stdinfile" Does that work? Ralph On Wed, Dec 17, 2014 at 8:07 AM, Alex A. Schmidt
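
A sketch of the separated argv Ralph describes, with "mpirun" as the command and the remaining tokens as argv entries (in the MPI argv convention the command name itself is not repeated). As the reply above notes, the "<" still arrives at the spawned process as an ordinary argument because no shell is involved, and later messages advise against spawning mpirun at all:

    program spawn_separated_argv
       use mpi
       implicit none
       integer :: ierr, intercomm
       character(len=32) :: args(6)

       call MPI_INIT(ierr)
       args(1) = '-n'
       args(2) = '1'
       args(3) = 'myapp'
       args(4) = '<'          ! passed literally: no shell, hence no redirection
       args(5) = 'stdinfile'
       args(6) = ' '          ! blank entry terminates the list
       call MPI_COMM_SPAWN('mpirun', args, 1, MPI_INFO_NULL, 0, MPI_COMM_SELF, &
                           intercomm, MPI_ERRCODES_IGNORE, ierr)
       call MPI_FINALIZE(ierr)
    end program spawn_separated_argv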

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
Ralph, I am afraid I will have to insist on the I/O redirection matter for the spawnee process. I have a "child" MPI code that does just two things: it reads the three parameters passed to it and prints them, and then reads data from stdin and shows it. So, if "stdin_file" is a text file with two lines, say: 10
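
A hedged reconstruction of the kind of "child" test code being described (the details are not in the thread, so the formatting here is invented for illustration): it prints its three command-line arguments and then echoes whatever it can read from stdin:

    program child_echo
       use mpi
       implicit none
       integer :: ierr, i, ios
       character(len=64) :: arg
       character(len=256) :: line

       call MPI_INIT(ierr)
       ! Print the three parameters passed via the argv of mpi_comm_spawn.
       do i = 1, 3
          call get_command_argument(i, arg)
          print *, 'arg', i, ':', trim(arg)
       end do
       ! Echo whatever arrives on stdin (nothing, if no redirection happened).
       do
          read (*, '(A)', iostat=ios) line
          if (ios /= 0) exit
          print *, 'stdin:', trim(line)
       end do
       call MPI_FINALIZE(ierr)
    end program child_echo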

Re: [OMPI users] MPI inside MPI (still)

2014-12-15 Thread Alex A. Schmidt
Ralph, I guess you mean "call mpi_comm_spawn('siesta', '< infile', 2, ...)" to execute 'mpirun -n 2 siesta < infile' on the spawnee side. That was my first choice. Well, siesta behaves as if no stdin file was present... Alex 2014-12-15 17:07 GMT-02:00 Ralph Castain : > >

Re: [OMPI users] MPI inside MPI (still)

2014-12-15 Thread Ralph Castain
You should be able to just include that in the argv that you pass to the Comm_spawn API. On Mon, Dec 15, 2014 at 9:27 AM, Alex A. Schmidt wrote: > > George, > > Thanks for the tip. In fact, calling mpi_comm_spawn right away with MPI_COMM_SELF > has worked for me just as well

Re: [OMPI users] MPI inside MPI (still)

2014-12-15 Thread Alex A. Schmidt
George, Thanks for the tip. In fact, calling mpi_comm_spawn right away with MPI_COMM_SELF has worked for me just as well -- no subgroups needed at all. I am testing this Open MPI app named "siesta" in parallel. The source code is available, so making it "spawn ready" by adding the pair
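
The MPI_COMM_SELF variant reported to work here would look roughly like this (the spawned program name and process count are placeholders): every parent rank acts as its own root and spawns its own group of children, so no subgroups are needed:

    program spawn_over_self
       use mpi
       implicit none
       integer :: ierr, intercomm

       call MPI_INIT(ierr)
       ! Each parent rank independently spawns 2 "siesta" processes.
       call MPI_COMM_SPAWN('siesta', MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0, &
                           MPI_COMM_SELF, intercomm, MPI_ERRCODES_IGNORE, ierr)
       ! ... exchange data with the children over intercomm ...
       call MPI_COMM_DISCONNECT(intercomm, ierr)
       call MPI_FINALIZE(ierr)
    end program spawn_over_self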

Re: [OMPI users] MPI inside MPI (still)

2014-12-14 Thread George Bosilca
Alex, The code looks good and is 100% accurate with respect to the MPI standard. I would change the way you create the subcoms in the parent. You do a lot of unnecessary operations, as you can achieve exactly the same outcome (one communicator per node) either by duplicating MPI_COMM_SELF or doing an MPI_Comm_split
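
The two alternatives George mentions could be written as below; either call leaves each parent rank with its own single-process communicator to pass to mpi_comm_spawn:

    program one_comm_per_rank
       use mpi
       implicit none
       integer :: ierr, rank, comm_dup, comm_split

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

       ! Alternative 1: simply duplicate MPI_COMM_SELF.
       call MPI_COMM_DUP(MPI_COMM_SELF, comm_dup, ierr)

       ! Alternative 2: split MPI_COMM_WORLD with a distinct color per rank.
       call MPI_COMM_SPLIT(MPI_COMM_WORLD, rank, 0, comm_split, ierr)

       call MPI_COMM_FREE(comm_dup, ierr)
       call MPI_COMM_FREE(comm_split, ierr)
       call MPI_FINALIZE(ierr)
    end program one_comm_per_rank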

Re: [OMPI users] MPI inside MPI (still)

2014-12-14 Thread Alex A. Schmidt
Hi, Sorry, guys. I don't think the newbie here can follow any discussion beyond basic MPI... Anyway, if I add the pair call MPI_COMM_GET_PARENT(mpi_comm_parent,ierror) / call MPI_COMM_DISCONNECT(mpi_comm_parent,ierror) on the spawnee side, I get the proper response in the spawning processes.
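
Spelled out as a minimal spawnee, the pair of calls quoted above sits between MPI_Init and MPI_Finalize:

    program spawnee
       use mpi
       implicit none
       integer :: ierr, mpi_comm_parent

       call MPI_INIT(ierr)
       call MPI_COMM_GET_PARENT(mpi_comm_parent, ierr)
       ! ... do the real work, possibly talking to the parent over this
       ! intercommunicator ...
       if (mpi_comm_parent /= MPI_COMM_NULL) then
          ! Detach from the intercommunicator to the spawning job.
          call MPI_COMM_DISCONNECT(mpi_comm_parent, ierr)
       end if
       call MPI_FINALIZE(ierr)
    end program spawnee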

Re: [OMPI users] MPI inside MPI (still)

2014-12-13 Thread Gilles Gouaillardet
Alex, Are you calling MPI_Comm_disconnect in the 3 "master" tasks and with the same remote communicator? I also read the man page again, and MPI_Comm_disconnect does not ensure the remote processes have finished or called MPI_Comm_disconnect, so that might not be the thing you need. George,

Re: [OMPI users] MPI inside MPI (still)

2014-12-13 Thread George Bosilca
MPI_Comm_disconnect should be a local operation; there is no reason for it to deadlock. I looked at the code and everything is local with the exception of a call to PMIX.FENCE. Can you attach to your deadlocked processes and confirm that they are stopped in the pmix.fence? George. On Sat, Dec

Re: [OMPI users] MPI inside MPI (still)

2014-12-13 Thread Alex A. Schmidt
Hi, Sorry, I was calling mpi_comm_disconnect on the group comm handle, not on the intercomm handle returned from the spawn call as it should be. Well, calling the disconnect on the intercomm handle does halt the spawner side, but the wait is never completed since, as George points out, there is

Re: [OMPI users] MPI inside MPI (still)

2014-12-13 Thread Gilles Gouaillardet
George is right about the semantics. However, I am surprised it returns immediately... That should either work or hang, IMHO. The second point is no longer MPI related and is batch-manager specific. You will likely find a submit parameter to make the command block until the job completes. Or you can