[OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
Dear OpenMPI users, Regarding this previous post from 2009, I wonder if the reply from Ralph Castain is still valid. My need is similar but quite a bit simpler: to make a system call from an Open MPI Fortran application to run a third-party ...
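A minimal sketch of the pattern being asked about here, in Fortran: one rank of the outer Open MPI job shells out to a third-party executable. The program name "thirdparty_app" and its input/output files are placeholders, and, as the replies below point out, calling out to the system from inside an MPI process may not work on every interconnect or launcher.

    program spawn_via_system
      use mpi
      implicit none
      integer :: ierr, rank, istat

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      if (rank == 0) then
         ! Fortran 2008 intrinsic; wait=.true. blocks until the command returns
         call execute_command_line("thirdparty_app < input.dat > output.dat", &
                                   wait=.true., exitstat=istat)
      end if

      call MPI_BARRIER(MPI_COMM_WORLD, ierr)
      call MPI_FINALIZE(ierr)
    end program spawn_via_system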

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
the "inside" run. > > on top of that, invoking system might fail depending on the interconnect > you use. > > Bottom line, i believe Ralph's reply is still valid, even if five years > have passed : > changing your workflow, or using MPI_Comm_spawn is a much better app

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
> ... PATH=/bin ...
> And that being said, this "trick" could just be a bad idea: you might be using a scheduler, and if you empty the environment, the scheduler will not be aware of the "inside" run.
> On top of that, invoking system might fail depending on the interconnect ...
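The environment-clearing "trick" referred to above might look something like the fragment below; the exact command string is an assumption for illustration only, and, as noted, this approach can hide the inner run from a batch scheduler.

    integer :: istat

    ! "env -i" wipes the inherited environment so the inner mpirun does not
    ! see the outer job's Open MPI / scheduler variables; PATH is set by hand
    call execute_command_line("env -i PATH=/bin:/usr/bin mpirun -n 2 thirdparty_app", &
                              wait=.true., exitstat=istat)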

Re: [OMPI users] MPI inside MPI (still)

2014-12-11 Thread Alex A. Schmidt
... to process it also benefits from a parallel environment. Alex

2014-12-12 2:30 GMT-02:00 Gilles Gouaillardet <gilles.gouaillar...@iferc.org>:
> Alex, just to make sure ... this is the behavior you expected, right?
> Cheers, Gilles

Re: [OMPI users] MPI inside MPI (still)

2014-12-12 Thread Alex A. Schmidt
... the mpi_comm_spawn call... Alex

2014-12-12 2:42 GMT-02:00 Alex A. Schmidt <a...@ufsm.br>:
> Gilles,
> Well, yes, I guess. I'll do tests with the real third-party apps and let you know. These are huge quantum chemistry codes (dftb+, siesta and Gaussian) ...

Re: [OMPI users] MPI inside MPI (still)

2014-12-12 Thread Alex A. Schmidt
> ... are using third-party apps, why don't you do something like system("env -i qsub ...") with the right options to make qsub blocking, or you manually wait for the end of the job?
> That looks like a much cleaner and simpler approach to me.
> Cheers, Gilles
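A sketch of that suggestion: submit the third-party run as its own batch job and wait for it, instead of nesting mpirun calls. The job script name, the sentinel file, and the polling interval are all assumptions; whether qsub itself can block depends on the batch system in use.

    logical :: done
    integer :: istat

    ! run_thirdparty.sh is a hypothetical job script whose last step is to
    ! touch the file "thirdparty.done"
    call execute_command_line("env -i qsub run_thirdparty.sh", exitstat=istat)

    done = .false.
    do while (.not. done)
       call execute_command_line("sleep 30")        ! crude polling
       inquire(file="thirdparty.done", exist=done)
    end do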

Re: [OMPI users] MPI inside MPI (still)

2014-12-13 Thread Alex A. Schmidt
... Gilles

George Bosilca <bosi...@icl.utk.edu> wrote:
> You have to call MPI_Comm_disconnect on both sides of the intercommunicator. On the spawner processes you should call it on the intercomm, while on the spawnees you should call it on the communicator returned by MPI_Comm_get_parent.
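In Fortran, George's rule roughly amounts to the two fragments below, one per side (declarations omitted).

    ! parent side, after MPI_COMM_SPAWN has returned "intercomm"
    call MPI_COMM_DISCONNECT(intercomm, ierr)

    ! child side: fetch the parent intercommunicator and disconnect it
    call MPI_COMM_GET_PARENT(parent, ierr)
    if (parent /= MPI_COMM_NULL) call MPI_COMM_DISCONNECT(parent, ierr)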

Re: [OMPI users] MPI inside MPI (still)

2014-12-14 Thread Alex A. Schmidt
> ... a call to PMIX.FENCE. Can you attach to your deadlocked processes and confirm that they are stopped in the pmix.fence?
> George.
>
> On Sat, Dec 13, 2014 at 8:47 AM, Alex A. Schmidt <a...@ufsm.br> wrote:
>> Hi,
>> Sorry, I was calling mpi_comm_disconnect ...

Re: [OMPI users] MPI inside MPI (still)

2014-12-15 Thread Alex A. Schmidt
> ... either by duplicating MPI_COMM_SELF or doing an MPI_Comm_split with the color equal to your rank.
> George.
>
> On Sun, Dec 14, 2014 at 2:20 AM, Alex A. Schmidt <a...@ufsm.br> wrote:
>> Hi,
>> Sorry, guys. I don't think the newbie here can ...
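Sketched in Fortran, the suggestion reads roughly as below: each rank of the parent job builds its own single-process communicator to hand to MPI_COMM_SPAWN.

    integer :: ierr, rank, mycomm

    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    ! option 1: duplicate MPI_COMM_SELF
    call MPI_COMM_DUP(MPI_COMM_SELF, mycomm, ierr)

    ! option 2: split, using the rank as the color, so every rank ends up
    ! alone in its own communicator
    ! call MPI_COMM_SPLIT(MPI_COMM_WORLD, rank, 0, mycomm, ierr)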

Re: [OMPI users] MPI inside MPI (still)

2014-12-15 Thread Alex A. Schmidt
... <r...@open-mpi.org>:
> You should be able to just include that in your argv that you pass to the Comm_spawn API.
>
> On Mon, Dec 15, 2014 at 9:27 AM, Alex A. Schmidt <a...@ufsm.br> wrote:
>> George,
>> Thanks for the tip. In fact, calling mpi_com...

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
... "< stdin_file", args(4) = " " will make "child" output only 1 line, [A] [B] [< stdin_file], and then fail because there is no stdin data to read from. Please note that, surprisingly, the whole string "< stdin_file" is interpreted as a third parameter to "child" ...
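A reconstruction of the failing attempt described here (argument values taken from the message, "child" being the test program it names): the redirection string becomes a literal argv entry because no shell ever runs.

    character(len=32) :: args(4)
    integer :: ierr, intercomm, errcodes(1)

    args(1) = "A"
    args(2) = "B"
    args(3) = "< stdin_file"
    args(4) = " "              ! a blank entry terminates the Fortran argv list

    call MPI_COMM_SPAWN("child", args, 1, MPI_INFO_NULL, 0, MPI_COMM_SELF, &
                        intercomm, errcodes, ierr)
    ! child sees argv as [A] [B] [< stdin_file]; nothing is connected to its stdin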

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
... Ralph Castain <r...@open-mpi.org>:
> Have you tried putting the "<" as a separate parameter? In other words, since you are specifying the argv, you have to specify each of them separately. So it looks more like: "mpirun", "-n", "1", "myapp" ...
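That suggestion, transcribed into a Fortran spawn call (program and file names as used in the thread); the follow-up messages show it still does not produce a real redirection, since MPI_Comm_spawn execs the program directly without a shell, so take this as an illustration of the point under discussion rather than a working recipe.

    character(len=32) :: args(6)
    integer :: ierr, intercomm, errcodes(1)

    args(1) = "-n"
    args(2) = "1"
    args(3) = "myapp"
    args(4) = "<"
    args(5) = "stdin_file"
    args(6) = " "              ! blank terminator

    call MPI_COMM_SPAWN("mpirun", args, 1, MPI_INFO_NULL, 0, MPI_COMM_SELF, &
                        intercomm, errcodes, ierr)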

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
2014-12-17 15:16 GMT-02:00 Ralph Castain <r...@open-mpi.org>:
> Have you tried putting the "<" as a separate parameter? In other words, since you are specifying the argv, you have to specify each of them ...

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
> ... process, it does not behave as the shell.
> Thus, a potentially non-portable solution would be, instead of launching the mpirun directly, to launch it through a shell. Maybe something like "/bin/sh", "-c", "mpirun -n 1 myapp ...

Re: [OMPI users] MPI inside MPI (still)

2014-12-17 Thread Alex A. Schmidt
> ... your app. This is something like
>   MPI_Comm_spawn("/bin/sh", "-c", "siesta < infile")
> That being said, I strongly recommend you patch siesta so it can be invoked like this:
>   siesta -in infile
> (plus the MPI_Comm_disconnect call explained ...
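The shell-wrapper workaround quoted above, written out as a Fortran fragment: /bin/sh performs the "< infile" redirection before exec'ing siesta. This mirrors the call shown in the message and, like it, should be read as a potentially non-portable sketch; the cleaner fix suggested there is to patch siesta so it accepts "siesta -in infile".

    character(len=64) :: args(3)
    integer :: ierr, intercomm, errcodes(1)

    args(1) = "-c"
    args(2) = "siesta < infile"
    args(3) = " "              ! blank terminator

    call MPI_COMM_SPAWN("/bin/sh", args, 1, MPI_INFO_NULL, 0, MPI_COMM_SELF, &
                        intercomm, errcodes, ierr)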