Hi,

I've run into the same problem as discussed in the thread by Lev Gelb and Ralph H Castain, "Re: [OMPI users] Recursive use of "orterun"":
http://www.open-mpi.org/community/lists/users/2007/07/3655.php

I am running a parallel Python code; from Python I launch a parallel C++
program using the os.system command, and when it finishes I return to Python
and keep going.
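
For concreteness, the pattern is roughly the sketch below (the mpirun command
line shown is the one from the workaround script further down):

--------
import os

# the Python code is itself running as an MPI job under mpirun/orterun;
# at some point it launches a separate parallel C++ program and waits for it
ret = os.system("mpirun -np 2 ./MoM/communicateMeshArrays")

# ... and then the Python code carries on once the C++ run has finished
--------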

With LAM/MPI there is no problem with this.

But Open MPI systematically crashes, because the os.system command launches
the C++ program with the same OMPI_* environment variables as the Python
program. As discussed in the thread, I have tried filtering out the OMPI_*
variables and launching the C++ program with an os.execve call instead, but
then control never returns to Python: the process simply terminates when the
C++ program ends.
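
Roughly, that filtering attempt looked like the sketch below (the
/usr/bin/mpirun path is just an example for my machine):

--------
import os

# copy the environment, dropping every OMPI_* variable inherited from
# the outer mpirun
clean_env = dict((k, v) for k, v in os.environ.items()
                 if not k.startswith("OMPI_"))

# execve replaces the running Python process with mpirun, which is why
# control never comes back to Python once the C++ program ends
os.execve("/usr/bin/mpirun",
          ["mpirun", "-np", "2", "./MoM/communicateMeshArrays"],
          clean_env)
--------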

There is a workaround (
http://thread.gmane.org/gmane.comp.clustering.open-mpi.user/986): create a
*.sh file with the following lines:

--------
# unset every OMPI_MCA_* variable inherited from the parent mpirun
for i in $(env | grep OMPI_MCA | sed 's/=/ /' | awk '{print $1}')
do
   unset $i
done

# now the C++ call
mpirun -np 2 ./MoM/communicateMeshArrays
----------

and then call that *.sh script from Python through the os.system command.
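
On the Python side the call is then simply something like the following (here
the wrapper is saved as run_mom.sh, but the name is arbitrary):

--------
import os

# os.system runs the wrapper in a fresh shell and returns once it
# (and therefore the C++ run) has finished
ret = os.system("sh ./run_mom.sh")
--------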

What I would like to know is whether this "problem" will be fixed in
Open MPI. Is there another, more elegant way to solve this issue? Meanwhile,
I will stick to the ugly *.sh hack listed above.

Cheers

Ides
