Dear all, thank you for the answers, which I find very helpful in trying to understand whether everything works as expected (I mean parallel runs). Personally, thanks to Jonathan's simple checks, I found out that nothing works as expected. I have not been able to work on the issue recently due to sudo constraints, so finally (to be root-independent :-)) I decided to make my own user installation in a virtual environment, following Jonathan's instructions for the Mac (http://matforge.org/fipy/wiki/SnowLeopardSourceBuild). I'm working with Scientific Linux 5, so I'm not completely sure I'll succeed, but I will let you know soon.
Concerning jtg's last comments, I think SWIG should be present for Trilinos, since it is what makes the C/Python interaction possible. At least, SWIG should be installed before Trilinos according to the cited instructions, and this is indirectly mentioned in the "Note" area of the FiPy manual where the Trilinos installation is discussed. Cheers, Igor.

On Sun, May 9, 2010 at 1:36 AM, jtg <[email protected]> wrote:
>
> Dear All,
>
> To summarize this tale of woe: I still can't get the simple parallel
> FiPy example to work. That is,
>
>     mpirun -n 2 python mesh1D.py
>
> gives 2 identical outputs of the calculations for each step.
>
> Per Dr. Guyer's helpful e-mails to Igor and myself, I have verified:
>
>   o trilinos 9.0.3 has been completely re-built with OpenMPI
>     1.3.3; the trilinos examples (C++) work with mpirun as expected
>
>   o libpytrilinos *is* linked against libmpi (ldd libpytrilinos)
>
>   o libpytrilinos has *no* MPI symbols; *all* other .so libs in
>     that same lib directory *do* have MPI symbols
>
>   o mpi4py is installed in the standard site-packages location
>     (the same place as numpy, scipy, and even fipy itself)
>
> 1) At the moment, it seems that the lack of MPI symbols in
>    libpytrilinos is the most fundamental issue.
>
> 2) Bear with me just a moment. We have an mpi4py that
>    "works" -- that is, this very simple-minded program behaves
>    as expected (and others, not so simple-minded):
>
> =======================
> #!/usr/bin/env python
>
> from sys import stdout
> from mpi4py import MPI
>
> tot_procs = MPI.COMM_WORLD.Get_size()
> rank = MPI.COMM_WORLD.Get_rank()
> machine = MPI.Get_processor_name()
>
> tst_msg = ("Aloha, World. This proc= %d (of %d) is running on "
>            "machine= %s.\n")
>
> stdout.write(tst_msg % (rank, tot_procs, machine))
> =======================
>
> But, obviously, to run properly this example must import
> mpi4py.
>
> 3) HOW DOES PyTrilinos KNOW ABOUT mpi4py?
>    At the moment, I think that's the problem.
> By hook or by crook, something in PyTrilinos has to "import mpi4py";
> how does that happen? Or, how *should* it happen, since
> apparently it isn't happening? Does mpi4py perhaps need to be located
> somewhere specific?
>
> All the trilinos configuration flags have to do with "mpi"; there
> are no Python stubs of any kind in my release of OpenMPI.
> There are cryptic/generic import statements in the PyTrilinos
> files, but nowhere is anything like mpi4py explicitly called out,
> as far as I have been able to find.
>
> I use only 2 mpi flags when configuring trilinos:
>
>     --with-mpi=[...]/trilinos/OPENMPI \
>     --with-mpi-compilers \
>
> I believe that the code implementing the bindings between
> mpi4py and (standard) MPI is located in a lib called MPI.so,
> which, at our facility, after a standard install of mpi4py,
> wound up here:
>
>     [our_python_area]/site-packages/mpi4py/MPI.so
>
> (As mentioned, this is the same "site-packages" containing
> things like numpy and fipy.)
>
> 5) Something else that might help, if you have a working
>    fipy/trilinos installation: are you using mpi4py? Where is
>    it installed? Is it in fact associated with libpytrilinos (does
>    "ldd libpytrilinos" list MPI.so)?
>
> That's enough grief for one day... thanks for your help.
>
> +jtg+
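For anyone else chasing the same linkage question, the `ldd libpytrilinos` check jtg describes in points 1) and 5) can be scripted. This is only a sketch: the library path in the usage comment is hypothetical, and you would point it at the actual location of your own PyTrilinos build.

```python
import subprocess

def parse_mpi_linked(ldd_output):
    """Return True if any dependency line of `ldd` output mentions libmpi."""
    return any("libmpi" in line for line in ldd_output.splitlines())

def mpi_linked(lib_path):
    """Run `ldd` on lib_path and report whether it pulls in libmpi."""
    out = subprocess.check_output(["ldd", lib_path]).decode()
    return parse_mpi_linked(out)

# Usage (the path is hypothetical -- substitute your own build's location):
#   mpi_linked("/opt/trilinos/lib/libpytrilinos.so")
```

If this reports False for libpytrilinos while the other Trilinos shared libraries report True, that matches the asymmetry jtg observed.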
