On Jan 10, 2012, at 3:07 PM, Seufzer, William J. wrote:

> I was successful with building FiPy, the dependencies, and mpi4py, on a Mac 
> and was able to run examples with MPI. I could see multiple cores engage and 
> noticed the improvement in wall time execution. With that success I decided 
> to save some time and purchased the enthought.com single user package and I'm 
> trying to get that to run on a cluster (SUSE Enterprise 64 bit) from within 
> my account (no root). I haven't gone down the route of including Trilinos 
> yet, just the basics for now.

PyTrilinos is absolutely mandatory for running FiPy in parallel. mpi4py is 
required, too, but only because PyTrilinos does not fully expose the MPI 
communicator.

> I took examples/parallel.py, took out the Trilinos stuff, and renamed it 
> simple.py. It appears that the MPI part is working, but the parallel and 
> Grid1D is not distributing across the nodes. When I do an mpirun I get:
> 
> me@cluster% mpirun -np 3 python simple.py
> mpi4py: processor 0 of 3 :: FiPy: 10 cells on processor 0 of 1
> mpi4py: processor 1 of 3 :: FiPy: 10 cells on processor 0 of 1
> mpi4py: processor 2 of 3 :: FiPy: 10 cells on processor 0 of 1
> 
> The mpirun that is running is from my enthought Python install.

FiPy is not partitioning the mesh because PyTrilinos is missing, but I'll go 
ahead and address your next question:
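The tell-tale sign is in your output: every rank reports "processor 0 of 1", meaning each MPI process built the full 10-cell mesh independently. With PyTrilinos present, FiPy partitions the mesh so that each rank owns only a share of the cells. As a rough illustration (plain Python arithmetic, not FiPy's actual partitioner, which delegates the decomposition to Trilinos and also adds ghost cells at partition boundaries):

```python
def block_partition(ncells, nprocs):
    """Divide ncells as evenly as possible among nprocs ranks.

    Returns a list giving the number of cells owned by each rank.
    Illustrative only -- FiPy's real decomposition is performed by
    Trilinos and includes overlap (ghost) cells as well.
    """
    base, extra = divmod(ncells, nprocs)
    # the first `extra` ranks each take one additional cell
    return [base + (1 if rank < extra else 0) for rank in range(nprocs)]

counts = block_partition(10, 3)
for rank, n in enumerate(counts):
    print("FiPy: %d cells on processor %d of %d" % (n, rank, len(counts)))
```

With partitioning working, `mpirun -np 3` would report on the order of 4, 3, and 3 cells (plus ghost cells) rather than 10 cells on every rank.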


> I can see that enthought python is built using gcc, but on the cluster I 
> built mpi4py with the Intel compiler. I believe I need to do so to be 
> compatible with the cluster's MPI environment and job queue (PBS). (?) The 
> above run was on a development node; not run through PBS.

I don't know that all the packages need to be compiled with the same compiler, 
but they must all be built against the same Python. If the packages are 
completely independent, then I don't think there are any ABI compatibility 
issues that would mandate using the same compiler. A number of FiPy's 
prerequisites link against NumPy, though, and as soon as you start linking, you 
are much more likely to need the same compiler.
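One quick way to see which compiler a given Python interpreter was built with is to ask `sysconfig` (hedged: this reports the `CC` recorded at build time, which is not necessarily the compiler currently on your PATH):

```python
import sysconfig

# CC recorded when this Python interpreter itself was compiled;
# may be None on builds that don't expose it (e.g. some Windows installs)
cc = sysconfig.get_config_var("CC")
print("This Python was built with:", cc)
```

Running this under the Enthought Python on the cluster would confirm the gcc build you observed.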

> Am I trying to mix apples and oranges? Should I build all the stuff, Python, 
> pysparse, etc. with the Intel compiler? Should I rebuild mpi4py with gcc?

Building everything with the same compiler is probably going to be the least 
trouble in the long run. If your cluster queue and MPI environment want the 
icc compiler, then that's what you should use.
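If you do standardize on icc, the rebuild might look something like the following. This is a sketch, not from the original thread; the environment variables and the `--mpicc` build option (which mpi4py's setup.py accepts for pointing at an MPI compiler wrapper) assume the cluster's Intel and MPI wrappers are already on your PATH:

```shell
# Assumption: icc/icpc and the cluster's mpicc wrapper are available,
# typically loaded via the site's environment-module system
export CC=icc
export CXX=icpc

# rebuild mpi4py against the cluster's MPI, installing into your
# own account since you don't have root
cd mpi4py-src   # hypothetical source directory
python setup.py build --mpicc=$(which mpicc)
python setup.py install --user
```

The same `CC`/`CXX` settings apply when building the other prerequisites (pysparse, PyTrilinos, etc.), so that everything is linked consistently.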



_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
