Thanks Dan,

Yes, I ran on 4 nodes (32 cores) and my log file returned the integers 0 
through 31 in randomized order. From other information in the PBS output I 
could see the names of the 4 nodes that were allocated, so I believe the 32 
processes were not all on one node.

Prior to this, I inserted the lines from the FiPy parallel example into my code 
and got the expected result from both MPI and Epetra:

from mpi4py import MPI
from PyTrilinos import Epetra

m4comm = MPI.COMM_WORLD   # mpi4py communicator
epcomm = Epetra.PyComm()  # Trilinos/Epetra communicator

myRank = m4comm.Get_rank()

# Both communicators should report the same rank and size on each process
mpi4py_info = "mpi4py: processor %d of %d" % (m4comm.Get_rank(),
                                              m4comm.Get_size())
trilinos_info = "PyTrilinos: processor %d of %d" % (epcomm.MyPID(),
                                                    epcomm.NumProc())
print " :: ".join((mpi4py_info, trilinos_info))

I still have the "from mpi4py import MPI" in my code and use the myRank == 0 
process to print out status as the program runs.
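For what it's worth, the rank-0-only printing can be factored into a small helper. This is just a sketch (the "status" name and signature are my own, not from FiPy or mpi4py); the rank is passed in explicitly so the logic can be exercised without an MPI launch:

```python
# Hypothetical helper mirroring the myRank == 0 pattern above:
# only the root process (rank 0) produces a status line, so the
# log is not duplicated once per process.
def status(rank, message):
    """Return the status line for rank 0, or None on all other ranks."""
    if rank == 0:
        return "status: %s" % message
    return None
```

In the actual program the rank would come from MPI.COMM_WORLD.Get_rank(), and the returned string (when not None) would be printed.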

Cheers,

Bill


On Mar 18, 2014, at 4:02 PM, Daniel Wheeler <[email protected]> wrote:

On Tue, Mar 18, 2014 at 7:28 AM, Seufzer, William J. (LARC-D307)
<[email protected]> wrote:
> Fipy developers,
> 
> I have Fipy running with Trilinos on our cluster but I can't seem to go 
> beyond a single node (with multiple cores).
> 
> I have a 3D .geo file that I built by hand that does not cause errors with 
> gmsh. The following line:
> 
> mesh = Gmsh3D(open('tmsh3d.geo').read())
> 
> works on one node and 8 cores but appears to hang on 2 nodes and 16 cores. I 
> have a print statement after this line that never executes. When I look at 
> how the nodes are utilized (we have a show_loads command) only one of the 
> nodes has busy cores.

Hi Bill,

I think that the first step is to confirm that Trilinos is working
correctly independently of FiPy. To do this, just run

   from PyTrilinos import Epetra
   print Epetra.PyComm().MyPID()

in a script. Does this work on more than one node?

-- 
Daniel Wheeler
_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
 [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]

