Dear All,
A couple of days ago Dr. Wheeler noted:
> Most of the examples that use grid meshes should give a speed up.
From comments by Eddie and Igor, I thought maybe
the 2D world would be a good place to start, so I
experimented with the mesh20x20 example. It would
appear that my most immediate problem with parallel
FiPy is not really the parallelization itself, but
rather the Trilinos solvers.
Here's a quick outline of why it looks that way:
1) used "copy_script" on the "mesh20x20.py"
example (in examples/diffusion), then filtered out
all the comment lines.
2) eliminated the calls to the viewer -- all I care
about is the solver
3) changed the 20x20 mesh in the example to
400x400
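For concreteness, the filtering in steps 1) through 3) could be scripted along
these lines (a sketch only -- the three-line stand-in for the copy_script
output is hypothetical, and the real file is of course longer):

```python
# Hypothetical stand-in for a few lines of the copy_script output:
with open("mesh20x20.py", "w") as f:
    f.write("nx = 20\n# a comment line\nviewer = Viewer(vars=phi)\n")

kept = []
for line in open("mesh20x20.py"):
    if line.lstrip().startswith("#"):   # 1) filter out the comment lines
        continue
    if "viewer" in line.lower():        # 2) eliminate the viewer calls
        continue
    # 3) change the 20x20 mesh to 400x400
    kept.append(line.replace("nx = 20", "nx = 400"))

with open("mesh400x400.py", "w") as f:
    f.writelines(kept)

print(open("mesh400x400.py").read())    # nx = 400
```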
After carrying out the steps above, I collected the
resulting 32 lines of code into a file called
"mesh400x400.py", and then compared the execution
times for 3 scenarios: the PySparse solver; the
Trilinos solver on one processor only; and the
Trilinos solver with mpi and 4 processors. Here are
the results:
solver                time
-------------------------------------------
pysparse              0m19.014s
trilinos w/ 1 proc    1m33.709s
trilinos w/ 4 procs   0m49.786s
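Working the ratios out from those times (a quick back-of-the-envelope check,
nothing FiPy-specific):

```python
# Wall-clock times from the table above, in seconds
pysparse    = 19.014
trilinos_1p = 93.709   # 1m33.709s
trilinos_4p = 49.786   # 0m49.786s

# Parallel speedup of Trilinos over itself, and its efficiency on 4 procs
speedup = trilinos_1p / trilinos_4p
efficiency = speedup / 4

# Serial overhead of Trilinos relative to PySparse
overhead = trilinos_1p / pysparse

print("speedup    = %.2fx" % speedup)               # speedup    = 1.88x
print("efficiency = %.0f%%" % (efficiency * 100))   # efficiency = 47%
print("overhead   = %.1fx vs pysparse" % overhead)  # overhead   = 4.9x
```

So the 4-processor run recovers less than half of the ideal 4x speedup, and
serial Trilinos starts out nearly 5x slower than PySparse.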
It would appear that the parallelization is indeed
working, but that it just doesn't have much to
work with. In each case I verified that the
FIPY_SOLVERS env variable was set correctly,
that I was using FiPy-2.1, and that the expected
number of processors was engaged. I did *not*
explicitly specify a particular solver.
The source for "mesh400x400.py" is included just
below this summary. The exact command line for
each of the 3 simple tests and the resulting output
follows the code listing. If anyone has the time
and inclination, it would be helpful to me if you could
run the 3 cases and report the results. If there are
better tests, I would be overjoyed to learn of them
(perhaps a different solver?).
Thanks...
======================================
Source for mesh400x400.py, derived from
../examples/diffusion/mesh20x20.py
from fipy import *

nx = 400
ny = nx
dx = 1.
dy = dx
L = dx * nx
mesh = Grid2D(dx=dx, dy=dy, nx=nx, ny=ny)

phi = CellVariable(name = "solution variable",
                   mesh = mesh,
                   value = 0.)

D = 1.

eq = TransientTerm() == DiffusionTerm(coeff=D)

valueTopLeft = 0
valueBottomRight = 1

x, y = mesh.getFaceCenters()
facesTopLeft = ((mesh.getFacesLeft() & (y > L / 2))
                | (mesh.getFacesTop() & (x < L / 2)))
facesBottomRight = ((mesh.getFacesRight() & (y < L / 2))
                    | (mesh.getFacesBottom() & (x > L / 2)))
BCs = (FixedValue(faces=facesTopLeft, value=valueTopLeft),
       FixedValue(faces=facesBottomRight, value=valueBottomRight))

timeStepDuration = 10 * 0.9 * dx**2 / (2 * D)
steps = 10
for step in range(steps):
    eq.solve(var=phi,
             boundaryConditions=BCs,
             dt=timeStepDuration)

print numerix.allclose(phi(((L,), (0,))), valueBottomRight, atol = 1e-2)

DiffusionTerm().solve(var=phi,
                      boundaryConditions = BCs)

print numerix.allclose(phi(((L,), (0,))), valueBottomRight, atol = 1e-2)
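Two numbers worth noting about the listing (plain arithmetic, independent of
FiPy): the mesh jumps from 400 cells to 160,000, and the time step used in the
transient loop works out to 4.5:

```python
# Mesh sizes: the original example vs. the scaled-up version
cells_20 = 20 * 20        # 400 unknowns
cells_400 = 400 * 400     # 160000 unknowns

# Time step from the listing: 10 * 0.9 * dx**2 / (2 * D), with dx = 1., D = 1.
dx = 1.
D = 1.
timeStepDuration = 10 * 0.9 * dx**2 / (2 * D)

print(cells_400 // cells_20)    # 400
print(timeStepDuration)         # 4.5
```

So each solve handles 400x as many unknowns as the original mesh20x20 example.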
======================================
Time PySparse on mesh400x400
python ver=2.6.3 (r263:75183, Oct 12 2009, 13:16:14)
[GCC 4.3.2]
fipy default solver=
fipy.solvers.pysparse.linearLUSolver.LinearLUSolver
fipy ver=2.1
$ time python mesh400x400.py
../site-packages/FiPy-2.1-py2.6.egg/fipy/solvers/pysparse/linearPCGSolver.py:70:
DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use
PyArray_NewFromDescr.
info, iter, relres = itsolvers.pcg(A, b, x, self.tolerance,
self.iterations, Assor)
True
True
real 0m19.014s
user 0m18.601s
sys 0m0.368s
======================================
Time Trilinos on mesh400x400...
python ver=2.6.3 (r263:75183, Oct 12 2009, 13:16:14)
[GCC 4.3.2]
fipy default solver=
fipy.solvers.trilinos.linearGMRESSolver.LinearGMRESSolver
fipy ver=2.1
..first with 1 processor -- i.e., without mpi
$ time python mesh400x400.py
True
True
*** An error occurred in MPI_Comm_free
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[WillardGibbs:32404] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
real 1m33.709s
user 1m32.826s
sys 0m0.876s
$ time mpiexec -n 4 python mesh400x400.py
True
True
True
True
True
True
True
True
*** An error occurred in MPI_Comm_free
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[WillardGibbs:32442] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Comm_free
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[WillardGibbs:32444] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Comm_free
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[WillardGibbs:32443] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
*** An error occurred in MPI_Comm_free
*** after MPI was finalized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[WillardGibbs:32445] Abort after MPI_FINALIZE completed successfully;
not able to guarantee that all other processes were killed!
real 0m49.785s
user 3m13.156s
sys 0m1.504s