On Sat, May 22, 2010 at 11:07 AM, jtg <hi.jo...@gmail.com> wrote:

> 1) Is there an example in the FiPy 2.1 release that demonstrates a
> performance improvement from using the trilinos solvers in parallel?
> (That is, is there an example that runs in a certain amount of time
> with the pysparse solvers and then less time using mpirun and the
> trilinos solvers?)  I'm hoping to compare the set-up for something
> that "works" with what we are doing in order to look for clues as to
> the difference.

Most of the examples that use grid meshes should give a speed up. I
will run one of the examples and give you some numbers. I'll try to
do this within the week.
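In the meantime, if you want to try the comparison yourself, a rough
timing run looks something like the following (the example path and
solver flags assume a FiPy 2.1 checkout and a working mpi4py/Trilinos
install; substitute whatever script you are actually benchmarking):

```shell
# Serial baseline using the PySparse solvers
time python examples/diffusion/mesh20x20.py --pysparse

# The same example on 4 processes using the Trilinos solvers
time mpirun -np 4 python examples/diffusion/mesh20x20.py --trilinos
```

Comparing the wall-clock times from the two runs is the quickest way to
see whether your problem is benefiting from the parallel solvers.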

> 2) Just in a very general way, what features would you expect a FiPy
> problem to have that would lend themselves to improved performance
> under the parallelization scheme FiPy 2.1 implements?

Obviously, you have to be using one of the grid meshes, with the
longest axis along the x, y or z direction in 1D, 2D or 3D. The
current partitioning scheme just slices the grid along a single axis;
it is utterly trivial and suboptimal, but it worked for the problems
that I was interested in (square grids). Our hope is to use PyMetis or
Gmsh to implement better partitioning. The parallel changes to the
code are independent of the partitioning scheme, so this shouldn't
require major changes beyond the Gmsh mesh classes. Hopefully it will
be part of the next release, but no guarantees right now.
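To make the "trivial 1D" scheme concrete, here is a minimal sketch of
slab partitioning: the grid is cut into contiguous slabs along one
axis and each process gets one slab. The function name and interface
are hypothetical, purely for illustration, not FiPy's actual
implementation.

```python
def slab_partition(n_cells_along_axis, n_procs):
    """Split an axis of n cells into contiguous (start, stop) ranges,
    one per process, as evenly as possible.

    Hypothetical illustration of 1D slab decomposition; not FiPy code.
    """
    base, extra = divmod(n_cells_along_axis, n_procs)
    ranges = []
    start = 0
    for rank in range(n_procs):
        # The first `extra` ranks each take one leftover cell
        stop = start + base + (1 if rank < extra else 0)
        ranges.append((start, stop))
        start = stop
    return ranges

# Example: 10 cells along x split over 3 processes
print(slab_partition(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```

A long thin domain partitions well under this scheme (each slab keeps
a small communication boundary relative to its volume), which is why
having the longest axis along a coordinate direction matters.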

--
Daniel Wheeler
