On Mon, Sep 29, 2014 at 4:09 PM, Serbulent UNSAL <[email protected]>
wrote:

> Thanks for the help.
>
> Using LinearPCGSolver gives some more speed, but it is still slower than
> the serial version (41 sec vs. 55 sec).
>

Serbulent,

It seems that in this notebook,
http://nbviewer.ipython.org/github/wd15/fipy-efficiency/blob/master/notebooks/FiPy-IPython.ipynb,
the serial PCG solver using Trilinos is faster than the serial PySparse
solver without preconditioning. Preconditioning with Trilinos can often
carry a heavy efficiency cost.



>
> I tried to use a Gmsh mesh with the workaround described at
> http://wd15.github.io/2014/01/30/fipy-trilinos-anaconda/,
> but Gmsh is even slower than the normal mesh.
>

I don't think I factored that in when writing up the notebooks. I might
look into it.


>
> Since I use a big mesh (400x400), I think I will run into the
> communication issue described at the end of the first notebook.
>

Take a look at the second notebook,
http://nbviewer.ipython.org/github/wd15/fipy-efficiency/blob/master/notebooks/cluster.ipynb;
a 400x400 mesh should be large enough to show some reasonable scaling.
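As a rough sanity check on the communication concern, here is a back-of-the-envelope sketch of how the ghost-cell (communicated) fraction of a 400x400 mesh grows as it is split across more processes. The strip decomposition and one-cell-deep ghost layers are illustrative assumptions, not what FiPy/Trilinos actually does internally:

```python
# Illustrative only: estimate the fraction of cells that must be
# communicated when an nx-by-ny mesh is split into horizontal strips,
# one strip per process, with one-row-deep ghost layers.

def ghost_fraction(nx, ny, nproc):
    """Fraction of an interior strip's cells that are ghost cells.

    Edge strips have only one neighbor; that detail is ignored here
    for simplicity.
    """
    rows_per_proc = ny // nproc
    interior = nx * rows_per_proc
    ghosts = 2 * nx  # one ghost row exchanged with each of two neighbors
    return ghosts / (interior + ghosts)

for nproc in (2, 4, 8, 16):
    print(nproc, round(ghost_fraction(400, 400, nproc), 3))
```

The ghost fraction stays small (a few percent) up to moderate process counts on a 400x400 mesh, which is consistent with expecting reasonable scaling at that size.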



>
> I'm open for any alternative strategies and suggestions.
>

I am not sure what to suggest. Certainly try disabling the preconditioners
in Trilinos and see what happens. It may be that FiPy needs to use
preconditioning in a less naive way (not preconditioning at every sweep;
there may be ways to do that with Trilinos).
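To make the trade-off concrete, here is a minimal pure-Python sketch of conjugate gradient with an optional Jacobi (diagonal) preconditioner. It is not FiPy's or Trilinos's implementation, and the test matrix is made up; it only illustrates how a preconditioner trades extra per-iteration work for fewer iterations, which is exactly what is worth measuring when deciding whether to disable it:

```python
# Sketch: conjugate gradient with an optional preconditioner, on a small
# made-up SPD system. Illustrates the iteration-count vs. per-sweep-cost
# trade-off; not FiPy's actual solver code.

def cg(A, b, precon=None, tol=1e-10, max_iter=1000):
    """Solve A x = b; `precon` maps a residual to a preconditioned one."""
    n = len(b)
    x = [0.0] * n
    r = b[:]  # residual equals b since x starts at zero
    z = precon(r) if precon else r[:]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(1, max_iter + 1):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, it
        z = precon(r) if precon else r[:]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, max_iter

# 1-D Laplacian-like SPD matrix (a stand-in for a diffusion operator)
n = 50
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.0 + 0.1 * i  # varying diagonal, so Jacobi scaling helps
    if i > 0:
        A[i][i - 1] = -1.0
    if i < n - 1:
        A[i][i + 1] = -1.0
b = [1.0] * n

jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]

x_plain, it_plain = cg(A, b)               # no preconditioner
x_prec, it_prec = cg(A, b, precon=jacobi)  # Jacobi preconditioner
print("iterations:", it_plain, "vs", it_prec)
```

Here the preconditioner is cheap to apply; a heavy preconditioner (as Trilinos's defaults can be), rebuilt at every sweep, can easily cost more than the iterations it saves, which is why disabling it, or reusing it across sweeps, is worth timing.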

-- 
Daniel Wheeler
_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
