Thanks for the help.

Using LinearPCGSolver gives some more speed, but it is still slower than the
serial version (41 sec vs. 55 sec).
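For reference, a minimal sketch of how I measure these totals (the `step_fn`
below is a hypothetical stand-in for one solver step, not actual FiPy code):

```python
import time

def timed_run(nsteps, step_fn):
    """Time nsteps calls of step_fn; format like the totals quoted below."""
    start = time.time()
    for _ in range(nsteps):
        step_fn()
    elapsed = time.time() - start
    return time.strftime("%H:%M:%S", time.gmtime(elapsed))

# step_fn is a placeholder; in the real script it would be the solve call
print("Total time for 100 steps:", timed_run(100, lambda: None))
```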

I tried using a Gmsh mesh with the workaround described at
http://wd15.github.io/2014/01/30/fipy-trilinos-anaconda/
but the Gmsh mesh is even slower than the regular mesh.

Since I use a big mesh (400x400), I think I am running into the communication
issue described at the end of the first notebook.
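A rough back-of-the-envelope estimate of that overhead (this assumes a simple
horizontal-slab decomposition of the 400x400 grid; how FiPy/Trilinos actually
partitions the mesh may differ):

```python
# Communication-vs-computation estimate for a 400x400 grid split into
# horizontal slabs across nprocs processes. (Slab decomposition is an
# assumption; the real partitioning may differ.)
nx = ny = 400

def halo_fraction(nprocs):
    """Ghost-cell rows exchanged per step, as a fraction of owned cells."""
    if nprocs == 1:
        return 0.0  # serial run: no ghost cells to exchange
    rows_per_proc = ny // nprocs
    neighbours = 2 if nprocs > 2 else 1  # interior slabs have two neighbours
    return (neighbours * nx) / (rows_per_proc * nx)

for n in (1, 2, 4, 8):
    print("%d cores: %.1f%% of cells are ghost cells" % (n, 100 * halo_fraction(n)))
```

Even a few percent of ghost cells can hurt when every step needs several
synchronizations, since latency rather than data volume is often the bottleneck.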

I'm open to any alternative strategies and suggestions.


2014-09-29 21:51 GMT+03:00 Kris Kuhlman <[email protected]>:

> there are a couple of recent IPython notebooks that demonstrate the parallel
> capabilities (and issues) of FiPy.  Maybe they are useful?
>
> http://wd15.github.io/2014/02/20/parallel-fipy-in-ipython/
>
>
> On Mon, Sep 29, 2014 at 12:00 PM, Serbulent UNSAL <[email protected]>
> wrote:
>
>> Hi Again,
>>
>> To simplify the problem from my previous mail, I used the mesh20x20 example
>> with a 400x400 mesh. The code is below, and the results show the same problem:
>>
>> http://pastebin.com/iUPSs6UJ
>>
>> With the standard solver:
>> Total time for 100 steps: 00:00:41
>>
>> With Trilinos on a single core:
>> Total time for 100 steps: 00:02:17
>>
>> With Trilinos on 8 cores:
>> Total time for 100 steps: 00:01:19
>>
>> Still the same issue with Trilinos :(
>>
>> Thanks,
>>
>> Serbulent
>>
>>
>
_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
