On Wed, Oct 8, 2014 at 3:52 AM, Serbulent UNSAL <[email protected]>
wrote:


> There should be some communication overhead, but this couldn't explain a
> 2.5 times slower solution.
>
> So maybe it is a good idea to forward the problem to Trilinos upstream if
> you also confirm the results with 40,000 cells vs. 160,000 cells.
>

It seems clear to me that the big shared-memory machines scale very well,
but the smaller workstations have problems. I believe I'm using the same
underlying numerical libraries on both systems.


>
> Serbulent
>
> PS: If you decide to open a bug report with Trilinos, please share the
> report number so I can try to follow it and contribute to a solution.
>

I'm not planning on doing that. Maybe the better approach is a question to
the mailing list demonstrating the behavior on different architectures with
a simple PyTrilinos example (not FiPy). Whatever is going on is probably
well understood by people who are into HPC.
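
For reference, below is a minimal sketch of the kind of standalone
PyTrilinos timing test I have in mind. The 1-D Laplacian assembly, the
AztecOO/CG solver choice, and the problem size are my own assumptions for
illustration, not anything FiPy actually does:

    # Hypothetical standalone timing script: assemble a 1-D Laplacian and
    # time an AztecOO CG solve across MPI processes. Problem size and
    # solver options are illustrative assumptions only.
    import time
    from PyTrilinos import Epetra, AztecOO

    comm = Epetra.PyComm()
    n = 160000                        # global number of rows (illustrative)
    rowMap = Epetra.Map(n, 0, comm)   # rows distributed across processes

    # Tridiagonal (1-D Laplacian) matrix, assembled row by row
    A = Epetra.CrsMatrix(Epetra.Copy, rowMap, 3)
    for gid in rowMap.MyGlobalElements():
        gid = int(gid)
        if gid > 0:
            A.InsertGlobalValues(gid, [-1.0], [gid - 1])
        if gid < n - 1:
            A.InsertGlobalValues(gid, [-1.0], [gid + 1])
        A.InsertGlobalValues(gid, [2.0], [gid])
    A.FillComplete()

    x = Epetra.Vector(rowMap)
    b = Epetra.Vector(rowMap)
    b.PutScalar(1.0)

    solver = AztecOO.AztecOO(A, x, b)
    solver.SetAztecOption(AztecOO.AZ_solver, AztecOO.AZ_cg)

    comm.Barrier()
    t0 = time.time()
    solver.Iterate(1000, 1e-8)
    comm.Barrier()
    if comm.MyPID() == 0:
        print("solve time: %.3f s on %d processes"
              % (time.time() - t0, comm.NumProc()))

Running something like this with "mpirun -np 1" and "mpirun -np 4" on both
the workstation and the shared-memory machine, and comparing wall-clock
times, would separate FiPy from whatever Trilinos/MPI is doing underneath.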

-- 
Daniel Wheeler
_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
