Hello,

Thanks for the detailed info. Confirming the parallel run was my first
step: I used both parallel.py and printed "fipy.parallelComm.procID",
as well as the mpi4py rank and Epetra's rank via
"Epetra.PyComm().MyPID()".

So I'm definitely sure :)
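For reference, the three rank checks can be combined into one small script, something like the sketch below (the script name check_parallel.py is hypothetical; run it with "mpirun -np 2 python check_parallel.py"). Each backend is probed independently, so a missing module is reported rather than crashing the script:

```python
def probe(label, getter):
    """Return a one-line rank report, tolerating missing modules."""
    try:
        return "%s: %s" % (label, getter())
    except ImportError:
        return "%s: unavailable" % label

# The three process-ID checks mentioned above: FiPy, mpi4py, and Epetra.
checks = [
    ("fipy procID",
     lambda: __import__("fipy").parallelComm.procID),
    ("mpi4py rank",
     lambda: __import__("mpi4py.MPI", fromlist=["MPI"]).COMM_WORLD.Get_rank()),
    ("Epetra MyPID",
     lambda: __import__("PyTrilinos.Epetra", fromlist=["Epetra"]).PyComm().MyPID()),
]
for label, getter in checks:
    print(probe(label, getter))
```

Under "mpirun -np 2", each of the two processes should print a distinct ID (0 or 1) for all three checks; if every process prints 0, the job is not actually running in parallel.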

Your tests are consistent with my results, at least from 1 to 8 cores,
since I have no parallel environment beyond 8 cores.

But as I mentioned in my last mail, I think something is wrong with
Trilinos. I wrote:

"I try to solve a mesh with 40,000 cells (200x200) on 1 core with Trilinos,
and it takes about 17 seconds.
Then I give 160,000 cells to 4 cores (40,000 cells on each core), and the
result is 46 seconds.

There should be some communication overhead, but that couldn't explain a
2.5-times-slower solution."
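To spell out why these numbers look bad: this is a weak-scaling setup, where the work per core is held fixed at 40,000 cells, so the ideal runtime would stay near 17 seconds regardless of core count. A quick calculation with the quoted timings:

```python
# Weak-scaling check on the timings quoted above.
t_1core = 17.0  # seconds: 40,000 cells on 1 core
t_4core = 46.0  # seconds: 160,000 cells on 4 cores (40,000 cells per core)

# Ideal weak scaling keeps runtime constant, so any factor above 1.0
# beyond modest communication overhead points at a real problem.
slowdown = t_4core / t_1core
efficiency = t_1core / t_4core

print("slowdown:   %.1fx" % slowdown)
print("efficiency: %.0f%%" % (100 * efficiency))
```

This gives roughly a 2.7x slowdown, i.e. about 37% weak-scaling efficiency, which is indeed far more than communication overhead alone should cost on a mesh this size.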

So it may be a good idea to forward the problem to Trilinos upstream, if
you can also confirm the results with 40,000 cells vs. 160,000 cells.

Serbulent

PS: If you decide to open a bug report against Trilinos, please share the
report number so I can follow it and contribute to a solution.

2014-10-02 20:11 GMT+03:00 Guyer, Jonathan E. Dr. <[email protected]>:

>
> On Oct 2, 2014, at 11:27 AM, Daniel Wheeler <[email protected]>
> wrote:
>
> > Also, are you certain you are running in parallel? This has caught me
> out before. Print "fipy.parallelComm.procID", also print the mpi4py version
> of procID and Epetra's version, "Epetra.PyComm().MyPID()".
>
> mpirun -np 2 python examples/parallel.py
> _______________________________________________
> fipy mailing list
> [email protected]
> http://www.ctcms.nist.gov/fipy
>   [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
>
