On the previous episode of "Solving in Parallel"...

From NIST HQ, Dr. Guyer reported his Pysparse / Trilinos
comparison results...

> ...running examples/phase/anisotropy.py (*with* viewers)
> for 10 steps
>
> solver             100x100      500x500      1000x1000
>
> pysparse               5.75          65.8          287
> Trilinos w 1 proc   12.6          176             844
> Trilinos w 2 proc   13.7          134             710


Meanwhile, here at the Chicken of the Sea Think Tank:

Machine Info:
o CPUs (/proc/cpuinfo helpfully reports):
4 processors -- Dual Core Opteron Processor 280s,
running at 2411.111 MHz.

o kernel (/proc/version says):
Linux version 2.6.26-2-amd64...

o memory: 8,200,116k

Eddie brought up a point about the compiler: we compiled
Trilinos, OpenMPI, and all the support libraries that didn't
come with the OS (basically a Debian release) -- notably many
of the numeric libraries -- with GCC 4.3.2.

Continuing with Dr. Guyer's anisotropy example, here are
our times for his three mesh sizes (same 10 steps, but no
viewer):

  solver             100x100  500x500  1000x1000
  ----------------------------------------------
  pysparse               2.4       43        171  seconds
  trilinos 1 proc        4.5      103        480
  trilinos 2 proc        2.9       54        263
  trilinos 4 proc        3.3       37        192
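
For what it's worth, the parallel speedup and efficiency implied by
our Trilinos numbers can be checked with a few lines of arithmetic
(times copied from the table above; the script is just a sanity
check, not part of the benchmark itself):

```python
# Our Trilinos wall-clock times (seconds, 10 steps), keyed by
# mesh size and processor count, copied from the table above.
times = {
    "500x500":   {1: 103, 2: 54, 4: 37},
    "1000x1000": {1: 480, 2: 263, 4: 192},
}

for mesh, t in times.items():
    for procs in (2, 4):
        speedup = t[1] / t[procs]        # 1-proc time / n-proc time
        efficiency = speedup / procs     # 1.0 would be perfect scaling
        print(f"{mesh}: {procs} procs -> "
              f"speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```

By this measure the 500x500 mesh scales a bit *better* than the
1000x1000 mesh at both 2 and 4 processors, which is part of what
makes the middle column look odd.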

Note the near equivalence of the Trilinos-1-proc to Pysparse
ratio between Dr. Guyer's results and ours on the 1000x1000
mesh: 2.9 versus 2.8. It is our results for 2 and 4
processors on the 500x500 mesh that seem "inconsistent" in
this little table.

As Alice once noted, "Curiouser and curiouser"...
