Hi,

On 02/15/2012 03:11 PM, Timo Heister wrote:
Hi,

> In principle I see several different possibilities how to do that. One
> could either use tasks or threads to solve the linear systems
> simultaneously, or use Trilinos or PETSc to solve them one after
> another, but using multiple MPI processes.
> The first option has the advantage that there is no communication
> required between the threads. The disadvantage is that you need the
> memory to store all matrices at the same time. The second case will be
> a bit slower, but you only need to keep one L_i around at a time.
I have to keep the L_i around for other reasons anyway: they are quite expensive to assemble, and I need them in every time step. So for me there is no real advantage or disadvantage either way.
>> I did some tests with Tasks [...] However I only observe a speedup of
>> <= 10 % over the serial solution, which I find a bit disappointing.
> How do you measure that? (You have to make sure to compare wall clock
> time.) One option is to use the Linux tool "time".
> How many seconds does your problem take in total (it mustn't be too
> fast to get reliable results)? Which parts are done in parallel? What
> kind of load does "top" show while your code is running?

I used clocks from the <ctime> header of the C++ standard library. The speedup was around 2-10% for all runtimes between 0.2 and 30 seconds, for problem sizes of around 20000 to 70000 dofs, and between 1 and 100 repetitions. The Linux "time" results are consistent with that.

Each task calculates tmp = L_i b, then solves M x_i = tmp, and then returns.

Top shows high load on all processors, even for the "serial" version.

In another reply that came shortly after yours, Matthias Meier mentioned that SolverCG uses multiple threads. I was not aware of that, but it explains everything I observed in my tests. So it seems that SolverCG already uses all my cores so efficiently that further efforts are unnecessary.

Thank you for your efforts

Johannes

_______________________________________________
dealii mailing list http://poisson.dealii.org/mailman/listinfo/dealii
