Niclas Jansson wrote:
> Niclas Jansson <[email protected]> writes:
> 
>> [email protected] (DOLFIN) writes:
>>
>>> One or more new changesets pushed to the primary dolfin repository.
>>> A short summary of the last three changesets is included below.
>>>
>>> changeset:   6855:51f268bf79d79b0de5d3cb20a7cacb35357c81ce
>>> tag:         tip
>>> user:        "Garth N. Wells <[email protected]>"
>>> date:        Wed Aug 26 14:30:18 2009 +0100
>>> files:       demo/pde/elasticity/cpp/main.cpp dolfin/fem/Assembler.cpp
>>> description:
>>> Add OpenMP code to assembler (commented out for now).
>>>
> 
> If insertion is done in a critical section, there would be virtually
> no scope for speed-up (for a larger number of threads)
> 

I have a problem where the element tensors are very complicated and 
costly to compute and the linear solve is cheap; with a small number of 
threads I get a modest speed-up. Annoyingly, PETSc dies when calling 
VecGetValues with more than one thread.
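
Roughly, the pattern I've been trying looks like the sketch below 
(hypothetical names only, not the code in the changeset; the stub 
functions stand in for the real DOLFIN/UFC and PETSc calls):

#include <vector>

// Stand-ins for the real machinery (for illustration only).
void tabulate_tensor(std::vector<double>& Ae, int cell)
{ /* expensive element kernel */ }
void add_to_global(const std::vector<double>& Ae, int cell)
{ /* e.g. insertion via the linear algebra backend */ }

void assemble_cells(int num_cells, int tensor_size)
{
  #pragma omp parallel
  {
    // Each thread gets its own scratch space for the element tensor.
    std::vector<double> Ae(tensor_size);

    #pragma omp for
    for (int c = 0; c < num_cells; ++c)
    {
      tabulate_tensor(Ae, c);   // dominates the cost in my problem

      // Insertion is serialised here, so the speed-up depends on how
      // cheap this step is relative to tabulate_tensor.
      #pragma omp critical
      add_to_global(Ae, c);
    }
  }
}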

> A better idea would probably be to either
> 
> 1) Partition the mesh, and only have a critical section for shared
> cells.
> 
> 2) Assemble into a private matrix per thread and either sum them,
> A = A_1 + ... + A_n, or combine them with some fancy tree reduction
> algorithm (sketched below).
> 
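
For the record, option 2 would look something like the sketch below 
(again hypothetical names; a dense array stands in for the sparse 
global matrix):

#include <vector>

void tabulate_tensor(std::vector<double>& Ae, int cell)
{ /* element kernel */ }
void add_local(std::vector<double>& A, const std::vector<double>& Ae, int cell)
{ /* local insertion, no locking needed */ }

std::vector<double> assemble_private(int num_cells, int global_size,
                                     int tensor_size)
{
  std::vector<double> A(global_size, 0.0);

  #pragma omp parallel
  {
    // Each thread assembles into its own private copy, so the cell
    // loop needs no synchronisation at all.
    std::vector<double> A_private(global_size, 0.0);
    std::vector<double> Ae(tensor_size);

    #pragma omp for nowait
    for (int c = 0; c < num_cells; ++c)
    {
      tabulate_tensor(Ae, c);
      add_local(A_private, Ae, c);
    }

    // Reduction step: A = A_1 + ... + A_n. This flat sum serialises
    // the merges; a tree reduction would cut it to O(log n) parallel
    // steps.
    #pragma omp critical
    for (int i = 0; i < global_size; ++i)
      A[i] += A_private[i];
  }
  return A;
}

The obvious trade-off is memory: with a sparse global matrix, each 
thread's private copy costs a full extra matrix.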

The best idea is for us to complete the message passing-based 
parallelisation ;). I have interior facet integrals, which are not yet 
supported there, which is why I've tried OpenMP inside the assembler.

Garth

> Niclas

_______________________________________________
DOLFIN-dev mailing list
[email protected]
http://www.fenics.org/mailman/listinfo/dolfin-dev
