Thank you, everything is clear now and I managed to accomplish what I wanted.
On Tuesday, 3 September 2019 19:09:00 UTC-4, Wolfgang Bangerth wrote:
> > Is there a difference between how DynamicSparsityPatterns and
> > BlockDynamicSparsityPatterns behave?
> The latter is just an array of the former. Under the hood, every block
> is simply a DynamicSparsityPattern that can be initialized in the same
> way one always does.
> > Thank you very much for your message.
> When you look at step-40, which is the first "MPI" step, the way the
> sparsity pattern is made is [...]
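(For reference, the step-40 construction looks roughly like the following. This is a sketch from the tutorial; the exact signature of `SparsityTools::distribute_sparsity_pattern` has changed slightly between deal.II releases, so check the version you are using:)

```cpp
// Sketch of step-40's sparsity pattern setup. Assumes dof_handler,
// constraints, locally_owned_dofs, locally_relevant_dofs and
// mpi_communicator have been set up as in the tutorial.
DynamicSparsityPattern dsp(locally_relevant_dofs);

DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);

// Exchange entries for rows a process touches but does not own:
SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.locally_owned_dofs(),
                                           mpi_communicator,
                                           locally_relevant_dofs);

system_matrix.reinit(locally_owned_dofs,
                     locally_owned_dofs,
                     dsp,
                     mpi_communicator);
```

The key point is that `dsp` is constructed from `locally_relevant_dofs`, so each process only allocates rows it actually needs.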
> I don't quite recall if we ever used the BlockDynamicSparsityPattern in a
> parallel context. For sure, the way you're initializing it implies that every
> process allocates the memory for all DoFs as it's not given the information
> about locally_relevant_dofs. I'd have to look up whether [...]
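(In case it helps others reading this thread: step-55 does use a BlockDynamicSparsityPattern in parallel, by initializing it from per-block IndexSets of the locally relevant DoFs so that no process allocates rows for all DoFs. A sketch, where `owned_partitioning` and `relevant_partitioning` are assumed to be `std::vector<IndexSet>` obtained by splitting `locally_owned_dofs` / `locally_relevant_dofs` at the block boundary:)

```cpp
// Sketch, step-55-style: give each block only the locally relevant rows.
// Assumes owned_partitioning / relevant_partitioning are
// std::vector<IndexSet>, e.g. built with IndexSet::get_view() at the
// u/p block boundary, and that constraints etc. exist as usual.
BlockDynamicSparsityPattern dsp(relevant_partitioning);

DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.locally_owned_dofs(),
                                           mpi_communicator,
                                           locally_relevant_dofs);

system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);
```

With this, each block behaves like the DynamicSparsityPattern in step-40, just restricted to its own index range.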
> > I am currently working on a parallel implementation of step-57, thus I am
> > learning to live with BlockVectors, BlockMatrices and BlockSparsityPatterns.
> > Originally, I thought that I could make my sparsity pattern the following
> > way (i.e. as in step-57, but distributing it [...])
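(For context, the serial step-57-style construction the question starts from looks roughly like this; a sketch, where `dofs_per_block` is assumed to hold the number of DoFs per block and the exact `DoFTools` helper name varies across deal.II versions:)

```cpp
// Sketch of a serial, step-57-style block sparsity pattern.
// Every process would allocate the full pattern here, which is
// exactly the problem in a distributed setting.
BlockDynamicSparsityPattern dsp(dofs_per_block, dofs_per_block);

DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints);
sparsity_pattern.copy_from(dsp);
system_matrix.reinit(sparsity_pattern);
```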