While trying to port step-15 to an MPI-capable version of the program, I 
came to the following point in the code:
 for (unsigned int i=0; i<dofs_per_cell; ++i)
   {
     for (unsigned int j=0; j<dofs_per_cell; ++j)
       system_matrix.add (local_dof_indices[i],
                          local_dof_indices[j],
                          cell_matrix(i,j));

     system_rhs(local_dof_indices[i]) += cell_rhs(i);
   }
In comparison, every example that uses PETSc::MPI calls
 hanging_node_constraints.distribute_local_to_global (cell_matrix, cell_rhs,
                                                      local_dof_indices,
                                                      system_matrix, system_rhs);

The first version does not work for PETScWrappers::MPI::SparseMatrix due to 
the lack of .add(). Now, in order to use distribute_local_to_global(), my 
local matrices have to be full matrices; on the other hand, only sparse 
matrices support the .add() function.

Are there possible solutions to this in the examples that I did not see? Or 
maybe other suggestions?
Thanks!


-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
