On 9/27/22 11:54, Rahul Gopalan Ramachandran wrote:

I am having a problem applying static condensation, as explained in step-44, in parallel. The objective is to employ a parallel solver. Any guidance on the issue would be very helpful.

As of now, the code works when run on a single MPI process, but fails when np > 1. The error originates from the following part of the code, where it tries to read elements of the tangent_matrix, which is a PETScWrappers::MPI::BlockSparseMatrix. The issue arises when trying to access DoFs not owned by the current MPI process at the partition boundary.

//-----------------------------------------------------------------------------
for (unsigned int i = 0; i < dofs_per_cell; ++i)
  for (unsigned int j = 0; j < dofs_per_cell; ++j)
    data.k_orig(i, j) = tangent_matrix.el(data.local_dof_indices[i],
                                          data.local_dof_indices[j]);
//-----------------------------------------------------------------------------


Rahul -- the problem is that, unlike vectors, matrices are generally not written to use data structures that provide you with the equivalent of "ghost elements". That is because, in most applications, matrices are "write only": you build the matrix element-by-element, but you never read from it element-by-element. As a consequence, people don't provide the facilities to let you read elements stored on another process.

If you need this kind of functionality, you probably have to build it yourself. The way I would approach this is by looking at your algorithm and figuring out which matrix elements you need to be able to read for rows you do not locally own (most likely the rows that correspond to locally relevant but not locally owned DoFs; it may also be the locally active but not locally owned ones). You will then have to read these elements on the process that owns them and send them to all processes that need them but don't own them. In the current development version, you can use functions such as Utilities::MPI::Isend to do that with whatever data structure you find convenient, but it can also be done with standard MPI calls.

This isn't particularly convenient, but it is the best anyone can offer short of writing the necessary functionality in PETSc itself.

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 [email protected]
                           www: http://www.math.colostate.edu/~bangerth/

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en