Hello Dr. Bangerth,

Thank you for the clarification. Utilities::MPI::Isend seems to be the way to 
patch this. Alternatively, what is your opinion on copying the matrix to a 
single process (if that is possible) using MPI_Gather? Would the cell 
iterators then no longer work? Ignoring scalability, would this also be an 
option?

Regards,
Rahul

> On Sep 28, 2022, at 5:05 AM, Wolfgang Bangerth <[email protected]> wrote:
> 
> On 9/27/22 11:54, Rahul Gopalan Ramachandran wrote:
>> I am having a problem applying static condensation as explained in 
>> step-44 in parallel. The objective is to employ a parallel solver. Any 
>> guidance on the issue would be very helpful.
>> As of now, the code works when run on a single MPI process, but fails when 
>> np > 1. The error originates from the following part of the code, where it 
>> tries to read elements of the tangent_matrix, which is a 
>> PETScWrappers::MPI::BlockSparseMatrix. The issue arises when trying to 
>> access dofs not owned by the MPI process at the partition boundary.
>> //-----------------------------------------------------------------------------
>> for (unsigned int i = 0; i < dofs_per_cell; ++i)
>>   for (unsigned int j = 0; j < dofs_per_cell; ++j)
>>     data.k_orig(i, j) = tangent_matrix.el(data.local_dof_indices[i],
>>                                           data.local_dof_indices[j]);
>> //-----------------------------------------------------------------------------
> 
> Rahul -- the problem is that, unlike vectors, matrices are generally not 
> written to use data structures that provide you with the equivalent of "ghost 
> elements". That is because, in most applications, matrices are "write only": 
> you build the matrix element-by-element, but you never read from it 
> element-by-element. As a consequence, people don't provide the facilities to 
> let you read elements stored on another process.
> 
> If you need this kind of functionality, you probably have to build it 
> yourself. The way I would approach this is by looking at your algorithm, 
> figuring out which matrix elements you need to be able to read for rows not 
> locally owned (most likely the rows that correspond to locally-relevant but 
> not locally-owned degrees of freedom; it may also be the locally-active but 
> not locally-owned ones). You
> will then have to read these elements on the process that owns them and send 
> them to all processes that need them but don't own them. In the current 
> development version, you can use functions such as Utilities::MPI::Isend to 
> do that with whatever data structure you find convenient, but it can also be 
> done with standard MPI calls.
> 
> This isn't particularly convenient, but it is the best anyone can offer short 
> of writing the necessary functionality in PETSc itself.
> 
> Best
> W.
> 
> -- 
> ------------------------------------------------------------------------
> Wolfgang Bangerth          email:                 [email protected]
>                           www: http://www.math.colostate.edu/~bangerth/

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/FC88F9C7-CCA5-451A-B06C-1328164E4A68%40gmail.com.