Hi,

    I have a multiphysics application with discipline1 defined on communicator 
comm1 and discipline2 on comm2.

    My intent is to use a nested matrix (MatNest) as the operator for the KSP 
solver, where each diagonal block is provided by the corresponding discipline 
and the off-diagonal coupling blocks are defined as shell matrices with 
user-supplied matrix-vector products.
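
    Roughly, the structure I have in mind is something like the sketch below 
(all names and sizes are illustrative, and for simplicity everything here 
lives on a single communicator -- the communicator layout is exactly what I am 
unsure about):

#include <petscksp.h>

/* Placeholder matvecs for the coupling blocks; the real ones would apply the
   discipline1 <-> discipline2 coupling. Here they just return a zero vector. */
static PetscErrorCode OffDiagMult12(Mat A, Vec x, Vec y)
{
  PetscFunctionBeginUser;
  PetscCall(VecSet(y, 0.0));
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode OffDiagMult21(Mat A, Vec x, Vec y)
{
  PetscFunctionBeginUser;
  PetscCall(VecSet(y, 0.0));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* A11, A22 are the discipline-provided diagonal blocks; n1loc/n1 and n2loc/n2
   are the local/global sizes of the two sub-problems. */
static PetscErrorCode BuildNestedSolver(MPI_Comm comm, Mat A11, Mat A22,
                                        PetscInt n1loc, PetscInt n1,
                                        PetscInt n2loc, PetscInt n2,
                                        Mat *Anest, KSP *ksp)
{
  Mat A12, A21, blocks[4];

  PetscFunctionBeginUser;
  /* Off-diagonal coupling blocks as shell matrices with user matvecs */
  PetscCall(MatCreateShell(comm, n1loc, n2loc, n1, n2, NULL, &A12));
  PetscCall(MatShellSetOperation(A12, MATOP_MULT, (void (*)(void))OffDiagMult12));
  PetscCall(MatCreateShell(comm, n2loc, n1loc, n2, n1, NULL, &A21));
  PetscCall(MatShellSetOperation(A21, MATOP_MULT, (void (*)(void))OffDiagMult21));

  /* 2x2 nested operator [A11 A12; A21 A22] and a KSP that uses it */
  blocks[0] = A11; blocks[1] = A12;
  blocks[2] = A21; blocks[3] = A22;
  PetscCall(MatCreateNest(comm, 2, NULL, 2, NULL, blocks, Anest));

  PetscCall(KSPCreate(comm, ksp));
  PetscCall(KSPSetOperators(*ksp, *Anest, *Anest));
  PetscCall(KSPSetFromOptions(*ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}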

    I am a bit unclear about how to handle the case where comm1 and comm2 
contain different sets of processes. I have the following questions and would 
appreciate some guidance:

— Would it make sense to define a comm_global as the union of comm1 and comm2 
and call MatCreateNest on it? (See the sketch after these questions for the 
kind of construction I mean.)

— The diagonal blocks exist only on comm1 and comm2. Should 
MatAssemblyBegin/MatAssemblyEnd for these diagonal blocks be called separately 
on comm1 and comm2?

— What comm should be used for the off-diagonal shell matrices? 

— Likewise, when calling VecGetSubVector and VecRestoreSubVector to obtain the 
sub-vectors corresponding to discipline1 (or discipline2), on which comm should 
these calls be made?
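
    For concreteness, by comm_global I mean something like the union of the 
process groups of comm1 and comm2, carved out of MPI_COMM_WORLD (names are 
illustrative; a rank that is not a member of one of the communicators is 
assumed to hold MPI_COMM_NULL for it):

#include <mpi.h>

/* Build the union communicator of comm1 and comm2. Collective over
   MPI_COMM_WORLD; ranks outside the union receive MPI_COMM_NULL. */
static MPI_Comm BuildGlobalComm(MPI_Comm comm1, MPI_Comm comm2)
{
  MPI_Group g1 = MPI_GROUP_EMPTY, g2 = MPI_GROUP_EMPTY, gunion;
  MPI_Comm  comm_global;

  /* A given rank may belong to only one (or neither) of the two comms */
  if (comm1 != MPI_COMM_NULL) MPI_Comm_group(comm1, &g1);
  if (comm2 != MPI_COMM_NULL) MPI_Comm_group(comm2, &g2);
  MPI_Group_union(g1, g2, &gunion);

  MPI_Comm_create(MPI_COMM_WORLD, gunion, &comm_global);

  MPI_Group_free(&gunion);
  if (g1 != MPI_GROUP_EMPTY) MPI_Group_free(&g1);
  if (g2 != MPI_GROUP_EMPTY) MPI_Group_free(&g2);
  return comm_global;
}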

Thanks,
Manav
