Olivier Jamond <[email protected]> writes:

>> Given the structure of C it seems you should just explicitly construct
>> Sp and use GAMG (or other preconditioners, even a direct solver) directly
>> on Sp. Trying to avoid explicitly forming Sp will give you a much slower
>> performing solve, for what benefit? If C were just some generic monster
>> then forming Sp might be unrealistic, but in your case C*Ct is block
>> diagonal with tiny blocks, which means (C*Ct)^(-1) is block diagonal with
>> tiny blocks (the blocks are the inverses of the blocks of (C*Ct)).
>>
>>   Sp = Ct*C + Qt*S*Q
>>      = Ct*C + [I - Ct*(C*Ct)^(-1)*C] S [I - Ct*(C*Ct)^(-1)*C]
>>
>> [Ct*(C*Ct)^(-1)*C] will again be block diagonal with slightly larger
>> blocks.
>>
>> You can do D = (C*Ct) with MatMatMult(), then write custom code that zips
>> through the diagonal blocks of D inverting all of them to get iD, then use
>> MatPtAP applied to C and iD to get Ct*(C*Ct)^(-1)*C, then MatShift() to
>> include the I, then MatPtAP or MatRARt to get
>> [I - Ct*(C*Ct)^(-1)*C] S [I - Ct*(C*Ct)^(-1)*C], and finally MatAXPY() to
>> get Sp. The complexity of each of the Mat operations is very low because
>> of the absurdly simple structure of C and its descendants. You might even
>> be able to just use MUMPS to give you the explicit inv(C*Ct) without
>> writing custom code to get iD.
>
> At this time, I haven't managed to compute iD=inv(C*Ct) without using
> dense matrices, which may be a shame because all the matrices are sparse.
> Is it possible?
>
> And I have no idea how to write code to manually zip through the
> diagonal blocks of D to invert them...
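For the second part: once iD = inv(C*Ct) is available as a sparse Mat (one way
to build it is sketched further down), the remaining steps of the quoted recipe
map onto existing Mat operations. A rough sketch only, with illustrative names,
PETSC_DEFAULT fill estimates, and a recent PETSc assumed for PetscCall():

  #include <petscmat.h>

  /* Sketch of the quoted recipe, assuming iD = inv(C*Ct) is already available
     as a sparse Mat.  C (constraints) and S are assumed to be assembled AIJ
     matrices; names are illustrative. */
  static PetscErrorCode BuildSp(Mat C, Mat S, Mat iD, Mat *Sp)
  {
    Mat Ct, Q, CtC, QSQ;

    PetscFunctionBeginUser;
    PetscCall(MatTranspose(C, MAT_INITIAL_MATRIX, &Ct));

    /* Q = Ct * iD * C via MatPtAP, then Q <- I - Q with MatScale/MatShift */
    PetscCall(MatPtAP(iD, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &Q));
    PetscCall(MatScale(Q, -1.0));
    PetscCall(MatShift(Q, 1.0));

    /* Ct*C and Qt*S*Q (Q is symmetric, so MatPtAP with P = Q gives Qt*S*Q) */
    PetscCall(MatMatMult(Ct, C, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &CtC));
    PetscCall(MatPtAP(S, Q, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &QSQ));

    /* Sp = Ct*C + Qt*S*Q */
    PetscCall(MatDuplicate(QSQ, MAT_COPY_VALUES, Sp));
    PetscCall(MatAXPY(*Sp, 1.0, CtC, DIFFERENT_NONZERO_PATTERN));

    PetscCall(MatDestroy(&Ct));
    PetscCall(MatDestroy(&Q));
    PetscCall(MatDestroy(&CtC));
    PetscCall(MatDestroy(&QSQ));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

The explicit transpose Ct is formed once so that plain MatMatMult() can be
used for both C*Ct (to get D) and Ct*C; MatTransposeMatMult(C, C, ...) would
also work for the latter.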
You could use MatInvertVariableBlockDiagonal(), which should perhaps return a Mat instead of a raw array. If you have constant block sizes, MatInvertBlockDiagonalMat will return a Mat.
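A rough sketch of the variable-block-size path, i.e. assembling the raw array
returned by MatInvertVariableBlockDiagonal() into a sparse Mat. The block
sizes are assumed to be known from the constraint structure; names are
illustrative and a recent PETSc is assumed for PetscCall():

  #include <petscmat.h>

  /* Sketch: build iD = inv(D) for block diagonal D = C*Ct by inverting only
     the local diagonal blocks.  With a constant block size bs it is enough to
     call MatSetBlockSize(D, bs), create iD with MatCreate()/MatSetType(), and
     call MatInvertBlockDiagonalMat(D, iD).  The routine below handles
     variable block sizes. */
  static PetscErrorCode InvertDiagonalBlocks(Mat D, PetscInt nblocks, const PetscInt bsizes[], Mat *iD)
  {
    PetscScalar *vals;
    PetscInt    *idx;
    PetscInt     len = 0, maxbs = 0, rstart, rend, row0, off = 0, i, k;

    PetscFunctionBeginUser;
    for (i = 0; i < nblocks; i++) {
      len  += bsizes[i] * bsizes[i];
      maxbs = PetscMax(maxbs, bsizes[i]);
    }
    PetscCall(PetscMalloc2(len, &vals, maxbs, &idx));

    /* Invert the local diagonal blocks of D; "vals" holds the inverses, one
       block after another (assumed column-major within each block, as for
       MatInvertBlockDiagonal()) */
    PetscCall(MatInvertVariableBlockDiagonal(D, nblocks, bsizes, vals));

    /* Assemble the inverted blocks into a sparse block diagonal matrix */
    PetscCall(MatGetOwnershipRange(D, &rstart, &rend));
    PetscCall(MatCreate(PetscObjectComm((PetscObject)D), iD));
    PetscCall(MatSetSizes(*iD, rend - rstart, rend - rstart, PETSC_DETERMINE, PETSC_DETERMINE));
    PetscCall(MatSetType(*iD, MATAIJ));
    PetscCall(MatSetUp(*iD));
    PetscCall(MatSetOption(*iD, MAT_ROW_ORIENTED, PETSC_FALSE)); /* values are column-major */
    row0 = rstart;
    for (i = 0; i < nblocks; i++) {
      for (k = 0; k < bsizes[i]; k++) idx[k] = row0 + k;
      PetscCall(MatSetValues(*iD, bsizes[i], idx, bsizes[i], idx, &vals[off], INSERT_VALUES));
      row0 += bsizes[i];
      off  += bsizes[i] * bsizes[i];
    }
    PetscCall(MatAssemblyBegin(*iD, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(*iD, MAT_FINAL_ASSEMBLY));
    PetscCall(PetscFree2(vals, idx));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

Everything stays sparse: only the tiny diagonal blocks are ever treated as
dense, so no dense matrix of the full constraint size is needed.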
