That's not a bad idea. I'll try that on the large RANS adjoints. However, I have been trying to replicate the same behavior on smaller Euler meshes (~500k DOF) and have been somewhat successful in reproducing the issue, although the effect isn't as severe. For these cases, even on a single processor with an ILU(1) preconditioner, I'm seeing different convergence rates between the untransposed and transposed solves. Obviously, in that case ASM isn't the culprit. I still need to do a bit more digging to get to the bottom of this. When I have something, I'll post it.
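One serial effect worth ruling out (a sketch, not the actual adjoint code): an incomplete factorization of A^T is generally not the transpose of the incomplete factorization of A, because the dropping decisions differ. So applying the existing ILU factors transposed and building a fresh ILU from the assembled transpose give genuinely different preconditioners, which can show up as different convergence rates even on one process. A small SciPy illustration, with a random diagonally dominant matrix standing in for the Euler Jacobian:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

# Hypothetical stand-in matrix (NOT the Euler Jacobian): random sparse,
# made diagonally dominant so the incomplete factorizations are well defined.
n = 200
A = sp.random(n, n, density=0.05, random_state=0, format="csc")
A = (A + 10.0 * sp.eye(n, format="csc")).tocsc()
b = np.random.default_rng(1).standard_normal(n)

# ILU of A applied transposed, versus a fresh ILU built from A^T.
ilu_A = spilu(A, drop_tol=1e-2, fill_factor=2.0)
ilu_At = spilu(A.T.tocsc(), drop_tol=1e-2, fill_factor=2.0)

x1 = ilu_A.solve(b, trans="T")  # (ILU(A))^{-T} b
x2 = ilu_At.solve(b)            # (ILU(A^T))^{-1} b

# With dropping active, the two preconditioned applications differ:
print("relative difference:", np.linalg.norm(x1 - x2) / np.linalg.norm(x2))
```

If the two applications were identical (as they would be for a complete LU), this difference would be zero; with dropping it is not, which is one mechanism for the single-processor discrepancy.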
Thanks,
Gaetan

On Wed, Nov 26, 2014 at 11:10 AM, Jed Brown <[email protected]> wrote:

> Gaetan Kenway <[email protected]> writes:
> > The untransposed system converges about 6 orders of magnitude with
> > GMRES(100), ASM (overlap 1), and ILU(1) with RCM reordering. The test is
> > run on 128 processors. There are no convergence difficulties.
> >
> > However, when I try to solve the transpose of the same system, by either
> > calling KSPSolveTranspose() or by assembling the transpose of the linear
> > system and its preconditioner and calling KSPSolve(), GMRES stagnates
> > after a negligible drop in the residual and no further progress is made.
>
> Just a guess here, but the ASM default is "restricted ASM". Can you
> compare the nontransposed and transposed convergence with each of
>
> -pc_asm_type <restrict,basic,interpolate>
>
> I.e., 6 runs in total; how does each converge or not?
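The comparison Jed suggests can be organized as a loop over the two solve directions and the three ASM variants. This is only a sketch: the executable name `./adjoint_solve` and its `-solve_mode` flag are placeholders for however the solver is actually driven; the PETSc options themselves match the setup described above.

```shell
# Six runs: {untransposed, transposed} x {restrict, basic, interpolate}.
# ./adjoint_solve and -solve_mode are hypothetical stand-ins for the
# actual driver; the PETSc options mirror the configuration above.
for mode in untransposed transposed; do
  for asm in restrict basic interpolate; do
    echo "run: $mode -pc_asm_type $asm"
    # mpiexec -n 128 ./adjoint_solve -solve_mode $mode \
    #   -ksp_type gmres -ksp_gmres_restart 100 \
    #   -pc_type asm -pc_asm_overlap 1 -pc_asm_type $asm \
    #   -sub_pc_type ilu -sub_pc_factor_levels 1 \
    #   -sub_pc_factor_mat_ordering_type rcm \
    #   -ksp_monitor_true_residual
  done
done
```

Comparing `-ksp_monitor_true_residual` histories across the six runs should show whether the stagnation tracks the ASM variant or persists regardless of it.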
