On Wed, Feb 26, 2014 at 6:25 AM, Jed Brown <[email protected]> wrote:
> Adriano Côrtes <[email protected]> writes:
>
> > Dear all,
> >
> > I'm solving a saddle-point problem coming from a 2D Stokes problem
> > discretized by an inf-sup stable mixed element. For now I'm running
> > in serial, because I want to understand the behavior of the solver,
> > and I have the whole matrix assembled in memory. After playing with
> > GMRES and ILU with different levels of fill,
>
> ILU is typically terrible for saddle point problems.
>
> > I started experimenting with ASM to see if I can get better results.
> > Using -pc_asm_blocks and -pc_asm_overlap I tried some variations,
> > none of which gave better results.
> >
> > My questions are:
> >
> > 1. How are the blocks built by PETSc, given that my problem is a
> > saddle-point one?
>
> It starts with the set of owned variables and adds overlap by taking
> all neighbors represented in the graph. You generally need a minimum
> overlap of 1 for saddle point problems. (A sketch of the corresponding
> options appears below.)
>
> > 2. From the theoretical point of view, are block factorizations,
> > that is, using PCFieldSplit, in general the best we can have in
> > terms of performance (number of iterations and scalability)?
>
> There is no consensus on this, and I'm actually fond of "monolithic"
> multigrid methods, but with those it is harder to reuse components and
> harder to debug convergence. PCFieldSplit is a good methodology. Some
> of my talks have a comparison slide; here is a high-level one from
> last week:
>
> http://59a2.org/files/20140221-ExploitsInImplicitness.pdf

More specifically, you can use -pc_fieldsplit_detect_saddle_point on
your matrix, and then construct any of the block methods from options.
I list most of the interesting ones in the Paris Tutorial on our
website.
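
To make that concrete, here is one way such a block method can be
assembled purely from the options database with a recent PETSc. This is
a sketch, not a recommendation for your particular discretization: the
executable name ./stokes and the inner solver choices are placeholders.

  # ./stokes is a placeholder for your application binary.
  # FGMRES in the outer loop tolerates the inner iterative solves,
  # which make the preconditioner vary from iteration to iteration.
  ./stokes -ksp_type fgmres \
      -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point \
      -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type full \
      -pc_fieldsplit_schur_precondition selfp \
      -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu \
      -fieldsplit_1_ksp_type gmres -fieldsplit_1_ksp_rtol 1e-2 \
      -fieldsplit_1_pc_type jacobi \
      -ksp_monitor -ksp_converged_reason

Because the splits come from -pc_fieldsplit_detect_saddle_point, PETSc
assigns the prefixes automatically: -fieldsplit_0_ controls the
velocity block and -fieldsplit_1_ the Schur complement solve in the
pressure space.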
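
And, circling back to Jed's ASM answer above: since ILU is fragile on
the indefinite subdomain problems that a saddle-point matrix produces,
direct subdomain solves are the safer first experiment. Again a sketch,
with ./stokes as a placeholder:

  # Overlap 1 so each subdomain picks up the neighboring
  # velocity-pressure couplings; LU on the subdomains avoids ILU's
  # trouble with indefinite blocks. Add -pc_asm_blocks <n> to vary
  # the number of subdomains.
  ./stokes -ksp_type gmres \
      -pc_type asm -pc_asm_overlap 1 \
      -sub_ksp_type preonly -sub_pc_type lu \
      -ksp_monitor -ksp_converged_reason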

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener