> On Feb 23, 2016, at 2:03 PM, Justin Chang <[email protected]> wrote:
>
> Two more questions, somewhat related maybe.
>
> Is there a practical case where one would use plain Jacobi preconditioning over ILU?
For well-conditioned problems, an iteration of Jacobi is cheaper than an iteration of ILU (about 1/2 the work), so Jacobi can beat ILU.

Jacobi is also preferable for problems where ILU produces zero (or tiny) pivots and thus produces a bad preconditioner.

>
> Also, what exactly is happening when one uses -pc_bjacobi_blocks 2 ?

By default PETSc uses one block per MPI process. -pc_bjacobi_blocks 2 produces exactly 2 blocks in total. See PCBJacobiSetTotalBlocks() and PCBJacobiSetLocalBlocks().

>
> Thanks,
> Justin
>
> On Wed, Jan 13, 2016 at 9:37 PM, Justin Chang <[email protected]> wrote:
> Thanks Satish,
>
> And yes I meant sequentially.
>
> On Wed, Jan 13, 2016 at 8:26 PM, Satish Balay <[email protected]> wrote:
> On Wed, 13 Jan 2016, Justin Chang wrote:
>
> > Hi all,
> >
> > What exactly is the difference between these two preconditioners? When I
> > use them to solve a Galerkin finite element Poisson problem, I get the
> > exact same performance (iterations, wall-clock time, etc).
>
> You mean - when you run sequentially?
>
> With block Jacobi you decide the number of blocks. The default is
> 1 block per process, i.e. for a sequential run you have only 1 block, i.e. the whole matrix.
>
> So the following are essentially the same:
>   -pc_type bjacobi -pc_bjacobi_blocks 1 [default] -sub_pc_type ilu [default]
>   -pc_type ilu
>
> Satish
>
> > Only thing is I can't seem to use ILU in parallel though.
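
For completeness, a minimal sketch of setting this up from code rather than the options database, assuming a Mat A and Vec b, x that are already created and assembled; the routine name SolveWithTwoBlockJacobi and the error-handling boilerplate are just illustrative:

#include <petscksp.h>

/* Sketch: block Jacobi with exactly two blocks in total and ILU on each
   block, i.e. the programmatic equivalent of
   -pc_type bjacobi -pc_bjacobi_blocks 2 -sub_pc_type ilu */
PetscErrorCode SolveWithTwoBlockJacobi(Mat A, Vec b, Vec x)
{
  KSP            ksp,*subksp;
  PC             pc,subpc;
  PetscInt       nlocal,first,i;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);
  /* same effect as -pc_bjacobi_blocks 2; NULL lets PETSc size the blocks */
  ierr = PCBJacobiSetTotalBlocks(pc,2,NULL);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);   /* sub-KSPs exist only after setup */

  /* same effect as -sub_pc_type ilu (already the default sub-PC) */
  ierr = PCBJacobiGetSubKSP(pc,&nlocal,&first,&subksp);CHKERRQ(ierr);
  for (i=0; i<nlocal; i++) {
    ierr = KSPGetPC(subksp[i],&subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc,PCILU);CHKERRQ(ierr);
  }

  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}

Note that PCBJacobiGetSubKSP() can only be called after KSPSetUp() (or PCSetUp()), which is why the sub-PC type is set after setup.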

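And for comparing the preconditioners on a Poisson run from the command line (the executable name ./ex_poisson is hypothetical; the options are the ones discussed above):

  ./ex_poisson -ksp_type cg -pc_type jacobi -ksp_monitor -ksp_converged_reason
  ./ex_poisson -ksp_type cg -pc_type ilu -ksp_monitor -ksp_converged_reason
  mpiexec -n 2 ./ex_poisson -ksp_type cg -pc_type bjacobi -pc_bjacobi_blocks 2 -sub_pc_type ilu -ksp_converged_reason

PETSc's native -pc_type ilu only runs sequentially, which is why the parallel run above uses bjacobi with ILU applied on each block.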