> On Nov 15, 2017, at 9:57 PM, Mark Lohry <mlo...@gmail.com> wrote:
> 
> What are the limitations of ILU in parallel you're referring to? Does 
> Schwarz+local ILU typically fare better?

  If ILU works scalably in parallel for you, that is great. Most of the PETSc 
team has an explicit bias against ILU, generally speaking.

  Barry
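
[Editor's note: for readers wanting to try the Schwarz + local ILU combination 
Mark asks about, a minimal sketch of the standard PETSc command-line options 
follows. The executable name and process count are placeholders; the options 
themselves are stock PETSc.]

  # GMRES with one-level additive Schwarz, ILU(1) on each subdomain
  mpiexec -n 4 ./myapp -ksp_type gmres \
      -pc_type asm -pc_asm_overlap 1 \
      -sub_pc_type ilu -sub_pc_factor_levels 1

  # Or the cheaper zero-overlap variant: block Jacobi with ILU(0) blocks
  mpiexec -n 4 ./myapp -ksp_type gmres \
      -pc_type bjacobi -sub_pc_type ilu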

> 
> On Nov 15, 2017 10:50 PM, "Smith, Barry F." <bsm...@mcs.anl.gov> wrote:
> 
> 
> > On Nov 15, 2017, at 9:40 PM, Jed Brown <j...@jedbrown.org> wrote:
> >
> > "Smith, Barry F." <bsm...@mcs.anl.gov> writes:
> >
> >>> On Nov 15, 2017, at 6:38 AM, Mark Lohry <mlo...@gmail.com> wrote:
> >>>
> >>> I've found ILU(0) or (1) to be working well for my problem, but the PETSc 
> >>> implementation is serial only. Running with -pc_type hypre -pc_hypre_type 
> >>> pilut with default settings has considerably worse convergence. I've 
> >>> tried using -pc_hypre_pilut_factorrowsize (number of actual elements in 
> >>> row) to trick it into doing ILU(0), to no effect.
> >>>
> >>> Is there any way to recover classical ILU(k) from pilut?
> >>>
> >>> Hypre's docs state pilut is no longer supported, and Euclid should be 
> >>> used for anything moving forward. pc_hypre_boomeramg has options for 
> >>> Euclid smoothers. Any hope of a pc_hypre_type euclid?
> >>
> >>  Not unless someone outside the PETSc team decides to put it back in.
> >
> > PETSc used to have a Euclid interface.  My recollection is that Barry
> > removed it because users were finding too many bugs in Euclid and
> > upstream wasn't fixing them.  A contributed revival of the interface
> > won't fix the upstream problem.
> 
>    The hypre team now claims they care about Euclid. But given the 
> limitations of ILU in parallel I can't imagine anyone cares all that much.
> 
> 
