The default smoother is Chebyshev with Jacobi; for a certain class of 
problems, it is quite good.
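A minimal sketch of that default spelled out explicitly as options, assuming a CUDA build of PETSc; the executable name, DM-prefixed type options, and level settings here are illustrative, not a prescription:

```shell
# Hypothetical run: make GAMG's default smoother explicit and keep the
# operator and vectors on the GPU. Adjust types to your discretization.
./app -dm_mat_type aijcusparse -dm_vec_type cuda \
      -pc_type gamg \
      -mg_levels_ksp_type chebyshev \
      -mg_levels_pc_type jacobi \
      -ksp_view
```

`-ksp_view` prints the assembled solver so you can confirm which smoother each level actually got.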

> On Jan 10, 2023, at 3:31 PM, Mark Lohry <[email protected]> wrote:
> 
> So what are people using for GAMG configs on GPU? I was hoping petsc today 
> would be performance competitive with AMGx but it sounds like that's not the 
> case?
> 
> On Tue, Jan 10, 2023 at 3:03 PM Jed Brown <[email protected]> wrote:
>> Mark Lohry <[email protected]> writes:
>> 
>> 
>> > I definitely need multigrid. I was under the impression that GAMG was
>> > relatively cuda-complete, is that not the case? What functionality works
>> > fully on GPU and what doesn't, without any host transfers (aside from
>> > what's needed for MPI)?
>> >
>> > If I use -pc_type gamg -mg_levels_pc_type pbjacobi -mg_levels_ksp_type
>> > richardson is that fully on device, but -mg_levels_pc_type ilu or
>> > -mg_levels_pc_type sor require transfers?
>> 
>> You can do `-mg_levels_pc_type ilu`, but it'll be extremely slow (like 20x 
>> slower than an operator apply). One can use Krylov smoothers, though that's 
>> more synchronization. Automatic construction of operator-dependent 
>> multistage smoothers for linear multigrid (because Chebyshev only works for 
>> problems that have eigenvalues near the real axis) is something I've wanted 
>> to develop for at least a decade, but time is always short. I might put some 
>> effort into p-MG with such smoothers this year as we add DDES to our 
>> scale-resolving compressible solver.
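The Krylov-smoother alternative Jed mentions might be sketched as follows; all option values are assumptions to adapt, and the flexible outer solve (FGMRES) is needed because a Krylov smoother makes the preconditioner vary between iterations:

```shell
# Hypothetical: a few GMRES iterations as the level smoother with
# point-block Jacobi. Stays on-device, but each smoother iteration
# adds a synchronization (dot products / norms).
./app -ksp_type fgmres -pc_type gamg \
      -mg_levels_ksp_type gmres -mg_levels_ksp_max_it 2 \
      -mg_levels_pc_type pbjacobi
```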