On Fri, Aug 8, 2025 at 3:29 PM Matthew Knepley <knep...@gmail.com> wrote:
> On Fri, Aug 8, 2025 at 12:22 PM Frank Bramkamp <bramk...@nsc.liu.se> wrote:
>
>> Dear PETSc Team,
>>
>> I have some general questions. When using GMRES with classical
>> Gram-Schmidt, there is an optional refinement step.
>>
>> Is it correct that the default is not to use any refinement?
>
> Yes. It has been this way at least since 2013, from checking the log.
>
>> So you use standard classical Gram-Schmidt without any additional
>> refinement step.
>>
>> I originally thought that the default was to use one refinement step if
>> needed. Was that changed at some point, or am I remembering older
>> settings?
>
> It is available, but we changed it since refinement was almost never
> happening.
>
>> With refinement one also needs about twice the MPI communication if
>> refinement is done in every iteration, which can be somewhat more costly
>> due to the MPI_Allreduce for the dot products.
>
> It is the reduction that is expensive on large runs.
>
>> I solve the compressible Navier-Stokes equations with turbulence. In your
>> experience, in which cases is it better to use a refinement step? So far
>> I have typically used the default options, which have worked well for me.
>
> If it is converging, everything is likely fine. You can always check the
> true residual to detect breakdown, or turn on refinement if necessary with
> the option.
>
>> For classical Gram-Schmidt one also has to compute multiple dot products,
>> which is VecMDot in PETSc, if I see this correctly.
>
> Yes.
>
>> Do you use BLAS GEMV by default to compute multiple dot products at once,
>> or do you typically prefer your custom implementation? I think you also
>> have a version where the loops are manually unrolled by a factor of 4 or
>> so. I am not quite sure what the typical default is, or in which
>> situations one would not use GEMV.
>
> It turns out that Junchao did extensive tests on this last year (I
> believe). By default, we use GEMV for MDot, but not for MAXPY (since
> performance there was worse with the vendor GEMV). It is difficult (or
> impossible) to tell which algorithm will be better without testing, so
> there are options to select each one.
> The experimental results are here:
> https://gitlab.com/petsc/petsc/-/merge_requests/6580#note_1477784906
>
>   Thanks,
>
>      Matt
>
>> Thanks a lot. I wish you a nice weekend,
>> Frank
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
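
A minimal sketch of the runtime options discussed above, as I understand them
from the PETSc manual pages: the GMRES refinement and monitoring options are
standard, while the two Vec *_use_gemv toggles are the names I believe were
introduced with the merge request Matt linked, so treat those as my best
recollection rather than the definitive spelling.

    # Classical Gram-Schmidt refinement in GMRES (the default is refine_never)
    -ksp_gmres_cgs_refinement_type refine_never      # no re-orthogonalization (default)
    -ksp_gmres_cgs_refinement_type refine_ifneeded   # re-orthogonalize only when the norm test triggers
    -ksp_gmres_cgs_refinement_type refine_always     # re-orthogonalize in every iteration
    -ksp_gmres_modifiedgramschmidt                   # or switch to modified Gram-Schmidt entirely

    # Print the true (unpreconditioned) residual each iteration to detect breakdown
    -ksp_monitor_true_residual

    # Select the kernel for multiple dot products / multiple AXPYs
    # (GEMV-based vs. the hand-unrolled loops); option names assumed from the MR above
    -vec_mdot_use_gemv <bool>
    -vec_maxpy_use_gemv <bool>

The refinement choice can also be made in code with
KSPGMRESSetCGSRefinementType(ksp, KSP_GMRES_CGS_REFINEMENT_IFNEEDED) after
setting the solver type to KSPGMRES.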