On Wed, Apr 5, 2023 at 2:56 PM Robert Haas <robertmh...@gmail.com> wrote:
>
> + /*
> + * Balance and update limit values for autovacuum workers. We must
> + * always do this in case the autovacuum launcher or another
> + * autovacuum worker has recalculated the number of workers across
> + * which we must balance the limit. This is done by the launcher when
> + * launching a new worker and by workers before vacuuming each table.
> + */
>
> I don't quite understand what's going on here. A big reason that I'm
> worried about this whole issue in the first place is that sometimes
> there's a vacuum going on a giant table and you can't get it to go
> fast. You want it to absorb new settings, and to do so quickly. I
> realize that this is about the number of workers, not the actual cost
> limit, so that makes what I'm about to say less important. But ... is
> this often enough? Like, the time before we move onto the next table
> could be super long. The time before a new worker is launched should
> be ~autovacuum_naptime/autovacuum_max_workers or ~20s with default
> settings, so that's not horrible, but I'm kind of struggling to
> understand the rationale for this particular choice. Maybe it's fine.
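(For reference, the ~20s figure in the quoted text is just the stock
defaults divided out — a sketch assuming the default settings of
autovacuum_naptime = 60s and autovacuum_max_workers = 3:)

```python
# Illustrative arithmetic only: worst-case interval between worker
# launches, assuming PostgreSQL's default autovacuum settings.
autovacuum_naptime = 60       # seconds (default: 1min)
autovacuum_max_workers = 3    # default

launch_interval = autovacuum_naptime / autovacuum_max_workers
print(launch_interval)        # 20.0 seconds
```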
VacuumUpdateCosts() also calls AutoVacuumUpdateCostLimit(), so this
will happen if a config reload is pending the next time
vacuum_delay_point() is called (which is pretty often -- roughly once
per block vacuumed but definitely more than once per table).

Relevant code is at the top of vacuum_delay_point():

	if (ConfigReloadPending && IsAutoVacuumWorkerProcess())
	{
		ConfigReloadPending = false;
		ProcessConfigFile(PGC_SIGHUP);
		VacuumUpdateCosts();
	}

- Melanie