On Sat, May 3, 2025 at 1:10 AM Daniil Davydov <3daniss...@gmail.com> wrote:
>
> On Sat, May 3, 2025 at 5:28 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
> >
> > > In current implementation, the leader process sends a signal to the
> > > a/v launcher, and the launcher tries to launch all requested workers.
> > > But the number of workers never exceeds `autovacuum_max_workers`.
> > > Thus, we will never have more a/v workers than in the standard case
> > > (without this feature).
> >
> > I have concerns about this design. When autovacuuming on a single
> > table consumes all available autovacuum_max_workers slots with
> > parallel vacuum workers, the system becomes incapable of processing
> > other tables. This means that when determining the appropriate
> > autovacuum_max_workers value, users must consider not only the number
> > of tables to be processed concurrently but also the potential number
> > of parallel workers that might be launched. I think it would make more
> > sense to maintain the existing autovacuum_max_workers parameter while
> > introducing a new parameter that would either control the maximum
> > number of parallel vacuum workers per autovacuum worker or set a
> > system-wide cap on the total number of parallel vacuum workers.
> >
>
> For now we have max_parallel_index_autovac_workers - this GUC limits
> the number of parallel a/v workers that can process a single table. I
> agree that the scenario you provided is problematic.
> The proposal to limit the total number of supportive a/v workers seems
> attractive to me (I'll implement it as an experiment).
>
> It seems to me that this question is becoming a key one. First we need
> to determine the role of the user in the whole scheduling mechanism.
> Should we allow users to determine priority? Will this priority apply
> only within a single vacuuming cycle, or will it be more 'global'?
> I guess I don't have enough expertise to determine this alone. I will
> be glad to receive any suggestions.

What I roughly imagined is that we don't need to change the entire
autovacuum scheduling; instead, each autovacuum worker would decide
whether to use parallel vacuum during its vacuum operation based on
GUC parameters (having a global effect) or storage parameters (having
an effect on the particular table). The criteria for triggering
parallel vacuum in autovacuum might need to be somewhat pessimistic so
that we don't unnecessarily use parallel vacuum on many tables.

>
> > > About `at_params.nworkers = N` - that's exactly what we're doing (you
> > > can see it in the `vacuum_rel` function). But we cannot fully reuse
> > > code of VACUUM PARALLEL, because it creates its own processes via
> > > dynamic bgworkers machinery.
> > > As I said above - we don't want to consume additional resources. Also
> > > we don't want to complicate communication between processes (the idea
> > > is that a/v workers can only send signals to the a/v launcher).
> >
> > Could you elaborate on the reasons why you don't want to use
> > background workers and avoid complicated communication between
> > processes? I'm not sure whether these concerns provide sufficient
> > justification for implementing its own parallel index processing.
> >
>
> Here are my thoughts on this. A/v worker has a very simple role - it
> is born after the launcher's request and must do exactly one 'task' -
> vacuum table or participate in parallel index vacuum.
> We also have a dedicated 'launcher' role, meaning the whole design
> implies that only the launcher is able to launch processes.
>
> If we allow a/v worker to use bgworkers, then :
> 1) A/v worker will go far beyond its responsibility.
> 2) Its functionality will overlap with the functionality of the launcher.

While I agree that the launcher process is responsible for launching
autovacuum worker processes, I'm not sure it should be responsible for
launching everything related to autovacuum. It's quite possible that
in the future we will have parallel heap vacuum, or will process a
particular index with parallel workers. The code could get more
complex if the autovacuum launcher process had to launch such parallel
workers too. I believe it's more straightforward to divide the
responsibility so that the autovacuum launcher is responsible for
launching autovacuum workers, and autovacuum workers are responsible
for vacuuming tables, no matter how they do that.

> 3) Resource consumption can jump dramatically, which is unexpected for
> the user.

What extra resources could be used if we use background workers
instead of autovacuum workers?

> Autovacuum will also be dependent on other resources
> (bgworkers pool). The current design does not imply this.

I see your point, but I don't think it necessarily needs to be
reflected at the infrastructure layer. For example, we could
internally allocate extra background worker slots for parallel vacuum
workers based on max_parallel_index_autovac_workers, in addition to
max_worker_processes. In any case, we might need some way to check or
validate the max_worker_processes value to make sure that every
autovacuum worker can use the specified number of parallel workers for
parallel vacuum.

> I wanted to create a patch that would fit into the existing mechanism
> without drastic innovations. But if you think that the above is not so
> important, then we can reuse VACUUM PARALLEL code and it would
> simplify the final implementation)

I'd suggest using the existing infrastructure if we can achieve the
goal with it. If we find that there are technical difficulties in
implementing it without new infrastructure, we can revisit this
approach.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

