On Tue, Mar 11, 2025 at 5:39 AM Andy Fan <zhihuifan1...@163.com> wrote:
> Currently, when a query needs parallel workers, the postmaster spawns
> new backends for that query, and when the work is done those backends
> exit.  There is some waste here: the syscache, relcache, smgr cache,
> and vfd cache are rebuilt from scratch each time, and the fork/exit
> syscalls themselves have a cost.
>
> I am thinking about whether we should preallocate (or create lazily)
> some backends as a pool of parallel workers. The benefits include:
>
> (1) Lowering the actual startup cost of a parallel worker.
> (2) Making the core more suitable for cases where the executor needs a
> new worker to run a piece of a plan. I think this is needed by some
> data-redistribution executors in a distributed database.

I don't want to discourage your investigation, but two things to consider:

 1. In what state do existing parallel worker use cases expect to see
their forked worker processes? Any time you reuse processes from a
pool, you risk bugs if you don't set the pooled process's memory
exactly "right" -- you must not carry over any state from a previous
iteration. (This is easier in a thread pool, where a thread has its
own stack but not its own heap to accumulate state in.) See the first
sketch below.

 2. For parallel query in particular, how does the cost of serializing
and deserializing the query itself compare to the cost of fork()ing
the process? The second sketch below measures the fork() half.
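
On point 1, the reset problem is the hard part. The backend does have
a few wholesale-invalidation entry points -- InvalidateSystemCaches()
for the catalog caches, and smgr has had close-all-style functions,
though the exact names have moved around across releases -- but a pool
would need to cover far more state than that. A deliberately incomplete
sketch (ResetPooledWorkerState() itself is hypothetical):

    /* Hypothetical reset of a pooled worker between queries. */
    static void
    ResetPooledWorkerState(void)
    {
        /* Flush the catalog caches (syscache, relcache) wholesale. */
        InvalidateSystemCaches();

        /* Drop per-backend storage manager state. */
        smgrcloseall();

        /*
         * Still unhandled: GUCs set by the previous leader, prepared
         * statements, temp tables, memory contexts, advisory locks, ...
         */
    }

Any state this misses is exactly the kind of bug I'm worried about in
point 1, and it would only surface when a pooled worker happens to be
reused in the wrong context.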
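
On point 2, the fork() half of the comparison is easy to measure in
isolation. Here is a self-contained micro-benchmark (not PostgreSQL
code); note that fork() gets slower as the parent's address space
grows, since page tables must be copied, so a toy parent like this
understates the real cost:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 1000

    int
    main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++)
        {
            pid_t pid = fork();

            if (pid == 0)
                _exit(0);       /* child exits immediately */
            if (pid < 0)
                return 1;       /* fork failed */
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("fork+exit+wait: %.1f us/iter\n",
               ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / 1e3 / ITERS);
        return 0;
    }

The serialization side would have to be measured inside the server,
since the plan is flattened into the parallel DSM segment at executor
startup, so this only bounds one half of the comparison.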

James

