On Wed, 3 Jan 2024 at 03:02, David Rowley <dgrowle...@gmail.com> wrote:
>
> On Thu, 14 Dec 2023 at 19:21, Matthias van de Meent
> <boekewurm+postg...@gmail.com> wrote:
> >
> > On Thu, 7 Dec 2023 at 13:09, David Rowley <dgrowle...@gmail.com> wrote:
> > > We could also easily serialize plans to binary format for copying to
> > > parallel workers rather than converting them to a text-based
> > > serialized format. It would also allow us to do things like serialize
> > > PREPAREd plans into a nicely compact single allocation that we could
> > > just pfree in a single pfree call on DEALLOCATE.
> >
> > I'm not sure what benefit you're referring to. If you mean "it's more
> > compact than the current format" then sure; but the other points can
> > already be covered by either the current nodeToString format, or by
> > nodeCopy-ing the prepared plan into its own MemoryContext, which would
> > allow us to do essentially the same thing.
>
> There's significantly less memory involved in just having a plan
> serialised into a single chunk of memory vs a plan stored in its own
> MemoryContext.  With the serialised plan, you don't have any power of
> 2 rounding up wastage that aset.c does and don't need extra space for
> all the MemoryChunks that would exist for every single palloc'd chunk
> in the MemoryContext version.

I was envisioning this using the Bump memory context you proposed
over in [0]: to the best of my knowledge prepared plans are not
modified after planning, so nodeCopy-ing a prepared plan into a Bump
context could be a good use case for those contexts. That should
remove the power-of-2 rounding and per-chunk MemoryChunk overhead of
aset.c.
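
Roughly what I have in mind (just a sketch; it assumes
BumpContextCreate() ends up taking the same arguments as
GenerationContextCreate(), and the function and context names below
are made up):

#include "postgres.h"
#include "nodes/plannodes.h"
#include "utils/memutils.h"

/*
 * Sketch only: copy a finished plan into its own Bump context so that
 * DEALLOCATE can release everything with a single MemoryContextDelete().
 * BumpContextCreate()'s signature is assumed to match
 * GenerationContextCreate(); adjust to whatever gets committed.
 */
static PlannedStmt *
cache_plan_in_bump_context(PlannedStmt *stmt, MemoryContext *plancxt_out)
{
    MemoryContext plancxt;
    MemoryContext oldcxt;
    PlannedStmt *copy;

    plancxt = BumpContextCreate(CacheMemoryContext,
                                "CachedPlan bump",
                                ALLOCSET_DEFAULT_SIZES);
    oldcxt = MemoryContextSwitchTo(plancxt);
    copy = copyObject(stmt);    /* nodeCopy the plan tree into plancxt */
    MemoryContextSwitchTo(oldcxt);

    *plancxt_out = plancxt;
    return copy;
}

/* ... and on DEALLOCATE, just: MemoryContextDelete(plancxt); */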

> I think it would be nice if one day in the future a PREPAREd plan
> could have multiple different plans cached. We could then select which
> one to use by looking at statistics for the given parameters and
> choose the plan that's most suitable for the given parameters.   Of
> course, this is a whole entirely different project. I mention it just
> because being able to serialise a plan would make the memory
> management and overhead for such a feature much more manageable.
> There'd likely need to be some eviction logic in such a feature as the
> number of possible plans for some complex query is quite likely to be
> much more than we'd care to cache.

Yeah, that'd be nice, but is also definitely future work.

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

[0]: https://www.postgresql.org/message-id/flat/CAApHDvqGSpCU95TmM%3DBp%3D6xjL_nLys4zdZOpfNyWBk97Xrdj2w%40mail.gmail.com

