On 2020-03-25 17:51:50 -0400, Tom Lane wrote:
> Andres Freund <and...@anarazel.de> writes:
> > I wonder if it'd make sense to store the locks needed for
> > AcquirePlannerLocks/AcquireExecutorLocks in a better form.
> Perhaps, but I'm not sure that either of those functions represent
> material overhead in cases besides this one.

For pgbench -M prepared -S, GetCachedPlan() and its children are 2.36% of
the time. 1.75% of the total is RevalidateCachedQuery(). 1.13% of that
in turn is LockAcquireExtended().

That's not huge, but also not nothing. And this includes client
roundtrips. So I assume it'd show up larger when executing actual
queries in a function, or when pipelining (which e.g. pgjdbc has on by
default).

If I do simple lookups from pgbench_accounts in a loop in plpgsql,
GetCachedPlan() is 4.43% and the LockAcquireExtended()'s called from
within are 1.46%.
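For reference, the plpgsql loop test described above would look roughly like
this (a hypothetical reconstruction; the exact function, names, and loop count
aren't shown in the thread):

```sql
-- Hypothetical sketch of the benchmark: repeated simple lookups from
-- pgbench_accounts inside one function call, so GetCachedPlan() /
-- RevalidateCachedQuery() costs aren't hidden by client round-trips.
CREATE OR REPLACE FUNCTION bench_lookups(n int) RETURNS bigint AS $$
DECLARE
    total bigint := 0;
    bal   int;
BEGIN
    FOR i IN 1..n LOOP
        SELECT abalance INTO bal
        FROM pgbench_accounts
        WHERE aid = 1 + (i % 100000);
        total := total + bal;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

-- e.g.: SELECT bench_lookups(100000);
```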

So it's plausible that making this a more generic optimization would be
worthwhile.

> > Would it make sense to instead compute this as we go when building a
> > valid CachedPlanSource? If we make it a property of a is_valid
> > CachedPlanSource, we can assert that the plan is safe for use in
> > CachedPlanIsSimplyValid().
> I'm inclined to think not, because it'd just be overhead for other
> users of cached plans.

Even if we make RevalidateCachedQuery take advantage of the simpler
tests when possible?  While there's plenty of cases where it'd not be
applicable, it seems likely that those wouldn't notice the small
slowdown either.

> >> /*
> >> +   * Likewise for the simple-expression resource owner.  (Note: it'd be
> >> +   * safer to create this as a child of TopTransactionResourceOwner; but
> >> +   * right now that causes issues in transaction-spanning procedures, so
> >> +   * make it standalone.)
> >> +   */
> > Hm. I'm quite unfamiliar with this area of the code - so I'm likely just
> > missing something: Given that you're using a post xact cleanup hook to
> > release the resowner, I'm not quite sure I understand this comment. The
> > XACT_EVENT_ABORT/COMMIT callbacks are called before
> > TopTransactionResourceOwner is released, no?
> The comment is there because the regression tests fall over if you try
> to do it the other way :-(.  The failure I saw was specific to a
> transaction being done in a DO block, and maybe we could handle that
> differently from the case for a normal procedure; but it seemed better
> to me to make them the same.

I'm still confused as to why it actually fixes the issue. I feel we
should at least understand what's going on before committing.

> >> +void
> >> +plpgsql_free_simple_resowner(ResourceOwner simple_eval_resowner)
> >> +{
> >> +  /*
> >> +   * At this writing, the only thing that could actually get released is
> >> +   * plancache refcounts; but we may as well do the full release protocol.
> > Hm, any chance that the multiple resowner calls here could show up in a
> > profile? Probably not?
> Doubt it.  On the other hand, as the code stands it's certain that the
> resowner contains nothing but plancache pins (while I was writing the
> patch it wasn't entirely clear that that would hold).  So we could
> drop the two unnecessary calls.  There are assertions in
> ResourceOwnerDelete that would fire if we somehow missed releasing
> anything, so it doesn't seem like much of a maintenance hazard.

One could even argue that that's a nice crosscheck: Due to the later
release it'd not actually be correct to just add "arbitrary" things to
that resowner.

> > Could it be worth optimizing the path generation logic so that a
> > push/pop of an override path restores the old generation?
> (1) Not given the existing set of uses of the push/pop capability, which
> so far as I can see is *only* CREATE SCHEMA.

Oh. Well, then that'd be something for later.

I do recall that there were issues with SET search_path in functions
causing noticeable slowdowns...

> It's not involved in any other manipulations of the search path.  And
> (2) as this is written, it's totally unsafe for the generation counter
> ever to back up; that risks false match detections later.

I was just thinking of backing up the 'active generation' state. New
generations would have to come from a separate 'next generation'
counter.


Andres Freund