On 27/2/26 02:42, Richard Guo wrote:
> On Thu, Feb 26, 2026 at 8:37 PM Andrei Lepikhov <[email protected]> wrote:
>> I want to correct your statement on the 'no regression' phrase. In
>> practice, users often report issues after each new sublink
>> transformation goes live.
>>
>> This happens because the transformation changes the 'join problem'
>> order. Before, the subplan's join list was handled independently, but
>> now its relations are mixed with those from higher levels. If the join
>> collapse limit is exceeded, this can sometimes cause much worse
>> performance than in earlier Postgres versions.
>
> I'm not convinced this is a regression.  In scenarios where the join
> tree becomes too large, wouldn't the standard solution be for the user
> to tune join_collapse_limit (and maybe also geqo_threshold)?

It depends on the exact definition of a regression. In my mind, if a new Postgres version produces a plan that is worse in both cost and execution time, and the only available remedies are buying a more powerful server (which is effectively what raising join_collapse_limit means on a highly tuned production system) or rewriting the query, that is definitely a regression.
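For concreteness, here is a minimal sketch of the "standard solution" you describe (the GUC values below are hypothetical tuning choices, not recommendations). The catch is that raising the limit re-enables exhaustive join-order search over a larger relation set, whose cost grows roughly exponentially with the number of relations:

```sql
-- Hypothetical scenario: before the pull-up, the outer query had eight
-- relations and the sublink was planned as a separate subplan. After the
-- pull-up, the semi-joined relation is added to the outer join problem,
-- exceeding the default limit, so the planner partly falls back to the
-- textual join order.
SHOW join_collapse_limit;   -- default: 8

-- The "standard solution": let the planner search the larger space,
-- paying for it with planning time and memory.
SET join_collapse_limit = 12;
SET geqo_threshold = 14;    -- keep GEQO from kicking in instead
```

On a system where planning time is already a budget item, that trade is exactly the "buy a bigger server" option I mentioned above.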

Also, do not forget the LATERAL restriction that the pull-up implicitly adds. It is quite easy to construct a corner case where this restriction limits the search space to less efficient query plans.
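A sketch of the restriction I mean (all tables and columns are hypothetical): once a correlated subquery is flattened into the FROM list, it becomes a LATERAL item, and its lateral reference pins part of the join order:

```sql
-- Hypothetical example: the subquery references t1, so after flattening
-- it must appear as a LATERAL element. The lateral dependency forces t1
-- to be joined before s, cutting off join orders the planner could
-- otherwise consider.
SELECT t1.id, s.cnt
FROM t1
JOIN t2 ON t2.id = t1.ref,
LATERAL (SELECT count(*) AS cnt
         FROM t3
         WHERE t3.ref = t1.id) AS s;
```

If the cheapest plan would have started from t2 or t3, the lateral dependency alone can rule it out.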

To be clear, I'm not against pull-up techniques as such. I am just pointing out that, according to my personal real-world statistics, correlated subquery pull-ups cause the most problematic user complaints, because we have almost no tools to help short of changing the query generator. The root cause is a Postgres planner limitation: every parse-tree transformation is 'blind' and irreversible. The issue shows up most often when people use Postgres for analytical workloads like a 'monthly accounting report'. So it may be beneficial to couple any extension of the pull-up techniques with work on easing these corner cases.

--
regards, Andrei Lepikhov,
pgEdge

