On Mon, 9 Oct 2023 at 23:35, Ashutosh Bapat
<ashutosh.bapat....@gmail.com> wrote:
>
> On Mon, Oct 9, 2023 at 6:25 AM David Rowley <dgrowle...@gmail.com> wrote:
> >
> > However, it may also be worth you reading over [3] and the ultimate
> > reason I changed my mind on that being a good idea. Pushing LIMITs
> > below an Append seems quite incomplete when we don't yet push sorts
> > below Appends, which is what that patch did.
>
> When the paths are already ordered according to the ORDER BY
> specification, pushing down the LIMIT gives them the extra benefit of
> being cost effective. Do you think we can proceed along those lines?
> Later, when we implement Sort pushdown, we can adjust the LIMIT
> pushdown code accordingly.

What benefits are there if the paths are already ordered?  e.g. if it's
an index scan then we'll only pull the tuples we need from it.
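To make the pull model concrete, here's a toy C sketch (not PostgreSQL
source; the names are invented for illustration).  A Limit only asks its
child for as many tuples as it needs, so an already-ordered child such
as an index scan never has to produce the rest:

#include <stdio.h>

static long tuples_pulled = 0;

/* Stand-in for an already-ordered child, e.g. an index scan in the
 * requested sort order. */
static long
ordered_child_next(void)
{
    return ++tuples_pulled;     /* pretend each call fetches one tuple */
}

int
main(void)
{
    const long limit = 10;

    /* A Limit node simply stops calling its child once it has enough. */
    for (long i = 0; i < limit; i++)
        (void) ordered_child_next();

    printf("tuples pulled from the ordered child: %ld\n", tuples_pulled);
    return 0;
}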

I think if we did manage to get something working to push Sorts below
Appends then ExecSetTupleBound() would take care of most of the limit
problem (with the exception of FDWs).  If you look at
ExecSetTupleBound() you'll see that it does recursively descend into
AppendStates and set the bound on the append children.  That's not
going to work when the Sort is above the Append. So isn't it mostly
the work_mem * n_append_children concern that is holding us up here?
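To illustrate the difference between those two plan shapes, here's a toy
C model (again not PostgreSQL source; the node types and
set_tuple_bound() below are invented and only mimic the rule the real
ExecSetTupleBound() follows): an Append can pass the bound on to every
child, while a Sort can use the bound for a bounded sort but still has
to read its whole input, so the bound stops there.

#include <stdio.h>

typedef enum { NODE_SORT, NODE_APPEND, NODE_SCAN } NodeType;

typedef struct PlanNode
{
    NodeType    type;
    const char *label;
    long        bound;          /* -1 means "no bound known" */
    int         nchildren;
    struct PlanNode *children[8];
} PlanNode;

/* Push a tuple bound down the tree: recurse through Append children,
 * stop at a Sort (it can do a bounded sort but must read all input). */
static void
set_tuple_bound(PlanNode *node, long tuples_needed)
{
    node->bound = tuples_needed;

    if (node->type == NODE_APPEND)
    {
        for (int i = 0; i < node->nchildren; i++)
            set_tuple_bound(node->children[i], tuples_needed);
    }
}

static void
print_tree(const PlanNode *node, int depth)
{
    printf("%*s%s  bound=%ld\n", depth * 2, "", node->label, node->bound);
    for (int i = 0; i < node->nchildren; i++)
        print_tree(node->children[i], depth + 1);
}

int
main(void)
{
    /* Shape 1: Limit -> Sort -> Append -> scans (Sort above the Append). */
    PlanNode scan1 = {NODE_SCAN, "Scan p1", -1, 0, {0}};
    PlanNode scan2 = {NODE_SCAN, "Scan p2", -1, 0, {0}};
    PlanNode append1 = {NODE_APPEND, "Append", -1, 2, {&scan1, &scan2}};
    PlanNode sort1 = {NODE_SORT, "Sort", -1, 1, {&append1}};

    set_tuple_bound(&sort1, 10);
    puts("Sort above Append: the bound stops at the Sort");
    print_tree(&sort1, 0);

    /* Shape 2: Limit -> Append -> per-child Sorts (Sorts below the Append). */
    PlanNode scan3 = {NODE_SCAN, "Scan p1", -1, 0, {0}};
    PlanNode scan4 = {NODE_SCAN, "Scan p2", -1, 0, {0}};
    PlanNode csort1 = {NODE_SORT, "Sort", -1, 1, {&scan3}};
    PlanNode csort2 = {NODE_SORT, "Sort", -1, 1, {&scan4}};
    PlanNode append2 = {NODE_APPEND, "Append", -1, 2, {&csort1, &csort2}};

    set_tuple_bound(&append2, 10);
    puts("\nSorts below Append: each child Sort receives the bound");
    print_tree(&append2, 0);

    return 0;
}

With the second shape each child Sort gets the bound and can do a top-N
sort; without a usable bound, each of those child Sorts could use up to
work_mem, which is where the work_mem * n_append_children concern above
comes from.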

> We have that problem with partitionwise join. Have you seen it in the
> field? I have not seen such reports, but that could be because not
> many people know that partitionwise join needs to be explicitly turned
> ON. The solution we develop here would solve the problem for
> partitionwise join as well. This is a hard problem to solve. If
> there's a real case where LIMIT pushdown helps without fixing the Sort
> pushdown case, that might justify proceeding with LIMIT pushdown on
> its own.

I've not heard anything about that.  What I have seen are complaints
about the planner being too slow to produce a plan for queries that
access far more partitions than the number we recommend in the
partitioning best practices documentation.

David

