Re: Consider low startup cost in add_partial_path

2020-04-05 Thread Tomas Vondra

Hi,

For the record, here is the relevant part of the Incremental Sort patch
series, updating add_partial_path and add_partial_path_precheck to also
consider startup cost.

The changes in the first two patches are pretty straightforward, plus
there's a proposed optimization in the precheck function to only run
compare_pathkeys when strictly necessary. I'm currently evaluating those
changes and I'll post the results to the incremental sort thread.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>From 761b935584229243ecc6fd47d83e86d6b1b382c7 Mon Sep 17 00:00:00 2001
From: Tomas Vondra 
Date: Sun, 28 Jul 2019 15:55:54 +0200
Subject: [PATCH 1/5] Consider low startup cost when adding partial path

45be99f8cd5d606086e0a458c9c72910ba8a613d added `add_partial_path` with the
comment:

> Neither do we need to consider startup costs:
> parallelism is only used for plans that will be run to completion.
> Therefore, this routine is much simpler than add_path: it needs to
> consider only pathkeys and total cost.

I'm not entirely sure if that is still true or not--I can't easily come
up with a scenario in which it's not, but I also can't come up with an
inherent reason why such a scenario cannot exist.

Regardless, the in-progress incremental sort patch uncovered a new case
where it definitely no longer holds and, as a result, a higher cost plan
ends up being chosen: a low startup cost partial path is ignored in
favor of a lower total cost partial path, even though a limit applied on
top of that would normally favor the lower startup cost plan.
---
 src/backend/optimizer/util/pathnode.c | 65 +--
 1 file changed, 31 insertions(+), 34 deletions(-)

diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 8ba8122ee2..b570bfd3be 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -733,10 +733,11 @@ add_path_precheck(RelOptInfo *parent_rel,
  *
  *	  Because we don't consider parameterized paths here, we also don't
  *	  need to consider the row counts as a measure of quality: every path will
- *	  produce the same number of rows.  Neither do we need to consider startup
- *	  costs: parallelism is only used for plans that will be run to completion.
- *	  Therefore, this routine is much simpler than add_path: it needs to
- *	  consider only pathkeys and total cost.
+ *	  produce the same number of rows.  It may however matter how much the
+ *	  path ordering matches the final ordering, needed by upper parts of the
+ *	  plan. Because that will affect how expensive the incremental sort is,
+ *	  we need to consider both the total and startup cost, in addition to
+ *	  pathkeys.
  *
  *	  As with add_path, we pfree paths that are found to be dominated by
  *	  another partial path; this requires that there be no other references to
@@ -774,44 +775,40 @@ add_partial_path(RelOptInfo *parent_rel, Path *new_path)
 		/* Compare pathkeys. */
 		keyscmp = compare_pathkeys(new_path->pathkeys, old_path->pathkeys);
 
-		/* Unless pathkeys are incompatible, keep just one of the two paths. */
+		/*
+		 * Unless pathkeys are incompatible, see if one of the paths dominates
+		 * the other (both in startup and total cost). It may happen that one
+		 * path has lower startup cost, the other has lower total cost.
+		 *
+		 * XXX Perhaps we could do this only when incremental sort is enabled,
+		 * and use the simpler version (comparing just total cost) otherwise?
+		 */
 		if (keyscmp != PATHKEYS_DIFFERENT)
 		{
-			if (new_path->total_cost > old_path->total_cost * STD_FUZZ_FACTOR)
-			{
-				/* New path costs more; keep it only if pathkeys are better. */
-				if (keyscmp != PATHKEYS_BETTER1)
-					accept_new = false;
-			}
-			else if (old_path->total_cost > new_path->total_cost
-					 * STD_FUZZ_FACTOR)
+			PathCostComparison costcmp;
+
+			/*
+			 * Do a fuzzy cost comparison with standard fuzziness limit.
+			 */
+			costcmp = compare_path_costs_fuzzily(new_path, old_path,
+												 STD_FUZZ_FACTOR);
+
+			if (costcmp == COSTS_BETTER1)
 			{
-				/* Old path costs more; keep it only if pathkeys are better. */
-				if (keyscmp

Re: Consider low startup cost in add_partial_path

2019-10-24 Thread James Coleman
On Fri, Oct 4, 2019 at 8:36 AM Robert Haas  wrote:
>
> On Wed, Oct 2, 2019 at 10:22 AM James Coleman  wrote:
> > In all cases I've been starting with:
> >
> > set enable_hashjoin = off;
> > set enable_nestloop = off;
> > set max_parallel_workers_per_gather = 4;
> > set min_parallel_index_scan_size = 0;
> > set min_parallel_table_scan_size = 0;
> > set parallel_setup_cost = 0;
> > set parallel_tuple_cost = 0;
> >
> > I've also tried various combinations of random_page_cost,
> > cpu_index_tuple_cost, cpu_tuple_cost.
> >
> > Interestingly I've noticed plans joining two relations that look like:
> >
> >  Limit
> >->  Merge Join
> >  Merge Cond: (t1.pk = t2.pk)
> >  ->  Gather Merge
> >Workers Planned: 4
> >->  Parallel Index Scan using t_pkey on t t1
> >  ->  Gather Merge
> >Workers Planned: 4
> >->  Parallel Index Scan using t_pkey on t t2
> >
> > Where I would have expected a Gather Merge above a parallelized merge
> > join. Is that reasonable to expect?
>
> Well, you told the planner that parallel_setup_cost = 0, so starting
> workers is free. And you told the planner that parallel_tuple_cost =
> 0, so shipping tuples from the worker to the leader is also free. So
> it is unclear why it should prefer a single Gather Merge over two
> Gather Merges: after all, the Gather Merge is free!
>
> If you give those things some positive cost, even if it's smaller
> than the default, you'll probably get a saner-looking plan choice.

That makes sense.

Right now, trying to get a separate test for this feels a bit like a
distraction.

Given that there doesn't seem to be an obvious way to reproduce the
issue currently, but we know we have a reproduction example along with
incremental sort, what is the path forward for this? Is it reasonable
to try to commit it anyway, knowing that it's a "correct" change that
has been demonstrated elsewhere?

James




Re: Consider low startup cost in add_partial_path

2019-10-04 Thread Robert Haas
On Wed, Oct 2, 2019 at 10:22 AM James Coleman  wrote:
> In all cases I've been starting with:
>
> set enable_hashjoin = off;
> set enable_nestloop = off;
> set max_parallel_workers_per_gather = 4;
> set min_parallel_index_scan_size = 0;
> set min_parallel_table_scan_size = 0;
> set parallel_setup_cost = 0;
> set parallel_tuple_cost = 0;
>
> I've also tried various combinations of random_page_cost,
> cpu_index_tuple_cost, cpu_tuple_cost.
>
> Interestingly I've noticed plans joining two relations that look like:
>
>  Limit
>->  Merge Join
>  Merge Cond: (t1.pk = t2.pk)
>  ->  Gather Merge
>Workers Planned: 4
>->  Parallel Index Scan using t_pkey on t t1
>  ->  Gather Merge
>Workers Planned: 4
>->  Parallel Index Scan using t_pkey on t t2
>
> Where I would have expected a Gather Merge above a parallelized merge
> join. Is that reasonable to expect?

Well, you told the planner that parallel_setup_cost = 0, so starting
workers is free. And you told the planner that parallel_tuple_cost =
0, so shipping tuples from the worker to the leader is also free. So
it is unclear why it should prefer a single Gather Merge over two
Gather Merges: after all, the Gather Merge is free!

If you give those things some positive cost, even if it's smaller
than the default, you'll probably get a saner-looking plan choice.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Consider low startup cost in add_partial_path

2019-10-02 Thread James Coleman
On Sat, Sep 28, 2019 at 7:21 PM James Coleman  wrote:
> Now the trick is to figure out a way to demonstrate it in test :)
>
> Basically we need:
> Path A: Can short circuit with LIMIT but has high total cost
> Path B: Can’t short circuit with LIMIT but has lower total cost
>
> (Both must be parallel aware of course.)

I'm adding one requirement, or clarifying it anyway: the above paths
must be partial paths, and can't just apply at the top level of the
parallel part of the plan. I.e., the lower startup cost has to matter
at a subtree of the parallel portion of the plan.

> Maybe ordering in B can be a sort node and A can be an index scan (perhaps 
> with very high random page cost?) and force choosing a parallel plan?
>
> I’m trying to describe this to jog my thoughts (not in front of my laptop 
> right now so can’t try it out).
>
> Any other ideas?

I've been playing with this a good bit, and I'm struggling to come up
with a test case. Because the issue only manifests in a subtree of the
parallel portion of the plan, a scan on a single relation won't do.
Merge join seems like a good area to look at because it requires
ordering, and that ordering can be either the result of an index scan
(short-circuit-able) or an explicit sort (not short-circuit-able). But
I've been unable to make that result in any different plans with
either 2 or 3 relations joined together, ordered, and a limit applied.

In all cases I've been starting with:

set enable_hashjoin = off;
set enable_nestloop = off;
set max_parallel_workers_per_gather = 4;
set min_parallel_index_scan_size = 0;
set min_parallel_table_scan_size = 0;
set parallel_setup_cost = 0;
set parallel_tuple_cost = 0;

I've also tried various combinations of random_page_cost,
cpu_index_tuple_cost, cpu_tuple_cost.

Interestingly I've noticed plans joining two relations that look like:

 Limit
   ->  Merge Join
 Merge Cond: (t1.pk = t2.pk)
 ->  Gather Merge
   Workers Planned: 4
   ->  Parallel Index Scan using t_pkey on t t1
 ->  Gather Merge
   Workers Planned: 4
   ->  Parallel Index Scan using t_pkey on t t2

Where I would have expected a Gather Merge above a parallelized merge
join. Is that reasonable to expect?

If there doesn't seem to be an obvious way to reproduce the issue
currently, but we know we have a reproduction example along with
incremental sort, what is the path forward for this? Is it reasonable
to try to commit it anyway, knowing that it's a "correct" change that
has been demonstrated elsewhere?

James




Re: Consider low startup cost in add_partial_path

2019-09-28 Thread James Coleman
On Saturday, September 28, 2019, Tomas Vondra wrote:

> On Sat, Sep 28, 2019 at 12:16:05AM -0400, Robert Haas wrote:
>
>> On Fri, Sep 27, 2019 at 2:24 PM James Coleman  wrote:
>>
>>> Over in the incremental sort patch discussion we found [1] a case
>>> where a higher cost plan ends up being chosen because a low startup
>>> cost partial path is ignored in favor of a lower total cost partial
>>> path, even though a limit applied on top of that would normally
>>> favor the lower startup cost plan.
>>>
>>> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
>>> `add_partial_path` with the comment:
>>>
>>> > Neither do we need to consider startup costs:
>>> > parallelism is only used for plans that will be run to completion.
>>> > Therefore, this routine is much simpler than add_path: it needs to
>>> > consider only pathkeys and total cost.
>>>
>>> I'm not entirely sure if that is still true or not--I can't easily
>>> come up with a scenario in which it's not, but I also can't come up
>>> with an inherent reason why such a scenario cannot exist.
>>>
>>
>> I think I just didn't think carefully about the Limit case.
>>
>>
> Thanks! In that case I suggest we treat it as a separate patch/fix,
> independent of the incremental sort patch. I don't want to bury it in
> that patch series, it's already pretty large.
>

Now the trick is to figure out a way to demonstrate it in test :)

Basically we need:
Path A: Can short circuit with LIMIT but has high total cost
Path B: Can’t short circuit with LIMIT but has lower total cost

(Both must be parallel aware of course.)

Maybe ordering in B can be a sort node and A can be an index scan (perhaps
with very high random page cost?) and force choosing a parallel plan?

I’m trying to describe this to jog my thoughts (not in front of my laptop
right now so can’t try it out).

Any other ideas?

James


Re: Consider low startup cost in add_partial_path

2019-09-28 Thread Tomas Vondra

On Sat, Sep 28, 2019 at 12:16:05AM -0400, Robert Haas wrote:

On Fri, Sep 27, 2019 at 2:24 PM James Coleman  wrote:

Over in the incremental sort patch discussion we found [1] a case
where a higher cost plan ends up being chosen because a low startup
cost partial path is ignored in favor of a lower total cost partial
path, even though a limit applied on top of that would normally favor
the lower startup cost plan.

45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
`add_partial_path` with the comment:

> Neither do we need to consider startup costs:
> parallelism is only used for plans that will be run to completion.
> Therefore, this routine is much simpler than add_path: it needs to
> consider only pathkeys and total cost.

I'm not entirely sure if that is still true or not--I can't easily
come up with a scenario in which it's not, but I also can't come up
with an inherent reason why such a scenario cannot exist.


I think I just didn't think carefully about the Limit case.



Thanks! In that case I suggest we treat it as a separate patch/fix,
independent of the incremental sort patch. I don't want to bury it in
that patch series, it's already pretty large.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Consider low startup cost in add_partial_path

2019-09-27 Thread Robert Haas
On Fri, Sep 27, 2019 at 2:24 PM James Coleman  wrote:
> Over in the incremental sort patch discussion we found [1] a case
> where a higher cost plan ends up being chosen because a low startup
> cost partial path is ignored in favor of a lower total cost partial
> path, even though a limit applied on top of that would normally favor
> the lower startup cost plan.
>
> 45be99f8cd5d606086e0a458c9c72910ba8a613d originally added
> `add_partial_path` with the comment:
>
> > Neither do we need to consider startup costs:
> > parallelism is only used for plans that will be run to completion.
> > Therefore, this routine is much simpler than add_path: it needs to
> > consider only pathkeys and total cost.
>
> I'm not entirely sure if that is still true or not--I can't easily
> come up with a scenario in which it's not, but I also can't come up
> with an inherent reason why such a scenario cannot exist.

I think I just didn't think carefully about the Limit case.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company