Re: [HACKERS] Optimizer questions

2016-03-10 Thread Tom Lane
I wrote:
> As far as that goes, it seems to me after thinking about it that
> non-sort-column tlist items containing SRFs should always be postponed,
> too.  Performing a SRF before sorting bloats the sort data vertically,
> rather than horizontally, but it's still bloat.  (Although as against
> that, when you have ORDER BY + LIMIT, postponing SRFs loses the ability
> to use a bounded sort.)  The killer point though is that unless the sort
> is stable, it might cause surprising changes in the order of the SRF
> output values.  Our sorts aren't stable; here's an example in HEAD:

> # select q1, generate_series(1,9) from int8_tbl order by q1 limit 7;
>  q1  | generate_series 
> -----+-----------------
>  123 |               2
>  123 |               3
>  123 |               4
>  123 |               5
>  123 |               6
>  123 |               7
>  123 |               1
> (7 rows)

> I think that's pretty surprising, and if we have an opportunity to
> provide more intuitive semantics here, we should do it.

Here's an updated patch (requires current HEAD) that takes care of
window functions correctly and also does something reasonable with
set-returning functions:

# explain verbose select q1, generate_series(1,9) from int8_tbl order by q1 limit 7;
                              QUERY PLAN                               
-----------------------------------------------------------------------
 Limit  (cost=1.11..1.14 rows=7 width=12)
   Output: q1, (generate_series(1, 9))
   ->  Result  (cost=1.11..26.16 rows=5000 width=12)
         Output: q1, generate_series(1, 9)
         ->  Sort  (cost=1.11..1.12 rows=5 width=8)
               Output: q1
               Sort Key: int8_tbl.q1
               ->  Seq Scan on public.int8_tbl  (cost=0.00..1.05 rows=5 width=8)
                     Output: q1
(9 rows)

# select q1, generate_series(1,9) from int8_tbl order by q1 limit 7;
 q1  | generate_series 
-----+-----------------
 123 |               1
 123 |               2
 123 |               3
 123 |               4
 123 |               5
 123 |               6
 123 |               7
(7 rows)

I added some user-facing documentation too.  I think this is committable,
though maybe we should add a regression test case or two.
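
A minimal regression test could simply pin down the example above (a sketch
only, not taken from the patch):

select q1, generate_series(1,9) from int8_tbl order by q1 limit 7;
-- with postponed SRF evaluation the series values must come out as 1..7,
-- not in some sort-dependent order as in the HEAD example quoted above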

regards, tom lane

diff --git a/doc/src/sgml/ref/select.sgml b/doc/src/sgml/ref/select.sgml
index 44810f4..0520f2c 100644
*** a/doc/src/sgml/ref/select.sgml
--- b/doc/src/sgml/ref/select.sgml
*** UNBOUNDED FOLLOWING
*** 993,998 
--- 993,1028 
  cases it is not possible to specify new names with AS;
  the output column names will be the same as the table columns' names.
 
+ 
+
+ According to the SQL standard, the expressions in the output list should
+ be computed before applying DISTINCT, ORDER
+ BY, or LIMIT.  This is obviously necessary
+ when using DISTINCT, since otherwise it's not clear
+ what values are being made distinct.  However, in many cases it is
+ convenient if output expressions are computed after ORDER
+ BY and LIMIT; particularly if the output list
+ contains any volatile or expensive functions.  With that behavior, the
+ order of function evaluations is more intuitive and there will not be
+ evaluations corresponding to rows that never appear in the output.
+ PostgreSQL will effectively evaluate output expressions
+ after sorting and limiting, so long as those expressions are not
+ referenced in DISTINCT, ORDER BY
+ or GROUP BY.  (As a counterexample, SELECT
+ f(x) FROM tab ORDER BY 1 clearly must evaluate f(x)
+ before sorting.)  Output expressions that contain set-returning functions
+ are effectively evaluated after sorting and before limiting, so
+ that LIMIT will act to cut off the output from a
+ set-returning function.
+
+ 
+
+ 
+  PostgreSQL versions before 9.6 did not provide any
+  guarantees about the timing of evaluation of output expressions versus
+  sorting and limiting; it depended on the form of the chosen query plan.
+ 
+

  

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index a2cd6de..51f4ee4 100644
*** a/src/backend/optimizer/plan/planner.c
--- b/src/backend/optimizer/plan/planner.c
*** static RelOptInfo *create_distinct_paths
*** 127,132 
--- 127,133 
  	  RelOptInfo *input_rel);
  static RelOptInfo *create_ordered_paths(PlannerInfo *root,
  	 RelOptInfo *input_rel,
+ 	 PathTarget *target,
  	 double limit_tuples);
  static PathTarget *make_group_input_target(PlannerInfo *root, List *tlist);
  static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist);
*** static PathTarget *make_window_input_tar
*** 135,140 
--- 136,144 
  		 List *tlist, List *activeWindows);
  static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
  		 List 

Re: [HACKERS] Optimizer questions

2016-03-10 Thread Tom Lane
konstantin knizhnik  writes:
> But right now the rule for cost estimation makes it impossible to apply 
> this optimization to simple expressions like this: ...
> I wonder whether there is any advantage to earlier evaluation of such 
> simple expressions if they are not needed for the sort?

Well, as I said, my patch was intentionally written to be conservative
about when to change the semantics from what it's been for the last
twenty years.  I think there's a good argument for changing it for
volatile functions, but I'm less convinced that we should whack around
what happens in cases where there is not a clear benefit.  It's fairly
likely that there are users out there who have (perhaps without even
knowing it) optimized their queries to work well with eval-before-sort
behavior, perhaps by applying data-narrowing functions.  They won't
thank us for changing the behavior "just because".

If we had a more reliable idea of whether we are widening or narrowing
the sort data by postponing eval, I'd be willing to be more aggressive
about doing it.  But without that, I think it's best to be conservative
and only change when there's a pretty clear potential benefit.
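
To make the narrowing case concrete, here is an illustrative query (wide_col,
id, and tab are hypothetical names, not from this thread):

-- md5() reduces an arbitrarily wide text column to a 32-character hash,
-- so evaluating it before the Sort shrinks the per-row sort data;
-- postponing it would instead drag the wide (possibly toasted) column
-- through the sort
select md5(wide_col) from tab order by id;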

> Also I do not completely understand your concern about window functions.

The point is just that we have to not disassemble WindowFunc nodes when
building the sort-input tlist, in the same way that we don't disassemble
Aggref nodes.  If we are sorting the output of a WindowAgg, the WindowFunc
expressions have to appear as expressions in the WindowAgg's output tlist.
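
An illustrative query (not taken from the thread):

-- sorting by a window function's result: the Sort's input tlist must
-- carry the whole sum(q2) OVER (...) expression computed by the WindowAgg
-- below it, not just the component Vars q1 and q2
select q1, sum(q2) over (partition by q1) as w from int8_tbl order by w;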

>> I think it's probably also broken for SRFs in the tlist; we need to work
>> out what semantics we want for those.  If we postpone any SRF to after
>> the Sort, we can no longer assume that a query LIMIT enables use of
>> bounded sort (because the SRF might repeatedly return zero rows).
>> I don't have a huge problem with that, but I think now would be a good
>> time to nail down some semantics.

As far as that goes, it seems to me after thinking about it that
non-sort-column tlist items containing SRFs should always be postponed,
too.  Performing a SRF before sorting bloats the sort data vertically,
rather than horizontally, but it's still bloat.  (Although as against
that, when you have ORDER BY + LIMIT, postponing SRFs loses the ability
to use a bounded sort.)  The killer point though is that unless the sort
is stable, it might cause surprising changes in the order of the SRF
output values.  Our sorts aren't stable; here's an example in HEAD:

# select q1, generate_series(1,9) from int8_tbl order by q1 limit 7;
 q1  | generate_series 
-----+-----------------
 123 |               2
 123 |               3
 123 |               4
 123 |               5
 123 |               6
 123 |               7
 123 |               1
(7 rows)

I think that's pretty surprising, and if we have an opportunity to
provide more intuitive semantics here, we should do it.

regards, tom lane




Re: [HACKERS] Optimizer questions

2016-03-10 Thread konstantin knizhnik

On Mar 10, 2016, at 1:56 AM, Tom Lane wrote:

> Konstantin Knizhnik  writes:
>> I think that the best approach is to generate two different paths: the 
>> original one, where projection is always done before the sort, and another one 
>> with postponed projection of non-trivial columns. Then we compare the costs 
>> of the two paths and choose the best one.
>> Unfortunately, I do not understand now how to implement it with the existing 
>> grouping_planner.
>> Do you think that it is possible?
> 
> After fooling with this awhile, I don't think it's actually necessary
> to do that.  See attached proof-of-concept patch.

Oh, you did all my work :)

But right now the rule for cost estimation makes it impossible to apply this 
optimization to simple expressions like this:

postgres=# create table a (b integer);
CREATE TABLE
postgres=# insert into a values (generate_series(1, 10));
INSERT 0 10
postgres=# select b+b from a order by b limit 1;
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
NOTICE:  int4pl
 ?column? 
----------
        2
(1 row)
postgres=# create or replace function twice(x integer) returns integer as $$ 
begin raise notice 'exec function' ; return x + x ; end $$ language plpgsql;
CREATE FUNCTION
postgres=# select twice(b) from a order by b limit 1;   
NOTICE:  exec function
NOTICE:  int4pl
 twice 
-------
     2
(1 row)


I wonder whether there is any advantage to earlier evaluation of such simple 
expressions if they are not needed for the sort?
It seems to be true only for narrowing functions, like md5... But I think that 
is quite an exotic case, isn't it?
Maybe it is reasonable to be more optimistic in estimating the cost of postponed 
expression evaluation?


Also I do not completely understand your concern about window functions.
Is there any example of a query with window or aggregate functions where this 
optimization (postponing expression evaluation) can be applied?
It would also be interesting to know whether there are other cases (besides 
SORT+LIMIT) where delaying projection leads to a more efficient plan.



> 
> Although this patch gets through our regression tests, that's only because
> it's conservative about deciding to postpone evaluation; if it tried to
> postpone evaluation in a query with window functions, it would fail
> miserably, because pull_var_clause doesn't know about window functions.
> I think that that's a clear oversight and we should extend it to offer
> the same sorts of behaviors as it does for Aggrefs.  But that would be
> slightly invasive, there being a dozen or so callers; so I didn't bother
> to do it yet pending comments on this patch.
> 
> I think it's probably also broken for SRFs in the tlist; we need to work
> out what semantics we want for those.  If we postpone any SRF to after
> the Sort, we can no longer assume that a query LIMIT enables use of
> bounded sort (because the SRF might repeatedly return zero rows).
> I don't have a huge problem with that, but I think now would be a good
> time to nail down some semantics.
> 
> Some other work that would be useful would be to refactor so that the
> cost_qual_eval operations aren't so redundant.  But that's just a small
> time savings not a question of functionality.
> 
> And we'd have to document that this changes the behavior for volatile
> functions.  For the better, I think: this will mean that you get
> consistent results no matter whether the query is implemented by
> indexscan or seqscan-and-sort, which has never been true before.
> But it is a change.
> 
> Do people approve of this sort of change in general, or this patch
> approach in particular?  Want to bikeshed the specific
> when-to-postpone-eval policies implemented here?  Other comments?
> 
>   regards, tom lane
> 
> diff --git a/src/backend/optimizer/plan/planner.c 
> b/src/backend/optimizer/plan/planner.c
> index 8937e71..b15fca1 100644
> *** a/src/backend/optimizer/plan/planner.c
> --- b/src/backend/optimizer/plan/planner.c
> *** static RelOptInfo *create_distinct_paths
> *** 126,131 
> --- 126,132 
> RelOptInfo *input_rel);
>  static RelOptInfo *create_ordered_paths(PlannerInfo *root,
>RelOptInfo *input_rel,
> +  PathTarget *target,
>double limit_tuples);
>  static PathTarget *make_group_input_target(PlannerInfo *root, List *tlist);
>  static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist);
> *** static PathTarget *make_window_input_tar
> *** 134,139 
> --- 135,142 
>List *tlist, List 
> *activeWindows);
>  static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
>List *tlist);
> + 

Re: [HACKERS] Optimizer questions

2016-03-09 Thread Tom Lane
Konstantin Knizhnik  writes:
> I think that the best approach is to generate two different paths: the 
> original one, where projection is always done before the sort, and another one 
> with postponed projection of non-trivial columns. Then we compare the costs 
> of the two paths and choose the best one.
> Unfortunately, I do not understand now how to implement it with the existing 
> grouping_planner.
> Do you think that it is possible?

After fooling with this awhile, I don't think it's actually necessary
to do that.  See attached proof-of-concept patch.

Although this patch gets through our regression tests, that's only because
it's conservative about deciding to postpone evaluation; if it tried to
postpone evaluation in a query with window functions, it would fail
miserably, because pull_var_clause doesn't know about window functions.
I think that that's a clear oversight and we should extend it to offer
the same sorts of behaviors as it does for Aggrefs.  But that would be
slightly invasive, there being a dozen or so callers; so I didn't bother
to do it yet pending comments on this patch.

I think it's probably also broken for SRFs in the tlist; we need to work
out what semantics we want for those.  If we postpone any SRF to after
the Sort, we can no longer assume that a query LIMIT enables use of
bounded sort (because the SRF might repeatedly return zero rows).
I don't have a huge problem with that, but I think now would be a good
time to nail down some semantics.

Some other work that would be useful would be to refactor so that the
cost_qual_eval operations aren't so redundant.  But that's just a small
time savings not a question of functionality.

And we'd have to document that this changes the behavior for volatile
functions.  For the better, I think: this will mean that you get
consistent results no matter whether the query is implemented by
indexscan or seqscan-and-sort, which has never been true before.
But it is a change.

Do people approve of this sort of change in general, or this patch
approach in particular?  Want to bikeshed the specific
when-to-postpone-eval policies implemented here?  Other comments?

regards, tom lane

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 8937e71..b15fca1 100644
*** a/src/backend/optimizer/plan/planner.c
--- b/src/backend/optimizer/plan/planner.c
*** static RelOptInfo *create_distinct_paths
*** 126,131 
--- 126,132 
  	  RelOptInfo *input_rel);
  static RelOptInfo *create_ordered_paths(PlannerInfo *root,
  	 RelOptInfo *input_rel,
+ 	 PathTarget *target,
  	 double limit_tuples);
  static PathTarget *make_group_input_target(PlannerInfo *root, List *tlist);
  static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist);
*** static PathTarget *make_window_input_tar
*** 134,139 
--- 135,142 
  		 List *tlist, List *activeWindows);
  static List *make_pathkeys_for_window(PlannerInfo *root, WindowClause *wc,
  		 List *tlist);
+ static PathTarget *make_sort_input_target(PlannerInfo *root,
+ 	   PathTarget *final_target);
  
  
  /*
*** grouping_planner(PlannerInfo *root, bool
*** 1378,1383 
--- 1381,1387 
  	int64		offset_est = 0;
  	int64		count_est = 0;
  	double		limit_tuples = -1.0;
+ 	PathTarget *final_target;
  	RelOptInfo *current_rel;
  	RelOptInfo *final_rel;
  	ListCell   *lc;
*** grouping_planner(PlannerInfo *root, bool
*** 1437,1442 
--- 1441,1449 
  		/* Save aside the final decorated tlist */
  		root->processed_tlist = tlist;
  
+ 		/* Also extract the PathTarget form of the setop result tlist */
+ 		final_target = current_rel->cheapest_total_path->pathtarget;
+ 
  		/*
  		 * Can't handle FOR [KEY] UPDATE/SHARE here (parser should have
  		 * checked already, but let's make sure).
*** grouping_planner(PlannerInfo *root, bool
*** 1461,1467 
  	else
  	{
  		/* No set operations, do regular planning */
! 		PathTarget *final_target;
  		PathTarget *grouping_target;
  		PathTarget *scanjoin_target;
  		bool		have_grouping;
--- 1468,1474 
  	else
  	{
  		/* No set operations, do regular planning */
! 		PathTarget *sort_input_target;
  		PathTarget *grouping_target;
  		PathTarget *scanjoin_target;
  		bool		have_grouping;
*** grouping_planner(PlannerInfo *root, bool
*** 1657,1678 
  		final_target = create_pathtarget(root, tlist);
  
  		/*
  		 * If we have window functions to deal with, the output from any
  		 * grouping step needs to be what the window functions want;
! 		 * otherwise, it should just be final_target.
  		 */
  		if (activeWindows)
  			grouping_target = make_window_input_target(root,
  	   tlist,
  	   activeWindows);
  		else
! 			grouping_target = final_target;
  
  		/*
  		 * If we have grouping or 

Re: [HACKERS] Optimizer questions

2016-03-09 Thread Konstantin Knizhnik



On 09.03.2016 09:15, Tom Lane wrote:

I wrote:

BTW, there's some additional refactoring I had had in mind to do in
grouping_planner to make its handling of the targetlist a bit more
organized; in particular, I'd like to see it using PathTarget
representation more consistently throughout the post-scan-join steps.

See 51c0f63e4d76a86b44e87876a6addcfffb01ec28 --- I think this gets
things to where we could plug in additional levels of targets without
too much complication.

regards, tom lane


So, if I correctly understand you, there are two major concerns:

1. Volatile functions. I wonder if it is really required to re-evaluate a 
volatile function for each record even if a LIMIT clause is present?

The documentation says:
"A query using a volatile function will re-evaluate the function at 
every row where its value is needed."
So if we are using a sort with a limit and the value of a function is not used 
in the sort, then it is correct to say that the value of this function is not 
needed, so there is no need to re-evaluate it, isn't it?
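
To illustrate (a hypothetical query; tab and id are made-up names):

-- random() is volatile but not referenced by the sort, so under the
-- proposed semantics it would be evaluated only for the ten rows that
-- survive the LIMIT, not once per input row
select id, random() from tab order by id limit 10;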


2. Narrowing functions, like md5.
Here I do not have any good idea of how to support them. It looks like the 
cost of SORT should depend on the tuple width; only then can the optimizer 
determine whether it is more efficient to evaluate a function earlier or 
postpone its execution.


I think that the best approach is to generate two different paths: the 
original one, where projection is always done before the sort, and another one 
with postponed projection of non-trivial columns. Then we compare the costs 
of the two paths and choose the best one.
Unfortunately, I do not understand now how to implement it with the existing 
grouping_planner.

Do you think that it is possible?

An alternative approach is to do something like my proposed patch does, but 
take into account the cost of function execution and check for the presence 
of volatile/narrowing functions. This approach provides better flexibility, 
because we can choose the subset of columns not used in the sort whose 
evaluation should be postponed. But here we once again make a local 
decision during construction of the path instead of comparing the costs of 
full paths.





--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] Optimizer questions

2016-03-08 Thread Tom Lane
I wrote:
> BTW, there's some additional refactoring I had had in mind to do in
> grouping_planner to make its handling of the targetlist a bit more
> organized; in particular, I'd like to see it using PathTarget
> representation more consistently throughout the post-scan-join steps.

See 51c0f63e4d76a86b44e87876a6addcfffb01ec28 --- I think this gets
things to where we could plug in additional levels of targets without
too much complication.

regards, tom lane




Re: [HACKERS] Optimizer questions

2016-03-08 Thread Tom Lane
Konstantin Knizhnik  writes:
> On 03/08/2016 07:01 AM, Tom Lane wrote:
>> I've not spent a lot of time on this, but I think maybe what would make
>> sense is to consider both the case where function calculations are
>> postponed to after ORDER BY and the case where they aren't, and generate
>> Paths for both.  Neither approach is a slam-dunk win.  For example,
>> suppose that one of the tlist columns is md5(wide_column) --- it will
>> likely not be preferable to pass the wide column data through the sort
>> step rather than reducing it to a hash first.  This would require some
>> work in grouping_planner to track two possible pathtargets, and work in
>> create_ordered_paths to generate paths for both possibilities.  A possible
>> objection is that this would add planning work even when no real benefit
>> is possible; so maybe we should only consider the new way if the tlist has
>> significant eval cost?  Not sure about that.  There is also something
>> to be said for the idea that we should try to guarantee consistent
>> semantics when the tlist contains volatile functions.

> Attached please find the rebased patch.

This may be rebased, but it doesn't seem to respond to any of my concerns
above.  In particular, if we're going to change behavior in this area,
I think we need to try to ensure that volatile functions in the tlist will
see consistent behavior no matter which plan shape is chosen.  People may
well be depending on the existing behavior for particular queries.  If
we're going to break their queries, I'd at least like to be able to say
that there's now a predictable, well-defined semantics.  Something like
"volatile functions in the tlist that are not in sort/group columns are
certain to be evaluated after sorting occurs, if there is an ORDER BY".
This should not be dependent on whether there's a LIMIT, nor GROUP BY
nor windowing functions, nor on random other whims of the optimizer.
So I'm not at all happy with a patch that changes things only for the
LIMIT-but-no-grouping-or-windows case.

I'm not sure whether it's worth pursuing cost-based comparisons for
functions that aren't volatile.  In an ideal world we could deal with the
md5() example I gave, but as things stand I suspect we usually have too
little idea whether the result of f(x) is wider or narrower than x itself.
(One thing we do know is that f's output won't be toasted, whereas x might
be; so it might be a bad idea to bet on function results getting
narrower.)  So I'm afraid it's going to be really hard to tell whether
we're making the sort step itself more or less expensive if we postpone
function evals.  But we do know for certain that we're adding a projection
step that wasn't there before when we postpone functions to after the
Sort.  That kind of suggests that there should be some minimum estimated
cost of the functions before we add that extra step (unless they are
volatile of course).  Or only do it if there is a LIMIT or we have a
tuple_fraction suggesting that partial eval of the query is of interest.

BTW, there's some additional refactoring I had had in mind to do in
grouping_planner to make its handling of the targetlist a bit more
organized; in particular, I'd like to see it using PathTarget
representation more consistently throughout the post-scan-join steps.
I had figured that was just neatnik-ism and could be done later,
but we may need to do it now in order to be able to deal with these
considerations in a cleaner fashion.  I really don't like the way
that this patch hacks up the behavior of make_scanjoin_target() without
even a comment explaining its new API.

The rough sketch of what I'd had in mind is that we should convert the
processed, final tlist into PathTarget form immediately after
query_planner, since we know we're going to need that eventually anyway.
Then, if we need to do grouping/aggregation or windowing, derive modified
PathTargets that we want to use for the inputs of those steps.  (This'd
require some infrastructure that doesn't currently exist for manipulating
PathTargets, particularly the ability to copy a PathTarget and/or add a
column to an existing PathTarget.)

The way this optimization would fit into that structure is that the
DISTINCT/ORDER BY steps would now also have a desired input pathtarget
that might be different from the final target.

regards, tom lane




Re: [HACKERS] Optimizer questions

2016-03-08 Thread Konstantin Knizhnik

On 03/08/2016 07:01 AM, Tom Lane wrote:

Konstantin Knizhnik  writes:

Attached please find an improved version of the optimizer patch for the LIMIT clause.

This patch isn't anywhere close to working after 3fc6e2d7f5b652b4.
(TBH, the reason I was negative about this upthread is that I had that
one in the oven and knew it would conflict spectacularly.)  I encourage
you to think about how an optimization of this sort could be made to
work in a non-kluge fashion in the new code structure.

I've not spent a lot of time on this, but I think maybe what would make
sense is to consider both the case where function calculations are
postponed to after ORDER BY and the case where they aren't, and generate
Paths for both.  Neither approach is a slam-dunk win.  For example,
suppose that one of the tlist columns is md5(wide_column) --- it will
likely not be preferable to pass the wide column data through the sort
step rather than reducing it to a hash first.  This would require some
work in grouping_planner to track two possible pathtargets, and work in
create_ordered_paths to generate paths for both possibilities.  A possible
objection is that this would add planning work even when no real benefit
is possible; so maybe we should only consider the new way if the tlist has
significant eval cost?  Not sure about that.  There is also something
to be said for the idea that we should try to guarantee consistent
semantics when the tlist contains volatile functions.

For now, I've set this commitfest entry to Waiting on Author.  There's
still time to consider a rewrite in this 'fest, if you can get it done
in a week or two.

regards, tom lane


Attached please find the rebased patch.
Unfortunately 3fc6e2d7f5b652b4 still doesn't fix the problem with "lazy" 
evaluation of the target list.
This is why my patch is still useful. But frankly speaking I am not sure that 
it is the best way of fixing this problem, because it takes into account only 
one case: sort+limit. Maybe the same optimization can be useful for other 
queries.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 5fc8e5b..709d1ad 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -126,8 +126,9 @@ static RelOptInfo *create_ordered_paths(PlannerInfo *root,
 	 RelOptInfo *input_rel,
 	 double limit_tuples);
 static PathTarget *make_scanjoin_target(PlannerInfo *root, List *tlist,
-	 AttrNumber **groupColIdx);
+		AttrNumber **groupColIdx, bool* splitted_projection);
 static int	get_grouping_column_index(Query *parse, TargetEntry *tle);
+static int	get_sort_column_index(Query *parse, TargetEntry *tle);
 static List *postprocess_setop_tlist(List *new_tlist, List *orig_tlist);
 static List *select_active_windows(PlannerInfo *root, WindowFuncLists *wflists);
 static List *make_windowInputTargetList(PlannerInfo *root,
@@ -1381,6 +1382,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
 	RelOptInfo *current_rel;
 	RelOptInfo *final_rel;
 	ListCell   *lc;
+	bool splitted_projection = false;
 
 	/* Tweak caller-supplied tuple_fraction if have LIMIT/OFFSET */
 	if (parse->limitCount || parse->limitOffset)
@@ -1657,7 +1659,7 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
 		 * that were obtained within query_planner().
 		 */
 		sub_target = make_scanjoin_target(root, tlist,
-		  &groupColIdx);
+		  &groupColIdx, &splitted_projection);
 
 		/*
 		 * Forcibly apply that tlist to all the Paths for the scan/join rel.
@@ -1801,6 +1803,13 @@ grouping_planner(PlannerInfo *root, bool inheritance_update,
 	{
 		Path	   *path = (Path *) lfirst(lc);
 
+		if (splitted_projection)
+		{			
+			path = apply_projection_to_path(root, current_rel,
+			path, create_pathtarget(root, tlist));
+		}
+
+
 		/*
 		 * If there is a FOR [KEY] UPDATE/SHARE clause, add the LockRows node.
 		 * (Note: we intentionally test parse->rowMarks not root->rowMarks
@@ -3775,15 +3784,17 @@ create_ordered_paths(PlannerInfo *root,
 static PathTarget *
 make_scanjoin_target(PlannerInfo *root,
 	 List *tlist,
-	 AttrNumber **groupColIdx)
+	 AttrNumber **groupColIdx,
+	 bool* splitted_projection)
 {
 	Query	   *parse = root->parse;
-	List	   *sub_tlist;
-	List	   *non_group_cols;
+	List	   *sub_tlist = NIL;
+	List	   *non_group_cols = NIL;
 	List	   *non_group_vars;
 	int			numCols;
 
 	*groupColIdx = NULL;
+	*splitted_projection = false;
 
 	/*
 	 * If we're not grouping or aggregating or windowing, there's nothing to
@@ -3791,14 +3802,66 @@ make_scanjoin_target(PlannerInfo *root,
 	 */
 	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets &&
 		!root->hasHavingQual && !parse->hasWindowFuncs)
+	{
+		if (parse->sortClause && limit_needed(parse)) {
+			ListCell   *tl;
+			bool contains_non_vars = false;
+			

Re: [HACKERS] Optimizer questions

2016-03-08 Thread Alvaro Herrera
Tom Lane wrote:
> Konstantin Knizhnik  writes:
> > Attached please find an improved version of the optimizer patch for the LIMIT 
> > clause.

> For now, I've set this commitfest entry to Waiting on Author.  There's
> still time to consider a rewrite in this 'fest, if you can get it done
> in a week or two.

Yes.  Given that Konstantin will have to struggle to get this patch
rebased on top of upper-planner pathification which appeared out of the
blue at the last minute, it seems fair to give some additional time
for the required work.

However, we still have a commitfest schedule to adhere to, and
Konstantin has two other patches in the commitfest:
* Support ALTER INDEX ... WHERE ... clause
* eXtensible Transaction Manager API (v2)

and since we also need his contribution as a patch reviewer, it seems
unfair to just let all his patches move forward --- if we did that, he
would have no time at all to review others' patches, which is a
requirement.

Since we're only one week into the commitfest, I think it's his
prerogative to decide what to do.  I think there are two options: he can
either continue with this patch only, and get back from WoA to
Needs-Review in (hopefully) one week; or he can drop this one from the
commitfest right now and concentrate on the two other ones.  Either way,
as I already stated, we need his contribution as a reviewer for other
patches, too.

(If I were in his shoes, I wouldn't have any hope that the XTM patch
would go in for 9.6 at this point; the most I'd hope is to have lots of
feedback in order to have something to propose for early 9.7.  I don't
know the status of the ALTER INDEX one, so I can't comment there.)

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Optimizer questions

2016-03-07 Thread Tom Lane
Konstantin Knizhnik  writes:
> Attached please find an improved version of the optimizer patch for the LIMIT clause.

This patch isn't anywhere close to working after 3fc6e2d7f5b652b4.
(TBH, the reason I was negative about this upthread is that I had that
one in the oven and knew it would conflict spectacularly.)  I encourage
you to think about how an optimization of this sort could be made to
work in a non-kluge fashion in the new code structure.

I've not spent a lot of time on this, but I think maybe what would make
sense is to consider both the case where function calculations are
postponed to after ORDER BY and the case where they aren't, and generate
Paths for both.  Neither approach is a slam-dunk win.  For example,
suppose that one of the tlist columns is md5(wide_column) --- it will
likely not be preferable to pass the wide column data through the sort
step rather than reducing it to a hash first.  This would require some
work in grouping_planner to track two possible pathtargets, and work in
create_ordered_paths to generate paths for both possibilities.  A possible
objection is that this would add planning work even when no real benefit
is possible; so maybe we should only consider the new way if the tlist has
significant eval cost?  Not sure about that.  There is also something
to be said for the idea that we should try to guarantee consistent
semantics when the tlist contains volatile functions.

For now, I've set this commitfest entry to Waiting on Author.  There's
still time to consider a rewrite in this 'fest, if you can get it done
in a week or two.

regards, tom lane




Re: [HACKERS] Optimizer questions

2016-01-30 Thread Konstantin Knizhnik

Unfortunately these two statements are not equivalent: the second one can (in 
theory, though not for this particular data set) return more than 10 result 
records.
Such an optimization will be correct only if t2.i is declared as unique.

But the most efficient plan for this query will be generated if there is an 
index on t1.v.
In this case no explicit sort is needed. The limit is still not pushed down, 
but it is not a problem because a nested loop join is used, which does not 
materialize its result (it produces records on demand):

# explain analyze select * from t1 left outer join t2 on t1.k=t2.k order by t1.v limit 10;
                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.58..4.38 rows=10 width=16) (actual time=0.069..0.157 rows=10 loops=1)
   ->  Nested Loop Left Join  (cost=0.58..37926.63 rows=11 width=16) (actual time=0.067..0.154 rows=10 loops=1)
         ->  Index Scan using idxv on t1  (cost=0.29..3050.31 rows=11 width=8) (actual time=0.046..0.053 rows=10 loops=1)
         ->  Index Scan using t2_pkey on t2  (cost=0.29..0.34 rows=1 width=8) (actual time=0.007..0.007 rows=1 loops=10)
               Index Cond: (t1.k = k)
 Planning time: 0.537 ms
 Execution time: 0.241 ms
(7 rows)
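
For reference, DDL along these lines would produce such a plan (a sketch;
the actual definitions are not shown in the message):

create table t1 (k integer, v integer);
create table t2 (k integer primary key, v integer);  -- t2_pkey makes t2.k unique
create index idxv on t1 (v);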


On 01/30/2016 01:01 AM, Alexander Korotkov wrote:

On Fri, Jan 8, 2016 at 11:58 AM, Konstantin Knizhnik wrote:

Attached please find an improved version of the optimizer patch for the LIMIT 
clause.
Now I try to apply this optimization only to non-trivial columns requiring 
evaluation.
Maybe it would be better to take into account the cost of evaluating these 
columns, but right now I just detect non-variable columns.


We may think about a more general feature: LIMIT pushdown. In Konstantin's 
patch the planner pushes LIMIT before tlist calculation.
But there are other cases where applying LIMIT first would be beneficial. For 
instance, we can apply LIMIT before JOINs. That is possible only for a LEFT JOIN 
whose inner side is not used in filter and ordering clauses. See the example below.

# create table t1 as (select i, random() v from generate_series(1,100) i);
SELECT 100

# create table t2 as (select i, random() v from generate_series(1,100) i);
SELECT 100

# explain analyze select * from t1 left join t2 on t1.i = t2.i order by t1.v limit 10;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Limit  (cost=87421.64..87421.67 rows=10 width=24) (actual time=1486.276..1486.278 rows=10 loops=1)
   ->  Sort  (cost=87421.64..89921.64 rows=100 width=24) (actual time=1486.275..1486.275 rows=10 loops=1)
         Sort Key: t1.v
         Sort Method: top-N heapsort  Memory: 25kB
         ->  Hash Left Join  (cost=27906.00..65812.00 rows=100 width=24) (actual time=226.180..1366.238 rows=100
               Hash Cond: (t1.i = t2.i)
               ->  Seq Scan on t1  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.010..77.041 rows=100 l
               ->  Hash  (cost=15406.00..15406.00 rows=100 width=12) (actual time=226.066..226.066 rows=100 loop
                     Buckets: 131072  Batches: 1  Memory Usage: 46875kB
                     ->  Seq Scan on t2  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.007..89.002 rows=100
 Planning time: 0.123 ms
 Execution time: 1492.118 ms
(12 rows)

# explain analyze select * from (select * from t1 order by t1.v limit 10) t1 left join t2 on t1.i = t2.i;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Hash Right Join  (cost=37015.89..56171.99 rows=10 width=24) (actual time=198.478..301.278 rows=10 loops=1)
   Hash Cond: (t2.i = t1.i)
   ->  Seq Scan on t2  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.005..74.303 rows=100 loops=1)
   ->  Hash  (cost=37015.77..37015.77 rows=10 width=12) (actual time=153.584..153.584 rows=10 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 1kB
         ->  Limit  (cost=37015.64..37015.67 rows=10 width=12) (actual time=153.579..153.580 rows=10 loops=1)
               ->  Sort  (cost=37015.64..39515.64 rows=100 width=12) (actual time=153.578..153.578 rows=10 loops=1)
                     Sort Key: t1.v
                     Sort Method: top-N heapsort  Memory: 25kB
                     ->  Seq Scan on t1  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.012..78.828 rows=100
 Planning time: 0.132 ms
 Execution time: 301.308 ms
(12 rows)

In this example LIMIT pushdown makes the query 5 times faster. It would be very 
nice if the optimizer did this automatically.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com 
The Russian Postgres Company



--
Konstantin Knizhnik
Postgres Professional: 

Re: [HACKERS] Optimizer questions

2016-01-29 Thread Alexander Korotkov
On Fri, Jan 8, 2016 at 11:58 AM, Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:

> Attached please find an improved version of the optimizer patch for the LIMIT
> clause.
> Now I try to apply this optimization only to non-trivial columns
> requiring evaluation.
> Maybe it would be better to take into account the cost of evaluating these
> columns, but right now I just detect non-variable columns.
>

We may think about a more general feature: LIMIT pushdown. In
Konstantin's patch the planner pushes LIMIT before tlist calculation.
But there are other cases where applying LIMIT first would be beneficial.
For instance, we can apply LIMIT before JOINs. That is possible only for a LEFT
JOIN whose inner side is not used in filter and ordering clauses. See the example
below.

# create table t1 as (select i, random() v from generate_series(1,100) i);
SELECT 100

# create table t2 as (select i, random() v from generate_series(1,100) i);
SELECT 100

# explain analyze select * from t1 left join t2 on t1.i = t2.i order by t1.v limit 10;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Limit  (cost=87421.64..87421.67 rows=10 width=24) (actual time=1486.276..1486.278 rows=10 loops=1)
   ->  Sort  (cost=87421.64..89921.64 rows=100 width=24) (actual time=1486.275..1486.275 rows=10 loops=1)
         Sort Key: t1.v
         Sort Method: top-N heapsort  Memory: 25kB
         ->  Hash Left Join  (cost=27906.00..65812.00 rows=100 width=24) (actual time=226.180..1366.238 rows=100
               Hash Cond: (t1.i = t2.i)
               ->  Seq Scan on t1  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.010..77.041 rows=100 l
               ->  Hash  (cost=15406.00..15406.00 rows=100 width=12) (actual time=226.066..226.066 rows=100 loop
                     Buckets: 131072  Batches: 1  Memory Usage: 46875kB
                     ->  Seq Scan on t2  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.007..89.002 rows=100
 Planning time: 0.123 ms
 Execution time: 1492.118 ms
(12 rows)

# explain analyze select * from (select * from t1 order by t1.v limit 10) t1 left join t2 on t1.i = t2.i;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Hash Right Join  (cost=37015.89..56171.99 rows=10 width=24) (actual time=198.478..301.278 rows=10 loops=1)
   Hash Cond: (t2.i = t1.i)
   ->  Seq Scan on t2  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.005..74.303 rows=100 loops=1)
   ->  Hash  (cost=37015.77..37015.77 rows=10 width=12) (actual time=153.584..153.584 rows=10 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 1kB
         ->  Limit  (cost=37015.64..37015.67 rows=10 width=12) (actual time=153.579..153.580 rows=10 loops=1)
               ->  Sort  (cost=37015.64..39515.64 rows=100 width=12) (actual time=153.578..153.578 rows=10 loops=1)
                     Sort Key: t1.v
                     Sort Method: top-N heapsort  Memory: 25kB
                     ->  Seq Scan on t1  (cost=0.00..15406.00 rows=100 width=12) (actual time=0.012..78.828 rows=100
 Planning time: 0.132 ms
 Execution time: 301.308 ms
(12 rows)

In this example LIMIT pushdown makes the query 5 times faster. It would be very
nice if the optimizer did this automatically.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Optimizer questions

2016-01-18 Thread Gilles Darold
On 18/01/2016 03:47, Bruce Momjian wrote:
> On Tue, Jan  5, 2016 at 05:55:28PM +0300, konstantin knizhnik wrote:
>> Hi hackers, 
>>
>> I want to ask two questions about the PostgreSQL optimizer.
>> I have the following query:
>>
>> SELECT o.id as id,s.id as sid,o.owner,o.creator,o.parent_id
>> as dir_id,s.mime_id,m.c_type,s.p_file,s.mtime,o.ctime,o.name
>> ,o.title,o.size,o.deleted,la.otime,la.etime,uo.login as owner_login,uc.login 
>> as
>> creator_login,(CASE WHEN f.user_id IS NULL THEN 0 ELSE 1 END) AS flagged,
>> (select 'userid\\:'||string_agg(user_id,' userid\\:') from 
>> get_authorized_users
>> (o.id)) as acl FROM objects s JOIN objects o ON s.original_file=o.id LEFT 
>> JOIN
>> flags f ON o.id=f.obj_id AND o.owner=f.user_id LEFT JOIN 
>> objects_last_activity
>> la ON o.id = la.obj_id AND o.owner = la.user_id, mime m, users uc , users uo
>> WHERE (s.mime_id=904 or s.mime_id=908) AND m.mime_id = o.mime_id AND o.owner 
>> =
>> uo.user_id AND o.creator = uc.user_id ORDER BY s.mtime LIMIT 9;
> FYI, I could not make any sense out of this query, and I frankly can't
> figure out how others can understand it.  :-O   Anyway, I ran it through
> pgFormatter (https://github.com/darold/pgFormatter), which I am showing
> here because I was impressed with the results:
>
>   SELECT
>   o.id AS id,
>   s.id AS sid,
>   o.owner,
>   o.creator,
>   o.parent_id AS dir_id,
>   s.mime_id,
>   m.c_type,
>   s.p_file,
>   s.mtime,
>   o.ctime,
>   o.name,
>   o.title,
>   o.size,
>   o.deleted,
>   la.otime,
>   la.etime,
>   uo.login AS owner_login,
>   uc.login AS creator_login,
>   (
>   CASE
>   WHEN f.user_id IS NULL THEN 0
>   ELSE 1
>   END ) AS flagged,
>   (
>   SELECT
>   'userid\\:' || string_agg (
>   user_id,
>   ' userid\\:' )
>   FROM
>   get_authorized_users (
>   o.id ) ) AS acl
>   FROM
>   objects s
>   JOIN objects o ON s.original_file = o.id
>   LEFT JOIN flags f ON o.id = f.obj_id
>   AND o.owner = f.user_id
>   LEFT JOIN objects_last_activity la ON o.id = la.obj_id
>   AND o.owner = la.user_id,
>   mime m,
>   users uc,
>   users uo
>   WHERE (
>   s.mime_id = 904
>   OR s.mime_id = 908 )
>   AND m.mime_id = o.mime_id
>   AND o.owner = uo.user_id
>   AND o.creator = uc.user_id
>   ORDER BY
>   s.mtime
>   LIMIT 9;
>


Thanks Bruce for the pointer to this tool; even if it is not perfect, I
think it can be useful. There's also an on-line version at
http://sqlformat.darold.net/ that everyone can use without having to
install it, to format queries up to 20Kb with options to control the
output format. It can also anonymize SQL queries.

Actually this is the SQL formatter/beautifier used in pgBadger; it has
been extracted as an independent project to make it possible to improve
this part of pgBadger without having to run pgBadger each time.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] Optimizer questions

2016-01-18 Thread Konstantin Knizhnik
I am sorry for the badly formatted query - I just cut it from a C++ 
client program.


I have one more question to the community regarding this patch.
The proposed patch fixes the problem specifically for SORT+LIMIT clauses.
In this case evaluation of expressions which are not used in the sort is 
always a waste of time.
But I wonder if we should delay evaluation of complex expressions even 
if there is no LIMIT?
Quite often a client application doesn't fetch all query results even if 
there is no LIMIT clause.
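
For example, with a cursor (a sketch; expensive_fn and tab are hypothetical
names):

begin;
declare c cursor for select expensive_fn(x) from tab order by x;
fetch 10 from c;  -- the client stops here; with postponed evaluation,
                  -- expensive_fn would run only for the fetched rows
close c;
commit;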




On 18.01.2016 05:47, Bruce Momjian wrote:

On Tue, Jan  5, 2016 at 05:55:28PM +0300, konstantin knizhnik wrote:

Hi hackers,

I want to ask two questions about the PostgreSQL optimizer.
I have the following query:

SELECT o.id as id,s.id as sid,o.owner,o.creator,o.parent_id
as dir_id,s.mime_id,m.c_type,s.p_file,s.mtime,o.ctime,o.name
,o.title,o.size,o.deleted,la.otime,la.etime,uo.login as owner_login,uc.login as
creator_login,(CASE WHEN f.user_id IS NULL THEN 0 ELSE 1 END) AS flagged,
(select 'userid\\:'||string_agg(user_id,' userid\\:') from get_authorized_users
(o.id)) as acl FROM objects s JOIN objects o ON s.original_file=o.id LEFT JOIN
flags f ON o.id=f.obj_id AND o.owner=f.user_id LEFT JOIN objects_last_activity
la ON o.id = la.obj_id AND o.owner = la.user_id, mime m, users uc , users uo
WHERE (s.mime_id=904 or s.mime_id=908) AND m.mime_id = o.mime_id AND o.owner =
uo.user_id AND o.creator = uc.user_id ORDER BY s.mtime LIMIT 9;

FYI, I could not make any sense out of this query, and I frankly can't
figure out how others can understand it.  :-O   Anyway, I ran it through
pgFormatter (https://github.com/darold/pgFormatter), which I am showing
here because I was impressed with the results:

SELECT
o.id AS id,
s.id AS sid,
o.owner,
o.creator,
o.parent_id AS dir_id,
s.mime_id,
m.c_type,
s.p_file,
s.mtime,
o.ctime,
o.name,
o.title,
o.size,
o.deleted,
la.otime,
la.etime,
uo.login AS owner_login,
uc.login AS creator_login,
(
CASE
WHEN f.user_id IS NULL THEN 0
ELSE 1
END ) AS flagged,
(
SELECT
'userid\\:' || string_agg (
user_id,
' userid\\:' )
FROM
get_authorized_users (
o.id ) ) AS acl
FROM
objects s
JOIN objects o ON s.original_file = o.id
LEFT JOIN flags f ON o.id = f.obj_id
AND o.owner = f.user_id
LEFT JOIN objects_last_activity la ON o.id = la.obj_id
AND o.owner = la.user_id,
mime m,
users uc,
users uo
WHERE (
s.mime_id = 904
OR s.mime_id = 908 )
AND m.mime_id = o.mime_id
AND o.owner = uo.user_id
AND o.creator = uc.user_id
ORDER BY
s.mtime
LIMIT 9;



--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [HACKERS] Optimizer questions

2016-01-17 Thread Bruce Momjian
On Tue, Jan  5, 2016 at 05:55:28PM +0300, konstantin knizhnik wrote:
> Hi hackers, 
> 
> I want to ask two questions about the PostgreSQL optimizer.
> I have the following query:
> 
> SELECT o.id as id,s.id as sid,o.owner,o.creator,o.parent_id
> as dir_id,s.mime_id,m.c_type,s.p_file,s.mtime,o.ctime,o.name
> ,o.title,o.size,o.deleted,la.otime,la.etime,uo.login as owner_login,uc.login 
> as
> creator_login,(CASE WHEN f.user_id IS NULL THEN 0 ELSE 1 END) AS flagged,
> (select 'userid\\:'||string_agg(user_id,' userid\\:') from 
> get_authorized_users
> (o.id)) as acl FROM objects s JOIN objects o ON s.original_file=o.id LEFT JOIN
> flags f ON o.id=f.obj_id AND o.owner=f.user_id LEFT JOIN objects_last_activity
> la ON o.id = la.obj_id AND o.owner = la.user_id, mime m, users uc , users uo
> WHERE (s.mime_id=904 or s.mime_id=908) AND m.mime_id = o.mime_id AND o.owner =
> uo.user_id AND o.creator = uc.user_id ORDER BY s.mtime LIMIT 9;

FYI, I could not make any sense out of this query, and I frankly can't
figure out how others can understand it.  :-O   Anyway, I ran it through
pgFormatter (https://github.com/darold/pgFormatter), which I am showing
here because I was impressed with the results:

SELECT
o.id AS id,
s.id AS sid,
o.owner,
o.creator,
o.parent_id AS dir_id,
s.mime_id,
m.c_type,
s.p_file,
s.mtime,
o.ctime,
o.name,
o.title,
o.size,
o.deleted,
la.otime,
la.etime,
uo.login AS owner_login,
uc.login AS creator_login,
(
CASE
WHEN f.user_id IS NULL THEN 0
ELSE 1
END ) AS flagged,
(
SELECT
'userid\\:' || string_agg (
user_id,
' userid\\:' )
FROM
get_authorized_users (
o.id ) ) AS acl
FROM
objects s
JOIN objects o ON s.original_file = o.id
LEFT JOIN flags f ON o.id = f.obj_id
AND o.owner = f.user_id
LEFT JOIN objects_last_activity la ON o.id = la.obj_id
AND o.owner = la.user_id,
mime m,
users uc,
users uo
WHERE (
s.mime_id = 904
OR s.mime_id = 908 )
AND m.mime_id = o.mime_id
AND o.owner = uo.user_id
AND o.creator = uc.user_id
ORDER BY
s.mtime
LIMIT 9;

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




Re: [HACKERS] Optimizer questions

2016-01-08 Thread Konstantin Knizhnik


Attached please find an improved version of the optimizer patch for the LIMIT clause.
Now I try to apply this optimization only to non-trivial columns requiring 
evaluation.
Maybe it would be better to take into account the cost of evaluating these 
columns, but right now I just detect non-variable columns.




On 01/06/2016 12:03 PM, David Rowley wrote:

On 6 January 2016 at 13:13, Alexander Korotkov wrote:

On Wed, Jan 6, 2016 at 12:08 AM, Tom Lane wrote:

konstantin knizhnik writes:
> 1. The cost compared in grouping_planner doesn't take into account the 
cost of get_authorized_users - it is not changed when I alter the function 
cost. Is this correct behavior?
The general problem of accounting for tlist eval cost is not handled 
very
well now, but especially not with respect to the idea that different 
paths
might have different tlist costs.  I'm working on an upper-planner 
rewrite
which should make this better, or at least make it practical to make it
better.


Hmm... Besides costing it would be nice to postpone calculation of 
expensive tlist functions after LIMIT.


I'd agree that it would be more than the costings that would need to be 
improved here.

The simplest demonstration of the problem I can think of is to apply the 
following:

diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index 29d92a7..2ec9822 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -641,6 +641,8 @@ int4pl(PG_FUNCTION_ARGS)

result = arg1 + arg2;

+   elog(NOTICE, "int4pl(%d, %d)", arg1,arg2);
+
/*
 * Overflow check.  If the inputs are of different signs then their sum
 * cannot overflow.  If the inputs are of the same sign, their sum had

Then do:

create table a (b int);
insert into a select generate_series(1,10);
select b+b as bb from a order by b limit 1;
NOTICE:  int4pl(1, 1)
NOTICE:  int4pl(2, 2)
NOTICE:  int4pl(3, 3)
NOTICE:  int4pl(4, 4)
NOTICE:  int4pl(5, 5)
NOTICE:  int4pl(6, 6)
NOTICE:  int4pl(7, 7)
NOTICE:  int4pl(8, 8)
NOTICE:  int4pl(9, 9)
NOTICE:  int4pl(10, 10)
 bb 
----
  2
(1 row)

We can see that int4pl() is needlessly called 9 times. Although, I think this 
only applies to queries with LIMIT. I agree that it does seem like an 
interesting route for optimisation.

It seems worthwhile to investigate how we might go about improving this so that 
the evaluation of the target list happens after LIMIT, at least for the columns 
which are not required before LIMIT.

Konstantin, are you thinking of looking into this more, with plans to implement 
code to improve this?

--
 David Rowley http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 797df31..4cbccac 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -106,6 +106,7 @@ static bool choose_hashed_distinct(PlannerInfo *root,
 static List *make_subplanTargetList(PlannerInfo *root, List *tlist,
 	   AttrNumber **groupColIdx, bool *need_tlist_eval);
 static int	get_grouping_column_index(Query *parse, TargetEntry *tle);
+static int	get_sort_column_index(Query *parse, TargetEntry *tle);
 static void locate_grouping_columns(PlannerInfo *root,
 		List *tlist,
 		List *sub_tlist,
@@ -2402,6 +2403,13 @@ grouping_planner(PlannerInfo *root, double tuple_fraction)
 		  parse->limitCount,
 		  offset_est,
 		  count_est);
+		if (parse->sortClause && tlist != result_plan->targetlist)
+		{			
+			result_plan = (Plan *) make_result(root,
+			   tlist,
+			   NULL,
+			   result_plan);			
+		}
 	}
 
 	/*
@@ -3964,8 +3972,8 @@ make_subplanTargetList(PlannerInfo *root,
 	   bool *need_tlist_eval)
 {
 	Query	   *parse = root->parse;
-	List	   *sub_tlist;
-	List	   *non_group_cols;
+	List	   *sub_tlist = NIL;
+	List	   *non_group_cols = NIL;
 	List	   *non_group_vars;
 	int			numCols;
 
@@ -3978,6 +3986,61 @@ make_subplanTargetList(PlannerInfo *root,
 	if (!parse->hasAggs && !parse->groupClause && !parse->groupingSets && !root->hasHavingQual &&
 		!parse->hasWindowFuncs)
 	{
+		if (parse->sortClause && limit_needed(parse)) {
+			ListCell   *tl;
+			bool contains_non_vars = false;
+			*need_tlist_eval = false;	/* only eval if not flat tlist */
+			foreach(tl, tlist)
+			{
+TargetEntry *tle = (TargetEntry *) lfirst(tl);
+int			colno;
+
+colno = get_sort_column_index(parse, tle);
+if (colno >= 0)
+{
+	TargetEntry *newtle;
+	
+	newtle = makeTargetEntry(tle->expr,
+			 list_length(sub_tlist) 

Re: [HACKERS] Optimizer questions

2016-01-06 Thread Konstantin Knizhnik

On 01/06/2016 12:03 PM, David Rowley wrote:

Konstantin, are you thinking of looking into this more, with plans to implement 
code to improve this?



I am not familiar with the PostgreSQL optimizer, but I am now looking inside 
its code and trying to find a way to fix this problem.
But if somebody who is familiar with the optimizer already knows a solution, 
please let me know so that I do not spend time on this.

Actually I expect that most of the logic is already present in the optimizer: 
there are several places where it is decided where and how to evaluate the tlist.
For example, there are the functions use_physical_tlist/disuse_physical_tlist, 
which toggle between using the relation's targetlist and only those Vars 
actually needed by the query. I am not sure whether they are related to this 
particular case...

So most likely it will be enough to insert just one function call in the proper 
place, but it is necessary to find the function and the place ;)



--
 David Rowley http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Optimizer questions

2016-01-06 Thread David Rowley
On 6 January 2016 at 13:13, Alexander Korotkov wrote:

> On Wed, Jan 6, 2016 at 12:08 AM, Tom Lane  wrote:
>
>> konstantin knizhnik  writes:
>> > 1. The cost compared in grouping_planner doesn't take into account the cost
>> of get_authorized_users - it is not changed when I alter the function
>> cost. Is this correct behavior?
>>
>> The general problem of accounting for tlist eval cost is not handled very
>> well now, but especially not with respect to the idea that different paths
>> might have different tlist costs.  I'm working on an upper-planner rewrite
>> which should make this better, or at least make it practical to make it
>> better.
>>
>
> Hmm... Besides costing it would be nice to postpone calculation of
> expensive tlist functions after LIMIT.
>

I'd agree that it would be more than the costings that would need to be
improved here.

The simplest demonstration of the problem I can think of is to apply
the following:

diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c
index 29d92a7..2ec9822 100644
--- a/src/backend/utils/adt/int.c
+++ b/src/backend/utils/adt/int.c
@@ -641,6 +641,8 @@ int4pl(PG_FUNCTION_ARGS)

result = arg1 + arg2;

+   elog(NOTICE, "int4pl(%d, %d)", arg1,arg2);
+
/*
 * Overflow check.  If the inputs are of different signs then their sum
 * cannot overflow.  If the inputs are of the same sign, their sum had

Then do:

create table a (b int);
insert into a select generate_series(1,10);
select b+b as bb from a order by b limit 1;
NOTICE:  int4pl(1, 1)
NOTICE:  int4pl(2, 2)
NOTICE:  int4pl(3, 3)
NOTICE:  int4pl(4, 4)
NOTICE:  int4pl(5, 5)
NOTICE:  int4pl(6, 6)
NOTICE:  int4pl(7, 7)
NOTICE:  int4pl(8, 8)
NOTICE:  int4pl(9, 9)
NOTICE:  int4pl(10, 10)
 bb 
----
  2
(1 row)

We can see that int4pl() is needlessly called 9 times. Although, I think
this only applies to queries with LIMIT. I agree that it does seem like
an interesting route for optimisation.

It seems worthwhile to investigate how we might go about improving this so
that the evaluation of the target list happens after LIMIT, at least for
the columns which are not required before LIMIT.

Konstantin, are you thinking of looking into this more, with plans to
implement code to improve this?

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Optimizer questions

2016-01-06 Thread Simon Riggs
On 6 January 2016 at 09:03, David Rowley wrote:


> It seems worthwhile to investigate how we might go about improving this so
> that the evaluation of the target list happens after LIMIT, at least for
> the columns which are not required before LIMIT.
>

Currently, the FunctionScan executes until it decides to stop. Then later
the LIMIT is applied.

To change this, you'd need to pass down the LIMIT value into the
FunctionScan node, as is already done between Limit/Sort nodes.

On the FunctionScan side, you can set the max_calls value in the
FuncCallContext, but currently that isn't enforced.
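
A quick way to see the current behaviour from SQL (a sketch using a stock
SRF):

-- The FunctionScan materializes all one million rows into its
-- tuplestore before the Limit node discards everything past the third
-- row, so the LIMIT saves no work inside the function itself.
select g from generate_series(1, 1000000) g limit 3;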

-- 
Simon Riggs               http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


[HACKERS] Optimizer questions

2016-01-05 Thread konstantin knizhnik
Hi hackers, 

I want to ask two questions about the PostgreSQL optimizer.
I have the following query:

SELECT o.id as id, s.id as sid, o.owner, o.creator, o.parent_id as dir_id,
       s.mime_id, m.c_type, s.p_file, s.mtime, o.ctime, o.name, o.title,
       o.size, o.deleted, la.otime, la.etime,
       uo.login as owner_login, uc.login as creator_login,
       (CASE WHEN f.user_id IS NULL THEN 0 ELSE 1 END) AS flagged,
       (select 'userid\\:'||string_agg(user_id,' userid\\:')
          from get_authorized_users(o.id)) as acl
FROM objects s
     JOIN objects o ON s.original_file=o.id
     LEFT JOIN flags f ON o.id=f.obj_id AND o.owner=f.user_id
     LEFT JOIN objects_last_activity la ON o.id = la.obj_id AND o.owner = la.user_id,
     mime m, users uc, users uo
WHERE (s.mime_id=904 or s.mime_id=908)
  AND m.mime_id = o.mime_id
  AND o.owner = uo.user_id
  AND o.creator = uc.user_id
ORDER BY s.mtime LIMIT 9;

It executes for more than an hour, while the same query with "limit 8"
executes in just a few seconds.
get_authorized_users is quite an expensive function, performing a recursive
query through several tables.

In the first case (limit 9) a sequential scan with a sort is used, while in
the second case an index scan on the "mtime" index is performed, so no sort
is needed. In the second case we can extract just 9 rows and calculate
get_authorized_users for them; in the first case we have to calculate this
function for ALL (~15M) rows.

I investigated the plans of both queries and found three causes of the
problem:

1. Wrong estimate of filter selectivity: 65549 vs. 154347 actual.
2. Wrong estimate of join selectivity: 284 vs. 65549 actual.
3. Wrong estimate of get_authorized_users cost: 0.25 vs. 57.379 actual.

Below is the output of EXPLAIN ANALYZE:

Limit  (cost=1595355.77..1595355.79 rows=9 width=301) (actual time=3823128.752..3823128.755 rows=9 loops=1)
  ->  Sort  (cost=1595355.77..1595356.48 rows=284 width=301) (actual time=3823128.750..3823128.753 rows=9 loops=1)
        Sort Key: s.mtime
        Sort Method: top-N heapsort  Memory: 30kB
        ->  Nested Loop  (cost=1.96..1595349.85 rows=284 width=301) (actual time=75.453..3822829.640 rows=65549 loops=1)
              ->  Nested Loop Left Join  (cost=1.54..1589345.66 rows=284 width=271) (actual time=1.457..59068.107 rows=65549 loops=1)
                    ->  Nested Loop Left Join  (cost=0.98..1586923.67 rows=284 width=255) (actual time=1.430..55661.926 rows=65549 loops=1)
                          Join Filter: (((o.id)::text = (f.obj_id)::text) AND ((o.owner)::text = (f.user_id)::text))
                          Rows Removed by Join Filter: 132670698
                          ->  Nested Loop  (cost=0.98..1576821.09 rows=284 width=236) (actual time=0.163..27721.566 rows=65549 loops=1)
                                Join Filter: ((o.mime_id)::integer = (m.mime_id)::integer)
                                Rows Removed by Join Filter: 48768456
                                ->  Seq Scan on mime m  (cost=0.00..13.45 rows=745 width=30) (actual time=0.008..1.372 rows=745 loops=1)
                                ->  Materialize  (cost=0.98..1573634.65 rows=284 width=214) (actual time=0.004..22.918 rows=65549 loops=745)
                                      ->  Nested Loop  (cost=0.98..1573633.23 rows=284 width=214) (actual time=0.142..7095.554 rows=65549 loops=1)
                                            ->  Nested Loop  (cost=0.56..1570232.37 rows=406 width=184) (actual time=0.130..6468.376 rows=65549 loops=1)
                                                  ->  Seq Scan on objects s  (cost=0.00..1183948.58 rows=45281 width=63) (actual time=0.098..5346.023 rows=154347 loops=1)
                                                        Filter: (((mime_id)::integer = 904) OR ((mime_id)::integer = 908))
                                                        Rows Removed by Filter: 15191155
                                                  ->  Index Scan using objects_pkey on objects o  (cost=0.56..8.52 rows=1 width=140) (actual time=0.006..0.007 rows=0 loops=154347)
                                                        Index Cond: ((id)::text = (s.original_file)::text)
                                            ->  Index Scan using users_pkey on users uc  (cost=0.42..8.37 rows=1 width=49) (actual time=0.008..0.009 rows=1 loops=65549)
                                                  Index Cond: ((user_id)::text = (o.creator)::text)
                          ->  Materialize  (cost=0.00..48.36 rows=2024 width=38) (actual time=0.001..0.133 rows=2024 loops=65549)
                                ->  Seq Scan on flags f  (cost=0.00..38.24 rows=2024 width=38) (actual time=0.009..0.462 rows=2024 loops=1)
                    ->  Index Scan using objects_last_activity_unique_index on objects_last_activity la  (cost=0.56..8.52 rows=1 width=54) (actual time=0.044..0.046 rows=1 loops=65549)
                          Index Cond: (((o.id)::text = (obj_id)::text) AND ((o.owner)::text = 

Re: [HACKERS] Optimizer questions

2016-01-05 Thread Tom Lane
konstantin knizhnik  writes:
> 1. The cost compared in grouping_planner doesn't take in account price of 
> get_authorized_users - it is not changed when I am altering function cost. Is 
> it correct behavior?

The general problem of accounting for tlist eval cost is not handled very
well now, but especially not with respect to the idea that different paths
might have different tlist costs.  I'm working on an upper-planner rewrite
which should make this better, or at least make it practical to make it
better.

In the meantime you can probably force things to work the way you want
by doing the sort/limit in a sub-select and evaluating the expensive
function only in the outer select.
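
Something along these lines, for example (a minimal sketch with a toy
table and function, both hypothetical):

create table docs (id int, mtime timestamptz);

create function expensive(int) returns text language sql cost 10000 as $$
    select pg_sleep(1);    -- stand-in for the costly recursive lookup
    select $1::text;
$$;

-- expensive() is evaluated only for the 9 rows that survive the
-- sort/limit; the LIMIT in the sub-select keeps it from being
-- flattened into the outer query.
select t.id, t.mtime, expensive(t.id) as acl
from (select id, mtime from docs order by mtime limit 9) as t;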

regards, tom lane




Re: [HACKERS] Optimizer questions

2016-01-05 Thread Alexander Korotkov
On Wed, Jan 6, 2016 at 12:08 AM, Tom Lane  wrote:

> konstantin knizhnik  writes:
> > 1. The cost compared in grouping_planner doesn't take in account price
> of get_authorized_users - it is not changed when I am altering function
> cost. Is it correct behavior?
>
> The general problem of accounting for tlist eval cost is not handled very
> well now, but especially not with respect to the idea that different paths
> might have different tlist costs.  I'm working on an upper-planner rewrite
> which should make this better, or at least make it practical to make it
> better.
>

Hmm... Besides costing, it would be nice to postpone calculation of
expensive tlist functions until after LIMIT.

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] optimizer questions

2006-02-15 Thread Jens-Wolfhard Schicke
--On Tuesday, February 14, 2006 10:35:12 -0600 hector Corrada Bravo
[EMAIL PROTECTED] wrote:

> Before I start trying this (creating aggregate paths seems the
> reasonable thing to do) I would like your counsel.
>
> 1) Regardless of the optimization problem, is the executor able to
> execute aggregate nodes within join trees (that is, not as the result
> of subqueries)?
>
> 2) Has anyone tried something like this before?

I did, and failed because I did not quite understand how the Postgres
internal variables should be initialized.

My approach was to supply additional join paths if one of the two tables
was an aggregate.


Kind regards,
Jens Schicke
--
Jens Schicke  [EMAIL PROTECTED]
asco GmbH http://www.asco.de
Mittelweg 7   Tel 0531/3906-127
38106 BraunschweigFax 0531/3906-400



[HACKERS] optimizer questions

2006-02-14 Thread hector Corrada Bravo
Hello everyone,

I am working with the Postgres optimizer for the first time, so bear with me...

I want to extend the optimizer to deal with aggregate queries a bit
better. The idea is from an old paper by Chaudhuri and Shim in VLDB
94. The gist of it is that when computing aggregates over the result
of joining multiple tables, under some conditions the aggregate can be
pushed into the join tree to reduce the size of join operands making
resulting plans cheaper.
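
As a concrete illustration of the transformation (done here by hand with
hypothetical tables; nothing the planner does today):

-- Original form: the join produces one row per fact row before the
-- aggregate runs.
SELECT d.name, count(*)
FROM fact f JOIN dim d ON f.dim_id = d.id
GROUP BY d.name;

-- Pushed-down form: pre-aggregate the large input first, then join the
-- much smaller partial counts and combine them.  The two forms agree
-- when dim.id is a key, which is the sort of condition the paper
-- formalizes.
SELECT d.name, sum(f.cnt)
FROM (SELECT dim_id, count(*) AS cnt
      FROM fact
      GROUP BY dim_id) f
JOIN dim d ON f.dim_id = d.id
GROUP BY d.name;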

So here is my problem: due to the path/plan separation of the Postgres
optimizer, this is not trivial (joins are decided at path time, aggregates
at plan time). As it stands, aggregate nodes can only appear as the top
node of subqueries.

Before I start trying this (creating aggregate paths seems the
reasonable thing to do) I would like your counsel.

1) Regardless of the optimization problem, is the executor able to
execute aggregate nodes within join trees (that is, not as the result
of subqueries)?

2) Has anyone tried something like this before?

3) For debugging purposes: Has anyone figured out a way to feed
hand-crafted plans to the executor? Setting up some of the data
structures (PlannerInfo, target lists) etc. does not look trivial. By
this I mean, beyond giving explicit join clauses in queries.

4) Any other suggestions?

Thank you very much,
Hector



Re: [HACKERS] optimizer questions

2006-02-14 Thread Martijn van Oosterhout
On Tue, Feb 14, 2006 at 10:35:12AM -0600, hector Corrada Bravo wrote:
> Hello everyone,
>
> I am working with the Postgres optimizer for the first time, so bear
> with me...
>
> I want to extend the optimizer to deal with aggregate queries a bit
> better. The idea is from an old paper by Chaudhuri and Shim in VLDB
> 94. The gist of it is that when computing aggregates over the result
> of joining multiple tables, under some conditions the aggregate can be
> pushed into the join tree to reduce the size of join operands making
> resulting plans cheaper.
>
> So here is my problem, due to the path/plan separation of the Postgres
> optimizer, this is not trivial (joins are decided in path, aggregates
> decided in plan). As it stands, aggregate nodes can only appear as the
> top node of subqueries.

Well, the basic approach to begin with should probably be to change the
planner so that it modifies the plan, giving you a new plan with the
aggregation in the right place.

> Before I start trying this (creating aggregate paths seems the
> reasonable thing to do) I would like your counsel.
>
> 1) Regardless of the optimization problem, is the executor able to
> execute aggregate nodes within join trees (that is, not as the result
> of subqueries)?

An Aggregate is just a node type that can be stacked above any other
node. Of course, it requires the input rows to be in the appropriate
order, but other than that there is no restriction.

> 2) Has anyone tried something like this before?

No idea.

> 3) For debugging purposes: Has anyone figured out a way to feed
> hand-crafted plans to the executor? Setting up some of the data
> structures (PlannerInfo, target lists) etc. does not look trivial. By
> this I mean, beyond giving explicit join clauses in queries.

Nope, sorry.

> 4) Any other suggestions?

Well, I honestly have no idea what kind of transformations you have in
mind, so not really. However, you should probably realise that in
PostgreSQL aggregates are not special and users can make their own;
whatever you come up with needs to take that into account.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] optimizer questions

2006-02-14 Thread Tom Lane
hector Corrada Bravo [EMAIL PROTECTED] writes:
> 1) Regardless of the optimization problem, is the executor able to
> execute aggregate nodes within join trees (that is, not as the result
> of subqueries)?

Sure.

> 3) For debugging purposes: Has anyone figured out a way to feed
> hand-crafted plans to the executor? Setting up some of the data
> structures (PlannerInfo, target lists) etc. does not look trivial. By
> this I mean, beyond giving explicit join clauses in queries.

It's not really very practical --- the data structures are too complex
to create by hand in any reasonable way.  You can probably test by
entering modified queries that do the aggregations in sub-selects,
though.

The most practical way to implement something like this is probably to
restructure the query during the prep stage, pushing the aggregates
down into sub-queries automatically.  The 8.1 min/max optimization code
does something related, although I'll freely admit that's a bit of a hack.
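
(For reference, the min/max hack effectively rewrites

    SELECT min(x) FROM tab;

into the equivalent of

    SELECT x FROM tab WHERE x IS NOT NULL ORDER BY x LIMIT 1;

which an index on x can satisfy cheaply.)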

regards, tom lane
