Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-16 Thread tmp

You could add it to here -- note that if we decide it isn't worth it it'll
just get removed.


Which category would you recommend? Optimizer / Executor?

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-08 Thread Tom Lane
Gregory Stark [EMAIL PROTECTED] writes:
 But I can also see Tom's reluctance. It's a fair increase in the amount of
 code to maintain in that file for a pretty narrow use case. On the other hand
 it looks like it would be all in that file. The planner wouldn't have to do
 anything special to set it up which is nice.

No, the planner would have to be changed to be aware of the behavioral
difference.  Otherwise it might pick some other plan besides the one
that has the performance advantage.

regards, tom lane



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-06 Thread David Lee Lambert
On Thursday 04 December 2008 15:09, Gregory Stark wrote:
 tmp [EMAIL PROTECTED] writes:

  Also, it is my impression that many people use LIMIT to minimize the
  evaluation time of sub queries from which the outer query only needs a
  small subset of the sub query output.

 I've seen lots of queries which only pull a subset of the results too --
 but it's always a specific subset. So that means using ORDER BY or a WHERE
 clause to control it.

I use ORDER BY random() LIMIT :some_small_number frequently to get a feel 
for data.  That always builds the unrandomized relation and then sorts it.  I 
guess an alternate path for single-table queries would be to randomly choose 
a block number and then a tuple number;  but that would be biased toward long 
rows (of which fewer can appear in a block).
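The bias David mentions is easy to demonstrate with a small simulation (hypothetical Python with made-up block and row sizes, not PostgreSQL code): pack long and short rows into fixed-size blocks, then sample by choosing a uniform random block and a uniform random tuple within it.

```python
import random

random.seed(42)  # fixed seed so the demonstration is reproducible

BLOCK_SIZE = 8192
# Hypothetical table: equal numbers of "long" (2048-byte) and "short"
# (128-byte) rows, so the true fraction of long rows is 0.50.
rows = [("long", 2048)] * 500 + [("short", 128)] * 500
random.shuffle(rows)

# Pack rows into blocks sequentially; a row that doesn't fit starts a new block.
blocks, current, used = [], [], 0
for row in rows:
    if used + row[1] > BLOCK_SIZE:
        blocks.append(current)
        current, used = [], 0
    current.append(row)
    used += row[1]
blocks.append(current)

# Sample: uniform random block, then uniform random tuple within that block.
samples = [random.choice(random.choice(blocks)) for _ in range(100_000)]
long_frac = sum(1 for kind, _ in samples if kind == "long") / len(samples)
print(f"fraction of long rows in sample: {long_frac:.3f}")
```

Because fewer long rows fit in a block, each individual long row is more likely to be picked, so the sampled fraction comes out above the true 0.50.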

-- 
David Lee Lambert ... Software Developer
Cell phone: +1 586-873-8813 ; alt. email [EMAIL PROTECTED] or 
[EMAIL PROTECTED]
GPG key at http://www.lmert.com/keyring.txt



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-06 Thread Grzegorz Jaskiewicz


On 2008-12-06, at 11:29, David Lee Lambert wrote:

 I use ORDER BY random() LIMIT :some_small_number frequently to get a feel
 for data.  That always builds the unrandomized relation and then sorts it.  I
 guess an alternate path for single-table queries would be to randomly choose
 a block number and then a tuple number;  but that would be biased toward long
 rows (of which fewer can appear in a block).

But that's going to be extremely slow, due to the speed of the random()
function.





Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-06 Thread Greg Stark
It's slow because there's no way around running through the entire input. The
optimization tmp is talking about wouldn't be relevant because there is an
order by clause, which was precisely why I said it was a fairly narrow use
case. Most people who use limit want a specific subset, even if that specific
subset is random. Without the order by, the subset is entirely arbitrary but
not usefully random.


Incidentally, order by ... limit is amenable to an optimization which avoids
having to *sort* the whole input even though it still has to read the whole
input. We implemented that in 8.3.



greg

On 6 Dec 2008, at 06:08 PM, Grzegorz Jaskiewicz [EMAIL PROTECTED] wrote:

 On 2008-12-06, at 11:29, David Lee Lambert wrote:

  I use ORDER BY random() LIMIT :some_small_number frequently to get a feel
  for data.  That always builds the unrandomized relation and then sorts it.
  I guess an alternate path for single-table queries would be to randomly
  choose a block number and then a tuple number;  but that would be biased
  toward long rows (of which fewer can appear in a block).

 but that's going to be extremely slow, due to speed of random() function.





Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-05 Thread tmp

I would tend to think it's worth it myself.


I am unfortunately not familiar enough with the PostgreSQL code base to be
comfortable providing a patch. Can I submit this optimization request to some
sort of issue tracker, or what should I do?




Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-05 Thread Gregory Stark
tmp [EMAIL PROTECTED] writes:

 I would tend to think it's worth it myself.

 I am unfortunately not familiar enough with the postgresql code base to be
 comfortable to provide a patch. Can I submit this optimization request to some
 sort of issue tracker or what should I do?

You could add it to here -- note that if we decide it isn't worth it it'll
just get removed.

http://wiki.postgresql.org/wiki/Todo

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!



[HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread tmp

As far as I understand, the following query
  SELECT DISTINCT foo
  FROM bar
  LIMIT baz
is evaluated by first sorting the input and then traversing the sorted data,
ensuring uniqueness of the output and stopping when the LIMIT threshold is
reached. Furthermore, part of the sort procedure is to traverse the input at
least once.


Now, if the input is large but the LIMIT threshold is small, this sorting
step may increase the query time unnecessarily, so here is a suggestion for
an optimization: if the input is sufficiently large and the LIMIT threshold
sufficiently small, maintain the DISTINCT output by hashing while traversing
the input and stop when the LIMIT threshold is reached. No sorting required,
and *at* *most* one read of the input.


Use case: websites that need to present small samples of huge queries fast.
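The suggestion can be sketched in Python (a hypothetical illustration of the hashing idea, not PostgreSQL internals): track seen values in a hash set and stop reading as soon as LIMIT distinct values have been found.

```python
def distinct_limit(rows, limit):
    """Hash-based DISTINCT ... LIMIT: at most one pass over the input,
    and often far less when distinct values show up early."""
    seen = set()
    result = []
    for row in rows:
        if row not in seen:
            seen.add(row)
            result.append(row)
            if len(result) == limit:
                break  # LIMIT reached: the rest of the input is never read
    return result
```

With a highly duplicated input, the scan stops right after the LIMIT-th new value appears, which is the early exit a sort-based plan cannot provide.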



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread Gregory Stark
tmp [EMAIL PROTECTED] writes:

   If the input is sufficiently large and the LIMIT threshold sufficiently
 small, maintain the DISTINCT output by hashning while traversing the input and
 stop when the LIMIT threshold is reached. No sorting required and *at* *most*
 one read of input.

You mean like this?

postgres=# explain select distinct x  from i limit 5;
                            QUERY PLAN                             
-------------------------------------------------------------------
 Limit  (cost=54.50..54.51 rows=1 width=304)
   ->  HashAggregate  (cost=54.50..54.51 rows=1 width=304)
         ->  Seq Scan on i  (cost=0.00..52.00 rows=1000 width=304)
(3 rows)


This will be in the upcoming 8.4 release.


Versions since about 7.4 or so have been capable of producing this plan but
not for DISTINCT, only for the equivalent GROUP BY query:

postgres=# explain select x  from i group by x limit 5;

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread Heikki Linnakangas

Gregory Stark wrote:

tmp [EMAIL PROTECTED] writes:


  If the input is sufficiently large and the LIMIT threshold sufficiently
small, maintain the DISTINCT output by hashning while traversing the input and
stop when the LIMIT threshold is reached. No sorting required and *at* *most*
one read of input.


You mean like this?

postgres=# explain select distinct x  from i limit 5;
                            QUERY PLAN                             
-------------------------------------------------------------------
 Limit  (cost=54.50..54.51 rows=1 width=304)
   ->  HashAggregate  (cost=54.50..54.51 rows=1 width=304)
         ->  Seq Scan on i  (cost=0.00..52.00 rows=1000 width=304)
(3 rows)


Does that know to stop scanning as soon as it has seen 5 distinct values?

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread Gregory Stark

Heikki Linnakangas [EMAIL PROTECTED] writes:

 Does that know to stop scanning as soon as it has seen 5 distinct values?

Uhm, hm. Apparently not :(


postgres=# create or replace function v(integer) returns integer as $$begin 
raise notice 'called %', $1; return $1; end$$ language plpgsql volatile;
CREATE FUNCTION
postgres=# select distinct v(i) from generate_series(1,10) as a(i) limit 3;
NOTICE:  0: called 1
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 2
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 3
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 4
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 5
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 6
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 7
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 8
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 9
LOCATION:  exec_stmt_raise, pl_exec.c:2542
NOTICE:  0: called 10
LOCATION:  exec_stmt_raise, pl_exec.c:2542
 v 
---
 5
 4
 6
(3 rows)

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread Tom Lane
Heikki Linnakangas [EMAIL PROTECTED] writes:
 Gregory Stark wrote:
 You mean like this?

 postgres=# explain select distinct x  from i limit 5;
                             QUERY PLAN                             
 -------------------------------------------------------------------
  Limit  (cost=54.50..54.51 rows=1 width=304)
    ->  HashAggregate  (cost=54.50..54.51 rows=1 width=304)
          ->  Seq Scan on i  (cost=0.00..52.00 rows=1000 width=304)
 (3 rows)

 Does that know to stop scanning as soon as it has seen 5 distinct values?

In principle, if there are no aggregate functions, then nodeAgg could
return a row immediately upon making any new entry into the hash table.
Whether it's worth the code uglification is debatable ... I think it
would require a third major pathway through nodeAgg.
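Tom's "return a row immediately upon making any new entry" behavior can be modeled in Python with a generator (a hypothetical analogy to nodeAgg, not its actual code): each new hash-table entry is emitted at once, so a LIMIT node downstream stops pulling -- and thus stops the scan -- after enough rows.

```python
from itertools import islice

def hash_distinct_stream(rows):
    """Yield each value the first time it enters the hash table, instead
    of waiting until the whole input has been aggregated."""
    seen = set()
    for row in rows:
        if row not in seen:
            seen.add(row)
            yield row  # emit immediately on a new entry

def distinct_with_limit(rows, n):
    """A LIMIT node on top: pull only n rows, ending the scan early."""
    return list(islice(hash_distinct_stream(rows), n))
```

Composed this way, the LIMIT naturally bounds how much of the input the hash aggregate ever reads, which is exactly what the current all-at-once hash aggregation misses.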

regards, tom lane



Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread tmp

In principle, if there are no aggregate functions, then nodeAgg could
return a row immediately upon making any new entry into the hash table.
Whether it's worth the code uglification is debatable ... I think it
would require a third major pathway through nodeAgg.


Regarding whether it's worth the effort: in each of my three past jobs (all
using PostgreSQL) I have met several queries that would fetch a small subset
of a large - even huge - input. I think those types of queries are relatively
common out there, but if they are executed for e.g. a web client it is simply
a no-go with the current late LIMIT evaluation.


Also, it is my impression that many people use LIMIT to minimize the
evaluation time of subqueries from which the outer query only needs a small
subset of the subquery output.




Re: [HACKERS] Optimizing DISTINCT with LIMIT

2008-12-04 Thread Gregory Stark

tmp [EMAIL PROTECTED] writes:

 Regarding whether it's worth the effort: In each of my three past jobs (all
 using postgresql) I have met several queries that would fetch a small subset
 of a large - even huge - input. I think that types of queries are relatively
 common out there, but if they are executed for e.g. a web-client it is simply
 a no-go with the current late LIMIT evaluation.

 Also, it is my impression that many people use LIMIT to minimize the
 evaluation time of sub queries from which the outer query only needs a small
 subset of the sub query output.

I've seen lots of queries which only pull a subset of the results too -- but
it's always a specific subset. So that means using ORDER BY or a WHERE clause
to control it.

In this example the subset returned is completely arbitrary. That's a much
finer slice of queries. 

I would tend to think it's worth it myself. I can see cases where the subset
selected doesn't really matter -- for instance if you're only testing whether
there are at least a certain number of distinct values. Or if you're using up
some inventory and it's not important what order you use them in, only that
you fetch some candidate inventory and process them.

But I can also see Tom's reluctance. It's a fair increase in the amount of
code to maintain in that file for a pretty narrow use case. On the other hand
it looks like it would be all in that file. The planner wouldn't have to do
anything special to set it up which is nice.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!
