Re: [PERFORM] sequential scan on select distinct

2004-10-11 Thread Mischa Sandberg
Tom Lane wrote:
Ole Langbehn [EMAIL PROTECTED] writes:
What do you think about the idea of an UniqueSort which would do
sort+unique in one pass ? 

This is what oracle does and it is quite fast with it...

Hashing is at least as fast, if not faster.
			regards, tom lane
I got good mileage in a different SQL engine, by combining the 
hash-aggregate and sort nodes into a single operator.
The hash table was just an index into the equivalent of the heap used 
for generating runs. That gave me partially aggregated data,
or eliminated duplicate keys, without extra memory overhead of the 
hash-aggregation node below the sort. Memory was scarce then ... :-)
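That combined operator is easy to sketch. A minimal illustration in Python (names invented, not actual engine code): a hash table indexes the run buffer by sort key, so an arriving duplicate is merged in place (partial aggregation) or simply dropped, and the run is emitted in key order.

```python
def unique_sort_run(rows, key=lambda r: r, merge=None):
    """Build one sort run, eliminating duplicate keys on insert.

    The dict plays the role of the hash table indexing the run
    buffer: a duplicate key is merged in place (partial aggregation)
    instead of being buffered a second time.
    """
    buffer = {}  # sort key -> accumulated row
    for row in rows:
        k = key(row)
        if k not in buffer:
            buffer[k] = row                    # first sighting: buffer it
        elif merge is not None:
            buffer[k] = merge(buffer[k], row)  # partial aggregation
        # merge is None: duplicate key, row dropped (DISTINCT semantics)
    return sorted(buffer.values(), key=key)    # emit the run in key order

# Duplicate elimination (DISTINCT-like):
print(unique_sort_run([3, 1, 3, 2, 1]))        # [1, 2, 3]

# Partial aggregation (GROUP BY key, SUM(val)-like):
rows = [("a", 1), ("b", 2), ("a", 5)]
print(unique_sort_run(rows, key=lambda r: r[0],
                      merge=lambda x, y: (x[0], x[1] + y[1])))
# [('a', 6), ('b', 2)]
```

In a real sorter the buffer would spill to disk as runs; here a single in-memory run stands in for the whole mechanism.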

BTW I'm really puzzled that Oracle is pushing 'index skip scan' as a new 
feature. Wasn't this in the original Oracle Rdb --- one of Gennady 
Antoshenkov's tweaks?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] sequential scan on select distinct

2004-10-08 Thread Pierre-Frédéric Caillaud

The really tricky part is that a DISTINCT ON needs to know about a first()
aggregate. And to make optimal use of indexes, a last() aggregate as well. And
ideally the planner/executor needs to know something is magic about
first()/last() (and potentially min()/max() at some point) and that they don't
need the complete set of tuples to calculate their results.
	I'm going to be accused of hand-waving again, but please pardon me, I'm
enthusiastic, and I like to propose new ideas; you can kick me if you
don't like them or if I put out too much uninformed bull !

Idea :
	The aggregate accumulation function could have a way to say :
stop ! I've had enough of these values ! Get on with the next item in the  
GROUP BY clause !
	I don't know how, or if, the planner could use this (guess: no) or the
index scan could use this (guess: no), but it would at least save the function
calls. I'd guess this idea is quite useless.

	Aggregates could have an additional attribute saying how many values they
will need ('max_rows' maybe). This would avoid special-casing max() with a
magic aggregate and keep the mechanism generic (so users can create magic
aggregates like this themselves).
	Aggregates already consist of a bunch of functions (start, accumulate,
return results) so this could be just another element in this set.
	This information would be known ahead of time and could influence the  
query plans too. I'm going to wave my hand and say not too much planning  
cost because I guess the aggregate details are fetched during planning so  
fetching one more attribute would not be that long...
	For instance first() would have max_rows=1, and users could code a 'first
N' accumulate-into-array aggregate which would have max_rows=N...
	This does not solve the problem of min() and max(), which need max_rows=1
only if the input is sorted... hum... maybe another attribute like
max_rows_sorted = 1 for max() and -1 for min(), meaning 'first 1' or 'last
1' (or 'first N' or 'last N')... according to the ORDER BY clause it would
be known that the 'first N' of an 'ORDER BY ... ASC' is the same as the
'last N' of an 'ORDER BY ... DESC'.
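A toy model of this proposal, in Python. The descriptor fields (start, accumulate, finalize, max_rows) are invented names for illustration, not PostgreSQL's actual aggregate API:

```python
class Aggregate:
    """Hypothetical aggregate descriptor with a 'max_rows' attribute."""
    def __init__(self, start, accumulate, finalize, max_rows=None):
        self.start, self.accumulate = start, accumulate
        self.finalize, self.max_rows = finalize, max_rows

# first() needs only one value per group, so max_rows=1.
first = Aggregate(start=lambda: None,
                  accumulate=lambda st, v: v if st is None else st,
                  finalize=lambda st: st,
                  max_rows=1)

def run_group(agg, values):
    """Executor sketch: stop feeding the aggregate once max_rows is hit
    ("stop! I've had enough of these values!")."""
    state, seen = agg.start(), 0
    for v in values:
        state = agg.accumulate(state, v)
        seen += 1
        if agg.max_rows is not None and seen >= agg.max_rows:
            break  # skip the remaining accumulate calls for this group
    return agg.finalize(state)

print(run_group(first, [7, 8, 9]))  # 7, after a single accumulate call
```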

???


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] sequential scan on select distinct

2004-10-08 Thread Pierre-Frédéric Caillaud

Hashing is at least as fast, if not faster.
regards, tom lane
Probably quite faster if the dataset is not huge...
UniqueSort would be useful for GROUP BY x ORDER BY x though
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [PERFORM] sequential scan on select distinct

2004-10-07 Thread Pierre-Frédéric Caillaud

I don't really think it would be a useful plan anyway.  What *would* be
useful is to support HashAggregate as an implementation alternative for
DISTINCT --- currently I believe we only consider that for GROUP BY.
The DISTINCT planning code is fairly old and crufty and hasn't been
redesigned lately.
			regards, tom lane
I see this as a minor annoyance only because I can write GROUP BY
instead of DISTINCT and get the speed boost. It probably annoys people
trying to port applications to postgres though, forcing them to rewrite
their queries.
* SELECT DISTINCT : 21442.296 ms (by default, uses an index scan)
disabling index_scan = Sort + Unique : 14512.105 ms
* GROUP BY : 1793.651 ms using HashAggregate
* skip index scan by function : 13.833 ms
The HashAggregate speed boost is good, but rather pathetic compared
to a skip index scan ; but it's still worth having if updating the
DISTINCT code is easy.
Note that it would also benefit UNION queries which apparently use
DISTINCT
internally and currently produce this :
--
explain analyze select number from
((select number from dummy) union (select number from dummy)) as foo;
  Subquery Scan foo  (cost=287087.62..317087.62 rows=200 width=4)
(actual time=33068.776..35575.330 rows=255 loops=1)
->  Unique  (cost=287087.62..297087.62 rows=200 width=4) (actual
time=33068.763..35574.126 rows=255 loops=1)
  ->  Sort  (cost=287087.62..292087.62 rows=200 width=4)
(actual time=33068.757..34639.180 rows=200 loops=1)
Sort Key: number
->  Append  (cost=0.00..49804.00 rows=200 width=4)
(actual time=0.055..7412.551 rows=200 loops=1)
  ->  Subquery Scan *SELECT* 1  (cost=0.00..24902.00
rows=100 width=4) (actual time=0.054..3104.165 rows=100 loops=1)
->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4) (actual time=0.051..1792.348 rows=100 loops=1)
  ->  Subquery Scan *SELECT* 2  (cost=0.00..24902.00
rows=100 width=4) (actual time=0.048..3034.462 rows=100 loops=1)
->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4) (actual time=0.044..1718.682 rows=100 loops=1)
  Total runtime: 36265.662 ms
--
But could instead do this :
explain analyze select number from
((select number from dummy) union all (select number from dummy)) as foo
group by number;
  HashAggregate  (cost=74804.00..74804.00 rows=200 width=4) (actual
time=10753.648..10753.890 rows=255 loops=1)
->  Subquery Scan foo  (cost=0.00..69804.00 rows=200 width=4)
(actual time=0.059..8992.084 rows=200 loops=1)
  ->  Append  (cost=0.00..49804.00 rows=200 width=4) (actual
time=0.055..6688.639 rows=200 loops=1)
->  Subquery Scan *SELECT* 1  (cost=0.00..24902.00
rows=100 width=4) (actual time=0.054..2749.708 rows=100 loops=1)
  ->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4) (actual time=0.052..1640.427 rows=100 loops=1)
->  Subquery Scan *SELECT* 2  (cost=0.00..24902.00
rows=100 width=4) (actual time=0.038..2751.916 rows=100 loops=1)
  ->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4) (actual time=0.034..1637.818 rows=100 loops=1)
  Total runtime: 10754.120 ms
--
A 3x speedup, but still a good thing to have.
When I LIMIT the two subqueries to 100k rows instead of a million, the
times are about equal.
When I LIMIT one of the subqueries to 100k and leave the other to 1M,
UNION ALL            17949.609 ms
UNION + GROUP BY      6130.417 ms
Still some performance to be gained...
--
	Of course it can't use a skip index scan on a subquery, but I could do
the following instead :
I know it's pretty stupid to use the same table twice but it's just an
example. However, if you think about table partitions and views, a select
distinct number from a view having multiple partitions would yield this
type of query, and that table partitioning seems like a hot subject lately.
let's create a dummy example view :
create view dummy_view as (select * from dummy) union all (select * from
dummy);
explain analyze select number from dummy_view group by number;
  HashAggregate  (cost=74804.00..74804.00 rows=200 width=4) (actual
time=10206.456..10206.713 rows=255 loops=1)
->  Subquery Scan dummy_view  (cost=0.00..69804.00 rows=200
width=4) (actual time=0.060..8431.776 rows=200 loops=1)
  ->  Append  (cost=0.00..49804.00 rows=200 width=8) (actual
time=0.055..6122.125 rows=200 loops=1)
  

Re: [PERFORM] sequential scan on select distinct

2004-10-07 Thread Tom Lane
Pierre-Frédéric Caillaud [EMAIL PROTECTED] writes:
   Present state is that DISTINCT and UNION are slow with or without using
 the GROUP BY trick. Including the index skip scan in the planning options
 would only happen when appropriate cases are detected. This detection
 would be very fast.

You have no basis whatever for making that last assertion; and since
it's the critical point, I don't intend to let you slide by without
backing it up.  I think that looking for relevant indexes would be
nontrivial; the more so in cases like you've been armwaving about just
above, where you have to find a relevant index for each of several
subqueries.  The fact that the optimization wins a lot when it wins
is agreed, but the time spent trying to apply it when it doesn't work
is a cost that has to be set against that.  I don't accept your premise
that every query for which skip-index isn't relevant is so slow that
planning time does not matter.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [PERFORM] sequential scan on select distinct

2004-10-07 Thread Greg Stark

Pierre-Frédéric Caillaud [EMAIL PROTECTED] writes:

   I see this as a minor annoyance only because I can write GROUP BY
 instead of DISTINCT and get the speed boost. It probably annoys people
 trying to port applications to postgres though, forcing them to rewrite
 their queries.

Yeah, really DISTINCT and DISTINCT ON are just special cases of GROUP BY. It
seems it makes more sense to put the effort into GROUP BY and just have
DISTINCT and DISTINCT ON go through the same code path. Effectively rewriting
it internally as a GROUP BY.

The really tricky part is that a DISTINCT ON needs to know about a first()
aggregate. And to make optimal use of indexes, a last() aggregate as well. And
ideally the planner/executor needs to know something is magic about
first()/last() (and potentially min()/max() at some point) and that they don't
need the complete set of tuples to calculate their results.

-- 
greg




Re: [PERFORM] sequential scan on select distinct

2004-10-07 Thread Ole Langbehn
On Thursday, 7 October 2004 14:01, Pierre-Frédéric Caillaud wrote:
  Side Note :

  What do you think about the idea of an UniqueSort which would do
 sort+unique in one pass ? 
This is what oracle does and it is quite fast with it...
-- 
Ole Langbehn

freiheit.com technologies gmbh
Theodorstr. 42-90 / 22761 Hamburg, Germany
fon       +49 (0)40 / 890584-0
fax       +49 (0)40 / 890584-20

Promote free software by buying books | http://bookzilla.de/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [PERFORM] sequential scan on select distinct

2004-10-07 Thread Tom Lane
Ole Langbehn [EMAIL PROTECTED] writes:
 What do you think about the idea of an UniqueSort which would do
 sort+unique in one pass ? 

 This is what oracle does and it is quite fast with it...

Hashing is at least as fast, if not faster.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Pierre-Frédéric Caillaud
You could try :
explain analyze select land from customer_dim group by land;
It will be a lot faster but I can't make it use the index on my machine...
Example :
	create table dummy as (select id, id%255 as number from <a large table
with 1M rows>);
	so we have a table with 255 (0-254) distinct number values.


= explain analyze select distinct number from dummy;
 Unique  (cost=69.83..74.83 rows=200 width=4) (actual  
time=13160.490..14414.004 rows=255 loops=1)
   ->  Sort  (cost=69.83..72.33 rows=1000 width=4) (actual
time=13160.483..13955.792 rows=100 loops=1)
 Sort Key: number
 ->  Seq Scan on dummy  (cost=0.00..20.00 rows=1000 width=4)
(actual time=0.052..1759.145 rows=100 loops=1)
 Total runtime: 14442.872 ms

=   Horribly slow because it has to sort 1M rows for the Unique.

= explain analyze select number from dummy group by number;
 HashAggregate  (cost=22.50..22.50 rows=200 width=4) (actual  
time=1875.214..1875.459 rows=255 loops=1)
   ->  Seq Scan on dummy  (cost=0.00..20.00 rows=1000 width=4) (actual
time=0.107..1021.014 rows=100 loops=1)
 Total runtime: 1875.646 ms

=	A lot faster because it HashAggregates instead of sorting (but still  
seq scan)


Now :
create index dummy_idx on dummy(number);
Let's try again.

explain analyze select distinct number from dummy;
 Unique  (cost=0.00..35301.00 rows=200 width=4) (actual  
time=0.165..21781.732 rows=255 loops=1)
   ->  Index Scan using dummy_idx on dummy  (cost=0.00..32801.00
rows=100 width=4) (actual time=0.162..21154.752 rows=100 loops=1)
 Total runtime: 21782.270 ms

= Index scan the whole table. argh. I should have ANALYZized.

explain analyze select number from dummy group by number;
 HashAggregate  (cost=17402.00..17402.00 rows=200 width=4) (actual  
time=1788.425..1788.668 rows=255 loops=1)
   ->  Seq Scan on dummy  (cost=0.00..14902.00 rows=100 width=4)
(actual time=0.048..960.063 rows=100 loops=1)
 Total runtime: 1788.855 ms
=	Still the same...

Let's make a function :
The function starts at the lowest number and advances to the next number  
in the index until they are all exhausted.

CREATE OR REPLACE FUNCTION sel_distinct()
	RETURNS SETOF INTEGER
	LANGUAGE plpgsql
	AS '
DECLARE
	pos INTEGER;
BEGIN
	SELECT INTO pos number FROM dummy ORDER BY number ASC LIMIT 1;
	IF NOT FOUND THEN
		RAISE NOTICE ''no records.'';
		RETURN;
	END IF;
	
	LOOP
		RETURN NEXT pos;
		SELECT INTO pos number FROM dummy WHERE number > pos ORDER BY number ASC
LIMIT 1;
		IF NOT FOUND THEN
			RETURN;
		END IF;
	END LOOP;
END;
';

explain analyze select * from sel_distinct();
 Function Scan on sel_distinct  (cost=0.00..12.50 rows=1000 width=4)  
(actual time=215.472..215.696 rows=255 loops=1)
 Total runtime: 215.839 ms

That's better !

Why not use DESC instead of ASC ?
CREATE OR REPLACE FUNCTION sel_distinct()
	RETURNS SETOF INTEGER
	LANGUAGE plpgsql
	AS '
DECLARE
	pos INTEGER;
BEGIN
	SELECT INTO pos number FROM dummy ORDER BY number DESC LIMIT 1;
	IF NOT FOUND THEN
		RAISE NOTICE ''no records.'';
		RETURN;
	END IF;
	
	LOOP
		RETURN NEXT pos;
		SELECT INTO pos number FROM dummy WHERE number < pos ORDER BY number DESC
LIMIT 1;
		IF NOT FOUND THEN
			RETURN;
		END IF;
	END LOOP;
END;
';

explain analyze select * from sel_distinct();
 Function Scan on sel_distinct  (cost=0.00..12.50 rows=1000 width=4)  
(actual time=13.500..13.713 rows=255 loops=1)
 Total runtime: 13.857 ms

	Hum hum ! Again, a lot better !
	Index scan backwards seems a lot faster than index scan forwards. Why, I  
don't know, but here you go from 15 seconds to 14 milliseconds...

	I don't know WHY (oh why) postgres does not use this kind of strategy  
when distinct'ing an indexed field... Anybody got an idea ?
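The strategy the function implements (find the lowest key, then repeatedly seek to the next strictly greater key) is what the thread calls a skip index scan. A sketch in Python, with a sorted list standing in for the btree; each distinct key costs one O(log n) probe instead of walking every entry:

```python
import bisect

def skip_scan_distinct(index):
    """Emulated skip index scan over a sorted sequence of keys."""
    out, i, n = [], 0, len(index)
    while i < n:
        out.append(index[i])                      # first entry of this key
        i = bisect.bisect_right(index, index[i])  # seek past all equal keys
    return out

# 10k "rows" but only 255 distinct values, as in the dummy table:
index = sorted(i % 255 for i in range(10_000))
print(skip_scan_distinct(index) == list(range(255)))  # True
```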




Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Ole Langbehn
On Wednesday, 6 October 2004 12:19, Pierre-Frédéric Caillaud wrote:
  You could try :

  explain analyze select land from customer_dim group by land;
  It will be a lot faster but I can't make it use the index on my machine...
this already speeds up my queries to about 1/4th of the time, which is about 
the range of mysql and oracle.

  Example :

 [..]

  Hum hum ! Again, a lot better !
  Index scan backwards seems a lot faster than index scan forwards. Why, I
 don't know, but here you go from 15 seconds to 14 milliseconds...
thanks for this very extensive answer, it helped me a lot.

  I don't know WHY (oh why) postgres does not use this kind of strategy
 when distinct'ing an indexed field... Anybody got an idea ?
That's the big question I still would like to see answered too. Can anyone 
tell us?

TiA
-- 
Ole Langbehn



Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Greg Stark

Pierre-Frédéric Caillaud [EMAIL PROTECTED] writes:

   I don't know WHY (oh why) postgres does not use this kind of strategy
 when distinct'ing an indexed field... Anybody got an idea ?

Well there are two questions here. Why given the current plans available does
postgres choose a sequential scan instead of an index scan. And why isn't
there this kind of skip index scan available.

Postgres chooses a sequential scan with a sort (or hash aggregate) over an
index scan because it expects it to be faster. sequential scans are much
faster than random access scans of indexes, plus index scans need to read many
more blocks. If you're finding the index scan to be just as fast as sequential
scans you might consider lowering random_page_cost closer to 1.0. But note
that you may be getting fooled by a testing methodology where more things are
cached than would be in production.

why isn't a skip index scan plan available? Well, nobody's written the code
yet. It would be part of the same code needed to get an index scan used for:

select y,min(x) from bar group by y

And possibly also related to the TODO item:

Use index to restrict rows returned by multi-key index when used with
non-consecutive keys to reduce heap accesses

For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and col3 =
9, spin though the index checking for col1 and col3 matches, rather than
just col1
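What the TODO describes can be sketched in a few lines of Python (the entry layout and names are invented for illustration): the scan stays inside the col1 = 5 portion of the index and filters on col3 right there, so only full matches cost a heap access.

```python
def index_filter(index_entries, c1, c3):
    """index_entries: (col1, col2, col3, heap_tid) tuples in index order."""
    heap_reads = []
    for col1, col2, col3, tid in index_entries:
        if col1 != c1:
            continue                # outside the col1 range of the scan
        if col3 == c3:              # check col3 in the index entry itself
            heap_reads.append(tid)  # only full matches touch the heap
    return heap_reads

entries = [(5, 1, 9, "t1"), (5, 1, 8, "t2"), (5, 2, 9, "t3"), (6, 0, 9, "t4")]
print(index_filter(entries, 5, 9))  # ['t1', 't3']; t2 never hits the heap
```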


Note that the optimizer would have to make a judgement call based on the
expected number of distinct values. If you had much more than 256 distinct
values then your plpgsql function wouldn't have performed well at all.

-- 
greg




Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Pierre-Frédéric Caillaud
There are even three questions here :
	- given that 'SELECT DISTINCT field FROM table' is exactly
the same as 'SELECT field FROM table GROUP BY field', postgres could
transform the first into the second and avoid itself a (potentially
killer) sort.
	On my example the table was not too large but on a very large table,
sorting all the values and then distinct'ing them does not look too
appealing.

	Currently Postgres does Sort+Unique, but there could be a DistinctSort
instead of a Sort, that is, a thing that sorts and removes the duplicates
at the same time. Not much more complicated to code than a sort, and much
faster in this case.
	Or there could be a DistinctHash, which would be similar or rather  
identical to a HashAggregate and would again skip the sort.
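A sketch of the DistinctSort idea in Python, assuming the run-generation phase has already produced sorted runs: duplicates are dropped inside the merge itself, so the separate Unique pass disappears.

```python
import heapq

def distinct_sort(sorted_runs):
    """Merge pre-sorted runs, discarding duplicates during the merge."""
    out, last = [], object()  # sentinel: nothing emitted yet
    for v in heapq.merge(*sorted_runs):
        if v != last:         # only emit a value the first time it appears
            out.append(v)
            last = v
    return out

print(distinct_sort([[1, 3, 3, 7], [2, 3, 8], [1, 8]]))  # [1, 2, 3, 7, 8]
```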

	It would (as a bonus) speed up queries like UNION (not UNION ALL), that
kind of thing. For example :

 explain (select number from dummy) union (select number from dummy);
 Unique  (cost=287087.62..297087.62 rows=200 width=4)
   ->  Sort  (cost=287087.62..292087.62 rows=200 width=4)
 Sort Key: number
 ->  Append  (cost=0.00..49804.00 rows=200 width=4)
   ->  Subquery Scan *SELECT* 1  (cost=0.00..24902.00
rows=100 width=4)
 ->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4)
   ->  Subquery Scan *SELECT* 2  (cost=0.00..24902.00
rows=100 width=4)
 ->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4)

This is scary !
I can rewrite it as such (and the planner could, too) :
explain select * from ((select number from dummy) union all (select number  
from dummy)) as foo group by number;
 HashAggregate  (cost=74804.00..74804.00 rows=200 width=4)
   ->  Subquery Scan foo  (cost=0.00..69804.00 rows=200 width=4)
 ->  Append  (cost=0.00..49804.00 rows=200 width=4)
   ->  Subquery Scan *SELECT* 1  (cost=0.00..24902.00
rows=100 width=4)
 ->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4)
   ->  Subquery Scan *SELECT* 2  (cost=0.00..24902.00
rows=100 width=4)
 ->  Seq Scan on dummy  (cost=0.00..14902.00
rows=100 width=4)

which avoids a large sort...
However there must be cases in which performing a sort is faster, like  
when there are a lot of distinct values and the HashAggregate becomes huge  
too.

> Well there are two questions here. Why given the current plans available
> does postgres choose a sequential scan instead of an index scan.
	Well, because it needs to get all the rows in the table in order;
in this case seq scan+sort is about twice as fast as index scan.
	Interestingly, once I ANALYZED the table, postgres chooses to
index-scan, which is slower.

> And why isn't there this kind of skip index scan available.
	It would be really nice to have a skip index scan available.
	I have another idea, let's call it the 'indexed sequential scan' :
	When pg knows there are a lot of rows to access, it will ignore the index
and seqscan. This is because index access is very random, thus slow.
However postgres could implement an indexed sequential scan where :
	- the page numbers for the matching rows are looked up in the index
	(this is fast as an index has good locality)
	- the page numbers are grouped so we have a list of pages with one and
only one instance of each page number
	- the list is then sorted so we have page numbers in-order
	- the pages are loaded in sorted order (doing a kind of partial
sequential scan) which would be faster than reading them randomly.
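The four steps above can be sketched directly (Python, with a toy dict standing in for the heap file); the idea is essentially to trade scattered random page reads for one ordered pass over just the pages that matter, much like the bitmap-style scans other engines use.

```python
def indexed_seq_scan(index_hit_pages, read_page):
    """index_hit_pages: page numbers of matching rows, in index order."""
    pages = sorted(set(index_hit_pages))  # dedupe, then physical order
    rows = []
    for p in pages:                       # partial sequential scan
        rows.extend(read_page(p))
    return rows

# Toy heap: page number -> rows stored on that page.
heap = {0: ["a", "b"], 1: ["c"], 2: ["d", "e"], 3: ["f"]}
print(indexed_seq_scan([2, 0, 2, 3, 0], lambda p: heap[p]))
# ['a', 'b', 'd', 'e', 'f']; page 1 is never read
```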

Other ideas later

Postgres chooses a sequential scan with a sort (or hash aggregate) over  
an
index scan because it expects it to be faster. sequential scans are much
faster than random access scans of indexes, plus index scans need to  
read many
more blocks. If you're finding the index scan to be just as fast as  
sequential
scans you might consider lowering random_page_cost closer to 1.0. But  
note
that you may be getting fooled by a testing methodology where more  
things are
cached than would be in production.

why isn't a skip index scan plan available? Well, nobody's written the  
code
yet. It would be part of the same code needed to get an index scan used for:

select y,min(x) from bar group by y
And possibly also related to the TODO item:
Use index to restrict rows returned by multi-key index when used with
non-consecutive keys to reduce heap accesses
For an index on col1,col2,col3, and a WHERE clause of col1 = 5 and  
col3 =
9, spin though the index checking for col1 and col3 matches, rather  
than
just col1

Note that the optimizer would have to make a judgement call based on the
expected number of distinct values. If you had much more than 256  
distinct
values then your plpgsql function wouldn't have performed well at
all.



Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 why isn't a skip index scan plan available? Well, nobody's written the code
 yet.

I don't really think it would be a useful plan anyway.  What *would* be
useful is to support HashAggregate as an implementation alternative for
DISTINCT --- currently I believe we only consider that for GROUP BY.
The DISTINCT planning code is fairly old and crufty and hasn't been
redesigned lately.

regards, tom lane



Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 Greg Stark [EMAIL PROTECTED] writes:
  why isn't a skip index scan plan available? Well, nobody's written the code
  yet.
 
 I don't really think it would be a useful plan anyway.  

Well it would clearly be useful in this test case, where one has a small number of
distinct values in a large table, and an index on the column. His plpgsql
function that emulates such a plan is an order of magnitude faster than the
hash aggregate plan even though it has to do entirely separate index scans for
each key value.

I'm not sure where the break-even point would be, but it would probably be
pretty low. Probably somewhere around the order of 1% distinct values in the
table. That might be uncommon, but certainly not impossible.

But regardless of how uncommon it is, it could be considered important in
another sense: when you need it there really isn't any alternative. It's an
algorithmic improvement with no bound on the performance difference. Nothing
short of using a manually maintained materialized view would bring the
performance into the same ballpark.

So even if it's only useful occasionally, not having the plan available can
leave postgres with no effective plan for what should be an easy query.


-- 
greg




Re: [PERFORM] sequential scan on select distinct

2004-10-06 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 But regardless of how uncommon it is, it could be considered important in
 another sense: when you need it there really isn't any alternative. It's an
 algorithmic improvement with no bound on the performance difference.

[ shrug... ]  There are an infinite number of special cases for which
that claim could be made.  The more we load down the planner with
seldom-useful special cases, the *lower* the overall performance will
be, because we'll waste cycles checking for the special cases in every
case ...

In this particular case, it's not merely a matter of the planner, either.
You'd need some new type of plan node in the executor, so there's a
pretty fair amount of added code bulk that will have to be written and
then maintained.

I'm open to being persuaded that this is worth doing, but the bar is
going to be high; I think there are a lot of other more-profitable ways
to invest our coding effort and planning cycles.

regards, tom lane
