On Sun, Aug 14, 2005 at 07:27:38PM -0500, John Arbash Meinel wrote:
> My guess is that this is part of a larger query. There isn't really much
> you can do. If you want all 3.2M rows, then you have to wait for them to
> be pulled in.
To me, it looks like he'll get 88 rows, not 3.2M. Surely we must be able
to do something better than a full sequential scan in this case?

test=# create table foo ( bar char(4) );
CREATE TABLE
test=# insert into foo values ('0000');
INSERT 24773320 1
test=# insert into foo values ('0000');
INSERT 24773321 1
test=# insert into foo values ('1111');
INSERT 24773322 1
test=# select * from foo group by bar;
 bar
------
 1111
 0000
(2 rows)

I considered doing some odd magic with generate_series() and subqueries
with LIMIT 1, but it was a bit too weird in the end :-)

/* Steinar */
-- 
Homepage: http://www.sesse.net/
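[Editor's note: the "odd magic with subqueries and LIMIT 1" alluded to above is usually called a loose index scan. PostgreSQL of that era had no native support for it, but on a modern server (8.4+, which added recursive CTEs) it can be emulated by hand. The sketch below assumes an index on foo.bar; each recursive step does one index probe to jump past all duplicates to the next distinct value, so the cost scales with the number of distinct values (here 2, or 88 in the original question) rather than the 3.2M rows.]

```sql
-- Hypothetical loose-index-scan emulation; assumes CREATE INDEX ON foo (bar)
-- and PostgreSQL 8.4 or later (WITH RECURSIVE).
WITH RECURSIVE distinct_bars AS (
    -- seed: the smallest value, found with one index probe
    (SELECT bar FROM foo ORDER BY bar LIMIT 1)
  UNION ALL
    -- step: the smallest value strictly greater than the previous one,
    -- again a single index probe, skipping all duplicates at once
    SELECT (SELECT f.bar FROM foo f
            WHERE f.bar > d.bar
            ORDER BY f.bar LIMIT 1)
    FROM distinct_bars d
    WHERE d.bar IS NOT NULL
)
SELECT bar FROM distinct_bars WHERE bar IS NOT NULL;
```

The WHERE bar IS NOT NULL guards terminate the recursion: once no greater value exists, the inner subquery yields NULL and the CTE stops producing rows.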