"Matthew Nuzum" <[EMAIL PROTECTED]> writes:
> I believe there are about 40,000,000 rows, I expect there to be about
> 10,000,000 groups. PostgreSQL version is 7.3.2 and the sort_mem is at the
> default setting.
Okay. I doubt that the nearby suggestion to convert the min()s to
indexscans will help at all, given those numbers --- there aren't enough
rows per group to make it a win.
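For reference, the indexscan trick being alluded to rewrites each min() as an ORDER BY ... LIMIT 1 that can walk an index instead of scanning the group. A sketch, with illustrative table/column names (assuming an index on (groupid, col)):

```sql
-- Fetch the minimum of col for one group via the index,
-- reading only the first matching index entry.
SELECT col
FROM mytable
WHERE groupid = 42
ORDER BY col ASC
LIMIT 1;
```

With 40M rows over 10M groups there are only ~4 rows per group, so repeating this probe per group costs more than one big sort/group pass.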
I think you've just gotta put up with the sorting required to bring the
groups together. LIMIT or subdividing the query will not make it
faster, because the sort step is the expensive part. You could probably
improve matters by increasing sort_mem as much as you can stand ---
maybe something like 10M to 100M (instead of the default 1M). Obviously
you don't want to make it a big fraction of your available RAM, or it
will hurt the concurrent processing, but on modern machines I would
think you could give this a few tens of MB without any problem. (Note
that you want to just SET sort_mem in this one session, not increase it
globally in postgresql.conf, where it would apply to every backend.)
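A sketch of the session-local setting; the 32 MB figure is an arbitrary illustration (in 7.3, sort_mem is specified in kilobytes):

```sql
-- Raise sort_mem for this session only (value in KB; 32768 = 32 MB).
SET sort_mem = 32768;

-- ... run the big GROUP BY query here ...

-- Return to the configured default afterwards, if desired.
RESET sort_mem;
```

Because SET only affects the current session, other backends keep the normal setting and concurrent work is unaffected.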
I would strongly suggest doing the min and max calculations together:
select groupid, min(col), max(col) from ...
because if you do them in two separate queries, 90% of the effort will be
expended twice.
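For contrast, a sketch of the two-query form being advised against (table and column names are illustrative): each query repeats the same sort/group pass over all 40M rows, whereas the combined form pays for it once.

```sql
-- Anti-pattern: two passes, each sorting the whole table.
SELECT groupid, min(col) FROM mytable GROUP BY groupid;
SELECT groupid, max(col) FROM mytable GROUP BY groupid;

-- Preferred: one pass computes both aggregates per group.
SELECT groupid, min(col), max(col) FROM mytable GROUP BY groupid;
```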
regards, tom lane
---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings