Alban Hertroys wrote:
> Something that might help you, but I'm not sure whether it
> might hurt
> the performance of other queries, is to cluster that table on
> val_datestamp_idx. That way the records are already (mostly) sorted
> on disk in the order of the datestamps, which seems to be the brunt
> of above query plan.
I have a question about this suggestion, in relation to what the cost-estimation
calculation does, or could possibly do:
If there are 4000 distinct values in the index, scattered randomly amongst 75
million rows, then you might be able to check the visibility of all those index
values by reading a smaller number of disk pages than if the table were
clustered on that index.
As an example, say there are 50 rows per page. At best you could be lucky enough
to determine that they were all visible by reading only 80 data pages (4000 values
/ 50 rows per page). More likely you'd be able to determine that through a few
hundred pages. If the table were clustered by an index on that field, each value's
rows would be contiguous, so you'd have to read about 4000 pages, one per value.
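A rough back-of-envelope sketch of that arithmetic, assuming (my assumption, not stated above) that visibility of a value can be confirmed from any one of its rows, and modelling the "few hundred pages" figure with a coupon-collector estimate of how many random rows you'd read before seeing every distinct value at least once:

```python
import math

ROWS = 75_000_000
DISTINCT = 4_000
ROWS_PER_PAGE = 50

# Best case: every row on a page belongs to a different value,
# so each page read yields 50 new values.
min_pages = DISTINCT // ROWS_PER_PAGE  # 4000 / 50 = 80

# Coupon-collector estimate: with values scattered randomly, the
# expected number of rows read before all DISTINCT values have been
# seen at least once is DISTINCT * H(DISTINCT), where H is the
# harmonic number.
harmonic = sum(1.0 / k for k in range(1, DISTINCT + 1))
expected_rows = DISTINCT * harmonic
expected_pages = math.ceil(expected_rows / ROWS_PER_PAGE)

# Clustered case: each value's ~18750 rows are contiguous, so values
# almost never share a page; roughly one page read per value.
clustered_pages = DISTINCT

print(min_pages, expected_pages, clustered_pages)
```

This gives a minimum of 80 pages, an expectation of roughly 700 pages for the random layout, and about 4000 pages for the clustered layout, which is where the "few hundred" versus 4000 comparison comes from.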
Is this question completely unrelated to PostgreSQL implementation reality, or
something worth considering?
Regards,
Stephen Denne.
--
Sent via pgsql-general mailing list ([email protected])