>> The existing cost estimation
>> code effectively assumes that they're perfectly uniformly distributed,
>> which is a good average-case assumption but can be horribly wrong in
>> the worst case.
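
(A toy sketch, just to make the quoted point concrete. The column and the
numbers below are invented, and this is not PostgreSQL's estimator code; it
only applies the same uniformity assumption to a deliberately skewed column.)

# Estimate rows for "col = value" assuming a uniform value distribution,
# then compare against the actual counts on a skewed column.
from collections import Counter
import random

random.seed(0)
n_rows = 100_000
n_distinct = 100
# Skewed column: one value dominates, the other 99 are rare.
col = [0] * 90_000 + [random.randrange(1, n_distinct) for _ in range(10_000)]

counts = Counter(col)
uniform_estimate = n_rows / n_distinct   # 1,000 rows for any value

for value in (0, 1):
    actual = counts[value]
    print(f"col = {value}: estimated {uniform_estimate:.0f} rows, "
          f"actual {actual} rows")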


Sorry, just an outsider jumping in with a quick comment.

Every year or two the core count goes up. Can/should/does Postgres ever attempt
two strategies in parallel, in cases where strategy A is generally good but
strategy B prevents bad worst-case behaviour? Kind of like a Schrödinger's Cat
approach to scheduling. What problems would it raise?
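
In case it helps, here is a very rough sketch of the idea in Python (nothing
PostgreSQL actually does; run_plan_a/run_plan_b are made-up stand-ins for two
alternative plans):

import concurrent.futures
import time

def run_plan_a(query):
    # Pretend: usually fast, occasionally pathological.
    time.sleep(0.1)
    return f"plan A result for {query!r}"

def run_plan_b(query):
    # Pretend: slower on average, but with a bounded worst case.
    time.sleep(0.5)
    return f"plan B result for {query!r}"

def hedged_execute(query):
    # Start both strategies and return whichever finishes first.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    futures = [pool.submit(fn, query) for fn in (run_plan_a, run_plan_b)]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    # One obvious problem shows up even here: the losing plan can't simply be
    # killed, so its work (CPU, I/O, locks) is merely abandoned, not reclaimed.
    pool.shutdown(wait=False)
    return next(iter(done)).result()

print(hedged_execute("SELECT ..."))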

Graeme. 


