Another day, another timing-out query rewritten to force a more stable
query plan.
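
For the curious, the sort of rewrite I mean looks something like this
(table and column names invented for illustration). A plain CTE acts
as an optimization fence, so the planner can no longer flatten the
subquery into the join it keeps mis-costing:

    -- Before the rewrite, the planner sometimes picked a nested loop
    -- off a bad row estimate and blew through statement_timeout.
    -- The CTE pins the evaluation order.
    WITH recent AS (
        SELECT id
        FROM events
        WHERE account_id = 42
          AND created_at > now() - interval '7 days'
    )
    SELECT e.*
    FROM events e
    JOIN recent r ON r.id = e.id;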

I know the planner almost always chooses a good plan, but I tend to
think it is trying too hard. 99% of queries might run 10% faster, but
the 1% that time out make my users cross and my life difficult. I'd
much rather have a system that is less efficient overall, but stable
with a very low rate of timeouts.

I was wondering: should the planner be much more pessimistic, trusting
in Murphy's Law and assuming the worst case is the likely case? Would
that give me a much more consistent system, or would it consistently
grind to a halt doing full table scans? Do we actually know the worst
cases, and would it be a relatively easy task to update the planner so
this behaviour could optionally be enabled per transaction or
system-wide? Is it a Boolean choice between pessimistic and
optimistic, or is pessimism a dial?
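
The closest approximation I have today is twiddling planner GUCs per
transaction, e.g. (values purely illustrative, reusing the made-up
events table from above):

    BEGIN;
    -- Pretend every page read misses cache, and rule out the plan
    -- shape that hurts most when row estimates are wrong. SET LOCAL
    -- reverts these at COMMIT/ROLLBACK.
    SET LOCAL random_page_cost = 10;
    SET LOCAL enable_nestloop = off;
    SELECT count(*) FROM events WHERE account_id = 42;
    COMMIT;

but that is a blunt instrument compared to a genuinely pessimistic
cost model.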

-- 
Stuart Bishop <stu...@stuartbishop.net>
http://www.stuartbishop.net/

