Decibel! <[EMAIL PROTECTED]> writes:

> On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
>>> The thought occurs to me that we're looking at this from the wrong side
>>> of the coin. I've never, ever seen query plan time pose a problem with
>>> Postgres, even without using prepared statements.
>>
>> I certainly have seen plan times be a problem. I wonder if you have too
>> and just didn't realize it. With a default_stats_target of 1000 you'll
>> have hundreds of kilobytes of data to slog through to plan a moderately
>> complex query with a few text columns. Forget about prepared queries,
>> I've seen plan times be unusable for ad-hoc interactive queries before.
>
> Can you provide any examples?
At the time I couldn't understand what the problem was, but in retrospect I'm
certain this was it. I had a situation where just running EXPLAIN took 5-10
seconds. I suspect some very large toasted statistics arrays were having to be
detoasted every time. IIRC I eventually reloaded the database with pg_dump and
the problem went away.

> And no, I've never seen a system where a few milliseconds of plan time
> difference would pose a problem. I'm not saying they don't exist, only that
> I haven't seen them (including 2 years working as a consultant).

How many milliseconds does it take to read a few hundred kilobytes of toasted,
compressed data? That can easily be more data than the query itself is going
to read. Now ideally this will all be cached, but the larger the data set the
less likely it will be.

--
  Gregory Stark
  EnterpriseDB                                http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
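
A rough way to check whether this is what's happening is to look at how much
statistics data the planner has to pull in for a table, by inspecting
pg_statistic directly. This is just a sketch, not something from the thread:
"my_table" is a placeholder, it only counts the first statistics slot per
column (real statistics use several slots), and pg_statistic is only readable
by superusers.

    -- Approximate on-disk size of the slot-1 statistics arrays per column
    SELECT a.attname,
           coalesce(pg_column_size(s.stavalues1), 0)
         + coalesce(pg_column_size(s.stanumbers1), 0) AS approx_stats_bytes
    FROM pg_statistic s
    JOIN pg_attribute a
      ON a.attrelid = s.starelid AND a.attnum = s.staattnum
    WHERE s.starelid = 'my_table'::regclass
    ORDER BY approx_stats_bytes DESC;

Columns showing hundreds of kilobytes here are the likely culprits, since
those arrays have to be detoasted whenever the planner consults them.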
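
And if one column's statistics do turn out to be the problem, the knob to turn
is the per-column statistics target. Again just a sketch under assumptions:
"my_table" and "some_text_col" are placeholders, and the \timing meta-command
assumes a psql session (EXPLAIN never executes the query, so its elapsed time
is essentially parse plus plan).

    -- attstattarget of -1 means the column follows default_statistics_target
    SELECT attname, attstattarget
    FROM pg_attribute
    WHERE attrelid = 'my_table'::regclass AND attnum > 0;

    \timing
    EXPLAIN SELECT * FROM my_table WHERE some_text_col = 'x';

    -- Pull the target back down for the offending column and rebuild its stats
    ALTER TABLE my_table ALTER COLUMN some_text_col SET STATISTICS 100;
    ANALYZE my_table;

    EXPLAIN SELECT * FROM my_table WHERE some_text_col = 'x';

Comparing the two EXPLAIN timings shows how much of the latency was planning
against the oversized statistics.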