>>> Decibel! <[EMAIL PROTECTED]> wrote: 
> On Nov 25, 2008, at 7:06 PM, Gregory Stark wrote:
>>> The thought occurs to me that we're looking at this from the wrong
>>> side of the coin. I've never, ever seen query plan time pose a
>>> problem with Postgres, even without using prepared statements.
>>
>> I certainly have seen plan times be a problem. I wonder if you have
>> too and just didn't realize it. With a default_stats_target of 1000
>> you'll have hundreds of kilobytes of data to slog through to plan a
>> moderately complex query with a few text columns. Forget about
>> prepared queries, I've seen plan times be unusable for ad-hoc
>> interactive queries before.
> 
> Can you provide any examples?
> 
> And no, I've never seen a system where a few milliseconds of plan
> time difference would pose a problem.
 
When we first brought the statewide circuit court data onto
PostgreSQL, on some early version of 8.1, we tried boosting the
statistics targets for a few dozen important columns, and had to back
off because of plan times up in the 20 to 30 second range.  I hadn't
tried it lately, so I just gave it another go, switching from a
default statistics target of 10 with no overrides to 1000.
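 
For anyone who wants to repeat the experiment, the two knobs look
roughly like this (the table and column names below are invented
purely for illustration):
 
  -- Per-column override, the approach we tried originally:
  ALTER TABLE case_detail ALTER COLUMN case_type SET STATISTICS 1000;
  ANALYZE case_detail;
 
  -- Session-wide default, the approach used for the test below:
  SET default_statistics_target = 1000;
  ANALYZE;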
 
The plan time for a fairly complex query which is run over 300,000
times per day went from 55 ms to 315 ms; however, with the particular
search criteria I used (which I knew to be "challenging") the run time
went from something which exceeded my patience for the test (over two
minutes) to two seconds, so a better plan was definitely found.
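 
For the record, one minimal way to gather such numbers in psql (the
actual query is elided; <complex query> is just a placeholder):
 
  \timing on
  EXPLAIN <complex query>;          -- plans without executing, so the
                                    -- elapsed time approximates plan time
  EXPLAIN ANALYZE <complex query>;  -- plans and executes the query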
 
I'm not sure what this suggests in terms of a good default value, but
just to put some numbers out there from a real-world application....
 
-Kevin
