Dan Harris wrote:
> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
>> My biggest question is why the planner thinks the Nested Loop would be
>> so expensive.
>> Have you tuned any of the parameters? It seems like something is out of
>> whack. (cpu_tuple_cost, random_page_cost, etc...)
> here's some of my postgresql.conf.  Feel free to blast me if I did
> something idiotic here.
> shared_buffers = 50000
> effective_cache_size = 1348000
> random_page_cost = 3
> work_mem = 512000

Unless you are the only person connecting to this database, your
work_mem is very high: it is allocated per sort or hash operation, per
backend, so concurrent queries can each claim that much memory. And if
you haven't modified maintenance_work_mem, it is probably very low. A
large work_mem might be letting postgres fit an entire merge in RAM,
which would make it faster, but I can't say for sure.
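One common approach (a sketch, assuming PostgreSQL 8.x where the value
is an integer in kB) is to keep the global setting modest and raise it
only in the session that runs the heavy query:

```sql
-- In postgresql.conf, keep a modest default, e.g.:
--   work_mem = 16384        # 16MB per sort/hash
-- Then, in the one session that needs the big sort:
SET work_mem = 512000;       -- ~500MB for this session only
-- ... run the expensive query ...
RESET work_mem;              -- back to the configured default
```

That way a burst of ordinary connections can't each grab half a
gigabyte per sort.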

> max_fsm_pages = 80000

This seems high, but it depends how many updates/deletes you get
in-between vacuums, so it may not be excessive. VACUUM [FULL] VERBOSE
reports how many free-space-map pages are needed, if you haven't
already used that for tuning. Note that it should be tuned against a
steady-state workload, not a one-time test.
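For reference, a sketch of how to check it (the exact INFO wording
varies by version; the numbers here are made up):

```sql
-- Run after the database has reached a steady state:
VACUUM VERBOSE;
-- The tail of the output includes a summary along the lines of:
--   INFO:  free space map: 120 relations, 51234 pages stored;
--          64000 total pages needed
-- If "total pages needed" approaches or exceeds max_fsm_pages,
-- raise max_fsm_pages; if it is far below, 80000 is more than enough.
```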

> log_min_duration_statement = 60000
> fsync = true ( not sure if I'm daring enough to run without this )
> wal_buffers = 1000
> checkpoint_segments = 64
> checkpoint_timeout = 3000

These seem fine to me.

Can you include the output of EXPLAIN SELECT both with and without SET
join_collapse_limit? Since your tables have grown, I can't compare the
estimated row counts and costs very well.

EXPLAIN without ANALYZE is fine, since I am wondering what the planner
thinks things cost.
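Something like this (your actual query in place of the placeholder;
join_collapse_limit = 1 forces joins in the order the query writes
them, which is one way to see how the planner's own join ordering
changes the estimates):

```sql
-- Plan with the planner free to reorder joins (default limit is 8):
EXPLAIN SELECT ... ;

-- Plan with join reordering disabled, for comparison:
SET join_collapse_limit = 1;
EXPLAIN SELECT ... ;
RESET join_collapse_limit;
```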


> stats_command_string = true
> stats_row_level = true
