> latest changes:
> shared_buffers = 35642

This is fine; it's about 14% of available RAM, though the way you calculated 
it still confuses me.  It's not complicated: shared_buffers should be between 
6% and 15% of available RAM, and since you're doing a data-transformation DB, 
yours should be toward the high end.

> max_fsm_relations = 1000
> max_fsm_pages = 10000

You want to raise these a whole lot if your data transformations involve large 
delete or update batches.  I'd suggest running "VACUUM ANALYZE VERBOSE" 
between steps to see how many dead pages you're accumulating.
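Something like this between steps (the exact VERBOSE wording varies by 
version, and the settings below are illustrative numbers, not a 
recommendation for your workload):

```
-- run between transformation steps:
VACUUM ANALYZE VERBOSE;
-- look at the free space map summary at the end of the output; if the
-- page slots needed exceed max_fsm_pages, raise it in postgresql.conf:
--   max_fsm_relations = 5000
--   max_fsm_pages = 500000
```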

> wal_buffers = 64
> sort_mem = 32768
> vacuum_mem = 32768
> effective_cache_size = 10000

The effective_cache_size is way the heck too low.  It's supposed to reflect 
all the RAM available for caching (shared buffers plus the OS disk cache); 
I'd set it to 2GB * 65% as a start.
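In concrete terms (effective_cache_size is expressed in 8KB disk pages):

```
# 2GB * 65% = ~1.3GB of cache
# 1.3GB / 8KB per page = ~170000 pages
effective_cache_size = 170000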

> IO is active, but hardly saturated. CPU load is hefty though, load
> average is at 4 now.

Unless you're doing huge statistical aggregates (like radar charts) or heavy 
numerical calculations-by-query, high CPU with idle I/O usually indicates a 
really bad query: badly mismatched data types on a join, unconstrained joins, 
or overblown formatting-by-query.
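A hypothetical example of the mismatched-types case (table and column names 
are made up): joining an integer column against a text column forces a cast 
on every row and defeats any index on it:

```
-- bad: orders.customer_id is INTEGER, legacy.cust_code is VARCHAR
SELECT o.*
FROM orders o
JOIN legacy l ON o.customer_id::text = l.cust_code;
-- the per-row cast means the planner can't use the index on customer_id;
-- fix the column types to match, then check the plan with EXPLAIN
```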

-Josh Berkus
 Aglio Database Solutions
 San Francisco
