> On Jun 8, 2018, at 10:23 AM, David G. Johnston <david.g.johns...@gmail.com> 
> wrote:
> 
> Not sure what the right answer is, but it seems your database (those tables 
> at least) is misconfigured for the workload being executed against it.  
> Significantly increasing the aggressiveness of the autovacuum process and/or 
> inserting manual vacuum analyze commands into your application at appropriate 
> times is probably necessary.
> 

I’m fine with changing the table parameters, which is the option that makes 
sense for us (thanks for pointing that out).  I have the autovacuum threshold 
set high because of other huge tables, and was not aware of the per-table 
settings.  I’ll use this excuse one time: I inherited this setup, and now I 
own it :-)
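
For the archives, the per-table overrides I have in mind are along these lines 
(a sketch only; the threshold and scale-factor values here are placeholders, 
not tuned numbers):

ALTER TABLE ds3.blob SET (
    autovacuum_vacuum_threshold = 50,       -- placeholder: vacuum after ~50 dead tuples
    autovacuum_vacuum_scale_factor = 0.02,  -- placeholder: plus 2% of the table
    autovacuum_analyze_threshold = 50,
    autovacuum_analyze_scale_factor = 0.01
);

ALTER TABLE ds3.job_entry SET (
    autovacuum_vacuum_threshold = 50,
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_threshold = 50,
    autovacuum_analyze_scale_factor = 0.01
);

That way the small, high-churn tables get vacuumed promptly without lowering 
the global threshold the huge tables rely on.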

I’m concerned that a query against two tables that have had 300k entries in 
them (i.e. now empty and 2 entries, respectively) is taking so long.  Even if 
those tables were full, the query should have taken no time at all.  The 
machine has 64GB of memory, 12 physical cores (+12 hyperthreads), and the 
storage is a ZFS pool with 5 mirrored vdevs of 7.2k SAS drives.  The entire db 
is 2.63GB, easily fitting into memory.  This is a production appliance, and it 
is built to handle the load.  Obviously it needs some intelligent tuning, 
though.
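
The numbers below came from a monitoring query along these lines (a 
reconstruction, not the exact statement I ran; av_threshold folds the global 
autovacuum settings into a per-table trigger point, and pct_dead is just dead 
tuples as a percentage of live ones):

SELECT s.schemaname AS nspname,
       s.relname,
       s.n_tup_ins, s.n_tup_upd, s.n_tup_del,
       s.n_live_tup, s.n_dead_tup,
       c.reltuples,
       -- dead tuples needed before autovacuum triggers (global settings)
       round(current_setting('autovacuum_vacuum_threshold')::integer
             + current_setting('autovacuum_vacuum_scale_factor')::numeric
               * c.reltuples) AS av_threshold,
       s.last_vacuum,
       s.last_analyze,
       -- would autovacuum fire on this table right now?
       s.n_dead_tup > round(current_setting('autovacuum_vacuum_threshold')::integer
             + current_setting('autovacuum_vacuum_scale_factor')::numeric
               * c.reltuples) AS av_needed,
       CASE WHEN s.n_live_tup > 0
            THEN round(100.0 * s.n_dead_tup / s.n_live_tup)
            ELSE 0
       END AS pct_dead
  FROM pg_stat_user_tables s
  JOIN pg_class c ON c.oid = s.relid
 WHERE s.schemaname = 'ds3';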


 nspname |  relname  | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup | reltuples | av_threshold |        last_vacuum         | last_analyze | av_needed | pct_dead
---------+-----------+-----------+-----------+-----------+------------+------------+-----------+--------------+----------------------------+--------------+-----------+----------
 ds3     | blob      |    303498 |      2559 |    303496 |          2 |          0 |         2 |         5000 | 2018-06-08 04:35:00.000000 | NULL         | false     |        0
 ds3     | job_entry |    303659 |       815 |    303659 |          0 |          0 |         0 |         5000 | 2018-06-08 04:35:00.000000 | NULL         | false     |        0
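
For completeness, the manual pass David suggested would be just the following, 
run at a quiet point in the application’s workflow:

VACUUM (VERBOSE, ANALYZE) ds3.blob;       -- reclaim dead tuples, refresh planner stats
VACUUM (VERBOSE, ANALYZE) ds3.job_entry;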

Best,
Robert
