> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Markus Benne
> Sent: Wednesday, August 31, 2005 12:14 AM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] When to do a vacuum for highly active table
> 
> We have a highly active table in which virtually all
> entries are updated every 5 minutes.  The table typically
> holds 50,000 entries, and the rows have grown fairly wide.
> 
> We are currently vacuuming hourly, and towards the end
> of the hour we are seeing degradation compared to the
> top of the hour.
> 
> Vacuum is slowly killing our system: it is starting
> to take up to 10 minutes, and load at the time of the
> vacuum is 6+ on a Linux box.  During the vacuum the
> overall system goes unresponsive, then comes back
> once the vacuum completes.

Play with the vacuum_cost_delay option. In our case it made a BIG difference
(going from a very heavy hit to almost unnoticed vacuuming).
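
For example, something along these lines before the hourly VACUUM -- the
numbers and the table name are only illustrative and will need tuning for
your workload:

    SET vacuum_cost_delay = 10;   -- sleep 10 ms each time the cost limit is reached (default 0 = off)
    SET vacuum_cost_limit = 200;  -- the default; lower it to make VACUUM sleep more often
    VACUUM ANALYZE my_table;      -- "my_table" stands in for your hot table

The idea is that VACUUM pauses periodically instead of hammering the disk
continuously, so it takes somewhat longer but the rest of the system stays
responsive.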

Hope it helps.

Rigmor Ukuhe

> 
> If we run vacuum less frequently, the degradation
> continues to the point that we can't keep up with the
> throughput, and the vacuum takes longer anyway.
> 
> Becoming quite a pickle:-)
> 
> We are thinking of splitting the table in two: the
> part that is updated often and the part that is updated
> infrequently, as we suspect that record size impacts
> vacuum.
> 
> Any ideas?
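
Splitting it that way can help: VACUUM's cost scales with the amount of
data it has to scan, so keeping the frequently-updated columns in a narrow
table makes the hot vacuum much cheaper. A rough sketch of the idea (the
table and column names below are invented for illustration):

    -- Wide, rarely-updated part
    CREATE TABLE item_static (
        item_id     integer PRIMARY KEY,
        description text,
        other_data  text
    );

    -- Narrow part, updated every few minutes
    CREATE TABLE item_status (
        item_id     integer PRIMARY KEY REFERENCES item_static(item_id),
        status      integer,
        updated_at  timestamp
    );

    -- Only the small, hot table then needs the aggressive vacuuming
    VACUUM ANALYZE item_status;
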
> 
> 
> Thanks,
> Mark
> 

