[EMAIL PROTECTED] ("Anderson Boechat Lopes") writes:
> I'm new here and I'm not sure if this is the right email to
> solve my problem.
This should be OK...

> Well, I have a very large database, with many tables and many
> records.  Every day, a great many insert, update, and delete
> queries are performed against it.  Once a week, statistics are
> collected using VACUUM ANALYZE.
>
> The problem is that after a period of time (one month, I think),
> queries take too much time to perform.  A simple update can take
> 10 seconds or more.

It seems fairly likely that two effects are coming into play:

-> The tables that are being updated have lots of dead tuples.

-> The vacuums aren't doing much good because the number of dead
   tuples is so large that you blow out the FSM (Free Space Map),
   and thus they can't free up space.

-> Another possibility is that if some tables shrink to a small size
   and then build back up to a large size (we see this with the
   _rserv_log_1_ and _rserv_log_2_ tables used by the eRServ
   replication system), the statistics may need to be updated a LOT
   more often.

You might want to consider running VACUUM a whole lot more often than
once a week.  If there is any regular time when the system isn't
terribly busy, you might want to vacuum some or all tables then.

pg_autovacuum might be helpful; it will automatically vacuum tables
that have been heavily updated.

There may be more to your problem, but VACUUMing more often would let
us rule out "too many dead tuples around" as a cause.
--
"cbbrowne","@","acm.org" http://cbbrowne.com/info/x.html
Would-be National Mottos:
USA: "There oughta' be a law!"

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
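P.S. For concreteness, a more frequent maintenance pass might look
something like the following sketch.  The table names are placeholders
for whatever your heavily-updated tables are, and the FSM values are
purely illustrative, not recommendations for your workload:

```sql
-- Hypothetical nightly maintenance job (run from cron via psql during
-- a quiet window).  VACUUM reclaims space held by dead tuples;
-- ANALYZE refreshes the planner's statistics.
VACUUM ANALYZE my_busy_table;

-- Tables that repeatedly shrink and grow go stale fast, so ANALYZE
-- them more often than the weekly pass:
ANALYZE my_log_table;

-- If vacuums still don't free space, the FSM may be too small.
-- In postgresql.conf (illustrative values only; size to your database):
--   max_fsm_pages = 200000
--   max_fsm_relations = 1000
```

VACUUM VERBOSE on a problem table will tell you how many dead tuples
it found, which helps you judge whether the FSM settings are adequate.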