On 2013/9/16 1:31, Kevin Grittner wrote:
There's your problem.  You left so little space between the point
where autovacuum kicks in for wraparound prevention (2 billion
transactions) and the point where the server stops accepting new
transactions to protect your data (2^31 - 1,000,000 transactions)
that autovacuum didn't have enough time to finish that work.

Changing a setting to ten times its default value is something
that should always be approached with caution.  In this case you
moved the threshold for starting the work that prevents data loss
from a little under 10% of the distance to the disastrous
condition to a little under 100% of that distance.
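To put rough numbers on that: the default of 200,000,000
transactions is about 9.3% of 2^31 - 1, while 2,000,000,000 is
about 93%, which left autovacuum only about 146 million
transactions of headroom in which to finish.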

You could play with non-standard settings for these, but if you go
anywhere near this extreme you risk downtime like you have just
experienced.  Personally, I have never had a reason to change these
from the defaults.
OK, thanks for pointing out the problem.  This was changed quite a
long time ago, when I saw overly frequent anti-wraparound
autovacuums on a very busy/large table slowing down performance.
I will change it back to the default and see how it works.
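For the record, this is the line I'll be putting back in
postgresql.conf (200 million is the shipped default for this
setting):

    autovacuum_freeze_max_age = 200000000   # default; forces an
                                            # anti-wraparound vacuum at
                                            # just under 10% of 2^31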

To ensure that autovacuum can keep up with the activity on a
machine like this, I have generally gone to something like:

autovacuum_cost_limit = 800

If you have more than a couple of large tables whose scans take
long enough to keep small, frequently-updated tables from getting
attention soon enough, you might want to boost
autovacuum_max_workers, too.
I will try the parameters you suggested, too.
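Something like this, if I understand the suggestion; the
autovacuum_max_workers value below is just a guess on my part
(3 is the default):

    autovacuum_cost_limit  = 800   # default is -1, which falls back
                                   # to vacuum_cost_limit (200)
    autovacuum_max_workers = 6     # default 3; raise if large tables
                                   # starve small, busy ones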

So you guys still think the problem was on db1 (that's my main db),
as the error message stated?  Did autovacuum on db1 somehow kick in
and fix the problem while I was running vacuum on the other dbs?
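In the meantime I'll watch how close each database gets to the
limit with a query along these lines (run as superuser):

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;
    -- whichever datname tops this list is the one autovacuum will
    -- target first for anti-wraparound work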

