I want to see if there is a consensus of opinion out there. We've all known that data loss "could" happen if vacuum is not run and you perform more than 2 billion transactions. These days, with faster and bigger computers and disks, it's likely that this problem can be hit in months -- not years.
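For anyone who wants to see how close a cluster is, something along these lines gives a rough per-database distance to wrap-around (this assumes a release that exposes datfrozenxid in pg_database and has the age() function; column names and availability vary between versions, so treat it as a sketch, not a recipe):

    -- age() reports how many transactions old the frozen xid is;
    -- anything approaching ~2 billion is in danger of wrap-around
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;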
To me, the WORST thing a program can do is lose data. (Certainly this is bad for a database.) I don't think there is any real excuse for this. While the 2 billion transaction problem was always there, it seemed so remote that I never obsessed about it. Now that it seems like a real problem that more than one user has hit, I am worried. In fact, I think it is bad enough that we need to back-port a fix to previous versions and issue a notice of some kind.

Here are my suggestions:

(1) As Tom has already said, at some point start issuing warnings in the log that vacuum needs to be run.

(2) At some point, stop accepting transactions on anything but template1, issuing an error saying that vacuum needs to be run.

(3) Either with psql on template1, or "postgres", or some "vacuumall" program, open the database in single-user mode or via template1 and vacuum the database. (A rough sketch of what this might look like is at the end of this note.)

(4) This should remain even after autovacuum is in place. If for some reason autovacuum is installed but not running, we still need to protect the data from a stupid admin. (Last time I looked, autovacuum used various stats, and that may be something an admin disables.)

(5) Vacuum could check for a wrap-around condition in the database cluster and take it upon itself to run more broadly, even if it was directed only at a single table.

We've been saying that MySQL is OK if you don't care about your data; I would hate for people to start using this issue against PostgreSQL.
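To make (3) a little more concrete, here is roughly what the recovery path might look like today, assuming the standalone backend is used; the exact invocation differs between releases, so this is only a sketch:

    -- With the postmaster shut down, start a standalone backend on the
    -- affected database, e.g. something like
    --     postgres -D /usr/local/pgsql/data mydb
    -- at the shell, and then issue a database-wide vacuum so the xid
    -- bookkeeping is brought forward and normal transactions can resume:
    VACUUM;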