Tom Lane wrote:
> One problem with it is that a too-small target would result in vacuum
> proceeding to scan indexes after having accumulated only a few dead
> tuples, resulting in increases (potentially enormous ones) in the total
> work needed to vacuum the table completely.
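To make the cost concrete, here is a back-of-envelope sketch (plain Python, not PostgreSQL code) of why a small dead-tuple target multiplies index work: every time the dead-tuple list fills, VACUUM must make a full cleanup pass over every index, so the number of passes is roughly the dead-tuple count divided by the target. The numbers below are purely illustrative.

```python
def index_scan_passes(dead_tuples, target):
    """Approximate number of index-cleanup passes VACUUM needs when it
    pauses to scan indexes after accumulating `target` dead tuples."""
    return -(-dead_tuples // target)  # ceiling division

# Hypothetical table with 10M dead tuples:
# a target sized to hold them all needs a single index pass,
# while a 100x smaller target means 100x the index scanning.
print(index_scan_passes(10_000_000, 10_000_000))  # -> 1
print(index_scan_passes(10_000_000, 100_000))     # -> 100
```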
Yeah. This is also my big concern about the idea from you and Simon. Every vacuum stop triggers an index scan, so the total vacuum time is proportional to the number of times vacuum has stopped.

> I think it's sufficient to have two cases: abort now, and restart from
> the last cycle-completion point next time (this would basically just be

If there is only one cycle, this approach has a problem. (When maintenance work memory is not too small, that situation is normal.)

> or set a flag to stop at the next cycle-completion point.

The extra cost of cleaning the indexes may prevent this approach from working in practice.

> Perhaps a more useful answer to the problem of using a
> defined maintenance window is to allow VACUUM to respond to changes in
> the vacuum cost delay settings on-the-fly.

This is a good idea! Itagaki also mentioned exactly the same idea to me yesterday. But if we change the parameters on the fly to make vacuum less aggressive, my concern is: are there any potential problems with running a single vacuum for several days? Although I have no plan to touch VACUUM FULL, it seems that concurrent VACUUM also holds an exclusive lock when truncating the table, and I am a little worried about that kind of problem with this approach. Also, maybe we need some shared memory area to share the cost-delay parameters between VACUUMs, or are there other ideas?
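For what it's worth, a minimal sketch of the on-the-fly idea (plain Python, not backend code; `shared_params` stands in for whatever shared-memory area would hold the settings, and all names are illustrative): the delay point re-reads the cost parameters on every call instead of caching them at VACUUM start, so an administrator's change takes effect at the next delay point.

```python
import time

# Stand-in for a shared-memory area holding the cost-delay settings;
# another backend (or the administrator) may update it mid-run.
shared_params = {"cost_limit": 200, "cost_delay_sec": 0.0}

def vacuum_delay_point(state):
    """Called periodically from the vacuum loop: once the accumulated
    cost reaches the limit, sleep and start a new cost cycle."""
    p = dict(shared_params)  # re-read on every call, not cached at start
    if p["cost_delay_sec"] > 0 and state["cost"] >= p["cost_limit"]:
        time.sleep(p["cost_delay_sec"])
        state["cost"] = 0

# Making a running vacuum less aggressive is just an update to the
# shared settings; the next delay point picks it up.
state = {"cost": 250}
shared_params["cost_delay_sec"] = 0.001
vacuum_delay_point(state)
print(state["cost"])  # -> 0 (slept and reset)
```

With delay disabled (`cost_delay_sec` of 0, analogous to a zero vacuum cost delay), the function is a no-op, which matches the existing behavior where cost accounting is skipped entirely.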