Tom Lane wrote:
> Huh? There is no extra cost in what I suggested; it'll perform
> exactly the same number of index scans that it would do anyway.
The thing I wanted to say is: if we can stop at any point, then we can make
maintenance memory large enough to hold all of the dead tuples, so the
indexes need to be cleaned only once. No matter how many times vacuum
stops, the indexes are cleaned exactly once. In your proposal, however,
the indexes are scanned once per stop. Those extra index-cleaning passes
are the extra cost compared with the stop-on-a-dime approach. When
vacuuming a large table with 8 stops, our tests show the extra cost can be
one third of the stop-on-a-dime approach (see the sketch at the end of
this mail).

> So I'm not really convinced that being able to stop a table
> vacuum halfway is critical.

To run vacuum on the same table for a long period, it is critical to be
sure that it:

1. does not eat resources that foreground processes need
2. does not block vacuuming of hot-updated tables
3. does not block any transaction or any backup activity

In the current implementation of concurrent vacuum, the third point is
obviously not satisfied. The first issue that comes to mind is
lazy_truncate_heap: it takes AccessExclusiveLock for a long time, which is
problematic. Until we change such mechanisms to ensure that there is no
problem with running vacuum on the same table for several days, we cannot
say we don't need to stop halfway.

Best Regards,

--
Galy Lee <[EMAIL PROTECTED]>
NTT Open Source Software Center
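P.S. Here is a minimal sketch of the cost arithmetic above, assuming one
full index-clean pass per stop versus a single pass when all dead tuples
fit in maintenance memory. The relative cost weights are hypothetical
placeholders (not measurements), chosen only so the result reproduces the
one-third figure from our tests:

    /* Compare index-clean work for the two vacuum strategies. */
    #include <stdio.h>

    int
    main(void)
    {
        int     stops = 8;        /* times vacuum is interrupted */
        double  heap_cost = 20.0; /* relative cost of the heap scans
                                   * (illustrative placeholder) */
        double  index_cost = 1.0; /* relative cost of one index-clean
                                   * pass (illustrative placeholder) */

        /* stop-on-a-dime: maintenance memory holds every dead tuple,
         * so the indexes are cleaned once no matter how often we stop */
        double  dime = heap_cost + 1 * index_cost;

        /* proposed scheme: one index-clean pass per stop */
        double  per_stop = heap_cost + stops * index_cost;

        printf("stop-on-a-dime: %.1f\n", dime);
        printf("clean-per-stop: %.1f\n", per_stop);
        printf("extra cost    : %.0f%%\n",
               100.0 * (per_stop - dime) / dime);
        return 0;
    }

With these placeholder weights the extra work comes out at 33%, i.e.
roughly the one-third overhead we measured with 8 stops.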