Tom Lane wrote:
> BTW, to what extent might this whole problem be simplified if we adopt
> chunk-at-a-time vacuuming (compare current discussion with Galy Lee)?
> If the unit of work has a reasonable upper bound regardless of table
> size, maybe the problem of big tables starving small ones goes away.

So if we adopted chunk-at-a-time vacuuming, perhaps each worker could process the list of tables in OID order (or some other unique and stable order) and do one chunk per table that needs vacuuming. That way an equal share of vacuum bandwidth is given to every table.
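A minimal sketch of that round-robin idea, as a model rather than actual PostgreSQL source (the function name, chunk unit, and data layout here are hypothetical): each pass walks the tables in a stable OID-like order and does at most one bounded chunk of work per table, so a huge table cannot monopolize a worker.

```python
# Hypothetical model of round-robin chunk-at-a-time vacuuming.
# tables maps an OID to the number of pages still needing vacuum;
# chunk_pages is the bounded unit of work per table per pass.
def vacuum_round_robin(tables, chunk_pages):
    """Yield (oid, pages_done) work units in round-robin order
    until no table has pages left to vacuum."""
    while any(pages > 0 for pages in tables.values()):
        for oid in sorted(tables):  # stable, OID-like ordering
            if tables[oid] > 0:
                done = min(chunk_pages, tables[oid])
                tables[oid] -= done
                yield (oid, done)

# A small table (oid 2) gets serviced on the first pass even though
# the big table (oid 1) still has work outstanding.
schedule = list(vacuum_round_robin({1: 5, 2: 1}, chunk_pages=1))
```

With chunk_pages=1 and tables {1: 5, 2: 1}, the first pass emits one chunk for each table, and the small table is finished after a single pass; the remaining passes drain the big table alone.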

That does sound simpler. Is chunk-at-a-time a realistic option for 8.3?

