Tom Lane wrote:
> "Matthew T. O'Connor" <> writes:
>> Tom Lane wrote:
>>> I'm inclined to propose an even simpler algorithm in which every worker
>>> acts alike;
>>
>> That is what I'm proposing, except for one difference: when you catch up
>> to an older worker, exit.
>
> No, that's a bad idea, because it means that any large table starves
> even-larger tables.

True, but the assumption I'm making is that there is a finite amount of bandwidth available, and that more concurrent activity will have a net negative effect on the time it takes to vacuum all tables. I'm willing to pay that price to prevent small, hot tables from getting starved, but less willing to pay the same price for large tables, where the percentage of bloat will be much smaller.
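To make the trade-off concrete, here is a minimal sketch of the "exit when you catch up to an older worker" rule being discussed. All names (run_worker, older_worker_position) are hypothetical illustration, not actual autovacuum code; workers are assumed to walk the same smallest-first to-do list.

```python
def run_worker(todo, older_worker_position, start=0):
    """Process tables in order until we reach the table the next-older
    worker is currently on, then exit rather than overtake it.
    older_worker_position is an index into todo, or None if we are the
    oldest worker and have the whole list to ourselves."""
    done = []
    for i in range(start, len(todo)):
        if older_worker_position is not None and i >= older_worker_position:
            break  # caught up to the older worker: exit
        done.append(todo[i])
    return done

# Smallest-first to-do list shared by all workers.
todo = ["tiny", "small", "medium", "large", "huge"]

# An older worker is grinding through "large" (index 3); a newer worker
# re-vacuums the small tables but exits before reaching the older worker.
print(run_worker(todo, older_worker_position=3))
# -> ['tiny', 'small', 'medium']
```

This also shows the starvation concern above: as long as an older worker sits on "large", no newer worker ever reaches "huge".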

> (Note: in all this I assume we're all using "size" as a shorthand for
> some sort of priority metric that considers number of dirty tuples, not
> only size.  We don't want every worker insisting on passing over every
> small read-only table every time, for instance.)

I was using size to mean reltuples. The whole concept of sorting by size was designed to ensure that smaller tables (which are more susceptible to bloat) get priority. It might be useful for workers to sort their to-do lists by some other metric, but I don't have a clear vision of what that might be.
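A minimal sketch of this to-do-list ordering, folding in Tom's note that a pure size sort should at least skip tables with nothing to vacuum. The field names (reltuples, n_dead_tuples) stand in for the catalog/statistics values; this is an illustration, not actual autovacuum code.

```python
def build_todo(tables):
    """Order a worker's to-do list smallest-first by reltuples, skipping
    tables with no dead tuples (so small read-only tables are not passed
    over by every worker on every cycle)."""
    needs_vacuum = [t for t in tables if t["n_dead_tuples"] > 0]
    return sorted(needs_vacuum, key=lambda t: t["reltuples"])

tables = [
    {"name": "big_log",   "reltuples": 10_000_000, "n_dead_tuples": 500},
    {"name": "hot_queue", "reltuples": 2_000,      "n_dead_tuples": 1_500},
    {"name": "lookup",    "reltuples": 300,        "n_dead_tuples": 0},
]

print([t["name"] for t in build_todo(tables)])
# -> ['hot_queue', 'big_log']  (small hot table first; read-only table skipped)
```

A richer priority metric would replace the sort key, e.g. weighting n_dead_tuples against reltuples, but as said above it's not obvious what the right formula is.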
