ITAGAKI Takahiro wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> wrote:
>
> > Here is the autovacuum patch I am currently working with.  This is
> > basically the same as the previous patch; I have tweaked the database
> > list management so that after a change in databases (say a new database
> > is created or a database is dropped), the list is recomputed to account
> > for the change, keeping the ordering of the previous list.
>
> I'm interested in your multi-worker autovacuum proposal.
>
> I'm researching the impact of multiple workers on
> autovacuum_vacuum_cost_limit.  Autovacuum will consume server resources
> up to autovacuum_max_workers times as much as before.  I think we might
> need to change the semantics of autovacuum_vacuum_cost_limit when we
> have multiple workers.
Yes, that's correct.  Per previous discussion, what I actually wanted to
do was create a GUC setting to simplify the whole thing, something like
"autovacuum_max_mb_per_second" or "autovacuum_max_io_per_second", and
then have each worker use up to (max_per_second / active workers) of the
IO budget (a rough sketch of this split is appended below).  This way
the maximum use of IO resources by vacuum can be easily determined and
limited by the DBA, which is certainly much simpler than the vacuum cost
limiting feature.

> BTW, I found an inadvertent mistake in the foreach_worker() macro.
> These two operations are the same in C:
> - worker + 1
> - (WorkerInfo *) (((char *) worker) + sizeof(WorkerInfo))

Ah, thanks.  I had originally coded the macro the way you suggest, but
during development I needed to use the "i" variable as well, so I added
it.  Apparently I later removed that usage; I see that there are no such
uses left in the current code.  The "+ sizeof(WorkerInfo)" part is just
stupidity on my part, sorry about that.

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
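
A minimal standalone sketch of the per-worker split described above,
assuming the proposed (and currently nonexistent) GUC
autovacuum_max_mb_per_second; the helper names are made up for
illustration, and a real patch would track the number of active workers
in shared memory rather than passing a count around:

/*
 * Standalone sketch (not PostgreSQL code) of dividing a global IO
 * budget evenly among the autovacuum workers that are currently active.
 */
#include <stdio.h>

/* Hypothetical GUC: total IO budget for all autovacuum workers, in MB/s */
static double autovacuum_max_mb_per_second = 10.0;

/* Per-worker budget: the global limit split evenly among active workers */
static double
worker_mb_per_second(int active_workers)
{
    if (active_workers < 1)
        active_workers = 1;
    return autovacuum_max_mb_per_second / active_workers;
}

/*
 * Seconds a worker should sleep after processing "mb_done" megabytes,
 * to stay within its share of the budget.
 */
static double
throttle_delay(double mb_done, int active_workers)
{
    return mb_done / worker_mb_per_second(active_workers);
}

int
main(void)
{
    int         nworkers;

    /* With more active workers, each one gets a smaller slice. */
    for (nworkers = 1; nworkers <= 3; nworkers++)
        printf("%d worker(s): %.2f MB/s each, sleep %.2f s per 1 MB batch\n",
               nworkers,
               worker_mb_per_second(nworkers),
               throttle_delay(1.0, nworkers));
    return 0;
}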
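
The foreach_worker() point is plain C pointer arithmetic: for a typed
pointer, "worker + 1" already advances by sizeof(WorkerInfo), so the
explicit char-pointer cast buys nothing.  A tiny standalone check, with
placeholder WorkerInfo fields rather than the actual struct:

#include <assert.h>
#include <stdio.h>

typedef struct WorkerInfo
{
    int         wi_dboid;       /* placeholder fields only */
    int         wi_tableoid;
} WorkerInfo;

int
main(void)
{
    WorkerInfo  workers[4];
    WorkerInfo *worker = &workers[0];

    WorkerInfo *a = worker + 1;
    WorkerInfo *b = (WorkerInfo *) (((char *) worker) + sizeof(WorkerInfo));

    assert(a == b);             /* both point at workers[1] */
    printf("worker + 1 equals the cast form: %s\n", (a == b) ? "yes" : "no");
    return 0;
}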