On 07.11.2013 12:42, Dilip kumar wrote:

> This patch implements the following TODO item:
>
> Allow parallel cores to be used by vacuumdb
>
> http://www.postgresql.org/message-id/4f10a728.7090...@agliodbs.com
>
> Like parallel pg_dump, vacuumdb is provided with an option to run the
> vacuum of multiple tables in parallel. [ vacuumdb -j ]
>
> 1. One new option is provided with vacuumdb to give the number of
> workers.
>
> 2. All workers will be started at the beginning and will wait for a
> vacuum instruction from the master.
>
> 3. If a table list is provided to the vacuumdb command using -t, it
> will send the vacuum of one table to one IDLE worker, the next table
> to the next IDLE worker, and so on.
>
> 4. If vacuum is given for one DB, it will execute a select on
> pg_class to get the table list, fetch the table names one by one, and
> assign the vacuum responsibility to IDLE workers.
>
> [...]
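
For anyone who wants to play with the idea outside the patch, here is a
minimal sketch of the master/worker dispatch that steps 2-4 describe. It
is an illustration under stated assumptions, not the patch's C
implementation: it uses Python with psycopg2, and the connection string
and worker count are placeholders.

    import concurrent.futures
    import psycopg2

    DSN = "dbname=postgres"   # placeholder connection string
    JOBS = 4                  # number of parallel workers (the new option)

    def list_tables(dsn):
        # Step 4: select on pg_class to get the user tables to vacuum.
        conn = psycopg2.connect(dsn)
        try:
            with conn.cursor() as cur:
                cur.execute("""
                    SELECT c.oid::regclass::text
                    FROM pg_class c
                    JOIN pg_namespace n ON n.oid = c.relnamespace
                    WHERE c.relkind = 'r'
                      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
                """)
                return [row[0] for row in cur.fetchall()]
        finally:
            conn.close()

    def vacuum_one(dsn, table):
        # Each worker holds its own connection; VACUUM cannot run inside
        # a transaction block, hence autocommit.
        conn = psycopg2.connect(dsn)
        conn.autocommit = True
        try:
            with conn.cursor() as cur:
                # table name is already safely quoted via ::regclass::text
                cur.execute("VACUUM " + table)
        finally:
            conn.close()

    def parallel_vacuum(dsn, tables, jobs):
        # Steps 2-3: a pool of workers; each idle worker picks up the
        # next table as soon as it finishes the previous one.
        with concurrent.futures.ThreadPoolExecutor(max_workers=jobs) as pool:
            futures = [pool.submit(vacuum_one, dsn, t) for t in tables]
            for fut in futures:
                fut.result()   # re-raise any per-table failure

    if __name__ == "__main__":
        parallel_vacuum(DSN, list_tables(DSN), JOBS)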

For this use case, would it make sense to queue the work (tables) in
order of their size, starting with the largest one?

Where the tables vary in size, this leads to a reduced overall
processing time, as it prevents large (read: long processing time)
tables from being processed in the last step. Processing the large
tables first and then filling up "processing slots/jobs" with the
smaller tables as the slots become free saves overall execution time;
a small simulation of this effect follows below.
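
To make the effect concrete, here is a toy simulation of greedy
dispatch to the next idle worker (Python; the per-table times are
made-up numbers chosen for illustration): one big table plus nine small
ones on three workers, comparing smallest-first and largest-first queue
orders.

    import heapq

    def makespan(durations, jobs):
        # Greedy dispatch: each table goes to the worker that frees up
        # first, i.e. "filling up processing slots as they get free".
        workers = [0.0] * jobs
        heapq.heapify(workers)
        for d in durations:
            finish = heapq.heappop(workers)
            heapq.heappush(workers, finish + d)
        return max(workers)

    # Per-table vacuum times in arbitrary units (assumed values).
    sizes = [100] + [10] * 9

    print(makespan(sorted(sizes), jobs=3))                # smallest first -> 130.0
    print(makespan(sorted(sizes, reverse=True), jobs=3))  # largest first  -> 100.0

In practice the ordering would come almost for free from the pg_class
query in step 4, e.g. by appending ORDER BY pg_relation_size(c.oid)
DESC.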

Regards


Jan 

-- 
professional: http://www.oscar-consult.de


