On Friday, January 13, 2012 10:50:32 PM Josh Berkus wrote:
> Hackers,
> 
> It occurs to me that I would find it quite personally useful if the
> vacuumdb utility was multiprocess capable.
> 
> For example, just today I needed to manually analyze a database with
> over 500 tables, on a server with 24 cores.   And I needed to know when
> the analyze was done, because it was part of a downtime.  I had to
> resort to a Python script.
> 
> I'm picturing doing this in the simplest way possible: get the list of
> tables and indexes, divide them by the number of processes, and give
> each child process its own list.
That doesn't sound like a good idea. It's far too likely that you will end up 
with one backend doing all the work because its list happened to contain the 
big tables while the others finish early and sit idle.

I don't think this task deserves threads or subprocesses. Multiple 
connections from one process seem far more sensible and mostly avoid the 
above problem: the process can hand out the next table whenever a connection 
goes idle, so a few big tables don't leave everything else queued behind them.
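
To illustrate, here is a minimal sketch of that approach using libpq's 
asynchronous API. The connection count, the hardcoded table list and the 
plain ANALYZE commands are assumptions for the example, not vacuumdb code; 
a real tool would fetch the table list from pg_class and quote the names 
(PQescapeIdentifier).

/*
 * Sketch: dispatch ANALYZE for a list of tables over several libpq
 * connections from a single process, handing the next table to whichever
 * connection goes idle first.
 */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

#define NCONN 4

int
main(void)
{
    /* hypothetical work list; names assumed to be safe identifiers */
    const char *tables[] = { "t1", "t2", "t3", "t4", "t5", "t6" };
    int         ntables = sizeof(tables) / sizeof(tables[0]);
    int         next = 0;
    int         active = 0;
    PGconn     *conns[NCONN];
    int         busy[NCONN] = {0};

    for (int i = 0; i < NCONN; i++)
    {
        conns[i] = PQconnectdb("");     /* libpq environment defaults */
        if (PQstatus(conns[i]) != CONNECTION_OK)
        {
            fprintf(stderr, "connection %d failed: %s",
                    i, PQerrorMessage(conns[i]));
            return 1;
        }
    }

    while (next < ntables || active > 0)
    {
        /* hand the next table to every idle connection */
        for (int i = 0; i < NCONN && next < ntables; i++)
        {
            if (!busy[i])
            {
                char sql[256];

                snprintf(sql, sizeof(sql), "ANALYZE %s", tables[next]);
                if (PQsendQuery(conns[i], sql))
                {
                    busy[i] = 1;
                    active++;
                    next++;
                }
                else
                    fprintf(stderr, "dispatch failed: %s",
                            PQerrorMessage(conns[i]));
            }
        }

        if (active == 0)
            break;

        /* block until at least one busy connection has data to read */
        fd_set  rfds;
        int     maxfd = -1;

        FD_ZERO(&rfds);
        for (int i = 0; i < NCONN; i++)
        {
            if (busy[i])
            {
                int fd = PQsocket(conns[i]);

                FD_SET(fd, &rfds);
                if (fd > maxfd)
                    maxfd = fd;
            }
        }
        select(maxfd + 1, &rfds, NULL, NULL, NULL);

        /* drain results; a NULL result means the connection is idle again */
        for (int i = 0; i < NCONN; i++)
        {
            if (busy[i] && FD_ISSET(PQsocket(conns[i]), &rfds))
            {
                PQconsumeInput(conns[i]);
                while (!PQisBusy(conns[i]))
                {
                    PGresult *res = PQgetResult(conns[i]);

                    if (res == NULL)
                    {
                        busy[i] = 0;
                        active--;
                        break;
                    }
                    PQclear(res);
                }
            }
        }
    }

    for (int i = 0; i < NCONN; i++)
        PQfinish(conns[i]);
    return 0;
}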

Andres
