On 13.01.2012 22:50, Josh Berkus wrote:
> It occurs to me that I would find it quite personally useful if the
> vacuumdb utility was multiprocess capable.
>
> For example, just today I needed to manually analyze a database with
> over 500 tables, on a server with 24 cores. And I needed to know when
> the analyze was done, because it was part of a downtime. I had to
> resort to a python script.
>
> I'm picturing doing this in the simplest way possible: get the list of
> tables and indexes, divide them by the number of processes, and give
> each child process its own list.
>
> Any reason not to hack on this for 9.3?

I don't see any reason not to do it, and plenty of reasons to do it.
Right now I have systems hosting many databases, and I need to VACUUM FULL them from time to time. I have wrapped vacuumdb in a shell script so that it actually uses all the capacity that is available. A plain vacuumdb -faz just isn't that useful on large machines anymore.
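
For illustration, here is a minimal Python sketch of the divide-the-table-list idea described above: fetch the table names, then fan per-table vacuumdb calls out over a pool of worker processes. This is not the script mentioned in either mail; the database name, the worker count, and the choice of --analyze-only are assumptions made just to keep the example short.

# Sketch only: split the per-table work across N worker processes.
# DBNAME and WORKERS are assumptions; adjust for the target system.
import subprocess
from concurrent.futures import ProcessPoolExecutor

DBNAME = "mydb"   # assumption: target database
WORKERS = 24      # assumption: one worker per core

def list_tables(dbname):
    # Ask psql for all ordinary user tables, one per line.
    out = subprocess.check_output([
        "psql", "-At", "-d", dbname, "-c",
        "SELECT quote_ident(schemaname) || '.' || quote_ident(tablename) "
        "FROM pg_tables "
        "WHERE schemaname NOT IN ('pg_catalog', 'information_schema')",
    ])
    return out.decode().splitlines()

def analyze_table(table):
    # One vacuumdb call per table; --analyze-only matches the downtime
    # use case above (statistics only, no vacuuming).
    subprocess.check_call(["vacuumdb", "--analyze-only", "-d", DBNAME, "-t", table])
    return table

if __name__ == "__main__":
    tables = list_tables(DBNAME)
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        for done in pool.map(analyze_table, tables):
            print("analyzed", done)

The same pattern works for plain VACUUM by dropping --analyze-only, or across whole databases by iterating over the database list instead of the table list.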

Jan



