On Fri, Oct 7, 2016 at 3:20 PM, Robert Haas <> wrote:
> On Wed, Oct 5, 2016 at 10:58 AM, Francisco Olarte
> <> wrote:
>> On Tue, Oct 4, 2016 at 7:50 PM, Robert Haas <> wrote:
>>> On Mon, Oct 3, 2016 at 5:44 PM, Alvaro Herrera <> 
>>> wrote:
>> ...
>>>> I wonder if the real answer isn't just to disallow -f with parallel
>>>> vacuuming.
>>> Seems like we should figure out which catalog tables are needed in
>>> order to perform a VACUUM, and force those to be done last and one at
>>> a time.
>> Is the system catalog a bottleneck for people who have a real use for
>> parallel vacuum?

> I don't know, but it seems like the documentation for vacuumdb
> currently says, more or less, "Hey, if you use -j with -f, it may not
> work!", which seems unacceptable to me.  It should be the job of the
> person writing the feature to make it work in all cases, not the job
> of the person using the feature to work around the problem when it
> doesn't.

That may be the case, but the only way to solve it seems to be to
disallow full parallel vacuum, as suggested.
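For context, the behavior under discussion can be sketched as follows (an
illustration only; `mydb` is a placeholder database name, and the exact
failure message depends on the server version). The documented caveat is
that combining `-f` (FULL) with `-j` can fail, because the parallel jobs
take conflicting exclusive locks on shared catalog tables:

```shell
# Parallel FULL vacuum: the concurrent jobs may deadlock on system
# catalog tables, which is the caveat the vacuumdb docs warn about.
vacuumdb --full --jobs=4 mydb

# The conservative fallback is simply to run FULL vacuum serially,
# giving up the -j speedup for that database:
vacuumdb --full mydb
```

This is only a sketch of the trade-off being debated: whether code to
serialize just the catalog portion is worth the added complexity versus
disallowing the combination outright.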

OTOH, what I was asking was just whether people think the time gained by
minimizing the part of pg_catalog that is processed serially in the
full-parallel case would be enough to warrant the increased code
complexity and bug surface.

Anyway, I'll stick to my original plan even if someone decides to fix
or disallow full parallel vacuum, as I think it has its uses.

Francisco Olarte.
