On Fri, May 12, 2017 at 1:06 PM, Bossart, Nathan <bossa...@amazon.com> wrote:
> On 5/11/17, 7:20 PM, "Michael Paquier" <michael.paqu...@gmail.com> wrote:
>> It seems to me that it would have been less invasive to loop through
>> vacuum() for each relation. Do you foresee advantages in allowing
>> vacuum() to handle multiple? I am not sure if it is worth complicating
>> the current logic further, considering that you also need extra
>> logic to carry the option values.
> That was the approach I first prototyped.  The main disadvantage that I found 
> was that the command wouldn’t fail fast if one of the tables or columns 
> didn’t exist, and I thought that it might be frustrating to encounter such an 
> error in the middle of vacuuming several large tables.  It’s easy enough to 
> change the logic to emit a warning and simply move on to the next table, but 
> that seemed like it could be easily missed among the rest of the vacuum log 
> statements (especially with the verbose option specified).  What are your 
> thoughts on this?

Hm. If multiple tables are specified and some of them take a long
time, an error could still happen if the definition of one of those
tables changes while VACUUM is in the middle of running, which would
make the error checks done in the first step moot. So it seems to me
that we should definitely emit a WARNING if multiple tables are
specified, and that to avoid code duplication we may want to do those
checks just once, before processing any of the listed tables. It is
true that a WARNING would be easy to miss in the VERBOSE logs, but
issuing an ERROR would be really frustrating in the middle of a
nightly run of VACUUM.

> In the spirit of simplifying things a bit, I do think it is possible to 
> eliminate one of the new node types, since the fields for each are almost 
> identical.

Two looks like too many for code that just aims at scaling up VACUUM
to handle N items. It may be possible to make things even simpler,
but I have not put much thought into that, to be honest.

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)