On Mon, Sep 4, 2017 at 11:47 PM, Bossart, Nathan <bossa...@amazon.com> wrote:
> I've made this change in v14 of the main patch.
>
> In case others had opinions regarding the de-duplication patch, I've
> attached that again as well.

+   /*
+    * Create the relation list in a long-lived memory context so that it
+    * survives transaction boundaries.
+    */
+   old_cxt = MemoryContextSwitchTo(AutovacMemCxt);
+   rangevar = makeRangeVar(tab->at_nspname, tab->at_relname, -1);
+   rel = makeVacuumRelation(rangevar, NIL, tab->at_relid);
+   rel_list = list_make1(rel);
+   MemoryContextSwitchTo(old_cxt);
That's way better, thanks for the new patch.

So vacuum_multiple_tables_v14.patch is good for a committer in my
opinion. With this patch, if the same relation is specified multiple
times, it gets vacuumed that many times. Specifying the same column
multiple times results in an error, as on HEAD, but that's not a new
problem introduced by this patch.
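
For instance (hypothetical table foo with a column a):

    =# VACUUM foo, foo;           -- foo is vacuumed twice
    =# VACUUM ANALYZE foo (a, a); -- rejected, as on HEAD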

So I would tend to think that the same column specified multiple times
should cause an error, and that we could let VACUUM simply process a
relation N times if it is listed N times. This feels more natural, at
least to me, and it keeps the code simple.
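
If we go that way, the duplicate check could look roughly like the
sketch below. This is just an illustration, not code from the patch:
the function name is made up, and it assumes va_cols is a List of
String nodes as in VacuumStmt.

/*
 * Sketch only: reject a column that is named more than once in the
 * column list of a VACUUM/ANALYZE statement.
 */
static void
check_column_duplicates(List *va_cols)
{
    ListCell   *lc;

    foreach(lc, va_cols)
    {
        char       *col = strVal(lfirst(lc));
        ListCell   *lc2;

        /* Compare against every later entry in the list. */
        for_each_cell(lc2, lnext(lc))
        {
            if (strcmp(col, strVal(lfirst(lc2))) == 0)
                ereport(ERROR,
                        (errcode(ERRCODE_DUPLICATE_COLUMN),
                         errmsg("column \"%s\" specified more than once",
                                col)));
        }
    }
}

An O(n^2) scan is fine here given how short a column list is in
practice.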
-- 
Michael

