On Fri, 13 Mar 2020 at 01:43, Masahiko Sawada
<masahiko.saw...@2ndquadrant.com> wrote:
>
> On Thu, 12 Mar 2020 at 16:28, David Rowley <dgrowle...@gmail.com> wrote:
> > Laurenz highlighted a seemingly very valid reason that the current
> > GUCs cannot be reused. Namely, say the table has 1 billion rows; if we
> > use the current scale factor of 0.2, then we'll run an insert-only
> > vacuum every 200 million rows. If those INSERTs are one per
> > transaction, then the new feature does nothing, as the wraparound vacuum
> > will run instead. Since this feature was born due to large insert-only
> > tables, this concern seems very valid to me.
>
> Yeah, I understand and agree that since most people would use the default
> values, we can reduce misconfiguration cases by adding separate GUCs
> that have appropriate default values for that purpose. But on the other
> hand, I'm not sure it's worth covering the large insert-only table
> case by adding separate GUCs when we could cover it even
> with the existing two GUCs.
In light of the case above, do you have an alternative suggestion?

> If we want to disable this feature on a
> particular table, we can have a storage parameter that means not to
> consider the number of inserted tuples, rather than having multiple
> GUCs that allow us to fine-tune. And IIUC, even in the above case, I
> think that if we trigger the insert-only vacuum by comparing the number of
> inserted tuples to the threshold computed from the existing threshold and
> scale factor, we can cover it.

So you're suggesting we drive the insert-vacuums from the existing
scale_factor and threshold? What about the 1-billion-row table example
above?
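To make the arithmetic concrete, here's a rough sketch of why reusing the existing GUCs falls down for that table. It's a simplified model, not PostgreSQL code; the variable names just mirror the default GUC values (autovacuum_vacuum_threshold = 50, autovacuum_vacuum_scale_factor = 0.2, autovacuum_freeze_max_age = 200 million):

```python
# Simplified model of the autovacuum trigger arithmetic (assumed
# defaults; not actual PostgreSQL source code).
reltuples = 1_000_000_000            # the 1-billion-row table above
vacuum_threshold = 50                # default autovacuum_vacuum_threshold
vacuum_scale_factor = 0.2            # default autovacuum_vacuum_scale_factor
freeze_max_age = 200_000_000         # default autovacuum_freeze_max_age

# Inserts needed before an insert-driven vacuum would fire if we
# reused the existing threshold + scale_factor formula:
insert_trigger = vacuum_threshold + vacuum_scale_factor * reltuples
print(int(insert_trigger))           # 200000050 inserts

# With one INSERT per transaction, the anti-wraparound vacuum at
# freeze_max_age (200 million transactions) fires first, so the
# insert-driven vacuum would never get a chance to run.
print(insert_trigger > freeze_max_age)  # True
```

So on a table that size, the existing defaults put the insert trigger just past the wraparound trigger, which is exactly the scenario the new GUCs were meant to address.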