On Thu, Jan 26, 2023 at 12:54 PM Robert Haas <robertmh...@gmail.com> wrote:
> > The overwhelming cost is usually FPIs in any case. If you're not
> > mostly focussing on that, you're focussing on the wrong thing. At
> > least with larger tables. You just have to focus on the picture over
> > time, across multiple VACUUM operations.
>
> I think that's all mostly true, but the cases where being more
> aggressive can cause *extra* FPIs are worthy of just as much attention
> as the cases where we can reduce them.
It's a question of our exposure to real problems, in no small part.
What can we afford to be wrong about? What problem can be fixed by the
user more or less as it emerges, and what problem doesn't have that
quality?

There is very good reason to believe that the large majority of all
data that people store in a system like Postgres is extremely cold
data:

https://www.microsoft.com/en-us/research/video/cost-performance-in-modern-data-stores-how-data-cashing-systems-succeed/

https://brandur.org/fragments/events

Having a separate aggressive step that rewrites an entire large table,
apparently at random, is just a huge burden to users. You've said that
you agree that it sucks, but somehow I still can't shake the feeling
that you don't fully understand just how much it sucks.

--
Peter Geoghegan
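
The "aggressive step" referred to above is the anti-wraparound VACUUM
that autovacuum forces once a table's age(relfrozenxid) crosses
autovacuum_freeze_max_age. A minimal sketch of the kind of query a user
might run to see which tables are approaching that threshold; the
column aliases are illustrative only, and per-table reloptions
overrides of the setting are ignored:

-- Roughly how close each table is to the forced anti-wraparound
-- (aggressive) VACUUM, relative to the global autovacuum_freeze_max_age.
SELECT c.oid::regclass                                   AS table_name,
       age(c.relfrozenxid)                               AS xid_age,
       current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age,
       round(100.0 * age(c.relfrozenxid)
             / current_setting('autovacuum_freeze_max_age')::int, 1)
                                                         AS pct_towards_aggressive
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')   -- plain tables, matviews, TOAST tables
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;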