On Tue, Apr 8, 2025 at 12:17 AM Nathan Bossart <nathandboss...@gmail.com> wrote:
>
> On Mon, Apr 07, 2025 at 10:33:47PM +0200, Hannu Krosing wrote:
> > The obvious solution would be to handle the table
> > `pg_largeobject_metadata` the same way as we currently handle
> > `pg_largeobject`, by not doing anything with it in `pg_dump
> > --binary-upgrade` and just handling its contents like we do for user
> > tables in pg_upgrade itself.
> >
> > This should work fine for all source database versions starting from
> > PgSQL v12.
>
> Unfortunately, the storage format for aclitem changed in v16, so this
> would need to be restricted to upgrades from v16 and newer.
Have we also changed the external format of aclitem any time since
v9.2, or are the changes just to the storage format?

If the external format has been stable, we can still get reasonable
performance by dumping the data (about 2 minutes for 100M rows), and
dumping the data would work for all supported source versions. The
worst case would still be quite bad, at 80+ minutes for the full set
of 4 billion LOs, but even that would be much better than the 3 days
it takes with the current approach.

> That being said, I regularly hear about slow upgrades with many LOs,
> so I think it'd be worthwhile to try to improve matters in v19.

Changing the LO export to dump the contents of pg_largeobject_metadata
instead of creating the LOs should be a nice small change confined to
pg_dump --binary-upgrade only, so perhaps we could still squeeze it
into v18.

-- Hannu
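
PS. To make the proposed change a bit more concrete, here is a rough
sketch of the difference. The statements, OIDs, and role names below
are illustrative assumptions on my part, not actual pg_dump output:

    -- Roughly what a binary-upgrade dump does today: one group of
    -- commands per large object (OIDs and roles are made up):
    SELECT pg_catalog.lo_create('16385');
    ALTER LARGE OBJECT 16385 OWNER TO alice;
    GRANT SELECT ON LARGE OBJECT 16385 TO bob;

    -- Versus dumping the catalog contents as data: a single COPY
    -- covering all LOs (pg_largeobject_metadata columns as of v12+),
    -- restored as superuser during the binary upgrade. The lomacl
    -- column is written in the aclitem array text (external) format,
    -- fields are tab-separated:
    COPY pg_catalog.pg_largeobject_metadata (oid, lomowner, lomacl) FROM stdin;
    16385	16390	{alice=rw/alice,bob=r/alice}
    \.

The first form pays per-LO round trips and catalog work; the second is
one bulk load regardless of the number of LOs, which is where the time
savings would come from.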