On Mon, Mar 8, 2021 at 5:33 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
>
> Robins Tharakan <thara...@gmail.com> writes:
> > On Mon, 8 Mar 2021 at 23:34, Magnus Hagander <mag...@hagander.net> wrote:
> >> Without looking, I would guess it's the schema reload using
> >> pg_dump/pg_restore and not actually pg_upgrade itself. This is a known
> >> issue in pg_dump/pg_restore. And if that is the case -- perhaps just
> >> running all of those in a single transaction would be a better choice?
> >> One could argue it's still not a proper fix, because we'd still have a
> >> huge memory usage etc, but it would then only burn 1 xid instead of
> >> 500M...
>
> > (I hope I am not missing something but) When I tried to force pg_restore to
> > use a single transaction (by hacking pg_upgrade's pg_restore call to use
> > --single-transaction), it too failed owing to being unable to lock so many
> > objects in a single transaction.
>
> It does seem that --single-transaction is a better idea than fiddling with
> the transaction wraparound parameters, since the latter is just going to
> put off the onset of trouble.  However, we'd have to do something about
> the lock consumption.  Would it be sane to have the backend not bother to
> take any locks in binary-upgrade mode?
I believe the problem occurs when writing them rather than when reading
them, and I don't think we have a binary upgrade mode there. We could
invent one of course.

Another option might be to exclusively lock pg_largeobject, and just say
that if you do that, we don't have to lock the individual objects (ever)?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/
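[For readers following along: a back-of-the-envelope sketch of the two resource limits discussed in this thread. The 500M object count is the figure quoted above; the GUC values are assumed stock defaults, not measurements from the reporter's system, and "one transaction per restored object" is the scenario being discussed, not verified here.]

```python
# Rough arithmetic for the two limits discussed in the thread.
# The 500M object count comes from the thread; the GUC values below are
# assumed PostgreSQL defaults, not values from the reporter's system.

LARGE_OBJECTS = 500_000_000          # blobs to restore (figure from thread)

# XID consumption: the scenario discussed above is roughly one committed
# transaction per restored object without --single-transaction, versus
# one transaction for the whole restore with it.
XID_SPACE = 2**31                    # 32-bit XIDs; wraparound at ~2^31
xids_without_flag = LARGE_OBJECTS
xids_with_flag = 1
print(f"XIDs burned without --single-transaction: {xids_without_flag:,} "
      f"(~{100 * xids_without_flag / XID_SPACE:.0f}% of the XID space)")
print(f"XIDs burned with --single-transaction: {xids_with_flag}")

# Lock consumption: a single transaction taking one lock per object must
# fit in the shared lock table, which holds roughly
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# entries in total across all backends.
max_locks_per_transaction = 64       # assumed default
max_connections = 100                # assumed default
max_prepared_transactions = 0        # assumed default
lock_table_slots = max_locks_per_transaction * (
    max_connections + max_prepared_transactions)
print(f"Approx. shared lock table slots: {lock_table_slots:,}")
print(f"Locks needed at one lock per object: {LARGE_OBJECTS:,}")
# ~6,400 slots vs. 500M locks: hence the failure Robins hit when forcing
# --single-transaction, and why the lock consumption needs addressing.
```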