It occurs to me that, because this restoration process is fundamentally
noninteractive, there is no longer any reason that we cannot support
backing up large objects in plain psql dumps. The dump script for
each LO would look something like:
    begin;
    select lo_create(original-OID-of-large-object);
    select lo_open(original-OID, 131072);  -- 131072 = INV_WRITE
    -- we can assume the lo_open will return FD 0, since it is the
    -- only open LO file descriptor in this transaction
    select lowrite(0, 'suitably quoted bytea data');
    select lowrite(0, 'more quoted bytea data');
    -- repeat lowrite in convenient chunks until all the data is written
    commit;
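To make that concrete, here is a minimal sketch of one complete dump
entry, assuming a hypothetical large object with OID 16385 and
octal-escaped bytea literals; the OID, the data chunks, and the
explicit lo_close call are all illustrative, not part of the proposal:

    begin;
    select lo_create(16385);           -- recreate LO with its original OID
    select lo_open(16385, 131072);     -- INV_WRITE; assumed to return FD 0
    select lowrite(0, '\\336\\255\\276\\357');  -- first chunk, escaped bytea
    select lowrite(0, '\\001\\002\\003\\004');  -- next chunk; repeat as needed
    select lo_close(0);                -- commit would also close the FD
    commit;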
This is incredibly attractive, because it immediately fixes the
long-standing gotcha that pg_dumpall doesn't dump large objects.
I'm strongly in favor of this. Please don't forget to dump COMMENT ON
LARGE OBJECT 131072 IS 'blah'; as well, though, to make LO support
complete. (Currently, LOB comments are never dumped and restored.)
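For instance, the dump could emit one more line per object right after
the data is written (the OID and comment text here are made up for
illustration):

    comment on large object 16385 is 'scanned invoice, March 2005';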
A minor objection to the above is that you couldn't usefully wrap
begin/commit around the whole of a pg_dump script as you can now.
But I think that's a small loss in comparison to the gain. It'd
be possible to avoid that problem by reducing the above to one
SQL command:
    select lowrite(lo_open(lo_create(original-OID), 131072), 'data');
but I don't think that scales real well to LOs containing many megabytes
of data.
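Spelled out with the same hypothetical OID as above, the collapsed form
would look like this (illustrative only):

    select lowrite(lo_open(lo_create(16385), 131072),
                   '...the entire object as one bytea literal...');

Since the whole object has to appear as a single literal in a single
statement, a 50 MB large object would mean a 50 MB SQL command, which
is the scaling problem mentioned above.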
pg_dump often has little buggy things (e.g. nonexistent userids of
dropped users left in ACLs, pending Alvaro's shared-dependency patch)
that prevent you from wrapping begin/commit around the whole thing
anyway. Also, people who don't use LOBs (most people) can still use
begin/commit.
So, I'm still strongly in favour of this.
Chris