On 31 October 2014 14:49, Andres Freund <and...@2ndquadrant.com> wrote:
> On 2014-10-31 10:02:28 -0400, Tom Lane wrote:
>> Andres Freund <and...@2ndquadrant.com> writes:
>> > On 2014-10-31 09:48:52 -0400, Tom Lane wrote:
>> >> But more to the point, this seems like optimizing pg_dump startup by
>> >> adding overhead everywhere else, which doesn't really sound like a
>> >> great tradeoff to me.
>>
>> > Well, it'd finally make pg_dump "correct" under concurrent DDL. That's
>> > quite a worthwhile thing.
>>
>> I lack adequate caffeine at the moment, so explain to me how this adds
>> any guarantees whatsoever?  It sounded like only a performance
>> optimization from here.
>
> A performance optimization might be what Simon intended, but it isn't
> primarily what I (and presumably Robert) thought it would be useful for.
>
> Consider the example in
> http://archives.postgresql.org/message-id/20130507141526.GA6117%40awork2.anarazel.de
>
> If pg_dump were to take the 'ddl lock' *before* acquiring the snapshot
> to lock all tables, that scenario couldn't happen anymore. As soon as
> pg_dump has acquired the actual locks the ddl lock could be released
> again.
>
> Taking the ddl lock from SQL would probably require some 'backup' or
> superuser permission, but luckily there seems to be movement around
> that.

Good idea. But it is a different idea. I can do that as well...
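For readers following along, the ordering Andres describes could be sketched roughly like this. Note that pg_ddl_lock()/pg_ddl_unlock() are purely hypothetical placeholder names for the proposed primitive; no such SQL-level facility exists today, and the real design (including the required "backup"/superuser permission check) is still open:

```sql
-- Hypothetical sketch of the proposed pg_dump startup sequence.
SELECT pg_ddl_lock();       -- hypothetical: block concurrent DDL cluster-wide

BEGIN ISOLATION LEVEL REPEATABLE READ;
-- The first statement in this transaction fixes the snapshot, so the
-- ddl lock above is already held when the snapshot is taken.
LOCK TABLE public.t1, public.t2 IN ACCESS SHARE MODE;  -- lock every table to dump

SELECT pg_ddl_unlock();     -- hypothetical: heavyweight table locks now
                            -- protect the dump, so DDL may resume
-- ... dump schema and data under the fixed snapshot ...
COMMIT;
```

The key point is the ordering: the ddl lock is acquired *before* the snapshot, and released as soon as the per-table locks are in place, so the window during which DDL is blocked stays short.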

-- 
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

