Lucas <luca...@gmail.com> writes:
> I made a small modification in pg_dump to prevent parallel backup failures
> due to exclusive lock requests made by other tasks.
> The modification I made takes shared locks for each parallel backup worker
> at the very beginning of the job. That way, any other job that attempts to
> acquire exclusive locks will wait for the backup to finish.

I do not think this would eliminate the problem; all it's doing is making the
window for trouble a bit narrower.  Also, it implies taking out many locks
that would never be used, since no worker process will be touching all of the
tables.

I think a real solution involves teaching the backend to allow a worker
process to acquire a lock as long as its master already has the same lock.
There's already queue-jumping logic of that sort in the lock manager, but it
doesn't fire because we don't see that there's a potential deadlock.

What needs to be worked out, mostly, is how we can do that without creating
security hazards (since the backend would have to accept a command enabling
this behavior from the client).  Maybe it's good enough to insist that leader
and follower be the same user ID, or maybe not.  There are some related
problems in parallel query, which AFAIK we just have an ugly kluge solution
for at the moment.  It'd be better if there were a clear model of when to
allow a parallel worker to get a lock out-of-turn.

			regards, tom lane
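For reference, the failure mode under discussion can be reproduced by hand
along the lines below.  The table and session labels are made up for
illustration; a parallel pg_dump does the equivalent internally, and its
workers request the lock with NOWAIT, so they error out instead of blocking:

    -- setup
    CREATE TABLE some_table (id integer);

    -- Session A (the pg_dump leader), at the start of the dump:
    BEGIN;
    LOCK TABLE some_table IN ACCESS SHARE MODE;        -- granted

    -- Session B (some other job), while the dump is still running:
    ALTER TABLE some_table ADD COLUMN extra integer;   -- wants ACCESS EXCLUSIVE,
                                                        -- queues behind session A

    -- Session C (a pg_dump worker), when it gets around to some_table:
    BEGIN;
    LOCK TABLE some_table IN ACCESS SHARE MODE NOWAIT;
    -- fails: the request conflicts with B's queued ACCESS EXCLUSIVE request,
    -- and without NOWAIT the worker would simply block behind B, which is in
    -- turn waiting on the leader's lock.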
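The existing queue-jumping logic can be seen the same way.  It fires only
when the requesting backend itself already holds a lock that conflicts with a
queued request, which is why it does nothing for a fresh worker process
(again, the object names are made up for illustration):

    -- Session A:
    BEGIN;
    LOCK TABLE some_table IN ACCESS SHARE MODE;         -- granted

    -- Session B:
    ALTER TABLE some_table ADD COLUMN extra integer;    -- queues for ACCESS EXCLUSIVE

    -- Session A again:
    LOCK TABLE some_table IN ROW EXCLUSIVE MODE;
    -- granted immediately: B's queued request conflicts with the ACCESS SHARE
    -- lock A already holds, so queueing A behind B would be a deadlock, and A
    -- is allowed to jump the queue.  A parallel dump worker holds nothing on
    -- the table yet, so no potential deadlock is seen and it has to wait (or
    -- fail, with NOWAIT).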