2014-03-17 12:52 GMT+01:00 Jürgen Strobel <juergen...@strobel.info>:

>
> Hi,
>
> At work I inherited responsibility for a large PG 8.1 DB with an
> extreme number of tables (~300000). Surprisingly this is
> working quite well, except for maintenance and backup. I am tasked with
> finding a way to do dump & restore to 9.3 with as little downtime as
> possible.
>
> Using 9.3's pg_dump with -j12 I found out that pg_dump takes 6 hours to
> lock the tables using a single thread, then does the data dump in 1 more
> hour using 12 workers. However, if I patch out the explicit LOCK TABLE
> statements, the whole dump takes only 1 hour total. Of course no one else
> is using the DB at this time. In a pathological test-case scenario in a
> staging environment the dump time decreased from 5 hours to 5 minutes.
>
> I've googled the problem and there seem to be more people with similar
> problems, so I made this a command-line option, --no-table-locks, and
> wrapped it up in as nice a patch against github/master as I could manage
> (I haven't used C in a long time). I hope you find it useful.
>

Joe Conway sent me a tip: commit eeb6f37d89fc60c6449ca12ef9e91491069369cb
significantly decreases the time necessary for locking, so it may help
here too.
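For a rough sense of scale, the numbers reported above can be sanity-checked with simple arithmetic (a sketch only, not a measurement; the figures are taken from the original report: ~300000 tables and a ~6-hour single-threaded lock phase):

```python
# Back-of-envelope estimate of the average cost of each LOCK TABLE
# round trip during pg_dump's single-threaded lock phase.
# Values assumed from the report above: ~300000 tables, ~6 hours locking.
tables = 300_000
lock_phase_s = 6 * 3600                      # 6 hours in seconds
per_lock_ms = lock_phase_s / tables * 1000   # average per-table cost
print(f"~{per_lock_ms:.0f} ms per LOCK TABLE statement")  # ~72 ms
```

At tens of milliseconds per statement, the lock phase dominates the whole dump, which is consistent with the observation that removing the explicit LOCK TABLE statements (or speeding up lock acquisition) collapses the total runtime to roughly the data-copy time.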

I am not sure that omitting the locks entirely is fully correct. In a
similar situation I thought about some form of database-level lock, so
you could get protected access with a single statement.

Regards

Pavel Stehule


>
> regards,
> Jürgen Strobel
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>
