On Thu, Dec 7, 2017 at 2:31 PM, Laurenz Albe <laurenz.a...@cybertec.at> wrote:
> Gunther wrote:
>> Something is wrong with the dump thing. And no, it's not SSL or whatever,
>> I am doing it on a local system with local connections. Version 9.5 
>> something.
>
> That's a lot of useful information.
>
> Try to profile where the time is spent, using "perf" or similar.
>
> Do you connect via the network, TCP localhost or UNIX sockets?
> The last option should be the fastest.

You can use SSL over a local TCP connection; the question is whether that's actually the case here.

In my experience, SSL itself isn't a problem, but SSL compression *is*. With a
modern-enough OpenSSL, enabling compression is hard: it is forcibly
disabled by default because of the vulnerabilities discovered in
recent years related to its use.

So chances are that, no matter what you configured, SSL compression isn't being used.
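
For what it's worth, on 9.5 you can verify this directly from a
session via the pg_stat_ssl view (a quick sanity check; run it over
the same kind of connection pg_dump would use):

-- ssl says whether the connection is encrypted at all,
-- compression whether SSL compression is actually in effect
SELECT ssl, compression
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();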

I never measured it against earlier versions, but pg_dump is
indeed quite slow, and the biggest offender is formatting the COPY
data to be transmitted over the wire. That's why parallel dump is so
useful: you can use all your cores and get close to linear speedup
across them.

Compression of the archive is also a big overhead. If you want
compression but want to keep the overhead to a minimum, set the
lowest compression level (1).

Something like:

pg_dump -Fd -j 8 -Z 1 -f target_dir yourdb
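
(-Fd selects the directory archive format, which is what -j requires;
-j 8 runs eight dump jobs in parallel; -Z 1 uses the fastest zlib
compression level.)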
