When investigating the zlib lead I looked at the 8.4 and 9.5 installations.
9.5 includes zlib.h (/Library/PostgreSQL//9.5/include/zlib.h), but 8.4
doesn't. That's only a header file, though, and I have no idea how that
really works or whether it's the one actually used by postgres 9.5. The
version in that header says 1.2.8, and that's also what Instruments shows
when I monitor pg_dump while it runs.

Guess I'll have to install Instruments in a dev environment and do a pg_dump
with 8.4 to see the difference. Tedious. =/
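
(One way to check which zlib pg_dump actually links against, without
Instruments, is otool -L. The paths below assume the standard
/Library/PostgreSQL layout and may need adjusting:)

  otool -L /Library/PostgreSQL/9.5/bin/pg_dump | grep -i libz
  otool -L /Library/PostgreSQL/8.4/bin/pg_dump | grep -i libz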

--
Henrik Cednert
cto | compositor

Filmlance International
mobile [ + 46 (0)704 71 89 54 ]
skype  [ cednert ]

On 22 Nov 2017, at 13:17, Henrik Cednert (Filmlance)
<henrik.cedn...@filmlance.se> wrote:


Hello

I've run it with all the different compression levels on one of the smaller
DBs now. Not passing any flag at all is, as I've seen hinted on some page on
the internet, the same as level 6.

I do, somewhat, share the opinion that something is up with zlib. But at the
same time I haven't touched it since the 8.4 installation, so it's a mystery
how it could've failed on its own. The only thing performed was an upgrade
from 8.4 to 9.5. But yes, I can't really say exactly what that upgrade
touched and what it didn't. Will investigate further.
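
(For anyone who wants to reproduce this, a loop roughly like the one below
gives the same kind of numbers. The database name and output path are
placeholders, and a custom-format dump (-Fc) is assumed; adjust the flags to
match the real invocation.)

  # Time a dump at each zlib level 0-9 and record the resulting file size.
  for LEVEL in 0 1 2 3 4 5 6 7 8 9; do
      echo "COMPRESSION LEVEL: $LEVEL"
      time pg_dump -Fc -Z $LEVEL -f /tmp/dump_z$LEVEL.dump mydb
      ls -l /tmp/dump_z$LEVEL.dump
  done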


COMPRESSION LEVEL   FILE SIZE (bytes)   real          user          sys
0                   6205982696          0m38.218s     0m3.558s      0m17.309s
1                   1391475419          4m3.725s      3m54.132s     0m5.565s
2                   1344563403          4m18.574s     4m9.466s      0m5.417s
3                   1267601394          5m23.373s     5m14.339s     0m5.462s
4                   1241632684          6m19.501s     6m10.148s     0m5.655s
5                   1178377949          9m18.449s     9m9.733s      0m5.169s
6                   1137727582          13m28.424s    13m19.842s    0m5.036s
7                   1126257786          16m39.392s    16m30.094s    0m5.724s
8                   1111804793          30m37.135s    30m26.785s    0m6.660s
9                   1112194596          33m40.325s    33m27.122s    0m6.498s
default             1140261276          13m18.178s    13m9.417s     0m5.242s

(default = no compression flag passed to 'pg_dump')


--
Henrik Cednert
cto | compositor

Filmlance International
mobile [ + 46 (0)704 71 89 54 ]
skype  [ cednert ]

On 22 Nov 2017, at 11:32, Matthew Hall <mh...@mhcomputing.net> wrote:


On Nov 21, 2017, at 10:18 PM, Henrik Cednert (Filmlance)
<henrik.cedn...@filmlance.se> wrote:

What's the normal way to deal with compression? Dump uncompressed and use
something that threads better to compress the dump?

I would say most likely your zlib is screwed up somehow, like maybe it didn't 
get optimized right by the C compiler or something else sucks w/ the 
compression settings. The CPU should easily blast away at that faster than 
disks can read.

I did do some studies of this previously some years ago, and I found gzip -6 
offered the best ratio between size reduction and CPU time out of a very wide 
range of formats, but at the time xz was also not yet available.

If I were you I would first pipe the uncompressed output through a separate
compression command. Then you can experiment with the flags and threads, and
as an automatic bonus you get another separate process that the kernel can
put on other CPUs, so you get multi-core with minimal work.
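
(For instance, something like this; the database and file names are only
examples:)

  pg_dump mydb | gzip -6 > mydb.sql.gz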

After that, xz is pretty much standard now and has xz -T for cranking up
some threads with little extra effort for the user. But it can be kind of
slow, so you'll probably need to lower the compression level somewhat,
depending on some timing tests. I would try it on some medium-sized DB
table, a bit over the size of system RAM, instead of dumping this great big
DB, so you can benchmark a couple of times until it looks happy.
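
(For instance; the table and database names are placeholders, and -T0 lets
xz pick one thread per core:)

  pg_dump -t some_medium_table mydb | xz -T0 -3 > some_medium_table.sql.xz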

Matthew

