On Apr 13 2011, at 16:22 , Xueming Shen wrote:

> On 04-13-2011 3:00 PM, Mike Duigou wrote:
>> Mike, can you share the results of performance testing at various
>> compression levels? Is there much difference between the levels or an
>> apparent "sweet spot"?
>>
>> For low hanging fruit for jdk 7 it might be worth considering raising the
>> default compression level from 5 to 6 (the zlib default). Raising the level
>> from 5 to 6 entails (by today's
>
> Hi Mike,
>
> zlib1.2.3/zlib.h states the default is level 6. I've not checked or run any
> test to verify if its implementation matches its docs, any reason you think
> zlib actually is using level 5 as default? I can take look into it further...
I was basing this on Mike Skells' earlier posting:

>>>> 1. Allowing the Jar utility to have other compression levels (currently
>>>> it allows default (5) only)

If it's using Z_DEFAULT_COMPRESSION then it's actually using 6 as you suggest.
Sorry for the confusion.

Mike

> ...
> ZEXTERN int ZEXPORT deflateInit OF((z_streamp strm, int level));
>
>      Initializes the internal stream state for compression.  The fields
>    zalloc, zfree and opaque must be initialized before by the caller.
>    If zalloc and zfree are set to Z_NULL, deflateInit updates them to
>    use default allocation functions.
>
>      The compression level must be Z_DEFAULT_COMPRESSION, or between 0 and 9:
>    1 gives best speed, 9 gives best compression, 0 gives no compression at
>    all (the input data is simply copied a block at a time).
>    Z_DEFAULT_COMPRESSION requests a default compromise between speed and
>    compression (currently equivalent to level 6).
> ...
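[Editor's note: the following sketch is not part of the original thread. It is a minimal check of the point above, using only the public java.util.zip.Deflater API: it compresses the same buffer at Deflater.DEFAULT_COMPRESSION (-1) and at each explicit level 0-9. If zlib's documented default holds, the DEFAULT_COMPRESSION result should match level 6 exactly. The test data below is arbitrary, chosen only so the levels produce visibly different sizes.]

import java.util.Random;
import java.util.zip.Deflater;

// Sketch: compare compressed sizes across Deflater levels, including the
// library default (DEFAULT_COMPRESSION == -1, which defers to zlib).
public class DeflateLevels {
    public static void main(String[] args) {
        byte[] input = new byte[1 << 20];          // 1 MiB of test data
        new Random(42).nextBytes(input);
        // Zero every fourth byte so the data is partly compressible
        // and the levels actually diverge.
        for (int i = 0; i < input.length; i += 4) {
            input[i] = 0;
        }

        byte[] output = new byte[input.length * 2];
        int[] levels = { Deflater.DEFAULT_COMPRESSION,
                         0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

        for (int level : levels) {
            Deflater def = new Deflater(level);
            def.setInput(input);
            def.finish();
            int compressed = 0;
            while (!def.finished()) {
                // Only the byte count matters here, so the buffer is reused.
                compressed += def.deflate(output);
            }
            def.end();
            System.out.printf("level %2d -> %d bytes%n", level, compressed);
        }
    }
}

The jar tool writes its entries through java.util.zip.ZipOutputStream (via JarOutputStream), whose setLevel(int) method accepts the same constants, so the same comparison carries over to jar output directly.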