On 29.05.2014, at 23:19, Glenn Maynard <gl...@zewt.org> wrote:
> 
> Anyway, this has derailed the thread.  We have an API for compression
> already.  It already supports a compression level argument for JPEG.
> Having an equivalent argument for PNG is a no-brainer.  The only
> difference to JPEG is that it should be described as the "compression
> level" rather than "quality level", since with PNG it has no effect on
> quality, only the file size and time it takes to compress.

I don't think it's a no-brainer. There are several ways it could be interpreted:


1. As zlib's compression level

However, this has marginal utility, because these days even the maximum level 
is reasonably fast, even on mobile devices. A lower level would be useful only 
for very large images on very slow devices, but UAs can apply a good heuristic 
to ensure reasonable compression time without any input from the page's 
author.

I expect the exponential increase in computing power to make this setting 
completely irrelevant by the time it's implemented in most browsers.
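
To put interpretation 1 in concrete terms, here's a rough Python sketch — 
Python's zlib standing in for whatever deflate library a UA would use, with a 
purely synthetic payload — that times a few levels against each other:

    import time
    import zlib

    # Synthetic stand-in for the filtered scanline data a PNG encoder feeds
    # to zlib; real image data will behave differently.
    data = bytes(range(256)) * 20_000  # ~5 MB

    for level in (1, 6, 9):
        start = time.perf_counter()
        out = zlib.compress(data, level)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"level {level}: {len(out)} bytes in {elapsed:.1f} ms")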


2. Enable a brute-force search for the best combination of zlib's compression 
level, memory level and window size 

OptiPNG and pngcrush show that zlib's "maximum" settings don't always give the 
smallest file, and that the best compression is obtained by trying hundreds of 
combinations of zlib parameters.

If browsers choose this approach for a high "compression level", it will be a 
couple of *orders of magnitude* slower than the first option. If different 
vendors can't even agree on the order of magnitude of time it takes to compress 
an image, such a parameter could be unusable.
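
Roughly, a sketch of this interpretation (again with Python's zlib standing in 
for the encoder's deflate library; the function name is mine, and real 
optimizers also vary the PNG filter strategy per scanline, which I've left out):

    import itertools
    import zlib

    def smallest_deflate(data: bytes) -> bytes:
        # Try every combination of level, memory level, window size and
        # strategy, and keep whichever output is smallest -- the
        # OptiPNG/pngcrush idea, minus the filter-strategy search.
        best = None
        strategies = (zlib.Z_DEFAULT_STRATEGY, zlib.Z_FILTERED, zlib.Z_RLE)
        for level, mem, wbits, strat in itertools.product(
                range(1, 10), range(1, 10), range(9, 16), strategies):
            c = zlib.compressobj(level, zlib.DEFLATED, wbits, mem, strat)
            candidate = c.compress(data) + c.flush()
            if best is None or len(candidate) < len(best):
                best = candidate
        return best

That's roughly 1,700 full compressions of the same data, which is where the 
orders-of-magnitude slowdown comes from.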


3. Compression parameters in other DEFLATE implementations

For example, the Zopfli compressor produces files smaller than zlib, but is 
much, much slower. Instead of a 1-9 scale it takes a "number of iterations" as 
the compression level.

And there's even a totally different approach to the compression level: I've 
modified Zopfli[1] to make it aim for constant processing time on any machine. 
Faster machines simply produce smaller files. Browsers could use this approach 
to ensure every PNG is compressed in < 0.5s or so, or the compression level 
parameter could be the number of seconds to spend on the compression.
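
The constant-time-budget idea looks roughly like this (a sketch only; zlib's 
1-9 levels stand in for Zopfli's unbounded iteration count, and the helper name 
is made up):

    import time
    import zlib

    def compress_within_budget(data: bytes, seconds: float) -> bytes:
        # Keep raising the effort setting, and keep the smallest result
        # produced before the deadline. A faster machine gets further down
        # the list, so it produces a smaller file in the same time.
        deadline = time.perf_counter() + seconds
        best = zlib.compress(data, 1)  # always produce *something*
        for level in range(2, 10):
            if time.perf_counter() >= deadline:
                break
            candidate = zlib.compress(data, level)
            if len(candidate) < len(best):
                best = candidate
        return best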


And that's just for lossless PNG. It's possible to encode standard PNG in a 
*lossy* fashion (http://pngmini.com/lossypng.html), and there are a few ways to 
do it:

Images can be converted to PNG-8 (vector quantization is a form of lossy 
compression), and then the compression level could be interpreted as the number 
of unique colors or as the mean square error of the quantized image (the latter 
is what http://pngquant.org uses). This generally makes files 3-4 times 
smaller, but there is a limit on the maximum quality that can be achieved.
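
As a rough illustration of mapping the parameter to a palette size (using 
Pillow's median-cut quantizer, which is not pngquant's algorithm, and a 
level-to-colors mapping I've made up):

    from PIL import Image  # Pillow; its median-cut quantizer != pngquant

    def lossy_png8(src_path: str, dst_path: str, level: float) -> None:
        # Map a 0.0-1.0 "compression level" to a palette size: 1.0 keeps
        # 256 colors, lower values keep fewer. pngquant instead targets a
        # mean-square-error bound, which this sketch doesn't attempt.
        # (Alpha handling omitted for brevity.)
        colors = max(2, int(round(256 * level)))
        img = Image.open(src_path).convert("RGB")
        img.quantize(colors=colors).save(dst_path, optimize=True)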

For higher quality it's possible to make truecolor PNGs lossy by taking 
advantage of the fact that PNG filters are predictors. Instead of writing all 
pixels as they are in the input image, the encoder can replace some pixels with 
the values the filters predict. This simplifies the data and generally halves 
the file size (and costs almost no extra CPU time). The threshold used to 
choose between the source and the predicted value for each pixel acts much like 
JPEG's quality level.
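
The core of that trick, sketched on 8-bit grayscale with only the Average 
filter (the real thing works on full truecolor/RGBA and across all filter 
types; the function name is made up):

    def lossy_average_filter(rows: list[list[int]],
                             threshold: int) -> list[list[int]]:
        # Where a pixel is within `threshold` of the Average filter's
        # prediction, (left + up) // 2, store the prediction itself, so the
        # filtered residual is 0 and the row compresses far better.
        # The predictor reads already-modified neighbours, exactly as a
        # decoder would reconstruct them.
        out = [row[:] for row in rows]
        for y, row in enumerate(out):
            for x in range(len(row)):
                left = row[x - 1] if x > 0 else 0
                up = out[y - 1][x] if y > 0 else 0
                predicted = (left + up) // 2
                if abs(row[x] - predicted) <= threshold:
                    row[x] = predicted
        return out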

So there are multiple ways such a parameter could be interpreted, and they 
result in wildly different visual quality, file size and time taken to compress 
the image.

-- 
regards, Kornel

[1] https://github.com/pornel/zopfli
