Sven Neumann wrote:

> As already explained in my previous mail, the decimation routines are
> only used for the pre-scaling steps. As soon as the image is close
> enough to the final size, the chosen interpolation routine is used. This
> gives continuous results for all scale factors as there is no longer any
> special casing for scaling down by 50%.

What I don't understand is why there's a need to interpolate at all in 
the case of scaling an image down.  When scaling up, interpolation is 
used to estimate missing information, but when scaling down there is no 
missing information to be estimated - the problem is instead finding the 
best strategy for *discarding* information.

What I do in PhotoPrint is just use a simple sub-pixel-capable box 
filter - which is what your current approach 
(scale-by-nearest-power-of-two, then interpolate) is approximating.

The routine looks like this:

        // We accumulate pixel values from a potentially
        // large number of pixels and process all the samples
        // in a pixel at one time.
        double tmp[IS_MAX_SAMPLESPERPIXEL];
        for(int i=0;i<samplesperpixel;++i)
                tmp[i]=0.0;

        ISDataType *srcdata=source->GetRow(row);

        // We use a Bresenham-esque method of calculating the
        // pixel boundaries for scaling - add the smaller value
        // to an accumulator until it exceeds the larger value,
        // then subtract the larger value, leaving the remainder
        // in place for the next round.
        int a=0;
        int src=0;
        int dst=0;
        while(dst<width)
        {
                // Add the smaller value (destination width)
                a+=width;

                // As long as the counter is less than the larger value
                // (source width), we take full pixels.
                while(a<source->width)
                {
                        for(int i=0;i<samplesperpixel;++i)
                                tmp[i]+=srcdata[samplesperpixel*src+i];
                        ++src;
                        a+=width;
                }

                double p=(source->width-(a-width))/double(width);
                // p now contains the proportion of the next pixel
                // to be counted towards the output pixel.

                a-=source->width;
                // And a now contains the remainder,
                // ready for the next round.

                // So we add p * the new source pixel
                // to the current output pixel...
                for(int i=0;i<samplesperpixel;++i)
                        tmp[i]+=p*srcdata[samplesperpixel*src+i];

                // Store it, normalised by the scale ratio...
                for(int i=0;i<samplesperpixel;++i)
                        rowbuffer[samplesperpixel*dst+i] =
                                ISDataType(0.5+(tmp[i]*width)/source->width);

                // And start off the next output pixel with
                // (1-p) * the source pixel.
                for(int i=0;i<samplesperpixel;++i)
                        tmp[i]=(1.0-p)*srcdata[samplesperpixel*src+i];
                ++src;
                ++dst;
        }

> The main problem with the code in trunk is though that I think that the
> results of the new code are too blurry. Please have a look at the tests
> that I published at http://svenfoo.org/scalepatch/. And please try the
> patch and do your own tests.

The slight blurriness comes, I think, from performing the scaling in two 
distinct stages.  Just for kicks, since I had a rare spare hour to play 
with such things, here are versions of the 3% and 23% tests from your
page, for comparison, scaled using the downsample filter whose core is 
posted above:


Hope this is of some help.

All the best,
Alastair M. Robinson

Gimp-developer mailing list
