Hello Sven:

> ... To see this, create a new image, apply a standard grid on it
> using Filter->Render->Patterns->Grid and scale it down in one direction
> by a scale factor smaller than 0.5. The one pixel wide grid lines will
> become blurry. I don't think this is acceptable and so far the only
> choice we have is to apply either gimp-decimate.diff or
> gimp-decimate-2.diff...

A cartoon of what I understand about downsampling, which may be useful when 
people debate the various merits of downsampling methods:

There are two extremes in what most people expect from downsampling. What they 
expect basically depends on what the downsampling is applied to:

--- Old school CG type images (e.g., Super Mario or the old Wilbur, the Gimp 
mascot). Then, nearest neighbour (and analogous methods) will, in most 
situations, do better than box filtering (and most LINEAR interpolatory 
methods). The reason is that the picture is made up of flat colour areas with 
sharp boundaries, and anything (linear) which deviates a lot from nearest 
neighbour will not preserve the property of being made up of flat colour areas 
separated by sharp lines. For most people, blur, in this context, is more 
annoying than aliasing.

--- Digital photographs, in which the image is usually made up of smooth colour 
areas with blurry boundaries, and in addition, there is noise and demosaicing 
artifacts. Then, in general, nearest neighbour is not acceptable, because it 
amplifies the noise (which is not present in CG images) and aliasing is more 
visually jarring than blur. In this situation, 
box filtering (especially its exact area variant) and analogous methods will, 
in most situations, do better than nearest neighbour.
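To make the two behaviours concrete, here is a toy 1D sketch (plain Python, not GIMP code) of what happens to a 1-pixel-wide grid line, like the one in Sven's example, when shrunk by a factor of two:

```python
# A 1-pixel-wide "grid line" (value 1) on a flat background (value 0),
# shrunk by a factor of two (toy sketch, not GIMP code).
row = [0, 0, 1, 0, 0, 0, 1, 0]

# Decimation (nearest neighbour): depending on which phase the sampling
# grid lands on, the lines are either kept at full strength...
print(row[0::2])  # [0, 1, 0, 1]
# ...or dropped entirely (aliasing):
print(row[1::2])  # [0, 0, 0, 0]

# Box filtering: the lines always survive, but at half strength (blur):
print([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
# [0.0, 0.5, 0.0, 0.5]
```

This is the trade-off in miniature: decimation keeps lines crisp when the phase cooperates and loses them when it does not; box filtering never loses them, at the price of making them blurry.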


Linear methods cannot make both groups of people happy.

Making most people happy will require TWO (linear) downsampling methods. 

Alternatively, it will require having a parameter (called blur?) which, when 
equal to 0, gives a method which is close to nearest neighbour, and when equal 
to 1, gives a method which is close to box filtering. 

I can help with this.
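A minimal sketch of what such a blur parameter could look like, for a factor-of-two shrink in 1D (the function name and structure are made up for illustration; this is not GIMP code):

```python
def downsample2(signal, blur):
    """Shrink a row of samples by a factor of two.

    blur=0 gives pure decimation (nearest neighbour), blur=1 gives pure
    box filtering, and values in between blend the two linearly.
    Hypothetical sketch, not GIMP/GEGL code.
    """
    out = []
    for i in range(0, len(signal) - 1, 2):
        nearest = signal[i]                    # decimation: keep one sample
        box = (signal[i] + signal[i + 1]) / 2  # box filter: average the pair
        out.append((1 - blur) * nearest + blur * box)
    return out

print(downsample2([1, 0, 1, 0], 0))    # [1, 1]      (crisp, aliased)
print(downsample2([1, 0, 1, 0], 1))    # [0.5, 0.5]  (blurry, antialiased)
print(downsample2([1, 0, 1, 0], 0.5))  # [0.75, 0.75]
```

Because the blend is linear, the same idea extends to blending any two linear methods, not just these two.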


Another important point which was raised is that if the image is enlarged in 
one direction and reduced in the other, one single method is unlikely to do a 
good job.

Within the GEGL approach, it may be that such situations are better handled by 
upsampling first (using a good upsample method) in the upsampling direction, 
then feeding the result to the downsampler in the downsampling direction. 

That is: Don't expect one single method/"plug-in" to do both.

In summary:

To stretch in one direction and shrink in the other, first do a one direction 
stretch, followed by a one direction shrink.
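Structurally, the two-pass approach could look like the following toy sketch (plain linear interpolation stands in for both passes here; in a real pipeline the shrink pass would use a proper downsampler, and all names are made up for illustration):

```python
def resample_row(row, new_len):
    """Linear interpolation of one row of samples to new_len samples."""
    if new_len == 1:
        return [row[0]]
    out = []
    for j in range(new_len):
        x = j * (len(row) - 1) / (new_len - 1)  # position in the old row
        i = min(int(x), len(row) - 2)
        t = x - i
        out.append((1 - t) * row[i] + t * row[i + 1])
    return out

def stretch_then_shrink(img, new_h, new_w):
    """For a resize that enlarges height and reduces width: run the
    upsampling pass (height) first, then the downsampling pass (width),
    instead of asking one sampler to do both at once."""
    # Pass 1: upsample each column to the new height.
    cols = [resample_row([img[i][j] for i in range(len(img))], new_h)
            for j in range(len(img[0]))]
    tall = [[cols[j][i] for j in range(len(cols))] for i in range(new_h)]
    # Pass 2: shrink each row to the new width (a real downsampler with
    # proper filtering belongs in this slot).
    return [resample_row(r, new_w) for r in tall]
```

The point of the sketch is only the ordering of the passes: each pass is a pure 1D resample, so each direction can use the method best suited to it.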


Also, in previous emails about this, I saw the following valid point being made:

Suppose that the following strategy is followed for downsampling. 

To make things more explicit, I'll use specific numbers.

Suppose that we want to downsample an image from dimensions 128x64 to 15x9 
(original pixel dimensions are powers of two for the sake of simplicity).

First, box filter down (by powers of two, a different number of times in each 
direction) to 16x16, then use a standard resampling method (bilinear, say) to 
downsample to 15x9.
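The power-of-two stage can be sketched as follows (plain Python, hypothetical names; the final bilinear step is omitted):

```python
def halve_rows(img):
    """Box filter down by two vertically: average adjacent row pairs."""
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(img[0::2], img[1::2])]

def halve_cols(img):
    """Box filter down by two horizontally: average adjacent column pairs."""
    return [[(r[j] + r[j + 1]) / 2 for j in range(0, len(r) - 1, 2)]
            for r in img]

def box_pyramid_level(img, target_h, target_w):
    """Halve each dimension (independently) as long as the result stays at
    least as large as the target; a standard resampler (bilinear, say)
    finishes the job from this level."""
    while len(img) // 2 >= target_h:
        img = halve_rows(img)
    while len(img[0]) // 2 >= target_w:
        img = halve_cols(img)
    return img
```

For the 128x64 to 15x9 example, this stops at 16x16, as described above.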

The point that was made was that doing things this way is not continuous, 
meaning that scaling factors which are almost the same will not give images 
which are almost the same.

For example, if one followed this strategy to downsample to 17x9 instead of 
15x9, one would first box filter down to 32x16, then apply bilinear. It should 
surprise no one that this may produce a fairly different picture.

The point I want to make about this is that it is possible to fix this 
"discontinuous" behavior, as follows. 

Produce TWO box filtered down images.

In the case of downsampling from 128x64 to 15x9, the two downsamples would be 
of dimensions 16x16 and 8x8. 

Then, downsample the 16x16 to 15x9 using, say, bilinear, and upsample the 8x8 
to 15x9 using, again, bilinear, making sure that the sampling keeps the 
alignment of the images (I know how to do this: it is not hard).

Then, blend the two images as follows:

Let Theta = ((15-8)/(16-8) + (9-8)/(16-8))/2 = (7/8 + 1/8)/2 = 1/2.

Final image = Theta * downsample + (1-Theta) * upsample.

If you think about what this does, you will realize that this satisfies the 
criterion that nearby downsampling factors give nearby images.
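The blending step can be sketched like this (plain Python; it assumes the two level images have already been resampled to the target size with matching alignment, and all names are made up for illustration):

```python
def theta_blend(down_img, up_img, target, fine, coarse):
    """down_img: the finer power-of-two level, already downsampled to the
    target size; up_img: the coarser level, already upsampled to it.
    target, fine, coarse are (width, height) pairs.

    Theta moves linearly from 0 at the coarse level's size to 1 at the
    fine level's size, averaged over the two axes, so nearby target sizes
    give nearby blends.  Hypothetical sketch, not GIMP/GEGL code."""
    (tw, th), (fw, fh), (cw, ch) = target, fine, coarse
    theta = ((tw - cw) / (fw - cw) + (th - ch) / (fh - ch)) / 2
    return [[theta * d + (1 - theta) * u for d, u in zip(dr, ur)]
            for dr, ur in zip(down_img, up_img)]
```

As the target size slides from 8x8 up to 16x16, Theta slides from 0 to 1, so the output changes continuously instead of jumping when the pyramid level switches.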

(WARNING: nearest neighbour is discontinuous, so the nearby images can actually 
be quite different. But they will be less different than with standard nearest 
neighbour.)

If someone wants to implement the above, I can help.


I hope someone finds the above useful when thinking about the downsampling 
issue.

With regards,

Nicolas Robidoux
Laurentian University/Universite Laurentienne

Gimp-developer mailing list