Alastair M. Robinson wrote:
> Hi
>
> Sven Neumann wrote:
>
>   
>> As already explained in my previous mail, the decimation routines are
>> only used for the pre-scaling steps. As soon as the image is close
>> enough to the final size, the chosen interpolation routine is used. This
>> gives continuous results for all scale factors as there is no longer any
>> special casing for scaling down by 50%.
>>     
>
> What I don't understand is why there's a need to interpolate at all in 
> the case of scaling an image down.  When scaling up, interpolation is 
> used to estimate missing information, but when scaling down there is no 
> missing information to be estimated - the problem is instead finding the 
> best strategy for *discarding* information.
>
> What I do in PhotoPrint is just use a simple sub-pixel-capable box 
> filter - which is what your current approach 
> (scale-by-nearest-power-of-two, then interpolate) is approximating.
>
> The routine looks like this:
>
>       // We accumulate pixel values from a potentially
>       // large number of pixels and process all the samples
>       // in a pixel at one time.
>       double tmp[IS_MAX_SAMPLESPERPIXEL];
>       for(int i=0;i<samplesperpixel;++i)
>               tmp[i]=0;
>
>       ISDataType *srcdata=source->GetRow(row);
>
>       // We use a Bresenham-esque method of calculating the
>       // pixel boundaries for scaling - add the smaller value
>       // to an accumulator until it exceeds the larger value,
>       // then subtract the larger value, leaving the remainder
>       // in place for the next round.
>       int a=0;
>       int src=0;
>       int dst=0;
>       while(dst<width)
>       {
>               // Add the smaller value (destination width)
>               a+=width;
>
>               // As long as the counter is less than the larger value
>               // (source width), we take full pixels.
>               while(a<source->width)
>               {
>                       if(src>=source->width)
>                               src=source->width-1;
>                       for(int i=0;i<samplesperpixel;++i)
>                               tmp[i]+=srcdata[samplesperpixel*src+i];
>                       ++src;
>                       a+=width;
>               }
>
>               double p=source->width-(a-width);
>               p/=width;
>               // p now contains the proportion of the next pixel
>               // to be counted towards the output pixel.
>
>               a-=source->width;
>               // And a now contains the remainder,
>               // ready for the next round.
>
>               // So we add p * the new source pixel
>               // to the current output pixel...
>               if(src>=source->width)
>                       src=source->width-1;
>               for(int i=0;i<samplesperpixel;++i)
>                       tmp[i]+=p*srcdata[samplesperpixel*src+i];
>
>               // Store it...
>               for(int i=0;i<samplesperpixel;++i)
>               {
>                       rowbuffer[samplesperpixel*dst+i] =
>                               0.5+(tmp[i]*width)/source->width;
>               }
>               ++dst;
>
>               // And start off the next output pixel with
>               // (1-p) * the source pixel.
>               for(int i=0;i<samplesperpixel;++i)
>                       tmp[i]=(1.0-p)*srcdata[samplesperpixel*src+i];
>               ++src;
>       }
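
To make the accumulator logic above concrete: scaling a 5-pixel row down
to 2 pixels, the first output pixel averages pixels 0, 1 and half of
pixel 2, and the second averages the other half of 2 plus pixels 3 and 4
- each box is exactly 2.5 source pixels wide. Here is a self-contained
single-channel rendering of the same idea (the function name and the
all-double I/O are my own simplification, not PhotoPrint's code):

	#include <cstddef>
	#include <vector>

	// Bresenham-style box filter for one greyscale row;
	// assumes dstw <= srcw (pure downscaling).
	std::vector<double> scale_row_box(const std::vector<double> &src,
	                                  std::size_t dstw)
	{
		const std::size_t srcw = src.size();
		std::vector<double> dst(dstw);
		double acc = 0.0;       // running sum for the current output pixel
		std::size_t a = 0, s = 0;
		for (std::size_t d = 0; d < dstw; ++d)
		{
			a += dstw;
			// Take whole source pixels while they fit in this box.
			while (a < srcw)
			{
				acc += src[s < srcw ? s : srcw - 1];
				++s;
				a += dstw;
			}
			// Fraction of the boundary pixel belonging to this box.
			double p = double(srcw - (a - dstw)) / dstw;
			a -= srcw;          // remainder, ready for the next round
			std::size_t b = s < srcw ? s : srcw - 1;
			acc += p * src[b];
			dst[d] = acc * dstw / srcw;  // divide by box width srcw/dstw
			acc = (1.0 - p) * src[b];    // leftover starts the next pixel
			++s;
		}
		return dst;
	}
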
>
>   
>> The main problem with the code in trunk is though that I think that the
>> results of the new code are too blurry. Please have a look at the tests
>> that I published at http://svenfoo.org/scalepatch/. And please try the
>> patch and do your own tests.
>>     
>
> The slight blurriness comes, I think, from performing the scaling in two 
> distinct stages.  Just for kicks, since I had a rare spare hour to play 
> with such things, here are versions of the 3% and 23% test from your 
> page, for comparison, scaled using the downsample filter whose core is 
> posted above:
>
> http://www.blackfiveservices.co.uk/3Percent.png
> http://www.blackfiveservices.co.uk/23Percent.png
>
> Hope this is some help
>
> All the best,
> --
> Alastair M. Robinson
>
When scaling down, the code is not interpolating but resampling
(supersampling in the case of Lanczos and bicubic). The different
filters - Lanczos, bicubic, box - are simply different strategies for
exactly what you describe:

> the problem is instead finding the best strategy for *discarding*
> information.
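
In code terms, here is a much-simplified sketch of what resampling one
row with a Lanczos kernel means when scaling down. This only
illustrates the idea, it is not the code in trunk; lanczos3() and the
scale-widened support are my assumptions:

	#include <cmath>
	#include <cstddef>
	#include <vector>

	// Lanczos-3 windowed sinc.
	static double lanczos3(double x)
	{
		const double PI = 3.14159265358979323846;
		x = std::fabs(x);
		if (x < 1e-12) return 1.0;
		if (x >= 3.0)  return 0.0;
		double px = PI * x;
		return 3.0 * std::sin(px) * std::sin(px / 3.0) / (px * px);
	}

	// Resample one greyscale row to dstw samples. When shrinking,
	// the kernel is stretched by the scale factor, so every output
	// pixel is a weighted average over several input pixels - the
	// weighting is the "strategy for discarding information".
	std::vector<double> scale_row_lanczos(const std::vector<double> &src,
	                                      std::size_t dstw)
	{
		const double scale = double(src.size()) / dstw; // > 1 when shrinking
		const double support = 3.0 * scale;
		std::vector<double> dst(dstw);
		for (std::size_t d = 0; d < dstw; ++d)
		{
			double centre = (d + 0.5) * scale - 0.5;
			long lo = (long)std::ceil(centre - support);
			long hi = (long)std::floor(centre + support);
			double sum = 0.0, wsum = 0.0;
			for (long j = lo; j <= hi; ++j)
			{
				double w = lanczos3((j - centre) / scale);
				long k = j < 0 ? 0 : j;   // clamp to the row edges
				if (k >= (long)src.size()) k = (long)src.size() - 1;
				sum  += w * src[k];
				wsum += w;
			}
			dst[d] = wsum != 0.0 ? sum / wsum : 0.0;
		}
		return dst;
	}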


