Re: [Gimp-developer] patch for scale-region.c
Hi,

On Thu, 2008-08-28 at 22:38 +0200, Geert Jordaens wrote:
> What are your doubts with the new code? Why would a simple box-filter
> be better for decimating?

My doubts about the current approach are manifold. The current code has decimation routines, but they are only suited for the special case of scaling down by a factor of 0.5. There are two problems with this:

(1) The code actually uses the special decimation routine not only when scaling down by 0.5 in both directions, but also when the scale factor is 0.5 in just one direction (and different in the other).

(2) If scaling down by 50%, a special algorithm is used (the aforementioned decimation routines), but a very different algorithm is used when scaling down by other scale factors (say 51%). This introduces a discontinuity and gives quite unexpected results.

The other grief I have with the current code is the way the decimation routines are implemented. Let's look at the cubic interpolation code; the lanczos code has the same issue. The code calculates four weighted pixel sums for the neighborhoods of the four source pixels that surround the center of the destination pixel, and then averages these four sums. This is quite inefficient, as exactly the same operation could be done faster and with fewer errors by calculating one larger pixel sum using an appropriate weighting matrix.

My patch addresses these issues in the following way: the decimation routines are replaced by a straightforward 2x2 box filter. It also introduces special decimation routines that decimate in the X or Y direction only. As already explained in my previous mail, the decimation routines are only used for the pre-scaling steps. As soon as the image is close enough to the final size, the chosen interpolation routine is used. This gives continuous results for all scale factors, as there is no longer any special casing for scaling down by 50%.
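For readers following along, the 2x2 box-filter decimation described above is simple enough to sketch in a few lines. This is an illustrative single-channel version only; decimate_xy and the 8-bit buffer layout are assumptions for the sketch, not code from the patch:

```c
/* Sketch of a 2x2 box-filter decimation pass for a single 8-bit
 * channel. decimate_xy is a hypothetical name, not GIMP's API; the
 * real patch handles multiple channels, strides and odd sizes. */
static void
decimate_xy (const unsigned char *src, int src_w, int src_h,
             unsigned char *dst)
{
  int dst_w = src_w / 2;
  int dst_h = src_h / 2;

  for (int y = 0; y < dst_h; y++)
    for (int x = 0; x < dst_w; x++)
      {
        /* Pointer to the top-left pixel of the 2x2 source block. */
        const unsigned char *p = src + (2 * y) * src_w + 2 * x;
        int sum = p[0] + p[1] + p[src_w] + p[src_w + 1];

        /* Rounded average of the four covered source pixels. */
        dst[y * dst_w + x] = (unsigned char) ((sum + 2) / 4);
      }
}
```

Because every destination pixel depends on exactly four source pixels with equal weight, there is no per-pixel filter-phase computation, which is what makes this pre-scaling step cheap.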
The main problem, though, is that I think the results of the new code in trunk are too blurry. Please have a look at the tests that I published at http://svenfoo.org/scalepatch/. And please try the patch and do your own tests.

I will be away over the weekend. Hopefully a few people can do some tests in that time so that we can decide how to proceed early next week.

Sven

___
Gimp-developer mailing list
Gimp-developer@lists.XCF.Berkeley.EDU
https://lists.XCF.Berkeley.EDU/mailman/listinfo/gimp-developer
Re: [Gimp-developer] patch for scale-region.c
Hi,

I also started to play with some enhancements on top of the patch we are discussing here. Using a steep Gaussian filter instead of the plain box filter seems to be a good compromise: it is better at suppressing Moiré patterns, at the cost of introducing a little blur. I have not yet finished this patch and it still has some issues, but it works for the most common cases. If someone is interested, it can be downloaded here: http://sven.gimp.org/misc/gimp-decimate-2.diff

Sven
Re: [Gimp-developer] patch for scale-region.c
Hi,

Sven Neumann wrote:
> As already explained in my previous mail, the decimation routines are
> only used for the pre-scaling steps. As soon as the image is close
> enough to the final size, the chosen interpolation routine is used.
> This gives continuous results for all scale factors as there is no
> longer any special casing for scaling down by 50%.

What I don't understand is why there's a need to interpolate at all in the case of scaling an image down. When scaling up, interpolation is used to estimate missing information, but when scaling down there is no missing information to be estimated - the problem is instead finding the best strategy for *discarding* information.

What I do in PhotoPrint is just use a simple sub-pixel-capable box filter - which is what your current approach (scale by the nearest power of two, then interpolate) is approximating. The routine looks like this:

    // We accumulate pixel values from a potentially
    // large number of pixels and process all the samples
    // in a pixel at one time.
    double tmp[IS_MAX_SAMPLESPERPIXEL];
    for(int i=0;i<samplesperpixel;++i)
      tmp[i]=0;
    ISDataType *srcdata=source->GetRow(row);

    // We use a Bresenham-esque method of calculating the
    // pixel boundaries for scaling - add the smaller value
    // to an accumulator until it exceeds the larger value,
    // then subtract the larger value, leaving the remainder
    // in place for the next round.
    int a=0;
    int src=0;
    int dst=0;
    while(dst<width)
    {
      // Add the smaller value (destination width)
      a+=width;
      // As long as the counter is less than the larger value
      // (source width), we take full pixels.
      while(a<source->width)
      {
        if(src>=source->width)
          src=source->width-1;
        for(int i=0;i<samplesperpixel;++i)
          tmp[i]+=srcdata[samplesperpixel*src+i];
        ++src;
        a+=width;
      }
      double p=source->width-(a-width);
      p/=width;
      // p now contains the proportion of the next pixel
      // to be counted towards the output pixel.
      a-=source->width;
      // And a now contains the remainder,
      // ready for the next round.

      // So we add p * the new source pixel
      // to the current output pixel...
      if(src>=source->width)
        src=source->width-1;
      for(int i=0;i<samplesperpixel;++i)
        tmp[i]+=p*srcdata[samplesperpixel*src+i];
      // Store it...
      for(int i=0;i<samplesperpixel;++i)
      {
        rowbuffer[samplesperpixel*dst+i] = 0.5+(tmp[i]*width)/source->width;
      }
      ++dst;
      // And start off the next output pixel with
      // (1-p) * the source pixel.
      for(int i=0;i<samplesperpixel;++i)
        tmp[i]=(1.0-p)*srcdata[samplesperpixel*src+i];
      ++src;
    }

> The main problem with the code in trunk is though that I think that
> the results of the new code are too blurry. Please have a look at the
> tests that I published at http://svenfoo.org/scalepatch/. And please
> try the patch and do your own tests.

The slight blurriness comes, I think, from performing the scaling in two distinct stages. Just for kicks, since I had a rare spare hour to play with such things, here are versions of the 3% and 23% tests from your page, scaled using the downsample filter whose core is posted above, for comparison:

http://www.blackfiveservices.co.uk/3Percent.png
http://www.blackfiveservices.co.uk/23Percent.png

Hope this is some help.

All the best,
--
Alastair M. Robinson
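For anyone who wants to experiment with the idea without PhotoPrint's surrounding machinery, the same sub-pixel box accumulation can be restated for a single channel in a more compact, self-contained form. This sketch is not the PhotoPrint code itself; box_resample_row and the double-precision row format are assumptions made for illustration:

```c
/* Condensed single-channel sub-pixel box downscale of one row.
 * Each destination pixel integrates the source values over its
 * footprint of src_w/dst_w source pixels, splitting the source
 * pixels that straddle a destination boundary (hypothetical helper,
 * not PhotoPrint's API). */
static void
box_resample_row (const double *src, int src_w, double *dst, int dst_w)
{
  double scale = (double) src_w / dst_w;   /* source pixels per dest pixel */

  for (int d = 0; d < dst_w; d++)
    {
      double left  = d * scale;            /* footprint in source coords */
      double right = (d + 1) * scale;
      double sum   = 0.0;

      for (int s = (int) left; s < src_w && s < right; s++)
        {
          /* Fraction of source pixel s covered by this footprint. */
          double lo = s > left ? s : left;
          double hi = (s + 1) < right ? (s + 1) : right;
          sum += src[s] * (hi - lo);
        }

      dst[d] = sum / scale;                /* normalize by footprint width */
    }
}
```

The integer accumulator in Alastair's version and the explicit coverage fractions here compute the same partial-pixel weights; the Bresenham-style bookkeeping just avoids the per-pixel floating-point boundary arithmetic.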
Re: [Gimp-developer] patch for scale-region.c
Alastair M. Robinson wrote:
> What I don't understand is why there's a need to interpolate at all
> in the case of scaling an image down. When scaling up, interpolation
> is used to estimate missing information, but when scaling down there
> is no missing information to be estimated - the problem is instead
> finding the best strategy for *discarding* information.
> [...]

The code is not interpolating but rather resampling (supersampling, in the case of lanczos and bicubic) when scaling down. The different filters - lanczos, bicubic, box - are just different ways of doing what you describe: finding the best strategy for *discarding* information.
Re: [Gimp-developer] patch for scale-region.c
Sven Neumann wrote:
> My patch addresses these issues the following way: The decimation
> routines are replaced by a straight-forward 2x2 box filter. It also
> introduces special decimation routines to decimate in X or Y
> direction only. [...] I will be away over the weekend. Hopefully a
> few people can do some tests in that time so that we can decide how
> to proceed early next week.

Thanks for the answers to my questions. I'd like to comment on them:

1. I agree that there is some optimisation to be done. I had already started a 1D version of scale-region, though I did not find the time to test it completely.

2. The whole point of doing the scale by 2 was that, when decimating, a standard (integer) precomputed filter can be applied. This is an advantage for filters like bicubic and lanczos. The final scaling step had to use the interpolation code.

As for testing the patch: no luck here, I'm renovating my house (partly) this weekend.

Geert
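The integer precomputed-filter advantage Geert mentions can be illustrated with a small sketch. Because a fixed scale factor of exactly 0.5 means the filter phase never changes, the floating-point tap weights can be converted to integer fixed point once and then applied with shifts instead of divisions. This is not code from either patch; the 8.8 format, the function name and the example weights are assumptions chosen only to show the idea:

```c
/* Hypothetical fixed-point filter application. The weights w[] are
 * assumed to be precomputed once in 8.8 fixed point (they must sum
 * to FIX_ONE), which is possible because the fractional sample
 * offset is constant when scaling by exactly 0.5. */
#define FIX_SHIFT 8
#define FIX_ONE   (1 << FIX_SHIFT)

static unsigned char
apply_fixed_filter (const unsigned char *src, const int *w, int taps)
{
  int acc = FIX_ONE / 2;            /* +0.5 in fixed point, for rounding */

  for (int i = 0; i < taps; i++)
    acc += w[i] * src[i];           /* integer multiply-accumulate */

  return (unsigned char) (acc >> FIX_SHIFT);
}
```

For arbitrary scale factors the fractional offset differs per destination pixel, so the weights would have to be recomputed (or a table of phases kept), which is why the final scaling step falls back to the interpolation code.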
Re: [Gimp-developer] patch for scale-region.c
Hi,

On Fri, 2008-08-29 at 14:15 +0200, Sven Neumann wrote:
> I also started to play with some enhancements on top of the patch we
> are discussing here. Using a steep gaussian filter instead of the
> plain box filter seems to be a good compromise. It's better at
> suppressing Moire patterns, at the cost of introducing a little blur.

In the meantime this patch has evolved and now also includes decimation routines that only scale in one direction. But currently I am still in favor of the first patch (gimp-decimate.diff).

Long-term we should try to get rid of the multi-pass scaling approach and instead implement the scaling properly. But for 2.6, I'd like to suggest that we apply gimp-decimate.diff.

Sven
[Gimp-developer] Proposal for 2.6 splash screen.
Hi,

I noticed that there was a lack of splashes for 2.6, so I made one as an option. Here it is for your appraisal: http://a.death.pri.ee/gimp-splash2.png

Best,
Alexia

PS: This is a photo manipulation, not a painting. The base image is a photo of a sand sculpture, altered with some processing and brush/paintwork.
Re: [Gimp-developer] Proposal for 2.6 splash screen.
After an idea on IRC (thanks, Martin!), the text on it got reworked into this: http://a.death.pri.ee/gimp-splash3.png That seems better.

On Friday 29 August 2008 23:26:57 Alexia Death wrote:
> Hi, I noticed that there was a lack of splashes for 2.6 so I made one
> for an option. Here it is for your appraisal:
> http://a.death.pri.ee/gimp-splash2.png
Re: [Gimp-developer] Proposal for 2.6 splash screen.
2008/8/29 Alexia Death [EMAIL PROTECTED]:
> After an idea on IRC (thanks Martin!) the text on it got reworked
> into this: http://a.death.pri.ee/gimp-splash3.png that seems better.

That looks really nice! One thing I'd suggest is making the GIMP logo a bit brighter; I don't think it stands out quite enough for a splash screen.

Henk