Re: [Gimp-developer] Re: Re: GIMP and multiple processors
On 02.03.2005, at 00:23, GSR - FR wrote:

> Yes, radial rainbow hoop gradient (linear 6 pixel to right sawtooth):
> without supersampling it paints mostly red, and with it it shows the
> muddy colour mix you would get if you render big and scale down:
> http://www.infernal-iceberg.com/gimp/tmp/gradient-supersampling-03-crop.png

There are lots of nasties one can trigger with deliberate choices, but do
they really matter in reality?

> The price is a user decision, and the default is supersampling off,
> right? If it is removed, the price you impose is not so low: render
> into a big version, then scale down and copy. Which means a fucked up
> workflow and no adaptive algorithm, so even slower computing and the
> user working a lot more.

Apart from a blend on a big image followed by a scaledown being an order
of magnitude faster than rendering on the small image with supersampling
activated, I'm actually looking for a good reason to improve the
supersampling code rather than remove it. But so far the input has not
been very convincing.

> Dunno... but should GIMP care and target a worse solution cos someone
> else is behind?

Huh? The goal is perfection, and that is only reached by *thinking* and
constantly reconsidering approaches. By simply throwing code and UI
elements at an implementation in the hope of hitting a problem, you gain
nothing but a buggy, bloated and unnecessarily complicated application.

Servus,
        Daniel
[Gimp-developer] Re: Re: GIMP and multiple processors
Hi,

[EMAIL PROTECTED] (2005-03-01 at 2059.06 -0800):
> It ought to be easy enough to detect when antialiasing will be needed
> and automagically turn it on. I haven't looked at the supersampling
> code yet, but I think it might be much faster to do the supersampling
> in a second pass, since such a small percentage of pixels actually
> need it.

That is what adaptive means: it computes extra samples in the pixels
that change too much. But instead of checking at the end, it checks at
the same time it calculates the gradient, and does not compute more
samples than needed. IIRC, exactly what POVRay does.

If you make it automatic, it is always going to be slower due to the
forced extra checks, instead of letting the user decide whether the
result is poor or not for what he wants: checking a gradient that is
ultra smooth is a waste, and a poor result that is later processed with
noise or blur is not so poor.

IOW, supersampling is nice for the small set of cases in which it
really matters; otherwise it is always going to be slower. Of course,
it is going to be faster in many cases than full sampling and scaling
down. If anybody figures out a better method than user-selectable
adaptive (best case as fast as no oversampling, worst case as slow as
adaptive), I guess the POVRay Team would like to hear about it too. :]

Or maybe GIMP could also do the background trickery reported in other
mails: do not compute the composition stack when it is not needed
(areas out of the image window, zoom not 1:1, fully opaque normal
pixels...) and many other things to make it feel fast.

GSR

___
Gimp-developer mailing list
Gimp-developer@lists.xcf.berkeley.edu
http://lists.xcf.berkeley.edu/mailman/listinfo/gimp-developer
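[Editor's note: the adaptive scheme described above -- subdivide only where
neighbouring samples differ too much, POVRay-style -- can be sketched
roughly as follows. `gradient`, `adaptive_sample`, `render_row` and the
depth/threshold values are made-up illustrations, not GIMP's actual blend
code.]

```python
def gradient(t):
    # Toy sawtooth ramp, period 1/16 -- the kind of high-frequency
    # repeat that actually needs extra samples.
    return (t * 16.0) % 1.0

def adaptive_sample(f, left, right, vl, vr, depth, threshold):
    """Average f over [left, right], subdividing only where the two
    endpoint values differ by more than the threshold."""
    if depth == 0 or abs(vl - vr) <= threshold:
        return 0.5 * (vl + vr)
    mid = 0.5 * (left + right)
    vm = f(mid)  # one extra sample, taken only because the span was rough
    return 0.5 * (adaptive_sample(f, left, mid, vl, vm, depth - 1, threshold)
                  + adaptive_sample(f, mid, right, vm, vr, depth - 1, threshold))

def render_row(f, width, depth=3, threshold=0.1):
    """One row of pixels; smooth spans cost two samples, rough spans more."""
    row = []
    for x in range(width):
        l, r = x / width, (x + 1) / width
        row.append(adaptive_sample(f, l, r, f(l), f(r), depth, threshold))
    return row
```

A smooth linear ramp never triggers subdivision, so it pays almost nothing;
the sawtooth does subdivide and gets averaged -- which is the trade-off the
mail describes.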
Re: [Gimp-developer] Re: Re: GIMP and multiple processors
On 02.03.2005, at 20:22, GSR - FR wrote:

> IOW, supersampling is nice for the small set of cases in which it
> really matters; otherwise it is always going to be slower. Of course,
> it is going to be faster in many cases than full sampling and scaling
> down. If anybody figures out a better method than user-selectable
> adaptive (best case as fast as no oversampling, worst case as slow as
> adaptive), I guess the POVRay Team would like to hear about it too. :]

It might as well be that the adaptivity is the root of the speed
problem. As is, the code is a mungo-jungo of hardcoded computation that
works differently (or at least seems to) from other region-based code.
It does not operate on tiles but on rows, does its own memory
allocation and is thus hardly parallelizable, and very likely much
slower than it needs to be. And hey, 3x adaptive supersampling when
blending a layer takes *much* longer than a manual 10x oversampling by
blending a larger image and scaling it down to the original size with
Lanczos; this is a UP machine, BTW.

My assumption here is that if the adaptive supersampling code takes
orders of magnitude longer to render than rendering without
supersampling, it could be beneficial to simply use the common code to
render depth x depth times the amount of tiles to fill, and then do
some weighting on this data to fill the final tile. Very easy, reuses
existing code, runs multithreaded, and is likely quite a bit faster
than the current code.

I would also look into the possibility of analyzing the inputs
(gradient and repeat type) to find degenerate cases and recommend the
use of supersampling to the user...

Servus,
        Daniel
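[Editor's note: the alternative proposed above -- render depth x depth
high-resolution tiles with the common per-pixel code, then weight the
result down -- is, in spirit, a box-filtered oversample per tile. A minimal
sketch; `render_tile`, `downsample_tile` and the toy `TILE` size are
hypothetical names, not GIMP's tile API.]

```python
TILE = 4  # toy tile size for illustration; GIMP's real tiles are 64x64

def render_tile(f, ox, oy, size, scale):
    """Flat-sample f on a (size*scale) x (size*scale) grid over the tile
    at origin (ox, oy) -- the ordinary code path, which already runs per
    tile and therefore parallelizes."""
    n = size * scale
    return [[f(ox + (x + 0.5) / scale, oy + (y + 0.5) / scale)
             for x in range(n)] for y in range(n)]

def downsample_tile(hi, scale):
    """The 'weighting' step: box-filter each scale x scale block of the
    high-resolution tile into one final pixel."""
    n = len(hi) // scale
    inv = 1.0 / (scale * scale)
    out = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            s = sum(hi[y * scale + dy][x * scale + dx]
                    for dy in range(scale) for dx in range(scale))
            out[y][x] = s * inv
    return out
```

Unlike the adaptive path, this does scale * scale samples everywhere, but
every sample goes through the existing tile renderer, so the work splits
across threads for free.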
[Gimp-developer] Re: Re: GIMP and multiple processors
Hi,

[EMAIL PROTECTED] (2005-03-01 at 2248.36 +0100):
> > Supersampling is to avoid aliasing, which is not caused only by
> > those discontinuities but by high frequency data (IIRC an abrupt
> > change is like infinite frequency). You can have aliasing with a
> > square wave (segments that do not match) but also with a sine wave
> > (segments that match).
>
> Right. But where in reality can this happen using a gradient blend?

Yes: a radial rainbow hoop gradient (linear 6 pixel to right sawtooth).
Without supersampling it paints mostly red, and with it it shows the
muddy colour mix you would get if you render big and scale down:
http://www.infernal-iceberg.com/gimp/tmp/gradient-supersampling-03-crop.png

> I just played around with the blend tool on a 100x100px image and
> looked very closely for any artifacts with and without supersampling.
> The result was that I couldn't produce any visible aliasing effects,
> no matter how hard I tried, other than by using a sawtooth repeat
> pattern. That seems like a *huge* price to pay for something that can
> be easily done by accident.

The price is a user decision, and the default is supersampling off,
right? If it is removed, the price you impose is not so low: render
into a big version, then scale down and copy. Which means a fucked up
workflow and no adaptive algorithm, so even slower computing and the
user working a lot more.

> What does the commercial counterpart offer here?

Dunno... but should GIMP care and target a worse solution cos someone
else is behind? What I know is that those other commercial apps
implement better things on other fronts, like dithering in some
operations to avoid histogram holes, or 16/32 bit buffers to edit high
dynamic range images, or whatever, and we are talking about always
giving crappy results cos the computer takes some seconds more when
asked to be more precise.

GSR
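[Editor's note: the "mostly red" versus "muddy mix" behaviour above is easy
to reproduce numerically. Point-sampling a sawtooth whose period nearly
matches the pixel spacing keeps hitting almost the same phase, while
averaging many subsamples within each pixel converges on the mixed value.
The period, width and subsample count below are invented for the demo, not
taken from the actual hoop gradient.]

```python
def saw(t, period):
    # Repeating ramp from 0 to 1, like a sawtooth-repeat gradient.
    return (t / period) % 1.0

period = 1.01  # sawtooth repeat just slightly wider than one pixel
width = 20

# One sample per pixel: the phase drifts very slowly, so almost every
# pixel lands near the same end of the ramp ("mostly red").
point = [saw(x, period) for x in range(width)]

# 64 subsamples per pixel, averaged: each pixel approaches the mean of
# the ramp ("muddy mix").
averaged = [sum(saw(x + (i + 0.5) / 64, period) for i in range(64)) / 64
            for x in range(width)]

mean_point = sum(point) / width      # biased toward one end of the ramp
mean_super = sum(averaged) / width   # close to 0.5, the mixed value
```

This is the same trade-off as in the screenshot: neither result is "right"
at one sample per pixel; the supersampled one is simply the honest average.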