Re: [Gimp-developer] Re: Re: GIMP and multiple processors

2005-03-02 Thread Daniel Egger
On 02.03.2005, at 00:23, GSR - FR wrote:
> Yes, the radial rainbow hoop gradient (linear 6-pixel right
> sawtooth): without supersampling it paints mostly red, and
> with it it shows the muddy colour mix you would get if you
> render big and scale down:
> http://www.infernal-iceberg.com/gimp/tmp/gradient-supersampling-03-crop.png
There are lots of nasties one can trigger with deliberate
choices, but do they really matter in practice?
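(To be clear about the mechanism, it is plain aliasing. A toy
sketch, not GIMP code: when the gradient period lines up with
the pixel step, a point sample hits the same phase on every
pixel, while averaging subsamples recovers the mix.)

#include <stdio.h>

/* Fraction of t within one sawtooth period, in [0, 1). */
static double
sawtooth (double t, double period)
{
  double phase = t / period;
  return phase - (long) phase;
}

int
main (void)
{
  const double period = 6.0;  /* gradient repeats every 6 units  */
  const double step   = 6.0;  /* worst case: one period per pixel */

  for (int x = 0; x < 4; x++)
    {
      double point = sawtooth (x * step, period);
      double sum   = 0.0;

      /* Average 8 subsamples across the pixel instead. */
      for (int s = 0; s < 8; s++)
        sum += sawtooth ((x + (s + 0.5) / 8.0) * step, period);

      printf ("pixel %d: point sample %.3f, 8x average %.3f\n",
              x, point, sum / 8.0);
    }

  return 0;
}

Every point sample lands on 0.000 (the "red" end of the ramp),
while the averaged samples settle on the 0.500 mix.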
> The price is a user decision, and the default is supersampling
> off, right? If it is removed, the price you impose is not so
> low: render into a big version, then scale down and copy. That
> means a broken workflow and no adaptive algorithm, so even
> slower computation and a lot more work for the user.
Apart from a blend on a big image followed by a scaledown being
an order of magnitude faster than rendering on the small image
with supersampling activated, I'm actually asking for a good
reason to improve the supersampling code rather than remove it.
But so far the input has not been very convincing.
> Dunno... but should GIMP care and target a worse solution
> because someone else is behind?
Huh? The goal is perfection, and that is only reached by
*thinking* and constantly reconsidering approaches. By simply
throwing code and UI elements at an implementation in the hope
of hitting a problem, you gain nothing but a buggy, bloated and
unnecessarily complicated application.
Servus,
  Daniel




Re: [Gimp-developer] Re: Re: GIMP and multiple processors

2005-03-02 Thread Daniel Egger
On 02.03.2005, at 20:22, GSR - FR wrote:
> IOW, supersampling is nice for the small set of cases in which
> it really matters; otherwise it is always going to be slower.
> Of course, it is still going to be faster in many cases than
> full sampling and scaling down. If anybody figures out a better
> method than user-selectable adaptive supersampling (best case
> as fast as no oversampling, worst case as slow as adaptive), I
> guess the POVRay team would like to hear about it too. :]
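The scheme in question, roughly (a minimal sketch with a
hypothetical eval_gradient hook and no corner caching; not the
actual GIMP code): sample the four corners of a square, and
recurse only where they disagree, so flat regions cost almost
nothing while busy regions approach full supersampling.

#include <math.h>

typedef struct { double r, g, b; } Color;

/* Assumed hook: evaluates the gradient at image coordinates. */
extern Color eval_gradient (double x, double y);

static double
color_diff (Color a, Color b)
{
  double d = fabs (a.r - b.r);
  if (fabs (a.g - b.g) > d) d = fabs (a.g - b.g);
  if (fabs (a.b - b.b) > d) d = fabs (a.b - b.b);
  return d;
}

static Color
color_avg4 (Color a, Color b, Color c, Color d)
{
  Color e;
  e.r = (a.r + b.r + c.r + d.r) / 4.0;
  e.g = (a.g + b.g + c.g + d.g) / 4.0;
  e.b = (a.b + b.b + c.b + d.b) / 4.0;
  return e;
}

/* Sample the square [x, x+size] x [y, y+size]: take the four
 * corners and subdivide only where they disagree by more than
 * `threshold`, up to `max_depth` levels. Best case 4 samples,
 * worst case 4^max_depth. */
static Color
sample_adaptive (double x, double y, double size,
                 double threshold, int max_depth)
{
  Color c00 = eval_gradient (x,        y);
  Color c10 = eval_gradient (x + size, y);
  Color c01 = eval_gradient (x,        y + size);
  Color c11 = eval_gradient (x + size, y + size);

  if (max_depth > 0 &&
      (color_diff (c00, c11) > threshold ||
       color_diff (c10, c01) > threshold))
    {
      double h = size / 2.0;
      return color_avg4 (
        sample_adaptive (x,     y,     h, threshold, max_depth - 1),
        sample_adaptive (x + h, y,     h, threshold, max_depth - 1),
        sample_adaptive (x,     y + h, h, threshold, max_depth - 1),
        sample_adaptive (x + h, y + h, h, threshold, max_depth - 1));
    }

  return color_avg4 (c00, c10, c01, c11);
}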
It might well be that the adaptivity is the root of the speed
problem. As it is, the code is a mish-mash of hardcoded
computation that works differently (or at least seems to) from
the other region-based code. It does not operate on tiles but
on rows, does its own memory allocation, and is thus hardly
parallelisable and very likely much slower than it needs to be.
And hey, depth-3 adaptive supersampling when blending a layer
takes *much* longer than a manual 10x oversampling, i.e.
blending a larger image and scaling it down to the original
size with Lanczos; this is a UP (uniprocessor) machine, BTW.
My assumption here is that if the adaptive supersampling code
takes orders of magnitude longer to render than rendering
without supersampling, it could be beneficial to simply use the
common code to render depth x depth times the number of tiles
to fill, and then do some weighting on this data to fill the
final tile. Very easy, reuses existing code, runs multithreaded,
and is likely quite a bit faster than the current code.
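Per channel, the weighting step is trivial; a sketch assuming
64x64 tiles and plain float buffers rather than the real tile
API. The ordinary blend code fills the oversampled buffer (in
parallel, tile by tile), and this collapses it with a box
filter:

#define TILE_W 64   /* assumed tile size */
#define TILE_H 64

/* Collapse a (TILE_W*depth) x (TILE_H*depth) oversampled
 * buffer, one channel, into a TILE_W x TILE_H tile. */
static void
downsample_tile (const float *src, float *dest, int depth)
{
  int   src_w = TILE_W * depth;
  float norm  = 1.0f / (float) (depth * depth);

  for (int y = 0; y < TILE_H; y++)
    for (int x = 0; x < TILE_W; x++)
      {
        float sum = 0.0f;

        /* Average the depth x depth subsamples of this pixel. */
        for (int sy = 0; sy < depth; sy++)
          for (int sx = 0; sx < depth; sx++)
            sum += src[(y * depth + sy) * src_w + (x * depth + sx)];

        dest[y * TILE_W + x] = sum * norm;
      }
}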
I would also look into the possibility of analyzing the inputs
(gradient and repeat type) to find degenerate cases and
recommending the use of supersampling to the user...
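Something along these lines, perhaps (hypothetical names and a
guessed threshold; nothing of the sort exists in the tree):

#include <stdbool.h>

typedef enum
{
  REPEAT_NONE,
  REPEAT_SAWTOOTH,
  REPEAT_TRIANGULAR
} RepeatMode;

/* Flag blends whose on-screen repeat length is so short that
 * one point sample per pixel must alias, and suggest turning
 * supersampling on. */
static bool
should_suggest_supersampling (RepeatMode mode,
                              double     blend_len_px, /* drag length  */
                              int        n_repeats)    /* repeats over it */
{
  if (mode == REPEAT_NONE || n_repeats <= 0)
    return false;

  double px_per_repeat = blend_len_px / (double) n_repeats;

  /* Guessed threshold: below ~6 px per repeat (cf. the sawtooth
   * example earlier in the thread) point sampling cannot keep
   * up with the gradient. */
  return px_per_repeat < 6.0;
}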
Servus,
  Daniel

