On Wed, 18 Feb 2004 15:18:49 +0100
Dave Neary <[EMAIL PROTECTED]> wrote:
> Hi Ernst,
> Ernst Lippe wrote:
> > Dave Neary <[EMAIL PROTECTED]> wrote:
> >>As a matter of interest, do you do any optimisation based on the current
> >>viewport in the preview? Do you generate a viewport-sized (+ margin, say)
> >>drawable from the original drawable that you pass to the function connected to
> >>the "update-preview" signal?
> > The first question that you face when you want to add this, should the
> > drawable be scaled or not?
> The kind of logic I would have thought might be reasonable is:
> 1) if the original drawable is zoomed out, do a messy scale of the original to
> give the new drawable (accuracy isn't *hugely* important in general in this
> case), then take the tiles containing the part of that drawable that will be
> visible in the viewport, and generate a drawable from them, and apply the
> transform to that.
> 2) If the original drawable is at 100% or greater, then calculate the pixels
> that will be in the viewport, take the tiles containing that bounding box,
> generate a drawable from them, apply the filter, then expand the result as
> necessary to bring it up to the scale ratio.
> For example, say I'm doing a blur on a 400% zoomed copy of an image, and the
> viewport (256x256, say) is showing pixels (0,0) -> (64,64). In this case I take
> all the tiles that box covers (easy enough in this case, it's just 1 tile), make
> a new drawable with those tiles, apply my blur (on a 64x64 image it should be
> quick), and zoom the result to 256x256.
> If I'm blurring a 25% zoomed copy, the easiest way is to do the scale on the
> image first, blur that with a radius r * 0.25, and show the result.
> In the former case (zoomed in), I'm not blurring 90% of the image data that
> won't ever be displayed in the viewport, and in the latter I'm doing a "good
> enough" zoom on scaled image data (with no interpolation). Also, as long as the
> zoom ratio doesn't change, I keep my reference zoomed drawable around so that I
> don't have to re-do the calculation/zoom every time I pan around in the viewfinder.
> How does that sound?
I don't think that this is the best approach for this specific case,
and I don't think that it can be generalized to several other
algorithms.
First of all, you need a larger input area, otherwise the image near the
edges of the preview is incorrect. Because in most cases the preview
image will be small, such defects are very noticeable, because a large
part of the preview is "close to the edge". But the actual size of the
margin depends on the blur radius, so when you want the preview to
provide the scaled data, there should also be some mechanism to tell
the preview how large this extra margin should be.
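To make the margin point concrete, here is a minimal sketch (not the real
GIMP preview API; the function name and rectangle convention are my own)
of how the required input region could be derived from the viewport and
the filter radius, clamped to the image:

```python
# Hypothetical helper: given the viewport rectangle in image coordinates
# and a filter radius, compute the input region the plug-in actually
# needs, clamped to the image bounds.
def required_input_region(viewport, radius, image_size):
    """viewport: (x, y, width, height); image_size: (width, height)."""
    x, y, w, h = viewport
    img_w, img_h = image_size
    # Expand the viewport by the filter radius on every side...
    x0 = max(0, x - radius)
    y0 = max(0, y - radius)
    x1 = min(img_w, x + w + radius)
    y1 = min(img_h, y + h + radius)
    # ...and return the result as (x, y, width, height).
    return (x0, y0, x1 - x0, y1 - y0)

# A 64x64 viewport at (100, 100) with a blur radius of 5 needs a
# 74x74 input region starting at (95, 95).
print(required_input_region((100, 100, 64, 64), 5, (400, 400)))
```

Note that the margin is filter-specific, which is exactly why the preview
cannot choose it on its own: only the plug-in knows its radius.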
Second, there are actually very few algorithms that are truly
scale-independent. In fact, the only algorithms that I have seen
that are really scale-independent are those where every output pixel
is completely determined by the corresponding input pixel (i.e. the
output pixel is independent of all other input pixels). So at best
the algorithm is approximately scale-independent, and we have to hope
that the difference is not too great, so that it is not visible to the
user. When your input image is already slightly blurred,
i.e. it does not contain any sharp edges, the difference will probably
not be really noticeable. But when the image does contain sharp edges
there can be an important visual difference: consider an image with
alternating black and white pixels (they alternate both horizontally
and vertically). When that image is zoomed to 50% and then blurred, the
result is either completely black or completely white, but never the
true value, which is simply grey. Of course this is a highly
artificial example, but I have noticed similar effects in real images.
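The checkerboard example is easy to reproduce. This small sketch (using a
whole-image mean as a stand-in for a large-radius blur) shows that a naive
50% nearest-neighbour scale picks only one colour of the checkerboard, so
blurring the scaled copy can never recover the grey average:

```python
# A 1-pixel checkerboard: 0 and 255 alternating in both directions.
def checkerboard(n):
    return [[255 if (x + y) % 2 else 0 for x in range(n)] for y in range(n)]

def box_blur_mean(img):
    # The mean of the whole image stands in for a large-radius blur.
    flat = [p for row in img for p in row]
    return sum(flat) / len(flat)

full = checkerboard(8)
scaled = [row[::2] for row in full[::2]]   # naive 50% nearest-neighbour scale

print(box_blur_mean(full))    # blurring the original gives mid grey: 127.5
print(box_blur_mean(scaled))  # blurring the scaled copy gives pure black: 0.0
```

Depending on which phase of the pattern the scaler samples, the scaled
result is all black or all white, but never grey.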
Also, with this approach you will probably get some very obvious visual
effects when you zoom in or out. When you are zooming, you expect the
image to simply get bigger or smaller; when there are other
differences (and almost by definition you will get these when you run
the algorithm on scaled data), the image will seem to "flicker" during
the zoom. It is surprising how sensitive our brains are to such
minute differences. In my experience this flickering is highly
annoying; it does not feel "smooth". Of course your own mileage may
vary in this respect, but perhaps you should try it yourself.
Another point is that it is not possible to generalize this to other
algorithms, e.g. what would you do with a sharpen filter that only
looks at the immediate neighbouring pixels? If you only show the scaled
version of the original image (which seems to be the only reasonable
solution), you will give a wrong impression of the effects of the
filter.
When you want to add a preview to some plug-in, I would strongly
suggest that you first implement it without any scaling in the
plug-in. It is a lot easier to implement and it is guaranteed to give
consistent results. If after this first step you are really convinced
that the performance is horrible, you could try to implement a new
version that uses scaled data. After that you can at least compare
the two versions and determine whether it was really worth all the effort
and how much the quality of the preview has deteriorated.
> > There is not much point in using an unscaled drawable because the
> > plug-in could easily extract it from the original image, and there is
> > no performance advantage by doing it in the preview.
> The performance advantage is surely in performing the preview calculation on a
> (possibly small) subset of the total image data, isn't it?
Yes, but that is something that the plug-in algorithm should do,
because that is the only place where you can determine what inputs are
needed to generate a specific output area. Think for example of some
whirl plug-in: to compute a given output area it will only need a
subpart of the original image, but it can be very difficult to
determine what part is really needed. So it is the responsibility of
the plug-in algorithm to compute only a specific output area.
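To illustrate why only the plug-in can do this, here is a hypothetical
sketch (the whirl formula is illustrative, not the one used by any actual
GIMP plug-in): for a geometric transform, the input region needed for an
output area is found by mapping the output pixels back through the inverse
transform and taking their bounding box.

```python
import math

def inverse_whirl(x, y, cx, cy, strength):
    """Map an output pixel back to the input position it samples.
    The rotation angle grows with the distance from the centre."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    angle = -strength * r            # undo the rotation applied on the way out
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    return (cx + dx * cos_a - dy * sin_a,
            cy + dx * sin_a + dy * cos_a)

def needed_input_bounds(output_rect, cx, cy, strength):
    """Bounding box (x1, y1, x2, y2) of the input pixels needed
    to render output_rect = (x, y, width, height)."""
    x, y, w, h = output_rect
    pts = [inverse_whirl(px, py, cx, cy, strength)
           for px in range(x, x + w)
           for py in range(y, y + h)]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```

Note that for a nonlinear transform like this it is not enough to map the
four corners of the rectangle; the needed region can bulge in the middle,
which is exactly why the preview widget cannot guess it for the plug-in.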
Anyhow, a good plug-in should already have the functionality to
compute the outputs for a given area (gimp_drawable_mask_bounds), so
it should not be too difficult to modify this to use the area that was
determined by the preview.
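A minimal sketch of that modification, assuming a plug-in that already
restricts its work to the selection bounds returned by
gimp_drawable_mask_bounds() (the helper name and rectangle convention
here are my own): reusing that code path for the preview amounts to
intersecting the selection bounds with the area the preview asks for.

```python
def intersect(a, b):
    """Intersect two (x1, y1, x2, y2) rectangles; None if disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

# Selection bounds as gimp_drawable_mask_bounds() might report them...
selection = (0, 0, 400, 400)
# ...intersected with the region the preview wants rendered:
preview_area = (96, 96, 160, 160)
print(intersect(selection, preview_area))   # -> (96, 96, 160, 160)
```

The plug-in then renders only the intersection, so the same area-limited
code serves both the final render and the preview.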
Gimp-developer mailing list