Ernst Lippe wrote:
Dave Neary <[EMAIL PROTECTED]> wrote:
1) If the original drawable is zoomed out, do a messy scale of the original to give the new drawable (accuracy isn't *hugely* important in this case), then take the tiles containing the part of that drawable that will be visible in the viewport, generate a drawable from them, and apply the transform to that.

2) If the original drawable is at 100% or greater, then calculate the pixels that will be in the viewport, take the tiles containing that bounding box, generate a drawable from them, apply the filter, then expand the result as necessary to bring it up to the scale ratio.
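As an illustration of step 2, here is a minimal sketch of mapping a viewport's pixel bounding box onto the range of tiles that contains it (assuming GIMP's usual 64x64 tile size; the helper function is hypothetical, not part of libgimp):

```c
#include <assert.h>

#define TILE_SIZE 64  /* GIMP tiles are 64x64 pixels */

/* Hypothetical helper: given a pixel bounding box (x2, y2 exclusive),
 * compute the inclusive range of tile indices that covers it. */
static void
tiles_for_bbox (int x1, int y1, int x2, int y2,
                int *tx1, int *ty1, int *tx2, int *ty2)
{
  *tx1 = x1 / TILE_SIZE;
  *ty1 = y1 / TILE_SIZE;
  *tx2 = (x2 - 1) / TILE_SIZE;
  *ty2 = (y2 - 1) / TILE_SIZE;
}
```

The plug-in would then build its working drawable from exactly those tiles, filter it, and scale the result up to the view ratio.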

How does that sound?

First of all, you need a larger input area; otherwise the parts of the image near the edges of the preview will be incorrect.

That's a minor implementation issue - we can take the drawable used to generate the preview to be the viewport +/- some arbitrary amount of pixels, or perhaps take 1 tile more than we need in the horizontal and vertical direction.

For a blur, for example, the result is wrong when part of the preview is "close to the edge". But the actual size of the
margin depends on the blur radius, so when you want the preview to
provide the scaled data, there should also be some mechanism to tell
the preview how large this extra margin should be.
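One way such a mechanism could look (a hypothetical sketch, not existing GIMP API): the plug-in tells the preview its required margin, and the preview grows its region of interest by that much, clamped to the drawable's bounds:

```c
#include <assert.h>

/* Hypothetical sketch: grow the preview's region of interest by the
 * margin the plug-in asked for (e.g. its blur radius), clamping to
 * the drawable's bounds so we never read outside the image. */
static void
expand_region (int  x, int  y, int  w, int  h,
               int  margin,
               int  drawable_w, int  drawable_h,
               int *rx, int *ry, int *rw, int *rh)
{
  int x1 = x - margin;
  int y1 = y - margin;
  int x2 = x + w + margin;
  int y2 = y + h + margin;

  if (x1 < 0) x1 = 0;
  if (y1 < 0) y1 = 0;
  if (x2 > drawable_w) x2 = drawable_w;
  if (y2 > drawable_h) y2 = drawable_h;

  *rx = x1;
  *ry = y1;
  *rw = x2 - x1;
  *rh = y2 - y1;
}
```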

This assumes that the preview should be precise. One of the merits of the preview, though, is that it is an impression of the effect and renders quickly - quick and dirty should be OK. Of course, there's a compromise to be made in there. But I don't think plug-in previews need to be 100% exact.

But when the image does contain sharp differences
there can be an important visual difference: consider an image with
alternating black and white pixels (alternating both horizontally
and vertically). When the image is zoomed to 50% and then blurred, the
result can look quite different from blurring first and then zooming.

When the image is zoomed to 50%, the scaled image is either uniform grey or all black/white (depending on the scaling algorithm we use) - if it's grey, then so will the blur be. That said, I get the point.
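To make the checkerboard example concrete (a toy sketch, using hypothetical helper names): nearest-neighbour 50% scaling samples every second pixel and yields a flat black image, while a 2x2 box filter averages each block to a uniform grey of (0 + 255 + 255 + 0) / 4 = 127.

```c
#include <assert.h>

/* A checkerboard of 0/255 pixels, alternating both horizontally
 * and vertically. */
static unsigned char
checker (int x, int y)
{
  return ((x + y) & 1) ? 255 : 0;
}

/* Nearest-neighbour 50% zoom: sample the top-left pixel of each
 * 2x2 block.  Since 2x + 2y is always even, every sample is black. */
static unsigned char
nearest_50 (int x, int y)
{
  return checker (2 * x, 2 * y);
}

/* Box-filter 50% zoom: average each 2x2 block, giving uniform grey. */
static unsigned char
box_50 (int x, int y)
{
  int s = checker (2 * x,     2 * y)
        + checker (2 * x + 1, 2 * y)
        + checker (2 * x,     2 * y + 1)
        + checker (2 * x + 1, 2 * y + 1);
  return (unsigned char) (s / 4);
}
```

Either way the scaled-then-blurred preview loses the structure that blurring the full-resolution image would preserve in part.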

Also with this approach you will probably get some very obvious visual
effects when you zoom in or out.

Again, I see the point. And I agree that your proposal to start with unscaled data and see how slow it is before moving on to scaled copies is reasonable.

Yes, but that is something that the plug-in algorithm should do,
because it is the only place where you can determine what inputs are
needed to generate a specific output area. Think, for example, of some
whirl plug-in: to compute a given output area it will only need a
subpart of the original image, but it can be very difficult to
determine what part is really needed. So it is the responsibility of
the plug-in algorithm to compute only a specific output area.

Good point. But shouldn't the preview widget cater to the most common case, while allowing the plug-in to address the less common ones? I would prefer all convolution-based plug-ins (which are essentially local) and render plug-ins (where the result is entirely determined by a seed) to have a nice easy way of generating a preview in more or less 1 or 2 function calls, with a more complicated API allowing things like the whirl plug-in to calculate their effects using a preview widget, with some more work.

Anyhow, a good plug-in should already have the functionality to
compute the output for a given area (gimp_drawable_mask_bounds), so
it should not be too difficult to modify it to use the area that was
determined by the preview.

It would be nice to move some of this common code into the preview widget itself, so that the common case doesn't have to worry about it.


Dave Neary

_______________________________________________
Gimp-developer mailing list
[EMAIL PROTECTED]
http://lists.xcf.berkeley.edu/mailman/listinfo/gimp-developer
