On Thu, 19 Feb 2004 16:45:45 +0100
Dave Neary [EMAIL PROTECTED] wrote:
Ernst Lippe wrote:
Dave Neary [EMAIL PROTECTED] wrote:
1) if the original drawable is zoomed out, do a messy scale of the original to
give the new drawable (accuracy isn't *hugely* important in general in this
case), then take the tiles containing the part of that drawable that will be
visible in the viewport, and generate a drawable from them, and apply the
transform to that.
2) If the original drawable is at 100% or greater, then calculate the pixels
that will be in the viewport, take the tiles containing that bounding box,
generate a drawable from them, apply the filter, then expand the result as
necessary to bring it up to the scale ratio.
How does that sound?
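The two cases described above could be dispatched on the zoom factor, roughly like this (a minimal sketch; all names are made up for illustration and are not part of any existing GimpPreview API):

```c
/* Illustrative only: pick which of the two preview strategies above
 * applies, based on the viewport's zoom factor.  Not real GIMP API. */
typedef enum
{
  FILTER_AFTER_SCALE,  /* zoomed out: scale first, then filter */
  SCALE_AFTER_FILTER   /* 100% or more: filter first, then scale up */
} PreviewStrategy;

static PreviewStrategy
choose_strategy (double zoom)
{
  /* zoom < 1.0 means the drawable is zoomed out in the viewport */
  return zoom < 1.0 ? FILTER_AFTER_SCALE : SCALE_AFTER_FILTER;
}
```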
First of all, you need a larger input area, otherwise the image near the
edges of the preview is incorrect.
That's a minor implementation issue - we can take the drawable used to
generate the preview to be the viewport +/- some arbitrary amount of
pixels, or perhaps take 1 tile more than we need in the horizontal and
vertical directions.
For the current preview it is even a non-issue; it only becomes
relevant when you expect the preview to give you an already scaled
image. Even when the preview should generate a scaled image, I think
that you should think very carefully about the margins. I don't like
the idea of having fixed margins, because then you are taking a design
decision in a place where it does not belong: it obviously belongs in
the plug-in and not in the preview. How do you handle the case where
part of the margins falls outside the drawable? The normal solution
would be to supply zeros for these areas, but there are several
plug-in algorithms that are convolutions, and they usually have a nasty
behaviour when there are too many zeros around.

I think that in any case the preview should always give the plug-in
the absolute image coordinates of the area that must be rendered;
there are several plug-ins that need this information (most warping
plug-ins need it). Wouldn't it be confusing to the implementor when
the area that they are supposed to render is different from the input
area?
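To make the margin question concrete, here is a minimal sketch (the type and function names are invented for illustration, not taken from the preview code) of one possible answer: expand the render area by a margin the plug-in chooses, then clamp it to the drawable bounds, so the algorithm never has to invent zeros outside the image:

```c
/* Hypothetical sketch: grow the preview's render area by a
 * plug-in-chosen margin, then clamp to the drawable bounds so the
 * algorithm never reads pixels that do not exist.  The Region type
 * and function name are illustrative, not real GimpPreview API. */
typedef struct { int x, y, width, height; } Region;

static Region
expand_and_clamp (Region area, int margin, int drawable_w, int drawable_h)
{
  Region r;

  r.x      = area.x - margin;
  r.y      = area.y - margin;
  r.width  = area.width  + 2 * margin;
  r.height = area.height + 2 * margin;

  /* clamp to the drawable: the margin simply shrinks near the edges */
  if (r.x < 0) { r.width  += r.x; r.x = 0; }
  if (r.y < 0) { r.height += r.y; r.y = 0; }
  if (r.x + r.width  > drawable_w) r.width  = drawable_w - r.x;
  if (r.y + r.height > drawable_h) r.height = drawable_h - r.y;

  return r;
}
```

Clamping is only one policy, of course - the point in the text stands that the choice belongs to the plug-in, not the preview.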
Take a blur, for example: it needs input pixels from outside the
rendered area when part of the preview is close to the edge. But the
actual size of the margin depends on the blur radius, so when you want
the preview to provide the scaled data, there should also be some
mechanism to tell the preview how large this extra margin should be.
This assumes that the preview should be precise. One of the merits of
the preview, though, is that it is an impression of the effect and
renders quickly - quick and dirty should be OK. Of course, there's a
compromise to be made in there. But I don't think plug-in previews need
to be 100% exact.
This is a decision for the plug-in maker, but I believe
that the preview should be as accurate as possible.
It is probably a bias from my background: my main plug-in
does some pretty slow computations, and therefore badly
needs a preview. I really hate it when I discover, after
a long period of waiting, that I chose the wrong parameters
because of a defect in the preview process.
In some cases it may be a valid decision; I am just arguing
that it should not be the default decision that is taken
without further analysis, because the implicit assumption
that users will never see the difference is in general wrong.
Yes, but that is something that the plug-in algorithm should do,
because it is the only place where you can determine what inputs are
needed to generate a specific output area. Think for example of some
whirl plug-in: to compute a given output area it will only need a
subpart of the original image, but it can be very difficult to
determine what part is really needed. So it is the responsibility of
the plug-in algorithm to compute only a specific output area.
Good point. But shouldn't the preview widget cater to the most common
case, while allowing the plug-in to address the less common case? I
would prefer to see all convolution based plug-ins (that are essentially
local) and render plug-ins (where the result is entirely predefined by a
seed) to have a nice easy way of generating a preview that consisted of
more or less 1 or 2 function calls, and have a more complicated API to
allow things like whirl and the like to calculate their effects using a
preview widget, with some more work.
Yes, but the most general solution is simply to let the plug-in
work on unscaled data and leave the scaling to the preview.
This works for all plug-ins. When you look at it this way,
a plug-in algorithm that is scale-independent is only a
special case.
Also, you are assuming here that convolutions are scale-independent;
this would only be true if we were dealing with continuous images.
Convolutions are in general not scale-independent when you deal with
images that consist of discrete pixels. This may not be very
important when you are only
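The scale-dependence of discrete convolutions is easy to demonstrate: applying a fixed-radius blur and then downscaling does not give the same result as downscaling first and applying the same blur. A small 1D sketch (illustrative code, not from any plug-in):

```c
/* Sketch of why a fixed-radius convolution is not scale-independent
 * on discrete pixels: blur-then-downscale differs from
 * downscale-then-blur with the same kernel. */
#include <stddef.h>

/* radius-1 box blur, clamped at the edges */
static void
box_blur (const double *in, double *out, size_t n)
{
  for (size_t i = 0; i < n; i++)
    {
      double l = in[i > 0 ? i - 1 : 0];
      double r = in[i + 1 < n ? i + 1 : n - 1];

      out[i] = (l + in[i] + r) / 3.0;
    }
}

/* 2x downscale by averaging pairs of pixels */
static void
downscale2 (const double *in, double *out, size_t n_out)
{
  for (size_t i = 0; i < n_out; i++)
    out[i] = (in[2 * i] + in[2 * i + 1]) / 2.0;
}
```

Feeding an impulse through both orders of operations produces visibly different results, which is exactly the preview-accuracy problem being discussed: filtering scaled data is not the same as scaling filtered data.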