Re: [Gimp-developer] Re: GIMP preview widget (was tentative 2.2 feature list)

2004-02-24 Thread Ernst Lippe
On Thu, 19 Feb 2004 16:45:45 +0100
Dave Neary <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> Ernst Lippe wrote:
> > Dave Neary <[EMAIL PROTECTED]> wrote:
> >>1) if the original drawable is zoomed out, do a messy scale of the original to 
> >>give the new drawable (accuracy isn't *hugely* important in general in this 
> >>case), then take the tiles containing the part of that drawable that will be 
> >>visible in the viewport, and generate a drawable from them, and apply the 
> >>transform to that.
> >>
> >>2) If the original drawable is at 100% or greater, then calculate the pixels 
> >>that will be in the viewport, take the tiles containing that bounding box, 
> >>generate a drawable from them, apply the filter, then expand the result as 
> >>necessary to bring it up to the scale ratio.
> 
> >>How does that sound?
> > 
> > First of all, you need a larger input area; otherwise the image near
> > the edges of the preview is incorrect.
> 
> That's a minor implementation issue - we can take the drawable used to 
> generate the preview to be the viewport +/- some arbitrary amount of 
> pixels, or perhaps take 1 tile more than we need in the horizontal and 
> vertical direction.

For the current preview this is a non-issue; it only becomes relevant
when you expect the preview to give you an already scaled image. Even
then, I think you should think very carefully about the margins. I
don't like the idea of fixed margins, because that takes a design
decision in a place where it does not belong: it obviously belongs in
the plug-in, not in the preview. How do you handle the case where part
of the margin falls outside the drawable? The "normal" solution would
be to supply zeros for these areas, but several plug-in algorithms are
convolutions, and they usually behave badly when there are too many
zeros around. In any case, I think the preview should always give the
plug-in the absolute image coordinates of the area that must be
rendered; several plug-ins need this information (most "warping"
plug-ins do). Wouldn't it be confusing to the implementor when the
area they are supposed to render differs from the input area?
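
To make this concrete, here is a minimal sketch of that division of
labour: the preview passes the absolute image coordinates of the output
area, and the plug-in itself grows that area by its own margin and
clamps it to the drawable, so no fixed-margin policy lives in the
preview. All names here are hypothetical, not part of any existing
preview API.

#include <glib.h>

/* Hypothetical region type, in absolute image coordinates. */
typedef struct { gint x, y, width, height; } Region;

/* The plug-in, not the preview, knows how much extra input it needs
 * (e.g. the blur radius) and how to behave at the drawable edges,
 * since blindly zero-filling can misbehave for convolutions. */
static Region
input_region_for_output (Region out, gint margin,
                         gint drawable_width, gint drawable_height)
{
  Region in;

  in.x = MAX (0, out.x - margin);
  in.y = MAX (0, out.y - margin);
  in.width  = MIN (drawable_width,  out.x + out.width  + margin) - in.x;
  in.height = MIN (drawable_height, out.y + out.height + margin) - in.y;

  return in;
}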


> > Because in most cases the preview image will be small, such defects
> > are very noticeable: a large part of the preview is "close to the
> > edge". But the actual size of the margin depends on the blur radius,
> > so when you want the preview to provide the scaled data, there should
> > also be some mechanism to tell the preview how large this extra
> > margin should be.
> 
> This assumes that the preview should be precise. One of the merits of 
> the preview, though, is that it is an impression of the effect and 
> renders quickly - quick and dirty should be OK. Of course, there's a 
> compromise to be made in there. But I don't think plug-in previews need 
> to be 100% exact.

This is a decision for the plug-in maker, but I believe
that the preview should be as accurate as possible.
That is probably a bias from my background: my main plug-in
does some pretty slow computations and therefore badly
needs a preview. I really hate discovering, after a long
period of waiting, that I chose the wrong parameters
because of a defect in the preview process.

In some cases it may be a valid decision; I am just arguing
that it should not be the "default" decision, taken without
further analysis, because the implicit assumption that
"users will never see the difference" is in general wrong.

> > Yes, but that is something that the plug-in algorithm should do,
> > because it is the only place where you can determine what inputs are
> > needed to generate a specific output area. Think for example of some
> > whirl plug-in: to compute a given output area it will only need a
> > subpart of the original image, but it can be very difficult to
> > determine what part is really needed. So it is the responsibility of
> > the plug-in algorithm to compute only a specific output area.
> 
> Good point. But shouldn't the preview widget cater to the most common
> case, while allowing the plug-in to address the less common case? I
> would prefer all convolution-based plug-ins (which are essentially
> local) and render plug-ins (where the result is entirely predefined by
> a seed) to have a nice, easy way of generating a preview consisting of
> more or less 1 or 2 function calls, and a more complicated API to
> allow things like whirl and the like to calculate their effects using
> a preview widget, with some more work.
Yes, but the most general solution is simply to let the plug-in
work on unscaled data and leave the scaling to the preview.
This works for all plug-ins. When you look at it this way,
a plug-in algorithm that is scale-independent is only a
special case.
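
As a sketch of that division of labour, the scaling step on the
preview's side could be as simple as a nearest-neighbour resample of
the plug-in's unscaled output; the function below is purely
illustrative and not part of any existing preview widget.

#include <string.h>
#include <glib.h>

/* Nearest-neighbour scale of a packed RGB buffer: src holds the
 * unscaled output of the plug-in algorithm, dst is what the preview
 * actually displays. */
static void
scale_rgb_nearest (const guchar *src, gint src_w, gint src_h,
                   guchar       *dst, gint dst_w, gint dst_h)
{
  gint x, y;

  for (y = 0; y < dst_h; y++)
    {
      gint sy = y * src_h / dst_h;

      for (x = 0; x < dst_w; x++)
        {
          gint sx = x * src_w / dst_w;

          /* Copy one RGB pixel from the nearest source pixel. */
          memcpy (dst + 3 * (y * dst_w + x),
                  src + 3 * (sy * src_w + sx), 3);
        }
    }
}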

Also, you are assuming here that convolutions are scale-independent.
That would only be true if we were dealing with continuous images;
convolutions are in general not scale-independent when you deal with
discrete, sampled images.
Re: [Gimp-developer] Re: GIMP preview widget (was tentative 2.2 feature list)

2004-02-20 Thread David Hodson
Sven Neumann wrote:

> So you are seeing the GimpPreview as just a widget that plug-ins can
> draw on. However our goal is to provide a way to add a preview to
> plug-ins basically w/o changing their code. The change should be
> limited to the plug-in dialog and a few hooks here and there.
I thought I'd repeat this, because it's a major design constraint.
(Also, a very good idea.)
I think it's clear that some plugins (e.g. sharpen, noise removal)
will require a small, unzoomed region of the image to judge the
effect, while others (e.g. whirl, colourise) require a small version
of the entire image.
For the first group: if the plugin uses gimp_drawable_mask_bounds
correctly, then it should work if the preview reports only a small
masked area.
For the second group: if the plugin is scale-independent, then it
should work if the preview reports a small image.
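
The usual GIMP 2.x pattern looks roughly like the skeleton below: the
plug-in asks for the bounds of the (possibly masked) region and
processes only that rectangle, so a preview that reports a smaller
masked area is handled for free. This is a simplified sketch, not a
complete plug-in.

#include <libgimp/gimp.h>

static void
process (gint32 drawable_ID)
{
  GimpDrawable *drawable = gimp_drawable_get (drawable_ID);
  gint          x1, y1, x2, y2;

  /* Bounds of the selection (or mask) intersected with the drawable;
   * a preview could report a smaller area here and the rest of the
   * code would work unchanged. */
  gimp_drawable_mask_bounds (drawable_ID, &x1, &y1, &x2, &y2);

  /* ... set up pixel regions over (x1, y1)-(x2, y2) and run the
   * actual algorithm on that sub-rectangle only ... */

  gimp_drawable_detach (drawable);
}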
--
David Hodson  --  this night wounds time


Re: [Gimp-developer] Re: GIMP preview widget (was tentative 2.2 feature list)

2004-02-19 Thread Dave Neary
Hi,

Ernst Lippe wrote:
> Dave Neary <[EMAIL PROTECTED]> wrote:
>> 1) if the original drawable is zoomed out, do a messy scale of the original to
>> give the new drawable (accuracy isn't *hugely* important in general in this
>> case), then take the tiles containing the part of that drawable that will be
>> visible in the viewport, and generate a drawable from them, and apply the
>> transform to that.
>>
>> 2) If the original drawable is at 100% or greater, then calculate the pixels
>> that will be in the viewport, take the tiles containing that bounding box,
>> generate a drawable from them, apply the filter, then expand the result as
>> necessary to bring it up to the scale ratio.
>>
>> How does that sound?
> First of all, you need a larger input area; otherwise the image near
> the edges of the preview is incorrect.
That's a minor implementation issue - we can take the drawable used to 
generate the preview to be the viewport +/- some arbitrary amount of 
pixels, or perhaps take 1 tile more than we need in the horizontal and 
vertical direction.

> Because in most cases the preview image will be small, such defects
> are very noticeable: a large part of the preview is "close to the
> edge". But the actual size of the margin depends on the blur radius,
> so when you want the preview to provide the scaled data, there should
> also be some mechanism to tell the preview how large this extra
> margin should be.
This assumes that the preview should be precise. One of the merits of 
the preview, though, is that it is an impression of the effect and 
renders quickly - quick and dirty should be OK. Of course, there's a 
compromise to be made in there. But I don't think plug-in previews need 
to be 100% exact.

> But when the image does contain sharp differences there can be an
> important visual difference: consider an image with alternating black
> and white pixels (they alternate in both horizontal and vertical
> directions); when the image is zoomed at 50% and then blurred, the
> result is either completely black or completely white, but never the
> true value, which is simply grey.
When the image is zoomed to 50%, the image itself is either grey or all 
black/white (depending on the scaling algorithm we use) - if it's grey, 
then so will the blur be. That said, I get the point.

> Also with this approach you will probably get some very obvious visual
> effects when you zoom in or out.
Again, I see the point. And I agree that your proposal to start with 
unscaled data and see how slow it is before moving on to scaled copies 
is reasonable.

> Yes, but that is something that the plug-in algorithm should do,
> because it is the only place where you can determine what inputs are
> needed to generate a specific output area. Think for example of some
> whirl plug-in: to compute a given output area it will only need a
> subpart of the original image, but it can be very difficult to
> determine what part is really needed. So it is the responsibility of
> the plug-in algorithm to compute only a specific output area.
Good point. But shouldn't the preview widget cater to the most common
case, while allowing the plug-in to address the less common case? I
would prefer all convolution-based plug-ins (which are essentially
local) and render plug-ins (where the result is entirely predefined by
a seed) to have a nice, easy way of generating a preview consisting of
more or less 1 or 2 function calls, and a more complicated API to
allow things like whirl and the like to calculate their effects using
a preview widget, with some more work.

> Anyhow, a good plug-in should already have the functionality to
> compute the outputs for a given area (gimp_drawable_mask_bounds), so
> it should not be too difficult to modify this to use the area that was
> determined by the preview.
It would be nice to move some of this common code into the preview 
widget itself, so that the common case doesn't have to worry about it.
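
If that common code did move into the widget, the plug-in's side of
the contract might shrink to a single handler for the "update-preview"
signal mentioned earlier in this thread. The signature and the two
helpers below are hypothetical, purely to illustrate the "1 or 2
function calls" goal.

#include <gtk/gtk.h>

/* Hypothetical plug-in helpers, declared only to complete the sketch. */
guchar *render_area         (gint x, gint y, gint width, gint height,
                             gpointer user_data);
void    preview_draw_buffer (GtkWidget *preview, const guchar *buf,
                             gint width, gint height);

/* Hypothetical handler: the preview passes the absolute image
 * coordinates of the area it wants rendered; the plug-in renders just
 * that area (plus whatever margin it needs) and hands the pixels back. */
static void
on_update_preview (GtkWidget *preview,
                   gint x, gint y, gint width, gint height,
                   gpointer user_data)
{
  guchar *buf = render_area (x, y, width, height, user_data);

  preview_draw_buffer (preview, buf, width, height);
  g_free (buf);
}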

Cheers,
Dave.
--
Dave Neary
[EMAIL PROTECTED]


[Gimp-developer] Re: GIMP preview widget (was tentative 2.2 feature list)

2004-02-19 Thread Ernst Lippe
On Wed, 18 Feb 2004 15:18:49 +0100
Dave Neary <[EMAIL PROTECTED]> wrote:

> 
> Hi Ernst,
> 
> Ernst Lippe wrote:
> > Dave Neary <[EMAIL PROTECTED]> wrote:
> >>As a matter of interest, do you do any optimisation based on the current 
> >>viewport in the preview? Do you generate a viewport-sized (+ margin, say) 
> >>drawable from the original drawable that you pass to the function connected to 
> >>the "update-preview" signal?
>  >
> > The first question that you face when you want to add this, should the
> > drawable be scaled or not? 
> 
> The kind of logic I would have thought might be reasonable is:
> 
> 1) if the original drawable is zoomed out, do a messy scale of the original to 
> give the new drawable (accuracy isn't *hugely* important in general in this 
> case), then take the tiles containing the part of that drawable that will be 
> visible in the viewport, and generate a drawable from them, and apply the 
> transform to that.
> 
> 2) If the original drawable is at 100% or greater, then calculate the pixels 
> that will be in the viewport, take the tiles containing that bounding box, 
> generate a drawable from them, apply the filter, then expand the result as 
> necessary to bring it up to the scale ratio.
> 
> For example, say I'm doing a blur on a 400% zoomed copy of an image, and the
> viewport (256x256, say) is showing pixels (0,0) -> (64,64). In this case I take
> all the tiles that box covers (easy enough in this case, it's just 1 tile), make
> a new drawable with those tiles, apply my blur (on a 64x64 image it should be
> quick), and zoom the result to 256x256.
> 
> If I'm blurring a 25% zoomed copy, the easiest way is to do the scale on the 
> image first, blur that with a radius r * 0.25, and show the result.
> 
> In the former case (zoomed in), I'm not blurring 90% of the image data that 
> won't ever be displayed in the viewport, and in the latter I'm doing a "good 
> enough" zoom on scaled image data (with no interpolation). Also, as long as the 
> zoom ratio doesn't change, I keep my reference zoomed drawable around so that I 
> don't have to re-do the calculation/zoom every time I pan around in the viewfinder.
> 
> How does that sound?

I don't think that this is the best approach for this specific case,
and I don't think that it can be generalized for several other
algorithms.

First of all, you need a larger input area; otherwise the image near
the edges of the preview is incorrect. Because in most cases the
preview image will be small, such defects are very noticeable: a large
part of the preview is "close to the edge". But the actual size of the
margin depends on the blur radius, so when you want the preview to
provide the scaled data, there should also be some mechanism to tell
the preview how large this extra margin should be.

Second, there are actually very few algorithms that are truly
scale-independent; in fact, the only real algorithms I have seen that
are really scale-independent are those where every output pixel is
completely determined by the corresponding input pixel (i.e. the
output pixel is independent of all other input pixels). So at best an
algorithm is approximately scale-independent, and we should hope that
the difference is not too great, so that it is not visible to the
user. When your input image is already slightly blurred, i.e. it does
not contain any sharp edges, the difference will probably not be
really noticeable. But when the image does contain sharp differences
there can be an important visual difference: consider an image with
alternating black and white pixels (they alternate in both horizontal
and vertical directions); when the image is zoomed at 50% and then
blurred, the result is either completely black or completely white,
but never the true value, which is simply grey. Of course this is a
highly artificial example, but I have noticed similar effects in real
images.
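
The arithmetic behind the checkerboard example is easy to verify. A
3x3 box blur of the full-resolution image averages 4 or 5 white pixels
out of 9 (113 or 141 with integer division, i.e. grey), while a
point-sampled 50% downscale keeps only the pixels whose coordinates
are both even, which all share one colour, so blurring the scaled
image changes nothing. A small self-contained demonstration (box blur
and point sampling were chosen for simplicity; real scaling algorithms
differ in detail):

#include <stdio.h>

#define N 8

/* Checkerboard: pixel (x, y) is white when x + y is even. */
static int checker (int x, int y) { return ((x + y) % 2 == 0) ? 255 : 0; }

int
main (void)
{
  /* 3x3 box blur of the full-resolution checkerboard at an interior
   * pixel: 5 of the 9 neighbours are white, so the result is grey. */
  int sum = 0;
  for (int dy = -1; dy <= 1; dy++)
    for (int dx = -1; dx <= 1; dx++)
      sum += checker (3 + dx, 3 + dy);
  printf ("full-res blur at (3,3): %d\n", sum / 9);   /* 141, grey */

  /* A point-sampled 50% downscale keeps pixels at even (x, y), which
   * are all white, so any subsequent blur leaves the image flat. */
  int all_white = 1;
  for (int y = 0; y < N; y += 2)
    for (int x = 0; x < N; x += 2)
      if (checker (x, y) != 255)
        all_white = 0;
  printf ("50%% point-sampled image uniform white: %s\n",
          all_white ? "yes" : "no");

  return 0;
}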

Also, with this approach you will probably get some very obvious
visual effects when you zoom in or out. When you are zooming, you
expect the image simply to get bigger or smaller; when there are other
differences (and almost by definition you will get these when you run
the algorithm on scaled data) the image will seem to "flicker" during
the zoom. It is surprising how sensitive our brains are to such minute
differences. In my experience this flickering is highly annoying; it
does not feel "smooth". Of course your own mileage may vary in this
respect, but perhaps you should try it yourself.

Another point is that it is not possible to generalize this to other
algorithms. For example, what would you do with a sharpen filter that
only looks at the immediate neighbour pixels? If you only show the
scaled version of the original image (which seems to be the only
reasonable solution) you will give a wrong impression of the effects
of the plug-in.

When you want to add a preview to some plug-in, I would strongly
suggest that you first implement it without any scaling in the
plug-in. It is a lot easier to implement, and you can always move to
scaled data later if the unscaled preview turns out to be too slow.