I agree with Oleg that precision downsizing often
requires more than my simple downsampling.
The algorithms one chooses quite properly
depend upon both the data at hand and the goal.
I see a huge difference between screenshots and photos--
I had been talking about photos in my previous remarks.
Downsampling is by definition a loss of information,
and what information you are willing to lose
is a matter of your goals or taste.
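To make "simple downsampling" concrete, here is a small sketch in
Python/NumPy (an illustration, not my J code): decimation is just
strided indexing, and every discarded row and column is information
deliberately given up.

```python
import numpy as np

def simple_downsample(img, k):
    """Keep every k-th row and column; all other pixels are discarded.

    This is pure decimation, equivalent to nearest-neighbor sampling on
    an integer grid. The dropped samples are the information we agree
    to lose.
    """
    return img[::k, ::k]

# A 6x6 ramp reduced by a factor of 3 keeps only 4 of the 36 samples.
img = np.arange(36).reshape(6, 6)
print(simple_downsample(img, 3))
# [[ 0  3]
#  [18 21]]
```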

My simple downsampling can on occasion produce Moiré
effects which horrify and confuse many observers
but which make me smile, because I understand them
and am not confused by them. Most, but not all,
"averaging", "interpolation", or filtering schemes
move the entire image toward the gray center of color
space, making images subtly soft. My visualization
work often requires designing things to highlight
information, so I don't usually favor soft images.
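Both effects show up in a one-dimensional toy (Python, for
illustration; real Moiré is a 2-D version of the same aliasing):
decimating the finest stripe pattern the grid can hold collapses it
to a flat field, while averaging first drags every value to mid-gray.

```python
import numpy as np

# A stripe pattern alternating 0/255 every sample: the highest
# frequency the grid can represent.
stripes = np.tile([0, 255], 4)           # [0 255 0 255 0 255 0 255]

# Plain decimation by 2 lands on the same phase every time: the
# stripes alias into a flat field, the 1-D cousin of a Moire artifact.
decimated = stripes[::2]                 # [0 0 0 0]

# Averaging pairs first removes the aliasing, but every value is
# pulled to the gray midpoint -- the subtle softness described above.
averaged = stripes.reshape(-1, 2).mean(axis=1)

print(decimated)   # [0 0 0 0]
print(averaged)    # [127.5 127.5 127.5 127.5]
```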

An exception is the Savitzky-Golay filter, which uses the wavy
bell form Oleg described. (This can be found in the
filter1 script of image3.) But one is confronted
with the question of which color space to apply the filter in.
Disturbing color shifts can occur if this type
of filtering is done in RGB space. This surely
is worthwhile for some applications, but I remain
unconvinced that it should be used routinely.
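The "wavy bell" can be made concrete. Here is a NumPy sketch (Python
rather than J, and not the filter1 script itself) deriving the classic
7-point cubic Savitzky-Golay smoothing weights by least squares:

```python
import numpy as np

def savgol_kernel(window, polyorder):
    """Savitzky-Golay smoothing weights: fit a least-squares polynomial
    over the window and evaluate the fit at the center sample."""
    m = window // 2
    i = np.arange(-m, m + 1)
    A = i[:, None] ** np.arange(polyorder + 1)  # Vandermonde design matrix
    return np.linalg.pinv(A)[0]                 # weights for the fit's value at 0

k = savgol_kernel(7, 3)
print(np.round(k, 3))
# [-0.095  0.143  0.286  0.333  0.286  0.143 -0.095]
# A bell in the middle, but the ends dip negative -- the wavy rims.
```

The negative end weights mean the filter can overshoot, and overshooting
each RGB channel independently is one source of the color shifts
mentioned above.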

In fact, in writing filtering-with-J notes for
Quote Quad (preprints available via
http://ww2.lafayette.edu/~reiterc/abstracts/ci_2004d.html
http://ww2.lafayette.edu/~reiterc/abstracts/ci_2004e.html )
I was struck by the effectiveness of median filters,
because they retained the original data.
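That property is easy to demonstrate (a Python sketch, not the J from
the notes): the median of a window is always one of the original
samples, so an impulse is removed without inventing any new value,
where a mean would smear it into values that never occurred.

```python
import numpy as np

def median_filter_1d(x, window=3):
    """Sliding-window median: every output value is one of the inputs."""
    m = window // 2
    padded = np.pad(x, m, mode='edge')
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(x))])

x = np.array([10, 12, 200, 13, 11])   # 200 is an impulse, e.g. a hot pixel
print(median_filter_1d(x))            # [10. 12. 13. 13. 11.]
# The spike is gone, and every output value appears in the input data.
```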

A teacher of mine had that as a mantra:
use your data, not averages; don't pretend
the average has existence. That is extreme, but
take a look at Best Analogs for replacing missing
image data. It uses exact best fits to data
from the image. Powerful, computationally heavy stuff.
http://ww2.lafayette.edu/~reiterc/mvq/bafrmid/index.html
A script for using that (on small images)
is available with the fvj3 addon to my
Fractals, Visualization and J book (3rd ed.),
which also discusses the Best Analogs algorithm.
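A toy version of the idea in Python (my illustration; `best_analog` is a
hypothetical name, not the fvj3 implementation): exhaustively search the
image for the patch closest to a template by sum of squared differences,
and copy actual pixel data rather than any average. The exhaustive
search is also why this is computationally heavy.

```python
import numpy as np

def best_analog(img, patch, exclude=None):
    """Return the top-left corner of the patch in img with the smallest
    sum of squared differences to `patch`. Every position is tried, so
    it is slow, but the winner is real image data, not an average."""
    ph, pw = patch.shape
    best, best_pos = np.inf, None
    for r in range(img.shape[0] - ph + 1):
        for c in range(img.shape[1] - pw + 1):
            if (r, c) == exclude:
                continue
            d = np.sum((img[r:r + ph, c:c + pw] - patch) ** 2)
            if d < best:
                best, best_pos = d, (r, c)
    return best_pos

# A tiny periodic image: the 4x4 block at (0, 0) recurs exactly elsewhere.
img = np.tile(np.arange(16).reshape(4, 4), (2, 2)).astype(float)
patch = img[0:4, 0:4]
print(best_analog(img, patch, exclude=(0, 0)))   # (0, 4) -- an exact analog
```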

Happy to average or use actual data, depending on
data and goals,
Best,
Cliff

Oleg Kobchenko wrote:
The simplest way, I think, to improve nearest neighbor
when down-sampling is to apply a moving
average beforehand, with span based on the expected ratio
(that would also have to be expanded--values scaled--
to preserve the range).
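In NumPy terms (an editorial illustration, not Oleg's code), a span-k
moving average followed by every-k-th sampling, when aligned to blocks,
is just a block mean; dividing by k is the scaling that preserves the
range:

```python
import numpy as np

def prefiltered_downsample(x, k):
    """Box-average consecutive spans of k samples, keeping one value
    per span. The division by k inside mean() rescales the summed
    values back into the original range."""
    return x.reshape(-1, k).mean(axis=1)

x = np.tile([0.0, 255.0], 8)          # the worst case for plain decimation
print(prefiltered_downsample(x, 2))   # [127.5 127.5 127.5 127.5 127.5 127.5 127.5 127.5]
```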

--- Oleg Kobchenko <[EMAIL PROTECTED]> wrote:

The "resize_image" mentioned below uses nearest-neighbor
sampling, also described in
   http://www.jsoftware.com/jwiki/Scripts/Zoom_View

It is fast, which is good for interactive but approximate
applications. However, for long-lasting, quality images
it is highly unacceptable due to
noticeable distortion. You can simply tell by looking
at the result that it was produced by nearest-neighbor.

A better result for down-sampling would be a weighted
average approach, such as linear interpolation. This is
fine for smooth natural photographs.
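In one dimension, the weighted-average approach looks like this
(a Python sketch, added for illustration): each output sample is a
linear blend of its two nearest input samples.

```python
import numpy as np

def linear_resample(x, new_len):
    """Resample a 1-D signal by linear interpolation: each output is a
    weighted average of the two nearest input samples."""
    old = np.linspace(0.0, 1.0, len(x))
    new = np.linspace(0.0, 1.0, new_len)
    return np.interp(new, old, x)

x = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
print(linear_resample(x, 3))
# The midpoint output blends its neighbors (20 and 30) into 25.
```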

However, for precision down-sampling, such as making
smaller screenshots so that features are preserved
and fonts remain readable, better algorithms exist. For
many J Wiki screenshots, Lanczos resampling was used,

  http://www.jsoftware.com/jwiki/EditingGuidelines/Images

which uses a bell-shaped kernel with wavy rims--similar
to what Sony was showcasing for its camera/camcorder image
processors.
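The Lanczos kernel itself is short to write down (a Python sketch,
added for illustration): a sinc windowed by a wider sinc, zero outside
|x| < a.

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0.

    The central lobe is the bell; the small negative side lobes are the
    "wavy rims" that preserve edge sharpness a plain bell-shaped
    (e.g. Gaussian) kernel would smear away.
    """
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)   # np.sinc is the normalized sinc
    return np.where(np.abs(x) < a, out, 0.0)

xs = np.linspace(-3, 3, 13)
print(np.round(lanczos_kernel(xs), 3))
# 1.0 at the center, zeros at the integers, negative lobes near +-1.5.
```

As far as I know, Pillow's LANCZOS resampling filter for Image.resize
uses this kernel with a = 3; whether an equivalent exists in J is the
open question below.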

However, I don't know if those exist in J.
...

--
Clifford A. Reiter
Mathematics Department, Lafayette College
Easton, PA 18042 USA,   610-330-5277
http://www.lafayette.edu/~reiterc
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
