John Francis explained:
> > Is this something that has always been a problem
> > with video technology or
> > is it (a) just since the digital revolution 
> >  and   (b) only something to worry about in MOVING
> > images?
> 
> No, and no.
> 
> It's not quite the same sort of thing as a Moire effect,
> although both fall under the general umbrella of what is
> referred to as 'aliasing'. 

Both the cause and the appearance of the result are
similar enough to Moire that I would not be surprised
to hear it casually/carelessly referred to as a Moire
effect.  In fact, they're similar enough that one can
use an example of a Moire effect to _illustrate_ his
explanation of video aliasing (including the reason that
the effect only appears when the feature size and the
resolution are very close to each other).

What follows is neither a replacement for John's 
explanation nor a disagreement with it.  It's more
like a "further study", or "experiments you can do at
home" to amplify his explanation.


I'm pretty sure most of us are familiar with the most
common examples of Moire patterns -- two layers of a
sheer fabric, for example.  I'd suggest using two 
pieces of window screen.  Call the one closest to you
the "sample mask", akin to the sensor in a video 
camera or the colour mask of a television screen.
Call the farther one the "subject".  If they're right
against each other and perfectly aligned, the "subject"
disappears, being an exact copy of the "mask".  But
if you tilt the farther screen, or move it a short
distance away so that perspective makes them no longer
appear the same size, you start seeing Moire patterns.
Light and dark areas that do not exist in either subject 
or mask appear in the image.
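
If you don't have two pieces of window screen handy, you
can fake the experiment in a few lines of Python.  This is
a rough one-dimensional sketch with made-up periods and
window sizes of my own choosing: each screen is a
transmission mask, light gets through only where both
screens have a gap, and "what your eye sees" is the average
brightness over a small window.

```python
# Rough 1-D sketch of the two-window-screen experiment.
# Each screen is a binary transmission mask: 1 = gap, 0 = wire.
def grating(period, length):
    # half of each period is wire, half is gap
    return [1 if (i % period) < period // 2 else 0 for i in range(length)]

def overlay(a, b):
    # light passes only where BOTH screens have a gap
    return [x * y for x, y in zip(a, b)]

def band_brightness(mask, window):
    # average transmission over consecutive windows -- roughly what
    # your eye sees when it blurs the fine structure together
    return [sum(mask[i:i + window]) / window
            for i in range(0, len(mask) - window + 1, window)]

length = 2200
aligned = overlay(grating(10, length), grating(10, length))
shifted = overlay(grating(10, length), grating(11, length))  # "tilted" screen

uniform = band_brightness(aligned, 50)
moire   = band_brightness(shifted, 50)

# Aligned screens: every band has the same brightness -- no pattern
# beyond the screens themselves.
print(max(uniform) - min(uniform))   # 0.0
# Mismatched screens: brightness swings between light and dark bands
# that exist in neither screen -- the Moire pattern.
print(max(moire) - min(moire))
```

The "tilt" here is just a 10% mismatch in period.  Make
the two periods equal and the bands vanish; make them
wildly different and the bands become too faint to matter,
which is the feature-size point again.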

Replace the "subject" screen with a chess board, and
the aliasing artifacts aren't terribly noticeable:
if the chess board is tilted slightly, the edges of 
the squares will interact with the lines of the screen
to make them not appear _perfectly_ straight, but you
have to really get close to notice it.  Replace the
chess board with fine-weave cloth, and again the patterns
don't interact much -- you see either the shape of the
screen or the weave of the cloth despite the screen,
but you don't see Moire effects unless the screen is
finer than what I'm picturing as I write this.  :-)

That's why details much larger _or_ much smaller than
the sampling rate don't cause the same kind of glitchiness
as details close to the size of the sampling rate.
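
You can see the same thing numerically.  Here's a little
Python sketch (toy numbers, mine) of a stripe pattern
sampled at a rate only barely above the pattern's own
spatial frequency -- the samples come out looking like a
much coarser pattern that isn't really there:

```python
import math

def sample(freq, rate, n):
    # point-sample a sinusoidal "stripe pattern" of spatial frequency
    # `freq` (cycles per unit) at `rate` samples per unit
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

# 9 cycles/unit sampled 10 times/unit: detail size very close to the
# sampling rate.  The samples trace out a slow 1-cycle/unit wave.
aliased  = sample(9, 10, 20)
true_low = sample(1, 10, 20)

# sin(2*pi*9*i/10) == -sin(2*pi*1*i/10) for every integer i, so the
# samples of the fine pattern are indistinguishable from a coarse
# pattern (flipped in sign) -- that's the alias.
print(all(abs(a + b) < 1e-9 for a, b in zip(aliased, true_low)))  # True
```

Sample the same 9-cycle pattern at 100 samples per unit and
it comes through faithfully; sample a half-cycle-per-unit
pattern at the original 10 per unit and it's also fine.
Only the close match causes trouble.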

Now look at the two screens again, and look at how the
Moire effect appears.  Look at it square by square.  If
instead of "a square with a line of the subject across it 
at an angle" you interpret it as "this square has a glitch
in it", a glitch which can cause a colour shift or an 
incorrect brightness value when the electronics of the
camera process it, depending on how much of the square
is obscured by the line crossing it, that's sortakinda
what's going on with the television camera.  (Important
word:  "sortakinda".)  The fact that it's a "hard" transition
matters.  The peculiarities of the way video is encoded
also matter.  

Look at the chess board again.  Copy what you see to a 
sheet of graph paper, but fill in each square with a solid
block of black or white depending on whether the square
of the screen has more black or more white in it.  On your
graph paper, diagonal lines have been replaced with a
stair-step pattern.  In computer graphics we call (or used
to call) these "the jaggies", and they're another example
of aliasing.  Do the same thing again, but use six shades
of grey as well as black and white, choosing the shade of
grey closest to the _average_ colour for each square.  Note
how the "jaggies" are less obvious, especially if you squint?
Using eight colours is better than using two for rendering
shapes so that they look right when you don't have infinite
resolution, with the cost being that you no longer have an
absolutely definite edge if you look too close.  If you're
trying to _draw_ a solid black-and-white shape on a computer
screen, such as a circle ... or a _letter_ in a typeface ...
you get the jaggies.  If you do the math to convert a 
higher-resolution version of the shape into "how it would
be shaded in grey-scale when converted to the resolution
we're really going to use", that's "anti-aliasing".  When
you place text in an image with GIMP or Photoshop and one
of the options is whether to use anti-aliasing, that's 
what it means.  It makes your text look smoother.  

Do the same thing with the fine-weave fabric instead of 
the chess board, and you get a grid filled with a solid
colour -- the details beyond the resolution of the mask
have been lost ... but there's no glitchiness.  Neither
Moire nor odd chroma artifacts.
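
In Python terms (the same kind of toy sketch, numbers
mine): average an 8x8-cell grid over a one-pixel
checkerboard -- a "weave" far finer than the cells -- and
every cell lands on exactly the same grey:

```python
# A "fine weave": one-pixel checkerboard, much finer than the cells.
HI = 64
CELL = 8
weave = [[(x + y) % 2 for x in range(HI)] for y in range(HI)]

def cell_average(img, cy, cx):
    return sum(img[cy * CELL + dy][cx * CELL + dx]
               for dy in range(CELL) for dx in range(CELL)) / CELL ** 2

n = HI // CELL
coarse = [[cell_average(weave, cy, cx) for cx in range(n)] for cy in range(n)]

# Every cell averages to exactly 0.5: flat grey, no Moire, no glitches.
# The detail is simply gone.
print(coarse[0])   # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```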


For anyone who isn't already a math-weenie: if the
following statement wasn't already clear, it should be
now, even if the phrase "spatial frequency" was
unfamiliar:

> The Moire effect is caused by interference between two
> spatial frequencies - it's the spatial analogue of the
> temporal wagon-wheel reversing effect in movies.  Moire
> fringes show up in still images - no motion required.

At this point, if John's explanation wasn't crystal clear
before, it's time to go re-read his message.  (And if it
already was completely clear, I've just wasted your time.)


The remaining unexplained bit is in the paragraph where
I said that in video instead of getting a Moire effect 
or "normal" aliasing, you get "glitchiness" which may
show up as odd colour changes -- what John referred to 
as "chroma crawl".  That's because I don't know the exact
mechanism of those glitches yet.  If I Google for the
details of NTSC encoding (the video encoding used for
classic (non-HDTV) colour television in North America --
I don't know how this affects PAL or HDTV), I'm enough of 
a math-weenie that the reason for that particular glitch 
will probably jump out at me and make me go "Duh!", but 
really that detail isn't _that_ important right now -- 
basically what you need to know is that too many too-quick 
transitions confuse televisions in different ways than
they confuse GIF files or people using graph paper to 
pretend they're computers, and the result can be really
distracting.  

The thing is, now you can see a) why and how the TV
camera gets confused, b) how this is like and unlike
Moire patterns, and c) why and how the _size_ of the 
pattern (relative to the resolution of the capture device)
matters.  

Extreme close-ups are going to act more like the chess
board.  :-)

                                        -- Glenn
