On Mon, Aug 31, 2009 at 7:19 PM, James Richard Tyrer<[email protected]> wrote:
> I think that persistence might be an issue.  With CRTs, TV sets have a
> longer-persistence phosphor than computer monitors because they need to
> display interlaced images at 30 frames per second.  I do not know if that is an
> issue with LCD sets, but if they are going to display 3D at even 24 frames
> per second, then they are going to need to have some control over the
> persistence.

Someone will argue with me or point out an exception, but...

In general, all LCD displays are 60Hz.  First, companies like IDTech
make the "glass", which has an LVDS interface, and then they sell the
glass to companies like Planar, who add the stands, housings, etc.,
and supply the DVI interface.  The LVDS interfaces are fixed at
60Hz and scan in groups of columns in a different order from DVI.  So
in between DVI and LVDS is a framebuffer.  You can load via DVI that
framebuffer at whatever rate you want, but the logic in the panel will
only ever scan it out to the glass at 60Hz.
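That decoupling can be sketched as a toy simulation (the function name and timings are mine, purely for illustration): the DVI source fills the framebuffer at whatever rate it likes, while the panel's scan-out samples it at a fixed 60Hz, so input frames get repeated or dropped.

```python
# Toy sketch (hypothetical, not any vendor's actual logic): a source writes
# frames into a framebuffer at an arbitrary rate, while the panel's LVDS
# scan-out reads whatever is there at a fixed 60 Hz.

def scanout_frames(input_fps, input_frames, scan_hz=60.0, duration_s=1.0):
    """Return which input frame index the panel displays on each 60 Hz scan."""
    displayed = []
    n_scans = int(scan_hz * duration_s)
    for scan in range(n_scans):
        t = scan / scan_hz
        # The framebuffer holds the most recently completed input frame.
        fb_index = min(int(t * input_fps), input_frames - 1)
        displayed.append(fb_index)
    return displayed

# Feeding 24 fps video into a 60 Hz panel repeats frames in a 3:2 cadence.
shown = scanout_frames(input_fps=24, input_frames=24)
```

Running this, each 24fps frame gets held for either two or three 60Hz scans, which is just the familiar 3:2 pulldown falling out of the arithmetic.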

Also, the "persistence" of LCDs is much longer than the frame period.
The LCD pixels are sample-and-hold.  They hold their state until
they're changed in the next frame.  There's no fade-out like with a
CRT.  This plays havoc with the human eye.  We expect to actually see
things moving in a totally analog way.  But due to persistence of
vision, something that flickers like a CRT or a movie projector will
work with the eye.  But LCDs jump from one image to the next, and
persistence of vision makes us therefore see both the old and new
images at the same time, resulting in a weird kind of motion blur.
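A back-of-the-envelope way to see the size of that effect (my numbers, not from any paper): when the eye smoothly tracks a moving object on a sample-and-hold display, each static frame smears across the retina for the whole hold time, so the perceived blur width is roughly speed times hold time.

```python
# Rough sketch of hold-type blur width; the function and numbers are
# illustrative, not measured data.

def hold_blur_px(speed_px_per_s, refresh_hz, duty_cycle=1.0):
    """Perceived smear in pixels; duty_cycle < 1 models a strobed backlight."""
    hold_time_s = duty_cycle / refresh_hz
    return speed_px_per_s * hold_time_s

# An object panning at 600 px/s on a 60 Hz sample-and-hold panel smears
# about 10 px; strobing the backlight at a 20% duty cuts that to about 2 px.
full_hold = hold_blur_px(600, 60)
strobed = hold_blur_px(600, 60, duty_cycle=0.2)
```

The duty-cycle knob is why shortening how long each frame is actually lit (flicker, in other words) helps, even at the same 60Hz refresh.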

Someone named Klompenhouwer published a paper in 2004 at the Society
for Information Display.  He characterized the display as a "transfer
function" and the eye as a "transfer function" in terms of how motion,
space, and time are filtered by the apparatus (the eye and the
monitor).  Making an analogy with matrix algebra, let's call L the LCD
transfer function, C the CRT transfer function, and E the eye transfer
function.  Let's say we like C*E.  But we don't like L*E.  If we can
compute the inverse of L, then we can compensate for the LCD.  We
preprocess the video in software or hardware before it gets to the
monitor where we apply something like C*inv(L).  When it goes through
the LCD, we get C*inv(L)*L*E, which simplifies to C*E.  This
oversimplifies to amplifying the high-frequency components of the
video in the direction of motion for the moving objects.  What
complicates this is that we have to make assumptions about where
people are looking and how their eyes actually move (saccades),
although humans are surprisingly consistent (we instinctively follow
what's moving, and our eyes 'leap' from one thing to another, while
the brain ignores the blur during the eye motion).
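The C*inv(L) trick can be sketched in one dimension (sizes, the regularizer, and the function names are my own choices, not Klompenhouwer's actual filter): model the hold-type blur L as a box filter along the motion direction, and precompensate the signal with a regularized inverse of L, so that after the display applies L the edge comes out close to the original.

```python
import numpy as np

# Toy 1-D sketch of precompensating with inv(L): L is modeled as a box
# filter over the per-frame motion, inverted in the frequency domain with
# a small regularizer so near-zero frequencies of L don't blow up.

def precompensate(signal, motion_px, eps=1e-3):
    """Apply a regularized inv(L) to the signal before it hits the display."""
    n = len(signal)
    box = np.zeros(n)
    box[:motion_px] = 1.0 / motion_px              # L: smear over motion_px pixels
    L = np.fft.fft(box)
    inv_L = np.conj(L) / (np.abs(L) ** 2 + eps)    # regularized 1/L
    return np.real(np.fft.ifft(np.fft.fft(signal) * inv_L))

def display_blur(signal, motion_px):
    """The panel's hold blur L itself (circular convolution with the box)."""
    n = len(signal)
    box = np.zeros(n)
    box[:motion_px] = 1.0 / motion_px
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(box)))

# A sharp edge survives the display better when precompensated first.
edge = np.r_[np.zeros(32), np.ones(32)]
err_plain = np.abs(display_blur(edge, 4) - edge).sum()
err_comp = np.abs(display_blur(precompensate(edge, 4), 4) - edge).sum()
```

The precompensated input has overshoot ringing around the edge, which is exactly the "amplify the high-frequency components in the direction of motion" described above.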

If you could speed up the LVDS framerate, that would HELP.  It
wouldn't get rid of the problem completely unless you could have video
that was actually like 1000fps (or something, I'm guessing).  Of
course, your video recordings are 30fps.  Another option is to cycle
the rows of
backlights out of phase with the rasterscan so that the pixels change
where the backlights are dark, so that you don't see the transition.
This creates a 60Hz flicker that you can see out of the corner of your
eye.
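The backlight-cycling idea amounts to phase scheduling, roughly like this (all timings and names invented for illustration): each row's backlight strobe is placed half a frame away from the moment the raster scan rewrites that row's pixels, so the LC transition happens while that row is dark.

```python
# Rough sketch of scanning-backlight phasing; timings are hypothetical.

def backlight_on_window(row, n_rows, refresh_hz=60.0, duty_cycle=0.2):
    """Return (on_time, off_time) in seconds within one frame for this row."""
    frame_s = 1.0 / refresh_hz
    update_t = (row / n_rows) * frame_s          # when the raster scan hits this row
    on_t = (update_t + 0.5 * frame_s) % frame_s  # strobe opposite the update
    return on_t, (on_t + duty_cycle * frame_s) % frame_s
```

Each row is then only lit for a fraction of the frame, which is also where the visible 60Hz flicker in peripheral vision comes from.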

-- 
Timothy Normand Miller
http://www.cse.ohio-state.edu/~millerti
Open Graphics Project
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
