Let me summarize the options you've discussed and comment on each.
Assume 59.94 video.

Option 1: Run the display at 59.94. This is what you were attempting to do
by inserting modelines, I presume? Using this method you don't
introduce any more judder than already existed in the video sequence.

The issue I have here is that you are inserting unknown modelines.
Really the only one with any right to determine the available modelines
is the graphics driver. The driver usually has (on other OSes; XFree
is a little different) a set of canned timings and then may pare them
down or add a few more after talking it over with the monitor. XFree
moved most of this up into the device-independent portion since most
drivers make use of the same canned timings. This isn't ideal but it
works most of the time. Allowing user-defined modelines in XF86Config
is bad enough, but having apps insert modelines on the fly is really
scary.  The ideal solution here would be to let the driver have a
set of "available" timings as well as the set of "active" ones (the
ones that are in use in the CTRL-ALT+- list). Then your app could
query the driver for a supported set of timings, even when the user isn't
actively using them. At least this way the driver has the ability to
provide a "table" of known-good timings.

Option 2: Run the display really fast and hope nobody notices. This is
the easiest option and probably works pretty well. The faster the refresh,
the smaller the added judder; go fast enough and it just doesn't matter
anymore.
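To put rough numbers on that (my own back-of-the-envelope sketch, not
anything from the thread): if every video frame has to start on a display
refresh tick, the worst-case timing error is bounded by half a refresh
period, so it shrinks as the refresh rate climbs.

```python
# Rough sketch (illustrative only): worst-case timing error when
# 59.94 fps video frames must each start on a display refresh tick.
# Snapping each frame to the nearest tick bounds the error at half
# a refresh period, so faster refresh means less added judder.
def added_judder_ms(display_hz, video_fps=59.94, frames=600):
    period = 1.0 / display_hz
    worst = 0.0
    for n in range(frames):
        ideal = n / video_fps                    # when the frame "should" appear
        actual = round(ideal / period) * period  # nearest refresh tick
        worst = max(worst, abs(actual - ideal))
    return worst * 1000.0

for hz in (60, 85, 100, 120):
    print(f"{hz:>3} Hz: up to {added_judder_ms(hz):.2f} ms of added judder")
```

At 100 Hz the error can never exceed 5 ms per frame; at 60 Hz it creeps
up toward 8.3 ms, which is where the visible judder comes from.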

Option 3: Work on the video stream to make the judder go away. This is
very hard, but it seems to be the goal of your deinterlacer anyway,
right?  The video you are getting at 59.94 may be the result of 3:2
pulldown, so it may already have judder. You have to detect this and
get back to the original 24 fps to get rid of the judder. Plus you may
have to timeshift half the fields to get rid of the jaggies. Is it really
that absurd to add the additional step of weighting the pixels as
described in your link? Seems like that would produce excellent
results. This also has another advantage: it scales up with faster
processors.
For example, assume infinite processor power. If your video is 59.94
with 3:2 pulldown, you've got 24 fps of real video. Assume your display
is running at 100 Hz. You could display 100 fps by linearly weighting and
blending the pixels of your 24 fps video to generate 100 fps of unique
video. Basically this is motion blur for video.
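As a toy sketch of that linear weighting idea (the function name and the
list-of-floats frame format are mine, purely for illustration), each
output frame is a weighted blend of the two nearest source frames:

```python
# Toy sketch of linear weighting/blending for frame-rate conversion.
# A "frame" here is just a flat list of pixel intensities; a real
# implementation would work on full images, but the weighting is
# identical per pixel.
def blend_frames(src_frames, src_fps=24.0, dst_fps=100.0):
    out = []
    n_out = int(len(src_frames) * dst_fps / src_fps)
    for i in range(n_out):
        t = i * src_fps / dst_fps                # position on the source timeline
        a = min(int(t), len(src_frames) - 1)     # earlier source frame
        b = min(a + 1, len(src_frames) - 1)      # later source frame
        w = t - int(t)                           # weight of the later frame
        out.append([(1 - w) * pa + w * pb
                    for pa, pb in zip(src_frames[a], src_frames[b])])
    return out
```

Feeding in two one-pixel frames [0.0] and [1.0] at src_fps=1, dst_fps=2
yields [0.0], [0.5], [1.0], [1.0] — the in-between frame is the blur.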

The link you gave also suggests that flat panels, with their "always on"
style pixels, are not ideal for video because the eye can detect the
judder more easily than with a CRT's "flashing" pixels. Blurring the
video would probably produce better results at high speed than
clean pixels would.

I vote for #3, let me know when you're done :)

-Matt


_______________________________________________
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert