Sottek, Matthew J ([EMAIL PROTECTED]):

> Let me summarize the options you've discussed and comment on each.

  Matt!  Great summary of the options.

> Assume 59.94 video.

  Let's assume true 59.94fps video with no 3:2 pulldown or other
conversions, and that I want to display it at full motion or better.
Playback of 24/25fps video isn't a problem, and 29.97fps video isn't too
big a deal either, unless you're outputting to a TV.

> Option 1: Run the display at 59.94. This is what you were attempting
> to do by inserting modelines I presume? Using this method you don't
> introduce any more judder than already existed in the video sequence.
> 
> The issue I have here is that you are inserting unknown modelines.
> [...] The ideal solution here would be to let the driver have a set of
> "available" timings as well as the set of "active" ones (The ones that
> are in use in the CTRL-ALT+- list) Then your app could query for a
> driver supported set of timings, even when the user isn't actively
> using them. At least this way the driver has the ability to provide a
> "table" of known good timings.

  I'd love to be able to adjust the refresh rate through a nice API
rather than by feeding in raw modelines.  I think that would be ideal,
and it makes sense in a world of both LCD screens and CRTs.  I would
really like to see 59.94fps output working with the refresh rate synced
before I start trying to do fancy interpolations.

  VidMode does have an API to query the list of dot clocks.  Do you
think it would be reasonable to export a higher-level API on top of
that?
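
  As a sketch of what I can do today (from memory, not tested, so the
details may be slightly off): walk the modes the server already knows
about with XF86VidModeGetAllModeLines(), work out each one's refresh
rate from the modeline, and switch to whichever is closest to 59.94Hz
with XF86VidModeSwitchToMode().

/* Sketch from memory, not tested.  Field names are as I remember them
 * from X11/extensions/xf86vmode.h; link with -lXxf86vm -lXext -lX11. */
#include <stdio.h>
#include <math.h>
#include <X11/Xlib.h>
#include <X11/extensions/xf86vmode.h>

int main(void)
{
    Display *dpy = XOpenDisplay(0);
    XF86VidModeModeInfo **modes;
    double target = 60000.0 / 1001.0;   /* 59.94... */
    double bestdiff = 1e9;
    int screen, count, i, best = -1;

    if (!dpy) return 1;
    screen = DefaultScreen(dpy);

    if (!XF86VidModeGetAllModeLines(dpy, screen, &count, &modes))
        return 1;

    for (i = 0; i < count; i++) {
        /* dotclock is in kHz: refresh = dotclock*1000 / (htotal*vtotal) */
        double refresh = (double) modes[i]->dotclock * 1000.0 /
                         (modes[i]->htotal * modes[i]->vtotal);

        printf("%dx%d @ %.3fHz\n",
               modes[i]->hdisplay, modes[i]->vdisplay, refresh);
        if (fabs(refresh - target) < bestdiff) {
            bestdiff = fabs(refresh - target);
            best = i;
        }
    }

    if (best >= 0)
        XF86VidModeSwitchToMode(dpy, screen, modes[best]);

    XFree(modes);
    XCloseDisplay(dpy);
    return 0;
}

  The trouble is that this can only pick from modes the user already has
configured, which is exactly why a driver-exported table of known good
timings, as you describe, would be so much nicer.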

> Option 2: Run the display really fast and hope nobody notices. This is
> the easiest and probably works pretty well. The faster the refresh the
> smaller the added judder, go fast enough and it just doesn't matter
> anymore.

  Yep.  But this only works if your monitor can run at >= 95Hz, which
isn't possible on any LCD screen I've seen.
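
  Just to put numbers on "go fast enough and it just doesn't matter":
each frame's ideal display time gets pushed back to the next vertical
retrace, so the added delay is bounded by one refresh period.  A toy
calculation (nothing to do with any driver):

/* Toy calculation: worst-case delay added when snapping a 59.94fps
 * stream onto a few candidate refresh rates. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double fps = 60000.0 / 1001.0;                /* 59.94... */
    const double rates[] = { 60.0, 72.0, 85.0, 95.0, 120.0 };
    int r, n;

    for (r = 0; r < 5; r++) {
        double period = 1.0 / rates[r];
        double worst = 0.0;

        for (n = 0; n < 1000; n++) {
            double ideal = n / fps;
            double actual = ceil(ideal / period) * period;

            if (actual - ideal > worst)
                worst = actual - ideal;
        }
        printf("%6.2fHz refresh: worst added delay %4.1fms\n",
               rates[r], worst * 1000.0);
    }
    return 0;
}

  At 60Hz the worst case is nearly a full 17ms; at 120Hz it's half that,
which is the whole argument for option 2.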

> Option 3: Work on the video stream to make the judder go away. This is
> very hard but this seems to be the goal of your deinterlacer anyway
> RIGHT?

  Sure, but right now I'm spending a lot of time just showing the frames
as I get them.  Remember that I have to copy/transform the image from
V4L into a shared X buffer, and then X has to copy that again into video
memory before displaying it.  The extra copy hurts at
768 x 480 x 16bpp x 60Hz.
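
  To put rough numbers on that (my arithmetic, not measured): 768 x 480
at 16bpp is about 720KB per frame, so at 60 frames per second each copy
is on the order of 42MB/s, and doing it twice is over 80MB/s of memory
traffic before the deinterlacer has done any work at all.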

> Is it really that absurd to add in the additional step of weighting
> the pixels as was described in your link? Seems like that would
> produce excellent results. This also has another advantage, it scales
> up with faster processors.

  Well sure, but note how the author pans linear interpolation as a
potentially bad idea.  I haven't tried it yet, and I will soon, but I'm
not optimistic that it's a reasonable conversion method.
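
  For reference, here is what I understand "weighting the pixels" to
mean: a straight temporal blend of the two nearest source frames, with
the weight set by where the output frame falls in time between them.
Sketch only (8-bit samples, one plane), not code from my player:

#include <stddef.h>

/* weight runs from 0 (output lands exactly on prev) to 256 (exactly on
 * next); one multiply-accumulate per pixel per output frame. */
static void blend_frames(unsigned char *out,
                         const unsigned char *prev,
                         const unsigned char *next,
                         int weight, size_t npixels)
{
    size_t i;

    for (i = 0; i < npixels; i++)
        out[i] = (unsigned char)
            ((prev[i] * (256 - weight) + next[i] * weight) >> 8);
}

  That does scale with CPU speed as you say, but it also softens every
moving edge, which I suspect is exactly what the author was warning
about.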

-- 
Billy Biggs
[EMAIL PROTECTED]
_______________________________________________
Xpert mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/xpert
