On Tue, 11 Dec 2007 04:57:48 +0100, IL'dar AKHmetgaleev
<[EMAIL PROTECTED]> wrote:
On Mon, 10 Dec 2007 17:43:20 +0100,
"Herman Robak" <[EMAIL PROTECTED]> wrote:
Which leads me to another missing feature: "fit to (size)". If you
choose a canvas size, say PAL, then using the camera/projector to
resize HDV or Youtube clips to fit a 4:3 PAL frame is a bit of work.
Especially if you want to preserve interlacing!
Perhaps it would be easier to treat interlaced video as video at double
the frame rate, by splitting each frame's fields into separate frames.
That would help especially when interlaced and progressive images must
be combined in the same project. And of course, filters like blur would
be much easier to implement.
With smart re-framing based on optical flow algorithms, it would even be
possible to interlace or deinterlace video just by re-framing.
Just before exporting and/or viewing, such images could be converted
back to interlaced form by joining odd and even frames into fields.
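The split/weave round trip described above can be sketched roughly as
follows (a minimal NumPy sketch, not Cinelerra code; the function names
are illustrative). Each interlaced frame is split into a top field
(even rows) and a bottom field (odd rows), which then play as two
frames at double the rate; weaving re-interleaves them on output:

```python
import numpy as np

def split_fields(frame):
    """Split one interlaced frame (H x W) into two field-frames.

    The top field carries the even-numbered scanlines, the bottom
    field the odd-numbered ones. Treating the two fields as separate
    frames doubles the nominal frame rate.
    """
    top = frame[0::2]     # even rows -> top field
    bottom = frame[1::2]  # odd rows  -> bottom field
    return top, bottom

def weave_fields(top, bottom):
    """Re-interleave two field-frames into one interlaced frame."""
    h, w = top.shape
    frame = np.empty((h * 2, w), dtype=top.dtype)
    frame[0::2] = top
    frame[1::2] = bottom
    return frame
```

Splitting and weaving are lossless inverses, so the round trip changes
nothing; the cost is purely the doubled frame count in the pipeline.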
This workaround would be acceptable for standard definition video.
For 1080i, however, the extra memory footprint and bandwidth would
be significant.
Another re-think: Do we need a project framerate?
The framerate is a property of the source material and the rendered
output. Does the render pipeline need a fixed internal framerate?
--
Herman Robak
_______________________________________________
Cinelerra mailing list
[email protected]
https://init.linpro.no/mailman/skolelinux.no/listinfo/cinelerra