On Saturday, 28 November 2020, 15:49:09 GMT, Simon Roberts
<[email protected]> wrote:
> So... the term "field" isn't used to describe one half of an interlaced
> frame? 'coz I have devices that claim to output progressive and interlaced
> as a choice, and they're not CRTs..
This is a somewhat vexed question that, in the end, comes down to semantics.
For the sake of a comprehensive overview, let's take it from the very start;
most people will already know the early parts, but it's worth being complete.
In the beginning, analogue video was transmitted wirelessly and displayed on
cathode ray tubes. Frame rates were 25 to 30 frames per second. At that rate,
the image would have flickered visibly, but increasing the frame rate to
compensate would require faster electronics and more radio bandwidth, which was
considered undesirable. As an alternative, the decision was made to first send
all the odd lines, and then send all the even lines, in two separate fields.
The result is sometimes visible as a slight vertical shimmer in interlaced
video displayed on a CRT, more so at 25fps than at 30fps. Crucially, it means
that the odd lines and even lines are not photographed by the camera at the
same instant in time, so the resulting pictures have some of the motion
rendering characteristics of 50fps (or 60fps) video.
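As a rough sketch of the idea (purely illustrative; the names and the
list-of-lines model are mine, not any real video API), splitting a frame into
its two fields looks like this:

```python
# Illustrative only: model a tiny frame as a list of scan lines and split
# it into the two fields an interlaced system would transmit in turn.
frame = [f"line {n}" for n in range(1, 9)]  # an 8-line "frame"

odd_field = frame[0::2]   # lines 1, 3, 5, 7 -- sent first
even_field = frame[1::2]  # lines 2, 4, 6, 8 -- sent second

# The screen refreshes once per field, so a 25fps signal flickers like a
# 50Hz one without needing twice the line rate or radio bandwidth.
print(odd_field)   # ['line 1', 'line 3', 'line 5', 'line 7']
print(even_field)  # ['line 2', 'line 4', 'line 6', 'line 8']
```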
The first complexity in this matter involved transferring film to video. Film
can be shot at 50fps and each frame transferred to a single video field, but
this is rarely done. It's much more usual to shoot film at 24fps (or 25fps in
regions traditionally using 25fps video) and use the same film frame for both
fields. On a CRT this traditionally still produces interlace shimmer, but both
fields were photographed at the same instant. The altered motion rendering this
produces is a large part of the "film look." The practice of
transferring 24fps film to 30fps formats uses a technique called 3:2 pulldown
which is outside the scope of this discussion, although it is possible to shoot
film at 30fps and end up with the same situation, technically.
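Though the details are out of scope, the 2-3-2-3 field pattern behind 3:2
pulldown can be sketched in a few lines (a toy model, assuming the input comes
in groups of four film frames; `pulldown_32` is a name I've made up):

```python
# Illustrative 3:2 pulldown: each group of 4 film frames yields 10 fields
# (5 interlaced video frames), converting 24fps to 30fps (60 fields/s).
def pulldown_32(film_frames):
    fields = []
    counts = [2, 3, 2, 3]  # fields taken from each film frame in a group
    for frame, n in zip(film_frames, counts * (len(film_frames) // 4)):
        fields.extend([frame] * n)
    return fields

print(pulldown_32(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```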
When companies such as Sony started making video equipment aimed at replacing
film, capable of shooting at 24fps and without interlacing, lots of equipment
still expected to receive all of the odd lines first, then all of the even
lines afterward. In particular, CRT monitors were still in widespread use, and
would have suffered the old flicker problem if they'd tried to display 24fps
images without interlacing. To compensate, the simple decision was made to
interlace the image, sending the odd lines and then the even lines, even though
the camera had photographed them all at the same instant. Sony called this
"progressive segmented frame," or PsF.
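A quick sketch of why this is harmless to reassemble (again purely
illustrative): because both fields come from the same exposure, weaving them
back together reconstructs the progressive frame exactly:

```python
# PSF sketch: both fields are taken from one captured frame, so weaving
# them back together restores the original lines in the original order.
frame = [f"line {n}" for n in range(1, 9)]
odd_field, even_field = frame[0::2], frame[1::2]

woven = [None] * len(frame)
woven[0::2] = odd_field   # lines 1, 3, 5, 7 back into place
woven[1::2] = even_field  # lines 2, 4, 6, 8 back into place

assert woven == frame  # no combing: both fields share one instant in time
```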
This is convenient because a digital video stream containing interlaced frames
can theoretically contain either interlaced or progressive material, or even
both sequentially, without the equipment that handles it needing to know much
about what the image data actually represents.
It turns out this can be a bad idea, because interlaced video displays nicely
on a CRT but poorly on almost anything else. LCD displays show all of the lines
simultaneously, so that the time difference between them can become visible as
combing. LCD displays will often deinterlace video they believe to be
interlaced in order to reduce this. That can be done well, but done poorly or
unnecessarily it will reduce the resolution of the video, perhaps noticeably.
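One crude deinterlacing approach, often called "bob," shows each field on its
own and duplicates lines to fill the gaps; it avoids combing at the cost of
exactly the vertical resolution loss described above. A toy sketch (the
function name is mine):

```python
# Toy "bob" deinterlace: display one field per output frame, repeating
# each field line to fill in the missing lines. Halves vertical detail.
def bob(field):
    out = []
    for line in field:
        out.extend([line, line])  # duplicate each field line
    return out

odd_field = ["line 1", "line 3", "line 5", "line 7"]
print(bob(odd_field))
# ['line 1', 'line 1', 'line 3', 'line 3', 'line 5', 'line 5', 'line 7', 'line 7']
```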
Modern equipment often has plenty of frame buffers and can drop or duplicate
frames to its heart's content. Many displays negotiate mutually-compatible
configurations with the equipment they're connected to, so it's entirely
possible that connecting the same camera to two different monitors will yield
different signals each time, perhaps including duplicated frames to achieve a
compatible signal format.
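Frame duplication to reach a compatible rate can be sketched like this (a toy
nearest-frame repeat; real devices may interpolate or negotiate differently,
and `repeat_to_rate` is a name I've invented):

```python
# Toy rate conversion by repeating frames, e.g. fitting a 24fps source
# into a 60Hz display by repeating frames in a 3-2-3-2 pattern.
def repeat_to_rate(frames, src_fps, dst_fps):
    out = []
    for i in range(int(len(frames) * dst_fps / src_fps)):
        out.append(frames[int(i * src_fps / dst_fps)])
    return out

print(repeat_to_rate(["A", "B", "C", "D"], 24, 60))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```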
Well, that became an essay.
P
_______________________________________________
ffmpeg-user mailing list
[email protected]
https://ffmpeg.org/mailman/listinfo/ffmpeg-user
To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".