Re: [FFmpeg-user] bwdif filter question
On 09/23/2020 05:27 PM, Paul B Mahol wrote:
> On Wed, Sep 23, 2020 at 04:26:27PM -0400, Mark Filipak (ffmpeg) wrote:
> -snip-
>> Am I nitpicking? I think not. You are an authority. When an authority uses loose language, misunderstanding and confusion and angst must follow. But MPEG and ffmpeg seem to be primed to require loose language. That needs to end.
>
> Try to read and follow separatefields, weave and doubleweave filters documentation.

Thank you, Paul. I do try to read them.
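[Archive editor's note: for readers trying to follow the separatefields/weave discussion, the documented behavior of those two filters can be modeled in a few lines. This is a toy sketch on plain Python lists, not FFmpeg code: a "frame" is a list of scan lines, a separatefields-style split takes every other line, and a weave-style merge interleaves the two fields back into one frame.]

```python
# Toy model of the separatefields and weave filters (illustration only,
# not FFmpeg code). A frame is represented as a list of scan lines.

def separate_fields(frame):
    """Split a frame into (top_field, bottom_field) by taking alternate lines."""
    return frame[0::2], frame[1::2]

def weave(top, bottom):
    """Interleave two fields back into a single frame, top-field lines first."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

frame = ["line0", "line1", "line2", "line3"]
top, bottom = separate_fields(frame)
assert top == ["line0", "line2"]     # top field: top-most line and every other one
assert bottom == ["line1", "line3"]  # bottom field: the remaining lines
assert weave(top, bottom) == frame   # weaving the fields reconstructs the frame
```

Each output of the split is a half-height frame carrying one field's lines, which is why, at the filter-graph level, "a frame containing one field" is a meaningful object even though the samples came from an interleaved source.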
Is there something specific to which you can point? All inputs are accepted and appreciated. I'm sure we both endeavor to make ffmpeg better.

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-user] bwdif filter question
On Wed, Sep 23, 2020 at 04:26:27PM -0400, Mark Filipak (ffmpeg) wrote:
-snip-
> Am I nitpicking? I think not. You are an authority. When an authority uses loose language, misunderstanding and confusion and angst must follow. But MPEG and ffmpeg seem to be primed to require loose language. That needs to end.

Try to read and follow separatefields, weave and doubleweave filters documentation.
Re: [FFmpeg-user] bwdif filter question
On 09/23/2020 03:53 PM, Carl Eugen Hoyos wrote:
> Am Di., 22. Sept. 2020 um 00:47 Uhr schrieb Mark Filipak (ffmpeg):
>> On 09/21/2020 06:07 PM, Carl Eugen Hoyos wrote:
>>> Am Mo., 21. Sept. 2020 um 14:16 Uhr schrieb Mark Filipak (ffmpeg):
>>>> Here is what you wrote:
>>>> "The following makes little sense, it is just meant as an example:
>>>> $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"
>>>>
>>>> That "explains" nothing. Worse, it seems crass and sarcastic.
>>>
>>> No. This was an example to show you how you can feed one field to a filter in our system, this is what you had asked for ...
>>
>> I didn't ask for that.
>
> This is not true:
>> How can a frame contain just one field?

I did not ask for an example to see "how you can feed one field to a filter". I asked how a frame can contain just one field. You have yet to answer that. I think it's impossible. You may be referring to a frame that is deinterlaced and cut in half (e.g. from 720x576 to 720x288), in which case the frame contains no field.

You wrote: "(If you provide only one field, no FFmpeg deinterlacer will produce useful output.)" Of course I agree with the "no ... useful output" part, but how can a person "provide only one field"? That implies that "provide only one field" is an option. I think that's impossible, so I asked you how it was possible. I did not ask how to implement that impossibility on the command line (which I think is likewise impossible). It is along these lines that misunderstanding, confusion, and novice angst ensue.

Am I nitpicking? I think not. You are an authority. When an authority uses loose language, misunderstanding, confusion, and angst must follow. But MPEG and ffmpeg seem to be primed to require loose language. That needs to end.
Re: [FFmpeg-user] bwdif filter question
Am Di., 22. Sept. 2020 um 00:47 Uhr schrieb Mark Filipak (ffmpeg):
> On 09/21/2020 06:07 PM, Carl Eugen Hoyos wrote:
>> Am Mo., 21. Sept. 2020 um 14:16 Uhr schrieb Mark Filipak (ffmpeg):
>>> Here is what you wrote:
>>> "The following makes little sense, it is just meant as an example:
>>> $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"
>>>
>>> That "explains" nothing. Worse, it seems crass and sarcastic.
>>
>> No. This was an example to show you how you can feed one field to a filter in our system, this is what you had asked for ...
>
> I didn't ask for that.

This is not true:
> How can a frame contain just one field?

Carl Eugen
Re: [FFmpeg-user] bwdif filter question
On 09/23/2020 12:19 PM, Greg Oliver wrote:
> On Tue, Sep 22, 2020 at 1:14 PM Mark Filipak (ffmpeg) wrote:
> -snip-
>
> He has repeatedly posted to either understand or define better the internals of ffmpeg itself ...

Thanks for the kind words. Y'know, I'm not special or a wizard. I suffer the same assumptions as everyone. As I work on my glossary, I'm amazed when I realize something that I had wrong but had worked on steadily for weeks without actually seeing.

Let me give you an example. Last night I realized that no matter whether a stream is frame, TFF (top_field_first), or BFF (bottom_field_first), the macroblock samples have exactly the same order; it's the order in which these samples are read out by the decoder that determines whether the 1st sample goes to line 1 or line 2, and whether the 4 luminance blocks are concurrent (aka "progressive"). In other words, TFF and BFF are not formats. They are access methods!!

That realization caused me to dump a raft of seemingly clever, seemingly insightful diagrams that had taken weeks of revisions to hone. I realized that those diagrams were crap and just reinforced concepts that seem reasonable and are universally accepted but that can't survive close scrutiny. That kind of insight (which makes me think I'm stupid for not seeing it immediately) will be in the glossary.

The existing stuff not only implies that fields exist -- fields do not exist (no such structure, at least not in streams), and it took me a month of learning how to parse macroblocks to discover that -- it also implies that TFF and BFF are differing formats, but they're not formats at all!

I contend that ordinary users can understand the differences between (hard) structure and (soft) description, and between a format and a method. I think ordinary users are so hungry to get real information that they're willing to beg and plead and (nearly) debase themselves.
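[Archive editor's note: Mark's readout-order point can be illustrated with a toy model. This is hypothetical sketch code, not decoder code: the stored field samples are identical either way; only the field-order flag changes which display lines they land on at readout.]

```python
# Toy illustration: TFF/BFF as a readout (access) choice over identical
# stored data, rather than as two different storage formats.

def read_out(field_a, field_b, top_field_first=True):
    """Weave two stored fields for display according to the field-order flag."""
    top, bottom = (field_a, field_b) if top_field_first else (field_b, field_a)
    lines = []
    for t, b in zip(top, bottom):
        lines.extend([t, b])
    return lines

stored = (["A0", "A1"], ["B0", "B1"])  # the same samples, in the same order, either way
assert read_out(*stored, top_field_first=True) == ["A0", "B0", "A1", "B1"]
assert read_out(*stored, top_field_first=False) == ["B0", "A0", "B1", "A1"]
```

The two assertions show the same stored bytes producing two different line orders, which is the distinction Mark draws between a format and an access method.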
Re: [FFmpeg-user] bwdif filter question
On Tue, Sep 22, 2020 at 1:14 PM Mark Filipak (ffmpeg) wrote:
> I could use some help in my efforts. Due to my ignorance, it's taking me weeks to figure out things that should be resolved in minutes. In 5 days, I'll be 74 years old. With coronavirus and age, I don't know how much longer I'll be around, but I'm sure I can help the ffmpeg project if the principals in the project will just stop sniping at me and share knowledge.
>
> Thank you for your oh-so valuable contributions. I will study AVFrame to see how I can use it to better communicate.

I must say that I am by no means an ffmpeg expert (or even an advanced user). I am on more mailing lists than your average person, though. The fact that "troll" keeps coming up in response to Mark's posts is quite annoying. He has repeatedly posted to either understand or define better the internals of ffmpeg itself. His posts are always well defined and pointed - if that is hard to understand for you who want to define it as trolling, then you should kick rocks.

I personally learn more about ffmpeg from his posts (and therefore the responses from others who do understand) than I would otherwise from reading outdated wiki and docs. If you feel threatened by people who desire knowledge or otherwise demand (subtly) proper definitions of technology - maybe you should troll elsewhere. Please quit making me read the BS that accompanies most of the intelligent questions or contradiction responses - I find the content itself very educational at the least.
Re: [FFmpeg-user] bwdif filter question
On 09/22/2020 04:20 AM, Edward Park wrote:
>> Not so, Ted. The following two definitions are from the glossary I'm preparing (and which cites H.262).
>
> Ah okay, I thought that was a bit weird. I assumed it was a typo, but I saw H.242 and thought two different types of "frames" were being mixed. Before saying anything: if the side project you mentioned is a layman's glossary type of reference material, I think you should base it off of the definitions section instead of the bitstream definitions, just my $.02.

H.242 was indeed a typo ... Oh, wait! Doesn't (H.222+H.262)/2 = H.242? :-)

I'm not sure what you mean by "definitions section", but I don't believe in "layman's" glossaries. I believe novices can comprehend structures at a codesmith's level if the structures are precisely represented. The novices who can't comprehend the structures need to learn. If they don't want to learn, then they're not really serious. This video processing stuff is for serious people. That written, what is not reasonable, IMHO, is to expect novices to learn codesmith jargon and codesmith shorthand. English has been around for a long time and it includes everything that is needed.

I would show you some of my mpegps parser documentation and some of my glossary stuff, but 90% of it is texipix diagrams and/or spreadsheet-style, plaintext tables that are formatted way wider than 70 characters/line, so it won't paste into email.

-snip-

>> Since you capitalize "AVFrames", I assume that you cite a standard of some sort. I'd very much like to see it. Do you have a link?
>
> This was the main info I was trying to add. It's not a standard of any kind, quite the opposite, actually, since technically its declaration could be changed in a single commit, but I don't think that is a common occurrence. AVFrame is a struct that is used to abstract/implement all frames in the many different formats ffmpeg handles. It is noted that its size could change as fields are added to the struct. There's documentation generated for it here: https://www.ffmpeg.org/doxygen/trunk/structAVFrame.html

Oh, thank you! That's going to help me to communicate/discuss with the developers.

>> H.262 refers to "frame pictures" and "field pictures" without clearly delineating them. I am calling them "pictures" and "halfpictures".
>
> I thought ISO 13818-2 was basically the identical standard, and it gives pretty clear definitions imo; here are some excerpts. (Wall of text coming up… standards are very wordy by necessity)
>
> --snip 13818-2 excerpts--

To me, that's all descriptions, not definitions. To me, it's vague and ambiguous. To me, it's sort of nebulous. Standards don't need to be wordy. The *more* one writes, the greater the chance of mistakes and ambiguity. Write less, not more. Novices aren't dumb, they're just ignorant.

By your use of "struct" in your reply, I take it that you're a 'C' codesmith -- I write assembly & other HLLs & hardware description languages like VHDL & Verilog, but I've never written 'C'. I've employed 'C' codesmiths, therefore I'm a bit conversant with 'C', but just a bit. What I've noted is that codesmiths generally don't know how to write effective English. Writing well-constructed English is difficult and time consuming at first, as difficult as learning how to effectively use any syntax that requires knowledge and experience. There are clear rules, but most codesmiths don't know them, especially if English is their native language. They write like they speak: conversationally. And when others don't understand what's written, rather than revise smaller, the grammar-challenged revise longer, thinking that yet another perspective is what's needed. That produces ambiguity, because differing perspectives promote ambiguity. IMHO, there should be just one perspective: structure. Usage is the place for description, but that's not (or shouldn't be) in the domain of a glossary.

>> So field pictures are decoded fields, and frame pictures are decoded frames?
>
> Not sure if I understand 100%, but I think it's pretty clear: "two field pictures comprise a coded frame." IIRC, field pictures aren't decoded into separate fields because two frames in one packet makes something explode within FFmpeg.

Well, packets are just how transports chop up a stream in order to send it, piecewise, via a packetized medium. They don't matter. I think that, for mpegps, you start at 'sequence_header_code' (i.e. x00 00 01 B3) and proceed from there, through the transport packet headers, throwing out the packet headers, until encountering the next 'sequence_header_code' or the 'sequence_end_code' (i.e. x00 00 01 B7).

I don't know how frames are passed from the decoder to a filter inside ffmpeg. I don't know whether the frames are in the form of decoded samples in a macroblock structure or are just a glob of bytes. Considering the differences between 420 & 422 & 444, I think that the frames passed from the decoder must have some structure.
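[Archive editor's note: the parsing approach Mark describes (scan the byte stream for start codes) can be sketched in a few lines. This is an illustration of start-code scanning only, not an mpegps demuxer; the byte values are the ones quoted above, and the function name is hypothetical.]

```python
# Minimal sketch: locate MPEG-2 video start codes (00 00 01 xx) in a byte
# stream. 0xB3 is sequence_header_code, 0xB7 is sequence_end_code.
SEQUENCE_HEADER = b"\x00\x00\x01\xb3"
SEQUENCE_END = b"\x00\x00\x01\xb7"

def find_start_codes(data):
    """Return (offset, code_byte) for every 00 00 01 xx start code in data."""
    hits = []
    i = data.find(b"\x00\x00\x01")
    while i != -1 and i + 3 < len(data):
        hits.append((i, data[i + 3]))
        i = data.find(b"\x00\x00\x01", i + 1)
    return hits

stream = SEQUENCE_HEADER + b"payload" + SEQUENCE_END
assert find_start_codes(stream) == [(0, 0xB3), (11, 0xB7)]
```

A real demuxer also has to skip pack and PES headers between the video start codes, which is the "throwing out the packet headers" step in Mark's description.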
Re: [FFmpeg-user] bwdif filter question
On 09/22/2020 05:59 AM, Nicolas George wrote:
> Mark Filipak (ffmpeg) (12020-09-21):
>> Not so, Ted. The following two definitions are from the glossary I'm preparing (and which cites H.262).
>
> Quoting yourself does not prove you right.

You are correct. That's why H.262 is in the definition. I'm not quoting myself. I'm quoting H.262.
Re: [FFmpeg-user] bwdif filter question
Mark Filipak (ffmpeg) (12020-09-21):
> Not so, Ted. The following two definitions are from the glossary I'm preparing (and which cites H.262).

Quoting yourself does not prove you right.

-- 
Nicolas George
Re: [FFmpeg-user] bwdif filter question
Hello,

>> I'm not entirely aware of what is being discussed, but progressive_frame = !interlaced_frame kind of sent me back a bit. I do remember the discrepancy you noted in some telecined material, so I'll just quickly paraphrase from what we looked into before; hopefully it'll be relevant. The AVFrame interlaced_frame flag isn't completely unrelated to mpeg progressive_frame, but it's not a simple inverse either; it's very context-dependent. With mpeg video, it seems it is an interlaced_frame if it is not progressive_frame ...
>
> Not so, Ted. The following two definitions are from the glossary I'm preparing (and which cites H.262).

Ah okay, I thought that was a bit weird. I assumed it was a typo, but I saw H.242 and thought two different types of "frames" were being mixed. Before saying anything: if the side project you mentioned is a layman's glossary type of reference material, I think you should base it off of the definitions section instead of the bitstream definitions, just my $.02.

I read over what I wrote and I don't think it helps at all, so let me try again. I am saying that there are the "frames" in the context of a container, and a different kind of video "frame" that has a width and height dimension. (When I wrote "picture frames" I meant to refer to physical wooden picture frames for photo prints, but with terms like "frame pictures" in play, that was not very effective in hindsight.)

> Since you capitalize "AVFrames", I assume that you cite a standard of some sort. I'd very much like to see it. Do you have a link?

This was the main info I was trying to add. It's not a standard of any kind, quite the opposite, actually, since technically its declaration could be changed in a single commit, but I don't think that is a common occurrence. AVFrame is a struct that is used to abstract/implement all frames in the many different formats ffmpeg handles. It is noted that its size could change as fields are added to the struct. There's documentation generated for it here: https://www.ffmpeg.org/doxygen/trunk/structAVFrame.html

> H.262 refers to "frame pictures" and "field pictures" without clearly delineating them. I am calling them "pictures" and "halfpictures".

I thought ISO 13818-2 was basically the identical standard, and it gives pretty clear definitions imo; here are some excerpts. (Wall of text coming up… standards are very wordy by necessity)

> 6.1.1. Video sequence
>
> The highest syntactic structure of the coded video bitstream is the video sequence.
>
> A video sequence commences with a sequence header which may optionally be followed by a group of pictures header and then by one or more coded frames. The order of the coded frames in the coded bitstream is the order in which the decoder processes them, but not necessarily in the correct order for display. The video sequence is terminated by a sequence_end_code. At various points in the video sequence a particular coded frame may be preceded by either a repeat sequence header or a group of pictures header or both. (In the case that both a repeat sequence header and a group of pictures header immediately precede a particular picture, the group of pictures header shall follow the repeat sequence header.)
>
> 6.1.1.1. Progressive and interlaced sequences
>
> This specification deals with coding of both progressive and interlaced sequences.
>
> The output of the decoding process, for interlaced sequences, consists of a series of reconstructed fields that are separated in time by a field period. The two fields of a frame may be coded separately (field-pictures). Alternatively the two fields may be coded together as a frame (frame-pictures). Both frame pictures and field pictures may be used in a single video sequence.
>
> In progressive sequences each picture in the sequence shall be a frame picture. The sequence, at the output of the decoding process, consists of a series of reconstructed frames that are separated in time by a frame period.
>
> 6.1.1.2. Frame
>
> A frame consists of three rectangular matrices of integers; a luminance matrix (Y), and two chrominance matrices (Cb and Cr).
>
> The relationship between these Y, Cb and Cr components and the primary (analogue) Red, Green and Blue Signals (E'R, E'G and E'B), the chromaticity of these primaries and the transfer characteristics of the source frame may be specified in the bitstream (or specified by some other means). This information does not affect the decoding process.
>
> 6.1.1.3. Field
>
> A field consists of every other line of samples in the three rectangular matrices of integers representing a frame.
>
> A frame is the union of a top field and a bottom field. The top field is the field that contains the top-most line of each of the three matrices. The bottom field is the other one.
>
> 6.1.1.4. Picture
>
> A reconstructed picture is obtained by decoding a
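[Archive editor's note: §6.1.1.2's "three rectangular matrices" have format-dependent sizes. A hedged sketch for readers, in plain Python with a hypothetical helper: the pix_fmt names are FFmpeg's, and the ratios are the standard 4:2:0 / 4:2:2 / 4:4:4 chroma-subsampling ones.]

```python
# Illustration: plane dimensions for a planar YUV frame. The luma matrix (Y)
# is full size; the chroma matrices (Cb, Cr) are subsampled per pix_fmt.

def plane_sizes(width, height, pix_fmt):
    """Return [(Y_w, Y_h), (Cb_w, Cb_h), (Cr_w, Cr_h)] for a planar YUV frame."""
    sub = {"yuv420p": (2, 2), "yuv422p": (2, 1), "yuv444p": (1, 1)}
    sw, sh = sub[pix_fmt]  # horizontal and vertical chroma subsampling factors
    chroma = (width // sw, height // sh)
    return [(width, height), chroma, chroma]

assert plane_sizes(720, 576, "yuv420p") == [(720, 576), (360, 288), (360, 288)]
assert plane_sizes(720, 576, "yuv422p") == [(720, 576), (360, 576), (360, 576)]
assert plane_sizes(720, 576, "yuv444p") == [(720, 576), (720, 576), (720, 576)]
```

This is also why decoded frames handed between a decoder and a filter cannot be a structureless "glob of bytes": the receiver must at least know the plane geometry.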
Re: [FFmpeg-user] bwdif filter question
On 09/21/2020 06:54 PM, Bouke wrote:
>> On 22 Sep 2020, at 00:44, Mark Filipak (ffmpeg) wrote:
>>
>> Paul Mahol accused me
>
> He was not the only one. Go away! And no, this is not aimed at you, but at the rest of the bunch: do NOT FEED THE TROLL.

You calling me a troll doesn't make it so. Anyone following this thread knows from which direction the insults come.
Re: [FFmpeg-user] bwdif filter question
> On 22 Sep 2020, at 00:44, Mark Filipak (ffmpeg) wrote:
>
> Paul Mahol accused me

He was not the only one. Go away! And no, this is not aimed at you, but at the rest of the bunch: do NOT FEED THE TROLL.
Re: [FFmpeg-user] bwdif filter question
On 09/21/2020 06:07 PM, Carl Eugen Hoyos wrote:
> Am Mo., 21. Sept. 2020 um 14:16 Uhr schrieb Mark Filipak (ffmpeg):
>> On 09/21/2020 03:33 AM, Carl Eugen Hoyos wrote:
>>> Am 21.09.2020 um 01:56 schrieb Mark Filipak (ffmpeg):
>>>
>>> How can it 'deinterlace' a single field?
>>
>> It can't, and that is what I explained several times in my last two mails.
>
>> Here is what you wrote:
>> "The following makes little sense, it is just meant as an example:
>> $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"
>>
>> That "explains" nothing. Worse, it seems crass and sarcastic.
>
> No. This was an example to show you how you can feed one field to a filter in our system, this is what you had asked for ...

I didn't ask for that. That was in your reply to a comment from Mark Himsley: "No matter if the raster contains one field, two interlaced fields or a progressive frame, the filter will always see an input frame." I simply asked how a deinterlacing filter would handle an input that has only one field. It's a question that, I note, you have not answered except that it "makes little sense", to which I agreed.

> ... I used the filter that is the topic in this mailing list thread. In addition, I explained - not only but including above - that this is not a useful example for an interlace filter, just as feeding a progressive frame is not useful.

I agree in both cases, of course.

> Please understand that I have shown significantly more patience with you than with most other users here, and significantly more patience than most people on this mailing list (including the silent ones) have with you. I can only ask you to accept the answers you receive instead of interpreting every single one of them as a personal attack just because they don't match what you expect.

Paul Mahol accused me of attacking you. That's absurd, of course. Now you accuse me of feeling attacked. How would you know what I feel? I don't feel attacked. You and Paul need to get your stories aligned. :-)
Re: [FFmpeg-user] bwdif filter question
Am Mo., 21. Sept. 2020 um 14:16 Uhr schrieb Mark Filipak (ffmpeg):
> On 09/21/2020 03:33 AM, Carl Eugen Hoyos wrote:
>>> Am 21.09.2020 um 01:56 schrieb Mark Filipak (ffmpeg):
>>>
>>> How can it 'deinterlace' a single field?
>>
>> It can't, and that is what I explained several times in my last two mails.
>
> Here is what you wrote:
> "The following makes little sense, it is just meant as an example:
> $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"
>
> That "explains" nothing. Worse, it seems crass and sarcastic.

No. This was an example to show you how you can feed one field to a filter in our system; this is what you had asked for. I used the filter that is the topic in this mailing list thread. In addition, I explained - not only but including above - that this is not a useful example for an interlace filter, just as feeding a progressive frame is not useful.

Please understand that I have shown significantly more patience with you than with most other users here, and significantly more patience than most people on this mailing list (including the silent ones) have with you. I can only ask you to accept the answers you receive instead of interpreting every single one of them as a personal attack just because they don't match what you expect.

Carl Eugen
Re: [FFmpeg-user] bwdif filter question
On 09/21/2020 11:24 AM, Edward Park wrote:
> Morning,

Hi Ted!

>> Regarding 'progressive_frame', ffmpeg has 'interlaced_frame' in lieu of 'progressive_frame'. I think that 'interlaced_frame' = !'progressive_frame', but I'm not sure. Confirming it as a fact is a side project that I work on only occasionally. H.242 defines "interlace" as solely the condition of PAL & NTSC scan-fields (i.e. field period == (1/2)(1/FPS)), but I don't want to pursue that further because I don't want to be perceived as a troll. :-)
>
> I'm not entirely aware of what is being discussed, but progressive_frame = !interlaced_frame kind of sent me back a bit. I do remember the discrepancy you noted in some telecined material, so I'll just quickly paraphrase from what we looked into before; hopefully it'll be relevant. The AVFrame interlaced_frame flag isn't completely unrelated to mpeg progressive_frame, but it's not a simple inverse either; it's very context-dependent. With mpeg video, it seems it is an interlaced_frame if it is not progressive_frame ...

Not so, Ted. The following two definitions are from the glossary I'm preparing (and which cites H.262).

'progressive_frame' [noun]:
1. A metadata bit differentiating a picture or halfpicture frame ('1') from a scan frame ('0').
2. H.262 §6.3.10: "If progressive_frame is set to 0 it indicates that the two fields of the frame are interlaced fields in which an interval of time of the field period exists between (corresponding spatial samples) of the two fields. ... If progressive_frame is set to 1 it indicates that the two fields (of the frame) are actually from the same time instant as one another."

interlace [noun]:
1. H.262 §3.74: "The property of conventional television frames [1] where alternating lines of the frame represent different instances in time."
[1] H.262 clearly limits interlace to scan-fields and excludes concurrent fields (and also the non-concurrent fields that can result from hard telecine).
2. Informal: The condition in which the samples of odd and even rows (or lines) alternate.
[verb], informal: To weave or reweave fields.

A note about my glossary: "picture frame", "halfpicture frame", and "scan frame" are precisely and unambiguously defined by (and differentiated from one another by) their physical structures (including any metadata that may demarcate them), not by their association with other features and not by the context in which they appear. I endeavor to make all definitions strong in like manner.

> ... and it shouldn't result where mpeg progressive_sequence is set. Basically, the best you can generalize from that is that the frame stores interlaced video. (Yes, interlaced_frame means the frame has interlaced material.) Doesn't help at all... But I don't think it can be helped? AVFrame accommodates many more types of video frame data than just the generations of mpeg codecs.

Since you capitalize "AVFrames", I assume that you cite a standard of some sort. I'd very much like to see it. Do you have a link?

> I think it was often said (not as much anymore) that "FFmpeg doesn't output fields", and I think at least part of the reason is this. At the visually essential level, there is the "picture", described as a single instance of a sequence of frames/fields/lines or what have you depending on the format and technology; the image that you actually see.

H.262 refers to "frame pictures" and "field pictures" without clearly delineating them. I am calling them "pictures" and "halfpictures".

> But that's a visual projection of the decoded and rendered video, or if you're encoding, it's what you want to see when you decode and render your encoding. I think the term itself has a very abstract(?) nuance. The picture seen at a certain presentation timestamp either has been decoded, or can be encoded, as frame pictures or field pictures.

You see? You are using the H.262 nomenclature. That's fine, and I'm considering using it also, even though it appears to be excessively wordy. Basically, I prefer "pictures" for interlaced content and "halfpictures" for deinterlaced content unweaved from a picture.

> Both are stored in "frames", a red herring in the terminology imo ...

Actually, it is frames that exist. Fields don't exist as discrete, unitary structures in macroblocks in streams.

> ... The AVFrame that ffmpeg deals with isn't necessarily a "frame" as in a rectangular picture frame with width and height, but closer to how the data is temporally "framed," e.g. in packets with header data, where one AVFrame has one video frame (picture). Image data could be scanned by macroblock, unless you are playing actual videotape.

You're singing a sweet song, Ted. Frames actually do exist in streams and are denoted by metadata. The data inside slices inside macroblocks I am calling framesets. I firmly believe that every structure should have a unique name.

> So when interlace scanned fields are stored in
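[Archive editor's note: the flag relationship Ted paraphrases can be written down as a tiny sketch. This mirrors the mailing-list description only; it is hypothetical and is not the actual libavcodec logic, which is more context-dependent, as Ted says.]

```python
# Hypothetical sketch: deriving an AVFrame-style interlaced_frame flag from
# the MPEG-2 progressive_sequence and progressive_frame bits, per the
# generalization discussed in this thread (not actual FFmpeg source).

def interlaced_frame(progressive_sequence, progressive_frame):
    if progressive_sequence:
        # Per 13818-2, a progressive sequence contains only frame pictures,
        # so an interlaced_frame flag "shouldn't result" here.
        return False
    return not progressive_frame

assert interlaced_frame(progressive_sequence=True, progressive_frame=True) is False
assert interlaced_frame(progressive_sequence=False, progressive_frame=False) is True
assert interlaced_frame(progressive_sequence=False, progressive_frame=True) is False
```

The point of the sequence-level guard is exactly Ted's caveat: the per-frame bit alone is not a simple inverse of interlaced_frame; the sequence context matters.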
Re: [FFmpeg-user] bwdif filter question
Morning,

> Regarding 'progressive_frame', ffmpeg has 'interlaced_frame' in lieu of
> 'progressive_frame'. I think that 'interlaced_frame' = !'progressive_frame'
> but I'm not sure. Confirming it as a fact is a side project that I work on
> only occasionally. H.242 defines "interlace" as solely the condition of PAL &
> NTSC scan-fields (i.e. field period == (1/2)(1/FPS)), but I don't want to
> pursue that further because I don't want to be perceived as a troll. :-)

I'm not entirely aware of what is being discussed, but progressive_frame = !interlaced_frame kind of sent me back a bit. I do remember the discrepancy you noted in some telecined material, so I'll just quickly paraphrase from what we looked into before; hopefully it'll be relevant. The AVFrame interlaced_frame flag isn't completely unrelated to mpeg progressive_frame, but it's not a simple inverse either; it's very context-dependent. With mpeg video, it seems it is an interlaced_frame if it is not progressive_frame, and it shouldn't result where mpeg progressive_sequence is set. Basically, the best you can generalize from that is the frame stores interlaced video. (Yes, interlaced_frame means the frame has interlaced material.) Doesn't help at all... But I don't think it can be helped, since AVFrames accommodate many more types of video frame data than just the generations of mpeg coding. I think it was often said (not as much anymore) that "FFmpeg doesn't output fields" and I think at least part of the reason is this. At the visually essential level, there is the "picture", described as a single instance of a sequence of frames/fields/lines or what have you depending on the format and technology; the image that you actually see. But that's a visual projection of the decoded and rendered video, or if you're encoding, it's what you want to see when you decode and render your encoding. I think the term itself has a very abstract(?) nuance.
The picture seen at a certain presentation timestamp either has been decoded, or can be encoded, as frame pictures or field pictures. Both are stored in "frames", a red herring in the terminology imo. The AVFrame that ffmpeg deals with isn't necessarily a "frame" as in a rectangular picture frame with width and height, but closer to how the data is temporally "framed," e.g. in packets with header data, where one AVFrame has one video frame (picture). Image data could be scanned by macroblock, unless you are playing actual videotape. So when interlace-scanned fields are stored in frames, it's more than that: both fields and frames are generalized into a single structure for both types of pictures called "frames" – AVFrames, as the prefix might suggest, are also audio frames. And though it's not a very good analogy to field-based video, multiple channels of sound can be interleaved.

I apologize, that was a horrible job of quickly paraphrasing, but if there was any conflation of the packet-like frames and picture-like frames, or of interlaced scanning of video lines and macroblock scanning, I think the info might be able to shift your footing and give you another perspective, even if it's not 100% accurate.

Regards,
Ted Park

___ ffmpeg-user mailing list ffmpeg-user@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-user To unsubscribe, visit link above, or email ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".
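The MPEG-2 relationship Ted paraphrases above — interlaced_frame set when the frame is not progressive_frame, and never when the whole sequence is flagged progressive — can be written out as a small truth table. This is a hypothetical restatement of the described behavior, not FFmpeg's actual decoder code; the function name is illustrative:

```python
# Hypothetical sketch of the relationship described above for MPEG-2 input.
# Not FFmpeg source; the name interlaced_frame_flag is invented for illustration.
def interlaced_frame_flag(progressive_sequence, progressive_frame):
    """interlaced_frame is set when the frame is not progressive_frame,
    and never when the sequence-level progressive_sequence flag is set."""
    if progressive_sequence:
        return False  # a progressive sequence cannot contain interlaced frames
    return not progressive_frame

assert interlaced_frame_flag(False, False) is True
assert interlaced_frame_flag(False, True) is False
assert interlaced_frame_flag(True, False) is False  # sequence-level flag wins
```

So within an interlaced-capable sequence the flag is a simple inverse, but across all inputs it is not, which matches Ted's "not a simple inverse either" caveat.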
Re: [FFmpeg-user] bwdif filter question
On 09/21/2020 09:26 AM, Paul B Mahol wrote:
> On Mon, Sep 21, 2020 at 08:11:59AM -0400, Mark Filipak (ffmpeg) wrote:
> > On 09/21/2020 03:33 AM, Carl Eugen Hoyos wrote:
> > > > Am 21.09.2020 um 01:56 schrieb Mark Filipak (ffmpeg) :
> > > >
> > > > How can it 'deinterlace' a single field?
> > >
> > > It can’t and that is what I explained several times in my last two mails.
> >
> > Here is what you wrote:
> > "The following makes little sense, it is just meant as an example:
> > $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"
> >
> > That "explains" nothing. Worse, it seems crass and sarcastic. The perfect
> > word is "snarky". Do you know that word? It's a word invented by the man
> > who wrote "Alice In Wonderland". Sometimes it seems that what you write is
> > meant to pull people down a psychedelic rabbit hole and into a fantasy world.
> >
> > Just because something is possible with ffmpeg, if it doesn't make sense
> > to do it, don't mention it. If you do mention it and you write that it
> > makes "little sense", then explain why it makes little sense.
> >
> > In this case, it doesn't make "little sense". It makes *no* sense.
>
> Please refrain from attacking other people on this mailing list.

I am not attacking Carl Eugen. I'm trying to help him.
Re: [FFmpeg-user] bwdif filter question
On Mon, Sep 21, 2020 at 08:11:59AM -0400, Mark Filipak (ffmpeg) wrote:
> On 09/21/2020 03:33 AM, Carl Eugen Hoyos wrote:
> > > Am 21.09.2020 um 01:56 schrieb Mark Filipak (ffmpeg) :
> > >
> > > How can it 'deinterlace' a single field?
> >
> > It can’t and that is what I explained several times in my last two mails.
>
> Here is what you wrote:
> "The following makes little sense, it is just meant as an example:
> $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"
>
> That "explains" nothing. Worse, it seems crass and sarcastic. The perfect
> word is "snarky". Do you know that word? It's a word invented by the man who
> wrote "Alice In Wonderland". Sometimes it seems that what you write is meant
> to pull people down a psychedelic rabbit hole and into a fantasy world.
>
> Just because something is possible with ffmpeg, if it doesn't make sense to
> do it, don't mention it. If you do mention it and you write that it makes
> "little sense", then explain why it makes little sense.
>
> In this case, it doesn't make "little sense". It makes *no* sense.

Please refrain from attacking other people on this mailing list.
Re: [FFmpeg-user] bwdif filter question
On 09/21/2020 03:33 AM, Carl Eugen Hoyos wrote:
> > Am 21.09.2020 um 01:56 schrieb Mark Filipak (ffmpeg) :
> >
> > How can it 'deinterlace' a single field?
>
> It can’t and that is what I explained several times in my last two mails.

Here is what you wrote:
"The following makes little sense, it is just meant as an example:
$ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -"

That "explains" nothing. Worse, it seems crass and sarcastic. The perfect word is "snarky". Do you know that word? It's a word invented by the man who wrote "Alice In Wonderland". Sometimes it seems that what you write is meant to pull people down a psychedelic rabbit hole and into a fantasy world.

Just because something is possible with ffmpeg, if it doesn't make sense to do it, don't mention it. If you do mention it and you write that it makes "little sense", then explain why it makes little sense.

In this case, it doesn't make "little sense". It makes *no* sense.
Re: [FFmpeg-user] bwdif filter question
> Am 21.09.2020 um 01:56 schrieb Mark Filipak (ffmpeg) :
>
> How can it 'deinterlace' a single field?

It can’t and that is what I explained several times in my last two mails.

Carl Eugen
Re: [FFmpeg-user] bwdif filter question
On 09/20/2020 05:44 PM, Carl Eugen Hoyos wrote: Am So., 20. Sept. 2020 um 06:59 Uhr schrieb Mark Filipak (ffmpeg) : On 09/18/2020 03:01 PM, Carl Eugen Hoyos wrote: Am 16.09.2020 um 15:58 schrieb Mark Himsley : On Mon, 14 Sep 2020 at 15:42, Mark Filipak (ffmpeg) wrote: Is the input to the bwdif filter fields or frames? The input to every filter in a filter chain is a raster of pixels. That raster may contain one frame or two fields. That may not be wrong (apart from Paul’s comment) but I wonder how useful it is: No matter if the raster contains one field, two interlaced fields or a progressive frame, the filter will always see an input frame. "...if the raster contains *one field*...the filter will always see an input *frame*." How is that possible? How can a frame contain just one field? The following makes little sense, it is just meant as an example: $ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null - Here, the input to the bwdif consists of frames that contain one field (of the original input). Thanks, Carl Eugen. Kindly forgive my ignorance -- I can't read 'C' code and probably couldn't find the relevant code section if my life depended on it. If bwdif is the *only* filter, then, from previous discussions, I understand that its input (i.e. the decoder's output) is raw frames (e.g. 720x576)? If raw frames, then I can understand the above to mean that the filter is 'fed' only one field (e.g. 720x288). Logically, to me, that would be a frame (i.e. a 720x288 frame), but no matter (let's forget that). However, even then, the filter is receiving only one field. How can it 'deinterlace' a single field? I'm mystified. Does it line double in such a circumstance? Or does it deinterlace the current single field with the next single field one frame later? The fact that there is metadata that may signal the content is also not necessarily helpful as this metadata is typically wrong (often signalling fields when a frame is provided). 
Can you provide an example (or a link to an example)? I've examined a great number of DSM mpeg presentation streams ('VOB's & 'm2ts's) and I've not seen a single case. What metadata are you looking at? sequence_extension: 'progressive_sequence'? picture_coding_extension: 'picture_structure'? picture_coding_extension: 'top_field_first'? picture_coding_extension: 'repeat_first_field'? I would expect that most commercial encodings you have uses one of the above, independently of the content... Based on my experience, and to the best of my knowledge, every MPEG PS & TS have all 5 metadata values. Certainly, every MPEG stream *I've* parsed have all 5. picture_coding_extension: 'progressive_frame'? ... while this is unusual, even for movies in PAL streams. For what it's worth, I have only one PAL movie, "The Man Who Would Be King", from Australia. It has all 5 metadata values and appears to be a regular MPEG PS. Regarding 'progressive_frame', ffmpeg has 'interlaced_frame' in lieu of 'progressive_frame'. I think that 'interlaced_frame' = !'progressive_frame' but I'm not sure. Confirming it as a fact is a side project that I work on only occasionally. H.242 defines "interlace" as solely the condition of PAL & NTSC scan-fields (i.e. field period == (1/2)(1/FPS)), but I don't want to pursue that further because I don't want to be perceived as a troll. :-) - Mark.
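The field-period relation Mark cites (field period == (1/2)(1/FPS)) is a trivial computation, but writing it out makes the PAL and NTSC numbers concrete. A quick illustration (pure arithmetic, not tied to any FFmpeg API):

```python
# Field period for interlaced scan: half the frame period, i.e. 1 / (2 * FPS).
def field_period(fps):
    """Duration of one field, in seconds, for an interlaced stream at fps frames/s."""
    return 1.0 / (2.0 * fps)

# PAL: 25 frames/s -> 50 fields/s -> 20 ms per field.
assert field_period(25) == 0.02
# NTSC: 30000/1001 frames/s -> ~59.94 fields/s -> ~16.68 ms per field.
assert abs(field_period(30000 / 1001) - 0.0166833) < 1e-5
```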
Re: [FFmpeg-user] bwdif filter question
Am So., 20. Sept. 2020 um 06:59 Uhr schrieb Mark Filipak (ffmpeg) :
>
> On 09/18/2020 03:01 PM, Carl Eugen Hoyos wrote:
> >> Am 16.09.2020 um 15:58 schrieb Mark Himsley :
> >>> On Mon, 14 Sep 2020 at 15:42, Mark Filipak (ffmpeg)
> >>> wrote:
> >>>
> >>> Is the input to the bwdif filter fields or frames?
> >>
> >> The input to every filter in a filter chain is a raster of pixels.
> >> That raster may contain one frame or two fields.
> >
> > That may not be wrong (apart from Paul’s comment) but I wonder how useful
> > it is:
> > No matter if the raster contains one field, two interlaced fields or a
> > progressive frame, the filter will always see an input frame.
>
> "...if the raster contains *one field*...the filter will always see an input
> *frame*."
> How is that possible? How can a frame contain just one field?

The following makes little sense, it is just meant as an example:

$ ffmpeg -f lavfi -i testsrc2,field -vf bwdif -f null -

Here, the input to the bwdif consists of frames that contain one field (of the original input).

> > The fact that there is metadata that may signal the content is also not
> > necessarily helpful as this metadata is typically wrong (often signalling
> > fields when a frame is provided).
>
> Can you provide an example (or a link to an example)? I've examined a
> great number of DSM mpeg presentation streams ('VOB's & 'm2ts's) and
> I've not seen a single case. What metadata are you looking at?
> sequence_extension: 'progressive_sequence'?
> picture_coding_extension: 'picture_structure'?
> picture_coding_extension: 'top_field_first'?
> picture_coding_extension: 'repeat_first_field'?

I would expect that most commercial encodings you have uses one of the above, independently of the content...

> picture_coding_extension: 'progressive_frame'?

... while this is unusual, even for movies in PAL streams. Otoh, I typically saw pal dvb streams, maybe my claim is only true for them.
Carl Eugen
Re: [FFmpeg-user] bwdif filter question
On 09/18/2020 03:01 PM, Carl Eugen Hoyos wrote: Am 16.09.2020 um 15:58 schrieb Mark Himsley : On Mon, 14 Sep 2020 at 15:42, Mark Filipak (ffmpeg) wrote: Is the input to the bwdif filter fields or frames? The input to every filter in a filter chain is a raster of pixels. That raster may contain one frame or two fields. That may not be wrong (apart from Paul’s comment) but I wonder how useful it is: No matter if the raster contains one field, two interlaced fields or a progressive frame, the filter will always see an input frame. "...if the raster contains *one field*...the filter will always see an input *frame*." How is that possible? How can a frame contain just one field? The fact that there is metadata that may signal the content is also not necessarily helpful as this metadata is typically wrong (often signalling fields when a frame is provided). Can you provide an example (or a link to an example)? I've examined a great number of DSM mpeg presentation streams ('VOB's & 'm2ts's) and I've not seen a single case. What metadata are you looking at? sequence_extension: 'progressive_sequence'? picture_coding_extension: 'picture_structure'? picture_coding_extension: 'top_field_first'? picture_coding_extension: 'repeat_first_field'? picture_coding_extension: 'progressive_frame'? That’s why the filter ignores the information by default. (If you provide only one field, no FFmpeg deinterlacer will produce useful output.) The bwdif filter will interpret a single raster and is designed to output two rasters, each containing one or the other of the fields that were contained in the input raster. "...interpret a *single raster*...one or the other of the fields...in the *input raster*." Mark Himsley, how are you defining "raster"? I thought you were equating a "single raster" with a frame and "two rasters" with fields, but now I'm unsure what you mean. You can request that the filter outputs one instead of two rasters for one input raster. 
Re: [FFmpeg-user] bwdif filter question
> Am 16.09.2020 um 15:58 schrieb Mark Himsley :
>
>> On Mon, 14 Sep 2020 at 15:42, Mark Filipak (ffmpeg)
>> wrote:
>>
>> Is the input to the bwdif filter fields or frames?
>
> The input to every filter in a filter chain is a raster of pixels.
> That raster may contain one frame or two fields.

That may not be wrong (apart from Paul’s comment) but I wonder how useful it is: No matter if the raster contains one field, two interlaced fields or a progressive frame, the filter will always see an input frame. The fact that there is metadata that may signal the content is also not necessarily helpful as this metadata is typically wrong (often signalling fields when a frame is provided). That’s why the filter ignores the information by default. (If you provide only one field, no FFmpeg deinterlacer will produce useful output.)

> The bwdif filter will interpret a single raster and is designed to
> output two rasters, each containing one or the other of the fields
> that were contained in the input raster.

You can request that the filter outputs one instead of two rasters for one input raster.

Carl Eugen
Re: [FFmpeg-user] bwdif filter question
On Wed, Sep 16, 2020 at 02:58:25PM +0100, Mark Himsley wrote:
> On Mon, 14 Sep 2020 at 15:42, Mark Filipak (ffmpeg)
> wrote:
> >
> > Is the input to the bwdif filter fields or frames?
>
> The input to every filter in a filter chain is a raster of pixels.
> That raster may contain one frame or two fields.
> The bwdif filter will interpret a single raster and is designed to
> output two rasters, each containing one or the other of the fields
> that were contained in the input raster.

It also can contain only single fields.
Re: [FFmpeg-user] bwdif filter question
On Mon, 14 Sep 2020 at 15:42, Mark Filipak (ffmpeg) wrote:
>
> Is the input to the bwdif filter fields or frames?

The input to every filter in a filter chain is a raster of pixels. That raster may contain one frame or two fields. The bwdif filter will interpret a single raster and is designed to output two rasters, each containing one or the other of the fields that were contained in the input raster.

--
Mark Himsley
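The "two rasters out per raster in" behavior Mark Himsley describes is the field-rate (double-rate) output mode of a deinterlacer. A naive "bob" sketch shows the shape of that mapping; to be clear, bwdif itself interpolates the missing lines using neighboring fields, whereas this illustration merely repeats lines:

```python
# Naive "bob" deinterlace sketch (illustration only, not bwdif's algorithm):
# each input frame (list of rows) yields two full-height output frames,
# one built from each field, with missing lines filled by line-doubling.
def bob(frame):
    top = frame[0::2]     # top field: even rows
    bottom = frame[1::2]  # bottom field: odd rows

    def line_double(field):
        out = []
        for row in field:
            out.append(row)
            out.append(row)  # repeat each line to restore full height
        return out

    return line_double(top), line_double(bottom)

first, second = bob(["t0", "b0", "t1", "b1"])
assert first == ["t0", "t0", "t1", "t1"]
assert second == ["b0", "b0", "b1", "b1"]
```

Note that each output raster has the full input height: one input raster becomes two same-size outputs, which is why field-rate deinterlacing doubles the frame rate.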
Re: [FFmpeg-user] bwdif filter question
On 09/14/2020 03:23 PM, Bouke wrote:
> > Note: I'm experimenting with virtual identities in my Thunderbird email
> > client because the ffmpeg archives publish email addresses and I wish to
> > spoof in order to avoid spamming harvesters.
>
> And the fact that you are a troll has nothing to do with it?

How did this list get so bad?
Re: [FFmpeg-user] bwdif filter question
> Am 14.09.2020 um 16:39 schrieb Mark Filipak (ffmpeg) :
>
> Is the input to the bwdif filter fields or frames?

In general, FFmpeg’s filter system doesn’t know about fields, only frames that may contain progressive content or interlaced content that you may want to de-interlace.

Carl Eugen
Re: [FFmpeg-user] bwdif filter question
> > Note: I'm experimenting with virtual identities in my Thunderbird email
> > client because the ffmpeg archives publish email addresses and I wish to
> > spoof in order to avoid spamming harvesters.

And the fact that you are a troll has nothing to do with it?
[FFmpeg-user] bwdif filter question
Is the input to the bwdif filter fields or frames? According to previous threads, ffmpeg decoders solely produce frames. Based on this: https://ffmpeg.org/ffprobe-all.html#bwdif I honestly can't figure out which it is: fields or frames. Thanks!

Note: I'm experimenting with virtual identities in my Thunderbird email client because the ffmpeg archives publish email addresses and I wish to spoof in order to avoid spamming harvesters. So, if this thread gets screwed up, kindly be tolerant. Thanks!