On 01/03/2024 23:29, Tomas Härdin wrote:
> On Sun, 2024-02-25 at 05:14 +0100, Jerome Martinez wrote:
[...]
>> I also needed to add a dedicated AVStream field for saying that the
>> decoder is able to manage this functionality (and is needed there).
>>
>> What is the added value to call the decoder twice from decode.c rather
>> than recursive call (or a master function in the decoder calling the
>> current function twice, if preferred) inside the decoder only?
>
> We get support for all codecs that can do SEPARATE_FIELDS in MXF,
> rather than a half measure that only supports j2k, that we'd have to
> fix later anyway.

I personally don't understand how, because in both cases the decoder needs to be aware of this feature (for the stride during decoding). Anyway, if you think it will be useful in the future for another codec with separate fields, fine: I have warned about the issues I see in practice, and it does not matter to me that it is done the way you prefer. My goal is that upstream FFmpeg, rather than only my own build, stops trashing half of the frame (the current behavior), however that is achieved.
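For illustration, here is a rough sketch (not my patch, and the helper name is hypothetical) of what "aware of this feature for the stride" means on the decoder side: when each field arrives as its own codestream, the decoder can write field N directly into every second row of the shared output frame by doubling the linesize and offsetting the start row. It assumes a pixel format without vertical chroma subsampling (e.g. 4:2:2 or 4:4:4, the usual case for jp2k in MXF):

#include <stdint.h>
#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Hypothetical helper: decodes one field codestream into the given
 * plane pointers using the given (already doubled) linesizes. */
static int decode_one_codestream(AVCodecContext *avctx,
                                 uint8_t *dst[4], const int dst_linesize[4],
                                 const uint8_t *buf, int buf_size);

/* Sketch: write each field straight into the interlaced output frame. */
static int decode_separate_fields(AVCodecContext *avctx, AVFrame *frame,
                                  const uint8_t *field_data[2],
                                  const int field_size[2])
{
    for (int f = 0; f < 2; f++) {
        uint8_t *dst[4]     = { NULL };
        int dst_linesize[4] = { 0 };

        for (int p = 0; p < 4 && frame->data[p]; p++) {
            dst[p]          = frame->data[p] + f * frame->linesize[p]; /* start on row f */
            dst_linesize[p] = 2 * frame->linesize[p];                  /* skip the other field's rows */
        }

        int ret = decode_one_codestream(avctx, dst, dst_linesize,
                                        field_data[f], field_size[f]);
        if (ret < 0)
            return ret;
    }
    return 0;
}

Whether this loop lives inside the decoder or is driven from decode.c, the stride handling above is what the decoder has to know about either way.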


[...]
>> I didn't find specifications for the essence label UL corresponding
> ULs aren't for specifying interlacing type (though I wouldn't be
> surprised if there's a mapping that misuses them for that)


In practice, for MXF jp2k the storage method is provided by the UL (byte 15 of the essence container label). It is in the specs, and all the other items (frame layout, sample rate, edit rate, aspect ratio) alone don't provide enough information (and are often buggy); I did not decide that.
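For concreteness, a rough sketch of the demuxer-side check; the enum names are mine and the numeric case values below are placeholders: the real byte-15 assignments come from SMPTE ST 422 and must be taken from the spec, not from this sketch:

#include <stdint.h>

/* Placeholder names; the actual modes and their byte-15 values are
 * defined by SMPTE ST 422 and its amendments. */
enum J2KWrapping {
    J2K_WRAP_UNKNOWN = -1,
    J2K_WRAP_FRAME,           /* one codestream per frame */
    J2K_WRAP_SEPARATE_FIELDS, /* one codestream per field */
};

static enum J2KWrapping j2k_wrapping_from_ul(const uint8_t ul[16])
{
    switch (ul[14]) {                           /* byte 15 (1-based) */
    case 0x01: return J2K_WRAP_FRAME;           /* value assumed, check ST 422 */
    case 0x03: return J2K_WRAP_SEPARATE_FIELDS; /* value assumed, check ST 422 */
    default:   return J2K_WRAP_UNKNOWN;
    }
}

The demuxer would then propagate the result to the decoder (via the dedicated AVStream field mentioned above, or side data) so the decoder knows whether a packet carries a whole frame or a single field.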

About other formats, and whether it should not depend on the UL: I did not find any information about separate fields elsewhere, and it is difficult for me to prove that something does not exist. Could you point to another spec that signals separate fields differently?

In practice, and as far as I know, jp2k is currently the only codec in MXF with two completely separate codestreams per frame, so my patch implements all the existing specifications (and files) I am aware of, i.e. one.


[...]

>> but if it appears it would be only a matter of mapping the MXF
>> signaling to the new AVStream field and supporting the feature in the
>> decoders (even if we implement the idea of calling the decoder twice,
>> the decoder needs to be expanded for this feature).
>
> So in other words putting it into every decoder for which there exists
> an MXF mapping for SEPARATE_FIELDS, rather than doing it properly?


As said above, I am not convinced that calling the decoder twice from decode.c (or similar) is the proper way to do it, but if you think that is the proper way, fine with me.
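Purely as an illustration of the decode.c-side idea (this is not the v3 patch; the split_offset parameter, the single-packet assumption and the lack of vertical chroma subsampling are my simplifications): feed each field codestream to the decoder separately, then interleave the two half-height pictures into one full-height frame.

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>

/* Sketch: decode two field codestreams carried back to back in one
 * packet, then copy each half-height picture into every second row of
 * the interlaced output.  EAGAIN handling and draining are omitted. */
static int decode_two_fields(AVCodecContext *avctx, AVFrame *out,
                             const AVPacket *pkt, int split_offset)
{
    int ret = 0;

    for (int f = 0; f < 2 && ret >= 0; f++) {
        AVPacket *field = av_packet_clone(pkt);
        if (!field)
            return AVERROR(ENOMEM);
        field->data += f ? split_offset : 0;
        field->size  = f ? pkt->size - split_offset : split_offset;

        ret = avcodec_send_packet(avctx, field);
        av_packet_free(&field);
        if (ret < 0)
            break;

        AVFrame *half = av_frame_alloc();
        if (!half)
            return AVERROR(ENOMEM);
        ret = avcodec_receive_frame(avctx, half);
        if (ret >= 0) {
            for (int p = 0; p < 4 && out->data[p] && half->data[p]; p++) {
                int bytewidth = av_image_get_linesize(half->format, half->width, p);
                av_image_copy_plane(out->data[p] + f * out->linesize[p],
                                    2 * out->linesize[p],
                                    half->data[p], half->linesize[p],
                                    bytewidth, half->height);
            }
        }
        av_frame_free(&half);
    }
    return ret;
}

Note that this variant needs an extra copy (or, to avoid it, a decoder that writes with a doubled stride), which is exactly the decoder awareness I mentioned at the beginning.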

Patch v3 contains all the requested changes (MXF config propagation to the decoder, calling the decoder twice). Is there anything else in this patch proposal preventing it from being applied?

Jérôme