On 2/12/2024 2:04 PM, Andreas Rheinhardt wrote:
James Almer:
On 2/12/2024 1:40 PM, Andreas Rheinhardt wrote:
James Almer:
On 2/6/2024 10:05 AM, James Almer wrote:
Signed-off-by: James Almer <jamr...@gmail.com>
---
Descriptors are now read from extradata, and a setting has been added to
ensure any descriptors present inband are omitted.

    doc/bitstream_filters.texi            |  16 +
    libavcodec/bitstream_filters.c        |   1 +
    libavcodec/bsf/Makefile               |   1 +
    libavcodec/bsf/iamf_frame_split_bsf.c | 887 ++++++++++++++++++++++++++
    4 files changed, 905 insertions(+)
    create mode 100644 libavcodec/bsf/iamf_frame_split_bsf.c

Will apply the set soon if there are no objections.

I still object to the BSF in #1 existing, as it just duplicates parsing
code into lavc and lavf. And the issue of creating new framings for
stuff for which no framing except raw data exists is still there.

I insist on using the split BSF, but I can try to remove the merge one
and do that within lavf, to avoid creating packets with OBU framing.

Why is splitting not simply done inside lavf (and inside the demuxer,
not the generic code in demux.c)? What is the advantage of that?

It avoids making a mess of mov.c's read_packet() with reiterated calls when a single track sample contains packets for several AVStreams.

 Do such
packets as the split bsf expects exist somewhere in the wild outside of
isobmff files?

Sure, raw IAMF. Other containers may also support it in the future, like Matroska, MPEG streams and the like.


- Andreas

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
