#8426: Streaming some MXF files off S3 takes very long to prepare
--------------------------------------+------------------------------------
             Reporter:  zerodefect    |                    Owner:
                 Type:  defect        |                   Status:  new
             Priority:  normal        |                Component:  avformat
              Version:  git-master    |               Resolution:
             Keywords:  mxf seekable  |               Blocked By:
             Blocking:                |  Reproduced by developer:  0
Analyzed by developer:  0             |
--------------------------------------+------------------------------------

Comment (by Tjoppen):

 This is because mxfdec.c parses all partition headers in the file before
 demuxing. The pass runs backwards, since the partition packs carry a
 PreviousPartition offset but no "NextPartition" value. It is needed to
 figure out the BodySID of each partition, which in turn is needed for
 seeking to work. See mxf_read_seek(), mxf_edit_unit_absolute_offset() and
 mxf_absolute_bodysid_offset().
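
 To illustrate, here is a toy sketch (not the actual mxfdec.c code; the
 struct and helper names are made up) of why that backward walk costs one
 seek per partition: the reader starts at the footer and follows
 PreviousPartition offsets back to the header, and on an HTTP/S3 input
 every hop is a separate range request.

 {{{
/* Toy model of the backward partition walk: each partition pack stores
 * PreviousPartition but not NextPartition, so the only way to visit them
 * all is to start at the footer and chain backwards, one seek per hop. */
#include <stdint.h>
#include <stdio.h>

typedef struct PartitionPack {
    uint64_t this_partition;      /* absolute offset of this partition pack */
    uint64_t previous_partition;  /* offset of the previous one, 0 for the header */
    uint32_t body_sid;            /* which essence container it belongs to */
} PartitionPack;

/* Stand-in for "seek to offset and parse the partition pack"; with an
 * HTTP/S3 input every call here is a fresh range request. */
static const PartitionPack *read_partition_at(const PartitionPack *file,
                                              int count, uint64_t offset,
                                              int *seeks)
{
    (*seeks)++;
    for (int i = 0; i < count; i++)
        if (file[i].this_partition == offset)
            return &file[i];
    return NULL;
}

int main(void)
{
    /* Toy "file" with three partitions: header, body, footer. */
    const PartitionPack file[] = {
        { 0,      0,    0 },   /* header partition */
        { 4096,   0,    1 },   /* body partition, BodySID 1 */
        { 900000, 4096, 0 },   /* footer partition */
    };
    int seeks = 0;
    const PartitionPack *p = read_partition_at(file, 3, 900000, &seeks);

    while (p) {
        printf("partition at %llu, BodySID %u\n",
               (unsigned long long)p->this_partition, p->body_sid);
        if (p->this_partition == 0)
            break;
        p = read_partition_at(file, 3, p->previous_partition, &seeks);
    }
    printf("%d seeks for 3 partitions\n", seeks);
    return 0;
}
 }}}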

 If you do not need seeking, you can mark the file as not seekable and it
 will work just fine, as you have noticed. You could also configure your
 web server to refuse seeking (i.e. byte-range requests) on these files,
 which would make this work automagically for everyone else.
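
 On the API side, something along these lines should do it (a minimal
 sketch, assuming a plain HTTP(S) URL): open the AVIOContext yourself with
 the http protocol's "seekable" option forced to 0 and hand it to the
 demuxer, so mxfdec sees a non-seekable input and skips the backward
 partition scan.

 {{{
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/dict.h>

static int open_mxf_unseekable(AVFormatContext **out, const char *url)
{
    AVDictionary *opts = NULL;
    AVIOContext *pb = NULL;
    AVFormatContext *fmt = avformat_alloc_context();
    int ret;

    if (!fmt)
        return AVERROR(ENOMEM);

    av_dict_set(&opts, "seekable", "0", 0);      /* http protocol option */
    ret = avio_open2(&pb, url, AVIO_FLAG_READ, NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0) {
        avformat_free_context(fmt);
        return ret;
    }

    fmt->pb = pb;
    fmt->flags |= AVFMT_FLAG_CUSTOM_IO;          /* caller owns and frees pb */
    ret = avformat_open_input(&fmt, url, NULL, NULL);
    if (ret < 0) {
        avio_closep(&pb);                        /* fmt already freed on failure */
        return ret;
    }
    *out = fmt;                                  /* close pb after avformat_close_input() */
    return 0;
}
 }}}

 The http protocol exposes the same option on the command line, e.g.
 "ffplay -seekable 0 <url>".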

 Potential solutions:

 The file has a RandomIndexPack (RIP) of size 11087, with 921 entries. This
 means 921+ seeks to parse all partitions, and at least as many allocations.
 It should be possible to do this lazily, especially when a file has a RIP.
 You still run into an almost impossible problem if a user wants to seek
 into the middle of a file though, and I would rather not make the code even
 more complicated to try to handle that.
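
 Roughly, the lazy variant could look like this (a sketch only, assuming
 the SMPTE 377-1 RIP layout of 12-byte BodySID/ByteOffset entries followed
 by a 4-byte overall pack length; struct and function names are made up):
 a single read at the end of the file already yields one (BodySID, offset)
 pair per partition, which covers the BodySID bookkeeping that
 mxf_absolute_bodysid_offset() relies on, while the rest of each partition
 would only be parsed when actually needed.

 {{{
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct RIPEntry {
    uint32_t body_sid;
    uint64_t offset;      /* absolute offset of the partition pack */
} RIPEntry;

static uint32_t rb32(const uint8_t *p) { return (uint32_t)p[0]<<24 | p[1]<<16 | p[2]<<8 | p[3]; }
static uint64_t rb64(const uint8_t *p) { return (uint64_t)rb32(p) << 32 | rb32(p + 4); }

/* value = the RIP value field (entries + trailing length), read from EOF */
static int parse_rip(const uint8_t *value, size_t size, RIPEntry *out, int max)
{
    if (size < 4 + 12)
        return 0;
    int n = (int)((size - 4) / 12);   /* last 4 bytes are the pack length */
    if (n > max)
        n = max;
    for (int i = 0; i < n; i++) {
        out[i].body_sid = rb32(value + 12 * i);
        out[i].offset   = rb64(value + 12 * i + 4);
    }
    return n;
}

int main(void)
{
    /* Two fake entries: BodySID 1 at 0x1000, BodySID 2 at 0x2000, plus a
     * dummy trailing pack length. */
    const uint8_t rip[] = {
        0,0,0,1,  0,0,0,0,0,0,0x10,0x00,
        0,0,0,2,  0,0,0,0,0,0,0x20,0x00,
        0,0,0,28,
    };
    RIPEntry e[16];
    int n = parse_rip(rip, sizeof(rip), e, 16);
    for (int i = 0; i < n; i++)
        printf("BodySID %u -> partition at 0x%llx\n",
               e[i].body_sid, (unsigned long long)e[i].offset);
    return 0;
}
 }}}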

--
Ticket URL: <https://trac.ffmpeg.org/ticket/8426#comment:1>
FFmpeg <https://ffmpeg.org>
FFmpeg issue tracker