Hi everyone,

I'm developing a media player featuring advanced video/frame processing.
To be able to seek to any particular frame, I need to know, for each frame,
its presentation time and, if available, whether it's a key frame or not.
The latter is optional, really.

To do that, I build an "index" once the media has been validated and opened.
It's basically a loop: fetch a packet, check whether it belongs to the video
stream I'm interested in, and feed the codec until it produces a frame or
the end of the file is reached.
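
For reference, here is a simplified sketch of that loop (using the
send/receive API; error handling is stripped, and index_add() is just a
placeholder for my own bookkeeping):

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Simplified indexing loop; assumes fmt_ctx, codec_ctx and
     * video_stream_index were set up when the media was opened. */
    static void build_index(AVFormatContext *fmt_ctx,
                            AVCodecContext *codec_ctx,
                            int video_stream_index)
    {
        AVPacket *pkt  = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        while (av_read_frame(fmt_ctx, pkt) >= 0) {
            if (pkt->stream_index == video_stream_index) {
                avcodec_send_packet(codec_ctx, pkt);
                while (avcodec_receive_frame(codec_ctx, frame) >= 0) {
                    /* Record presentation time and key-frame flag. */
                    index_add(frame->best_effort_timestamp, frame->key_frame);
                }
            }
            av_packet_unref(pkt);
        }
        /* A final avcodec_send_packet(codec_ctx, NULL) would flush the
         * last buffered frames; omitted here for brevity. */

        av_frame_free(&frame);
        av_packet_free(&pkt);
    }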

This works perfectly, but it takes way too long for my purposes: around
20 seconds for a ~800 MB MP4/H.264 file.
I'm considering the following options (rough sketches for most of them
follow after this list):
- building the index in the background while playing the media: I guess I'll
have to duplicate the decoder context so the two don't interfere (sketch 1
below), but I expect playback (stuttering) issues.
- using multiple threads: from what I read, I can only use one thread per
stream. I may eventually use two threads, one for video and one for audio,
but I want to speed up video decoding first (sketch 2 below).
- using the GPU to decode frames faster: while it sounds like a good idea,
there are two constraints: I'm on Windows 7 (and soon Windows 10), and I
want a generic approach, not an H.264-only one (sketch 3 below).
- decoding only packets: it's tempting, but I think I would lose either
timestamp information or precision, because of I, P and B frames (sketch 4
below).
- using an option (av_dict_set) to do a dummy frame decoding: is there
anything like that? (sketch 5 below)
- using reference counting to avoid costly frame allocations: I already
tried that but didn't see any difference.
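
Sketch 1, for the background-indexing idea. As far as I understand, an
AVCodecContext can't simply be memcpy'd; the supported way seems to be
opening a second, independent demuxer and decoder on the same file from the
stream's codec parameters. This is an untested assumption on my side (error
handling omitted):

    /* Sketch 1: independent demuxer + decoder for indexing, leaving
     * the playback decoder untouched. */
    AVFormatContext *idx_fmt = NULL;
    avformat_open_input(&idx_fmt, filename, NULL, NULL);
    avformat_find_stream_info(idx_fmt, NULL);

    AVStream *st = idx_fmt->streams[video_stream_index];
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *idx_ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(idx_ctx, st->codecpar);
    avcodec_open2(idx_ctx, dec, NULL);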
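
Sketch 2, on threads. Unless I'm misreading the docs, libavcodec can itself
use several threads for a single stream (frame and/or slice threading),
configured on the codec context before avcodec_open2():

    /* Sketch 2: enable libavcodec's internal threading for one stream. */
    codec_ctx->thread_count = 0;                  /* 0 = auto-detect CPUs */
    codec_ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;

Frame threading adds a few frames of delay before output, which should not
matter for an indexing pass.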
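
Sketch 3, on GPU decoding. The generic hwaccel device API
(libavutil/hwcontext.h) should cover Windows 7 through DXVA2 (and D3D11VA on
Windows 10) without being H.264-specific, though whether a given codec is
actually accelerated still depends on the GPU and driver. Again an untested
assumption:

    /* Sketch 3: attach a DXVA2 hardware device to the decoder.
     * A get_format callback selecting AV_PIX_FMT_DXVA2_VLD is also
     * needed; see doc/examples/hw_decode.c in the FFmpeg sources. */
    AVBufferRef *hw_dev = NULL;
    if (av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_DXVA2,
                               NULL, NULL, 0) >= 0)
        codec_ctx->hw_device_ctx = av_buffer_ref(hw_dev);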
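
Sketch 4, for the packets-only option. av_read_frame() already exposes a
pts/dts and the key-frame flag on every packet, so an index can be built
without decoding anything. The caveats matching my concern above: packets
arrive in decode order, so the list needs a sort by pts because of B-frames,
and pkt->pts can be AV_NOPTS_VALUE in some containers:

    /* Sketch 4: packet-level index, demuxing only, no decoding. */
    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == video_stream_index) {
            int key = (pkt->flags & AV_PKT_FLAG_KEY) != 0;
            index_add(pkt->pts, key);  /* pts in stream time_base units */
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);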
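
Sketch 5, on a "dummy decoding" option. The closest things I know of are the
skip_frame / skip_loop_filter / skip_idct fields of AVCodecContext (also
reachable as codec options through the AVDictionary passed to
avcodec_open2()):

    /* Sketch 5: make decoding cheaper when only timestamps are needed.
     * Trade-off: frames skipped via skip_frame are never output, so
     * their timestamps would have to come from the packets instead. */
    codec_ctx->skip_loop_filter = AVDISCARD_ALL;    /* skip deblocking  */
    codec_ctx->skip_idct        = AVDISCARD_ALL;    /* skip IDCT stage  */
    codec_ctx->skip_frame       = AVDISCARD_NONREF; /* skip non-ref frames */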

For now, I will measure the time elapsed in each piece of code in order to
pinpoint the lengthy operations.
If you guys have any hints, I'd be glad to hear them.

Thanks.