On Thu, 30 Oct 2014 20:31:17 -0500 Rodger Combs <rodger.co...@gmail.com> wrote:
> libavcodec currently has support for hardware-accelerated decoding, but no
> support for encoding, and libavcodec+libavfilter+ffmpeg provide no support
> for a decode->filter->encode pipeline that doesn't involve copying buffers
> back and forth from the video card and cutting out a significant amount of
> the gain provided by using hardware acceleration to begin with. It'd be
> useful to provide a way to leave buffers on the GPU when possible, and copy
> back and forth only when using a filter that can't be done on the GPU.
> Some filters could even be run without copying back and forth; for instance:
> scaling (for some scalers), overlays, cropping, drawtext/subtitles (the
> drawing component, anyway), deinterlacing, trim, and some post-processing
> could likely be done for a number of GPUs relatively easily, and others could
> likely also be done with additional work.
> This would probably require significant changes to AVFrame, various
> lavc/lavfi structs and APIs, and ffmpeg.c, but it could likely produce
> significant improvements in speed and power consumption when using systems
> that can support a full decode->filter->encode pipeline on the GPU.
>
> Thoughts on feasibility and/or implementation details?

It would be pretty simple. You just need a convention for passing the API
context around, and the application would have to create this context. Other
than that, I think some video players are already doing things like this.

I don't get the need for hw encoding, though. Doesn't that usually produce
rather bad quality? (Except with "real" hw as used in broadcasting etc.,
though even there high bitrates are needed to save the day.)

I don't give a shit about ffmpeg.c, though.
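To make the context-passing convention concrete, here's a rough sketch of
what the application side could look like. The names are illustrative only
(av_hwdevice_ctx_create, AVCodecContext.hw_device_ctx and
av_hwframe_transfer_data are the shape such a hwcontext API could take, not
anything in the tree today); the point is that the application creates one
device context, and the decoder, any GPU-capable filters, and the encoder
all hold references to it, so frames only get copied down when a filter
genuinely needs system memory:

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    /* Application-owned device context, shared by the whole pipeline. */
    static int open_hw_pipeline(AVCodecContext *dec_ctx,
                                AVCodecContext *enc_ctx,
                                AVBufferRef **device_ref)
    {
        /* The application creates the context once... */
        int ret = av_hwdevice_ctx_create(device_ref, AV_HWDEVICE_TYPE_VAAPI,
                                         NULL, NULL, 0);
        if (ret < 0)
            return ret;

        /* ...and hands references to decoder and encoder alike, so
         * decoded frames can stay on the card all the way through. */
        dec_ctx->hw_device_ctx = av_buffer_ref(*device_ref);
        enc_ctx->hw_device_ctx = av_buffer_ref(*device_ref);
        if (!dec_ctx->hw_device_ctx || !enc_ctx->hw_device_ctx)
            return AVERROR(ENOMEM);
        return 0;
    }

    /* Copy back to system memory only for filters that can't run on the
     * GPU; everything else sees the hardware pix_fmt and leaves it alone. */
    static int download_if_needed(AVFrame *frame, AVFrame *sw_frame,
                                  int filter_needs_sysmem)
    {
        if (frame->format != AV_PIX_FMT_VAAPI || !filter_needs_sysmem)
            return 0;                          /* stays on the GPU */
        return av_hwframe_transfer_data(sw_frame, frame, 0);
    }

The key design point is that lavc never creates or owns the device; it just
keeps a reference, so lavfi and the encoder can share the same one.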