On Mon, Aug 26, 2019 at 11:28 AM Darrin Smith <[email protected]> wrote:
>
> Are there any "tricks" to improve the performance of merging an image
> (overlay) with a video?
>
> I have a video snippet (cropped from a larger video) that I then add an
> overlay to so additional data is added in the video. The png I use is the
> same size as the video source. I typically see the merge take about 3X
> the length of the video clip: if the clip is 5 seconds long, it normally
> takes FFmpeg 15 seconds to merge the png and the video into a final
> video. I'm using a Pixel 3XL. Not THE fastest out there, but certainly
> in the upper tier.

The overlay filter isn't really designed to run on slow ARM CPUs (the
blending code isn't well optimized for them). Usually you would do this
sort of compositing further down the pipeline in OpenGL on an embedded
target.

Does the PNG really need to be the same size as the video, or did you
just do that because it was convenient?  If the latter, see if you can
blend only the region you care about (potentially using multiple overlay
filters if there are a couple of regions).  In general, anything that
isn't hardware accelerated and touches every pixel of every video frame
is going to run very poorly on an ARM target.
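For example, something along these lines (filenames, sizes, and
coordinates are all placeholders; adjust to your actual graphics):

```shell
# Instead of a full-frame PNG, crop the overlay image down to the
# region that actually contains graphics (say a 400x100 banner) and
# position it where it belongs, so overlay only blends those pixels:
ffmpeg -i clip.mp4 -i banner.png \
  -filter_complex "[0:v][1:v]overlay=x=20:y=20" \
  -c:a copy out.mp4

# Two separate regions can be handled by chaining overlay filters:
#   "[0:v][1:v]overlay=20:20[tmp];[tmp][2:v]overlay=W-w-20:H-h-20"
```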

Also, might be worth dumping out the pipeline and making sure you're
not getting some unexpected YUV->RGB->YUV colorspace conversion in the
middle.
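A quick way to check (command is illustrative; input names are
placeholders):

```shell
# Raising the log level makes libavfilter report any conversion
# filters it inserts automatically -- look for "auto-inserting"
# messages mentioning rgb <-> yuv pixel format changes:
ffmpeg -v verbose -i clip.mp4 -i banner.png \
  -filter_complex "[0:v][1:v]overlay=20:20" -f null -

# If a round trip through RGB shows up, the overlay filter's 'format'
# option can pin blending to a YUV format (PNG alpha is still honored):
#   overlay=20:20:format=yuv420
```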

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
_______________________________________________
ffmpeg-user mailing list
[email protected]
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".