Hi mratsim, The libraries provide good medium-level access, but AFAIK they don't expose motion vectors. So I'd either have to extend them and take them on as dependencies, or go bare metal. I opted for the second choice (partly to learn Nim) and am replicating this as a starting point: [https://github.com/vadimkantorov/mpegflow](https://github.com/vadimkantorov/mpegflow)
Do you think wrapping ffms2 would be less effort than wrapping ffmpeg directly? I work on realtime object detection professionally; motion vectors from (hardware) encoders let us extract objects more reliably. FWIW, calling ffmpeg as a subprocess is very practical for almost all of our other manipulation/streaming tasks. Personally, I think video processing would make a great showcase for Arraymancer.
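To illustrate the subprocess route, here's a minimal Python sketch (file names are placeholders). It builds the command from ffmpeg's stock motion-vector example: `-flags2 +export_mvs` asks the decoder to export motion vectors, and the `codecview` filter draws them as an overlay.

```python
import shlex
import subprocess

def build_mv_overlay_cmd(input_path: str, output_path: str) -> list[str]:
    """Build an ffmpeg command that decodes with motion-vector export
    enabled and draws the vectors on the output via codecview."""
    return [
        "ffmpeg",
        "-flags2", "+export_mvs",        # export motion vectors from the decoder
        "-i", input_path,
        "-vf", "codecview=mv=pf+bf+bb",  # draw fwd-predicted and bidir MVs
        output_path,
    ]

cmd = build_mv_overlay_cmd("in.mp4", "mv_overlay.mp4")
print(shlex.join(cmd))
# Run it with e.g. subprocess.run(cmd, check=True) once ffmpeg is on PATH.
```

This only visualizes the vectors; to actually get them as arrays (what mpegflow does), you have to go through libavcodec's side-data API (`AV_FRAME_DATA_MOTION_VECTORS`) rather than the CLI.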
