> The filter pipeline starts with a file or camera device, and various
> filters are applied sequentially to frames. However, I spend a lot of time
> converting frames to QImages for analysis and painting. I'm hoping there's
> a faster way to do this. Some of the filters alter the frame; some just
> provide information about the frame.
>
> But each time, I have to unpack a QVideoFrame's pixels and make sure the
> filter can process that pixel format, or convert it to the one format it
> expects. I'm getting processing times of 55 ms on my MacBook Pro, which
> gives me 18 FPS from a 25 FPS video, so I'm dropping frames. I'm starting
> to think the ideal would be some "box of pixels" data structure that both
> QImage and QVideoFrame can use. But for now, I convert each frame to a
> QImage at each stage of the pipeline.
I'm not that versed in image manipulation, but isn't that exactly the point of the QVideoFilterRunnable::LastInChain flag? Quoting the doc:

"flags contains additional information about the filter's invocation. For example the LastInChain flag indicates that the filter is the last in a VideoOutput's associated filter list. This can be very useful in cases where multiple filters are chained together and the work is performed on image data in some custom format (for example a format specific to some computer vision framework). To avoid conversion on every filter in the chain, all intermediate filters can return a QVideoFrame hosting data in the custom format. Only the last, where the flag is set, returns a QVideoFrame in a format compatible with Qt."

In other words: pick a single pixel format, convert to it once at the start of the chain, and have every filter operate on that format without reconverting at each step. Only the filter that sees LastInChain set needs to convert back to something Qt can display.
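To make the idea concrete, here's a small self-contained C++ sketch of the pattern. This is deliberately *not* Qt code (the `Frame`, `Filter`, and `analysisFilter` names are invented for illustration): it just models a chain where intermediate filters keep frames in one custom format and only the filter flagged as last-in-chain converts back, which is what the LastInChain flag lets a QVideoFilterRunnable::run() implementation do with real QVideoFrames.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy frame: tracks its pixel format and how many conversions it has seen.
struct Frame {
    std::string format;   // e.g. "YUV", "CustomCV", "RGB32"
    int conversions = 0;  // number of format conversions applied so far
};

// A filter receives the frame plus a last-in-chain flag,
// mirroring the RunFlags passed to QVideoFilterRunnable::run().
using Filter = std::function<Frame(Frame, bool /*lastInChain*/)>;

// Convert only when the frame is not already in the target format.
Frame convert(Frame f, const std::string& to) {
    if (f.format != to) {
        f.format = to;
        ++f.conversions;
    }
    return f;
}

// Each filter works on "CustomCV" data. The conversion into that format
// is a no-op for every filter after the first, and only the last filter
// converts back to a display-friendly format.
Frame analysisFilter(Frame f, bool lastInChain) {
    f = convert(std::move(f), "CustomCV");
    // ... per-frame analysis or modification would happen here ...
    return lastInChain ? convert(std::move(f), "RGB32") : f;
}

// Run the chain, setting the flag only on the final filter.
Frame runChain(Frame f, const std::vector<Filter>& chain) {
    for (std::size_t i = 0; i < chain.size(); ++i)
        f = chain[i](std::move(f), i + 1 == chain.size());
    return f;
}
```

With three filters chained, the frame undergoes exactly two conversions (one into the custom format, one back out at the end) instead of one or more per stage, which is the saving the documentation describes.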
_______________________________________________
Development mailing list
Development@qt-project.org
https://lists.qt-project.org/listinfo/development