I sent a message on interest@ but no one replied, so I'm escalating it here. I 
am making a series of filters, but I'm running into performance problems. I 
suspect it's down to a gap in my understanding, or a lack of detail in the docs.

The filter pipeline starts with a file or camera device, and various filters 
are applied sequentially to frames. However, I spend a lot of time converting 
frames to QImages for analysis and painting, and I'm hoping there's a faster 
way to do this. Some of the filters alter the frame; some just provide 
information about it.

But each time, I have to unpack a QVideoFrame's pixels and either make sure the 
filter can process that pixel format, or convert it to the one format the 
filter expects. I'm seeing processing times of 55 ms per frame on my MacBook 
Pro, which gives me 18 FPS from a 25 FPS video, so I'm dropping frames. I'm 
starting to think the ideal would be some "box of pixels" data structure that 
both QImage and QVideoFrame can use. But for now, I convert each frame to a 
QImage at each stage of the pipeline.
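
For reference, the per-stage conversion is roughly this shape (a simplified 
sketch; frameToImage is just a placeholder name, and the real code handles 
more pixel formats and error paths):

        // Sketch: map the frame and wrap the mapped bits in a QImage.
        QImage frameToImage(QVideoFrame &frame)
        {
                if (!frame.map(QAbstractVideoBuffer::ReadOnly))
                        return QImage();

                QImage::Format fmt =
                        QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat());
                if (fmt == QImage::Format_Invalid) {
                        frame.unmap();
                        return QImage(); // needs a real conversion (e.g. for YUV formats)
                }

                // The wrapping QImage only references the mapped bits, so it has
                // to be deep-copied before unmap() -- and that copy is where the
                // time goes.
                QImage img(frame.bits(), frame.width(), frame.height(),
                           frame.bytesPerLine(), fmt);
                QImage copy = img.copy();
                frame.unmap();
                return copy;
        }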

In addition to that, I've discovered that the QVideoSurfaceFormat in run() is 
const, which means that for frames with scan line direction BottomToTop, I 
cannot correct the scan lines once and record that in the format; I always 
have to flip the QImage for the next frame, because I cannot change the scan 
line direction. The same applies to isMirrored(). I'd like to orient the frame 
properly at the start and leave it alone for the rest of the pipeline, but 
instead I have to keep flipping and unflipping it with QImage::mirrored(), 
every frame, in every filter in the pipeline. That's just silly.
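
Concretely, every filter that needs a correctly oriented image ends up doing 
something like this (a sketch; img is the QImage for the current frame and 
surfaceFormat is the one passed to run()):

        // Per-frame, per-filter orientation fix-up I'd like to do only once:
        if (surfaceFormat.scanLineDirection() == QVideoSurfaceFormat::BottomToTop)
                img = img.mirrored(false, true);   // flip vertically
        if (surfaceFormat.isMirrored())
                img = img.mirrored(true, false);   // undo horizontal mirroring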

A few things could actually help:
1) being able to change the surfaceFormat
2) QImage and QVideoFrame sharing the same pixel data when the formats match 
(the pipeline then keeps referencing the same buffer instead of copying it)
3) QVideoFrame getting a pixel(x, y, surfaceFormat) function that takes the 
surfaceFormat into account (see the sketch after this list)
4) QPainter being able to paint on a QVideoFrame
5) being able to specify the surface format for the pipeline before a frame 
gets to the pipeline
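
To illustrate 3), what I have to hand-roll today looks roughly like this (a 
sketch only; it assumes a mapped frame in a 32-bit RGB format and ignores the 
scan line direction, which is exactly what a real pixel(x, y, surfaceFormat) 
would have to take into account):

        // Hypothetical stand-in for a QVideoFrame::pixel(x, y, surfaceFormat):
        QRgb pixelAt(const QVideoFrame &frame, int x, int y)
        {
                const uchar *line = frame.bits() + y * frame.bytesPerLine();
                return reinterpret_cast<const QRgb *>(line)[x];
        }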

Some of my filters are purely QImage based, but some use OpenCV, so I then 
have to convert the QImage to an OpenCV Mat. Fortunately this is a fast 
operation under ideal conditions, but it sometimes incurs a conversion 
penalty. Usually I don't have to convert back from the Mat, because what I get 
back from OpenCV isn't pixels.

        // From my QImage -> cv::Mat conversion; 'swap' swaps R and B for
        // filters that expect BGR channel order.
        switch (img.format()) {
        case QImage::Format_RGB888: {
                auto result = qimage_to_mat_ref(img, CV_8UC3);
                if (swap) {
                        cv::cvtColor(result, result, CV_RGB2BGR);
                }
                return result;
        }
        case QImage::Format_Grayscale8:
        case QImage::Format_Indexed8:
                return qimage_to_mat_ref(img, CV_8U);
        case QImage::Format_RGB32:
        case QImage::Format_ARGB32:
        case QImage::Format_ARGB32_Premultiplied:
                return qimage_to_mat_ref(img, CV_8UC4);
        default:
                return cv::Mat();
        }

cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
        // Wraps the QImage's buffer in a cv::Mat without copying; the Mat is
        // only valid while the QImage (and its pixel data) stays alive.
        return cv::Mat(img.height(), img.width(), format, img.bits(),
                       img.bytesPerLine());
}


Aside from OpenCLing the conversion, is there anything I can do?



