I'm still plugging away at this. My life is being made difficult by not being able to get a pointer to the frame's buffer. For OpenCV, some routines want a color image, others want an 8-bit grayscale image. It would be really great if I could use both of these at the same time.
 
For example, take the color video frame, make it grayscale, then run HoughLines on it, and use that information to highlight the lines in the color frame. I tried to do this with a simple QMap<int, cv::Mat>, but there's no way I can access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I might be able to hack it in using QAbstractPlanarVideoBuffer, but that feels very hacky (plane 0 = color, plane 1 = B&W), and in addition the type sometimes needs to change from quint8s to floats.
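The OpenCV half of that is straightforward once the frame is available as a Mat; a rough sketch (assuming the color frame is already an 8-bit BGR cv::Mat, which is exactly the part Qt makes hard to get to):

#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch only: gray copy for analysis, detected lines drawn back onto the color frame.
void highlightLines(cv::Mat &color)
{
    cv::Mat gray;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);   // 8-bit gray copy for analysis
    cv::Canny(gray, gray, 50, 150);                  // HoughLinesP wants an edge map

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(gray, lines, 1, CV_PI / 180, 80, 30, 10);

    // Highlight the detected segments on the original color frame.
    for (const cv::Vec4i &l : lines)
        cv::line(color, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 0, 255), 2);
}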
 
I feel like I'm really off in the weeds here and would appreciate it if someone could chime in: am I completely missing something, or are these shortcomings in the Qt API?
 
 
Sent: Monday, January 07, 2019 at 5:22 PM
From: "Jason H" <jh...@gmx.com>
To: "Jason H" <jh...@gmx.com>
Cc: "Pierre-Yves Siret" <py.si...@gmail.com>, "Qt development mailing list" <development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat (or my own custom type, CvMatVideoBuffer or ByteArrayVideoBuffer respectively), but I'm running into a mental block about how this should work. Only map() gives pixel data; I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I could then cast to my custom type. Generally when I'm fighting Qt in this way, I'm doing something wrong.
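For reference, this is roughly the buffer I'm trying to write (just a sketch against Qt 5's QAbstractVideoBuffer; the UserHandle/handle() route is my workaround for the missing buffer() accessor, and the naming is mine):

#include <QAbstractVideoBuffer>
#include <QVariant>
#include <opencv2/core.hpp>
#include <utility>

Q_DECLARE_METATYPE(cv::Mat)

// Sketch: a video buffer backed by a cv::Mat. Downstream filters can recover the Mat
// via QVideoFrame::handle().value<cv::Mat>() instead of re-wrapping the map()'d bytes.
class CvMatVideoBuffer : public QAbstractVideoBuffer
{
public:
    explicit CvMatVideoBuffer(cv::Mat mat)
        : QAbstractVideoBuffer(UserHandle), m_mat(std::move(mat)) {}

    MapMode mapMode() const override { return m_mapMode; }

    uchar *map(MapMode mode, int *numBytes, int *bytesPerLine) override
    {
        m_mapMode = mode;
        if (numBytes)
            *numBytes = static_cast<int>(m_mat.total() * m_mat.elemSize());
        if (bytesPerLine)
            *bytesPerLine = static_cast<int>(m_mat.step);
        return m_mat.data;
    }

    void unmap() override { m_mapMode = NotMapped; }

    QVariant handle() const override { return QVariant::fromValue(m_mat); }

private:
    cv::Mat m_mat;
    MapMode m_mapMode = NotMapped;
};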
 
I can convert between QImage and cv::Mat with:

#include <QImage>
#include <opencv2/core.hpp>

// Wrap a QImage's pixel buffer in a cv::Mat without copying; the QImage must outlive the Mat.
cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}

// Deep-copy variant: safe to keep after the QImage is gone.
cv::Mat qimage_to_mat_cpy(QImage const &img, int format)
{
    return qimage_to_mat_ref(const_cast<QImage&>(img), format).clone();
}

// Wrap a cv::Mat's data in a QImage without copying; the Mat must outlive the QImage.
QImage mat_to_qimage_ref(cv::Mat &mat, QImage::Format format)
{
    return QImage(mat.data, mat.cols, mat.rows, static_cast<int>(mat.step), format);
}
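For example, for a 32-bit RGB QImage the matching OpenCV type would be CV_8UC4:

QImage src(640, 480, QImage::Format_RGB32);
cv::Mat copy = qimage_to_mat_cpy(src, CV_8UC4);               // independent pixel copy
QImage view = mat_to_qimage_ref(copy, QImage::Format_RGB32);  // zero-copy view of `copy`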
 
Is there an example of how to "properly" use Qt's video pipeline filters, frames, and buffers with OpenCV? I think there should be a class that converts a QVideoFrame to a cv::Mat, and one that converts a cv::Mat back to a QVideoFrame:
filters: [toMat, blur, sobel, houghLines, toVideoFrame]
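To make that concrete, a rough sketch of the toMat step (my naming, Qt 5 API; the QAbstractVideoFilter subclass that returns it from createFilterRunnable(), and the toVideoFrame inverse, are omitted):

#include <QAbstractVideoFilter>
#include <QVideoSurfaceFormat>
#include <opencv2/core.hpp>

// Sketch: copy the mapped pixels into a cv::Mat and ship it downstream in the
// CvMatVideoBuffer sketched above, so later filters work on the Mat directly.
class ToMatRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                    RunFlags flags) override
    {
        Q_UNUSED(surfaceFormat)
        Q_UNUSED(flags)
        if (!input->map(QAbstractVideoBuffer::ReadOnly))
            return *input;

        // Assumes a 32-bit RGB/BGR pixel format; a real filter would switch on
        // input->pixelFormat() here.
        cv::Mat view(input->height(), input->width(), CV_8UC4,
                     input->bits(), input->bytesPerLine());
        cv::Mat owned = view.clone();          // own the pixels before unmapping
        input->unmap();

        return QVideoFrame(new CvMatVideoBuffer(owned),
                           QSize(owned.cols, owned.rows),
                           QVideoFrame::Format_User);
    }
};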
 
Many thanks in advance. 
 
 
Sent: Monday, January 07, 2019 at 10:57 AM
From: "Jason H" <jh...@gmx.com>
To: "Jason H" <jh...@gmx.com>
Cc: "Pierre-Yves Siret" <py.si...@gmail.com>, "Qt development mailing list" <development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
I have been thinking about this more, and I think we also need to convert when the pipeline switches between internal formats. This would allow standard filter toolkits to be "a thing" for Qt.
 
For example, if my pipeline filters are written to use QImage (because of scanLine() and pixel()) and someone else's use cv::Mat (OpenCV), alternating between formats is not possible in the same pipeline. I think the panacea is to be able to convert not just at the end, but at any step:
[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, cv::Mat->QImage, QImage->QVideoFrame] where each format step is the (inputFormat -> outputFormat)
 
Just my 0.02BTC.
 
 
Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" <jh...@gmx.com>
To: "Pierre-Yves Siret" <py.si...@gmail.com>
Cc: "Qt development mailing list" <development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
Thanks for pointing that out. I guess that could work. It's not as elegant as what I want, where everything presents the same way. Now each and every filter has to have
 
if (flags & QVideoFilterRunnable::LastInChain) {
   ... generate the frame for the backend per the surfaceFormat
}
 
As there are many surfaceFormats, that if(){} block is huge, and duplicated in each filter. True, I can create a "final filter" that does this to avoid all that boilerplate code that takes the frame and converts it back to what it needs to be. But what I suggested was that Qt should provide this automatically in the filter chain. The difference is this:
 
 
VideoOutput {
   filters: [sobel, houghLines]
}
 
VideoOutput {
   filters: [sobel, houghLines, final]
}
 
Ideally that final filter checks whether the frame matches what it expects and performs a conversion only if it does not. Maybe there's a way to register a conversion from a custom type to a QVideoFrame?
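Something along these lines (just a sketch against the Qt 5 API; it only handles pixel formats QImage understands, so a real version would still need the rest of that if(){} block):

#include <QAbstractVideoFilter>
#include <QVideoSurfaceFormat>
#include <QImage>

// Sketch of the "final" filter: pass through when possible, otherwise rebuild a
// frame in a format the backend can display.
class FinalRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                    RunFlags flags) override
    {
        Q_UNUSED(flags)
        // Already what the surface expects: no work to do.
        if (input->pixelFormat() == surfaceFormat.pixelFormat())
            return *input;

        if (!input->map(QAbstractVideoBuffer::ReadOnly))
            return *input;

        // Rebuild a compatible frame via QImage (deep copy before unmapping).
        QImage img(input->bits(), input->width(), input->height(),
                   input->bytesPerLine(),
                   QVideoFrame::imageFormatFromPixelFormat(input->pixelFormat()));
        QVideoFrame out(img.copy());
        input->unmap();
        return out;
    }
};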
Also, if the VideoOutput is not needed*, the final filter need not be invoked.
 
By not needed, I mean the video output element is not visible, or its area is 0. Sometimes we want to provide information about the frames without affecting them. Currently this is inherently synchronous, which negatively impacts frame rate.
I should be able to use two (or more) VideoOutputs, one for real-time video display and another for an info-only filter pipeline, and these could be distributed across CPU cores. Unfortunately, the VideoOutput takes over the video source, forcing source-output mappings to be 1:1. It would be really nice if it could be 1:N. I experimented with this, and the first VideoOutput is the only one to receive a frame from a source, and the only one with an active filter pipeline. How could I have 3 VideoOutputs, each with its own filter pipeline, and visualize them simultaneously?
 
Camera { id: camera }
 
VideoOutput {  // only this one works. If I move this after the next one, then that one works.
   filters: [sobel, houghLines]  
   source: camera
}
 
VideoOutput {
   filters: [sobel, houghLines, final]
   source: camera
}
 
So to sum this up:
- Qt should provide automatic frame reconstruction for the final frame (that big if(){} block is boilerplate Qt could supply)
- A way to register custom format to QVideoFrame reconstruction function
- Allow for multiple VideoOutputs (and filter pipelines) from the same source
-- Maybe an element for a pipeline with no video output?
 
Am I wrong in thinking that any of that doesn't already exist, or that it's a good idea?
 
Sent: Saturday, December 22, 2018 at 5:10 AM
From: "Pierre-Yves Siret" <py.si...@gmail.com>
To: "Jason H" <jh...@gmx.com>
Cc: "Qt development mailing list" <development@qt-project.org>
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
 
The filter pipeline starts with a file or camera device, and various filters are applied sequentially to frames. However, I spend a lot of time converting frames to QImages for analysis and painting. I'm hoping there's a faster way to do this. Some of the filters alter the frame; some just provide information about the frame.

But each time, I have to unpack a QVideoFrame's pixels and make sure the filter can process that pixel format, or convert it to the one format it expects. I'm getting processing times of 55 ms on my MacBook Pro, which gives me 18 FPS from a 25 FPS video, so I'm dropping frames. I am starting to think the ideal would be to have some "box of pixels" data structure that both QImage and QVideoFrame can use. But for now, I convert each frame to a QImage at each stage of the pipeline.
 
I'm not that versed in image manipulation, but isn't that the point of the QVideoFilterRunnable::LastInChain flag?
Quoting the doc: 
"flags contains additional information about the filter's invocation. For example the LastInChain flag indicates that the filter is the last in a VideoOutput's associated filter list. This can be very useful in cases where multiple filters are chained together and the work is performed on image data in some custom format (for example a format specific to some computer vision framework). To avoid conversion on every filter in the chain, all intermediate filters can return a QVideoFrame hosting data in the custom format. Only the last, where the flag is set, returns a QVideoFrame in a format compatible with Qt."
 
You could try using just one pixel format and use that in all your filters without reconverting it at each step.
 
_______________________________________________
Development mailing list
Development@qt-project.org
https://lists.qt-project.org/listinfo/development
