Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-02-04 Thread Val Doroshchuk
> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
> cast to my custom type.

https://codereview.qt-project.org/#/c/251783/


From: Development  on behalf of Val 
Doroshchuk 
Sent: Wednesday, January 9, 2019 9:43 AM
To: Jason H
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


Hi,

> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
> cast to my custom type.


Sorry, but just a note.

QAbstractVideoBuffer is an abstraction that was only intended to be an 
implementation of data access, nothing more.


> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat


Why do you need to use cv::Mat there?

If needed, you could create a QAbstractVideoBuffer::UserHandle and return an id 
to access cv's data without downloading/uploading during mapping.
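
Something like this, roughly (just a sketch, not tested; the class name and the 
external id-based storage are made up here):

#include <QAbstractVideoBuffer>
#include <QVariant>

class CvMatHandleBuffer : public QAbstractVideoBuffer
{
public:
    explicit CvMatHandleBuffer(int storageId)
        : QAbstractVideoBuffer(UserHandle), m_id(storageId) {}

    QVariant handle() const override { return m_id; }              // id into an external cv::Mat store
    MapMode mapMode() const override { return NotMapped; }
    uchar *map(MapMode, int *, int *) override { return nullptr; } // no CPU pixel mapping needed
    void unmap() override {}

private:
    int m_id;
};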


>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
> and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
> sobel, houghLines, toVideoFrame]


Is converting a QVideoFrame to cv::Mat and back a performance-wise operation in 
this case?

(qt_imageFromVideoFrame(QVideoFrame) can convert to QImage, and should be 
relatively fast)




From: Development  on behalf of Jason H 

Sent: Tuesday, January 8, 2019 6:33:14 PM
To: Jason H
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage

I'm still plugging away at this. My life is being made difficult by not being 
able to get a pointer to the buffer. For OpenCV, some routines want a color 
image, others want an 8 bit gray scale. It would be really great if I could use 
both of these at the same time.

For example, take the color video frame, make it grayscale, then run 
houghLines on it, using that information to highlight the lines in the color 
frame. I tried to do this with a simple QMap but there's no way I 
can access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I 
might be able to hack it in using QAbstractPlanarVideoBuffer, but that feels 
very hacky (plane 0 = color, plane 2 = B); in addition, sometimes the type needs 
to change from quint8s to floats.

I feel like I'm really off in the weeds here and would like someone to chime 
in, if I'm completely missing something or if these are shortcomings in the Qt 
API?


Sent: Monday, January 07, 2019 at 5:22 PM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 

Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat (or my own 
custom type, CvMatVideoBuffer or ByteArrayVideoBuffer respectively), but I'm 
running into a mental block with how this should work. Only map() gives pixel 
data; I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can 
then cast to my custom type. Generally when I'm fighting Qt in this way, I'm 
doing something wrong.

I can convert between QImage/cv::Mat with:

cv::Mat qimage_to_mat_cpy(QImage const &img, bool swap)
{
    return qimage_to_mat_ref(const_cast<QImage &>(img), swap).clone();
}


QImage mat_to_qimage_ref(cv::Mat &mat, QImage::Format format)
{
    return QImage(mat.data, mat.cols, mat.rows, static_cast<int>(mat.step), format);
}

cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}



Is there an example of how to "properly" use Qt's video pipeline filters, 
frames, and buffers with OpenCV?  I think there should be class(es) that 
converts a QVideoFrame to a cv::Mat, and one that converts from cv::Mat to 
QVideoFrame:
filters: [toMat, blur, sobel, houghLines, toVideoFrame]

Many thanks in advance.


Sent: Monday, January 07, 2019 at 10:57 AM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 

Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
I have been thinking about this more, and I think we also need to convert when 
the pipeline switches between internal formats. This would allow standard 
filter toolkits to be "a thing" for Qt.

For example, if my pipeline filters are written to use QImage (because of 
scanLine() and pixel()), and someone else's use cv::Mat (OpenCV), alternating 
between formats is not possible in the same pipeline. I think the panacea is to 
be able to convert not just at the end, but at any step:
[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, 
cv::Mat->QImage, QImage->QVideoFrame] where each format step is the 
(inputFormat -> outputFormat)

Just my 0.02BTC.


Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" 
To: "Pierre-Yves Siret" 
Cc: "Qt development mailing

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-11 Thread Val Doroshchuk
> is this happening on multiple threads (or cores, if available)? Are there 
> multiple frames "in flight"?


No and no, just one thread.


From: Jason H 
Sent: Thursday, January 10, 2019 5:28:11 PM
To: Val Doroshchuk
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage

> From: "Val Doroshchuk" 

> Hi,
> thanks for explanation.

> Does it sound good to try to create a data storage where the converted data is 
> kept per real frame (created by the first filter)?
> Following filters would use QVideoFrames with a custom 
> QAbstractVideoBuffer::UserHandle and an id from the storage?

I don't know about the QAbstractVideoBuffer::UserHandle method. That's why I'm 
having this conversation. I don't know what the best answer is. I was playing 
with the idea yesterday of just using the metadata with a 
QString("%1_%2x%3").arg(cvMatType,width,height) key and checking to see if that 
exists at whatever pipeline stage needs it. This way, I can just return the 
frame, skip the conversion back and pay for only the mats I need.
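
Roughly this kind of thing (untested sketch; toGray8() is a stand-in for whatever 
conversion is actually needed, and the key scheme is only illustrative):

#include <QVideoFrame>
#include <QVariant>
#include <opencv2/core.hpp>

Q_DECLARE_METATYPE(cv::Mat)

static QString matKey(int cvType, int width, int height)
{
    return QStringLiteral("%1_%2x%3").arg(cvType).arg(width).arg(height);
}

cv::Mat grayMatFor(QVideoFrame &frame)
{
    const QString key = matKey(CV_8U, frame.width(), frame.height());
    if (frame.availableMetaData().contains(key))        // a previous stage already paid for it
        return frame.metaData(key).value<cv::Mat>();

    cv::Mat gray = toGray8(frame);                       // stand-in conversion helper
    frame.setMetaData(key, QVariant::fromValue(gray));   // pay the conversion once, reuse downstream
    return gray;
}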

Unless there's a better way? One thing I wondered: since it's a runnable, is 
this happening on multiple threads (or cores, if available)? Are there 
multiple frames "in flight"?




From: Jason H 
Sent: Wednesday, January 9, 2019 4:28:01 PM
To: Val Doroshchuk
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


>> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
>> cast to my custom type.

> Sorry, but just a note.
> QAbstractVideoBuffer is an abstraction that was only intended to be an 
> implementation of data access, nothing more.


>> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat

> Why do you need to use cv::Mat there?
> If needed, you could create a QAbstractVideoBuffer::UserHandle and return an id 
> to access cv's data without downloading/uploading during mapping.

Because the video frame needs to be converted to a format that OpenCV 
understands. Various OpenCV functions need particular pixel representation 
types: 8-bit grayscale, floating point, or a float or int array for 3 or 4 
color channels. I basically want to do this conversion once, then pass that on to 
later stages in the pipeline. In general the OpenCV functions don't modify the 
frame itself, but produce information about the frame. The filters I have emit 
the results of the computation in some cases (a list of line segments for 
houghLines) while others, like sobel, create a new image. Depending on the 
pipeline, I might want multiple OpenCV representations. I generally use 
QPainter though, to draw on the frame after having gotten the information.

>>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
>> and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
>> sobel, houghLines, toVideoFrame]

> Is converting a QVideoFrame to cv::Mat and back a performance-wise operation 
> in this case?
> (qt_imageFromVideoFrame(QVideoFrame) can convert to QImage, and should be 
> relatively fast)

Well, I generally don't need to convert back, though this is possible. There 
are OpenCV functions that produce an image. I'm fine with using OpenCV to 
extract info and keep the drawing in Qt (QPainter). So there's no need for me 
to convert it back. Though I will say that the OpenCV functions are _very_ 
fast, supporting multiple acceleration methods (uses Eigen). I wrote my own 
implementations of some of them purely in Qt, and the OpenCV stuff puts them to 
shame. Image saving is the only thing where Qt is faster (IMHO, empirical 
evidence not yet collected). One other technique I use is to scale the image 
down (say by 50% or 25%), which cuts the time to a quarter or a sixteenth 
respectively, if I don't need pixel-perfect accuracy.

I would expect some way to attach cv::Mats to a frame. That way I could look 
through what mats are available, and only pay a penalty when I have to. If a 
previous filter operation already produced an 8-bit grayscale, I would just use 
that, and if it doesn't exist create it. Later filters could then use it, or 
create their own.

if (!frame.containsMat(CV_8U)) frame.insertMat(CV_8U, frame.toMat(CV_8U));

cv::Mat mat = frame.mat(CV_8U);
...
frame.insertMat(CV_8U, mat); // if I need to save (like for sobel)

WRT the "Big Picture": I'm trying to a point where I can in QML, have filters, 
which are OpenCV functions programmed to a dynamic filter pipeline.  My 
approach is working but the cost of all the conversions is very expensive. 
We're talking 50msec per frame, which gets me down into 1 filter is 15pfs 
territory, 2 filters is 5 fps, etc. My source is 25-29.97FPS. The way I've been 
doing this i

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-10 Thread Jason H
> From: "Val Doroshchuk" 

> Hi,
> thanks for explanation.
 
> Does it sound good to try to create a data storage where the converted data is 
> kept per real frame (created by the first filter)?
> Following filters would use QVideoFrames with a custom 
> QAbstractVideoBuffer::UserHandle and an id from the storage?
 
I don't know about the QAbstractVideoBuffer::UserHandle method. That's why I'm 
having this conversation. I don't know what the best answer is. I was playing 
with the idea yesterday of just using the metadata with a 
QString("%1_%2x%3").arg(cvMatType,width,height) key and checking to see if that 
exists at whatever pipeline stage needs it. This way, I can just return the 
frame, skip the conversion back and pay for only the mats I need.

Unless there's a better way? One thing I wondered: since it's a runnable, is 
this happening on multiple threads (or cores, if available)? Are there 
multiple frames "in flight"?




From: Jason H 
Sent: Wednesday, January 9, 2019 4:28:01 PM
To: Val Doroshchuk
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
 

>> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
>>cast to my custom type.
 
> Sorry, but just a note.
> QAbstractVideoBuffer is an abstraction that was only intended to be an 
> implementation of data access, nothing more.
 
 
>> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat
 
> Why do you need to use cv::Mat there?
> If needed, you could create a QAbstractVideoBuffer::UserHandle and return an id 
> to access cv's data without downloading/uploading during mapping.

Because the video frame needs to be converted to a format that OpenCV 
understands. Various OpenCV functions need particular pixel representation 
types: 8-bit grayscale, floating point, or a float or int array for 3 or 4 
color channels. I basically want to do this conversion once, then pass that on to 
later stages in the pipeline. In general the OpenCV functions don't modify the 
frame itself, but produce information about the frame. The filters I have emit 
the results of the computation in some cases (a list of line segments for 
houghLines) while others, like sobel, create a new image. Depending on the 
pipeline, I might want multiple OpenCV representations. I generally use 
QPainter though, to draw on the frame after having gotten the information.
 
>>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
>>and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
>>sobel, houghLines, toVideoFrame]
 
> Is converting a QVideoFrame to cv::Mat and back a performance-wise operation 
> in this case?
> (qt_imageFromVideoFrame(QVideoFrame) can convert to QImage, and should be 
> relatively fast)
 
Well, I generally don't need to convert back, though this is possible. There 
are OpenCV functions that produce an image. I'm fine with using OpenCV to 
extract info and keep the drawing in Qt (QPainter). So there's no need for me 
to convert it back. Though I will say that the OpenCV functions are _very_ 
fast, supporting multiple acceleration methods (uses Eigen). I wrote my own 
implementations of some of them purely in Qt, and the OpenCV stuff puts them to 
shame. Image saving is the only thing where Qt is faster (IMHO, empirical 
evidence not yet collected). One other technique I use is to scale the image 
down (say by 50% or 25%), which cuts the time to a quarter or a sixteenth 
respectively, if I don't need pixel-perfect accuracy.

I would expect some way to attach cv::Mats to a frame. That way I could look 
through what mats are available, and only pay a penalty when I have to. If a 
previous filter operation already produced an 8-bit grayscale, I would just use 
that, and if it doesn't exist create it. Later filters could then use it, or 
create their own.

if (!frame.containsMat(CV_8U)) frame.insertMat(CV_8U, frame.toMat(CV_8U));

cv::Mat mat = frame.mat(CV_8U);
...
frame.insertMat(CV_8U, mat); // if I need to save (like for sobel)

WRT the "Big Picture": I'm trying to a point where I can in QML, have filters, 
which are OpenCV functions programmed to a dynamic filter pipeline.  My 
approach is working but the cost of all the conversions is very expensive. 
We're talking 50msec per frame, which gets me down into 1 filter is 15pfs 
territory, 2 filters is 5 fps, etc. My source is 25-29.97FPS. The way I've been 
doing this is copying the QVideoFrame to QImage, then using that for a Mat.If I 
can just pay the conversion penalty once, I think that would go a long way in 
helping.

Maybe what I need to do is to make the cv::Mat a QVariant, store it as 
metadata, and use QVideoFrame's availableMetaData()?


 


From: Develo

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-10 Thread Val Doroshchuk
Hi,

thanks for explanation.


Does it sound good to try to create a data storage where the converted data is 
kept per real frame (created by the first filter)?

Following filters would use QVideoFrames with a custom 
QAbstractVideoBuffer::UserHandle and an id from the storage?
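
Something along these lines, perhaps (just a sketch; the class and function 
names are invented):

#include <QHash>
#include <QMutex>
#include <opencv2/core.hpp>

class FrameMatStorage
{
public:
    int insert(const cv::Mat &mat)
    {
        QMutexLocker lock(&m_mutex);
        const int id = ++m_lastId;          // id handed out via the frame's UserHandle
        m_mats.insert(id, mat);
        return id;
    }
    cv::Mat take(int id)
    {
        QMutexLocker lock(&m_mutex);
        return m_mats.take(id);             // later filters look the converted data up here
    }

private:
    QMutex m_mutex;
    QHash<int, cv::Mat> m_mats;
    int m_lastId = 0;
};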



From: Jason H 
Sent: Wednesday, January 9, 2019 4:28:01 PM
To: Val Doroshchuk
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


>> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
>> cast to my custom type.

> Sorry, but just a note.
> QAbstractVideoBuffer is an abstraction that was only intended to be an 
> implementation of data access, nothing more.


>> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat

> Why do you need to use cv::Mat there?
> If needed, you could create a QAbstractVideoBuffer::UserHandle and return an id 
> to access cv's data without downloading/uploading during mapping.

Because the video frame needs to be converted to a format that OpenCV 
understands. Various OpenCV functions need particular pixel representation 
types: 8-bit grayscale, floating point, or a float or int array for 3 or 4 
color channels. I basically want to do this conversion once, then pass that on to 
later stages in the pipeline. In general the OpenCV functions don't modify the 
frame itself, but produce information about the frame. The filters I have emit 
the results of the computation in some cases (a list of line segments for 
houghLines) while others, like sobel, create a new image. Depending on the 
pipeline, I might want multiple OpenCV representations. I generally use 
QPainter though, to draw on the frame after having gotten the information.

>>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
>> and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
>> sobel, houghLines, toVideoFrame]

> Is converting a QVideoFrame to cv::Mat and back a performance-wise operation 
> in this case?
> (qt_imageFromVideoFrame(QVideoFrame) can convert to QImage, and should be 
> relatively fast)

Well, I generally don't need to convert back, though this is possible. There 
are OpenCV functions that produce an image. I'm fine with using OpenCV to 
extract info and keep the drawing in Qt (QPainter). So there's no need for me 
to convert it back. Though I will say that the OpenCV functions are _very_ 
fast, supporting multiple acceleration methods (uses Eigen). I wrote my own 
implementations of some of them purely in Qt, and the OpenCV stuff puts them to 
shame. Image saving is the only thing where Qt is faster (IMHO, empirical 
evidence not yet collected). One other technique I use is to scale the image 
down (say by 50% or 25%), which cuts the time to a quarter or a sixteenth 
respectively, if I don't need pixel-perfect accuracy.

I would expect some way to attach cv::Mats to a frame. That way I could look 
through what mats are available, and only pay a penalty when I have to. If a 
previous filter operation already produced an 8-bit grayscale, I would just use 
that, and if it doesn't exist create it. Later filters could then use it, or 
create their own.

if (!frame.containsMat(CV_8U)) frame.insertMat(CV_8U, frame.toMat(CV_8U));

cv::Mat mat = frame.mat(CV_8U);
...
frame.insertMat(CV_8U, mat); // if I need to save (like for sobel)

WRT the "Big Picture": I'm trying to a point where I can in QML, have filters, 
which are OpenCV functions programmed to a dynamic filter pipeline.  My 
approach is working but the cost of all the conversions is very expensive. 
We're talking 50msec per frame, which gets me down into 1 filter is 15pfs 
territory, 2 filters is 5 fps, etc. My source is 25-29.97FPS. The way I've been 
doing this is copying the QVideoFrame to QImage, then using that for a Mat.If I 
can just pay the conversion penalty once, I think that would go a long way in 
helping.

Maybe what I need to do is to make the cv::Mat a QVariant, store it as 
metadata, and use QVideoFrame's availableMetaData()?





From: Development  on behalf of Jason H 

Sent: Tuesday, January 8, 2019 6:33:14 PM
To: Jason H
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage


I'm still plugging away at this. My life is being made difficult by not being 
able to get a pointer to the buffer. For OpenCV, some routines want a color 
image, others want an 8 bit gray scale. It would be really great if I could use 
both of these at the same time.

For example, take the color video frame, make it grayscale, then run 
houghLines on it, using that information to highlight the lines in the color 
frame. I tried to do this with a simple QMap but there's no way I 
can access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I 
might 

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-09 Thread Jason H

>> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
>>cast to my custom type.
 
> Sorry, but just a note.
> QAbstractVideoBuffer is an abstraction that was only intended to be an 
> implementation of data access, nothing more.
 
 
>> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat
 
> Why do you need to use cv::Mat there?
> If needed, you could create a QAbstractVideoBuffer::UserHandle and return an id 
> to access cv's data without downloading/uploading during mapping.

Because the video frame needs to be converted to a format that OpenCV 
understands. Various OpenCV functions need particular pixel representation 
types: 8-bit grayscale, floating point, or a float or int array for 3 or 4 
color channels. I basically want to do this conversion once, then pass that on to 
later stages in the pipeline. In general the OpenCV functions don't modify the 
frame itself, but produce information about the frame. The filters I have emit 
the results of the computation in some cases (a list of line segments for 
houghLines) while others, like sobel, create a new image. Depending on the 
pipeline, I might want multiple OpenCV representations. I generally use 
QPainter though, to draw on the frame after having gotten the information. 
 
>>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
>>and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
>>sobel, houghLines, toVideoFrame]
 
> Is converting a QVideoFrame to cv::Mat and back a performance-wise operation 
> in this case?
> (qt_imageFromVideoFrame(QVideoFrame) can convert to QImage, and should be 
> relatively fast)
 
Well, I generally don't need to convert back, though this is possible. There 
are OpenCV functions that produce an image. I'm fine with using OpenCV to 
extract info and keep the drawing in Qt (QPainter). So there's no need for me 
to convert it back. Though I will say that the OpenCV functions are _very_ 
fast, supporting multiple acceleration methods (uses Eigen). I wrote my own 
implementations of some of them purely in Qt, and the OpenCV stuff puts them to 
shame. Image saving is the only thing where Qt is faster (IMHO, empirical 
evidence not yet collected). One other technique I use is to scale the image 
down (say by 50% or 25%), which cuts the time to a quarter or a sixteenth 
respectively, if I don't need pixel-perfect accuracy.

I would expect some way to attach cv::Mats to a frame. That way I could look 
through what mats are available, and only pay a penalty when I have to. If a 
previous filter operation already produced an 8-bit grayscale, I would just use 
that, and if it doesn't exist create it. Later filters could then use it, or 
create their own.

if (!frame.containsMat(CV_8U)) frame.insertMat(CV_8U, frame.toMat(CV_8U));

cv::Mat mat = frame.mat(CV_8U);
...
frame.insertMat(CV_8U, mat); // if I need to save (like for sobel)

WRT the "Big Picture": I'm trying to a point where I can in QML, have filters, 
which are OpenCV functions programmed to a dynamic filter pipeline.  My 
approach is working but the cost of all the conversions is very expensive. 
We're talking 50msec per frame, which gets me down into 1 filter is 15pfs 
territory, 2 filters is 5 fps, etc. My source is 25-29.97FPS. The way I've been 
doing this is copying the QVideoFrame to QImage, then using that for a Mat.If I 
can just pay the conversion penalty once, I think that would go a long way in 
helping. 

Maybe what I need to do is to make the cv::Mat a QVariant, store it as 
metadata, and use QVideoFrame's availableMetaData()?


 


From: Development  on behalf of Jason H 

Sent: Tuesday, January 8, 2019 6:33:14 PM
To: Jason H
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
 

I'm still plugging away at this. My life is being made difficult by not being 
able to get a pointer to the buffer. For OpenCV, some routines want a color 
image, others want an 8 bit gray scale. It would be really great if I could use 
both of these at the same time.
 
For example, take the color video frame, make it grayscale, then run 
houghLines on it, using that information to highlight the lines in the color 
frame. I tried to do this with a simple QMap but there's no way I 
can access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I 
might be able to hack it in using QAbstractPlanarVideoBuffer, but that feels 
very hacky (plane 0 = color, plane 2 = B); in addition, sometimes the type needs 
to change from quint8s to floats.
 
I feel like I'm really off in the weeds here and would like someone to chime 
in, if I'm completely missing something or if these are shortcomings in the Qt 
API?
 
 

Sent: Monday, January 07, 2019 at 5:22 PM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt d

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-09 Thread Val Doroshchuk
Hi,

> I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then 
> cast to my custom type.


Sorry, but just a note.

QAbstractVideoBuffer is an abstraction that was only intended to be an 
implementation of data access, nothing more.


> I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat


Why do you need to use cv::Mat there?

If needed, you could create a QAbstractVideoBuffer::UserHandle and return an id 
to access cv's data without downloading/uploading during mapping.


>  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, 
> and one that converts from cv::Mat to QVideoFrame: filters: [toMat, blur, 
> sobel, houghLines, toVideoFrame]


Is converting a QVideoFrame to cv::Mat and back a performance-wise operation in 
this case?

(qt_imageFromVideoFrame(QVideoFrame) can convert to QImage, and should be 
relatively fast)




From: Development  on behalf of Jason H 

Sent: Tuesday, January 8, 2019 6:33:14 PM
To: Jason H
Cc: Qt development mailing list
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage

I'm still plugging away at this. My life is being made difficult by not being 
able to get a pointer to the buffer. For OpenCV, some routines want a color 
image, others want an 8 bit gray scale. It would be really great if I could use 
both of these at the same time.

For example, take the color video frame, make it grayscale, then run 
houghLines on it, using that information to highlight the lines in the color 
frame. I tried to do this with a simple QMap but there's no way I 
can access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I 
might be able to hack it in using QAbstractPlanarVideoBuffer, but that feels 
very hacky (plane 0 = color, plane 2 = B); in addition, sometimes the type needs 
to change from quint8s to floats.

I feel like I'm really off in the weeds here and would like someone to chime 
in, if I'm completely missing something or if these are shortcomings in the Qt 
API?


Sent: Monday, January 07, 2019 at 5:22 PM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 

Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat (or my own 
custom type, CvMatVideoBuffer or ByteArrayVideoBuffer respectively), but I'm 
running into a mental block with how this should work. Only map() gives pixel 
data; I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can 
then cast to my custom type. Generally when I'm fighting Qt in this way, I'm 
doing something wrong.

I can convert between QImage/cv::Mat with:

cv::Mat qimage_to_mat_cpy(QImage const &img, bool swap)
{
    return qimage_to_mat_ref(const_cast<QImage &>(img), swap).clone();
}


QImage mat_to_qimage_ref(cv::Mat &mat, QImage::Format format)
{
    return QImage(mat.data, mat.cols, mat.rows, static_cast<int>(mat.step), format);
}

cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}



Is there an example of how to "properly" use Qt's video pipeline filters, 
frames, and buffers with OpenCV?  I think there should be class(es) that 
converts a QVideoFrame to a cv::Mat, and one that converts from cv::Mat to 
QVideoFrame:
filters: [toMat, blur, sobel, houghLines, toVideoFrame]

Many thanks in advance.


Sent: Monday, January 07, 2019 at 10:57 AM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 

Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
I have been thinking about this more, and I think we also need to convert when 
the pipeline switches between internal formats. This would allow standard 
filter toolkits to be "a thing" for Qt.

For example, if my pipeline filters are written to use QImage (because of 
scanLine() and pixel()), and someone else's use cv::Mat (OpenCV), alternating 
between formats is not possible in the same pipeline. I think the panacea is to 
be able to convert not just at the end, but at any step:
[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, 
cv::Mat->QImage, QImage->QVideoFrame] where each format step is the 
(inputFormat -> outputFormat)

Just my 0.02BTC.


Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" 
To: "Pierre-Yves Siret" 
Cc: "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage
Thanks for pointing that out. I guess that could work. It's not as elegant as 
what I want, where everything presents the same way. Now each and every filter 
has to have

if (flags & QVideoFilterRunnable::LastInChain) {
   ... generate the frame for the backend per the surfaceFormat
}

As there are many surf

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-08 Thread Jason H
I'm still plugging away at this. My life is being made difficult by not being able to get a pointer to the buffer. For OpenCV, some routines want a color image, others want an 8 bit gray scale. It would be really great if I could use both of these at the same time.

 

For example, take the color video frame, make it grayscale, then run houghLines on it, using that information to highlight the lines in the color frame. I tried to do this with a simple QMap but there's no way I can access it, because there's no QAbstractVideoBuffer *QVideoFrame::buffer(). I might be able to hack it in using QAbstractPlanarVideoBuffer, but that feels very hacky (plane 0 = color, plane 2 = B); in addition, sometimes the type needs to change from quint8s to floats.

 

I feel like I'm really off in the weeds here and would like someone to chime in, if I'm completely missing something or if these are shortcomings in the Qt API?

 

 

Sent: Monday, January 07, 2019 at 5:22 PM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage




I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat (or my own custom type, CvMatVideoBuffer or ByteArrayVideoBuffer respectively), but I'm running into a mental block with how this should work. Only map() gives pixel data; I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then cast to my custom type. Generally when I'm fighting Qt in this way, I'm doing something wrong.

 

I can convert between QImage/cv::Mat with:


cv::Mat qimage_to_mat_cpy(QImage const &img, bool swap)
{
    return qimage_to_mat_ref(const_cast<QImage &>(img), swap).clone();
}


QImage mat_to_qimage_ref(cv::Mat &mat, QImage::Format format)
{
    return QImage(mat.data, mat.cols, mat.rows, static_cast<int>(mat.step), format);
}

cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}



 

Is there an example of how to "properly" use Qt's video pipeline filters, frames, and buffers with OpenCV?  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, and one that converts from cv::Mat to QVideoFrame:

filters: [toMat, blur, sobel, houghLines, toVideoFrame]

 

Many thanks in advance. 

 

 

Sent: Monday, January 07, 2019 at 10:57 AM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage




I have been thinking about this more, and I think we also need to convert when the pipeline switches between internal formats. This would allow standard filter toolkits to be "a thing" for Qt. 

 

For example, if my pipeline filters are written to use QImage (because of scanLine() and pixel()), and someone else's use cv::Mat (OpenCV), alternating between formats is not possible in the same pipeline. I think the panacea is to be able to convert not just at the end, but at any step:

[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, cv::Mat->QImage, QImage->QVideoFrame] where each format step is the (inputFormat -> outputFormat)

 

Just my 0.02BTC.

 

 

Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" 
To: "Pierre-Yves Siret" 
Cc: "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage



Thanks for pointing that out. I guess that could work. It's not as elegant as what I want, where everything presents the same way. Now each and every filter has to have

 

if (flags & QVideoFilterRunnable::LastInChain) {

   ... generate the frame for the backend per the surfaceFormat

}


 

As there are many surfaceFormats, that if(){} block is huge, and duplicated in each filter. True, I can create a "final filter" that does this to avoid all that boilerplate code that takes the frame and converts it back to what it needs to be. But what I suggested was that Qt should provide this automatically in the filter chain. The difference is this:

 

 

VideoOutput {

   filters: [sobel, houghLines]

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

}

 

Ideally that final filter checks the frame matches what it expects and only if it does not, performs a conversion.  Maybe there's a way to register a conversion from a custom type to a QVideoFrame?

Also, if the VideoOutput is not needed* the final filter need not be invoked.

 

By not needed, I mean the video output element is not visible, or its area is 0. Sometimes, we want to provide intel about the frames without affecting them. Currently, this is inherently synchronous, which negatively impacts frame rate.
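
To make the intent concrete, that final filter's run() could look roughly like this (untested sketch; frameFromCustomData() is a placeholder for the big per-surfaceFormat conversion switch):

#include <QVideoFilterRunnable>
#include <QVideoFrame>

// Hypothetical helper holding the one-and-only conversion switch.
QVideoFrame frameFromCustomData(const QVideoFrame &frame, const QVideoSurfaceFormat &surfaceFormat);

class FinalFilterRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                    RunFlags flags) override
    {
        if (!(flags & LastInChain))
            return *input;                                  // mid-chain: nothing to reconstruct

        if (input->pixelFormat() == surfaceFormat.pixelFormat())
            return *input;                                  // already what the backend expects

        return frameFromCustomData(*input, surfaceFormat);  // convert back exactly once
    }
};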

I should be able to use two (or more) VideoOutputs, one for real-time video display and another for info-only filter pipelin

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-07 Thread Jason H

I'm trying to implement a QAbstractVideoBuffer that uses cv::Mat (or my own custom type, CvMatVideoBuffer or ByteArrayVideoBuffer respectively), but I'm running into a mental block with how this should work. Only map() gives pixel data; I really want a QAbstractVideoBuffer *QVideoFrame::buffer() which I can then cast to my custom type. Generally when I'm fighting Qt in this way, I'm doing something wrong.

 

I can convert between QImage/cv::Mat with:


cv::Mat qimage_to_mat_cpy(QImage const &img, bool swap)
{
    return qimage_to_mat_ref(const_cast<QImage &>(img), swap).clone();
}


QImage mat_to_qimage_ref(cv::Mat &mat, QImage::Format format)
{
    return QImage(mat.data, mat.cols, mat.rows, static_cast<int>(mat.step), format);
}

cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}



 

Is there an example of how to "properly" use Qt's video pipeline filters, frames, and buffers with OpenCV?  I think there should be class(es) that converts a QVideoFrame to a cv::Mat, and one that converts from cv::Mat to QVideoFrame:

filters: [toMat, blur, sobel, houghLines, toVideoFrame]

 

Many thanks in advance. 

 

 

Sent: Monday, January 07, 2019 at 10:57 AM
From: "Jason H" 
To: "Jason H" 
Cc: "Pierre-Yves Siret" , "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage




I have been thinking about this more, and I think we also need to convert when the pipeline switches between internal formats. This would allow standard filter toolkits to be "a thing" for Qt. 

 

For example, if my pipeline filters are written to use QImage (because of scanLine() and pixel()), and someone else's use cv::Mat (OpenCV), alternating between formats is not possible in the same pipeline. I think the panacea is to be able to convert not just at the end, but at any step:

[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, cv::Mat->QImage, QImage->QVideoFrame] where each format step is the (inputFormat -> outputFormat)

 

Just my 0.02BTC.

 

 

Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" 
To: "Pierre-Yves Siret" 
Cc: "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage



Thanks for pointing that out. I guess that could work. It's not as elegant as what I want, where everything presents the same way. Now each and every filter has to have

 

if (flags & QVideoFilterRunnable::LastInChain) {

   ... generate the frame for the backend per the surfaceFormat

}


 

As there are many surfaceFormats, that if(){} block is huge, and duplicated in each filter. True, I can create a "final filter" that does this to avoid all that boilerplate code that takes the frame and converts it back to what it needs to be. But what I suggested was that Qt should provide this automatically in the filter chain. The difference is this:

 

 

VideoOutput {

   filters: [sobel, houghLines]

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

}

 

Ideally that final filter checks the frame matches what it expects and only if it does not, performs a conversion.  Maybe there's a way to register a conversion from a custom type to a QVideoFrame?

Also, if the VideoOutput is not needed* the final filter need not be invoked.

 

By not needed, I mean the video output element is not visible, or its area is 0. Sometimes, we want to provide intel about the frames without affecting them. Currently, this is inherently synchronous, which negatively impacts frame rate.

I should be able to use two (or more) VideoOutputs, one for real-time video display and another for an info-only filter pipeline, and these can be distributed across CPU cores. Unfortunately, the VideoOutput takes over the video source, forcing source-output mappings to be 1:1. It would be really nice if it could be 1:N. I experimented with this, and the first VideoOutput is the only one to receive a frame from a source, and the only one with an active filter pipeline. How could I have 3 VideoOutputs, each with its own filter pipeline, and visualize them simultaneously?

 

Camera { id: camera }

 


VideoOutput {  // only this one works. If I move this after the next one, then that one works.

   filters: [sobel, houghLines]  

   source: camera

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

   source: camera

}



 

So to sum this up:

- Qt should provide automatic frame reconstruction for the final frame (that big if(){} block) (it should be boilerplate)

- A way to register custom format to QVideoFrame reconstruction function

- Allow for multiple VideoOutputs (and filter pipelines) from the same source

-- Maybe an element for no video output pipeline?

 

Am I wrong in thinking any of that doesn't already exist, or is it a good idea?

 



Sent: Saturday

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-07 Thread Jason H

I have been thinking about this more, and I think we also need to convert when the pipeline switches between internal formats. This would allow standard filter toolkits to be "a thing" for Qt. 

 

For example, if my pipeline filters are written to use QImage (because of scanLine() and pixel()), and someone else's use cv::Mat (OpenCV), alternating between formats is not possible in the same pipeline. I think the panacea is to be able to convert not just at the end, but at any step:

[gauss, sobel, houghLines, final] -> formats: [QVideoFrame->cv::Mat, cv::Mat, cv::Mat->QImage, QImage->QVideoFrame] where each format step is the (inputFormat -> outputFormat)

 

Just my 0.02BTC.

 

 

Sent: Wednesday, January 02, 2019 at 12:33 PM
From: "Jason H" 
To: "Pierre-Yves Siret" 
Cc: "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage



Thanks for pointing that out. I guess that could work. It's not as elegant as what I want, where everything presents the same way. Now each and every filter has to have

 

if (flags & QVideoFilterRunnable::LastInChain) {

   ... generate the frame for the backend per the surfaceFormat

}


 

As there are many surfaceFormats, that if(){} block is huge, and duplicated in each filter. True, I can create a "final filter" that does this to avoid all that boilerplate code that takes the frame and converts it back to what it needs to be. But what I suggested was that Qt should provide this automatically in the filter chain. The difference is this:

 

 

VideoOutput {

   filters: [sobel, houghLines]

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

}

 

Ideally that final filter checks the frame matches what it expects and only if it does not, performs a conversion.  Maybe there's a way to register a conversion from a custom type to a QVideoFrame?

Also, if the VideoOutput is not needed* the final filter need not be invoked.

 

By not needed, I mean the video output element is not visible, or its area is 0. Sometimes, we want to provide intel about the frames without affecting them. Currently, this is inherently synchronous, which negatively impacts frame rate.

I should be able to use two (or more) VideoOutputs, one for real-time video display and another for an info-only filter pipeline, and these can be distributed across CPU cores. Unfortunately, the VideoOutput takes over the video source, forcing source-output mappings to be 1:1. It would be really nice if it could be 1:N. I experimented with this, and the first VideoOutput is the only one to receive a frame from a source, and the only one with an active filter pipeline. How could I have 3 VideoOutputs, each with its own filter pipeline, and visualize them simultaneously?

 

Camera { id: camera }

 


VideoOutput {  // only this one works. If I move this after the next one, then that one works.

   filters: [sobel, houghLines]  

   source: camera

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

   source: camera

}



 

So to sum this up:

- Qt should provide automatic frame reconstruction for the final frame (that big if(){} block) (it should be boilerplate)

- A way to register custom format to QVideoFrame reconstruction function

- Allow for multiple VideoOutputs (and filter pipelines) from the same source

-- Maybe an element for no video output pipeline?

 

Am I wrong in thinking any of that doesn't already exist, or is it a good idea?

 



Sent: Saturday, December 22, 2018 at 5:10 AM
From: "Pierre-Yves Siret" 
To: "Jason H" 
Cc: "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage





 


The filter pipeline starts with a file or camera device, and various filters are applied sequentially to frames. However I spend a lot of time converting frames to QImages for analysis and painting. I'm hoping there's a faster way to do this. Some of the filters alter the frame, some just provide information about the frame.

But each time, I have to unpack a QVideoFrame's pixels and make sure the filter can process that pixel format, or convert it to one format that it expects. I'm getting processing times of 55 msec on my MacBook Pro, which gives me 18 FPS from a 25 FPS video, so I'm dropping frames. I am starting to think the ideal would be to have some "Box of Pixels" data structure that both QImage and QVideoFrame can use. But for now, I convert each frame to a QImage at each stage of the pipeline.

 

I'm not that versed in image manipulation but isn't that the point of the QVideoFilterRunnable::LastInChain flag ?

Quoting the doc: 
"flags contains additional information about the filter's invocation. For example the LastInChain flag indicates that the filter is the last in a VideoOutput's associated filter list. This can be very useful in cases where multiple filters are chained together an

Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2019-01-02 Thread Jason H
Thanks for pointing that out. I guess that could work. It's not as elegant as what I want, where everything presents the same way. Now each and every filter has to have

 

if (flags & QVideoFilterRunnable::LastInChain) {

   ... generate the frame for the backend per the surfaceFormat

}


 

As there are many surfaceFormats, that if(){} block is huge, and duplicated in each filter. True, I can create a "final filter" that does this to avoid all that boilerplate code that takes the frame and converts it back to what it needs to be. But what I suggested was that Qt should provide this automatically in the filter chain. The difference is this:

 

 

VideoOutput {

   filters: [sobel, houghLines]

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

}

 

Ideally that final filter checks the frame matches what it expects and only if it does not, performs a conversion.  Maybe there's a way to register a conversion from a custom type to a QVideoFrame?

Also, if the VideoOutput is not needed* the final filter need not be invoked.

 

By not needed, I mean the video output element is not visible, or its area is 0. Sometimes, we want to provide intel about the frames without affecting them. Currently, this is inherently synchronous, which negatively impacts frame rate.

I should be able to use two (or more) VideoOutputs, one for real-time video display and another for an info-only filter pipeline, and these can be distributed across CPU cores. Unfortunately, the VideoOutput takes over the video source, forcing source-output mappings to be 1:1. It would be really nice if it could be 1:N. I experimented with this, and the first VideoOutput is the only one to receive a frame from a source, and the only one with an active filter pipeline. How could I have 3 VideoOutputs, each with its own filter pipeline, and visualize them simultaneously?

 

Camera { id: camera }

 


VideoOutput {  // only this one works. If I move this after the next one, then that one works.

   filters: [sobel, houghLines]  

   source: camera

}

 


VideoOutput {

   filters: [sobel, houghLines, final]

   source: camera

}



 

So to sum this up:

- Qt should provide automatic frame reconstruction for the final frame (that big if(){} block) (it should be boilerplate)

- A way to register custom format to QVideoFrame reconstruction function

- Allow for multiple VideoOutputs (and filter pipelines) from the same source

-- Maybe an element for no video output pipeline?

 

Am I wrong in thinking any of that doesn't already exist, or is it a good idea?

 



Sent: Saturday, December 22, 2018 at 5:10 AM
From: "Pierre-Yves Siret" 
To: "Jason H" 
Cc: "Qt development mailing list" 
Subject: Re: [Development] QAbstractVideoFilter, the pipeline and QImage





 


The filter pipeline starts with a file or camera device, and various filters are applied sequentially to frames. However I spend a lot of time converting frames to QImages for analysis and painting. I'm hoping there's a faster way to do this. Some of the filters alter the frame, some just provide information about the frame.

But each time, I have to unpack a QVideoFrame's pixels and make sure the filter can process that pixel format, or convert it to one format that it expects. I'm getting processing times of 55 msec on my MacBook Pro, which gives me 18 FPS from a 25 FPS video, so I'm dropping frames. I am starting to think the ideal would be to have some "Box of Pixels" data structure that both QImage and QVideoFrame can use. But for now, I convert each frame to a QImage at each stage of the pipeline.

 

I'm not that versed in image manipulation but isn't that the point of the QVideoFilterRunnable::LastInChain flag ?

Quoting the doc: 
"flags contains additional information about the filter's invocation. For example the LastInChain flag indicates that the filter is the last in a VideoOutput's associated filter list. This can be very useful in cases where multiple filters are chained together and the work is performed on image data in some custom format (for example a format specific to some computer vision framework). To avoid conversion on every filter in the chain, all intermediate filters can return a QVideoFrame hosting data in the custom format. Only the last, where the flag is set, returns a QVideoFrame in a format compatible with Qt."

 

You could try using just one pixel format and use that in all your filters without reconverting it at each step.

 









Re: [Development] QAbstractVideoFilter, the pipeline and QImage

2018-12-22 Thread Pierre-Yves Siret
> The filter pipeline starts with a file or camera device, and various
> filters are applied sequentially to frames. However I spend a lot of time
> converting frames to QImages for analysis and painting. I'm hoping there's
> a faster way to do this. Some of the filters alter the frame, some just
> provide information about the frame.
>
> But each time, I have to unpack a QVideoFrame's pixels and make sure the
> filter can process that pixel format, or convert it to one format that it
> expects. I'm getting processing times of 55 msec on my MacBook Pro, which
> gives me 18 FPS from a 25 FPS video, so I'm dropping frames. I am starting to
> think the ideal would be to have some "Box of Pixels" data structure that
> both QImage and QVideoFrame can use. But for now, I convert each frame to a
> QImage at each stage of the pipeline.
>

I'm not that versed in image manipulation but isn't that the point of
the QVideoFilterRunnable::LastInChain flag ?
Quoting the doc:
"flags contains additional information about the filter's invocation. For
example the LastInChain flag indicates that the filter is the last in a
VideoOutput's associated filter list. This can be very useful in cases
where multiple filters are chained together and the work is performed on
image data in some custom format (for example a format specific to some
computer vision framework). To avoid conversion on every filter in the
chain, all intermediate filters can return a QVideoFrame hosting data in
the custom format. Only the last, where the flag is set, returns a
QVideoFrame in a format compatible with Qt."

You could try using just one pixel format and use that in all your filters
without reconverting it at each step.
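
Roughly, an intermediate filter could follow the pattern the docs describe like
this (untested sketch; matFromFrame(), frameFromMat() and CvMatVideoBuffer are
placeholders for the OpenCV-side helpers being discussed in this thread):

#include <QVideoFilterRunnable>
#include <QVideoFrame>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

class SobelFilterRunnable : public QVideoFilterRunnable
{
public:
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat,
                    RunFlags flags) override
    {
        cv::Mat mat = matFromFrame(*input);      // hypothetical: map once or reuse
        cv::Sobel(mat, mat, CV_8U, 1, 0);

        if (!(flags & LastInChain)) {
            // Hand the cv::Mat on in a custom-format frame instead of converting back.
            return QVideoFrame(new CvMatVideoBuffer(mat), input->size(),
                               QVideoFrame::Format_User);
        }
        return frameFromMat(mat, surfaceFormat); // only the last stage converts back
    }
};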


[Development] QAbstractVideoFilter, the pipeline and QImage

2018-12-21 Thread Jason H
I sent a message on interest@ but no one replied, so I'm escalating it here. I 
am making a series of filters, but I'm encountering performance issues. I think 
it's because of a lack of understanding on my part, or of detail in the docs.

The filter pipeline starts with a file or camera device, and various filters 
are applied sequentially to frames. However I spend a lot of time converting 
frames to QImages for analysis and painting. I'm hoping there's a faster way to 
do this. Some of the filters alter the frame, some just provide information 
about the frame. 

But each time, I have to unpack a QVideoFrame's pixels and make sure the filter 
can process that pixel format, or convert it to one format that it expects. I'm 
getting processing times of 55 msec on my MacBook Pro, which gives me 18 FPS from 
a 25 FPS video, so I'm dropping frames. I am starting to think the ideal would 
be to have some "Box of Pixels" data structure that both QImage and QVideoFrame 
can use. But for now, I convert each frame to a QImage at each stage of the 
pipeline.
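
For the common RGB-style formats, one thing that might avoid the copy is
wrapping the mapped frame in a QImage directly (untested sketch; YUV formats
would still need a real conversion):

#include <QVideoFrame>
#include <QImage>

QImage imageViewOfFrame(QVideoFrame &frame)
{
    if (!frame.map(QAbstractVideoBuffer::ReadOnly))
        return QImage();

    const QImage::Format fmt =
        QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat());
    if (fmt == QImage::Format_Invalid) {
        frame.unmap();
        return QImage();   // no direct QImage equivalent, fall back to converting
    }

    // The QImage only borrows the mapped bits; don't unmap the frame while it's in use.
    return QImage(frame.bits(), frame.width(), frame.height(),
                  frame.bytesPerLine(), fmt);
}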

In addition to that, I've discovered that the QVideoSurfaceFormat in the run() 
is const, which means that for those frames with scan line direction BottomTop, 
I cannot correct the scan lines and instead always have to flip the QImage for 
the next frame, because I cannot change the scan line direction. The same 
applies to isMirrored(). I'd like to orient the frame properly at the start and 
leave it that way for the rest of the pipeline. But instead I have to keep 
flipping and unflipping it with QImage::mirrored() every frame, for every filter 
in the pipeline. That's just silly.

A few things could actually help:
1) being able to change the surfaceFormat
2) QImage and QVideoFrame use the same pixel data (when the formats match) (the 
pipeline then keeps referencing 
3) QVideoFrame gets a pixel(x,y, surfaceFormat) function that takes into 
account the surfaceFormat 
4) make QPainter be able to take a QVideoFrame
5) Be able to specify the surface format for the pipeline before a frame gets 
to the pipeline

Some of my filters are all QImage based, but some make use of OpenCV, so then I 
have to convert to an OpenCV "mat". This is fortunately a fast operation under 
ideal conditions, but it sometimes has a conversion penalty. Usually I don't have 
to convert back from the mat because it's not pixels that I'm getting from OpenCV.

switch (img.format()) {
case QImage::Format_RGB888: {
    auto result = qimage_to_mat_ref(img, CV_8UC3);
    if (swap) {
        cv::cvtColor(result, result, CV_RGB2BGR);
    }
    return result;
}
case QImage::Format_Grayscale8:
case QImage::Format_Indexed8: {
    return qimage_to_mat_ref(img, CV_8U);
}
case QImage::Format_RGB32:
case QImage::Format_ARGB32:
case QImage::Format_ARGB32_Premultiplied: {
    return qimage_to_mat_ref(img, CV_8UC4);
}

cv::Mat qimage_to_mat_ref(QImage &img, int format)
{
    return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine());
}


Aside from OpenCLing the conversion, is there anything I can do?



