RE: i.mx35 live video

2012-02-27 Thread Alex Gershgorin


-Original Message-
From: Guennadi Liakhovetski [mailto:g.liakhovet...@gmx.de] 
Sent: Sunday, February 26, 2012 10:58 PM
To: Alex Gershgorin
Cc: linux-media@vger.kernel.org
Subject: RE: i.mx35 live video

On Sun, 26 Feb 2012, Alex Gershgorin wrote:

  Thanks, Guennadi, for your quick response.
  
  Hi Alex
   
   Hi Guennadi,
  
   We would like to use an i.MX35 processor in a new project.
   An important element of the project is to obtain live video from the 
   camera and show it on a display.
   For these purposes, we want to use a mainline Linux kernel, which supports 
   all the necessary drivers for this task.
   As I understand it, soc_camera does not currently support the userptr 
   method; in that case, how can I configure the video pipeline in user space
   to get live video on the display without processor intervention?
  
  soc-camera does support USERPTR, and the mx3_camera driver claims to
  support it as well.
  
  I based that on the soc-camera.txt document.
 
  Yeah, I really have to update it...
 
  The soc-camera subsystem provides a unified API between camera host drivers 
  and
  camera sensor drivers. It implements a V4L2 interface to the user, currently
  only the mmap method is supported.
  
  In any case, I'm glad that this is supported :-)
  
  What do you think, is it possible to implement video streaming without 
  processor intervention?
 
 It might be difficult to completely eliminate the CPU; at the very least 
 you need to queue and dequeue buffers to and from the V4L driver. To avoid 
 even that, in principle, you could try to use only one buffer, but I don't 
 think the current version of the mx3_camera driver would be very happy 
 about that. You could take 2 buffers and use panning; then you'd just have 
 to queue and dequeue buffers and pan the display. In any case you will 
 probably have to handle buffers, but your most important advantage is 
 that you won't have to copy data; you only have to move pointers around.
 
 The method that you describe is exactly what I had in mind.
 It would be more correct to say it is minimal CPU intervention rather than 
 no CPU intervention.

 As far as I understand, I can mmap the frame buffer device and pass that 
 pointer directly to the mx3_camera driver using the USERPTR method, then 
 queue and dequeue buffers to the mx3_camera driver.
 What is not clear is whether it is possible to pass the same frame buffer 
 pointer to mx3_camera if the driver is using two buffers.

Sorry, I really don't know for sure. It should work, but I don't think I 
have tested this myself, nor do I remember anybody reporting having tested 
this mode. So you can either search the mailing list archives or just test 
it. Begin with a simpler mode - USERPTR with separately allocated buffers, 
copying them manually to the framebuffer; then try to switch to just one 
buffer in this same mode, then switch to direct framebuffer memory.

Thanks, Guennadi, this is a good road map; in the near future I will test 
it and get back to you.

Regards,
Alex Gershgorin

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: i.mx35 live video

2012-02-27 Thread Alex Gershgorin

Hi,

On 02/26/2012 09:58 PM, Guennadi Liakhovetski wrote:
 It might be difficult to completely eliminate the CPU; at the very least
 you need to queue and dequeue buffers to and from the V4L driver. To avoid
 even that, in principle, you could try to use only one buffer, but I don't
 think the current version of the mx3_camera driver would be very happy
 about that. You could take 2 buffers and use panning; then you'd just have
 to queue and dequeue buffers and pan the display. In any case you will
 probably have to handle buffers, but your most important advantage is
 that you won't have to copy data; you only have to move pointers around.

 The method that you describe is exactly what I had in mind.
 It would be more correct to say it is minimal CPU intervention rather than 
 no CPU intervention.

 As far as I understand, I can mmap the frame buffer device and pass that 
 pointer directly to the mx3_camera driver using the USERPTR method, then 
 queue and dequeue buffers to the mx3_camera driver.
 What is not clear is whether it is possible to pass the same frame buffer 
 pointer to mx3_camera if the driver is using two buffers.

It should work when you request 2 USERPTR buffers and assign the same 
address (the frame buffer start) to both. I've seen setups like this 
working with videobuf2-based drivers.

Thanks for the information, this is what I had in mind :-) 

However, it's a really poor configuration; to avoid tearing you could 
set the framebuffer virtual window size to contain at least two screen 
windows and, for the second buffer, use the framebuffer start address 
plus a proper offset as the USERPTR address. Then you could just add 
display panning to show every frame.

Looks good, I'll try to implement this method.
Thank you for the advice.

Regards,
Alex Gershgorin





RE: i.mx35 live video

2012-02-26 Thread Alex Gershgorin

Thanks, Guennadi, for your quick response.

Hi Alex
 
 Hi Guennadi,

 We would like to use an i.MX35 processor in a new project.
 An important element of the project is to obtain live video from the camera 
 and show it on a display.
 For these purposes, we want to use a mainline Linux kernel, which supports 
 all the necessary drivers for this task.
 As I understand it, soc_camera does not currently support the userptr 
 method; in that case, how can I configure the video pipeline in user space
 to get live video on the display without processor intervention?

soc-camera does support USERPTR, and the mx3_camera driver claims to
support it as well.

I based that on the soc-camera.txt document.

The soc-camera subsystem provides a unified API between camera host drivers and
camera sensor drivers. It implements a V4L2 interface to the user, currently
only the mmap method is supported.

In any case, I'm glad that this is supported :-) 

What do you think, is it possible to implement video streaming without 
processor intervention?

Regards,

Alex Gershgorin 


RE: i.mx35 live video

2012-02-26 Thread Guennadi Liakhovetski
On Sun, 26 Feb 2012, Alex Gershgorin wrote:

 
 Thanks, Guennadi, for your quick response.
 
 Hi Alex
  
  Hi Guennadi,
 
  We would like to use an i.MX35 processor in a new project.
  An important element of the project is to obtain live video from the camera 
  and show it on a display.
  For these purposes, we want to use a mainline Linux kernel, which supports 
  all the necessary drivers for this task.
  As I understand it, soc_camera does not currently support the userptr 
  method; in that case, how can I configure the video pipeline in user space
  to get live video on the display without processor intervention?
 
 soc-camera does support USERPTR, and the mx3_camera driver claims to
 support it as well.
 
 I based that on the soc-camera.txt document.

Yeah, I really have to update it...

 The soc-camera subsystem provides a unified API between camera host drivers 
 and
 camera sensor drivers. It implements a V4L2 interface to the user, currently
 only the mmap method is supported.
 
 In any case, I'm glad that this is supported :-) 
 
 What do you think, is it possible to implement video streaming without 
 processor intervention?

It might be difficult to completely eliminate the CPU; at the very least 
you need to queue and dequeue buffers to and from the V4L driver. To avoid 
even that, in principle, you could try to use only one buffer, but I don't 
think the current version of the mx3_camera driver would be very happy 
about that. You could take 2 buffers and use panning; then you'd just have 
to queue and dequeue buffers and pan the display. In any case you will 
probably have to handle buffers, but your most important advantage is 
that you won't have to copy data; you only have to move pointers around.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: i.mx35 live video

2012-02-26 Thread Alex Gershgorin



 Thanks, Guennadi, for your quick response.
 
 Hi Alex
  
  Hi Guennadi,
 
  We would like to use an i.MX35 processor in a new project.
  An important element of the project is to obtain live video from the camera 
  and show it on a display.
  For these purposes, we want to use a mainline Linux kernel, which supports 
  all the necessary drivers for this task.
  As I understand it, soc_camera does not currently support the userptr 
  method; in that case, how can I configure the video pipeline in user space
  to get live video on the display without processor intervention?
 
 soc-camera does support USERPTR, and the mx3_camera driver claims to
 support it as well.
 
 I based that on the soc-camera.txt document.

 Yeah, I really have to update it...

 The soc-camera subsystem provides a unified API between camera host drivers 
 and
 camera sensor drivers. It implements a V4L2 interface to the user, currently
 only the mmap method is supported.
 
 In any case, I'm glad that this is supported :-) 
 
 What do you think, is it possible to implement video streaming without 
 processor intervention?

It might be difficult to completely eliminate the CPU; at the very least 
you need to queue and dequeue buffers to and from the V4L driver. To avoid 
even that, in principle, you could try to use only one buffer, but I don't 
think the current version of the mx3_camera driver would be very happy 
about that. You could take 2 buffers and use panning; then you'd just have 
to queue and dequeue buffers and pan the display. In any case you will 
probably have to handle buffers, but your most important advantage is 
that you won't have to copy data; you only have to move pointers around.

The method that you describe is exactly what I had in mind.
It would be more correct to say it is minimal CPU intervention rather than 
no CPU intervention.
As far as I understand, I can mmap the frame buffer device and pass that 
pointer directly to the mx3_camera driver using the USERPTR method, then 
queue and dequeue buffers to the mx3_camera driver.
What is not clear is whether it is possible to pass the same frame buffer 
pointer to mx3_camera if the driver is using two buffers.

Thanks,
Alex Gershgorin



RE: i.mx35 live video

2012-02-26 Thread Guennadi Liakhovetski
On Sun, 26 Feb 2012, Alex Gershgorin wrote:

  Thanks, Guennadi, for your quick response.
  
  Hi Alex
   
   Hi Guennadi,
  
   We would like to use an i.MX35 processor in a new project.
   An important element of the project is to obtain live video from the 
   camera and show it on a display.
   For these purposes, we want to use a mainline Linux kernel, which supports 
   all the necessary drivers for this task.
   As I understand it, soc_camera does not currently support the userptr 
   method; in that case, how can I configure the video pipeline in user space
   to get live video on the display without processor intervention?
  
  soc-camera does support USERPTR, and the mx3_camera driver claims to
  support it as well.
  
  I based that on the soc-camera.txt document.
 
  Yeah, I really have to update it...
 
  The soc-camera subsystem provides a unified API between camera host drivers 
  and
  camera sensor drivers. It implements a V4L2 interface to the user, currently
  only the mmap method is supported.
  
  In any case, I'm glad that this is supported :-) 
  
  What do you think, is it possible to implement video streaming without 
  processor intervention?
 
 It might be difficult to completely eliminate the CPU; at the very least 
 you need to queue and dequeue buffers to and from the V4L driver. To avoid 
 even that, in principle, you could try to use only one buffer, but I don't 
 think the current version of the mx3_camera driver would be very happy 
 about that. You could take 2 buffers and use panning; then you'd just have 
 to queue and dequeue buffers and pan the display. In any case you will 
 probably have to handle buffers, but your most important advantage is 
 that you won't have to copy data; you only have to move pointers around.
 
 The method that you describe is exactly what I had in mind.
 It would be more correct to say it is minimal CPU intervention rather than 
 no CPU intervention.

 As far as I understand, I can mmap the frame buffer device and pass that 
 pointer directly to the mx3_camera driver using the USERPTR method, then 
 queue and dequeue buffers to the mx3_camera driver.
 What is not clear is whether it is possible to pass the same frame buffer 
 pointer to mx3_camera if the driver is using two buffers.

Sorry, I really don't know for sure. It should work, but I don't think I 
have tested this myself, nor do I remember anybody reporting having tested 
this mode. So you can either search the mailing list archives or just test 
it. Begin with a simpler mode - USERPTR with separately allocated buffers, 
copying them manually to the framebuffer; then try to switch to just one 
buffer in this same mode, then switch to direct framebuffer memory.
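[Editor's note: the buffer side of the first roadmap step (separately allocated USERPTR buffers plus a manual copy to the framebuffer) might look roughly like this. The QVGA/16 bpp geometry and the page alignment are assumptions for illustration, not anything the driver mandates.]

```c
/* Sketch of the "simpler mode": capture into ordinary user-space buffers
 * and blit each dequeued frame into the framebuffer by hand.
 * Frame geometry (QVGA, 16 bpp) is an assumption for illustration. */
#include <stdlib.h>
#include <string.h>

enum { FRAME_W = 320, FRAME_H = 240, FRAME_BPP = 2 };
#define FRAME_SIZE ((size_t)FRAME_W * FRAME_H * FRAME_BPP)

/* Allocate one page-aligned frame buffer; many capture drivers expect
 * USERPTR addresses to be page aligned. */
void *alloc_frame(void)
{
    void *p = NULL;
    if (posix_memalign(&p, 4096, FRAME_SIZE) != 0)
        return NULL;
    return p;
}

/* The manual-copy step: after VIDIOC_DQBUF returns a filled frame,
 * copy it into the mmap'ed framebuffer memory. */
void copy_frame_to_fb(void *fb_mem, const void *frame)
{
    memcpy(fb_mem, frame, FRAME_SIZE);
}
```

Once this works, the copy can be dropped by pointing the USERPTR buffers at the framebuffer memory itself, as discussed above.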

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: i.mx35 live video

2012-02-26 Thread Sylwester Nawrocki
Hi,

On 02/26/2012 09:58 PM, Guennadi Liakhovetski wrote:
 It might be difficult to completely eliminate the CPU; at the very least
 you need to queue and dequeue buffers to and from the V4L driver. To avoid
 even that, in principle, you could try to use only one buffer, but I don't
 think the current version of the mx3_camera driver would be very happy
 about that. You could take 2 buffers and use panning; then you'd just have
 to queue and dequeue buffers and pan the display. In any case you will
 probably have to handle buffers, but your most important advantage is
 that you won't have to copy data; you only have to move pointers around.

 The method that you describe is exactly what I had in mind.
 It would be more correct to say it is minimal CPU intervention rather than 
 no CPU intervention.

 As far as I understand, I can mmap the frame buffer device and pass that 
 pointer directly to the mx3_camera driver using the USERPTR method, then 
 queue and dequeue buffers to the mx3_camera driver.
 What is not clear is whether it is possible to pass the same frame buffer 
 pointer to mx3_camera if the driver is using two buffers.

It should work when you request 2 USERPTR buffers and assign the same 
address (the frame buffer start) to both. I've seen setups like this 
working with videobuf2-based drivers. However, it's a really poor 
configuration; to avoid tearing you could set the framebuffer virtual 
window size to contain at least two screen windows and, for the second 
buffer, use the framebuffer start address plus a proper offset as the 
USERPTR address. Then you could just add display panning to show every 
frame.
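[Editor's note: the offset and panning arithmetic of this suggestion might look roughly as follows. FBIOPAN_DISPLAY and fb_var_screeninfo are the standard fbdev interfaces; the geometry in the comments is illustrative.]

```c
/* Sketch: a framebuffer whose virtual y-resolution holds two screens.
 * Buffer 0 is the first screen, buffer 1 the second; FBIOPAN_DISPLAY
 * flips the visible window between them.  Illustration only. */
#include <sys/ioctl.h>
#include <linux/fb.h>

/* Byte offset of buffer 'index' (0 or 1) from the framebuffer start,
 * given one visible screen of 'yres' lines of 'line_length' bytes each.
 * Buffer 1's offset is what gets passed as the second USERPTR address. */
unsigned long fb_buffer_offset(unsigned yres, unsigned line_length,
                               unsigned index)
{
    return (unsigned long)index * yres * line_length;
}

/* Pan the visible window to buffer 'index' (the second screen starts
 * 'yres' lines into the virtual framebuffer). */
int fb_pan_to_buffer(int ffd, struct fb_var_screeninfo *var, unsigned index)
{
    var->xoffset = 0;
    var->yoffset = index * var->yres;
    return ioctl(ffd, FBIOPAN_DISPLAY, var);
}
```

For a 640x480 display with a 1280-byte line length, buffer 1 would sit 614400 bytes into the framebuffer, which is the address to hand to the capture driver as the second USERPTR buffer.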

--

Regards,
Sylwester

 Sorry, I really don't know for sure. It should work, but I don't think I
 have tested this myself, nor do I remember anybody reporting having tested
 this mode. So you can either search the mailing list archives or just test
 it. Begin with a simpler mode - USERPTR with separately allocated buffers,
 copying them manually to the framebuffer; then try to switch to just one
 buffer in this same mode, then switch to direct framebuffer memory.
 
 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/
