Re: [PATCH] media: mx2_camera: Add YUYV output format.

2012-06-08 Thread javier Martin
Hi Fabio,

On 7 June 2012 19:35, Fabio Estevam feste...@gmail.com wrote:
 Hi Javier,

 On Thu, Jun 7, 2012 at 5:30 AM, javier Martin
 javier.mar...@vista-silicon.com wrote:

 As i stated, the driver is still in an early development stage, it
 doesn't do anything useful yet. But this is the public git repository
 if you want to take a look:

 git repo: https://github.com/jmartinc/video_visstrim.git
 branch:  mx27-codadx6

 Thanks, I will take a look at your tree when I am back to the office next 
 week.

 I also see that Linaro's tree has support for VPU for mx5/mx6:
 http://git.linaro.org/gitweb?p=landing-teams/working/freescale/kernel.git;a=summary

 ,so we should probably think in unifying it with mx27 support there too.


 FYI we are only interested on adding support for the encoding path of
 the VPU, but we are trying our best to make it modular (as it is done
 in Samsung's [1]), so that anyone can add decoding support later.

 Ok, sounds good.

 By the way, you work for Freescale, don't you?

 Yes, correct.

 We have a couple of issues with the i.MX27 VPU:

 1- Firmware for the VPU is provided as a table of binary values inside
 a source file which is licensed as GPL, however software is packaged
 in a .tar.gz file that is marked as NDA. Do we have the right to
 distribute this firmware with our products?

To address this issue it would be great if you, as a Freescale
employee, could submit the VPU firmware binary to linux-firmware,
together with a LICENSE file:

http://git.kernel.org/?p=linux/kernel/git/dwmw2/linux-firmware.git;a=tree

 2- There is a BUG in the firmware that marks P frames as IDR when it
 should only be done to I frames. Would it be possible to have access
 to the source code of the firmware in order to fix that problem?

 I will need to check this next week when I am back to the office.


Regards.

-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread javier Martin
Fabio,

On 8 June 2012 08:51, javier Martin javier.mar...@vista-silicon.com wrote:
 Hi Fabio,

 On 7 June 2012 19:35, Fabio Estevam feste...@gmail.com wrote:
 Hi Javier,

 On Thu, Jun 7, 2012 at 5:30 AM, javier Martin
 javier.mar...@vista-silicon.com wrote:

 As i stated, the driver is still in an early development stage, it
 doesn't do anything useful yet. But this is the public git repository
 if you want to take a look:

 git repo: https://github.com/jmartinc/video_visstrim.git
 branch:  mx27-codadx6

 Thanks, I will take a look at your tree when I am back to the office next 
 week.

 I also see that Linaro's tree has support for VPU for mx5/mx6:
 http://git.linaro.org/gitweb?p=landing-teams/working/freescale/kernel.git;a=summary

 ,so we should probably think in unifying it with mx27 support there too.

If you are referring to the driver in [1], I have some concerns: the i.MX27
VPU should be implemented as a V4L2 mem2mem device, since it gets raw
pictures from memory and outputs encoded frames to memory (some discussion
about the subject can be found here [2]), as the Exynos driver from Samsung
does. However, the driver you've mentioned doesn't do that: it just
creates several mapping regions so that the actual functionality is
implemented in user space by a library provided by Freescale, which,
in the case of the i.MX27, is also GPL.

What we are trying to do is implement all the functionality in
kernel space using the mem2mem V4L2 framework, so that it can be accepted
into mainline.
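
For readers unfamiliar with the framework, here is a minimal sketch (not
the actual codadx6 driver) of how an encoder plugs into the V4L2 mem2mem
core: the driver registers device_run()/job_abort() callbacks and the
framework schedules a job whenever both an OUTPUT (raw) and a CAPTURE
(encoded) buffer are queued. All names prefixed with my_enc_ are
illustrative only; other callbacks such as .job_ready are omitted.

#include <media/v4l2-mem2mem.h>

/* called by the m2m core when one raw and one capture buffer are ready */
static void my_enc_device_run(void *priv)
{
	/* kick the VPU to encode the next queued raw frame, then report
	 * completion from the interrupt handler with v4l2_m2m_job_finish() */
}

static void my_enc_job_abort(void *priv)
{
	/* ask the hardware to stop the current job as soon as possible */
}

static struct v4l2_m2m_ops my_enc_m2m_ops = {
	.device_run	= my_enc_device_run,
	.job_abort	= my_enc_job_abort,
};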

Please, correct me if the driver you are talking about is not the one in [1].

Regards.

[1] 
http://git.linaro.org/gitweb?p=landing-teams/working/freescale/kernel.git;a=blob;f=drivers/mxc/vpu/mxc_vpu.c;h=27b09e56d5a3f6cb7eeba16fe5493cbec46c65cd;hb=d0f289f67f0ff403d92880c410b009f1fd4e69f3
[2] http://www.mail-archive.com/linux-media@vger.kernel.org/msg36555.html

-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread javier Martin
Hi Robert,

On 8 June 2012 09:26, Robert Schwebel r.schwe...@pengutronix.de wrote:
 Hi Javier,

 On Fri, Jun 08, 2012 at 09:21:13AM +0200, javier Martin wrote:
 If you refer to driver in [1] I have some concerns: i.MX27 VPU should
 be implemented as a V4L2 mem2mem device since it gets raw pictures
 from memory and outputs encoded frames to memory (some discussion
 about the subject can be fond here [2]), as Exynos driver from Samsung
 does. However, this driver you've mentioned doesn't do that: it just
 creates several mapping regions so that the actual functionality is
 implemented in user space by a library provided by Freescale, which
 regarding i.MX27 it is also GPL.

 What we are trying to do is implementing all the functionality in
 kernel space using mem2mem V4L2 framework so that it can be accepted
 in mainline.

 We will work on the VPU driver and it's migration towards a proper
 mem2mem device very soon, mainly on MX53, but of course MX27 should be
 taken care of by the same driver.

 So I'd suggest that we coordinate that work somehow.

Do you plan to provide both encoding and decoding support or just one of them?

-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread Robert Schwebel
On Fri, Jun 08, 2012 at 09:39:15AM +0200, javier Martin wrote:
 Do you plan to provide both encoding and decoding support or just one
 of them?

We need both.

rsc
-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: SH7724, VOU, PAL mode

2012-06-08 Thread Guennadi Liakhovetski
Hi Janusz

On Wed, 6 Jun 2012, Janusz Uzycki wrote:

   This is why v4l2-ctl -s 5 used before my simple test program (modified
   capture example with mmap method) finally has no effect for VOU.
   When the test program opens video device it causes reset PAL mode in VOU
   and
   does not in TV encoder.
  
  Right, this is actually a bug in the VOU driver. It didn't affect me,
  because I was opening the device only once before all the configuration
  and streaming. Could you create and submit a patch to save the standard in
  driver private data and restore it on open() after the reset? I guess,
  other configuration parameters are lost too, feel free to fix them as
  well.
 
 Here you are. Something like that?

Yes, thanks, something like that. But please use proper line-wrap 
formatting and submit it as a patch according to the rules specified in 
Documentation/SubmittingPatches, i.e. send it as a separate email 
with an appropriate subject, a patch description and your Signed-off-by.

 I avoided encoder reconfiguration (it is
 not needed) - do not call sh_vou_s_std() and in result
 v4l2_device_call_until_err().
 
 --- sh_vou.c.orig   2012-06-06 15:58:28.785086939 +
 +++ sh_vou.c        2012-06-06 17:04:30.684586479 +
 @@ -1183,6 +1183,13 @@ static int sh_vou_open(struct file *file
                vou_dev->status = SH_VOU_IDLE;
                return ret;
        }
 +
 +      /* restore std */
 +      if (vou_dev->std & V4L2_STD_525_60)
 +              sh_vou_reg_ab_set(vou_dev, VOUCR,
 +                      sh_vou_ntsc_mode(vou_dev->pdata->bus_fmt) << 29, 7 << 29);
 +      else
 +              sh_vou_reg_ab_set(vou_dev, VOUCR, 5 << 29, 7 << 29);
        }
 
        videobuf_queue_dma_contig_init(&vou_file->vbq, &sh_vou_video_qops,

Looks like you copied this code from sh_vou_s_std(). This does fix your 
PAL problem, at least to some extent, right? But I guess we will 
eventually have to restore more configuration than only the video 
standard on open(), right? Thinking about that, maybe it's easier to 
configure the hardware not immediately, as is currently done, but 
inside the .vidioc_streamon() handler. Could you maybe try the patch 
attached below (to be applied without your proposed fix)? It's only 
compile tested, so no guarantees.

 I tested the std restoring and picture is synced/stable in PAL mode now.
 However I  have still problem after 480 line because next lines are always
 green.
 Fixing other configuration parameters seems little more complicated
 (sh_vou_s_fmt_vid_out(), sh_vou_configure_geometry()).
 
 I noticed that VOU is limited to NTSC resolution: Maximum  
   destination
 image
 size: 720 x 240 per field.
   
You mean in the datasheet?
   
   Yes, exactly.
   
I don't have it currently at hand, but I seem
to remember, that there was a bug and the VOU does actually support a 
   full
PAL resolution too. I'm not 100% certain, though.
   
   OK, I will test it. Do you remember how you discovered that?
  
  Asked the manufacturer company :)
 
 OK:) I found the sentence in sh_vou.c: Cropping scheme: max useful image is
 720x480, and the total video area is 858x525 (NTSC) or 864x625 (PAL). Next is
 some magic:)

BTW, I've found the commit, that addresses that PAL documentation bug: 
765fe17c4f018c019f1976455084f528474fc7f8

  No, I'll send it to you off the list - Laurent agreed. But he also said,
  it was a preliminary version of his yavta proram, so, you might be able to
  use that one.
 
 OK, the code you sent works much better than my simple video output program
 but the same problem after 480 line. I have to investigate it.
 I use yavta for frame capturing tests.
 
 Unfortunately I will be outside the company next 2 weeks. I will have remote
 access only to my hardware so more tests are not possible then.

Ok, let's wait then. Have a good journey!

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

Subject: V4L: sh_vou: move hardware configuration to .vidioc_streamon()

The V4L2 spec mandates that drivers preserve their configuration across 
close()/open() cycles. Currently the sh_vou driver violates this 
requirement by resetting the hardware upon each open(), thus losing the 
VOU's configuration. This patch moves the hardware configuration from the 
respective ioctl()s to the .vidioc_streamon() handler to guarantee that 
streaming is always performed with configured hardware.

Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
---
diff --git a/drivers/media/video/sh_vou.c b/drivers/media/video/sh_vou.c
index 6a72987..50e39f7 100644
--- a/drivers/media/video/sh_vou.c
+++ b/drivers/media/video/sh_vou.c
@@ -72,6 +72,8 @@ struct sh_vou_device {
struct list_head queue;
v4l2_std_id std;
int pix_idx;
+   int w_idx;
+   int h_idx;
struct videobuf_buffer *active;
enum 

Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread Sascha Hauer
On Fri, Jun 08, 2012 at 09:39:15AM +0200, javier Martin wrote:
 Hi Robert,
 
 On 8 June 2012 09:26, Robert Schwebel r.schwe...@pengutronix.de wrote:
  Hi Javier,
 
  On Fri, Jun 08, 2012 at 09:21:13AM +0200, javier Martin wrote:
  If you refer to driver in [1] I have some concerns: i.MX27 VPU should
  be implemented as a V4L2 mem2mem device since it gets raw pictures
  from memory and outputs encoded frames to memory (some discussion
  about the subject can be fond here [2]), as Exynos driver from Samsung
  does. However, this driver you've mentioned doesn't do that: it just
  creates several mapping regions so that the actual functionality is
  implemented in user space by a library provided by Freescale, which
  regarding i.MX27 it is also GPL.
 
  What we are trying to do is implementing all the functionality in
  kernel space using mem2mem V4L2 framework so that it can be accepted
  in mainline.
 
  We will work on the VPU driver and it's migration towards a proper
  mem2mem device very soon, mainly on MX53, but of course MX27 should be
  taken care of by the same driver.
 
  So I'd suggest that we coordinate that work somehow.
 
 Do you plan to provide both encoding and decoding support or just one of them?

We have both encoding and decoding. It works on i.MX51/53, but was
originally written for i.MX27 as well. I haven't tested i.MX27 for a while
now, so it might or might not work. Find the source here:

git://git.pengutronix.de/git/imx/gst-plugins-fsl-vpu.git

The main difference from the FSL code is that the whole VPU
functionality is in a kernel module which talks (mostly) V4L2. Our next
task is to convert this into a real mem2mem device; right now it only
works with the included gstreamer plugin. You'll need a small kernel
patch to register the device and to add the clocks.

The gstreamer plugin is in a horrible state, but with the conversion to
mem2mem we hope to get rid of this entirely.

Sascha

-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread javier Martin
On 8 June 2012 10:48, Sascha Hauer s.ha...@pengutronix.de wrote:
 On Fri, Jun 08, 2012 at 09:39:15AM +0200, javier Martin wrote:
 Hi Robert,

 On 8 June 2012 09:26, Robert Schwebel r.schwe...@pengutronix.de wrote:
  Hi Javier,
 
  On Fri, Jun 08, 2012 at 09:21:13AM +0200, javier Martin wrote:
  If you refer to driver in [1] I have some concerns: i.MX27 VPU should
  be implemented as a V4L2 mem2mem device since it gets raw pictures
  from memory and outputs encoded frames to memory (some discussion
  about the subject can be fond here [2]), as Exynos driver from Samsung
  does. However, this driver you've mentioned doesn't do that: it just
  creates several mapping regions so that the actual functionality is
  implemented in user space by a library provided by Freescale, which
  regarding i.MX27 it is also GPL.
 
  What we are trying to do is implementing all the functionality in
  kernel space using mem2mem V4L2 framework so that it can be accepted
  in mainline.
 
  We will work on the VPU driver and it's migration towards a proper
  mem2mem device very soon, mainly on MX53, but of course MX27 should be
  taken care of by the same driver.
 
  So I'd suggest that we coordinate that work somehow.

 Do you plan to provide both encoding and decoding support or just one of 
 them?

 We have both encoding and decoding. It works on i.MX51/53, but was
 originally written for i.MX27 aswell. I haven't tested i.MX27 for longer
 now, so it might or might not work. Find the source here:

 git://git.pengutronix.de/git/imx/gst-plugins-fsl-vpu.git

Much too late...

http://www.digipedia.pl/usenet/thread/18550/20724/

-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread javier Martin
Hi,
I've checked this matter with a colleague and we have several reasons
to doubt that the i.MX27 and the i.MX53 can share the same driver for
their Video Processing Units (VPU):

1. The VPU in the i.MX27 is a codadx6 with support for H.264, H.263
and MPEG4 Part 2 [1]. Provided Freescale is using the same IP provider
for the i.MX53, and looking at the features that the VPU in this SoC
supports (1080p resolution, VP8), we are probably dealing with a coda
9 series core [2].

2. An important part of the functionality for controlling the
codadx6 is implemented using software messages between the main CPU
and the VPU. This means that a different firmware loaded into the VPU
can substantially change the way it is handled. As stated above, the
i.MX27 and i.MX53 have different IP blocks, so those messages will be
very different.

For these reasons we suggest that we carry on developing different
drivers for the i.MX27 and the i.MX53. Though it's true that both
drivers could share some of the overhead introduced by the mem2mem
framework, I don't think this is a good enough reason to merge them.

By the way, the driver for the VPU in the i.MX27 will be called
codadx6 [3]; I suggest you call your driver coda9 to avoid
confusion.


[1] 
http://www.chipsnmedia.com/product_search/product_view.php?part_idx=30idx=48
[2] 
http://www.chipsnmedia.com/product_search/product_view.php?part_idx=20idx=53
[3] 
https://github.com/jmartinc/video_visstrim/tree/mx27-codadx6/drivers/media/video/codadx6


-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread Sascha Hauer
On Fri, Jun 08, 2012 at 11:02:31AM +0200, javier Martin wrote:
 Hi,
 I've checked this matter with a colleague and we have several reasons
 to doubt that the i.MX27 and the i.MX53 can share the same driver for
 their Video Processing Units (VPU):
 
 1. The VPU in the i.MX27 is a codadx6 with support for H.264, H.263
 and MPEG4-Part2 [1]. Provided Freescale is using the same IP provider
 for i.MX53 and looking at the features that the VPU in this SoC
 supports (1080p resolution, VP8) we are probably dealing with a coda
 9 series [2].
 
 2. An important part of the functionality for controlling the
 codadx6 is implemented using software messages between the main CPU
 and the VPU, this means that a different firmware loaded in the VPU
 can substantially change the way it is handled. As previously stated,
 i.MX27 and i.MX53 have different IP blocks and because of this, those
 messages will be very different.
 
 For these reasons we suggest that we carry on developing different
 drivers for the i.MX27 and the i.MX53. Though it's true that both
 drivers could share some overhead given by the use of mem2mem
 framework, I don't think this is a good enough reason the merge them.
 
 By the way, driver for the VPU in the i.MX27 will be called
 codadx6[3], I suggest you call your driver coda9 to avoid
 confusion.

Well, our driver works on i.MX27 and i.MX5. Yes, it needs some
abstraction for different register layouts and different features, but
the cores behave sufficiently similar that it makes sense to share the
code in a single driver.
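
As an illustration of that kind of abstraction (names and values below are
invented for this sketch, not taken from any existing driver), one common
pattern is a per-variant description selected at probe time:

#include <linux/types.h>

struct coda_devtype {
	const char	*firmware;		/* firmware image to request */
	unsigned int	max_width;		/* largest supported frame */
	unsigned int	max_height;
	u32		reg_bitstream_base;	/* variant-specific register offset */
};

/* hypothetical values for the i.MX27 codadx6 variant */
static const struct coda_devtype codadx6_devtype = {
	.firmware		= "vpu_fw_imx27.bin",
	.max_width		= 720,
	.max_height		= 576,
	.reg_bitstream_base	= 0x100,
};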

Sascha


-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[GIT PULL FOR v3.6] Clean up and improve zr364xx

2012-06-08 Thread Hans Verkuil
Update zr364xx to the latest frameworks (except for vb2) and add
suspend/resume support.

Tested with actual hardware on both little and big endian hosts.

Regards,

Hans

The following changes since commit 5472d3f17845c4398c6a510b46855820920c2181:

  [media] mt9m032: Implement V4L2_CID_PIXEL_RATE control (2012-05-24 09:27:24 
-0300)

are available in the git repository at:

  git://linuxtv.org/hverkuil/media_tree.git zr364xx

for you to fetch changes up to d08f93c5d6b195f0c85683703c737a2b1714d9a5:

  zr364xx: add suspend/resume support. (2012-06-08 13:04:55 +0200)


Hans Verkuil (7):
  zr364xx: embed video_device and register it at the end of probe.
  zr364xx: introduce v4l2_device.
  zr364xx: convert to the control framework.
  zr364xx: fix querycap and fill in colorspace.
  zr364xx: add support for control events.
  zr364xx: allow multiple opens.
  zr364xx: add suspend/resume support.

 drivers/media/video/zr364xx.c |  484 
+
 1 file changed, 213 insertions(+), 271 deletions(-)
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread javier Martin
On 8 June 2012 11:23, Sascha Hauer s.ha...@pengutronix.de wrote:
 On Fri, Jun 08, 2012 at 11:02:31AM +0200, javier Martin wrote:
 Hi,
 I've checked this matter with a colleague and we have several reasons
 to doubt that the i.MX27 and the i.MX53 can share the same driver for
 their Video Processing Units (VPU):

 1. The VPU in the i.MX27 is a codadx6 with support for H.264, H.263
 and MPEG4-Part2 [1]. Provided Freescale is using the same IP provider
 for i.MX53 and looking at the features that the VPU in this SoC
 supports (1080p resolution, VP8) we are probably dealing with a coda
 9 series [2].

 2. An important part of the functionality for controlling the
 codadx6 is implemented using software messages between the main CPU
 and the VPU, this means that a different firmware loaded in the VPU
 can substantially change the way it is handled. As previously stated,
 i.MX27 and i.MX53 have different IP blocks and because of this, those
 messages will be very different.

 For these reasons we suggest that we carry on developing different
 drivers for the i.MX27 and the i.MX53. Though it's true that both
 drivers could share some overhead given by the use of mem2mem
 framework, I don't think this is a good enough reason the merge them.

 By the way, driver for the VPU in the i.MX27 will be called
 codadx6[3], I suggest you call your driver coda9 to avoid
 confusion.

 Well, our driver works on i.MX27 and i.MX5. Yes, it needs some
 abstraction for different register layouts and different features, but
 the cores behave sufficiently similar that it makes sense to share the
 code in a single driver.

Hi Sascha,

From our point of view the current situation is the following:
We have a very reliable driver for the VPU which is not mainline but
has been used for two years in our products. This driver only
supports encoding on the i.MX27 chip.
In parallel, you have a multichip driver in progress which is not
mainline either, not fully V4L2 compatible and not tested on i.MX27
[1].
At the same time, we have a driver in progress for i.MX27 encoding
only which follows the V4L2 mem2mem framework [2].

The first thing to decide would be which of the two drivers we take as a
base for the final mainline development.
In our view, cleaning up the driver from Pengutronix [1] would imply a lot
of effort maintaining code that we cannot really test (i.MX5,
i.MX6), whereas if we continue using [2] we would have something valid
for the i.MX27 much faster. Support for decoding and other chips could be
added later.

The second thing to decide is whether we keep on developing or whether we
should wait for you to have something in mainline. We have already
allocated resources to the development of the driver and to the long-term
testing needed to achieve product-level reliability. Pengutronix does not
seem to be committed to developing the features relevant to our
product (lack of YUV420 support for the i.MX27 camera driver [6]) nor
committed to any deadline (lack of answers or development on dmaengine
for i.MX27 [4][5]). Moreover, development effort is only 50% of the
cost; we would still have to spend a lot of time checking the video
stream manually in different real-life conditions (only extensive
testing allowed us to catch the "P frame marked as IDR" bug [7]).

In conclusion, we propose that we keep on developing our driver for
encoding on the i.MX27 VPU under the following conditions:
- We will provide a more generic name for the driver than codadx6,
maybe something like imx_vpu.
- We will make an extra effort and study your code in [1] in order
to provide a modular approach that makes adding new functionality (new
chips or decoding) as easy as possible, while making it obvious that
further patches do not break support for encoding on the i.MX27.

Does that sound reasonable to you?

Regards.

[1] git://git.pengutronix.de/git/imx/gst-plugins-fsl-vpu.git
[2] 
https://github.com/jmartinc/video_visstrim/tree/mx27-codadx6/drivers/media/video/codadx6
[3] http://www.spinics.net/lists/linux-media/msg37920.html
[4] http://www.spinics.net/lists/arm-kernel/msg159196.html
[5] http://lists.infradead.org/pipermail/linux-arm-kernel/2012-March/087842.html
[6] http://patchwork.linuxtv.org/patch/8826/
[7] 
http://article.gmane.org/gmane.linux.drivers.video-input-infrastructure/49166

-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH] media: videobuf2: fix kernel panic due to missing assign NULL to alloc_ctx

2012-06-08 Thread Albert Wang
  In function vb2_dma_contig_cleanup_ctx() we only kfree() the alloc_ctx.
  If we don't assign NULL to this pointer after freeing it,
  we may encounter the following kernel panic:

 kernel BUG at kernel/cred.c:98!
 Unable to handle kernel NULL pointer dereference at virtual address 
 pgd = c0004000
 [] *pgd=
 Internal error: Oops: 817 [#1] PREEMPT SMP
 Modules linked in: runcase_sysfs galcore mv_wtm_drv mv_wtm_prim
 CPU: 0Not tainted  (3.0.8+ #213)
 PC is at __bug+0x18/0x24
 LR is at __bug+0x14/0x24
 pc : [c0054670]lr : [c005466c]psr: 6113
 sp : c0681ec0  ip : f683e000  fp : 
 r10: e8ab4b58  r9 : 0fff  r8 : 0002
 r7 : e8665698  r6 : c10079ec  r5 : e8b13d80  r4 : e8b13d98
 r3 :   r2 : c0681eb4  r1 : c05c9ccc  r0 : 0035
 Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
 Control: 10c53c7d  Table: 29c3406a  DAC: 0015

  The root cause is that we may encounter an i2c or HW issue with the sensor
  which results in the driver exiting with an error during soc_camera_set_fmt()
  called from soc_camera_open():

	ret = soc_camera_set_fmt(icd, &f);
	if (ret < 0)
		goto esfmt;

  it will then call ici->ops->remove() in the following code:

  esfmt:
	pm_runtime_disable(&icd->vdev->dev);
  eresume:
	ici->ops->remove(icd);

  ici->ops->remove() will call vb2_dma_contig_cleanup_ctx() for cleanup,
  but we haven't done ici->ops->init_videobuf2() yet at that time,
  so it results in kfree()ing a non-NULL pointer twice

Change-Id: I1c66dd08438ae90abe555c52edcdbca0d39d829d
Signed-off-by: Albert Wang twan...@marvell.com
---
 drivers/media/video/videobuf2-dma-contig.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/media/video/videobuf2-dma-contig.c b/drivers/media/video/videobuf2-dma-contig.c
index 4b71326..9881171 100755
--- a/drivers/media/video/videobuf2-dma-contig.c
+++ b/drivers/media/video/videobuf2-dma-contig.c
@@ -178,6 +178,7 @@ EXPORT_SYMBOL_GPL(vb2_dma_contig_init_ctx);
 void vb2_dma_contig_cleanup_ctx(void *alloc_ctx)
 {
kfree(alloc_ctx);
+   alloc_ctx = NULL;
 }
 EXPORT_SYMBOL_GPL(vb2_dma_contig_cleanup_ctx);
 
-- 
1.7.0.4

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


re: [media] DRX-K: Initial check-in

2012-06-08 Thread Dan Carpenter
Hello Ralph Metzler,

The patch 43dd07f758d8: [media] DRX-K: Initial check-in from Jul 3, 
2011, leads to the following warning:
drivers/media/dvb/frontends/drxk_hard.c:2980 ADCSynchronization()
 warn: suspicious bitop condition

  2977          status = read16(state, IQM_AF_CLKNEG__A, &clkNeg);
  2978          if (status < 0)
  2979                  goto error;
  2980          if ((clkNeg | IQM_AF_CLKNEG_CLKNEGDATA__M) ==
  2981              IQM_AF_CLKNEG_CLKNEGDATA_CLK_ADC_DATA_POS) {

IQM_AF_CLKNEG_CLKNEGDATA__M is 2.
IQM_AF_CLKNEG_CLKNEGDATA_CLK_ADC_DATA_POS is 0.
So this condition can never be true.

  2982                  clkNeg &= (~(IQM_AF_CLKNEG_CLKNEGDATA__M));
  2983                  clkNeg |=
  2984                          IQM_AF_CLKNEG_CLKNEGDATA_CLK_ADC_DATA_NEG;
  2985          } else {
  2986                  clkNeg &= (~(IQM_AF_CLKNEG_CLKNEGDATA__M));
  2987                  clkNeg |=
  2988                          IQM_AF_CLKNEG_CLKNEGDATA_CLK_ADC_DATA_POS;

clkNeg |= 0; -- doesn't make much sense to the unenlightened.

  2989  }

I wrote a Smatch thing to find places that do x |= 0 inspired by the
above oddity and it finds several other places which do that:

drivers/media/dvb/frontends/drxk_hard.c:2525 GetQAMSignalToNoise() info: why 
not propagate 'status' from read16() instead of -22?
drivers/media/dvb/frontends/drxk_hard.c:2980 ADCSynchronization() warn: 
suspicious bitop condition
drivers/media/dvb/frontends/drxk_hard.c:2988 ADCSynchronization() warn: x |= 0
drivers/media/dvb/frontends/drxk_hard.c:3833 SetDVBT() warn: x |= 0
drivers/media/dvb/frontends/drxk_hard.c:3847 SetDVBT() warn: x |= 0
drivers/media/dvb/frontends/drxk_hard.c:3888 SetDVBT() warn: x |= 0
drivers/media/dvb/frontends/drxk_hard.c:3915 SetDVBT() warn: x |= 0
drivers/media/dvb/frontends/drxk_hard.c:3931 SetDVBT() warn: x |= 0

regards,
dan carpenter

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 04/12] v4l: vb2-dma-contig: add setup of sglist for MMAP buffers

2012-06-08 Thread Tomasz Stanislawski
Hi Laurent and Subash,

I confirm the issue found by Subash. The function
vb2_dc_kaddr_to_pages() does fail on some occasions.
The failures are rather strange, like 'got 95 of
150 pages'. It took me some time to find the reason
for the problem.

I found that dma_alloc_coherent() for an IOMMU on ARM
uses ioremap_page_range() to map a buffer into kernel
space. The mapping is done by updating the page table.

The problem is that every process has a different first-level
page table. ioremap_page_range() updates only the table of the
init process. The page table present in current->mm shares
a majority of the first-level entries in the kernel range
(above 0xc0000000), but *not all* of them. That is why
vb2_dc_kaddr_to_pages() worked for small buffers and
occasionally failed for larger buffers.

I found two ways to fix this problem.
a) use init_mm instead of current->mm while
   creating an artificial vma
b) access the dma memory by calling
   *((volatile int *)kaddr) = 0;
   before calling follow_pfn
   This way a fault is generated and the PT is
   updated by copying entries from init_mm.

What do you think about presented solutions?
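
To make option (b) concrete, here is a minimal sketch of how it could be
folded into the vb2_dc_kaddr_to_pages() helper quoted below in this
thread (the return convention and the pfn_to_page() call are assumptions
for illustration, not taken from the actual patch):

static int vb2_dc_kaddr_to_pages(unsigned long kaddr,
				 struct page **pages, unsigned int n_pages)
{
	unsigned int i;
	unsigned long pfn;
	struct vm_area_struct vma = {
		.vm_flags = VM_IO | VM_PFNMAP,
		.vm_mm = current->mm,
	};

	for (i = 0; i < n_pages; ++i, kaddr += PAGE_SIZE) {
		/* touch the page so a kernel fault copies the missing
		 * first-level entry from init_mm before follow_pfn() */
		*((volatile int *)kaddr) = 0;
		if (follow_pfn(&vma, kaddr, &pfn))
			break;
		pages[i] = pfn_to_page(pfn);
	}

	return i == n_pages ? 0 : -EINVAL;
}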

Regards,
Tomasz Stanislawski



On 06/07/2012 04:28 PM, Subash Patel wrote:
 Hello Tomasz,
 
 On 06/07/2012 06:06 AM, Laurent Pinchart wrote:
 Hi Tomasz,

 On Wednesday 06 June 2012 13:56:42 Tomasz Stanislawski wrote:
 On 06/06/2012 10:06 AM, Laurent Pinchart wrote:
 On Wednesday 23 May 2012 15:07:27 Tomasz Stanislawski wrote:
 This patch adds the setup of sglist list for MMAP buffers.
 It is needed for buffer exporting via DMABUF mechanism.

 Signed-off-by: Tomasz Stanislawskit.stanisl...@samsung.com
 Signed-off-by: Kyungmin Parkkyungmin.p...@samsung.com
 ---

   drivers/media/video/videobuf2-dma-contig.c |   70 +-
   1 file changed, 68 insertions(+), 2 deletions(-)

 diff --git a/drivers/media/video/videobuf2-dma-contig.c
 b/drivers/media/video/videobuf2-dma-contig.c index 52b4f59..ae656be
 100644
 --- a/drivers/media/video/videobuf2-dma-contig.c
 +++ b/drivers/media/video/videobuf2-dma-contig.c

 [snip]

 +static int vb2_dc_kaddr_to_pages(unsigned long kaddr,
 +                struct page **pages, unsigned int n_pages)
 +{
 +        unsigned int i;
 +        unsigned long pfn;
 +        struct vm_area_struct vma = {
 +                .vm_flags = VM_IO | VM_PFNMAP,
 +                .vm_mm = current->mm,
 +        };
 +
 +        for (i = 0; i < n_pages; ++i, kaddr += PAGE_SIZE) {

 The follow_pfn() kerneldoc mentions that it looks up a PFN for a user
 address. The only users I've found in the kernel sources pass a user
 address. Is it legal to use it for kernel addresses ?

 It is not completely legal :). As I understand the mm code, the follow_pfn
 works only for IO/PFN mappings. This is the typical case (every case?) of
 mappings created by dma_alloc_coherent.

 In order to make this function work for a kernel pointer, one has to create
 an artificial VMA that has IO/PFN bits on.

 This solution is a hack-around for dma_get_pages (aka dma_get_sgtable). This
 way the dependency on dma_get_pages was broken giving a small hope of
 merging vb2 exporting.

 Marek prepared a patchset 'ARM: DMA-mapping: new extensions for buffer
 sharing' that adds dma buffers with no kernel mappings and dma_get_sgtable
 function.

 However this patchset is still in a RFC state.

 That's totally understood :-) I'm fine with keeping the hack for now until 
 the
 dma_get_sgtable() gets in a usable/mergeable state, please just mention it in
 the code with something like

 /* HACK: This is a temporary workaround until the dma_get_sgtable() function
 becomes available. */

 I have prepared a patch that removes vb2_dc_kaddr_to_pages and substitutes
 it with dma_get_pages. It will become a part of vb2-exporter patches just
 after dma_get_sgtable is merged (or at least acked by major maintainers).

 The above function call (because of follow_pfn) doesn't succeed for all the 
 allocated pages. Hence I created a patch(attached)
 which is based on [1] series. One can apply it for using your present 
 patch-set in the meantime.
 
 Regards,
 Subash
 [1] http://www.spinics.net/lists/kernel/msg1343092.html

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC] Support for H.264/MPEG4 encoder (VPU) in i.MX27.

2012-06-08 Thread Sascha Hauer
On Fri, Jun 08, 2012 at 01:32:27PM +0200, javier Martin wrote:
 On 8 June 2012 11:23, Sascha Hauer s.ha...@pengutronix.de wrote:
  On Fri, Jun 08, 2012 at 11:02:31AM +0200, javier Martin wrote:
 
 Hi Sascha,
 
 From our point of view the current situation is the following:
 We have a very reliable driver for the VPU which is not mainline but
 it's been used for two years in our products. This driver only
 supports encoding in the i.MX27 chip.
 In parallel, you have a a multichip driver in progress which is not
 mainline either, not fully V4L2 compatible and not tested for i.MX27.
 [1]
 At the same time, we have a driver in progress for i.MX27 encoding
 only which follows V4L2 mem2mem framework. [2].
 
 The first thing to decide would be which of both drivers we take as a
 base for final mainline developing.
 In our view, cleaning up driver from Pengutronix [1] would imply a lot
 of effort of maintaining code that we cannot really test (i.MX5,
 i.MX6) whereas if we continue using [2] we would have something valid
 for i.MX27 much faster. Support for decoding and other chips could be
 added later.
 
 The second thing would be whether we keep on developing or whether we
 should wait for you to have something in mainline. We have already
 allocated resources to the development of the driver and long-term
 testing to achieve product level reliability. Pengutronix does not
 seem to be committed to developing the features relevant to our
 product (lack of YUV420 support for i.MX27 camera driver[6]) nor
 committed to any deadline (lack of answers or development on dmaengine
 for i.MX27[4][5]). Moreover, development effort is only 50% of the
 cost and we would still have to spend a lot of time checking the video
 stream manually in different real-rife conditions (only extensive
 testing allowed us to catch the P frame marked as IDR bug [7]).
 
 As a conclusion we propose that we keep on developing our driver for
 encoding in the i.MX27 VPU under the following conditions:
 - We will provide a more generic name for the driver than codadx6,
 maybe something as imx_vpu.

Since we already know that this is a codadx6, we should name it codadx
instead of anything VPU-specific. I also suggest codadx rather than
anything more specific like codadx6, since we should rather motivate
people to merge other coda cores into this driver.

 - We will do an extra effort and will study your code in [1] in order
 to provide a modular approach that makes adding new functionality (new
 chips or decoding) as easy as possible while making obvious that
 further patches do not break support for encoding in the i.MX27.

This sounds like a plan. I am happy that you are willing to keep an eye
on our driver. It would be a pity to have unnecessary barriers for
merging codadx9 stuff later.

Sascha


-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 3/5] usb: gadget/uvc: Add super-speed support to UVC webcam gadget

2012-06-08 Thread Laurent Pinchart
Hi Bhupesh,

Thanks for the patch.

As Felipe has already applied the patch to his public tree, I'll send
incremental cleanup patches. Here's my review nonetheless, with a question
on which I'd like your opinion before writing the cleanup patches.

On Friday 01 June 2012 15:08:56 Bhupesh Sharma wrote:
 This patch adds super-speed support to UVC webcam gadget.
 
 Also in this patch:
   - We add the configurability to pass bInterval, bMaxBurst, mult
 factors for video streaming endpoint (ISOC IN) through module
 parameters.
 
   - We use config_ep_by_speed helper routine to configure video
 streaming endpoint.
 
 Signed-off-by: Bhupesh Sharma bhupesh.sha...@st.com
 ---
  drivers/usb/gadget/f_uvc.c  |  241 +++-
  drivers/usb/gadget/f_uvc.h  |8 +-
  drivers/usb/gadget/uvc.h|4 +-
  drivers/usb/gadget/webcam.c |   29 +-
  4 files changed, 247 insertions(+), 35 deletions(-)
 
 diff --git a/drivers/usb/gadget/f_uvc.c b/drivers/usb/gadget/f_uvc.c
 index dd7d7a9..2a8bf06 100644
 --- a/drivers/usb/gadget/f_uvc.c
 +++ b/drivers/usb/gadget/f_uvc.c
 @@ -29,6 +29,25 @@
 
  unsigned int uvc_gadget_trace_param;
 
 +/*-
 */ +
 +/* module parameters specific to the Video streaming endpoint */
 +static unsigned streaming_interval = 1;
 +module_param(streaming_interval, uint, S_IRUGO|S_IWUSR);
 +MODULE_PARM_DESC(streaming_interval, 1 - 16);
 +
 +static unsigned streaming_maxpacket = 1024;

unsigned int please.

 +module_param(streaming_maxpacket, uint, S_IRUGO|S_IWUSR);
 +MODULE_PARM_DESC(streaming_maxpacket, 0 - 1023 (fs), 0 - 1024 (hs/ss));
 +
 +static unsigned streaming_mult;
 +module_param(streaming_mult, uint, S_IRUGO|S_IWUSR);
 +MODULE_PARM_DESC(streaming_mult, 0 - 2 (hs/ss only));

I'd rather like to integrate this into the streaming_maxpacket parameter and
compute the multiplier at runtime. This shouldn't be difficult for high speed:
the multiplier for max packet sizes between 1 and 1024 is 1, between 1025 and
2048 it is 2, and between 2049 and 3072 it is 3.
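
As a sketch of that runtime computation (not code from the patch itself;
uvc_hs_mult() is a made-up helper name, assuming maxpacket >= 1):

/* 1..1024 bytes -> 1 transaction, 1025..2048 -> 2, 2049..3072 -> 3 */
static unsigned int uvc_hs_mult(unsigned int maxpacket)
{
	return DIV_ROUND_UP(maxpacket, 1024U);
}

(The high-speed endpoint descriptor itself encodes this value as mult - 1
in bits 12:11 of wMaxPacketSize.)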

 +static unsigned streaming_maxburst;
 +module_param(streaming_maxburst, uint, S_IRUGO|S_IWUSR);
 +MODULE_PARM_DESC(streaming_maxburst, 0 - 15 (ss only));

Do you think maxburst could also be integrated into the streaming_maxpacket
parameter ? Put another way, can we compute the multiplier and the burst
value from a single maximum number of bytes per service interval, or do they
have to be specified independently ? If using more than one burst, the
wMaxPacketSize value must be 1024 for HS. Only multiples of 1024 higher than
1024 can thus be achieved through different multiplier/burst settings.

 +
  /*
 --
 * Function descriptors
   */
  @@ -84,7 +103,7 @@ static struct usb_interface_descriptor uvc_control_intf __initdata = {
    .iInterface= 0,
   };
 
 -static struct usb_endpoint_descriptor uvc_control_ep __initdata = {
 +static struct usb_endpoint_descriptor uvc_fs_control_ep __initdata = {
   .bLength= USB_DT_ENDPOINT_SIZE,
   .bDescriptorType= USB_DT_ENDPOINT,
   .bEndpointAddress   = USB_DIR_IN,
  @@ -124,7 +143,7 @@ static struct usb_interface_descriptor uvc_streaming_intf_alt1 __initdata = {
    .iInterface= 0,
   };
 
 -static struct usb_endpoint_descriptor uvc_streaming_ep = {
 +static struct usb_endpoint_descriptor uvc_fs_streaming_ep = {
   .bLength= USB_DT_ENDPOINT_SIZE,
   .bDescriptorType= USB_DT_ENDPOINT,
   .bEndpointAddress   = USB_DIR_IN,
  @@ -133,15 +152,72 @@ static struct usb_endpoint_descriptor uvc_streaming_ep = {
    .bInterval= 1,
   };
 
 +static struct usb_endpoint_descriptor uvc_hs_streaming_ep = {
 + .bLength= USB_DT_ENDPOINT_SIZE,
 + .bDescriptorType= USB_DT_ENDPOINT,
 + .bEndpointAddress   = USB_DIR_IN,
 + .bmAttributes   = USB_ENDPOINT_XFER_ISOC,
 + .wMaxPacketSize = cpu_to_le16(1024),
 + .bInterval  = 1,

wMaxPacketSize and bInterval are now initialized from module parameters, so 
I'd leave them at zero here and add a comment about it.

 +};

Please keep the indentation style consistent with the rest of the file.

 +
 +/* super speed support */
 +static struct usb_endpoint_descriptor uvc_ss_control_ep __initdata = {
 + .bLength =  USB_DT_ENDPOINT_SIZE,
 + .bDescriptorType =  USB_DT_ENDPOINT,
 +
 + .bEndpointAddress = USB_DIR_IN,
 + .bmAttributes = USB_ENDPOINT_XFER_INT,
 + .wMaxPacketSize =   cpu_to_le16(STATUS_BYTECOUNT),
 + .bInterval =8,
 +};

The FS/HS/SS control endpoint descriptors are identical, there's no need to 
define separate descriptors. You also don't set the SS endpoint number to the 
FS endpoint number. As you don't call 

cron job: media_tree daily build: WARNINGS

2012-06-08 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:Fri Jun  8 19:00:16 CEST 2012
git hash:5472d3f17845c4398c6a510b46855820920c2181
gcc version:  i686-linux-gcc (GCC) 4.6.3
host hardware:x86_64
host os:  3.3-6.slh.2-amd64

linux-git-arm-eabi-davinci: WARNINGS
linux-git-arm-eabi-exynos: WARNINGS
linux-git-arm-eabi-omap: WARNINGS
linux-git-i686: WARNINGS
linux-git-m32r: WARNINGS
linux-git-mips: WARNINGS
linux-git-powerpc64: WARNINGS
linux-git-x86_64: WARNINGS
linux-2.6.31.12-i686: WARNINGS
linux-2.6.32.6-i686: WARNINGS
linux-2.6.33-i686: WARNINGS
linux-2.6.34-i686: WARNINGS
linux-2.6.35.3-i686: WARNINGS
linux-2.6.36-i686: WARNINGS
linux-2.6.37-i686: WARNINGS
linux-2.6.38.2-i686: WARNINGS
linux-2.6.39.1-i686: WARNINGS
linux-3.0-i686: WARNINGS
linux-3.1-i686: WARNINGS
linux-3.2.1-i686: WARNINGS
linux-3.3-i686: WARNINGS
linux-3.4-i686: WARNINGS
linux-2.6.31.12-x86_64: WARNINGS
linux-2.6.32.6-x86_64: WARNINGS
linux-2.6.33-x86_64: WARNINGS
linux-2.6.34-x86_64: WARNINGS
linux-2.6.35.3-x86_64: WARNINGS
linux-2.6.36-x86_64: WARNINGS
linux-2.6.37-x86_64: WARNINGS
linux-2.6.38.2-x86_64: WARNINGS
linux-2.6.39.1-x86_64: WARNINGS
linux-3.0-x86_64: WARNINGS
linux-3.1-x86_64: WARNINGS
linux-3.2.1-x86_64: WARNINGS
linux-3.3-x86_64: WARNINGS
linux-3.4-x86_64: WARNINGS
apps: WARNINGS
spec-git: WARNINGS
sparse: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Friday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Friday.tar.bz2

The V4L-DVB specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/media.html
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Synchronization framework

2012-06-08 Thread Erik Gilling
On Thu, Jun 7, 2012 at 10:52 AM, Maarten Lankhorst
m.b.lankho...@gmail.com wrote:
 I do agree you need some way to synch userspace though, but I
 think adding a new api for userspace is not the way to go.

I'm not sure I understand how you propose to expose the functionality
to userspace in a way that does not depend on the driver that owns
the buffer w/o adding an API.

-Erik
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Linaro-mm-sig] [RFC] Synchronizing access to buffers shared with dma-buf between drivers/devices

2012-06-08 Thread Erik Gilling
On Thu, Jun 7, 2012 at 4:35 AM, Tom Cooksey tom.cook...@arm.com wrote:
 The alternate is to not associate sync objects with buffers and
 have them be distinct entities, exposed to userspace. This gives
 userpsace more power and flexibility and might allow for use-cases
 which an implicit synchronization mechanism can't satisfy - I'd
 be curious to know any specifics here.

Time and time again we've had problems with implicit synchronization
resulting in bugs where different drivers play by slightly different
implicit rules.  We're convinced the best way to attack this problem
is to move as much of the command and control of synchronization as
possible into a single piece of code (the compositor in our case.)  To
facilitate this we're going to be mandating this explicit approach in
the K release of Android.

 However, every driver which
 needs to participate in the synchronization mechanism will need
 to have its interface with userspace modified to allow the sync
 objects to be passed to the drivers. This seemed like a lot of
 work to me, which is why I prefer the implicit approach. However
 I don't actually know what work is needed and think it should be
 explored. I.e. How much work is it to add explicit sync object
 support to the DRM  v4l2 interfaces?

 E.g. I believe DRM/GEM's job dispatch API is in-order
 in which case it might be easy to just add wait for this fence
 and signal this fence ioctls. Seems like vmwgfx already has
 something similar to this already? Could this work over having
 to specify a list of sync objects to wait on and another list
 of sync objects to signal for every operation (exec buf/page
 flip)? What about for v4l2?

If I understand you right, a job submission with explicit sync would
become 3 submissions:
1) submit wait for pre-req fence job
2) submit render job
3) submit signal ready fence job

Does DRM provide a way to ensure these 3 jobs are submitted
atomically?  I also expect GPU vendors would like to get clever about
GPU-to-GPU fence dependencies.  That could probably be handled
entirely in the userspace GL driver.

 I guess my other thought is that implicit vs explicit is not
 mutually exclusive, though I'd guess there'd be interesting
 deadlocks to have to debug if both were in use _at the same
 time_. :-)

I think this is an approach worth investigating.  I'd like a way to
either opt out of implicit sync or have a way to check if a dma-buf
has an attached fence and detach it.  Actually, that could work really
well. Consider:

* Each dma_buf has a single fence slot
* on submission
   * the driver will extract the fence from the dma_buf and queue a wait on it.
   * the driver will replace that fence with its own completion
fence before the job submission ioctl returns.
* dma_buf will have two userspace ioctls:
   * DETACH: will return the fence as an FD to userspace and clear the
fence slot in the dma_buf
   * ATTACH: takes a fence FD from userspace and attaches it to the
dma_buf fence slot.  Returns an error if the fence slot is non-empty.

In the android case, we can do a detach after every submission and an
attach right before.
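
Purely as a sketch of the proposal above (these ioctl names and the
structure do not exist anywhere; they only illustrate the
one-fence-slot-per-dma_buf idea with fences passed as file descriptors):

#include <linux/ioctl.h>
#include <linux/types.h>

struct dma_buf_fence {
	__s32	fence_fd;	/* fence passed/returned as a file descriptor */
	__u32	flags;		/* reserved, must be zero */
};

/* return the currently attached fence as an fd and clear the slot */
#define DMA_BUF_IOCTL_DETACH_FENCE	_IOR('b', 0x20, struct dma_buf_fence)
/* attach a fence fd; fails with -EBUSY if the slot is already occupied */
#define DMA_BUF_IOCTL_ATTACH_FENCE	_IOW('b', 0x21, struct dma_buf_fence)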

-Erik
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Linaro-mm-sig] [RFC] Synchronizing access to buffers shared with dma-buf between drivers/devices

2012-06-08 Thread Daniel Vetter
On Fri, Jun 08, 2012 at 01:56:05PM -0700, Erik Gilling wrote:
 On Thu, Jun 7, 2012 at 4:35 AM, Tom Cooksey tom.cook...@arm.com wrote:
  The alternate is to not associate sync objects with buffers and
  have them be distinct entities, exposed to userspace. This gives
  userpsace more power and flexibility and might allow for use-cases
  which an implicit synchronization mechanism can't satisfy - I'd
  be curious to know any specifics here.
 
 Time and time again we've had problems with implicit synchronization
 resulting in bugs where different drivers play by slightly different
 implicit rules.  We're convinced the best way to attack this problem
 is to move as much of the command and control of synchronization as
 possible into a single piece of code (the compositor in our case.)  To
 facilitate this we're going to be mandating this explicit approach in
 the K release of Android.
 
  However, every driver which
  needs to participate in the synchronization mechanism will need
  to have its interface with userspace modified to allow the sync
  objects to be passed to the drivers. This seemed like a lot of
  work to me, which is why I prefer the implicit approach. However
  I don't actually know what work is needed and think it should be
  explored. I.e. How much work is it to add explicit sync object
  support to the DRM  v4l2 interfaces?
 
  E.g. I believe DRM/GEM's job dispatch API is in-order
  in which case it might be easy to just add wait for this fence
  and signal this fence ioctls. Seems like vmwgfx already has
  something similar to this already? Could this work over having
  to specify a list of sync objects to wait on and another list
  of sync objects to signal for every operation (exec buf/page
  flip)? What about for v4l2?
 
 If I understand you right a job submission with explicit sync would
 become 3 submission:
 1) submit wait for pre-req fence job
 2) submit render job
 3) submit signal ready fence job
 
 Does DRM provide a way to ensure these 3 jobs are submitted
 atomically?  I also expect GPU vendor would like to get clever about
 GPU to GPU fence dependancies.  That could probably be handled
 entirely in the userspace GL driver.

Well, drm doesn't provide any way to submit a job. These are all done in
driver-private ioctls. And I guess with your proposal below we can do
exactly what you want.

  I guess my other thought is that implicit vs explicit is not
  mutually exclusive, though I'd guess there'd be interesting
  deadlocks to have to debug if both were in use _at the same
  time_. :-)
 
 I think this is an approach worth investigating.  I'd like a way to
 either opt out of implicit sync or have a way to check if a dma-buf
 has an attached fence and detach it.  Actually, that could work really
 well. Consider:
 
 * Each dma_buf has a single fence slot
 * on submission
* the driver will extract the fence from the dma_buf and queue a wait on 
 it.
* the driver will replace that fence with it's own complettion
 fence before the job submission ioctl returns.

This is pretty much what I've had in mind, with the extension that we
probably need both a read and a write fence - in a lot of cases multiple
people want to use a buffer for reads (e.g. when decoding video streams
the decoder needs it as a reference frame whereas later stages use it
read-only, too).

 * dma_buf will have two userspace ioctls:
* DETACH: will return the fence as an FD to userspace and clear the
 fence slot in the dma_buf
* ATTACH: takes a fence FD from userspace and attaches it to the
 dma_buf fence slot.  Returns an error if the fence slot is non-empty.

I am not yet sold on explicit fences, especially for cross-device sync. I
do see uses for explicit fences that can be accessed from userspace for
individual drivers - otherwise tricks like suballocation are a bit hard to
pull off. But for cross-device buffer sharing I don't quite see the point,
especially since the current Linux userspace graphics stack manages to do
so without (e.g. DRI2 is all implicit sync'ed).

btw, I'll try to stitch together a more elaborate discussion over the w/e,
I have a few more pet-peeves with your actual implementation ;-)

Yours, Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Linaro-mm-sig] [RFC] Synchronizing access to buffers shared with dma-buf between drivers/devices

2012-06-08 Thread Erik Gilling
On Fri, Jun 8, 2012 at 2:42 PM, Daniel Vetter dan...@ffwll.ch wrote:
 I think this is an approach worth investigating.  I'd like a way to
 either opt out of implicit sync or have a way to check if a dma-buf
 has an attached fence and detach it.  Actually, that could work really
 well. Consider:

 * Each dma_buf has a single fence slot
 * on submission
    * the driver will extract the fence from the dma_buf and queue a wait on 
 it.
    * the driver will replace that fence with it's own complettion
 fence before the job submission ioctl returns.

 This is pretty much what I've had in mind with the extension that we
 probably need both a read and a write fence - in a lot of cases multiple
 people want to use a buffer for reads (e.g. when decoding video streams
 the decode needs it as a reference frame wheras later stages use it
 read-only, too).

I actually hit send instead of save draft on this before talking
this over with some co-workers.  We came up with the same issues.  I'm
actually less concerned about the specifics as long as we have a way
to attach and detach the fences.

 * dma_buf will have two userspace ioctls:
    * DETACH: will return the fence as an FD to userspace and clear the
 fence slot in the dma_buf
    * ATTACH: takes a fence FD from userspace and attaches it to the
 dma_buf fence slot.  Returns an error if the fence slot is non-empty.

 I am not yet sold on explicit fences, especially for cross-device sync. I
 do see uses for explicit fences that can be accessed from userspace for
 individual drivers - otherwise tricks like suballocation are a bit hard to
 pull off. But for cross-device buffer sharing I don't quite see the point,
 especially since the current Linux userspace graphics stack manages to do
 so without (e.g. DRI2 is all implicit sync'ed).

The current linux graphics stack does not allow synchronization
between the GPU and a camera/video decoder.  When we've seen people
try to support this behind the scenes, they get it wrong and introduce
bugs that can take weeks to track down.  As stated in the previous
email, one of our goals is to centrally manage synchronization so that
it's easier for people bringing up a platform to get it right.

 btw, I'll try to stitch together a more elaborate discussion over the w/e,
 I have a few more pet-peeves with your actual implementation ;-)

Happy to hear feedback on the specifics.

-Erik
--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH v3 1/9] string: introduce memweight

2012-06-08 Thread Akinobu Mita
memweight() is a function that counts the total number of bits set
in a memory area.  Unlike bitmap_weight(), memweight() takes a pointer
and a size in bytes to specify a memory area which does not need to be
aligned to a long-word boundary.

Signed-off-by: Akinobu Mita akinobu.m...@gmail.com
Cc: Anders Larsen a...@alarsen.net
Cc: Alasdair Kergon a...@redhat.com
Cc: dm-de...@redhat.com
Cc: linux-fsde...@vger.kernel.org
Cc: Laurent Pinchart laurent.pinch...@ideasonboard.com
Cc: linux-media@vger.kernel.org
Cc: Mark Fasheh mfas...@suse.com
Cc: Joel Becker jl...@evilplan.org
Cc: ocfs2-de...@oss.oracle.com
Cc: Jan Kara j...@suse.cz
Cc: linux-e...@vger.kernel.org
Cc: Andrew Morton a...@linux-foundation.org
Cc: Andreas Dilger adilger.ker...@dilger.ca
Cc: Theodore Ts'o ty...@mit.edu
Cc: Matthew Wilcox matt...@wil.cx
---
v3: add comment for the last loop, adviced by Jan Kara
v2: simplify memweight(), adviced by Jan Kara

 include/linux/string.h |3 +++
 lib/string.c   |   36 
 2 files changed, 39 insertions(+), 0 deletions(-)

diff --git a/include/linux/string.h b/include/linux/string.h
index e033564..ffe0442 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -145,4 +145,7 @@ static inline bool strstarts(const char *str, const char *prefix)
return strncmp(str, prefix, strlen(prefix)) == 0;
 }
 #endif
+
+extern size_t memweight(const void *ptr, size_t bytes);
+
 #endif /* _LINUX_STRING_H_ */
diff --git a/lib/string.c b/lib/string.c
index e5878de..e467186 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -26,6 +26,7 @@
 #include <linux/export.h>
 #include <linux/bug.h>
 #include <linux/errno.h>
+#include <linux/bitmap.h>
 
 #ifndef __HAVE_ARCH_STRNICMP
 /**
@@ -824,3 +825,38 @@ void *memchr_inv(const void *start, int c, size_t bytes)
return check_bytes8(start, value, bytes % 8);
 }
 EXPORT_SYMBOL(memchr_inv);
+
+/**
+ * memweight - count the total number of bits set in memory area
+ * @ptr: pointer to the start of the area
+ * @bytes: the size of the area
+ */
+size_t memweight(const void *ptr, size_t bytes)
+{
+   size_t w = 0;
+   size_t longs;
+   const unsigned char *bitmap = ptr;
+
+   for (; bytes > 0 && ((unsigned long)bitmap) % sizeof(long);
+   bytes--, bitmap++)
+   w += hweight8(*bitmap);
+
+   longs = bytes / sizeof(long);
+   if (longs) {
+   BUG_ON(longs >= INT_MAX / BITS_PER_LONG);
+   w += bitmap_weight((unsigned long *)bitmap,
+   longs * BITS_PER_LONG);
+   bytes -= longs * sizeof(long);
+   bitmap += longs * sizeof(long);
+   }
+   /*
+* The reason that this last loop is distinct from the preceding
+* bitmap_weight() call is to compute 1-bits in the last region smaller
+* than sizeof(long) properly on big-endian systems.
+*/
+   for (; bytes > 0; bytes--, bitmap++)
+   w += hweight8(*bitmap);
+
+   return w;
+}
+EXPORT_SYMBOL(memweight);
-- 
1.7.7.6



[PATCH v3 5/9] video/uvc: use memweight()

2012-06-08 Thread Akinobu Mita
Use memweight() to count the total number of bits set in a memory area.

Signed-off-by: Akinobu Mita akinobu.m...@gmail.com
Acked-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
Cc: linux-media@vger.kernel.org
---
No changes from v1

 drivers/media/video/uvc/uvc_ctrl.c |5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/media/video/uvc/uvc_ctrl.c b/drivers/media/video/uvc/uvc_ctrl.c
index af26bbe..f7061a5 100644
--- a/drivers/media/video/uvc/uvc_ctrl.c
+++ b/drivers/media/video/uvc/uvc_ctrl.c
@@ -2083,7 +2083,7 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
/* Walk the entities list and instantiate controls */
	list_for_each_entry(entity, &dev->entities, list) {
struct uvc_control *ctrl;
-   unsigned int bControlSize = 0, ncontrols = 0;
+   unsigned int bControlSize = 0, ncontrols;
__u8 *bmControls = NULL;
 
if (UVC_ENTITY_TYPE(entity) == UVC_VC_EXTENSION_UNIT) {
@@ -2101,8 +2101,7 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
uvc_ctrl_prune_entity(dev, entity);
 
/* Count supported controls and allocate the controls array */
-   for (i = 0; i < bControlSize; ++i)
-   ncontrols += hweight8(bmControls[i]);
+   ncontrols = memweight(bmControls, bControlSize);
if (ncontrols == 0)
continue;
 
-- 
1.7.7.6



RE: [PATCH 3/5] usb: gadget/uvc: Add super-speed support to UVC webcam gadget

2012-06-08 Thread Bhupesh SHARMA
Hi Laurent,

Thanks for your review comments.
Please find my comments inline:

 As Felipe has already applied the patch to his public tree, I'll send
 incremental cleanup patches. Here's my review nonetheless, with a question
 on which I'd like to know your opinion before writing the cleanup patches.

Not to worry, I can send incremental patches.
Anyway, I need to ensure that patches 4/5 and 5/5 of the series also apply
cleanly on Felipe's tree, as he was facing issues while applying them.

Please review patches 4/5 and 5/5 of this patch set as well, and then I can
re-circulate the patches for re-review and incorporation in gadget-next.

 On Friday 01 June 2012 15:08:56 Bhupesh Sharma wrote:
  This patch adds super-speed support to UVC webcam gadget.
 
  Also in this patch:
  - We add the ability to configure the bInterval, bMaxBurst and mult
    factors for the video streaming endpoint (ISOC IN) through module
    parameters.
 
  - We use config_ep_by_speed helper routine to configure video
streaming endpoint.
 
  Signed-off-by: Bhupesh Sharma bhupesh.sha...@st.com
  ---
   drivers/usb/gadget/f_uvc.c  |  241
 +++-
   drivers/usb/gadget/f_uvc.h  |8 +-
   drivers/usb/gadget/uvc.h|4 +-
   drivers/usb/gadget/webcam.c |   29 +-
   4 files changed, 247 insertions(+), 35 deletions(-)
 
  diff --git a/drivers/usb/gadget/f_uvc.c b/drivers/usb/gadget/f_uvc.c
  index dd7d7a9..2a8bf06 100644
  --- a/drivers/usb/gadget/f_uvc.c
  +++ b/drivers/usb/gadget/f_uvc.c
  @@ -29,6 +29,25 @@
 
   unsigned int uvc_gadget_trace_param;
 
  +/*--
 ---
  */ +
  +/* module parameters specific to the Video streaming endpoint */
  +static unsigned streaming_interval = 1;
  +module_param(streaming_interval, uint, S_IRUGO|S_IWUSR);
  +MODULE_PARM_DESC(streaming_interval, 1 - 16);
  +
  +static unsigned streaming_maxpacket = 1024;

 unsigned int please.

Ok.

  +module_param(streaming_maxpacket, uint, S_IRUGO|S_IWUSR);
  +MODULE_PARM_DESC(streaming_maxpacket, 0 - 1023 (fs), 0 - 1024
 (hs/ss));
  +
  +static unsigned streaming_mult;
  +module_param(streaming_mult, uint, S_IRUGO|S_IWUSR);
  +MODULE_PARM_DESC(streaming_mult, 0 - 2 (hs/ss only));

 I'd rather like to integrate this into the streaming_maxpacket parameter,
 and compute the multiplier at runtime. This shouldn't be difficult for
 high speed: the multiplier for max packet sizes between 1 and 1024 is 1,
 between 1025 and 2048 is 2, and between 2049 and 3072 is 3.

There is a specific purpose for keeping these three module parameters
separate: with xHCI hosts and USB3 device-side controllers still in a
stabilization phase, separate parameters make it easier for someone
switching from USB 2.0 to USB 3.0 to understand the major differences
with respect to ISOC IN endpoints.

A similar approach is already used in the reference USB gadget zero
(see [1]), and I have tried to follow the same approach here.

[1] 
http://git.kernel.org/?p=linux/kernel/git/balbi/usb.git;a=blob;f=drivers/usb/gadget/f_sourcesink.c

  +static unsigned streaming_maxburst;
  +module_param(streaming_maxburst, uint, S_IRUGO|S_IWUSR);
  +MODULE_PARM_DESC(streaming_maxburst, 0 - 15 (ss only));

 Do you think maxburst could also be integrated into the streaming_maxpacket
 parameter? Put another way, can we compute the multiplier and the burst
 value from a single maximum number of bytes per service interval, or do
 they have to be specified independently? If using more than one burst, the
 wMaxPacketSize value must be 1024 for HS. Only multiples of 1024 higher
 than 1024 can thus be achieved through different multiplier/burst settings.

Please see above.
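
For reference, a runtime computation along the lines Laurent suggests could
look roughly like this; a sketch only, with illustrative variable names,
not code from this series:

	/* Sketch: derive HS high-bandwidth ISOC settings from a single
	 * "bytes per service interval" value; names are illustrative. */
	unsigned int max_bytes = streaming_maxpacket;	/* e.g. 1..3072 */
	unsigned int mult, mps;

	if (max_bytes <= 1024) {
		mult = 0;		/* no additional transactions */
		mps = max_bytes;
	} else if (max_bytes <= 2048) {
		mult = 1;		/* one additional transaction */
		mps = 1024;		/* per-transaction size must be 1024 */
	} else {
		mult = 2;		/* two additional transactions */
		mps = 1024;
	}

	/* bits 12:11 of wMaxPacketSize carry the additional-transaction count */
	uvc_hs_streaming_ep.wMaxPacketSize = cpu_to_le16(mps | (mult << 11));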

  +
   /*
  -
 -
  * Function descriptors
*/
  @@ -84,7 +103,7 @@ static struct usb_interface_descriptor
 uvc_control_intf
  __initdata = { .iInterface  = 0,
   };
 
  -static struct usb_endpoint_descriptor uvc_control_ep __initdata = {
  +static struct usb_endpoint_descriptor uvc_fs_control_ep __initdata =
 {
  .bLength= USB_DT_ENDPOINT_SIZE,
  .bDescriptorType= USB_DT_ENDPOINT,
  .bEndpointAddress   = USB_DIR_IN,
  @@ -124,7 +143,7 @@ static struct usb_interface_descriptor
  uvc_streaming_intf_alt1 __initdata = { .iInterface  = 0,
   };
 
  -static struct usb_endpoint_descriptor uvc_streaming_ep = {
  +static struct usb_endpoint_descriptor uvc_fs_streaming_ep = {
  .bLength= USB_DT_ENDPOINT_SIZE,
  .bDescriptorType= USB_DT_ENDPOINT,
  .bEndpointAddress   = USB_DIR_IN,
  @@ -133,15 +152,72 @@ static struct usb_endpoint_descriptor
 uvc_streaming_ep
  = { .bInterval  = 1,
   };
 
  +static struct usb_endpoint_descriptor uvc_hs_streaming_ep = {
  +   .bLength= USB_DT_ENDPOINT_SIZE,
  +   .bDescriptorType