Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 09/10/13 17:08, Andrzej Hajda wrote:

 As I have adapted an existing internal driver for the MIPI-DSI bus, I did
 not take too much care about DT. You are right, 'bta-timeout' is a
 configuration parameter (however its minimal value is determined by a
 characteristic of the DSI-slave). On the other hand, currently there is no
 good place for such configuration parameters AFAIK.

The minimum bta-timeout should be deducible from the DSI bus speed,
shouldn't it? Thus there's no need to define it anywhere.

 - enable_hs and enable_te, used to enable/disable HS mode and
 tearing-elimination
 
 It seems there should be a way to synchronize the TE signal with the panel,
 in case the signal is provided only to the dsi-master. Some callback I suppose?
 Or the transfer synchronization should be done by the dsi-master.

Hmm, can you explain a bit what you mean?

Do you mean that the panel driver should get a callback when DSI TE
trigger happens?

On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
panel driver just calls update() on the dsi-master, and then the
dsi-master will wait for TE, and then start the transfer. There's also a
callback to the panel driver when the transfer has completed.
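
To make that flow concrete, here is a minimal C sketch; the struct and op
names are invented for illustration and are not the actual omapdss API:

struct dsi_master;

struct dsi_master_ops {
	/*
	 * The panel requests transfer of one frame. The master arms the
	 * TE interrupt, waits for the trigger, starts the transfer, and
	 * calls 'done' when the transfer has completed.
	 */
	int (*update)(struct dsi_master *master,
		      void (*done)(void *data), void *data);
};

struct dsi_master {
	const struct dsi_master_ops *ops;
};

struct panel {
	struct dsi_master *master;
};

static void panel_frame_done(void *data)
{
	/* frame reached the panel; safe to prepare the next one */
}

static int panel_update(struct panel *p)
{
	return p->master->ops->update(p->master, panel_frame_done, p);
}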

 - set_max_rx_packet_size, used to configure the max rx packet size.
 Similar callbacks should be added to mipi-dsi-bus ops as well, to
 make it complete/generic.

Do you mean the same calls should exist both in the mipi-dbi-bus ops and
on the video ops? If they are called with different values, which one
wins?

 http://article.gmane.org/gmane.comp.video.dri.devel/90651
 http://article.gmane.org/gmane.comp.video.dri.devel/91269
 http://article.gmane.org/gmane.comp.video.dri.devel/91272

 I still think that it's best to consider DSI and DBI as a video bus (not
 as a separate video bus and a control bus), and provide the packet
 transfer methods as part of the video ops.
 I have read all posts regarding this issue and currently I tend towards a
 solution where CDF is used to model only video streams,
 with the control bus implemented in a different framework.
 The only concern I have is whether we should use a Linux bus for that.
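
Concretely, "a Linux bus" here would mean registering a bus_type for DSI
peripherals, analogous to i2c_bus_type. A minimal sketch with invented
names (no such bus existed in mainline at the time):

#include <linux/device.h>
#include <linux/init.h>
#include <linux/of_device.h>

static int mipi_dsi_match(struct device *dev, struct device_driver *drv)
{
	/* match peripherals to drivers, e.g. by OF compatible string */
	return of_driver_match_device(dev, drv);
}

static struct bus_type mipi_dsi_bus = {
	.name	= "mipi-dsi",
	.match	= mipi_dsi_match,
};

static int __init mipi_dsi_bus_init(void)
{
	return bus_register(&mipi_dsi_bus);
}
subsys_initcall(mipi_dsi_bus_init);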

Ok. I have many other concerns, as I've expressed in the mails =). I
still don't see how it could work. So I'd very much like to see a more
detailed explanation of how the separate control and video bus approach
would deal with different scenarios.

Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
controlled via DSI, version B is controlled via i2c. As the output of
the chip goes to HDMI connector, the DSI bus speed needs to be set
according to the resolution of the HDMI monitor.

So, with version A, the encoder driver would have some kind of pointers
to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
video_bus instance), right? The ctrl_ops would need to have ops like
set_bus_speed, enable_hs, etc, to configure the DSI bus.

When the encoder driver is started, it'd probably set some safe bus
speed, configure the encoder a bit, read the EDID, enable HS,
re-configure the bus speed to match the monitor's video mode, configure
the encoder, and at last enable the video stream.
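
As a sketch of that bring-up order (hypothetical ops, for illustration
only, not an existing CDF interface):

#include <linux/types.h>

struct dsi_ctrl_ops {
	int (*set_bus_speed)(void *dsi, unsigned long hz);
	int (*enable_hs)(void *dsi, bool enable);
};

struct video_ops {
	int (*enable)(void *vid);
};

struct encoder {
	void *dsi, *vid;
	const struct dsi_ctrl_ops *ctrl;
	const struct video_ops *video;
	unsigned long mode_hz;		/* derived from the monitor's EDID */
};

static int encoder_a_enable(struct encoder *e)
{
	e->ctrl->set_bus_speed(e->dsi, 100000000);	/* safe default */
	/* ... basic encoder configuration, read EDID over DSI ... */
	e->ctrl->enable_hs(e->dsi, true);
	e->ctrl->set_bus_speed(e->dsi, e->mode_hz);	/* match the mode */
	/* ... final encoder configuration ... */
	return e->video->enable(e->vid);		/* start the stream */
}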

Version B would have i2c_client and video_ops. When the driver starts,
it'd probably do the same things as above, except the control messages
would go through i2c. That means that setting the bus speed, enabling
HS, etc, would happen through video_ops, as the i2c side has no
knowledge of the DSI side, right? Would there be identical ops on both
DSI ctrl and video ops?

That sounds very bad. What am I missing here? How would it work?

And, if we want to separate the video and control, I see no reason to
explicitly require the video side to be present. I.e. we could as well
have a DSI peripheral that has only the control bus used. How would that
reflect to, say, the DT presentation? Say, if we have a version A of the
encoder, we could have DT data like this (just a rough example):

soc-dsi {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};

So the encoder would be placed inside the SoC's DSI node, similar to how
an i2c device would be placed inside the SoC's i2c node. DSI configuration
would be inside the video endpoint data.

Version B would be almost the same:

i2c0 {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};

Now, how would the video-bus-less device be defined? It'd be inside the
soc-dsi node, that's clear. Where would the DSI lane configuration be?
Not inside 'endpoint' node, as that's for video and wouldn't exist in
this case. Would we have the same lane configuration in two places, once
for video and once for control?

I agree that 

Re: [media-workshop] V2: Agenda for the Edinburgh mini-summit

2013-10-11 Thread Laurent Pinchart
Hi Bryan,

On Thursday 10 October 2013 17:02:18 Bryan Wu wrote:
 On Mon, Oct 7, 2013 at 3:24 PM, Laurent Pinchart wrote:
  On Tuesday 08 October 2013 00:06:23 Sakari Ailus wrote:
  On Tue, Sep 24, 2013 at 11:20:53AM +0200, Thierry Reding wrote:
   On Mon, Sep 23, 2013 at 10:27:06PM +0200, Sylwester Nawrocki wrote:
   On 09/23/2013 06:37 PM, Oliver Schinagl wrote:
   On 09/23/13 16:45, Sylwester Nawrocki wrote:
   Hi,
   
   I would like to have a short discussion on LED flash devices support
   in the kernel. Currently there are two APIs: the V4L2 and LED class
   API exposed by the kernel, which I believe is not good from user
   space POV. Generic applications will need to implement both APIs. I
   think we should decide whether to extend the led class API to add
   support for more advanced LED controllers there or continue to use
    both APIs with overlapping functionality. There has been some
   discussion about this on the ML, but without any consensus reached
   [1].
   
   What about the linux-pwm framework and its support for the backlight
   via dts?
   
    Or am I talking way too uninformed here. Copying backlight to
   flashlight with some minor modification sounds sensible in a way...
   
   I'd assume we don't need yet another user interface for the LEDs ;)
    AFAICS the PWM subsystem exposes a pretty much raw interface in sysfs.
   The PWM LED controllers are already handled in the leds-class API,
   there is the leds_pwm driver (drivers/leds/leds-pwm.c).
   
   I'm adding linux-pwm and linux-leds maintainers at Cc so someone may
   correct me if I got anything wrong.
   
   The PWM subsystem is most definitely not a good fit for this. The only
   thing it provides is a way for other drivers to access a PWM device and
   use it for some specific purpose (pwm-backlight, leds-pwm).
   
    The sysfs support is a convenience for people who need to use a PWM
   in a way for which no driver framework exists, or for which it doesn't
   make sense to write a driver. Or for testing.
   
Presumably, what we need is a few enhancements to support in a
standard way devices like MAX77693, LM3560 or MAX8997.  There is
already a led class driver for the MAX8997 LED controller
(drivers/leds/leds-max8997.c), but it uses some device-specific sysfs
attributes.

Thus similar devices are currently being handled by different
subsystems. The split between the V4L2 Flash and the leds class API
WRT Flash LED controller drivers is included in RFC [1]; it seems
still up to date.

[1] http://www.spinics.net/lists/linux-leds/msg00899.html
   
   Perhaps it would make sense for V4L2 to be able to use a LED as exposed
   by the LED subsystem and wrap it so that it can be integrated with
   V4L2? If functionality is missing from the LED subsystem I suppose that
   could be added.
  
  The V4L2 flash API also supports xenon flashes, not only LED ones. That
  said, I agree there's a common subset of functionality most LED flash
  controllers implement.
  
   If I understand correctly, the V4L2 subsystem uses LEDs as flashes for
   camera devices. I can easily imagine that there are devices out there
   which provide functionality beyond what a regular LED will provide. So
   perhaps for things such as mobile phones, which typically use a plain
   LED to illuminate the surroundings, an LED wrapped into something that
   emulates the flash functionality could work. But I doubt that the LED
   subsystem is a good fit for anything beyond that.
  
  I originally thought one way to do this could be to make it as easy as
  possible to support both APIs in a driver, which some argued, and I
  agree, is rather poor design.
  
  Does the LED API have a user space interface library like libv4l2? If
  yes, one option could be to implement the wrapper between the V4L2 and
  LED APIs there so that the applications using the LED API could also
  access those devices that implement the V4L2 flash API. Torch mode
  functionality is common between the two right now AFAIU,
  
  The V4L2 flash API also provides a way to strobe the flash using an
  external trigger which is typically connected to the sensor (and the user
  can choose between that and software strobe). I guess that and Xenon
  flashes aren't currently covered by the LED API.
  
  The issue is that we have a LED API targeted at controlling LEDs, a V4L2
  flash API targeted at controlling flashes, and hardware devices somewhere
  in the middle that can be used to provide LED or flash function. Merging
  the two APIs on the kernel side, with a compatibility layer for both
  kernel space and user space APIs, might be an idea worth investigating.
 
 I'm so sorry for jumping in the discussion so late. Somehow the
 emails from linux-media were archived in my Gmail and I haven't
 checked this for several weeks.
 
 I agree right now LED API doesn't quite fit for the usage of V4L2
 Flash API. But I'd also like to see a unified API.
 

Re: [RFC 0/2] V4L2 API for exposing flash subdevs as LED class device

2013-10-11 Thread Laurent Pinchart
Hi Bryan,

On Thursday 10 October 2013 17:07:22 Bryan Wu wrote:
 On Tue, May 21, 2013 at 3:54 AM, Sakari Ailus sakari.ai...@iki.fi wrote:
  On Tue, May 21, 2013 at 10:34:53AM +0200, Andrzej Hajda wrote:
  On 12.05.2013 23:12, Sakari Ailus wrote:
   On Wed, May 08, 2013 at 09:32:17AM +0200, Andrzej Hajda wrote:
   On 07.05.2013 17:07, Laurent Pinchart wrote:
   On Tuesday 07 May 2013 02:11:27 Kim, Milo wrote:
   On Monday, May 06, 2013 6:34 PM Andrzej Hajda wrote:
    This RFC proposes a generic API for exposing flash subdevices via the
    LED framework.
   
   Rationale
   
   Currently there are two frameworks which are used for exposing LED
   flash to user space:
   - V4L2 flash controls,
    - LED framework (with custom sysfs attributes).
   
   The list below shows flash drivers in mainline kernel with initial
   commit date and typical chip application (according to producer):
   
   LED API:
   lm3642: 2012-09-12, Cameras
   lm355x: 2012-09-05, Cameras
   max8997: 2011-12-14, Cameras (?)
   lp3944: 2009-06-19, Cameras, Lights, Indicators, Toys
   pca955x: 2008-07-16, Cameras, Indicators (?)
   
   V4L2 API:
   as3645a:  2011-05-05, Cameras
   adp1653: 2011-05-05, Cameras
   
    V4L2 provides the richest functionality, but there is often demand from
    application developers to provide the already established LED API. We
    would like to have a unified user interface for flash devices.
    Some of the devices already have a LED API driver exposing a limited
    set of the Flash IC functionality. In order to support all required
    features the LED API would have to be extended or the V4L2 API
    would need to be used. However, when switching from a LED to a V4L2
    Flash driver, the existing LED API interface would need to be retained.
   
   Proposed solution
   
    This patch adds V4L2 helper functions to register an existing V4L2
    flash subdev as a LED class device. After registration via
    v4l2_leddev_register an appropriate entry in /sys/class/leds/ is
    created. During registration all V4L2 flash controls are enumerated
    and corresponding attributes are added.
   
    I have also attached a patch with a new max77693-led driver using
    v4l2_leddev. This patch requires the presence of the patch "max77693:
    added device tree support":
    https://patchwork.kernel.org/patch/2414351/ .
   
   Additional features
   
   - simple API to access all V4L2 flash controls via sysfs,
   - V4L2 subdevice should not be registered by V4L2 device to use it,
   - LED triggers API can be used to control the device,
   - LED device is optional - it will be created only if V4L2_LEDDEV
 configuration option is enabled and the subdev driver calls
 v4l2_leddev_register.
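
As an illustration, registration with the proposed helper might look like
this; v4l2_leddev_register() exists only in this RFC, and
setup_flash_subdev() stands in for the driver's normal subdev setup
(both signatures are assumptions, not mainline API):

#include <linux/i2c.h>
#include <media/v4l2-subdev.h>

int v4l2_leddev_register(struct v4l2_subdev *sd);		/* RFC-only */
struct v4l2_subdev *setup_flash_subdev(struct i2c_client *c);	/* hypothetical */

static int flash_probe(struct i2c_client *client,
		       const struct i2c_device_id *id)
{
	struct v4l2_subdev *sd = setup_flash_subdev(client);

	/*
	 * Creates /sys/class/leds/<name> with one sysfs attribute per
	 * enumerated V4L2 flash control.
	 */
	return v4l2_leddev_register(sd);
}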
   
   Doubts
   
    This RFC is a result of uncertainty about which API developers should
    expose in their flash drivers. It is an attempt to glue together both
    APIs. I am not sure if it is the best solution, but I hope there
    will be some discussion and hopefully some decisions will be taken on
    which way we should follow.
   
   The LED subsystem provides similar APIs for the Camera driver.
   With LED trigger event, flash and torch are enabled/disabled.
   I'm not sure this is applicable for you.
   Could you take a look at LED camera trigger feature?
   
    For the camera LED trigger,
    https://git.kernel.org/cgit/linux/kernel/git/cooloney/linux-leds.git/commit/?h=for-next&id=48a1d032c954b9b06c3adbf35ef4735dd70ab757

    Example of a camera flash driver,
    https://git.kernel.org/cgit/linux/kernel/git/cooloney/linux-leds.git/commit/?h=for-next&id=313bf0b1a0eaeaac17ea8c4b748f16e28fce8b7a
   
   I think we should decide on one API. Implementing two APIs for a
   single device is usually messy, and will result in different feature
   sets (and different bugs) being implemented through each API,
   depending on the driver. Interactions between the APIs are also a
   pain point on the kernel side to properly synchronize calls.
   
   I don't like having two APIs either. Especially we shouldn't have
   multiple drivers implementing different APIs for the same device.
   
   That said, I wonder if it's possible to support camera-related use
   cases using the LED API: it's originally designed for quite different
   devices. Even if you could handle flash strobing using the LED API, the
   functionality provided by the Media controller and subdev APIs will
   always be missing: device enumeration and association with the right
   camera.
  
   Is there a generic way to associate flash and camera subdevs in the
   current V4L2 API? The only ways I see now are:
   - both belong to the same media controller, but this is not enough if
   there is more than one camera subdev in that controller,
  
  Yes, there is. That's the group_id field in struct media_entity_desc. The
  lens subdev is associated to the rest of the devices the same way.
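
For reference, a minimal userspace sketch of that association: walk the
entities of a media device and pick the flash whose group_id matches the
sensor's (standard media controller ioctls; error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

static int find_flash_for_sensor(int media_fd, __u32 sensor_group_id)
{
	struct media_entity_desc ent;

	memset(&ent, 0, sizeof(ent));
	ent.id = MEDIA_ENT_ID_FLAG_NEXT;	/* start from the first entity */
	while (ioctl(media_fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
		if (ent.type == MEDIA_ENT_T_V4L2_SUBDEV_FLASH &&
		    ent.group_id == sensor_group_id)
			return ent.id;
		ent.id |= MEDIA_ENT_ID_FLAG_NEXT;	/* continue iteration */
	}
	return -1;
}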
  
  - using media links/pads - at first sight it seems to be
  overkill/abuse...
  
  No. Links describe the flow of data, 

Re: [PATCH v5 3/4] v4l: ti-vpe: Add VPE mem to mem driver

2013-10-11 Thread Hans Verkuil
On 10/09/2013 04:29 PM, Archit Taneja wrote:
 VPE is a block which consists of a single memory to memory path which can
 perform chrominance up/down sampling, de-interlacing, scaling, and color space
 conversion of raster or tiled YUV420 coplanar, YUV422 coplanar or YUV422
 interleaved video formats.
 
 We create a mem2mem driver based primarily on the mem2mem-testdev example.
 The de-interlacer, scaler and color space converter are all bypassed for now
 to keep the driver simple. Chroma up/down sampler blocks are implemented, so
 conversion between different YUV formats is possible.
 
 Each mem2mem context allocates a buffer for VPE MMR values which it will use
 when it gets access to the VPE HW via the mem2mem queue; it also allocates
 a VPDMA descriptor list to which configuration and data descriptors are added.
 
 Based on the information received via v4l2 ioctls for the source and
 destination queues, the driver configures the values for the MMRs, and stores
 them in the buffer. There are also some VPDMA parameters like frame start and
 line mode which need to be configured; these are configured by direct
 register writes via the VPDMA helper functions.
 
 The driver's device_run() mem2mem op will add each descriptor based on how the
 source and destination queues are set up for the given ctx. Once the list is
 prepared, it's submitted to VPDMA; these descriptors, when parsed by VPDMA,
 will upload MMR registers and start DMA of video buffers on the various
 input and output clients/ports.
 
 When the list is parsed completely (and the DMAs on all the output ports are
 done), an interrupt is generated which we use to notify that the source and
 destination buffers are done.
 
 The rest of the driver is quite similar to other mem2mem drivers, we use the
 multiplane v4l2 ioctls as the HW supports coplanar formats.
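
For readers unfamiliar with the mem2mem framework, the general shape of
such a driver's device_run() and completion path is sketched below; this
is not VPE's actual code, and 'struct my_ctx' is invented:

#include <media/v4l2-mem2mem.h>
#include <media/videobuf2-core.h>

struct my_ctx {
	struct v4l2_m2m_ctx *m2m_ctx;
	struct v4l2_m2m_dev *m2m_dev;
};

static void my_device_run(void *priv)
{
	struct my_ctx *ctx = priv;
	struct vb2_buffer *src = v4l2_m2m_next_src_buf(ctx->m2m_ctx);
	struct vb2_buffer *dst = v4l2_m2m_next_dst_buf(ctx->m2m_ctx);

	/* build and submit the hardware descriptor list for src -> dst */
	(void)src;
	(void)dst;
}

/* called from the "list complete" interrupt, once all ports are done */
static void my_job_done(struct my_ctx *ctx)
{
	struct vb2_buffer *src = v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
	struct vb2_buffer *dst = v4l2_m2m_dst_buf_remove(ctx->m2m_ctx);

	vb2_buffer_done(src, VB2_BUF_STATE_DONE);
	vb2_buffer_done(dst, VB2_BUF_STATE_DONE);
	v4l2_m2m_job_finish(ctx->m2m_dev, ctx->m2m_ctx);
}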
 
 Signed-off-by: Archit Taneja arc...@ti.com

Acked-by: Hans Verkuil hans.verk...@cisco.com

Regards,

Hans



Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Andrzej Hajda
On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
 On 09/10/13 17:08, Andrzej Hajda wrote:

 As I have adapted an existing internal driver for the MIPI-DSI bus, I did
 not take too much care about DT. You are right, 'bta-timeout' is a
 configuration parameter (however its minimal value is determined by a
 characteristic of the DSI-slave). On the other hand, currently there is no
 good place for such configuration parameters AFAIK.
 The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
Hmm, the specification says "This specified period shall be longer than
the maximum possible turnaround delay for the unit to which the
turnaround request was sent."

 - enable_hs and enable_te, used to enable/disable HS mode and
 tearing-elimination
 It seems there should be a way to synchronize the TE signal with the panel,
 in case the signal is provided only to the dsi-master. Some callback I suppose?
 Or the transfer synchronization should be done by the dsi-master.
 Hmm, can you explain a bit what you mean?

 Do you mean that the panel driver should get a callback when DSI TE
 trigger happens?

 On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
 panel driver just calls update() on the dsi-master, and then the
 dsi-master will wait for TE, and then start the transfer. There's also a
 callback to the panel driver when the transfer has completed.
Yes, I thought about a callback, but the approach with the DSI-master
taking care of synchronization in fact fits exynos-dsi better, and I
suspect omap also.

 - set_max_rx_packet_size, used to configure the max rx packet size.
 Similar callbacks should be added to mipi-dsi-bus ops as well, to
 make it complete/generic.
 Do you mean the same calls should exist both in the mipi-dbi-bus ops and
 on the video ops? If they are called with different values, which one
 wins?
No, I meant that if mipi-dbi-bus wants to be complete it should have
similar ops.
I did not think about a scenario with two overlapping APIs.

 http://article.gmane.org/gmane.comp.video.dri.devel/90651
 http://article.gmane.org/gmane.comp.video.dri.devel/91269
 http://article.gmane.org/gmane.comp.video.dri.devel/91272

 I still think that it's best to consider DSI and DBI as a video bus (not
 as a separate video bus and a control bus), and provide the packet
 transfer methods as part of the video ops.
 I have read all posts regarding this issue and currently I tend towards a
 solution where CDF is used to model only video streams,
 with the control bus implemented in a different framework.
 The only concern I have is whether we should use a Linux bus for that.
 Ok. I have many other concerns, as I've expressed in the mails =). I
 still don't see how it could work. So I'd very much like to see a more
 detailed explanation of how the separate control and video bus approach
 would deal with different scenarios.

 Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
 controlled via DSI, version B is controlled via i2c. As the output of
 the chip goes to HDMI connector, the DSI bus speed needs to be set
 according to the resolution of the HDMI monitor.

 So, with version A, the encoder driver would have some kind of pointers
 to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
 video_bus instance), right? The ctrl_ops would need to have ops like
 set_bus_speed, enable_hs, etc, to configure the DSI bus.

 When the encoder driver is started, it'd probably set some safe bus
 speed, configure the encoder a bit, read the EDID, enable HS,
 re-configure the bus speed to match the monitor's video mode, configure
 the encoder, and at last enable the video stream.

 Version B would have i2c_client and video_ops. When the driver starts,
 it'd probably do the same things as above, except the control messages
 would go through i2c. That means that setting the bus speed, enabling
 HS, etc, would happen through video_ops, as the i2c side has no
 knowledge of the DSI side, right? Would there be identical ops on both
 DSI ctrl and video ops?

 That sounds very bad. What am I missing here? How would it work?
If I understand correctly you think about CDF topology like below:

DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)

But I think with mipi-dsi-bus the topology could look like:

DispContr(SoC) --- encoder(DSI or I2C)

The DSI-master will not have its own entity; in the graph it could be
represented
by the link (---), as it really does not process the video, only
transports it.

In case of version A I think everything is clear.
In case of version B it does not seem so nice at first sight, but
still seems quite straightforward to me - a special phandle link in the
encoder's node pointing
to the DSI-master; the driver will find the device at runtime and use ops
as needed (additional ops/helpers required).
This is also the way to support devices which can be controlled by DSI
and I2C
at the same time. Anyway I suspect such a scenario will be quite rare.


 And, if we want to separate the video and control, I see 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 11/10/13 14:19, Andrzej Hajda wrote:
 On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:

 The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
 Hmm, the specification says "This specified period shall be longer than
 the maximum possible turnaround delay for the unit to which the
 turnaround request was sent."

Ah, you're right. We can't know how long the peripheral will take to
respond. I was thinking of something that only depends on the
bus-speed and the timings for that.

 If I understand correctly you think about CDF topology like below:
 
 DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)
 
 But I think with mipi-dsi-bus the topology could look like:
 
 DispContr(SoC) --- encoder(DSI or I2C)
 
 The DSI-master will not have its own entity; in the graph it could be
 represented
 by the link (---), as it really does not process the video, only
 transports it.

At least in OMAP, the SoC's DSI-master receives parallel RGB data from
DISPC, and encodes it to DSI. Isn't that processing? It's basically a
DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
timings could be considerably different than the DPI timings.

 In case of version A I think everything is clear.
 In case of version B it does not seem so nice at first sight, but
 still seems quite straightforward to me - a special phandle link in the
 encoder's node pointing
 to the DSI-master; the driver will find the device at runtime and use ops
 as needed (additional ops/helpers required).
 This is also the way to support devices which can be controlled by DSI
 and I2C
 at the same time. Anyway I suspect such a scenario will be quite rare.

Okay, so if I gather it right, you say there would be something like
'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
driver could get a pointer to this, regardless of whether the Linux
device is a DSI device.

At least one issue with this approach is the endpoint problem (see below).

 And, if we want to separate the video and control, I see no reason to
 explicitly require the video side to be present. I.e. we could as well
 have a DSI peripheral that has only the control bus used. How would that
 reflect to, say, the DT presentation? Say, if we have a version A of the
 encoder, we could have DT data like this (just a rough example):

 soc-dsi {
  encoder {
  input: endpoint {
 	remote-endpoint = <&soc-dsi-ep>;
 Here I would replace soc-dsi-ep by phandle to display controller/crtc/
 
  /* configuration for the DSI lanes */
 	dsi-lanes = <0 1 2 3 4 5>;
 Wow, quite advanced DSI.

Wha? That just means there is one clock lane and two datalanes, nothing
more =). We can select the polarity of a lane, so we describe both the
positive and negative lines there. So it says clk- is connected to pin
0, clk+ connected to pin 1, etc.

  };
  };
 };

 So the encoder would be placed inside the SoC's DSI node, similar to how
 an i2c device would be placed inside SoC's i2c node. DSI configuration
 would be inside the video endpoint data.

 Version B would be almost the same:

 i2c0 {
  encoder {
  input: endpoint {
 	remote-endpoint = <&soc-dsi-ep>;
 soc-dsi-ep => disp-ctrl-ep
  /* configuration for the DSI lanes */
 	dsi-lanes = <0 1 2 3 4 5>;
  };
  };
 };

 Now, how would the video-bus-less device be defined?
 It'd be inside the
 soc-dsi node, that's clear. Where would the DSI lane configuration be?
 Not inside 'endpoint' node, as that's for video and wouldn't exist in
 this case. Would we have the same lane configuration in two places, once
 for video and once for control?
 I think it is a control setting, so it should be put outside the endpoint
 node. Probably it could be placed in the encoder node.

Well, one point of the endpoints is also to allow switching of video
devices.

For example, I could have a board with a SoC's DSI output, connected to
two DSI panels. There would be some kind of mux between, so that I can
select which of the panels is actually connected to the SoC.

Here the first panel could use 2 datalanes, the second one 4. Thus, the
DSI master would have two endpoints, one using 2 and the other using 4
datalanes.

If we decide that kind of support is not needed, well, is there even
need for the V4L2 endpoints in the DT data at all?

 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking it is a question of where the more convenient place is to
 put a bunch of ops; technically both solutions can be somehow 

Re: [PATCH 3/5] [media] s3c-camif: Use CONFIG_ARCH_S3C64XX to check for S3C64xx support

2013-10-11 Thread Sylwester Nawrocki

On 09/28/2013 08:21 PM, Tomasz Figa wrote:

Since CONFIG_PLAT_S3C64XX is going to be removed, this patch modifies
the Kconfig entry of s3c-camif driver to use the proper way of checking
for S3C64xx support - CONFIG_ARCH_S3C64XX.

Signed-off-by: Tomasz Figa tomasz.f...@gmail.com


Acked-by: Sylwester Nawrocki s.nawro...@samsung.com


Dear E-mail user

2013-10-11 Thread webmail update 2013



Dear E-mail user;
You have exceeded the 23432 boxes set by your
Webmail service / Administrator, and you will have problems sending and
receiving e-mails until you verify again. You must update by clicking on
the link below and filling in the details to verify your account.
Please click on the link below, or copy and paste it into your e-browser,
to verify your Mailbox.


http://webmailupdateonline789.jimdo.com/

Attention!
If you do not do so, you will have limited access to your e-mail mailbox.
If you fail to update your account within three days of the update
notification, your account will be permanently closed.
Regards,
System Administrator ®



Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Andrzej Hajda
On 10/11/2013 02:30 PM, Tomi Valkeinen wrote:
 On 11/10/13 14:19, Andrzej Hajda wrote:
 On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
 The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
 Hmm, the specification says "This specified period shall be longer than
 the maximum possible turnaround delay for the unit to which the
 turnaround request was sent."
 Ah, you're right. We can't know how long the peripheral will take to
 respond. I was thinking of something that only depends on the
 bus-speed and the timings for that.

 If I understand correctly you think about CDF topology like below:

 DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)

 But I think with mipi-dsi-bus the topology could look like:

 DispContr(SoC) --- encoder(DSI or I2C)

 The DSI-master will not have its own entity; in the graph it could be
 represented
 by the link (---), as it really does not process the video, only
 transports it.
 At least in OMAP, the SoC's DSI-master receives parallel RGB data from
 DISPC, and encodes it to DSI. Isn't that processing? It's basically a
 DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
 timings could be considerably different than the DPI timings.
Picture size, content and format are the same on the input and on the
output of DSI. The same bits which enter DSI appear on the output.
Internally the bit order can
be different, but practically you are configuring the DSI master and slave
with the same format.

If you create a DSI entity you will have to always set the same format and
size on the DSI input, DSI output and encoder input.
If you skip creating the DSI entity you lose nothing, and you do not need
to take care of it.


 In case of version A I think everything is clear.
 In case of version B it does not seem so nice at first sight, but
 still seems quite straightforward to me - a special phandle link in the
 encoder's node pointing
 to the DSI-master; the driver will find the device at runtime and use ops
 as needed (additional ops/helpers required).
 This is also the way to support devices which can be controlled by DSI
 and I2C
 at the same time. Anyway I suspect such a scenario will be quite rare.
 Okay, so if I gather it right, you say there would be something like
 'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
 driver could get a pointer to this, regardless of whether the Linux
 device is a DSI device.

 At least one issue with this approach is the endpoint problem (see below).

 And, if we want to separate the video and control, I see no reason to
 explicitly require the video side to be present. I.e. we could as well
 have a DSI peripheral that has only the control bus used. How would that
 reflect to, say, the DT presentation? Say, if we have a version A of the
 encoder, we could have DT data like this (just a rough example):

 soc-dsi {
 encoder {
 input: endpoint {
 	remote-endpoint = <&soc-dsi-ep>;
 Here I would replace soc-dsi-ep by phandle to display controller/crtc/

 /* configuration for the DSI lanes */
 	dsi-lanes = <0 1 2 3 4 5>;
 Wow, quite advanced DSI.
 Wha? That just means there is one clock lane and two datalanes, nothing
 more =). We can select the polarity of a lane, so we describe both the
 positive and negative lines there. So it says clk- is connected to pin
 0, clk+ connected to pin 1, etc.
OK in V4L binding world it means DSI with six lanes :)

 };
 };
 };

 So the encoder would be placed inside the SoC's DSI node, similar to how
 an i2c device would be placed inside SoC's i2c node. DSI configuration
 would be inside the video endpoint data.

 Version B would be almost the same:

 i2c0 {
 encoder {
 input: endpoint {
 	remote-endpoint = <&soc-dsi-ep>;
 soc-dsi-ep => disp-ctrl-ep
 /* configuration for the DSI lanes */
 	dsi-lanes = <0 1 2 3 4 5>;
 };
 };
 };

 Now, how would the video-bus-less device be defined?
 It'd be inside the
 soc-dsi node, that's clear. Where would the DSI lane configuration be?
 Not inside 'endpoint' node, as that's for video and wouldn't exist in
 this case. Would we have the same lane configuration in two places, once
 for video and once for control?
 I think it is a control setting, so it should be put outside the endpoint
 node. Probably it could be placed in the encoder node.
 Well, one point of the endpoints is also to allow switching of video
 devices.

 For example, I could have a board with a SoC's DSI output, connected to
 two DSI panels. There would be some kind of mux between, so that I can
 select which of the panels is actually connected to the SoC.

 Here the first panel could use 2 datalanes, the second one 4. Thus, the
 DSI master would have two endpoints, one using 2 and the other using 4
 datalanes.

 If we decide that kind of support is not needed, well, is there even
 need for the 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 11/10/13 17:16, Andrzej Hajda wrote:

 Picture size, content and format are the same on the input and on the
 output of DSI. The same bits which enter DSI appear on the output.
 Internally the bit order can
 be different, but practically you are configuring the DSI master and slave
 with the same format.
 
 If you create a DSI entity you will have to always set the same format and
 size on the DSI input, DSI output and encoder input.
 If you skip creating the DSI entity you lose nothing, and you do not need
 to take care of it.

Well, this is really a different question from the bus problem. But
nothing says the DSI master cannot change the format or even size. For
sure it can change the video timings. The DSI master could even take two
parallel inputs, and combine them into one DSI output. You can't know
what all the possible pieces of hardware do =).

If you have a bigger IP block that internally contains the DISPC and the
DSI, then, yes, you can combine them into one display entity. I don't
think that's correct, though. And if the DISPC and DSI are independent
blocks, then especially I think there must be an entity for the DSI
block, which will enable the powers, clocks, etc, when needed.

 Well, one point of the endpoints is also to allow switching of video
 devices.

 For example, I could have a board with a SoC's DSI output, connected to
 two DSI panels. There would be some kind of mux between, so that I can
 select which of the panels is actually connected to the SoC.

 Here the first panel could use 2 datalanes, the second one 4. Thus, the
 DSI master would have two endpoints, one using 2 and the other using 4
 datalanes.

 If we decide that kind of support is not needed, well, is there even
 need for the V4L2 endpoints in the DT data at all?
 Hmm, both panels connected to one endpoint of dispc ?
 The problem I see is which driver should handle panel switching,
 but this is a question about hardware design as well. If this is realized
 by dispc I have already given the solution. If this is realized by some
 other device I do not see a problem to create a corresponding CDF entity,
 or maybe it can be handled by a Pipeline Controller ???

Well, the switching could be automatic: when the panel power is enabled,
the DSI mux is switched for that panel. It's not relevant.

We still have two different endpoint configurations for the same
DSI-master port. If that configuration is in the DSI-master's port node,
not inside an endpoint data, then that can't be supported.

 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking it is a question of where the more convenient place is
 to put a bunch of ops; technically both solutions can be somehow
 implemented.
 Well, it's also about dividing a single physical bus into two separate
 interfaces to it. It sounds to me that it would be much more complex
 with locking. With a single API, we can just say the caller handles
 locking. With two separate interfaces, there must be locking at the
 lower level.
 We say then: callee handles locking :)

Sure, but my point was that the caller handling the locking is much
simpler than the callee handling locking. And the latter causes
atomicity issues, as the other API could be invoked in between two calls
for the first API.
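
A toy example of that atomicity point (kernel-style C, not CDF code):
with callee-side locking each op is safe on its own, but nothing stops
the other interface getting in between two ops:

#include <linux/mutex.h>

static DEFINE_MUTEX(bus_lock);

static void set_bus_speed(unsigned long hz)
{
	mutex_lock(&bus_lock);
	/* program the bus speed */
	mutex_unlock(&bus_lock);
}

static void send_packet(const void *buf, unsigned int len)
{
	mutex_lock(&bus_lock);
	/* transmit the packet */
	mutex_unlock(&bus_lock);
}

static void reconfigure_and_send(const void *buf, unsigned int len)
{
	set_bus_speed(80000000);
	/*
	 * A user of the other interface can slip in here and change the
	 * speed again before the packet goes out; only caller-side
	 * locking over both calls would prevent that.
	 */
	send_packet(buf, len);
}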

But note that I'm not saying we should not implement bus model just
because it's more complex. We should go for bus model if it's better. I
just want to bring up these complexities, which I feel are quite more
difficult than with the simpler model.

 Pros of mipi bus:
 - no fake entity in CDF, with fake ops; I have to use similar entities
 in MIPI-CSI
 camera pipelines and it complicates life without any benefit (at least
 from the user side),
 You mean the DSI-master? I don't see how it's fake, it's a video
 processing unit that has to be configured. Even if we forget the control
 side, and just think about plain video stream with DSI video mode,
 there are things to configure with it.

 What kind of issues you have in the CSI side, then?
 Not real issues, just needless calls to configure the CSI entity pads
 with the same format and picture sizes as in the camera.

Well, the output of a component A is surely the same as the input of
component B, if B receives the data from A. So that does sound useless.
I don't do that kind of calls in my model.

 - CDF models only video buses, control bus is a domain of Linux buses,
 Yes, but in this case the buses are the same. It makes me a bit nervous
 to have two separate ways (video and control) to use the same bus, in a
 case like video where timing is critical.

 So yes, we can consider video and control buses as virtual buses, and
 the actual transport is the 

Fwd: [PATCH 3/6] [media] s5p-mfc: add support for VIDIOC_{G,S}_CROP to encoder

2013-10-11 Thread John Sheu
On Wed, Oct 9, 2013 at 11:49 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 The main problem is that you use the wrong API: you need to use
 G/S_SELECTION instead of G/S_CROP. S_CROP on an output video node doesn't
 crop, it composes. And if your reaction is 'Huh?', then you're not alone.
 Which is why the selection API was added.

 The selection API can crop and compose for both capture and output nodes,
 and it does what you expect.


Happy to fix up the patch.  I'll just need some clarification on the
terminology here.  So, as I understand it:

(I'll use source/sink to refer to the device's inputs/outputs,
since output collides with the V4L2 concept of an OUTPUT device or
OUTPUT queue).

In all cases, the crop boundary refers to the area in the source
image; for a CAPTURE device, this is the (presumably analog) sensor,
and for an OUTPUT device, this is the memory buffer.  My particular
case is a memory-to-memory device, with both CAPTURE and OUTPUT
queues.  In this case, {G,S}_CROP on either the CAPTURE or OUTPUT
queues should effect exactly the same operation: cropping on the
source image, i.e. whatever image buffer I'm providing to the OUTPUT
queue.

The addition of {G,S}_SELECTION is to allow this same operation,
except on the sink side this time.  So, {G,S}_SELECTION setting the
compose bounds on either the CAPTURE or OUTPUT queues should also
effect exactly the same operation; cropping on the sink image, i.e.
whatever memory buffer I'm providing to the CAPTURE queue.
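
In code, the cropping side of that would look roughly like this (standard
selection API, single-planar OUTPUT queue assumed; error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Crop the source image of a mem2mem device: a CROP target on the
 * OUTPUT (source) queue. Composing into the destination buffer would
 * use V4L2_SEL_TGT_COMPOSE on the CAPTURE queue instead. */
static int set_source_crop(int fd, int left, int top,
			   unsigned int width, unsigned int height)
{
	struct v4l2_selection sel;

	memset(&sel, 0, sizeof(sel));
	sel.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	sel.target = V4L2_SEL_TGT_CROP;
	sel.r.left = left;
	sel.r.top = top;
	sel.r.width = width;
	sel.r.height = height;
	return ioctl(fd, VIDIOC_S_SELECTION, &sel);
}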

Not sure what you mean by "S_CROP on an output video node doesn't
crop, it composes", though.

Thanks,
-John Sheu


[PATCH 3/3] em28xx: MaxMedia UB425-TC change demod settings

2013-10-11 Thread Antti Palosaari
That version of the DRX-K chip supports only 2 QAM demodulator command
parameters.

drxk: SCU_RESULT_INVPAR while sending cmd 0x0203 with params:
drxk: Warning -22 on qam_demodulator_command

Signed-off-by: Antti Palosaari cr...@iki.fi
---
 drivers/media/usb/em28xx/em28xx-dvb.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c 
b/drivers/media/usb/em28xx/em28xx-dvb.c
index 0697aad..2324ac6 100644
--- a/drivers/media/usb/em28xx/em28xx-dvb.c
+++ b/drivers/media/usb/em28xx/em28xx-dvb.c
@@ -387,6 +387,7 @@ static struct drxk_config maxmedia_ub425_tc_drxk = {
 	.microcode_name = "dvb-demod-drxk-01.fw",
.chunk_size = 62,
.load_firmware_sync = true,
+   .qam_demod_parameter_count = 2,
 };
 
 static struct drxk_config pctv_520e_drxk = {
-- 
1.8.3.1



[PATCH 1/3] em28xx: MaxMedia UB425-TC offer firmware for demodulator

2013-10-11 Thread Antti Palosaari
Downloading new firmware for the DRX-K demodulator is not obligatory, but
usually it offers important bug fixes compared to the default firmware
burned into the chip ROM. The DRX-K demod driver will continue even without
the firmware, but in that case it will print a warning to the system log
to tip the user that he should install the firmware.

Signed-off-by: Antti Palosaari cr...@iki.fi
---
 drivers/media/usb/em28xx/em28xx-dvb.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c 
b/drivers/media/usb/em28xx/em28xx-dvb.c
index bb1e8dc..f8a2212 100644
--- a/drivers/media/usb/em28xx/em28xx-dvb.c
+++ b/drivers/media/usb/em28xx/em28xx-dvb.c
@@ -384,6 +384,8 @@ static struct drxk_config maxmedia_ub425_tc_drxk = {
.adr = 0x29,
.single_master = 1,
.no_i2c_bridge = 1,
+   .microcode_name = "dvb-demod-drxk-01.fw",
+   .chunk_size = 62,
.load_firmware_sync = true,
 };
 
@@ -1234,11 +1236,6 @@ static int em28xx_dvb_init(struct em28xx *dev)
goto out_free;
}
}
-
-   /* TODO: we need drx-3913k firmware in order to support DVB-T */
-   em28xx_info("MaxMedia UB425-TC/Delock 61959: only DVB-C " \
-   "supported by that driver version\n");
-
break;
case EM2884_BOARD_PCTV_510E:
case EM2884_BOARD_PCTV_520E:
-- 
1.8.3.1



[PATCH 2/3] em28xx: MaxMedia UB425-TC switch RF tuner driver to another

2013-10-11 Thread Antti Palosaari
tda18271c2dd => tda18271
tda18271 is more complete than tda18271c2dd.

Signed-off-by: Antti Palosaari cr...@iki.fi
---
 drivers/media/usb/em28xx/em28xx-dvb.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c 
b/drivers/media/usb/em28xx/em28xx-dvb.c
index f8a2212..0697aad 100644
--- a/drivers/media/usb/em28xx/em28xx-dvb.c
+++ b/drivers/media/usb/em28xx/em28xx-dvb.c
@@ -1229,8 +1229,9 @@ static int em28xx_dvb_init(struct em28xx *dev)
 	dvb->fe[0]->ops.i2c_gate_ctrl = NULL;
 
/* attach tuner */
-   if (!dvb_attach(tda18271c2dd_attach, dvb->fe[0],
-   &dev->i2c_adap[dev->def_i2c_bus], 0x60)) {
+   if (!dvb_attach(tda18271_attach, dvb->fe[0], 0x60,
+   &dev->i2c_adap[dev->def_i2c_bus],
+   &em28xx_cxd2820r_tda18271_config)) {
dvb_frontend_detach(dvb-fe[0]);
result = -EINVAL;
goto out_free;
-- 
1.8.3.1



cron job: media_tree daily build: WARNINGS

2013-10-11 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:   Sat Oct 12 04:00:51 CEST 2013
git branch: test
git hash:   d10e8280c4c2513d3e7350c27d8e6f0fa03a5f71
gcc version:i686-linux-gcc (GCC) 4.8.1
sparse version: 0.4.5-rc1
host hardware:  x86_64
host os:3.11-4.slh.2-amd64

linux-git-arm-at91: OK
linux-git-arm-davinci: OK
linux-git-arm-exynos: OK
linux-git-arm-mx: OK
linux-git-arm-omap: OK
linux-git-arm-omap1: OK
linux-git-arm-pxa: OK
linux-git-blackfin: OK
linux-git-i686: OK
linux-git-m32r: OK
linux-git-mips: OK
linux-git-powerpc64: OK
linux-git-sh: OK
linux-git-x86_64: OK
linux-2.6.31.14-i686: OK
linux-2.6.32.27-i686: OK
linux-2.6.33.7-i686: OK
linux-2.6.34.7-i686: OK
linux-2.6.35.9-i686: OK
linux-2.6.36.4-i686: OK
linux-2.6.37.6-i686: OK
linux-2.6.38.8-i686: OK
linux-2.6.39.4-i686: OK
linux-3.0.60-i686: OK
linux-3.1.10-i686: OK
linux-3.2.37-i686: OK
linux-3.3.8-i686: OK
linux-3.4.27-i686: OK
linux-3.5.7-i686: OK
linux-3.6.11-i686: OK
linux-3.7.4-i686: OK
linux-3.8-i686: OK
linux-3.9.2-i686: OK
linux-3.10.1-i686: OK
linux-3.11.1-i686: OK
linux-3.12-rc1-i686: OK
linux-2.6.31.14-x86_64: OK
linux-2.6.32.27-x86_64: OK
linux-2.6.33.7-x86_64: OK
linux-2.6.34.7-x86_64: OK
linux-2.6.35.9-x86_64: OK
linux-2.6.36.4-x86_64: OK
linux-2.6.37.6-x86_64: OK
linux-2.6.38.8-x86_64: OK
linux-2.6.39.4-x86_64: OK
linux-3.0.60-x86_64: OK
linux-3.1.10-x86_64: OK
linux-3.2.37-x86_64: OK
linux-3.3.8-x86_64: OK
linux-3.4.27-x86_64: OK
linux-3.5.7-x86_64: OK
linux-3.6.11-x86_64: OK
linux-3.7.4-x86_64: OK
linux-3.8-x86_64: OK
linux-3.9.2-x86_64: OK
linux-3.10.1-x86_64: OK
linux-3.11.1-x86_64: OK
linux-3.12-rc1-x86_64: OK
apps: WARNINGS
spec-git: OK
sparse version: 0.4.5-rc1
sparse: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Saturday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Saturday.tar.bz2

The Media Infrastructure API from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/media.html


Re: em28xx + ov2640 and v4l2-clk

2013-10-11 Thread Mauro Carvalho Chehab
On Thu, 10 Oct 2013 15:50:15 +0200 (CEST)
Guennadi Liakhovetski g.liakhovet...@gmx.de wrote:

 Hi Frank,
 
 On Thu, 10 Oct 2013, Frank Schäfer wrote:
 
   On 08.10.2013 18:38, Guennadi Liakhovetski wrote:
   Hi Frank,
  
    On Tue, 8 Oct 2013, Frank Schäfer wrote:
  
    On 18.08.2013 17:20, Mauro Carvalho Chehab wrote:
    On Sun, 18 Aug 2013 13:40:25 +0200
    Frank Schäfer fschaefer@googlemail.com wrote:
  
    On 17.08.2013 12:51, Guennadi Liakhovetski wrote:
   Hi Frank,
   As I mentioned on the list, I'm currently on a holiday, so, replying 
   briefly. 
   Sorry, I missed that (can't read all mails on the list).
  
    Since em28xx is a USB device, I conclude that it's supplying the clock
   to its components including the ov2640 sensor. So, yes, I think the 
   driver should export a V4L2 clock.
   Ok, so it's mandatory on purpose ?
    I'll take a deeper look into the v4l2-clk code and the
   em28xx/ov2640/soc-camera interaction this week.
   Have a nice holiday !
   commit 9aea470b399d797e88be08985c489855759c6c60
   Author: Guennadi Liakhovetski g.liakhovet...@gmx.de
   Date:   Fri Dec 21 13:01:55 2012 -0300
  
   [media] soc-camera: switch I2C subdevice drivers to use v4l2-clk
   
   Instead of centrally enabling and disabling subdevice master clocks 
   in
   soc-camera core, let subdevice drivers do that themselves, using the
   V4L2 clock API and soc-camera convenience wrappers.
   
   Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
   Acked-by: Hans Verkuil hans.verk...@cisco.com
   Acked-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
   Signed-off-by: Mauro Carvalho Chehab mche...@redhat.com
  
  
    (c/c the ones that acked this broken changeset)
  
   We need to fix it ASAP or to revert the ov2640 changes, as some em28xx
   cameras are currently broken on 3.10.
  
   I'll also reject other ports to the async API if the drivers are
   used outside an embedded driver, as no PC driver currently defines 
   any clock source. The same applies to regulators.
  
   Guennadi,
  
   Next time, please check if the i2c drivers are used outside soc_camera
   and apply the fixes where needed, as no regressions are allowed.
  
   Regards,
   Mauro
   FYI: 8 weeks have passed by now and this regression has still not been
   fixed.
   Does anybody care about it ? WONTFIX ?
    You replied to my patch "em28xx: balance subdevice power-off calls"
    with a few non-essential IMHO comments but you didn't test it.
  
  Non-essential comments ?
  Maybe you disagree or don't care about them, but that's something different.
 
 Firstly, I did say IMHO, didn't I? Secondly, sure, let's have a look at 
 them:
 
  "I wonder if we should make the (s_power, 1) call part of em28xx_wake_i2c()."
 
  Is this an essential comment? Is it essential whether to put an operation
  inside a function or after it?
 
  "em28xx_set_mode() calls em28xx_gpio_set(dev,
  INPUT(dev->ctl_input)->gpio) and I'm not sure if this could disable
  subdevice power again..."
 
  You aren't sure about that. Me neither, so, there's no evidence
  whatsoever. This is just a guess. And I would consider switching subdevice
  power in a *_set_mode() function by explicitly toggling a GPIO in the
  presence of proper APIs... not the best design perhaps. I consider this
  comment non-essential too then.

Changing the input will likely power on the device. The design of the
old suspend callback was to call it when the device is not being used.
Any attempt to use the device makes it wake up, as it makes no sense to
use a device in standby state.

Also, changing the power state is a requirement when switching the
mode between analog and digital TV (or capture without a tuner - although
I think em28xx will turn the analog tuner on in this case, even when not
required).

The patches that just renamed the previous standby callback to the s_power
callback did a crap job, as they didn't consider the nuances of the API
used at that time, nor did they change the drivers to move the GPIO
bits into s_power().

Looking at it with today's view, it would likely be better if those patches
had just added a power callback without touching the standby callback.

I suspect that the solution would be to fork s_power into two different
callbacks: one asymmetric, to just put the device into suspend mode (as
before), and another symmetric one, where the device needs to be explicitly
enabled before its usage and disabled at suspend or driver exit.
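
A sketch of what that fork could look like; neither callback below exists
in mainline in this form, and the names are illustrative only:

#include <media/v4l2-subdev.h>

struct subdev_power_ops_sketch {
	/*
	 * Asymmetric (the old "standby" semantics): a one-way hint that
	 * the device is unused; any later use implicitly wakes it up.
	 */
	int (*suspend)(struct v4l2_subdev *sd);

	/*
	 * Symmetric: the device must be explicitly enabled before use
	 * and disabled at suspend or driver exit.
	 */
	int (*s_power)(struct v4l2_subdev *sd, int on);
};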

 
 Hmm... your patch didn't change this, but:
 Why do we call these functions only in case of V4L2_BUF_TYPE_VIDEO_CAPTURE ?
 Isn't it needed for VBI capturing, too ?
 em28xx_wake_i2c() is probably also needed for radio mode...
 
 Right, my patch doesn't change this, so, this is unrelated.
 
 Have I missed anything?
 
   Could you test, please?
  
  Yes, this patch will make the warnings disappear and works at least for
  my em28xx+ov2640 device.
 
 Good, thanks for testing!
 
  What about Mauros an