Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-03-06 Thread Laurent Pinchart
On Thursday 27 February 2014 16:10:41 Tomi Valkeinen wrote:
 On 27/02/14 15:43, Russell King - ARM Linux wrote:
  That may be - but the problem with CDF solving this problem is that it's
  wrong.  It's fixing what is in actual fact a *generic* problem in a much
  too specific way.  To put it another way, it's forcing everyone to fix
  the same problem in their own separate ways because no one is willing to
  take a step back and look at the larger picture.
  
  We can see that because ASoC has exactly the same problem - it has to
  wait until all devices (DMA, CPU DAIs, codecs etc) are present before it
  can initialise, just like DRM.  Can you re-use the CDF solution for ASoC?
  No.  Can it be re-used elsewhere in non-display subsystems?  No.
  
  Therefore, CDF is yet another implementation specific solution to a
  generic problem which can't be re-used.
  
  Yes, I realise that CDF may do other stuff, but because of the above, it's
  a broken solution.
 
 What? Because CDF didn't fix a particular subproblem for everyone, it's a
 broken solution? Or did I miss your point?

Furthermore CDF was an RFC, a proof of concept implementation of the various 
components involved to solve the problems at hand. It was in no way meant to 
be merged as-is, and I would certainly have made the asynchronous registration 
code generic had I been requested to do so specifically. Unfortunately and 
sadly, miscommunication led to CDF being rejected in one block with a fuzzy 
message on how to proceed. We won't rewrite the past, but let's move forward 
in the right direction.

 The main point of CDF is not solving the initialization issue. If that
 was the point, it would've been Common Initialization Framework.
 
 The main point of CDF is to allow us to have encoder and panel drivers
 that can be used by all platforms, in complex display pipeline setups.
 It just also has to have some solution for the initialization problem to
 get things working.
 
 In fact, Laurent's CDF version has a solution for the init problem which,
 if my memory serves me right, is very similar to yours. It just wasn't
 generic. I don't remember if Laurent had a specific master node defined,
 but the LCD controller was very much like it. It would be trivial to
 change it to use the component helpers.

That's correct. The CDF composite device model was based on the V4L2 composite 
device model, implemented in drivers/media/v4l2-core/v4l2-async.c. Both are 
very similar in purpose to the component framework. The reason why it wasn't 
generic in the first place was that I wanted to implement a full solution as a 
proof of concept first, before polishing each part independently. That turned 
out not to be the best decision ever.

 My solution is different, because I don't like the idea of requiring all
 the display components to be up and running to use any of the displays.
 In fact, it's not a solution at all for me, as it would prevent displays
 working on boards that do not have all the display components installed,
 or if the user didn't compile all the drivers.

As mentioned in my reply to Russell's component framework patch, I would like 
to base v4l2-async on top of the component framework. For this to be possible 
we need support for partial bind in the component framework, which would make 
it possible to support your use cases. Let's discuss how to achieve that in 
the other mail thread.

-- 
Regards,

Laurent Pinchart


Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Tomi Valkeinen
On 25/02/14 16:23, Philipp Zabel wrote:

 +Freescale i.MX DRM master device
 +================================
 +
 +The freescale i.MX DRM master device is a virtual device needed to list all
 +IPU or other display interface nodes that comprise the graphics subsystem.
 +
 +Required properties:
 +- compatible: Should be "fsl,imx-drm"
 +- ports: Should contain a list of phandles pointing to display interface ports
 +  of IPU devices
 +
 +example:
 +
 +imx-drm {
 +    compatible = "fsl,imx-drm";
 +    ports = <&ipu_di0>;
 +};

I'm not a fan of having non-hardware related things in the DT data.
Especially if it makes direct references to our SW, in this case DRM.
There's no DRM on the board. I wanted to avoid all that with OMAP
display bindings.

Is there even need for such a master device? You can find all the
connected display devices from any single display device, by just
following the endpoint links.
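
Just to illustrate what I mean, here is a rough, untested sketch assuming
the of_graph_* helpers (list_remote_devices() is a made-up name, not
something from this series): starting from any one display device node you
can visit each endpoint and the remote device it links to, and repeating
that from every remote node walks the whole connected pipeline.

#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/of_graph.h>

/*
 * Untested sketch: print every device directly connected to 'node'
 * through OF graph endpoint links. Repeating this from each remote
 * node would walk the whole connected display pipeline.
 */
static void list_remote_devices(struct device_node *node)
{
        struct device_node *ep = NULL;
        struct device_node *remote;

        while ((ep = of_graph_get_next_endpoint(node, ep)) != NULL) {
                remote = of_graph_get_remote_port_parent(ep);
                if (!remote)
                        continue;

                pr_info("%s -> %s\n", node->full_name, remote->full_name);
                of_node_put(remote);
        }
}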

  display@di0 {
 	compatible = "fsl,imx-parallel-display";
 	edid = [edid-data];
 -	crtc = <&ipu 0>;
 	interface-pix-fmt = "rgb24";
 +
 +	port {
 +		display_in: endpoint {
 +			remote-endpoint = <&ipu_di0_disp0>;
 +		};
 +	};
  };

Shouldn't the pix-fmt be defined in the endpoint node? It is about pixel
format for a particular endpoint, isn't it?

 diff --git a/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt b/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
 index ed93778..578a1fc 100644
 --- a/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
 +++ b/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
 @@ -50,12 +50,14 @@ have a look at Documentation/devicetree/bindings/video/display-timing.txt.
  
  Required properties:
   - reg : should be <0> or <1>
 - - crtcs : a list of phandles with index pointing to the IPU display interfaces
 -   that can be used as video source for this channel.
   - fsl,data-mapping : should be "spwg" or "jeida"
     This describes how the color bits are laid out in the
     serialized LVDS signal.
   - fsl,data-width : should be <18> or <24>
 + - port: A port node with endpoint definitions as defined in
 +   Documentation/devicetree/bindings/media/video-interfaces.txt.
 +   On i.MX6, there should be four ports (port@[0-3]) that correspond
 +   to the four LVDS multiplexer inputs.

Is the ldb something that's on the imx SoC?

Do you have a public branch somewhere? It'd be easier to look at the
final result, as I'm not familiar with imx.

 Tomi






Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Philipp Zabel
On Thursday, 27.02.2014, at 13:06 +0200, Tomi Valkeinen wrote:
 On 25/02/14 16:23, Philipp Zabel wrote:
 
  +Freescale i.MX DRM master device
  +================================
  +
  +The freescale i.MX DRM master device is a virtual device needed to list all
  +IPU or other display interface nodes that comprise the graphics subsystem.
  +
  +Required properties:
  +- compatible: Should be "fsl,imx-drm"
  +- ports: Should contain a list of phandles pointing to display interface ports
  +  of IPU devices
  +
  +example:
  +
  +imx-drm {
  +    compatible = "fsl,imx-drm";
  +    ports = <&ipu_di0>;
  +};
 
 I'm not a fan of having non-hardware related things in the DT data.
 Especially if it makes direct references to our SW, in this case DRM.
 There's no DRM on the board. I wanted to avoid all that with OMAP
 display bindings.
 
 Is there even need for such a master device? You can find all the
 connected display devices from any single display device, by just
 following the endpoint links.

I don't particularly like this either, but it kind of has been decided.

For the i.MX6 display subsystem there is no clear single master device,
and the physical configuration changes across the SoC family. The
i.MX6Q/i.MX6D SoCs have two separate display controller devices IPU1 and
IPU2, with two output ports each. The i.MX6DL/i.MX6S SoCs only have one
IPU1, but it is accompanied by a separate lower-power LCDIF display
controller with a single output. These may or may not be connected
indirectly across the encoder input multiplexers, so collecting them
would require scanning the whole device tree from an always-enabled
imx-drm platform device if we didn't have this node.

Also, we are free to just ignore this node in the future, if a better
way is found.

   display@di0 {
  	compatible = "fsl,imx-parallel-display";
  	edid = [edid-data];
  -	crtc = <&ipu 0>;
  	interface-pix-fmt = "rgb24";
  +
  +	port {
  +		display_in: endpoint {
  +			remote-endpoint = <&ipu_di0_disp0>;
  +		};
  +	};
   };
 
 Shouldn't the pix-fmt be defined in the endpoint node? It is about pixel
 format for a particular endpoint, isn't it?
 
  diff --git a/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt b/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
  index ed93778..578a1fc 100644
  --- a/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
  +++ b/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
  @@ -50,12 +50,14 @@ have a look at Documentation/devicetree/bindings/video/display-timing.txt.
   
   Required properties:
    - reg : should be <0> or <1>
  - - crtcs : a list of phandles with index pointing to the IPU display interfaces
  -   that can be used as video source for this channel.
    - fsl,data-mapping : should be "spwg" or "jeida"
      This describes how the color bits are laid out in the
      serialized LVDS signal.
    - fsl,data-width : should be <18> or <24>
  + - port: A port node with endpoint definitions as defined in
  +   Documentation/devicetree/bindings/media/video-interfaces.txt.
  +   On i.MX6, there should be four ports (port@[0-3]) that correspond
  +   to the four LVDS multiplexer inputs.
 
 Is the ldb something that's on the imx SoC?

Yes. It consists of two LVDS encoders. On i.MX5 each channel is
connected to one display interface of the single IPU.
On i.MX6Q, each channel's parallel input can be connected to any of the four IPU1/2
display interfaces using a 4-port multiplexer (and on i.MX6DL it can be
connected to IPU1 or LCDIF).

 Do you have a public branch somewhere? It'd be easier to look at the
 final result, as I'm not familiar with imx.

Not yet, I will prepare a branch with the next version.

regards
Philipp



Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Russell King - ARM Linux
On Thu, Feb 27, 2014 at 02:06:25PM +0100, Philipp Zabel wrote:
 For the i.MX6 display subsystem there is no clear single master device,
 and the physical configuration changes across the SoC family. The
 i.MX6Q/i.MX6D SoCs have two separate display controller devices IPU1 and
 IPU2, with two output ports each.

Not also forgetting that there's another scenario too: you may wish
to drive IPU1 and IPU2 as two completely separate display subsystems
in some hardware, but as a combined display subsystem in others.

Here's another scenario.  You may have these two IPUs on the SoC, but
there's only one display output.  You want to leave the second IPU
disabled, as you wouldn't want it to be probed or even exposed to
userland.

On the face of it, the top-level super-device node doesn't look very
hardware-y, but it actually is - it's about how a board uses the
hardware provided.  This is entirely in keeping with the spirit of DT,
which is to describe what hardware is present and how it's connected
together, whether it be at the chip or board level.

If this wasn't the case, we wouldn't even attempt to describe what devices
we have on which I2C buses - we'd just list the hardware on the board
without giving any information about how it's wired together.

This is no different - however, it doesn't have to (and shouldn't) be
subsystem specific... but - and this is the challenge we then face - how
do you decide on one board with a single zImage kernel, with both
DRM and fbdev built-in, whether to use the DRM interfaces or the fbdev
interfaces?  We could have both matching the same compatible string, but
we'd also need some way to tell each other that they're not allowed to
bind.

Before anyone argues that it isn't hardware-y, stop and think.
What if I design a board with two Epson LCD controllers on board and
put a muxing arrangement on their output.  Is that one or two devices?
What if I want them to operate as one combined system?  What if I have
two different LCD controllers on a board.  How is this any different
from the two independent IPU hardware blocks integrated inside an iMX6
SoC with a muxing arrangement on their output?

It's very easy to look at a SoC and make the wrong decision...

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.


Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Tomi Valkeinen
On 27/02/14 13:56, Russell King - ARM Linux wrote:

 Is there even need for such a master device? You can find all the
 connected display devices from any single display device, by just
 following the endpoint links.
 
 Please read up on what has been discussed over previous years:
 
 http://lists.freedesktop.org/archives/dri-devel/2013-July/041159.html

Thanks, that was an interesting thread. Too bad I missed it, it was
during the holiday season. And it seems Laurent missed it also, as he
didn't make any replies.

The thread seemed to go over the very same things that had already been
discussed with CDF.

 Tomi






Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Russell King - ARM Linux
On Thu, Feb 27, 2014 at 03:16:03PM +0200, Tomi Valkeinen wrote:
 On 27/02/14 13:56, Russell King - ARM Linux wrote:
 
  Is there even need for such a master device? You can find all the
  connected display devices from any single display device, by just
  following the endpoint links.
  
  Please read up on what has been discussed over previous years:
  
  http://lists.freedesktop.org/archives/dri-devel/2013-July/041159.html
 
 Thanks, that was an interesting thread. Too bad I missed it, it was
 during the holiday season. And it seems Laurent missed it also, as he
 didn't make any replies.
 
 The thread seemed to go over the very same things that had already been
 discussed with CDF.

That may be - but the problem with CDF solving this problem is that it's
wrong.  It's fixing what is in actual fact a *generic* problem in a much
too specific way.  To put it another way, it's forcing everyone to fix
the same problem in their own separate ways because no one is willing to
take a step back and look at the larger picture.

We can see that because ASoC has exactly the same problem - it has to
wait until all devices (DMA, CPU DAIs, codecs etc) are present before it
can initialise, just like DRM.  Can you re-use the CDF solution for ASoC?
No.  Can it be re-used elsewhere in non-display subsystems?  No.

Therefore, CDF is yet another implementation specific solution to a
generic problem which can't be re-used.

Yes, I realise that CDF may do other stuff, but because of the above, it's
a broken solution.

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.


Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Tomi Valkeinen
On 27/02/14 15:00, Russell King - ARM Linux wrote:
 On Thu, Feb 27, 2014 at 02:06:25PM +0100, Philipp Zabel wrote:
 For the i.MX6 display subsystem there is no clear single master device,
 and the physical configuration changes across the SoC family. The
 i.MX6Q/i.MX6D SoCs have two separate display controller devices IPU1 and
 IPU2, with two output ports each.
 
 Not also forgetting that there's another scenario too: you may wish
 to drive IPU1 and IPU2 as two completely separate display subsystems
 in some hardware, but as a combined display subsystem in others.
 
 Here's another scenario.  You may have these two IPUs on the SoC, but
 there's only one display output.  You want to leave the second IPU
 disabled, as you wouldn't want it to be probed or even exposed to
 userland.

I first want to say I don't see anything wrong with such a super node.
As you say, it does describe hardware. But I also want to say that I
still don't see a need for it. Or, maybe more exactly, I don't see a
need for it in general. Maybe there are certain cases where two devices
have to be controlled by a master device. Maybe this one is one of those.

In the imx case, why wouldn't this work, without any master node, with
the IPU nodes separate in the DT data:

- One IPU enabled, one disabled: nothing special here, just set the
other IPU to status="disabled" in the DT data. The driver for the
enabled IPU would register the required DRM entities.

- Two IPUs as separate units: almost the same as above, but both would
independently register the DRM entities.

- Two IPUs in combined mode:

Pick one IPU as the master, and one as slave. Link the IPU nodes in DT
data with phandles, say: master=<&ipu1> on the slave IPU and
slave=<&ipu0> on the master.

The master one will register the DRM entities, and the slave one will
just do what the master says.

As for the probe time "are we ready yet?" problem, the IPU driver can
just delay registering the DRM entities until all the nodes in its graph
have been probed. The component helpers can probably be used here.
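
To be a bit more concrete, something along these lines could work (an
untested sketch against the current component helpers, not the actual
imx-drm code; all the ipu_drm_* names are made up):

#include <linux/component.h>
#include <linux/of_graph.h>
#include <linux/platform_device.h>

/*
 * Untested sketch, not the real driver: the IPU acting as master adds
 * every remote device found in its OF graph as a component, and only
 * creates the DRM device in .bind(), i.e. once they have all probed.
 * Error handling is kept minimal.
 */
static int compare_of(struct device *dev, void *data)
{
        return dev->of_node == data;
}

static int ipu_drm_add_components(struct device *dev, struct master *m)
{
        struct device_node *ep = NULL, *remote;
        int ret = 0;

        while ((ep = of_graph_get_next_endpoint(dev->of_node, ep))) {
                remote = of_graph_get_remote_port_parent(ep);
                if (!remote)
                        continue;
                ret = component_master_add_child(m, compare_of, remote);
                of_node_put(remote);
                if (ret) {
                        of_node_put(ep);
                        break;
                }
        }
        return ret;
}

static int ipu_drm_bind(struct device *dev)
{
        /* everything has probed: create the drm_device here and let
         * component_bind_all() hand it to each component's .bind() */
        return component_bind_all(dev, NULL /* the drm_device in real code */);
}

static void ipu_drm_unbind(struct device *dev)
{
        component_unbind_all(dev, NULL);
}

static const struct component_master_ops ipu_drm_master_ops = {
        .add_components = ipu_drm_add_components,
        .bind = ipu_drm_bind,
        .unbind = ipu_drm_unbind,
};

/* called from the IPU driver's probe() */
static int ipu_drm_register_master(struct platform_device *pdev)
{
        return component_master_add(&pdev->dev, &ipu_drm_master_ops);
}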

 On the face of it, the top-level super-device node doesn't look very
 hardware-y, but it actually is - it's about how a board uses the
 hardware provided.  This is entirely in keeping with the spirit of DT,
 which is to describe what hardware is present and how it's connected
 together, whether it be at the chip or board level.

No disagreement there. I'm mostly put off by the naming. The binding doc
says it's a DRM master device, compatible with "fsl,imx-drm". Now,
naming may not be the most important thing in the world, but I'd rather
use generic terms, not linux driver stack names.

 If this wasn't the case, we wouldn't even attempt to describe what devices
 we have on which I2C buses - we'd just list the hardware on the board
 without giving any information about how it's wired together.
 
 This is no different - however, it doesn't have to (and shouldn't) be
 subsystem specific... but - and this is the challenge we then face - how
 do you decide on one board with a single zImage kernel, with both
 DRM and fbdev built-in, whether to use the DRM interfaces or the fbdev
 interfaces?  We could have both matching the same compatible string, but
 we'd also need some way to tell each other that they're not allowed to
 bind.

Yes, that's an annoying problem, we have that on OMAP. It's a clear sign
that our video support is rather messed up.

My opinion is that the fbdev and drm drivers for a single piece of hardware
should be exclusive at compile time. We don't allow multiple drivers for a
single device in other subsystems either, do we? Eventually we should
have only one driver for one hardware device.

If that's not possible, then the drivers in question could have an
option to enable or disable themselves, passed via the kernel command
line, so that the user can select which subsystem to use.

 Before anyone argues that it isn't hardware-y, stop and think.
 What if I design a board with two Epson LCD controllers on board and
 put a muxing arrangement on their output.  Is that one or two devices?
 What if I want them to operate as one combined system?  What if I have
 two different LCD controllers on a board.  How is this any different
 from the two independent IPU hardware blocks integrated inside an iMX6
 SoC with a muxing arrangement on their output?

Well, generally speaking, I think one option is to treat the two
controllers separately and let the userspace handle it. That may or may
not be viable, depending on the hardware, but to me it resembles very
much a PC with two video cards.

If you want the two controllers to operate together more closely, you
always need special code for that particular case.

This is what CDF has been trying to accomplish: individual drivers for
each display entity, connected together via ports and endpoints. A driver
for the Epson LCD controller would expose an API that can be used to handle
the LCD controller; it wouldn't make any other demands on how it's used,
is it 

Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Tomi Valkeinen
On 27/02/14 15:43, Russell King - ARM Linux wrote:

 That may be - but the problem with CDF solving this problem is that it's
 wrong.  It's fixing what is in actual fact a *generic* problem in a much
 too specific way.  To put it another way, it's forcing everyone to fix
 the same problem in their own separate ways because no one is willing to
 take a step back and look at the larger picture.
 
 We can see that because ASoC has exactly the same problem - it has to
 wait until all devices (DMA, CPU DAIs, codecs etc) are present before it
 can initialise, just like DRM.  Can you re-use the CDF solution for ASoC?
 No.  Can it be re-used elsewhere in non-display subsystems?  No.
 
 Therefore, CDF is yet another implementation specific solution to a
 generic problem which can't be re-used.
 
 Yes, I realise that CDF may do other stuff, but because of the above, it's
 a broken solution.

What? Because CDF didn't fix a particular subproblem for everyone, it's a
broken solution? Or did I miss your point?

The main point of CDF is not solving the initialization issue. If that
was the point, it would've been Common Initialization Framework.

The main point of CDF is to allow us to have encoder and panel drivers
that can be used by all platforms, in complex display pipeline setups.
It just also has to have some solution for the initialization problem to
get things working.

In fact, Laurent's CDF version has a solution for the init problem which,
if my memory serves me right, is very similar to yours. It just wasn't
generic. I don't remember if Laurent had a specific master node defined,
but the LCD controller was very much like it. It would be trivial to
change it to use the component helpers.
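
For an individual encoder or panel driver the component side would be
tiny; roughly like this (untested sketch, the encoder_* names are just
placeholders, not CDF or imx-drm code):

#include <linux/component.h>
#include <linux/platform_device.h>

/*
 * Untested sketch of the component ("child") side: an encoder or panel
 * driver just registers itself and defers the real initialisation to
 * .bind(), which runs once the master has collected all components.
 */
static int encoder_bind(struct device *dev, struct device *master, void *data)
{
        /* 'data' is whatever the master passed to component_bind_all(),
         * e.g. the drm_device; register encoders/connectors against it */
        return 0;
}

static void encoder_unbind(struct device *dev, struct device *master,
                           void *data)
{
}

static const struct component_ops encoder_component_ops = {
        .bind = encoder_bind,
        .unbind = encoder_unbind,
};

static int encoder_probe(struct platform_device *pdev)
{
        return component_add(&pdev->dev, &encoder_component_ops);
}

static int encoder_remove(struct platform_device *pdev)
{
        component_del(&pdev->dev, &encoder_component_ops);
        return 0;
}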

My solution is different, because I don't like the idea of requiring all
the display components to be up and running to use any of the displays.
In fact, it's not a solution at all for me, as it would prevent displays
working on boards that do not have all the display components installed,
or if the user didn't compile all the drivers.

 Tomi






Re: [RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-27 Thread Tomi Valkeinen
On 27/02/14 18:54, Philipp Zabel wrote:

 - One IPU enabled, one disabled: nothing special here, just set the
 other IPU to status="disabled" in the DT data. The driver for the
 enabled IPU would register the required DRM entities.
 
 that should work. Let the enabled IPU create the imx-drm platform device
 on probe, parse the device tree and ignore everything only hanging off
 of the disabled IPU.

I think you misunderstood me a bit.

What I meant is that there's no need for an imx-drm device at all, neither
in the DT data nor on the kernel side.

There'd just be the DT nodes for the IPUs, which would cause the IPU
platform devices to be created, and a driver for the IPU. So just like
for any other normal platform device.

In the simplest cases, where only one IPU is enabled, or the IPUs want
to be considered as totally independent, there'd be nothing special. The
IPU driver would just register the drm entities.

 [Reordering a bit...]
 - Two IPUs in combined mode:

 Pick one IPU as the master, and one as slave. Link the IPU nodes in DT
 data with phandles, say: master=<&ipu1> on the slave IPU and
 slave=<&ipu0> on the master.

 The master one will register the DRM entities, and the slave one will
 just do what the master says.
 
 That might work, too. Just let each IPU scan the graph and try to
 find the imx-drm master before creating the imx-drm platform device.
 The first IPU will find no preexisting master and create the imx-drm
 platform device as above, adding the other IPU as well as the other
 components with component_master_add_child. It just has to make sure
 that the other IPU is added to the list before the encoders are.
 
 The second IPU will scan the graph, find a preexisting master for the
 other IPU node, register its component and just wait to be bound by the
 master.

Here the slave IPU doesn't need to scan the graph at all. It just needs
to make itself available somehow to the master. Maybe just by exported
functions, or registering itself somewhere.

Only the master IPU will scan the graph, and as all the entities are
connected to the same graph, including the slave IPU, the master can
find all the relevant nodes.

 - Two IPUs as separate units: almost the same as above, but both would
 independently register the DRM entities.
 
 Here the second IPU would not be connected to the first IPU via the
 graph - it would not find a preexisting imx-drm device when scanning its
 graph and create its own imx-drm device just like the first IPU did.
 As a result there are two completely separate DRM devices.

I understood that that would be the idea, two separate, independent DRM
devices. Like two graphics cards on a PC.

 That being said, this change could be made at any time in the future,
 in a backwards compatible fashion, by just declaring the imx-drm node
 optional and ignoring it if it exists.

Yes, I agree.

And I don't even know if the master-slave method I described is valid,
although I don't see why it would not work. The master
display-subsystem DT node does make sense to me in cases like this,
where the IPUs need to be driven as a single unit.

 Did anybody propose such a generic term? How about:
 
 -imx-drm {
 -    compatible = "fsl,imx-drm";
 -    ports = <&ipu1_di0>, <&ipu1_di1>;
 -};
 +display-subsystem {
 +    compatible = "fsl,imx-display-subsystem";
 +    ports = <&ipu1_di0>, <&ipu1_di1>;
 +};

That sounds fine to me.

I wonder how it works if, say, there are 4 IPUs, and you want to run
them in two pairs. In that case you need two of those display-subsystem
nodes. But I guess it's just a matter of assigning a number to them
with a 'reg' property, and making sure the driver has nothing that
prevents multiple instances of it.

 If this wasn't the case, we wouldn't even attempt to describe what devices
 we have on which I2C buses - we'd just list the hardware on the board
 without giving any information about how it's wired together.

 This is no different - however, it doesn't have to (and shouldn't) be
 subsystem specific... but - and this is the challenge we then face - how
 do you decide on one board with a single zImage kernel, with both
 DRM and fbdev built-in, whether to use the DRM interfaces or the fbdev
 interfaces?  We could have both matching the same compatible string, but
 we'd also need some way to tell each other that they're not allowed to
 bind.

 Yes, that's an annoying problem, we have that on OMAP. It's a clear sign
 that our video support is rather messed up.

 My opinion is that the fbdev and drm drivers for a single piece of hardware
 should be exclusive at compile time. We don't allow multiple drivers for a
 single device in other subsystems either, do we? Eventually we should
 have only one driver for one hardware device.

 If that's not possible, then the drivers in question could have an
 option to enable or disable themselves, passed via the kernel command
 line, so that the user can select which subsystem to use.
 
 That is the exact same problem as having multiple drivers 

[RFC PATCH v4 3/8] staging: imx-drm: Document updated imx-drm device tree bindings

2014-02-25 Thread Philipp Zabel
This patch updates the device tree binding documentation for i.MX IPU/display
nodes using the OF graph bindings documented in
Documentation/devicetree/bindings/media/video-interfaces.txt.

Signed-off-by: Philipp Zabel <p.za...@pengutronix.de>
---
 .../bindings/staging/imx-drm/fsl-imx-drm.txt   | 48 +++---
 .../devicetree/bindings/staging/imx-drm/ldb.txt| 20 +++--
 2 files changed, 59 insertions(+), 9 deletions(-)

diff --git a/Documentation/devicetree/bindings/staging/imx-drm/fsl-imx-drm.txt b/Documentation/devicetree/bindings/staging/imx-drm/fsl-imx-drm.txt
index b876d49..bfa19a4 100644
--- a/Documentation/devicetree/bindings/staging/imx-drm/fsl-imx-drm.txt
+++ b/Documentation/devicetree/bindings/staging/imx-drm/fsl-imx-drm.txt
@@ -1,3 +1,22 @@
+Freescale i.MX DRM master device
+================================
+
+The freescale i.MX DRM master device is a virtual device needed to list all
+IPU or other display interface nodes that comprise the graphics subsystem.
+
+Required properties:
+- compatible: Should be "fsl,imx-drm"
+- ports: Should contain a list of phandles pointing to display interface ports
+  of IPU devices
+
+example:
+
+imx-drm {
+    compatible = "fsl,imx-drm";
+    ports = <&ipu_di0>;
+};
+
+
 Freescale i.MX IPUv3
 ====================
 
@@ -7,18 +26,31 @@ Required properties:
   datasheet
 - interrupts: Should contain sync interrupt and error interrupt,
   in this order.
-- #crtc-cells: 1, See below
 - resets: phandle pointing to the system reset controller and
   reset line index, see reset/fsl,imx-src.txt for details
+Optional properties:
+- port@[0-3]: Port nodes with endpoint definitions as defined in
+  Documentation/devicetree/bindings/media/video-interfaces.txt.
+  Ports 0 and 1 should correspond to CSI0 and CSI1,
+  ports 2 and 3 should correspond to DI0 and DI1, respectively.
 
 example:
 
 ipu: ipu@1800 {
-    #crtc-cells = <1>;
+    #address-cells = <1>;
+    #size-cells = <0>;
     compatible = "fsl,imx53-ipu";
     reg = <0x1800 0x08000>;
     interrupts = <11 10>;
     resets = <&src 2>;
+
+    ipu_di0: port@2 {
+        reg = <2>;
+
+        ipu_di0_disp0: endpoint {
+            remote-endpoint = <&display_in>;
+        };
+    };
 };
 
 Parallel display support
@@ -26,19 +58,25 @@ Parallel display support
 
 Required properties:
 - compatible: Should be "fsl,imx-parallel-display"
-- crtc: the crtc this display is connected to, see below
 Optional properties:
 - interface_pix_fmt: How this display is connected to the
-  crtc. Currently supported types: "rgb24", "rgb565", "bgr666"
+  display interface. Currently supported types: "rgb24", "rgb565", "bgr666"
 - edid: verbatim EDID data block describing attached display.
 - ddc: phandle describing the i2c bus handling the display data
   channel
+- port: A port node with endpoint definitions as defined in
+  Documentation/devicetree/bindings/media/video-interfaces.txt.
 
 example:
 
 display@di0 {
     compatible = "fsl,imx-parallel-display";
     edid = [edid-data];
-    crtc = <&ipu 0>;
     interface-pix-fmt = "rgb24";
+
+    port {
+        display_in: endpoint {
+            remote-endpoint = <&ipu_di0_disp0>;
+        };
+    };
 };
diff --git a/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt b/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
index ed93778..578a1fc 100644
--- a/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
+++ b/Documentation/devicetree/bindings/staging/imx-drm/ldb.txt
@@ -50,12 +50,14 @@ have a look at Documentation/devicetree/bindings/video/display-timing.txt.
 
 Required properties:
  - reg : should be <0> or <1>
- - crtcs : a list of phandles with index pointing to the IPU display interfaces
-   that can be used as video source for this channel.
 - fsl,data-mapping : should be "spwg" or "jeida"
   This describes how the color bits are laid out in the
   serialized LVDS signal.
 - fsl,data-width : should be <18> or <24>
+ - port: A port node with endpoint definitions as defined in
+   Documentation/devicetree/bindings/media/video-interfaces.txt.
+   On i.MX6, there should be four ports (port@[0-3]) that correspond
+   to the four LVDS multiplexer inputs.
 
 example:
 
@@ -77,23 +79,33 @@ ldb: ldb@53fa8008 {
 
     lvds-channel@0 {
         reg = <0>;
-        crtcs = <&ipu 0>;
         fsl,data-mapping = "spwg";
         fsl,data-width = <24>;
 
         display-timings {
             /* ... */
         };
+
+        port {
+            lvds0_in: endpoint {
+                remote-endpoint = <&ipu_di0_lvds0>;
+            };
+        };
     };
 
     lvds-channel@1 {
         reg = <1>;
-        crtcs = <&ipu 1>;
         fsl,data-mapping = "spwg";
         fsl,data-width = <24>;